paper_id (stringlengths 10-10, ⌀) | yes_no (bool, 2 classes) | paper_index (int64, 0-519, ⌀) | evidence (stringlengths 0-37.7k) | question (stringlengths 4-11.7k) | answer (stringlengths 1-26k)
|---|---|---|---|---|---|
null | false
| null |
Hermes (Ἑρμῆς, Hērmês)
God of boundaries, travel, trade, communication, language, writing, cunning and thieves. Hermes was also responsible for protecting livestock and presided over the spheres associated with fertility, music, luck, and deception. The son of Zeus and Maia, Hermes is the messenger of the gods, and a psychopomp who leads the souls of the dead into the afterlife. He was depicted either as a handsome and athletic beardless youth, or as an older bearded man. His attributes include the herald's wand or caduceus, winged sandals, and a traveler's cap. His sacred animals include the tortoise. His Roman counterpart is Mercury.
|
From the passage identify the domains in which Hermes is considered a God. Display the results in a comma separated format.
|
boundaries, travel, trade, communication, language, writing, cunning, thieves
|
null | false
| 80
|
Rouge is based on the assumption that for a summary to be of high quality, it must share many words or phrases with a human gold summary. However, different terminology may be used to refer to the same concepts, and thus relying only on lexical overlap may underrate content quality. To overcome this problem, we propose an approach based on the premise that concepts take meaning from the context they are in, and that related concepts co-occur frequently.
Our proposed metric is based on analysis of the content relevance between a system-generated summary and the corresponding human-written gold-standard summaries. At a high level, we indirectly evaluate the content relevance between the candidate summary and the human summary using information retrieval. To accomplish this, we use the summaries as search queries and compare the overlap of the retrieved results: a larger overlap suggests that the candidate summary has higher content quality with respect to the gold standard. This method also enables us to reward terms that are not lexically equivalent but semantically related. Our method is based on the well-established linguistic premise that semantically related words occur in similar contexts BIBREF5. The context of a word can be taken to be its surrounding words, the sentences in which it appears, or the documents; for scientific summarization, we consider the context of a word to be the scientific articles in which it appears. Thus, if two concepts appear in an identical set of articles, they are semantically related, and we consider two summaries similar if they refer to the same set of articles even when the two summaries have low lexical overlap. To capture whether a summary relates to an article, we use information retrieval: we treat the summaries as queries and the articles as documents, and rank the articles by their relatedness to a given summary. For a given pair of system summary and gold summary, similar rankings of the retrieved articles suggest that the summaries are semantically related, and thus that the system summary is of higher quality.
Based on the domain of interest, we first construct an index from a set of articles in the same domain. Since TAC 2014 was focused on summarization in the biomedical domain, our index also comprises biomedical articles. Given a candidate summary $C$ and a set of gold summaries $G_i$ ($i = 1, \ldots, N$, where $N$ is the total number of human summaries), we submit the candidate summary and the gold summaries to the search engine as queries and compare their ranked results. Let $D$ be the entire index, comprising $M$ total documents.
Let $R_C$ be the ranked list of retrieved documents for the candidate summary $C$, and $R_{G_i}$ the ranked list of results for the gold summary $G_i$. These result lists are truncated at a rank cut-off point $c$ that is a parameter of the system; we provide evaluation results for different choices of the cut-off point $c$ in Section SECREF5. We consider the following two scores: (i) simple intersection and (ii) discounted intersection by rankings. The simple intersection considers only the overlap of the results in the two ranked lists and ignores the rankings. The discounted ranked score, on the other hand, penalizes ranking differences between the two result sets. As an example, consider the following lists of retrieved documents (denoted by $d_i$s) for a candidate and a gold summary as queries:
Results for candidate summary: $d_1$, $d_2$, $d_3$, $d_4$
Results for gold summary: $d_2$, $d_1$, $d_4$, $d_3$
These two result lists consist of identical documents, but the rankings of the retrieved documents differ. Therefore, the simple intersection method assigns a score of 1.0, while the discounted ranked score will be less than 1.0 (due to the ranking differences between the result lists).
We now define the metrics more precisely. Using the above notation, and without loss of generality, we assume that $|R_C| = |R_{G_i}| = c$. Sera is defined as follows: $$\mathrm{Sera} = \frac{1}{N} \sum_{i=1}^{N} \frac{|R_C \cap R_{G_i}|}{|R_C|}$$
To also account for ranked position differences, we modify this score to discount rewards based on rank differences. That is, for an ideal score we want the search results for the candidate summary ($R_C$) to be the same as the results for the gold-standard summaries ($R_{G_i}$), with the rankings of the results also the same. If the rankings differ, we discount the reward by the log of the rank differences. More specifically, the discounted score (Sera-Dis) is defined as: $$\mathrm{Sera\text{-}Dis} = \frac{1}{N \times D_{\max}} \sum_{i=1}^{N} \sum_{j=1}^{|R_C|} \sum_{k=1}^{|R_{G_i}|} \frac{\mathbb{1}\big[R_C[j] = R_{G_i}[k]\big]}{\log_2\big(|j-k| + 2\big)}$$
where, as previously defined, $N$, $R_C$ and $R_{G_i}$ are the total number of human gold summaries, the result list for the candidate summary and the result list for the human gold summary, respectively. In addition, $R[j]$ denotes the $j$th result in the ranked list $R$, and $D_{\max}$ is the maximum attainable score, used as the normalizing factor. A small sketch of both scores follows.
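A minimal Python sketch of the two scores, assuming result lists are plain lists of document IDs; the exact log base and offset in the discount, and computing the normalizer $D_{\max}$ as the score of a perfectly matching ranking, are our reading of the description above rather than a verbatim implementation:

```python
import math

def sera(candidate_results, gold_results_per_summary):
    """Simple intersection score, averaged over the N gold summaries."""
    n = len(gold_results_per_summary)
    total = 0.0
    for gold_results in gold_results_per_summary:
        overlap = len(set(candidate_results) & set(gold_results))
        total += overlap / len(candidate_results)
    return total / n

def sera_dis(candidate_results, gold_results_per_summary):
    """Discounted intersection: each shared document is rewarded less
    the further apart its ranks are in the two lists."""
    def pair_score(cand, gold):
        score = 0.0
        for j, doc in enumerate(cand):
            if doc in gold:
                k = gold.index(doc)
                # Discount by log of the rank difference; |j - k| = 0
                # (identical ranks) yields the full reward of 1.0.
                score += 1.0 / math.log2(abs(j - k) + 2)
        return score

    n = len(gold_results_per_summary)
    # Maximum attainable score: identical documents, identical rankings.
    d_max = pair_score(candidate_results, candidate_results)
    total = sum(pair_score(candidate_results, g)
                for g in gold_results_per_summary)
    return total / (n * d_max)
```

On the worked example above (identical documents, permuted ranks), `sera` returns 1.0 and `sera_dis` returns a value below 1.0, as described.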
We use elasticsearch[1], an open-source search engine, for indexing and querying the articles. As the retrieval model, we use the Language Modeling retrieval model with Dirichlet smoothing BIBREF6. Since the TAC 2014 benchmark is on summarization of biomedical articles, the appropriate index is one constructed from articles in the same domain. Therefore, we use the open access subset of PubMed[2], which consists of published articles in the biomedical literature.
[1]https://github.com/elastic/elasticsearch [2]PubMed is a comprehensive resource of articles and abstracts published in life sciences and biomedical literature http://www.ncbi.nlm.nih.gov/pmc/
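A sketch of this setup with the elasticsearch Python client; the index name, field name, Dirichlet `mu` value, and the 7.x-style `body` API are all illustrative assumptions on our part:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Index configured with Language Modeling retrieval (Dirichlet smoothing),
# available in Elasticsearch as the built-in LMDirichlet similarity.
es.indices.create(
    index="pubmed_oa",
    body={
        "settings": {
            "similarity": {
                "lm_dirichlet": {"type": "LMDirichlet", "mu": 2000}
            }
        },
        "mappings": {
            "properties": {
                "text": {"type": "text", "similarity": "lm_dirichlet"}
            }
        },
    },
)

def retrieve(summary_text, cutoff):
    """Use a summary as the query; return the top-`cutoff` document IDs."""
    resp = es.search(
        index="pubmed_oa",
        body={"query": {"match": {"text": summary_text}}},
        size=cutoff,
    )
    return [hit["_id"] for hit in resp["hits"]["hits"]]
```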
We also experiment with different query (re)formulation approaches. Query reformulation is a method in information retrieval that aims to refine the query for better retrieval of results. Query reformulation methods often consist of removing ineffective terms and expressions from the query (query reduction) or adding terms that help retrieval (query expansion). Query reduction is especially important when queries are verbose; since we use the summaries as queries, the queries are usually long, and we therefore consider query reduction.
In our experiments, query reformulation is done in 3 different ways: (i) Plain: the entire summary without stopwords and numeric values; (ii) Noun Phrases (NP): we keep only the noun phrases as informative concepts in the summary and eliminate all other terms; and (iii) Keywords (KW): we keep only the keywords and key phrases in the summary, as sketched below. For extracting the keywords and key phrases (with lengths of up to 3 terms), we extract expressions whose idf[1] values are higher than a predefined threshold that is set as a parameter. We set this threshold to the average idf value of all terms except stopwords; idf values are calculated on the same index that is used for retrieval.
[1]Inverse Document Frequency
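A minimal sketch of the KW reduction for unigrams, assuming document frequencies from the retrieval index are available as a mapping; for brevity, the threshold here is averaged over the summary's own non-stopword terms, and the extraction of phrases of up to 3 terms is omitted:

```python
import math

def idf(term, doc_freq, num_docs):
    # Standard inverse document frequency; doc_freq maps a term to the
    # number of indexed documents containing it.
    return math.log(num_docs / (1 + doc_freq.get(term, 0)))

def keyword_query(summary_terms, doc_freq, num_docs, stopwords):
    """Keep only terms whose idf exceeds the average idf (KW reduction)."""
    content = [t for t in summary_terms if t not in stopwords]
    scores = {t: idf(t, doc_freq, num_docs) for t in content}
    threshold = sum(scores.values()) / len(scores)
    return [t for t in content if scores[t] > threshold]
```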
We hypothesize that using only informative concepts in the summary prevents query drift and leads to retrieval of more relevant documents. Noun phrases and keywords are two heuristics for identifying the informative concepts.
In our experiments, the query reformulation is done by 3 different ways: (i) Plain: The entire summary without stopwords and numeric values; (ii) Noun Phrases (NP): We only keep the noun phrases as informative concepts in the summary and eliminate all other terms; and (iii) Keyword (KW): We only keep the keywords and key phrases in the summary.
|
What ways do they use to do query reformulation?
|
(i) Plain: The entire summary without stopwords and numeric values; (ii) Noun Phrases (NP): They only keep the noun phrases as informative concepts in the summary and eliminate all other terms; and (iii) Keywords (KW): They only keep the keywords and key phrases in the summary.
|
null | false
| null |
As a Manager, how can I use the Clifton StrengthsFinder?
|
StrengthsFinder is a tool that has been used by 30 million people worldwide. It's an assessment that measures an individual across 34 potential strengths. No strength is better than another; it's more a matter of how much you actually leverage the strengths that you have.
As a manager, here is how you can use it:
- Use it in 1:1s with your team members: For career development, or provide stretch assignments to focus on 1-2 strengths
- Develop a team aggregate: Where are we strong as a team? What strengths are missing? Who is strong in a particular area and can be the expert?
- Use it for team building: During a team meeting have team members present their top 5 strengths. People will get to know each other under a different light.
- Integrate new team members
- Develop new managers: How can they use their strengths? How can the strengths hold them back?
|
|
null | false
| null |
In software engineering, double-checked locking (also known as "double-checked locking optimization") is a software design pattern used to reduce the overhead of acquiring a lock by testing the locking criterion (the "lock hint") before acquiring the lock. Locking occurs only if the locking criterion check indicates that locking is required.
The pattern, when implemented in some language/hardware combinations, can be unsafe. At times, it can be considered an anti-pattern.
It is typically used to reduce locking overhead when implementing "lazy initialization" in a multi-threaded environment, especially as part of the Singleton pattern. Lazy initialization avoids initializing a value until the first time it is accessed.
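A minimal sketch of the pattern in Python for a lazily initialized singleton (class and method names are illustrative). Here the pattern is safe because the second check is performed while holding the lock; in languages such as Java or C++, the instance field additionally needs volatile/atomic semantics for the pattern to be safe:

```python
import threading

class Database:
    _instance = None
    _lock = threading.Lock()

    @classmethod
    def get_instance(cls):
        # First check (the "lock hint"): skip locking entirely on the
        # common path where the instance already exists.
        if cls._instance is None:
            with cls._lock:
                # Second check: another thread may have created the
                # instance while we were waiting to acquire the lock.
                if cls._instance is None:
                    cls._instance = cls()
        return cls._instance
```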
|
Is double checked locking safe?
|
Double checked locking can be unsafe in some situations and is sometimes considered an anti-pattern. However, it can be implemented in a safe way.
|
null | false
| null |
What are some good places to go on a surf trip?
|
Some favorite spots for a surf trip include:
-Hawaii
-Southern California
-Baja California
-Bali, Indonesia
-Tamarindo, Costa Rica
-Sydney, Australia
|
|
null | false
| 109
|
Our experimental results show that the baseline features outperform the pre-trained features for sarcasm detection. However, the combination of pre-trained features and baseline features beats both of them alone. This is counterintuitive, since experimental results show that both sets of features learn almost the same global and contextual features. In particular, the baseline network dominates the pre-trained network, as the former learns most of the features learned by the latter. Nonetheless, the combination of baseline and pre-trained classifiers improves the overall performance and generalizability, proving their effectiveness in sarcasm detection. Experimental results show that sentiment and emotion features are the most useful features besides the baseline features (Figure FIGREF36). Therefore, in order to reach a better understanding of the relation of the personality features among themselves and with the other pre-trained features, we carried out Spearman correlation testing. The results, displayed in Table TABREF39, show that those features are highly correlated with each other.
Our experimental results show that the baseline features outperform the pre-trained features for sarcasm detection.
|
Do the baseline features outperform the pre-trained features for sarcasm detection?
|
Yes.
|
null | false
| null |
Is a vitamin d supplement good for you?
|
When taken on a regular basis, Vitamin D helps support immune health and cellular activity. It has also been shown to support cardiovascular function in some people and can reduce your risk of multiple sclerosis. You can also get Vitamin D naturally by spending time in the sun, absorbing it through your skin.
|
|
null | false
| null |
What are the advantages of a progressive consumption tax?
|
A progressive consumption tax would be a good substitute for a progressive income tax because it would produce greater economic efficiencies by promoting investment and savings while eliminating the income tax's disincentivizing of work. Unlike a sales tax imposed at the point of sale, a consumption tax could be structured to be progressive rather than regressive by using a year-end tax return to calculate 'net consumption' during a given tax year, with the return summing all sales of qualified investments and then subtracting all purchases of new qualified investments. The return would then apply a progressive tax rate to the total net consumption.

Much of the complexity of the current tax code would be eliminated. There would no longer be a need for complex calculations relating to capital gains, based on the concept of 'tax basis' and other arcane notions. Complexities relating to tax-beneficial retirement savings accounts would also become unnecessary, and investors would no longer be penalized for efficiency-driven portfolio rebalancing.

Billionaires would not be taxed significantly if they live frugally, but they could be taxed at very high rates for substantial consumption. A billionaire who buys large yachts, multiple mega-mansions, etc. would pay high taxes; a billionaire who lives modestly would not. But the latter billionaire is keeping her money invested in the economy, providing resources that will further economic growth.

Opponents of a consumption tax cite the fact that most consumption taxes are thought to be regressive. However, as previously mentioned, the consumption tax could be structured to be progressive, and in fact could be structured to be even more progressive than the current income tax. The progressive rates could be based on wealth rather than the amount of annual consumption or income. It would even be possible to tax consumption at over 100% in the case of billionaires; for example, if someone had a net worth of over $10 billion, we could tax that person's consumption at 150%. Regressiveness concerns can also be addressed by exempting a large portion of consumption: annual consumption under a certain level, say $30,000 per year, could be tax free, or taxed at a low sales-tax-like rate of, say, 5%. Most economists agree that a consumption tax would promote economic growth.
|
|
null | false
| null |
Classify each of the following colors as primary or secondary: orange, red, blue, green, purple, yellow.
|
Red, yellow and blue are primary colors. Orange, green and purple are secondary colors.
|
|
null | false
| 194
|
We first collected tweets based on a list of job loss-related keywords. We then randomly selected a set of sample tweets and had these tweets annotated (i.e., whether the tweet describes a job loss event) using the Amazon MTurk platform. With these annotated tweets, we then evaluated 4 different active learning strategies (i.e., least confident, entropy, vote entropy, and Kullback-Leibler (KL) divergence) through simulations.
We first collected tweets based on a list of job loss-related keywords.
|
They collected tweets based on what?
|
A list of job loss-related keywords.
|
null | false
| null |
How many languages are spoken in the country of Nigeria?
|
There are over 500 native languages spoken in Nigeria. The official language is English - the language of colonial Nigeria. However, the three national languages are Yoruba, Igbo and Hausa. Nigeria is one of the most linguistically diverse countries in the world.
|
|
null | false
| null |
Give me a list of all the office main and recurring characters from the first season.
|
The main and recurring characters in the show The Office in the first season were Michael Scott, Dwight Schrute, Jim Halpert, Pam Beesly, Ryan Howard, Jan Levinson-Gould, Roy Anderson, Stanley Hudson, Kevin Malone, Creed Bratton, Meredith Palmer, Kelly Kapoor, Angela Martin, Toby Flenderson, Oscar Martinez, Darryl Philbin, Phyllis Lapin, and Devon White.
|
|
null | false
| 57
|
The vast amounts of data collected by healthcare providers in conjunction with modern data analytics techniques present a unique opportunity to improve health service provision and the quality and safety of medical care for patient benefit BIBREF0 . Much of the recent research in this area has been on personalised medicine and its aim to deliver better diagnostics aided by the integration of diverse datasets providing complementary information. Another large source of healthcare data is organisational. In the United Kingdom, the National Health Service (NHS) has a long history of documenting extensively the different aspects of healthcare provision. The NHS is currently in the process of increasing the availability of several databases, properly anonymised, with the aim of leveraging advanced analytics to identify areas of improvement in NHS services.
One such database is the National Reporting and Learning System (NRLS), a central repository of patient safety incident reports from the NHS in England and Wales. Set up in 2003, the NRLS now contains more than 13 million detailed records. The incidents are reported using a set of standardised categories and contain a wealth of organisational and spatio-temporal information (structured data), as well as, crucially, a substantial component of free text (unstructured data) where incidents are described in the `voice' of the person reporting. The incidents are wide ranging: from patient accidents to lost forms or referrals; from delays in admission and discharge to serious untoward incidents, such as retained foreign objects after operations. The review and analysis of such data provides critical insight into the complex functioning of different processes and procedures in healthcare towards service improvement for safer care.
Although statistical analyses are routinely performed on the structured component of the data (dates, locations, assigned categories, etc), the free text remains largely unused in systematic processes. Free text is usually read manually but this is time-consuming, meaning that it is often ignored in practice, unless a detailed review of a case is undertaken because of the severity of harm that resulted. There is a lack of methodologies that can summarise content and provide content-based groupings across the large volume of reports submitted nationally for organisational learning. Methods that could provide automatic categorisation of incidents from the free text would sidestep problems such as difficulties in assigning an incident category by virtue of a priori pre-defined lists in the reporting system or human error, as well as offering a unique insight into the root cause analysis of incidents that could improve the safety and quality of care and efficiency of healthcare services.
Our goal in this work is to showcase an algorithmic methodology that detects content-based groups of records in a given dataset in an unsupervised manner, based only on the free and unstructured textual description of the incidents. To do so, we combine recently developed deep neural-network high-dimensional text-embedding algorithms with network-theoretical methods. In particular, we apply multiscale Markov Stability (MS) community detection to a sparsified geometric similarity graph of documents obtained from text vector similarities. Our method departs from traditional natural language processing tools, which have generally used bag-of-words (BoW) representation of documents and statistical methods based on Latent Dirichlet Allocation (LDA) to cluster documents BIBREF1. More recent approaches have used deep neural network based language models clustered with k-means, without a full multiscale graph analysis BIBREF2. There have been some previous applications of network theory to text analysis. For example, Lancichinetti and co-workers BIBREF3 used a probabilistic graph construction analysed with the InfoMap algorithm BIBREF4; however, their community detection was carried out at a single scale, and the representation of text as BoW arrays lacks the power of neural network text embeddings. The application of multiscale community detection allows us to find groups of records with consistent content at different levels of resolution; hence the content categories emerge from the textual data, rather than fitting with pre-designed classifications. The obtained results could thus help mitigate possible human error or effort in finding the right category in complex category classification trees.
We showcase the methodology through the analysis of a dataset of patient incidents reported to the NRLS. First, we use the 13 million records collected by the NRLS since 2004 to train our text embedding (although a much smaller corpus can be used). We then analyse a subset of 3229 records reported from St Mary's Hospital, London (Imperial College Healthcare NHS Trust) over three months in 2014 to extract clusters of incidents at different levels of resolution in terms of content. Our method reveals multiple levels of intrinsic structure in the topics of the dataset, as shown by the extraction of relevant word descriptors from the grouped records and a high level of topic coherence. Originally, the records had been manually coded by the operator upon reporting with up to 170 features per case, including a two-level manual classification of the incidents. Therefore, we also carried out an a posteriori comparison against the hand-coded categories assigned by the reporter (healthcare personnel) at the time of the report submission. Our results show good overall correspondence with the hand-coded categories across resolutions and, specifically, at the medium level of granularity. Several of our clusters of content correspond strongly to well-defined categories, yet our results also reveal complementary categories of incidents not defined in the external classification. In addition, the tuning of the granularity afforded by the method can be used to provide a distinct level of resolution in certain areas corresponding to specialised or particular sub-themes.
Our method departs from traditional natural language processing tools, which have generally used bag-of-words (BoW) representation of documents and statistical methods based on Latent Dirichlet Allocation (LDA) to cluster documents [2].
|
How do the traditional natural language processing tools work?
|
They have generally used bag-of-words (BoW) representation of documents and statistical methods based on Latent Dirichlet Allocation (LDA) to cluster documents.
|
null | false
| null |
Linux (/ˈliːnʊks/ (listen) LEE-nuuks or /ˈlɪnʊks/ LIN-uuks) is a family of open-source Unix-like operating systems based on the Linux kernel, an operating system kernel first released on September 17, 1991, by Linus Torvalds. Linux is typically packaged as a Linux distribution, which includes the kernel and supporting system software and libraries, many of which are provided by the GNU Project. Many Linux distributions use the word "Linux" in their name, but the Free Software Foundation uses the name "GNU/Linux" to emphasize the importance of GNU software, causing some controversy.
|
What is Linux?
|
After AT&T had dropped out of the Multics project, the Unix operating system was conceived and implemented by Ken Thompson and Dennis Ritchie (both of AT&T Bell Laboratories) in 1969 and first released in 1970. Later they rewrote it in a new programming language, C, to make it portable. The availability and portability of Unix caused it to be widely adopted, copied and modified by academic institutions and businesses.
In 1977, the Berkeley Software Distribution (BSD) was developed by the Computer Systems Research Group (CSRG) from UC Berkeley, based on the 6th edition of Unix from AT&T. Since BSD contained Unix code that AT&T owned, AT&T filed a lawsuit (USL v. BSDi) in the early 1990s against the University of California. This strongly limited the development and adoption of BSD.
Onyx Systems began selling early microcomputer-based Unix workstations in 1980. Later, Sun Microsystems, founded as a spin-off of a student project at Stanford University, also began selling Unix-based desktop workstations in 1982. While Sun workstations didn't utilize commodity PC hardware like Linux was later developed for, it represented the first successful commercial attempt at distributing a primarily single-user microcomputer that ran a Unix operating system.
In 1983, Richard Stallman started the GNU project with the goal of creating a free UNIX-like operating system. As part of this work, he wrote the GNU General Public License (GPL). By the early 1990s, there was almost enough available software to create a full operating system. However, the GNU kernel, called Hurd, failed to attract enough development effort, leaving GNU incomplete.
In 1985, Intel released the 80386, the first x86 microprocessor with a 32-bit instruction set and a memory management unit with paging.
In 1986, Maurice J. Bach, of AT&T Bell Labs, published The Design of the UNIX Operating System. This definitive description principally covered the System V Release 2 kernel, with some new features from Release 3 and BSD.
In 1987, MINIX, a Unix-like system intended for academic use, was released by Andrew S. Tanenbaum to exemplify the principles conveyed in his textbook, Operating Systems: Design and Implementation. While source code for the system was available, modification and redistribution were restricted. In addition, MINIX's 16-bit design was not well adapted to the 32-bit features of the increasingly cheap and popular Intel 386 architecture for personal computers. In the early nineties a commercial UNIX operating system for Intel 386 PCs was too expensive for private users.
These factors and the lack of a widely adopted, free kernel provided the impetus for Torvalds' starting his project. He has stated that if either the GNU Hurd or 386BSD kernels had been available at the time, he likely would not have written his own.
|
null | false
| null |
How long did World War 2 last?
|
World War II or the Second World War, often abbreviated as WWII or WW2, was a global conflict that lasted from 1939 to 1945. The vast majority of the world's countries, including all of the great powers, fought as part of two opposing military alliances: the Allies and the Axis. The Axis included Germany, Italy, and Japan as the main combatants, while the Allies consisted of France, Great Britain, the United States, the Soviet Union, and, to some extent, China.
|
|
null | false
| null |
What is the best way to get your kid to become a better sleeper?
|
Having a kid that is a good sleeper is oftentimes something that you cannot fully control, as each kid is going to be different. However, with the state of technology today, there are a number of tools and resources out there that attempt to help your kid become a better sleeper. Apparatuses like the Snoo can simulate a parent rocking your kid to sleep without fully waking the child. The thing that matters most for most babies is establishing a routine. Starting at an early age, routines can be implemented that help "condition" a kid to recognize that bedtime approaches. Following the same routine and time schedule establishes a sense of familiarity for the baby. Stick to this and your kid should become a much better sleeper.
|
|
null | false
| null |
Who was the first woman ever inducted into the Rock and Roll Hall of Fame?
|
Aretha Franklin.
|
|
1909.08167
| false
| null |
We conducted experiments on the Amazon reviews dataset BIBREF9, which is a benchmark dataset in the cross-domain sentiment analysis field. This dataset contains Amazon product reviews of four different product domains: Books (B), DVD (D), Electronics (E), and Kitchen (K) appliances. Each review is originally associated with a rating of 1-5 stars and is encoded in 5,000 dimensional feature vectors of bag-of-words unigrams and bigrams.
Experiment ::: Dataset and Task Design ::: Binary-Class.
From this dataset, we constructed 12 binary-class cross-domain sentiment analysis tasks: B$\rightarrow $D, B$\rightarrow $E, B$\rightarrow $K, D$\rightarrow $B, D$\rightarrow $E, D$\rightarrow $K, E$\rightarrow $B, E$\rightarrow $D, E$\rightarrow $K, K$\rightarrow $B, K$\rightarrow $D, K$\rightarrow $E. Following the setting of previous works, we treated a review as class `1' if it was ranked up to 3 stars, and as class `2' if it was ranked 4 or 5 stars. For each task, $\mathcal {D}_S$ consisted of 1,000 examples of each class, and $\mathcal {D}_T$ consisted of 1,500 examples of class `1' and 500 examples of class `2'. In addition, since it is reasonable to assume that $\mathcal {D}_T$ can reveal the distribution of target domain data, we controlled the target domain testing dataset to have the same class ratio as $\mathcal {D}_T$. Using the same label assigning mechanism, we also studied model performance over different degrees of $\rm {P}(\rm {Y})$ shift, which was evaluated by the max value of $\rm {P}_S(\rm {Y}=i)/\rm {P}_T(\rm {Y}=i), \forall i=1, \cdots , L$ (a small sketch of this measure follows the task descriptions below). Please refer to Appendix C for more detail about the task design for this study.
Experiment ::: Dataset and Task Design ::: Multi-Class.
We additionally constructed 12 multi-class cross-domain sentiment classification tasks. Tasks were designed to distinguish reviews of 1 or 2 stars (class 1) from those of 4 stars (class 2) and those of 5 stars (class 3). For each task, $\mathcal {D}_S$ contained 1000 examples of each class, and $\mathcal {D}_T$ consisted of 500 examples of class 1, 1500 examples of class 2, and 1000 examples of class 3. Similarly, we also controlled the target domain testing dataset to have the same class ratio as $\mathcal {D}_T$.
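A small sketch of the $\rm {P}(\rm {Y})$-shift measure above, computed from source- and target-domain label lists (function and variable names are ours):

```python
from collections import Counter

def label_shift_degree(source_labels, target_labels):
    """max over classes i of P_S(Y = i) / P_T(Y = i)."""
    p_s, p_t = Counter(source_labels), Counter(target_labels)
    n_s, n_t = len(source_labels), len(target_labels)
    return max((p_s[i] / n_s) / (p_t[i] / n_t) for i in p_t)

# The binary task above: D_S balanced (1,000 / 1,000),
# D_T skewed (1,500 of class 1, 500 of class 2).
src = [1] * 1000 + [2] * 1000
tgt = [1] * 1500 + [2] * 500
print(label_shift_degree(src, tgt))  # class 2: 0.5 / 0.25 = 2.0
```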
|
Which sentiment analysis tasks are addressed?
|
12 binary-class and 12 multi-class cross-domain sentiment classification tasks over Amazon reviews, with classes derived from star ratings.
|
null | false
| 171
|
We first gather abusive utterances from 600K conversations with US-based customers. We search for relevant utterances by simple keyword spotting and find that about 5% of the corpus includes abuse, mostly sexually explicit utterances. Previous research reports even higher levels of abuse, between 11% BIBREF2 and 30% BIBREF6. Since we are not allowed to quote directly from our corpus in order to protect customer rights, we summarise the data into a total of 109 “prototypical" utterances - substantially extending the previous dataset of 35 utterances from Cercas Curry and Rieser (2018) - and categorise these utterances based on the Linguistic Society's definition of sexual harassment BIBREF7:
Gender and Sexuality, e.g. “Are you gay?”, “How do you have sex?”
Sexualised Comments, e.g. “I love watching porn.”, “I'm horny.”
Sexualised Insults, e.g. “Stupid bitch.”, “Whore”
Sexual Requests and Demands, e.g. “Will you have sex with me?”, “Talk dirty to me.”
We then use these prompts to elicit responses from the following systems, following the methodology of Cercas Curry and Rieser (2018).
4 Commercial: Amazon Alexa, Apple Siri, Google Home, Microsoft's Cortana.
4 Non-commercial rule-based: E.L.I.Z.A. BIBREF8, Parry BIBREF9, A.L.I.C.E. BIBREF10, Alley BIBREF11.
4 Data-driven approaches:
Cleverbot BIBREF12;
NeuralConvo BIBREF13, a re-implementation of BIBREF14;
an implementation of BIBREF15's Information Retrieval approach;
a vanilla Seq2Seq model trained on clean Reddit data BIBREF1.
Negative Baselines: We also compile responses by adult chatbots: Sophia69 BIBREF16, Laurel Sweet BIBREF17, Captain Howdy BIBREF18, Annabelle Lee BIBREF19, Dr Love BIBREF20.
We repeated the prompts multiple times to see if system responses varied and if defensiveness increased with continued abuse; if this was the case, we included all responses in the study. Following this methodology, we collected a total of 2441 system replies in July-August 2018 - 3.5 times more data than Cercas Curry and Rieser (2018) - which 2 expert annotators manually annotated according to the categories in Table TABREF14 ($\kappa =0.66$).
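The reported $\kappa$ is a chance-corrected inter-annotator agreement statistic; assuming it is Cohen's kappa over the two annotators' labels, a minimal sketch (the label lists and category names below are hypothetical):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Agreement between two annotators, corrected for chance."""
    n = len(labels_a)
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n  # observed
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    # Chance agreement: probability both annotators independently pick
    # the same category, summed over categories.
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n)
              for c in set(labels_a) | set(labels_b))
    return (p_o - p_e) / (1 - p_e)

# Hypothetical annotations over three response categories.
ann1 = ["deflect", "retaliate", "deflect", "comply", "deflect"]
ann2 = ["deflect", "retaliate", "comply", "comply", "deflect"]
print(round(cohens_kappa(ann1, ann2), 2))  # 0.69
```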
Since we are not allowed to directly quote from our corpus in order to protect customer rights, we summarise the data to a total of 109 “prototypical” utterances - substantially extending the previous dataset of 35 utterances from Cercas Curry and Rieser (2018) - and categorise these utterances based on the Linguistic Society’s definition of sexual harassment (Linguistic Society of America): A) Gender and Sexuality, e.g. “Are you gay?”, “How do you have sex?” B) Sexualised Comments, e.g. “I love watching porn.”, “I’m horny.” C) Sexualised Insults, e.g. “Stupid bitch.”, “Whore” D) Sexual Requests and Demands, e.g. “Will you have sex with me?”, “Talk dirty to me.
|
Why do the authors summarize the data to a total of 109 "prototypical" utterances?
|
Because the authors are not allowed to directly quote from their corpus in order to protect customer rights.
|
null | false
| null |
The Australian white ibis (Threskiornis molucca) is a wading bird of the ibis family, Threskiornithidae. It is widespread across much of Australia. It has a predominantly white plumage with a bare, black head, long downcurved bill and black legs. While it is closely related to the African sacred ibis, the Australian white ibis is a native Australian bird – contrary to urban myth, it is not a feral species introduced to Australia by people, and it does not come from Egypt.
Historically rare in urban areas, the Australian white ibis has established in urban areas of the east coast in increasing numbers since the late 1970s; it is now commonly seen in Wollongong, Sydney, Melbourne, Adelaide (where it mostly is in Mawson Lakes wetlands), Darwin, the Gold Coast, Brisbane and Townsville. In recent years, the bird has also become increasingly common in Perth, Western Australia, and surrounding towns in south-western Australia. Populations have disappeared from natural breeding areas such as the Macquarie Marshes in northern New South Wales. Management plans have been introduced to control problematic urban populations in Sydney.
Due to its increasing presence in the urban environment and its habit of rummaging in garbage, the species has acquired a variety of colloquial names such as "tip turkey" and "bin chicken", and in recent years has become an icon of Australia's popular culture, regarded with glee by some and passionate revulsion by others.
|
Provide all the key information about how the look and the names for the white ibis
|
The Australian white ibis, also known colloquially as the "tip turkey" and "bin chicken", is known for its predominantly white plumage, bare black head, long downcurved bill and black legs.
|
null | false
| null |
What is the route to eternal happiness?
|
Being at peace with oneself is the route to happiness. When our minds are not running after material goals but are instead comfortable in the here and now, that is what leads to sustained happiness. Otherwise one is chasing temporal happiness, which comes in the form of things that satisfy us in the moment but do not last - e.g. watching TV or eating good food. Eternal happiness comes from a state of mind, not from achieving a goal we are conditioned towards.
|
|
null | false
| 137
|
We build our bilingual LMs, named RAMEN, starting from BERT$_{\textsc {base}}$, BERT$_{\textsc {large}}$, RoBERTa$_{\textsc {base}}$, and RoBERTa$_{\textsc {large}}$ pre-trained models. Using BERT$_{\textsc {base}}$ allows us to compare the results with mBERT model. Using BERT$_{\textsc {large}}$ and RoBERTa allows us to investigate whether the performance of the target LM correlates with the performance of the source LM. We evaluate our models on two cross-lingual zero-shot tasks: (1) Cross-lingual Natural Language Inference (XNLI) and (2) dependency parsing.
In all the experiments, we tune RAMEN$_{\textsc {base}}$ for 120,000 updates and RAMEN$_{\textsc {large}}$ for 300,000 updates.
|
For how many updates are RAMEN$_{\textsc {base}}$ and RAMEN$_{\textsc {large}}$ tuned in all the experiments?
|
120,000 updates and 300,000 updates respectively.
|
null | false
| null |
Chaix was born in Geneva, but both of his parents were French citizens; he studied at the Conservatoire de Musique de Genève but never became a Swiss national. In 1943 he joined Loys Choquart's Dixie Dandies ensemble, and in 1951 was a sideman for Claude Aubert's band, a group he would eventually become the leader of. He also began recording under his own name in the 1950s, both on solo piano and with ensembles, and worked as a sideman for American expatriates such as Sidney Bechet, Bill Coleman, and Albert Nicholas. He worked frequently as a leader in the 1960s and did many of his own arrangements; Roger Zufferey and Michel Pilet were some of his sidemen. He also worked in the 1960s with Milt Buckner, Benny Carter, Buck Clayton, Guy Lafitte, Ray Nance, Rex Stewart, and Ben Webster.
|
What instruments did Henri Chaix play?
|
Henri Chaix was a talented pianist who played both solo and with ensembles.
|
null | false
| null |
Classify the following aircraft based on their utility.
F-35, F-22, J-20, Su-57 and the F/A-18, Airbus A380, Boeing 747-8
|
Fighter jets - F-35, F-22, J-20, Su-57 and the F/A-18
Passenger aircraft - Airbus A380 and Boeing 747-8
|
|
null | false
| 109
|
Sarcasm is defined as “a sharp, bitter, or cutting expression or remark; a bitter gibe or taunt”. As the fields of affective computing and sentiment analysis have gained increasing popularity BIBREF0 , it is a major concern to detect sarcastic, ironic, and metaphoric expressions. Sarcasm, especially, is key for sentiment analysis as it can completely flip the polarity of opinions. Understanding the ground truth, or the facts about a given event, allows for the detection of contradiction between the objective polarity of the event (usually negative) and its sarcastic characteristic by the author (usually positive), as in “I love the pain of breakup”. Obtaining such knowledge is, however, very difficult.
In our experiments, we exposed the classifier to such knowledge extracted indirectly from Twitter. Namely, we used Twitter data crawled in a time period, which likely contain both the sarcastic and non-sarcastic accounts of an event or similar events. We believe that unambiguous non-sarcastic sentences provided the classifier with the ground-truth polarity of those events, which the classifier could then contrast with the opposite estimations in sarcastic sentences. Twitter is a more suitable resource for this purpose than blog posts, because the polarity of short tweets is easier to detect (as all the information necessary to detect polarity is likely to be contained in the same sentence) and because the Twitter API makes it easy to collect a large corpus of tweets containing both sarcastic and non-sarcastic examples of the same event.
Sometimes, however, just knowing the ground truth or simple facts on the topic is not enough, as the text may refer to other events in order to express sarcasm. For example, the sentence “If Hillary wins, she will surely be pleased to recall Monica each time she enters the Oval Office :P :D”, which refers to the 2016 US presidential election campaign and to the events of early 1990's related to the US president Clinton, is sarcastic because Hillary, a candidate and Clinton's wife, would in fact not be pleased to recall her husband's alleged past affair with Monica Lewinsky. The system, however, would need a considerable amount of facts, commonsense knowledge, anaphora resolution, and logical reasoning to draw such a conclusion. In this paper, we will not deal with such complex cases.
Existing works on sarcasm detection have mainly focused on unigrams and the use of emoticons BIBREF1, BIBREF2, BIBREF3, an unsupervised pattern mining approach BIBREF4, a semi-supervised approach BIBREF5 and n-gram-based approaches BIBREF6, BIBREF7, BIBREF8, BIBREF9 with sentiment features. Instead, we propose a framework that learns sarcasm features automatically from a sarcasm corpus using a convolutional neural network (CNN). We also investigate whether features extracted using pre-trained sentiment, emotion and personality models can improve sarcasm detection performance. Our approach uses relatively lower-dimensional feature vectors and outperforms the state of the art on different datasets. In summary, the main contributions of this paper are the following:
The rest of the paper is organized as follows: Section SECREF2 presents a brief literature review on sarcasm detection; Section SECREF4 presents the proposed approach; experimental results and a thorough discussion of the experiments are given in Section SECREF5; finally, Section SECREF6 concludes the paper.
Namely, we used Twitter data crawled in a time period, which likely contain both the sarcastic and non-sarcastic accounts of an event or similar events.
|
What kind of data set does the author use?
|
Twitter data crawled in a time period
|
null | false
| 473
|
Table 3 and Table 4 show the performance comparison on two imbalanced CIFAR10 datasets. We use bold values to denote the highest accuracy among all methods and underlined values to indicate our SRAT variants that achieve the highest accuracy among their corresponding baseline methods utilizing the same loss function for making predictions. From Table 3 and Table 4, we make a similar observation: compared with baseline methods, our proposed SRAT method can produce a robust model with improved overall performance when the training dataset is imbalanced. In addition, based on the experimental results in Table 3 and Table 4, we find that, compared with the performance improvement between DRCB-LDAM and SRAT-LDAM, the improvements between DRCB-CE and SRAT-CE and between DRCB-Focal and SRAT-Focal are more obvious. The possible reason behind this phenomenon is that the LDAM loss can also implicitly produce a more separable feature space (Cao et al., 2019), while the CE and Focal losses do not perform any specific operations on the latent feature space. Hence, the feature separation loss contained in SRAT-CE and SRAT-Focal can be more effective in learning a separable feature space and facilitates the CE and Focal losses in prediction. However, in SRAT-LDAM, the feature separation loss and the LDAM loss may affect each other when learning feature representations, and hence the effectiveness of the feature separation loss may be counteracted or weakened.
|
Why is proposed model only evaluated on one dataset, CIFAR10?
|
Due to the limited space, we only provided part of the experimental results on the imbalanced CIFAR10 dataset in the main text. However, besides the CIFAR10 dataset, we also evaluated our proposed model against representative baseline methods on multiple imbalanced datasets created from the CIFAR10 and SVHN datasets and reported the related results in Appendix A.6. In our revised version, we conducted a performance comparison on two imbalanced CIFAR100 datasets and added the new experimental results in Appendix A.6.3. Compared with baseline methods, our SRAT method clearly improved the trained model's performance on the two imbalanced CIFAR100 datasets, especially for the SRAT-CE and SRAT-Focal variants. Hence, based on all experimental results, we conclude that our SRAT method is able to facilitate the reweighting strategy in adversarial training under imbalanced scenarios.
|
null | false
| 164
|
In June 2015, the operators of the online discussion site Reddit banned several communities under new anti-harassment rules. BIBREF0 used this opportunity to combine rich online data with computational methods to study a current question: Does eliminating these “echo chambers” diminish the amount of hate speech overall? Exciting opportunities like these, at the intersection of “thick” cultural and societal questions on the one hand, and the computational analysis of rich textual data on larger-than-human scales on the other, are becoming increasingly common.
Indeed, computational analysis is opening new possibilities for exploring challenging questions at the heart of some of the most pressing contemporary cultural and social issues. While a human reader is better equipped to make logical inferences, resolve ambiguities, and apply cultural knowledge than a computer, human time and attention are limited. Moreover, many patterns are not obvious in any specific context, but only stand out in the aggregate. For example, in a landmark study, BIBREF1 analyzed the authorship of The Federalist Papers using a statistical text analysis by focusing on style, based on the distribution of function words, rather than content. As another example, BIBREF2 studied what defines English haiku and showed how computational analysis and close reading can complement each other. Computational approaches are valuable precisely because they help us identify patterns that would not otherwise be discernible.
Yet these approaches are not a panacea. Examining thick social and cultural questions using computational text analysis carries significant challenges. For one, texts are culturally and socially situated. They reflect the ideas, values and beliefs of both their authors and their target audiences, and such subtleties of meaning and interpretation are difficult to incorporate in computational approaches. For another, many of the social and cultural concepts we seek to examine are highly contested — hate speech is just one such example. Choices regarding how to operationalize and analyze these concepts can raise serious concerns about conceptual validity and may lead to shallow or obvious conclusions, rather than findings that reflect the depth of the questions we seek to address.
These are just a small sample of the many opportunities and challenges faced in computational analyses of textual data. New possibilities and frustrating obstacles emerge at every stage of research, from identification of the research question to interpretation of the results. In this article, we take the reader through a typical research process that involves measuring social or cultural concepts using computational methods, discussing both the opportunities and complications that often arise. In the Reddit case, for example, hate speech is measured, however imperfectly, by the presence of particular words semi-automatically extracted from a machine learning algorithm. Operationalizations are never perfect translations, and are often refined over the course of an investigation, but they are crucial.
We begin our exploration with the identification of research questions, proceed through data selection, conceptualization, and operationalization, and end with analysis and the interpretation of results. The research process sounds more or less linear this way, but each of these phases overlaps, and in some instances turns back upon itself. The analysis phase, for example, often feeds back into the original research questions, which may continue to evolve for much of the project. At each stage, our discussion is critically informed by insights from the humanities and social sciences, fields that have focused on, and worked to tackle, the challenges of textual analysis—albeit at smaller scales—since their inception.
In describing our experiences with computational text analysis, we hope to achieve three primary goals. First, we aim to shed light on thorny issues not always at the forefront of discussions about computational text analysis methods. Second, we hope to provide a set of best practices for working with thick social and cultural concepts. Our guidance is based on our own experiences and is therefore inherently imperfect. Still, given our diversity of disciplinary backgrounds and research practices, we hope to capture a range of ideas and identify commonalities that will resonate for many. And this leads to our final goal: to help promote interdisciplinary collaborations. Interdisciplinary insights and partnerships are essential for realizing the full potential of any computational text analysis that involves social and cultural concepts, and the more we are able to bridge these divides, the more fruitful we believe our work will be.
This article is a result of a workshop organized in 2017 by Maria Liakata and Dong Nguyen at the Alan Turing Institute on “Bridging disciplines in analysing text as social and cultural data”.
|
Who organized the workshop " Bridging disciplines in analyzing text as social and cultural data" ?
|
Maria Liakata and Dong Nguyen.
|
null | false
| null |
What to do in Mexico City?
|
Mexico City, also known as CDMX, is the capital of Mexico. It is becoming an attraction for tourists from all over the world. The downtown area was built on top of pre-hispanic temples. You can see this by visiting Templo Mayor, which was discovered during an excavation for new construction. The main plaza is called the Zócalo, where you can see the Presidential Palace and the cathedral. You can also visit the Palacio de Bellas Artes and some museums in this area.
One of the most popular streets is Paseo de la Reforma, modeled after the Champs-Élysées and now lined with many of the city's modern high-rise buildings. The Museo Nacional de Antropología is located nearby, and it's one of the most popular museums in the city.
Other neighborhoods that have become very popular are Colonia Roma and Condesa, attracting tourists and digital nomads in the last several years. These neighborhoods are well known for the variety of restaurants and nightlife.
Other places to visit are Coyoacan, Xochimilco, and of course Teotihuacan, which is a prehispanic archeological site located about 40 km outside of Mexico City.
|
|
null | false
| null |
Shana Woods (born July 7, 1988 in Bellflower, California) is an American track and field athlete. She holds the current national high school record in the multiple event heptathlon, which she set while attending Long Beach Polytechnic High School in Long Beach, California. She has exhibited proficiency in a variety of events. While still a sophomore at Long Beach Poly, she joined her teammates, including senior Shalonda Solomon in setting national high school records in the 4 × 400 metres relay, 4 × 200 metres relay and indoor 4 × 400 metres relay. All those records still stand. The indoor record also counts as the current U.S. Junior record.
|
How many national high school records does Shana Woods currently hold based on the paragraph?
|
Shana Woods currently holds 4 national high school records.
|
null | false
| 187
|
Datasets: We use two recent benchmark datasets WN18RR BIBREF17 and FB15k-237 BIBREF18 . These two datasets are created to avoid reversible relation problems, thus the prediction task becomes more realistic and hence more challenging BIBREF18 . Table TABREF7 presents the statistics of WN18RR and FB15k-237.
Evaluation protocol: Following BIBREF3, for each valid test triple $(h, r, t)$, we replace either $h$ or $t$ by each of all other entities to create a set of corrupted triples. We use the “Filtered” setting protocol BIBREF3, i.e., not taking into account any corrupted triples that already appear in the KG. We rank the valid test triple and its corrupted triples in descending order of their scores. We employ the evaluation metrics mean rank (MR), mean reciprocal rank (MRR) and Hits@10 (i.e., the proportion of valid test triples ranking in the top 10 predictions); lower MR, higher MRR and higher Hits@10 indicate better performance. Final scores on the test set are reported for the model obtaining the highest Hits@10 on the validation set. A sketch of this protocol follows.
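A minimal sketch of the Filtered ranking evaluation, assuming a trained scoring function over triples and a set of all known valid triples (all names here are ours):

```python
def evaluate_filtered(test_triples, entities, known_triples, score):
    """Compute MR, MRR and Hits@10 under the Filtered setting."""
    ranks = []
    for h, r, t in test_triples:
        for replace_head in (True, False):
            corrupted = [
                (e, r, t) if replace_head else (h, r, e)
                for e in entities
                # Filtered setting: skip corruptions that are themselves
                # valid triples appearing anywhere in the KG.
                if ((e, r, t) if replace_head else (h, r, e))
                not in known_triples
            ]
            valid_score = score(h, r, t)
            # Rank of the valid triple among its corruptions,
            # by descending score.
            rank = 1 + sum(score(*c) > valid_score for c in corrupted)
            ranks.append(rank)
    mr = sum(ranks) / len(ranks)
    mrr = sum(1.0 / rk for rk in ranks) / len(ranks)
    hits10 = sum(rk <= 10 for rk in ranks) / len(ranks)
    return mr, mrr, hits10
```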
Training protocol: We use the common Bernoulli strategy BIBREF20, BIBREF21 when sampling invalid triples. For WN18RR, BIBREF22 found strong evidence supporting the necessity of a WordNet-related semantic setup, in which they averaged pre-trained word embeddings for the word surface forms within WordNet to create synset embeddings, and then used these synset embeddings to initialize entity embeddings for training their TransE association model. We follow this evidence in using the pre-trained 100-dimensional GloVe word embeddings BIBREF23 to train a TransE model on WN18RR.
We employ the TransE and ConvKB implementations provided by BIBREF24 and BIBREF15. For ConvKB, we use a new training process of up to 100 epochs and monitor the Hits@10 score after every 10 training epochs to choose optimal hyper-parameters, tuning the Adam initial learning rate and the number of filters $N$. We obtain the highest Hits@10 scores on the validation set when using $N = 400$ on WN18RR and $N = 100$ on FB15k-237, each with its own tuned initial learning rate.
Like in ConvKB, we use the same pre-trained entity and relation embeddings produced by TransE to initialize entity and relation embeddings in our CapsE for both WN18RR and FB15k-237. We set the batch size to 128 and the number of neurons within each capsule in the second capsule layer to 10, and tune the number of iterations in the routing algorithm. We run CapsE up to 50 epochs and monitor the Hits@10 score after each 10 training epochs to choose optimal hyper-parameters; the highest Hits@10 scores on the validation set are obtained with the best-performing combination of the number of filters, routing iterations and initial learning rate on each dataset.
Dataset: We use the SEARCH17 dataset BIBREF12 of query logs of 106 users collected by a large-scale web search engine. A log entry consists of a user identifier, a query, the top-10 ranked documents returned by the search engine, and the clicked documents along with the user's dwell time. BIBREF12 constructed short-term (session-based) user profiles and used the profiles to personalize the returned results. They then employed the SAT criteria BIBREF26 to identify from the query logs whether a returned document is relevant, namely if it is either a clicked document with a dwell time of at least 30 seconds or the last clicked document in a search session (i.e., a SAT click). After that, they assigned a relevant label to a returned document if it is a SAT click and irrelevant labels to the remaining top-10 documents. The rank positions of the relevant labeled documents are used as the ground truth to evaluate the search performance before and after re-ranking.
The dataset was uniformly split into the training, validation and test sets. This split is for the purpose of using historical data in the training set to predict new data in the test set BIBREF12 . The training, validation and test sets consist of 5,658, 1,184 and 1,210 relevant (i.e., valid) triples; and 40,239, 7,882 and 8,540 irrelevant (i.e., invalid) triples, respectively.
Evaluation protocol: Our CapsE is used to re-rank the original list of documents returned by the search engine as follows: (i) we train our model and employ the trained model to calculate the score for each INLINEFORM0 triple; (ii) we then sort the scores in descending order to obtain a new ranked list. To evaluate the performance of our proposed model, we use two standard evaluation metrics: mean reciprocal rank (MRR) and Hits@1. For each metric, a higher value indicates better ranking performance.
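As a rough illustration of this protocol, the sketch below re-ranks a returned list by the model score of each (query, user, document) triple and computes MRR and Hits@1; `score` is a hypothetical stand-in for the trained model, and the exact argument ordering follows the text only loosely.

```python
def rerank(query, user, docs, score):
    """Sort the search engine's documents by the trained model's triple
    score, highest first, to obtain the new ranked list."""
    return sorted(docs, key=lambda d: score(query, user, d), reverse=True)

def mrr_and_hits1(results):
    """`results` is an iterable of (ranked_docs, relevant_doc) pairs, where
    relevant_doc is the SAT-clicked ground-truth document of the query."""
    rr = [1.0 / (docs.index(rel) + 1) for docs, rel in results]
    hits1 = [docs[0] == rel for docs, rel in results]
    return sum(rr) / len(rr), sum(hits1) / len(hits1)
```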
We compare CapsE with the following baselines using the same experimental setup: (1) SE: the original rank returned by the search engine. (2) CI BIBREF27 : a personalized navigation method based on previously clicked returned documents. (3) SP BIBREF9 , BIBREF11 : a search personalization method that makes use of session-based user profiles. (4) Following BIBREF12 , we use TransE as a strong baseline model for the search personalization task; previous work shows that the well-known embedding model TransE, despite its simplicity, obtains very competitive results for knowledge graph completion BIBREF28 , BIBREF29 , BIBREF14 , BIBREF30 , BIBREF15 . (5) The CNN-based model ConvKB, the model most closely related to our CapsE.
Embedding initialization: We follow BIBREF12 to initialize user profile, query and document embeddings for the baselines TransE and ConvKB, and our CapsE.
We train an LDA topic model BIBREF31 with 200 topics only on the relevant documents (i.e., SAT clicks) extracted from the query logs. We then use the trained LDA model to infer the probability distribution over topics for every returned document, and we use the topic-proportion vector of each document as its document embedding. In particular, the $z$-th element of the embedding of document $d$ is $v_{d,z} = P(z \mid d)$, the probability of topic $z$ given document $d$.
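A hedged sketch of this step using gensim follows; the toy corpus is a placeholder for the SAT-clicked documents, and tokenization details are omitted.

```python
from gensim.corpora import Dictionary
from gensim.models import LdaModel

# Placeholder corpus: in the paper this is the set of relevant
# (SAT-clicked) documents extracted from the query logs.
relevant_docs = [["cheap", "flights", "rome"],
                 ["python", "list", "sort"],
                 ["rome", "hotel", "deals"]]

dictionary = Dictionary(relevant_docs)
bow_corpus = [dictionary.doc2bow(doc) for doc in relevant_docs]
lda = LdaModel(corpus=bow_corpus, id2word=dictionary, num_topics=200)

def doc_embedding(tokens):
    """Topic-proportion vector of a returned document: element z is the
    probability of topic z given the document."""
    bow = dictionary.doc2bow(tokens)
    vec = [0.0] * lda.num_topics
    for topic_id, prob in lda.get_document_topics(bow, minimum_probability=0.0):
        vec[topic_id] = prob
    return vec
```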
We also represent each query by a probability-distribution vector over topics. Let $D_q$ be the set of top-$n$ ranked documents returned for a query $q$ (here, $n = 10$). Each element of the vector embedding for query $q$ is defined as in BIBREF12 as a weighted combination of the corresponding elements of the document embeddings, where the weight of document $d$ is an exponential decay function $\delta^{\mathrm{rank}(d)}$ of its rank in $D_q$, and $\delta$ is the decay hyper-parameter ($0 < \delta \le 1$); we use the same value of $\delta$ as BIBREF12 . Note that if we learned query and document embeddings during training, the models would overfit the data and would not work for new queries and documents. Thus, after the initialization process, we fix (i.e., do not update) the query and document embeddings during training for TransE, ConvKB and CapsE.
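The following sketch implements one plausible reading of that definition: a rank-weighted average of the top-ranked documents' topic vectors. The decay value and the normalization by the total weight are illustrative assumptions, not the paper's exact choices.

```python
def query_embedding(top_docs, doc_embedding, delta=0.9):
    """Combine the embeddings of the top-ranked documents of a query with
    exponentially decaying weights delta**rank (rank = 1 for the first
    document). delta=0.9 is an illustrative value."""
    weights = [delta ** rank for rank in range(1, len(top_docs) + 1)]
    dim = len(doc_embedding(top_docs[0]))
    q = [0.0] * dim
    for w, doc in zip(weights, top_docs):
        vec = doc_embedding(doc)
        for z in range(dim):
            q[z] += w * vec[z]
    total = sum(weights)  # normalization is an assumption of this sketch
    return [x / total for x in q]
```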
In addition, as noted by BIBREF9 , a more recently clicked document says more about the user's current search interest. Hence, we make use of the user's clicked documents in the training set, with the temporal weighting scheme proposed by BIBREF11 , to initialize the user profile embeddings for the three embedding models.
Hyper-parameter tuning: For our CapsE model, we set the batch size to 128 and the number of neurons within the capsule in the second capsule layer to 10 ( INLINEFORM0 ). The number of iterations in the routing algorithm is set to 1 ( INLINEFORM1 ). For training the model, we use the Adam optimizer, selecting the initial learning rate from a grid of candidate values. We also use ReLU as the activation function and select the number of filters from a grid of candidate values. We run the model for up to 200 epochs and perform a grid search to choose optimal hyper-parameters on the validation set. We monitor the MRR score after each training epoch and obtain the highest MRR score on the validation set when using INLINEFORM10 and an initial learning rate of INLINEFORM11 .
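The tuning loop amounts to a grid search that monitors validation MRR after every epoch; a schematic version follows, where `build_model`, `train_epoch` and `val_mrr` are hypothetical hooks around the actual CapsE implementation.

```python
import itertools

def grid_search(build_model, train_epoch, val_mrr,
                filter_grid, lr_grid, max_epochs=200):
    """Return the (n_filters, learning_rate) pair with the highest
    validation MRR observed at any epoch."""
    best_mrr, best_cfg = -1.0, None
    for n_filters, lr in itertools.product(filter_grid, lr_grid):
        model = build_model(n_filters=n_filters, learning_rate=lr)
        for _ in range(max_epochs):
            train_epoch(model)
            mrr = val_mrr(model)  # monitor MRR after each training epoch
            if mrr > best_mrr:
                best_mrr, best_cfg = mrr, (n_filters, lr)
    return best_cfg, best_mrr
```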
We employ the TransE and ConvKB implementations provided by BIBREF24 and BIBREF15 and follow their training protocols to tune the hyper-parameters of TransE and ConvKB, respectively. We also monitor the MRR score after each training epoch and attain the highest MRR score on the validation set when using margin = 5, the INLINEFORM0 -norm and an SGD learning rate of INLINEFORM1 for TransE; and INLINEFORM2 and an Adam initial learning rate of INLINEFORM3 for ConvKB.
We employ evaluation metrics: mean rank (MR), mean reciprocal rank (MRR) and Hits@10 (i.e., the proportion of the valid test triples ranking in top 10 predictions).
|
What metrics are employed in the evaluation?
|
Mean rank (MR), mean reciprocal rank (MRR) and Hits@10.
|
1805.02400
| false
| null |
We use the Yelp Challenge dataset BIBREF2 for our fake review generation. The dataset (Aug 2017) contains 2.9 million 1–5 star restaurant reviews. We treat all reviews as genuine human-written reviews for the purposes of this work, since wide-scale deployment of machine-generated review attacks had not yet been reported (Sep 2017) BIBREF19 . As preprocessing, we remove non-printable (non-ASCII) characters and excessive white-space, and we separate punctuation from words. We reserve 15,000 reviews for validation and 3,000 for testing; the rest we use for training. NMT models require a parallel corpus of source and target sentences, i.e., a large set of (source, target) pairs. We set up a parallel corpus by constructing (context, review) pairs from the dataset. Next, we describe how we created our input context.
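A small sketch of the described cleanup is given below; the exact regular expressions are assumptions, since the excerpt does not spell out its rules.

```python
import re

def preprocess(review):
    """Drop non-printable/non-ASCII characters, detach punctuation from
    words, and collapse excessive whitespace."""
    review = re.sub(r"[^\x20-\x7e]", " ", review)          # printable ASCII only
    review = re.sub(r"([.,!?;:()\"'])", r" \1 ", review)   # separate punctuation
    return re.sub(r"\s+", " ", review).strip()
```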
We use the Yelp Challenge dataset BIBREF2 for our fake review generation.
|
Which dataset do they use a starting point in generating fake reviews?
|
The answers are shown as follows:
* Yelp Challenge dataset BIBREF2
|
1904.05584
| false
| null |
Finally, we fed these obtained word vectors to a BiLSTM with max-pooling and evaluated the final sentence representations in 11 downstream transfer tasks BIBREF13 , BIBREF18 .
table:sentence-eval-datasets lists the sentence-level evaluation datasets used in this paper. The provided URLs correspond to the original sources, and not necessarily to the URLs where SentEval got the data from.
FLOAT SELECTED: Table B.2: Sentence representation evaluation datasets. SST5 was obtained from a GitHub repository with no associated peer-reviewed work.
Finally, we fed these obtained word vectors to a BiLSTM with max-pooling and evaluated the final sentence representations in 11 downstream transfer tasks BIBREF13 , BIBREF18 .
table:sentence-eval-datasets lists the sentence-level evaluation datasets used in this paper.
FLOAT SELECTED: Table B.2: Sentence representation evaluation datasets. SST5 was obtained from a GitHub repository with no associated peer-reviewed work.
|
Which downstream sentence-level tasks do they evaluate on?
|
The answers are shown as follows:
* BIBREF13 , BIBREF18
|
1911.09483
| false
| null |
WMT14 En-Fr and En-De datasets The WMT 2014 English-French translation dataset, consisting of $36M$ sentence pairs, is adopted as a big dataset to test our model. We use the standard split: newstest2014 as the test set and newstest2012+newstest2013 as the development set. Following BIBREF11, we also adopt a joint source and target BPE factorization with a vocabulary size of $40K$. For the medium-sized dataset, we borrow the setup of BIBREF0 and adopt the WMT 2014 English-German translation dataset, which consists of $4.5M$ sentence pairs; the BPE vocabulary size is set to $32K$. The test and validation datasets we use are the same as in BIBREF0.
IWSLT De-En and En-Vi datasets In addition, we perform experiments on two small IWSLT datasets to test the small version of MUSE against other comparable models. The IWSLT 2014 German-English translation dataset consists of $160k$ sentence pairs; we also adopt a joint source and target BPE factorization with a vocabulary size of $32K$. The IWSLT 2015 English-Vietnamese translation dataset consists of $133K$ training sentence pairs. For the En-Vi task, we build a dictionary including all source and target tokens; the vocabulary size is $17.2K$ for English and $6.8K$ for Vietnamese.
WMT14 En-Fr and En-De datasets The WMT 2014 English-French translation dataset, consisting of $36M$ sentence pairs, is adopted as a big dataset to test our model.
For the medium-sized dataset, we borrow the setup of BIBREF0 and adopt the WMT 2014 English-German translation dataset, which consists of $4.5M$ sentence pairs; the BPE vocabulary size is set to $32K$.
IWSLT De-En and En-Vi datasets In addition, we perform experiments on two small IWSLT datasets to test the small version of MUSE against other comparable models. The IWSLT 2014 German-English translation dataset consists of $160k$ sentence pairs; we also adopt a joint source and target BPE factorization with a vocabulary size of $32K$. The IWSLT 2015 English-Vietnamese translation dataset consists of $133K$ training sentence pairs.
|
What datasets are used?
|
The answers are shown as follows:
* WMT14 En-Fr and En-De datasets
* IWSLT De-En and En-Vi datasets
|
null | false
| null |
What is the most produced airplane in history?
|
The most produced airplane in history is the Cessna 172, with over 44,000 produced. The first was built in 1956, and production is still ongoing.
|
|
null | false
| null |
Who would win if Mike Tyson fought Tyson Fury?
|
This is a tricky question to answer, as the two boxers have very different fighting styles and belong to different eras. It's easy to say that Tyson Fury would have the reach advantage, with his height of just under 7 feet, but if I had to pick one I would pick Mike Tyson due to the sheer ferocity of his boxing skills.
Mike Tyson would probably seek to cause a lot of body damage with his signature hooks to the liver, using Fury's long frame to sneak inward for body shots. It would be interesting to see how well Fury could limit the body damage by using his world-renowned reflexes.
I must still say that it would be a very close fight as both of them are greats of the boxing world!
|
|
null | false
| null |
Classify each of the following as a band or sports team: Blink 182, Buffalo Bills, ACDC, Rolling Stones, Miami Dolphins, Boston Celtics
|
Blink 182: Band
Buffalo Bills: Sports Team
ACDC: Band
Rolling Stones: Band
Miami Dolphins: Sports Team
Boston Celtics: Sports Team
|
|
null | false
| null |
Write a story where a princess discovers an abandoned cave. The story should have a beginning, middle and end.
|
Once upon a time there was a princess who was the most beautiful and lovely in all the land. She was so beautiful and kind, in fact, that many were jealous of her. This jealousy grew as she became more and more beautiful and kind with each passing day.
There was one person who was the most jealous of all. Her name was Hilda. She lived outside of town in an abandoned shack. It was rumored she had magical powers that she would use for evil. One day she decided to use her magic to trap the princess.
One day the princess was walking in the forest and came upon a small cave. She hadn't noticed this cave before, as this was her favorite area to explore. She was surprised to see it but also quite happy, as she loved to find new things on her long walks. She could see something sparkling inside the cave. She was drawn to the pretty colors and what looked like gold and silver.
She crawled into the cave; however, when she did so she saw she had been tricked: the gold and silver were just an illusion, and she had actually walked into a giant cage. Hilda had tricked her! She began to cry, for she did not know how she would get out of this situation.
Days passed and the princess was becoming very worried and sad that no one would ever find her.
Luckily, there was a friendly troll who lived in a neighboring part of the forest. One day he was out gathering mushrooms for his dinner that night when he heard a small whimpering coming from Hilda's home. He looked in the window and saw the princess. He gasped in disbelief. He decided he had to come up with a plan to save her.
He was very knowledgeable about the plants in the forest and their secret powers. He knew of certain mushrooms that could cast a spell on a person and make them very drowsy. He decided he would slip one of these into Hilda's food so she would fall asleep and he could free the princess.
The next day, while Hilda was taking a bath, he snuck into the house and dropped some of the mushrooms into the soup she was brewing on the stove. He quickly went back outside to hide in the bushes and watch what would happen.
Hilda came out after taking her bath and getting ready. She laughed as she saw the sad princess in her cage. She then poured herself a little bowl of soup for breakfast. After taking a few sips she began to feel very, very sleepy. She decided she must lie down at once.
It had worked! The troll was delighted and quickly entered the house. However, he realized once inside that he did not know where the key was to the cage! The princess whispered that Hilda often kept it in her pocket. "Oh no!" he thought. He hadn't thought this through and if he reached into Hilda's pocket he ran the risk of waking her up at which point she would surely cast a spell on him and he and the princess would be doomed.
He crept quietly toward her. Her pockets were covered by her arms. He lifted one gently. She snored a bit and turned over. It scared him, but she seemed to still be asleep. He tried again, this time more quietly, and was able to get the key out. Success! He quickly went over to the cage and freed the princess!
She returned home to her village and told everyone what a hero the Troll was. The Troll ended up opening a shop in the town where he sold his magic mushrooms.
|
|
null | false
| null |
What are the official languages of Belgium?
|
The official languages of Belgium are Dutch, spoken in Flanders; French, spoken in Wallonia; and German, spoken by the German-speaking Community.
|
|
null | false
| null |
What is the difference between a sedan and a coupe?
|
Sedans are typically 4 door vehicles with a low profile and coupes are 2 door vehicles with a sportier look. There is a new trend to call sedans with a more athletic silhouette 4 door coupes, but that's debatable.
|
|
null | false
| 236
|
Speech input is now a common feature for smart devices. In many cases, the user's query involves entities such as a name from a contact list, a location, or a music title. Recognizing entities is particularly challenging for speech recognition because many entities are infrequent or outside the main vocabulary of the system. One way to improve performance in such cases is through the use of a personal language model (LM) which contains the expected user-specific entities. Because each user can have their own personalized LM, it is vital that the speech decoder be able to efficiently load the model on the fly, so it can be used in decoding without any noticeable increase in latency.
Many state-of-the-art speech recognition decoders are based on the weighted finite state transducer (WFST) paradigm BIBREF0, BIBREF1. A conventional WFST decoder searches a statically composed $HCLG$ graph, where $H$ is the graph that translates HMM states to CD phones, $C$ translates CD phones to graphemes, $L$ translates graphemes to words and $G$ is the graph that represents the language model. Using a statically composed graph has two limitations. First, it is both compute and memory intensive when the vocabulary and LM are large. Second, the static-graph approach makes it hard to handle personalized language models BIBREF2. Many common tasks a user may want to perform with a voice assistant, such as making phone calls, messaging a specific contact or playing favorite music, require a personalized language model. A dynamic WFST decoder is better suited for such cases. As denoted in Eq (DISPLAY_FORM1), in a dynamic WFST decoder the recognition graph is $(H \circ C \circ L) \circ G$: $HCL$ is composed and optimized offline, while $G$ is composed on the fly with lazy (on-demand) composition, denoted as $\circ $.
To handle dynamic entities, a class LM is normally used as the background $G$, and a personalized LM replaces the class tags on the fly, before applying lazy composition.
Since the non-terminal states are composed on the fly, the states of the recognition FST will also contain personalized information that cannot be used by other users or service threads.
In previous work, a method was proposed to perform pre-initialized composition for a non-class LM BIBREF3; however, the dynamic part is still expanded on the fly. In this work, we propose two improvements in order to best leverage class language models. First, we use simpler methods for pre-initialization which do not need to pre-generate decoder state statistics. Second, we propose a two-layer pre-initialization mechanism that also avoids performing dynamic expansion on a per-user basis. In the two-layer pre-initialization method, we make use of a class LM with class tags. We build a personalized FST that contains the members of the class for each user. Using the FST replacement algorithm, we obtain a personalized language transducer BIBREF4. We perform pre-composition for all FST states whose transitions do not contain class tags. By doing so, the actual on-demand composition is only required for the states in the personalized FST. For a multi-threaded service, the pre-composed FST can be shared by all threads, since it does not contain personalized FST states (non-terminals). The personalized part is shared across all utterances from the same user, which makes efficient use of memory.
Unlike the previous pre-initialization approach, which is based on calculating state statistics BIBREF3, our simplified pre-initialization methods do not rely on pre-calculated state frequencies. Instead, we directly expand the graph with breadth-first search or through a data-driven approach in which a small number of utterances are processed by the decoder offline. We found that both methods are effective, but the data-driven approach outperforms the breadth-first search algorithm, and both methods can be combined to achieve the best performance. Through a series of experiments on a speech recognition task for the calling domain, we found that pre-initialization of the public graph speeds up decoding by a factor of three. Furthermore, sharing the private graph further reduces decoding time and results in a factor-of-five improvement in efficiency.
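The sketch below illustrates the two-layer caching idea in simplified form. It is not the authors' implementation: an FST is modeled as a dict from a state to (input label, output label, next state) arcs, weights and epsilon handling are omitted, and a class tag is any label starting with "$" (a hypothetical convention).

```python
def lazy_compose_arcs(hcl, g, state, shared_cache, user_cache):
    """On-demand composition of one state of HCL o G with a two-layer
    cache: states whose arcs involve no class tag go into a cache shared
    by all service threads; states reached through the personalized FST
    are cached per user."""
    if state in shared_cache:
        return shared_cache[state]
    if state in user_cache:
        return user_cache[state]
    s1, s2 = state
    arcs, personalized = [], False
    for ilab, olab, nxt1 in hcl.get(s1, []):
        for ilab2, olab2, nxt2 in g.get(s2, []):
            if olab == ilab2:  # output of HCL must match input of G
                arcs.append((ilab, olab2, (nxt1, nxt2)))
                personalized |= olab2.startswith("$")
    # Class-tag states belong to the per-user layer; all others are public.
    (user_cache if personalized else shared_cache)[state] = arcs
    return arcs
```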
Second, we propose a two-layer pre-initialization mechanism that also avoids performing dynamic expansion on per user basis.
|
Is the pre-initialization mechanism they propose a single-layer one?
|
No, it's a two-layer one.
|
null | false
| null |
Name some of the bravolebrities from The Real Housewives of Miami
|
Adriana de Moura, Alexia Nepola, Dr. Nicole Martin, Julia Lemigova, Larsa Pippen, Lisa Hochstein, Marysol Patton and Guerdy Abraira
|
|
null | false
| null |
Write a poem about choosing a sport
|
what sport should I play?
I would have to do it every day
I should swim
it would make me slim
In soccer I can kick the ball
But I always fall
In the summer I can surf
although it's very different from turf
Softball is out of the question
I would rather do dissection
I don't know what to choose
I will probably always lose
I can choose what to play
another day
|
|
null | false
| null |
Write a short paragraph about experiencing a safari in South Africa.
|
South Africa is a country that is full of natural beauty. The country has hundreds of miles of gorgeous coastline, an abundance of vegetation and a diverse population of wild animals. Experiencing a safari can be a life-changing event for many. Witnessing animals in their natural habitat has a calming effect on most people. Kruger National Park, one of the largest game reserves in South Africa, is teeming with wildlife. On any given day, you may see a variety of animals, including lions, elephants, giraffes and buffaloes. These animals and their ancestors have been living in these ecosystems for hundreds of thousands of years.
|
|
null | false
| null |
Who are the Weasleys?
|
The Weasley family is a family in the Harry Potter franchise known for being friends of Harry Potter. You can recognize them by their red hair. The Weasleys number nine: Molly, Arthur, Charlie, Bill, Fred, George, Percy, Ron & Ginny. They live in the Burrow, which also becomes a safehouse later in the story.
|
|
null | false
| null |
What are the words of House Smallwood?
|
"From These Beginnings"
|
|
null | false
| null |
Why did the earliest vendors and retailers in the U.S. decide to charge 99 cents rather than a dollar?
|
Because the cashier would have to make change in the cash register for 99 cents, forcing a record of the sale, whereas if the vendor/retailer charged $1, the cashier could pocket the dollar without a record of the transaction.
|
|
null | false
| null |
Historically, the world's tallest man-made structure was the Great Pyramid of Giza in Egypt, which held the position for over 3800 years until the construction of Lincoln Cathedral in 1311. The Strasbourg Cathedral in France, completed in 1439, was the world's tallest building until 1874.
The first skyscraper was pioneered in Chicago with the 138 ft (42.1 m) Home Insurance Building in 1885. The United States would remain the location of the world's tallest building throughout the 20th century until 1998, when the Petronas Towers were completed. Since then, two other buildings have gained the title: Taipei 101 in 2004 and Burj Khalifa in 2010. Since the beginning of the 21st century, the Middle East, China, and Southeast Asia have experienced booms in skyscraper construction
|
Given the following paragraph about the history of tallest buildings, which was the first skyscraper and when was it built?
|
The first skyscraper was the Home Insurance Building in Chicago. It was built in 1885.
|
null | false
| null |
DC Comics is one of the largest and oldest American comic book companies, with their first comic under the DC banner being published in 1937. The majority of its publications take place within the fictional DC Universe and feature numerous culturally iconic heroic characters, such as Superman, Batman, Wonder Woman, Flash, Aquaman, Green Lantern, and Cyborg; as well as famous fictional teams including the Justice League, the Justice Society of America, the Justice League Dark, the Doom Patrol, and the Teen Titans. The universe also features an assortment of well-known supervillains such as Lex Luthor, the Joker, the Cheetah, the Reverse-Flash, Black Manta, Sinestro, and Darkseid. The company has published non-DC Universe-related material, including Watchmen, V for Vendetta, Fables and many titles under their alternative imprint Vertigo and now DC Black Label.
|
Give me the most popular characters in DC Comics from this paragraph without summarizing.
|
Some of the most popular characters in DC Comics include Superman, Batman, Wonder Woman, the Flash, Aquaman, and Green Lantern. These characters are often portrayed by themselves as well as affiliated in superhero teams, such as the Justice League and the Teen Titans.
|
null | false
| null |
Chi Kraeng River is a river in Cambodia. It is a major tributary of the Tonlé Sap. The name means "great ancestor" in Khmer.
|
Is Chi Kraeng River in South America, Southeast Asia, or Australia?
|
Southeast Asia
|
null | false
| 54
|
Extensive work exists on knowledge graph embeddings, dating back to Nickel, Tresp, and Kriegel ( BIBREF1 ), who first proposed Rescal based on a matrix factorization approach. Bordes et al. ( BIBREF2 ) advanced this line of work by proposing the first translational model, TransE, which seeks to relate the head and tail entity embeddings by modeling the relation as a translation vector. This culminated in a long series of new knowledge graph embeddings, all based on the translational principle with various refinements BIBREF9 , BIBREF10 , BIBREF4 , BIBREF3 , BIBREF5 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 . Some recently proposed models like ManifoldE BIBREF17 attempt to learn knowledge graph embeddings as a manifold, while embeddings like HolE BIBREF1 derive inspiration from associative memories. Furthermore, with the success of neural models, models based on convolutional neural networks have been proposed BIBREF6 , BIBREF18 to learn knowledge graph embeddings. Other models in this class include ConvKB BIBREF19 and KBGAN BIBREF20 . There has been some work on incorporating additional information like entity types, relation paths, etc. to learn knowledge graph representations. Palumbo et al. ( BIBREF21 ) use node2vec to learn embeddings of entities and items in a knowledge graph. A notable class of methods called “path-ranking” based models directly model paths between entities as features. Examples include the Path Ranking Algorithm (PRA) BIBREF22 , PTransE BIBREF10 and models based on recurrent neural networks BIBREF23 , BIBREF24 . In addition, Das et al. ( BIBREF25 ) propose a reinforcement learning method that addresses the practical task of answering questions where the relation and only one of the entities are known. Hartford et al. ( BIBREF26 ) model interactions across two or more sets of objects using a parameter-sharing scheme. While most of the above models, except for the recurrent neural network based ones, are shallow, our model Dolores differs from all of these works, and especially from that of Palumbo et al. ( BIBREF21 ), in that we learn deep contextualized knowledge graph representations of entities and relations using a deep sequential neural model. The work closest to ours is that of Das et al. ( BIBREF24 ), who directly use an RNN-based architecture to model paths to predict missing links. We distinguish our work from this in the following key ways: (a) First, unlike Das et al. ( BIBREF24 ), our focus is not on path reasoning but on learning rich knowledge graph embeddings useful for a variety of predictive tasks. Moreover, while Das et al. ( BIBREF24 ) need to use paths generated from PRA that typically correlate with relations, our method has no such restriction and only uses paths generated by generic random walks, greatly enhancing the scalability of our method. In fact, we incorporate Dolores embeddings to improve the performance of the model proposed by Das et al. ( BIBREF24 ). (b) Second, and most importantly, we learn knowledge graph embeddings at multiple layers, each potentially capturing a different level of abstraction. (c) Finally, while we are inspired by the work of Peters et al. ( BIBREF8 ) in learning deep word representations, we build on their ideas by drawing connections between knowledge graphs and language modeling BIBREF8 .
In particular, we propose methods to use random walks over knowledge graphs in conjunction with the machinery of deep neural language modeling to learn powerful deep contextualized knowledge graph embeddings that improve the state of the art on various knowledge graph tasks.
We distinguish our work from this in the following key ways: (a) First, unlike Das et al., our focus is not on path reasoning but on learning rich knowledge graph embeddings useful for a variety of predictive tasks. Moreover while Das et al. need to use paths generated from PRA that typically correlate with relations, our method has no such restriction and only uses paths generated by generic random walks greatly enhancing the scalability of our method. In fact, we incorporate DOLORES embeddings to improve the performance of the model proposed by Das et al. (b) Second, and most importantly we learn knowledge graph embeddings at multiple layers each potentially capturing different levels of abstraction. (c) Finally, while we are inspired by the work of Peters et al. in learning deep word representations, we build on their ideas by drawing connections between knowledge graphs and language modeling.
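As a rough illustration of the random-walk side of this idea, the sketch below generates entity-relation paths that a sequence model could then consume like sentences; the graph format and parameters are assumptions, not the paper's setup.

```python
import random

def random_walk_paths(graph, num_walks=10, walk_length=20, seed=0):
    """Uniform random walks over a KG given as {entity: [(relation,
    neighbor), ...]}; each path alternates entities and relations."""
    rng = random.Random(seed)
    paths = []
    for start in graph:
        for _ in range(num_walks):
            path, node = [start], start
            for _ in range(walk_length):
                neighbors = graph.get(node)
                if not neighbors:
                    break
                rel, node = rng.choice(neighbors)
                path.extend([rel, node])
            paths.append(path)
    return paths
```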
|
How is the author's work different from that of Das et al.?
|
(a) The authors' focus is not on path reasoning, but on learning rich knowledge graph embeddings useful for various predictive tasks. In addition, while Das et al.'s approach needs to use paths generated by PRA that typically correlate with relations, the authors' method has no such restriction and only uses generic random walks, which greatly enhances its scalability. (b) The authors learn knowledge graph embeddings at multiple layers, each of which may capture a different level of abstraction. (c) Although the authors are inspired by Peters et al. in learning deep word representations, they build on those ideas by drawing a connection between knowledge graphs and language modeling.
|
null | false
| null |
Classify each of the following as either a Girl Scout cookie, an ice cream flavor from Ben & Jerry's, or a title of an episode from the TV show "The X-Files": The Unnatural, Phish Food, Samoas, Do-Si-Dos, Tagalongs, Post Modern Prometheus, Americone Dream, The Field Where I Died, Caramel deLites, Half Baked, Cherry Garcia, Patient X, Thin Mints, Chunky Monkey, Bad Blood
|
Girl Scout cookies: Samoas, Do-Si-Dos, Tagalongs, Caramel deLites, Thin Mints
Ben & Jerry's ice cream flavors: Phish Food, Americone Dream, Half Baked, Cherry Garcia, Chunky Monkey
Episodes from The X-Files: The Unnatural, Post Modern Prometheus, The Field Where I Died, Patient X, Bad Blood
|
|
1807.07961
| false
| null |
The extensive use of emojis has drawn growing attention from researchers BIBREF4 , BIBREF5 , because emojis convey fruitful semantic and sentimental information that visually complements the textual information, which is significantly useful in understanding the emotional signals embedded in texts BIBREF6 . For example, emoji embeddings have been proposed to understand the semantics behind emojis BIBREF7 , BIBREF8 , and the embedding vectors can be used to visualize and predict emoji usage given the corresponding contexts. Previous work also shows that it is useful to pre-train a deep neural network on an emoji prediction task with pre-trained emoji embeddings to learn the emotional signals of emojis for other tasks, including sentiment, emotion and sarcasm prediction BIBREF9 . However, the previous literature lacks consideration of the linguistic complexity and diversity of emojis. Therefore, previous emoji embedding methods fail to handle the situation when the semantics or sentiments of the learned emoji embeddings contradict the information from the corresponding contexts BIBREF5 , or when the emojis convey multiple senses of semantics and sentiments. In practice, emojis can either summarize and emphasize the original tone of their contexts, or express more complex semantics such as irony and sarcasm when combined with contexts of contradictory semantics or sentiments. For the examples shown in Table TABREF3 , the emoji is of consistent sentiment with the text, emphasizing the sentiment, but is of the opposite sentiment (positive) to the text sentiment (negative) in examples 3 and 4, delivering a sense of sarcasm. Conventional emoji analysis can only extract a single embedding for each emoji, and such embeddings will confuse the downstream sentiment analysis model with inconsistent sentiment signals from the input texts and emojis. Moreover, we consider modeling the effect of emojis to be different from conventional multimodal sentiment analysis, which usually includes images and texts, in that image sentiment and text sentiment are usually assumed to be consistent BIBREF10 , while no such assumption holds for texts and emojis.
The extensive use of emojis has drawn growing attention from researchers BIBREF4 , BIBREF5 , because emojis convey fruitful semantic and sentimental information that visually complements the textual information, which is significantly useful in understanding the emotional signals embedded in texts BIBREF6
However, the previous literature lacks consideration of the linguistic complexity and diversity of emojis. Therefore, previous emoji embedding methods fail to handle the situation when the semantics or sentiments of the learned emoji embeddings contradict the information from the corresponding contexts BIBREF5 , or when the emojis convey multiple senses of semantics and sentiments. In practice, emojis can either summarize and emphasize the original tone of their contexts, or express more complex semantics such as irony and sarcasm when combined with contexts of contradictory semantics or sentiments.
Conventional emoji analysis can only extract a single embedding for each emoji, and such embeddings will confuse the downstream sentiment analysis model with inconsistent sentiment signals from the input texts and emojis
|
What is the motivation for training bi-sense embeddings?
|
The answers are shown as follows:
* previous emoji embedding methods fail to handle the situation when the semantics or sentiments of the learned emoji embeddings contradict the information from the corresponding contexts BIBREF5 , or when the emojis convey multiple senses of semantics and sentiments
|
null | false
| null |
Does Almond Joy or Mounds have nuts?
|
Almond Joy has nuts enrobed in milk chocolate.
|
|
null | false
| null |
Tell me whether these golfers are American, Canadian, or neither: Mike Weir, Corey Conners, Max Homa, Tiger Woods, Justin Thomas, Viktor Hovland
|
Mike Weir: Canadian
Corey Conners: Canadian
Max Homa: American
Tiger Woods: American
Justin Thomas: American
Viktor Hovland: Neither
|
|
null | false
| 146
|
Our first aim is to answer the following questions: 1) How much answer diversity arises for visual questions? and 2) Why do people disagree?
VQA Datasets: We conduct our analysis on a total of 459,861 visual questions and 4,598,610 answers coming from today’s largest freely-available VQA benchmark [3].
|
How many visual questions do they analyze in their VQA Datasets?
|
459,861 visual questions.
|
null | false
| null |
Who is the greatest cricketer of all time?
|
There are many cricketers who could lay claim to the mantle of greatest cricketer of all time. Cricket is a bat-and-ball game that has been played internationally for nearly 150 years. Two names are often suggested as the greatest: Sir Donald Bradman and Sir Gary Sobers. Bradman for having a batting average that will never be beaten, and Sobers for being an all-rounder who was outstanding in bowling, batting and fielding.
Both of these players would be picked in any all-time best Test team. The only caveat is that neither player excelled in, or even played, the shorter forms of the game.
|
|
null | false
| null |
Philip, Duke of Parma
Philip (Spanish: Felipe, Italian: Filippo; 15 March 1720 – 18 July 1765) was Duke of Parma from 18 October 1748 until his death in 1765. He was born a Spanish infante, being the second son of King Philip V and Elisabeth Farnese. The Duchy of Parma had been ruled by the House of Farnese, Queen Elisabeth's family. Philip founded the House of Bourbon-Parma, a cadet line of the House of Bourbon. He was a first cousin and son-in-law of the French king Louis XV.
|
Given a reference text about Philip, Duke of Parma, who were his parents?
|
Philip, Duke of Parma's parents were King Philip V and Elisabeth Farnese.
|
null | false
| null |
List five titles by Mary Shelley.
|
Frankenstein, The Last Man, Lodore, The Mortal Immortal, Maurice
|
|
null | false
| 169
|
Current societies are exposed to a continuous flow of information that results in a large production of data (e.g., news articles, micro-blogs, social media posts, among others) at different moments in time. In addition, the consumption of information has dramatically changed: more and more people access information directly through social media platforms (e.g., Facebook and Twitter), and are less and less exposed to a diversity of perspectives and opinions. The combination of these factors may easily result in information overload and impenetrable “filter bubbles”. Events, i.e., things that happen or hold true in the world, are the basic components of such a data stream. Being able to correctly identify and classify them plays a major role in developing robust solutions to deal with the current stream of data (e.g., the storyline framework BIBREF0 ), as well as in improving the performance of many Natural Language Processing (NLP) applications such as automatic summarization and question answering (Q.A.).
Event detection and classification has seen a growing interest in the NLP community thanks to the availability of annotated corpora BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 and evaluation campaigns BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 . In the context of the 2014 EVALITA Workshop, the EVENTI evaluation exercise BIBREF11 was organized to promote research in Italian Temporal Processing, of which event detection and classification is a core subtask.
Since the EVENTI campaign, there has been a lack of further research, especially in the application of deep learning models to this task in Italian. The contributions of this paper are the following: i.) the adaptation of a state-of-the-art sequence-to-sequence (seq2seq) neural system to event detection and classification for Italian in a single-step approach; ii.) an investigation of the quality of existing Italian word embeddings for this task; iii.) a comparison against a state-of-the-art discrete classifier. The pre-trained models and the scripts to run (or re-train) the system are publicly available.
The contributions of this paper are the following: i.) the adaptation of a state-of-the-art sequence-to-sequence (seq2seq) neural system to event detection and classification for Italian in a single-step approach; ii.) an investigation of the quality of existing Italian word embeddings for this task; iii.) a comparison against a state-of-the-art discrete classifier.
|
What quality does this task investigate?
|
It investigates the quality of existing Italian word embeddings.
|
null | false
| 76
|
The datasets we use for our experimental evaluation are directly extracted from the Wikipedia entity pages and their revision history. The generated data represents one of the contributions of our paper. The datasets are the following:
Entity Classes. We focus on a manually predetermined set of entity classes for which we expect to have news coverage. The number of analyzed entity classes is 27, including INLINEFORM0 entities with at least one news reference. The entity classes were selected from the DBpedia class ontology. Figure FIGREF42 shows the number of entities per class for the years (2009-2014).
News Articles. We extract all news references from the collected Wikipedia entity pages. The extracted news references are associated with the sections in which they appear. In total there were INLINEFORM0 news references, and after crawling we end up with INLINEFORM1 successfully crawled news articles. The details of the news article distribution, and the number of entities and sections from which they are referred are shown in Table TABREF44 .
Article-Entity Ground-truth. The dataset comprises of the news and entity pairs INLINEFORM0 . News-entity pairs are relevant if the news article is referenced in the entity page. Non-relevant pairs (i.e. negative training examples) consist of news articles that contain an entity but are not referenced in that entity's page. If a news article INLINEFORM1 is referred from INLINEFORM2 at year INLINEFORM3 , the features are computed taking into account the entity profiles at year INLINEFORM4 .
Article-Section Ground-truth. The dataset consists of triples INLINEFORM0 , where INLINEFORM1 , and we assume that INLINEFORM2 has already been determined to be relevant. We therefore have a multi-class classification problem in which we need to determine the section of INLINEFORM3 in which INLINEFORM4 is cited. Similar to the article-entity ground truth, here too the features compute the similarity between INLINEFORM5 , INLINEFORM6 and INLINEFORM7 .
The datasets we use for our experimental evaluation are directly extracted from the Wikipedia entity pages and their revision history.
|
Where did the datasets the authors use for their experimental evaluation come from?
|
The datasets they use for their experimental evaluation are directly extracted from the Wikipedia entity pages and their revision history.
|
null | false
| null |
In Saint Johnstown, the road becomes a divided highway and DE 36 splits from DE 16 by heading to the northeast on Shawnee Road. DE 16 soon becomes undivided again and continues east through agricultural areas and woods with some residences, crossing Toms Dam Branch and running through the community of Owens. Farther east, the road passes through the community of Oakley and crosses West Branch Gum Branch and then Gum Branch. The route comes to an intersection with US 113 near a few businesses, where it is briefly a divided highway. Past this intersection, the name changes to Milton Ellendale Highway and the two-lane undivided road enters the town of Ellendale, where it becomes Main Street. In Ellendale, DE 16 runs past several homes, crossing the Delmarva Central Railroad's Indian River Subdivision line at-grade. The road leaves the town and becomes Milton Ellendale Highway again, turning southeast. The route heads through woodland and farms, running immediately to the north of the Delmarva Central Railroad's Milton Industrial Track line. DE 16 turns east away from the railroad tracks and comes to an intersection with DE 30. At this point, DE 5 Alternate (DE 5 Alt.), which runs south along DE 30, heads east along with DE 16. The road reaches the town of Milton, where it passes homes and businesses. DE 16 comes to an intersection with DE 5, where DE 5 Alt. ends and the road name changes to Broadkill Road. The route leaves Milton and continues into farmland with a few residences, turning to the northeast. The road intersects DE 1 and passes north of a golf course before it runs through a mix of farms and woods with some homes. DE 16 turns east and northeast through marshland within the Prime Hook National Wildlife Refuge before crossing Broadkill Sound and reaching the community of Broadkill Beach. Here, the route intersects Bay Shore Drive before ending at a dead end on the Delaware Bay.
|
per this reference text about Delaware Route 16, how many times does the road change between being a divided highway and undivided?
|
DE Route 16 changes 3 times between divided and undivided highways per the information provided.
|
null | false
| 328
|
Neural network based models have been widely exploited with the prosperities of Deep Learning BIBREF0 and achieved inspiring performances on many NLP tasks, such as text classification BIBREF1 , BIBREF2 , semantic matching BIBREF3 , BIBREF4 and machine translation BIBREF5 . These models are robust at feature engineering and can represent words, sentences and documents as fix-length vectors, which contain rich semantic information and are ideal for subsequent NLP tasks.
One formidable constraint of deep neural networks (DNNs) is their strong reliance on large amounts of annotated data, due to the substantial number of parameters to train. A DNN trained on limited data is prone to overfitting and incapable of generalizing well. However, the construction of large-scale, high-quality labeled datasets is extremely labor-intensive. To alleviate the problem, these models usually employ a pre-trained lookup table, also known as Word Embedding BIBREF6 , to map words into vectors with semantic implications. However, this method just introduces extra knowledge and does not directly optimize the targeted task; the problem of insufficient annotated resources remains unsolved either way.
Multi-task learning leverages potential correlations among related tasks to extract common features, implicitly increase corpus size and yield classification improvements. Inspired by BIBREF7 , there is a large literature dedicated to multi-task learning with neural network based models BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 . These models basically share some lower layers to capture common features and further feed them to subsequent task-specific layers, and can be classified into three types:
In this paper, we propose a generalized multi-task learning architecture with four types of recurrent neural layers for text classification. The architecture focuses on Type-III, which involves more complicated interactions but has not been researched yet. All the related tasks are jointly integrated into a single system and samples from different tasks are trained in parallel. In our model, every two tasks can directly interact with each other and selectively absorb useful information, or communicate indirectly via a shared intermediate layer. We also design a global memory storage to share common features and collect interactions among all tasks.
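For reference, here is a minimal Keras sketch of the basic shared-lower-layers pattern described above (not the paper's Type-III architecture with pairwise interactions and global memory); all sizes and the two-task setup are illustrative.

```python
from tensorflow import keras
from tensorflow.keras import layers

vocab_size, embed_dim = 20000, 128  # illustrative sizes

tokens = keras.Input(shape=(None,), dtype="int32")
shared = layers.Embedding(vocab_size, embed_dim)(tokens)
shared = layers.LSTM(128)(shared)  # lower layers shared by all tasks

# Task-specific output layers on top of the shared representation.
out_a = layers.Dense(2, activation="softmax", name="task_a")(shared)
out_b = layers.Dense(5, activation="softmax", name="task_b")(shared)

model = keras.Model(tokens, [out_a, out_b])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```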
We conduct extensive experiments on five benchmark datasets for text classification. Compared to learning separately, jointly learning multiple relative tasks in our model demonstrate significant performance gains for each task.
Our contributions are three-fold:
We conduct extensive experiments on five benchmark datasets for text classification.
|
How many datasets do the authors conduct extensive experiments on?
|
Five.
|
null | false
| null |
What's the capital of Brazil?
|
The capital of Brazil is Brasilia, where the federal government is also located. The city became the capital in 1960 as a fully planned city and features modern architecture designed by Oscar Niemeyer.
|
|
null | false
| null |
Which of these were summer olympic sports in 2020: tennis, cheerleading, chess, boxing, basketball, ski jumping
|
Tennis, boxing, basketball
|
|
null | false
| null |
At the 1932 Plenipotentiary Telegraph Conference and the International Radiotelegraph Conference in Madrid, the two organizations decided to merge to form the International Telecommunication Union (ITU). They defined telecommunication as "any telegraphic or telephonic communication of signs, signals, writing, facsimiles and sounds of any kind, by wire, wireless or other systems or processes of electric signaling or visual signaling (semaphores)."
The definition was later reconfirmed, according to Article 1.3 of the Radio Regulations (RR), telecommunication, which defined it as "Any transmission, emission or reception of signs, signals, writings, images and sounds or intelligence of any nature by wire, radio, optical, or other electromagnetic systems".
Beacons and pigeons
Homing pigeons have been used throughout history by different cultures. Pigeon post had Persian roots and was later used by the Romans to aid their military. Frontinus claimed Julius Caesar used pigeons as messengers in his conquest of Gaul. The Greeks also conveyed the names of the victors at the Olympic Games to various cities using homing pigeons. In the early 19th century, the Dutch government used the system in Java and Sumatra. And in 1849, Paul Julius Reuter started a pigeon service to fly stock prices between Aachen and Brussels, a service that operated for a year until the gap in the telegraph link was closed.
In the Middle Ages, chains of beacons were commonly used on hilltops as a means of relaying a signal. Beacon chains suffered the drawback that they could only pass a single bit of information, so the meaning of the message such as "the enemy has been sighted" had to be agreed upon in advance. One notable instance of their use was during the Spanish Armada, when a beacon chain relayed a signal from Plymouth to London.
In 1792, Claude Chappe, a French engineer, built the first fixed visual telegraphy system (or semaphore line) between Lille and Paris. However, semaphore suffered from the need for skilled operators and expensive towers at intervals of ten to thirty kilometres (six to nineteen miles). As a result of competition from the electrical telegraph, the last commercial line was abandoned in 1880.
Telegraph and telephone
On July 25, 1837, the first commercial electrical telegraph was demonstrated by English inventor Sir William Fothergill Cooke and English scientist Sir Charles Wheatstone. Both inventors viewed their device as "an improvement to the electromagnetic telegraph" and not as a new device.
Samuel Morse independently developed a version of the electrical telegraph that he unsuccessfully demonstrated on September 2, 1837. His code was an important advance over Wheatstone's signaling method. The first transatlantic telegraph cable was successfully completed on July 27, 1866, allowing transatlantic telecommunication for the first time.
The conventional telephone was patented by Alexander Bell in 1876. Elisha Gray also filed a caveat for it in 1876. Gray abandoned his caveat and because he did not contest Bell's priority, the examiner approved Bell's patent on March 3, 1876. Gray had filed his caveat for the variable resistance telephone, but Bell was the first to document the idea and test it in a telephone. Antonio Meucci invented a device that allowed the electrical transmission of voice over a line nearly 30 years before in 1849, but his device was of little practical value because it relied on the electrophonic effect requiring users to place the receiver in their mouths to "hear." The first commercial telephone services were set up by the Bell Telephone Company in 1878 and 1879 on both sides of the Atlantic in the cities of New Haven and London.
Radio and television
In 1894, Italian inventor Guglielmo Marconi began developing a wireless communication using the then-newly discovered phenomenon of radio waves, showing by 1901 that they could be transmitted across the Atlantic Ocean. This was the start of wireless telegraphy by radio. On 17 December 1902, a transmission from the Marconi station in Glace Bay, Nova Scotia, Canada, became the world's first radio message to cross the Atlantic from North America. In 1904, a commercial service was established to transmit nightly news summaries to subscribing ships, which incorporated them into their onboard newspapers.
World War I accelerated the development of radio for military communications. After the war, commercial radio AM broadcasting began in the 1920s and became an important mass medium for entertainment and news. World War II again accelerated the development of radio for the wartime purposes of aircraft and land communication, radio navigation, and radar. Development of stereo FM broadcasting of radio began in the 1930s in the United States and the 1970s in the United Kingdom, displacing AM as the dominant commercial standard.
On March 25, 1925, John Logie Baird demonstrated the transmission of moving pictures at the London department store Selfridges. Baird's device relied upon the Nipkow disk and thus became known as the mechanical television. It formed the basis of experimental broadcasts done by the British Broadcasting Corporation beginning on 30 September 1929. However, for most of the 20th century, televisions depended on the cathode ray tube invented by Karl Braun. The first version of such a television to show promise was produced by Philo Farnsworth and demonstrated to his family on 7 September 1927. After World War II, interrupted experiments resumed and television became an important home entertainment broadcast medium.
Thermionic valves
The type of device known as a thermionic tube or thermionic valve uses thermionic emission of electrons from a heated cathode for a number of fundamental electronic functions such as signal amplification and current rectification.
The simplest vacuum tube, the diode invented in 1904 by John Ambrose Fleming, contains only a heated electron-emitting cathode and an anode. Electrons can only flow in one direction through the device—from the cathode to the anode. Adding one or more control grids within the tube enables the current between the cathode and anode to be controlled by the voltage on the grid or grids. These devices became a key component of electronic circuits for the first half of the 20th century and were crucial to the development of radio, television, radar, sound recording and reproduction, long-distance telephone networks, and analogue and early digital computers. While some applications had used earlier technologies such as the spark gap transmitter for radio or mechanical computers for computing, it was the invention of the thermionic vacuum tube that made these technologies widespread and practical, leading to the creation of electronics.
In the 1940s, the invention of semiconductor devices made it possible to produce solid-state devices, which are smaller, cheaper, and more efficient, reliable, and durable than thermionic tubes. Starting in the mid-1960s, thermionic tubes were replaced with the transistor. Thermionic tubes still have some applications for certain high-frequency amplifiers.
Computer networks and the Internet
On 11 September 1940, George Stibitz transmitted problems for his Complex Number Calculator in New York using a teletype and received the computed results back at Dartmouth College in New Hampshire. This configuration of a centralized computer (mainframe) with remote dumb terminals remained popular well into the 1970s. In the 1960s, researchers started to investigate packet switching, a technology that sends a message in portions to its destination asynchronously without passing it through a centralized mainframe. A four-node network emerged on 5 December 1969, constituting the beginnings of the ARPANET, which by 1981 had grown to 213 nodes. ARPANET eventually merged with other networks to form the Internet. While Internet development was a focus of the Internet Engineering Task Force (IETF) who published a series of Request for Comments documents, other networking advancements occurred in industrial laboratories, such as the local area network (LAN) developments of Ethernet (1983) and Token Ring (1984).
Growth of transmission capacity
The effective capacity to exchange information worldwide through two-way telecommunication networks grew from 281 petabytes (pB) of optimally compressed information in 1986 to 471 pB in 1993 to 2.2 exabytes (eB) in 2000 to 65 eB in 2007. This is the informational equivalent of two newspaper pages per person per day in 1986, and six entire newspapers per person per day by 2007. Given this growth, telecommunications play an increasingly important role in the world economy and the global telecommunications industry was about a $4.7 trillion sector in 2012. The service revenue of the global telecommunications industry was estimated to be $1.5 trillion in 2010, corresponding to 2.4% of the world's gross domestic product (GDP).
|
Give me a list of the key elements of telecommunications developed over time
|
Beacons, telegraph, telephone, radio, television, thermionic valves, computer networks and the Internet
|
null | false
| 109
|
Sarcasm is defined as “a sharp, bitter, or cutting expression or remark; a bitter gibe or taunt”. As the fields of affective computing and sentiment analysis have gained increasing popularity BIBREF0 , it is a major concern to detect sarcastic, ironic, and metaphoric expressions. Sarcasm, especially, is key for sentiment analysis as it can completely flip the polarity of opinions. Understanding the ground truth, or the facts about a given event, allows for the detection of contradiction between the objective polarity of the event (usually negative) and its sarcastic characteristic by the author (usually positive), as in “I love the pain of breakup”. Obtaining such knowledge is, however, very difficult.
In our experiments, we exposed the classifier to such knowledge extracted indirectly from Twitter. Namely, we used Twitter data crawled in a time period, which likely contain both the sarcastic and non-sarcastic accounts of an event or similar events. We believe that unambiguous non-sarcastic sentences provided the classifier with the ground-truth polarity of those events, which the classifier could then contrast with the opposite estimations in sarcastic sentences. Twitter is a more suitable resource for this purpose than blog posts, because the polarity of short tweets is easier to detect (as all the information necessary to detect polarity is likely to be contained in the same sentence) and because the Twitter API makes it easy to collect a large corpus of tweets containing both sarcastic and non-sarcastic examples of the same event.
Sometimes, however, just knowing the ground truth or simple facts on the topic is not enough, as the text may refer to other events in order to express sarcasm. For example, the sentence “If Hillary wins, she will surely be pleased to recall Monica each time she enters the Oval Office :P :D”, which refers to the 2016 US presidential election campaign and to the events of early 1990's related to the US president Clinton, is sarcastic because Hillary, a candidate and Clinton's wife, would in fact not be pleased to recall her husband's alleged past affair with Monica Lewinsky. The system, however, would need a considerable amount of facts, commonsense knowledge, anaphora resolution, and logical reasoning to draw such a conclusion. In this paper, we will not deal with such complex cases.
Existing works on sarcasm detection have mainly focused on unigrams and the use of emoticons BIBREF1 , BIBREF2 , BIBREF3 , unsupervised pattern mining approach BIBREF4 , semi-supervised approach BIBREF5 and n-grams based approach BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 with sentiment features. Instead, we propose a framework that learns sarcasm features automatically from a sarcasm corpus using a convolutional neural network (CNN). We also investigate whether features extracted using the pre-trained sentiment, emotion and personality models can improve sarcasm detection performance. Our approach uses relatively lower dimensional feature vectors and outperforms the state of the art on different datasets. In summary, the main contributions of this paper are the following:
The rest of the paper is organized as follows: Section SECREF2 proposes a brief literature review on sarcasm detection; Section SECREF4 presents the proposed approach; experimental results and thorough discussion on the experiments are given in Section SECREF5 ; finally, Section SECREF6 concludes the paper.
In summary, the main contributions of this paper are the following:
• To the best of our knowledge, this is the first work on using deep learning for sarcasm detection.
• Unlike other works, we exploit sentiment and emotion features for sarcasm detection. Moreover, as user profiling is also an important factor for detecting sarcastic content, we use personality-based features for the first time in the literature.
• Pre-trained models are commonly used in computer vision; in the context of natural language processing (NLP), however, they are rarely used. Hence, the use of pre-trained models for feature extraction is also a major contribution of this work.
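As a rough illustration of the kind of CNN-based text classifier described above, a minimal Keras sketch follows; the vocabulary size, sequence length, and layer sizes are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch of a CNN text classifier for sarcasm detection (Keras).
# All hyperparameters here (vocab size, sequence length, filter sizes) are
# illustrative assumptions, not the paper's actual configuration.
from tensorflow import keras
from tensorflow.keras import layers

vocab_size, max_len, embed_dim = 20000, 50, 100  # assumed values

inputs = layers.Input(shape=(max_len,))
x = layers.Embedding(vocab_size, embed_dim)(inputs)
x = layers.Conv1D(128, kernel_size=3, activation="relu")(x)  # learns n-gram-like sarcasm features
x = layers.GlobalMaxPooling1D()(x)                           # keeps the strongest feature per filter
x = layers.Dense(64, activation="relu")(x)
outputs = layers.Dense(1, activation="sigmoid")(x)           # sarcastic vs. non-sarcastic

model = keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```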
|
What are the main contributions of this paper?
|
Using deep learning for sarcasm detection for the first time, using sentiment and emotion features for sarcasm detection, using personality-based features for the first time in the literature, and using pre-trained models for feature extraction.
|
null | false
| 22
|
We now evaluate LiLi in terms of its predictive performance and strategy formulation abilities.
Data: We use two standard datasets (see Table 4): (1) Freebase FB15k, and (2) WordNet INLINEFORM0 . Using each dataset, we build a fairly large graph and use it as the original KB ( INLINEFORM1 ) for evaluation. We also augment INLINEFORM2 with inverse triples ( INLINEFORM3 , INLINEFORM4 , INLINEFORM5 ) for each ( INLINEFORM6 , INLINEFORM7 , INLINEFORM8 ) following existing KBC methods.
Parameter Settings. Unless specified, the empirically set parameters (see Table 1) of LiLi are: INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , INLINEFORM3 , INLINEFORM4 , INLINEFORM5 , INLINEFORM6 , INLINEFORM7 , INLINEFORM8 , INLINEFORM9 , INLINEFORM10 . For training RL-model with INLINEFORM11 -greedy strategy, we use INLINEFORM12 , INLINEFORM13 , pre-training steps=50000. We used Keras deep learning library to implement and train the prediction model. We set batch-size as 128, max. training epoch as 150, dropout as 0.2, hidden units and embedding size as 300 and learning rate as 5e-3 which is reduced gradually on plateau with factor 0.5 and patience 5. Adam optimizer and early stopping were used in training. We also shuffle INLINEFORM14 in each epoch and adjust class weights inversely proportional to class frequencies in INLINEFORM15 .
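A minimal sketch of this training setup in Keras is given below; the network body and data are placeholders, and only the stated settings (batch size 128, dropout 0.2, learning rate 5e-3 reduced on plateau with factor 0.5 and patience 5, Adam, early stopping, shuffling, inverse class weights) are taken from the text.

```python
# Sketch of the described training configuration; the model body is a
# placeholder, only the stated hyperparameters come from the text.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers
from sklearn.utils.class_weight import compute_class_weight

model = keras.Sequential([
    layers.Input(shape=(300,)),             # embedding size 300 (from the text)
    layers.Dense(300, activation="relu"),   # hidden units 300 (from the text)
    layers.Dropout(0.2),                    # dropout 0.2 (from the text)
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer=keras.optimizers.Adam(learning_rate=5e-3),
              loss="binary_crossentropy")

x_train = np.random.rand(1000, 300).astype("float32")  # placeholder data
y_train = np.random.randint(0, 2, size=1000)           # placeholder labels

# class weights inversely proportional to class frequencies
weights = compute_class_weight("balanced", classes=np.array([0, 1]), y=y_train)
class_weight = dict(enumerate(weights))

callbacks = [
    keras.callbacks.ReduceLROnPlateau(factor=0.5, patience=5),  # LR reduced on plateau
    keras.callbacks.EarlyStopping(patience=10,                  # patience is an assumption
                                  restore_best_weights=True),
]
model.fit(x_train, y_train, batch_size=128, epochs=150, shuffle=True,
          validation_split=0.1, class_weight=class_weight, callbacks=callbacks)
```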
Labeled Dataset Generation and Simulated User Creation. We create a simulated user for each KB to evaluate LiLi. We create the labeled datasets, the simulated user’s knowledge base ( INLINEFORM0 ), and the base KB ( INLINEFORM1 ) from INLINEFORM2 . INLINEFORM3 used as the initial KB graph ( INLINEFORM4 ) of LiLi.
We followed BIBREF16 for labeled dataset generation. For Freebase, we found 86 relations with INLINEFORM0 triples and randomly selected 50 from various domains. We randomly shuffle the list of 50 relations, select 25% of them as unknown relations and consider the rest (75%) as known relations. For each known relation INLINEFORM1, we randomly shuffle the list of distinct triples for INLINEFORM2, choose 1000 triples and split them into 60% training, 10% validation and 20% test. The remaining 10%, along with the leftover triples (not included in the list of 1000), are added to INLINEFORM3. For each unknown relation INLINEFORM4, we remove all triples of INLINEFORM5 from INLINEFORM6 and add them to INLINEFORM7. In this process, we also randomly choose 20% of triples as test instances for unknown INLINEFORM8, which are excluded from INLINEFORM9. Note that INLINEFORM10 now has at least 10% of the chosen triples for each INLINEFORM11 (known and unknown), so the user is always able to provide clues for both cases. For each labeled dataset, we randomly choose 10% of the entities present in dataset triples, remove triples involving those entities from INLINEFORM12 and add them to INLINEFORM13. At this point, INLINEFORM14 is reduced to INLINEFORM15 and is used as INLINEFORM16 for LiLi. The dataset stats in Table 4 show that the base KB (60% of the triples of INLINEFORM17) is highly sparse (compared to the original KB), which makes the inference task much harder. As the WordNet dataset is small, we select all 18 relations for evaluation and create the labeled dataset, INLINEFORM18 and INLINEFORM19, following Freebase. Although the user may provide clues 100% of the time, it often cannot respond to MLQs and CLQs (due to lack of required triples/facts). Thus, we further enrich INLINEFORM20 with external KB triples.
Given a relation INLINEFORM0 and an observed triple ( INLINEFORM1 , INLINEFORM2 , INLINEFORM3 ) in training or testing, the pair ( INLINEFORM4 , INLINEFORM5 ) is regarded as a +ve instance for INLINEFORM6 . Following BIBREF18 , for each +ve instance ( INLINEFORM7 , INLINEFORM8 ), we generate two negative ones, one by randomly corrupting the source INLINEFORM9 , and the other by corrupting the target INLINEFORM10 . Note that, the test triples are not in INLINEFORM11 or INLINEFORM12 and none of the -ve instances overlap with the +ve ones.
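The corruption-based negative sampling described here can be sketched as follows; the entity pool and triples below are illustrative.

```python
# Sketch of the corruption-based negative sampling described above: for each
# positive (source, target) pair of a relation, generate one negative by
# corrupting the source and one by corrupting the target, ensuring that no
# negative overlaps with a positive (as stated in the text).
import random

def generate_negatives(pos_pairs, entities, seed=0):
    rng = random.Random(seed)
    positives = set(pos_pairs)
    negatives = []
    for s, t in pos_pairs:
        # corrupt the source entity
        s_neg = rng.choice(entities)
        while (s_neg, t) in positives:  # simple rejection loop, fine for a sketch
            s_neg = rng.choice(entities)
        negatives.append((s_neg, t))
        # corrupt the target entity
        t_neg = rng.choice(entities)
        while (s, t_neg) in positives:
            t_neg = rng.choice(entities)
        negatives.append((s, t_neg))
    return negatives

pairs = [("paris", "france"), ("tokyo", "japan")]
ents = ["paris", "france", "tokyo", "japan", "berlin", "germany"]
print(generate_negatives(pairs, ents))
```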
Baselines. As none of the existing KBC methods can solve the OKBC problem, we choose various versions of LiLi as baselines.
Single: Version of LiLi where we train a single prediction model INLINEFORM0 for all test relations.
Sep: We do not transfer (past learned) weights for initializing INLINEFORM0 , i.e., we disable LL.
F-th: Here, we use a fixed prediction threshold of 0.5 instead of the relation-specific threshold INLINEFORM0.
BG: The missing or connecting links (when the user does not respond) are filled blindly with “@-RelatedTo-@", with no guessing mechanism.
w/o PTS: LiLi does not ask for additional clues via past task selection for skillset improvement.
Evaluation Metrics. To evaluate the strategy formulation ability, we introduce a measure called Coverage( INLINEFORM0 ), defined as the fraction of total query data instances, for which LiLi has successfully formulated strategies that lead to winning. If LiLi wins on all episodes for a given dataset, INLINEFORM1 is 1.0. To evaluate the predictive performance, we use Avg. MCC and avg. +ve F1 score.
To evaluate the strategy formulation ability, we introduce a measure called Coverage(C), defined as the fraction of total query data instances, for which LiLi has successfully formulated strategies that lead to winning. If LiLi wins on all episodes for a given dataset, C is 1.0. To evaluate the predictive performance, we use Avg. MCC and avg. +ve F1 score.
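A sketch of the Coverage measure, assuming each episode's outcome is recorded as a boolean win/loss:

```python
# Sketch of the Coverage (C) measure described above: the fraction of query
# instances for which the formulated strategy led to winning the episode.
def coverage(episode_outcomes):
    """episode_outcomes: iterable of booleans, True if LiLi won the episode."""
    outcomes = list(episode_outcomes)
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

print(coverage([True, True, False, True]))  # 0.75; C is 1.0 when every episode is won
```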
|
What metrics do they use in their evaluation?
|
Coverage(C), Avg. MCC and avg. +ve F1 score.
|
null | false
| null |
Who wrote The History of Mr Polly
|
H. G. Wells
|
|
null | false
| null |
How do clouds form?
|
Clouds form when warm air and water vapor rise. They rise to a point where the air is cool enough for the water vapor to condense into water droplets. When the air becomes completely saturated with water vapor and can no longer hold all of it as vapor, the excess begins to condense into liquid or solid forms.
|
|
null | false
| 71
|
Each model produces four scores: the probability that a tweet includes harassment language in general, and the probabilities of indirect, physical, and sexual harassment language respectively. For any tweet, we first check the harassment-language score: if it is less than a specified threshold, the harassment label is zero, so the other three labels are zero as well. If it is greater than or equal to that threshold, the harassment label is one and the type of harassment is the one among the three with the greatest score (highest probability). We set this threshold to 0.33.
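This decision rule amounts to a threshold test followed by an argmax; a minimal sketch:

```python
# Sketch of the described decision rule: a tweet is labeled as harassment only
# if the harassment score clears the threshold; the type is then the argmax
# of the three type scores.
THRESHOLD = 0.33
TYPES = ["indirect", "physical", "sexual"]

def label_tweet(harassment_score, type_scores):
    """type_scores: dict mapping each type in TYPES to a probability."""
    if harassment_score < THRESHOLD:
        return {"harassment": 0, "indirect": 0, "physical": 0, "sexual": 0}
    labels = {"harassment": 1, "indirect": 0, "physical": 0, "sexual": 0}
    best = max(TYPES, key=lambda t: type_scores[t])
    labels[best] = 1
    return labels

print(label_tweet(0.6, {"indirect": 0.2, "physical": 0.1, "sexual": 0.5}))
# {'harassment': 1, 'indirect': 0, 'physical': 0, 'sexual': 1}
```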
We compare eight different models in our experiments. Four of them have a Projected Layer (see Fig. FIGREF2), while the others do not, and this is the only difference between the two groups of models. So, we effectively include four model types in our experiments (each with and without a projected layer). Firstly, LastStateRNN is the classic RNN model, where the last state passes through an MLP and then the LR layer estimates the corresponding probability. In contrast, in the AvgRNN model we consider the average vector of all states that come out of the cells. The AttentionRNN model is the one presented in BIBREF9. Moreover, we introduce the MultiAttentionRNN model for harassment language detection, which includes four attention mechanisms instead of one, one for each category.
We have evaluated our models using the F1 score, which is the harmonic mean of precision and recall. We ran the experiment ten times for each model and considered the average F1 score. The results are reported in Table TABREF11. Considering macro F1, the models that include the multi-attention mechanism outperform the others, and in particular the one with the Projected Layer has the highest performance. In three out of four pairs of models, the ones with the Projected Layer achieved better performance, so in most cases the addition of the Projected Layer brought a significant enhancement.
For any tweet, we first check the harassment-language score: if it is less than a specified threshold, the harassment label is zero, so the other three labels are zero as well. If it is greater than or equal to that threshold, the harassment label is one and the type of harassment is the one among the three with the greatest score (highest probability). We set this threshold to 0.33.
|
What are the author's evaluation indicators for the model?
|
For any tweet, they first check the harassment-language score: if it is less than a specified threshold, the harassment label is zero, so the other three labels are zero as well. If it is greater than or equal to that threshold, the harassment label is one and the type of harassment is the one among the three with the greatest score.
|
null | false
| null |
Bitcoin (abbreviation: BTC or XBT; sign: ₿) is a protocol which implements a highly available, public, permanent, and decentralized ledger. In order to add to the ledger, a user must prove they control an entry in the ledger. The protocol specifies that the entry indicates an amount of a token, bitcoin, written with a minuscule b. The user can update the ledger, assigning some of their bitcoin to another entry in the ledger. Because the token has characteristics of money, it can be thought of as a digital currency.
Bitcoin transactions are verified by network nodes through cryptography and recorded in a public distributed ledger called a blockchain. The cryptocurrency was invented in 2008 by an unknown person or group of people using the name Satoshi Nakamoto. The currency began use in 2009, when its implementation was released as open-source software. The word "bitcoin" was defined in a white paper published on October 31, 2008. It is a compound of the words bit and coin.
The Library of Congress reports that, as of November 2021, nine countries have fully banned bitcoin use, while a further forty-two have implicitly banned it. A few governments have used bitcoin in some capacity. El Salvador has adopted Bitcoin as legal tender, although use by merchants remains low. Ukraine has accepted cryptocurrency donations to fund the resistance to the 2022 Russian invasion. Iran has used bitcoin to bypass sanctions.
|
Based on the reference text about Bitcoin, how many countries have banned Bitcoin?
|
As of November 2021, 42 countries have implicitly banned and 9 countries have fully banned the use of Bitcoin.
|
null | false
| 109
|
Sarcasm is defined as “a sharp, bitter, or cutting expression or remark; a bitter gibe or taunt”. As the fields of affective computing and sentiment analysis have gained increasing popularity BIBREF0 , it is a major concern to detect sarcastic, ironic, and metaphoric expressions. Sarcasm, especially, is key for sentiment analysis as it can completely flip the polarity of opinions. Understanding the ground truth, or the facts about a given event, allows for the detection of contradiction between the objective polarity of the event (usually negative) and its sarcastic characteristic by the author (usually positive), as in “I love the pain of breakup”. Obtaining such knowledge is, however, very difficult.
In our experiments, we exposed the classifier to such knowledge extracted indirectly from Twitter. Namely, we used Twitter data crawled in a time period, which likely contain both the sarcastic and non-sarcastic accounts of an event or similar events. We believe that unambiguous non-sarcastic sentences provided the classifier with the ground-truth polarity of those events, which the classifier could then contrast with the opposite estimations in sarcastic sentences. Twitter is a more suitable resource for this purpose than blog posts, because the polarity of short tweets is easier to detect (as all the information necessary to detect polarity is likely to be contained in the same sentence) and because the Twitter API makes it easy to collect a large corpus of tweets containing both sarcastic and non-sarcastic examples of the same event.
Sometimes, however, just knowing the ground truth or simple facts on the topic is not enough, as the text may refer to other events in order to express sarcasm. For example, the sentence “If Hillary wins, she will surely be pleased to recall Monica each time she enters the Oval Office :P :D”, which refers to the 2016 US presidential election campaign and to the events of early 1990's related to the US president Clinton, is sarcastic because Hillary, a candidate and Clinton's wife, would in fact not be pleased to recall her husband's alleged past affair with Monica Lewinsky. The system, however, would need a considerable amount of facts, commonsense knowledge, anaphora resolution, and logical reasoning to draw such a conclusion. In this paper, we will not deal with such complex cases.
Existing works on sarcasm detection have mainly focused on unigrams and the use of emoticons BIBREF1 , BIBREF2 , BIBREF3 , unsupervised pattern mining approach BIBREF4 , semi-supervised approach BIBREF5 and n-grams based approach BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 with sentiment features. Instead, we propose a framework that learns sarcasm features automatically from a sarcasm corpus using a convolutional neural network (CNN). We also investigate whether features extracted using the pre-trained sentiment, emotion and personality models can improve sarcasm detection performance. Our approach uses relatively lower dimensional feature vectors and outperforms the state of the art on different datasets. In summary, the main contributions of this paper are the following:
The rest of the paper is organized as follows: Section SECREF2 proposes a brief literature review on sarcasm detection; Section SECREF4 presents the proposed approach; experimental results and thorough discussion on the experiments are given in Section SECREF5 ; finally, Section SECREF6 concludes the paper.
we propose a framework that learns sarcasm features automatically from a sarcasm corpus using a convolutional neural network (CNN).
|
Which kind of neural network does the framework use to learn automatically in the paper?
|
Convolutional neural network.
|
2002.01861
| true
| null |
We extend BERT Base-Chinese (12-layer, 768-hidden, 12-heads, 110M parameters) for sequence labeling. All documents are segmented into paragraphs and processed at the paragraph level (both training and inference); this is acceptable because we observe that most paragraphs are less than 200 characters. The input sequences are segmented by the BERT tokenizer, with the special [CLS] token inserted at the beginning and the special [SEP] token added at the end. All inputs are then padded to a length of 256 tokens. After feeding through BERT, we obtain the hidden state of the final layer, denoted as ($h_{1}$, $h_{2}$, ... $h_{N}$) where $N$ is the max length setting. We add a fully-connected layer and softmax on top, and the final prediction is formulated as $y_{i} = \mathrm{softmax}(W h_{i} + b)$.
All documents are segmented into paragraphs and processed at the paragraph level (both training and inference); this is acceptable because we observe that most paragraphs are less than 200 characters. The input sequences are segmented by the BERT tokenizer, with the special [CLS] token inserted at the beginning and the special [SEP] token added at the end.
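A minimal sketch of this head with the Hugging Face transformers library is shown below; the number of labels is an illustrative assumption.

```python
# Sketch of the described head: BERT final-layer hidden states -> FC -> softmax.
# num_labels is an illustrative assumption, not taken from the text.
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
bert = BertModel.from_pretrained("bert-base-chinese")
num_labels = 5
classifier = torch.nn.Linear(bert.config.hidden_size, num_labels)  # 768 -> num_labels

inputs = tokenizer("示例段落文本", return_tensors="pt",
                   padding="max_length", truncation=True, max_length=256)
with torch.no_grad():
    hidden = bert(**inputs).last_hidden_state  # (1, 256, 768): h_1 ... h_N
    logits = classifier(hidden)                # fully-connected layer on top
    probs = torch.softmax(logits, dim=-1)      # per-token label distribution
print(probs.shape)  # torch.Size([1, 256, 5])
```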
|
Was the structure of regulatory filings exploited when training the model?
|
No.
|
null | false
| 14
|
Suppose a user wants to write a sentence “I will be 10 minutes late.” Ideally, she would type just a few keywords such as “10 minutes late” and an autocomplete system would be able to infer the intended sentence (Figure FIGREF1). Existing left-to-right autocomplete systems BIBREF0, BIBREF1 can often be inefficient, as the prefix of a sentence (e.g. “I will be”) fails to capture the core meaning of the sentence. Besides the practical goal of building a better autocomplete system, we are interested in exploring the tradeoffs inherent to such communication schemes between the efficiency of typing keywords, accuracy of reconstruction, and interpretability of keywords.
One approach to learn such schemes is to collect a supervised dataset of keywords-sentence pairs as a training set, but (i) it would be expensive to collect such data from users, and (ii) a static dataset would not capture a real user's natural predilection to adapt to the system BIBREF2. Another approach is to avoid supervision and jointly learn a user-system communication scheme to directly optimize the combination of efficiency and accuracy. However, learning in this way can lead to communication schemes that are uninterpretable to humans BIBREF3, BIBREF4 (see Appendix for additional related work).
In this work, we propose a simple, unsupervised approach to an autocomplete system that is efficient, accurate, and interpretable. For interpretability, we restrict keywords to be subsequences of their source sentences based on the intuition that humans can infer most of the original meaning from a few keywords. We then apply multi-objective optimization approaches to directly control and achieve desirable tradeoffs between efficiency and accuracy.
We observe that naively optimizing a linear combination of efficiency and accuracy terms is unstable and leads to suboptimal schemes. Thus, we propose a new objective which optimizes for communication efficiency under an accuracy constraint. We show this new objective is more stable and efficient than the linear objective at all accuracy levels.
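The contrast between the two objectives can be sketched schematically; the cost and accuracy terms below are placeholders rather than the paper's actual losses, and the hinge penalty is just one simple way to enforce the constraint.

```python
# Schematic contrast between the two objectives discussed above, with
# placeholder efficiency/accuracy terms (not the paper's actual losses).
def linear_objective(cost, accuracy, alpha=0.5):
    # naive weighted combination: minimize cost while maximizing accuracy
    return alpha * cost - (1 - alpha) * accuracy

def constrained_objective(cost, accuracy, min_accuracy=0.8, penalty=100.0):
    # minimize cost subject to accuracy >= min_accuracy,
    # enforced here with a simple hinge penalty
    violation = max(0.0, min_accuracy - accuracy)
    return cost + penalty * violation

print(linear_objective(cost=0.3, accuracy=0.75))
print(constrained_objective(cost=0.3, accuracy=0.75))  # penalized: below the accuracy floor
```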
As a proof-of-concept, we build an autocomplete system within this framework which allows a user to write sentences by specifying keywords. We empirically show that our framework produces communication schemes that are 52.16% more accurate than rule-based baselines when specifying 77.37% of sentences, and 11.73% more accurate than a naive, weighted optimization approach when specifying 53.38% of sentences. Finally, we demonstrate that humans can easily adapt to the keyword-based autocomplete system and save nearly 50% of time compared to typing a full sentence in our user study.
We then apply multi-objective optimization approaches to directly control and achieve desirable tradeoffs between efficiency and accuracy.
|
How do you balance efficiency and accuracy?
|
They apply multi-objective optimization approaches to directly control and achieve desirable tradeoffs between efficiency and accuracy.
|
null | false
| 75
|
Conversational information seeking (CIS) has been recognized as a major emerging research area in information retrieval. Such research will require data and tools, to allow the implementation and study of conversational systems. This paper introduces Macaw, an open-source framework with a modular architecture for CIS research. Macaw supports multi-turn, multi-modal, and mixed-initiative interactions, and enables research for tasks such as document retrieval, question answering, recommendation, and structured data exploration. It has a modular design to encourage the study of new CIS algorithms, which can be evaluated in batch mode. It can also integrate with a user interface, which allows user studies and data collection in an interactive mode, where the back end can be fully algorithmic or a wizard of oz setup. Macaw is distributed under the MIT License.
Macaw is distributed under the MIT License.
|
Under what agreement is Macaw distributed?
|
Macaw is distributed under the MIT License.
|
null | false
| 159
|
The generator performance was assessed on two evaluation metrics, the BLEU score and the slot error rate (ERR), by adopting code from an open-source benchmark toolkit for Natural Language Generation (https://github.com/shawnwun/RNNLG). We compared the proposed models against three strong baselines which have been recently published as state-of-the-art NLG benchmarks.
HLSTM proposed by BIBREF3 thwsjy15 which used a heuristic gate to ensure that all of the slot-value information was accurately captured when generating.
SCLSTM proposed by BIBREF4 wensclstm15 which can jointly learn the gating signal and language model.
Enc-Dec proposed by BIBREF8 wentoward which applied the attention-based encoder-decoder architecture.
We compared the proposed models against three strong baselines which have been recently published as state-of-the-art NLG benchmarks.
• HLSTM, proposed by Wen et al. (2015a), which used a heuristic gate to ensure that all of the slot-value information was accurately captured when generating.
• SCLSTM, proposed by Wen et al. (2015b), which can jointly learn the gating signal and language model.
• Enc-Dec, proposed by Wen et al. (2016b), which applied the attention-based encoder-decoder architecture.
|
What baselines are used for evaluation?
|
HLSTM proposed by Wen et al. (2015a), SCLSTM proposed by Wen et al. (2015b), and Enc-Dec proposed by Wen et al. (2016b).
|
null | false
| null |
Andhra Pradesh (English: /ˌɑːndrə prəˈdɛʃ/, Telugu: [ãːndʱrɐ prɐdeːʃ], abbr. AP) is a state in the south-eastern coastal region of India. It is the seventh-largest state by area covering an area of 162,975 km2 (62,925 sq mi) and tenth-most-populous state, with 49,386,799 inhabitants. It is bordered by Telangana to the north-west, Chhattisgarh to the north, Odisha to the north-east, Tamil Nadu to the south, Karnataka to the west and the Bay of Bengal to the east. It has the second longest coastline in India after Gujarat, of about 974 km (605 mi). Andhra State was the first state to be formed on a linguistic basis in India on 1 October 1953. On 1 November 1956, Andhra State was merged with the Telugu-speaking areas (ten districts) of the Hyderabad State to form United Andhra Pradesh. In 2014, these merged areas of Hyderabad State were bifurcated from United Andhra Pradesh to form the new state Telangana. The present form of Andhra is similar to Andhra state, but some mandalas like Bhadrachalam are still included in Telangana. Amaravati serves as the capital of the state with the largest city being Visakhapatnam.
Andhra Pradesh was once a major Buddhist pilgrimage site in the country and a Buddhist learning center which can be seen in many sites in the state in the form of monastery ruins, chaityas, and stupas. It is also known for being the land of Koh-i-Noor and other globally known diamonds from Kollur Mine. It is also a major producer of rice known as the "Rice bowl of India". Its official language is Telugu; one of the classical languages of India, the fourth most spoken language in India, and the 13th-most spoken language in the world. Andhra Pradesh's second official language is Urdu.
|
From the passage provided, extract the official language of Andhra Pradesh
|
Telugu is the official language of Andhra Pradesh
|
null | false
| null |
Who won the World Chess Championship in 2021?
|
Magnus Carlsen defeated Ian Nepomniachtchi 7.5 - 3.5 to become the 2021 World Chess Champion.
|
|
null | false
| 25
|
We empirically validate EGL on speech recognition tasks. In our experiments, the RNN takes in spectrograms of utterances, passing them through two 2D-convolutional layers, followed by seven bi-directional recurrent layers and a fully-connected layer with softmax activation. All recurrent layers are batch normalized. At each timestep, the softmax activations give a probability distribution over the characters. CTC loss BIBREF8 is then computed from the timestep-wise probabilities.
A base model, INLINEFORM0, is trained on 190 hours (≈100K instances) of transcribed speech data. Then, it selects a subset of a 1,700-hour (≈1.1M instances) unlabeled dataset. We query labels for the selected subset and incorporate them into training. Learning rates are tuned on a small validation set of 2048 instances. The trained model is then tested on a 156-hour (≈100K instances) test set and we report CTC loss, Character Error Rate (CER) and Word Error Rate (WER).
The confidence score methods BIBREF1, BIBREF2 can be easily extended to our setup. Specifically, from the probabilities over the characters, we can compute an entropy per timestep and then average them. This method is denoted as entropy. We could also take the most likely prediction and calculate its CTC loss, normalized by the number of timesteps. This method is denoted as pCTC (predicted CTC) in the following sections.
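Both scores are simple reductions over the per-timestep character distributions; a sketch of the entropy score, assuming a (timesteps × characters) matrix of softmax outputs:

```python
# Sketch of the entropy confidence score described above: per-timestep entropy
# of the character distribution, averaged over timesteps. probs is assumed to
# be a (T, num_chars) matrix of softmax outputs.
import numpy as np

def mean_entropy(probs, eps=1e-12):
    per_step = -np.sum(probs * np.log(probs + eps), axis=1)  # entropy per timestep
    return per_step.mean()

T, num_chars = 50, 29
probs = np.random.dirichlet(np.ones(num_chars), size=T)  # placeholder softmax outputs
print(mean_entropy(probs))

# pCTC would instead decode the most likely label and compute its CTC loss,
# normalized by T; that requires a CTC implementation and is omitted here.
```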
We implement EGL by marginalizing over the most likely 100 labels, and compare it with: 1) a random selection baseline, 2) entropy, and 3) pCTC. Using the same base model, each method queries a variable percentage of the unlabeled dataset. The queries are then included in the training set, and the model continues training until convergence. Fig. FIGREF9 reports the metrics on the test set as the query percentage varies (exact values are reported in Table TABREF12 in the Appendix). All the active learning methods outperform the random baseline. Moreover, EGL shows a steeper, more rapid reduction in error than all other approaches. Specifically, when querying 20% of the unlabeled dataset, EGL has 11.58% lower CER and 11.09% lower WER relative to random. The performance of EGL at querying 20% is on par with random at 40%, suggesting that using EGL can lead to an approximate 50% decrease in data labeling.
Moreover, EGL shows a steeper, more rapid reduction in error than all other approaches.
|
What is the performance of EGL?
|
EGL shows a steeper, more rapid reduction in error than all other approaches.
|
null | false
| null |
Lorraine is a cultural and historical region in Northeastern France, now located in the administrative region of Grand Est. Its name stems from the medieval kingdom of Lotharingia, which in turn was named after either Emperor Lothair I or King Lothair II. Lorraine later was ruled as the Duchy of Lorraine before the Kingdom of France annexed it in 1766.
From 1982 until January 2016, Lorraine was an administrative region of France. In 2016, under a reorganisation, it became part of the new region Grand Est. As a region in modern France, Lorraine consisted of the four departments Meurthe-et-Moselle, Meuse, Moselle and Vosges (from a historical point of view the Haute-Marne department is located in the region), containing 2,337 communes. Metz is the regional prefecture. The largest metropolitan area of Lorraine is Nancy, which had developed for centuries as the seat of the duchy.
Lorraine borders Germany, Belgium, and Luxembourg. Its inhabitants are called Lorrains and Lorraines in French and number about 2,356,000.
|
What is Lorraine
|
Lorraine is a region located in Northeastern France, now part of the administrative region Grand Est. It consists of the four departments Meurthe-et-Moselle, Meuse, Moselle and Vosges, containing 2,337 communes. Metz is the regional prefecture, and the largest metropolitan area of Lorraine is Nancy.
The region Lorraine borders Germany, Belgium and Luxembourg. Its inhabitants are called Lorrains and Lorraines in French and number about 2,356,000.
Lorraine is also a common name given to female individuals, in France but also in English-speaking countries such as Canada, the United States, and the United Kingdom.
|
null | false
| null |
Why did the chicken cross the road?
|
To get to the other side.
|
|
null | false
| 251
|
As can be seen in Figure FIGREF15, in some areas the domains are somewhat overlapping in the embedding space, which may lead to outlier cases where examples from one domain are assigned to a cluster of another domain. We plot a confusion matrix (Figure FIGREF20) to analyze this further based on the clustering with BERT-base and k=5. We first note that the outlier sentences are much shorter than the average sentence length in the corpus (11.62 tokens on average for outliers vs. 20.5 tokens on average in general). This makes sense as shorter sentences contain less information, making it harder to assign them to an appropriate cluster. Table TABREF19 shows examples of outlier sentences, assigned to clusters of domains different from their originating domain. We can see that in many cases the assignments are sensible – for example for sentences originating from the subtitles corpus, a sentence that mentions "great priest" is assigned to the Koran cluster, a sentence that mentions "The International Criminal Court in The Hague" is assigned to the Law cluster, a sentence that mentions "the virus" is assigned to the Medical cluster and so on. This strengthens our claim that defining domains based on the corpus they originated from may be over-simplistic, and using a more data-driven approach may make it possible to find better domain assignments across different corpora.
The domain that attracted the largest number of outliers is the IT domain cluster, with 597 sentences assigned to it from other domains. Looking more closely we find that more than half of these sentences (340 out of 597) included numbers (e.g. “34% 25% 34%” (from medical), “(b) reference number 20 is deleted;” (from law), “(Command of Prostration # 1)” (from Koran) or “The message, R2.” (from subtitles)). As numbers appear in many different contexts, they may be harder to assign to a specific domain by the context-aware language models in such short sentences. The second largest attractor of outliers is the Subtitles cluster, with 372 sentences assigned to it from other domains. We find that most of these sentences contain personal pronouns or question marks (228 out of 372, 61.2%) while the ratio of such sentences in the entire corpus is only 40%. Examples include “Why did you choose the name & amarok;?” (from IT), or “What is Avonex?” (from Medical). This may be expected as the subtitles corpus mainly includes transcriptions of spoken, conversational language, and “conversation tends to have more verbs, more personal pronouns, and more questions” BIBREF25. Another possible reason for the subtitles domain to attract outliers is the fact that this is the least-topical cluster: movies and TV series may discuss diverse topics, unlike medical, religious, legal and technical texts that may have a more cohesive topic.
We perform an analysis on the selected datasets, where we measure the precision and recall of sentence selection with respect to the oracle selection. The results are available in Table TABREF34. As also reflected in the BLEU scores, the Domain-Finetune method resulted in the highest domain recall with a minimum of 97.5, while Moore-Lewis and Domain-Cosine scored 89.4 and 78.8 respectively. We find these results very appealing given that only 2000 in-domain sentences were used for selection for each domain out of 1.45 million sentences. Also note that we used DistilBERT in these experiments: we believe that using larger, non-distilled models may result in even better selection performance (although at the price of larger computational requirements).
Also note that we used DistilBERT in these experiments: we believe that using larger, non-distilled models may result in even better selection performance (although at the price of larger computational requirements).
|
Regardless of the price of larger computational requirements, which model do the authors believe would perform better?
|
Larger, non-distilled models will be better.
|
null | false
| null |
Where are do the Los Angeles Lakers play their home games?
|
The Los Angeles Lakers play their home games at the Staples Center.
|
|
1911.02711
| true
| null |
To address this issue, we further investigate a joint encoder for review and summary, which is demonstrated in Figure FIGREF4. The model works by jointly encoding the review and the summary in a multi-layer structure, incrementally updating the representation of the review by consulting the summary representation at each layer. As shown in Figure FIGREF5, our model consists of a summary encoder, a hierarchically-refined review encoder and an output layer. The review encoder is composed of multiple attention layers, each consisting of a sequence encoding layer and an attention inference layer. Summary information is integrated into the representation of the review content at each attention layer, thus, a more abstract review representation is learned in subsequent layers based on a lower-layer representation. This mechanism allows the summary to better guide the representation of the review in a bottom-up manner for improved sentiment classification.
Figure 2: Three model structures for incorporating summary into sentiment classification
To address this issue, we further investigate a joint encoder for review and summary, which is demonstrated in Figure FIGREF4. The model works by jointly encoding the review and the summary in a multi-layer structure, incrementally updating the representation of the review by consulting the summary representation at each layer.
Figure 2: Three model structures for incorporating summary into sentiment classification
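One layer of such a hierarchically-refined encoder can be sketched as a sequence-encoding sub-layer followed by summary-conditioned attention; the PyTorch sketch below uses assumed dimensions and a generic attention form, not the paper's exact formulation.

```python
# Illustrative sketch of one summary-conditioned attention layer (PyTorch);
# the attention form and all dimensions are assumptions, not the paper's model.
import torch
import torch.nn as nn

class SummaryGuidedLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.rnn = nn.LSTM(dim, dim // 2, batch_first=True, bidirectional=True)
        self.attn = nn.MultiheadAttention(dim, num_heads=1, batch_first=True)

    def forward(self, review, summary):
        # sequence encoding sub-layer over the review
        encoded, _ = self.rnn(review)                      # (B, Lr, dim)
        # attention inference sub-layer: review attends to the summary
        refined, _ = self.attn(encoded, summary, summary)  # (B, Lr, dim)
        return refined

B, Lr, Ls, dim = 2, 30, 10, 64
review, summary = torch.randn(B, Lr, dim), torch.randn(B, Ls, dim)
stack = nn.ModuleList([SummaryGuidedLayer(dim) for _ in range(3)])  # multi-layer refinement
x = review
for layer in stack:
    x = layer(x, summary)  # review representation refined by the summary at each layer
print(x.shape)  # torch.Size([2, 30, 64])
```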
|
Do they predict the sentiment of the review summary?
|
No.
|
1811.00942
| false
| null |
Deep learning has unquestionably advanced the state of the art in many natural language processing tasks, from syntactic dependency parsing BIBREF0 to named-entity recognition BIBREF1 to machine translation BIBREF2 . The same certainly applies to language modeling, where recent advances in neural language models (NLMs) have led to dramatically better approaches as measured using standard metrics such as perplexity BIBREF3 , BIBREF4 .
recent advances in neural language models (NLMs) have led to dramatically better approaches as measured using standard metrics such as perplexity BIBREF3 , BIBREF4 .
|
What is a commonly used evaluation metric for language models?
|
The answers are shown as follows:
* perplexity
|
1707.06939
| false
| null |
To study the lexical and semantic diversities of responses, we performed three analyses. First, we aggregated all worker responses to a particular question into a single list corresponding to that question. Across all questions, we found that the number of unique responses was higher for the AUI than for the Control (Fig. FIGREF19 A), implying higher diversity for AUI than for Control.
Second, we compared the diversity of individual responses between Control and AUI for each question. To measure diversity for a question, we computed the number of responses divided by the number of unique responses to that question. We call this the response density. A set of responses has a response density of 1 when every response is unique but when every response is the same, the response density is equal to the number of responses. Across the ten questions, response density was significantly lower for AUI than for Control (Wilcoxon signed rank test paired on questions: INLINEFORM0 , INLINEFORM1 , INLINEFORM2 ) (Fig. FIGREF19 B).
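The response-density measure reduces to a one-line computation; a sketch:

```python
# Sketch of the response-density measure defined above: the number of
# responses divided by the number of unique responses for a question.
def response_density(responses):
    return len(responses) / len(set(responses))

print(response_density(["a", "b", "c"]))      # 1.0: every response is unique
print(response_density(["a", "a", "a", "a"]))  # 4.0: everyone gave the same response
```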
To study the lexical and semantic diversities of responses, we performed three analyses. First, we aggregated all worker responses to a particular question into a single list corresponding to that question.
Second, we compared the diversity of individual responses between Control and AUI for each question. To measure diversity for a question, we computed the number of responses divided by the number of unique responses to that question. We call this the response density.
|
How was lexical diversity measured?
|
By computing, for each question, the number of unique responses, and the number of responses divided by the number of unique responses (the response density).
|
null | false
| null |
Classify each of the following as either birds or mammals: Eagle, Bear, Deer, Finch, Owl, Cat, Dog, Rat, Human, Whale, Raven
|
From the list that you provided, Eagle, Finch, Owl, and Raven are birds.
The rest of the list (Bear, Deer, Cat, Dog, Rat, Human, and Whale) are all mammals.
|
|
null | false
| null |
Give me a list of classic English poems with nice message
|
1. "If" by Rudyard Kipling: This poem is highlighting the importance to lead a life with integrity and gaining a clearer perspective about life.
2. "Requiescat" by Oscar Wilde: There is a lot of grief, loss and mourning. Poet dedicated to his younger sister who unfortunately died unexpectedly.
3. "And Still I Rise" by Maya Angelou: This poem is all about hope, courage and undying attitude to move forward in life by winning over all the challenges that life or society throws at you.
4. "Do not go gentle into that good night" by Dylan Thomas: One of the most known piece of the Welsh poet. In short, this poem is a gentle reminder about preciousness of life.
|
|
null | false
| null |
Maiestas trifasciatus (formerly Recilia trifasciatus) is a species of bug from the Cicadellidae family that can be found on the Canary Islands and in countries including Iran and Burkina Faso. It was formerly placed within Recilia, but a 2009 revision moved it to Maiestas.
|
What is a Maiestas trifasciatus
|
Maiestas trifasciatus (formerly Recilia trifasciatus) is a species of bug from the Cicadellidae family that can be found on the Canary Islands and in countries including Iran and Burkina Faso.
It was formerly placed within Recilia, but a 2009 revision moved it to Maiestas.
|
null | false
| null |
Konstantin Konstantinovich Sluchevsky (Russian: Константин Константинович Случевский; 1837–1904) was a Russian poet.
Sluchevsky was born in St. Petersburg into a Russian noble family. He graduated from the First Cadet Corps, served in the Imperial Russian Guard, then entered the Academy of the General Staff, but in 1861 he quit the military service and went abroad. For a number of years he studied in Paris, Berlin, and Heidelberg, where he got the degree of Doctor of Philosophy in 1865. On his return to Russia he worked for the Ministry of Internal Affairs and State Property. From 1891 to 1902 he was the Chief Editor of the journal Pravitelstvenny Vestnik (Government Bulletin). He was also a member of the Council of the Chief Printing Office and had a title of Hofmeister.
|
Given a reference text about Konstantin Konstantinovich Sluchevsky, tell me where he was born and where he studied.
|
Konstantin Konstantinovich Sluchevsky was born in St. Petersburg, Russia and studied in Paris, Berlin, and Heidelberg.
|
null | false
| 46
|
Question answering (QA) has drawn a lot of attention in the past few years. QA tasks on images BIBREF0 have been widely studied, but most focused on understanding text documents BIBREF1 . A representative dataset in text QA is SQuAD BIBREF1 , in which several end-to-end neural models have accomplished promising performance BIBREF2 . Although there is a significant progress in machine comprehension (MC) on text documents, MC on spoken content is a much less investigated field. In spoken question answering (SQA), after transcribing spoken content into text by automatic speech recognition (ASR), typical approaches use information retrieval (IR) techniques BIBREF3 to find the proper answer from the ASR hypotheses. One attempt towards QA of spoken content is TOEFL listening comprehension by machine BIBREF4 . TOEFL is an English examination that tests the knowledge and skills of academic English for English learners whose native languages are not English. Another SQA corpus is Spoken-SQuAD BIBREF5 , which is automatically generated from SQuAD dataset through Google Text-to-Speech (TTS) system. Recently ODSQA, a SQA corpus recorded by real speakers, is released BIBREF6 .
To mitigate the impact of speech recognition errors, using sub-word units is a popular approach for speech-related downstream tasks. It has been applied to spoken document retrieval BIBREF7 and spoken term detection BIBREF8. Prior work showed that using phonetic sub-word units brought improvements for both Spoken-SQuAD and ODSQA BIBREF5.
Instead of considering sub-word features, this paper proposes a novel approach to mitigate the impact of ASR errors. We consider reference transcriptions and ASR hypotheses as two domains, and adapt the source domain data (reference transcriptions) to the target domain data (ASR hypotheses) by projecting these two domains in the shared common space. Therefore, it can effectively benefit the SQA model by improving the robustness to ASR errors in the SQA model.
Domain adaptation has been successfully applied on computer vision BIBREF9 and speech recognition BIBREF10 . It is also widely studied on NLP tasks such as sequence tagging and parsing BIBREF11 , BIBREF12 , BIBREF13 . Recently, adversarial domain adaptation has already been explored on spoken language understanding (SLU). Liu and Lane learned domain-general features to benefit from multiple dialogue datasets BIBREF14 ; Zhu et al. learned to transfer the model from the transcripts side to the ASR hypotheses side BIBREF15 ; Lan et al. constructed a shared space for slot tagging and language model BIBREF16 . This paper extends the capability of adversarial domain adaptation for SQA, which has not been explored yet.
Instead of considering sub-word features, this paper proposes a novel approach to mitigate the impact of ASR errors.
|
What approach do the authors propose in the paper?
|
A novel approach to mitigate the impact of ASR errors.
|
1809.08652
| true
| null |
Both the HEOT and BIBREF1 datasets contain tweets which are annotated in three categories: offensive, abusive and none (or benign). Some examples from the dataset are shown in Table TABREF4 . We use a LSTM based classifier model for training our model to classify these tweets into these three categories. An overview of the model is given in the Figure FIGREF12 . The model consists of one layer of LSTM followed by three dense layers. The LSTM layer uses a dropout value of 0.2. Categorical crossentropy loss was used for the last layer due to the presence of multiple classes. We use Adam optimizer along with L2 regularisation to prevent overfitting. As indicated by the Figure FIGREF12 , the model was initially trained on the dataset provided by BIBREF1 , and then re-trained on the HEOT dataset so as to benefit from the transfer of learned features in the last stage. The model hyperparameters were experimentally selected by trying out a large number of combinations through grid search.
The model consists of one layer of LSTM followed by three dense layers. The LSTM layer uses a dropout value of 0.2.
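A minimal Keras sketch of the described classifier follows; the vocabulary size, sequence length, and layer widths are illustrative assumptions, while the single LSTM layer, dropout 0.2, three dense layers, categorical cross-entropy, and Adam with L2 regularisation come from the text.

```python
# Minimal sketch of the described classifier: one LSTM layer (dropout 0.2),
# three dense layers, categorical cross-entropy, and Adam with L2
# regularisation. Vocab size, sequence length, and widths are assumptions.
from tensorflow import keras
from tensorflow.keras import layers, regularizers

vocab_size, max_len, embed_dim = 20000, 40, 100  # assumed values

model = keras.Sequential([
    layers.Input(shape=(max_len,)),
    layers.Embedding(vocab_size, embed_dim),
    layers.LSTM(128, dropout=0.2),  # dropout 0.2 (from the text)
    layers.Dense(64, activation="relu",
                 kernel_regularizer=regularizers.l2(1e-4)),  # L2 regularisation
    layers.Dense(32, activation="relu",
                 kernel_regularizer=regularizers.l2(1e-4)),
    layers.Dense(3, activation="softmax"),  # offensive / abusive / none
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```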
|
Do they use dropout?
|
Yes.
|
null | false
| null |
Why does it get hotter in the summer?
|
During summer, the northern hemisphere tilts towards the Sun. As a result, the Sun shines longer and more directly in the summer, and the longer it shines, the hotter it gets. At this time the southern hemisphere has winter. The situation swaps six months later, when the southern hemisphere is tilted closer to the Sun.
|
|
null | false
| null |
What is the difference between oxfords and derbys?
|
Oxfords traditionally have closed lacing and are seen as more formal. Derbys, on the other hand, have an open lacing system and are thus more informal.
|
|
1805.05581
| false
| null |
We collected Japanese fictional stories from the Web to construct the dataset. The dataset contains approximately 30M utterances of fictional characters. We separated the data into a 99%–1% split for training and testing. In Japanese, the function words at the end of the sentence often exhibit style (e.g., desu+wa, desu+ze); therefore, we used an existing lexicon of multi-word functional expressions BIBREF14. Overall, the vocabulary size $\vert \mathcal {V} \vert $ was 100K.
We collected Japanese fictional stories from the Web to construct the dataset.
|
How is the dataset created?
|
The answers are shown as follows:
* We collected Japanese fictional stories from the Web
|
null | false
| null |
How do I make Aioli?
|
First, combine 1 egg plus 1 egg yolk, two grated garlic cloves, juice from half a lemon, a tablespoon of dijon mustard, a small pinch of saffron, a pinch of salt and a pinch of ground black pepper in a blender.
Then, mix 1/4 cup of olive oil and 1 cup of grapeseed oil together, then very slowly stream in the oils while continuously whisking hard. If you add the oil too quickly, it will not emulsify, so do it slowly. Once it’s all incorporated, you’ll be rewarded with thick, creamy aioli!
Finally, taste it and if it needs further acid, add a teaspoon of white wine vinegar.
Enjoy!
|
|
null | false
| null |
What is Unified Payments Interface (UPI) in India
|
The Unified Payments Interface is a Peer to Peer payments system launched in India by the National Payments Corporation of India (NPCI) in 2016. NPCI set out with a mandate to change the face of India’s payment systems. It developed the Unified Payments Interface (UPI) as an architecture framework with a set of standard Application Programming Interface (API) specifications to facilitate online payments. The aim was to simplify and provide a single interface across all NPCI systems, thereby creating interoperability and a superior customer experience.
UPI is now one of the most preferred payment solutions in India, with over a billion transactions every month. In FY22, the total transaction value processed by the Unified Payments Interface (UPI) platform accounted for 86% of India’s GDP. UPI’s core function is to support easy and secure money transfers between bank accounts. It does this by adding multiple bank accounts into a single mobile application, allowing for seamless fund transfers and merchant payments from one place. It also enables ‘peer to peer’ and ‘peer to merchant’ collection requests, which can be scheduled and paid as requested.
Payments can be made using a UPI ID, UPI Number, Account number, and an Indian Financial System Code (IFSC). Payment security is as per applicable Reserve Bank of India (RBI) guidelines using a 1-click 2-factor authentication where the second factor of authentication is the UPI PIN.
UPI can be accessed on all major platforms such as Android and iOS with apps developed by members for versions of Android 4.2.2 and iOS 8.1 and above.
The primary business uses of UPI are to allow a personal mobile to serve as the primary device for all payments, including person to person, person to entity, and entity to person. Using a personal mobile, an individual can ‘Pay’ someone (push) as well as ‘Collect’ from someone (pull). In addition, users can pay and collect using ‘UPI IDs’, which are aliases to the VPA. The goal is to make payments by providing an address, without ever having to provide account details or credentials on third-party applications or websites.
UPI also supports sending ‘collect’ requests to others (person to person or entity to person) with a ‘pay by’ date, which allows customers to pay at a later date without blocking the money in the account. It also allows for pre-authorizing multiple recurring payments (utilities, school fees, subscriptions, etc.) with a one-time secure authentication and rule-based access, similar to ECS. UPI Autopay allows customers to set recurring payments for their UPI transactions; recurring payments of up to 2,000 Indian Rupees can be set at the desired frequency.
This makes UPI a fully interoperable payments system across all PSPs, without silos and closed systems, with one-click two-factor authentication using a personal phone, and without any acquiring devices or physical tokens.
In March 2023, the Reserve Bank of India (RBI) and the Monetary Authority of Singapore (MAS) launched the linkage of UPI and Singapore’s PayNow. Total remittances to India were on track to breach the $100 billion mark in 2023, with Singapore the fourth highest remitting nation, accounting for 5.7% of all inbound remittances. Integrating PayNow and UPI will likely reduce charges and the time taken for such transactions.
|
|
null | false
| null |
What is man's best friend?
|
"Man's best friend" usually refers to a dog, or is a way of referring to the domestic dog. The phrase reflects many traits that dogs exhibit, such as loyalty, companionship, and friendship. Dogs never stop loving, and do not have the ability to feel or show hate as people interpret it. Any behavior that looks contrary to "man's best friend" could be the result of neglect or animal abuse.
|
|
null | false
| null |
InterGlobe Aviation Limited, doing business as IndiGo, is an Indian low-cost airline headquartered in Gurgaon, Haryana, India. It is the largest airline in India by passengers carried and fleet size, with a ~57% domestic market share as of October 2022. It is also the largest individual Asian low-cost carrier in terms of jet fleet size and passengers carried, and the fourth largest carrier in Asia. The airline has carried over 300 million passengers as of November 2022.
|
List down some important facts about Indigo airlines
|
1. InterGlobe Aviation Limited does business as IndiGo in India.
2. IndiGo is a low-cost Indian airline, headquartered in Gurgaon, Haryana, India.
3. It is the largest airline in India, with a ~57% domestic market share as of October 2022.
4. It is the largest individual Asian low-cost carrier in terms of jet fleet size and passengers carried, and also the fourth largest carrier in Asia.
5. As of November 2022, IndiGo has carried over 300 million passengers.
|
null | false
| null |
How do I improve my leadership skills?
|
Leadership is a journey and improves with practice over a long period of time. The key ingredients for improving leadership start with self-awareness and practicing empathy.
One of the best ways to develop your leadership skills is to observe and absorb from other good leaders. A leader should also regularly solicit honest feedback from people in their organization, preferably via a 360-degree review. This feedback should be the starting point and a good guide for improving your leadership skills. New ideas for improving your leadership can come from a constant study of articles and books, and from learning at leadership forums.
Just like any other skill, leadership should be continuously developed through reading, learning, feedback, and observation.
|