| paper_id (string) | yes_no (bool) | paper_index (int64) | evidence (string) | question (string) | answer (string) |
|---|---|---|---|---|---|
null | false
| null |
The 2022 Pulitzer Prize was awarded to four Indian photographers from the Reuters news agency. Name them.
|
Danish Siddiqui (slain), Adnan Abidi, Sanna Irshad Mattoo & Amit Dave
|
|
null | false
| null |
What are some must visit places in San Diego, CA?
|
San Diego is known as a sunny city and is one of the places in the US that receives the most sunshine throughout the year. People from all over the US visit San Diego. Here are some must-visit places in San Diego:
- San Diego Zoo
- Balboa Park
- USS Midway Museum
- Safari Park
- SeaWorld San Diego
- Belmont Park
|
|
null | false
| 388
|
The current coverage of the political landscape in the press and in social media has led to an unprecedented situation. Like never before, a statement in an interview, a press release, a blog note, or a tweet can spread almost instantaneously and reach the public in no time. This proliferation speed has left little time for double-checking claims against the facts, which has proven critical in politics, e.g., during the 2016 presidential campaign in the USA, which was arguably impacted by fake news in social media and by false claims.
Investigative journalists and volunteers have been working hard trying to get to the root of a claim and to present solid evidence in favor of or against it. Manual fact-checking has proven very time-consuming, and thus automatic methods have been proposed as a way to speed up the process. For instance, there has been work on checking the factuality/credibility of a claim, of a news article, or of an information source BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7. However, less attention has been paid to other steps of the fact-checking pipeline, which is shown in Figure FIGREF1.
The process starts when a document is made public. First, an intrinsic analysis is carried out in which check-worthy text fragments are identified. Then, other documents that might support or rebut a claim in the document are retrieved from various sources. Finally, by comparing a claim against the retrieved evidence, a system can determine whether the claim is likely true or likely false. For instance, BIBREF8 do this on the basis of a knowledge graph derived from Wikipedia. The outcome could then be presented to a human expert for final judgment.
In this paper, we focus on the first step: predicting check-worthiness of claims. Our contributions can be summarized as follows:
New dataset: We build a new dataset of manually-annotated claims, extracted from the 2016 US presidential and vice-presidential debates, which we gathered from nine reputable sources such as CNN, NPR, and PolitiFact, and which we release to the research community.
Modeling the context: We develop a novel approach for automatically predicting which claims should be prioritized for fact-checking, based on a rich input representation. In particular, we model not only the textual content, but also the context: how the target claim relates to the current segment, to neighboring segments and sentences, and to the debate as a whole, and also how the opponents and the public react to it.
State-of-the-art results: We achieve state-of-the-art results, outperforming a strong rival system by a sizable margin, while also demonstrating that this improvement is due primarily to our modeling of the context.
We model the problem as a ranking task, and we train both Support Vector Machines (SVM) and Feed-forward Neural Networks (FNN) obtaining state-of-the-art results. We also analyze the relevance of the specific feature groups and we show that modeling the context yields a significant boost in performance. Finally, we also analyze whether we can learn to predict which facts are check-worthy with respect to each of the individual media sources, thus capturing their biases. It is worth noting that while trained on political debates, many features of our model can be potentially applied to other kinds of information sources, e.g., interviews and news.
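To make the ranking setup concrete, here is a minimal sketch of ranking sentences by check-worthiness with an SVM. It is illustrative only: the four features and the toy data are invented stand-ins for the content and context features described above, not the paper's feature set.

```python
# Rank sentences by check-worthiness using the SVM decision value as the score.
# Features and data are hypothetical placeholders, for illustration only.
import numpy as np
from sklearn.svm import SVC

# Each row: [sentence_length, opponent_reacted, audience_reacted, position_in_debate]
X_train = np.array([
    [12, 1, 1, 0.10],
    [8,  0, 0, 0.50],
    [20, 1, 0, 0.75],
    [5,  0, 0, 0.90],
])
y_train = np.array([1, 0, 1, 0])  # 1 = claim was fact-checked by some source

model = SVC(kernel="rbf", gamma="scale")
model.fit(X_train, y_train)

# Rank new sentences by signed distance from the decision boundary.
X_new = np.array([[15, 1, 1, 0.2], [6, 0, 0, 0.8]])
scores = model.decision_function(X_new)
print(np.argsort(-scores))  # indices of sentences, most check-worthy first
```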
The rest of the paper is organized as follows: Section SECREF2 discusses related work. Section SECREF3 describes the process of gathering and annotating our political debates dataset. Section SECREF4 presents our supervised approach to predicting fact-checking worthiness, including the explanation of the model and the information sources we use. Section SECREF5 presents the evaluation setup and discusses the results. Section SECREF6 provides further analysis. Finally, Section SECREF7 presents the conclusions and outlines some possible directions for future research.
In particular, we model not only the textual content, but also the context: how the target claim relates to the current segment, to neighboring segments and sentences, and to the debate as a whole, and also how the opponents and the public react to it.
|
Do the authors model the textual content?
|
Yes, they do.
|
null | false
| null |
Which of these are not playable races on Dungeons and Dragons 5th edition: Giant, Gnome, Half-Elf, Chimera, Kenku, Troll, Gnoll
|
From this list, Giant, Chimera, Troll and Gnoll are not playable races in Dungeons and Dragons 5th edition, although there is a race similar to the concept of giants called Firbolg.
|
|
2002.11268
| false
| null |
The Density Ratio method consistently outperformed Shallow Fusion for the cross-domain scenarios examined, with and without fine-tuning to audio data from the target domain. Furthermore, the gains in WER over the baseline are significantly larger for the Density Ratio method than for Shallow Fusion, with up to 28% relative reduction in WER (17.5% $\rightarrow $ 12.5%) compared to up to 17% relative reduction (17.5% $\rightarrow $ 14.5%) for Shallow Fusion, in the no fine-tuning scenario.
Furthermore, the gains in WER over the baseline are significantly larger for the Density Ratio method than for Shallow Fusion, with up to 28% relative reduction in WER (17.5% $\rightarrow $ 12.5%) compared to up to 17% relative reduction (17.5% $\rightarrow $ 14.5%) for Shallow Fusion, in the no fine-tuning scenario.
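For reference, the relative reductions quoted here follow directly from the absolute WERs:

$$\frac{17.5 - 12.5}{17.5} \approx 28.6\% \ \ (\text{Density Ratio}), \qquad \frac{17.5 - 14.5}{17.5} \approx 17.1\% \ \ (\text{Shallow Fusion})~.$$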
|
What metrics are used for evaluation?
|
word error rate
|
null | false
| null |
Write a witty paragraph about a chicken's thought process for crossing the road.
|
Oh my, that is a lovely shrub over there past the black track. I wonder if it smells as good as I imagine it would. I do wonder if all the fast folks passing by know this shrub and thus are not bothered by the curiosity. There is no way around it; I have to get close to confirm it.
|
|
null | false
| null |
Numerous actors have made recurring guest appearances throughout the series, including Pamela Reed as Leslie's mother and fellow politician Marlene Griggs-Knope, Ben Schwartz as Tom's fast-talking friend Jean-Ralphio and Jenny Slate as his twin sister Mona-Lisa, Jama Williamson as Tom's ex-wife Wendy, Mo Collins as morning talk show host Joan Callamezzo, Jay Jackson as television broadcaster Perd Hapley, Alison Becker as newspaper reporter Shauna Malwae-Tweep, Darlene Hunt as conservative activist Marcia Langman, and Andy Forrest as Andy's frequent shoeshine customer Kyle. Megan Mullally, the real-life wife of Nick Offerman, portrayed Ron's ex-wife Tammy in the second season's "Ron and Tammy", a role she reprised in later episodes. Lucy Lawless and Jon Glaser have recurring roles in the fifth and sixth seasons as Ron's love interest and later wife Diane Lewis, and as Leslie's arch enemy on the city council Jeremy Jamm, respectively.
Mullally's performance was well received, which made the Parks and Recreation producers feel more comfortable about using celebrity guest actors in later episodes. Other such celebrity guests included: Blake Anderson, Fred Armisen, Will Arnett, Kristen Bell, H. Jon Benjamin, Matt Besser, Chris Bosh, Louis C.K., The Decemberists, Sam Elliott, Will Forte, Ginuwine, Michael Gross, Jon Hamm, Nick Kroll, John Larroquette, Andrew Luck, Letters to Cleo, Natalie Morales, Parker Posey, Kathryn Hahn, Andy Samberg, J. K. Simmons, Roy Hibbert, Detlef Schrempf, Justin Theroux, Wilco, Henry Winkler, Peter Serafinowicz and Yo La Tengo. Paul Rudd appeared in several season four episodes as Bobby Newport, Leslie's opponent in the City Council race, and returned for two episodes in the final season.
|
Extract the celebrity guests that appeared on the show Parks and Recreation.
|
Pamela Reed, Ben Schwartz, Jenny Slate, Jama Williamson, Mo Collins, Jay Jackson, Alison Becker, Darlene Hunt, Andy Forrest, Megan Mullally, Lucy Lawless, Jon Glaser, Blake Anderson, Fred Armisen, Will Arnett, Kristen Bell, H. Jon Benjamin, Matt Besser, Chris Bosh, Louis C.K., The Decemberists, Sam Elliott, Will Forte, Ginuwine, Michael Gross, Jon Hamm, Nick Kroll, John Larroquette, Andrew Luck, Letters to Cleo, Natalie Morales, Parker Posey, Kathryn Hahn, Andy Samberg, J. K. Simmons, Roy Hibbert, Detlef Schrempf, Justin Theroux, Wilco, Henry Winkler, Peter Serafinowicz, Yo La Tengo, Paul Rudd
|
null | false
| null |
In common usage, climate change describes global warming—the ongoing increase in global average temperature—and its effects on Earth's climate system. Climate change in a broader sense also includes previous long-term changes to Earth's climate. The current rise in global average temperature is more rapid than previous changes, and is primarily caused by humans burning fossil fuels. Fossil fuel use, deforestation, and some agricultural and industrial practices increase greenhouse gases, notably carbon dioxide and methane. Greenhouse gases absorb some of the heat that the Earth radiates after it warms from sunlight. Larger amounts of these gases trap more heat in Earth's lower atmosphere, causing global warming.
Due to climate change, deserts are expanding, while heat waves and wildfires are becoming more common. Increased warming in the Arctic has contributed to melting permafrost, glacial retreat and sea ice loss. Higher temperatures are also causing more intense storms, droughts, and other weather extremes. Rapid environmental change in mountains, coral reefs, and the Arctic is forcing many species to relocate or become extinct. Even if efforts to minimise future warming are successful, some effects will continue for centuries. These include ocean heating, ocean acidification and sea level rise.
|
From the Passage please find out what are the cause of climate change
|
1. Climate change is primarily caused by humans burning fossil fuels.
2. Fossil fuel use.
3. Deforestation.
4. Some agricultural and industrial practices that increase greenhouse gases, notably carbon dioxide and methane.
|
null | false
| null |
What are the best river cruises for Europe?
|
There are many great river cruises available for Europe. Viking, AmaWaterways, Avalon Waterways, Unique Boutique River Cruises and Tauck are consistently rated in the top 5 by Travel and Leisure Magazine.
|
|
null | false
| 205
|
Poetry generation is an interesting research topic in the field of text generation. As one of the most valuable literary and cultural heritages of China, Chinese classical poetry has been familiar to and loved by Chinese people from generation to generation. It has many particular characteristics in its language structure, ranging from form and sound to meaning, and is thus regarded as an ideal testing task for text generation. In this paper, we propose a GPT-2 based uniformed framework for generating major types of Chinese classical poems. We define a unified format for formulating all types of training samples by integrating detailed form information, then present a simple form-stressed weighting method in GPT-2 to strengthen control over the form of the generated poems, with special emphasis on those forms with longer body length. Preliminary experimental results show this enhanced model can generate Chinese classical poems of major types with high quality in both form and content, validating the effectiveness of the proposed strategy. The model has been incorporated into Jiuge, the most influential Chinese classical poetry generation system, developed by Tsinghua University (Guo et al., 2019).
We define a unified format for formulating all types of training samples by integrating detailed form information, then present a simple form-stressed weighting method in GPT-2 to strengthen control over the form of the generated poems, with special emphasis on those forms with longer body length.
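The abstract does not spell out the weighting mechanism, but one plausible reading of "form-stressed weighting" is a token-level reweighting of the language-modeling loss, so that tokens carrying form information count more. The sketch below illustrates that reading only, under invented assumptions; it is not the paper's implementation.

```python
# Hypothetical sketch: a weighted negative log-likelihood in which tokens that
# carry form information (e.g., line-ending or rhyme positions) get extra weight.
import numpy as np

def weighted_nll(token_log_probs, is_form_token, form_weight=2.0):
    """token_log_probs: log P(token) per position; is_form_token: 0/1 mask."""
    weights = np.where(is_form_token == 1, form_weight, 1.0)
    return -np.sum(weights * token_log_probs) / np.sum(weights)

log_probs = np.log(np.array([0.20, 0.05, 0.40, 0.10]))
form_mask = np.array([0, 1, 0, 1])  # e.g., positions that close a poem line
print(weighted_nll(log_probs, form_mask))
```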
|
What method do they present in GPT-2 to strengthen control over the form of the generated poems?
|
A simple form-stressed weighting method.
|
null | false
| null |
Natural hydrogen (known as white hydrogen), is naturally occurring molecular hydrogen on or in Earth (as opposed to hydrogen produced in the laboratory or in industry). The name white hydrogen distinguishes it from green hydrogen, which is produced from renewable energy sources, and from grey, brown or black hydrogen, which is obtained from fossil sources or from the electrolysis of water. Natural hydrogen may be renewable, non-polluting and allows for lower cost operation compared to industrial hydrogen. Natural hydrogen has been identified in many source rocks in areas beyond the sedimentary basins where oil companies typically operate.
Origin of natural hydrogen
There are several sources of natural hydrogen:
- degassing of deep hydrogen from the Earth's crust and mantle;
- reaction of water with ultrabasic rocks (serpentinisation);
- contact of water with reducing agents in the Earth's mantle;
- interaction of water with freshly exposed rock surfaces (weathering);
- decomposition of hydroxyl ions in the structure of minerals;
- natural radiolysis of water;
- decomposition of organic matter;
- biological activity.
Extraction
Natural hydrogen is extracted from wells, mixed with other gases such as nitrogen or helium.
Several sources have been identified in France. Geologists Alain Prinzhofer and Eric Derville have demonstrated the existence of large reservoirs in a dozen countries, including Mali and the United States. However, their potential remains difficult to assess.
Numerous emanations on the ocean floor have been identified but are difficult to exploit. The discovery of a significant emergence in Russia in 2008 suggests the possibility of extracting native hydrogen in geological environments.
Geology
Natural hydrogen is generated continuously from a variety of natural sources. There are many known hydrogen emergences on mid-ocean ridges. Another of the known reactions, serpentinisation, occurs under the sea floor (in the oceanic crust).
Hydrogen of diagenetic origin (iron oxidation) is found in the sedimentary basins of cratons, notably in Russia. Other sources are being explored, such as mantle hydrogen, or hydrogen from radiolysis (natural electrolysis) or from bacterial activity. In France, the Alps and the Pyrenees are suitable for exploitation. New Caledonia has hyperalkaline springs that show dihydrogen emissions. A large accumulation of natural hydrogen was discovered in Bourakebougou (Mali).
Characteristics
Dihydrogen is very soluble in fresh water, especially at depth (solubility increases with pressure).
https://en.wikipedia.org/wiki/Natural_hydrogen
|
Given these paragraphs about Natural hydrogen, can you name some of the countries where sources have been identified?
|
Several sources have been identified in France. Geologists Alain Prinzhofer and Eric Derville have demonstrated the existence of large reservoirs in a dozen countries, including Mali and the United States. However, their potential remains difficult to assess.
|
null | false
| null |
Where do bears live?
|
Bears are native to every continent on Earth except Africa, Australia, and Antarctica. Polar bears live in the Arctic, black bears live in North America and Asia, brown bears live in North America, Asia, and Europe, and panda bears live in Asia.
|
|
null | false
| null |
Who wrote The Picture of Dorian Gray in 1891?
|
Oscar Wilde
|
|
null | false
| null |
How do you check the weather on an iPhone?
|
Open the "Weather" app. Also, you can set a Weather widget on your home screen. To do this, long-press the home screen.
|
|
null | false
| null |
What is social media?
|
Social media accounts for a variety of services that are intended to connect individuals and organizations, to create a network for sharing information. The information shared can be videos, images, music, thoughts, opinions, or a variety of other things that can be shared publicly or privately. The intent is to allow anyone to be a contributor, give users the ability to connect with and follow whom they please, and provide a broad network of information and connections.
|
|
null | false
| null |
At the 2004 Summer Olympics, she originally won the gold medal in shot put, but tested positive for stanozolol and was stripped of the medal. She later received a life ban from the IAAF in a Memorandum of 21 September 2005. Korzhanenko refused to return the gold medal, despite the pressure of the International Athletics Federation's officials. As of 2023, she has not returned the gold medal.
|
Does Irina Korzhanenko still have her gold medal?
|
Yes; although she was stripped of the medal, she refused to return it.
|
null | false
| null |
Lean startup is a methodology for developing businesses and products that aims to shorten product development cycles and rapidly discover if a proposed business model is viable; this is achieved by adopting a combination of business-hypothesis-driven experimentation, iterative product releases, and validated learning. Lean startup emphasizes customer feedback over intuition and flexibility over planning. This methodology enables recovery from failures more often than traditional ways of product development.
|
Based on this passage, give a bulleted overview of the components of the lean startup methodology.
|
- Lean startup is a business and product development methodology
- The aim is to shorten product development cycles
- The approach is to test business hypotheses
- The experiments involve iterative product releases and learning from customer feedback
|
null | false
| null |
What is heliocentrism?
|
Heliocentrism is an astronomical model in which the Sun lies at the center and the planets revolve around it. The model was published by Nicolaus Copernicus shortly before his death in 1543, and its publication became one of the most significant events in the history of science. Before the acceptance of heliocentrism, it was widely believed that Earth was the center of the entire universe.
|
|
null | false
| 20
|
This research addresses the problem of representing the semantics of text documents in multi-lingual comparable corpora. We present a new approach to this problem, based on neural embeddings, and test it on the task of clustering texts into meaningful classes depending on their topics. The setting is unsupervised, meaning that one either does not have enough annotated data to train a supervised classifier or does not want to be limited to a pre-defined set of classes. There are a lot of sufficiently good approaches to this problem in the case of mono-lingual text collections, but the presence of multiple languages introduces complications.
When a text collection contains documents in several languages, it becomes impractical to simply represent the documents as vectors of words occurring in them ("bag-of-words"), as the words' surface forms are different, even in closely related languages. Thus, one has to invent means to cross the inter-lingual gap and bring all documents to some sort of shared representation, without losing information about their topics or categories.
Of course, one obvious way to solve this problem is to translate all documents into one language and then apply any clustering algorithm. However, this requires either buying human/machine translation services (which can be expensive if you deal with a large text collection) or training your own statistical machine translation model (which as a rule requires a big parallel corpus). This is the reason to search for other solutions.
In this paper, a novel way of reducing the problem of cross-lingual document representation to a monolingual setting is proposed. Essentially, we train Continuous Bag-of-Words models BIBREF0 on large comparable monolingual corpora for the two languages our dataset consists of. This provides us with vector representations of words, allowing us to measure their semantic similarity. Then, a linear transformation matrix from vectors of language A to vectors of language B is learned, using a small bilingual dictionary as training data. This matrix is then employed to `project' word and document representations from the semantic space of language A to the semantic space of language B. It allows not only quite accurate `translation' of words, but also of document `semantic fingerprints' (dense representations of document semantics, calculated as the average of the trained distributional vectors for all the words in a document).
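A minimal sketch of this pipeline, with random vectors standing in for the trained CBOW embeddings and the bilingual dictionary:

```python
# Learn a linear map from language A vectors to language B vectors by least
# squares over a small bilingual dictionary, then project a document
# "semantic fingerprint" (the mean of its word vectors). Toy data only.
import numpy as np

rng = np.random.default_rng(0)
d = 50
A = rng.normal(size=(200, d))                       # dictionary words, language A
W_true = rng.normal(size=(d, d))
B = A @ W_true + 0.01 * rng.normal(size=(200, d))   # their translations, language B

W, *_ = np.linalg.lstsq(A, B, rcond=None)           # least-squares solution of A @ W ~ B

doc_vectors_A = rng.normal(size=(30, d))            # vectors of one document's words
fingerprint_A = doc_vectors_A.mean(axis=0)          # document semantic fingerprint
fingerprint_B = fingerprint_A @ W                   # projected into language B space
print(fingerprint_B.shape)
```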
This approach is evaluated in a setting where the input is a collection of documents in several languages and some number of topics to which these documents belong (we also have large monolingual corpora to train distributional models on). For each document, we are given its language, but not its topic. The task is to cluster this collection so that documents belonging to one topic are clustered together, independent of their language. Note that we are interested in clustering the collection as a whole, not each language separately (which is trivial).
Our evaluation data consists of comparable corpora of Russian and Ukrainian academic texts. On this material, we show that the `translated semantic fingerprints' method represents documents in different languages precisely enough to allow almost exact clustering according to document topics, with only 5% incorrect assignments. It significantly outperforms both the naive bag-of-words baseline and the not-so-naive method of `orthographic translation' based on Damerau-Levenshtein distance, even enriched with dictionary mappings. At the same time, it does not require large parallel corpora or a ready-made statistical machine translation model.
The rest of the paper is structured as follows. In Section "Related Work" we describe the foundations of our approach and the related work. Section "Academic texts as Comparable Corpora" introduces the employed corpora and the story behind them. Section "Learning to Translate: Ukrainian-to-Russian transformations" is dedicated to learning the transformation matrix, and Section "Experiment Design and Evaluation" describes our experimental setting and evaluation results. We discuss the findings in Section "Discussion" and conclude in Section "Conclusion and Future Work" , also suggesting directions for future work.
In this paper, a novel way of reducing the problem of cross-lingual document representation to a monolingual setting is proposed.
|
What novel approach do the authors present in this paper?
|
A novel way of reducing the problem of cross-lingual document representation to a monolingual setting.
|
null | false
| null |
Liebers is a German language surname. Notable people with the name include:
Mario Liebers (born 1960), German former competitive figure skater
Martin Liebers (born 1985), German former competitive figure skater
Matthias Liebers (born 1958), former German footballer
Peter Liebers (born 1988), German former figure skater.
|
Which notable people in this list with surname Liebers were born in the 80s?
|
Martin Liebers and Peter Liebers
|
null | false
| 247
|
With the rapid growth of social network platforms, more and more people tend to share their experiences and emotions online. Emotion analysis of online text becomes a new challenge in Natural Language Processing (NLP). In recent years, studies in emotion analysis largely focus on emotion classification including detection of writers' emotions BIBREF0 as well as readers' emotions BIBREF1 . There are also some information extraction tasks defined in emotion analysis BIBREF2 , BIBREF3 , such as extracting the feeler of an emotion BIBREF4 . These methods assume that emotion expressions are already observed. Sometimes, however, we care more about the stimuli, or the cause of an emotion. For instance, Samsung wants to know why people love or hate Note 7 rather than the distribution of different emotions.
Ex.1 我的手机昨天丢了,我现在很难过。
Ex.1 Because I lost my phone yesterday, I feel sad now.
In the example shown above, "sad" is an emotion word, and the cause of "sad" is "I lost my phone". The emotion cause extraction task aims to identify the reason behind an emotion expression. It is a more difficult task compared to emotion classification since it requires a deep understanding of the text that conveys the emotion.
Existing approaches to emotion cause extraction mostly rely on methods typically used in information extraction, such as rule based template matching, sequence labeling and classification based methods. Most of them use linguistic rules or lexicon features, but do not consider the semantic information and ignore the relation between the emotion word and emotion cause. In this paper, we present a new method for emotion cause extraction. We consider emotion cause extraction as a question answering (QA) task. Given a text containing the description of an event which may or may not cause a certain emotion, we take an emotion word in context, such as "sad", as a query. The question to the QA system is: "Does the described event cause the emotion of sadness?". The expected answer is either "yes" or "no". (see Figure FIGREF1 ). We build our QA system based on a deep memory network. The memory network has two inputs: a piece of text, referred to as a story in QA systems, and a query. The story is represented using a sequence of word embeddings.
A recurrent structure is implemented to mine the deep relation between a query and a text. It measures the importance of each word in the text by an attention mechanism. Based on the learned attention result, the network maps the text into a low dimensional vector space. This vector is then used to generate an answer. Existing memory network based approaches to QA use a weighted sum of attentions to jointly consider short text segments stored in memory. However, they do not explicitly model sequential information in the context. In this paper, we propose a new deep memory network architecture to model the context of each word simultaneously by multiple memory slots which capture sequential information using convolutional operations BIBREF5 , and achieves the state-of-the-art performance compared to existing methods which use manual rules, common sense knowledge bases or other machine learning models.
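As a toy illustration of the attention step described above (plain dot-product attention over word embeddings; the proposed multi-slot convolutional architecture is not reproduced here):

```python
# Attention of a query (the emotion word embedding) over the words of a text:
# softmax over dot-product scores, then a weighted sum maps the text to one
# vector. Random embeddings, for illustration only.
import numpy as np

rng = np.random.default_rng(1)
d, n = 16, 10
text = rng.normal(size=(n, d))    # word embeddings of the story
query = rng.normal(size=(d,))     # embedding of the emotion word, e.g. "sad"

scores = text @ query
alpha = np.exp(scores - scores.max())
alpha /= alpha.sum()              # importance weight of each word

context = alpha @ text            # low-dimensional representation of the text
print(alpha.round(3), context.shape)
```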
The rest of the paper is organized as follows. Section SECREF2 gives a review of related works on emotion analysis. Section SECREF3 presents our proposed deep memory network based model for emotion cause extraction. Section SECREF4 discusses evaluation results. Finally, Section SECREF5 concludes the work and outlines the future directions.
In this paper, we propose a new deep memory network architecture to model the context of each word simultaneously by multiple memory slots which capture sequential information using convolutional operations (Kim, 2014), and achieves the state-of-the-art performance compared to existing methods which use manual rules, common sense knowledge bases or other machine learning models
|
How does the new deep memory network architecture model the context of each word simultaneously?
|
The authors model the context of each word simultaneously by multiple memory slots.
|
1709.10445
| true
| null |
We believe that our model can help expand our understanding of word embeddings, and also help reevaluate the value of etymology in data mining and machine learning. We are excited to see etymological graphs used in other ways to extract knowledge. We also are especially interested in seeing this model applied to different languages.
We also are especially interested in seeing this model applied to different languages.
|
Have the authors tried this approach on other languages?
|
No.
|
null | false
| null |
What is an amendment?
|
a change or addition to the Constitution
|
|
null | false
| 155
|
Please refer to Table TABREF7 for a summary of the datasets used. We performed experiments using large, diverse, manually annotated, and publicly available datasets for cyberbullying detection in social media. We cover three different types of social networks: teen oriented Q&A forum (Formspring), large microblogging platform (Twitter), and collaborative knowledge repository (Wikipedia talk pages). Each dataset addresses a different topic of cyberbullying. The Twitter dataset contains examples of racism and sexism. The Wikipedia dataset contains examples of personal attacks. The Formspring dataset, however, is not specifically about any single topic. All three datasets have the problem of class imbalance, where posts labeled as cyberbullying are in the minority compared to neutral posts. Variation in the number of posts across datasets also affects vocabulary size, i.e., the number of distinct words encountered in the dataset. We measure the size of a post in terms of the number of words in the post. For each dataset, there are only a few posts with a large size. We truncate such large posts to the size of the post ranked at the 95th percentile in that dataset. For example, in the Wikipedia dataset, the largest post has 2846 words, but the size of the post ranked at the 95th percentile is only 231 words. Any post longer than 231 words in the Wikipedia dataset is truncated to its first 231 words. This truncation affects only a small minority of posts in each dataset, but is required for efficiently training the various models in our experiments. Details of each dataset are as follows.
Formspring BIBREF2 : It was a question and answer based website where users could openly invite others to ask and answer questions. The dataset includes 12K annotated question and answer pairs. Each post is manually labeled by three workers. Among these pairs, 825 were labeled as containing cyberbullying content by at least two Amazon Mechanical Turk workers.
Twitter BIBREF3 : This dataset includes 16K annotated tweets. The authors bootstrapped the corpus collection, by performing an initial manual search of common slurs and terms used pertaining to religious, sexual, gender, and ethnic minorities. Of the 16K tweets, 3117 are labeled as sexist, 1937 as racist, and the remaining are marked as neither sexist nor racist.
Wikipedia BIBREF4 : For each page in Wikipedia, a corresponding talk page maintains the history of discussion among users who participated in its editing. This dataset includes over 100k labeled discussion comments from English Wikipedia's talk pages. Each comment was labeled by 10 annotators via Crowdflower on whether it contains a personal attack. There are in total 13,590 comments labeled as personal attacks.
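The 95th-percentile truncation rule described above is easy to reproduce; a small sketch with made-up posts:

```python
# Truncate posts longer than the 95th-percentile post length, as described above.
import numpy as np

posts = [p.split() for p in [
    "short post",
    "a slightly longer post with more words",
    "an unusually long post " + "word " * 300,
]]
lengths = np.array([len(p) for p in posts])
cutoff = int(np.percentile(lengths, 95))

truncated = [p[:cutoff] for p in posts]
print(cutoff, [len(p) for p in truncated])
```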
We performed experiments using large, diverse, manually annotated, and publicly available datasets for cyberbullying detection in social media. We cover three different types of social networks: teen oriented Q&A forum (Formspring), large microblogging platform (Twitter), and collaborative knowledge repository (Wikipedia talk pages).
|
What methods does the author use to detect cyberbullying in social media?
|
They performed experiments using large, diverse, manually annotated, and publicly available datasets for cyberbullying detection in social media. They cover three different types of social networks: teen oriented Q&A forum (Formspring), large microblogging platform (Twitter), and collaborative knowledge repository (Wikipedia talk pages).
|
null | false
| null |
Aer Lingus is the national airline of which country?
|
Republic of Ireland or Eire
|
|
null | false
| null |
Why don't women have beards?
|
Facial hair was typical in our ancestors, but now it's absent in most women. The difference seems to come down to evolution. Humans have become less hairy over the millions of years in which we developed from apes. Over time, men came to prefer women with little or no facial hair. Those women passed this "hairless" trait on to their daughters, and those girls would have a head start in the ancient dating game.
|
|
null | false
| null |
Give me a list of the most popular vehicle brands in the United States.
|
- Toyota
- BMW
- Lexus
- Porsche
- Ford
- Dodge
|
|
null | false
| 172
|
Legal documents are a rather heterogeneous class, which also manifests in their linguistic properties, including the use of named entities and references. Their type and frequency vary significantly, depending on the text type. Texts belonging to a specific text type that are to be selected for inclusion in a corpus must contain enough different named entities and references, and they need to be freely available. When comparing legal documents such as laws, court decisions or administrative regulations, decisions are the best option. In laws and administrative regulations, the frequencies of persons, locations and organizations are not high enough for NER experiments. Court decisions, on the other hand, include persons, locations, organizations, and references to laws, other decisions and regulations.
Court decisions from 2017 and 2018 were selected for the dataset, published online by the Federal Ministry of Justice and Consumer Protection. The documents originate from seven federal courts: Federal Labour Court (BAG), Federal Fiscal Court (BFH), Federal Court of Justice (BGH), Federal Patent Court (BPatG), Federal Social Court (BSG), Federal Constitutional Court (BVerfG) and Federal Administrative Court (BVerwG).
From the table of contents, 107 documents from each court were selected (see Table ). The data was collected from the XML documents, i.e., it was extracted from the XML elements Mitwirkung, Titelzeile, Leitsatz, Tenor, Tatbestand, Entscheidungsgründe, Gründen, abweichende Meinung, and sonstiger Titel. The metadata at the beginning of the documents (name of court, date of decision, file number, European Case Law Identifier, document type, laws) and the metadata belonging to previous legal proceedings were deleted. Paragraph numbers were removed. The extracted data was split into sentences, tokenised using SoMaJo BIBREF16 and manually annotated in WebAnno BIBREF17.
The annotated documents are available in CoNLL-2002 format. The information originally represented by and through the XML markup was lost in the conversion process. We decided to use CoNLL-2002 because our primary focus was on the NER task and experiments. CoNLL is one of the best-practice formats for NER datasets. All relevant tools support CoNLL, including WebAnno for manual annotation. Nevertheless, it is, of course, possible to re-insert the annotated information back into the XML documents.
We decided to use CoNLL-2002 because our primary focus was on the NER task and experiments.
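For reference, the CoNLL-2002 format stores one token per line together with its tag, with a blank line between sentences. The fragment below is invented for illustration and uses generic NER labels rather than the dataset's own legal-entity tag set:

```
Das                   O
Bundesarbeitsgericht  B-ORG
in                    O
Erfurt                B-LOC
wies                  O
die                   O
Klage                 O
ab                    O
.                     O
```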
|
Why did the authors decide to use CoNLL-2002?
|
Because their primary focus was on the NER task and experiments.
|
null | false
| null |
Write a rant about people that pour milk before cereal.
|
People that pour milk before cereal??? People that pour MILK before CEREAL???
Are you out of your mind?? When you pour milk before cereal you not only do a disservice to yourself but you are fighting tooth and nail to collectively drag the human race ten steps back to the stone age. You are morally bankrupt and your soul is in ANGUISH.
You absolute mongrels. You blithering neanderthals. The blatant disregard for human life is simply sickening. People that pour milk before cereal keep the company of villains and thieves. They are naught but charlatans and grifters, scum of the earth that would sell their own mothers up the river for a simple bowl of Wheaties (Wheaties?? Really?? Grow up.).
When the good lord comes back to take us to the great Frosted Flakes bowl in the sky, you will not be at his side. You will wander this scorched earth in an eternity of agony and suffering for your hubris. You will beg, BEG I say for a drop of the sweet milk you so callously poured out and at that time I will look down at you from above with nothing but contempt.
|
|
null | false
| null |
What do you wish you had spent more time doing when you were younger?
|
As I reflect on my life, perhaps the most important area I wish I had spent more time on is investing in my health and wellness. I wish I spent more time exercising, eating well-balanced meals, and spending time with family. All of these things are core parts of the experience of being a human and would ensure future years in life pass with grace and ease.
|
|
null | false
| null |
The film covers elements of coming of age and psychological drama, dealing with themes of bullying, disability, forgiveness, mental health, suicide, and friendship of opposite sexes. It follows the story of a former bully turned social outcast, who decides to reconnect and befriend the deaf girl he had bullied years prior.
|
What is the movie A Silent Voice about?
|
A Silent Voice is a Japanese animated film about a boy who used to bully a deaf girl, but was then shunned himself for bullying her. Years later, they reconnect and he tries to atone for his past. The film touches upon many charged topics especially prevalent in Japanese high schools from bullying to disability to suicide and mental health.
|
null | false
| null |
Which is a bird or fish: Yellowedge grouper, Yellow-browed warbler
|
Yellowedge grouper is a fish, Yellow-browed warbler is a bird
|
|
null | false
| 260
|
Affect is a term that subsumes emotion and longer-term constructs such as mood and personality, and refers to the experience of feeling or emotion BIBREF0 . Picard BIBREF1 provides a detailed discussion of the importance of affect analysis in human communication and interaction. Within this context the analysis of human affect from text is an important topic in natural language understanding, examples of which include sentiment analysis from Twitter BIBREF2 , affect analysis from poetry BIBREF3 and studies of correlation between function words and social/psychological processes BIBREF4 . People exchange verbal messages which not only contain syntactic information, but also information conveying their mental and emotional states. Examples include the use of emotionally colored words (such as furious and joy) and swear words. The automated processing of affect in human verbal communication is of great importance to understanding spoken language systems, particularly for emerging applications such as dialogue systems and conversational agents.
Statistical language modeling is an integral component of speech recognition systems, with other applications such as machine translation and information retrieval. There has been a resurgence of research effort in recurrent neural networks for language modeling BIBREF5 , which have yielded performances far superior to baseline language models based on n-gram approaches. However, there has not been much effort in building neural language models of text that leverage affective information. Current literature on deep learning for language understanding focuses mainly on representations based on word semantics BIBREF6 , encoder-decoder models for sentence representations BIBREF7 , language modeling integrated with symbolic knowledge BIBREF8 and neural caption generation BIBREF9 , but to the best of our knowledge there has been no work on augmenting neural language modeling with affective information, or on data-driven approaches to generate emotional text.
Motivated by these advances in neural language modeling and affective analysis of text, in this paper we propose a model for representation and generation of emotional text, which we call the Affect-LM. Our model is trained on conversational speech corpora, common in language modeling for speech recognition applications BIBREF10 . Figure 1 provides an overview of our Affect-LM and its ability to generate emotionally colored conversational text in a number of affect categories with varying affect strengths. While these parameters can be manually tuned to generate conversational text, the affect category can also be automatically inferred from preceding context words. Specifically for model training, the affect category is derived from features generated using keyword spotting from a dictionary of emotional words, such as the LIWC (Linguistic Inquiry and Word Count) tool BIBREF11 . Our primary research questions in this paper are:
Q1: Can Affect-LM be used to generate affective sentences for a target emotion with varying degrees of affect strength through a customizable model parameter?
Q2: Are these generated sentences rated as emotionally expressive as well as grammatically correct in an extensive crowd-sourced perception experiment?
Q3: Does the automatic inference of affect category from the context words improve language modeling performance of the proposed Affect-LM over the baseline as measured by perplexity?
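A minimal sketch of the keyword-spotting step for inferring an affect category, with a tiny invented dictionary standing in for LIWC:

```python
# Infer a coarse affect category from context words by dictionary lookup.
# The word lists are invented stand-ins for LIWC categories.
AFFECT_LEXICON = {
    "sad": {"sad", "cry", "lonely", "grief"},
    "angry": {"furious", "hate", "rage", "annoyed"},
    "happy": {"joy", "glad", "wonderful", "smile"},
}

def affect_category(context_words):
    counts = {cat: sum(w in words for w in context_words)
              for cat, words in AFFECT_LEXICON.items()}
    best = max(counts, key=counts.get)
    return best if counts[best] > 0 else "neutral"

print(affect_category("i was so furious after that call".split()))
```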
The remainder of this paper is organized as follows. In Section "Related Work" , we discuss prior work in the fields of neural language modeling, and generation of affective conversational text. In Section "LSTM Language Model" we describe the baseline LSTM model and our proposed Affect-LM model. Section "Experimental Setup" details the experimental setup, and in Section "Results" , we discuss results for customizable emotional text generation, perception studies for each affect category, and perplexity improvements over the baseline model before concluding the paper in Section "Conclusions and Future Work" .
Motivated by these advances in neural language modeling and affective analysis of text, in this paper we propose a model for representation and generation of emotional text, which we call the Affect-LM.
|
What is the Affect-LM in this paper?
|
It is a model for representation and generation of emotional text.
|
null | false
| null |
Salmon are typically anadromous: they hatch in the gravel beds of shallow fresh water streams, migrate to the ocean as adults and live like sea fish, then return to fresh water to reproduce. However, populations of several species are restricted to fresh water throughout their lives. Folklore has it that the fish return to the exact spot where they hatched to spawn, and tracking studies have shown this to be mostly true. A portion of a returning salmon run may stray and spawn in different freshwater systems; the percent of straying depends on the species of salmon. Homing behavior has been shown to depend on olfactory memory.
|
Given this paragraph about salmon, give me an overview of their lifecycle.
|
Salmon hatch in fresh water streams, migrate to the ocean as adults, then return to fresh water to reproduce.
|
null | false
| null |
What are some of the most memorable sports moments since the era of live television began?
|
Live television has brought thrilling moments that are remembered by many millions of sports fans, and nearly every major sport has produced one of these live TV moments. The last pitch of Don Larsen's perfect game in the 1956 World Series was one of the earliest memorable baseball moments on live TV, and in pro football "The Catch" by 49er Dwight Clark in the 1981 NFC championship game was watched by millions. Michael Jordan's game-winning shot in game 6 of the 1998 NBA finals and the Jack Nicklaus putt on the 17th hole of the 1986 Masters are also remembered by many basketball and golf fans. One of the most memorable sports moments may be the final seconds of the 1980 US Olympic hockey team's stunning upset of the Soviet Union, with the great call "Do you believe in miracles?" from broadcaster Al Michaels.
|
|
null | false
| null |
Name some of the best tourist spots in the USA.
|
Some of the best tourist spots in the USA include Yellowstone National Park, the Statue of Liberty, Grand Canyon National Park and Times Square.
|
|
null | false
| null |
Why is it important to stay positive?
|
Staying positive keeps negative thoughts at bay and will always keep you on track to accomplish your tasks.
|
|
null | false
| null |
Why are cats better than dogs?
|
Cats are much better pets than dogs. Cats are softer and fluffier and prefer the company of their owner. Dogs just enjoy the company of any human. Cats also are much lower maintenance and don't require multiple walks per day or that you pick up their poop while it's warm. Cats clean up their own waste. Cats are the best pets.
|
|
null | false
| null |
Summer is traditionally associated with hot or warm weather. In Mediterranean climates, it is also associated with dry weather, while in other places (particularly in Eastern Asia because of the monsoon) it is associated with rainy weather. The wet season is the main period of vegetation growth within the savanna climate regime. Where the wet season is associated with a seasonal shift in the prevailing winds, it is known as a monsoon.
In the northern Atlantic Ocean, a distinct tropical cyclone season occurs from 1 June to 30 November. The statistical peak of the Atlantic hurricane season is 10 September. The Northeast Pacific Ocean has a broader period of activity, but in a similar timeframe to the Atlantic. The Northwest Pacific sees tropical cyclones year-round, with a minimum in February and March and a peak in early September. In the North Indian basin, storms are most common from April to December, with peaks in May and November. In the Southern Hemisphere, the tropical cyclone season runs from the start of November until the end of April with peaks in mid-February to early March.
Thunderstorm season in the United States and Canada runs in the spring through summer but sometimes can run as late as October or even November in the fall. These storms can produce hail, strong winds and tornadoes, usually during the afternoon and evening.
|
Which months of the year does the Northwest Pacific see tropical cyclones?
|
The Northwest Pacific sees tropical cyclones year-round, with a minimum in February and March and a peak in early September.
|
null | false
| null |
The French Revolution (French: Révolution française [ʁevɔlysjɔ̃ fʁɑ̃sɛːz]) was a period of radical political and societal change in France that began with the Estates General of 1789 and ended with the formation of the French Consulate in November 1799. Many of its ideas are considered fundamental principles of liberal democracy, while the values and institutions it created remain central to French political discourse.
Its causes are generally agreed to be a combination of social, political and economic factors, which the Ancien Régime proved unable to manage. In May 1789, widespread social distress led to the convocation of the Estates General, which was converted into a National Assembly in June. Continuing unrest culminated in the Storming of the Bastille on 14 July, which led to a series of radical measures by the Assembly, including the abolition of feudalism, the imposition of state control over the Catholic Church in France, and extension of the right to vote.
|
Given this paragraph on the French Revolution, what were the main causes?
|
The causes were multiple, including social, political, and economic factors.
|
null | false
| null |
Give me a bulleted list of the 7 most recent UPenn Presidents
|
* M. Elizabeth Magill (2022 - present)
* Wendell Pritchett (2022 - 2022) (interim)
* Amy Gutmann (2004 - 2022)
* Judith Rodin (1994 - 2004)
* Claire Muriel Mintzer Fagin (1993 - 1994) (interim)
* Francis Sheldon Hackney (1981 - 1993)
* Martin Meyerson (1970 - 1981)
|
|
null | false
| null |
Leslie Hubert Holden, MC, AFC (6 March 1895 – 18 September 1932) was an Australian fighter ace of World War I and later a commercial aviator. A South Australian, he joined the Light Horse in May 1915, serving in Egypt and France. In December 1916, he volunteered for the Australian Flying Corps and qualified as a pilot. As a member of No. 2 Squadron on the Western Front, he gained the sobriquets "Lucky Les" and "the homing pigeon" after a series of incidents that saw him limping back to base in bullet-riddled aircraft. He was awarded the Military Cross, and went on to achieve five aerial victories flying Airco DH.5s and Royal Aircraft Factory S.E.5s.
|
Who is Leslie Hubert Holden?
|
Leslie Hubert Holden was an Australian fighter ace who served in Egypt and France during World War I.
|
1909.11833
| false
| null |
To solve this problem, we need a state tracking model independent of dialogue slots. In other words, the network should depend on the semantic similarity between slots and utterances instead of slot-specific modules. To this end, we propose the Slot-Independent Model (SIM). Our model complexity does not increase when the number of slots in dialogue tasks goes up. Thus, SIM has many fewer parameters than existing dialogue state tracking models. To compensate for the exclusion of slot-specific parameters, we incorporate better feature representation of user utterance and dialogue states using syntactic information and convolutional neural networks (CNN). The refined representation, in addition to cross and self-attention mechanisms, makes our model achieve even better performance than slot-specific models. For instance, on the Wizard-of-Oz (WOZ) 2.0 dataset BIBREF8, the SIM model obtains a joint-accuracy score of 89.5%, 1.4% higher than the previously best model GLAD, with only 22% of the number of parameters. On the DSTC2 dataset, SIM achieves comparable performance with previous best models with only 19% of the model size.
Thus, SIM has many fewer parameters than existing dialogue state tracking models. To compensate for the exclusion of slot-specific parameters, we incorporate better feature representation of user utterance and dialogue states using syntactic information and convolutional neural networks (CNN).
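As a toy illustration of the slot-independence idea (one shared similarity function between slot names and the utterance, instead of per-slot modules), here is a sketch with invented embeddings; it is not the SIM architecture:

```python
# Score slots against an utterance via cosine similarity of mean embeddings,
# using a single shared representation rather than slot-specific parameters.
import numpy as np

rng = np.random.default_rng(2)
vocab = {w: rng.normal(size=32) for w in
         "book a cheap restaurant in the north price range area food".split()}

def embed(tokens):
    return np.mean([vocab[t] for t in tokens if t in vocab], axis=0)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

utterance = "book a cheap restaurant in the north".split()
for slot in (["price", "range"], ["area"], ["food"]):
    print(slot, round(cosine(embed(utterance), embed(slot)), 3))
```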
|
How do they prevent the model complexity increasing with the increased number of slots?
|
They exclude slot-specific parameters and incorporate better feature representation of user utterance and dialogue states using syntactic information and convolutional neural networks (CNN).
|
null | false
| null |
Which festival takes place next to Walibi Holland every year in August?
|
The Lowlands festival (officially A Campingflight to Lowlands Paradise) takes place next to Walibi Holland every year in August.
|
|
null | false
| null |
High Flying Bird is a 2019 American sports drama film directed by Steven Soderbergh, from a story suggested by André Holland, with the screenplay by Tarell Alvin McCraney. The film stars Holland, Zazie Beetz, Melvin Gregg, Sonja Sohn, Zachary Quinto, Glenn Fleshler, Jeryl Prescott, Justin Hurtt-Dunkley, Caleb McLaughlin, Bobbi Bordley, Kyle MacLachlan and Bill Duke, with additional appearances of basketball players Reggie Jackson, Karl-Anthony Towns and Donovan Mitchell. The film follows a sports agent who must pull off a plan in 72 hours, pitching a controversial opportunity to his client, a rookie basketball player during the company’s lockout. The film was shot entirely on the iPhone 8, becoming Soderbergh's second film to be shot on an iPhone, after Unsane.
|
From the following passage, extract the device used to shoot this film.
|
High Flying Bird is a 2019 American sports drama film that was shot on the iPhone 8.
|
null | false
| null |
Give me a list of the best adventure books
|
Here's a list of popular adventure books:
- Into the Wild, Jon Krakauer
- Into Thin Air, Jon Krakauer
- Arctic Dreams, Barry Lopez
- Horizon, Barry Lopez
- On Trails, Robert Moor
- Moby Dick, Herman Melville
- Treasure Island, Robert Louis Stevenson
- White Fang, Jack London
- Life of Pi, Yann Martel
- The Hobbit, J.R.R. Tolkien
- Jurassic Park, Michael Crichton
|
|
1904.09545
| false
| null |
Space requirements might still be considerable (comparable to those used by n-gram language models), and similar tricks can be used to reduce memory usage BIBREF27 . The above pseudocode is agnostic with respect to the choice of fragmentation and environment functions; task-specific choices are described in more detail for each experiment below.
Discussion
The above pseudocode is agnostic with respect to the choice of fragmentation and environment functions; task-specific choices are described in more detail for each experiment below.
Discussion
|
Which languages do they test on?
|
Answer with content missing: (Applications section) We use Wikipedia articles in five languages (Kinyarwanda, Lao, Pashto, Tok Pisin, and a subset of English) as well as the Na dataset of Adams et al. (2017).
Select: Kinyarwanda, Lao, Pashto, Tok Pisin, and a subset of English
|
null | false
| null |
What is the best tea near Bangalore?
|
Tea from Munnar and Coonoor is considered among the best produced near Bangalore.
|
|
null | false
| null |
The founder of the Gupta dynasty was
|
The founder of the Gupta dynasty was Sri Gupta; Chandragupta Maurya, by contrast, founded the earlier Maurya Empire.
|
|
1910.03891
| false
| null |
In this study, we evaluate our model on three real KGs, including two typical large-scale knowledge graphs, Freebase BIBREF0 and DBpedia BIBREF1, and a self-constructed game knowledge graph. First, we adapt a dataset extracted from Freebase, i.e., FB24K, which was used by BIBREF26. Then, we collect extra entities and relations from DBpedia that have at least 100 mentions BIBREF7 and that can be linked to the entities in FB24K by the sameAs triples. Finally, we build a dataset named DBP24K. In addition, we build a game dataset from our game knowledge graph, named Game30K. The statistics of the datasets are listed in Table TABREF24.
In this study, we evaluate our model on three real KGs, including two typical large-scale knowledge graphs, Freebase BIBREF0 and DBpedia BIBREF1, and a self-constructed game knowledge graph. First, we adapt a dataset extracted from Freebase, i.e., FB24K, which was used by BIBREF26. Then, we collect extra entities and relations from DBpedia that have at least 100 mentions BIBREF7 and that can be linked to the entities in FB24K by the sameAs triples. Finally, we build a dataset named DBP24K. In addition, we build a game dataset from our game knowledge graph, named Game30K. The statistics of the datasets are listed in Table TABREF24.
|
What three datasets are used to measure performance?
|
The answers are shown as follows:
* Freebase BIBREF0, DBpedia BIBREF1, and a self-constructed game knowledge graph
|
null | false
| null |
How do you do a shuffle ball change tap move?
|
First, shuffle with your right foot, then step back onto the ball of your right foot. Next, stomp with your whole left foot. Repeat.
|
|
null | false
| null |
Why did Naatu Naatu get the Oscar for best original song?
|
Naatu Naatu became a global sensation and quickly went viral on social media after the film's release last year. The song's quick tempo and synchronized choreography were an instant hit with audiences. The song was composed by M. M. Keeravani with lyrics written by Chandrabose. Naatu Naatu already made history once in January when it won the Golden Globe for best original song, defeating contenders like Rihanna, Taylor Swift and Lady Gaga. The same month, the song also won the Critics Choice award for best song. It did not get the award just for the music; the song captures the essence of the full movie and has other elements such as nice dance, a nice setting, a nice location and nice production.
|
|
null | false
| null |
The Society of Jesus, commonly known as the Jesuits, is a religious order of clerics regular of pontifical right for men in the Catholic Church headquartered in Rome. It was founded in 1540 by Ignatius of Loyola and six companions, with the approval of Pope Paul III. The society is engaged in evangelization and apostolic ministry in 112 nations. Jesuits work in education, research, and cultural pursuits. Jesuits also conduct retreats, minister in hospitals and parishes, sponsor direct social and humanitarian ministries, and promote ecumenical dialogue.
|
Who founded the Society of Jesus, and what are they more commonly known as?
|
The Society of Jesus are more commonly known as the Jesuits and were founded in 1540 by Ignatius of Loyola and six of his companions. They are known for their work in the field of education.
|
1804.03396
| false
| null |
The inputs of our model are the words in the input text $x[1], ... , x[n]$ and the query $q[1], ... , q[m]$ . We concatenate pre-trained word embeddings from GloVe BIBREF40 and character embeddings trained by CharCNN BIBREF41 to represent the input words. The $2d$ -dimension embedding vectors of the input text $x_1, ... , x_n$ and the query $q_1, ... , q_m$ are then fed into a Highway Layer BIBREF42 to improve the capability of the word embeddings and character embeddings as
$$\begin{split} g_t &= {\rm sigmoid}(W_gx_t+b_g) \\ s_t &= {\rm relu } (W_xx_t+b_x) \\ u_t &= g_t \odot s_t + (1 - g_t) \odot x_t~. \end{split}$$ (Eq. 18)
Here $W_g, W_x \in \mathbb {R}^{d \times 2d}$ and $b_g, b_x \in \mathbb {R}^d$ are trainable weights, $u_t$ is a $d$ -dimension vector. The function relu is the rectified linear units BIBREF43 and $\odot $ is element-wise multiply over two vectors. The same Highway Layer is applied to $q_t$ and produces $v_t$ .
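A minimal sketch of the highway layer in Eq. 18 (a PyTorch-style illustration, not the authors' implementation; it assumes equal input and output dimensions so that the residual term $(1 - g_t) \odot x_t$ is well-defined):

```python
import torch
import torch.nn as nn

class Highway(nn.Module):
    """Highway layer following Eq. 18; assumes matching input/output
    dimensions so the (1 - g) * x residual term is well-defined."""
    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Linear(dim, dim)       # W_g, b_g
        self.transform = nn.Linear(dim, dim)  # W_x, b_x

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        g = torch.sigmoid(self.gate(x))    # g_t
        s = torch.relu(self.transform(x))  # s_t
        return g * s + (1 - g) * x         # u_t
```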
Next, $u_t$ and $v_t$ are fed into a Bi-Directional Long Short-Term Memory Network (BiLSTM) BIBREF44 respectively in order to model the temporal interactions between sequence words:
Here we obtain $\mathbf {U} = [u_1^{^{\prime }}, ... , u_n^{^{\prime }}] \in \mathbb {R}^{2d \times n}$ and $\mathbf {V} = [v_1^{^{\prime }}, ... , v_m^{^{\prime }}] \in \mathbb {R}^{2d \times m}$ . Then we feed $\mathbf {U}$ and $\mathbf {V}$ into the attention flow layer BIBREF27 to model the interactions between the input text and query. We obtain the $8d$ -dimension query-aware context embedding vectors $h_1, ... , h_n$ as the result.
After modeling interactions between the input text and queries, we need to enhance the interactions within the input text words themselves especially for the longer text in IE settings. Therefore, we introduce Self-Matching Layer BIBREF29 in our model as
$$\begin{split} o_t &= {\rm BiLSTM}(o_{t-1}, [h_t, c_t]) \\ s_j^t &= w^T {\rm tanh}(W_hh_j+\tilde{W_h}h_t)\\ \alpha _i^t &= {\rm exp}(s_i^t)/\Sigma _{j=1}^n{\rm exp}(s_j^t)\\ c_t &= \Sigma _{i=1}^n\alpha _i^th_i ~. \end{split}$$ (Eq. 20)
Here $W_h, \tilde{W_h} \in \mathbb {R}^{d \times 8d}$ and $w \in \mathbb {R}^d$ are trainable weights, $[h, c]$ is vector concatenation across row. Besides, $\alpha _i^t$ is the attention weight from the $t^{th}$ word to the $i^{th}$ word and $c_t$ is the enhanced contextual embeddings over the $t^{th}$ word in the input text. We obtain the $2d$ -dimension query-aware and self-enhanced embeddings of input text after this step. Finally we feed the embeddings $\mathbf {O} = [o_1, ... , o_n]$ into a Pointer Network BIBREF39 to decode the answer sequence as
$$\begin{split} p_t &= {\rm LSTM}(p_{t-1}, c_t) \\ s_j^t &= w^T {\rm tanh}(W_oo_j+W_pp_{t-1})\\ \beta _i^t &= {\rm exp}(s_i^t)/\Sigma _{j=1}^n{\rm exp}(s_j^t)\\ c_t &= \Sigma _{i=1}^n\beta _i^to_i~. \end{split}$$ (Eq. 21)
Here $\beta _{n+1}^t$ denotes the probability of generating the “ ${\rm eos}$ ” symbol since the decoder also needs to determine when to stop. Therefore, the probability of generating the answer sequence $\textbf {a}$ is as follows
$${\rm P}(\textbf {a}|\mathbf {O}) = \prod _t {\rm P}(a^t | a^1, ... , a^{t-1}, \mathbf {O})~.$$ (Eq. 23)
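As a minimal PyTorch-style sketch of the additive attention used in Eqs. 20 and 21 (the BiLSTM and decoder wrappers are omitted; tensor shapes and names are illustrative assumptions, not the authors' implementation):

```python
import torch
import torch.nn as nn

class SelfMatchAttention(nn.Module):
    """Additive self-attention from Eq. 20: position t attends over all
    positions j of the same sequence to build a context vector c_t."""
    def __init__(self, hidden_dim: int, attn_dim: int):
        super().__init__()
        self.W_h = nn.Linear(hidden_dim, attn_dim, bias=False)
        self.W_h_tilde = nn.Linear(hidden_dim, attn_dim, bias=False)
        self.w = nn.Linear(attn_dim, 1, bias=False)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (n, hidden_dim); scores[t, j] = w^T tanh(W_h h_j + W~_h h_t)
        scores = self.w(torch.tanh(self.W_h(h)[None, :, :]
                                   + self.W_h_tilde(h)[:, None, :])).squeeze(-1)
        alpha = torch.softmax(scores, dim=-1)  # alpha_i^t over positions i
        return alpha @ h                       # c_t = sum_i alpha_i^t h_i
```

The pointer-network decoder in Eq. 21 applies the same additive scoring, attending over $\mathbf {O}$ with the decoder state in place of $h_t$.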
|
What QA models were used?
|
A pointer network decodes the answer from a bidirectional LSTM with attention flow layer and self-matching layer, whose inputs come from word and character embeddings of the query and input text fed through a highway layer.
|
null | false
| null |
What species of fish are found in the Pamlico Sound?
|
The Pamlico Sound has three primary species of fish: Red Drum, Southern Flounder and Spotted Seatrout. Depending on the time of year, different species will be more prevalent. Visit the North Carolina Wildlife Resource Commission website to learn more.
|
|
1904.02357
| false
| null |
gordon2009sayanything use an information retrieval based system to write by alternating turns between a human and their system. clark2018mil use a similar turn-taking approach to interactivity, but employ a neural model for generation and allow the user to edit the generated sentence before accepting it. They find that users prefer a full-sentence collaborative setup (vs. shorter fragments) but are mixed with regard to the system-driven approach to interaction. roemmele2017eval experiment with a user-driven setup, where the machine doesn't generate until the user requests it to, and then the user can edit or delete at will. They leverage user-acceptance or rejection of suggestions as a tool for understanding the characteristics of a helpful generation. All of these systems involve the user in the story-writing process, but lack user involvement in the story-planning process, and so they lean on the user's ability to knit a coherent overall story together out of locally related sentences. They also do not allow a user to control the novelty or “unexpectedness” of the generations, which clark2018mil find to be a weakness. Nor do they enable iteration; a user cannot revise earlier sentences and have the system update later generations. We develop a system that allows a user to interact in all of these ways that were limitations in previous systems; it enables involvement in planning, editing, iterative revising, and control of novelty. We conduct experiments to understand which types of interaction are most effective for improving stories and for making users satisfied and engaged. We have two main interfaces that enable human interaction with the computer. There is cross-model interaction, where the machine does all the composition work, and displays three different versions of a story written by three distinct models for a human to compare. The user guides generation by providing a topic for story-writing and by tweaking decoding parameters to control novelty, or diversity. The second interface is intra-model interaction, where a human can select the model to interact with (potentially after having chosen it via cross-model), and can collaborate at all stages to jointly create better stories. The full range of interactions available to a user is: select a model, provide a topic, change diversity of content, collaborate on the planning for the story, and collaborate on the story sentences. It is entirely user-driven, as the users control how much is their own work and how much is the machine's at every stage. It supports revision; a user can modify an earlier part of a written story or of the story plan at any point, and observe how this affects later generations.
We have two main interfaces that enable human interaction with the computer. There is cross-model interaction, where the machine does all the composition work, and displays three different versions of a story written by three distinct models for a human to compare. The user guides generation by providing a topic for story-writing and by tweaking decoding parameters to control novelty, or diversity. The second interface is intra-model interaction, where a human can select the model to interact with (potentially after having chosen it via cross-model), and can collaborate at all stages to jointly create better stories.
|
How is human interaction consumed by the model?
|
The answers are shown as follows:
* displays three different versions of a story written by three distinct models for a human to compare
* human can select the model to interact with (potentially after having chosen it via cross-model), and can collaborate at all stages
|
null | false
| null |
Is Apple better than Samsung?
|
Each brand has its own pros and cons. Whether someone likes a specific brand is a matter of preference. You should decide on your own what is or isn't important to you in a brand and pick accordingly.
|
|
1910.10869
| false
| null |
The ICSI Meeting Corpus BIBREF11 is a collection of meeting recordings that has been thoroughly annotated, including annotations for involvement hot spots BIBREF12, linguistic utterance units, and word time boundaries based on forced alignment. The dataset is comprised of 75 meetings and about 70 hours of real-time audio duration, with 6 speakers per meeting on average. Most of the participants are well-acquainted and friendly with each other. Hot spots were originally annotated with 8 levels and degrees, ranging from `not hot' to `luke warm' to `hot +'. Every utterance was labeled with one of these discrete labels by a single annotator. Heightened involvement is rare, being marked on only 1% of utterances.
|
How big is ICSI meeting corpus?
|
The answers are shown as follows:
* 75 meetings and about 70 hours of real-time audio duration
|
null | false
| null |
Classify each of the following as "Motorized" or "Not Motorized": Bicycle, Skateboard, Car, Lawn Mower, Tractor, Golf Cart
|
Bicycle: Not Motorized
Skateboard: Not Motorized
Car: Motorized
Lawn Mower: Motorized
Tractor: Motorized
Golf Cart: Motorized
|
|
null | false
| 377
|
In the process, we have gathered a total of 449 expert judgments (one expert failed to answer a given problem), 146 missing hypotheses and 47 explanations for negative judgments.
Despite being marked as “yes”, the problems with a reported missing hypothesis should really be classified as “no”, if one does not assume external knowledge. (See below for further discussion on the reported missing hypotheses.) Thus, crunching the numbers, we see that more than half of the responses express some doubt about entailment. Remember that all problems were marked as “yes” by the creators of the RTE3 testsuite; we find here that, on average, one expert in two is likely to cast doubt over this “yes”.
However, each problem was classified by three experts. The histogram below shows the distribution of number of experts casting doubt on entailment, over all problems.
[Histogram: distribution of the number of experts casting doubt, over all problems (0 experts: 0.27 of problems, 1: 0.17, 2: 0.35, 3: 0.21).]
Unfortunately we can only draw preliminary conclusions, due to the limited number of respondents for each problem. However, we can make the following observations:
We find this level of agreement indicative of a good level of reliability. Additionally, with three experts per problem, we are very likely to discover most missing hypotheses and incorrect entailments.
In our compilation of answers, we have marked 42 problems as a straight “No”, 64 as “Yes” with missing implicit hypotheses, and 44 as a plain “Yes”. This means that we expect, in our opinion, 28% of problems to be incorrectly labeled in RTE3 even assuming reasonable world knowledge. An additional 42% of problems require additional (yet reasonable to assume) hypotheses for entailment to hold formally, as prescribed by RTE3. This leaves only 30% of problems acceptable as such. The reason that the amount of doubt is larger than in the average numbers quoted above is that, for many problems, certain missing hypotheses and/or errors were not detected by a majority of experts, but, after careful inspection, we judge that the minority report is justified.
We have additionally tagged each missing hypothesis according to the following classification:
|
How much data do the authors gather?
|
They gathered a total of 449 expert judgments (one expert failed to answer a given problem), 146 missing hypotheses and 47 explanations for negative judgments.
|
null | false
| null |
How does the Harvard housing system work?
|
First-Year students live in one of the seventeen dormitories located in and around Harvard Yard. Rooms are assigned based on a brief survey sent to admitted students. The survey asks questions like “How many roommates do you want?”, “How comfortable are you with noise?”, and “What type of music do you listen to?”. After the first year, the majority of students move into one of the twelve residential Houses. Houses are assigned in March of a student’s first year. Rather than randomly assigning individual students to houses and scattering close friends, students can assemble into “blocking groups” of no greater than 8 students and each group is randomly assigned to a House. If 8 friends aren’t enough, two blocking groups can link themselves in the lottery. Linked groups will not be assigned the same House but are guaranteed to be assigned the same Neighborhood (e.g. The Quad).
|
|
null | false
| null |
From 1957 through 1961, Khrushchev openly and repeatedly threatened the West with nuclear annihilation. He claimed that Soviet missile capabilities were far superior to those of the United States, capable of wiping out any American or European city. According to John Lewis Gaddis, Khrushchev rejected Stalin's "belief in the inevitability of war," however. The new leader declared his ultimate goal was "peaceful coexistence". In Khrushchev's formulation, peace would allow capitalism to collapse on its own, as well as giving the Soviets time to boost their military capabilities, which remained for decades until Gorbachev's later "new thinking" envisioning peaceful coexistence as an end in itself rather than a form of class struggle.
|
Explain the difference of position against the Western Bloc among the Soviet leaders cited in the text.
|
Stalin believed that war against the Western Bloc was inevitable.
Khrushchev instead believed the two blocs could avoid war, because war was not necessary to establish the Eastern Bloc's superiority; the Western Bloc would fail systemically.
On the other hand, Gorbachev truly believed in peaceful coexistence between the two blocs.
|
null | false
| null |
What is data engineering?
|
Data engineering deals with building systems and processes that bring data together and serve that data up to the business. Data engineers ingest data, transform data, apply data quality rules, and prepare finalized, cleaned datasets that the business can use to get insights. Data engineers also have to be strong developers when the work is done in a programming language like Python.
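As a toy illustration of that ingest-transform-validate flow (a minimal pandas sketch; the file name, columns and rules are made-up assumptions):

```python
import pandas as pd

def run_pipeline(path: str = "orders.csv") -> pd.DataFrame:
    """Toy ingest -> transform -> quality-check pipeline."""
    df = pd.read_csv(path)                               # ingest raw data
    df["order_date"] = pd.to_datetime(df["order_date"])  # fix types
    df = df.drop_duplicates(subset="order_id")           # deduplicate

    # Simple data-quality rules: no negative amounts, no missing IDs.
    assert (df["amount"] >= 0).all(), "negative amounts found"
    assert df["order_id"].notna().all(), "missing order IDs"
    return df  # cleaned dataset, ready for downstream consumers
```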
|
|
null | false
| null |
Tell a story with the following: A guy that lives in San Jose California, he is a data scientist, he is in his mid 40's and he is aspiring to build an LLM model that will change the world
|
John is a data scientist living in San Jose California. He is in his mid 40s and has always been passionate about data science. He has always dreamed of creating a model that would revolutionize the world, but he has been struggling to find the right approach.
One day, John stumbled across a book that discussed a new way of approaching data science. He read it and was amazed by the concepts presented in the book. He immediately started to apply the ideas in the book and soon realized that he had found his calling. He quickly put together an LLM model that was capable of solving some of the most difficult data science problems.
John shared his model with the world and it quickly became a success. He was able to solve some of the most complex data science problems and he was able to prove that his model was better than any other model available. John was able to change the world and he is now considered one of the most successful data scientists in the world.
|
|
null | false
| null |
Why would people not like Goose Goose Duck?
|
The core mechanic is built on the premise of acting/lying, which is not a prevalent skill in today’s population. Unlike Texas Hold’em, where a player who never lies can still play well and enjoy the game, GGD and WW require all players to have the capacity to lie. If one participant is a bad liar, the whole game is ruined. This is because, in its essence, GGD is a game where a set of players guard a secret and the rest of the players try to uncover that secret. To make this process fun and intellectually stimulating, we first need to remove the more obvious secret-revealing signals like voice tones and facial expressions. What’s worse, it only takes one player to leak the secret for everyone, which imposes an incredibly high entry barrier.
|
|
null | false
| 393
|
One of the significant challenges in contemporary information processing is the sheer volume of available data. BIBREF0 , for example, claim that the amount of digital data in the world doubles every two years. This trend underpins efforts to develop algorithms that can efficiently search for relevant information in huge datasets. One class of such algorithms, represented by, e.g., Locality Sensitive Hashing BIBREF1 , relies on hashing data into short, locality-preserving binary codes BIBREF2 . The codes can then be used to group the data into buckets, thereby enabling sublinear search for relevant information, or for fast comparison of data items. Most of the algorithms from this family are data-oblivious, i.e. can generate hashes for any type of data. Nevertheless, some methods target a specific kind of input data, like text or images.
In this work we focus on learning binary codes for text documents. An important work in this direction has been presented by BIBREF3 . Their semantic hashing leverages autoencoders with sigmoid bottleneck layer to learn binary codes from a word-count bag-of-words (BOW) representation. Salakhutdinov & Hinton report that binary codes allow for up to 20-fold improvement in document ranking speed, compared to real-valued representation of the same dimensionality. Moreover, they demonstrate that semantic hashing codes used as an initial document filter can improve precision of TF-IDF-based retrieval. Learning binary representation from BOW, however, has its disadvantages. First, word-count representation, and in turn the learned codes, are not in itself stronger than TF-IDF. Second, BOW is an inefficient representation: even for moderate-size vocabularies BOW vectors can have thousands of dimensions. Learning fully-connected autoencoders for such high-dimensional vectors is impractical. Salakhutdinov & Hinton restricted the BOW vocabulary in their experiments to 2000 most frequent words.
Binary codes have also been applied to cross-modal retrieval where text is one of the modalities. Specifically, BIBREF4 incorporated tag information that often accompany text documents, while BIBREF5 employed siamese neural networks to learn single binary representation for text and image data.
Recently several works explored simple neural models for unsupervised learning of distributed representations of words, sentences and documents. BIBREF6 proposed log-linear models that learn distributed representations of words by predicting a central word from its context (CBOW model) or by predicting context words given the central word (Skip-gram model). The CBOW model was then extended by BIBREF7 to learn distributed representations of documents. Specifically, they proposed Paragraph Vector Distributed Memory (PV-DM) model, in which the central word is predicted given the context words and the document vector. During training, PV-DM learns the word embeddings and the parameters of the softmax that models the conditional probability distribution for the central words. During inference, word embeddings and softmax weights are fixed, but the gradients are backpropagated to the inferred document vector. In addition to PV-DM, Le & Mikolov studied also a simpler model, namely Paragraph Vector Distributed Bag of Words (PV-DBOW). This model predicts words in the document given only the document vector. It therefore disregards context surrounding the predicted word and does not learn word embeddings. Le & Mikolov demonstrated that paragraph vectors outperform BOW and bag-of-bigrams in information retrieval task, while using only few hundreds of dimensions. These models are also amendable to learning and inference over large vocabularies. Original CBOW network used hierarchical softmax to model the probability distribution for the central word. One can also use noise-contrastive estimation BIBREF8 or importance sampling BIBREF9 to approximate the gradients with respect to the softmax logits.
An alternative approach to learning representation of pieces of text has been recently described by BIBREF10 . Networks proposed therein, inspired by the Skip-gram model, learn to predict surrounding sentences given the center sentence. To this end, the center sentence is encoded by an encoder network and the surrounding sentences are predicted by a decoder network conditioned on the center sentence code. Once trained, these models can encode sentences without resorting to backpropagation inference. However, they learn representations at the sentence level but not at the document level.
In this work we present Binary Paragraph Vector models, an extensions to PV-DBOW and PV-DM that learn short binary codes for text documents. One inspiration for binary paragraph vectors comes from a recent work by BIBREF11 on learning binary codes for images. Specifically, we introduce a sigmoid layer to the paragraph vector models, and train it in a way that encourages binary activations. We demonstrate that the resultant binary paragraph vectors significantly outperform semantic hashing codes. We also evaluate binary paragraph vectors in transfer learning settings, where training and inference are carried out on unrelated text corpora. Finally, we study models that simultaneously learn short binary codes for document filtering and longer, real-valued representations for ranking. While BIBREF11 employed a supervised criterion to learn image codes, binary paragraph vectors remain unsupervised models: they learn to predict words in documents.
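A minimal sketch of the idea (sizes, the thresholding rule and names are illustrative assumptions, not the authors' implementation):

```python
import torch
import torch.nn as nn

class BinaryPVDBOW(nn.Module):
    """Sketch of a Binary PV-DBOW-style model: a per-document embedding
    is squashed through a sigmoid bottleneck and used to predict the
    document's words; thresholding the activations yields the code."""
    def __init__(self, n_docs: int, vocab_size: int, code_bits: int = 32):
        super().__init__()
        self.doc_emb = nn.Embedding(n_docs, code_bits)
        self.word_logits = nn.Linear(code_bits, vocab_size)

    def forward(self, doc_ids: torch.Tensor) -> torch.Tensor:
        code = torch.sigmoid(self.doc_emb(doc_ids))  # near-binary bottleneck
        return self.word_logits(code)                # logits over words

    @torch.no_grad()
    def binary_code(self, doc_ids: torch.Tensor) -> torch.Tensor:
        return (torch.sigmoid(self.doc_emb(doc_ids)) > 0.5).int()
```

Training with a cross-entropy loss over observed words, as in PV-DBOW, keeps the model unsupervised.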
|
How to evaluate binary paragraph vectors?
|
The authors evaluate binary paragraph vectors in transfer learning settings, where training and inference are carried out on unrelated text corpora.
|
null | false
| null |
What are some ideas for fun rainy day activities to do with a 7 year old?
|
* Play board games! Kids 6+ love to play board games of all kinds. You likely have a few sitting at home - dust them off and have at it!
* Learn Chess! You may be surprised, but chess comes naturally to most kids, and many of them love the idea of horses, castles, kings and queens all battling against each other. While chess can take years to master, it is easy and fun to get started.
* Go to the library! Libraries are great places to spend a rainy day, while giving you a chance to get out of the house. Many libraries have kids areas and besides books, you may be able to borrow legos, art supplies and puzzles, among other items.
* Go outside and jump in some puddles! Put on your rain boots and embrace the rainy weather by jumping in some puddles!
* Make origami! Use some unfolded scrap paper (or origami paper if you have some), fire up some online tutorials, and make anything from ninja stars to paper airplanes and boats (bonus: take the boat outside and sail it in a puddle!)
* Go to your local rec center! If you're lucky enough to have a recreation center close by, then a rainy day is a great day to visit, to play basketball indoors, for example.
* Take a bus or drive to a museum! 7 year olds tend to be very curious and as long as you don't overdo it, a museum can be a fun place to kill an hour or two!
|
|
null | false
| null |
What is physical security?
|
Physical security describes security measures that are designed to deny unauthorized access to facilities, equipment, and resources and to protect personnel and property from damage or harm (such as espionage, theft, or terrorist attacks). Physical security involves the use of multiple layers of interdependent systems that can include CCTV surveillance, security guards, protective barriers, locks, access control, perimeter intrusion detection, deterrent systems, fire protection, and other systems designed to protect persons and property.
|
|
null | false
| 217
|
Extractive multi-document summarization (MDS) aims to summarize a collection of documents by selecting a small number of sentences that represent the original content appropriately. Typical objectives for assembling a summary include information coverage and non-redundancy. A wide variety of methods have been introduced to approach MDS.
Many approaches are based on sentence ranking, i.e. assigning each sentence a score that indicates how well the sentence summarizes the input BIBREF0 , BIBREF1 , BIBREF2 . A summary is created by selecting the top entries of the ranked list of sentences. Since the sentences are often treated separately, these models might allow redundancy in the summary. Therefore, they are often extended by an anti-redundancy filter while de-queuing ranked sentence lists.
Other approaches work at summary-level rather than sentence-level and aim to optimize functions of sets of sentences to find good summaries, such as KL-divergence between probability distributions BIBREF3 or submodular functions that represent coverage, diversity, etc. BIBREF4
The centroid-based model belongs to the former group: it represents sentences as bag-of-word (BOW) vectors with TF-IDF weighting and uses a centroid of these vectors to represent the whole document collection BIBREF5 . The sentences are ranked by their cosine similarity to the centroid vector. This method is often found as a baseline in evaluations where it usually is outperformed BIBREF0 , BIBREF6 .
This baseline can easily be adapted to work at the summary level instead of at the sentence level. This is done by representing a summary as the centroid of its sentence vectors and maximizing the similarity between the summary centroid and the centroid of the document collection. A simple greedy algorithm is used to find the best summary under a length constraint.
In order to keep the method efficient, we outline different methods to select a small number of candidate sentences from each document in the input collection before constructing the summary.
We test these modifications on the DUC2004 dataset for multi-document summarization. The results show an improvement of Rouge scores over the original centroid method. The performance is on par with state-of-the-art methods which shows that the similarity between a summary centroid and the input centroid is a well-suited function for global summary optimization.
The summarization approach presented in this paper is fast, unsupervised and simple to implement. Nevertheless, it performs as well as more complex state-of-the-art approaches in terms of Rouge scores on the DUC2004 dataset. It can be used as a strong baseline for future research or as a fast and easy-to-deploy summarization tool.
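A minimal sketch of the summary-level centroid method (greedy selection under a word budget; the use of scikit-learn for TF-IDF and the early stop at the first over-budget candidate are simplifying assumptions):

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def centroid_summary(sentences, max_words=100):
    """Greedily add the sentence that moves the summary centroid
    closest (by cosine similarity) to the collection centroid."""
    vecs = TfidfVectorizer().fit_transform(sentences).toarray()
    centroid = vecs.mean(axis=0)

    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

    summary, summary_vec, length = [], np.zeros_like(centroid), 0
    remaining = set(range(len(sentences)))
    while remaining:
        best = max(remaining,
                   key=lambda i: cos(summary_vec + vecs[i], centroid))
        if length + len(sentences[best].split()) > max_words:
            break  # simplification: stop at the first over-budget pick
        summary.append(sentences[best])
        summary_vec += vecs[best]
        length += len(sentences[best].split())
        remaining.remove(best)
    return summary
```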
|
Is the performance on par with state-of-the-art methods?
|
Yes, it is.
|
null | false
| null |
In the 1994–95 season, Jágr won his first Art Ross Trophy after finishing the regular season with the most points in the NHL; he tied Eric Lindros with 70 points but won based on his 32 goals to Lindros' 29. The next year, Jágr set a record for most points, 149, by a European-born player. Both his 62 goals and 87 assists from that season still stand as career-highs. His 1995–96 totals for assists and points stand as the records for right-wingers in those categories. After the 1997–98 season, Penguins captain Ron Francis signed with the Carolina Hurricanes, leaving Jágr the Penguins' captaincy. From 1997–98 to 2000–01, Jágr would win four-straight NHL scoring titles. In 1999, he would win the Hart Memorial Trophy as the NHL's most valuable player, as well as the Lester B. Pearson Award. In 1998, he led the Czech Republic to a gold medal at the 1998 Winter Olympics.
|
How many points did Jaromír Jágr score in the 1995-1996 season?
|
In a record-setting year, Jaromír Jágr scored 149 points in the 1995-96 season.
|
null | false
| null |
Tell me about the plot to Harry Potter
|
Harry Potter is the story of a seemingly normal 11-year-old boy who lives with his Aunt Petunia, Uncle Vernon and cousin Dudley. He comes to find out that he is, in fact, not a normal boy but a wizard, and a famous wizard at that. He is offered admission to Hogwarts School of Witchcraft and Wizardry, and his epic battle against Lord Voldemort plays out across 7 books. After many trials and losses, and against all odds, he is victorious over Lord Voldemort.
|
|
null | false
| 426
|
A.13: Descriptive statistics and information of all input data features. Variables are displayed with type, description, units and average statistics. We separate all features according to medical literature, including vital sign, static, serum and haem
|
Could the authors come up with a way to use their proposed feature-time relevance maps to characterize each phenotype?
|
We have updated our results and discussion section to include a more comprehensive description of each cluster. We have also added a table of summary statistics for clinical variables (e.g., demographics) over each of the learnt clusters - these further support and corroborate the description obtained from the analysis of Figure 5 and Figure 6. This new table is in the Appendix in Figure A13.
|
null | false
| null |
What is the currency in use in the Netherlands?
|
The currency in use in the Netherlands is the euro.
|
|
null | false
| null |
Who is the greatest golfer ever?
|
Most consider Jack Nicklaus the greatest golfer as he's won 18 major championships, the most of any golfer. Major championships are considered the ultimate achievement in the sport of golf. Jack Nicklaus also has the most runner-up finishes in major championships with 19 and the third most PGA Tour tournament wins with 73. Many would argue that Tiger Woods is the greatest golfer of all-time with 15 major championships and more PGA Tour tournament wins (currently 82) because he played in a more modern era that was considered to have better competition than the era in which Jack Nicklaus played. Tiger Woods was also considered to have had the largest impact on the overall popularity of the game of golf.
|
|
null | false
| null |
What colour usually is the wax surrounding Edam cheese?
|
The wax surrounding Edam cheese is usually red.
|
|
null | false
| 332
|
As language processing tools become more prevalent concern has grown over their susceptibility to social biases and their potential to propagate bias BIBREF1, BIBREF2. Natural language training data inevitably reflects biases present in our society. For example, gender bias manifests itself in training data which features more examples of men than of women. Tools trained on such data will then exhibit or even amplify the biases BIBREF3.
Gender bias is a particularly important problem for Neural Machine Translation (NMT) into gender-inflected languages. An over-prevalence of some gendered forms in the training data leads to translations with identifiable errors BIBREF0. Translations are better for sentences involving men and for sentences containing stereotypical gender roles. For example, mentions of male doctors are more reliably translated than those of male nurses BIBREF2, BIBREF4.
Recent approaches to the bias problem in NLP have involved training from scratch on artificially gender-balanced versions of the original dataset BIBREF5, BIBREF6 or with de-biased embeddings BIBREF7, BIBREF8. While these approaches may be effective, training from scratch is inefficient and gender-balancing embeddings or large parallel datasets are challenging problems BIBREF9.
Instead we propose treating gender debiasing as a domain adaptation problem, since NMT models can very quickly adapt to a new domain BIBREF10. To the best of our knowledge this work is the first to attempt NMT bias reduction by fine-tuning, rather than retraining. We consider three aspects of this adaptation problem: creating less biased adaptation data, parameter adaptation using this data, and inference with the debiased models produced by adaptation.
Regarding data, we suggest that a small, trusted gender-balanced set could allow more efficient and effective gender debiasing than a larger, noisier set. To explore this we create a tiny, handcrafted profession-based dataset for transfer learning. For contrast, we also consider fine-tuning on a counterfactual subset of the full dataset and propose a straightforward scheme for artificially gender-balancing parallel text for NMT.
We find that during domain adaptation improvement on the gender-debiased domain comes at the expense of translation quality due to catastrophic forgetting BIBREF11. We can balance improvement and forgetting with a regularised training procedure, Elastic Weight Consolidation (EWC), or in inference by a two-step lattice rescoring procedure.
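A minimal sketch of the EWC regulariser used during the debiasing fine-tune (the diagonal-Fisher estimate and the hyperparameter are illustrative assumptions, not the authors' code):

```python
import torch

def ewc_penalty(model, old_params, fisher, lam=1.0):
    """Elastic Weight Consolidation: penalise movement away from the
    pre-fine-tuning parameters theta*, weighted by their (diagonal)
    Fisher importance; added to the adaptation loss."""
    loss = torch.zeros(())
    for name, p in model.named_parameters():
        loss = loss + (fisher[name] * (p - old_params[name]) ** 2).sum()
    return lam / 2 * loss

# During fine-tuning on the gender-balanced set (sketch):
# total_loss = task_loss + ewc_penalty(model, theta_star, fisher, lam)
```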
We experiment with three language pairs, assessing the impact of debiasing on general domain BLEU and on the WinoMT challenge set BIBREF0. We find that continued training on the handcrafted set gives far stronger and more consistent improvements in gender-debiasing with orders of magnitude less training time, although as expected general translation performance as measured by BLEU decreases.
We further show that regularised adaptation with EWC can reduce bias while limiting degradation in general translation quality. We also present a lattice rescoring procedure in which initial hypotheses produced by the biased baseline system are transduced to create gender-inflected search spaces which can be rescored by the adapted model. We believe this approach, rescoring with models targeted to remove bias, is novel in NMT. The rescoring procedure improves WinoMT accuracy by up to 30% with no decrease in BLEU on the general test set.
Recent recommendations for ethics in Artificial Intelligence have suggested that social biases or imbalances in a dataset be addressed prior to model training BIBREF12. This recommendation presupposes that the source of bias in a dataset is both obvious and easily adjusted. We show that debiasing a full NMT dataset is difficult, and suggest alternative efficient and effective approaches for debiasing a model after it is trained. This avoids the need to identify and remove all possible biases prior to training, and has the added benefit of preserving privacy, since no access to the original data or knowledge of its contents is required. As evidence, in section SECREF43, we show this scheme can be applied to remove gender bias in the output of ‘black box‘ online commercial MT systems.
|
Does the continued training on the handcrafted set give far stronger and more consistent improvements in gender-debiasing with orders of magnitude less training time?
|
Yes, it does.
|
null | false
| null |
Why is Dungeons and Dragons so popular nowadays?
|
Dungeons and Dragons, also commonly known as D&D is a role playing game that has been around since the 70s. D&D has been enjoying somewhat of a resurgence in recent years and has gained more mainstream appeal thanks largely to the internet and streaming in particular. Shows like Critical Role and Dimension 20 where campaigns are streamed often in real time over the internet have made D&D more accessible than ever before. The success of Critical Role in particular is noteworthy, with Amazon Prime developing an animated show based on the first campaign and a second show announced adapting the second campaign. There is even a D&D movie in the works set to release in 2023 with a cast of renowned actors like Chris Pine and Michelle Rodriguez.
|
|
null | false
| null |
5 best source of water
|
Rain
Glaciers
Underground water
Plants
Mountain snow
|
|
null | false
| null |
What is an RSU?
|
A restricted stock unit or RSU is the right to receive a share of common stock (or an equivalent value in cash) upon certain vesting conditions.
Following satisfaction of certain vesting conditions, the company would deliver to the holder of the RSUs whole shares of common stock, cash or a combination of common stock and cash. RSUs generally vest when two conditions are met - a service vest period has been satisfied, and an exit liquidity event has occurred prior to the expiration date of the RSUs.
Some companies also require the individuals to be employed at or providing services to the company upon satisfaction of the exit liquidity event.
At this time, the RSUs will be settled and taxes will be owed on the full value of the settled shares (or cash) at regular compensation tax rates (based on the fair market value of the common stock at the time of settlement).
Unlike options, RSUs have value even if the value of the company declines (since there is no exercise price to pay), but still provide upside based on the company’s results over time. Because there is no exercise payment required, a grant of RSUs usually consists of a smaller number of shares than a grant of options.
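As a toy numerical illustration of the settlement tax point above (the share count, price and tax rate are made-up assumptions, not tax advice):

```python
# Toy RSU settlement example; all numbers are illustrative assumptions.
vested_rsus = 1_000        # shares delivered at settlement
fmv_per_share = 50.00      # fair market value at settlement
ordinary_tax_rate = 0.35   # assumed compensation tax rate

taxable_income = vested_rsus * fmv_per_share
tax_owed = taxable_income * ordinary_tax_rate
print(f"Taxable income: ${taxable_income:,.2f}")  # $50,000.00
print(f"Tax owed:       ${tax_owed:,.2f}")        # $17,500.00
```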
|
|
null | false
| null |
Merrimac Butte is a 5,627-foot (1,715-metre) sandstone summit located in Grand County, Utah, United States, about 12 miles northwest of the town of Moab. Merrimac Butte is a thin, 200–600-foot-wide and 1,600-foot-long east-to-west butte with 200-foot-tall vertical Entrada Sandstone walls overlaying a Carmel Formation base.
Monitor Butte is situated immediately east of Merrimac Butte. The two buttes were named after the Monitor and Merrimack, two ironclad steamships known for clashing during the American Civil War. They can be seen from Highway 313 after it climbs out of Sevenmile Canyon en route to the Island in the Sky section of Canyonlands National Park or Dead Horse Point State Park.
|
Monitor and Merrimac refer to what in the passage?
|
Monitor and Merrimac are two buttes which sit next to one another in Utah, U.S. These names are taken from two ironclad steamships of the American Civil War.
|
null | false
| null |
How did Niagara Falls form?
|
Scientists believe a glacier originally formed the falls toward the end of the Ice Age some 13,000 years ago when tons of ice melted and eroded over time. Erosion caused the river bed to drop like a cliff, and the river continued to flow over what is now known as Niagara Falls. This slow process still occurs today when the water freezes and thaws, wearing away and moving more rock under the river surface, causing the falls to move farther upstream.
|
|
null | false
| null |
How to choose the next company you work for?
|
Choosing a company depends on the following rough criteria:
- location: whether the company is within an acceptable commute distance
- work culture: how the company treats work-life balance
- the mission of the company, and the risk associated with it if it is a growing company
- the compensation
- the colleagues you will work with there
- the project itself, and whether it will use or improve your areas of expertise
- the career direction you want to pursue
|
|
null | false
| null |
Classify the following as living things and non-living things: Car, Desk, Parrot, Cheetah, Phone
|
The list can be classified as living things and non-living things as follows:
Car- Non-living thing.
Desk- Non-living thing.
Parrot- Living thing.
Cheetah- Living thing.
Phone- Non-living thing.
You can classify anything that can breathe as a living thing, and anything that cannot breathe as a non-living thing.
|
|
null | false
| null |
Ironsand, also known as iron-sand or iron sand, is a type of sand with heavy concentrations of iron. It is typically dark grey or blackish in colour.
It is composed mainly of magnetite, Fe3O4, and also contains small amounts of titanium, silica, manganese, calcium and vanadium.
Ironsand has a tendency to heat up in direct sunlight, causing temperatures high enough to cause minor burns. As such it forms a hazard in New Zealand at popular west-coast surf beaches such as Piha.
|
How many iron atoms are in a single molecule of the main component substance of ironsand?
|
The main component of ironsand is magnetite, which has chemical formula Fe3O4. Because Fe is the chemical symbol for iron and it is followed by a 3 in this formula, there are 3 iron atoms per magnetite molecule.
|
null | false
| null |
The Antikythera mechanism (/ˌæntɪkɪˈθɪərə/ AN-tih-kih-THEER-ə) is an Ancient Greek hand-powered orrery, described as the oldest known example of an analogue computer used to predict astronomical positions and eclipses decades in advance. It could also be used to track the four-year cycle of athletic games which was similar to an Olympiad, the cycle of the ancient Olympic Games.
|
What is the first computer recognized in human history?
|
The Antikythera mechanism, an astronomical clock, was built around 100 BC and is considered the world's first computer.
|
1811.08048
| true
| null |
We crowdsourced multiple-choice questions in two parts, encouraging workers to be imaginative and varied in their use of language. First, workers were given a seed qualitative relation q+/-( INLINEFORM0 ) in the domain, expressed in English (e.g., “If a surface has more friction, then an object will travel slower”), and asked to enter two objects, people, or situations to compare. They then created a question, guided by a large number of examples, and were encouraged to be imaginative and use their own words. The results are a remarkable variety of situations and phrasings (Figure FIGREF4 ).
|
Do all questions in the dataset allow the answers to pick from 2 options?
|
Yes.
|
null | false
| null |
Identify which animal species is alive or extinct: Cape Lion, Javan Rhino
|
Javan Rhino is alive, Cape Lion is extinct.
|
|
null | false
| null |
Tesla, Inc. (/ˈtɛslə/ TESS-lə or /ˈtɛzlə/ TEZ-lə) is an American multinational automotive and clean energy company headquartered in Austin, Texas, United States. Tesla designs and manufactures electric vehicles (electric cars and trucks), battery energy storage from home to grid-scale, solar panels and solar roof tiles, and related products and services. Tesla is one of the world's most valuable companies and is, as of 2023, the world's most valuable automaker. In 2021, the company had the most worldwide sales of battery electric vehicles and plug-in electric vehicles, capturing 21% of the battery-electric (purely electric) market and 14% of the plug-in market (which includes plug-in hybrids). Through its subsidiary Tesla Energy, the company develops and is a major installer of photovoltaic systems in the United States. Tesla Energy is also one of the largest global suppliers of battery energy storage systems, with 3.99 gigawatt-hours (GWh) installed in 2021.
Tesla was incorporated in July 2003 by Martin Eberhard and Marc Tarpenning as Tesla Motors. The company's name is a tribute to inventor and electrical engineer Nikola Tesla. In February 2004, via a $6.5 million investment, Elon Musk became the largest shareholder of the company. He has served as CEO since 2008. According to Musk, the purpose of Tesla is to help expedite the move to sustainable transport and energy, obtained through electric vehicles and solar power. Tesla began production of its first car model, the Roadster sports car, in 2008. This was followed by the Model S sedan in 2012, the Model X SUV in 2015, the Model 3 sedan in 2017, the Model Y crossover in 2020, and the Tesla Semi truck in 2022. The company plans to start production of the Cybertruck light-duty pickup truck in 2023. The Model 3 is the all-time bestselling plug-in electric car worldwide, and, in June 2021, became the first electric car to sell 1 million units globally. Tesla's 2022 full year deliveries were around 1.31 million vehicles, a 40% increase over the previous year, and cumulative sales totaled 3 million cars as of August 2022. In October 2021, Tesla's market capitalization reached $1 trillion, the sixth company to do so in U.S. history.
Tesla has been the subject of several lawsuits, government scrutiny, journalistic criticism, and public controversies arising from statements and acts of CEO Elon Musk and from allegations of whistleblower retaliation, worker rights violations, and defects with their products.
|
From the passage list down the various models of Tesla vehicle. List the results in comma separated format.
|
Roadster, Model S, Model X, Model 3, Model Y, Tesla Semi, Cybertruck
|
null | false
| null |
What is algebra and who invented it?
|
Algebra is the study of variables and the rules for manipulating these variables in formulas.
The word algebra (in Arabic, al-jabr) comes from the title of the early 9th-century book ʿIlm al-jabr wa l-muqābala, "The Science of Restoring and Balancing", by the Persian mathematician and astronomer al-Khwarizmi. He used the term al-jabr for the operation of moving a term from one side of an equation to the other.
Here is a basic algebra equation:
2 + x = 4
By moving 2 to the other side of the equation we will have:
x = 4 - 2
x = 2
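For illustration, the same steps can be checked with a computer algebra system (a small sympy sketch, assuming sympy is installed):

```python
from sympy import Eq, Symbol, solve

x = Symbol("x")
print(solve(Eq(2 + x, 4), x))  # [2]
```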
|
|
null | false
| null |
Write a brief passage arguing for the use of genetically modified crops in agriculture.
|
In recent times, genetically modified crops have developed a negative sentiment in popular culture. Much like the organic food movement of the 2000s, this change in sentiment is fueled by a general lack of scientific education. There is much more nuance to the issue than "genetically modified crops are bad". Often, people associate the term "GMO" with genetic modification in popular media, seeing it in the same light as cybernetic enhancement or cloning. Genetic modification, however, is not a mysterious science. It has a storied past throughout human history in the form of selective breeding. Almost every mass-harvested crop today has undergone "genetic modification" via selective breeding. Today, modern techniques for genetic modification are the same thing, only sped up. The key here is to make the connection between genetic modification as it has traditionally been done with the genetic modification that occurs today.
|
|
null | false
| null |
Tell me whether these are rivers or oceans: Nile, Amazon, Ganges, Hudson, Rhine
|
These are rivers.
|
|
null | false
| null |
Why is Chicago known as the Windy City
|
Many believe that Chicago is known as the Windy City because it has harsh winters and strong cold winds off the shores of Lake Michigan. Even in the summer, the winds swirling off the lake can change the outcome of a baseball game at Wrigley Field. While the weather pattern does in fact exist, Chicago is not the windiest city, nor is this the reason for the nickname. Many believe the term is a derogatory reference to Chicago's vitriolic politics and self-important politicians who are full of hot air. References to Chicago as the Windy City have been found all the way back to the late 1800s.
|
|
null | false
| null |
What would it take to build a house?
|
There are many things to consider before building a house. First, you will need to select a location and ensure that it meets the right criteria for a structure. Then you'll need to plan out what types of materials to use and how much money you'd like to spend. Once you have decided on location, materials and budget, it's time to figure out what layout would best meet your needs.
|
|
null | false
| null |
Butterflies (Rhopalocera) are insects that have large, often brightly coloured wings, and a conspicuous, fluttering flight. The group comprises the superfamilies Hedyloidea (moth-butterflies in the Americas) and Papilionoidea. Butterfly fossils date to the Paleocene, about 56 million years ago.
Butterflies have a four-stage life cycle, as like most insects they undergo complete metamorphosis. Winged adults lay eggs on the food plant on which their larvae, known as caterpillars, will feed. The caterpillars grow, sometimes very rapidly, and when fully developed, pupate in a chrysalis. When metamorphosis is complete, the pupal skin splits, the adult insect climbs out, and after its wings have expanded and dried, it flies off. Some butterflies, especially in the tropics, have several generations in a year, while others have a single generation, and a few in cold locations may take several years to pass through their entire life cycle.
|
From the given passage, write down some interesting points on butterflies
|
1. Butterflies (Rhopalocera) are insects that have large, frequently vividly coloured wings and prominent, fluttering flight patterns.
2. The superfamilies Hedyloidea (moth-butterflies in the Americas) and Papilionoidea make up the group.
3. Around 56 million years ago, during the Paleocene, butterflies first appeared.
4. Butterflies go through a full metamorphosis, like other insects, and have a four-stage life cycle.
5. Winged adults lay their eggs on the food plant that their caterpillar-like larvae will consume.
6. The caterpillars mature, sometimes very quickly, then pupate in chrysalises once completely grown.
7. After transformation is complete, the adult insect crawls out of the pupal skin and dries its wings before taking flight.
8. Some butterflies, especially those that live in tropical climates, have multiple generations every year, while others only have one. Some butterflies that live in frigid climates may take several years to complete their whole life cycle.
|
null | false
| 293
|
Knowledge-based question answering (KBQA) aims to answer natural language questions over knowledge bases (KBs) such as DBpedia and Freebase. Formal query generation is an important component in many KBQA systems BIBREF0 , BIBREF1 , BIBREF2 , especially for answering complex questions. Given entity and relation linking results, formal query generation aims to generate correct executable queries, e.g., SPARQL queries, for the input natural language questions. An example question and its formal query are shown in Figure FIGREF1 . Generally speaking, formal query generation is expected to include, but not be limited to, the capabilities of (i) recognizing and paraphrasing different kinds of constraints, including triple-level constraints (e.g., “movies" corresponds to a typing constraint for the target variable) and higher-level constraints (e.g., subgraphs). For instance, “the same ... as" represents a complex structure shown in the middle of Figure FIGREF1 ; (ii) recognizing and paraphrasing aggregations (e.g., “how many" corresponds to Count); and (iii) organizing all the above to generate an executable query BIBREF3 , BIBREF4 .
There are mainly two kinds of query generation approaches for complex questions. (i) Template-based approaches choose a pre-collected template for query generation BIBREF1 , BIBREF5 . Such approaches highly rely on the coverage of templates, and perform unstably when some complex templates have very few natural language questions as training data. (ii) Approaches based on semantic parsing and neural networks learn entire representations for questions with different query structures, by using a neural network following the encode-and-compare framework BIBREF2 , BIBREF4 . They may suffer from the lack of training data, especially for long-tail questions with rarely appeared structures. Furthermore, both above approaches cannot handle questions with unseen query structures, since they cannot generate new query structures.
To cope with the above limitations, we propose a new query generation approach based on the following observation: the query structure for a complex question may rarely appear, but it usually contains some substructures that frequently appeared in other questions. For example, the query structure for the question in Figure FIGREF1 appears rarely, however, both “how many movies" and “the same ... as" are common expressions, which correspond to the two query substructures in dashed boxes. To collect such frequently appeared substructures, we automatically decompose query structures in the training data. Instead of directly modeling the query structure for the given question as a whole, we employ multiple neural networks to predict query substructures contained in the question, each of which delivers a part of the query intention. Then, we select an existing query structure for the input question by using a combinational ranking function. Also, in some cases, no existing query structure is appropriate for the input question. To cope with this issue, we merge query substructures to build new query structures. The contributions of this paper are summarized below:
|
How to predict query substructures contained in the question?
|
The authors employ multiple neural networks to predict query substructures contained in the question.
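A minimal sketch of that idea (one per-substructure predictor plus a simple combinational ranking; the names, inputs and scoring rule here are illustrative assumptions, not the paper's exact method):

```python
def rank_structures(question_vec, candidates, substructure_models):
    """Pick the query structure whose substructures best match the
    question. candidates: list of (structure, substructure_ids) pairs;
    substructure_models: dict mapping substructure id to a predictor
    returning P(substructure | question)."""
    def score(candidate):
        _, subs = candidate
        probs = [substructure_models[s](question_vec) for s in subs]
        return sum(probs) / len(probs)  # simple combinational ranking
    return max(candidates, key=score)[0]
```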
|
null | false
| 123
|
In this section, we report our results on computational metrics, hyperparameters and training configurations for our models. Table TABREF4 shows the results of the perplexity score evaluation of the evaluated models, Table TABREF5 shows the hyperparameters for each encoding method, and Table TABREF6 shows our training parameters. In Table TABREF6, the values for the Gutenberg dataset in the GumbelGAN and Creative-GAN columns are empty, as we only pretrain our LMs with the Gutenberg dataset.
In Table 3, the values for the Gutenberg dataset in the GumbelGAN and CreativeGAN columns are empty, as we only pretrain our LMs with the Gutenberg dataset.
|
What dataset is used?
|
Gutenberg dataset.
|
null | false
| null |
Prince Rogers Nelson (June 7, 1958 – April 21, 2016), commonly known mononymously as Prince, was an American singer, songwriter, musician, and record producer.
|
Who is Prince (musician)?
|
Prince Rogers Nelson (June 7, 1958 – April 21, 2016), commonly known mononymously as Prince, was an American singer, songwriter, musician, and record producer. The recipient of numerous awards and nominations, he is widely regarded as one of the greatest musicians of his generation.
He was known for his flamboyant, androgynous persona; his wide vocal range, which included a far-reaching falsetto and high-pitched screams; and his skill as a multi-instrumentalist, often preferring to play all or most of the instruments on his recordings.
Prince produced his albums himself, pioneering the Minneapolis sound. His music incorporated a wide variety of styles, including funk, R&B, rock, new wave, soul, synth-pop, pop, jazz, and hip hop.
|
null | false
| null |
What is an API gateway?
|
An API gateway is an API management tool that sits between a client and a collection of backend services. An API gateway acts as a reverse proxy to accept all application programming interface (API) calls, aggregate the various services required to fulfill them, and return the appropriate result.
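As a minimal sketch of the idea (a toy reverse proxy using only the Python standard library; the routes and backend addresses are made-up assumptions):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Hypothetical route table mapping path prefixes to backend services.
ROUTES = {
    "/users": "http://localhost:8001",
    "/orders": "http://localhost:8002",
}

class GatewayHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        for prefix, backend in ROUTES.items():
            if self.path.startswith(prefix):
                with urlopen(backend + self.path) as resp:  # forward the call
                    body = resp.read()
                self.send_response(resp.status)
                self.end_headers()
                self.wfile.write(body)
                return
        self.send_error(404, "No backend for this route")

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), GatewayHandler).serve_forever()
```

A production gateway would also aggregate multiple backend calls, handle authentication and rate limiting, and forward headers, which this sketch omits.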
|
|
null | false
| null |
How do you cook beef short ribs?
|
I prefer to cook short ribs in the dutch oven. Short ribs have plenty of fat, so I remove the fat cap and season just with salt and pepper.
1. Brown all sides and move the short ribs to a cutting board.
2. Chop and add the holy trinity of onions, carrots and celery.
3. Add 1 tablespoon of minced garlic and 2 heaping tablespoons of tomato paste.
4. Add about 1 cup of red wine and 1 cup of beef broth.
5. Bring to a simmer then put the dutch oven into a preheated oven at 400 degrees.
6. Cook this for about 2 hours until the short ribs are fall-off-the-bone tender. The internal temp on the short ribs should read about 205.
Serve the short ribs and vegetables with some home made mashed potatoes and you have an amazing dinner.
|
|
null | false
| null |
What are some tools to help combat ADD and ADHD?
|
Daily exercise, getting sufficient sleep, reducing sugar, meditation, creating lists, color coding files, using highlighters for reading
|