| paper_id (stringlengths 10–10, ⌀) | yes_no (bool, 2 classes) | paper_index (int64, 0–519, ⌀) | evidence (stringlengths 0–37.7k) | question (stringlengths 4–11.7k) | answer (stringlengths 1–26k) |
|---|---|---|---|---|---|
null | false
| 389
|
We introduce an open-domain question answering system based on a Chinese knowledge graph in this paper. We analyze the questions and find that most of the answers to the questions are within two hops. Therefore, we only solve the problem within two hops to reduce the complexity of the system. The system consists of a topic entity selection module, a relationship recognition module and an answer selection module. Firstly, we construct a scoring mechanism to select the core entity of the question; then we score the relationships in the two-hop subgraph of the topic entity; finally, we build a classifier to judge whether a question is simple or complicated, so that we can choose the final relationship and generate a SPARQL query.
We analyze the questions and find that most of the answers to the questions are within two hops. Therefore, we only solve the problem within two hops to reduce the complexity of the system.
|
Why do they only solve the problem within two hops?
|
To reduce the complexity of the system.
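To make the final step concrete, here is a minimal sketch of the kind of two-hop SPARQL query such a system could generate; the entity and relation IRIs are hypothetical, invented for illustration rather than taken from the paper:

```python
# Hypothetical two-hop SPARQL query for a question like
# "Where was Yao Ming's wife born?"; the IRIs below are invented.
two_hop_query = """
SELECT ?answer WHERE {
  <entity/Yao_Ming>  <relation/wife>        ?intermediate .
  ?intermediate      <relation/birthplace>  ?answer .
}
"""
print(two_hop_query)
```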
|
null | false
| 396
|
Our political systems are unequal, and we suffer for it. Diversity in representation around decision-making tables is important for the health of our democratic institutions BIBREF0. One example of this inequity of representation is the gender disparity in politics: there are fewer women in politics than men, largely because women do not run for office at the same rate as men. This is because women face systemic barriers in political systems across the world BIBREF1. One of these barriers is online harassment BIBREF2, BIBREF3. Twitter is an important social media platform for politicians to share their visions and engage with their constituents. Women are disproportionately harassed on this platform because of their gender BIBREF4.
To raise awareness of online abuse and shift the discourse surrounding women in politics, we designed, built, and deployed ParityBOT: a Twitter bot that classifies hateful tweets directed at women in politics and then posts “positivitweets”. This paper focuses on how ParityBOT improves discourse in politics.
Previous work that addressed online harassment focused on collecting tweets directed at women engaged in politics and journalism and determining if they were problematic or abusive BIBREF5, BIBREF3, BIBREF6. Inspired by these projects, we go one step further and develop a tool that directly engages in the discourse on Twitter in political communities. Our hypothesis is that by seeing “positivitweets” from ParityBOT in their Twitter feeds, knowing that each tweet is an anonymous response to a hateful tweet, women in politics will feel encouraged and included in digital political communities BIBREF7. This will reduce the barrier to fair engagement on Twitter for women in politics. It will also help achieve gender balance in Canadian politics and improve gender equality in our society.
Inspired by these projects, we go one step further and develop a tool that directly engages in the discourse on Twitter in political communities. Our hypothesis is that by seeing “positivitweets” from ParityBOT in their Twitter feeds, knowing that each tweet is an anonymous response to a hateful tweet, women in politics will feel encouraged and included in digital political communities BIBREF7. This will reduce the barrier to fair engagement on Twitter for women in politics. It will also help achieve gender balance in Canadian politics and improve gender equality in our society.
|
What are the benefits of their tool?
|
It will reduce the barrier to fair engagement on Twitter for women in politics and help achieve gender balance in Canadian politics and improve gender equality in society.
|
null | false
| null |
How many letters are there in the Chinese language?
|
Since there is no Chinese alphabet, there is no definite answer. But some statistics will probably give you an idea: a modern Chinese dictionary has around 20,000 characters.
|
|
null | false
| null |
What is the difference between a Transformation and Action in Spark?
|
Spark's execution engine is designed to be lazy: you first build up your analytics/data-processing request through a series of Transformations, which are then executed by an Action.
Transformations are operations that transform your RDD data from one form to another. When you apply a transformation to any RDD, you get a new RDD with the transformed data. Operations like map and filter are transformations.
Transformations create RDDs from each other, but when we want to work with the actual dataset, an Action is performed. When an action is triggered, no new RDD is formed, unlike with a transformation. Example: count on a DataFrame.
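As a minimal PySpark sketch of this laziness (standard API, simplified setup):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("lazy-demo").getOrCreate()
rdd = spark.sparkContext.parallelize(range(10))

# Transformations: each returns a new RDD and records lineage; nothing runs yet.
evens = rdd.filter(lambda x: x % 2 == 0)
squares = evens.map(lambda x: x * x)

# Actions: trigger execution of the whole lineage and return a result.
print(squares.count())    # 5
print(squares.collect())  # [0, 4, 16, 36, 64]
```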
|
|
null | false
| null |
These islands can be divided into three recognizable groups located on the Azores Plateau:
The Eastern Group (Grupo Oriental) of São Miguel, Santa Maria and Formigas Islets
The Central Group (Grupo Central) of Terceira, Graciosa, São Jorge, Pico and Faial
The Western Group (Grupo Ocidental) of Flores and Corvo.
|
Extract the names of the groups of islands located on the Azores plateau from the text. Separate them with a comma.
|
Eastern Group, Central Group, Western Group
|
null | false
| 42
|
We proposed a new approach to address the coverage problem in NMT, by replacing the softmax attentional transformation by sparse and constrained alternatives: sparsemax, constrained softmax, and the newly proposed constrained sparsemax. For the latter, we derived efficient forward and backward propagation algorithms. By incorporating a model for fertility prediction, our attention transformations led to sparse alignments, avoiding repeated words in the translation.
We proposed a new approach to address the coverage problem in NMT, by replacing the softmax attentional transformation by sparse and constrained alternatives: sparsemax, constrained softmax, and the newly proposed constrained sparsemax.
|
How does their approach address the coverage problem in NMT?
|
By replacing the softmax attentional transformation by sparse and constrained alternatives: sparsemax, constrained softmax, and the newly proposed constrained sparsemax.
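For orientation, here is a minimal NumPy sketch of plain sparsemax (Martins and Astudillo, 2016); the constrained variants the paper derives add fertility-based upper bounds on top of this projection, which are not shown here:

```python
import numpy as np

def sparsemax(z):
    """Euclidean projection of scores z onto the probability simplex.
    Unlike softmax, it can assign exactly zero weight to low scores,
    which is what yields sparse alignments."""
    z_sorted = np.sort(z)[::-1]
    cssv = np.cumsum(z_sorted)
    k = np.arange(1, len(z) + 1)
    support = 1 + k * z_sorted > cssv       # entries kept in the support
    k_z = k[support][-1]                    # size of the support
    tau = (cssv[support][-1] - 1.0) / k_z   # threshold
    return np.maximum(z - tau, 0.0)

print(sparsemax(np.array([1.0, 0.8, 0.1])))  # [0.6 0.4 0. ] -- sparse, sums to 1
```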
|
null | false
| null |
Westworld is an American dystopian science fiction western television series created by Jonathan Nolan and Lisa Joy that first aired on October 2, 2016, on HBO. It was based upon the 1973 film of the same name written and directed by Michael Crichton and loosely upon its 1976 sequel, Futureworld.
|
What is WestWorld?
|
Westworld is an American dystopian science fiction western television series created by Jonathan Nolan and Lisa Joy that first aired on October 2, 2016, on HBO. It was based upon the 1973 film of the same name written and directed by Michael Crichton and loosely upon its 1976 sequel, Futureworld.
The story begins in Westworld, a fictional, technologically advanced Wild-West-themed amusement park populated by android "hosts". The park caters to high-paying guests who may indulge their wildest fantasies within the park without fear of retaliation from the hosts, who are prevented by their programming from harming humans. Later on, the series' setting expands to the real world, in the mid-21st century, where people's lives are driven and controlled by a powerful artificial intelligence named Rehoboam.
Nolan and Joy served as showrunners. The second, third and fourth seasons followed in April 2018, March 2020, and June 2022, respectively. Nolan and Joy planned a fifth and final season and were in negotiations with HBO to produce it. However, in November 2022, it was announced that HBO had canceled the series. The series was removed from HBO Max on December 18, 2022.
Westworld's debut on HBO had the network's highest viewership ratings for a premiere since the first episode of True Detective, in 2014. Westworld's first season is the most-watched first season of any HBO original series. The first season also received critical acclaim and was highly praised for its performances, visuals, narrative, themes and soundtrack from Ramin Djawadi. Reception of the series declined from the second season onwards, with the scripts, characterization and narrative opacity being targeted for criticism; viewership ratings also diminished throughout its run. It has received numerous accolades, winning nine Primetime Emmy Awards out of 54 nominations. Thandiwe Newton won the Primetime Emmy Award for Outstanding Supporting Actress in a Drama Series in 2018.
|
null | false
| null |
Why do some people prefer pickleball over tennis?
|
Pickleball is played on half of a tennis court. Pickleball has been rising in popularity in the last decade. People used to think it was only for older adults. But pickleball leagues for younger players are popping up everywhere. Many people prefer pickleball over tennis because it is easier to pick up without formal lessons, the equipment is less expensive, and the matches are much shorter.
|
|
null | false
| null |
Dame Elizabeth Rosemond Taylor DBE (February 27, 1932 – March 23, 2011) was a British-American actress. She began her career as a child actress in the early 1940s and was one of the most popular stars of classical Hollywood cinema in the 1950s. She then became the world's highest paid movie star in the 1960s, remaining a well-known public figure for the rest of her life. In 1999, the American Film Institute named her the seventh-greatest female screen legend of Classic Hollywood cinema.
Born in London to socially prominent American parents, Taylor moved with her family to Los Angeles in 1939. She made her acting debut with a minor role in the Universal Pictures film There's One Born Every Minute (1942), but the studio ended her contract after a year. She was then signed by Metro-Goldwyn-Mayer and became a popular teen star after appearing in National Velvet (1944). She transitioned to mature roles in the 1950s, when she starred in the comedy Father of the Bride (1950) and received critical acclaim for her performance in the drama A Place in the Sun (1951). She starred in Ivanhoe (1952) with Robert Taylor and Joan Fontaine. Despite being one of MGM's most bankable stars, Taylor wished to end her career in the early 1950s. She resented the studio's control and disliked many of the films to which she was assigned. She began receiving more enjoyable roles in the mid-1950s, beginning with the epic drama Giant (1956), and starred in several critically and commercially successful films in the following years. These included two film adaptations of plays by Tennessee Williams: Cat on a Hot Tin Roof (1958) and Suddenly, Last Summer (1959); Taylor won a Golden Globe for Best Actress for the latter. Although she disliked her role as a call girl in BUtterfield 8 (1960), her last film for MGM, she won the Academy Award for Best Actress for her performance.
|
From the passage provided, extract the year in which Elizabeth Taylor moved to Los Angeles
|
Elizabeth Taylor moved to Los Angeles in the year 1939
|
null | false
| null |
In the 21st century alone, UVA teams have won seventeen NCAA championships. The men's teams have won recent NCAA titles in basketball (2019); lacrosse (2003, 2006, 2011, 2019, and 2021); baseball (2015); soccer (2009 and 2014); and tennis (2013, 2015, 2016, 2017, and 2022). UVA women have won recent NCAA titles in rowing (2010 and 2012) and swimming & diving (2021 and 2022). The Cavaliers rank first in the ACC (a power conference) with 22 men's NCAA Championships, and rank second in the conference with nine women's NCAA Championships.
|
According to this paragraph, how many NCAA championships has UVA won?
|
UVA has won 17 NCAA championships.
|
null | false
| null |
Why do people sleep?
|
Sleep gives your body an opportunity to heal and regenerate. It also gives your brain a chance to reprocess new information and explore possibilities in the form of dreams.
|
|
null | false
| null |
In Abrahamic religions, how many major commandants did Moses receive at Mt Sinai?
|
Moses received ten major commandments
|
|
1908.04531
| false
| null |
In sub-task C the goal is to classify the target of the offensive language. Only posts labeled as targeted insults (TIN) in sub-task B are considered in this task BIBREF17. Samples are annotated with one of the following:
Individual (IND): Posts targeting a named or unnamed person that is part of the conversation. In English this could be a post such as @USER Is a FRAUD Female @USER group paid for and organized by @USER. In Danish this could be a post such as USER du er sku da syg i hoved. These examples further demonstrate that this category captures the characteristics of cyberbullying, as it is defined in section "Background".
Group (GRP): Posts targeting a group of people based on ethnicity, gender or sexual orientation, political affiliation, religious belief, or other characteristics. In English this could be a post such as #Antifa are mentally unstable cowards, pretending to be relevant. In Danish this could be e.g. Åh nej! Svensk lorteret!
Other (OTH): The target of the offensive language does not fit the criteria of either of the previous two categories BIBREF17. In English this could be a post such as And these entertainment agencies just gonna have to be an ass about it.. In Danish this could be a post such as Netto er jo et tempel over lort.
In sub-task C the goal is to classify the target of the offensive language. Only posts labeled as targeted insults (TIN) in sub-task B are considered in this task BIBREF17. Samples are annotated with one of the following:
Individual (IND): Posts targeting a named or unnamed person that is part of the conversation. In English this could be a post such as @USER Is a FRAUD Female @USER group paid for and organized by @USER. In Danish this could be a post such as USER du er sku da syg i hoved. These examples further demonstrate that this category captures the characteristics of cyberbullying, as it is defined in section "Background".
Group (GRP): Posts targeting a group of people based on ethnicity, gender or sexual orientation, political affiliation, religious belief, or other characteristics. In English this could be a post such as #Antifa are mentally unstable cowards, pretending to be relevant. In Danish this could be e.g. Åh nej! Svensk lorteret!
Other (OTH): The target of the offensive language does not fit the criteria of either of the previous two categories BIBREF17. In English this could be a post such as And these entertainment agencies just gonna have to be an ass about it.. In Danish this could be a post such as Netto er jo et tempel over lort.
|
How many categories of offensive language were there?
|
3
|
null | false
| null |
The symptoms of COVID-19 are variable depending on the type of variant contracted, ranging from mild symptoms to a potentially fatal illness. Common symptoms include coughing, fever, loss of smell (anosmia) and taste (ageusia), with less common ones including headaches, nasal congestion and runny nose, muscle pain, sore throat, diarrhea, eye irritation, and toes swelling or turning purple, and in moderate to severe cases, breathing difficulties. People with the COVID-19 infection may have different symptoms, and their symptoms may change over time. Three common clusters of symptoms have been identified: one respiratory symptom cluster with cough, sputum, shortness of breath, and fever; a musculoskeletal symptom cluster with muscle and joint pain, headache, and fatigue; and a cluster of digestive symptoms with abdominal pain, vomiting, and diarrhea. In people without prior ear, nose, or throat disorders, loss of taste combined with loss of smell is associated with COVID-19 and is reported in as many as 88% of symptomatic cases.
|
What are the common symptoms of COVID-19?
|
Common symptoms of COVID-19 include coughing, fever, and loss of smell and taste. Some people may also experience headaches, nasal congestion, runny nose, muscle pain, sore throat, diarrhea, and eye irritation.
|
1911.11750
| false
| null |
A document $d$ can be defined as a finite sequence of terms (independent textual entities within a document, for example, words), namely $d=(t_1,t_2,\dots ,t_n)$. A general idea is to associate a weight with each term $t_i$ within $d$.
A document $d$ can be defined as a finite sequence of terms (independent textual entities within a document, for example, words), namely $d=(t_1,t_2,\dots ,t_n)$.
|
What representations for textual documents do they use?
|
The answers are shown as follows:
* finite sequence of terms
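As an illustration of this term-weighting idea (a generic sketch, not the paper's weighting scheme), a document can be mapped to per-term weights such as relative frequencies:

```python
from collections import Counter

def term_weights(document: str) -> dict:
    terms = document.lower().split()              # d = (t1, t2, ..., tn)
    counts = Counter(terms)
    n = len(terms)
    return {t: c / n for t, c in counts.items()}  # a weight for each term ti

print(term_weights("the cat sat on the mat"))
# {'the': ~0.333, 'cat': ~0.167, 'sat': ~0.167, 'on': ~0.167, 'mat': ~0.167}
```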
|
null | false
| null |
The Jakobshorn (2,590 metres (8,500 ft)) is a mountain of the Albula Alps, overlooking Davos in the Swiss canton of Graubünden. It is also one of the five skiing regions of the Davos Klosters Mountains, offering 14 pistes.
Jakobshorn panorama
Located at the northern end of the range between the Dischmatal and Sertigtal, the Jakobshorn is served by a two-part cable car running from Davos Platz. Once on the mountain there are three chairlifts, one short cable car, and one T-bar. Since 1995 there has also been a two-person chairlift, running from near the base station to near the bottom of Usser Isch.
There are a variety of pistes on the Jakobshorn—blue, red and black—and the mountain is renowned for being a centre for snowboarding; Bolgen provides an illuminated super pipe.
Pistes are very wide and the area is popular with novice skiers; however, there are a number of black runs for the more adventurous skier. The main two lifts also open during the summer, from July to October, and there are many walks on the Jakobshorn.
The Jakobshorn is also a favoured take-off point for hang-gliders and snowkiters.
|
Is the Jakobshorn in Davos worth a trip
|
The Jakobshorn (2,590 metres (8,500 ft)) is a mountain of the Albula Alps, overlooking Davos in the Swiss canton of Graubünden. It is also one of the five skiing regions of the Davos Klosters Mountains, offering 14 pistes.
There are a variety of pistes on the Jakobshorn—blue, red and black—and the mountain is renowned for being a centre for snowboarding; Bolgen provides an illuminated super pipe.
Pistes are very wide and the area is popular with novice skiers; however, there are a number of black runs for the more adventurous skier. The main two lifts also open during the summer, from July to October, and there are many walks on the Jakobshorn.
The Jakobshorn is also a favoured take-off point for hang-gliders and snowkiters.
|
null | false
| 164
|
Preparing the data can be a complex and time-consuming process, often involving working with partially or wholly unstructured data. The pre-processing steps have a big impact on the operationalizations, subsequent analyses and reproducibility efforts BIBREF30 , and they are usually tightly linked to what we intend to measure. Unfortunately, these steps tend to be underreported, but documenting the pre-processing choices made is essential and is analogous to recording the decisions taken during the production of a scholarly edition or protocols in biomedical research. Data may also vary enormously in quality, depending on how it has been generated. Many historians, for example, work with text produced from an analogue original using Optical Character Recognition (OCR). Often, there will be limited information available regarding the accuracy of the OCR, and the degree of accuracy may even vary within a single corpus (e.g. where digitized text has been produced over a period of years, and the software has gradually improved). The first step, then, is to try to correct for common OCR errors. These will vary depending on the type of text, the date at which the `original' was produced, and the nature of the font and typesetting.
One step that almost everyone takes is to tokenize the original character sequence into the words and word-like units. Tokenization is a more subtle and more powerful process than people expect. It is often done using regular expressions or scripts that have been circulating within the NLP community. Tokenization heuristics, however, can be badly confused by emoticons, creative orthography (e.g., U$A, sh!t), and missing whitespace. Multi-word terms are also challenging. Treating them as a single unit can dramatically alter the patterns in text. Many words that are individually ambiguous have clear, unmistakable meanings as terms, like “black hole" or “European Union". However, deciding what constitutes a multi-word term is a difficult problem. In writing systems like Chinese, tokenization is a research problem in its own right.
Beyond tokenization, common steps include lowercasing, removing punctuation, stemming (removing suffixes), lemmatization (converting inflections to a base lemma), and normalization, which has never been clearly defined, but often includes grouping abbreviations like “U.S.A." and “USA", ordinals like “1st" and “first", and variant spellings like “noooooo". The main goal of these steps is to improve the ratio of tokens (individual occurrences) to types (the distinct things in a corpus). Each step requires making additional assumptions about which distinctions are relevant: is “apple” different from “Apple”? Is “burnt” different from “burned”? Is “cool" different from “coooool"? Sometimes these steps can actively hide useful patterns, like social meaning BIBREF32. Some of us therefore try to do as little modification as possible.
From a multilingual perspective, English and Chinese have an unusually simple inflectional system, and so it is statistically reasonable to treat each inflection as a unique word type. Romance languages have considerably more inflections than English; many indigenous North American languages have still more. For these languages, unseen data is far more likely to include previously-unseen inflections, and therefore, dealing with inflections is more important. On the other hand, the resources for handling inflections vary greatly by language, with European languages dominating the attention of the computational linguistics community thus far.
We sometimes also remove words that are not relevant to our goals, for example by calculating vocabulary frequencies. We construct a “stoplist” of words that we are not interested in. If we are looking for semantic themes we might remove function words like determiners and prepositions. If we are looking for author-specific styles, we might remove all words except function words. Some words are generally meaningful but too frequent to be useful within a specific collection. We sometimes also remove very infrequent words: their occurrence counts are too low for robust patterns, and removing them helps reduce the vocabulary size.
The choice of processing steps can be guided by theory or knowledge about the domain as well as experimental investigation. When we have labels, predictive accuracy of a model is a way to assess the effect of the processing steps. In unsupervised settings, it is more challenging to understand the effects of different steps. Inferences drawn from unsupervised settings can be sensitive to pre-processing choices BIBREF33 . Stemming has been found to provide little measurable benefits for topic modeling and can sometimes even be harmful BIBREF34 . All in all, this again highlights the need to document these steps.
Finally, we can also mark up the data, e.g., by identifying entities (people, places, organizations, etc.) or parts of speech. Although many NLP tools are available for such tasks, they are often challenged by linguistic variation, such as orthographic variation in historical texts BIBREF35 and social media BIBREF32 . Moreover, the performance of NLP tools often drops when applying them outside the training domain, such as applying tools developed on newswire texts to texts written by younger authors BIBREF36 . Problems (e.g., disambiguation in named entity recognition) are sometimes resolved using considerable manual intervention. This combination of the automated and the manual, however, becomes more difficult as the scale of the data increases, and the `certainty' brought by the latter may have to be abandoned.
Tokenization heuristics, however, can be badly confused by emoticons, creative orthography (e.g., U$A, sh!t), and missing whitespace.
|
What emoticons or creative orthography are cited by the team in this paper?
|
U$A, sh!t.
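A tiny sketch of why such strings break tokenization heuristics (a generic letters-only regex, not the tokenizer any particular paper uses):

```python
import re

def naive_tokenize(text: str) -> list:
    # A common heuristic: keep runs of letters, drop everything else.
    return re.findall(r"[A-Za-z]+", text)

print(naive_tokenize("U$A"))                 # ['U', 'A'] -- creative orthography destroyed
print(naive_tokenize("sh!t"))                # ['sh', 't']
print(naive_tokenize("greetings:)from me"))  # ['greetings', 'from', 'me'] -- emoticon silently dropped
```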
|
null | false
| null |
What is Targeted Allele-Specific CRISPR/Cas9 Activation (TASCA)?
|
The CRISPR/Cas9 system is a gene-editing technology that allows for precise modification of the DNA sequence of a genome. Targeted Allele-Specific CRISPR/Cas9 Activation (TASCA) enables researchers to selectively activate a specific gene within a genome by targeting specific sequences of DNA and inducing the expression of the desired gene.
TASCA can be used in a variety of applications, such as gene therapy, drug discovery, and basic research on gene function. It offers a powerful tool for studying the role of specific genes in cellular processes and disease states.
|
|
null | false
| 122
|
The sentence-level classification task is an imbalanced binary classification problem that we address using BERT BIBREF0. We use BERTBASE, uncased, which consists of 12 self-attention layers and returns a 768-dimensional vector that represents a sentence. So as to make use of BERT for sentence classification, we include a fully connected layer on top of the BERT self-attention layers, which classifies the sentence embedding provided by BERT into the two classes of interest (propaganda or non-propaganda).
We attempt to exploit various data augmentation techniques to address the problem of class imbalance. Table TABREF17 shows the results of our experiments for different data augmentation techniques when, after shuffling the training data, we train the model on 75% of the training data and test it on the remaining 25% of the training data and the development data.
We observe that BERT without augmentation consistently outperforms BERT with augmentation in the experiments when the model is trained on 75% of the training data and evaluated on the rest, i.e. trained and evaluated on similar data coming from the same distribution. This is consistent with observations by Wei et al. wei2019eda that contextual word embeddings do not gain from data augmentation. The fact that we shuffle the training data prior to splitting it into training and testing subsets could imply that the model is learning to associate topic words, such as `Mueller', as propaganda. However, when we perform model evaluation using the development set, which is dissimilar to the training data, we observe that synonym insertion and word dropping techniques also do not bring performance gains, while random oversampling increases performance over base BERT by 4%. Synonym insertion provides results very similar to base BERT, while random deletion harms model performance, producing lower scores. We believe that this could be attributed to the fact that synonym insertion and random word dropping involve the introduction of noise to the data, while oversampling does not. As we are working with natural language data, this type of noise can in fact change the meaning of the sentence. Oversampling, on the other hand, purely increases the importance of the minority class by repeating training on the unchanged instances.
So as to better understand the aspects of oversampling that contribute to these gains, we perform a class-wise performance analysis of BERT with/without oversampling. The results of these experiments (Table TABREF18) show that oversampling increases the overall recall while maintaining precision. This is achieved by significantly improving the recall of the minority class (propaganda) at the cost of the recall of the majority class.
So far we have been able to establish that a) the training and test sets are dissimilar, thus requiring us to generalise our model, b) oversampling provides a method of generalisation, and c) oversampling does this while maintaining recall on the minority (and thus more interesting) class.
Given this we explore alternative methods of increasing minority class recall without a significant drop in precision. One such method is cost-sensitive classification, which differs from random oversampling in that it provides a more continuous-valued and consistent method of weighting samples of imbalanced training data; for example, random oversampling will inevitably emphasise some training instances at the expense of others. We detail our methods of using cost-sensitive classification in the next section. Further experiments with oversampling might have provided insights into the relationships between these methods, which we leave for future exploration.
However, when we perform model evaluation using the development set, which is dissimilar to the training, we observe that synonym insertion and word dropping techniques also do not bring performance gains, while random oversampling increases performance over base BERT by 4%.
|
Is random oversampling performing better in model evaluation?
|
Yes. It increases performance over base BERT by 4%.
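A minimal sketch of random oversampling as described above (a generic implementation, not the authors' code): duplicate minority-class instances until the classes balance, leaving the text itself unchanged:

```python
import random

def random_oversample(samples, labels, seed=0):
    rng = random.Random(seed)
    by_class = {}
    for s, l in zip(samples, labels):
        by_class.setdefault(l, []).append(s)
    target = max(len(group) for group in by_class.values())
    balanced = []
    for l, group in by_class.items():
        balanced += [(s, l) for s in group]
        # Repeat randomly chosen, unchanged minority-class instances.
        balanced += [(rng.choice(group), l) for _ in range(target - len(group))]
    rng.shuffle(balanced)
    return balanced
```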
|
null | false
| null |
What colors are typically associated with Christmas?
|
Red and green are the colors typically associated with Christmas.
|
|
null | false
| null |
Limestone (calcium carbonate, CaCO3) is a type of carbonate sedimentary rock which is the main source of the material lime. It is composed mostly of the minerals calcite and aragonite, which are different crystal forms of CaCO3. Limestone forms when these minerals precipitate out of water containing dissolved calcium. This can take place through both biological and nonbiological processes, though biological processes, such as the accumulation of corals and shells in the sea, have likely been more important for the last 540 million years. Limestone often contains fossils which provide scientists with information on ancient environments and on the evolution of life.
About 20% to 25% of sedimentary rock is carbonate rock, and most of this is limestone. The remaining carbonate rock is mostly dolomite, a closely related rock, which contains a high percentage of the mineral dolomite, CaMg(CO3)2. Magnesian limestone is an obsolete and poorly defined term used variously for dolomite, for limestone containing significant dolomite (dolomitic limestone), or for any other limestone containing a significant percentage of magnesium. Most limestone was formed in shallow marine environments, such as continental shelves or platforms, though smaller amounts were formed in many other environments. Much dolomite is secondary dolomite, formed by chemical alteration of limestone. Limestone is exposed over large regions of the Earth's surface, and because limestone is slightly soluble in rainwater, these exposures often are eroded to become karst landscapes. Most cave systems are found in limestone bedrock.
Limestone has numerous uses: as a chemical feedstock for the production of lime used for cement (an essential component of concrete), as aggregate for the base of roads, as white pigment or filler in products such as toothpaste or paints, as a soil conditioner, and as a popular decorative addition to rock gardens. Limestone formations contain about 30% of the world's petroleum reservoirs.
|
From the passage identify the usage of Limestone. Display the results in a numbered list format.
|
1. as a chemical feedstock for the production of lime used for cement (an essential component of concrete)
2. as aggregate for the base of roads
3. as white pigment or filler in products such as toothpaste or paints
4. as a soil conditioner
5. as a popular decorative addition to rock gardens
|
null | false
| 448
|
Prior work has suggested that persistent learning is the most effective method for learning a valid EBM density. Furthermore, works that learn a valid density have avoided rejuvenation because the incorporation of newly rejuvenated samples into the persistent bank ensures that EBM updates will always include samples that are not at the steady-state. However, persistent learning without rejuvenation has shortcomings mentioned in Section 3. We present hypothesized conditions for learning a valid density that motivate the design of our MCMC initialization:
• After a certain point in training, all samples used to update the EBM must be approximate steady-state samples of the current model $p(x; \theta)$.
• Persistent samples that are newly rejuvenated (up to about 50K Langevin steps since rejuvenation, and possibly many more) cannot be approximate steady-state samples for any known rejuvenation source, including data, generators, and noise.
• Persistent samples that have undergone sufficiently many lifetime Langevin updates, for a model whose weights are changing very slowly, can be approximate steady-state samples.
Figure: Visualization of our longrun initialization procedure. Newly rejuvenated samples must remain in the burn-in bank until they approach the model steady-state, at which point they move to the update bank to be used for model gradients.
Both the second and third points are corroborated by prior work as well as our own observations. The third point means that persistent states updated with shortrun Langevin can eventually act as longrun Langevin samples if the optimizer learning rate is small, because the EBM samples from previous timesteps are essentially samples from the current EBM.
Learning a valid density that includes rejuvenation while satisfying the conditions above requires separating the newly rejuvenated samples from samples that are used to update the EBM. This leads us to introduce two persistent image banks: one for newly rejuvenated samples, and one for samples that will be used to update the EBM. Samples in the newly rejuvenated bank that have been updated sufficiently many times will eventually replace samples from the bank used to update the EBM, at which point newly rejuvenated states will be added to the burn-in bank. The figure above shows a visualization of the MCMC initialization method. As in Section 3, we will use a pre-trained generator to efficiently obtain high-quality rejuvenated samples so that the generation process is fully synthetic, although data samples could be used as well. Our goal is to preserve the sample quality for an arbitrary number of MCMC steps. We note that this is a sufficient but not necessary condition for learning a valid EBM density. Nonetheless, stable sampling is an important step towards rigorous probabilistic EBMs. Even with our improved initialization, we find that extremely longrun trajectories of 1 million or more MCMC steps still tend to oversaturate, although to a much lesser degree. To further stabilize the appearance of extremely longrun Langevin samples, we include prior energy terms in the model. Our longrun EBMs have the form

$$U_{\text{longrun}}(x; \theta) = U(x; \theta) + U_0(x; \theta_0) + \frac{\Vert x \Vert^2}{2\sigma^2},$$

where $U(x; \theta)$ is the model whose weights are updated, $U_0(x; \theta_0)$ is a prior EBM with fixed weights $\theta_0$, and $\sigma$ is a parameter controlling the strength of a Gaussian prior. We used a prior EBM in a shortrun manner. The role of the prior EBM is to provide some stability but also a tendency to oversaturate at longer trajectories, so that the current EBM learns to correct oversaturation. The Gaussian prior is meant to discourage unbounded activations outside of the image hypercube. Further discussion is in Appendix H. We find that including both of these terms significantly improves the ability to learn quality synthesis over long trajectories.
Furthermore, works that learn a valid density have avoided rejuvenation because the incorporation of newly rejuvenated samples into the persistent bank ensures that EBM updates will always include samples that are not at the steady-state.
|
It seems persistent samples that are newly rejuvenated (up to about 50K Langevin steps since rejuvenation, and possibly many more) cannot be approximate steady-state samples, is it correct?
|
Your understanding is correct. Intuitively, current rejuvenation sources always generate samples very "far" from the steady-state, even if they look quite similar to steady-state samples. Steady-state convergence can only be achieved after a long burn-in process (empirically, at least 50K steps for our method; other methods/MCMC samplers might require more).
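For intuition, here is a minimal PyTorch sketch of the Langevin burn-in dynamics being discussed, assuming `energy` is a callable EBM; the step count and step size are illustrative only:

```python
import torch

def langevin_burn_in(x, energy, n_steps=50_000, step_size=1e-2):
    """Unadjusted Langevin dynamics:
    x <- x - (step/2) * grad U(x) + sqrt(step) * N(0, I).
    Newly rejuvenated samples would run many such steps in the burn-in
    bank before being treated as approximate steady-state samples."""
    for _ in range(n_steps):
        x = x.detach().requires_grad_(True)
        grad = torch.autograd.grad(energy(x).sum(), x)[0]
        x = x - 0.5 * step_size * grad + step_size ** 0.5 * torch.randn_like(x)
    return x.detach()
```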
|
null | false
| null |
Kate Elizabeth Pickett OBE FFPH FAcSS FRSA (born 1965) is a British epidemiologist and political activist who is Professor of Epidemiology in the Department of Health Sciences at the University of York, and was a National Institute for Health and Care Research Career Scientist from 2007–2012. She co-authored (with Richard G. Wilkinson) The Spirit Level: Why More Equal Societies Almost Always Do Better and is a co-founder of The Equality Trust. Pickett was awarded a 2013 Silver Rose Award from Solidar for championing equality and the 2014 Charles Cully Memorial Medal by the Irish Cancer Society.
|
Please list down some details about Kate Elizabeth Pickett from given text
|
1. Kate Elizabeth Pickett, a British epidemiologist and political activist, was a Career Scientist at the National Institute for Health and Care Research from 2007 to 2012 and is currently a Professor of Epidemiology in the Department of Health Sciences at the University of York.
2. She co-founded The Equality Trust and co-authored The Spirit Level: Why More Equal Societies Almost Always Do Better with Richard G. Wilkinson.
3. For promoting equality, Pickett received the Solidar Silver Rose Award in 2013 and the Irish Cancer Society's Charles Cully Memorial Medal in 2014.
|
null | false
| 290
|
Constituency BIBREF0 and dependency grammars BIBREF1 , BIBREF2 are the two main abstractions for representing the syntactic structure of a given sentence, and each of them has its own particularities BIBREF3 . While in constituency parsing the structure of sentences is abstracted as a phrase-structure tree (see Figure FIGREF6 ), in dependency parsing the tree encodes binary syntactic relations between pairs of words (see Figure FIGREF6 ).
When it comes to developing natural language processing (nlp) parsers, these two tasks are usually considered as disjoint tasks, and their improvements therefore have been obtained separately BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 .
Despite the potential benefits of learning across representations, there have been few attempts in the literature to do this. klein2003fast considered a factored model that provides separate methods for phrase-structure and lexical dependency trees and combined them to obtain optimal parses. With a similar aim, ren2013combine first compute the n best constituency trees using a probabilistic context-free grammar, convert those into dependency trees using a dependency model, compute a probability score for each of them, and finally rerank the most plausible trees based on both scores. However, these methods are complex and intended for statistical parsers. Instead, we propose an extremely simple framework to learn across constituency and dependency representations.
Instead, we propose an extremely simple framework to learn across constituency and dependency representations.
|
What kind of framework do they propose?
|
A framework to learn across constituency and dependency representations.
|
null | false
| null |
Let me know some different ways that someone can get healthier.
|
Healthy habits are incremental, so making small changes every day can make a difference over time. Some ways to be healthier include drinking more water each day, as most people do not consume enough water. You could also try to increase your activity, perhaps by scheduling 30 minutes of cardiovascular exercise each day. Try to eat more whole foods, including fruit and vegetables, and avoid processed foods. And of course, at least 7 hours of sleep each night is important for full recovery.
|
|
null | false
| null |
What is the best way to make a camp fire
|
Nothing screams camping out in the wilderness like building a campfire. The memories, conversations, scents, roasted marshmallows, and crackling of the wood are all part of the experience. In order to build a strong campfire, you need to perform the following steps.
Step 1. Clear a perimeter that is roughly 12 feet in diameter. Make sure you are in an open area, where tree limbs are not hanging over the campfire, to avoid secondary fires. Before you start the fire, gather all of the necessary components, such as dry kindling/twigs, larger sticks, larger logs, paper, and a lighter.
Step 2. Build a small teepee shape with the dry kindling and dry small logs. I would recommend adding paper with the kindling to assist in starting the fire.
Step 3. Stack larger sticks around the teepee structure in a square formation. Build up multiple levels, so the square structure is higher than the teepee. This will help the fire expand and spread to the surrounding structure.
Step 4. Light the paper in the teepee with a match or lighter, and watch the dry kindling start to catch. At this early stage, you will most likely see smoke. You may need to blow on the teepee structure lightly in order to add some oxygen to the fire and help it catch to the larger-stick square structure.
Step 5. Once the square structure begins to light, start to add larger logs diagonally across the structure. The diagonal formation will fuel the fire and expand it upwards.
Step 6. Keep adding larger logs to maintain the fire. With the prior kindling, sticks, and logs already hot, the new larger logs will catch quickly and self-ignite.
Step 7. When you are done enjoying the fire, have a couple of buckets of water nearby to extinguish it. Do not leave the campfire unattended, as the hot coals and wind could cause an unintended spread.
|
|
null | false
| null |
Chichén Itzá (often spelled Chichen Itza in English and traditional Yucatec Maya) was a large pre-Columbian city built by the Maya people of the Terminal Classic period. The archeological site is located in Tinúm Municipality, Yucatán State, Mexico.
Chichén Itzá was a major focal point in the Northern Maya Lowlands from the Late Classic (c. AD 600–900) through the Terminal Classic (c. AD 800–900) and into the early portion of the Postclassic period (c. AD 900–1200). The site exhibits a multitude of architectural styles, reminiscent of styles seen in central Mexico and of the Puuc and Chenes styles of the Northern Maya lowlands. The presence of central Mexican styles was once thought to have been representative of direct migration or even conquest from central Mexico, but most contemporary interpretations view the presence of these non-Maya styles more as the result of cultural diffusion.
Chichén Itzá was one of the largest Maya cities and it was likely to have been one of the mythical great cities, or Tollans, referred to in later Mesoamerican literature. The city may have had the most diverse population in the Maya world, a factor that could have contributed to the variety of architectural styles at the site.
The ruins of Chichén Itzá are federal property, and the site's stewardship is maintained by Mexico's Instituto Nacional de Antropología e Historia (National Institute of Anthropology and History). The land under the monuments had been privately owned until 29 March 2010, when it was purchased by the state of Yucatán.
Chichén Itzá is one of the most visited archeological sites in Mexico with over 2.6 million tourists in 2017.
|
What is Chichen Itza?
|
Chichén Itzá was one of the largest Maya cities which was a major focal point around AD 900. It is now one of the most visited archeological sites in Mexico.
|
null | false
| 162
|
Through the successive MP iterations, it could be argued that MPAD implicitly captures some soft notion of the hierarchical structure of documents (words $\rightarrow $ bigrams $\rightarrow $ compositions of bigrams, etc.). However, it might be beneficial to explicitly capture document hierarchy. Hierarchical architectures have brought significant improvements to many NLP tasks, such as language modeling and generation BIBREF24, BIBREF25, sentiment and topic classification BIBREF26, BIBREF27, and spoken language understanding BIBREF28, BIBREF29. Inspired by this line of research, we propose several hierarchical variants of MPAD, detailed in what follows. In all of them, we represent each sentence in the document as a word co-occurrence network, and obtain an embedding for it by applying MPAD as previously described.
MPAD-sentence-att. Here, the sentence embeddings are simply combined through self-attention.
MPAD-clique. In this variant, we build a complete graph where each node represents a sentence. We then feed that graph to MPAD, where the feature vectors of the nodes are initialized with the sentence embeddings previously obtained.
MPAD-path. This variant is similar to the clique one, except that instead of a complete graph, we build a path according to the natural flow of the text. That is, two nodes are linked by a directed edge if the two sentences they represent follow each other in the document.
MPAD-sentence-att. Here, the sentence embeddings are simply combined through self-attention. MPAD-clique. In this variant, we build a complete graph where each node represents a sentence. We then feed that graph to MPAD, where the feature vectors of the nodes are initialized with the sentence embeddings previously obtained. MPAD-path. This variant, shown in Fig. 1, is similar to the clique one, except that instead of a complete graph, we build a path according to the natural flow of the text.
|
How many variants of MPAD are there?
|
Three. MPAD-sentence-att, MPAD-clique and MPAD-path.
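A sketch of how the clique and path document graphs could be assembled (a hypothetical helper using NetworkX; MPAD itself is not shown):

```python
import networkx as nx

def document_graph(sentence_embeddings, variant="path"):
    n = len(sentence_embeddings)
    if variant == "clique":
        g = nx.complete_graph(n)                       # MPAD-clique
    else:
        g = nx.path_graph(n, create_using=nx.DiGraph)  # MPAD-path: follows text order
    for i, emb in enumerate(sentence_embeddings):
        g.nodes[i]["x"] = emb  # node features = sentence embeddings from MPAD
    return g
```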
|
null | false
| null |
An engine or motor is a machine designed to convert one or more forms of energy into mechanical energy.
Available energy sources include potential energy (e.g. energy of the Earth's gravitational field as exploited in hydroelectric power generation), heat energy (e.g. geothermal), chemical energy, electric potential and nuclear energy (from nuclear fission or nuclear fusion). Many of these processes generate heat as an intermediate energy form, so heat engines have special importance. Some natural processes, such as atmospheric convection cells convert environmental heat into motion (e.g. in the form of rising air currents). Mechanical energy is of particular importance in transportation, but also plays a role in many industrial processes such as cutting, grinding, crushing, and mixing.
|
What are the potential source of energy for an engine?
|
Potential sources of energy for an engine include potential energy, heat energy, chemical energy, electric potential, and nuclear energy.
|
null | false
| null |
What is a circle?
|
A circle is a geometric shape. It is primarily defined by a center point and a circumference traced around that point at a fixed radius. What distinguishes a circle from other shapes is that it does not have any edges.
|
|
1611.00514
| true
| null |
Since the introduction of i-vectors in BIBREF0 , the speaker recognition community has seen a significant increase in recognition performance. i-Vectors are low-dimensional representations of Baum-Welch statistics obtained with respect to a GMM, referred to as universal background model (UBM), in a single subspace which includes all characteristics of speaker and inter-session variability, named total variability matrix BIBREF0 . We trained on each acoustic feature a full covariance, gender-independent UBM model with 2048 Gaussians followed by a 600-dimensional i-vector extractor to establish our MFCC- and PLP-based i-vector systems. The unlabeled set of development data was used in the training of both the UBM and the i-vector extractor. The open-source Kaldi software has been used for all these processing steps BIBREF12 .
The unlabeled set of development data was used in the training of both the UBM and the i-vector extractor.
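A toy sketch of the GMM-UBM idea described above (not the paper's Kaldi recipe; the component count is shrunk from 2048 for illustration, and the features are stand-ins):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

frames = np.random.randn(5000, 20)  # stand-in for pooled MFCC/PLP frames
ubm = GaussianMixture(n_components=8, covariance_type="full").fit(frames)

# Baum-Welch statistics w.r.t. the UBM -- the inputs to i-vector extraction.
post = ubm.predict_proba(frames)   # frame-level posteriors, (n_frames, n_comp)
N = post.sum(axis=0)               # zeroth-order statistics
F = post.T @ frames                # first-order statistics
```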
|
Do they single out a validation set from the fixed SRE training set?
|
No.
|
1909.01013
| false
| null |
In this section, we compare our model with state-of-the-art systems, including those with different degrees of supervision. The baselines include: (1) Procrustes BIBREF11, which learns a linear mapping through Procrustes Analysis BIBREF36. (2) GPA BIBREF37, an extension of Procrustes Analysis. (3) GeoMM BIBREF38, a geometric approach which learns a Mahalanobis metric to refine the notion of similarity. (4) GeoMM$_{semi}$, iterative GeoMM with weak supervision. (5) Adv-C-Procrustes BIBREF11, which refines the mapping learned by Adv-C with iterative Procrustes, which learns the new mapping matrix by constructing a bilingual lexicon iteratively. (6) Unsup-SL BIBREF13, which integrates a weak unsupervised mapping with a robust self-learning. (7) Sinkhorn-BT BIBREF28, which combines Sinkhorn distance BIBREF29 and back-translation. For fair comparison, we integrate our model with two iterative refinement methods (Procrustes and GeoMM$_{semi}$).
In this section, we compare our model with state-of-the-art systems, including those with different degrees of supervision. The baselines include: (1) Procrustes BIBREF11, which learns a linear mapping through Procrustes Analysis BIBREF36. (2) GPA BIBREF37, an extension of Procrustes Analysis. (3) GeoMM BIBREF38, a geometric approach which learns a Mahalanobis metric to refine the notion of similarity. (4) GeoMM$_{semi}$, iterative GeoMM with weak supervision. (5) Adv-C-Procrustes BIBREF11, which refines the mapping learned by Adv-C with iterative Procrustes, which learns the new mapping matrix by constructing a bilingual lexicon iteratively. (6) Unsup-SL BIBREF13, which integrates a weak unsupervised mapping with a robust self-learning. (7) Sinkhorn-BT BIBREF28, which combines Sinkhorn distance BIBREF29 and back-translation.
|
What are current state-of-the-art methods that consider the two tasks independently?
|
The answers are shown as follows:
* Procrustes
* GPA
* GeoMM
* GeoMM$_{semi}$
* Adv-C-Procrustes
* Unsup-SL
* Sinkhorn-BT
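For reference, the Procrustes baseline's core step has a well-known closed form; a minimal NumPy sketch, where X and Y are dictionary-aligned source/target embeddings (random stand-ins here):

```python
import numpy as np

def procrustes_mapping(X, Y):
    """Orthogonal W minimizing ||X W - Y||_F, via SVD of X^T Y."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

X = np.random.randn(5000, 300)  # source-language embeddings (seed pairs)
Y = np.random.randn(5000, 300)  # target-language embeddings (seed pairs)
W = procrustes_mapping(X, Y)
mapped = X @ W                  # source vectors mapped into the target space
```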
|
null | false
| null |
Animals are multicellular, eukaryotic organisms in the biological kingdom Animalia. With few exceptions, animals consume organic material, breathe oxygen, are able to move, can reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. As of 2022, 2.16 million living animal species have been described—of which around 1.05 million are insects, over 36,000 are fishes, around 11,700 are reptiles, over 11,100 are birds, and 6,596 mammals—but it has been estimated there are around 7.77 million animal species in total. Animals range in length from 8.5 micrometres (0.00033 in) to 33.6 metres (110 ft). They have complex interactions with each other and their environments, forming intricate food webs. The scientific study of animals is known as zoology.
|
Of the living animal species which have been described, which group has the most?
|
There are 1.05 million (1,050,000) insect species, which is the largest group in the animal kingdom.
|
null | false
| null |
Give me a list of some of the core beliefs in Stoicism
|
1. It is important to live according to Nature for a smooth flowing life.
2. Happiness is found in Virtue, our values and choices.
3. We can only control our internal world, i.e. thoughts, opinions, and decisions, but have no control over external events.
4. We are responsible for our own life and hence should not complain or blame anyone else.
5. For our personal development, we need to act in society and try to live in harmony and cooperation with others.
6. Progress is driven by having a pragmatic attitude, not via perfection.
|
|
null | false
| null |
What genres of music were popular in the late 1990s?
|
In the late 90s, there were two dominant trends in popular music: boy bands and the so-called "Latin Explosion." Boy bands usually consisted of 5 members, each with a distinct "look" desirable to a range of (usually) teenage girl fans. Their songs were usually "bubblegum pop" or love songs, all with catchy hooks or choruses. Popular groups in the late 90s were the Backstreet Boys, *NSYNC, 98 Degrees, and O-Town. "Latin Explosion" artists were Hispanic/Latino or of Hispanic/Latino descent, and their songs sometimes featured Spanish words or phrases, along with Latin beats in otherwise standard pop arrangements. Some popular acts include Ricky Martin, Jennifer Lopez, Marc Anthony, and Enrique Iglesias.
|
|
null | false
| null |
tell me whether these are European or American cities: New York, Milan, Paris, Chicago, San Francisco, London
|
European: Milan, Paris, London
American: New York, Chicago, San Francisco
|
|
null | false
| null |
Tina Turner (born Anna Mae Bullock; November 26, 1939) is an American-born and naturalized Swiss retired singer, dancer and actress. Widely referred to as the "Queen of Rock 'n' Roll", she rose to prominence as the lead singer of the Ike & Tina Turner Revue before launching a successful career as a solo performer.
Turner began her career with Ike Turner's Kings of Rhythm in 1957. Under the name Little Ann, she appeared on her first record, "Boxtop", in 1958. In 1960, she debuted as Tina Turner with the hit duet single "A Fool in Love". The duo Ike & Tina Turner became "one of the most formidable live acts in history". They released hits such as "It's Gonna Work Out Fine", "River Deep – Mountain High", "Proud Mary", and "Nutbush City Limits" before disbanding in 1976.
|
Who is Tina Turner
|
Tina Turner was born as Anna Mae Bullock on November 26, 1939 and is an American-born and naturalized Swiss retired singer, dancer and actress. Widely referred to as the Queen of Rock 'n' Roll, she rose to prominence as the lead singer of the Ike & Tina Turner Revue before launching a successful career as a solo performer.
|
null | false
| 88
|
We propose a unified representation for knowledge graphs, utilizing both the structure and the text descriptions of entities. Experiments show that our proposed joint representation learning with a gating mechanism is effective, which benefits modeling the meaning of an entity.
In the future, we will consider the following research directions to improve our model:
In the future, we will consider the following research directions to improve our model: 1. Currently, our score function is based on TransE, since the main focus of this work is how to integrate both structural and textual information. We believe our models can be further improved with recently proposed knowledge graph embedding models. 2. We will try to design a dynamic gating strategy, which is estimated according to the context information. 3. Intuitively, images of relations and entities may further improve the representation.
|
What is future work planned for improvement?
|
In the future, they will consider the following research directions to improve their model: 1. Currently, their score function is based on TransE, since the main focus of this work is how to integrate both structural and textual information. They believe their models can be further improved with recently proposed knowledge graph embedding models. 2. They will try to design a dynamic gating strategy, which is estimated according to the context information. 3. Intuitively, images of relations and entities may further improve the representation.
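Since the score function is TransE-based, a minimal sketch of the two ingredients, the TransE score and a gated combination of structural and textual embeddings, may help (generic code, not the authors' implementation; the gate would be learned):

```python
import numpy as np

def transe_score(h, r, t):
    # TransE trains embeddings so that h + r ≈ t; a smaller distance
    # means a more plausible triple (h, r, t).
    return -np.linalg.norm(h + r - t)

def joint_entity(e_struct, e_text, gate):
    # Gating mechanism: elementwise interpolation between the
    # structure-based and description-based entity embeddings.
    return gate * e_struct + (1.0 - gate) * e_text
```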
|
1905.07464
| false
| null |
Each drug label is a collection of sections (e.g., DOSAGE & ADMINISTRATION, CONTRAINDICATIONS, and WARNINGS) where each section contains one or more sentences. Each sentence is annotated with a list of zero or more mentions and interactions. The training data released for this task contains 22 drug labels, referred to as Training-22, with gold standard annotations. Two test sets of 57 and 66 drug labels, referred to as Test Set 1 and 2 respectively, with gold standard annotations are used to evaluate participating systems. As Training-22 is a relatively small dataset, we additionally utilize an external dataset with 180 annotated drug labels dubbed NLM-180 BIBREF5 (more later). We provide summary statistics about these datasets in Table TABREF3 . Test Set 1 closely resembles Training-22 with respect to the sections that are annotated. However, Test Set 1 is more sparse in the sense that there are more sentences per drug label (144 vs. 27), with a smaller proportion of those sentences having gold annotations (23% vs. 51%). Test Set 2 is unique in that it contains annotations from only two sections, namely DRUG INTERACTIONS and CLINICAL PHARMACOLOGY, the latter of which is not represented in Training-22 (nor Test Set 1). Lastly, Training-22, Test Set 1, and Test Set 2 all vary with respect to the distribution of interaction types, with Training-22, Test Set 1, and Test Set 2 containing a higher proportion of PD, UN, and PK interactions respectively.
The training data released for this task contains 22 drug labels, referred to as Training-22, with gold standard annotations.
As Training-22 is a relatively small dataset, we additionally utilize an external dataset with 180 annotated drug labels dubbed NLM-180 BIBREF5 (more later).
|
What training data did they use?
|
The answers are shown as follows:
* Training-22
* NLM-180
|
null | false
| 484
|
As presented in this section, we conducted two series of experiments to analyze the properties of IMTC. First, we qualitatively evaluated the diversity of options learned by IMTC with intrinsic rewards, without any extrinsic rewards. Second, we quantitatively tested the reusability of learned options by task adaptation on a specific task. [Figure caption residue: the magnitude of intra-option policies tends to be larger with constant rewards; among options learned by other methods, OC produces a dead option 3 that terminates everywhere and never-ending options 0 and 1, and while all methods avoid learning identical intra-option policies, their policies cover only two directions.]
As a baseline termination-learning method, we compared our method with OC, which is trained with VIC rewards during pre-training. We did not compare IMTC with TC because our TC implementation failed to learn options with relatively small termination regions as reported in the paper, and there is no official public code for TC. During pre-training without extrinsic rewards, IMTC receives intrinsic rewards when the current option terminates. We compare three IMTC variants with different intrinsic rewards: (i) VIC, (ii) RVIC, and (iii) a constant value ($R_{IMTC} = 0.01$). Note that $R_{IMTC} = 0.01$ is chosen from [0.1, 0.05, 0.01] based on the task adaptation results. We also compare IMTC with vanilla VIC and RVIC with fixed termination probabilities. We used $\forall x\, \beta_o(x) = 0.1$ since it performed the best in task adaptation experiments, while 0.05 was used in prior work. Note that RVIC's objective $I(X_s; O \mid x_f)$ is different from ours, while IMTC and VIC share almost the same objective. Thus, the use of VIC is more natural, and the combination with RVIC is tested to show the applicability of IMTC. Further details of our VIC and RVIC implementation are found in Appendix B. In order to isolate the effect of the different methods for learning the termination function $\beta_o$, the rest of the implementation is the same for all these methods. That is, OC, vanilla VIC, and vanilla RVIC are also based on PPO and the advantage estimation methods in Section 4.2. In this section, we fix the number of options at $|O| = 4$ for all option-learning methods. We further investigated the effect of the number of options in Appendix C, where we confirmed that $|O| = 4$ is sufficient for most domains. All environments that we used for experiments are implemented on the MuJoCo physics simulator. We further describe the details in Appendix C.
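To make the protocol concrete, here is a toy sketch (not the paper's code) of the call-and-return option execution described above: the active option terminates with probability $\beta_o(x)$, and an intrinsic reward is granted at termination. The environment, termination, and reward functions below are illustrative stand-ins.

```python
import numpy as np

# Hedged sketch of option execution with learned termination probabilities.
rng = np.random.default_rng(0)
n_options, horizon = 4, 50

def beta(option, state):            # stand-in for the termination network
    return 0.1

def mu(state):                      # uniform option-selection policy
    return int(rng.integers(n_options))

def intrinsic_reward(option, x_start, x_final):
    return 0.01                     # the constant-reward variant R_IMTC

state = 0
option, x_start = mu(state), state
for t in range(horizon):
    state += int(rng.integers(-1, 2))        # stand-in environment transition
    if rng.random() < beta(option, state):   # the active option terminates
        r = intrinsic_reward(option, x_start, state)
        print(f"t={t}: option {option} terminated at {state}, reward {r}")
        option, x_start = mu(state), state
```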
Option learning from intrinsic rewards: We now qualitatively compare the options learned by IMTC with the options of other methods. Learned options depend on the reward structure in the environment, which enables manually designing good reward functions for learning diverse options. Thus, we employed a reward-free RL setting where no reward is given to agents. Instead, each compared method uses some intrinsic rewards, as explained. We fix $\mu$ as $\mu(o|x) = \frac{1}{|O|}$ in this experiment, since we assume that the future tasks are uniformly distributed. Intra-option policies are trained by PPO and independent GAE (8). We show network architectures and hyperparameters in Appendix C. We set the episode length to $1 \times 10^4$, i.e., an agent is reset to its starting position after $1 \times 10^4$ steps. For all visualizations, we chose the best one from five independent runs with different random seeds. We visualized learned options in the PointReach environment shown in the figure. In this environment, an agent controls a ball initially placed at the center of the room. The state space consists of positions $(x, y)$ and velocities $(\Delta x, \Delta y)$ of an agent, and the action space consists of accelerations $(\frac{\Delta x}{\Delta t}, \frac{\Delta y}{\Delta t})$. The figure shows the options learned in this environment after $4 \times 10^6$ steps. Each arrow represents the mean value of intra-option policies, and the heatmaps represent $\beta_o$. In this experiment, we observed the effect of IMTC clearly, for both termination regions and intra-option policies. Interestingly, we do not see clear differences between options learned with VIC and RVIC rewards, while constant rewards tend to make options more peaked. OC failed to learn meaningful termination regions: options 0 and 1 never terminate, and option 3 terminates almost everywhere. This result confirms that IMTC can certainly diversify options. Moreover, compared to vanilla VIC and RVIC, intra-option policies learned by IMTC with VIC or RVIC rewards are clearer, in terms of both the magnitude and directions of policies. We believe that this is because diversifying termination regions gives more biased samples to the option classifiers employed by VIC and RVIC. Transferring skills via task adaptation: Now we quantitatively test the reusability of learned options by task adaptation with specific reward functions. Specifically, we first trained agents with intrinsic rewards as per the previous section. Then we transferred agents to an environment with the same state and action space but with external rewards. We prepared multiple reward functions, which we call tasks, for each domain and evaluated the averaged performance over tasks. We compare IMTC with OC, vanilla VIC, vanilla RVIC, and PPO without pre-training. Also, we compare three variants of IMTC with different intrinsic rewards during pre-training. For a fair comparison, UGAE (9) and PPO are used for all option-learning methods. Note that we found UGAE very effective in these experiments, as we show in the ablation study in Appendix C.6. For vanilla VIC and vanilla RVIC, the termination probability is fixed to 0.1 through pre-training and task adaptation. $\epsilon$-greedy based on $Q_O$ with $\epsilon = 0.1$ is used as the option selection policy $\mu$. We hypothesize that diverse options learned by IMTC can help quickly adapt to given tasks, supposing the diversity of tasks.
The figure shows all domains used for task adaptation experiments. For simplicity, all tasks have goal-based sparse reward functions, i.e., an agent receives $R_t = 1.0$ when it satisfies a goal condition, and otherwise a control cost of $-0.0001$ is given. Red circles show possible goal locations for each task. When the agent fails to reach the goal after 1000 steps, it is reset to a starting position. PointReach, SwimmerReach, and AntReach are simple navigation tasks where an agent aims simply to navigate itself to the goal. We also prepared tasks with object manipulation: in PointBilliard and AntBilliard an agent aims to kick the blue ball to the goal position, and in PointPush and AntPush, it has to push the block out of the way to the goal. We pre-trained option-learning agents for $4 \times 10^6$ environmental steps and additionally trained them for $1 \times 10^6$ steps on each task. The figure shows learning curves and scatter plots drawn from five independent runs with different random seeds per domain. Here, we observed that IMTC with VIC or RVIC rewards performed the best or was comparable to the baselines. IMTC with VIC performed better than OC with VIC except for AntReach, which backs up the effectiveness of diversifying termination regions for learning reusable options. Also, IMTC with VIC and IMTC with RVIC respectively performed better in most of the tasks than VIC and RVIC with fixed termination probabilities. This result suggests that IMTC can boost the performance of option-learning methods based on option classifiers, even when the objective is different, as with RVIC. On the other hand, IMTC with constant rewards ($R_{IMTC} = 0.01$) performed worse than IMTC with VIC or RVIC rewards, although it also learned diverse options as shown in the figure, suggesting the importance of adjusting rewards. We further analyzed the evolution of intrinsic rewards of VIC and RVIC in Appendix C.5. In addition, we can observe that IMTC's performance is especially better than other methods in the relatively complex PointBilliard, AntBilliard, and AntPush domains, where object manipulation is required. Considering that manipulated balls and boxes move faster than agents in these domains, a choice of options can lead to larger differences in the future state. IMTC is suitable for these domains since it maximizes the diversity of the resulting states, while PPO struggles to learn. In contrast, IMTC's performance is close to other methods in the Reach tasks, where the goal states are relatively close to the starting states in terms of Euclidean distances.
Gridworld experiments and limitations of the method: Although IMTC successfully learned diverse options in the MuJoCo experiments, our analysis in the figure shows the possibility of learning options that are not interesting but have large MI. We further investigated this possibility by visualizing options in a classical four-rooms gridworld in Appendix C.8. Interestingly, we observed that IMTC could fall into diverse but meaningless options in that environment. We believe that IMTC is often sufficient in a large environment where a randomly-initialized agent rarely produces the same trajectory. However, when the number of possible trajectories is small, diversifying the destinations can be insufficient. In such cases, it may be necessary to extend IMTC to diversify whole trajectories, as done in prior work.
First, we qualitatively evaluated the diversity of options learned by IMTC with intrinsic rewards, without any extrinsic rewards.
|
Maybe you should include Termination Critic as a baseline in the large scale experiments?
|
Unfortunately, as we briefly mentioned at the beginning of Section 5, we failed to reproduce options with small entropy with our implementation of the termination critic. A major difficulty we had was in training the option transition model $P_o(x_f|x_s)$ and the marginalized $P_o^{\mu}(x_f|x_s)$. More precisely, in our implementation, $P_o$ produced close values for each option, leading to similar termination regions per option. This failure motivated our design decision of approximating $p_O(o|x_s,x_f)$ instead of $P_o$. We will clarify this point in the revised manuscript.
|
null | false
| 42
|
We evaluated our attention transformations on three language pairs. We focused on small datasets, as they are the most affected by coverage mistakes. We use the IWSLT 2014 corpus for De-En, the KFTT corpus for Ja-En BIBREF19 , and the WMT 2016 dataset for Ro-En. The training sets have 153,326, 329,882, and 560,767 parallel sentences, respectively. Our reason to prefer smaller datasets is that this regime is what brings more adequacy issues and demands more structural biases, hence it is a good test bed for our methods. We tokenized the data using the Moses scripts and preprocessed it with subword units BIBREF20 with a joint vocabulary and 32k merge operations. Our implementation was done on a fork of the OpenNMT-py toolkit BIBREF21 with the default parameters. We used a validation set to tune hyperparameters introduced by our model. Even though our attention implementations are CPU-based using NumPy (unlike the rest of the computation which is done on the GPU), we did not observe any noticeable slowdown using multiple devices.
As baselines, we use softmax attention, as well as two recently proposed coverage models:
We also experimented combining the strategies above with the sparsemax transformation.
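For concreteness, here is a hedged NumPy sketch of the sparsemax transformation (Martins and Astudillo, 2016) used in these experiments, which maps a score vector to a sparse probability vector via Euclidean projection onto the probability simplex:

```python
import numpy as np

# Hedged sketch of sparsemax: sort scores, find the support size and the
# threshold tau, then clip; the output sums to 1 and may contain exact zeros.
def sparsemax(z):
    z = np.asarray(z, dtype=float)
    z_sorted = np.sort(z)[::-1]
    cumsum = np.cumsum(z_sorted)
    k = np.arange(1, len(z) + 1)
    support = 1 + k * z_sorted > cumsum        # which entries stay positive
    k_z = k[support][-1]
    tau = (cumsum[support][-1] - 1) / k_z
    return np.maximum(z - tau, 0.0)

print(sparsemax([1.5, 0.2, -0.3]))  # [1. 0. 0.] -- all mass on one entry
```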
As evaluation metrics, we report tokenized BLEU and METEOR ( BIBREF22 ), as well as two new metrics that we describe next to account for over- and under-translation.
We can see that on average, the sparse models (csparsemax as well as sparsemax combined with coverage models) have higher scores on both BLEU and METEOR.
|
Do the sparse models have higher scores on METEOR?
|
Yes.
|
null | false
| null |
Which of the following are studio albums created by J. Cole: KOD, The Off-Season, Illmatic, Reasonable Doubt, The Eminem Show, Born Sinner
|
KOD, The Off-Season, Born Sinner
|
|
null | false
| null |
What's the significance of the lighthouse in Virginia Woolf's "To the Lighthouse"?
|
The Lighthouse symbolizes human desire, a force that pulsates over the indifferent sea of the natural world and guides people’s passage across it. Yet even as the Lighthouse stands constant night and day, season after season, it remains curiously unattainable. James’ frustrated desire to visit the Lighthouse begins the novel, and Mrs. Ramsay looks at the Lighthouse as she denies Mr. Ramsay the profession of love he wants so badly at the end of Chapter 1. James, finally reaching the Lighthouse in Chapter 3 a decade after he’d first wanted to go, sees that, up close, the Lighthouse looks nothing like it does from across the bay. That misty image he’d desired from a distance remains unattainable even when he can sail right up to the structure it’s supposedly attached to. The novel’s title can be understood as a description of experience itself: one moves through life propelled by desire towards the things one wants, and yet seems rarely to reach them. One’s life, then, is the process of moving towards, of reaching, of desiring. It is “to” the Lighthouse, not “at” it.
|
|
null | false
| null |
Who is the best soccer player ?
|
This is a question that is often debated over dinner. To remain as factual as possible, it would certainly be appropriate to mention the name of the current Ballon d'Or winner: Karim Benzema.
|
|
1810.06743
| true
| null |
The contributions of this work are:
The contributions of this work are:
|
Do they look for inconsistencies between different UD treebanks?
|
Yes.
|
null | false
| null |
Give me a bulleted list of the best chain coffee shops in the United States
|
- Starbucks
- Dunkin Donuts
- Peet's
- Aroma Joe's
- Panera Bread
|
|
1907.05403
| false
| null |
To evaluate the performance of our approach, we used a subset of the SNIPS BIBREF12 dataset, which is readily available in RASA nlu format. Our training data consisted of 700 utterances, across 7 different intents (AddToPlaylist, BookRestaurant, GetWeather, PlayMusic, RateBook, SearchCreativeWork, and SearchScreeningEvent). In order to test our implementation of incremental components, we initially benchmarked their non-incremental counterparts, and used that as a baseline for the incremental versions (to treat the sium component as non-incremental, we simply applied all words in each utterance to it and obtained the distribution over intents after each full utterance had been processed).
We use accuracy of intent and entity recognition as our task and metric. To verify that the components worked as intended, we then used the IncrementalInterpreter to parse the messages as individual ius. To ensure REVOKE worked as intended, we injected random incorrect words at a rate of 40%, followed by subsequent revokes, ensuring that an ADD followed by a REVOKE resulted in the same output as if the incorrect word had never been added. While we implemented both an update-incremental and a restart-incremental RASA nlu component, the results of the two cannot be directly compared for accuracy as the underlying models differ greatly (i.e., sium is generative, whereas Tensorflow Embedding is a discriminative neural network; moreover, sium was designed to work as a reference resolution component for physical objects, not abstract intents), nor are these results conducive to an argument of update- vs. restart-incremental approaches, as the underlying architectures of the models vary greatly.
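A toy sketch of the ADD/REVOKE consistency check described above; the real evaluation uses RASA nlu components, so `IncrementalState` below is a hypothetical stand-in that simply accumulates incremental units (words):

```python
import random

# Hedged sketch: verify that an ADD followed by a REVOKE leaves the state
# exactly as if the injected incorrect word had never been added.
class IncrementalState:
    def __init__(self):
        self.tokens = []
    def add(self, token):
        self.tokens.append(token)
    def revoke(self):
        self.tokens.pop()

def noisy_ops(words, noise_rate=0.4, junk=("foo", "bar")):
    """Yield (op, token) pairs, injecting random wrong words (40% rate)
    that are immediately revoked, as in the test described above."""
    rng = random.Random(0)
    for w in words:
        if rng.random() < noise_rate:
            yield ("add", rng.choice(junk))
            yield ("revoke", None)
        yield ("add", w)

clean, noisy = IncrementalState(), IncrementalState()
for w in "book a table for two".split():
    clean.add(w)
for op, tok in noisy_ops("book a table for two".split()):
    noisy.add(tok) if op == "add" else noisy.revoke()
assert clean.tokens == noisy.tokens  # a revoke undoes the injected add
```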
To evaluate the performance of our approach, we used a subset of the SNIPS BIBREF12 dataset, which is readily available in RASA nlu format.
We use accuracy of intent and entity recognition as our task and metric.
|
How are their changes evaluated?
|
The changes are evaluated based on accuracy of intent and entity recognition on SNIPS dataset
|
null | false
| null |
In probability theory and statistics, Bayes' theorem (alternatively Bayes' law or Bayes' rule), named after Thomas Bayes, describes the probability of an event, based on prior knowledge of conditions that might be related to the event. For example, if the risk of developing health problems is known to increase with age, Bayes' theorem allows the risk to an individual of a known age to be assessed more accurately by conditioning it relative to their age, rather than simply assuming that the individual is typical of the population as a whole.
One of the many applications of Bayes' theorem is Bayesian inference, a particular approach to statistical inference. When applied, the probabilities involved in the theorem may have different probability interpretations. With Bayesian probability interpretation, the theorem expresses how a degree of belief, expressed as a probability, should rationally change to account for the availability of related evidence. Bayesian inference is fundamental to Bayesian statistics, being considered by one authority as; "to the theory of probability what Pythagoras's theorem is to geometry."
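For reference, the theorem itself (not written out in the passage above) takes the standard form $$P(A \mid B) = \frac{P(B \mid A) \, P(A)}{P(B)},$$ where $P(A)$ is the prior probability of the event $A$ and $P(A \mid B)$ is its probability after conditioning on the evidence $B$.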
|
What is Bayes' theorem?
|
Bayes' theorem describes the probability of an event based on prior knowledge of conditions related to the event, i.e., it is grounded in conditional probability.
|
null | false
| null |
Which of the following are not noble gases: helium, sodium, neon, oxygen, argon, hydrogen, calcium, magnesium, lithium, boron, krypton, nitrogen, xenon, aluminium, radon.
|
The following are not noble gases - sodium, oxygen, hydrogen, calcium, magnesium, lithium, boron, nitrogen, aluminium.
|
|
null | false
| 125
|
Despite the American public's increasing acceptance of LGBTQ people and recent legal successes, LGBTQ individuals frequently remain the targets of hate and violence BIBREF0, BIBREF1, BIBREF2. At the core of this issue is dehumanization, “the act of perceiving or treating people as less than human” BIBREF3, a process that heavily contributes to extreme intergroup bias BIBREF4. Language is central to studying this phenomenon; like other forms of bias BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9, dehumanizing attitudes are expressed through subtle linguistic manipulations, even in carefully-edited texts. It is crucial to understand the use of such linguistic signals in mainstream media, as the media's representation of marginalized social groups has far-reaching implications for social acceptance, policy, and safety.
While small-scale studies of dehumanization and media representation of marginalized communities provide valuable insights (e.g. BIBREF10), there exist no known large-scale analyses, likely due to difficulties in quantifying such a subjective and multidimensional psychological process. However, the ability to do large-scale analysis is crucial for understanding how dehumanizing attitudes have evolved over long periods of time. Furthermore, by being able to account for a greater amount of media discourse at once, large-scale techniques can provide a more complete view of the media environment to which the public is exposed.
Linguistics and computer science offer valuable methods and insights on which such large-scale techniques might be developed for the study of dehumanization. By leveraging more information about the contexts in which marginalized groups are discussed, computational linguistic methods not only enable large-scale study of a complex psychological phenomenon, but can even reveal linguistic variations and changes that are not easily identifiable through qualitative analysis alone.
In this work, we develop a computational linguistic framework for analyzing dehumanizing language, with a focus on lexical signals of dehumanization. Social psychologists have identified numerous components of dehumanization, such as negative evaluations of a target group, denial of agency, moral disgust, and likening members of a target group to non-human entities such as vermin. Drawing upon this rich body of literature, we first identify clear linguistic analogs for these components and propose several computational techniques to measure these linguistic correlates. We then apply this general framework to explore changing representations of LGBTQ groups in the New York Times over the span of three decades. We additionally use this lens of dehumanization to investigate differences in social meaning between the denotationally-similar labels gay and homosexual.
This paper aims to bridge the gaps between computational modeling, sociolinguistics, and dehumanization research with implications for several disciplines. In addition to enabling large-scale studies of dehumanizing language and media representation of marginalized social groups, these techniques can be built into systems that seek to capture both conscious and unconscious biases in text. Furthermore, this work has implications for improving machines' abilities to automatically detect hate speech and abusive language online, which are typically underpinned by dehumanizing language. Finally, our case study demonstrates that such computational analyses of discussions about marginalized groups can provide unique insights into language variation and change within sensitive sociopolitical contexts, and help us understand how people (and institutions) use language to express their ideologies and attitudes towards certain social groups.
At the core of this issue is dehumanization, “the act of perceiving or treating people as less than human” (Haslam and Stratemeyer, 2016), a process that heavily contributes to extreme intergroup bias (Haslam,2006).
|
What is dehumanization?
|
The act of perceiving or treating people as less than human.
|
null | false
| null |
Why is soccer the most popular sport in the world?
|
Soccer is the most popular sport in the world because of several reasons including: It is easy to understand the rules and thus it is easy to play. The scoring system is simple - one goal equals 1 point. It does not require a lot of equipment. As a soccer player you have a lot of freedom in decision making compared to other sports like American football where players are expected to follow specific directions and plays from the coach. Soccer can be played in many different formats - from official team sizes with 11 players vs. 11 players to smaller teams of 2 players vs 2 players. Soccer is fun and exciting to play or watch.
|
|
null | false
| null |
Vidhivilasa is a 1962 Kannada language swashbuckler film directed by S. V. Mahesh. The film stars Rajkumar, Leelavathi and K. S. Ashwath. The film is based on the story of a King who confronts destiny (fate), which appears to him in the form of a young lady, by challenging it on how he would die. The King tries in vain all means at his disposal to prevent events that destiny foretold would happen. Finally, destiny wins, employing a twisted turn of events. The film has musical score by T. Padman. The story, screenplay, dialogues and lyrics were written by H. L. Narayana Rao who happens to be the father of actor Vishnuvardhan.
|
Can you give me a summary of the story of the Vidhivilasa movie?
|
A king is confronted by destiny in the person of a young lady and tries by any means possible to avoid his fate. However, the king is not able to prevent his destiny.
|
null | false
| 310
|
Quality estimation (QE) refers to the task of measuring the quality of machine translation (MT) system outputs without reference to the gold translations BIBREF0 , BIBREF1 . QE research has grown increasingly popular due to the improved quality of MT systems, and potential for reductions in post-editing time and the corresponding savings in labor costs BIBREF2 , BIBREF3 . QE can be performed on multiple granularities, including at word level, sentence level, or document level. In this paper, we focus on quality estimation at word level, which is framed as the task of performing binary classification of translated tokens, assigning “OK” or “BAD” labels.
Early work on this problem mainly focused on hand-crafted features with simple regression/classification models BIBREF4 , BIBREF5 . Recent papers have demonstrated that utilizing recurrent neural networks (RNN) can result in large gains in QE performance BIBREF6 . However, these approaches encode the context of the target word by merely concatenating its left and right context words, giving them limited ability to control the interaction between the local context and the target word.
In this paper, we propose a neural architecture, Context Encoding Quality Estimation (CEQE), for better encoding of context in word-level QE. Specifically, we leverage the power of both (1) convolution modules that automatically learn local patterns of surrounding words, and (2) hand-crafted features that allow the model to make more robust predictions in the face of a paucity of labeled data. Moreover, we further utilize stacked recurrent neural networks to capture the long-term dependencies and global context information from the whole sentence.
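To make the architecture concrete, here is a hedged PyTorch sketch of the combination described above (all module choices and sizes are illustrative assumptions, not the paper's exact configuration): a convolution captures local context, hand-crafted features are concatenated, and a stacked bidirectional RNN emits per-token OK/BAD logits.

```python
import torch
import torch.nn as nn

# Hedged sketch of the CEQE idea: conv over embeddings + hand-crafted
# features + stacked bidirectional LSTM -> per-token binary logits.
class CEQESketch(nn.Module):
    def __init__(self, vocab=10000, emb=64, feat_dim=8, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.conv = nn.Conv1d(emb, emb, kernel_size=3, padding=1)
        self.rnn = nn.LSTM(emb + feat_dim, hidden, num_layers=2,
                           bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hidden, 2)   # OK / BAD

    def forward(self, tokens, features):
        x = self.emb(tokens)                                # (B, T, emb)
        x = self.conv(x.transpose(1, 2)).transpose(1, 2)    # local patterns
        x, _ = self.rnn(torch.cat([x, features], dim=-1))   # global context
        return self.out(x)                                  # (B, T, 2)

m = CEQESketch()
print(m(torch.zeros(2, 5, dtype=torch.long), torch.zeros(2, 5, 8)).shape)
```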
We tested our model on the official benchmark of the WMT18 word-level QE task. On this task, it achieved highly competitive results, with the best performance over other competitors on English-Czech, English-Latvian (NMT) and English-Latvian (SMT) word-level QE task, and ranking second place on English-German (NMT) and German-English word-level QE task.
In this paper, we propose a neural architecture, Context Encoding Quality Estimation (CEQE), for better encoding of context in word-level QE.
|
What is the neural architecture that the author proposes?
|
Context Encoding Quality Estimation.
|
null | false
| 306
|
Figure FIGREF6 summarizes the proposed model architecture. The model takes a character sequence as input and captures contextual features using BERT. A CRF layer is applied to extract entities from the sentence. To effectively transmit information between entity recognition and relation extraction, a soft label embedding is built on top of the CRF logits. To solve the problem that one entity may belong to multiple triplets, a multi-sigmoid layer is applied. We find that adding an auxiliary global relation prediction task also improves performance.
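To illustrate, a hedged PyTorch sketch of the two ideas described above (shapes and names are our illustrative assumptions): the soft label embedding mixes a label-embedding table by the softmax of the CRF logits, and the multi-sigmoid layer scores each relation type independently, so one entity can participate in several triplets.

```python
import torch

# Hedged sketch: softmax the CRF emission logits and use them as mixture
# weights over a learned label-embedding table (differentiable end to end).
def soft_label_embedding(crf_logits, label_table):
    # crf_logits: (batch, seq_len, n_labels); label_table: (n_labels, dim)
    weights = torch.softmax(crf_logits, dim=-1)
    return weights @ label_table                    # (batch, seq_len, dim)

# Multi-sigmoid relation scoring: one independent sigmoid per relation type.
def relation_scores(pair_features, relation_proj):
    return torch.sigmoid(pair_features @ relation_proj)  # (..., n_relations)

emb = soft_label_embedding(torch.randn(2, 7, 5), torch.randn(5, 16))
scores = relation_scores(torch.randn(2, 7, 16), torch.randn(16, 3))
print(emb.shape, scores.shape)  # (2, 7, 16) (2, 7, 3)
```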
The model takes character sequence as input and captures contextual features using BERT.
|
How are contextual features captured?
|
The model captures contextual features using BERT.
|
null | false
| null |
How many states are there in the United States?
|
50
|
|
null | false
| 153
|
This manuscript describes our approach for the Visual Dialog Challenge 2018. We use an ensemble of three discriminative models with different encoders and decoders for our final submission. Our best performing model on 'test-std' split achieves the NDCG score of 55.46 and the MRR value of 63.77, securing third position in the challenge.
This manuscript describes our approach for the Visual Dialog Challenge 2018. We use an ensemble of three discriminative models with different encoders and decoders for our final submission. Our best performing model on 'test-std' split achieves the NDCG score of 55.46 and the MRR value of 63.77, securing third position in the challenge.
|
How much NDCG score does their best performing model achieve on the 'test-std' split?
|
55.46.
|
1904.04019
| false
| null |
We now discuss the relations among the results of the different experiments to gain some further insights into the sarcastic content of our corpora. From the in-corpus experiments, we obtain good results on SarcasmCorpus, which is the only corpus containing Amazon reviews. Unfortunately, when we train our models in a cross-corpora or all-corpora setting, our results drop dramatically, especially in the cross-corpora case. These results mean that the sarcasm in SarcasmCorpus is conveyed through features that are not present in the other corpora. This is especially true when considering that in the inter-corpora experiments, using SarcasmCorpus as a training set in all cases yields results that are only better than the ones obtained when using irony-context as a training set.
From the in-corpus experiments, we obtain good results on SarcasmCorpus, which is the only corpus containing Amazon reviews. Unfortunately, when we train our models in a cross-corpora or all-corpora setting, our results drop dramatically, especially in the cross-corpora case. These results mean that the sarcasm in SarcasmCorpus is conveyed through features that are not present in the other corpora.
|
In which domains is sarcasm conveyed in different ways?
|
The answers are shown as follows:
* Amazon reviews
|
null | false
| 214
|
Our datasets can be used for various hate speech tasks. In this paper, we focus on generative hate speech intervention.
The goal of this task is to generate a response to hate speech that can mitigate its use during a conversation. The objective can be formulated as the following equation:
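A plausible reconstruction of the missing equation, consistent with the definitions that follow, is the standard maximum-likelihood objective (the parameterization $\theta $ is our notation): $$\max _{\theta } \sum _{(c, r) \in D} \log p_{\theta }(r \mid c)$$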
where $c$ is the conversation, $r$ is the corresponding intervention response, and $D$ is the dataset. This task is closely related to response generation and dialog generation, though several differences exist, including dialog length, language cadence, and word imbalances. As a baseline, we chose the most common methods for these two tasks, such as Seq2Seq and VAE, to determine the initial feasibility of automatically generating intervention responses. More recent Reinforcement Learning methods for dialog generation BIBREF21 can also be applied to this task with slight modification. Future work will explore more complex and unique models.
Similar to BIBREF21, a generative model is considered as an agent. However, different from dialog generation, generative intervention does not have multiple turns of utterance, so the action of the agent is to select a token in the response. The state of the agent is given by the input posts and the previously generated tokens. Another result due to this difference is that the rewards with regard to ease of answering or information flow do not apply to this case, but the reward for semantic coherence does. Therefore, the reward of the agent is:
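A plausible reconstruction of the missing equation, following the semantic-coherence reward of BIBREF21 (the mixing weight $\lambda $ and the length normalizations $N_r$ and $N_c$ are our assumptions), is: $$rw(c, r) = \lambda \, \frac{1}{N_r} \log p(r \mid c) + (1 - \lambda ) \, \frac{1}{N_c} \log p_{back}(c \mid r)$$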
where $rw(c,r)$ is the reward with regard to the conversation $c$ and its reference response $r$ in the dataset. $p(r|c)$ denotes the probability of generating response $r$ given the conversation $c$, and $p_{back}(c|r)$ denotes the backward probability of generating the conversation based on the response, which is parameterized by another generation network. The reward is a weighted combination of these two parts, which are observed after the agent finishes generating the response. We refer the readers to BIBREF21 for details.
The methods are evaluated using F-1 score, Precision-Recall (PR) AUC, and Receiver Operating Characteristic (ROC) AUC.
|
What is used to evaluate the methods?
|
The methods are evaluated using F-1 score, Precision-Recall (PR) AUC, and Receiver Operating Characteristic (ROC) AUC.
|
null | false
| null |
Is there a Romanian Royal Family?
|
The Romanian Royal Family consists of members of the former royal family of Michael I, the last king of Romania. This includes his two daughters, Margareta and Elena. Romania was a kingdom between 1881 and 1947. Carol I of Romania was the first king, proclaimed in 1881. Michael I of Romania was the last king, forced to abdicate in 1947.
|
|
null | false
| 325
|
Training a dialogue agent to converse like a human requires extensive supervision. The most common approach is to train models to imitate humans in large corpora of crowdsourced or scraped conversations BIBREF0 . These fully-supervised conversations tend to be expensive to collect in sufficient quantity and/or occur in settings with significant differences from the deployment environment BIBREF1 . Instead, dialogue agents would ideally learn directly from dialogue, the conversations they participate in after deployment, which are usually abundant, task-specific, dynamic, and cheap. This corresponds to the way humans learn to converse—not merely observing others engaging in “expert-level” conversations, but instead actively adjusting and correcting our speech based on feedback woven throughout our own conversations BIBREF2 , BIBREF3 . Giving a dialogue agent this ability would enable it to continuously improve and adapt over its lifetime, rather than requiring additional annotation costs for each and every improvement.
However, naively training a dialogue agent on its own conversations yields poor results. For example, training a model on its own output can simply reinforce its existing failure modes, and mistakes by the agent can lead to absurd conversations that no longer resemble the target domain BIBREF4 . To combat this, one approach is to allow the agent to request feedback during conversations BIBREF5 , BIBREF6 , e.g., when it believes it is about to make a mistake. This approach, however, falls victim to the Dunning-Kruger effect BIBREF7 , which in this case suggests that a bad model will also be bad at knowing when it is doing a bad job. Regardless of when feedback is requested, existing methods typically require accompanying scalar rewards or adherence to particular templates or structure to ensure that the feedback is usable by the model BIBREF8 , BIBREF9 , BIBREF10 . These requirements may be acceptable for paid annotators, but they impose unnatural workflows on unpaid conversation partners in a standard dialogue environment. Humans are able to request and provide feedback using only natural language; ideally, dialogue agents would be able to do the same.
In this work we propose the self-feeding chatbot, a dialogue agent with the ability to extract new examples from the conversations it participates in after deployment (Figure FIGREF1 ). Concretely, in addition to being trained on the primary Dialogue task, the agent is trained to predict its speaking partner's satisfaction with its responses. When the conversation seems to be going well, the user's responses (but not the bot's own utterances) become the targets in new training examples for the Dialogue task. When the agent believes it has made a mistake, it instead requests feedback on what it could have said instead. Predicting the feedback that will be provided in a given context becomes an auxiliary task (Feedback) on which the model is also trained. Importantly, these new examples improve the agent's dialogue abilities while using only natural responses from the user that do not require special structure, accompanying numerical feedback, or additional human intervention in order to be used.
With this approach, the conversations the chatbot participates in are sliced into two complementary datasets—one largely protected from the chatbot's mistakes (Dialogue examples), and one which directly addresses them (Feedback examples). We validate our approach on the PersonaChat BIBREF11 dialogue dataset, finding empirically that regardless of the number of available supervised examples, the dialogue ability of the chatbot is always improved by adding the automatically extracted examples of either type, and improves the most by adding both.
The main contributions of this work thus include the following:
The datasets and models described in this paper are available via the ParlAI platform BIBREF12 , along with training code. Hyperparameter values are included in Appendix SECREF13 .
We validate our approach on the PersonaChat dialogue dataset, finding empirically that regardless of the number of available supervised examples, the dialogue ability of the chatbot is always improved by adding the automatically extracted examples of either type, and improves the most by adding both.
|
Is the chatbot always improved by adding the automatically extracted examples of either type?
|
Yes, it is.
|
null | false
| null |
The Warriors won the inaugural Basketball Association of America (BAA) championship in 1947, and won again in 1956, led by Hall of Fame trio Paul Arizin, Tom Gola, and Neil Johnston. After the trade of star Wilt Chamberlain in January 1965, the team finished the 1964–65 season with the NBA's worst record (17–63). Their rebuilding period was brief due in large part to the Warriors' drafting of Rick Barry four months after the trade. In 1975, star players Barry and Jamaal Wilkes powered the Warriors to their third championship, widely considered one of the biggest upsets in NBA history.
|
What was the Golden State Warriors' win-loss record in the 1964-65 season?
|
The Golden State Warriors record in the 1964-65 season was 17 wins and 63 losses.
|
null | false
| null |
What is CloudFormation?
|
AWS CloudFormation is an infrastructure as code (IaC) service that allows you to easily model, provision, and manage AWS and third-party resources.
|
|
null | false
| null |
What can help with puppy teething?
|
Puppy teething will typically last until they are 4 or 5 months old. To help with the discomfort, you can give them puppy safe chew toys, a frozen carrot or banana, bully sticks, or cooling teethers. You can also try giving them a frozen kong toy, or wet and twist a rag, then freeze it.
|
|
2001.09332
| false
| null |
The dataset needed to train the W2V was obtained using the information extracted from a dump of the Italian Wikipedia (dated 2019.04.01), from the main categories of Italian Google News (WORLD, NATION, BUSINESS, TECHNOLOGY, ENTERTAINMENT, SPORTS, SCIENCE, HEALTH) and from some anonymized chats between users and a customer care chatbot (Laila). The dataset (composed of 2.6 GB of raw text) includes $421\,829\,960$ words divided into $17\,305\,401$ sentences.
The text was previously preprocessed by removing the words whose absolute frequency was less than 5 and eliminating all special characters. Since it is impossible to represent every imaginable numerical value, but not wanting to eliminate the concept of “numerical representation” linked to certain words, it was also decided to replace every number present in the text with the special $\langle NUM \rangle $ token, which presumably also obtains a better representation in the embedding space (not being split across the various possible values). All the words were then transformed to lowercase (to avoid duplicate entries), finally producing a vocabulary of $618\,224$ words.
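A hedged Python sketch of this preprocessing pipeline (the exact order of the steps and whitespace tokenization are our assumptions):

```python
import re
from collections import Counter

# Hedged sketch: lowercase, map every number to <NUM>, strip special
# characters, and drop words below the absolute-frequency threshold.
def preprocess(sentences, min_freq=5):
    tokenized = []
    for s in sentences:
        s = s.lower()
        s = re.sub(r"\d+(?:[.,]\d+)*", " <NUM> ", s)  # every number -> <NUM>
        s = re.sub(r"[^\w\s<>]", " ", s)              # drop special characters
        tokenized.append(s.split())
    counts = Counter(w for sent in tokenized for w in sent)
    keep = {w for w, c in counts.items() if c >= min_freq}
    return [[w for w in sent if w in keep] for sent in tokenized]

docs = ["Laila answered 42 tickets!", "Laila answered 7 tickets."]
print(preprocess(docs, min_freq=1))
# [['laila', 'answered', '<NUM>', 'tickets'], ['laila', 'answered', '<NUM>', 'tickets']]
```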
The dataset needed to train the W2V was obtained using the information extracted from a dump of the Italian Wikipedia (dated 2019.04.01), from the main categories of Italian Google News (WORLD, NATION, BUSINESS, TECHNOLOGY, ENTERTAINMENT, SPORTS, SCIENCE, HEALTH) and from some anonymized chats between users and a customer care chatbot (Laila). The dataset (composed of 2.6 GB of raw text) includes $421\,829\,960$ words divided into $17\,305\,401$ sentences.
All the words were then transformed to lowercase (to avoid duplicate entries), finally producing a vocabulary of $618\,224$ words.
|
What is the dataset used as input to the Word2Vec algorithm?
|
Text extracted from an Italian Wikipedia dump, Italian Google News articles, and anonymized customer-care chats (Laila), producing a final vocabulary of 618,224 words.
|
null | false
| null |
What makes the Sun bright?
|
The Sun is a huge, massive sphere of gas, and it is powered by the energy released by turning hydrogen into helium at its core. This process is called a nuclear fusion reaction. Nuclear fusion involves two or more nuclei combining to form one or more different atomic nuclei. Nuclear fusion releases a tremendous amount of energy in the form of light and heat.
|
|
null | false
| null |
What's your favorite star in the sky?
|
My favorite star in the sky is Sirius, which is the brightest star at night. The star is designated α Canis Majoris and is 8.6 light years from the Earth. I won't tell you that I actually came from Sirius B, which is already a white dwarf that orbits Sirius A.
|
|
1802.05322
| false
| null |
When evaluating classifiers it is common to use accuracy, precision and recall as well as Hamming loss. Accuracy, precision and recall are defined by the four terms true positive (TP), true negative (TN), false positive (FP) and false negative (FN), which can be seen in Table TABREF16 .
It has been shown that when calculating precision and recall on multi-label classifiers, it can be advantageous to use micro averaged precision and recall BIBREF6 . The formulas for micro averaged precision and recall are expressed as $P_{micro} = \frac{\sum_{i} TP_i}{\sum_{i} (TP_i + FP_i)}$ and $R_{micro} = \frac{\sum_{i} TP_i}{\sum_{i} (TP_i + FN_i)}$, where the sums run over the class labels $i$.
When evaluating classifiers it is common to use accuracy, precision and recall as well as Hamming loss.
It has been shown that when calculating precision and recall on multi-label classifiers, it can be advantageous to use micro averaged precision and recall BIBREF6 . The formulas for micro averaged precision and recall are expressed as $P_{micro} = \frac{\sum_{i} TP_i}{\sum_{i} (TP_i + FP_i)}$ and $R_{micro} = \frac{\sum_{i} TP_i}{\sum_{i} (TP_i + FN_i)}$, where the sums run over the class labels $i$.
|
What evaluation metrics are discussed?
|
The answers are shown as follows:
* precision
* recall
* Hamming loss
* micro averaged precision and recall
|
null | false
| null |
Sócrates Brasileiro Sampaio de Souza Vieira de Oliveira (19 February 1954 – 4 December 2011), simply known as Sócrates [ˈsɔkɾat͡ʃis], was a Brazilian footballer who played as a midfielder. His medical degree and his political awareness, combined with style and quality of his play, earned him the nickname "Doctor Socrates".
Easily recognizable for his beard and headband, Sócrates became the "symbol of cool for a whole generation of football supporters". He is considered to be one of the greatest midfielders of his generation. In 1983, he was named South American Footballer of the Year. In 2004, he was named by Pelé in the FIFA 100 list of the world's greatest living players.
Sócrates played for Brazil for seven years, scoring 22 goals and representing the nation in two World Cups. He captained the team in the 1982 FIFA World Cup, playing in midfield alongside Zico, Falcão, Toninho Cerezo and Éder in what is considered one of the greatest Brazilian national teams ever. He also appeared in the 1979 and 1983 Copa América. At club level, Sócrates played for Botafogo-SP before joining Corinthians in 1978. He moved to Italy to play for Fiorentina, returning to Brazil in 1985 to end his career.
|
Extract the teams that the footballer Sócrates played for during his career, separated by commas.
|
Brazil, Botafogo-SP, Corinthians, Fiorentina
|
null | false
| 85
|
There have been many advances in machine learning methods which help machines understand human behavior better than ever. One of the most important aspects of human behavior is emotion. If machines could detect human emotional expressions, this could be used to improve a variety of applications such as marketing BIBREF0 , human-computer interaction BIBREF1 , political science BIBREF2 , etc.
Emotion in humans is complex and hard to distinguish. There have been many emotional models in psychology which tried to classify and point out basic human emotions such as Ekman's 6 basic emotions BIBREF3 , Plutchik's wheel of emotions BIBREF4 , or Parrott's three-level categorization of emotions BIBREF5 . These varieties show that emotions are hard to define, distinguish, and categorize even for human experts.
By adding the complexity of language and the fact that emotion expressions are very complex and context-dependent BIBREF6 , BIBREF7 , BIBREF8 , we can see why detecting emotions in textual data is a challenging task. This difficulty can be seen when human annotators try to assign emotional labels to text, but using various techniques the annotation task can be accomplished with desirable agreement among the annotators BIBREF9 .
We show this by using a recurrent neural network-based classifier that can learn to create a more informative latent representation of the target text as a whole, and we show that this can improve the final performance significantly.
|
Can the proposed classifier improve the final performance?
|
Yes, it can improve the final performance significantly.
|
null | false
| null |
Classify the following hockey teams as either National Hockey League (NHL) or American Hockey League (AHL): Abbotsford Canucks, Bakersfield Condors, Belleville Senators, Bridgeport Islanders, Calgary Wranglers, Charlotte Checkers, Chicago Wolves, Cleveland Monsters, Coachella Valley Firebirds, Colorado Eagles, Grand Rapids Griffins, Hartford Wolf Pack, Henderson Silver Knights, Hershey Bears, Iowa Wild, Laval Rocket, Lehigh Valley Phantoms, Manitoba Moose, Milwaukee Admirals, Ontario Reign, Providence Bruins, Rochester Americans, Rockford IceHogs, San Diego Gulls, San Jose Barracuda, Springfield Thunderbirds, Syracuse Crunch, Texas Stars, Toronto Marlies, Tucson Roadrunners, Utica Comets, Wilkes-Barre/Scranton Penguins, Anaheim Ducks, Arizona Coyotes, Boston Bruins, Buffalo Sabres, Calgary Flames, Carolina Hurricanes, Chicago Blackhawks, Colorado Avalanche, Columbus Blue Jackets, Dallas Stars, Detroit Red Wings, Edmonton Oilers, Florida Panthers, Los Angeles Kings, Minnesota Wild, Montreal Canadiens, Nashville Predators, New Jersey Devils, New York Islanders, New York Rangers, Ottawa Senators, Philadelphia Flyers, Pittsburgh Penguins, San Jose Sharks, Seattle Kraken, St Louis Blues, Tampa Bay Lightning, Toronto Maple Leafs, Vancouver Canucks, Vegas Golden Knights, Washington Capitals, Winnipeg Jets
|
AHL: Abbotsford Canucks, Bakersfield Condors, Belleville Senators, Bridgeport Islanders, Calgary Wranglers, Charlotte Checkers, Chicago Wolves, Cleveland Monsters, Coachella Valley Firebirds, Colorado Eagles, Grand Rapids Griffins, Hartford Wolf Pack, Henderson Silver Knights, Hershey Bears, Iowa Wild, Laval Rocket, Lehigh Valley Phantoms, Manitoba Moose, Milwaukee Admirals, Ontario Reign, Providence Bruins, Rochester Americans, Rockford IceHogs, San Diego Gulls, San Jose Barracuda, Springfield Thunderbirds, Syracuse Crunch, Texas Stars, Toronto Marlies, Tucson Roadrunners, Utica Comets, Wilkes-Barre/Scranton Penguins
NHL: Anaheim Ducks, Arizona Coyotes, Boston Bruins, Buffalo Sabres, Calgary Flames, Carolina Hurricanes, Chicago Blackhawks, Colorado Avalanche, Columbus Blue Jackets, Dallas Stars, Detroit Red Wings, Edmonton Oilers, Florida Panthers, Los Angeles Kings, Minnesota Wild, Montreal Canadiens, Nashville Predators, New Jersey Devils, New York Islanders, New York Rangers, Ottawa Senators, Philadelphia Flyers, Pittsburgh Penguins, San Jose Sharks, Seattle Kraken, St Louis Blues, Tampa Bay Lightning, Toronto Maple Leafs, Vancouver Canucks, Vegas Golden Knights, Washington Capitals, Winnipeg Jets
|
|
null | false
| 350
|
We evaluate our model on RACE dataset BIBREF6 , which consists of two subsets: RACE-M and RACE-H. RACE-M comes from middle school examinations while RACE-H comes from high school examinations. RACE is the combination of the two.
We compare our model with the following baselines: MRU(Multi-range Reasoning) BIBREF12 , DFN(Dynamic Fusion Networks) BIBREF11 , HCM(Hierarchical Co-Matching) BIBREF8 , OFT(OpenAI Finetuned Transformer LM) BIBREF13 , RSM(Reading Strategies Model) BIBREF14 . We also compare our model with the BERT baseline and implement the method described in the original paper BIBREF7 , which uses the final hidden vector corresponding to the first input token ([CLS]) as the aggregate representation followed by a classification layer and finally a standard classification loss is computed.
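To illustrate the baseline, here is a hedged PyTorch sketch of [CLS]-pooling with a classification layer (the dummy encoder, shapes, and the four-way option scoring are our illustrative assumptions, not the paper's exact configuration):

```python
import torch
import torch.nn as nn

# Hedged sketch of the BERT baseline: the hidden vector of the first token
# ([CLS]) feeds a linear layer; each answer option is scored independently.
class ClsBaseline(nn.Module):
    def __init__(self, encoder, hidden_size=768):
        super().__init__()
        self.encoder = encoder              # maps (batch, seq) -> (batch, seq, hidden)
        self.classifier = nn.Linear(hidden_size, 1)

    def forward(self, input_ids):
        # input_ids: (batch, n_options, seq_len), one sequence per option,
        # e.g. [CLS] passage [SEP] question + option [SEP]
        b, n, t = input_ids.shape
        hidden = self.encoder(input_ids.view(b * n, t))   # (b*n, t, h)
        cls = hidden[:, 0]                                # [CLS] vector
        return self.classifier(cls).view(b, n)            # one score per option

encoder = lambda x: torch.randn(x.size(0), x.size(1), 768)  # dummy stand-in
model = ClsBaseline(encoder)
print(model(torch.zeros(2, 4, 32, dtype=torch.long)).shape)  # torch.Size([2, 4])
```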
Results are shown in Table 2 . We can see that the performance of BERT $_{base}$ is very close to the previous state-of-the-art and BERT $_{large}$ even outperforms it by 3.7%. But experimental results show that our model is more powerful, and we further improve the result by 2.2% compared to BERT $_{base}$ and 2.2% compared to BERT $_{large}$ .
RACE contains 27,933 passages and 97,687 questions in total, which is recognized as one of the largest and most difficult datasets in multi-choice reading comprehension.
|
How many passages are there in the RACE dataset?
|
27,933 passages.
|
null | false
| null |
Tell me which animals are bigger than the average human: Dog, Mouse, Elephant, Rhino, Hippo, Cat, Squirrel.
|
Sure. Here are the selections from above that are larger than the average human: Elephant, Rhino, and Hippo.
|
|
1910.08418
| false
| null |
We report segmentation performance using precision, recall, and F-measure on boundaries (BP, BR, BF), and tokens (WP, WR, WF). We also report the exact-match (X) metric which computes the proportion of correctly segmented utterances. Our main results are in Figure FIGREF47, where we report averaged scores over 10 runs. As a comparison with another bilingual method inspired by the “align to segment” approach, we also include the results obtained using the statistical models of BIBREF9, denoted Pisa, in Table TABREF46.
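For concreteness, a minimal sketch of the boundary metrics (our own illustration; boundaries are taken to be the cumulative character offsets where a word ends, excluding the end of the utterance):

```python
# Hedged sketch of boundary precision/recall/F (BP, BR, BF).
def boundaries(words):
    pos, out = 0, set()
    for w in words[:-1]:         # the final utterance-end offset is excluded
        pos += len(w)
        out.add(pos)
    return out

def boundary_prf(pred_words, gold_words):
    p, g = boundaries(pred_words), boundaries(gold_words)
    tp = len(p & g)
    prec = tp / len(p) if p else 0.0
    rec = tp / len(g) if g else 0.0
    f = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f

print(boundary_prf("ab c de".split(), "abc de".split()))  # (0.5, 1.0, 0.667)
```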
We report segmentation performance using precision, recall, and F-measure on boundaries (BP, BR, BF), and tokens (WP, WR, WF). We also report the exact-match (X) metric which computes the proportion of correctly segmented utterances.
|
How is the word segmentation task evaluated?
|
The answers are shown as follows:
* precision, recall, and F-measure on boundaries (BP, BR, BF), and tokens (WP, WR, WF)
* exact-match (X) metric
|
null | false
| null |
Give me a list of some quotes from Bhagavad Gita.
|
1. Whatever happened, happened for the good. Whatever is happening, is happening for the good. Whatever will happen, will also happen for the good.
2. Change is the law of The Universe.
3. You are only entitled to the action, never to its fruits.
4. You are what you believe in. You become what you believe you can become.
5. One can destroy the body. The spirit is beyond destruction. No one can bring an end to the spirit, which is everlasting.
|
|
null | false
| null |
The demarcation between science and pseudoscience has scientific, philosophical, and political implications. Philosophers debate the nature of science and the general criteria for drawing the line between scientific theories and pseudoscientific beliefs, but there is widespread agreement "that creationism, astrology, homeopathy, Kirlian photography, dowsing, ufology, ancient astronaut theory, Holocaust denialism, Velikovskian catastrophism, and climate change denialism are pseudosciences." There are implications for health care, the use of expert testimony, and weighing environmental policies. Addressing pseudoscience is part of science education and developing scientific literacy.
|
What is pseudoscience?
|
Pseudoscience refers to beliefs or practices presented as scientific that are not based on mainstream, factual science.
|
null | false
| 65
|
Word embeddings are representations of words in numerical form, as vectors of typically several hundred dimensions. The vectors are used as an input to machine learning models; for complex language processing tasks these are typically deep neural networks. The embedding vectors are obtained from specialized learning tasks, based on neural networks, e.g., word2vec BIBREF0, GloVe BIBREF1, FastText BIBREF2, ELMo BIBREF3, and BERT BIBREF4. For training, the embeddings algorithms use large monolingual corpora that encode important information about word meaning as distances between vectors. In order to enable downstream machine learning on text understanding tasks, the embeddings shall preserve semantic relations between words, and this is true even across languages.
Probably the best known word embeddings are produced by the word2vec method BIBREF5. The problem with word2vec embeddings is their failure to express polysemous words. During training of an embedding, all senses of a given word (e.g., paper as a material, as a newspaper, as a scientific work, and as an exam) contribute relevant information in proportion to their frequency in the training corpus. This causes the final vector to be placed somewhere in the weighted middle of all the word's meanings. Consequently, rare meanings of words are poorly expressed with word2vec and the resulting vectors do not offer good semantic representations. For example, none of the 50 closest vectors of the word paper is related to science.
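To see this effect in practice, one can inspect the nearest neighbours of a word in a pretrained static embedding space; a hedged sketch using gensim (the model identifier is gensim-data's standard name, assumed to be available in the environment; results vary by model):

```python
import gensim.downloader as api

# Illustrative check of the polysemy issue: rare senses of "paper" (e.g.,
# the scientific-work sense) are unlikely to surface among its neighbours.
kv = api.load("glove-wiki-gigaword-100")   # any static word vectors work
for word, sim in kv.most_similar("paper", topn=10):
    print(f"{word:<15} {sim:.3f}")
```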
The idea of contextual embeddings is to generate a different vector for each context a word appears in, and the context is typically defined sentence-wise. To a large extent, this solves the problems with word polysemy, i.e., the context of a sentence is typically enough to disambiguate different meanings of a word for humans, and so it is for the learning algorithms. In this work, we describe high-quality models for contextual embeddings, called ELMo BIBREF3, precomputed for seven morphologically rich, less-resourced languages: Slovenian, Croatian, Finnish, Estonian, Latvian, Lithuanian, and Swedish. ELMo is one of the most successful approaches to contextual word embeddings. At the time of its creation, ELMo was shown to outperform previous word embeddings BIBREF3 like word2vec and GloVe on many NLP tasks, e.g., question answering, named entity extraction, sentiment analysis, textual entailment, semantic role labeling, and coreference resolution.
This report is split into five further sections. In Section SECREF2, we describe the contextual ELMo embeddings. In Section SECREF3, we describe the datasets used, and in Section SECREF4 we describe the preprocessing and training of the embeddings. We describe the methodology for evaluating the created vectors, together with the results, in Section SECREF5. We present conclusions in Section SECREF6, where we also outline plans for further work.
At the time of its creation, ELMo has been shown to outperform previous word embeddings (Peters et al., 2018) like word2vec and GloVe on many NLP tasks, e.g., question answering, named entity extraction, sentiment analysis, textual entailment, semantic role labeling, and coreference resolution.
|
What is the specialty that ELMo has in NLP tasks?
|
At the time of its creation, ELMo has been shown to outperform previous word embeddings (Peters et al., 2018) like word2vec and GloVe on many NLP tasks, e.g., question answering, named entity extraction, sentiment analysis, textual entailment, semantic role labeling, and coreference resolution.
|
null | false
| 154
|
All corpora provide datasets/splits for answer selection, whereas only (WikiQA, SQuAD) and (WikiQA, SelQA) provide datasets for answer extraction and answer triggering, respectively. SQuAD is much larger in size although questions in this corpus are often paraphrased multiple times. On the contrary, SQuAD's average candidates per question (c/q) is the smallest because SQuAD extracts answer candidates from paragraphs whereas the others extract them from sections or infoboxes that consist of bigger contexts. Although InfoboxQA is larger than WikiQA or SelQA, the number of token types in InfoboxQA is smaller than in those two, due to the repetitive nature of infoboxes.
All corpora show similar average answer candidate lengths, except for InfoboxQA where each line in the infobox is considered a candidate. SelQA and SQuAD show similar average question lengths because of the similarity between their annotation schemes. It is not surprising that WikiQA's average question length is the smallest, considering their questions are taken from search queries. InfoboxQA's average question length is relatively small, due to the restricted information that can be asked from the infoboxes. InfoboxQA and WikiQA show the least question-answer word overlaps over questions and answers (see Table TABREF2), respectively. In terms of the F1-score for overlapping words, SQuAD gives the least portion of overlaps between question-answer pairs although WikiQA comes very close.
Fig. FIGREF4 shows the distributions of seven question types grouped deterministically from the lexicons. Although these corpora have been independently developed, a general trend is found, where the what question type dominates, followed by how and who, followed by when and where, and so on.
Fig. FIGREF6 shows the distributions of answer categories automatically classified by our Convolutional Neural Network model trained on the data distributed by li:02a. Interestingly, each corpus focuses on different categories, Numeric for WikiQA and SelQA, Entity for SQuAD, and Person for InfoboxQA, which gives enough diversities for statistical learning to build robust models.
On the contrary, SQUAD’s average candidates per question (c/q) is the smallest because SQUAD extracts answer candidates from paragraphs whereas the others extract them from sections or infoboxes that consist of bigger contexts.
|
Why is SQUAD's average candidates per question (c/q) the smallest?
|
Because SQUAD extracts answer candidates from paragraphs whereas the others extract them from sections or infoboxes that consist of bigger contexts.
|
null | false
| null |
What are some common designs in flash tattoos?
|
Common designs in flash tattoos include skulls, knives, mermaids, panthers, sparrows, hearts, eagles, anchors, flowers, roses, butterflies, hands, snakes, and faces.
|
|
null | false
| null |
Furze Hill (or Furzehill) is a hamlet situated in the New Forest National Park of Hampshire, England. It is in the civil parish of Ellingham, Harbridge and Ibsley. Its nearest town is Fordingbridge, which lies approximately 3.1 miles (5.2 km) north-west from the hamlet.
Furze Hill is a hamlet in the civil parish of Hyde. It is situated on low ground on either side of Huckles Brook on the South Gorley to Ogdens road. The houses are surrounded by fields and paddocks. Additional houses can be found at the southern edge of Gorley Common lining a narrow track to North Gorley.
Furze Hill is the most recent settlement in the parish of Hyde, becoming an established community only in the 20th century.
|
Was Furze Hill an established community in the 19th century?
|
No, it was not an established community until the 20th century.
|
1910.03467
| true
| null |
Because a Vietnamese WordNet is not available, we only exploit WordNet to tackle unknown words in Japanese texts in our Japanese→Vietnamese translation system. After segmentation with Kytea, the LSW algorithm is applied to the Japanese texts to replace OOV words with their synonyms. We choose the 1-best synonym for each OOV word. Table TABREF18 shows the number of OOV words replaced by their synonyms. The replaced texts are then BPEd and trained on the proposed architecture. The largest improvement is +0.92 between (1) and (3). We observed an improvement of +0.7 BLEU points between (3) and (5) without using the data augmentation described in BIBREF21.
After segmentation with Kytea, the LSW algorithm is applied to the Japanese texts to replace OOV words with their synonyms.
|
Are synonymous relation taken into account in the Japanese-Vietnamese task?
|
Yes.
|
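For readers curious how such a synonym substitution might look in practice, here is a minimal sketch using NLTK's multilingual WordNet interface. The paper's LSW algorithm is not reproduced here; taking the first alternative lemma found as the "1-best synonym" is purely an illustrative assumption:

```python
import nltk
from nltk.corpus import wordnet as wn

nltk.download("wordnet", quiet=True)
nltk.download("omw-1.4", quiet=True)  # Open Multilingual WordNet, includes Japanese

def best_synonym(word: str, lang: str = "jpn") -> str | None:
    """Return a crude '1-best' synonym: the first alternative lemma found."""
    for synset in wn.synsets(word, lang=lang):
        for lemma in synset.lemma_names(lang=lang):
            if lemma != word:
                return lemma
    return None

def replace_oov(tokens: list[str], vocab: set[str]) -> list[str]:
    """Replace out-of-vocabulary tokens with a synonym when one exists."""
    return [t if t in vocab else (best_synonym(t) or t) for t in tokens]
```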
null | false
| null |
Identify the bird from the list: Butter, Bittern, Bitterness
|
Bittern
|
|
null | false
| null |
If I have a dollar, two nickels and a quarter, how much money in total do I have? Please respond with numbers and the dollar sign only, no words.
|
$1.35
|
|
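As a quick arithmetic check of the answer above, using the standard US coin values:

```python
coins = {"dollar": 1.00, "nickel": 0.05, "quarter": 0.25}
total = coins["dollar"] + 2 * coins["nickel"] + coins["quarter"]
print(f"${total:.2f}")  # $1.35
```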
null | false
| null |
Why are some people in Argentina called Portenos?
|
"Porteños" comes from the word Puerto, which means Port in Spanish. Porteños was first introduced to describe the people that lived near the main Port in Argentina which is in Buenos Aires, the Capital of Argentina. So Porteños is used to refer to people that live in Buenos Aires. If referring to a female the right use of the word would be PorteñA, with an "a" at the end.
|
|
null | false
| null |
Tell me which one does not fit with the group: Blue, Green, White, Yellow
|
White. Blue, Green, and Yellow are all colors of the rainbow, but White is not.
|
|
null | false
| null |
In what part of the United States is the New England region?
|
New England is located in the northeast of the United States.
|
|
2003.05522
| false
| null |
Frege promoted what we could call sentence holism: “Only in the context of a sentence does a word have a meaning.” BIBREF10 We will later use its modern reformulation to show an analogy with certain neural language models and therefore their holistic character.
Frege promoted what we could call sentence holism: “Only in the context of a sentence does a word have a meaning.” BIBREF10
|
What does Frege's holistic and functional approach to meaning state?
|
The answers are shown as follows:
* Only in the context of a sentence does a word have a meaning.
|
1711.06288
| true
| null |
Due to the lack of available models for the task, we compare our framework with a previous model developed for image-to-image translation as a baseline, which colorizes images without text descriptions. We carried out two human evaluations using Mechanical Turk to compare the performance of our model and the baseline. For each experiment, we randomly sampled 1,000 images from the test set and then turned these images into black and white. For each image, we generated a pair of images, one with our model and one with the baseline. Our model took the caption into account during generation, while the baseline did not. We then randomly permuted the 2,000 generated images. In the first experiment, we presented the 2,000 images to human annotators, together with their original captions, and asked them to rate the consistency between the generated images and the captions on a binary scale, with 0 indicating no consistency and 1 indicating consistency. In the second experiment, we presented the same 2,000 images to human annotators without captions and asked them to rate the quality of each image without its original caption. Quality was rated on the same binary scale, with 0 indicating low quality and 1 indicating high quality.
We carried out two human evaluations using Mechanical Turk to compare the performance of our model and the baseline.
|
Is there any human evaluation involved in evaluating this framework?
|
Yes.
|
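A brief sketch of how such binary ratings are typically aggregated into per-system scores; the record layout below is hypothetical, since the paper does not specify its analysis code:

```python
from statistics import mean

# Each record: (system, consistency_rating, quality_rating), ratings in {0, 1}.
ratings = [
    ("ours", 1, 1), ("baseline", 0, 1), ("ours", 1, 0), ("baseline", 0, 0),
]

for system in ("ours", "baseline"):
    rows = [r for r in ratings if r[0] == system]
    print(system,
          "consistency:", mean(r[1] for r in rows),
          "quality:", mean(r[2] for r in rows))
```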
null | false
| null |
Schechter was born in Boston and grew up in Brookline, Massachusetts and New York City. Her father, Danny Schechter, a grandson of Russian-Jewish immigrant socialists, was a human rights activist, author, filmmaker and television producer.
|
Where was Sarah Schechter born?
|
Sarah Schechter was born in Boston, Massachusetts.
|
null | false
| null |
What is a, "one-timer" in ice hockey?
|
Ice hockey is a sport typically played with one goalie and five skaters on each of two opposing teams. A one-timer is a shot taken by a player directly from another player's pass, in a single motion. It does not necessarily have to be a shot on goal: the term describes the backswing and striking of the puck from a pass, deflection, or redirection without stopping or accepting the puck first. The name refers to the idea that the shooter has one opportunity to make contact with the puck in this manner; without proper timing, the shooter risks missing the shot entirely because the puck is in motion as the shot is attempted.
|
|
null | false
| null |
What is supply and demand?
|
Supply and demand are economic terms that, in a free-market economy, dictate the price of a good or service. They are typically shown on a two-dimensional graph where the y-axis is price and the x-axis is quantity; the market price of a particular good or service is found where the two curves intersect. The demand curve slopes downward, indicating that as the price of a good or service rises, the quantity demanded falls. The supply curve slopes upward, meaning that as the price rises, producers are willing to supply a larger quantity. When demand increases while supply remains fixed, the price of the good or service increases.
|
|
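As a worked illustration of the intersection described above, consider hypothetical linear curves Qd = a - b*p (demand) and Qs = c + d*p (supply); setting them equal gives the equilibrium price p* = (a - c) / (b + d). The coefficients below are invented for the example:

```python
a, b = 100.0, 2.0   # demand: Qd = a - b*p (downward sloping)
c, d = 10.0, 1.0    # supply: Qs = c + d*p (upward sloping)

p_star = (a - c) / (b + d)   # equilibrium price
q_star = a - b * p_star      # equilibrium quantity
print(p_star, q_star)        # 30.0 40.0
```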
null | false
| 151
|
The Humor Analysis based on Human Annotation (HAHA) 2019 BIBREF1 competition asked for analysis of two tasks in the Spanish language based on a corpus of publicly collected data described in Castro et al. BIBREF2:
The HAHA dataset includes labeled data for 24,000 tweets and a test set of 6,000 tweets (an 80%/20% train/test split). Each record includes the raw tweet text (including accents and emoticons), a binary humor label, the number of votes for each of five star ratings, and a "Funniness Score" that is the average of the 1-to-5-star votes cast. Examples and data can be found on the CodaLab competition webpage.
Task and Dataset Description: The Humor Analysis based on Human Annotation (HAHA) 2019 competition asked for analysis of two tasks in the Spanish language based on a corpus of publicly collected data described in Castro et al.
|
What is the baseline of this paper?
|
A corpus of publicly collected data described in Castro et al.
|
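As a small illustration, the "Funniness Score" described above (the average of the 1-to-5-star votes) can be computed from the per-star vote counts; the mapping layout below is a hypothetical stand-in for the actual HAHA record format:

```python
def funniness_score(votes: dict[int, int]) -> float:
    """Average star rating from a {stars: vote_count} mapping."""
    total_votes = sum(votes.values())
    return sum(stars * count for stars, count in votes.items()) / total_votes

print(funniness_score({1: 0, 2: 1, 3: 4, 4: 3, 5: 2}))  # 3.6
```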
null | false
| null |
Which President of the US was less of a hawk than people supposed?
|
Ronald Reagan. Although he was considered an arch conservative and ardent neoliberal, he took a liberal approach to arms control. It was Reagan who pushed for a reduction in strategic arms with Mikhail Gorbachev during a walk by Lake Geneva, when he asked whether, if the US were attacked by aliens, "would you help us?" Gorbachev replied, "No doubt about it," and the two found a starting point for a long arms-reduction discussion between the two powers. Reagan took the initiative to offer deep cuts in arms levels in Reykjavik less than a year later.
|
|
null | false
| null |
What is the best way to cook a steak?
|
There are many ways to cook a steak. First, choose the desired doneness: the range runs from rare, medium rare, medium, and medium well to well done. A steak can be cooked in several ways, for example on a grill or in a pan; this example uses a grill. Remove the steak from the fridge and season it, then allow it to warm to near room temperature. This helps prevent the steak from drying out and improves the taste. Cook the steak to the desired doneness, trying to flip it only 1–3 times. Once done, let the meat rest for 5 minutes for the best flavor.
|
|
null | false
| null |
Twitch is an American video live streaming service that focuses on video game live streaming, including broadcasts of esports competitions, in addition to offering music broadcasts, creative content, and "in real life" streams. Twitch is operated by Twitch Interactive, a subsidiary of Amazon.com, Inc. It was introduced in June 2011 as a spin-off of the general-interest streaming platform Justin.tv. Content on the site can be viewed either live or via video on demand. The games shown on Twitch's current homepage are listed according to audience preference and include genres such as real-time strategy games (RTS), fighting games, racing games, and first-person shooters.
The popularity of Twitch eclipsed that of its general-interest counterpart. In October 2013, the website had 45 million unique viewers, and by February 2014, it was considered the fourth-largest source of peak Internet traffic in the United States. At the same time, Justin.tv's parent company was re-branded as Twitch Interactive to represent the shift in focus – Justin.tv was shut down in August 2014. That month, the service was acquired by Amazon for US$970 million, which later led to the introduction of synergies with the company's subscription service Amazon Prime. Twitch acquired Curse LLC in 2016, an operator of online video gaming communities and introduced means to purchase games through links on streams along with a program allowing streamers to receive commissions on the sales of games that they play.
By 2015, Twitch had more than 100 million viewers per month. In 2017, Twitch remained the leading live streaming video service for video games in the US, and had an advantage over YouTube Gaming, which shut down its standalone app in May 2019. As of February 2020, it had 3 million broadcasters monthly and 15 million daily active users, with 1.4 million average concurrent users. As of May 2018, Twitch had over 27,000 partner channels.
|
Given this paragraph about Twitch, during what year did Twitch have more than 100 million viewers per month?
|
Twitch surpassed 100 million viewers per month in the year 2015.
|
null | false
| null |
What countries make up the Benelux region?
|
The Benelux region comprises the following countries: Belgium, the Netherlands, and Luxembourg.
|
|
null | false
| null |
Why should you live in Florida?
|
Florida is a great place to live if you like the sun, warm days, spending time at the beach, and a relaxing lifestyle. There are many things to do in Florida, such as visiting Disney World, Universal Studios, the Kennedy Space Center, or the many aquariums throughout the state. There are several amazing beaches, but one of my favorites is Siesta Key because of its powder-white sand that doesn't get hot. You do have to keep an eye out for stingrays, but beyond that the Gulf is great to swim in. There are things to do all day on Siesta Key, from the beach to the village with multiple restaurants and shops. The only requirement is to relax and enjoy your time in a laid-back tropical atmosphere.
|
|
null | false
| 302
|
In recent studies, deep learning models have achieved top performance in emotion detection and classification. Access to large amounts of data has contributed to these high results. Numerous efforts have been dedicated to building emotion classification models, and successful results have been reported. In this work, we combine several popular emotion datasets in different genres, plus the one given for this task, to train the emotion model we developed. We introduce a multigenre training mechanism; our intuitions in combining different genres are a) to augment the training data and b) to generalize the detection of emotion. We utilize portable textual information such as subjectivity, sentiment, and the presence of emotion words, because emotional sentences are subjective, and affectual states like sentiment are strong indicators of the presence of emotion.
The rest of this paper is structured as follows: Section SECREF2 introduces our neural net model; Section SECREF3 explains the experimental setup and the data used for the training and development sets; Section SECREF4 discusses the results and analyzes the errors; Section SECREF5 describes related work; Section SECREF6 concludes our study and discusses future directions.
We utilize portable textual information such as subjectivity, sentiment, and the presence of emotion words, because emotional sentences are subjective, and affectual states like sentiment are strong indicators of the presence of emotion.
|
Why do they utilize portable textual information?
|
Because emotional sentences are subjective, and affectual states like sentiment are strong indicators of the presence of emotion.
|
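The "portable" features mentioned above (subjectivity, sentiment, presence of emotion words) could be extracted roughly as follows; the toy lexicon and the TextBlob-based scoring are illustrative stand-ins, not the authors' actual feature pipeline:

```python
from textblob import TextBlob

EMOTION_WORDS = {"happy", "sad", "angry", "afraid", "joy", "fear"}  # toy lexicon

def portable_features(sentence: str) -> list[float]:
    blob = TextBlob(sentence)
    tokens = set(sentence.lower().split())
    return [
        blob.sentiment.polarity,              # sentiment in [-1, 1]
        blob.sentiment.subjectivity,          # subjectivity in [0, 1]
        float(bool(tokens & EMOTION_WORDS)),  # any emotion word present?
    ]

print(portable_features("I am so happy about this!"))
```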
null | false
| null |
Give me tips on how to stick to habits?
|
It is sometimes tough to stick to habits, but it is possible if we follow a methodical approach.
Here are some tips on how you can do it.
1. Understand why you need to stick to a habit and why it is essential.
2. Make the habit obvious; e.g., if you want to play guitar every night after dinner, keep your guitar visible in the dining area.
3. Make habits easy. We often avoid long and complex things, so keep them simple. e.g., if you want to make a habit of exercising, start with small and simple exercises.
4. Make habits attractive and satisfying. Give yourself rewards after finishing the routine; e.g., you can have an ice cream cone after exercising 3–4 days a week.
I hope this helps you in your habit-making journey.
|
|
null | false
| 119
|
In this paper, we showed that by jointly learning the tasks of ternary and fine-grained classification with a multitask learning model, one can greatly improve the performance on the second. This opens several avenues for future research. Since sentiment is expressed in different textual types like tweets and paragraph-sized reviews, in different languages (English, German, ...), and at different granularity levels (binary, ternary, ...), one can imagine multitask approaches that could benefit from combining such resources. Also, while we opted for biLSTM networks here, one could use convolutional neural networks, or even try to combine different types of networks and tasks to investigate the performance effect of multitask learning. Lastly, while our approach mainly relied on the foundations of BIBREF4, the internal mechanisms and the theoretical guarantees of multitask learning remain to be better understood.
In this paper, we showed that by jointly learning the tasks of ternary and fine-grained classification with a multitask learning model, one can greatly improve the performance on the second.
|
What does the author prove with this paper?
|
They showed that by jointly learning the tasks of ternary and fine-grained classification with a multitask learning model, one can greatly improve the performance on the second.
|
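For the curious, a multitask setup like the one described (one shared encoder with separate ternary and fine-grained heads) can be sketched in a few lines of PyTorch. This is a generic sketch of the idea, not the authors' exact architecture or hyperparameters:

```python
import torch
import torch.nn as nn

class MultitaskSentiment(nn.Module):
    def __init__(self, vocab_size: int, emb_dim: int = 100, hidden: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)
        self.ternary_head = nn.Linear(2 * hidden, 3)  # negative / neutral / positive
        self.fine_head = nn.Linear(2 * hidden, 5)     # 5-point fine-grained scale

    def forward(self, token_ids: torch.Tensor):
        embedded = self.embed(token_ids)
        _, (h_n, _) = self.encoder(embedded)
        # Concatenate the final forward and backward hidden states.
        sentence = torch.cat([h_n[-2], h_n[-1]], dim=-1)
        return self.ternary_head(sentence), self.fine_head(sentence)

# Training alternates (or sums) the two task losses on their respective batches.
```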
null | false
| null |
What is the capital of Alaska?
|
Juneau is the capital of Alaska.
|