| paper_id (string, 10 chars, nullable) | yes_no (bool, 2 classes) | paper_index (int64, 0-519, nullable) | evidence (string, 0-37.7k chars) | question (string, 4-11.7k chars) | answer (string, 1-26k chars) |
|---|---|---|---|---|---|
null | false
| null |
Anne Rochelle Steely Ramirez (born October 23, 1962), née Anne Rochelle Steely, is a former long-distance runner who competed internationally for the United States. She specialized in the 3,000 meters on the track and later competed in road running events.
She made her international debut in cross country and helped the United States win the women's team title at the IAAF World Cross Country Championships in 1985. Her focus then changed to the 3,000 meters, a distance at which she was a two-time US champion (outdoors in 1991 and indoors in 1992). She reached the final of that event at the 1991 World Championships in Athletics and the 1992 Barcelona Olympics.
Steely's performances deteriorated and she was diagnosed with hyperthyroidism. However, she made an athletic comeback in 1998 and she won two national road titles, as well as appearing at the Goodwill Games and IAAF World Half Marathon Championships that year. She made one more appearance at the World Half Marathon event in 2000, but retired to focus on education studies.
|
List the years in which Anne Rochelle Steely Ramirez participated in running events, based on the passage. Use a bulleted format for the years and sort them in ascending order.
|
- 1985
- 1991
- 1992
- 1998
- 2000
|
null | false
| 57
|
The vast amounts of data collected by healthcare providers in conjunction with modern data analytics techniques present a unique opportunity to improve health service provision and the quality and safety of medical care for patient benefit BIBREF0 . Much of the recent research in this area has been on personalised medicine and its aim to deliver better diagnostics aided by the integration of diverse datasets providing complementary information. Another large source of healthcare data is organisational. In the United Kingdom, the National Health Service (NHS) has a long history of documenting extensively the different aspects of healthcare provision. The NHS is currently in the process of increasing the availability of several databases, properly anonymised, with the aim of leveraging advanced analytics to identify areas of improvement in NHS services.
One such database is the National Reporting and Learning System (NRLS), a central repository of patient safety incident reports from the NHS in England and Wales. Set up in 2003, the NRLS now contains more than 13 million detailed records. The incidents are reported using a set of standardised categories and contain a wealth of organisational and spatio-temporal information (structured data) as well as, crucially, a substantial component of free text (unstructured data) where incidents are described in the `voice' of the person reporting. The incidents are wide ranging: from patient accidents to lost forms or referrals; from delays in admission and discharge to serious untoward incidents, such as retained foreign objects after operations. The review and analysis of such data provides critical insight into the complex functioning of different processes and procedures in healthcare, towards service improvement for safer care.
Although statistical analyses are routinely performed on the structured component of the data (dates, locations, assigned categories, etc), the free text remains largely unused in systematic processes. Free text is usually read manually but this is time-consuming, meaning that it is often ignored in practice, unless a detailed review of a case is undertaken because of the severity of harm that resulted. There is a lack of methodologies that can summarise content and provide content-based groupings across the large volume of reports submitted nationally for organisational learning. Methods that could provide automatic categorisation of incidents from the free text would sidestep problems such as difficulties in assigning an incident category by virtue of a priori pre-defined lists in the reporting system or human error, as well as offering a unique insight into the root cause analysis of incidents that could improve the safety and quality of care and efficiency of healthcare services.
Our goal in this work is to showcase an algorithmic methodology that detects content-based groups of records in a given dataset in an unsupervised manner, based only on the free and unstructured textual description of the incidents. To do so, we combine recently developed deep neural-network high-dimensional text-embedding algorithms with network-theoretical methods. In particular, we apply multiscale Markov Stability (MS) community detection to a sparsified geometric similarity graph of documents obtained from text vector similarities. Our method departs from traditional natural language processing tools, which have generally used bag-of-words (BoW) representations of documents and statistical methods based on Latent Dirichlet Allocation (LDA) to cluster documents BIBREF1 . More recent approaches have used deep neural network based language models clustered with k-means, without a full multiscale graph analysis BIBREF2 . There have been some previous applications of network theory to text analysis. For example, Lancichinetti and co-workers BIBREF3 used a probabilistic graph construction analysed with the InfoMap algorithm BIBREF4 ; however, their community detection was carried out at a single scale, and their representation of text as BoW arrays lacks the power of neural-network text embeddings. The application of multiscale community detection allows us to find groups of records with consistent content at different levels of resolution; hence the content categories emerge from the textual data, rather than fitting pre-designed classifications. The obtained results could thus help mitigate possible human error or effort in finding the right category in complex category classification trees.
We showcase the methodology through the analysis of a dataset of patient incidents reported to the NRLS. First, we use the 13 million records collected by the NRLS since 2004 to train our text embedding (although a much smaller corpus can be used). We then analyse a subset of 3229 records reported from St Mary's Hospital, London (Imperial College Healthcare NHS Trust) over three months in 2014 to extract clusters of incidents at different levels of resolution in terms of content. Our method reveals multiple levels of intrinsic structure in the topics of the dataset, as shown by the extraction of relevant word descriptors from the grouped records and a high level of topic coherence. Originally, the records had been manually coded by the operator upon reporting, with up to 170 features per case, including a two-level manual classification of the incidents. Therefore, we also carried out an a posteriori comparison against the hand-coded categories assigned by the reporter (healthcare personnel) at the time of the report submission. Our results show good overall correspondence with the hand-coded categories across resolutions and, specifically, at the medium level of granularity. Several of our clusters of content correspond strongly to well-defined categories, yet our results also reveal complementary categories of incidents not defined in the external classification. In addition, the tuning of the granularity afforded by the method can be used to provide distinct levels of resolution in certain areas corresponding to specialised or particular sub-themes.
First, we use the 13 million records collected by the NRLS since 2004 to train our text embedding (although a much smaller corpus can be used). We then analyse a subset of 3229 records reported from St Mary’s Hospital, London (Imperial College Healthcare NHS Trust) over three months in 2014 to extract clusters of incidents at different levels of resolution in terms of content.
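As an illustration of the pipeline described above, here is a minimal, self-contained sketch (not the authors' implementation) of the core step: build a sparsified cosine-similarity kNN graph over document embeddings and extract groups from it. Connected components stand in for multiscale Markov Stability community detection, which has no standard library implementation, and the synthetic "document vectors" are hypothetical.

```python
import numpy as np

def knn_graph_clusters(doc_vectors, k=3):
    """Group documents via a sparsified cosine-similarity kNN graph.

    Connected components are a crude, single-scale stand-in for the
    multiscale Markov Stability community detection used in the paper.
    """
    # Normalise embeddings so dot products equal cosine similarities.
    vecs = doc_vectors / np.linalg.norm(doc_vectors, axis=1, keepdims=True)
    sims = vecs @ vecs.T
    np.fill_diagonal(sims, -np.inf)              # exclude self-similarity
    neighbours = np.argsort(-sims, axis=1)[:, :k]  # k most similar documents

    # Build an undirected adjacency list from the kNN edges.
    n = len(vecs)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in map(int, neighbours[i]):
            adj[i].add(j)
            adj[j].add(i)

    # Extract connected components via depth-first search.
    seen, clusters = set(), []
    for start in range(n):
        if start in seen:
            continue
        comp, stack = set(), [start]
        while stack:
            node = stack.pop()
            if node in comp:
                continue
            comp.add(node)
            stack.extend(adj[node] - comp)
        seen |= comp
        clusters.append(comp)
    return clusters

# Toy corpus: two well-separated groups of synthetic "document vectors".
rng = np.random.default_rng(0)
group_a = rng.normal(loc=[5, 0, 0, 0], scale=0.1, size=(10, 4))
group_b = rng.normal(loc=[0, 0, 0, 5], scale=0.1, size=(10, 4))
clusters = knn_graph_clusters(np.vstack([group_a, group_b]), k=3)
```

Because the kNN sparsification keeps only strong similarities, the two synthetic groups end up in separate graph components and are recovered as distinct clusters.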
|
How do they extract clusters of incidents at different levels of resolution in terms of content?
|
After training text embedding, they analyse a subset of 3229 records reported from St Mary’s Hospital, London (Imperial College Healthcare NHS Trust) over three months in 2014.
|
null | false
| 50
|
Our datasets are scientific articles that were published on arXiv. The sets contain articles (in LaTeX format) from four computer science domains: NLP, IR, AI, and ML. They were created by filtering arXiv articles based on their primary and secondary categories. We used the following categories for the four collections: cs.cl for NLP; cs.ir for IR; cs.ai for AI and stat.ml, stat.co, stat.me or cs.lg for ML.
Table TABREF22 shows the number of documents along with the number of unique words, equations and equation units for each collection. The equations are display equations that were enumerated in the LaTeX version of the articles. Unlike inline equations, which in many instances represent variables with general meaning (e.g. INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , etc.) and even numerical values, display equations typically represent mathematical concepts with more specific semantics. For the empirical study we used a random subset of 2k singletons from the total collection, along with all equations that occur more than once. For the qualitative analysis, we used all equations.
We extracted words by tokenizing articles using the NLTK package BIBREF12 and restricted the vocabulary to noun phrases and adjectives. The vocabulary was selected by:
removing common stopwords
treating the top 25 most frequent words as stop words and removing them
including words whose term frequency is greater than or equal to 10 and whose character length is greater than or equal to 4
including the top 50 most frequent abbreviations whose character length is 3 (an exception to our previous rule)
When tokenizing equations, we first create an effective vocabulary of equation units. We convert equations into SLT format and collect collection wide frequency statistics over the equation units. The vocabulary contains all equation units whose frequency count is greater than INLINEFORM0 .
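The vocabulary selection rules listed above can be sketched as a plain-Python filter. This is a hypothetical reconstruction, not the authors' code; `counts` is assumed to be a term-frequency `Counter` over the tokenized corpus, and `abbreviations` a set of known abbreviations.

```python
from collections import Counter

def build_vocabulary(counts, stopwords, abbreviations, top_stop=25,
                     min_tf=10, min_len=4, top_abbrev=50):
    """Apply the four vocabulary-selection rules to term-frequency counts."""
    # Rules 1-2: remove common stopwords and the top-N most frequent words.
    frequent = {w for w, _ in counts.most_common(top_stop)}
    banned = set(stopwords) | frequent
    # Rule 3: keep words with tf >= 10 and character length >= 4.
    vocab = {w for w, tf in counts.items()
             if w not in banned and tf >= min_tf and len(w) >= min_len}
    # Rule 4: add the top-50 most frequent 3-character abbreviations.
    short = [w for w, _ in counts.most_common()
             if w in abbreviations and len(w) == 3]
    vocab |= set(short[:top_abbrev])
    return vocab

# Toy counts for illustration; top_stop is lowered to fit the tiny example.
counts = Counter({"the": 500, "of": 400, "model": 120, "data": 90,
                  "svm": 80, "equation": 15, "ran": 2})
vocab = build_vocabulary(counts, {"the", "of"}, {"svm"}, top_stop=2)
```

Here "svm" survives through the abbreviation exception even though it fails the length rule, while "ran" is dropped for low frequency.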
We extracted words by tokenizing articles using the NLTK package (Bird et al., 2009) and restricted the vocabulary to noun phrases and adjectives.
|
What tool do they use to extract words?
|
The NLTK package.
|
2003.03612
| false
| null |
The present work: Binomials in large-scale online text. In this work, we use data from large-scale Internet text corpora to study binomials at a massive scale, drawing on text created by millions of users. Our approach is more wholesale than prior work: we focus on all binomials of sufficient frequency, without first restricting to small samples of binomials that might be frozen. We draw our data from news publications, wine reviews, and Reddit, which, in addition to large volume, also let us characterize binomials in new ways and analyze differences in binomial orderings across communities and over time. Furthermore, the subject matter on Reddit leads to many lists about people and organizations that let us study orderings of proper names, a key setting for word ordering which has been difficult to study by other means.
We draw our data from news publications, wine reviews, and Reddit, which, in addition to large volume, also let us characterize binomials in new ways and analyze differences in binomial orderings across communities and over time
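A minimal sketch of how binomial orderings can be counted in such corpora (illustrative only; the toy corpus, the regex-based matching, and the word pair are assumptions, not the paper's pipeline):

```python
import re
from collections import Counter

def binomial_counts(texts, pair):
    """Count both orderings of a binomial ('X and Y' vs 'Y and X') in a corpus."""
    a, b = pair
    counts = Counter()
    for text in texts:
        counts[(a, b)] += len(re.findall(rf"\b{a} and {b}\b", text, re.IGNORECASE))
        counts[(b, a)] += len(re.findall(rf"\b{b} and {a}\b", text, re.IGNORECASE))
    return counts

# Toy corpus standing in for the news / wine-review / Reddit data.
docs = ["Salt and pepper to taste.", "Add pepper and salt.", "salt and pepper again"]
counts = binomial_counts(docs, ("salt", "pepper"))
```

Comparing the two counts for a pair gives its orientation preference; aggregating over communities or time windows gives the comparisons described above.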
|
What types of various community texts have been investigated for exploring global structure of binomials?
|
The answers are shown as follows:
* news publications, wine reviews, and Reddit
|
null | false
| null |
In 2011, Kathy Ryan, director of photography for The New York Times Magazine, commissioned Prager to shoot twelve one-minute films featuring film actors from that year, inspired by "cinematic villainy". Prager won a News and Documentary Emmy Award for New Approaches to News & Documentary Programming: Arts, Lifestyle & Culture for her Touch of Evil short films.
|
What magazine was Alex Prager the director of Photography for?
|
According to the passage, it was Kathy Ryan, not Alex Prager, who was the director of photography for The New York Times Magazine; Ryan commissioned Prager to shoot the short films.
|
null | false
| null |
Do dogs have a third eye?
|
No, dogs have only two eyes. They do, however, have a third eyelid, known as the nictitating membrane, in the inner corner of each eye, which is sometimes mistaken for a "third eye".
|
|
null | false
| null |
Give me a bulleted list of the last 10 UK Prime Ministers
|
- Rishi Sunak
- Liz Truss
- Boris Johnson
- Theresa May
- David Cameron
- Gordon Brown
- Tony Blair
- John Major
- Margaret Thatcher
- James Callaghan
|
|
null | false
| null |
The UFC was founded by businessman Art Davie and Brazilian martial artist Rorion Gracie, and the first event was held in 1993 at the McNichols Sports Arena in Denver, Colorado. The purpose of the early Ultimate Fighting Championship competitions was to identify the most effective martial art in a contest with minimal rules and no weight classes between competitors of different fighting disciplines.
|
Given a reference about the creation of the UFC, tell me the member of the Gracie family who co-created the UFC and the city they hosted the first UFC event.
|
Rorion Gracie is the co-creator. The first event was held in Denver, Colorado.
|
null | false
| 353
|
Political event data has existed in various forms since the 1970s. Two of the most common political event datasets were the World Event Interaction Survey (WEIS) and the Conflict and Peace Data Bank (COPDAB) BIBREF0 , BIBREF1 . These two datasets were eventually replaced by the projects created by Philip Schrodt and various collaborators. In general, these projects were marked by the use of the Conflict and Mediation Event Observations (CAMEO) coding ontology and automated, machine-coding rather than human coding BIBREF2 , BIBREF3 . The CAMEO ontology is made up of 20 “top-level” categories that encompass actions such as “Make Statement” or “Protest”, and contains over 200 total event classifications. This ontology has served as the basis for most of the modern event datasets such as the Integrated Crisis Early Warning System (ICEWS) BIBREF4 , the Global Database of Events, Language, and Tone (GDELT), and the Phoenix dataset presented in this paper.
This type of data can prove highly useful for many types of studies. Since this type of data is inherently atomic (each observation is a record of a single event between a source and a target), it provides a disaggregated view of political events. This means that the data can be used to examine interactions below the usual monthly or yearly levels of aggregation. This approach can be used in a manner consistent with traditional hypothesis testing that is the norm in political science BIBREF5 , BIBREF6 , BIBREF7 . Additionally, event data has proven useful in forecasting models of conflict, since the finer time resolution allows analysts to gain better leverage over the prediction problem than is possible when using more highly aggregated data BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 . Finally, the advent of daily-updated event data has led to many novel uses such as watchboarding or dashboarding. The goal in these situations is to provide an easy-to-understand interface that analysts can use to quickly monitor ongoing or emerging situations around the world. These applications provide a new frontier for event data that has not been considered much until this point.
The status quo of TABARI-generated, CAMEO-coded event data, established in the early 2000s, has persisted with little change. BIBREF12 outlined many potential advances in the generation of political event data, such as real-time processing of news stories, the incorporation of open-source natural language processing (NLP) software, and enhancements in the automated coding structure. Two publicly-available datasets, GDELT and ICEWS, have each attempted to implement some, or all, of these changes in their respective data-generating pipelines. In terms of goals, the ICEWS project seems closest to sharing the vision of the Phoenix dataset. A more in-depth comparison of Phoenix and ICEWS is presented in a later section. In short, the goal of the project presented in this chapter is to implement most of the improvements suggested in BIBREF12 .
The CAMEO ontology is made up of 20 “top-level” categories that encompass actions such as “Make Statement” or “Protest”, and contains over 200 total event classifications. This ontology has served as the basis for most of the modern event datasets such as the Integrated Crisis Early Warning System (ICEWS) (O’Brien 2010), the Global Database of Events, Language, and Tone (GDELT), and the Phoenix dataset presented in this paper.
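For illustration, mapping a full CAMEO event code to its top-level category amounts to a two-digit prefix lookup. The category subset below is a hypothetical, abbreviated rendering of the ontology, not its official text:

```python
# Hypothetical, abbreviated subset of the 20 CAMEO top-level categories.
CAMEO_TOP_LEVEL = {
    "01": "Make Statement",
    "04": "Consult",
    "14": "Protest",
    "19": "Fight",
}

def top_level_category(event_code):
    """Map a full CAMEO event code (e.g. '1411') to its top-level category."""
    return CAMEO_TOP_LEVEL.get(event_code[:2], "Unknown")
```

The 200+ fine-grained classifications nest under these two-digit prefixes, which is what lets datasets like ICEWS, GDELT, and Phoenix aggregate events at either level.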
|
Does their Phoenix dataset use CAMEO ontology as a reference?
|
Yes.
|
null | false
| null |
Why is it hard to have a baby and keep a job?
|
It's hard to have a baby and keep a job because babies need a lot of time and attention. One solution is to find a good caregiver, whether it is a nanny or a daycare or some other form of group care. While your baby is with that care, you can do your work for your job. Beware that babies who have care as part of larger groups may get sick very often, so your schedule needs to be flexible to accommodate for sick days (yours or your baby's).
|
|
null | false
| null |
What is an adverb?
|
An adverb is a word or combination of words that modifies a verb, an adjective, or another adverb, typically expressing manner, degree, time, or place. An example of how to use an adverb in a sentence is the following: "the roads are very steep and dangerous." In this example, 'very' is the adverb, qualifying the adjective 'steep'. Another example is "they are a happily married couple." 'Happily' is the adverb in this second example.
|
|
null | false
| null |
Which Alanis Morissette song contains the phrase "jagged little pill" (the title of her debut album)?
|
"You Learn".
|
|
null | false
| null |
I have a list of herbs and spices. Please divide this list into those categories: basil, bay leaf, chives, cardamom, cinnamon, fennel, cloves, lemon grass, nutmeg, pepper, oregano, turmeric, parsley, ginger, saffron, vanilla, rosemary, cumin, sage, and thyme.
|
Herbs: basil, bay leaf, chives, fennel, lemon grass, oregano, parsley, rosemary, sage, thyme. Spices: cardamom, cinnamon, cloves, nutmeg, pepper, turmeric, ginger, saffron, vanilla, cumin.
|
|
null | false
| null |
In the Roman Catholic Church a consistory is a formal meeting of the College of Cardinals called by the pope. There are two kinds of consistories, extraordinary and ordinary. An "extraordinary" consistory is held to allow the pope to consult with the entire membership of the College of Cardinals. An "ordinary" consistory is ceremonial in nature and attended by cardinals resident in Rome. For example, the pope elevates new cardinals to the College at a consistory; Pope Francis has called consistories for ceremonies of canonization.
|
From the given text, Provide some info regarding consistory
|
1. A consistory is a formal gathering of the College of Cardinals in the Roman Catholic Church that is summoned by the pope.
2. Consistories come in two varieties: extraordinary and ordinary.
3. The pope can consult with the full College of Cardinals by calling an "extraordinary" consistory.
4. Cardinals who live in Rome attend "ordinary" consistories, which are ceremonial in nature.
5. The pope elevates new cardinals to the College at a consistory; Pope Francis, for instance, has convened consistories for canonization ceremonies.
|
null | false
| null |
What years did the Patriots win a Super Bowl?
|
The Patriots have won six Super Bowls, following the 2001, 2003, 2004, 2014, 2016, and 2018 seasons.
|
|
null | false
| null |
Classify each of the following as root or shoot vegetables: tomato, potato, carrot, brinjal, onion, lady finger, cucumber, beets, sweet potato
|
tomato, brinjal, lady finger, cucumber are shoot vegetables and potato, carrot, onion, beets, sweet potato are root vegetables.
|
|
null | false
| null |
Cabomba is an aquatic plant genus, one of two belonging to the family Cabombaceae. It has divided submerged leaves in the shape of a fan (hence the vernacular name fanwort) and is much favoured by aquarists as an ornamental and oxygenating plant for fish tanks. Use in the aquarium trade has led to some species being introduced to other parts of the world, such as Australia, where they have become weeds.
|
Where is Cabomba sometimes used for decoration but also utility?
|
fish tanks
|
null | false
| null |
Reading railway station is a major transport hub in Reading, Berkshire, England. It is on the northern edge of the town centre, near the main retail and commercial areas and the River Thames, 36 miles (58 km) from London Paddington. The first Reading station was opened on 30 March 1840 as the temporary western terminus of the original line of the Great Western Railway (GWR). Reading is the ninth-busiest station in the UK outside London and the second busiest interchange station outside London with over 3.8 million passengers changing trains at the station annually.
|
When was the first Reading railway station opened?
|
The first Reading railway station was opened on the 30th of March, 1840.
|
null | false
| null |
Rhoticity in English is the pronunciation of the historical rhotic consonant /r/ by English speakers. The presence or absence of rhoticity is one of the most prominent distinctions by which varieties of English can be classified. In rhotic varieties, the historical English /r/ sound is preserved in all pronunciation contexts. In non-rhotic varieties, speakers no longer pronounce /r/ in postvocalic environments—that is, when it is immediately after a vowel and not followed by another vowel. For example, in isolation, a rhotic English speaker pronounces the words hard and butter as /ˈhɑːrd/ and /ˈbʌtər/, whereas a non-rhotic speaker "drops" or "deletes" the /r/ sound, pronouncing them as /ˈhɑːd/ and /ˈbʌtə/. When an r is at the end of a word but the next word begins with a vowel, as in the phrase "better apples", most non-rhotic speakers will pronounce the /r/ in that position (the linking R), since it is followed by a vowel in this case.
The rhotic varieties of English include the dialects of South West England, Scotland, Ireland, and most of the United States and Canada. The non-rhotic varieties include most of the dialects of modern England, Wales, Australia, New Zealand, and South Africa. In some varieties, such as those of some parts of the southern and northeastern United States, rhoticity is a sociolinguistic variable: postvocalic r is deleted depending on an array of social factors, such as being more correlated today with lower socioeconomic status, greater age, certain ethnic identities, and less formal speaking contexts.
Evidence from written documents suggests that loss of postvocalic /r/ began sporadically during the mid-15th century, although these /r/-less spellings were uncommon and were restricted to private documents, especially ones written by women in England. In the mid-18th century, postvocalic /r/ was still pronounced in most environments, but by the 1740s to 1770s it was often deleted entirely, especially after low vowels. By the early 19th century, the southern British standard was fully transformed into a non-rhotic variety, though some variation persisted as late as the 1870s.
In the 18th century (and possibly 17th century), the loss of postvocalic /r/ in British English influenced southern and eastern American port cities with close connections to Britain, causing their upper-class pronunciation to become non-rhotic while the rest of the United States remained rhotic. Non-rhotic pronunciation continued to influence American prestige speech until the 1860s, when the American Civil War began to shift America's centers of wealth and political power to rhotic areas with fewer cultural connections to the old colonial and British elites. In the United States, non-rhotic speech continued to hold some level of prestige up until the mid-20th century, but rhotic speech in particular became prestigious nationwide rapidly after the Second World War, reflected in the national standard of radio and television since the mid-20th century embracing historical /r/.
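The postvocalic deletion rule described above can be caricatured in code. The sketch below is a crude orthographic approximation (it ignores IPA, ignores linking R across word boundaries, and treats only a/e/i/o/u as vowels), meant only to make the rule concrete:

```python
VOWELS = set("aeiou")  # crude vowel inventory for this orthographic sketch

def derhoticize(word):
    """Drop postvocalic r: an r right after a vowel and not before a vowel."""
    out = []
    for i, ch in enumerate(word):
        nxt = word[i + 1] if i + 1 < len(word) else ""
        if ch == "r" and out and out[-1] in VOWELS and nxt not in VOWELS:
            continue  # a non-rhotic speaker deletes this r
        out.append(ch)
    return "".join(out)
```

Applied to "hard" and "butter", this yields "had" and "butte", mirroring the /ˈhɑːd/ and /ˈbʌtə/ pronunciations above, while word-initial r (as in "red") is untouched.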
|
Explain the difference between rhotic and non-rhotic dialects of English.
|
In rhotic English dialects, the historical English /r/ sound is preserved in all pronunciation contexts. In non-rhotic English dialects, speakers no longer pronounce /r/ in postvocalic environments—that is, when it is immediately after a vowel and not followed by another vowel. For example, in isolation, a rhotic English speaker pronounces the words hard and butter as /ˈhɑːrd/ and /ˈbʌtər/, whereas a non-rhotic speaker "drops" or "deletes" the /r/ sound, pronouncing them as /ˈhɑːd/ and /ˈbʌtə/. The rhotic varieties of English include the dialects of South West England, Scotland, Ireland, and most of the United States and Canada. The non-rhotic varieties include most of the dialects of modern England, Wales, Australia, New Zealand, and South Africa.
|
null | false
| null |
Who is the best ever President of the United States ?
|
There are many factors that should be considered when determining the best President of the United States. Many would say there is no "right answer" to this question, since people hold different values on what is or isn't important.
But, generally speaking, Presidents who have successfully led the United States through times of great national or global crisis tend to receive the highest marks. The short list typically includes the following: George Washington, Abraham Lincoln, Franklin Delano Roosevelt (FDR), and Ronald Reagan.
Washington was key to winning the independence of the country from the British as the wartime General during the Revolutionary War, and was the first President during a very tumultuous first 8 years of the United States.
Lincoln kept the country together through the American Civil War, and eradicated slavery - a great moral achievement in finally realizing the principles of the Declaration of Independence for all people no matter the color of their skin.
FDR led the country through the Great Depression, and also was a world leader in fighting German and Japanese aggression during World War II.
Ronald Reagan managed the end of the Cold War without firing a shot, avoiding nuclear catastrophe through shrewd maneuvering and negotiation with Mikhail Gorbachev.
|
|
null | false
| null |
What are the words of House Tyrell?
|
"Growing Strong"
|
|
1808.00265
| false
| null |
Table TABREF10 reports our main results. Our models are built on top of prior works with the additional Attention Supervision Module, as described in Section SECREF3 . Specifically, we denote by Attn-* our adaptation of the respective model by including our Attention Supervision Module. We highlight that the MCB model is the winner of the VQA challenge 2016 and the MFH model is the best single model in the VQA challenge 2017. In Table TABREF10 , we can observe that our proposed model achieves a significant boost in rank-correlation with respect to human attention. Furthermore, our model outperforms alternative state-of-the-art techniques in terms of accuracy in answer prediction. Specifically, the rank-correlation for the MFH model increases by 36.4% when evaluated on the VQA-HAT dataset and by 7.7% when evaluated on VQA-X. This indicates that our proposed methods enable VQA models to provide more meaningful and interpretable results by generating more accurate visual grounding.
Table TABREF10 reports our main results. Our models are built on top of prior works with the additional Attention Supervision Module, as described in Section SECREF3 . Specifically, we denote by Attn-* our adaptation of the respective model by including our Attention Supervision Module. We highlight that the MCB model is the winner of the VQA challenge 2016 and the MFH model is the best single model in the VQA challenge 2017. In Table TABREF10 , we can observe that our proposed model achieves a significant boost in rank-correlation with respect to human attention. Furthermore, our model outperforms alternative state-of-the-art techniques in terms of accuracy in answer prediction. Specifically, the rank-correlation for the MFH model increases by 36.4% when evaluated on the VQA-HAT dataset and by 7.7% when evaluated on VQA-X. This indicates that our proposed methods enable VQA models to provide more meaningful and interpretable results by generating more accurate visual grounding.
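The rank-correlation with human attention reported here can be computed, per image, as Spearman's rank correlation between the flattened attention maps. A minimal sketch with scipy (illustrative only, not the paper's evaluation code; the toy maps are made up):

```python
import numpy as np
from scipy.stats import spearmanr

def attention_rank_correlation(model_attn, human_attn):
    """Spearman rank correlation between two attention maps of equal shape."""
    return spearmanr(model_attn.ravel(), human_attn.ravel()).correlation

# Toy 2x2 attention maps (values are hypothetical).
model = np.array([[0.1, 0.3], [0.2, 0.4]])
human = np.array([[0.05, 0.35], [0.25, 0.35]])
rho = attention_rank_correlation(model, human)
```

Averaging this per-image statistic over a dataset such as VQA-HAT gives a single number that is insensitive to the absolute scale of the attention weights, which is why rank correlation is a natural choice for comparing machine and human attention.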
|
By how much do they outperform existing state-of-the-art VQA models?
|
The answers are shown as follows:
* the rank-correlation for the MFH model increases by 36.4% when evaluated on the VQA-HAT dataset and 7.7% when evaluated on VQA-X
|
1707.02377
| false
| null |
We compare against the following document representation baselines: bag-of-words (BoW); Denoising Autoencoders (DAE) BIBREF14 , a representation learned from reconstructing the original document INLINEFORM0 using a corrupted one INLINEFORM1 . SDAs have been shown to be the state-of-the-art for sentiment analysis tasks BIBREF29 . We used Kullback-Leibler divergence as the reconstruction error and an affine encoder. To scale up the algorithm to a large vocabulary, we only take into account the non-zero elements of INLINEFORM2 in the reconstruction error and employed negative sampling for the remaining elements; Word2Vec BIBREF1 +IDF, a representation generated through a weighted average of word vectors learned using Word2Vec; Doc2Vec BIBREF2 ; Skip-thought Vectors BIBREF16 , a generic, distributed sentence encoder that extends the Word2Vec skip-gram model to the sentence level. It has been shown to produce highly generic sentence representations that apply to various natural language processing tasks. We also include RNNLM BIBREF11 , a recurrent neural network based language model, in the comparison. In the semantic relatedness task, we further compare to LSTM-based methods BIBREF18 that have been reported on this dataset.
We also include RNNLM BIBREF11 , a recurrent neural network based language model in the comparison.
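Of these baselines, Word2Vec+IDF is the simplest to make concrete: a document vector is the IDF-weighted average of its word vectors. A minimal sketch (hypothetical names and toy vectors; not the authors' code):

```python
import numpy as np

def idf_weighted_doc_vector(tokens, word_vecs, idf):
    """Document vector as the IDF-weighted average of its word vectors."""
    vecs, weights = [], []
    for tok in tokens:
        if tok in word_vecs:                 # skip out-of-vocabulary tokens
            vecs.append(word_vecs[tok])
            weights.append(idf.get(tok, 1.0))
    if not vecs:
        return None
    return np.average(np.array(vecs), axis=0, weights=weights)

# Hypothetical 2-d word vectors and IDF weights for a toy two-word document.
word_vecs = {"good": np.array([1.0, 0.0]), "movie": np.array([0.0, 1.0])}
idf = {"good": 1.0, "movie": 3.0}
vec = idf_weighted_doc_vector(["good", "movie", "unk"], word_vecs, idf)
```

The IDF weighting down-weights ubiquitous words so that distinctive terms dominate the document representation, which is the intuition behind this baseline.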
|
Which language models do they compare against?
|
The answers are shown as follows:
* RNNLM BIBREF11
|
null | false
| 339
|
In this paper, we have presented a high-level overview of a feature specification language for ML-based TA applications and an approach to enable reuse of feature specifications across semantically related applications. Currently, there is no generic method or approach that can be applied during TA applications' design process to define and extract features for an arbitrary application in an automated or semi-automated manner, primarily because there is no standard way to specify the wide range of features which can be extracted and used. We considered different classes of features, including linguistic, semantic, and statistical, for various levels of analysis, including words, phrases, sentences, paragraphs, documents, and corpus. As a next step, we presented an approach for building a recommendation system for enabling automated reuse of features for new application scenarios, which improves its underlying similarity model based upon user feedback.
To take this work forward, it is essential to integrate it into a ML platform that is being used by a large user base for building TA applications, so as to be able to populate a repository of a statistically significant number of TA applications with details as specified in Section SECREF5 , and thereafter refine the proposed approach so that it eventually enables reuse of features across related applications.
We considered different classes of features including linguistic, semantic, and statistical for various levels of analysis including words, phrases, sentences, paragraphs, documents, and corpus.
|
What different classes of features are considered by authors?
|
The authors considered different classes of features including linguistic, semantic, and statistical for various levels of analysis including words, phrases, sentences, paragraphs, documents, and corpus.
|
null | false
| null |
What are some of the best things to do in Fort Collins, Colorado?
|
Fort Collins has many fun things to do given its proximity to the foothills of the Rocky Mountains, its vibrant downtown area, and the many world-class breweries in and around town. Outdoor highlights include hikes in Lory State Park, Horsetooth Reservoir, and a climb to Horsetooth Rock, which overlooks the city. Be sure to take a walk around Old Town, where you can grab a coffee at Little Bird and shop at the many boutiques. Lastly, be sure to check out New Belgium or Odell Brewing, which are just two of the more than twenty breweries in Fort Collins.
|
|
null | false
| null |
Name different kinds of cheese
|
mozzarella, cheddar, gorgonzola, swiss, bleu, provolone, gouda, pepper jack, parmesan, cottage, asiago, brie, devil's gulch, ricotta, Gruyère, stilton, feta, & havarti
|
|
1801.02243
| true
| null |
Figure FIGREF6 shows the relationship between Tesla stock returns and the stock sentiment score. According to the distribution of the sentiment score, the sentiment on Tesla is slightly skewed towards positive during the testing period. The price increased significantly during the testing period, which reflected the positive sentiment. The predictive power of the sentiment score is more significant when the sentiment is extreme and less so when the sentiment is neutral.
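One simple way to quantify the sentiment-return relationship discussed here is a correlation coefficient. The sketch below computes a Pearson correlation on entirely made-up daily sentiment scores and next-day returns (not the paper's Tesla data):

```python
import math

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

sentiment = [0.8, -0.5, 0.6, 0.9, -0.7]      # hypothetical daily scores in [-1, 1]
returns = [0.02, -0.01, 0.015, 0.03, -0.02]  # hypothetical next-day returns
r = pearson(sentiment, returns)
```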
According to the distribution of the sentiment score, the sentiment on Tesla is slightly skewed towards positive during the testing period.
|
Do the authors give any examples of major events which draw the public's attention and the impact they have on stock price?
|
Yes.
|
1708.08615
| false
| null |
The goal of reaching “human parity” in automatic CTS transcription raises the question of what should be considered human accuracy on this task. We operationalized the question by submitting the chosen test data to the same vendor-based transcription pipeline that is used at Microsoft for production data (for model training and internal evaluation purposes), and then comparing the results to ASR system output under the NIST scoring protocol. Using this methodology, and incorporating state-of-the-art convolutional and recurrent network architectures for both acoustic modeling BIBREF9 , BIBREF10 , BIBREF7 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 and language modeling BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 with extensive use of model combination, we obtained a machine error rate that was very slightly below that of the human transcription process (5.8% versus 5.9% on Switchboard data, and 11.0% versus 11.3% on CallHome English data) BIBREF19 . Since then, Saon et al. have reported even better results, along with a separate transcription experiment that puts the human error rate, on the same test data, at a lower point than measured by us (5.1% for Switchboard, 6.8% for CallHome) BIBREF20 .
We operationalized the question by submitting the chosen test data to the same vendor-based transcription pipeline that is used at Microsoft for production data (for model training and internal evaluation purposes), and then comparing the results to ASR system output under the NIST scoring protocol.
|
what standard speech transcription pipeline was used?
|
The answers are shown as follows:
* pipeline that is used at Microsoft for production data
|
null | false
| null |
What was the first British instrumental to top the USA charts
|
Telstar by The Tornados
|
|
null | false
| null |
Where does the word plumbing originate from?
|
"Plumbing" originates from plumbum, the Latin name for lead (Pb), since most pipes at the time were made from lead.
|
|
null | false
| null |
Do aliens exist?
|
Although there is no definitive evidence for the existence of extraterrestrial life, the immense number of planets that most likely exist in the universe suggests that, although the development of life can be difficult to start, there should be other life forms in the universe.
The discrepancy between the expected existence of life in the universe and the lack of any evidence for it is called the Fermi paradox.
|
|
null | false
| null |
Give me a list of animals you might raise for milk
|
Sheep, Goats, Cows
|
|
null | false
| null |
Who was Loku Nilame Adikaram Ratwatte
|
Loku Nilame Adikaram Ratwatte ( - 1827) (known as Ratwatte Nilame) was a courtier of the Kingdom of Kandy. He was the 2nd Adigar (Adigaram) from 1825 to 1827 during the British rule. He was one of the signatories of the Kandyan Convention which made the Kandyan Kingdom part of the British Empire.
|
|
null | false
| null |
Ramon Magsaysay Award 2022 was given to 4 persons
|
Sotheara Chhim (Cambodia), Bernadette Madrid (Philippines), Tadashi Hattori (Japan) & Gary Bencheghib (Indonesia)
|
|
null | false
| 148
|
Classical QA systems face two main challenges related to question analysis and answer extraction. Several QA approaches were proposed in the literature for the open domain BIBREF16 , BIBREF17 and the medical domain BIBREF18 , BIBREF19 , BIBREF20 . A variety of methods were developed for question analysis, focus (topic) recognition and question type identification BIBREF21 , BIBREF22 , BIBREF23 , BIBREF24 . Similarly, many different approaches tackled document or passage retrieval and answer selection and (re)ranking BIBREF25 , BIBREF26 , BIBREF27 .
An alternative approach consists in finding similar questions or FAQs that are already answered BIBREF28 , BIBREF29 . One of the earliest question answering systems based on finding similar questions and re-using the existing answers was FAQ FINDER BIBREF30 . Another system that complements the existing Q&A services of NetWellness is SimQ BIBREF2 , which allows retrieval of similar web-based consumer health questions. SimQ uses syntactic and semantic features to compute similarity between questions, and UMLS BIBREF31 as a standardized semantic knowledge source. The system achieves 72.2% precision, 78.0% recall and 75.0% F-score on NetWellness questions. However, the method was evaluated only on one question similarity dataset, and the retrieved answers were not evaluated.
The aim of the medical task at TREC 2017 LiveQA was to develop techniques for answering complex questions such as consumer health questions, as well as to identify relevant answer sources that can comply with the sensitivity of medical information retrieval.
The CMU-OAQA system BIBREF32 achieved the best performance of 0.637 average score on the medical task by using an attentional encoder-decoder model for paraphrase identification and answer ranking. The Quora question-similarity dataset was used for training. The PRNA system BIBREF33 achieved the second best performance in the medical task with 0.49 average score using Wikipedia as the first answer source and Yahoo and Google searches as secondary answer sources. Each medical question was decomposed into several subquestions. To extract the answer from the selected text passage, a bi-directional attention model trained on the SQUAD dataset was used.
Deep neural network models have been pushing the limits of performance achieved in QA related tasks using large training datasets. The results obtained by CMU-OAQA and PRNA showed that large open-domain datasets were beneficial for the medical domain. However, the best system (CMU-OAQA) relying on the same training data obtained a score of 1.139 on the LiveQA open-domain task.
While this gap in performance can be explained in part by the discrepancies between the medical test questions and the open-domain questions, it also highlights the need for larger medical datasets to support deep learning approaches in dealing with the linguistic complexity of consumer health questions and the challenge of finding correct and complete answers.
Another technique was used by ECNU-ICA team BIBREF34 based on learning question similarity via two long short-term memory (LSTM) networks applied to obtain the semantic representations of the questions. To construct a collection of similar question pairs, they searched community question answering sites such as Yahoo! and Answers.com. In contrast, the ECNU-ICA system achieved the best performance of 1.895 in the open-domain task but an average score of only 0.402 in the medical task. As the ECNU-ICA approach also relied on a neural network for question matching, this result shows that training attention-based decoder-encoder networks on the Quora dataset generalized better to the medical domain than training LSTMs on similar questions from Yahoo! and Answers.com.
The CMU-LiveMedQA team BIBREF20 designed a specific system for the medical task. Using only the provided training datasets and the assumption that each question contains only one focus, the CMU-LiveMedQA system obtained an average score of 0.353. They used a convolutional neural network (CNN) model to classify a question into a restricted set of 10 question types and crawled "relevant" online web pages to find the answers. However, the results were lower than those achieved by the systems relying on finding similar answered questions. These results support the relevance of similar question matching for the end-to-end QA task as a new way of approaching QA instead of the classical QA approaches based on Question Analysis and Answer Retrieval.
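The similar-question-matching paradigm these results support can be illustrated with a deliberately simple retrieval sketch: given a new consumer health question, return the most similar already-answered question. Bag-of-words cosine similarity stands in for the syntactic/semantic features and neural matchers used by the actual systems, and the FAQ entries below are invented.

```python
from collections import Counter
import math

def cosine(a, b):
    """Cosine similarity between two word-count vectors (Counters)."""
    num = sum(a[w] * b[w] for w in a if w in b)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def most_similar(query, faq):
    """Return the answered FAQ question most similar to `query`."""
    qv = Counter(query.lower().split())
    return max(faq, key=lambda q: cosine(qv, Counter(q.lower().split())))

faq = {
    "what are the side effects of aspirin": "Possible side effects include ...",
    "how is diabetes diagnosed": "Diagnosis is based on ...",
}
match = most_similar("aspirin side effects", faq)
```

A production system would then reuse `faq[match]` as the answer, which is the end-to-end shortcut the passage contrasts with classical question analysis and answer retrieval.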
Classical QA systems face two main challenges related to question analysis and answer extraction.
|
What are the main challenges in Classical QA systems?
|
Question analysis and answer extraction.
|
null | false
| 462
|
Having demonstrated control of generalisation with linear model regularisation (Sec. 4.1), and explained DomainBed performance post-hoc by measuring the complexity of neural models (Sec. 4.2), we next evaluate algorithms for practical complexity tuning. From Fig. we saw that within-domain and cross-domain evaluation have different optimal tuning. If we consider automating the search for a good regularisation parameter, these two final evaluation criteria correspond respectively to hyperparameter selection based on a validation set from the seen source domains, versus a validation set drawn from a held-out source domain. The latter validation criterion corresponds to an unbiased estimator of the expected cross-domain generalisation performance R_E(f), which our theorem bounds.
Setup We performed DG evaluation on DomainBed using ERM with linear models using the same DINO features as Section 4.1. For each held out domain, we performed hyperparameter tuning with either instance-wise cross-validation (where validation folds are drawn from the same source domains used for training) or domain-wise cross-validation (where validation folds are drawn from held-out domains not seen during training).
The results in Table report the average accuracy across held out domains, and the average selected log C regularisation strength (See Supplementary for the full breakdown of results for each held-out domain). We can see that the domain-wise cross-validation objective leads to similar or better accuracy, and similar or smaller C value selection (i.e., stronger regularisation). This outcome is opposite to the observation made by. Given the theoretical support for our domain-wise objective, and our clear empirical outcome when freed of confounding factors such as stochasticity in neural network training, we consider our result to be decisive and attribute the different observation to the inaccuracy and stochasticity of hyperparameter tuning in neural networks.
Table: Comparing model selection criteria using LinearSVC. Accuracy and selected SVM regularisation parameter 'C'. Domain-wise validation outperforms instance-wise and selects stronger regularisation (lower C).
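The two model-selection criteria compared above differ only in how the validation fold is drawn. A schematic sketch of the two splits over a hypothetical three-domain dataset (the data layout is invented, not DomainBed's):

```python
import random

def instance_wise_split(data, frac=0.2, seed=0):
    """Validation fold sampled from the same source domains used for training."""
    rng = random.Random(seed)
    pool = [x for domain in data.values() for x in domain]
    rng.shuffle(pool)
    k = int(len(pool) * frac)
    return pool[k:], pool[:k]          # train, val

def domain_wise_split(data, held_out):
    """Validation fold is an entire source domain unseen during training."""
    train = [x for d, xs in data.items() if d != held_out for x in xs]
    return train, data[held_out]       # train, val

# Hypothetical three-domain dataset; integers stand in for examples.
data = {"photo": list(range(5)),
        "sketch": list(range(5, 10)),
        "cartoon": list(range(10, 15))}
tr, va = domain_wise_split(data, "sketch")
```

Tuning the regularisation strength against the domain-wise fold estimates cross-domain performance directly, which is why it selects stronger regularisation in the experiments above.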
For each held out domain, we performed hyperparameter tuning with either instance-wise cross-validation (where validation folds are drawn from the same source domains used for training) or domain-wise cross-validation (where validation folds are drawn from held-out domains not seen during training).
|
Is it instance-wise vs domain-wise validation?
|
Yes, this corresponds to using in-domain validation data vs. cross-domain data. We have revised the paper to explain this more clearly.
|
null | false
| null |
A lettering guide template is a special type of template used to write uniform characters. It consists of a sheet of plastic or other material with cut-outs of letters, numbers, and other shapes used especially for creating technical drawings. For decades they have been essential for lettering a drawing nameplate so text and other designs could be made quickly and uniformly.
Although they have been superseded by the use of computers, during the greater part of the last century they were used to relatively ease the lettering process in the creation of technical drawings. They were an indispensable tool for architects and technical illustrators in general, for labeling their drawings and plans but also for the description of projects, in which it was good practice to use a lettering template to achieve uniform and well-written text.
A lettering template could also be used by illiterate or semi-literate people to learn to write, or to improve their handwriting. In the course of political history, some politicians, such as Bettino Craxi, have used them to help people with writing difficulties. They distributed cardboard templates with the sequence of characters of their last name, so that it could be easily written during the voting process.
|
per the context given, how would illiterate people be able to vote?
|
They used lettering guide templates made of cardboard with characters of their last name
|
null | false
| null |
Meno (/ˈmiːnoʊ/; Greek: Μένων, Ménōn) is a Socratic dialogue by Plato. Meno begins the dialogue by asking Socrates whether virtue is taught, acquired by practice, or comes by nature. In order to determine whether virtue is teachable or not, Socrates tells Meno that they first need to determine what virtue is. When the characters speak of virtue, or rather arete, they refer to virtue in general, rather than particular virtues, such as justice or temperance. The first part of the work showcases Socratic dialectical style; Meno, unable to adequately define virtue, is reduced to confusion or aporia. Socrates suggests that they seek an adequate definition for virtue together. In response, Meno suggests that it is impossible to seek what one does not know, because one will be unable to determine whether one has found it.
Socrates challenges Meno's argument, often called "Meno's Paradox" or the "Learner's Paradox", by introducing the theory of knowledge as recollection (anamnesis). As presented in the dialogue, the theory proposes that souls are immortal and know all things in a disembodied state; learning in the embodied is actually a process of recollecting that which the soul knew before it came into a body. Socrates demonstrates recollection in action by posing a mathematical puzzle to one of Meno's slaves. Subsequently, Socrates and Meno return to the question of whether virtue is teachable, employing the method of hypothesis. Near the end of the dialogue, Meno poses another famous puzzle, called "The Meno Problem" or "The Value Problem for Knowledge", which questions why knowledge is valued more highly than true belief. In response, Socrates provides a famous and somewhat enigmatic distinction between knowledge and true belief.
|
Extract the Greek terms in the following paragraph, along with a short definition, in bullet points.
|
- arete: virtues in general, i.e. instead of a particular virtue
- aporia: confusion
- anamnesis: recollection of knowledge
|
null | false
| 3
|
In this section, we describe the languages experimented with, datasets used, the network hyper-parameters used in our experiments.
We use CFILT-preorder3 for prereordering English sentences.
|
What do the authors use for prereordering English sentences?
|
CFILT-preorder3
|
null | false
| null |
Whats the difference between a sloop, a cutter, and a ketch?
|
These are all categorizations of different types of sailing boats. A sloop is the most common. A sloop is a monohull sailboat with a single mast supporting a mainsail and a headsail. These are great boats to learn sailing on, as they're simple, easy to handle, and have great wind performance.
A cutter at first look is similar to a sloop in that it also has a single mast positioned near the middle of the boat. However, a cutter uses two headsails, instead of a mainsail and headsail, and generally includes a bowsprit. This allows the boat to have better stability in rough weather due to the headsails providing a lower center of gravity. The complex rigging makes this boat more challenging to sail than the sloop, but provides an excellent option for an experienced sailor.
A ketch has two masts. The main mast in the front stands taller than the mizzen mast behind it. The mizzen mast stands forward of the rudder post. The ketch has smaller sails and shorter masts. While this makes them slower than a sloop, they are very controllable in rough weather which makes them excellent boats for off shore sailing.
|
|
null | false
| null |
Photosynthesis is a process used by plants and other organisms to convert light energy into chemical energy that, through cellular respiration, can later be released to fuel the organism's activities. Some of this chemical energy is stored in carbohydrate molecules, such as sugars and starches, which are synthesized from carbon dioxide and water – hence the name photosynthesis, from the Greek phōs (φῶς), "light", and synthesis (σύνθεσις), "putting together". Most plants, algae, and cyanobacteria perform photosynthesis; such organisms are called photoautotrophs. Photosynthesis is largely responsible for producing and maintaining the oxygen content of the Earth's atmosphere, and supplies most of the energy necessary for life on Earth.
|
What is Photosynthesis
|
Photosynthesis is performed differently by different species but the process always begins when energy from light is absorbed by proteins called reaction centers that contain green chlorophyll (and other colored) pigments/chromophores. In plants, these proteins are held inside organelles called chloroplasts, which are most abundant in leaf cells, while in bacteria they are embedded in the plasma membrane. In these light-dependent reactions, some energy is used to strip electrons from suitable substances, such as water, producing oxygen gas. The hydrogen freed by the splitting of water is used in the creation of two further compounds that serve as short-term stores of energy, enabling its transfer to drive other reactions: these compounds are reduced nicotinamide adenine dinucleotide phosphate (NADPH) and adenosine triphosphate (ATP), the "energy currency" of cells.
In plants, algae and cyanobacteria, sugars are synthesized by a subsequent sequence of light-independent reactions called the Calvin cycle. In the Calvin cycle, atmospheric carbon dioxide is incorporated into already existing organic carbon compounds, such as ribulose bisphosphate (RuBP). Using the ATP and NADPH produced by the light-dependent reactions, the resulting compounds are then reduced and removed to form further carbohydrates, such as glucose. In other bacteria, different mechanisms such as the reverse Krebs cycle are used to achieve the same end.
|
null | false
| 196
|
Word embeddings are fixed-length vector representations for words BIBREF0 , BIBREF1 . In recent years, the morphology of words has been drawing more and more attention BIBREF2 , especially for Chinese, whose writing system is based on logograms.
UTF8gbsn With the gradual exploration of the semantic features of Chinese, scholars have found that not only words and characters are important semantic carriers, but also the stroke features of Chinese characters are crucial for inferring semantics BIBREF3 . Actually, a Chinese word usually consists of several characters, and each character can be further decomposed into a fixed and unambiguous stroke sequence, which is very similar to the construction of English words. In Chinese, a particular sequence of strokes can reflect inherent semantics. As shown in the upper half of Figure FIGREF3 , the Chinese character “驾" (drive) can be decomposed into a sequence of eight strokes, where the last three strokes together correspond to a root character “马" (horse), similar to the root “clar" of the English words “declare" and “clarify".
Moreover, Chinese is a language that originated from Oracle Bone Inscriptions (a kind of hieroglyphics). Its character glyphs have a spatial structure similar to graphs, which can convey abundant semantics BIBREF4 . Additionally, the critical reason why Chinese characters are so rich in morphological information is that they are composed of basic strokes in a 2-D spatial order. However, different spatial configurations of strokes may lead to different semantics. As shown in the lower half of Figure 1, the three Chinese characters “入" (enter), “八" (eight) and “人" (man) share exactly the same stroke sequence, but they have completely different semantics because of their different spatial configurations.
In addition, some biological investigations have confirmed that there are actually two processing channels for Chinese language. Specifically, Chinese readers not only activate the left brain which is a dominant hemisphere in processing alphabetic languages BIBREF5 , BIBREF6 , BIBREF7 , but also activate the areas of the right brain that are responsible for image processing and spatial information at the same time BIBREF8 . Therefore, we argue that the morphological information of characters in Chinese consists of two parts, i.e., the sequential information hidden in root-like strokes order, and the spatial information hidden in graph-like character glyphs. Along this line, we propose a novel Dual-channel Word Embedding (DWE) model for Chinese to realize the joint learning of sequential and spatial information in characters. Finally, we evaluate DWE on two representative tasks, where the experimental results exactly validate the superiority of DWE in capturing the morphological information of Chinese.
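The dual-channel idea can be caricatured in a few lines: one channel encodes the stroke sequence (order-sensitive), the other encodes the 2-D glyph (spatially sensitive), and the word representation concatenates both. The encoders below are trivial placeholders, not the paper's actual DWE model; they only show why characters sharing a stroke sequence but not a glyph still get distinct representations.

```python
def stroke_channel(strokes):
    """Toy sequential encoding: an order-sensitive rolling hash of strokes."""
    h = 0
    for s in strokes:
        h = (h * 31 + hash(s)) % 10_000
    return [h / 10_000]

def glyph_channel(glyph_grid):
    """Toy spatial encoding: fraction of inked cells in a 2-D glyph bitmap."""
    cells = [c for row in glyph_grid for c in row]
    return [sum(cells) / len(cells)]

def dwe(strokes, glyph_grid):
    """Concatenate the sequential and spatial channels."""
    return stroke_channel(strokes) + glyph_channel(glyph_grid)
```

Two characters with identical strokes but different glyph bitmaps share the first component and differ in the second, mirroring the “入"/“人" example above.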
Along this line, we propose a novel Dual-channel Word Embedding (DWE) model for Chinese to realize the joint learning of sequential and spatial information in characters.
|
Why do the authors propose a novel Dual-channel Word Embedding (DWE) model?
|
To realize the joint learning of sequential and spatial information in characters.
|
null | false
| null |
Which part of the United States is known as the "four corners" area?
|
The term four corners applies to the area where Colorado, New Mexico, Utah, and Arizona come together.
|
|
null | false
| 34
|
Single-relation factoid questions are the most common form of questions found in search query logs and community question answering websites BIBREF1 , BIBREF2 . A knowledge-base (KB) such as Freebase, DBpedia, or Wikidata can help answer such questions after users reformulate them as queries. For instance, the question Where was Barack Obama born? can be answered by issuing the following KB query: $\lambda(x).place\_of\_birth(Barack\_Obama, x)$
However, automatically mapping a natural language question such as Where was Barack Obama born? to its corresponding KB query remains a challenging task.
There are three key issues that make learning this mapping non-trivial. First, there are many paraphrases of the same question. Second, many of the KB entries are unseen during training time; however, we still need to correctly predict them at test time. Third, a KB such as Freebase typically contains millions of entities and thousands of predicates, making it difficult for a system to predict these entities at scale BIBREF1 , BIBREF3 , BIBREF0 . In this paper, we address all three of these issues with a character-level encoder-decoder framework that significantly improves performance over state-of-the-art word-level neural models, while also providing a much more compact model that can be learned from less data.
First, we use a long short-term memory (LSTM) BIBREF4 encoder to embed the question. Second, to make our model robust to unseen KB entries, we extract embeddings for questions, predicates and entities purely from their character-level representations. Character-level modeling has been previously shown to generalize well to new words not seen during training BIBREF5 , BIBREF6 , which makes it ideal for this task. Third, to scale our model to handle the millions of entities and thousands of predicates in the KB, instead of using a large output layer in the decoder to directly predict the entity and predicate, we use a general interaction function between the question embeddings and KB embeddings that measures their semantic relevance to determine the output. The combined use of character-level modeling and a semantic relevance function allows us to successfully produce likelihood scores for the KB entries that are not present in our vocabulary, a challenging task for standard encoder-decoder frameworks.
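The interaction-function idea described above can be sketched as scoring every KB candidate against the question embedding, rather than predicting over a huge output layer. Here a plain dot product stands in for the paper's learned semantic relevance function, and the embeddings are toy values, not outputs of the actual character-level LSTM encoders.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def rank_candidates(question_emb, kb_embs):
    """Return KB entries sorted by relevance to the question embedding."""
    return sorted(kb_embs, key=lambda e: dot(question_emb, kb_embs[e]),
                  reverse=True)

q = [0.9, 0.1, 0.0]  # toy encoding of "Where was Barack Obama born?"
kb = {
    "place_of_birth": [1.0, 0.0, 0.0],
    "date_of_birth": [0.5, 0.5, 0.0],
    "spouse": [0.0, 0.0, 1.0],
}
ranked = rank_candidates(q, kb)
```

Because scoring only needs an embedding for each candidate, unseen KB entries can still be ranked as long as they can be encoded, which is the property the passage highlights.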
Our novel, character-level encoder-decoder model is compact, requires significantly less data to train than previous work, and is able to generalize well to unseen entities at test time. In particular, without use of ensembles, we achieve 70.9% accuracy in the Freebase2M setting and 70.3% accuracy in the Freebase5M setting on the SimpleQuestions dataset, outperforming the previous state-of-the-art results of 62.7% and 63.9% BIBREF0 by 8.2% and 6.4% respectively. Moreover, we only use the training questions provided in SimpleQuestions to train our model, which cover about 24% of words in entity aliases on the test set. This demonstrates the robustness of the character-level model to unseen entities. In contrast, data augmentation is usually necessary to provide more coverage for unseen entities and predicates, as done in previous work BIBREF0 , BIBREF1 .
Our novel, character-level encoder-decoder model is compact, requires significantly less data to train than previous work, and is able to generalize well to unseen entities in test time.
|
What is the character-level encoder-decoder model?
|
It is a novel and compact model, requires significantly less data to train than previous work, and is able to generalize well to unseen entities in test time.
|
null | false
| 144
|
Despite the similarity of the Open IE paradigm, not every extracted tuple is a suitable proposition for a concept map. To reduce the effort in the subsequent steps, we therefore want to filter out unsuitable ones. A tuple is suitable if it (1) is a correct extraction, (2) is meaningful without any context and (3) has arguments that represent proper concepts. We created a guideline explaining when to label a tuple as suitable for a concept map and performed a small annotation study. Three annotators independently labeled 500 randomly sampled tuples. The agreement was 82% ( INLINEFORM0 ). We found tuples to be unsuitable mostly because they had unresolvable pronouns, conflicting with (2), or arguments that were full clauses or propositions, conflicting with (3), while (1) was mostly taken care of by the confidence filtering in § SECREF21 .
Due to the high number of tuples we decided to automate the filtering step. We trained a linear SVM on the majority voted annotations. As features, we used the extraction confidence, length of arguments and relations as well as part-of-speech tags, among others. To ensure that the automatic classification does not remove suitable propositions, we tuned the classifier to avoid false negatives. In particular, we introduced class weights, improving precision on the negative class at the cost of a higher fraction of positive classifications. Additionally, we manually verified a certain number of the most uncertain negative classifications to further improve performance. When 20% of the classifications are manually verified and corrected, we found that our model trained on 350 labeled instances achieves 93% precision on negative classifications on the unseen 150 instances. We found this to be a reasonable trade-off of automation and data quality and applied the model to the full dataset.
The classifier filtered out 43% of the propositions, leaving 1622 per topic. We manually examined the 17k least confident negative classifications and corrected 955 of them. We also corrected positive classifications for certain types of tuples for which we knew the classifier to be imprecise. Finally, each topic was left with an average of 1554 propositions (47k in total).
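The filtering policy described in this section, a classifier biased against false negatives plus manual review of its most uncertain negative calls, can be mimicked with a simple threshold rule. The scores and cut-offs below are invented, not the trained SVM's; only the decision logic is illustrated.

```python
def filter_propositions(scored, neg_threshold=0.2, review_frac=0.4):
    """Keep tuples scored at or above the threshold; flag the most uncertain
    negatives (highest-scoring drops) for manual verification."""
    scores = dict(scored)
    kept = [t for t, s in scored if s >= neg_threshold]
    dropped = sorted((t for t, s in scored if s < neg_threshold),
                     key=scores.get, reverse=True)
    n_review = int(len(dropped) * review_frac)
    return kept, dropped[:n_review]

# Hypothetical classifier scores for five extracted tuples.
scored = [("t1", 0.9), ("t2", 0.05), ("t3", 0.15), ("t4", 0.5), ("t5", 0.01)]
kept, to_verify = filter_propositions(scored)
```

Lowering `neg_threshold` plays the role of the class weights: negatives are only predicted when the model is confident, trading a larger positive class for precision on the drops.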
A tuple is suitable if it (1) is a correct extraction, (2) is meaningful without any context and (3) has arguments that represent proper concepts.
|
What kind of tuple is a suitable one?
|
It must be a correct extraction that is meaningful and independent and has arguments that represent proper concepts.
|
null | false
| null |
What links the names Botvinik, Tal, Karpov, Fischer
|
Chess World Champs
|
|
null | false
| null |
What are some good body weight exercises for the upper body?
|
Some good body weight exercises for the upper body include parallel bar dips, pull ups, one arm planks, and push ups.
|
|
1909.12140
| false
| null |
An extrinsic evaluation was carried out on the task of Open IE BIBREF7. It revealed that when applying DisSim as a preprocessing step, the performance of state-of-the-art Open IE systems can be improved by up to 346% in precision and 52% in recall, i.e. leading to a lower information loss and a higher accuracy of the extracted relations. For details, the interested reader may refer to niklaus-etal-2019-transforming.
An extrinsic evaluation was carried out on the task of Open IE BIBREF7.
|
Is the semantic hierarchy representation used for any task?
|
Yes, Open IE
|
null | false
| null |
Who won the Golden Shoe Award for the season 2018-2019?
|
Lionel Messi won this award for the 6th time.
|
|
1911.01680
| false
| null |
Prior work has shown that contextual information can be useful for SF. It is utilized either in the word-level representation (i.e., via contextualized embeddings, e.g., BERT BIBREF0) or in the model computation graph (e.g., concatenating the context feature to the word feature BIBREF1). However, such methods fail to capture the explicit dependence between the context of the word and its label. Moreover, such limited use of contextual information (i.e., concatenation of the feature vector and context vector) cannot model the interaction between the word representation and its context. In order to alleviate these issues, in this work, we propose a novel model to explicitly increase the predictability of the word label using its context and to increase the interactivity between word representations and their context. More specifically, in our model we use the context of the word to predict its label and by doing so our model learns label-aware context for each word in the sentence. In order to improve the interactivity between the word representation and its context, we increase the mutual information between the word representations and its context. In addition to these contributions, we also propose an auxiliary task to predict which labels are expressed in a given sentence. Our model is trained in a multi-tasking framework. Our experiments on an SF dataset for identifying semantic concepts from natural language requests to edit an image show the superiority of our model compared to previous baselines. Our model achieves state-of-the-art results on the benchmark dataset, improving the F1 score by almost 2%, which corresponds to a 12.3% error rate reduction.
More specifically, in our model we use the context of the word to predict its label and by doing so our model learns label-aware context for each word in the sentence. In order to improve the interactivity between the word representation and its context, we increase the mutual information between the word representations and its context.
|
How does their model utilize contextual information for each word in the given sentence in a multi-task setting?
|
The answers are shown as follows:
* we use the context of the word to predict its label and by doing so our model learns label-aware context for each word in the sentence
|
null | false
| null |
Why is Captain America Steve Rogers the leader of the Avengers?
|
Steve Rogers, also known as Captain America, is the leader of the Avengers because he is the first Avenger. He was the first and most successful super soldier created during World War 2. He is a master of warfare. He has trained in every technique of martial arts. He has a strong sense of right and wrong. He is courageous, committed and honorable. His experience leading the Howling Commandos in World War 2 gives him the most knowledge to lead the Avengers. He protects the team and understands how to make the difficult decisions.
|
|
null | false
| null |
what does it mean to have brunette hair?
|
A person with brunette hair, or a brunette, is a person with brown or black hair. People's hair typically falls into the categories of blonde, brunette, and red. Brunette is sometimes misunderstood to mean only brown hair, but it covers all darker hair colors, including black.
|
|
null | false
| null |
Which episode of Game of Thrones does Dany go to the House of the Undying?
|
Daenerys Targaryen visits the House of the Undying in season two, episode ten ("Valar Morghulis")
|
|
null | false
| 210
|
BioASQ is a semantic indexing, question answering (QA) and information extraction challenge BIBREF0 . We participated in Task B of the challenge which is concerned with biomedical QA. More specifically, our system participated in Task B, Phase B: Given a question and gold-standard snippets (i.e., pieces of text that contain the answer(s) to the question), the system is asked to return a list of answer candidates.
The fifth BioASQ challenge is taking place at the time of writing. Five batches of 100 questions each were released every two weeks. Participating systems have 24 hours to submit their results. At the time of writing, all batches had been released.
The questions are categorized into different question types: factoid, list, summary and yes/no. Our work concentrates on answering factoid and list questions. For factoid questions, the system's responses are interpreted as a ranked list of answer candidates. They are evaluated using mean reciprocal rank (MRR). For list questions, the system's responses are interpreted as a set of answers to the list question. Precision and recall are computed by comparing the given answers to the gold-standard answers. F1 score, i.e., the harmonic mean of precision and recall, is used as the official evaluation measure.
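Both official measures are easy to sketch; a minimal version (function names and the set-based treatment of gold answers are ours):

```python
def mean_reciprocal_rank(ranked_answers, gold):
    """MRR over factoid questions: ranked_answers is a list of ranked
    candidate lists, gold is a parallel list of sets of acceptable answers."""
    total = 0.0
    for candidates, answers in zip(ranked_answers, gold):
        for rank, cand in enumerate(candidates, start=1):
            if cand in answers:
                total += 1.0 / rank
                break
    return total / len(ranked_answers)

def list_f1(predicted, gold):
    """F1 for a single list question, treating responses as sets."""
    predicted, gold = set(predicted), set(gold)
    if not predicted or not gold:
        return 0.0
    tp = len(predicted & gold)
    if tp == 0:
        return 0.0
    precision = tp / len(predicted)
    recall = tp / len(gold)
    return 2 * precision * recall / (precision + recall)
```

For example, a correct answer at rank 2 contributes a reciprocal rank of 0.5 to the MRR average.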
Most existing biomedical QA systems employ a traditional QA pipeline, similar in structure to the baseline system by weissenborn2013answering. They consist of several discrete steps, e.g., named-entity recognition, question classification, and candidate answer scoring. These systems require a large amount of resources and feature engineering that is specific to the biomedical domain. For example, OAQA BIBREF1 , which has been very successful in last year's challenge, uses a biomedical parser, entity tagger and a thesaurus to retrieve synonyms.
Our system, on the other hand, is based on a neural network QA architecture that is trained end-to-end on the target task. We build upon FastQA BIBREF2 , an extractive factoid QA system which achieves state-of-the-art results on QA benchmarks that provide large amounts of training data. For example, SQuAD BIBREF3 provides a dataset of $\approx 100,000$ questions on Wikipedia articles. Our approach is to train FastQA (with some extensions) on the SQuAD dataset and then fine-tune the model parameters on the BioASQ training set.
Note that by using an extractive QA network as our central component, we restrict our system's responses to substrings in the provided snippets. This also implies that the network will not be able to answer yes/no questions. We do, however, generalize the FastQA output layer in order to be able to answer list questions in addition to factoid questions.
Our approach is to train FastQA (with some extensions) on the SQuAD dataset and then fine-tune the model parameters on the BioASQ training set.
|
What dataset do they use for training FastQA?
|
SQuAD dataset.
|
null | false
| null |
Which state is the city of Springfield from in the Simpsons?
|
The popular and long-running show The Simpsons has intentionally left out which state its home town of Springfield is in, so that everyone can relate it to their own Springfield. There have been several references to possible states to keep fans guessing, and this has become part of the playful nature of the show. However, in 2012, the show's creator revealed that it is based on Springfield, Oregon.
|
|
null | false
| null |
what is considered the birthplace of mountain biking?
|
Fairfax, California is considered the birthplace of mountain biking. Local high school students who grew up in the area would take old single-speed bikes with coaster brakes up local fire roads and then ride them down the hills as fast as they could. Eventually they started adding gears to the bikes so that they could pedal them up the hill as well. The most famous trail was named Repack, because after the races riders would often have to re-pack the coaster brakes with grease, as the grease would vaporize from the heat of the hard braking as they raced down the hill. Modern bike brands like Fisher, Ritchey, Klein, Breezer, and Specialized all had their start in the Marin County foothills north of San Francisco, in the small town of Fairfax.
|
|
null | false
| null |
What is castling in chess?
|
Castling is a unique move that can be used to protect the king and develop a rook at the same time. It is the only move in chess in which two pieces move at once. During castling, the king moves two squares toward its right or left and the rook crosses over the king to the square next to it. Castling can be done only when the following conditions are satisfied. First, there must not be any other pieces between the king and the rook. Second, neither the king nor the rook may have moved before. Finally, the king may not castle while in check, nor may it pass through or land on a square attacked by an enemy piece.
|
|
1911.05960
| false
| null |
In the sentiment classification task, we tried our model on the following public datasets.
MR Movie reviews with one sentence each. Each review is classified into positive or negative BIBREF18.
IMDB Movie reviews from IMDB website, where each movie review is labeled with binary classes, either positive or negative BIBREF19. Note that each movie review may contain several sentences.
SUBJ$^1$ Movie reviews labeled as subjective or objective BIBREF20.
We also tested our CRU model in the cloze-style reading comprehension task. We carried out experiments on the public datasets: CBT NE/CN BIBREF25. The CRU model used in these experiments is the deep-enhanced type with a convolutional filter length of 3. In the re-ranking step, we also utilized three features: Global LM, Local LM, and Word-class LM, as proposed by BIBREF10; all LMs are 8-gram models trained with the SRILM toolkit BIBREF27. For other settings, such as hyperparameters, initializations, etc., we closely follow the experimental setup of BIBREF10 to make the experiments more comparable.
In the sentiment classification task, we tried our model on the following public datasets.
MR Movie reviews with one sentence each.
IMDB Movie reviews from IMDB website, where each movie review is labeled with binary classes, either positive or negative BIBREF19.
SUBJ$^1$ Movie reviews labeled as subjective or objective BIBREF20.
We also tested our CRU model in the cloze-style reading comprehension task. We carried out experiments on the public datasets: CBT NE/CN BIBREF25.
|
What datasets are used for testing sentiment classification and reading comprehension?
|
The answers are shown as follows:
* CBT NE/CN
* MR Movie reviews
* IMDB Movie reviews
* SUBJ
|
null | false
| null |
What are a few different ways to spend a free weekend
|
- Write LLM Prompts
- Go hiking
- Play Video Games
- Watch Netflix
- Cook Molecular Gastronomy dishes
|
|
null | false
| 24
|
Table TABREF6 shows the results of different models trained on either Chinese or English and tested on Chinese. In row (f), multi-BERT is fine-tuned on English but tested on Chinese, and achieves competitive performance compared with QANet trained on Chinese. We also find that multi-BERT trained on English has relatively lower EM than models with comparable F1 scores. This shows that the model learned in the zero-shot setting can roughly identify the answer spans in context but is less accurate. In row (c), we fine-tuned a BERT model pre-trained on an English monolingual corpus (English BERT) on Chinese RC training data directly, by appending fastText-initialized Chinese word embeddings to the original word embeddings of English BERT. Its F1 score is even lower than that of zero-shot transferring multi-BERT (rows (c) vs. (e)). The result implies multi-BERT does acquire better cross-lingual capability through pre-training on a multilingual corpus. Table TABREF8 shows the results of multi-BERT fine-tuned on different languages and then tested on English, Chinese and Korean. The top half of the table shows the results for training data without translation. It is not surprising that the best results are achieved when the training and testing sets are in the same language, and multi-BERT shows transfer capability when the training and testing sets are in different languages, especially between Chinese and Korean.
In the lower half of Table TABREF8, the results are obtained with the translated training data. First, we found that when testing on English and Chinese, translation always degrades performance (En vs. En-XX, Zh vs. Zh-XX). Even though we translate the training data into the same language as the testing data, using the untranslated data still yields better results. For example, when testing on English, the F1 score of the model trained on Chinese (Zh) is 53.8, while the F1 score is only 44.1 for the model trained on Zh-En. This shows that translation degrades the quality of the data. There are some exceptions when testing on Korean. Translating the English training data into Chinese, Japanese or Korean still improves the performance on Korean. We also found that when translated into the same language, the English training data is always better than the Chinese data (En-XX vs. Zh-XX), with only one exception (En-Fr vs. Zh-Fr when testing on KorQuAD). This may be because we have less Chinese training data than English. These results show that the quality and the size of the dataset are much more important than whether the training and testing are in the same language or not.
In the lower half of Table 2, the results are obtained with the translated training data. First, we found that when testing on English and Chinese, translation always degrades performance (En vs. En-XX, Zh vs. Zh-XX). Even though we translate the training data into the same language as the testing data, using the untranslated data still yields better results. For example, when testing on English, the F1 score of the model trained on Chinese (Zh) is 53.8, while the F1 score is only 44.1 for the model trained on Zh-En. This shows that translation degrades the quality of the data. There are some exceptions when testing on Korean. Translating the English training data into Chinese, Japanese or Korean still improves the performance on Korean. We also found that when translated into the same language, the English training data is always better than the Chinese data (En-XX vs. Zh-XX), with only one exception (En-Fr vs. Zh-Fr when testing on KorQuAD). This may be because we have less Chinese training data than English. These results show that the quality and the size of the dataset are much more important than whether the training and testing are in the same language or not.
|
What result do they get when doing experiments by using the translated training data?
|
The quality and the size of the dataset are much more important than whether the training and testing are in the same language or not.
|
null | false
| null |
How can I get better performance when querying a delta table?
|
To get better performance when querying a delta table, there are many things you can do.
First, ensure you are using the Photon engine. The Photon engine is designed to handle high-concurrency and low-latency queries. If you are using DBSQL, Photon is included free of charge and is turned on by default.
Second, you can use different indexing techniques when writing a Delta table for better read performance. Z-ordering and bloom filters are two effective techniques when used correctly.
Third, ensure your SQL Warehouse or compute resource is appropriately sized. If you are trying to query a very large dataset from a very small warehouse, you will likely get slow query results.
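As a sketch of the indexing techniques mentioned in the second tip, the statements below show the Delta Lake SQL you would submit (e.g., via `spark.sql(...)` on a Databricks cluster). The table and column names are hypothetical; Z-ordering improves data skipping for columns you commonly filter on, while a Bloom filter index helps needle-in-a-haystack lookups on high-cardinality columns.

```python
# Hypothetical table and columns, shown only to illustrate the SQL shape.
table = "sales.events"

# Z-order clusters data files by the given columns for better data skipping.
optimize_sql = f"OPTIMIZE {table} ZORDER BY (event_date, customer_id)"

# Bloom filter index on a high-cardinality column (Databricks syntax).
bloom_sql = (
    f"CREATE BLOOMFILTER INDEX ON TABLE {table} "
    "FOR COLUMNS (order_uuid OPTIONS (fpp = 0.1, numItems = 50000000))"
)
```

On a live cluster, you would run each statement with `spark.sql(optimize_sql)` and `spark.sql(bloom_sql)`.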
|
|
1810.01570
| false
| null |
The two main data sets that we will use to evaluate our architecture are the 2014 i2b2 de-identification challenge data set BIBREF2 and the nursing notes corpus BIBREF3 .
The two main data sets that we will use to evaluate our architecture are the 2014 i2b2 de-identification challenge data set BIBREF2 and the nursing notes corpus BIBREF3 .
|
Which two datasets is the system tested on?
|
The answers are shown as follows:
* 2014 i2b2 de-identification challenge data set BIBREF2
* nursing notes corpus BIBREF3
|
null | false
| null |
Why do people like to buy Apple products?
|
Many people across the planet love to buy Apple products and are very loyal to the brand. A core reason is the simple, high-quality product experience that "just works". Apple offers an array of products, from laptops to smartphones to speakers to headphones. All Apple products integrate simply and seamlessly with the rest of the product lineup. This gives Apple a very big network-effects advantage that generates future sales, expanded offerings and a very large user base. Another reason people continue to buy and upgrade Apple products is the smooth migration/upgrade path and the relatively high switching cost of moving to a different platform.
|
|
null | false
| null |
What is gortext?
|
Gore-Tex (often misspelled "Gortex") is a waterproof fabric that breathes well, which makes it effective clothing for vigorous activities in cold and wet conditions.
|
|
null | false
| null |
Which of the following items are types of books: novels, biographies, historical fiction, reference texts, graphic novels, newspapers, magazines, flyers, pamphlets, pocket dictionaries.
|
novels, biographies, historical fiction, reference texts, graphic novels, pocket dictionaries
|
|
null | false
| null |
What should I do to maximize my chances of getting into a top business school?
|
There are many controllable factors that can help improve your chances of attending a top business school such as Harvard, Stanford, or Wharton. Four of the most important include:
- Track record of impactful work experience
- Strong letters of recommendation from your supervisors
- Excellent GMAT score and, to a lesser extent, GPA
- Compelling essays
Work experience is often cited as the most important factor for business school admissions. Candidates should look for jobs that involve building best-in-class business skills (such as working at Goldman Sachs or McKinsey) or involve leading large groups of people to accomplish important and challenging projects. One piece of advice is to look at the resumes of successful applicants and see what types of roles they had pre-MBA.
In addition to having excellent work experience on your resume, it is equally important to have a track record of success at those jobs. To evaluate track record of strong work performance, business schools require 2-3 letters of recommendations. It is critical for these recommendations to be highly positive, and any lack of enthusiasm can be an instant red flag for admissions officers.
In terms of quantitative factors, Harvard, Stanford and Wharton have high average GMAT scores. Harvard's median GMAT score, for example, was 730 out of 800 for the class of 2024. Be sure to invest time in preparing for the GMAT, as candidates can significantly increase scores through consistent and dedicated practice.
Additionally, the essays candidates submit are very important. The goal of a business school admissions officer is to find candidates who are likely to reach senior executive leadership or other impactful roles in society. Your essay would ideally reflect a) an aspirational but attainable goal to reach a high-impact position, and b) a clear path to get from where you currently are to your long-term goals, including a logical short-term goal immediately post-business school.
Some candidates opt to hire an MBA admissions consultant, which typically costs between $2000-3000 per application. Consider whether an MBA admissions consultant would be valuable for your process, and if so, be sure to evaluate several by having free upfront consultations.
|
|
null | false
| null |
In compiler design, static single assignment form (often abbreviated as SSA form or simply SSA) is a property of an intermediate representation (IR) that requires each variable to be assigned exactly once and defined before it is used. Existing variables in the original IR are split into versions, new variables typically indicated by the original name with a subscript in textbooks, so that every definition gets its own version. In SSA form, use-def chains are explicit and each contains a single element.
|
What is SSA in compiler design?
|
SSA, or static single assignment form, is a property of an intermediate representation (IR) that requires each variable to be assigned exactly once. Additionally, each variable in SSA must be defined before it is used.
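For straight-line code (no branches, hence no phi functions), SSA renaming can be sketched as a simple versioning pass. The toy IR below is hypothetical: each instruction is a pair of (target variable, list of variables used), and each definition gets a fresh subscripted version while uses refer to the latest one.

```python
def to_ssa(instructions):
    """Rename a straight-line instruction list into SSA form.
    instructions: list of (target, used_vars) pairs."""
    version = {}  # variable -> current version number
    out = []
    for target, used_vars in instructions:
        # Rewrite uses with the current versions (use-def chains become explicit).
        renamed_uses = [f"{v}{version[v]}" for v in used_vars]
        # Every definition gets its own fresh version.
        version[target] = version.get(target, 0) + 1
        out.append((f"{target}{version[target]}", renamed_uses))
    return out

# x = ...; y = f(x); x = g(x, y)   becomes   x1 = ...; y1 = f(x1); x2 = g(x1, y1)
ssa = to_ssa([("x", []), ("y", ["x"]), ("x", ["x", "y"])])
```

Once control flow joins are involved, a real SSA construction additionally inserts phi functions at merge points to select among the reaching versions.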
|
null | false
| null |
Tell me whether these schools are Ivy League or not: Princeton, Vanderbilt, Rice, Stanford, Dartmouth, Brown
|
Princeton: Ivy League
Vanderbilt: Not Ivy League
Rice: Not Ivy League
Stanford: Not Ivy League
Dartmouth: Ivy League
Brown: Ivy League
|
|
null | false
| 94
|
The data for the project was acquired from Airdna, a data processing service that collaborates with Airbnb to produce high-accuracy data summaries for listings in geographic regions of the United States. For the sake of simplicity, we focus our analysis on Airbnb listings from Manhattan, NY, during the time period of January 1, 2016, to January 1, 2017. The data provided to us contained information for roughly 40,000 Manhattan listings that were posted on Airbnb during this defined time period. For each listing, we were given information on the amenities of the listing (number of bathrooms, number of bedrooms …), the listing's zip code, the host's description of the listing, the price of the listing, and the occupancy rate of the listing. Airbnb defines a home's occupancy rate as the percentage of time that a listing is occupied over the time period that the listing is available. This gives us a reasonable metric for defining popular versus less popular listings.
The data for the project was acquired from Airdna, a data processing service that collaborates with Airbnb to produce high-accuracy data summaries for listings in geographic regions of the United States. For the sake of simplicity, we focus our analysis on Airbnb listings from Manhattan, NY, during the time period of January 1, 2016, to January 1, 2017. The data provided to us contained information for roughly 40,000 Manhattan listings that were posted on Airbnb during this defined time period.
|
How many Manhattan listings did the information they use come from?
|
Roughly 40,000.
|
null | false
| 100
|
We identified 5 common models in previous work primarily intended for learned classifiers rather than hand-crafted rules. We adapt these models to a multi-label hierarchical classification task by training a series of one-vs-all binary classifiers BIBREF34 , one for each label in the taxonomy. With the exception of the CNN and BERT models, following previous work BIBREF19 , BIBREF3 , BIBREF8 we make use of an SVM classifier using the LIBSvM framework BIBREF35 with a linear kernel. Models are trained and evaluated from coarse to fine levels of taxonomic specificity. At each level of taxonomic evaluation, a set of non-overlapping confidence scores for each binary classifier are generated and sorted to produce a list of ranked label predictions. We evaluate these ranks using Mean Average Precision BIBREF36 . ARC questions are evaluated using the standard 3,370 questions for training, 869 for development, and 3,548 for testing.
N-grams, POS, Hierarchical features: A baseline bag-of-words model incorporating both tagged and untagged unigrams and bigams. We also implement the hierarchical classification feature of Li and Roth BIBREF6 , where for a given question, the output of the classifier at coarser levels of granularity serves as input to the classifier at the current level of granularity.
Dependencies: Bigrams of Stanford dependencies BIBREF37 . For each word, we create one unlabeled bigram for each outgoing link from that word to it's dependency BIBREF20 , BIBREF3 .
Question Expansion with Hypernyms: We perform hypernym expansion BIBREF22 , BIBREF19 , BIBREF3 by including WordNet hypernyms BIBREF38 for the root dependency word, and words on it's direct outgoing links. WordNet sense is identified using Lesk word-sense disambiguation BIBREF39 , using question text for context. We implement the heuristic of Van-tu et al. BIBREF24 , where more distant hypernyms receive less weight.
Essential Terms: Though not previously reported for QC, we make use of unigrams of keywords extracted using the Science Exam Essential Term Extractor of Khashabi et al. BIBREF26 . For each keyword, we create one binary unigram feature.
CNN: Kim BIBREF28 demonstrated near state-of-the-art performance on a number of sentence classification tasks (including TREC question classification) by using pre-trained word embeddings BIBREF40 as feature extractors in a CNN model. Lei et al. BIBREF29 showed that 10 CNN variants perform within +/-2% of Kim's BIBREF28 model on TREC QC. We report performance of our best CNN model based on the MP-CNN architecture of Rao et al. BIBREF41 , which works to establish the similarity between question text and the definition text of the question classes. We adapt the MP-CNN model, which uses a “Siamese” structure BIBREF33 , to create separate representations for both the question and the question class. The model then makes use of a triple ranking loss function to minimize the distance between the representations of questions and the correct class while simultaneously maximising the distance between questions and incorrect classes. We optimize the network using the method of Tu BIBREF42 .
BERT-QC (This work): We make use of BERT BIBREF43 , a language model using bidirectional encoder representations from transformers, in a sentence-classification configuration. As the original settings of BERT do not support multi-label classification scenarios, and training a series of 406 binary classifiers would be computationally expensive, we use the duplication method of Tsoumakas et al. BIBREF34 where we enumerate multi-label questions as multiple single-label instances during training by duplicating question text, and assigning each instance one of the multiple labels. Evaluation follows the standard procedure where we generate a list of ranked class predictions based on class probabilities, and use this to calculate Mean Average Precision (MAP) and Precision@1 (P@1). As shown in Table TABREF7 , this BERT-QC model achieves our best question classification performance, significantly exceeding baseline performance on ARC by 0.12 MAP and 13.5% P@1.
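The duplication method used by BERT-QC can be sketched in a few lines: each multi-label question is enumerated as several single-label training instances. The question texts and class labels below are made up for illustration.

```python
def duplicate_multilabel(examples):
    """Enumerate each (text, [labels]) example as single-label instances,
    duplicating the question text once per label (Tsoumakas et al.'s method)."""
    return [(text, label) for text, labels in examples for label in labels]

# Hypothetical training examples with taxonomy labels.
train = [
    ("What unit measures force?", ["MatterEnergy", "Measurement"]),
    ("Which planet is largest?", ["AstronomyPlanets"]),
]
expanded = duplicate_multilabel(train)
```

This keeps training as ordinary single-label sentence classification, avoiding the cost of hundreds of separate binary classifiers; at evaluation time, the class probabilities are still sorted into a ranked list for MAP and P@1.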
At each level of taxonomic evaluation, a set of non-overlapping confidence scores for each binary classifier are generated and sorted to produce a list of ranked label predictions. We evaluate these ranks using Mean Average Precision
|
How does the author train and evaluate the model?
|
At each level of taxonomic evaluation, a set of non-overlapping confidence scores for each binary classifier are generated and sorted to produce a list of ranked label predictions. They evaluate these ranks using Mean Average Precision
|
null | false
| null |
After the Lakers lost the 2004 NBA Finals, O'Neal was traded and Bryant became the cornerstone of the franchise. He led the NBA in scoring in the 2005–06 and 2006–07 seasons and was named league MVP in 2008. On January 22, 2006, he scored a career-high 81 points; the second most points scored in a single NBA game, behind Wilt Chamberlain's 100-point game. Bryant led the team to consecutive championships in 2009 and 2010, being named NBA Finals MVP on both occasions. He continued to be among the premier players in the league through the 2012–13 season, when he suffered a torn achilles tendon at age 34. His next two seasons were cut short by injuries to his knee and shoulder, respectively. Citing physical decline, Bryant retired after the 2015–16 season. In 2017, the Lakers retired both his Nos. 8 and 24, making him the only player in NBA history to have multiple numbers retired by the same franchise.
|
Which of the achievements listed in the below passage would be considered the best example of an "outlier" for a single NBA game?
|
Kobe Bryant scoring 81 points in a single NBA game is an outlier. For context, Kobe averaged 25 points per game for his NBA career. 81 points in one game is an outlier for any NBA player in history.
|
null | false
| null |
How should I pick a name for my baby?
|
Only you can decide that, but there are some questions you may want to ask yourself. Do you like inheriting family names, or do you want a novel name? Is the name easy to pronounce? Will the name remind people of someone famous? What nicknames will the name lead to? Does the name have a special origin or meaning to you? If your family speaks multiple languages, will the name be replicable in all of the languages? When you see your baby, does the name feel right to you?
|
|
null | false
| null |
How can I decide what bike I should get?
|
When it comes to bicycles for adults, there are different kinds available in the market. Select a bike based on how you'd like to use it. Bikes can be categorized as follows:
1. Road bikes - These bikes have super thin tires and are often used for biking on roads. These help you attain high speeds relatively quickly.
2. Mountain bikes - These bikes are made to sustain the rough terrain that you could come across while biking. The tires are much thicker and made to protect you and give you the support you need.
3. Hybrid/Commuter bikes - These bikes fall into the middle of the above 2 categories. If you think you might do a bit of road biking and mountain biking, then this might be your best option.
You can also go to a store that specializes in selling and repairing bikes to get more information.
|
|
null | false
| null |
A data architect is a practitioner of data architecture, a data management discipline concerned with designing, creating, deploying
and managing an organization's data architecture. Data architects define how the data will be stored, consumed, integrated and managed by different data entities and IT systems, as well as any applications using or processing that data in some way. It is closely allied with business architecture and is considered to be one of the four domains of enterprise architecture.
Role
According to the Data Management Body of Knowledge, the data architect “provides a standard common business vocabulary, expresses strategic data requirements, outlines high level integrated designs to meet these requirements, and aligns with enterprise strategy and related business architecture.”
According to the Open Group Architecture Framework (TOGAF), a data architect is expected to set data architecture principles, create models of data that enable the implementation of the intended business architecture, create diagrams showing key data entities, and create an inventory of the data needed to implement the architecture vision.
Responsibilities
Organizes data at the macro level.
Organizes data at the micro level, data models, for a new application.
Provides a logical data model as a standard for the golden source and for consuming applications to inherit.
Provides a logical data model with elements and business rules needed for the creation of data quality (DQ) rules.
|
summarize the responsibilities of a data architect
|
- A data architect defines how data is collected, processed, stored, consumed, and integrated within different data entities and IT systems within an enterprise.
- A data architect defines a common business vocabulary to be used across the enterprise.
- A data architect expresses strategic data requirements and provides high-level designs to meet the end business architecture.
- A data architect creates data models showing key entities, and relationships between entities.
|
null | false
| null |
Classify each of these as extinct, not extinct, or neither: Mythosaur, Triceratops, Dumbo Octopus, Dodo, Tiger, Lion turtle, Megalodon, Dire wolf, Pegasus, Seahorse
|
Extinct: Triceratops, Dodo, Megalodon, Dire wolf
Not Extinct: Dumbo Octopus, Tiger, Seahorse
Neither: Mythosaur, Lion Turtle, Pegasus
|
|
null | false
| null |
Who was the US president when WWII ended?
|
Harry S. Truman was the US president at the end of World War II
|
|
null | false
| null |
A continuous seven day cycle that runs throughout history, paying no attention whatsoever to the phases of the moon and having a fixed day of rest, was most likely first practised in Judaism, dated to the 6th century BC at the latest.
In Ancient Rome (753 BC–476 AD), every eight days there was a nundinae. It was a market day, during which children were exempted from school and agricultural workers stopped work in the field and came to the city to sell the produce of their labor or to practice religious rites.
The French Revolutionary Calendar (1793–1805) had ten-day weeks (called décades) and allowed décadi, one out of the ten days, as a leisure day. From 1929 to 1940, the Soviet Union utilized a calendar with five and six-day work weeks, with a rest day assigned to a worker either with a colour or number.
In cultures with a four-day workweek, the three Sabbaths derive from the culture's main religious tradition: Friday (Muslim), Saturday (Jewish, Adventist), and Sunday (Christian).
The present-day concept of the relatively longer 'week-end' first arose in the industrial north of Britain in the early 19th century and was originally a voluntary arrangement between factory owners and workers allowing Saturday afternoon off starting at 2 pm on the basis that staff would be available for work sober and refreshed on Monday morning. The Oxford English Dictionary traces the first use of the term weekend to the British magazine Notes and Queries in 1879.
In 1908, the first five-day workweek in the United States was instituted by a New England cotton mill so that Jewish workers would not have to work on the Sabbath from sundown Friday to sundown Saturday. In 1926, Henry Ford began shutting down his automotive factories for all of Saturday and Sunday, due to pressures stemming from the October Revolution, which saw the ruling class persecuted for not giving laborers dignified conditions. In 1929, the Amalgamated Clothing Workers of America was the first union to demand and receive a five-day workweek. The rest of the United States slowly followed, but it was not until 1940, when a provision of the 1938 Fair Labor Standards Act mandating a maximum 40-hour workweek went into effect, that the two-day weekend was adopted nationwide.
Over the succeeding decades, particularly in the 1940s to 1960s, an increasing number of countries adopted either a Friday–Saturday or a Saturday–Sunday weekend to harmonize with international markets. A series of workweek reforms in the mid-to-late 2000s and early 2010s brought much of the Arab World in synchronization with the majority of countries around the world, in terms of working hours, the length of the workweek, and the days of the weekend. The International Labour Organization (ILO) currently defines a workweek exceeding 48 hours as excessive. A 2007 study by the ILO found that at least 614.2 million people around the world were working excessive hours.
|
Given a reference text about the history of the workweek, what social forces precipitated the creation of a five-day workweek in the United States and when?
|
The creation of a five-day workweek in the United States came about as a result of organizing by and pressure from workers and labor groups in the first half of the 20th century, as they demanded dignified working conditions from their employers and the ruling class.
|
null | false
| null |
Identify which car manufacturer is Italian or American: Alfa Romeo, Cadillac
|
Cadillac is American, Alfa Romeo is Italian
|
|
null | false
| null |
List some of the historic Khals in A Song of Ice and Fire.
|
Khal Mengo, Khal Horro, Khal Qano, Khal Zeggo, Khal Temmo
|
|
1809.08731
| true
| null |
Our first baseline is ROUGE-L BIBREF1 , since it is the most commonly used metric for compression tasks. ROUGE-L measures the similarity of two sentences based on their longest common subsequence. Generated and reference compressions are tokenized and lowercased. For multiple references, we only make use of the one with the highest score for each example.
We compare to the best n-gram-overlap metrics from toutanova2016dataset; combinations of linguistic units (bi-grams (LR2) and tri-grams (LR3)) and scoring measures (recall (R) and F-score (F)). With multiple references, we consider the union of the sets of n-grams. Again, generated and reference compressions are tokenized and lowercased.
We further compare to the negative LM cross-entropy, i.e., the log-probability which is only normalized by sentence length. The score of a sentence $S$ is calculated as
Our next baseline is perplexity, which corresponds to the exponentiated cross-entropy:
Due to its popularity, we also performed initial experiments with BLEU BIBREF17 . Its correlation with human scores was so low that we do not consider it in our final experiments.
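A minimal sketch of the LCS-based ROUGE-L scoring and the best-of-multiple-references convention described above (illustrative only; function names are ours, and we use a plain F1 instead of the weighted F-beta of the original metric):

```python
def lcs_length(a, b):
    # dynamic-programming longest common subsequence length
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def rouge_l_f1(candidate, reference):
    # tokenize and lowercase, as described above
    cand, ref = candidate.lower().split(), reference.lower().split()
    lcs = lcs_length(cand, ref)
    if lcs == 0:
        return 0.0
    p, r = lcs / len(cand), lcs / len(ref)
    return 2 * p * r / (p + r)

def rouge_l_multi(candidate, references):
    # with multiple references, keep only the best-scoring one
    return max(rouge_l_f1(candidate, ref) for ref in references)
```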
Our first baseline is ROUGE-L BIBREF1 , since it is the most commonly used metric for compression tasks.
We compare to the best n-gram-overlap metrics from toutanova2016dataset;
We further compare to the negative LM cross-entropy, i.e., the log-probability which is only normalized by sentence length.
Our next baseline is perplexity, which corresponds to the exponentiated cross-entropy:
Due to its popularity, we also performed initial experiments with BLEU BIBREF17 .
|
Is ROUGE their only baseline?
|
No. In addition to ROUGE-L, they compare to n-gram-overlap metrics, negative LM cross-entropy, perplexity, and (in initial experiments) BLEU.
|
null | false
| null |
Explain away goal rule in soccer.
|
The away goal rule was first introduced in the 1965/66 season of the UEFA European Cup Winners' Cup and abolished at the end of the 2020/21 season. All of the UEFA club competitions' knock-out stages (except the final) are played as two-leg fixtures (each team plays one game at home and one game away against the same opponent). The away goal rule was a tie-breaker in case the winner of a knock-out fixture could not be determined after adding up the scores from both games. If the aggregate score of both games was a tie, the team that scored more away goals would advance to the next round of the competition. For example, if Team A lost the first game 2:1 playing away and won the second game 1:0 at home, the aggregate score of those two games is 2:2 (a tie), but since Team A scored 1 goal playing away and Team B scored 0 away goals, Team A would advance to the next round.
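The tie-breaking logic described above is easy to express as a small function — a minimal sketch (the function and parameter names are ours, not from any official rulebook):

```python
def tie_winner(a_home_goals, a_away_goals, b_home_goals, b_away_goals):
    """Decide a two-leg tie under the away goal rule.

    a_home_goals / a_away_goals: goals Team A scored in its home leg
    and in its away leg; likewise for Team B. Returns 'A', 'B', or
    'undecided' (extra time / penalties would follow).
    """
    a_total = a_home_goals + a_away_goals
    b_total = b_home_goals + b_away_goals
    if a_total != b_total:
        return 'A' if a_total > b_total else 'B'
    # aggregate tied: the team with more away goals advances
    if a_away_goals != b_away_goals:
        return 'A' if a_away_goals > b_away_goals else 'B'
    return 'undecided'
```

With the example from the text (Team A: 1 home goal, 1 away goal; Team B: 2 home goals, 0 away goals), the aggregate is 2:2 and Team A advances on away goals.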
|
|
null | false
| null |
Is the game of cricket popular in some parts of the world?
|
Yes, cricket is a very popular game. It is the second most followed sport in the world after football. Each team has a squad of 16 to 20 players, of whom 11 can play in a game. There are three popular formats of the game.
A Test match is the longest format, in which each team plays two innings, with a limit of 90 overs per day over a maximum of five days.
50-over games, also called one-day matches, limit each team to 50 overs per game.
T20, or Twenty20, is the short format and has a large following. A game takes about 4 hours.
|
|
null | false
| null |
What is the most popular opening in chess?
|
The most popular opening in chess changes dramatically based on the skill level of the players. At a professional level, some of the most popular openings include the Sicilian Defence, the Ruy Lopez, or Spanish Opening, and the Italian Game. At a beginner level, popular openings include the London Opening, Sicilian Dragon, and King's Indian Defence.
|
|
null | false
| 432
|
A key objective of our work is to test whether the introduced graph representation of NNs is informative for predicting the performance of NNs. Next, we describe the steps of our proposed framework for solving Problem 1, namely performance prediction from the NN training dynamics.
As shown in Fig., our method consists of four steps: (S1) generation of a temporal graph for the training phase of each NN; (S2) extraction of node features; (S3) construction of a feature-based graph signature that captures the NN dynamics; (S4) prediction of NN performance by training a classifier or regressor on the constructed signatures. We describe these steps in more detail next.
(S1) Graph generation. The first step involves converting the training process of each input NN into a time-evolving graph. Each NN $N_{tr_i} \in \mathcal{N}$ is represented as a series of checkpoints saved over $t$ training epochs. Based on these checkpoints and the graph representation approaches for fc and conv layers presented in 2.2, 3.1 and 2.3, we first convert the $\tau$-th NN checkpoint into a weighted graph $G_i^{(\tau)}$ at timestamp/epoch $\tau$. We note that although our proposed graph representation involves node features, our framework does not leverage them, and thus we consider the generated graphs unattributed. Therefore, each NN $N_{tr_i}$ is mapped to a time-evolving graph
The output of this step is a set of $n$ time-evolving graphs $\{G_{tr_1}, G_{tr_2}, \ldots, G_{tr_n}\}$ corresponding to the original $n$ NNs in $\mathcal{N}$.
(S2) Feature extraction. Next, the goal is to capture the structural dynamics of the NN training process. We aim to select graph measures that can capture changes during the training process, take into account the edge weights of graphs, and can be calculated efficiently. In order to do that in an interpretable way, we extract two well-known node centralities from each snapshot of each generated time-evolving graph $G_i$: weighted degree centrality and eigenvector centrality (§ 2.1). The weighted degree is a simple function of the learnable weight matrix $W$ during the training phase of the NN, and therefore gives us insights into the training dynamics at the node/neuron/filter level. The eigenvector centrality is an extension of the degree centrality, which captures the highly influential nodes, and has been successfully used in neuroscience to capture the dynamic changes of real neural networks (or connectomes). Eigenvector centrality can be used to capture the importance and connectivity of filters/neurons (i.e., the nodes in our graph representation). Also, eigenvector centrality has been used for detecting communities or clusters, and thus provides structural information about the clusterability of the NN, which is complementary to that provided by the simpler and more efficient-to-compute degree centrality.
Our choice of features is also guided by the inherent k-partite structure of our proposed graph representation, which cannot be represented well by several other commonly-used graph features. For example, clustering-based features (e.g., number of triangles, transitivity, clustering coefficient) and cycle-based metrics which account for closed paths in a graph are always equal to 0 for k-partite graphs. Moreover, connected component-related features (i.e., strong/weak connectivity) do not capture the learned edge weights, which are important for modeling the training dynamics of NNs. Other features (e.g., betweenness centrality) tend to be computationally expensive, and would add significant overhead compared to early stopping methods.
(S3) Graph signature construction. In order to be able to compare NN-based graphs (with different numbers of nodes and edges), we summarize the structural changes in the generated time-evolving graphs at the graph level (rather than the node level, as in (S2)), and construct a statistical summary of the extracted node centralities (signature) per time-evolving graph $G_i$. For each snapshot $G_i^{(\tau)}$ of $G_i$, we create a signature vector using five node feature aggregators, which were introduced in for graph similarity: median, mean, standard deviation, skewness, and kurtosis, where all but the median are moments of the corresponding distribution. Thus, $G_i^{(\tau)}$ is mapped to a (static) signature vector $s_i^{(\tau)} \in \mathbb{R}^5$, representing the statistical summary of its node features (i.e., degree or eigenvector centrality) at time $\tau$. To put more emphasis on the most recent timestamp, we can redefine the signature at time $\tau$ as the linear weighted average of the signatures up to that point, or as an exponential function of the previous signatures, $s_i \leftarrow \alpha s_i^{(\tau)} + (1-\alpha) s_i$. To obtain the temporal signature of the evolving graph $G_i$, we aggregate the (static) signatures up to timestamp/epoch $t$: $s_i^t = s_i^{(1)} \oplus s_i^{(2)} \oplus \cdots \oplus s_i^{(t)}$, where $\oplus$ denotes concatenation. We note that global features such as algebraic connectivity, modularity, and average shortest paths may be seen as alternative ways for constructing global graph signatures while circumventing the local feature extraction step (S2); however, these features fail to capture the structural changes in our proposed graph representations (they remain (near-)constant over time) and lead to poor performance.
(S4) Performance prediction. For the last step of performance prediction, we consider two tasks: • Classification: We train a classifier (e.g., SVM, MLP) using the training graphs $\{G_{tr_1}, G_{tr_2}, \ldots, G_{tr_n}\}$ represented by their temporal signatures $\{s_{tr_1}^t, s_{tr_2}^t, \ldots, s_{tr_n}^t\}$, and their corresponding accuracies $A = \{\alpha_1, \alpha_2, \ldots, \alpha_n\}$ mapped to labels $L = \{l_1, l_2, \ldots, l_n\}$ (e.g., high/low accuracy) based on some threshold. Any test NN instance is then classified using the trained classifier.
• Linear regression: We perform linear regression to estimate the actual accuracy value $\alpha_{tst}$ of a new test instance $N_{tst}$ based on its signature obtained through steps (S1)-(S3).
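Steps (S2)-(S3) can be sketched as follows for the weighted-degree feature (illustrative only; eigenvector centrality, the weighted/exponential averaging variants, and the classifier/regressor of (S4) are omitted):

```python
import statistics

def weighted_degrees(edges, num_nodes):
    # edges: (u, v, w) triples of an undirected weighted snapshot graph
    deg = [0.0] * num_nodes
    for u, v, w in edges:
        deg[u] += w
        deg[v] += w
    return deg

def node_signature(values):
    # five aggregators per snapshot: median, mean, standard deviation,
    # skewness, kurtosis (the latter two as standardized moments)
    m = statistics.fmean(values)
    sd = statistics.pstdev(values)
    if sd == 0:
        skew = kurt = 0.0
    else:
        skew = statistics.fmean([((v - m) / sd) ** 3 for v in values])
        kurt = statistics.fmean([((v - m) / sd) ** 4 for v in values])
    return [statistics.median(values), m, sd, skew, kurt]

def temporal_signature(snapshots, num_nodes):
    # concatenate the per-epoch signatures (the concatenation aggregation)
    sig = []
    for edges in snapshots:
        sig.extend(node_signature(weighted_degrees(edges, num_nodes)))
    return sig
```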
(S2) Feature extraction. Next, the goal is to capture the structural dynamics of the NN training process. We aim to select graph measures that can capture changes during the training process, take into account the edge weight of graphs and can be calculated efficiently. In order to do that in an interpretable way, we extract two well-known node centralities from each snapshot of each generated time-evolving graph Gi : weighted degree centrality and eigenvector centrality (§ 2.1). The weighted degree is a simple function of the learnable weight matrix W during the training phase of NN, therefore it gives us insights into the training dynamics at the node/neuron/filter level. The eigenvector centrality is an extension of the degree centrality, which captures the highly influential nodes, and has been successfully used in neuroscience to capture the dynamic changes of real neural networks (or connectomes) (Lohmann et al., 2010). Eigenvector centrality can be used to capture importance and connectivity of filters/neurons (i.e., the nodes in our graph representation). Also, eigenvector centrality has been used for detecting communities (Newman, 2006) or clusters (Wu et al., 2013), and thus provides structural information about the clusterability of the NN, which is complementary to that provided by the simpler and more efficient-to-compute degree centrality. Our choice of features is also guided by the inherent k-partite structure of our proposed graph representation, which cannot be represented well by several other commonly-used graph features. For example, clustering-based features (e.g., number of triangles, transitivity, clustering coefficient) and cycle-based metrics which account for closed paths in a graph are always equal to 0 for k-partite graphs. Moreover, connected component-related features (i.e., strong/weak connectivity) do not capture the learned edge weights, which are important for modeling the training dynamics of NNs. 
Other features (e.g., betweenness centrality) tend to be computationally expensive, and would add significant overhead compared to early stopping methods.
|
Why are degree and Eigen centrality taken into account only?
|
Based on your feedback, we have updated our paper (Sec. 3.2 (S2) and (S3)) with additional justifications and intuition behind the graph features that we leverage during the signature construction step (degree and eigenvector centrality). We chose weighted degree centrality to efficiently capture (1) the weights that are learned in each layer of the NN at the node level (filter or neuron), which are the most important aspect of the training process; and (2) the change of the learned weights during the training process of NN over time (in the proposed time-evolving graph representation). Moreover, we use the node degrees to model the NN structure since they have been found to be effective in capturing the structural roles of nodes and a key notion for structural node embedding [1]. We note that we show the difference in how the degree-based signatures change during the training process for low accuracy NNs vs. high accuracy ones in Figure-7-9 in the appendix. Eigenvector centrality is used to capture the highly influential nodes in a graph; in our case, the nodes correspond to filters of convolutional layers and neurons of fully connected layers. Additionally, eigenvector centrality has been used for detecting communities [2] or clusters [3], and thus provides structural information about the clusterability of the NN. Clusters or communities are central for network analysis and provide complementary information compared to the simpler and more efficient-to-compute degree centrality. Our hypothesis was that tracking changes in the eigenvector centrality (or node influence or clustering-related information) over time could provide complementary insights into the structural changes of the NN during training that may correlate with high or low performance.
|
1811.01734
| false
| null |
In recent years, methods based on string kernels have demonstrated remarkable performance in various text classification tasks BIBREF35 , BIBREF36 , BIBREF22 , BIBREF19 , BIBREF10 , BIBREF17 , BIBREF26 . String kernels represent a way of using information at the character level by measuring the similarity of strings through character n-grams. Lodhi et al. BIBREF35 used string kernels for document categorization, obtaining very good results. String kernels were also successfully used in authorship identification BIBREF22 . More recently, various combinations of string kernels reached state-of-the-art accuracy rates in native language identification BIBREF19 and Arabic dialect identification BIBREF17 . Interestingly, string kernels have been used in cross-domain settings without any domain adaptation, obtaining impressive results. For instance, Ionescu et al. BIBREF19 have employed string kernels in a cross-corpus (and implicitly cross-domain) native language identification experiment, improving the state-of-the-art accuracy by a remarkable INLINEFORM0 . Giménez-Pérez et al. BIBREF10 have used string kernels for single-source and multi-source polarity classification. Remarkably, they obtain state-of-the-art performance without using knowledge from the target domain, which indicates that string kernels provide robust results in the cross-domain setting without any domain adaptation. Ionescu et al. BIBREF17 obtained the best performance in the Arabic Dialect Identification Shared Task of the 2017 VarDial Evaluation Campaign BIBREF37 , with an improvement of INLINEFORM1 over the second-best method. It is important to note that the training and the test speech samples prepared for the shared task were recorded in different setups BIBREF37 , or in other words, the training and the test sets are drawn from different distributions. 
Different from all these recent approaches BIBREF19 , BIBREF10 , BIBREF17 , we use unlabeled data from the target domain to significantly increase the performance of string kernels in cross-domain text classification, particularly in English polarity classification.
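As an illustration of the n-gram-based similarity idea, here is a minimal spectrum-style string kernel (a simplified stand-in for the blended string kernels used in the cited works; the n-gram range and normalization are our choices):

```python
from collections import Counter
import math

def char_ngrams(text, n):
    # multiset of character n-grams of the lowercased string
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def spectrum_kernel(s, t, n_range=(3, 5)):
    # inner product of character n-gram count vectors,
    # summed over a range of n-gram lengths
    k = 0
    for n in range(n_range[0], n_range[1] + 1):
        a, b = char_ngrams(s, n), char_ngrams(t, n)
        k += sum(count * b[g] for g, count in a.items())
    return k

def normalized_kernel(s, t, n_range=(3, 5)):
    # cosine-style normalization so that k(s, s) == 1
    denom = math.sqrt(spectrum_kernel(s, s, n_range)
                      * spectrum_kernel(t, t, n_range))
    return spectrum_kernel(s, t, n_range) / denom if denom else 0.0
```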
String kernels represent a way of using information at the character level by measuring the similarity of strings through character n-grams.
|
What is a string kernel?
|
String kernel is a technique that uses character n-grams to measure the similarity of strings
|
1809.08652
| false
| null |
We tried Glove BIBREF2 and Twitter word2vec BIBREF3 code for training embeddings for the processed tweets. The embeddings were trained on both the datasets provided by BIBREF1 and HEOT. These embeddings help to learn distributed representations of tweets. After experimentation, we kept the size of embeddings fixed to 100.
We tried Glove BIBREF2 and Twitter word2vec BIBREF3 code for training embeddings for the processed tweets.
|
What embeddings do they use?
|
The answers are shown as follows:
* Glove
* Twitter word2vec
|
null | false
| null |
Where was poutine invented?
|
While poutine is the national dish of Quebec (a province of Canada), multiple cities claim to be its birthplace, for example Drummondville, Victoriaville, Warwick, and Princeville.
|
|
null | false
| null |
How are you doing today?
|
As a Large Language Model (LLM), I don't have feelings like humans do. Thanks for asking though!
|
|
1909.08167
| false
| null |
Through the experiments, we empirically studied our analysis on DIRL and the effectiveness of our proposed solution in dealing with the problem it suffered from. In addition, we studied the impact of each step described in §SECREF10 and §SECREF14 on our proposed solution, respectively. To perform the study, we carried out a performance comparison between the following models:
SO: the source-only model trained using source domain labeled data without any domain adaptation.
CMD: the centre-momentum-based domain adaptation model BIBREF3 of the original DIRL framework that implements $\mathcal {L}_{inv}$ with $\text{CMD}_K$.
DANN: the adversarial-learning-based domain adaptation model BIBREF2 of the original DIRL framework that implements $\mathcal {L}_{inv}$ with $\text{JSD}(\rm {P}_S, \rm {P}_T)$.
$\text{CMD}^\dagger $: the weighted version of the CMD model that only applies the first step (described in §SECREF10) of our proposed method.
$\text{DANN}^\dagger $: the weighted version of the DANN model that only applies the first step of our proposed method.
$\text{CMD}^{\dagger \dagger }$: the weighted version of the CMD model that applies both the first and second (described in §SECREF14) steps of our proposed method.
$\text{DANN}^{\dagger \dagger }$: the weighted version of the DANN model that applies both the first and second steps of our proposed method.
$\text{CMD}^{*}$: a variant of $\text{CMD}^{\dagger \dagger }$ that assigns $\mathbf {w}^*$ (estimate from target labeled data) to $\mathbf {w}$ and fixes this value during model training.
$\text{DANN}^{*}$: a variant of $\text{DANN}^{\dagger \dagger }$ that assigns $\mathbf {w}^*$ to $\mathbf {w}$ and fixes this value during model training.
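As a reference point for the CMD-based models above, here is a minimal sketch of the central moment discrepancy following its standard definition (assuming features rescaled to $[0,1]$, so the normalization factors vanish; this is an illustration, not the authors' code):

```python
import numpy as np

def cmd(x, y, k=5):
    """Central Moment Discrepancy CMD_K between two sample matrices.

    Assumes features lie in [0, 1], so the 1/|b-a|^k normalization
    factors of the standard definition equal 1.
    """
    mx, my = x.mean(axis=0), y.mean(axis=0)
    d = np.linalg.norm(mx - my)            # first raw moments
    cx, cy = x - mx, y - my
    for order in range(2, k + 1):          # higher central moments
        d += np.linalg.norm((cx ** order).mean(axis=0)
                            - (cy ** order).mean(axis=0))
    return float(d)
```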
We conducted experiments on the Amazon reviews dataset BIBREF9, which is a benchmark dataset in the cross-domain sentiment analysis field. This dataset contains Amazon product reviews of four different product domains: Books (B), DVD (D), Electronics (E), and Kitchen (K) appliances. Each review is originally associated with a rating of 1-5 stars and is encoded as a 5,000-dimensional feature vector of bag-of-words unigrams and bigrams.
From this dataset, we constructed 12 binary-class cross-domain sentiment analysis tasks: B$\rightarrow $D, B$\rightarrow $E, B$\rightarrow $K, D$\rightarrow $B, D$\rightarrow $E, D$\rightarrow $K, E$\rightarrow $B, E$\rightarrow $D, E$\rightarrow $K, K$\rightarrow $B, K$\rightarrow $D, K$\rightarrow $E. Following the setting of previous works, we treated a review as class `1' if it was ranked up to 3 stars, and as class `2' if it was ranked 4 or 5 stars. For each task, $\mathcal {D}_S$ consisted of 1,000 examples of each class, and $\mathcal {D}_T$ consisted of 1,500 examples of class `1' and 500 examples of class `2'. In addition, since it is reasonable to assume that $\mathcal {D}_T$ can reveal the distribution of target domain data, we controlled the target domain testing dataset to have the same class ratio as $\mathcal {D}_T$. Using the same label assigning mechanism, we also studied model performance over different degrees of $\rm {P}(\rm {Y})$ shift, which was evaluated by the max value of $\rm {P}_S(\rm {Y}=i)/\rm {P}_T(\rm {Y}=i), \forall i=1, \cdots , L$. Please refer to Appendix C for more detail about the task design for this study.
We additionally constructed 12 multi-class cross-domain sentiment classification tasks. Tasks were designed to distinguish reviews of 1 or 2 stars (class 1) from those of 4 stars (class 2) and those of 5 stars (class 3). For each task, $\mathcal {D}_S$ contained 1000 examples of each class, and $\mathcal {D}_T$ consisted of 500 examples of class 1, 1500 examples of class 2, and 1000 examples of class 3. Similarly, we also controlled the target domain testing dataset to have the same class ratio as $\mathcal {D}_T$.
Table TABREF27 shows model performance on the 12 binary-class cross-domain tasks. From this table, we can obtain the following observations. First, CMD and DANN underperform the source-only model (SO) on all of the 12 tested tasks, indicating that DIRL in the studied situation will degrade the domain adaptation performance rather than improve it. This observation confirms our analysis. Second, $\text{CMD}^{\dagger \dagger }$ consistently outperformed CMD and SO. This observation shows the effectiveness of our proposed method for addressing the problem of the DIRL framework in the studied situation. A similar conclusion can also be obtained by comparing the performance of $\text{DANN}^{\dagger \dagger }$ with that of DANN and SO. Third, $\text{CMD}^{\dagger }$ and $\text{DANN}^{\dagger }$ consistently outperformed $\text{CMD}$ and DANN, respectively, which shows the effectiveness of the first step of our proposed method. Finally, on most of the tested tasks, $\text{CMD}^{\dagger \dagger }$ and $\text{DANN}^{\dagger \dagger }$ outperform $\text{CMD}^{\dagger }$ and $\text{DANN}^{\dagger }$, respectively. Figure FIGREF35 depicts the relative improvement, e.g., $(\text{Acc}(\text{CMD})-\text{Acc}(\text{SO}))/\text{Acc}(\text{SO})$, of the domain adaptation methods over the SO baseline under different degrees of $\rm {P}(\rm {Y})$ shift, on two binary-class domain adaptation tasks (see Appendix C for results of the other models on other tasks). From the figure, we can see that the performance of CMD generally got worse as the degree of $\rm {P}(\rm {Y})$ shift increased. In contrast, our proposed model $\text{CMD}^{\dagger \dagger }$ performed robustly across varying degrees of $\rm {P}(\rm {Y})$ shift. Moreover, it can achieve the near-upper-bound performance characterized by $\text{CMD}^{*}$. This again verified the effectiveness of our solution.
Through the experiments, we empirically studied our analysis on DIRL and the effectiveness of our proposed solution in dealing with the problem it suffered from. In addition, we studied the impact of each step described in §SECREF10 and §SECREF14 on our proposed solution, respectively. To perform the study, we carried out a performance comparison between the following models:
SO: the source-only model trained using source domain labeled data without any domain adaptation.
CMD: the centre-momentum-based domain adaptation model BIBREF3 of the original DIRL framework that implements $\mathcal {L}_{inv}$ with $\text{CMD}_K$.
DANN: the adversarial-learning-based domain adaptation model BIBREF2 of the original DIRL framework that implements $\mathcal {L}_{inv}$ with $\text{JSD}(\rm {P}_S, \rm {P}_T)$.
$\text{CMD}^\dagger $: the weighted version of the CMD model that only applies the first step (described in §SECREF10) of our proposed method.
$\text{DANN}^\dagger $: the weighted version of the DANN model that only applies the first step of our proposed method.
$\text{CMD}^{\dagger \dagger }$: the weighted version of the CMD model that applies both the first and second (described in §SECREF14) steps of our proposed method.
$\text{DANN}^{\dagger \dagger }$: the weighted version of the DANN model that applies both the first and second steps of our proposed method.
$\text{CMD}^{*}$: a variant of $\text{CMD}^{\dagger \dagger }$ that assigns $\mathbf {w}^*$ (estimate from target labeled data) to $\mathbf {w}$ and fixes this value during model training.
$\text{DANN}^{*}$: a variant of $\text{DANN}^{\dagger \dagger }$ that assigns $\mathbf {w}^*$ to $\mathbf {w}$ and fixes this value during model training.
We conducted experiments on the Amazon reviews dataset BIBREF9, which is a benchmark dataset in the cross-domain sentiment analysis field.
From this dataset, we constructed 12 binary-class cross-domain sentiment analysis tasks: B$\rightarrow $D, B$\rightarrow $E, B$\rightarrow $K, D$\rightarrow $B, D$\rightarrow $E, D$\rightarrow $K, E$\rightarrow $B, E$\rightarrow $D, E$\rightarrow $K, K$\rightarrow $B, K$\rightarrow $D, K$\rightarrow $E.
We additionally constructed 12 multi-class cross-domain sentiment classification tasks. Tasks were designed to distinguish reviews of 1 or 2 stars (class 1) from those of 4 stars (class 2) and those of 5 stars (class 3).
Table TABREF27 shows model performance on the 12 binary-class cross-domain tasks. From this table, we can obtain the following observations.
|
How is DIRL evaluated?
|
The answers are shown as follows:
* Through the experiments, we empirically studied our analysis on DIRL and the effectiveness of our proposed solution in dealing with the problem it suffered from.
|
null | false
| 451
|
We illustrate the test accuracy and convergence time in Figure 4. The decrease in batch size leads to a positive impact on the predictive accuracy in the data sets tested. This indicates that with less memory consumption for the kernel matrix, Kernel SGD can still achieve good predictive accuracy.
|
The described method is burdensome in terms of both disk space and loading the kernel matrix. What is the latency impact of this loading as the mini-batch size increases?
|
Kernel SGD is memory efficient (i.e., linear in the number of training data; cf. Section 3.2). As shown in Table 2 of the main text, the whole kernel matrix needs at most 11.72 MB of memory. Moreover, the size of the kernel matrix can be controlled by setting the mini-batch size. We have conducted experiments with increasing mini-batch sizes, as shown in Figure 4 in the main text; the efficiency of Kernel SGD remains excellent. Please note that all of the execution times reported in our manuscript include the time for loading the kernel matrix.
|
null | false
| null |
In the series A Song of Ice and Fire, who is the founder of House Kenning of Kayce?
|
Herrock Kenning, also known as Herrock the Whoreson
|
|
null | false
| null |
Why are Donald Trump’s followers stereotyped as idiots?
|
The creation of a stereotype and why it was formed in the first place is difficult to trace. However, it is widely reported that the average American who supports Donald Trump is less educated than those who do not support him. Additionally, people who are less educated are more susceptible to confirmation and other biases and to the influence of others, which may be one reason there is a strong association between Trump followers and viewers of Fox News. This type of biased feedback loop can lead to the spread of incompetence.
|
|
null | false
| null |
Do you ever sing when you're alone?
|
I do often sing when I am alone. I sing in the car. I sing in the shower. I sing walking along the road when no one else is around. Singing alone is a very enjoyable way to spend time. Singing alone allows you to practice without others watching. Singing alone is a good way to be ready to sing karaoke at a work function. My favorite songs to sing alone are songs from The Sound of Music and Miley Cyrus's new song "Flowers."
|
|
1605.07683
| false
| null |
FLOAT SELECTED: Table 1: Data used in this paper. Tasks 1-5 were generated using our simulator and share the same KB. Task 6 was converted from the 2nd Dialog State Tracking Challenge (Henderson et al., 2014a). Concierge is made of chats extracted from a real online concierge service. (∗) Tasks 1-5 have two test sets, one using the vocabulary of the training set and the other using out-of-vocabulary words.
FLOAT SELECTED: Table 1: Data used in this paper. Tasks 1-5 were generated using our simulator and share the same KB. Task 6 was converted from the 2nd Dialog State Tracking Challenge (Henderson et al., 2014a). Concierge is made of chats extracted from a real online concierge service. (∗) Tasks 1-5 have two test sets, one using the vocabulary of the training set and the other using out-of-vocabulary words.
|
How large is the Dialog State Tracking Dataset?
|
1,618 training dialogs, 500 validation dialogs, and 1,117 test dialogs
|
null | false
| 424
|
The “Leave one out upper bound” (Poole et al., 2019) on the mutual information was called “L1Out” in Cheng et al. (2020), while we call it “InfoLOOB” (LOOB for “Leave One Out Bound”). InfoLOOB is the same as InfoNCE (Eq. (3)), but without the positive sample x1 in the denominator. Contrastive Log-ratio Upper Bound (CLUB), another upper bound on the mutual information, was only used for minimizing it (Cheng et al., 2020). Maximizing CLUB failed in experiments, because the embedding distribution was not uniform as known for similar objectives (Wang & Liu, 2021). Uniform embedding distributions are required for successful contrastive learning (Wang & Isola, 2020).
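The relationship between the two bounds can be made explicit; the following is a sketch in standard contrastive-learning notation (the anchor/sample symbols $x_i$, $y_j$ and the temperature $\tau$ are our conventions, not taken from the excerpt):

```latex
% InfoNCE: the positive pair also appears in the denominator
\mathcal{L}_{\mathrm{InfoNCE}}
  = -\frac{1}{N}\sum_{i=1}^{N}
    \ln \frac{\exp\!\left(x_i^{\top} y_i / \tau\right)}
             {\exp\!\left(x_i^{\top} y_i / \tau\right)
              + \sum_{j \neq i} \exp\!\left(x_i^{\top} y_j / \tau\right)}

% InfoLOOB ("leave one out bound"): identical, except the positive
% sample is left out of the denominator
\mathcal{L}_{\mathrm{InfoLOOB}}
  = -\frac{1}{N}\sum_{i=1}^{N}
    \ln \frac{\exp\!\left(x_i^{\top} y_i / \tau\right)}
             {\sum_{j \neq i} \exp\!\left(x_i^{\top} y_j / \tau\right)}
```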
|
Since InfoLOOB is an upper bound, a natural question is why not also try using CLUB (Cheng et al., ICML 2020) as the objective?
|
Again a valid point. In Cheng et al. (2020) CLUB has been used for minimizing the mutual information as usual for an upper bound. However, learning fails when maximizing the CLUB objective. There is a collapse to a single mode both without and with modern Hopfield networks. After extensive experimentation, we were not able to solve this problem. While CLUB averages over the negatives, InfoLOOB uses a log-sum-exponential (LSE). The LSE avoids a collapse since the sample that is most similar to the anchor is strongly pushed away from the anchor and even away from the average. The LSE leads to a more uniform distribution of features on the hypersphere. Our observations are in agreement with the findings in “Understanding Contrastive Representation Learning through Alignment and Uniformity on the Hypersphere” (https://arxiv.org/abs/2005.10242).
We now write in the paper: “Contrastive Log-ratio Upper Bound (CLUB), another upper bound on the mutual information, was only used for minimizing it (Cheng et al., 2020). Maximizing CLUB failed in experiments, because the embedding distribution was not uniform as known for similar objectives (Wang & Liu, 2021). Uniform embedding distributions are required for successful contrastive learning (Wang & Isola, 2020).”
|
null | false
| 322
|
NLG is the process of automatically generating coherent NL text from non-linguistic data BIBREF0. Recently, the field has seen an increased interest in the development of NLG systems focusing on verbalizing resources from SW data BIBREF1. The SW aims to make information available on the Web easier to process for machines and humans. However, the languages underlying this vision, i.e., RDF, SPARQL and OWL, are rather difficult to understand for non-expert users. For example, while the meaning of the OWL class expression Class: Professor SubClassOf: worksAt SOME University is obvious to every SW expert, this expression (“Every professor works at a university”) is rather difficult to fathom for lay persons.
Previous works such as SPARQL2NL BIBREF2 and SPARTIQULATION BIBREF3 have already shown the usefulness of the verbalization of SPARQL and RDF in areas such as question answering BIBREF4 and the explanation of the output of systems based on SW technologies BIBREF5. However, other SW languages, such as OWL, are rarely investigated.
In this paper, we present an open-source holistic NLG framework for the SW, named LD2NL, which facilitates the verbalization of the three key languages of the SW, i.e., RDF, OWL, and SPARQL, into NL. Our framework is based on a bottom-up paradigm for verbalizing SW data. Additionally, LD2NL builds upon SPARQL2NL, as it is open-source and the paradigm it follows can be reused and ported to RDF and OWL. Thus, LD2NL is capable of generating either a single sentence or a summary of a given resource, rule, or query. To validate our framework, we evaluated LD2NL with 66 experts in NLP and SW as well as 20 non-experts who were lay users or non-users of SW. The results suggest that LD2NL generates texts which can be easily understood by humans. The version of LD2NL used in this paper and all experimental results will be made publicly available.
Additionally, LD2NL builds upon SPARQL2NL as it is open-source and the paradigm it follows can be reused and ported to RDF and OWL.
|
Why does the LD2NL build upon SPARQL2NL?
|
Because it is open-source and the paradigm it follows can be reused and ported to RDF and OWL.
|
null | false
| null |
LaFollette was born near Morristown, New Jersey in 1781. His father, Joseph, and grandfather, Jean, were Huguenots who had escaped the persecution in France, traveling first to Jersey and then to the colonies where they operated a small farm near the Wallkill River in northern New Jersey. Jean was killed during the French and Indian Wars. Joseph married Phoebe Gobel of Morristown, New Jersey, whose father's farm along with other neighboring farms in Jockey Hollow was used by George Washington and his troops during the winter of 1780. After serving with Count Casimir Pulaski during the Revolutionary War, Joseph and his family joined the pioneers who trekked westward through the Cumberland Gap.
|
Give a comma separated list of all the people listed in this passage about Jesse LaFollette
|
Jesse LaFollette, Joseph LaFollette, Jean LaFollette, Phoebe Gobel, George Washington, Casimir Pulaski
|
null | false
| 83
|
We inspected the tweets of one fold that were misclassified by the Mazajak/SVM model (36 false positives/121 false negatives) to determine the most common errors.
False positives had four main types:
* Gloating: ex. يا هبيده (“yA hbydp” – “O you delusional”), referring to fans of a rival sports team for thinking they could win.
* Quoting: ex. لما حد يسب ويقول يا كلب (“lmA Hd ysb wyqwl yA klb” – “when someone swears and says: O dog”).
* Idioms: ex. يا فاطر رمضان يا خاسر دينك (“yA fATr rmDAn yA xAsr dynk” – “O you who does not fast Ramadan, you who have lost your religion”), which is a colloquial idiom.
* Implicit Sarcasm: ex. يا خاين انت عايز تشكك في حب الشعب للريس (“yA KAyn Ant EAwz t$kk fy Hb Al$Eb llrys” – “O traitor, (you) want to question the love of the people to the president”), where the author is mocking the president's popularity.
False negatives had two types:
* Mixture of offensiveness and admiration: ex. calling a girl a puppy يا كلبوبة (“yA klbwbp” – “O puppy”) in a flirtatious manner.
* Implicit offensiveness: ex. calling for a cure while implying insanity in: وتشفي حكام قطر من المرض (“wt$fy HkAm qTr mn AlmrD” – “and cure Qatar rulers from illness”).
Many errors stem from heavy use of dialectal Arabic as well as ambiguity. Since BERT was trained on Wikipedia (MSA) and Google books, the model failed to classify tweets with dialectal cues. Conversely, Mazajak/SVM is more biased towards dialects, often failing to classify MSA tweets.
We inspected the tweets of one fold that were misclassified by the Mazajak/SVM model (36 false positives/121 false negatives) to determine the most common errors. They were as follows: Four false positive types: • Gloating: ex. يا هبيده (“yA hbydp” – “O you delusional”), referring to fans of a rival sports team for thinking they could win. • Quoting: ex. لما حد يسب ويقول يا كلب (“lmA Hd ysb wyqwl yA klb” – “when someone swears and says: O dog”). • Idioms: ex. يا فاطر رمضان يا خاسر دينك (“yA fATr rmDAn yA xAsr dynk” – “o you who does not fast Ramadan, you have lost your faith”), which is a colloquial idiom. • Implicit Sarcasm: ex. يا خاين انت عايز تشكك في حب الشعب للريس (“yA xAyn Ant EAwz t$kk fy Hb Al$Eb llrys” – “O traitor, (you) want to question people's love for the president”), where the author is mocking the president's popularity.
|
What are the four false positive types in this paper?
|
The four false positive types are gloating, quoting, idioms, and implicit sarcasm.
|
null | false
| 9
|
Our framework consists of a question answering model and a question generation model.
We implement two question answering models, which directly measure the semantic similarity between questions and candidate answers in the semantic space. First, to make the model's prediction more explainable, we implement a path-based QA model, PCNet. In this model, to get candidate answers from the triplet-based document for a given question, we first retrieve an “anchor” point $arg1_i$ or $arg2_i$ in the document fact $f_i$. These anchors are selected based on the edit distance between the words in the questions and the arguments. Then we regard all arguments in the 1-hop and 2-hop facts of the anchors as answers. However, the coverage of the candidate answers cannot be 100% in this model. We therefore implement a second, end-to-end neural model, KVMenNet, which covers all the answers but with less interpretability. Both models generate a score $f_{qa}(q, a)$ for each candidate answer.
We then implement a generation-based model. The motivation to design this model is that we want to associate natural language phrases with knowledge based representation. It takes semantics of a candidate answer as the input and generates a question $\hat{q}$ . Then a paraphrasing model gives a score $f_{qg}(q,\hat{q})$ , which is computed between the generated question $\hat{q}$ and the original question $q$ , as the ranking score.
We get the final scores $S(q,a)$ used for ranking as follows.
$$S(q,a) = \lambda f_{qa}(q, a) + (1-\lambda ) f_{qg}(q,\hat{q})$$ (Eq. 7)
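Eq. 7 is a convex combination of the two model scores. A minimal sketch (with made-up scores and $\lambda$, purely for illustration):

```python
# Ranking score from Eq. 7: S(q, a) = lam * f_qa + (1 - lam) * f_qg.
# The scores and lambda below are illustrative numbers, not from the paper.
def final_score(f_qa, f_qg, lam=0.5):
    return lam * f_qa + (1.0 - lam) * f_qg

# Each candidate answer has (f_qa, f_qg) scores from the two models.
candidates = {"Paris": (0.9, 0.7), "Lyon": (0.6, 0.8)}
ranked = sorted(candidates, key=lambda a: final_score(*candidates[a]), reverse=True)
print(ranked)  # → ['Paris', 'Lyon'], since 0.8 > 0.7 for lam = 0.5
```

Here $\lambda$ trades off the QA model's direct similarity score against the question-generation paraphrase score.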
Moreover, we incorporate side information from external KBs into the three models. The details of how we use external KBs to enhance the representation of elements in the document KB will be described in section Incorporating External Knowledge.
We then implement a generation-based model. The motivation to design this model is that we want to associate natural language phrases with knowledge based representation.
|
Why is the generation-based model designed in the approach?
|
The motivation to design this model is that they want to associate natural language phrases with knowledge based representation.
|
2004.01694
| false
| null |
Our experiments in Sections SECREF3–SECREF5 show that machine translation quality has not yet reached the level of professional human translation, and that human evaluation methods which are currently considered best practice fail to reveal errors in the output of strong NMT systems. In this section, we recommend a set of evaluation design changes that we believe are needed for assessing human–machine parity, and will strengthen the human evaluation of MT in general.
Recommendations ::: (R1) Choose professional translators as raters.
In our blind experiment (Section SECREF3), non-experts assess parity between human and machine translation where professional translators do not, indicating that the former neglect more subtle differences between different translation outputs.
Recommendations ::: (R2) Evaluate documents, not sentences.
When evaluating sentences in random order, professional translators judge machine translation more favourably as they cannot identify errors related to textual coherence and cohesion, such as different translations of the same product name. Our experiments show that using whole documents (i. e., full news articles) as unit of evaluation increases the rating gap between human and machine translation (Section SECREF4).
Recommendations ::: (R3) Evaluate fluency in addition to adequacy.
Raters who judge target language fluency without access to the source texts show a stronger preference for human translation than raters with access to the source texts (Sections SECREF4 and SECREF24). In all of our experiments, raters prefer human translation in terms of fluency while, just as in BIBREF3's BIBREF3 evaluation, they find no significant difference between human and machine translation in sentence-level adequacy (Tables TABREF21 and TABREF30). Our error analysis in Table TABREF34 also indicates that MT still lags behind human translation in fluency, specifically in grammaticality.
Recommendations ::: (R4) Do not heavily edit reference translations for fluency.
In professional translation workflows, texts are typically revised with a focus on target language fluency after an initial translation step. As shown in our experiment in Section SECREF24, aggressive revision can make translations more fluent but less accurate, to the degree that they become indistinguishable from MT in terms of accuracy (Table TABREF30).
Recommendations ::: (R5) Use original source texts.
Raters show a significant preference for human over machine translations of texts that were originally written in the source language, but not for source texts that are translations themselves (Section SECREF35). Our results are further evidence that translated texts tend to be simpler than original texts, and in turn easier to translate with MT.
In this section, we recommend a set of evaluation design changes that we believe are needed for assessing human–machine parity, and will strengthen the human evaluation of MT in general.
Recommendations ::: (R1) Choose professional translators as raters.
In our blind experiment (Section SECREF3), non-experts assess parity between human and machine translation where professional translators do not, indicating that the former neglect more subtle differences between different translation outputs.
Recommendations ::: (R2) Evaluate documents, not sentences.
When evaluating sentences in random order, professional translators judge machine translation more favourably as they cannot identify errors related to textual coherence and cohesion, such as different translations of the same product name. Our experiments show that using whole documents (i. e., full news articles) as unit of evaluation increases the rating gap between human and machine translation (Section SECREF4).
Recommendations ::: (R3) Evaluate fluency in addition to adequacy.
Raters who judge target language fluency without access to the source texts show a stronger preference for human translation than raters with access to the source texts (Sections SECREF4 and SECREF24). In all of our experiments, raters prefer human translation in terms of fluency while, just as in BIBREF3's BIBREF3 evaluation, they find no significant difference between human and machine translation in sentence-level adequacy (Tables TABREF21 and TABREF30). Our error analysis in Table TABREF34 also indicates that MT still lags behind human translation in fluency, specifically in grammaticality.
Recommendations ::: (R4) Do not heavily edit reference translations for fluency.
In professional translation workflows, texts are typically revised with a focus on target language fluency after an initial translation step. As shown in our experiment in Section SECREF24, aggressive revision can make translations more fluent but less accurate, to the degree that they become indistinguishable from MT in terms of accuracy (Table TABREF30).
Recommendations ::: (R5) Use original source texts.
Raters show a significant preference for human over machine translations of texts that were originally written in the source language, but not for source texts that are translations themselves (Section SECREF35). Our results are further evidence that translated texts tend to be simpler than original texts, and in turn easier to translate with MT.
|
What recommendations do they offer?
|
The answers are shown as follows:
* Choose professional translators as raters
* Evaluate documents, not sentences
* Evaluate fluency in addition to adequacy
* Do not heavily edit reference translations for fluency
* Use original source texts
|