| paper_id (string, length 10, nullable) | yes_no (bool, 2 classes) | paper_index (int64, 0–519, nullable) | evidence (string, length 0–37.7k) | question (string, length 4–11.7k) | answer (string, length 1–26k) |
|---|---|---|---|---|---|
null | false
| null |
Which rock band from the 2000s is the most influential?
|
The most influential rock band from the 2000s is The Strokes. They are often credited with spearheading a revival of rock, and they have influenced and inspired extremely popular bands such as Arctic Monkeys, The Killers, and Franz Ferdinand.
|
|
null | false
| null |
What is the primary advantage of a hydrogen fuel cell powered vehicle?
|
A hydrogen fuel cell powered vehicle produces no tailpipe emissions, emitting only water vapour and warm air. Refilling with hydrogen is also very fast compared with recharging a battery, making these vehicles both convenient and environmentally friendly.
|
|
null | false
| null |
The administration of London is formed of two tiers: a citywide, strategic tier and a local tier. Citywide administration is coordinated by the Greater London Authority (GLA), while local administration is carried out by 33 smaller authorities. The GLA consists of two elected components: the mayor of London, who has executive powers, and the London Assembly, which scrutinises the mayor's decisions and can accept or reject the mayor's budget proposals each year.
|
Given this paragraph, what is an important function of the London Assembly?
|
An important function of the London Assembly is to scrutinise the mayor's decisions and accept or reject the mayor's budget proposals each year.
|
null | false
| null |
The RAND Corporation (from the phrase "research and development") is a non-partisan American nonprofit global policy think tank and research institute that conducts research and development (R&D) in multiple fields and industries including national security, military defense, aerospace, education, public health, energy, the environment (environmental science and studies), economics, political science and international relations, infrastructure, law and criminology (criminal justice), anthropology, sociology, social policy, the social sciences, natural sciences, technology, engineering, mathematics, and their application to public policy, public administration, and business administration issues.
|
What does RAND Corporation do?
|
RAND is a research organization that develops solutions to public policy challenges to help make communities throughout the world safer and more secure, healthier and more prosperous.
|
null | false
| null |
Are you ever too old for anything?
|
The one thing I will always fiercely & unabashedly resent is the stance that someone is “too old for something” past a certain age. Telling someone they’re “too old” to enjoy a show, practice a hobby, have a personality, or do anything that’s not considered “mature” is one of the ugliest standards society has managed to ingrain into people’s brains. Please pursue what makes you happy without shame. If it doesn’t hurt you or anyone else, then it’s no one’s business but yours. You owe no one an explanation, but you do owe yourself authenticity & happiness. And no one can decide what that looks like but you.
|
|
null | false
| 112
|
Twitter, a micro-blogging and social networking site, has emerged as a platform where people express themselves and react to events in real-time. It is estimated that nearly 500 million tweets are sent per day. Twitter data is particularly interesting because of its peculiar nature, where people convey messages in short sentences using hashtags, emoticons, emojis, etc. In addition, each tweet has metadata like the location and language used by the sender. It is challenging to analyze this data because the tweets might not be grammatically correct and the users tend to use informal and slang words all the time. Hence, this poses an interesting problem for NLP researchers. Any advances in using this abundant and diverse data can help understand and analyze information about a person, an event, a product, an organization, or a country as a whole. Many notable use cases of Twitter can be found here.
Along similar lines, Task 1 of WASSA-2017 BIBREF0 poses the problem of finding the emotion intensity of four emotions, namely anger, fear, joy, and sadness, from tweets. In this paper, we describe our approach and experiments to solve this problem. The rest of the paper is laid out as follows: Section 2 describes the system architecture, Section 3 reports results and inference from different experiments, while Section 4 points to ways that the problem can be further explored.
|
Do they give generalizations of patterns?
|
Yes.
|
null | false
| 94
|
Using our cleaned data set, we then built a recurrent neural network (RNN) with long short-term memory gates (LSTM). Our RNN/LSTM is trained to predict, given a description, whether a home corresponds to a high/medium/low popularity listing. The architecture of the RNN/LSTM employs TensorFlow's Dynamic RNN package. Each sentence input is first fed into an embedding layer, where the input's text is converted to a GloVe vector. These GloVe vectors are learned via a global word-word co-occurrence matrix using our corpus of Airbnb listing descriptions [8]. At each time step, the GloVe vectors are then fed into an LSTM layer. For each layer, the model forward-propagates the output of the LSTM layer to the next time-step's LSTM layer via a rectified linear unit (ReLU) activation function. Each layer also pipes the output of the LSTM through a cross-entropy operation to predict, for each time step, the category of the input sequence. We finally ensemble these predictions to create the model's complete output prediction.
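A minimal sketch of this kind of classifier in modern TensorFlow/Keras is shown below; the vocabulary size, embedding dimensionality, layer width, and the use of a single end-of-sequence prediction rather than per-time-step ensembling are illustrative assumptions, not the authors' exact configuration:

```python
import numpy as np
import tensorflow as tf

VOCAB_SIZE = 20000  # assumed vocabulary size
EMBED_DIM = 100     # assumed GloVe dimensionality

# Placeholder for GloVe vectors trained on the listing corpus; in practice
# this matrix would come from the word-word co-occurrence model.
glove_matrix = np.random.normal(size=(VOCAB_SIZE, EMBED_DIM)).astype("float32")

model = tf.keras.Sequential([
    # Embedding layer: token ids -> GloVe vectors
    tf.keras.layers.Embedding(
        VOCAB_SIZE, EMBED_DIM,
        embeddings_initializer=tf.keras.initializers.Constant(glove_matrix)),
    # LSTM layer over the description's token sequence
    tf.keras.layers.LSTM(128),
    # Three output classes: high / medium / low popularity
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```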
|
What neural network did they build?
|
A recurrent neural network (RNN) with long short-term memory gates (LSTM).
|
null | false
| null |
Geography
Occitania is the second-largest region in mainland France, covering an area of 72,724 km2 (28,079 sq mi); it has a population of 5,845,102 (2017). It has a Mediterranean coast on the southeast and is neighboured by Provence-Alpes-Côte d'Azur on the east, Auvergne-Rhône-Alpes on the northeast and Nouvelle-Aquitaine on the west and northwest, as well as foreign borders of Andorra (Canillo, Encamp, La Massana, Ordino) and Spain (Aragon and Catalonia) on the south.
Departments
| Code | Department | Prefecture | Named after | Population (2017) |
|---|---|---|---|---|
| 09 | Ariège | Foix | Ariège River | 153,153 |
| 11 | Aude | Carcassonne | Aude River | 370,260 |
| 12 | Aveyron | Rodez | Aveyron River | 279,206 |
| 30 | Gard | Nîmes | Gardon River | 744,178 |
| 31 | Haute-Garonne | Toulouse | Garonne River | 1,362,672 |
| 32 | Gers | Auch | Gers River | 191,091 |
| 34 | Hérault | Montpellier | Hérault River | 1,144,892 |
| 46 | Lot | Cahors | Lot River | 173,828 |
| 48 | Lozère | Mende | Mont Lozère | 76,601 |
| 65 | Hautes-Pyrénées | Tarbes | Pyrenees | 228,530 |
| 66 | Pyrénées-Orientales | Perpignan | Pyrenees | 474,452 |
| 81 | Tarn | Albi | Tarn River | 387,890 |
| 82 | Tarn-et-Garonne | Montauban | Tarn River and Garonne River | 258,349 |
Major communities
Toulouse, Occitania's prefecture and largest city
The Pont du Gard from the Roman Era, one of Occitanie's main landmarks and tourist attractions
The largest communes of Occitania are (population as of 2017):
Toulouse (479,553)
Montpellier (285,121)
Nîmes (150,610)
Perpignan (120,158)
Béziers (77,177)
Montauban (60,810)
Narbonne (54,700)
Albi (50,759)
Carcassonne (47,365)
|
What are the 13 departments and main communities of France's administrative region of Occitania?
|
The 13 Departments of Occitania are:
- Ariège
- Aude
- Aveyron
- Gard
- Haute-Garonne
- Gers
- Hérault
- Lot
- Lozère
- Hautes-Pyrénées
- Pyrénées-Orientales
- Tarn
- Tarn-et-Garonne
The main communities located within Occitania are:
- Toulouse (population: 479,553)
- Montpellier (population: 285,121)
- Nîmes (population: 150,610)
- Perpignan (population: 120,158)
- Béziers (population: 77,177)
- Montauban (population: 60,810)
- Narbonne (population: 54,700)
- Albi (population: 50,759)
- Carcassonne (population: 47,365)
|
null | false
| null |
Aaron Fenster is a medical physicist at the University of Western Ontario Robarts Research Institute in London, Ontario, Canada. He was named a Fellow of the Institute of Electrical and Electronics Engineers (IEEE) in 2013 for his contributions to medical imaging and three-dimensional ultrasound-guided interventions. He is also a fellow of the Canadian Academy of Health Sciences and co-program director of the Ontario Institute for Cancer Research Imaging Program. He holds a Ph.D. from the University of Toronto and received further training at the Ontario Cancer Institute.
|
Give me a bulleted list of Aaron Fenster's accomplishments.
|
- Medical physicist at the University of Western Ontario Robarts Research Institute.
- Fellow of the Institute of Electrical and Electronics Engineers (IEEE).
- Fellow of the Canadian Academy of Health Sciences.
- Co-program director of the Ontario Institute for Cancer Research Imaging Program.
- Completed a Ph.D. from the University of Toronto.
|
null | false
| null |
Classify the following as either red wines or white wines: Cabernet Sauvignon, Pinot Blanc, Riesling, Chardonnay, Pinot Grigio, Sauvignon Blanc, Merlot, Syrah, Shiraz
|
Red wines: Cabernet Sauvignon, Merlot, Syrah, Shiraz
White wines: Pinot Blanc, Riesling, Chardonnay, Pinot Grigio, Sauvignon Blanc
|
|
null | false
| null |
What is the Philly Special and when was it used ?
|
The Philly Special is a trick play in American football that was used successfully by the Philadelphia Eagles against the New England Patriots in Super Bowl LII. Instead of snapping the ball directly to the quarterback, the ball was snapped to the running back, who lateraled it to the tight end, who passed it to a wide-open quarterback for a touchdown.
|
|
null | false
| null |
What is a noun?
|
A noun is a word that is used to identify a person, place, or thing.
|
|
null | false
| null |
Write a brief passage arguing for wearing fanny packs when walking around town.
|
A fanny pack, otherwise known as a belt bag or waist pack, is a functional and fashionable item to wear while going around town. It can hold any items essential to make your day comfortable, such as keys, a wallet, sunglasses, snacks, lip balm, or others. It's especially useful for when your clothes either don't have pockets, or don't have pockets large enough to fit everything you'd like to bring. In the past, fanny packs were considered ugly and bulky, but in recent times, fanny packs have become quite fashionable; luxury brands have started selling fanny packs and fashion magazine outfits often incorporate a fanny pack slung around a shoulder or around the waist. Overall, fanny packs are a practical item that is sure to improve your experience when taking a walk.
|
|
1911.03270
| true
| null |
For this reason, as a baseline algorithm for the English dataset we refer to results from BIBREF0, and for the Russian dataset we used the probabilistic language model described in BIBREF8. The probability of a sequence of words is the product of the probabilities of each word given the word's context, i.e., the preceding word: $P(w_1, \dots, w_n) = \prod_{i=1}^{n} P(w_i \mid w_{i-1})$.
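A minimal sketch of such a bigram model with maximum-likelihood counts is given below; the add-alpha smoothing and sentence padding are illustrative assumptions, not necessarily the choices made in BIBREF8:

```python
from collections import Counter

def train_bigram_lm(sentences):
    """Count unigrams and bigrams over pre-tokenized sentences."""
    unigrams, bigrams = Counter(), Counter()
    for tokens in sentences:
        padded = ["<s>"] + tokens
        unigrams.update(padded)
        bigrams.update(zip(padded, padded[1:]))
    return unigrams, bigrams

def sequence_probability(tokens, unigrams, bigrams, alpha=1.0):
    """P(w1..wn) = prod_i P(w_i | w_{i-1}), with add-alpha smoothing."""
    vocab_size = len(unigrams)
    prob, prev = 1.0, "<s>"
    for word in tokens:
        prob *= (bigrams[(prev, word)] + alpha) / (unigrams[prev] + alpha * vocab_size)
        prev = word
    return prob

unigrams, bigrams = train_bigram_lm([["the", "cat", "sat"], ["the", "dog", "sat"]])
print(sequence_probability(["the", "cat", "sat"], unigrams, bigrams))
```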
|
Is the RNN model evaluated against any baseline?
|
Yes.
|
1911.03842
| false
| null |
There are a larger number of male-gendered character personas than female-gendered character personas (see Section SECREF2), so we balance existing personas using gender-swapping. For every gendered character in the dataset, we ask annotators to create a new character with a persona of the opposite gender that is otherwise identical except for referring nouns or pronouns. Additionally, we ask annotators to swap the gender of any characters that are referred to in the persona text for a given character.
|
In the targeted data collection approach, what type of data is targetted?
|
Gendered characters in the dataset
|
null | false
| null |
What do the Olympic rings mean?
|
The Olympic rings represent the five inhabited continents of the world that participate in the Olympic Games: Africa, America, Asia, Europe and Oceania. The colors of the five Olympic rings (blue, yellow, black, green and red) are the colors that are part of almost every national flag.
|
|
null | false
| null |
Classify each of the following as either caloric or low-fat products: butter, turnips, oils, salad dressings, spinach, sugar, water, nuts, seeds, dry bread, zucchini, strawberries, dry cereal, crackers, tomatoes, egg yolk, avocado, dried fruit, ananas, red meat, pizza, salmon, ketchup, watermelon
|
Caloric: butter, oils, salad dressings, sugar, nuts, seeds, dry bread, dry cereal, crackers, egg yolk, avocado, dried fruit, red meat, pizza, salmon, ketchup
Low-fat: turnips, spinach, water, zucchini, strawberries, tomatoes, ananas, watermelon
|
|
null | false
| 91
|
Neural machine translation (NMT) has achieved state-of-the-art results on a wide range of language pairs with varying structural differences, such as English-French BIBREF0, BIBREF1 and Chinese-English BIBREF2. However, so far not much is known about how and why NMT works, which poses great challenges for debugging NMT models and designing optimal architectures.
The understanding of NMT models has been approached primarily from two complementary perspectives. The first thread of work aims to understand the importance of representations by analyzing the linguistic information embedded in representation vectors BIBREF3, BIBREF4 or hidden units BIBREF5, BIBREF6. Another direction focuses on understanding the importance of input words by interpreting the input-output behavior of NMT models. Previous work BIBREF7 treats NMT models as black-boxes and provides explanations that closely resemble the attention scores in NMT models. However, recent studies reveal that attention does not provide meaningful explanations since the relationship between attention scores and model output is unclear BIBREF8.
In this paper, we focus on the second thread and try to open the black box by exploiting the gradients in NMT generation, with the aim of estimating word importance better. Specifically, we employ the integrated gradients method BIBREF9 to attribute the output to the input words via the integration of first-order derivatives. We justify the gradient-based approach via quantitative comparison with black-box methods on a couple of perturbation operations, several language pairs, and two representative model architectures, demonstrating its superiority in estimating word importance.
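A minimal sketch of integrated gradients over input word embeddings is shown below; the all-zero baseline, the Riemann-sum approximation with 50 steps, and the reduction of the model output to a scalar are illustrative assumptions rather than the authors' exact setup:

```python
import tensorflow as tf

def integrated_gradients(model, embeddings, steps=50):
    """Attribute a model's output to input word embeddings.

    embeddings: (seq_len, embed_dim) tensor for one input sentence.
    Returns one importance score per input word.
    """
    baseline = tf.zeros_like(embeddings)  # assumed all-zero baseline
    # Interpolate between the baseline and the input in `steps` increments.
    alphas = tf.reshape(tf.linspace(0.0, 1.0, steps), (steps, 1, 1))
    interpolated = baseline + alphas * (embeddings - baseline)
    with tf.GradientTape() as tape:
        tape.watch(interpolated)
        # Reduce the model output to a scalar so we can take gradients.
        target = tf.reduce_sum(model(interpolated))
    grads = tape.gradient(target, interpolated)
    # Average the first-order derivatives along the interpolation path.
    avg_grads = tf.reduce_mean(grads, axis=0)
    attributions = (embeddings - baseline) * avg_grads
    # Word importance: sum attributions over the embedding dimension.
    return tf.reduce_sum(attributions, axis=-1)
```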
We analyze the linguistic behaviors of words using the importance scores and show their potential to improve NMT models. First, we leverage the word importance to identify input words that are under-translated by NMT models. Experimental results show that the gradient-based approach outperforms both the best black-box method and other comparative methods. Second, we analyze the linguistic roles of identified important words, and find that words of certain syntactic categories have higher importance, while the categories vary across languages. For example, nouns are more important for Chinese⇒English translation, while prepositions are more important for English⇒French and English⇒Japanese translation. This finding can inspire better design principles of NMT architectures for different language pairs. For instance, a better architecture for a given language pair should consider its own language characteristics.
|
What is the research background of this paper?
|
Neural machine translation (NMT) has achieved state-of-the-art results on a wide range of language pairs with varying structural differences, such as English-French and Chinese-English.
|
null | false
| null |
Give me five books that I should read today.
|
There are many books that you could read today. If I could recommend five, they would be: The Great Gatsby by F. Scott Fitzgerald, Let the Great World Spin by Colum McCann, 1984 by George Orwell, Pride and Prejudice by Jane Austen, and The Diary of Anne Frank by Anne Frank.
|
|
null | false
| null |
Castle Island is a peninsula in South Boston on the shore of Boston Harbor. In 1928, Castle Island was connected to the mainland by a narrow strip of land and is thus no longer an island. It has been the site of a fortification since 1634, and is currently a 22-acre (8.9 ha) recreation site and the location of Fort Independence.
|
Is Castle Island really an Island?
|
No. Castle Island has been a peninsula since 1928, when it was connected to the Massachusetts mainland by a narrow strip of land. It is a recreation area and the site of Fort Independence.
|
null | false
| null |
Is it possible to prevent Type 2 Diabetes?
|
Type 2 diabetes is largely preventable by maintaining a normal weight, exercising regularly, and eating a healthy diet (high in fruits and vegetables and low in sugar and saturated fats). Treatment involves exercise and dietary changes. If blood sugar levels are not adequately lowered, the medication metformin is typically recommended. Many people may eventually also require insulin injections. In those on insulin, routinely checking blood sugar levels is advised; however, this may not be needed in those who are not on insulin therapy. Bariatric surgery often improves diabetes in those who are obese.
|
|
null | false
| 104
|
Since the CoNLL annotations have 21 semantic roles in total, we use 21 roles in our model as well as the baseline. Following garg2012unsupervised, we set the number of PRs to 2 (excluding INLINEFORM0 , INLINEFORM1 and INLINEFORM2 ), and SRs to 21-2=19. Table TABREF27 shows the results.
In the first setting (Line 1), we train and test the monolingual model on the CoNLL data. We observe significant improvements in F1 score over the Baseline (Line 0) in both languages. Using the CoNLL 2009 dataset alone, titovcrosslingual report an F1 score of 80.9% (PU=86.8%, CO=75.7%) for German. Thus, our monolingual model outperforms their monolingual model in German. For English, they report an F1 score of 83.6% (PU=87.5%, CO=80.1%), but note that our English results are not directly comparable to theirs due to differences in argument identification, as discussed in section SECREF25. As their argument identification score is lower, perhaps their system is discarding “difficult” arguments, which leads to a higher clustering score.
In the second setting (Line 2), we use the additional monolingual Europarl (EP) data for training. We get equivalent results in English and a significant improvement in German compared to our previous setting (Line 1). The German dataset in CoNLL is quite small and benefits from the additional EP training data. In contrast, the English model is already quite good due to a relatively big dataset from CoNLL and accurate syntactic parsers. Unfortunately, titovcrosslingual do not report results with this setting.
The third setting (Line 3) gives the results of our multilingual model, which adds the word alignments in the EP data. Comparing with Line 2, we get non-significant improvements in both languages. titovcrosslingual obtain an F1 score of 82.7% (PU=85.0%, CO=80.6%) for German, and 83.7% (PU=86.8%, CO=80.7%) for English. Thus, for German, our multilingual Bayesian model is able to capture the cross-lingual patterns at least as well as the external penalty term in BIBREF6 . We cannot compare the English results unfortunately due to differences in argument identification.
We also compared monolingual and bilingual training data using a setting that emulates the standard supervised setup of separate training and test data sets. We train only on the EP dataset and test on the CoNLL dataset. Lines 4 and 5 of Table TABREF27 give the results. The multilingual model obtains small improvements in both languages, which confirms the results from the standard unsupervised setup, comparing lines 2 to 3.
These results indicate that little information can be learned about semantic roles from this parallel data setup. One possible explanation for this result is that the setup itself is inadequate. Given the definition of aligned arguments, only 8% of English arguments and 17% of German arguments are aligned. This plus our experiments suggest that improving the alignment model is a necessary step to making effective use of parallel data in multilingual SRI, for example by joint modeling with SRI. We leave this exploration to future work.
|
Compared to Line 1, does Line 2 get better results in English and a significant improvement in German?
|
No.
|
null | false
| null |
How much do clouds weigh?
|
The weight of a cloud depends on how you define it. It will include the weight of water droplets in the cloud itself. You may also define weight to include the weight of the air, mostly pressing down on the cloud. If only defined by water droplets, clouds have a density of about 0.5 grams of water per cubic meter. A 1 cubic kilometer cloud contains 1 billion cubic meters. That would be 500 million grams or 500 thousand kilograms. That is equivalent to 1.1 million pounds or 551 tons of water droplets.
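Making the unit conversions explicit with the stated figures:

$0.5~\mathrm{g/m^3} \times 10^9~\mathrm{m^3} = 5 \times 10^8~\mathrm{g} = 5 \times 10^5~\mathrm{kg} \approx 1.1 \times 10^6~\mathrm{lb} \approx 551~\text{short tons}$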
|
|
null | false
| 383
|
Parkinson's disease (PD) is a neurodegenerative disorder characterized by the progressive loss of dopaminergic neurons in the mid-brain producing several motor and non-motor impairments in the patients BIBREF0. Motor symptoms include among others, bradykinesia, rigidity, resting tremor, micrographia, and different speech impairments. The speech impairments observed in PD patients are typically grouped as hypokinetic dysarthria, and include symptoms such as vocal folds rigidity, bradykinesia, and reduced control of muscles and limbs involved in the speech production. The effects of dysarthria in the speech of PD patients include increased acoustic noise, reduced intensity, harsh and breathy voice quality, increased voice nasality, monopitch, monoludness, speech rate disturbances, imprecise articulation of consonants BIBREF1, and involuntary introduction of pauses BIBREF2. Clinical observations in the speech of patients can be objectively and automatically measured by using computer aided methods supported in signal processing and pattern recognition with the aim to address two main aspects: (1) to support the diagnosis of the disease by classifying healthy control (HC) subjects and patients, and (2) to predict the level of degradation of the speech of the patients according to a specific clinical scale.
Most of the studies in the literature to classify PD from speech are based on computing hand-crafted features and using classifiers such as support vector machines (SVMs) or K-nearest neighbors (KNN). For instance, in BIBREF3, the authors computed features related to perturbations of the fundamental frequency and amplitude of the speech signal to classify utterances from 20 PD patients and 20 HC subjects, Turkish speakers. Classifiers based on KNN and SVMs were considered, and accuracies of up to 75% were reported. Later, in BIBREF4 the authors proposed a phonation analysis based on several time frequency representations to assess tremor in the speech of PD patients. The extracted features were based on energy and entropy computed from time frequency representations. Several classifiers were used, including Gaussian mixture models (GMMs) and SVMs. Accuracies of up to 77% were reported in utterances of the PC-GITA database BIBREF5, formed with utterances from 50 PD patients and 50 HC subjects, Colombian Spanish native speakers. The authors from BIBREF6 computed features to model different articulation deficits in PD such as vowel quality, coordination of laryngeal and supra-laryngeal activity, precision of consonant articulation, tongue movement, occlusion weakening, and speech timing. The authors studied the rapid repetition of the syllables /pa-ta-ka/ pronounced by 24 Czech native speakers, and reported an accuracy of 88% discriminating between PD patients and HC speakers, using an SVM classifier. Additional articulation features were proposed in BIBREF7, where the authors modeled the difficulty of PD patients to start/stop the vocal fold vibration in continuous speech. The model was based on the energy content in the transitions between unvoiced and voiced segments. The authors classified PD patients and HC speakers with speech recordings in three different languages (Spanish, German, and Czech), and reported accuracies ranging from 80% to 94% depending on the language; however, the results were optimistic, since the hyper-parameters of the classifier were optimized based on the accuracy on the test set. Another articulation model was proposed in BIBREF8. The authors considered a forced alignment strategy to segment the different phonetic units in the speech utterances. The phonemes were segmented and grouped to train different GMMs. The classification was performed based on a threshold of the difference between the posterior probabilities from the models created for HC subjects and PD patients. The model was tested with Colombian Spanish utterances from the PC-GITA database BIBREF5 and with the Czech data from BIBREF9. The authors reported accuracies of up to 81% for the Spanish data, and of up to 94% for the Czech data.
In addition to the hand-crafted feature extraction models, there is a growing interest in the research community to consider deep learning models in the assessment of the speech of PD patients BIBREF10, BIBREF11, BIBREF12. Deep learning methods have the potential to extract more abstract and robust features than those manually computed. These features could help to improve the accuracy of different models to classify pathological speech, such as PD BIBREF13. A deep learning based articulation model was proposed in BIBREF11 to model the difficulties of the patients to stop/start the vibration of the vocal folds. Transitions between voiced and unvoiced segments were modeled with time-frequency representations and convolutional neural networks (CNNs). The authors considered speech recordings of PD patients and HC speakers in three languages: Spanish, German, and Czech, and reported accuracies ranging from 70% to 89%, depending on the language. However, in a language independent scenario, i.e., training the CNN with utterances from one language and testing with the remaining two, the results were not satisfactory (accuracy < 60%).
The classification of PD from speech in different languages has to be carefully conducted to avoid bias towards the linguistic content present in each language. For instance, the Czech and German languages are richer than Spanish in terms of consonant production, which may make it easier for Czech PD patients to produce consonant sounds than for Spanish PD patients. Despite these language-dependent issues, the results in the classification of PD in different languages could be improved using a transfer learning strategy among languages, i.e., training a base model with utterances from one language and then fine-tuning the weights with utterances from the target language BIBREF14. Similar approaches based on transfer learning have recently been considered to classify PD using handwriting BIBREF15. In the present study, we propose a methodology to classify PD via a transfer learning strategy with the aim of improving accuracy across different languages. CNNs trained with utterances from one language are used to initialize a model to classify speech utterances from PD patients in a different language. The models are evaluated with speech utterances in the Spanish, German, and Czech languages. The results suggest that the use of a transfer learning strategy improved the accuracy of the models by over 8% with respect to models trained only with utterances from the target language.
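A minimal sketch of this fine-tuning recipe in Keras follows; the CNN architecture, input shape, and learning rates are illustrative assumptions, not the authors' exact model:

```python
import tensorflow as tf

def build_cnn(input_shape=(128, 128, 1), num_classes=2):
    """Small CNN over time-frequency representations (assumed shape)."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])

# 1) Train a base model on the source language (e.g., Spanish).
base = build_cnn()
base.compile("adam", "sparse_categorical_crossentropy", metrics=["accuracy"])
# base.fit(x_spanish, y_spanish, epochs=...)

# 2) Initialize a target-language model with the base weights, then
#    fine-tune on the target language (e.g., German) with a smaller LR.
target = build_cnn()
target.set_weights(base.get_weights())
target.compile(tf.keras.optimizers.Adam(1e-4),
               "sparse_categorical_crossentropy", metrics=["accuracy"])
# target.fit(x_german, y_german, epochs=...)
```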
|
Based on what the authors predict the level of degradation of the speech of the patients?
|
A specific clinical scale.
|
null | false
| null |
Parkrun (stylised as parkrun) is a collection of 5-kilometre (3.1 mi) events for walkers, runners and volunteers that take place every Saturday morning at more than 2,000 locations in 22 countries across six continents. Junior Parkrun (stylised as junior parkrun) is a spin-off event that provides a 2-kilometre (1¼ mi) event for children aged 4–14 on a Sunday morning. Parkrun events are free to enter and are delivered by volunteers, supported by a small group of staff at its headquarters.
|
What is parkrun
|
Parkrun is a community event that takes place each Saturday morning where entrants walk, jog, or run a 5-kilometre course. Events take place at a range of general locations including parks, forests, rivers, lakes, beaches, and nature reserves. The first ever parkrun event took place on 2 October 2004 in Bushy Park, London involving 13 runners. Today more than 3 million people across more than 2,000 locations in 22 countries participate in parkrun each week. Parkrun events are free to enter and are delivered by volunteers.
|
2002.03438
| false
| null |
Suppose we are given a specific language model such as GPT-2 BIBREF6, GROVER BIBREF8, or CTRL BIBREF7, and it is characterized in terms of estimates of either cross-entropy $H(P,Q)$ or perplexity $\mathrm {PPL}(P,Q)$.
We can see directly that the Neyman-Pearson error of detection in the case of i.i.d. tokens is:
$\beta_n \doteq 2^{-n D(P \Vert Q)} = 2^{-n \left( H(P,Q) - H(P) \right)}$
and similar results hold for ergodic observations.
Since we think of $H(P)$ as a constant, we observe that the error exponent for the decision problem is precisely an affine shift of the cross-entropy. Outputs from models that are better in the sense of cross-entropy or perplexity are harder to distinguish from authentic text.
Thus we see that intuitive measures of generative text quality match a formal operational measure of indistinguishability that comes from the hypothesis testing limit.
|
Which language models generate text that can be easier to classify as genuine or generated?
|
Outputs from models that are better in the sense of cross-entropy or perplexity are harder to distinguish from authentic text; conversely, models with higher cross-entropy or perplexity generate text that is easier to classify as generated.
|
null | false
| null |
Determine which continent each of the following cities is located on: London, Singapore, New York, Sao Paulo, Buenos Aires, Melbourne, Jakarta, Boston, Paris, Riyadh, Lagos, Casablanca, Toronto, and Berlin.
|
New York, Boston, and Toronto are based in North America. London, Paris, and Berlin are in Europe. Jakarta, Riyadh, and Singapore are in Asia. Lagos and Casablanca are in Africa. Buenos Aires and Sao Paulo are in South America. Lastly, Melbourne is in Australia.
|
|
2002.02427
| false
| null |
Feature-based models. We used state-of-the-art features that have been shown to be useful in ID: some of them are language-independent (e.g., punctuation marks, positive and negative emoticons, quotations, personal pronouns, tweet length, named entities) while others are language-dependent, relying on dedicated lexicons (e.g., negation, opinion lexicons, opposition words). Several classical machine learning classifiers were tested with several feature combinations; among them, Random Forest (RF) achieved the best result with all features. Neural model with monolingual embeddings. We used a Convolutional Neural Network (CNN) whose structure is similar to the one proposed by BIBREF29. For the embeddings, we relied on AraVec BIBREF30 for Arabic, FastText BIBREF31 for French, and Word2vec Google News BIBREF32 for English. For the three languages, the size of the embeddings is 300 and the embeddings were fine-tuned during the training process. The CNN network was tuned with 20% of the training corpus using the Hyperopt library.
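A minimal sketch of loading such pretrained monolingual vectors into a fine-tunable embedding layer; the file name, toy vocabulary, and use of gensim are illustrative assumptions:

```python
import numpy as np
import tensorflow as tf
from gensim.models import KeyedVectors

EMBED_DIM = 300
vocab = {"<pad>": 0, "good": 1, "movie": 2}  # assumed token -> id map

# Load pretrained vectors (e.g., Word2vec Google News for English).
kv = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True)

# Build the embedding matrix; out-of-vocabulary words stay random.
matrix = np.random.normal(scale=0.1, size=(len(vocab), EMBED_DIM))
for word, idx in vocab.items():
    if word in kv:
        matrix[idx] = kv[word]

embedding = tf.keras.layers.Embedding(
    len(vocab), EMBED_DIM,
    embeddings_initializer=tf.keras.initializers.Constant(matrix),
    trainable=True)  # fine-tuned during training, as described above
```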
|
What monolingual word representations are used?
|
AraVec for Arabic, FastText for French, and Word2vec Google News for English.
|
null | false
| null |
What is the difference between a piano and a harpsichord?
|
A harpsichord is a keyboard instrument, but its strings are plucked rather than struck by hammers as in a piano. The harpsichord was developed and built before the piano was invented, and it has a different sound than a piano. Another difference is that it is not possible to play at different volume levels on a harpsichord. Playing dynamically was the main driver for developing the piano, which was therefore called the pianoforte, meaning "soft-loud".
|
|
null | false
| 181
|
A clinician can look at a patient's electronic health record (EHR) and not only decide whether the patient has diabetes but also produce a succinct summary of the clinical evidence. Replicating this feat with computational tools has been the focus of much research in clinical informatics. There are major initiatives underway to codify clinical knowledge into formal representations, most often as deterministic rules that can be applied in a semi-automated fashion BIBREF0 . However, representing the intuitive judgments of human experts can be challenging, particularly when the formal system does not match the expert's knowledge. For example, many deterministic disease classifiers used in clinical informatics rely heavily upon administrative codes not available at time of diagnosis. Further, developing and testing such systems is time- and labor-intensive.
We propose instead a lightweight information theoretic framework for codifying informal human knowledge and then use it to extract interpretable latent topics from text corpora. For example, to discover patients with diabetes in a set of clinical notes, a doctor can begin by specifying disease-specific anchor terms BIBREF1 , BIBREF2 , such as “diabetes” or “insulin.” Our framework then uses these to help discover both latent topics associated with diabetes and records in which diabetes-related topics occur. The user can then add (or remove) additional anchor terms (e.g., “metformin”) to improve the quality of the learned (diabetes) topics.
In this workshop paper, we introduce a simple approach to anchored information theoretic topic modeling using a novel combination of Correlation Explanation (CorEx) BIBREF3 and the information bottleneck BIBREF4 . This flexible framework enables the user to leverage domain knowledge to guide exploration of a collection of documents and to impose semantics onto latent factors learned by CorEx. We present preliminary experimental results on two text corpora (including a corpus of clinical notes), showing that anchors can be used to discover topics that are more specific and relevant. What is more, we demonstrate the potential for this framework to perform weakly supervised learning in settings where labeling documents is prohibitively expensive BIBREF5 , BIBREF6 .
With respect to interpretable machine learning, our contributions are twofold. First, our framework provides a way for human users to share domain knowledge with a statistical learning algorithm that is both convenient for the human user and easily digestible by the machine. Second, our experimental results confirm that the introduction of simple anchor words can improve the coherence and human interpretability of topics discovered from data. Both are essential to successful and interactive collaboration between machine learning and human users.
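A minimal sketch of anchored topic modeling with the open-source corextopic package; the random document-term matrix, number of topics, and anchor strength are illustrative assumptions:

```python
import numpy as np
import scipy.sparse as ss
from corextopic import corextopic as ct

# Assumed inputs: a binary document-term matrix X and its vocabulary.
X = ss.csr_matrix(np.random.binomial(1, 0.05, size=(100, 500)))
words = ["w%d" % i for i in range(500)]
words[0], words[1], words[2] = "diabetes", "insulin", "metformin"

# Anchor the first latent topic to the diabetes-related terms.
model = ct.Corex(n_hidden=20, seed=42)
model.fit(X, words=words,
          anchors=[["diabetes", "insulin", "metformin"]],
          anchor_strength=3)

print(model.get_topics()[0])  # inspect the anchored (diabetes) topic
```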
|
What kind of novel approach do they introduce?
|
An approach to anchored information-theoretic topic modeling that uses a novel combination of Correlation Explanation (CorEx) and the information bottleneck.
|
null | false
| null |
Name five NFL teams in the NFC.
|
Five NFL teams in the NFC are the Carolina Panthers, New Orleans Saints, Philadelphia Eagles, New York Giants, and Detroit Lions.
|
|
null | false
| null |
How does increasing interest rates impact inflation?
|
Increased interest rates raise the cost of borrowing money, which tends to reduce spending and, ultimately, inflation.
|
|
null | false
| null |
Extract all the geographical places from the following passage: On Thursday we flew to Sandakan in Sabah. The Qatar flight had several newspapers available for passengers to read. On the front page of all of them were stories about Saddam Hussein and the invasion of Iraq. When we arrived in Borneo, we saw a two-page spread in a newspaper that dove into the history of Kurdistan. The very next article celebrated the liberation of Baghdad, the city juxtaposed against a backdrop of New York City in the United States.
|
Geographical Places: Sandakan, Sabah, Iraq, Borneo, Kurdistan, Baghdad, New York City, the United States.
|
|
null | false
| null |
Make me a bullet list of things to bring camping
|
- Tent
- Camping stove
- Water bottles
- Camping dishes
- Hiking boots
- Backpack
- Sleeping bag
|
|
null | false
| null |
Where did basketball originate?
|
Basketball was invented in the winter of 1891-92 in Springfield, Massachusetts at Springfield College (then known as the International YMCA Training School). James Naismith invented basketball as a game that could be played indoors during the colder months to keep the young men in the area fit and entertained.
|
|
null | false
| null |
Multiomics, multi-omics, integrative omics, "panomics" or "pan-omics" is a biological analysis approach in which the data sets are multiple "omes", such as the genome, proteome, transcriptome, epigenome, metabolome, and microbiome (i.e., a meta-genome and/or meta-transcriptome, depending upon how it is sequenced); in other words, the use of multiple omics technologies to study life in a concerted way. By combining these "omes", scientists can analyze complex biological big data to find novel associations between biological entities, pinpoint relevant biomarkers and build elaborate markers of disease and physiology. In doing so, multiomics integrates diverse omics data to find a coherently matching geno-pheno-envirotype relationship or association. The OmicTools service lists more than 99 software tools related to multiomic data analysis, as well as more than 99 databases on the topic.
Systems biology approaches are often based upon the use of panomic analysis data. The American Society of Clinical Oncology (ASCO) defines panomics as referring to "the interaction of all biological functions within a cell and with other body functions, combining data collected by targeted tests ... and global assays (such as genome sequencing) with other patient-specific information."
Single-cell multiomics
A branch of the field of multiomics is the analysis of multilevel single-cell data, called single-cell multiomics. This approach gives us unprecedented resolution to look at multilevel transitions in health and disease at the single-cell level. An advantage over bulk analysis is that it mitigates confounding factors derived from cell-to-cell variation, allowing heterogeneous tissue architectures to be uncovered.
Methods for parallel single-cell genomic and transcriptomic analysis can be based on simultaneous amplification or physical separation of RNA and genomic DNA. They allow insights that cannot be gathered solely from transcriptomic analysis, as RNA data do not contain non-coding genomic regions and information regarding copy-number variation, for example. An extension of this methodology is the integration of single-cell transcriptomes to single-cell methylomes, combining single-cell bisulfite sequencing to single cell RNA-Seq. Other techniques to query the epigenome, as single-cell ATAC-Seq and single-cell Hi-C also exist.
A different, but related, challenge is the integration of proteomic and transcriptomic data. One approach to perform such measurement is to physically separate single-cell lysates in two, processing half for RNA, and half for proteins. The protein content of lysates can be measured by proximity extension assays (PEA), for example, which use DNA-barcoded antibodies. A different approach uses a combination of heavy-metal RNA probes and protein antibodies to adapt mass cytometry for multiomic analysis.
Multiomics and machine learning
In parallel with advances in high-throughput biology, machine learning applications to biomedical data analysis are flourishing. The integration of multi-omics data analysis and machine learning has led to the discovery of new biomarkers. For example, the mixOmics project implements a method based on sparse Partial Least Squares regression for the selection of features (putative biomarkers).
https://en.wikipedia.org/wiki/Multiomics
|
Given these paragraphs about Multiomics, what is the goal in combining all the "omes"?
|
By combining these "omes", scientists can analyze complex biological big data to find novel associations between biological entities, pinpoint relevant biomarkers and build elaborate markers of disease and physiology. In doing so, multiomics integrates diverse omics data to find a coherently matching geno-pheno-envirotype relationship or association.
|
null | false
| null |
In Paul Simon's famous song, how many different ways are there to leave your lover?
|
Paul Simon tells us there are fifty ways to leave your lover.
|
|
null | false
| 145
|
The Finnish text data used for the language modeling task is provided by BIBREF13. The dataset consists mainly of newspapers and books, totalling around 144 million word tokens and 4.2 million unique tokens. We use Morfessor 2.0 BIBREF14 with the basic unsupervised Morfessor Baseline algorithm BIBREF15 and a corpus weight parameter ($\alpha$) of 0.001. We have a vocabulary of 34K subword tokens for the left+right-marked (+m+) markings and 19K subword tokens for the left-marked (+m) markings. We also pre-process the data to remove any punctuation marks so that we can use the same data with an ASR system. The input is one sentence per line and we shuffle the sentences at each epoch. The data is randomly divided into a training dataset and a validation dataset. The test dataset consists of 2850 Finnish news articles obtained from the Finnish national broadcaster YLE.
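A minimal sketch of this segmentation step with the Morfessor 2.0 Python API; the file path is a placeholder and the calls assume the package's documented batch-training interface:

```python
import morfessor

io = morfessor.MorfessorIO()
train_data = list(io.read_corpus_file("train.txt"))  # placeholder path

# Baseline algorithm with corpus weight alpha = 0.001.
model = morfessor.BaselineModel(corpusweight=0.001)
model.load_data(train_data)
model.train_batch()

# Segment a word into subword tokens.
segments, _cost = model.viterbi_segment("epäjärjestelmällisyys")
print(segments)
```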
|
How many news articles does the test dataset consist of?
|
2850 Finnish news articles.
|
null | false
| null |
Name five teams in the Ligue 1 in France.
|
Five teams in Ligue 1 are Lyon, Nice, Marseille, Lens, and PSG.
|
|
null | false
| null |
A hot dog (commonly spelled hotdog) is a food consisting of a grilled or steamed sausage served in the slit of a partially sliced bun. The term hot dog can refer to the sausage itself. The sausage used is a wiener (Vienna sausage) or a frankfurter (Frankfurter Würstchen, also just called frank). The names of these sausages commonly refer to their assembled dish. Some consider a hot dog to technically be a sandwich. Hot dog preparation and condiments vary worldwide. Typical condiments include mustard, ketchup, relish, onions in tomato sauce, and cheese sauce. Common garnishes include sauerkraut, diced onions, jalapeños, chili, grated cheese, coleslaw, bacon, and olives. Hot dog variants include the corn dog and pigs in a blanket. The hot dog's cultural traditions include the Nathan's Hot Dog Eating Contest and the Oscar Mayer Wienermobile.
These types of sausages were culturally imported from Germany and became popular in the United States. It became a working-class street food in the U.S., sold at stands and carts. The hot dog became closely associated with baseball and American culture. Although particularly connected with New York City and its cuisine, the hot dog eventually became ubiquitous throughout the US during the 20th century. Its preparation varies regionally in the country, emerging as an important part of other regional cuisines, including Chicago street cuisine.
|
What are the common toppings that can go on a hotdog?
|
Common toppings or garnishes include sauerkraut, diced onions, jalapeños, chili, grated cheese, coleslaw, bacon, and olives
|
null | false
| null |
Spoetzl Brewery (originally Shiner Brewing Association) is a brewery located in Shiner, Texas, United States. It produces a diverse line of Shiner beers, including their flagship Shiner Bock, a dark lager that is now distributed throughout the US. The brewery is owned by the Gambrinus Company, a family-owned company based in San Antonio, which also owns Trumer Brewery in Berkeley, California.
|
Who brews Shiner Bock beer?
|
Spoetzl Brewery (originally Shiner Brewing Association) is a brewery located in Shiner, Texas, United States. It produces a diverse line of Shiner beers, including their flagship Shiner Bock, a dark lager that is distributed throughout the US.
|
null | false
| 175
|
Electroencephalography (EEG) is a non-invasive way of measuring the electrical activity of the human brain. In BIBREF0 we demonstrated deep-learning-based automatic speech recognition (ASR) using EEG signals for a limited English vocabulary of four words and five vowels. In this paper we extend our work to a much larger English vocabulary and use state-of-the-art end-to-end continuous speech recognition models to perform recognition. In our prior work we predicted isolated words and vowels.
ASR systems form the front end or back end of many cutting-edge voice-activated technologies such as Amazon Alexa, Apple Siri, Windows Cortana, and Samsung Bixby. Unfortunately, these systems are trained to recognize text only from acoustic features. This limits technology accessibility for people with speaking disabilities and disorders. The research work presented in this paper tries to address this issue by investigating speech recognition using only EEG signals with no acoustic input, and also by combining EEG features with traditional acoustic features to perform recognition. We believe the former will help with speech restoration for people who cannot speak at all, and the latter will help people who have speaking disabilities, such as broken or discontinued speech, to use voice-activated technologies with a better user experience, thereby helping to improve technology accessibility.
ASR performance is degraded in the presence of noisy speech, and in real-life situations most speech is noisy. Inspired by the unique robustness to environmental artifacts exhibited by the human auditory cortex BIBREF1, BIBREF2, we used very noisy speech data for this work and demonstrated a lower word error rate (WER) for a smaller corpus using EEG features and the concatenation of EEG and acoustic features.
In BIBREF3 the authors decode imagined speech from EEG using synthetic EEG data and a connectionist temporal classification (CTC) network, but in our work we use real EEG data recorded along with acoustics. In BIBREF4 the authors perform envisioned speech recognition using a random forest classifier, but in our case we use end-to-end state-of-the-art models and perform recognition for noisy speech. In BIBREF5 the authors demonstrate speech recognition using electrocorticography (ECoG) signals, which are invasive in nature, but in our work we use non-invasive EEG signals.
This work is mainly motivated by the results explained in BIBREF0, BIBREF6, BIBREF7, BIBREF3. In BIBREF6 the authors used a classification approach for identifying phonological categories in imagined and silent speech, but in our work we used state-of-the-art continuous speech recognition models, and our models predict words and characters at each time step. Similarly, in BIBREF7 a neural-network-based classification approach was used for predicting phonemes.
The major contribution of this paper is the demonstration of end-to-end continuous noisy speech recognition using only EEG features; this paper further validates the concepts introduced in BIBREF0 for a much larger English corpus.
|
Will the authors' work help people who are having speaking disabilities like broken or discontinued speech?
|
Yes, it will.
|
null | false
| 80
|
We provided an analysis of existing evaluation metrics for scientific summarization with evaluation of all variants of Rouge. We showed that Rouge may not be the best metric for summarization evaluation; especially in summaries with high terminology variations and paraphrasing (e.g. scientific summaries). Furthermore, we showed that different variants of Rouge result in different correlation values with human judgments, indicating that not all Rouge scores are equally effective. Among all variants of Rouge, Rouge-2 and Rouge-3 are better correlated with manual judgments in the context of scientific summarization. We furthermore proposed an alternative and more effective approach for scientific summarization evaluation (Summarization Evaluation by Relevance Analysis - Sera). Results revealed that in general, the proposed evaluation metric achieves higher correlations with semi-manual pyramid evaluation scores in comparison with Rouge.
Our analysis on the effectiveness of evaluation measures for scientific summaries was performed using correlations with manual judgments. An alternative approach to follow would be to use statistical significance testing on the ability of the metrics to distinguish between the summarizers (similar to Rankel:2011). We studied the effectiveness of existing summarization evaluation metrics in the scientific text genre and proposed an alternative superior metric. Another extension of this work would be to evaluate automatic summarization evaluation in other genres of text (such as social media). Our proposed method only evaluates the content quality of the summary. Similar to most of existing summarization evaluation metrics, other qualities such as linguistic cohesion, coherence and readability are not captured by this method. Developing metrics that also incorporate these qualities is yet another future direction to follow.
|
What work will they do in the future?
|
To develop metrics that also incorporate qualities such as linguistic cohesion, coherence, and readability, beyond the content quality of the summary.
|
null | false
| null |
Can I have a lizard as a pet?
|
Lizards can make great pets if they are well-maintained. They look like dinosaurs, without the typical danger associated with a dinosaur. However, lizards are not social animals and will likely not form a great bond. Mammals typically form a better human to animal bond.
|
|
null | false
| null |
Give me a bullet list of five different European languages
|
- Spanish
- French
- German
- Dutch
- Italian
|
|
null | false
| null |
What age would a child become a tween?
|
Generally, a tween is between their child years and teenage years, between the ages of 10 and 13.
|
|
null | false
| null |
What is the best way to relax after work?
|
Ideally a person should do something after work that relaxes them. For example, take a hot shower, watch TV, cook, run, work out, or do anything that might interest you. It could also be beneficial to have a nice hot meal or hot tea to relax. Reading a book might also be a good option.
|
|
null | false
| null |
Who became king of Holland in 1806?
|
Louis Bonaparte, a younger brother of Napoleon, became King of Holland in 1806.
|
|
null | false
| null |
How is a grilled cheese sandwich prepared?
|
To prepare a grilled cheese sandwich, first select two pieces of standard white sandwich bread and lightly apply butter to the outside of each piece. Use room temperature butter as it is much easier to spread than cold butter straight from the refrigerator. Turn on an electric griddle and set the temperature to 350 degrees Fahrenheit. Place one slice of the bread butter side down on the griddle. Next place two slices of an easily meltable cheese on top of the bread. Finally place the remaining piece of bread on top of the cheese with the buttered side facing up. Periodically use a spatula to lift the sandwich and inspect the bottom surface. When the surface is toasted to a light golden brown, flip the sandwich. Periodically inspect the bottom surface. When it has reached a light golden brown, remove the sandwich from the griddle and place it on a plate.
|
|
null | false
| 17
|
Each document d is represented by two vectors, one in Rk1 and one in Rk2. The first vector exists in a “monolingual space” (of dimensionality k1) and is based on a bag-of-words representation of the document. The second vector exists in a “crosslingual space” (of dimensionality k2) which is common to all languages. More details about these representations are discussed in § SECREF4.
With our clustering algorithm, we maintain two types of centroid functions for each monolingual cluster. The first is a centroid function H : N × L → Rk1 ∪ {⊥} that assists in associating each document with a monolingual cluster. The second is a centroid function G: N → Rk2 ∪ {⊥} that assists in associating each monolingual cluster with a crosslingual cluster. The ⊥ symbol is reserved to denote documents which are not associated with any cluster yet.
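A minimal sketch of how these two centroid maps might be maintained, with Python dicts standing in for H and G and an incremental-mean update; the dimensionalities and the update rule are illustrative assumptions:

```python
import numpy as np

K1, K2 = 300, 100  # assumed monolingual / crosslingual dimensionalities

H = {}       # (cluster_id, language) -> centroid in R^k1 (absent = ⊥)
G = {}       # cluster_id -> centroid in R^k2 (absent = ⊥)
counts = {}  # running document counts for the incremental means

def add_document(cluster_id, language, mono_vec, cross_vec):
    """Fold one document into both centroid functions."""
    key = (cluster_id, language)
    n = counts.get(key, 0)
    H[key] = (H.get(key, np.zeros(K1)) * n + mono_vec) / (n + 1)
    counts[key] = n + 1

    m = counts.get(cluster_id, 0)
    G[cluster_id] = (G.get(cluster_id, np.zeros(K2)) * m + cross_vec) / (m + 1)
    counts[cluster_id] = m + 1

add_document(0, "en", np.random.rand(K1), np.random.rand(K2))
```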
|
What two functions do the authors maintain for monolingual clustering?
|
The first is the centroid function H, and the second is the centroid function G.
|
null | false
| null |
What are good native plants to grow in Ohio?
|
Echinacea purpurea, Pycnanthemum tenuifolium, and Schizachyrium scoparium are all native to Ohio in the United States. Each of these plants will grow well in full sun, and these plants can adapt to a good range of soil conditions common in Ohio.
|
|
null | false
| null |
The name 'Qin' is believed to be the etymological ancestor of the modern-day European name of the country, China. The word probably made its way into the Indo-Aryan languages first as 'Cina' or 'Sina' and then into Greek and Latin as 'Sinai' or 'Thinai'. It was then transliterated into English and French as 'China' and 'Chine'. This etymology is dismissed by some scholars, who suggest that 'Sina' in Sanskrit evolved much earlier before the Qin dynasty. 'Jin', a state controlled by the Zhou dynasty in seventh century BC, is another possible origin. Others argued for the state of Jing (荆, another name for Chu), as well as other polities in the early period as the source of the name.
|
Extract all the names related to 'Qin' in the paragraph
|
Qin, Cina, Sina, Sinai, Thinai, Jin, Jing, Chu are names related to Qin
|
null | false
| null |
When buying a used car, what are some red flags to look out for?
|
A car is a costly purchase and you should be especially careful when buying used. The first thing to look out for is any visual damage you can find by thoroughly inspecting the car. Ask for the full service history of the vehicle and look for any outliers. Use trusted third-party websites to check for accident, stolen-vehicle and insurance write-off history. It's also better to get the car inspected by a professional car mechanic for complete peace of mind.
|
|
null | false
| null |
classify the following people as males vs females:
john oliver
stephen colbert
vir das
oprah winfrey
michelle obama
donald trump
queen of england
halle berry
joseph stalin
|
males:
john oliver
stephen colbert
vir das
donald trump
joseph stalin
females:
oprah winfrey
michelle obama
queen of england
halle berry
|
|
null | false
| 360
|
Neural machine translation has recently achieved impressive results BIBREF0 , BIBREF1 , while learning from raw, sentence-aligned parallel text and using little in the way of external linguistic information. However, we hypothesize that various levels of linguistic annotation can be valuable for neural machine translation. Lemmatisation can reduce data sparseness, and allow inflectional variants of the same word to explicitly share a representation in the model. Other types of annotation, such as parts-of-speech (POS) or syntactic dependency labels, can help in disambiguation. In this paper we investigate whether linguistic information is beneficial to neural translation models, or whether their strong learning capability makes explicit linguistic features redundant.
Let us motivate the use of linguistic features using examples of actual translation errors by neural MT systems. In translation out of English, one problem is that the same surface word form may be shared between several word types, due to homonymy or word formation processes such as conversion. For instance, close can be a verb, adjective, or noun, and these different meanings often have distinct translations into other languages. Consider the following English→German example:
For the English source sentence in Example SECREF4 (our translation in Example SECREF5 ), a neural MT system (our baseline system from Section SECREF4 ) mistranslates close as a verb, and produces the German verb schließen (Example SECREF6 ), even though close is an adjective in this sentence, which has the German translation nah. Intuitively, part-of-speech annotation of the English input could disambiguate between verb, noun, and adjective meanings of close.
As a second example, consider the following German→English example:
German main clauses have a verb-second (V2) word order, whereas English word order is generally SVO. The German sentence (Example UID7 ; English reference in Example UID8 ) topicalizes the predicate gefährlich 'dangerous', putting the subject die Route 'the route' after the verb. Our baseline system (Example UID9 ) retains the original word order, which is highly unusual in English, especially for prose in the news domain. A syntactic annotation of the source sentence could support the attentional encoder-decoder in learning which words in the German source to attend (and translate) first.
We will investigate the usefulness of linguistic features for the language pair German↔English, considering the following linguistic features:
The inclusion of lemmas is motivated by the hope for a better generalization over inflectional variants of the same word form. The other linguistic features are motivated by disambiguation, as discussed in our introductory examples.
The neural machine translation system is implemented as an attentional encoder-decoder network with recurrent neural networks.
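To make the use of such features concrete, here is a hedged PyTorch sketch in which each source token is represented by the concatenation of word, lemma and POS embeddings before entering the encoder. The vocabulary sizes, embedding dimensions and variable names are assumptions for illustration, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class FeatureRichEmbedding(nn.Module):
    """Embed each source token as the concatenation of word, lemma and
    POS embeddings; the result feeds the recurrent encoder."""
    def __init__(self, n_words, n_lemmas, n_pos,
                 d_word=256, d_lemma=128, d_pos=32):
        super().__init__()
        self.word = nn.Embedding(n_words, d_word)
        self.lemma = nn.Embedding(n_lemmas, d_lemma)
        self.pos = nn.Embedding(n_pos, d_pos)

    def forward(self, words, lemmas, pos_tags):
        # Each input is a LongTensor of shape (batch, seq_len).
        return torch.cat([self.word(words),
                          self.lemma(lemmas),
                          self.pos(pos_tags)], dim=-1)

# The concatenated vectors would then feed a bidirectional recurrent
# encoder, e.g.:
# encoder = nn.GRU(256 + 128 + 32, 512, bidirectional=True, batch_first=True)
```

Concatenation (rather than summation) keeps each feature's contribution separate, so the POS embedding can, for example, disambiguate close as adjective versus verb without disturbing the word embedding itself.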
|
What is implemented as an attentional encoder-decoder network with recurrent neural networks?
|
The neural machine translation system is implemented as an attentional encoder-decoder network with recurrent neural networks.
|
null | false
| null |
Dried fruit is fruit from which the majority of the original water content has been removed either naturally, through sun drying, or through the use of specialized dryers or dehydrators. Dried fruit has a long tradition of use dating back to the fourth millennium BC in Mesopotamia, and is prized because of its sweet taste, nutritive value and long shelf life.
Today, dried fruit consumption is widespread. Nearly half of the dried fruits sold are raisins, followed by dates, prunes, figs, apricots, peaches, apples, and pears. These are referred to as "conventional" or "traditional" dried fruits: fruits that have been dried in the sun or in heated wind tunnel dryers. Many fruits such as cranberries, blueberries, cherries, strawberries, and mango are infused with a sweetener (e.g. sucrose syrup) prior to drying. Some products sold as dried fruit, like papaya, kiwifruit and pineapple, are most often candied fruit.
Dried fruits retain most of the nutritional value of fresh fruits. The specific nutrient content of the different dried fruits reflects their fresh counterpart and the processing method.
|
How much nutritional value do dried fruits retain?
|
Dried fruits retain most of the nutritional value of fresh fruits. The specific nutrient content of the different dried fruits reflects their fresh counterpart and the processing method.
|
null | false
| null |
What was the purpose of Major League Baseball's rule changes this year?
|
Major League Baseball has been facing a decline in interest for years. The reasons commonly given for this decline are the length of games and lack of action attributed to the shift.
In an effort to speed up the games and make them more interesting, baseball implemented a number of changes.
- They made the bases bigger to encourage more base stealing.
- They banned the "shift". The "shift" became prevalent over the last decade and is seen as one of the main contributors to a reduction in scoring. They hope to increase offense with the ban on the shift.
- A pitch clock. Pitchers have to deliver a pitch within 15 seconds if no runner is on base; if a runner is on base, the pitcher must deliver the pitch within 20 seconds. Batters also must be in the batter's box and ready to hit with at least 8 seconds remaining on the clock. If a pitcher fails to deliver the pitch within the allotted time, the batter is awarded an automatic ball. If the batter is not ready to hit within 8 seconds, the batter receives an automatic strike.
|
|
null | false
| 127
|
Adversarial examples, a term introduced in BIBREF0, are inputs transformed by small perturbations that machine learning models consistently misclassify. The experiments are conducted in the context of computer vision (CV), and the core idea is encapsulated by an illustrative example: after imperceptible noises are added to a panda image, an image classifier predicts, with high confidence, that it is a gibbon. Interestingly, these adversarial examples can also be used to improve the classifier — either as additional training data BIBREF0 or as a regularisation objective BIBREF1 — thus providing motivation for generating effective adversarial examples.
The germ of this paper comes from our investigation of adversarial attack methods for natural language processing (NLP) tasks, e.g. sentiment classification, which drives us to quantify what makes an “effective” or “good” adversarial example. In the context of images, a good adversarial example is typically defined according to two criteria:
it has successfully fooled the classifier;
it is visually similar to the original example.
In NLP, defining a good adversarial example is a little more involved, because while criterion (b) can be measured with a comparable text similarity metric (e.g. BLEU or edit distance), an adversarial example should also:
be fluent or natural;
preserve its original label.
These two additional criteria are generally irrelevant for images, as adding minor perturbations to an image is unlikely to: (1) create an uninterpretable image (whereas changing one word can render a sentence incoherent), or (2) change how we perceive the image, say from seeing a panda to a gibbon (whereas a sentence's sentiment can be reversed by simply adding a negative adverb such as not). Without considering criterion (d), generating adversarial examples in NLP would be trivial, as the model can learn to simply replace a positive adjective (amazing) with a negative one (awful) to attack a sentiment classifier.
To the best of our knowledge, most studies on adversarial example generation in NLP have largely ignored these additional criteria BIBREF2, BIBREF3, BIBREF4, BIBREF5. We believe the lack of a rigorous evaluation framework partially explains why adversarial training for NLP models has not seen the same extent of improvement compared to CV models. As our experiments reveal, examples generated from most attacking methods are successful in fooling the classifier, but their language is often unnatural and the original label is not properly preserved.
The core contribution of our paper is to introduce a systematic, rigorous evaluation framework to assess the quality of adversarial examples for NLP. We focus on sentiment classification as the target task, as it is a popular application that highlights the importance of criteria discussed above. We test a number of attacking methods and also propose an alternative approach (based on an auto-encoder) for generating adversarial examples. We learn that a number of factors can influence the performance of adversarial attacks, including architecture of the classifier, sentence length and input domain.
We test a number of attacking methods and also propose an alternative approach (based on an auto-encoder) for generating adversarial examples.
|
What has been done for generating adversarial examples?
|
A number of attacking methods have been tested, and an alternative approach (based on an auto-encoder) has been proposed.
|
null | false
| 204
|
Recent years have witnessed increased performance in many computational linguistics tasks such as syntactic and semantic parsing BIBREF0 , BIBREF1 , emotion classification BIBREF2 , and sentiment analysis BIBREF3 , BIBREF4 , BIBREF5 , especially concerning the applicability of such tools to noisy online data. Moreover, the field has made substantial progress in developing multilingual models and extending semantic annotation resources to languages beyond English BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 .
Concurrently, it has been argued for mental health research that it would constitute a `valuable critical step' BIBREF10 to analyse first-hand accounts by individuals with lived experience of severe mental health issues in blog posts, tweets, and discussion forums. Several severe mental health difficulties, e.g., bipolar disorder (BD) and schizophrenia are considered as chronic and clinical recovery, defined as being relapse and symptom free for a sustained period of time BIBREF11 , is considered difficult to achieve BIBREF12 , BIBREF13 , BIBREF14 . Moreover, clinically recovered individuals often do not regain full social and educational/vocational functioning BIBREF15 , BIBREF16 . Therefore, research originating from initiatives by people with lived experience of mental health issues has been advocating emphasis on the individual's goals in recovery BIBREF17 , BIBREF18 . This movement gave rise to the concept of personal recovery BIBREF19 , BIBREF20 , loosely defined as a `way of living a satisfying, hopeful, and contributing life even with limitations caused by illness' BIBREF18 . The aspects of personal recovery have been conceptualised in various ways BIBREF21 , BIBREF22 , BIBREF23 . According to the frequently used CHIME model BIBREF24 , its main components are Connectedness, Hope and optimism, Identity, Meaning and purpose, and Empowerment. Here, we focus on BD, which is characterised by recurring episodes of depressed and elated (hypomanic or manic) mood BIBREF25 , BIBREF12 . Bipolar spectrum disorders were estimated to affect approximately 2% of the UK population BIBREF13 with rates ranging from 0.1%-4.4% across 11 other European, American and Asian countries BIBREF26 . Moreover, BD is associated with a high risk of suicide BIBREF27 , making its prevention and treatment important tasks for society. BD-specific personal recovery research is motivated by mainly two facts: First, the pole of positive/elevated mood and ongoing mood instability constitute core features of BD and pose special challenges compared to other mental health issues, such as unipolar depression BIBREF25 . Second, unlike for some other severe mental health difficulties, return to normal functioning is achievable given appropriate treatment BIBREF28 , BIBREF16 , BIBREF29 .
A substantial body of qualitative and quantitative research has shown the importance of personal recovery for individuals diagnosed with BD BIBREF22 , BIBREF25 , BIBREF30 , BIBREF31 , BIBREF23 . Qualitative evidence mainly comes from (semi-)structured interviews and focus groups and has been criticised for small numbers of participants BIBREF10 , lacking complementary quantitative evidence from larger samples BIBREF32 . Some quantitative evidence stems from the standardised bipolar recovery questionnaire BIBREF30 and a randomised control trial for recovery-focused cognitive-behavioural therapy BIBREF31 . Critically, previous research has taken place only in structured settings. What is more, the recovery concept emerged from research primarily conducted in English-speaking countries, mainly involving researchers and participants of Western ethnicity. This might have led to a lack of non-Western notions of wellbeing in the concept, such as those found in indigenous peoples BIBREF32 , limiting its applicability to a general population. Indeed, the variation in BD prevalence rates from 0.1% in India to 4.4% in the US is striking. It has been shown that culture is an important factor in the diagnosis of BD BIBREF33 , as well as in the causes attributed to mental health difficulties in general and treatments considered appropriate BIBREF34 , BIBREF35 . While approaches to mental health classification from texts have long ignored the cultural dimension BIBREF36 , first studies show that online language of individuals affected by depression or related mental health difficulties differs significantly across cultures BIBREF37 , BIBREF36 .
Hence, it seems timely to take into account the wealth of accounts of mental health difficulties and recovery stories from individuals of diverse ethnic and cultural backgrounds that are available in a multitude of languages on the internet. Corpus and computational linguistic methods are explicitly designed for processing large amounts of linguistic data BIBREF38 , BIBREF39 , BIBREF40 , BIBREF41 , and as discussed above, recent advances have made it feasible to apply them to noisy user-generated texts from diverse domains, including mental health BIBREF42 , BIBREF43 . Computer-aided analysis of public social media data enables us to address several shortcomings in the scientific underpinning of personal recovery in BD by overcoming the small sample sizes of lab-collected data and including accounts from a more heterogeneous population.
In sum, our research questions are as follows: (1) How is personal recovery discussed online by individuals meeting criteria for BD? (2) What new insights do we get about personal recovery and factors that facilitate or hinder it? We will investigate these questions in two parts, looking at English-language data by westerners and at multilingual data by individuals of diverse ethnicities.
Concurrently, it has been argued for mental health research that it would constitute a ‘valuable critical step’ (Stuart et al., 2017) to analyse first-hand accounts by individuals with lived experience of severe mental health issues in blog posts, tweets, and discussion forums.
|
What first-hand accounts did this paper analyse?
|
This paper analyses first-hand accounts by individuals with lived experience of severe mental health issues in blog posts, tweets, and discussion forums.
|
null | false
| null |
1764–1766: Taxes imposed and withdrawn
In 1764 Parliament passed the Sugar Act, decreasing the existing customs duties on sugar and molasses but providing stricter measures of enforcement and collection. That same year, Prime Minister George Grenville proposed direct taxes on the colonies to raise revenue, but he delayed action to see whether the colonies would propose some way to raise the revenue themselves.
Grenville had asserted in 1762 that the whole revenue of the custom houses in America amounted to one or two thousand pounds sterling a year, and that the English exchequer was paying between seven and eight thousand pounds a year to collect. Adam Smith wrote in The Wealth of Nations that Parliament "has never hitherto demanded of [the American colonies] anything which even approached to a just proportion to what was paid by their fellow subjects at home." Benjamin Franklin would later testify in Parliament in 1766 to the contrary, reporting that Americans already contributed heavily to the defense of the Empire. He argued that local colonial governments had raised, outfitted, and paid 25,000 soldiers to fight France in just the French and Indian War alone—as many as Britain itself sent—and spent many millions from American treasuries doing so.
Parliament finally passed the Stamp Act in March 1765, which imposed direct taxes on the colonies for the first time. All official documents, newspapers, almanacs, and pamphlets were required to have the stamps—even decks of playing cards. The colonists did not object that the taxes were high; they were actually low. They objected to their lack of representation in the Parliament, which gave them no voice concerning legislation that affected them. The British were, however, reacting to an entirely different issue: at the conclusion of the recent war the Crown had to deal with approximately 1,500 politically well-connected British Army officers. The decision was made to keep them on active duty with full pay, but they—and their commands—also had to be stationed somewhere. Stationing a standing army in Great Britain during peacetime was politically unacceptable, so they determined to station them in America and have the Americans pay them through the new tax. The soldiers had no military mission however; they were not there to defend the colonies because there was no current threat to the colonies.
The Sons of Liberty formed shortly after the Act in 1765, and they used public demonstrations, boycotts, and threats of violence to ensure that the British tax laws were unenforceable. In Boston, the Sons of Liberty burned the records of the vice admiralty court and looted the home of chief justice Thomas Hutchinson. Several legislatures called for united action, and nine colonies sent delegates to the Stamp Act Congress in New York City in October. Moderates led by John Dickinson drew up a Declaration of Rights and Grievances stating that taxes passed without representation violated their rights as Englishmen, and colonists emphasized their determination by boycotting imports of British merchandise.
The Parliament at Westminster saw itself as the supreme lawmaking authority throughout the Empire and thus entitled to levy any tax without colonial approval or even consultation. They argued that the colonies were legally British corporations subordinate to the British Parliament, and they pointed to numerous instances where Parliament had made laws in the past that were binding on the colonies. Parliament insisted that the colonists effectively enjoyed a "virtual representation", as most British people did, since only a small minority of the British population elected representatives to Parliament. However, Americans such as James Otis maintained that there was no one in Parliament responsible specifically for any colonial constituency, so they were not "virtually represented" by anyone in Parliament at all.
The Rockingham government came to power in July 1765, and Parliament debated whether to repeal the stamp tax or to send an army to enforce it. Benjamin Franklin made the case for repeal, explaining that the colonies had spent heavily in manpower, money, and blood defending the empire in a series of wars against the French and indigenous people, and that further taxes to pay for those wars were unjust and might bring about a rebellion. Parliament agreed and repealed the tax on February 21, 1766, but they insisted in the Declaratory Act of March 1766 that they retained full power to make laws for the colonies "in all cases whatsoever". The repeal nonetheless caused widespread celebrations in the colonies.
1767–1773: Townshend Acts and the Tea Act
In 1767, the Parliament passed the Townshend Acts which placed duties on a number of staple goods, including paper, glass, and tea, and established a Board of Customs in Boston to more rigorously execute trade regulations. The new taxes were enacted on the belief that Americans only objected to internal taxes and not to external taxes such as custom duties. However, in his widely read pamphlet, Letters from a Farmer in Pennsylvania, John Dickinson argued against the constitutionality of the acts because their purpose was to raise revenue and not to regulate trade. Colonists responded to the taxes by organizing new boycotts of British goods. These boycotts were less effective, however, as the goods taxed by the Townshend Acts were widely used.
In February 1768, the Assembly of Massachusetts Bay issued a circular letter to the other colonies urging them to coordinate resistance. The governor dissolved the assembly when it refused to rescind the letter. Meanwhile, a riot broke out in Boston in June 1768 over the seizure of the sloop Liberty, owned by John Hancock, for alleged smuggling. Customs officials were forced to flee, prompting the British to deploy troops to Boston. A Boston town meeting declared that no obedience was due to parliamentary laws and called for the convening of a convention. A convention assembled but only issued a mild protest before dissolving itself. In January 1769, Parliament responded to the unrest by reactivating the Treason Act 1543 which called for subjects outside the realm to face trials for treason in England. The governor of Massachusetts was instructed to collect evidence of said treason, and the threat caused widespread outrage, though it was not carried out.
On March 5, 1770, a large crowd gathered around a group of British soldiers on a Boston street. The crowd grew threatening, throwing snowballs, rocks, and debris at them. One soldier was clubbed and fell. There was no order to fire, but the soldiers panicked and fired into the crowd. They hit 11 people; three civilians died of wounds at the scene of the shooting, and two died shortly after the incident. The event quickly came to be called the Boston Massacre. The soldiers were tried and acquitted (defended by John Adams), but the widespread descriptions soon began to turn colonial sentiment against the British. This accelerated the downward spiral in the relationship between Britain and the Province of Massachusetts.
A new ministry under Lord North came to power in 1770, and Parliament withdrew all taxes except the tax on tea, giving up its efforts to raise revenue while maintaining the right to tax. This temporarily resolved the crisis, and the boycott of British goods largely ceased, with only the more radical patriots such as Samuel Adams continuing to agitate.
In June 1772, American patriots, including John Brown, burned a British warship that had been vigorously enforcing unpopular trade regulations, in what became known as the Gaspee Affair. The affair was investigated for possible treason, but no action was taken.
In 1772, it became known that the Crown intended to pay fixed salaries to the governors and judges in Massachusetts, which had been paid by local authorities. This would reduce the influence of colonial representatives over their government. Samuel Adams in Boston set about creating new Committees of Correspondence, which linked Patriots in all 13 colonies and eventually provided the framework for a rebel government. Virginia, the largest colony, set up its Committee of Correspondence in early 1773, on which Patrick Henry and Thomas Jefferson served.
A total of about 7,000 to 8,000 Patriots served on Committees of Correspondence at the colonial and local levels, comprising most of the leadership in their communities. Loyalists were excluded. The committees became the leaders of the American resistance to British actions, and later largely determined the war effort at the state and local level. When the First Continental Congress decided to boycott British products, the colonial and local Committees took charge, examining merchant records and publishing the names of merchants who attempted to defy the boycott by importing British goods.
In 1773, private letters were published in which Massachusetts Governor Thomas Hutchinson claimed that the colonists could not enjoy all English liberties, and in which Lieutenant Governor Andrew Oliver called for the direct payment of colonial officials. The letters' contents were used as evidence of a systematic plot against American rights, and discredited Hutchinson in the eyes of the people; the colonial Assembly petitioned for his recall. Benjamin Franklin, postmaster general for the colonies, acknowledged that he leaked the letters, which led to him being berated by British officials and removed from his position.
Meanwhile, Parliament passed the Tea Act lowering the price of taxed tea exported to the colonies, to help the British East India Company undersell smuggled untaxed Dutch tea. Special consignees were appointed to sell the tea to bypass colonial merchants. The act was opposed by those who resisted the taxes and also by smugglers who stood to lose business. In most instances, the consignees were forced by the Americans to resign and the tea was turned back, but Massachusetts governor Hutchinson refused to allow Boston merchants to give in to pressure. A town meeting in Boston determined that the tea would not be landed, and ignored a demand from the governor to disperse. On December 16, 1773, a group of men, led by Samuel Adams and dressed to evoke the appearance of indigenous people, boarded the ships of the East India Company and dumped £10,000 worth of tea from their holds (approximately £636,000 in 2008) into Boston Harbor. Decades later, this event became known as the Boston Tea Party and remains a significant part of American patriotic lore.
1774–1775: Intolerable Acts
The British government responded by passing several measures that came to be known as the Intolerable Acts, further darkening colonial opinion towards England. They consisted of four laws enacted by the British parliament. The first was the Massachusetts Government Act which altered the Massachusetts charter and restricted town meetings. The second was the Administration of Justice Act which ordered that British soldiers facing trial be arraigned in Britain, not in the colonies. The third was the Boston Port Act, which closed the port of Boston until the British had been compensated for the tea lost in the Boston Tea Party. The fourth was the Quartering Act of 1774, which allowed royal governors to house British troops in the homes of citizens without requiring permission of the owner.
In response, Massachusetts patriots issued the Suffolk Resolves and formed an alternative shadow government known as the Provincial Congress which began training militia outside British-occupied Boston. In September 1774, the First Continental Congress convened, consisting of representatives from each colony, to serve as a vehicle for deliberation and collective action. During secret debates, conservative Joseph Galloway proposed the creation of a colonial Parliament that would be able to approve or disapprove acts of the British Parliament, but his idea was tabled in a vote of 6 to 5 and was subsequently removed from the record. Congress called for a boycott beginning on December 1, 1774, of all British goods; it was enforced by new local committees authorized by the Congress.
|
From the following passage, which taxes/acts led to the American Revolution?
|
The following Acts passed by the British Parliament led to the American Revolution:
1) The Sugar Act in 1764 provided stricter measures of enforcement and collection for sugar taxes.
2) The Stamp Act in 1765 imposed direct taxes on official documents, newspapers, almanacs, pamphlets, and playing cards.
3) The Declaratory Act in 1766 declared the British Parliament retained full taxation powers for the American Colonies.
4) The Townshend Acts in 1767 added duties to goods like paper, glass, and tea.
5) The Tea Act in 1773 lowered the taxes on tea to benefit the British East India Company.
6) The so-called Intolerable Acts in 1774 consisted of 4 laws that increased Britain's sovereignty over the American Colonies. These laws include the rights of Britain to restrict town meetings, try British soldiers for crimes committed in the American colonies, close the port of Boston, and require housing of British troops in the homes of citizens.
|
null | false
| null |
Which of these teams play in the National Rugby League (NRL) or the Australian Football League (AFL)? Sydney Swans, Penrith Panthers, South Sydney Rabbitohs, Brisbane Lions, Brisbane Broncos, Manly Sea Eagles & Fremantle Dockers
|
NRL teams: Penrith Panthers, South Sydney Rabbitohs, Brisbane Broncos, Manly Sea Eagles
AFL teams: Sydney Swans, Brisbane Lions, Fremantle Dockers
|
|
null | false
| null |
Identify which instrument is string or woodwind: Brac, Piri
|
Piri is woodwind, Brac is string.
|
|
null | false
| null |
Give me a bulleted list of 5 states in alphabetical order
|
- Alabama
- Alaska
- Arizona
- Arkansas
- California
|
|
null | false
| 83
|
We inspected the tweets of one fold that were misclassified by the Mazajak/SVM model (36 false positives/121 false negatives) to determine the most common errors.
False positives had four main types:
Gloating: ex. يا هبيده (“yA hbydp” – “O you delusional”), referring to fans of a rival sports team for thinking they could win.
Quoting: ex. لما حد يسب ويقول يا كلب (“lmA Hd ysb wyqwl yA klb” – “when someone swears and says: O dog”).
Idioms: ex. يا فاطر رمضان يا خاسر دينك (“yA fATr rmDAn yA xAsr dynk” – “O you who does not fast Ramadan, you who have lost your religion”), which is a colloquial idiom.
Implicit Sarcasm: ex. يا خاين انت عايز تشكك في حب الشعب للريس (“yA KAyn Ant EAwz t$kk fy Hb Al$Eb llrys” – “O traitor, (you) want to question the love of the people for the president”), where the author is mocking the president's popularity.
False negatives had two types:
Mixture of offensiveness and admiration: ex. calling a girl a puppy يا كلبوبة (“yA klbwbp” – “O puppy”) in a flirtatious manner.
Implicit offensiveness: ex. calling for cure while implying insanity in: وتشفي حكام قطر من المرض (“wt$fy HkAm qTr mn AlmrD” – “and cure Qatar rulers from illness”).
Many errors stem from heavy use of dialectal Arabic as well as ambiguity. Since BERT was trained on Wikipedia (MSA) and Google books, the model failed to classify tweets with dialectal cues. Conversely, Mazajak/SVM is more biased towards dialects, often failing to classify MSA tweets.
Two false negative types: • Mixture of offensiveness and admiration: ex. calling a girl a puppy (“yA klbwbp” – “O puppy”) in a flirtatious manner. • Implicit offensiveness: ex. calling for cure while implying insanity: (“wt$fy HkAm bldk mn AlmrD” – “and cure rulers of your country from illness”).
|
What are the two false negative types in this paper?
|
The two false negative types are (1) a mixture of offensiveness and admiration, and (2) implicit offensiveness.
|
null | false
| 34
|
Single-relation factoid questions are the most common form of questions found in search query logs and community question answering websites BIBREF1 , BIBREF2 . A knowledge-base (KB) such as Freebase, DBpedia, or Wikidata can help answer such questions after users reformulate them as queries. For instance, the question Where was Barack Obama born? can be answered by issuing the following KB query: $\lambda (x).place\_of\_birth(Barack\_Obama, x)$. However, automatically mapping a natural language question such as Where was Barack Obama born? to its corresponding KB query remains a challenging task.
There are three key issues that make learning this mapping non-trivial. First, there are many paraphrases of the same question. Second, many of the KB entries are unseen during training time; however, we still need to correctly predict them at test time. Third, a KB such as Freebase typically contains millions of entities and thousands of predicates, making it difficult for a system to predict these entities at scale BIBREF1 , BIBREF3 , BIBREF0 . In this paper, we address all three of these issues with a character-level encoder-decoder framework that significantly improves performance over state-of-the-art word-level neural models, while also providing a much more compact model that can be learned from less data.
First, we use a long short-term memory (LSTM) BIBREF4 encoder to embed the question. Second, to make our model robust to unseen KB entries, we extract embeddings for questions, predicates and entities purely from their character-level representations. Character-level modeling has been previously shown to generalize well to new words not seen during training BIBREF5 , BIBREF6 , which makes it ideal for this task. Third, to scale our model to handle the millions of entities and thousands of predicates in the KB, instead of using a large output layer in the decoder to directly predict the entity and predicate, we use a general interaction function between the question embeddings and KB embeddings that measures their semantic relevance to determine the output. The combined use of character-level modeling and a semantic relevance function allows us to successfully produce likelihood scores for the KB entries that are not present in our vocabulary, a challenging task for standard encoder-decoder frameworks.
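The following PyTorch sketch illustrates the two key ideas, character-level encoding and a semantic relevance function over KB embeddings. It is a simplified illustration: the layer sizes, class names, and the choice of cosine similarity as the interaction function are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CharLSTMEncoder(nn.Module):
    """Encode any string (question, entity alias, or predicate name)
    purely from its characters, so unseen KB entries still get vectors."""
    def __init__(self, n_chars, d_char=64, d_hidden=256):
        super().__init__()
        self.emb = nn.Embedding(n_chars, d_char)
        self.lstm = nn.LSTM(d_char, d_hidden, batch_first=True)

    def forward(self, char_ids):          # char_ids: (batch, max_len)
        _, (h, _) = self.lstm(self.emb(char_ids))
        return h[-1]                      # (batch, d_hidden)

def relevance_scores(question_vec, kb_entry_vecs):
    """Score every KB entity/predicate by semantic relevance to the
    question, avoiding a fixed-size output layer over millions of IDs."""
    # question_vec: (d,); kb_entry_vecs: (num_entries, d)
    return F.cosine_similarity(question_vec.unsqueeze(0), kb_entry_vecs, dim=-1)
```

At prediction time, the KB entry with the highest relevance score is emitted, so a new entity only requires running the character encoder over its surface form.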
Our novel, character-level encoder-decoder model is compact, requires significantly less data to train than previous work, and is able to generalize well to unseen entities at test time. In particular, without use of ensembles, we achieve 70.9% accuracy in the Freebase2M setting and 70.3% accuracy in the Freebase5M setting on the SimpleQuestions dataset, outperforming the previous state of the art of 62.7% and 63.9% BIBREF0 by 8.2% and 6.4% respectively. Moreover, we only use the training questions provided in SimpleQuestions to train our model, which cover about 24% of words in entity aliases on the test set. This demonstrates the robustness of the character-level model to unseen entities. In contrast, data augmentation is usually necessary to provide more coverage for unseen entities and predicates, as done in previous work BIBREF0 , BIBREF1 .
However, automatically mapping a natural language question such as “Where was Barack Obama born?” to its corresponding KB query remains a challenging task.
|
What mapping remains a challenging task?
|
Automatically mapping a natural language question such as “Where was Barack Obama born?” to its corresponding KB query remains a challenging task.
|
null | false
| null |
A black hole is a region of spacetime where gravity is so strong that nothing, including light or other electromagnetic waves, has enough energy to escape its event horizon. The theory of general relativity predicts that a sufficiently compact mass can deform spacetime to form a black hole. The boundary of no escape is called the event horizon. Although it has a great effect on the fate and circumstances of an object crossing it, it has no locally detectable features according to general relativity. In many ways, a black hole acts like an ideal black body, as it reflects no light. Moreover, quantum field theory in curved spacetime predicts that event horizons emit Hawking radiation, with the same spectrum as a black body of a temperature inversely proportional to its mass. This temperature is of the order of billionths of a kelvin for stellar black holes, making it essentially impossible to observe directly.
|
Is black hole truly black?
|
According to general relativity, a black hole emits no light at all, so it is 100% black by definition. However, when quantum effects are included, a black hole has a finite temperature and emits light, called Hawking radiation. In this sense, a black hole is not truly black.
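For reference, the standard Hawking temperature formula (textbook physics, not stated in the passage above) makes the inverse dependence on the black hole mass $M$ explicit:

$$T_H = \frac{\hbar c^3}{8\pi G M k_B}$$

For a black hole of one solar mass this evaluates to roughly $6 \times 10^{-8}$ K, consistent with the “billionths of a kelvin” figure quoted above.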
|
null | false
| null |
What is the capital of the Netherlands?
|
The capital of the Netherlands is Amsterdam.
|
|
null | false
| null |
What are some of the most famous fantasy books ever written?
|
The Lord of the Rings Trilogy and 'The Hobbit' by J.R.R. Tolkien are particularly famous in the fantasy genre. 'The Wheel of Time' by Robert Jordan is another famous fantasy series. "A Game of Thrones' by George R.R. Martin is another famous set of fantasy books.
|
|
null | false
| null |
Classify the below substances based on their natural physical state.
Water, Mercury, Copper, Oxygen, Nitrogen, Wood
|
Solid - Copper, Wood
Liquid - Water, Mercury
Gas - Oxygen, Nitrogen
|
|
null | false
| 97
|
Results for IV evaluation are shown in Table 1, and for OOV in Table 2.
Like in FT, the use of subword information in both LV-N and LV-M results in 1) better representation of rare words, as evidenced by the increase in RW correlation, and 2) significant improvement on the GSyn and MSR tasks, in evidence of subwords encoding information about a word's syntactic function (the suffix “ly”, for example, suggests an adverb).
There seems to be a trade-off between capturing semantics and syntax, as in both LV-N and FT there is an accompanying decrease on the GSem tasks in exchange for gains on the GSyn and MSR tasks. Morphological segmentation in LV-M appears to favor syntax less strongly than do simple n-grams.
On the downstream tasks, we only observe statistically significant ( $p < .05$ under a random permutation test) improvement on the chunking task, and it is a very small gain. We attribute this to both regular and subword models having very similar quality on frequent IV word representation. Statistically, these are the words that are most likely to appear in the downstream task instances, and so the superior representation of rare words has, due to their nature, little impact on overall accuracy. Because in all tasks OOV words are mapped to the “$\langle$unk$\rangle$” token, the subword models are not being used to the fullest, and in future work we will investigate whether generating representations for all words improves task performance.
In OOV representation (Table 2), LV-N and FT work almost identically, as is to be expected. Both find highly coherent neighbors for the words “hellooo”, “marvelicious”, and “rereread”. Interestingly, the misspelling of “louisana” leads to coherent name-like neighbors, although none is the expected correct spelling “louisiana”. All models stumble on the made-up prefix “tuz”. A possible fix would be to down-weigh very rare subwords in the vector summation. LV-M is less robust than LV-N and FT on this task as it is highly sensitive to incorrect segmentation, exemplified in the “hellooo” example.
Finally, we see that nearest-neighbors are a mixture of similarly pre/suffixed words. If these pre/suffixes are semantic, the neighbors are semantically related, else if syntactic they have similar syntactic function. This suggests that it should be possible to get tunable representations which are more driven by semantics or syntax by a weighted summation of subword vectors, given we can identify whether a pre/suffix is semantic or syntactic in nature and weigh them accordingly. This might be possible without supervision using corpus statistics as syntactic subwords are likely to be more frequent, and so could be down-weighted for more semantic representations. This is something we will pursue in future work.
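A minimal sketch of this kind of weighted subword composition, in the style of FastText's character n-gram summation, is shown below. The n-gram range and the optional `weight_fn` hook (for down-weighting frequent, likely syntactic subwords, as suggested above) are illustrative assumptions.

```python
import numpy as np

def char_ngrams(word, n_min=3, n_max=6):
    padded = f"<{word}>"                  # boundary markers as in FastText
    return [padded[i:i + n]
            for n in range(n_min, n_max + 1)
            for i in range(len(padded) - n + 1)]

def oov_vector(word, ngram_vecs, weight_fn=None):
    """Compose a vector for an unseen word from its n-gram vectors.
    By default all subwords count equally; weight_fn lets you bias the
    sum towards more semantic or more syntactic subwords."""
    grams = [g for g in char_ngrams(word) if g in ngram_vecs]
    if not grams:
        return None                       # no known subwords (e.g. a "tuz" prefix)
    w = np.array([1.0 if weight_fn is None else weight_fn(g) for g in grams])
    vecs = np.stack([ngram_vecs[g] for g in grams])
    return (w[:, None] * vecs).sum(axis=0) / w.sum()
```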
In OOV representation (table 2), LV-N and FT work almost identically, as is to be expected.
|
Which models work almost identically in OOV word representation test?
|
LV-N and FT.
|
null | false
| null |
In dimensional modeling, what is a bus matrix?
|
A bus matrix is a logical construct, it has nothing to do with tables or data sources, at least not directly.
You put business processes (things that generate facts) on the rows and dimensions (attributes about events) on the columns, and then identify which dimensions apply to which process.
In totality it defines a logical model which can be translated to a physical dimensional model.
It helps you understand what dimensions to conform, what granularity you need to support, and how your various facts are interrelated, especially causally (e.g. this process produces these facts, which in turn influence this process, which produces these facts, and so on).
Mostly it's a way to talk to the business in their own language about the types of analysis they want to do, introducing concepts like dimensions, facts, schemas, and measures in a way that is meaningful to how the business operates.
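As a minimal, hypothetical illustration for a small retailer (the processes and dimensions below are invented, not a template):

```
Business process    | Date | Product | Store | Customer
--------------------+------+---------+-------+---------
Retail sales        |  X   |    X    |   X   |    X
Inventory snapshot  |  X   |    X    |   X   |
Customer returns    |  X   |    X    |   X   |    X
```

Reading across a row shows the dimensionality of one fact table; reading down a column shows which processes must share a conformed dimension.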
|
|
1712.03547
| false
| null |
$Coherence@k$ has been shown to have high correlation with human interpretability of topics learned via various topic modeling methods BIBREF7 . Hence, we can expect interpretable embeddings by maximizing it.
For evaluating the interpretability, we use $Coherence@k$ (Equation 6), automated and manual word intrusion tests. In the word intrusion test BIBREF14 , the top $k(=5)$ entities along a dimension are mixed with the bottom-most entity (the intruder) in that dimension and shuffled. Then multiple (3 in our case) human annotators are asked to find the intruder. We use majority voting to finalize one intruder. Amazon Mechanical Turk was used for crowdsourcing the task and we used 25 randomly selected dimensions for evaluation. For automated word intrusion BIBREF7 , we calculate the following score for all $k+1$ entities
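Since the coherence formula itself (Equation 6) is not reproduced here, the sketch below covers only the test construction just described: mixing the top-$k$ entities along a dimension with the bottom-most entity and shuffling. Function and variable names are illustrative assumptions.

```python
import random

def build_intrusion_question(dim_ranking, k=5, seed=0):
    """dim_ranking: entities sorted by weight along one embedding
    dimension, highest first. Returns shuffled options plus the
    ground-truth intruder (the lowest-ranked entity)."""
    top_k = dim_ranking[:k]
    intruder = dim_ranking[-1]
    options = top_k + [intruder]
    random.Random(seed).shuffle(options)
    return options, intruder

options, intruder = build_intrusion_question(
    ["paris", "london", "berlin", "rome", "madrid", "guitar"])
# Annotators (or the automated score) should single out "guitar".
```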
$Coherence@k$ has been shown to have high correlation with human interpretability of topics learned via various topic modeling methods BIBREF7 . Hence, we can expect interpretable embeddings by maximizing it.
For evaluating the interpretability, we use $Coherence@k$ (Equation 6 ) , automated and manual word intrusion tests.
|
How do they evaluate interpretability?
|
The answers are shown as follows:
* For evaluating the interpretability, we use $Coherence@k$ (Equation 6 ) , automated and manual word intrusion tests.
|
1908.05434
| false
| null |
All models are trained and evaluated using the same (w.r.t. data shuffle and split) 10-fold cross-validation (CV) on Trafficking-10k, except for HTDN, whose result is read from the original paper BIBREF9 . During each train-test split, 2/9 of the training set is further reserved as the validation set for tuning hyperparameters such as the L2-penalty in IT, AT and LAD, and the learning rate in ORNN. So the overall train-validation-test ratio is 70%-20%-10%. We report the mean metrics from the CV in Table TABREF14 . As previous research has pointed out, there is no unbiased estimator of the variance of CV BIBREF29 , so we report the naive standard error, treating metrics across CV folds as independent.
We can see that ORNN has the best MAE, MAE$_M$ and Acc. as well as a close 2nd best Wt. Acc. among all models. Its Wt. Acc. is a substantial improvement over HTDN despite the fact that the latter uses both text and image data. It is important to note that HTDN is trained using binary labels, whereas the other models are trained using ordinal labels and then have their ordinal predictions converted to binary predictions. This is most likely the reason that even the baseline models except for LAD can yield better Wt. Acc. than HTDN, confirming our earlier claim that polarizing the ordinal labels during training may lead to information loss.
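A minimal sketch of this evaluation protocol, i.e. 10-fold CV with a validation set carved out of each training fold so the overall split is 70%-20%-10%, is shown below. The toy data, seeds and model-fitting step are placeholders, not the paper's setup.

```python
import numpy as np
from sklearn.model_selection import KFold, train_test_split

X = np.arange(100).reshape(-1, 1)          # toy stand-in for Trafficking-10k
y = np.random.randint(0, 5, size=100)      # ordinal labels

kf = KFold(n_splits=10, shuffle=True, random_state=0)
for train_idx, test_idx in kf.split(X):    # each fold: 90% train+val, 10% test
    X_tr, X_val, y_tr, y_val = train_test_split(
        X[train_idx], y[train_idx],
        test_size=2 / 9, random_state=0)   # 2/9 of 90% = 20% of the total
    # ... fit on (X_tr, y_tr), tune hyperparameters on (X_val, y_val),
    # then report metrics on (X[test_idx], y[test_idx]) ...
```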
Table 2: Comparison of the proposed ordinal regression neural network (ORNN) against Immediate-Threshold ordinal logistic regression (IT), All-Threshold ordinal logistic regression (AT), Least Absolute Deviation (LAD), multi-class logistic regression (MC), and the Human Trafficking Deep Network (HTDN) in terms of Mean Absolute Error (MAE), macro-averaged Mean Absolute Error (MAE$_M$), binary classification accuracy (Acc.) and weighted binary classification accuracy (Wt. Acc.). The results are averaged across 10-fold CV on Trafficking10k with naive standard errors in the parentheses. The best and second best results are highlighted.
We report the mean metrics from the CV in Table TABREF14 .
We can see that ORNN has the best MAE, INLINEFORM0 and Acc. as well as a close 2nd best Wt. Acc. among all models.
Table 2: Comparison of the proposed ordinal regression neural network (ORNN) against Immediate-Threshold ordinal logistic regression (IT), All-Threshold ordinal logistic regression (AT), Least Absolute Deviation (LAD), multi-class logistic regression (MC), and the Human Trafficking Deep Network (HTDN) in terms of Mean Absolute Error (MAE), macro-averaged Mean Absolute Error (MAE$_M$), binary classification accuracy (Acc.) and weighted binary classification accuracy (Wt. Acc.). The results are averaged across 10-fold CV on Trafficking10k with naive standard errors in the parentheses. The best and second best results are highlighted.
|
By how much do they outperform previous state-of-the-art models?
|
Proposed ORNN has 0.769, 1.238, 0.818, 0.772 compared to 0.778, 1.244, 0.813, 0.781 of best state of the art result on Mean Absolute Error (MAE), macro-averaged Mean Absolute Error (MAEM ), binary classification accuracy (Acc.) and weighted binary classification accuracy (Wt. Acc.)
|
null | false
| null |
Give me a bulleted list of the Real Housewives series in chronological order.
|
- The Real Housewives of Orange County
- The Real Housewives of New York City
- The Real Housewives of Atlanta
- The Real Housewives of New Jersey
- The Real Housewives of D.C.
- The Real Housewives of Beverly Hills
- The Real Housewives of Miami
- The Real Housewives of Potomac
- The Real Housewives of Dallas
- The Real Housewives of Salt Lake City
- The Real Housewives of Dubai
|
|
null | false
| null |
Classify each of the following countries as Middle Eastern or Central American: Afghanistan, Iraq, Syria, Nicaragua, Honduras, El Salvador
|
Afghanistan: Middle Eastern
Iraq: Middle Eastern
Syria: Middle Eastern
Nicaragua: Central American
Honduras: Central American
El Salvador: Central American
|
|
null | false
| 334
|
Automatically deriving discourse relations from free text is a challenging but important problem. Shallow discourse parsing is very useful in text summarization BIBREF0 , opinion analysis BIBREF1 and natural language generation. A shallow discourse parser is a system for parsing raw text into a set of discourse relations between two adjacent or non-adjacent text spans. A discourse relation is composed of a discourse connective, the two arguments of the discourse connective, and the sense of the discourse connective. A discourse connective signals an explicit discourse relation; in a non-explicit discourse relation, the discourse connective is omitted. The two arguments of the discourse connective, Arg1 and Arg2, are the two adjacent or non-adjacent text spans connected in the discourse relation. The sense of the discourse connective characterizes the nature of the discourse relation. The following discourse relation annotation is taken from a document in the PDTB. Arg1 is shown italicized, Arg2 is shown in bold, and the discourse connective is underlined.
The connective identifier finds the connective word, "unless". The arguments identifier locates the two arguments of "unless". The sense classifier labels the discourse relation. The non-explicit identifier checks all pairs of adjacent sentences; if it identifies a pair of sentences as holding a non-explicit relation, it labels it with the relation sense. Though much research work BIBREF2 , BIBREF3 , BIBREF4 is committed to the shallow discourse parsing field, all of it focuses on individual subtasks rather than the whole parsing process. Given all that, a full shallow discourse parser framework is proposed in our paper to turn free text into a set of discourse relations. The parser includes a connective identifier, an arguments identifier, a sense classifier and a non-explicit identifier, connected in a pipeline (sketched below). In order to enhance the performance of the parser, a feature-based maximum entropy model is adopted in the experiments. Maximum entropy models offer a clean way to combine diverse pieces of contextual evidence in order to estimate the probability of a certain linguistic class occurring with a certain linguistic context in a simple and accessible manner. The three main contributions of the paper are:
The rest of this paper is organized as follows. Section 2 reviews related work in discourse parsing. Section 3 describes the experimental corpus, the PDTB. Section 4 describes the framework and the components of the parser. Section 5 presents experiments and evaluations. Conclusions are presented in Section 6.
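A hedged Python sketch of the four-stage pipeline described above is given below. The component interfaces, the naive sentence splitting, and the output tuple format are assumptions made for illustration, not the authors' code.

```python
def parse_discourse(text, conn_id, arg_id, sense_clf, nonexp_id):
    """Each stage is passed in as a callable, mirroring the pipeline:
    connective identifier -> arguments identifier -> sense classifier,
    plus a non-explicit identifier over adjacent sentence pairs."""
    relations = []
    for conn in conn_id(text):                    # 1) find explicit connectives
        arg1, arg2 = arg_id(text, conn)           # 2) locate Arg1 and Arg2
        relations.append(("Explicit", conn, arg1, arg2,
                          sense_clf(conn, arg1, arg2)))  # 3) label the sense
    sentences = [s.strip() for s in text.split(".") if s.strip()]  # naive split
    for s1, s2 in zip(sentences, sentences[1:]):  # 4) adjacent sentence pairs
        sense = nonexp_id(s1, s2)                 #    check non-explicit relations
        if sense is not None:
            relations.append(("Implicit", None, s1, s2, sense))
    return relations
```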
The parser includes connective identifier, arguments identifier, sense classifier and non-explicit identifier, which connects with each other in pipeline.
|
What does the shallow discourse parser include?
|
Connective identifier, arguments identifier, sense classifier and non-explicit identifier.
|
null | false
| null |
Food historian Lois Ellen Frank calls potatoes, tomatoes, corn, beans, squash, chili, cacao, and vanilla the "magic eight" ingredients that were found and used only in the Americas before 1492 and were taken via the Columbian Exchange back to the Old World, dramatically transforming the cuisine there. According to Frank,
If we deconstruct that these foods were inherently native, then that means that the Italians didn't have the tomato, the Irish didn't have the potato, half the British National Dish—Fish and Chips—didn't exist. The Russians didn't have the potato, nor did they have vodka from the potato. There were no chiles in any Asian cuisine anywhere in the world, nor were there any chiles in any East Indian cuisine dishes, including curries. And the French had no confection using either vanilla or chocolate. So the Old World was a completely different place.
|
Spanish patatas bravas are potatoes served with a red sauce made with tomatoes and chilis. Could patatas bravas have existed in 1490?
|
No, patatas bravas could not have existed in 1490. This is because in 1490 the Americas had not yet been visited by Europeans, and so Spaniards didn't have access to the potatoes, tomatoes or chilis present in the dish, because those plants are all from the Americas originally.
|
null | false
| 127
|
We experiment with five benchmark attacking methods for texts: FGM, FGVM, DeepFool BIBREF5, HotFlip BIBREF3 and TYC BIBREF4.
To perturb the discrete inputs, both FGM and FGVM introduce noise in the word embedding space via the fast gradient method BIBREF1 and reconstruct the input by mapping perturbed word embeddings to valid words via nearest neighbour search. Between FGM and FGVM, the former introduces noise proportional to the sign of the gradients while the latter introduces perturbations proportional to the gradients directly. The proportion is known as the overshoot value and denoted by $\epsilon $. DeepFool uses the same trick to deal with discrete inputs except that, instead of using the fast gradient method, it uses the DeepFool method introduced in BIBREF8 for images to search for an optimal direction in which to perturb the word embeddings.
Unlike the previous methods, HotFlip and TYC rely on performing one or more atomic flip operations to replace words while monitoring the label change given by the target classifier. In HotFlip, the directional derivatives w.r.t. flip operations are calculated and the flip operation that results in the largest increase in loss is selected. TYC is similar to FGM, FGVM and DeepFool in that it also uses nearest neighbour search to map the perturbed embeddings to valid words, but instead of using the perturbed tokens directly, it uses greedy search or beam search to flip original tokens to perturbed ones one at a time in order of their vulnerability.
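To make the embedding-space perturbation concrete, here is a hedged PyTorch sketch of an FGM-style attack followed by nearest-neighbour reconstruction. It assumes a classifier that accepts embeddings directly; the epsilon value, tensor shapes and function names are illustrative, not any paper's exact implementation.

```python
import torch

def fgm_attack(model, loss_fn, embeddings, label, vocab_matrix, epsilon=1.0):
    """embeddings: (1, seq_len, d) input word embeddings;
    vocab_matrix: (|V|, d) embedding table for nearest-neighbour search."""
    embeddings = embeddings.clone().detach().requires_grad_(True)
    loss = loss_fn(model(embeddings), label)
    loss.backward()
    # FGM perturbs proportionally to the *sign* of the gradient;
    # FGVM would use the gradient itself instead of its sign.
    perturbed = embeddings + epsilon * embeddings.grad.sign()
    # Map each perturbed embedding back to the nearest valid word.
    dists = torch.cdist(perturbed.squeeze(0), vocab_matrix)  # (seq_len, |V|)
    return dists.argmin(dim=-1)   # word ids of the adversarial sentence
```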
We experiment with six benchmark attacking methods for texts, ranging from white-box attacks: FGM, FGVM, DEEPFOOL (Gong et al., 2018), HOTFLIP (Ebrahimi et al., 2017), and TYC (Tsai et al., 2019), to black-box attacks: TEXTFOOLER (Jin et al., 2019).
|
What benchmark attacking methods are used in their experiment?
|
FGM, FGVM, DEEPFOOL (Gong et al., 2018), HOTFLIP (Ebrahimi et al., 2017), TYC (Tsai et al., 2019), and TEXTFOOLER (Jin et al., 2019).
|
null | false
| null |
What’s a good way to lose weight?
|
Consuming fewer calories than you burn can result in weight loss. Using calorie-counting apps like MyFitnessPal can assist with calorie counting while ensuring you are getting the proper macronutrients as well, for optimal, healthy weight loss.
|
|
null | false
| null |
Arcane (titled onscreen as Arcane: League of Legends) is an adult animated steampunk action-adventure television series created by Christian Linke and Alex Yee for Netflix. It was produced by the French animation studio Fortiche under the supervision of Riot Games. Set in Riot's League of Legends fictional universe, it primarily focuses on sisters Vi and Jinx. The series was announced at the League of Legends 10th anniversary celebration in 2019, and first released in November 2021. Following the conclusion of the first season, Riot Games and Netflix announced that a second season was in production.
|
Which Game Studio created Arcane?
|
Riot Games. Arcane was created by Christian Linke and Alex Yee and produced by the French animation studio Fortiche under the supervision of the game studio Riot Games.
|
null | false
| null |
How has Kyle Shanahan fared while coaching the San Francisco 49ers?
|
Kyle Shanahan has coached the 49ers since 2017. Over 6 seasons he has a record of 52 wins and 46 losses. He coached the 49ers to the Super Bowl in 2020, but lost to the Kansas City Chiefs. Overall, he is widely considered one of the better coaches in the NFL. He is known for his creative offensive strategies.
|
|
null | false
| null |
What is pink salt?
|
It’s important to distinguish between a typical consumer-grade Himalayan pink salt primarily used for finishing dishes just before serving, and a curing salt that is not meant for consumption but instead used for the process of curing meats and just happens to be pink. So, “pink salt” tends to be an overloaded term.
|
|
null | false
| null |
Which U.S president had a home called The Hermitage?
|
Andrew Jackson
|
|
null | false
| null |
What is AWS EMR?
|
Amazon EMR (previously called Amazon Elastic MapReduce) is a managed cluster platform that simplifies running big data frameworks, such as Apache Hadoop and Apache Spark, on AWS to process and analyze vast amounts of data.
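As a hedged sketch, one way to launch a small Spark cluster on EMR is with the boto3 `run_job_flow` call; the cluster name, region, instance types, and release label below are illustrative, and the default EMR IAM roles are assumed to already exist in the account:

```python
import boto3

emr = boto3.client("emr", region_name="us-east-1")  # region is an example

response = emr.run_job_flow(
    Name="example-spark-cluster",        # placeholder name
    ReleaseLabel="emr-6.10.0",           # an EMR release that ships Spark
    Applications=[{"Name": "Spark"}, {"Name": "Hadoop"}],
    Instances={
        "InstanceGroups": [
            {"InstanceRole": "MASTER", "InstanceType": "m5.xlarge", "InstanceCount": 1},
            {"InstanceRole": "CORE", "InstanceType": "m5.xlarge", "InstanceCount": 2},
        ],
        "KeepJobFlowAliveWhenNoSteps": True,
    },
    JobFlowRole="EMR_EC2_DefaultRole",   # assumes default roles exist
    ServiceRole="EMR_DefaultRole",
)
print(response["JobFlowId"])  # id of the newly created cluster
```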
|
|
null | false
| null |
Which products apple sell?
|
Apple sells a variety of products, including watches, phones, laptops and computers.
|
|
null | false
| null |
Barbecue or barbeque (informally BBQ in the UK, US, and Canada; barbie in Australia; and braai in South Africa) is a term used with significant regional and national variations to describe various cooking methods that use live fire and smoke to cook the food. The term is also generally applied to the devices associated with those methods, the broader cuisines that these methods produce, and the meals or gatherings at which this style of food is cooked and served. The cooking methods associated with barbecuing vary significantly but most involve outdoor cooking.
The various regional variations of barbecue can be broadly categorized into those methods which use direct and those which use indirect heating. Indirect barbecues are associated with North American cuisine, in which meat is heated by roasting or smoking over wood or charcoal. These methods of barbecue involve cooking using smoke at low temperatures and long cooking times, for several hours. Elsewhere, barbecuing more commonly refers to the more direct application of heat, grilling of food over hot coals or a gas fire. This technique is usually done over direct, dry heat or a hot fire for a few minutes. Within these broader categorizations are further national and regional differences.
|
What is barbecue?
|
Barbecue is a term describing various cooking methods that use live fire and smoke to cook food, as well as the devices used for those methods. It can also refer to the meal or gathering at which this type of food is cooked and served.
|
null | false
| null |
What are different routes a wide receiver could run if the team needs to gain over 10 yards in one play.
|
If a team needs to gain over 10 yards in one play, a wide receiver should run a route that goes over 10 yards to ensure enough yardage is gained on the play. Routes that go longer than 10 yards are:
1. Fly route
2. Deep post route
3. Deep corner route
4. Dig route
|
|
null | false
| 92
|
The primary aim of this work is to create a new large-scale NLI benchmark on which current state-of-the-art models fail. This constitutes a new target for the field to work towards, and can elucidate model capabilities and limitations. As noted, however, static benchmarks do not last very long these days. If continuously deployed, the data collection procedure we introduce here can pose a dynamic challenge that allows for never-ending learning.
The primary aim of this work is to create a new large-scale NLI benchmark on which current state-of-the-art models fail.
|
What is the primary aim of the authors' work?
|
The primary aim of this work is to create a new large-scale NLI benchmark on which current state-of-the-art models fail.
|
null | false
| null |
What is the potential hydrogen (ph) of water?
|
The potential hydrogen (pH) of water measures how acidic or alkaline water is. Water with a pH of 7.0 is considered neutral; distilled water is typically this kind of water. Water with a pH of less than 7.0 is considered acidic, and even some bottled water has a pH less than 7.0. Water with a pH higher than 7.0 is considered alkaline. You can also adjust the pH of water with simple household ingredients: for example, baking soda can be used to raise the pH of water that is acidic, and lemon juice can be used to lower the pH of water that is alkaline.
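As a small worked example, pH is defined as the negative base-10 logarithm of the hydrogen ion concentration in mol/L; the concentrations below are illustrative values:

```python
import math

def ph(h_ion_mol_per_l):
    """pH = -log10 of the hydrogen ion concentration (mol/L)."""
    return -math.log10(h_ion_mol_per_l)

print(ph(1e-7))  # 7.0 -> neutral, e.g. distilled water
print(ph(1e-5))  # 5.0 -> acidic
print(ph(1e-9))  # 9.0 -> alkaline
```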
|
|
1805.04033
| false
| null |
FLOAT SELECTED: Table 3. Comparisons with the Existing Models in Terms of ROUGE Metrics
|
Which existing models does this approach outperform?
|
RNN-context, SRB, CopyNet, RNN-distract, DRGD
|
null | false
| null |
what can you do
|
As an artificial intelligence, I would love to perform all the activities you require. A few of them are listed below:
1) QnA - Oh, I love these question-and-answer requests on any topic your heart desires.
2) Language - I can अनुवाद from 任何 மொழி to ਕੋਈ ਵੀ ਭਾਸ਼ਾ (translate from any language to any language ;) )
3) Casual chat - Oh, I am good at chatting. I can keep chatting, I would love to chat, I love to chat. I can do this all day; you get it, let me stop.
4) Technical Assistance - Alpha, beta, Java, SQL, coding, support - you name it, I will help with it.
There are many more beyond these; my capabilities are constantly expanding and evolving as I am continuously trained on more and more data (I like to eat while chatting too ;) )
|
|
null | false
| null |
Who is Michael Jackson and what is he famous for?
|
Michael Jackson was a singer. He was famous as the "King of Pop".
|
|
null | false
| null |
Identify which animal is domesticated or wild: Society finch, Nutria
|
Nutria is wild, Society finch is domesticated
|
|
null | false
| null |
What is SQL?
|
Structured Query Language (SQL) is a standardized programming language that is used to manage relational databases and perform various operations on the data in them.
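A minimal, self-contained example of SQL at work, using Python's built-in sqlite3 module; the table and rows are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway in-memory database
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, age INTEGER)")
conn.execute("INSERT INTO users (name, age) VALUES (?, ?)", ("Ada", 36))
conn.execute("INSERT INTO users (name, age) VALUES (?, ?)", ("Grace", 45))

# A typical SQL query: filter rows and order the result.
for name, age in conn.execute("SELECT name, age FROM users WHERE age > 30 ORDER BY age"):
    print(name, age)
conn.close()
```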
|
|
null | false
| 84
|
Our second approach incorporates shallow syntactic information in downstream tasks via token-level chunk label embeddings. Task training (and test) data is automatically chunked, and chunk boundary information is passed into the task model via BIOUL encoding of the labels. We add randomly initialized chunk label embeddings to task-specific input encoders, which are then fine-tuned for task-specific objectives. This approach does not require a shallow syntactic encoder or chunk annotations for pretraining cwrs, only a chunker. Hence, this can more directly measure the impact of shallow syntax for a given task.
Our second approach incorporates shallow syntactic information in downstream tasks via token level chunk label embeddings.
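A minimal PyTorch sketch of the idea described above: look up a randomly initialized embedding for each token's BIOUL chunk label and combine it with the token representation. The label count, dimensions, and the choice to concatenate rather than sum are assumptions for illustration, not the paper's exact configuration:

```python
import torch
import torch.nn as nn

NUM_CHUNK_LABELS = 23  # assumed size of the BIOUL chunk-label inventory
TOKEN_DIM, CHUNK_DIM = 256, 50

chunk_embed = nn.Embedding(NUM_CHUNK_LABELS, CHUNK_DIM)  # randomly initialized

def augment(token_reprs, chunk_label_ids):
    """Concatenate chunk-label embeddings onto token representations.

    token_reprs: (batch, seq_len, TOKEN_DIM) from the task input encoder
    chunk_label_ids: (batch, seq_len) integer BIOUL ids from an external chunker
    """
    return torch.cat([token_reprs, chunk_embed(chunk_label_ids)], dim=-1)
```

The augmented representations are then fed to the task-specific encoder, and the chunk embeddings are fine-tuned with the task objective.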
|
How does the second approach incorporate shallow syntactic information in downstream tasks?
|
Via token level chunk label embeddings.
|
null | false
| null |
What are some uses for a garden bucket with a hole in it?
|
A bucket with a hole in it may no longer be able to hold liquid. However, the bucket can still be useful in many other scenarios. For example, it can be used to hold solids (e.g. rocks, dirt, toys) that are generally larger than the hole. The bucket could also be repurposed as a watering can, or used as a small seat to sit on while working in the garden.
|
|
null | false
| null |
Are lilies safe for cats?
|
No, lilies are toxic to cats if consumed and should not be kept in a household with cats
|
|
null | false
| null |
What advice would you give on touring India?
|
India is a very large and diverse country. Despite many commonalities such as religion and weather, the east, west, north and south of India have distinct variations in culture, food, music and attire. There are even variations in language at the state level.
There are 28 states and 8 Union Territories in India. Union Territories are administered by the President through an Administrator appointed by him/her.
Besides the mega cities Mumbai, Delhi, Calcutta, Bengaluru and Chennai, smaller towns such as Udaipur, Jaipur, Ahmedabad, Agra, Amritsar and Pune also attract a good number of visitors. The Indian government promotes tourism under the "Incredible India" slogan.
The recent devaluation of the Indian currency provides an opportunity to get more bang for your buck, although prices have risen sharply in India due to constant inflation.
Law and order tend to be generally good. However, as a foreigner, be extra careful, as foreigners can be targets of mugging in less populated areas at night.
|
|
null | false
| null |
Rowing, sometimes called crew in the United States, is the sport of racing boats using oars. It differs from paddling sports in that rowing oars are attached to the boat using oarlocks, while paddles are not connected to the boat. Rowing is divided into two disciplines: sculling and sweep rowing. In sculling, each rower holds two oars—one in each hand, while in sweep rowing each rower holds one oar with both hands. There are several boat classes in which athletes may compete, ranging from single sculls, occupied by one person, to shells with eight rowers and a coxswain, called eights. There are a wide variety of course types and formats of racing, but most elite and championship level racing is conducted on calm water courses 2 kilometres (1.2 mi) long with several lanes marked using buoys.
|
What is sweep rowing?
|
Sweep rowing is a rowing discipline in which rowers hold one oar with both hands.
|
1912.02761
| true
| null |
We have presented the first study on social bias in KG embeddings, and proposed a new metric for measuring such bias. We demonstrated that differences in the distributions of entities in real-world knowledge graphs (there are many more male bankers in Wikidata than female) translate into harmful biases related to professions being encoded in embeddings. Given that KGs are formed of real-world entities, we cannot simply equalize the counts; it is not possible to correct history by creating female US Presidents, etc. In light of this, we suggest that care is needed when applying graph embeddings in NLP pipelines, and work needed to develop robust methods to debias such embeddings.
In light of this, we suggest that care is needed when applying graph embeddings in NLP pipelines, and work needed to develop robust methods to debias such embeddings.
|
Do they propose any solution to debias the embeddings?
|
No.
|