| paper_id (string) | yes_no (bool) | paper_index (int64) | evidence (string) | question (string) | answer (string) |
|---|---|---|---|---|---|
null | false
| 284
|
With the massive increase in social interactions on online social networks, there has also been an increase in hateful activities that exploit such infrastructure. On Twitter, hateful tweets are those that contain abusive speech targeting individuals (cyber-bullying, a politician, a celebrity, a product) or particular groups (a country, LGBT people, a religion, a gender, an organization, etc.). Detecting such hateful speech is important for analyzing the public sentiment of one group of users towards another, and for discouraging associated wrongful activities. It is also useful for filtering tweets before content recommendation, or before training AI chatterbots on tweets.
The manual way of filtering out hateful tweets is not scalable, motivating researchers to identify automated ways. In this work, we focus on the problem of classifying a tweet as racist, sexist or neither. The task is quite challenging due to the inherent complexity of natural language constructs – different forms of hatred, different kinds of targets, different ways of representing the same meaning. Most of the earlier work either relies on manual feature extraction BIBREF0 or uses representation learning methods followed by a linear classifier BIBREF1, BIBREF2. However, deep learning methods have recently shown accuracy improvements across a large number of complex problems in speech, vision and text applications. To the best of our knowledge, we are the first to experiment with deep learning architectures for the hate speech detection task.
In this paper, we experiment with multiple classifiers such as Logistic Regression, Random Forest, SVMs, Gradient Boosted Decision Trees (GBDTs) and Deep Neural Networks (DNNs). The feature spaces for these classifiers are in turn defined by task-specific embeddings learned using three deep learning architectures: FastText, Convolutional Neural Networks (CNNs), and Long Short-Term Memory Networks (LSTMs). As baselines, we compare with feature spaces comprising char n-grams BIBREF0, TF-IDF vectors, and Bag of Words vectors (BoWV).
Main contributions of our paper are as follows: (1) We investigate the application of deep learning methods for the task of hate speech detection. (2) We explore various tweet semantic embeddings like char n-grams, word Term Frequency-Inverse Document Frequency (TF-IDF) values, Bag of Words Vectors (BoWV) over Global Vectors for Word Representation (GloVe), and task-specific embeddings learned using FastText, CNNs and LSTMs. (3) Our methods beat state-of-the-art methods by a large margin ( INLINEFORM0 18 F1 points better).
In this paper, we experiment with multiple classifiers such as Logistic Regression, Random Forest, SVMs, Gradient Boosted Decision Trees (GBDTs) and Deep Neural Networks (DNNs).
|
What classifiers do the authors experiment with?
|
experiment with multiple classifiers, such as Logistic Regression, Random Forest, SVMs, Gradient Boosted Decision Trees (GBDTs) and Deep Neural Networks (DNNs).
|
null | false
| null |
Which of the following are colors: red, black, yellow, orange, sun, sunflower, chips, book, white, pink, blue, keyboard.
|
The colors are: red, black, yellow, orange, white, pink and blue.
|
|
null | false
| null |
In fluid dynamics, turbulence or turbulent flow is fluid motion characterized by chaotic changes in pressure and flow velocity. It is in contrast to a laminar flow, which occurs when a fluid flows in parallel layers, with no disruption between those layers.
|
What characterizes turbulent flow
|
Turbulent flow is characterized by chaotic changes in pressure and flow velocity.
|
null | false
| null |
Where is Sugarloaf mountain located?
|
Sugarloaf Mountain, Pão de Açúcar in Portuguese, is located at the entrance of Guanabara Bay in the city of Rio de Janeiro, the capital of Rio de Janeiro state in Brazil.
|
|
null | false
| null |
Given the following information about the speed of light:
The speed of light in vacuum, which has a refractive index of 1, is 300,000 km/sec
The speed of light in water, which has a refractive index of 1.3, is 225,000 km/sec
The speed of light in diamond, which has a refractive index of 2.4, is 125,000 km/sec
Estimate the speed of light in glass which has a refractive index of 1.5
|
From the information given we can see that the speed of light is inversely proportional to the refractive index of the medium. We can estimate the speed of light in glass as the speed of light in vacuum divided by the medium's refractive index. In the case of glass that would be 300,000 km/sec ÷ 1.5 = 200,000 km/sec.
The estimated speed of light in glass will be around 200,000 km/sec.
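As a quick illustration of the arithmetic above (not part of the original data), here is a minimal Python sketch; the constant and function name are just for the example.

```python
# Minimal sketch: estimate the speed of light in a medium as v = c / n,
# with c = 300,000 km/sec in vacuum (refractive index 1).
C_VACUUM_KM_PER_SEC = 300_000

def estimated_speed(refractive_index: float) -> float:
    """Return the estimated speed of light (km/sec) in a medium."""
    return C_VACUUM_KM_PER_SEC / refractive_index

print(estimated_speed(1.5))  # glass -> 200000.0 km/sec
```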
|
|
null | false
| null |
Why are children so special?
|
Children are incredibly special because they are new to the earth. They have fewer biases and are able to see, imagine and create with perspectives that are unavailable to adults. For example, when a child plays in the sand, she may have seen or touched sand only twenty times before. For her, the sand holds the promise of new textures, the opportunity to build a yet-unseen creation, and hours of delight before her. The adult sees sand and worries about dirt and bringing sand into the house. The adult has had many experiences with sand and not all of them have been positive. The adult's vision of sand is clouded by experience. We must hold on and seek out the magic of children so that we can continue to find and enjoy the mysteries of the universe, even if they are not all pleasant every time.
|
|
null | false
| null |
Adani Group is an Indian multinational conglomerate, headquartered in Ahmedabad. It was founded by Gautam Adani in 1988 as a commodity trading business, with the flagship company Adani Enterprises. The Group's diverse businesses include port management, electric power generation and transmission, renewable energy, mining, airport operations, natural gas, food processing and infrastructure.
In April 2021, Adani Group crossed US$100 billion in market capitalization, and in April 2022 it crossed the $200 billion mark, in both cases becoming the third Indian conglomerate to do so, after Tata Group and Reliance Industries. In November 2022, it reached $280 billion (INR 24 trillion) surpassing Tata Group. Adani subsequently lost more than $104 billion of market capitalization after fraud and market manipulation allegations by short-seller firm Hindenburg Research. The Adani Group has also attracted other controversies due to various reports of irregular practices. More than 60 percent of the Adani Group's revenue is derived from coal-related businesses. The company's corporate debt totaled $30 billion in 2022.
|
From the passage provided, extract the total corporate debt of Adani group
|
Adani groups corporate debt totaled $30 billion in 2022
|
null | false
| null |
what's a scenario in which you see lots of value for CDC ?
|
1. Imagine a very large online service, e.g., Snapchat, keeping user activities in something like MongoDB. Update rates are very high, the table size is huge, the table footprint is global.
2. You need the data in OLTP because you're trying to use the user data to target ads for them, so storing in Delta is not an option. You need an online solution. This online store maintains aggregates (e.g., clicks over the last 5 min, interest vectors, etc.). Google would use Kansas for this, Facebook uses sharded MySQL + Tao.
3. You need to use this per-user data to analyze your online advertising strategy and re-train your ML models to increase yield.
4. You cannot hit the serving DB with analytical workloads; it is not optimized for that. Many NoSQL DBs don't even allow the kinds of queries you need to run.
5. The database is very large, so taking a snapshot is expensive and slow.
6. A lot of users are inactive at any specific time of the day, so snapshotting the entire state is also inefficient.
7. I want to also know changes in the original production DB. The production DB is an OLTP store and cannot keep that much state.
CDC can be an effective answer for this use case.
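As a concrete, hypothetical illustration of consuming such changes, the sketch below uses a MongoDB change stream via pymongo; the connection string, database and collection names are made up, and change streams require a replica set or sharded cluster.

```python
from pymongo import MongoClient

# Hypothetical connection details for illustration only.
client = MongoClient("mongodb://localhost:27017")
activities = client["demo_db"]["user_activity"]

# A change stream surfaces inserts/updates/deletes as they happen, so a
# downstream analytics pipeline can consume deltas instead of repeatedly
# snapshotting the whole table.
with activities.watch(full_document="updateLookup") as stream:
    for change in stream:
        print(change["operationType"], change.get("fullDocument"))
```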
|
|
null | false
| null |
Before Miley Cyrus recorded "Wrecking Ball," it was offered to which singer?
|
Beyonce.
|
|
null | false
| null |
What was Hurricane Ian?
|
Hurricane Ian was a powerful Atlantic hurricane which was the third-costliest weather disaster on record, and the deadliest hurricane to strike the state of Florida since the 1935 Labor Day hurricane. Ian caused widespread damage across western Cuba and the southeast United States, especially the states of Florida, South Carolina, and North Carolina. It was the ninth named storm, fourth hurricane, and second major hurricane of the 2022 Atlantic hurricane season. Ian was the strongest hurricane to hit the state since Hurricane Michael in 2018, the most recent major hurricane to strike the west coast of Florida since Hurricane Irma in 2017, the first Category 5 hurricane in the Atlantic since Hurricane Lorenzo of 2019, and the strongest Category 4 hurricane to hit the region since Hurricane Charley in 2004, which made a near identical landfall.
|
|
null | false
| 95
|
People are increasingly using social networking platforms such as Twitter, Facebook, YouTube, etc. to communicate their opinions and share information. Although the interactions among users on these platforms can lead to constructive conversations, they have been increasingly exploited for the propagation of abusive language and the organization of hate-based activities BIBREF0, BIBREF1, especially due to the mobility and anonymous environment of these online platforms. Violence attributed to online hate speech has increased worldwide. For example, in the UK, there has been a significant increase in hate speech towards the immigrant and Muslim communities following the UK's decision to leave the EU and the Manchester and London attacks. The US has also seen a marked increase in hate speech and related crime following the Trump election. Therefore, governments and social network platforms confronting the trend must have tools to detect aggressive behavior in general, and hate speech in particular, as these forms of online aggression not only poison the social climate of the online communities that experience it, but can also provoke physical violence and serious harm BIBREF1.
Recently, the problem of online abusive detection has attracted scientific attention. Proof of this is the creation of the third Workshop on Abusive Language Online or Kaggle’s Toxic Comment Classification Challenge that gathered 4,551 teams in 2018 to detect different types of toxicities (threats, obscenity, etc.). In the scope of this work, we mainly focus on the term hate speech as abusive content in social media, since it can be considered a broad umbrella term for numerous kinds of insulting user-generated content. Hate speech is commonly defined as any communication criticizing a person or a group based on some characteristics such as gender, sexual orientation, nationality, religion, race, etc. Hate speech detection is not a stable or simple target because misclassification of regular conversation as hate speech can severely affect users’ freedom of expression and reputation, while misclassification of hateful conversations as unproblematic would maintain the status of online communities as unsafe environments BIBREF2.
To detect online hate speech, a large number of scientific studies have used Natural Language Processing (NLP) in combination with Machine Learning (ML) and Deep Learning (DL) methods BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF0. Although supervised machine learning-based approaches have used different text mining-based features such as surface features, sentiment analysis, lexical resources, linguistic features, knowledge-based features or user-based and platform-based metadata BIBREF8, BIBREF9, BIBREF10, they necessitate a well-defined feature extraction approach. The trend now seems to be changing direction, with deep learning models being used for both feature extraction and the training of classifiers. These newer models apply deep learning approaches such as Convolutional Neural Networks (CNNs), Long Short-Term Memory Networks (LSTMs), etc. BIBREF6, BIBREF0 to enhance the performance of hate speech detection models; however, they still suffer from a lack of labelled data or an inability to generalize.
Here, we propose a transfer learning approach for hate speech understanding using a combination of the unsupervised pre-trained model BERT BIBREF11 and some new supervised fine-tuning strategies. As far as we know, it is the first time that such exhaustive fine-tuning strategies are proposed along with a generative pre-trained language model to transfer learning to low-resource hate speech languages and improve performance of the task. In summary:
We propose a transfer learning approach using the pre-trained language model BERT learned on English Wikipedia and BookCorpus to enhance hate speech detection on publicly available benchmark datasets. Toward that end, for the first time, we introduce new fine-tuning strategies to examine the effect of different embedding layers of BERT in hate speech detection.
Our experiment results show that using the pre-trained BERT model and fine-tuning it on the downstream task by leveraging syntactical and contextual information of all BERT's transformers outperforms previous works in terms of precision, recall, and F1-score. Furthermore, examining the results shows the ability of our model to detect some biases in the process of collecting or annotating datasets. It can be a valuable clue in using pre-trained BERT model for debiasing hate speech datasets in future studies.
Our experiment results show that using the pre-trained BERT model and fine-tuning it on the downstream task by leveraging syntactical and contextual information of all BERT’s transformers outperforms previous works in terms of precision, recall, and F1-score. Furthermore, examining the results shows the ability of our model to detect some biases in the process of collecting or annotating datasets.
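For orientation only, here is a minimal sketch of fine-tuning a pre-trained BERT classifier with the Hugging Face transformers library; it is not the authors' code or their specific fine-tuning strategies, and the checkpoint name, label set and hyperparameters are placeholders.

```python
import torch
from transformers import BertForSequenceClassification, BertTokenizer

# Placeholder checkpoint and a 3-class setup (e.g. hate / offensive / neither).
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=3)

texts = ["example tweet one", "example tweet two"]   # stand-in data
labels = torch.tensor([0, 2])

enc = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
outputs = model(**enc, labels=labels)

# One fine-tuning step: backpropagate the classification loss through BERT.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
outputs.loss.backward()
optimizer.step()
print(float(outputs.loss))
```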
|
What is the performance of the pre-trained BERT model?
|
The BERT model outperforms previous works in terms of precision, recall, and F1-score. Furthermore, it also can detect some biases in the process of collecting or annotating datasets.
|
null | false
| null |
The Dallas Cowboys joined the NFL as an expansion team in 1960. The team's national following might best be represented by its NFL record of consecutive sell-outs. The Cowboys' streak of 190 consecutive sold-out regular and post-season games (home and away) began in 2002. The franchise has made it to the Super Bowl eight times, tying it with the Pittsburgh Steelers and Denver Broncos for second-most Super Bowl appearances in history behind the New England Patriots' record 11 appearances. The Cowboys have won eight NFC championships, the most in the conference's history. The Cowboys are the only NFL team to record 20 straight winning seasons (from 1966 to 1985) during which they missed the playoffs only twice (1974 and 1984).
|
In YAML tell me a list of teams mentioned in the text. It should be in the form `location: ${location}`, `mascot: ${mascot}`
|
- location: Dallas
mascot: Cowboys
- location: Pittsburgh
mascot: Steelers
- location: Denver
mascot: Broncos
- location: New England
mascot: Patriots
|
1906.06045
| false
| null |
The training objective is to minimize the negative likelihood of the aligned unanswerable question $\tilde{q}$ given the answerable question $q$ and its corresponding paragraph $p$ that contains the answer $a$: $\mathcal{L} = -\sum_{(\tilde{q}, q, p, a) \in \mathcal{D}} \log P(\tilde{q} \mid q, p, a; \theta)$, where $\mathcal{D}$ is the training corpus and $\theta$ denotes all the parameters. Sequence-to-sequence and pair-to-sequence models are trained with the same objective.
The training objective is to minimize the negative likelihood of the aligned unanswerable question $\tilde{q}$ given the answerable question $q$ and its corresponding paragraph $p$ that contains the answer $a$: $\mathcal{L} = -\sum_{(\tilde{q}, q, p, a) \in \mathcal{D}} \log P(\tilde{q} \mid q, p, a; \theta)$, where $\mathcal{D}$ is the training corpus and $\theta$ denotes all the parameters. Sequence-to-sequence and pair-to-sequence models are trained with the same objective.
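For readers who prefer code, a minimal PyTorch sketch of this negative log-likelihood objective is shown below; the tensor shapes and names are stand-ins, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

# Stand-in shapes: the decoder emits one logit vector over the vocabulary
# for each position of the target unanswerable question q~.
vocab_size, target_len = 30_000, 12
logits = torch.randn(target_len, vocab_size)            # decoder outputs
targets = torch.randint(0, vocab_size, (target_len,))   # gold tokens of q~

# cross_entropy applies log-softmax internally, so this computes
# -sum_t log P(q~_t | q, p, a; theta) for one training example.
loss = F.cross_entropy(logits, targets, reduction="sum")
print(loss.item())
```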
|
What is the training objective of their pair-to-sequence model?
|
The answers are shown as follows:
* is to minimize the negative likelihood of the aligned unanswerable question $\tilde{q}$ given the answerable question $q$ and its corresponding paragraph $p$ that contains the answer
|
null | false
| null |
Give me a bullet list of 5 US states that have a coastline with the Pacific Ocean
|
- California
- Oregon
- Washington
- Alaska
- Hawaii
|
|
null | false
| null |
Tell me if these are safe or not safe to let me child play with: the monkey bars, matches, scissors, rock paper scissors, a baseball, the stove, a cooking board game
|
the monkey bars: safe, matches: not safe, scissors: not safe, rock paper scissors: safe, a baseball: safe, the stove: not safe, a cooking board game: safe
|
|
null | false
| null |
Is water wet?
|
When water comes into direct contact with other things, the other things become wet. Water as we experience it is not a singular entity, rather a collection of a large number of water molecules. The water molecules touch other water molecules, and as previously established, when water touches other things, the other things become wet. Therefore, because there is a point of contact between water molecules (a strong polar bond in this case), water is wet.
|
|
1810.10254
| false
| null |
In this section, we present the experimental settings for the pointer-generator network and the language model. In our experiments, our pointer-generator model has 500-dimensional hidden states and word embeddings. We use 50k words as our vocabulary for source and target. We evaluate our pointer-generator performance using the BLEU score. We take the best model as our generator and, during the decoding stage, we generate 1-best and 3-best using beam search with a beam size of 5. For the input, we build a parallel monolingual corpus by translating the mixed language sequence using Google NMT to English ( INLINEFORM0 ) and Mandarin ( INLINEFORM1 ) sequences. Then, we concatenate the translated English and Mandarin sequences and assign code-switching sequences as the labels ( INLINEFORM2 ).
For the input, we build a parallel monolingual corpus by translating the mixed language sequence using Google NMT to English ( INLINEFORM0 ) and Mandarin ( INLINEFORM1 ) sequences.
|
What parallel corpus did they use?
|
Parallel monolingual corpus in English and Mandarin
|
null | false
| null |
On December 17, 2022, Edey became both the 55th player in Purdue's history to reach 1,000 career points and the 11th player in Purdue's history to reach 100 career blocks.
During the 2022-23 season, Edey received Big Ten Player of the Week six times, tying the school record for most awards in a single season (Caleb Swanigan, 2016-17) and rising to second most all-time in men's basketball in the Big Ten (Evan Turner, 2010-11).
At the close of the season, Edey was named the Big Ten Player of the Year and Sporting News National Player of the Year. He was also named a consensus first-team All-American.
|
What awards did Zach Edey win?
|
Zach Edey won the following awards:
- Big Ten Player of the Week (6x)
- Big Ten Player of the Year
- Sporting News National Player of the Year
- consensus first-team All-American
|
null | false
| 92
|
Table TABREF13 reports the main results. In addition to BERT BIBREF10 and RoBERTa BIBREF25, we also include XLNet BIBREF30 as an example of a strong, but different, model architecture. We show test set performance on the ANLI test sets per round, the total ANLI test set, and the exclusive test subset (examples from test-set-exclusive workers). We also show accuracy on the SNLI test set and the MNLI development (for the purpose of comparing between different model configurations across table rows) set. In what follows, we briefly discuss our observations.
Generally, our results indicate that training on more rounds improves model performance.
|
Does the training on more rounds improve model performance?
|
Yes, it does.
|
null | false
| null |
What is a Delta Lake?
|
A Delta Lake is an open-source storage layer designed to run on top of an existing data lake and improve its reliability, security, and performance. Delta Lakes support ACID transactions, scalable metadata, unified streaming, and batch data processing.
|
|
null | false
| null |
Tell me three ways to get coconut from a coconut tree
|
Here are the three ways to get coconut from a coconut tree
1) Using a pole and saw:
Attach a curved saw to the end of a very long stick. Make sure the saw does not fall when you pull on the coconut. Stand away from the bottom of the tree at a safe distance, hook the saw onto the stem of the coconut and pull down.
2) Using foot strap to climb:
Make a foot strap out of a cloth or a rope and climb the coconut tree by gripping the trunk with your hands and the strap with your feet. Make sure the loop around your feet is stretched tight against the tree. Once you reach the top of the tree, you can cut a coconut with a knife that you carried on your belt.
3) Using a ladder to climb:
This method is useful for shorter coconut trees. Keep the ladder slanting against the coconut tree on a stable ground. Make sure the ladder is stable while you climb. Climb up the ladder and once you can reach the coconut, cut it out with a knife.
|
|
null | false
| null |
Give me a list of your favorite pet breeds
|
My favorite cat breed is Ragdoll. My favorite dog breed is Shiba Inu.
|
|
null | false
| null |
In Christianity, salvation (also called deliverance or redemption) is the "saving human beings from sin and its consequences, which include death and separation from God" by Christ's death and resurrection, and the justification following this salvation.
While the idea of Jesus' death as an atonement for human sin was recorded in the Christian Bible, and was elaborated in Paul's epistles and in the Gospels, Paul saw the faithful redeemed by participation in Jesus' death and rising. Early Christians regarded themselves as partaking in a new covenant with God, open to both Jews and Gentiles, through the sacrificial death and subsequent exaltation of Jesus Christ. Early Christian notions of the person and sacrificial role of Jesus in human salvation were further elaborated by the Church Fathers, medieval writers and modern scholars in various atonement theories, such as the ransom theory, Christus Victor theory, recapitulation theory, satisfaction theory, penal substitution theory and moral influence theory.
Variant views on salvation (soteriology) are among the main fault lines dividing the various Christian denominations, including conflicting definitions of sin and depravity (the sinful nature of mankind), justification (God's means of removing the consequences of sin), and atonement (the forgiving or pardoning of sin through the suffering, death and resurrection of Jesus).
|
What is salvation in christianity?
|
Salvation in Christianity is the redemption of human beings from the consequences of their sins through the death and resurrection of Jesus Christ. Jesus's death is counted as the payment for the sin of all mankind, as recorded by Paul in his letters. This salvation is open to everyone, not just the Jews. Anyone who believes that Jesus died for their sins and rose again receives the salvation that God has to offer.
|
null | false
| null |
Identify which animal species is alive or extinct: Maiasaura, Sea Lions
|
Sea Lions are alive, Maiasaura is extinct.
|
|
null | false
| null |
What are the benefits of meditation?
|
Meditation enhances focus, memory, and mood. Meditation has also been shown to have other benefits such as reducing the negative impacts of lack of sleep.
|
|
null | false
| null |
The viola (/viˈoʊlə/ vee-OH-lə, also UK: /vaɪˈoʊlə/ vy-OH-lə, Italian: [ˈvjɔːla, viˈɔːla]) is a string instrument that is bowed, plucked, or played with varying techniques. Slightly larger than a violin, it has a lower and deeper sound. Since the 18th century, it has been the middle or alto voice of the violin family, between the violin (which is tuned a perfect fifth above) and the cello (which is tuned an octave below). The strings from low to high are typically tuned to C3, G3, D4, and A4.
|
What is an string instrument that is similar to a Violin, but larger in size and with a lower and deeper sound?
|
The instrument you are looking for is a Viola.
|
1911.01188
| false
| null |
Therefore, in our analysis, we focus on the chain features related to the phenomena of shining through and explicitation. These features include number of mentions, number of chains, average chain length and the longest chain size. Machine-translated texts are compared to their sources and the corresponding human translations in terms of these features. We expect to find shining through and explicitation effects in automatic translations.
Therefore, in our analysis, we focus on the chain features related to the phenomena of shining through and explicitation.
|
Which coreference phenomena are analyzed?
|
The answers are shown as follows:
* shining through
* explicitation
|
null | false
| null |
Especially among growers in the US, the UK, and Australia, there has been a competition since the 1990s to grow the hottest chili pepper. Chili pepper species and cultivars registering over 1,000,000 Scoville Heat units (SHU) are called "super-hots". Past Guinness World Record holders (in increasing order of hotness) include the ghost pepper, Infinity chili, Trinidad Moruga scorpion, Naga Viper pepper, and Trinidad Scorpion Butch T. The current record holder, declared in 2017, is the Carolina Reaper, at more than 1.6 million SHU.
|
Given the information provided about hottest chili pepper, which is the hottest naturally grown chili pepper in the world?
|
Although the Carolina Reaper currently holds the Guinness World Record for the hottest chili pepper, it is not the hottest chili pepper that is cultivated naturally. The Ghost Pepper, also known as Bhut Jolokia, is the spiciest chili pepper in the world that is grown naturally. It is most commonly found in Bangladesh and northeastern India. This pepper is frequently referred to as the "ghost pepper" in the West since "bhut" means "ghost" in the Assamese language. These peppers have thin, easily torn, dented skin.
A ghost pepper and an Ed Currie-bred Carolina Reaper can be compared because both are extremely fiery and have a sweet, fruity flavor. The Carolina Reaper has more than twice the heat of the ghost pepper when it is at its fiercest, despite the fact that both peppers are extremely spicy. Ghost peppers have a maximum SHU of 1,041,427, while the Carolina Reaper has a maximum SHU of 2.2 million.
Ghost peppers have a potent, sweet-fruity chili flavor. You do not experience any heat for the first 30 to 45 seconds. Once the heat sets in, expect shortness of breath, hiccups, perspiration, and watery eyes. The burning typically worsens over 10 to 15 minutes before fading after 30 to 40 minutes.
|
null | false
| null |
How can I become a private pilot?
|
First, determine what type of pilot you want to become. There are different requirements for private and commercial pilots. Next, enroll in a flight training school or find a certified instructor. You'll also need to find a medical professional who can issue you a 1st or 2nd class medical certificate. You'll be up in the air quickly after that but must pass the FAA written exam, accumulate enough training hours, and pass a final check ride before you earn your pilot certificate.
|
|
null | false
| 145
|
Language modeling is a probabilistic description of language phenomena. It provides essential context to distinguish words which sound similar and therefore has one of the most useful applications in Natural Language Processing (NLP), especially in downstream tasks like Automatic Speech Recognition (ASR). Recurrent Neural Networks (RNNs), especially Long Short Term Memory (LSTM) networks BIBREF0, have been the typical solution to language modeling and do achieve strong results. In spite of these results, their fundamental sequential computation constraint has restricted their use in the modeling of long-term dependencies in sequential data. To address these issues, the Transformer architecture was introduced. The Transformer relies completely on an attention mechanism to form global dependencies between input and output. It also offers more parallelization and has achieved state-of-the-art results in language modeling, outperforming LSTM models BIBREF1.
In recent years, we have seen a lot of development based on these standard transformer models, particularly on unsupervised pre-training (BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7), which has set state-of-the-art results on multiple NLP benchmarks. One such model architecture has been the Bidirectional Encoder Representations from Transformers (BERT) model, which uses a deep bidirectional transformer architecture.
Another architecture of interest would be the Transformer-XL, which introduces the notion of recurrence in a self-attention model.
The primary research focus though has been mostly on English language for which abundant data is present. It is interesting to see the performance of these models for an agglutinative language like Finnish, which is morphologically richer than English.
In this project, we explore the implementation of Transformer-based models (BERT and Transformer-XL) in language modeling for Finnish. We will use the same training data as in BIBREF8 so that we can do fair comparisons with the performance of the LSTM models. Also, as the BERT model is a bi-directional transformer, we will have to approximate the conditional probabilities given a sequence of words. We also experiment with using sub-word units with Transformer-XL to cope with the large vocabulary problems associated with the Finnish Language. With smaller units, the modeled sequences are longer, and we hope that the recursive XL architecture can allow us to still model long term effects. To the best of our knowledge this is the first work with the Finnish language to use the following:
Approximation of perplexity using a BERT architecture
Using Transformer-XL architecture with sub-word units.
Comparison of Transformer and LSTM models as language models in the same comparable settings with an agglutinative language.
We also experiment with using sub-word units with Transformer-XL to cope with the large vocabulary problems associated with the Finnish Language.
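To illustrate what modelling with sub-word units involves (this is not code from the paper), a SentencePiece vocabulary could be trained and applied as sketched below; the corpus path and vocabulary size are placeholders.

```python
import sentencepiece as spm

# Placeholder corpus path and vocabulary size.
spm.SentencePieceTrainer.train(
    input="finnish_corpus.txt", model_prefix="fi_subword", vocab_size=19000
)
sp = spm.SentencePieceProcessor(model_file="fi_subword.model")

# Long agglutinative word forms are split into shorter, reusable pieces,
# which keeps the vocabulary manageable at the cost of longer sequences.
print(sp.encode("epäjärjestelmällisyydellänsäkään", out_type=str))
```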
|
What's the function of using sub-word units with Transformer-XL?
|
To cope with the large vocabulary problems associated with the Finnish Language.
|
null | false
| 110
|
The Transformer architecture BIBREF0 for deep neural networks has quickly risen to prominence in NLP through its efficiency and performance, leading to improvements in the state of the art of Neural Machine Translation BIBREF1, BIBREF2, as well as inspiring other powerful general-purpose models like BERT BIBREF3 and GPT-2 BIBREF4. At the heart of the Transformer lie multi-head attention mechanisms: each word is represented by multiple different weighted averages of its relevant context. As suggested by recent works on interpreting attention head roles, separate attention heads may learn to look for various relationships between tokens BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9.
The attention distribution of each head is typically predicted using the softmax normalizing transform. As a result, all context words have non-zero attention weight. Recent work on single attention architectures suggests that using sparse normalizing transforms in attention mechanisms such as sparsemax – which can yield exactly zero probabilities for irrelevant words – may improve performance and interpretability BIBREF12, BIBREF13, BIBREF14. Qualitative analysis of attention heads BIBREF0 suggests that, depending on what phenomena they capture, heads tend to favor flatter or more peaked distributions.
Recent works have proposed sparse Transformers BIBREF10 and adaptive span Transformers BIBREF11. However, the “sparsity" of those models only limits the attention to a contiguous span of past tokens, while in this work we propose a highly adaptive Transformer model that is capable of attending to a sparse set of words that are not necessarily contiguous. Figure FIGREF1 shows the relationship of these methods with ours.
Our contributions are the following:
We introduce sparse attention into the Transformer architecture, showing that it eases interpretability and leads to slight accuracy gains.
We propose an adaptive version of sparse attention, where the shape of each attention head is learnable and can vary continuously and dynamically between the dense limit case of softmax and the sparse, piecewise-linear sparsemax case.
We make an extensive analysis of the added interpretability of these models, identifying both crisper examples of attention head behavior observed in previous work, as well as novel behaviors unraveled thanks to the sparsity and adaptivity of our proposed model.
We introduce sparse attention into the Transformer architecture, showing that it eases interpretability and leads to slight accuracy gains.
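To make the contrast with softmax concrete, here is a small NumPy sketch of the standard sparsemax transform (the Martins & Astudillo formulation, not code from this paper); unlike softmax, it can assign exactly zero weight to irrelevant positions.

```python
import numpy as np

def sparsemax(z: np.ndarray) -> np.ndarray:
    """Euclidean projection of the score vector z onto the probability simplex."""
    z_sorted = np.sort(z)[::-1]
    cumsum = np.cumsum(z_sorted)
    k = np.arange(1, z.size + 1)
    support = 1.0 + k * z_sorted > cumsum      # positions kept in the support
    k_z = k[support][-1]
    tau = (cumsum[support][-1] - 1.0) / k_z    # threshold subtracted from scores
    return np.maximum(z - tau, 0.0)

scores = np.array([1.5, 0.3, 0.1, -1.0])
print(sparsemax(scores))                        # [1. 0. 0. 0.] -- exact zeros
print(np.exp(scores) / np.exp(scores).sum())    # softmax: all weights non-zero
```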
|
What's the advantage of the model in the paper?
|
It eases interpretability and leads to slight accuracy gains.
|
null | false
| null |
Earth is the third planet from the Sun and the only place known in the universe where life has originated and found habitability. While Earth may not contain the largest volumes of water in the Solar System, only Earth sustains liquid surface water, extending over 70.8% of the Earth with its ocean, making Earth an ocean world. Earth's polar regions currently retain most of all other water with large sheets of ice covering ocean and land, dwarfing Earth's groundwater, lakes, rivers and atmospheric water. Land, consisting of continents and islands, extends over 29.2% of the Earth and is widely covered by vegetation. Below Earth's surface material lies Earth's crust consisting of several slowly moving tectonic plates, which interact to produce mountain ranges, volcanoes, and earthquakes. Earth's liquid outer core generates a magnetic field that shapes the magnetosphere of Earth, largely deflecting destructive solar winds and cosmic radiation.
|
How much of the earth is covered in land?
|
29.2% of the earth's surface is covered in land.
|
null | false
| null |
Some people describe braising as "transformative". Why is this so?
|
Braising is a common cooking technique where some primary ingredient (typically a cut of tough meat) cooks for a long time under low temperature and with lots of liquid. The main advantage of braising is that it causes the collagen in the meat fibers to break down into gelatin, which makes the meat fibers more separated and thus the meat more tender. In addition, the gelatin mixes with the remaining cooking liquids to create a scrumptious sauce. Thus, the overall process transforms meat from tough into tender, along with a tasty sauce that can be served alongside the meat. Braising is also convenient in that the cook does not have to attend frequently to the food during the cooking process. These two reasons, namely, rendering tough cuts of meat (which also tend to be cheaper) more tender and requiring less effort from the cook, have made braising a common cooking technique among several cultures.
|
|
null | false
| null |
What are five totally distinct, creative ways to describe the usefulness of a child's imagination?
|
1. A child's imagination is a powerful tool, allowing them to create anything they can dream up.
2. A child's imagination can be a source of great joy and wonder.
3. A child's imagination is a source of endless creativity.
4. A child's imagination can be used to develop problem-solving skills.
5. A child's imagination can help them understand the world around them.
|
|
null | false
| null |
Are these places in the United States or Spain? California, New Mexico, Andalusia, Catalonia, Utah, Madrid, Texas
|
California: US, New Mexico: US, Andalusia: Spain, Catalonia: Spain, Utah: US, Madrid: Spain, Texas: US
|
|
null | false
| null |
The 1Malaysia Development Berhad scandal, often referred to as the 1MDB scandal or just 1MDB, describes a corruption, bribery and money laundering conspiracy in which the Malaysian sovereign wealth fund 1Malaysia Development Berhad (1MDB) was systematically embezzled, with assets diverted globally by the perpetrators of the scheme. It had a global scope, implicated institutions and individuals in politics, banking, and entertainment, and sparked criminal investigations across a number of nations. The 1MDB scandal has been described as "one of the world's greatest financial scandals" and declared by the United States Department of Justice as the "largest kleptocracy case to date" in 2016.
|
Please let me know the details of 1MDB saga and scandal.
|
1MDB stands for 1Malaysia Development Berhad, a Malaysian state fund set up in 2009 to promote development through foreign investment and partnerships. The then prime minister, Najib Razak, was its chairman.
The fund has since been at the heart of one of the biggest corruption scandals in the world. The US justice department believes more than $4.5bn was stolen by Jho Low and the resulting scandal has been responsible for the toppling of a government this year and the arrest of Najib, his wife, Rosmah Mansor, and a growing number of close associates.
|
null | false
| null |
What are the five best places to hike in the San Francisco Bay Area and why?
|
1. Alamere Falls includes a waterfall spilling into the Pacific Ocean.
2. Tomales Point includes views of open water in three directions along with close encounters with the local elk population.
3. Land's End is a beautiful cliffside trail right in the city of San Francisco.
4. Mount Tamalpais is a Marin County peak with views of the ocean and the city.
5. The trail at Grey Whale Cove runs south to Montara State Beach, and in the spring and fall seasons is the perfect place to spot migrating grey whales.
|
|
1911.01799
| false
| null |
FLOAT SELECTED: Table 1. The distribution over genres.
FLOAT SELECTED: Table 1. The distribution over genres.
|
What genres are covered?
|
entertainment, interview, singing, play, movie, vlog, live broadcast, speech, drama, recitation and advertisement
|
null | false
| 253
|
As an essential part of a task-oriented dialogue system BIBREF0, the task of natural language generation (NLG) is to produce a natural language utterance containing the desired information given a semantic representation consisting of dialogue act types with a set of slot-value pairs. Conventional methods using hand-crafted rules often generate monotonic utterances and require a substantial amount of human engineering work. Recently, various neural approaches BIBREF1, BIBREF2, BIBREF3 have been proposed to generate accurate, natural and diverse utterances. However, these methods are typically developed for particular domains. Moreover, they are often data-intensive to train. The high annotation cost prevents developers from building their own NLG component from scratch. Therefore, it is extremely useful to train an NLG model that can be generalized to other NLG domains or tasks with a reasonable amount of annotated data. This is referred to as the low-resource NLG task in this paper.
Recently, some methods have been proposed for low-resource NLG tasks. Apart from the simple data augmentation trick BIBREF4, specialized model architectures, including conditional variational auto-encoders (CVAEs, BIBREF3, BIBREF5, BIBREF6) and adversarial domain adaptation critics BIBREF5, have been proposed to learn domain-invariant representations. Although promising results were reported, we found that the datasets used by these methods are simple, tending to enumerate many slots and values in an utterance without much linguistic variation. As a consequence, over-fitting the slots and values in the low-resource target domain could even outperform versions trained with rich source domain examples BIBREF6. Fortunately, there is a new large-scale dialog dataset (MultiWoz, BIBREF7) that contains a great variety of domains and linguistic patterns, allowing us to conduct extensive and meaningful experimental analysis for low-resource NLG tasks.
In this paper, instead of casting the problem as a model-based approach, we propose a generalized optimization-based meta-learning approach to directly enhance the optimization procedure for the low-resource NLG task. We start by arguing that a recently proposed model-agnostic meta-learning algorithm (MAML, BIBREF8) is a nice fit for the low-resource NLG task. Then, we propose a generalized NLG algorithm called Meta-NLG based on MAML by viewing languages in different domains or dialog act types as separate Meta NLG tasks. Following the essence of MAML, the goal of Meta-NLG is to learn a better initialization of model parameters that facilitates fast adaptation to new low-resource NLG scenarios. As Meta-NLG is model-agnostic as long as the model can be optimized by gradient descent, we can apply it to any existing NLG model to optimize it in a way that adapts better and faster to new low-resource tasks.
The main contribution of this paper is two-fold:
In this paper, instead of casting the problem as model-based approaches, we propose a generalized optimization-based meta-learning approach to directly enhance the optimization procedure for the low-resource NLG task.
|
Why do the authors propose a generalized optimization-based meta-learning algorithm?
|
To directly enhance the optimization procedure for the low-resource natural language generation (NLG) task.
|
null | false
| null |
In modern historiography, Ancient Rome refers to Roman civilisation from the founding of the Italian city of Rome in the 8th century BC to the collapse of the Western Roman Empire in the 5th century AD. It encompasses the Roman Kingdom (753–509 BC), Roman Republic (509–27 BC) and Roman Empire (27 BC–476 AD) until the fall of the western empire.
|
How long did the Roman Empire last
|
Ancient Rome as a whole lasted about 1,229 years, from the founding of Rome in 753 BC to the fall of the Western Roman Empire in 476 AD; the Roman Empire proper (27 BC–476 AD) lasted about 500 years.
|
null | false
| null |
Is a Hot dog a sandwich?
|
Technically, a sandwich is defined as being made from two slices of some sort of bread or a roll split in half. By that definition, yes, a hot dog is a sandwich.
This is, however, an extremely contentious subject that many dispute.
|
|
1610.08597
| false
| null |
To build the classifiers we used three different learning algorithms, namely Logistic Regression (LR), Random Forest (RF), and Support Vector Machines (SVM). We used version 0.17.1 of scikit-learn machine learning library for Python to implement the classifiers. An open source tool of Python, Gensim BIBREF19 was used to generate the word embeddings. We compare our results with the two best performing systems reported in BIBREF9 which are the two state-of-the-art models for identifying gang members in Twitter. Both baseline models are built from a random forest classifier trained over term frequencies for unigrams in tweet text, emoji, profile data, YouTube video data and image tags. Baseline Model(1) considers all 3,285 gang and non-gang member profiles in our dataset. Baseline Model(2) considers all Twitter profiles that contain every feature type discussed in Section SECREF2 . Because a Twitter profile may not have every feature type, baseline Model(1) represents a practical scenario where not every Twitter profile contains every type of feature. However, we compare our results to both baseline models and report the improvements.
To build the classifiers we used three different learning algorithms, namely Logistic Regression (LR), Random Forest (RF), and Support Vector Machines (SVM).
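A minimal sketch of such a multi-classifier setup with scikit-learn is shown below; the feature matrix and labels are random stand-ins for the profile-level features and gang / non-gang labels described in the passage.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

# Stand-in data: 200 profiles, each with a 50-dimensional feature vector.
X = np.random.rand(200, 50)
y = np.random.randint(0, 2, size=200)   # 1 = gang member, 0 = non-gang

classifiers = {
    "LR": LogisticRegression(),
    "RF": RandomForestClassifier(n_estimators=100),
    "SVM": SVC(kernel="linear"),
}
for name, clf in classifiers.items():
    clf.fit(X, y)
    print(name, "training accuracy:", clf.score(X, y))
```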
|
Which supervised learning algorithms are used in the experiments?
|
The answers are shown as follows:
* Logistic Regression (LR)
* Random Forest (RF)
* Support Vector Machines (SVM)
|
null | false
| null |
What are some of the most popular fantasy books from the last ten years?
|
Some of the most popular fantasy books from the last ten years include The Stormlight Archives series by Brandon Sanderson, the first of which is The Way of Kings, as well as the series A Song of Ice and Fire from George R.R. Martin, popularly known as A Game of Thrones due to both the first book in the series as well as the popular TV show based on the series. Other popular fantasy books include The Name of the Wind by Patrick Rothfuss, and, while certainly not written in the last ten years, The Lord of the Rings by J.R.R. Tolkien remains extremely popular.
|
|
null | false
| null |
What are The Benefits of Trekking?
|
1) The benefits of sport in general: like all physical activities, trekking is beneficial for your health: it develops your bones, reduces the risk of osteoporosis, increases your cardiovascular and respiratory capacities, strengthens your immune system, and improves your muscular capacity.
2) Disconnection from everyday life really helps to take a step back from mundane worries and connect with yourself.
3) You work both the front and the back of your body, and if you have poles, you work both your upper and lower body as well.
|
|
null | false
| null |
What is wholesale banking?
|
Wholesale banking is the provision of services by banks to larger customers or organizations such as mortgage brokers, large corporate clients, mid-sized companies, real estate developers and investors, international trade finance businesses, institutional customers (such as pension funds and government entities/agencies), and services offered to other banks or other financial institutions.
Wholesale finance refers to financial services conducted between financial services companies and institutions such as banks, insurers, fund managers, and stockbrokers.
Modern wholesale banks engage in:
- Finance wholesaling
- Underwriting
- Market making
- Consultancy
- Mergers and acquisitions
- Fund management
|
|
1909.12140
| false
| null |
For the English version, we performed both a thorough manual analysis and automatic evaluation across three commonly used TS datasets from two different domains in order to assess the performance of our framework with regard to the sentence splitting subtask. The results show that our proposed sentence splitting approach outperforms the state of the art in structural TS, returning fine-grained simplified sentences that achieve a high level of grammaticality and preserve the meaning of the input. The full evaluation methodology and detailed results are reported in niklaus-etal-2019-transforming. In addition, a comparative analysis with the annotations contained in the RST Discourse Treebank BIBREF6 demonstrates that we are able to capture the contextual hierarchy between the split sentences with a precision of almost 90% and reach an average precision of approximately 70% for the classification of the rhetorical relations that hold between them. The evaluation of the German version is in progress.
For the English version, we performed both a thorough manual analysis and automatic evaluation across three commonly used TS datasets from two different domains
For the English version, we performed both a thorough manual analysis and automatic evaluation across three commonly used TS datasets from two different domains
The evaluation of the German version is in progress.
|
What are the corpora used for the task?
|
The answers are shown as follows:
* For the English version, we performed both a thorough manual analysis and automatic evaluation across three commonly used TS datasets from two different domains
* The evaluation of the German version is in progress.
|
null | false
| null |
The Beatles were an English rock band, formed in Liverpool in 1960, that comprised John Lennon, Paul McCartney, George Harrison and Ringo Starr. They are regarded as the most influential band of all time and were integral to the development of 1960s counterculture and popular music's recognition as an art form. Rooted in skiffle, beat and 1950s rock 'n' roll, their sound incorporated elements of classical music and traditional pop in innovative ways; the band also explored music styles ranging from folk and Indian music to psychedelia and hard rock. As pioneers in recording, songwriting and artistic presentation, the Beatles revolutionised many aspects of the music industry and were often publicised as leaders of the era's youth and sociocultural movements.
|
Which era of counterculture were the Beatles pivotal to?
|
The Beatles were integral to the development of 1960s counterculture
|
null | false
| null |
Give me a bulleted list of the seven more recent Indian Prime Ministers
|
- Narendra Modi (2014– )
- Manmohan Singh (2004–14)
- Atal Bihari Vajpayee (1998–2004; 2nd time)
- Inder K. Gujral (1997–98)
- H.D. Deve Gowda (1996–97)
- Atal Bihari Vajpayee (1996; 1st time)
- P.V. Narasimha Rao (1991–96)
|
|
null | false
| 30
|
Recently, neural machine translation (NMT) has gained popularity in the field of machine translation. The conventional encoder-decoder NMT proposed by Cho2014 uses two recurrent neural networks (RNN): one is an encoder, which encodes a source sequence into a fixed-length vector, and the other is a decoder, which decodes the vector into a target sequence. A newly proposed attention-based NMT by DzmitryBahdana2014 can predict output words using the weights of each hidden state of the encoder by the attention mechanism, improving the adequacy of translation.
Even with the success of attention-based models, a number of open questions remain in NMT. Tu2016 argued that two of the common problems are over-translation (some words are repeatedly translated unnecessarily) and under-translation (some words are mistakenly left untranslated). This is due to the fact that NMT cannot completely convert the information from the source sentence to the target sentence. Mi2016a and Feng2016 pointed out that NMT lacks the notion of a coverage vector used in phrase-based statistical machine translation (PBSMT), so unless otherwise specified, there is no way to prevent missing translations.
Another problem in NMT is an objective function. NMT is optimized by cross-entropy; therefore, it does not directly maximize the translation accuracy. Shen2016 pointed out that optimization by cross-entropy is not appropriate and proposed a method of optimization based on a translation accuracy score, such as expected BLEU, which led to improvement of translation accuracy. However, BLEU is an evaluation metric based on n-gram precision; therefore, repetition of some words may be present in the translation even though the BLEU score is improved.
To address the problem of repeating and missing words in the translation, tu2016neural introduce an encoder-decoder-reconstructor framework that optimizes NMT by back-translation from the output sentences into the original source sentences. In their method, after training the forward translation in a manner similar to the conventional attention-based NMT, they train a back-translation model from the hidden state of the decoder into the source sequence with a new decoder to enforce agreement between source and target sentences.
In order to confirm the language independence of the framework, we experiment on two parallel corpora of English-Japanese and Japanese-English translation tasks using the encoder-decoder-reconstructor. Our experiments show that their method offers significant improvement in BLEU scores and alleviates the problem of repeating and missing words in the translation on the English-Japanese translation task, though the difference is not significant on the Japanese-English translation task.
In addition, we jointly train a model of forward translation and back-translation without pre-training, and then evaluate this model. As a result, the encoder-decoder-reconstructor cannot be trained well without pre-training, which proves that we have to train the forward translation model in a manner similar to the conventional attention-based NMT as pre-training.
The main contributions of this paper are as follows:
To address the problem of repeating and missing words in the translation, Tu et al. (2017) introduce an encoder-decoder-reconstructor framework that optimizes NMT by back-translation from the output sentences into the original source sentences.
|
What framework did Tu et al. propose to solve the problem of repetition and missing words in translation?
|
Tu et al. introduce an encoder-decoder-reconstructor framework that optimizes NMT by back-translation from the output sentences into the original source sentences.
|
1911.03243
| false
| null |
The measured precision with respect to PropBank is low for adjuncts due to the fact that our annotators were capturing many correct arguments not covered in PropBank. To examine this, we analyzed 100 false positive arguments. Only 32 of those were due to wrong or incomplete QA annotations in our gold, while most others were outside of PropBank's scope, capturing either implied arguments or roles not covered in PropBank. Extrapolating from this manual analysis estimates our true precision (on all roles) to be about 91%, which is consistent with the 88% precision figure in Table TABREF19. Compared with 2015, our QA-SRL gold yielded 1593 annotations, with 989 core and 604 adjuncts, while theirs yielded 1315 annotations, 979 core and 336 adjuncts. Overall, the comparison to PropBank reinforces the quality of our gold dataset and shows its better coverage relative to the 2015 dataset.
Compared with 2015, our QA-SRL gold yielded 1593 annotations, with 989 core and 604 adjuncts, while theirs yielded 1315 annotations, 979 core and 336 adjuncts.
|
How big is the dataset?
|
The answers are shown as follows:
* 1593 annotations
|
null | false
| 30
|
The encoder-decoder-reconstructor is trained with likelihood of both the encoder-decoder and the reconstructor on a set of training datasets. tu2016neural trained a back-translation model from the hidden state of the decoder into the source sequence by reconstructor to enforce agreement between source and target sentences using Equation EQREF23 after training the forward translation in a manner similar to the conventional attention-based NMT using Equation EQREF13 .
In addition, we experiment with jointly training a model of forward translation and back-translation without pre-training. It may learn a globally optimal model, compared to the locally optimal model obtained by pre-training on forward translation.
In addition, we experiment with jointly training a model of forward translation and back-translation without pre-training.
|
Whether pre-training is conducted when training a model of forward translation and back-translation?
|
No.
|
null | false
| null |
What is a balloon?
|
A balloon is a flexible bag, typically made of latex rubber, that can be inflated with a gas or liquid. If you want a balloon to float you can inflate it with helium. It can also be filled with a liquid like water to make it a water balloon. Other types of balloons are hot air balloons, which are filled with heated air, rise thousands of feet in the sky and can carry people, or weather balloons, which float high into the atmosphere to measure different weather patterns.
|
|
null | false
| null |
Which of these sports involve a ball: swimming, baseball, javelin, hockey, cycling, high jump, tennis, netball, basketball, and handball.
|
Hockey, tennis, baseball, basketball, netball, and handball involve a ball. High jump, javelin, swimming, and cycling do not involve a ball.
|
|
null | false
| null |
Identify which instrument is string or woodwind: Spinet, Caval
|
Caval is woodwind, Spinet is string.
|
|
null | false
| null |
Chemical engineering is an engineering field which deals with the study of operation and design of chemical plants as well as methods of improving production. Chemical engineers develop economical commercial processes to convert raw materials into useful products. Chemical engineering uses principles of chemistry, physics, mathematics, biology, and economics to efficiently use, produce, design, transport and transform energy and materials. The work of chemical engineers can range from the utilization of nanotechnology and nanomaterials in the laboratory to large-scale industrial processes that convert chemicals, raw materials, living cells, microorganisms, and energy into useful forms and products. Chemical engineers are involved in many aspects of plant design and operation, including safety and hazard assessments, process design and analysis, modeling, control engineering, chemical reaction engineering, nuclear engineering, biological engineering, construction specification, and operating instructions.
|
List the principles that are used in chemical engineering where each new line starts with "- ". Do not say anything else.
|
- chemistry
- physics
- mathematics
- biology
- economics
|
null | false
| null |
Can you please describe the game of golf?
|
Golf originated in Scotland in the 15th century. It is mostly regarded as an "upper-class" sport reserved for members of private country clubs, but has gained a lot of popularity over the last few decades. As a result, golf has become more broadly accessible to the general public, and today the sport is enjoyed by tens of millions of people around the world.
In golf, there are a total of eighteen stretches of grass, known as holes. Each hole consists of a tee box, where the golfer hits his or her initial shot, the fairway, and the green. Most holes also include deep grass that runs along the fairways, known as "the rough". For added difficulty, holes can also contain hazards like sand bunkers or bodies of water like ponds and creeks. A small circular hole is located on every green and is marked with a flag so that golfers can see it from far away. The objective is to get your golf ball from the tee box into that small hole in as few shots (known as "strokes") as possible. Golfers advance their ball by striking it with a golf club, and each ball strike is counted as a stroke. Golfers are allowed to use up to 14 golf clubs over the course of their round, each of which is used for different purposes, from different distances. Some clubs, like the driver, are used to hit the ball very long distances. In fact, most professional golfers are able to drive the ball over 300 yards. That's the length of three football fields! A combination of clubs called irons makes up the majority of a set of golf clubs. They are generally used to advance the ball to the green after the golfer hits his or her initial tee shot. The club face of each iron is designed with a different loft angle. Higher loft angles enable golfers to hit the ball higher in the air and shorter distances, whereas low loft angles produce lower trajectory shots that travel longer distances. The most frequently used club in the set is the putter, which is used to tap the ball from wherever it lies on the green into the hole. On average, most good golfers will putt the ball twice per hole, or approximately 36 times per round.
Golf holes are designated as par 3, par 4, or par 5, depending on their length. These numbers are an indication of how many strokes it should take for a professional golfer to complete the hole. Par 3 holes are shorter, and should take a single stroke to get the ball from tee to green. Par 5 holes are longer, and should typically take three strokes to get from tee to green. Most golf courses have four par 3 holes, four par 5 holes, and ten par 4 holes, yielding an aggregate of par 72. At the end of a round of golf, players calculate their final score by aggregating their total number of strokes across all eighteen holes. Over time, the golfer's skill level is assessed by his or her "handicap", which is a system that reflects the average number of strokes over par that the player typically shoots. For example, if one's average round score is 79, their handicap is 7 because it's 7 strokes above par. The lower the handicap, the better the golfer.
|
|
null | false
| null |
How do you brew beer?
|
The steps to brew beer are as follows: 1. Mashing, 2. Separation, 3. Boiling, 4. Fermentation. The ingredients are brought together through these four steps. You can create many beer styles with this process.
|
|
null | false
| null |
What is a simple recipe for a margarita?
|
A simple margarita can be made using just tequila, lime juice, and simple syrup. Alternatively, margarita sweet and sour mix can be bought from the store and will need to be mixed with tequila.
|
|
1904.03339
| false
| null |
JESSI is trained using only the datasets given on the shared task, without using any additional external data. Despite this, JESSI performs second on Subtask A with an F1 score of 77.78% among 33 other team submissions. It also performs well on Subtask B with an F1 score of 79.59%.
JESSI is trained using only the datasets given on the shared task, without using any additional external data.
|
What datasets were used?
|
The answers are shown as follows:
* datasets given on the shared task, without using any additional external data
|
null | false
| null |
Augusta Ada King, Countess of Lovelace (née Byron; 10 December 1815 – 27 November 1852) was an English mathematician and writer, chiefly known for her work on Charles Babbage's proposed mechanical general-purpose computer, the Analytical Engine. She was the first to recognise that the machine had applications beyond pure calculation, and to have published the first algorithm intended to be carried out by such a machine. As a result, she is often regarded as the first computer programmer
|
Who was the first computer programmer?
|
Augusta Ada King, Countess of Lovelace (née Byron; 10 December 1815 – 27 November 1852) is often regarded as the first computer programmer for her work on the Analytical Engine with Charles Babbage.
|
null | false
| null |
What is the best way to live life
|
1. Be helpful
2. Be honest
3. Be kind
4. Be positive
|
|
null | false
| null |
What did Gertrude Townend do
|
Gertrude Catherine Townend was a British nurse and suffragette. She provided, with Nurse Catherine Pine, a care home for suffragettes recovering from imprisonment and force-feeding, and participated in suffragette gatherings.
|
|
null | false
| 168
|
Knowledge representation and reasoning (KRR) is the process of representing domain knowledge in formal languages (e.g., SPARQL, Prolog) such that it can be used by expert systems to execute querying and reasoning services. KRR has been applied in many fields, including financial regulations, medical diagnosis, and law. One major obstacle in KRR is the creation of large-scale, high-quality knowledge bases. This requires knowledge engineers (KEs) not only to have background knowledge in a certain domain but also enough skill in knowledge representation. Unfortunately, qualified KEs are in short supply. Therefore, it would be useful to build a tool that allows domain experts without any background in logic to construct and query a knowledge base simply from text.
Controlled natural languages (CNLs) BIBREF0 were developed as a technology that achieves this goal. CNLs are designed based on natural languages (NLs) but with restricted syntax and interpretation rules that determine the unique meaning of the sentence. Representative CNLs include Attempto Controlled English BIBREF1 and PENG BIBREF2. Each CNL is developed with a language parser which translates the English sentences into an intermediate structure, discourse representation structure (DRS) BIBREF3. Based on the DRS structure, the language parsers further translate the DRS into the corresponding logical representations, e.g., Answer Set Programming (ASP) BIBREF4 programs. One main issue with the aforementioned CNLs is that the systems do not provide enough background knowledge to preserve semantic equivalences of sentences that represent the same meaning but are expressed via different linguistic structures. For instance, the sentences Mary buys a car and Mary makes a purchase of a car are translated into different logical representations by the current CNL parsers. As a result, if the user asks a question such as who is a buyer of a car, these systems will fail to find the answer.
In this thesis proposal, I will present KALM BIBREF5 , BIBREF6 , a system for knowledge authoring and question answering. KALM is superior to the current CNL systems in that KALM has a complex frame-semantic parser which can standardize the semantics of the sentences that express the same meaning via different linguistic structures. The frame-semantic parser is built based on FrameNet BIBREF7 and BabelNet BIBREF8 where FrameNet is used to capture the meaning of the sentence and BabelNet BIBREF8 is used to disambiguate the meaning of the extracted entities from the sentence. Experiment results show that KALM achieves superior accuracy in knowledge authoring and question answering as compared to the state-of-the-art systems.
The rest of the paper is organized as follows: Section SECREF2 discusses related work, Section SECREF3 presents the KALM architecture, Section SECREF4 presents KALM-QA, the question answering part of KALM, Section SECREF5 shows the evaluation results, Section SECREF6 discusses future work beyond the thesis, and Section SECREF7 concludes the paper.
The frame-semantic parser is built based on FrameNet and BabelNet where FrameNet is used to capture the meaning of the sentence and BabelNet is used to disambiguate the meaning of the extracted entities from the sentence.
|
What is BabelNet used for?
|
BabelNet is used to disambiguate the meaning of the extracted entities from the sentence.
|
null | false
| null |
Give me a list of ways to lose weight
|
* Get more active
* Drink plenty of water
* Use a smaller plate
* Cut down on alcohol
|
|
1902.06843
| false
| null |
Age Enabled Ground-truth Dataset: We extract user's age by applying regular expression patterns to profile descriptions (such as "17 years old, self-harm, anxiety, depression") BIBREF41 . We compile "age prefixes" and "age suffixes", and use three age-extraction rules: 1. I am X years old 2. Born in X 3. X years old, where X is a "date" or age (e.g., 1994). We selected a subset of 1061 users among INLINEFORM0 as gold standard dataset INLINEFORM1 who disclose their age. From these 1061 users, 822 belong to depressed class and 239 belong to control class. From 3981 depressed users, 20.6% disclose their age in contrast with only 4% (239/4789) among control group. So self-disclosure of age is more prevalent among vulnerable users. Figure FIGREF18 depicts the age distribution in INLINEFORM2 . The general trend, consistent with the results in BIBREF42 , BIBREF49 , is biased toward young people. Indeed, according to Pew, 47% of Twitter users are younger than 30 years old BIBREF50 . Similar data collection procedure with comparable distribution have been used in many prior efforts BIBREF51 , BIBREF49 , BIBREF42 . We discuss our approach to mitigate the impact of the bias in Section 4.1. The median age is 17 for depressed class versus 19 for control class suggesting either likely depressed-user population is younger, or depressed youngsters are more likely to disclose their age for connecting to their peers (social homophily.) BIBREF51
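A minimal sketch of this kind of rule-based age extraction is shown below; the exact prefix and suffix lists and the reference year used in the study are not given here, so the patterns and the `current_year` default are illustrative assumptions.

```python
import re

# Hypothetical patterns approximating the three extraction rules;
# the study's full prefix/suffix lists are not reproduced here.
AGE_PATTERNS = [
    re.compile(r"\bI am (\d{1,2}) years old\b", re.IGNORECASE),  # rule 1
    re.compile(r"\bborn in (\d{4})\b", re.IGNORECASE),           # rule 2 (birth year)
    re.compile(r"\b(\d{1,2}) years old\b", re.IGNORECASE),       # rule 3
]

def extract_age(description, current_year=2017):
    for i, pattern in enumerate(AGE_PATTERNS):
        match = pattern.search(description)
        if match:
            value = int(match.group(1))
            # Convert a birth year to an age; otherwise use the number directly.
            return current_year - value if i == 1 else value
    return None

print(extract_age("17 years old, self-harm, anxiety, depression"))  # -> 17
print(extract_age("born in 1994, coffee addict"))                   # -> 23
```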
Gender Enabled Ground-truth Dataset: We selected a subset of 1464 users INLINEFORM0 from INLINEFORM1 who disclose their gender in their profile description. From 1464 users 64% belonged to the depressed group, and the rest (36%) to the control group. 23% of the likely depressed users disclose their gender which is considerably higher (12%) than that for the control class. Once again, gender disclosure varies among the two gender groups. For statistical significance, we performed chi-square test (null hypothesis: gender and depression are two independent variables). Figure FIGREF19 illustrates gender association with each of the two classes. Blue circles (positive residuals, see Figure FIGREF19 -A,D) show positive association among corresponding row and column variables while red circles (negative residuals, see Figure FIGREF19 -B,C) imply a repulsion. Our findings are consistent with the medical literature BIBREF10 as according to BIBREF52 more women than men were given a diagnosis of depression. In particular, the female-to-male ratio is 2.1 and 1.9 for Major Depressive Disorder and Dysthymic Disorder respectively. Our findings from Twitter data indicate there is a strong association (Chi-square: 32.75, p-value:1.04e-08) between being female and showing depressive behavior on Twitter.
The median age is 17 for depressed class versus 19 for control class suggesting either likely depressed-user population is younger, or depressed youngsters are more likely to disclose their age for connecting to their peers (social homophily.)
Our findings are consistent with the medical literature BIBREF10 as according to BIBREF52 more women than men were given a diagnosis of depression.
|
What insights into the relationship between demographics and mental health are provided?
|
The answers are shown as follows:
* either likely depressed-user population is younger, or depressed youngsters are more likely to disclose their age
* more women than men were given a diagnosis of depression
|
null | false
| null |
Identify which animal species is alive or extinct: Cave Bear, Saola
|
Saola is alive, Cave Bear is extinct.
|
|
null | false
| null |
How to do you convert from inches to cm?
|
You can convert inches to centimeters using the following:
cm = 2.54 * inches, for example, 5 inches = 5 * 2.54 = 12.7 cm
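A one-line helper implementing this conversion (1 inch is defined as exactly 2.54 cm) might look like:

```python
def inches_to_cm(inches):
    """Convert a length in inches to centimeters (1 inch = 2.54 cm)."""
    return inches * 2.54

print(inches_to_cm(5))  # 12.7
```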
|
|
1912.13109
| false
| null |
We used the HEOT dataset, obtained from a past study by Mathur et al., in which they annotated a set of cleaned tweets collected from Twitter for conversations happening in the Indian subcontinent. A labelled dataset of corresponding English tweets was also obtained from a study conducted by Davidson et al. This dataset was important for applying transfer learning to our task, since the amount of labeled data was very small. Basic summary and examples of the data from the dataset are below:
We used the HEOT dataset, obtained from a past study by Mathur et al., in which they annotated a set of cleaned tweets collected from Twitter for conversations happening in the Indian subcontinent. A labelled dataset of corresponding English tweets was also obtained from a study conducted by Davidson et al.
|
What dataset is used?
|
The answers are shown as follows:
* HEOT
* A labelled dataset of corresponding English tweets
|
1811.04604
| false
| null |
Our experiments on a goal-oriented dialog corpus, the personalized bAbI dialog dataset, show that leveraging personal information can significantly improve the performance of dialog systems. The Personalized MemN2N outperforms current state-of-the-art methods with over 7% improvement in terms of per-response accuracy. A test with real human users also illustrates that the proposed model leads to better outcomes, including higher task completion rate and user satisfaction.
Our experiments on a goal-oriented dialog corpus, the personalized bAbI dialog dataset, show that leveraging personal information can significantly improve the performance of dialog systems.
|
What datasets did they use?
|
The answers are shown as follows:
* the personalized bAbI dialog dataset
|
null | false
| 132
|
Word Sense Disambiguation (WSD) is a fundamental task and long-standing challenge in Natural Language Processing (NLP), which aims to find the exact sense of an ambiguous word in a particular context BIBREF0. Previous WSD approaches can be grouped into two main categories: knowledge-based and supervised methods.
Knowledge-based WSD methods rely on lexical resources like WordNet BIBREF1 and usually exploit two kinds of lexical knowledge. The gloss, which defines a word sense meaning, is first utilized in Lesk algorithm BIBREF2 and then widely taken into account in many other approaches BIBREF3, BIBREF4. Besides, structural properties of semantic graphs are mainly used in graph-based algorithms BIBREF5, BIBREF6.
Traditional supervised WSD methods BIBREF7, BIBREF8, BIBREF9 focus on extracting manually designed features and then train a dedicated classifier (word expert) for every target lemma.
Although word expert supervised WSD methods perform better, they are less flexible than knowledge-based methods in the all-words WSD task BIBREF10. Recent neural-based methods are devoted to dealing with this problem. BIBREF11 present a supervised classifier based on Bi-LSTM, which shares parameters among all word types except the last layer. BIBREF10 convert WSD task to a sequence labeling task, thus building a unified model for all polysemous words. However, neither of them can totally beat the best word expert supervised methods.
More recently, BIBREF12 propose to leverage the gloss information from WordNet and model the semantic relationship between the context and gloss in an improved memory network. Similarly, BIBREF13 introduce a (hierarchical) co-attention mechanism to generate co-dependent representations for the context and gloss. Their attempts prove that incorporating gloss knowledge into supervised WSD approach is helpful, but they still have not achieved much improvement, because they may not make full use of gloss knowledge.
In this paper, we focus on how to better leverage gloss information in a supervised neural WSD system. Recently, the pre-trained language models, such as ELMo BIBREF14 and BERT BIBREF15, have shown their effectiveness to alleviate the effort of feature engineering. Especially, BERT has achieved excellent results in question answering (QA) and natural language inference (NLI). We construct context-gloss pairs from glosses of all possible senses (in WordNet) of the target word, thus treating WSD task as a sentence-pair classification problem. We fine-tune the pre-trained BERT model and achieve new state-of-the-art results on WSD task. In particular, our contribution is two-fold:
1. We construct context-gloss pairs and propose three BERT-based models for WSD.
2. We fine-tune the pre-trained BERT model, and the experimental results on several English all-words WSD benchmark datasets show that our approach significantly outperforms the state-of-the-art systems.
We construct context-gloss pairs from glosses of all possible senses (in WordNet) of the target word, thus treating WSD task as a sentence-pair classification problem.
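A minimal sketch of the context-gloss pair construction is given below; the sense identifiers and glosses are toy stand-ins for WordNet entries, and in the actual approach each pair would be encoded and scored by a fine-tuned BERT sentence-pair classifier.

```python
# Toy glosses standing in for WordNet senses of the target word "bank".
GLOSSES = {
    "bank%1": "a financial institution that accepts deposits",
    "bank%2": "sloping land beside a body of water",
}

def build_context_gloss_pairs(context, gold_sense):
    """Pair the context sentence with every candidate gloss; the pair for
    the gold sense gets label 1, all others get label 0."""
    pairs = []
    for sense_id, gloss in GLOSSES.items():
        label = 1 if sense_id == gold_sense else 0
        # Each pair is later encoded as "[CLS] context [SEP] gloss [SEP]".
        pairs.append((context, gloss, label))
    return pairs

for pair in build_context_gloss_pairs("He deposited the check at the bank.", "bank%1"):
    print(pair)
```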
|
Which dataset was used to construct the context-gloss pairs?
|
WordNet
|
null | false
| 60
|
To further improve performance, we adopt system combination at the decoding-lattice level. By combining systems, we can take advantage of the strengths of each model, which are optimized for different domains. The results for the two test sets are shown in Tables TABREF17 and TABREF18.
As we can see, for both test sets, system combination significantly reduces the WER. The best result for vlsp2018, a WER of 4.85%, is obtained with the combination weights 0.6:0.4, where 0.6 is given to the general language model and 0.4 to the conversational one. On the vlsp2019 set, the ratio changes slightly to 0.7:0.3 to deliver the best result of 15.09%.
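As a rough illustration of these combination weights, the sketch below interpolates per-hypothesis scores from the two language models; the actual system combines full decoding lattices, so this per-hypothesis interpolation and the toy scores are simplifying assumptions.

```python
def combined_score(general_lm_score, conversation_lm_score, w_general=0.6):
    """Weighted interpolation of two systems' scores for one hypothesis."""
    w_conv = 1.0 - w_general
    return w_general * general_lm_score + w_conv * conversation_lm_score

# Hypothetical (log-)scores for a single hypothesis from each system.
print(combined_score(-12.3, -10.8, w_general=0.6))  # vlsp2018 weighting
print(combined_score(-12.3, -10.8, w_general=0.7))  # vlsp2019 weighting
```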
As we can see, for both test sets, system combination significantly reduces the WER. The best result for vlsp2018, a WER of 4.85%, is obtained with the combination weights 0.6:0.4, where 0.6 is given to the general language model and 0.4 to the conversational one. On the vlsp2019 set, the ratio changes slightly to 0.7:0.3 to deliver the best result of 15.09%.
|
What is the effect of the system combination?
|
The WER is reduced.
|
null | false
| null |
When was Belgium founded?
|
4th of October 1830
|
|
null | false
| 6
|
Recently, with the emergence of neural seq2seq models, abstractive summarization methods have seen great performance strides BIBREF0, BIBREF1, BIBREF2. However, complex neural summarization models with thousands of parameters usually require a large amount of training data. In fact, much of the neural summarization work has been trained and tested in news domains where numerous large datasets exist. For example, the CNN/DailyMail (CNN/DM) BIBREF3, BIBREF4 and New York Times (NYT) datasets are in the magnitude of 300k and 700k documents, respectively. In contrast, in other domains such as student reflections, summarization datasets are only in the magnitude of tens or hundreds of documents (e.g., BIBREF5). We hypothesize that training complex neural abstractive summarization models in such domains will not yield good performing models, and we will indeed later show that this is the case for student reflections.
To improve performance in low resource domains, we explore three directions. First, we explore domain transfer for abstractive summarization. While domain transfer is not new, compared to prior summarization studies BIBREF6, BIBREF7, our training (news) and tuning (student reflection) domains are quite dissimilar, and the in-domain data is small. Second, we propose a template-based synthesis method to create synthesized summaries, then explore the effect of enriching training data for abstractive summarization using the proposed model compared to a synthesis baseline. Lastly, we combine both directions. Evaluations of neural abstractive summarization method across four student reflection corpora show the utility of all three methods.
To improve performance in low resource domains, we explore three directions. First, we explore domain transfer for abstractive summarization. While domain transfer is not new, compared to prior summarization studies (Hua and Wang 2017; Keneshloo, Ramakrishnan, and Reddy 2019), our training (news) and tuning (student reflection) domains are quite dissimilar, and the in-domain data is small. Second, we propose a template-based synthesis method to create synthesized summaries, then explore the effect of enriching training data for abstractive summarization using the proposed model compared to a synthesis baseline. Lastly, we combine both directions. Evaluations of neural abstractive summarization method across four student reflection corpora show the utility of all three methods.
|
To improve performance in low resource domains, what do the authors explore?
|
The authors explore three directions. First, they explore domain transfer for abstractive summarization. Second, they propose a template-based synthesis method to create synthesized summaries, and then explore the effect of enriching training data for abstractive summarization using the proposed model compared to a synthesis baseline. Lastly, they combine both directions. Evaluations of neural abstractive summarization method across four student reflection corpora show the utility of all three methods.
|
null | false
| 159
|
The objective function was the negative log-likelihood, computed as:
$$\textbf {F}(\theta ) = -\sum _{t=1}^{T}\textbf {y}_{t}^{\top }\log {\textbf {p}_{t}}$$ (Eq. 27)
where: $\textbf {y}_{t}$ is the ground truth token distribution, $\textbf {p}_{t}$ is the predicted token distribution, and $T$ is the length of the input sentence. The proposed generators were trained by treating each sentence as a mini-batch, with $l_{2}$ regularization added to the objective function for every 5 training examples. The models were initialized with pretrained GloVe word embedding vectors BIBREF28 and optimized using stochastic gradient descent and backpropagation through time BIBREF29. An early stopping mechanism was implemented to prevent over-fitting by using a validation set, as suggested in BIBREF30.
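A minimal numeric sketch of this objective for a single toy sequence (vocabulary size 3, length $T=2$) is shown below; the arrays are illustrative values, not taken from the paper.

```python
import numpy as np

def sequence_nll(y_true, p_pred):
    """F(theta) = -sum_t y_t^T log p_t, with y_t a one-hot ground-truth
    distribution and p_t the predicted token distribution at step t."""
    return -np.sum(y_true * np.log(p_pred))

# Toy sequence of length T=2 over a vocabulary of size 3.
y_true = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0]])
p_pred = np.array([[0.7, 0.2, 0.1],
                   [0.1, 0.8, 0.1]])
print(sequence_nll(y_true, p_pred))  # -(log 0.7 + log 0.8)
```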
yt is the ground truth token distribution.
|
What is yt standing for?
|
yt is the ground truth token distribution.
|
1905.13497
| false
| null |
Recently, neural models pre-trained on a language modeling task, such as ELMo BIBREF0 , OpenAI GPT BIBREF1 , and BERT BIBREF2 , have achieved impressive results on various natural language processing tasks such as question-answering and natural language inference. The success of BERT can largely be associated to the notion of context-aware word embeddings, which differentiate it from common approaches such as word2vec BIBREF3 that establish a static semantic embedding. Since the introduction of BERT, the NLP community continues to be impressed by the amount of ideas produced on top of this powerful language representation model. However, despite its success, it remains unclear whether the representations produced by BERT can be utilized for tasks such as commonsense reasoning. Particularly, it is not clear whether BERT shed light on solving tasks such as the Pronoun Disambiguation Problem (PDP) and Winograd Schema Challenge (WSC). These tasks have been proposed as potential alternatives to the Turing Test, because they are formulated to be robust to statistics of word co-occurrence BIBREF4 .
In this paper, we show that the attention maps created by an out-of-the-box BERT can be directly exploited to resolve coreferences in long sentences. As such, they can be simply repurposed for commonsense reasoning tasks while achieving state-of-the-art results on multiple tasks. On both PDP and WSC, our method outperforms previous state-of-the-art methods, without using expensive annotated knowledge bases or hand-engineered features. On a Pronoun Disambiguation dataset, PDP-60, our method achieves 68.3% accuracy, which is better than the state-of-the-art accuracy of 66.7%. On a WSC dataset, WSC-273, our method achieves 60.3%. As of today, state-of-the-art accuracy on the WSC-273 for single model performance is around 57% (BIBREF14, BIBREF13). These results suggest that BERT implicitly learns to establish complex relationships between entities such as coreference resolution. Although this helps in commonsense reasoning, solving this task requires more than employing a language model learned from large text corpora.
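A toy sketch of the underlying idea, scoring candidate antecedents by the attention mass the pronoun directs at them, is given below; the token list and attention values are made up, whereas the real scores would come from the attention heads of an off-the-shelf BERT.

```python
import numpy as np

# Toy attention row standing in for the attention assigned from the pronoun
# "it" to every other token (values are invented for illustration).
tokens = ["the", "trophy", "does", "not", "fit", "in", "the", "suitcase",
          "because", "it", "is", "too", "big"]
attention_from_it = np.array(
    [0.02, 0.30, 0.03, 0.02, 0.05, 0.02, 0.02, 0.18,
     0.04, 0.05, 0.08, 0.07, 0.12])

def candidate_score(candidate):
    """Attention mass the pronoun directs at a candidate antecedent."""
    return attention_from_it[tokens.index(candidate)]

candidates = ["trophy", "suitcase"]
print(max(candidates, key=candidate_score))  # -> "trophy"
```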
Particularly, it is not clear whether BERT shed light on solving tasks such as the Pronoun Disambiguation Problem (PDP) and Winograd Schema Challenge (WSC).
On a Pronoun Disambiguation dataset, PDP-60, our method achieves 68.3% accuracy, which is better than the state-of-the-art accuracy of 66.7%. On a WSC dataset, WSC-273, our method achieves 60.3%.
|
Which datasets do they evaluate on?
|
The answers are shown as follows:
* PDP-60
* WSC-273
|
null | false
| null |
Gilding metal is a form of brass (an alloy of copper and zinc) with a much higher copper content than zinc content. Exact figures range from 95% copper and 5% zinc to “8 parts copper to 1 of zinc” (11% zinc) in British Army Dress Regulations.
Gilding metal is used for various purposes, including the jackets of bullets, driving bands on some artillery shells, as well as enameled badges and other jewellery. The sheet is widely used for craft metalworking by hammer working. It is also used particularly as a lower-cost training material for silversmiths. Starting in 1944, shell casings made of gilding metal were melted down by the United States Mint to be made into pennies. These pennies replaced the less popular steel cent of 1943, and the pennies of this composition were produced until 1946.
Gilding metal was also used for the "bronze" Olympic medal at the 2020 Summer Olympics held in Tokyo, Japan in 2021.
Gilding metal may be annealed by heating to between 800–1,450 °F (427–788 °C). It should be cooled slowly afterwards, to reduce risk of cracking.
|
What are some of the uses of gilding metal?
|
Army Dress, weapons, metalworking, penny production, and Olympic medals
|
null | false
| 133
|
The task of document quality assessment is to automatically assess a document according to some predefined inventory of quality labels. This can take many forms, including essay scoring (quality = language quality, coherence, and relevance to a topic), job application filtering (quality = suitability for role + visual/presentational quality of the application), or answer selection in community question answering (quality = actionability + relevance of the answer to the question). In the case of this paper, we focus on document quality assessment in two contexts: Wikipedia document quality classification, and whether a paper submitted to a conference was accepted or not.
Automatic quality assessment has obvious benefits in terms of time savings and tractability in contexts where the volume of documents is large. In the case of dynamic documents (possibly with multiple authors), such as in the case of Wikipedia, it is particularly pertinent, as any edit potentially has implications for the quality label of that document (and around 10 English Wikipedia documents are edited per second). Furthermore, when the quality assessment task is decentralized (as in the case of Wikipedia and academic paper assessment), quality criteria are often applied inconsistently by different people, where an automatic document quality assessment system could potentially reduce inconsistencies and enable immediate author feedback.
Current studies on document quality assessment mainly focus on textual features. For example, BIBREF0 examine features such as the article length and the number of headings to predict the quality class of a Wikipedia article. In contrast to these studies, in this paper, we propose to combine text features with visual features, based on a visual rendering of the document. Figure 1 illustrates our intuition, relative to Wikipedia articles. Without being able to read the text, we can tell that the article in Figure 1 has higher quality than Figure 1 , as it has a detailed infobox, extensive references, and a variety of images. Based on this intuition, we aim to answer the following question: can we achieve better accuracy on document quality assessment by complementing textual features with visual features?
Our visual model is based on fine-tuning an Inception V3 model BIBREF1 over visual renderings of documents, while our textual model is based on a hierarchical biLSTM. We further combine the two into a joint model. We perform experiments on two datasets: a Wikipedia dataset novel to this paper, and an arXiv dataset provided by BIBREF2 split into three sub-parts based on subject category. Experimental results on the visual renderings of documents show that implicit quality indicators, such as images and visual layout, can be captured by an image classifier, at a level comparable to a text classifier. When we combine the two models, we achieve state-of-the-art results over 3/4 of our datasets.
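As a rough sketch of the joint model, the snippet below concatenates a visual feature vector and a text feature vector and applies a linear classifier; the dimensions, random features, and six-way output are illustrative assumptions rather than the paper's exact architecture.

```python
import numpy as np

# Toy feature vectors standing in for the Inception V3 representation of a
# rendered page and the hierarchical biLSTM representation of its text.
visual_features = np.random.rand(2048)  # e.g. pooled image features
text_features = np.random.rand(512)     # e.g. document-level text vector

def joint_logits(visual, text, n_classes=6, seed=0):
    """Concatenate the two modalities and apply a linear classifier."""
    rng = np.random.default_rng(seed)
    joint = np.concatenate([visual, text])
    weights = rng.standard_normal((n_classes, joint.shape[0]))
    return weights @ joint

print(joint_logits(visual_features, text_features).shape)  # (6,)
```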
This paper makes the following contributions:
All code and data associated with this research will be released on publication.
Experimental results on the visual renderings of documents show that implicit quality indicators, such as images and visual layout, can be captured by an image classifier, at a level comparable to a text classifier.
|
What is the performance of the image classifier in the experiment?
|
The implicit quality indicators, such as images and visual layout, can be captured by an image classifier, at a level comparable to a text classifier.
|
null | false
| null |
The contemporary national legal systems are generally based on one of four basic systems: civil law, common law, statutory law, religious law or combinations of these. However, the legal system of each country is shaped by its unique history and so incorporates individual variations. The science that studies law at the level of legal systems is called comparative law.
Both civil (also known as Roman) and common law systems can be considered the most widespread in the world: civil law because it is the most widespread by landmass and by population overall, and common law because it is employed by the greatest number of people compared to any single civil law system.
|
List the basis of contemporary national legal systems from the passage. Display the results in comma separated format.
|
civil law, common law, statutory law, religious law
|
null | false
| null |
"My captain, my leader, my right-hand man. He was the spirit and the heartbeat of the team. A cool, calculating footballer I could trust with my life. He was the supreme professional, the best I ever worked with. Without him England would never have won the World Cup." Alf Ramsey*
"He was my friend as well as the greatest defender I ever played against. The world has lost one of its greatest football players and an honourable gentleman." Pelé
"Bobby Moore was a real gentleman and a true friend." Franz Beckenbauer
"Moore was the best defender I have ever seen." Sir Alex Ferguson
"Bobby Moore was the best defender in the history of the game" Franz Beckenbauer
"There should be a law against him. He knows what's happening 20 minutes before everyone else." Jock Stein
"Ask me to talk about Bobby Moore the footballer and I will talk for days. Ask me about the man and I will dry up in a minute." Ron Greenwood
"Immaculate footballer. Imperial defender. Immortal hero of 1966. First Englishman to raise the World Cup aloft. Favourite son of London's East End. Finest legend of West Ham United. National Treasure. Master of Wembley. Lord of the game. Captain extraordinary. Gentleman of all time." Inscription on the pedestal of the statue at Wembley Stadium.
|
What did Franz Beckenbauer call Bobby Moore?
|
Franz Beckenbauer described Bobby Moore as "a real gentleman and a true friend".
|
null | false
| null |
Which book is Mary Shelley most famous for?
|
Frankenstein
|
|
null | false
| null |
Think up different types of places to go for vacation.
|
There are many different places you can go on vacation. For people who love the outdoors, camping is a great option, including hiking, kayaking, and bird-watching. For those who like relaxation, a hotel by the beach can be ideal. Beach-related vacations can include sunbathing, swimming, laying out by the pool, and having refreshing drinks. Traveling to a different country can be an enriching vacation if you enjoy sightseeing, learning new languages, and sampling unknown foods.
|
|
null | false
| 200
|
Datasets and Models We evaluate our adversarial attacks on different text classification datasets from tasks such as sentiment classification, subjectivity detection and question type classification. Amazon, Yelp, IMDB are sentence-level sentiment classification datasets which have been used in recent work BIBREF15 while MR BIBREF16 contains movie reviews based on sentiment polarity. MPQA BIBREF17 is a dataset for opinion polarity detection, Subj BIBREF18 for classifying a sentence as subjective or objective and TREC BIBREF19 is a dataset for question type classification.
We use 3 popular text classification models: word-LSTM BIBREF20, word-CNN BIBREF21 and a fine-tuned BERT BIBREF12 base-uncased classifier. For each dataset we train the model on the training data and perform the adversarial attack on the test data. For complete model details refer to Appendix.
As a baseline, we consider TextFooler BIBREF11 which performs synonym replacement using a fixed word embedding space BIBREF22. We only consider the top $K{=}50$ synonyms from the MLM predictions and set a threshold of 0.8 for the cosine similarity between USE based embeddings of the adversarial and input text.
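A minimal sketch of the semantic-similarity filter described above is shown below; the embeddings are hypothetical three-dimensional vectors standing in for USE sentence embeddings, while the 0.8 threshold mirrors the value quoted from the paper.

```python
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def keep_candidate(original_embedding, candidate_embedding, threshold=0.8):
    """Keep a perturbed sentence only if its embedding stays close enough
    to the embedding of the original sentence."""
    return cosine_similarity(original_embedding, candidate_embedding) >= threshold

# Hypothetical sentence embeddings for the original text and one candidate.
original = np.array([0.2, 0.9, 0.4])
candidate = np.array([0.25, 0.85, 0.5])
print(keep_candidate(original, candidate))  # True
```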
Results We perform the 4 modes of our attack and summarize the results in Table . Across datasets and models, our BAE attacks are almost always more effective than the baseline attack, achieving significant drops of 40-80% in test accuracies, with higher average semantic similarities as shown in parentheses. BAE-R+I is the strongest attack since it allows both replacement and insertion at the same token position, with just one exception. We observe a general trend that the BAE-R and BAE-I attacks often perform comparably, while the BAE-R/I and BAE-R+I attacks are much stronger. We observe that the BERT-based classifier is more robust to the BAE and TextFooler attacks than the word-LSTM and word-CNN models which can be attributed to its large size and pre-training on a large corpus.
The baseline attack is often stronger than the BAE-R and BAE-I attacks for the BERT based classifier. We attribute this to the shared parameter space between the BERT-MLM and the BERT classifier before fine-tuning. The predicted tokens from BERT-MLM may not drastically change the internal representations learned by the BERT classifier, hindering their ability to adversarially affect the classifier prediction.
Effectiveness We study the effectiveness of BAE on limiting the number of R/I operations permitted on the original text. We plot the attack performance as a function of maximum $\%$ perturbation (ratio of number of word replacements and insertions to the length of the original text) for the TREC dataset. From Figure , we clearly observe that the BAE attacks are consistently stronger than TextFooler. The classifier models are relatively robust to perturbations up to 20$\%$, while the effectiveness saturates at 40-50$\%$. Surprisingly, a 50$\%$ perturbation for the TREC dataset translates to replacing or inserting just 3-4 words, due to the short text lengths.
Qualitative Examples We present adversarial examples generated by the attacks on a sentence from the IMDB and Yelp datasets in Table . BAE produces more natural looking examples than TextFooler as tokens predicted by the BERT-MLM fit well in the sentence context. TextFooler tends to replace words with complex synonyms, which can be easily detected. Moreover, BAE's additional degree of freedom to insert tokens allows for a successful attack with fewer perturbations.
Human Evaluation We consider successful adversarial examples generated from the Amazon and IMDB datasets and verify their sentiment and grammatical correctness. Human evaluators annotated the sentiment and the grammar (Likert scale of 1-5) of randomly shuffled adversarial examples and original texts. From Table , BAE and TextFooler have inferior accuracies compared to the Original, showing they are not always perfect. However, BAE has much better grammar scores, suggesting more natural looking adversarial examples.
Ablation Study We analyze the benefits of R/I operations in BAE in Table . From the table, the splits $\mathbb {A}$ and $\mathbb {B}$ are the $\%$ of test points which compulsorily need I and R operations respectively for a successful attack. We can observe that the split $\mathbb {A}$ is larger than $\mathbb {B}$ thereby indicating the importance of the I operation over R. Test points in split require both R and I operations for a successful attack. Interestingly, split is largest for Subj, which is the most robust to attack (Table ) and hence needs both R/I operations. Thus, this study gives positive insights towards the importance of having the flexibility to both replace and insert words.
Refer to the Appendix for additional results, effectiveness graphs and details of human evaluation.
Amazon, Yelp, IMDB are sentiment classification datasets used in recent works (Sarma et al., 2018) and MR (Pang and Lee, 2005) contains movie reviews based on sentiment polarity.
|
What does the dataset MR contain?
|
MR contains movie reviews based on sentiment polarity.
|
null | false
| null |
List five titles by Suzanne Collins.
|
Catching Fire, Mockingjay, Gregor the Overlander, Year of the Jungle, The Ballad of Songbirds and Snakes
|
|
null | false
| null |
What is geomorphometry?
|
Geomorphometry, or geomorphometrics, is the science and practice of measuring the characteristics of terrain, the shape of the surface of the Earth, and the effects of this surface form on human and natural geography. It gathers various mathematical, statistical and image processing techniques that can be used to quantify morphological, hydrological, ecological and other aspects of a land surface. Common synonyms for geomorphometry are geomorphological analysis (after geomorphology), terrain morphometry, terrain analysis, and land surface analysis. Geomorphometrics is the discipline based on the computational measures of the geometry, topography and shape of the Earth's horizons, and their temporal change. This is a major component of geographic information systems (GIS) and other software tools for spatial analysis.
In simple terms, geomorphometry aims at extracting (land) surface parameters (morphometric, hydrological, climatic etc.) and objects (watersheds, stream networks, landforms etc.) using an input digital land surface model (also known as a digital elevation model, DEM) and parameterization software. Extracted surface parameters and objects can then be used, for example, to improve mapping and modelling of soils, vegetation, land use, geomorphological and geological features, and similar.
With the rapid increase of sources of DEMs today (and especially due to the Shuttle Radar Topography Mission and LIDAR-based projects), extraction of land surface parameters is becoming more and more attractive to numerous fields ranging from precision agriculture, soil-landscape modelling, climatic and hydrological applications to urban planning, education and space research. The topography of almost all Earth has been today sampled or scanned, so that DEMs are available at resolutions of 100 m or better at global scale. Land surface parameters are today successfully used for both stochastic and process-based modelling, the only remaining issue being the level of detail and vertical accuracy of the DEM.
|
|
null | false
| null |
Head races are time trial / processional races that take place from autumn (fall) to early spring (depending on local conditions). Boats begin with a rolling start at intervals of 10 – 20 seconds, and are timed over a set distance. Head courses usually vary in length from 2,000 metres (1.24 mi) to 12,000 metres (7.46 mi), though there are longer races such as the Boston Rowing Marathon and shorter such as Pairs Head.
|
What is a head rowing race?
|
A head rowing race is a time trial race where rowing boats race on a rolling start with short time gaps between start times. Courses vary in length from 2,000 meters to 12,000 meters.
|
null | false
| null |
What are some songs from the Beatles' White Album that you'd recommend listening to?
|
The White Album is one of the Beatles' most iconic and experimental albums. Recommended songs to listen to from this album include 'Happiness is a Warm Gun', 'Dear Prudence', 'Rocky Raccoon', 'Back in the U.S.S.R.', 'Blackbird', 'While My Guitar Gently Weeps', 'Glass Onion' and 'Helter Skelter'.
|
|
null | false
| 308
|
Given the large collections of unstructured and semi-structured data available on the web, there is a crucial need to enable quick and efficient access to the knowledge content within them. Traditionally, the field of information extraction has focused on extracting such knowledge from unstructured text documents, such as job postings, scientific papers, news articles, and emails. However, the content on the web increasingly contains more varied types of data, including semi-structured web pages, tables that do not adhere to any schema, photographs, videos, and audio. Given a query by a user, the appropriate information may appear in any of these different modes, and thus there's a crucial need for methods to construct knowledge bases from different types of data, and more importantly, combine the evidence in order to extract the correct answer.
Motivated by this goal, we introduce the task of multimodal attribute extraction. Provided contextual information about an entity, in the form of any of the modes described above, along with an attribute query, the goal is to extract the corresponding value for that attribute. While attribute extraction on the domain of text has been well-studied BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , to our knowledge this is the first time attribute extraction using a combination of multiple modes of data has been considered. This introduces additional challenges to the problem, since a multimodal attribute extractor needs to be able to return values provided any kind of evidence, whereas modern attribute extractors treat attribute extraction as a tagging problem and thus only work when attributes occur as a substring of text.
In order to support research on this task, we release the Multimodal Attribute Extraction (MAE) dataset, a large dataset containing mixed-media data for over 2.2 million commercial product items, collected from a large number of e-commerce sites using the Diffbot Product API. The collection of items is diverse and includes categories such as electronic products, jewelry, clothing, vehicles, and real estate. For each item, we provide a textual product description, collection of images, and open-schema table of attribute-value pairs (see Figure 1 for an example). The provided attribute-value pairs only provide a very weak source of supervision; where the value might appear in the context is not known, and further, it is not even guaranteed that the value can be extracted from the provided evidence. In all, there are over 4 million images and 7.6 million attribute-value pairs. By releasing such a large dataset, we hope to drive progress on this task similar to how the Penn Treebank BIBREF5 , SQuAD BIBREF6 , and Imagenet BIBREF7 have driven progress on syntactic parsing, question answering, and object recognition, respectively.
To assess the difficulty of the task and the dataset, we first conduct a human evaluation study using Mechanical Turk that demonstrates that all available modes of information are useful for detecting values. We also train and provide results for a variety of machine learning models on the dataset. We observe that a simple most-common value classifier, which always predicts the most-common value for a given attribute, provides a very difficult baseline for more complicated models to beat (33% accuracy). In our current experiments, we are unable to train an image-only classifier that can outperform this simple model, despite using modern neural architectures such as VGG-16 BIBREF8 and Google's Inception-v3 BIBREF9. However, we are able to obtain significantly better performance using a text-only classifier (59% accuracy). We hope to improve and obtain more accurate models in further research.
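A minimal sketch of this most-common value baseline is shown below; the training pairs are toy examples, not items from the MAE dataset.

```python
from collections import Counter, defaultdict

def fit_most_common(train_pairs):
    """For each attribute, always predict the value seen most often in training."""
    by_attribute = defaultdict(Counter)
    for attribute, value in train_pairs:
        by_attribute[attribute][value] += 1
    return {attr: counts.most_common(1)[0][0]
            for attr, counts in by_attribute.items()}

train = [("color", "black"), ("color", "black"), ("color", "red"),
         ("brand", "acme"), ("brand", "generic"), ("brand", "acme")]
model = fit_most_common(train)
print(model["color"], model["brand"])  # black acme
```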
We observe that a simple most-common value classifier, which always predicts the most-common value for a given attribute, provides a very difficult baseline for more complicated models to beat (33% accuracy).
|
What can be predicted by the simple most-common value classifier they observe?
|
The most-common value for a given attribute.
|
null | false
| null |
In the Encyclopedia of Food Microbiology, Michael Gaenzle writes: "One of the oldest sourdough breads dates from 3700 BCE and was excavated in Switzerland, but the origin of sourdough fermentation likely relates to the origin of agriculture in the Fertile Crescent and Egypt several thousand years earlier", which was confirmed a few years later by archeological evidence. ... "Bread production relied on the use of sourdough as a leavening agent for most of human history; the use of baker's yeast as a leavening agent dates back less than 150 years."
Sourdough remained the usual form of leavening down into the European Middle Ages until being replaced by barm from the beer brewing process, and after 1871 by purpose-cultured yeast.
Bread made from 100% rye flour, popular in the northern half of Europe, is usually leavened with sourdough. Baker's yeast is not useful as a leavening agent for rye bread, as rye does not contain enough gluten. The structure of rye bread is based primarily on the starch in the flour as well as other carbohydrates known as pentosans; however, rye amylase is active at substantially higher temperatures than wheat amylase, causing the structure of the bread to disintegrate as the starches are broken down during baking. The lowered pH of a sourdough starter, therefore, inactivates the amylases when heat cannot, allowing the carbohydrates in the bread to gel and set properly. In the southern part of Europe, where panettone is still made with sourdough as leavening, sourdough has become less common in the 20th century; it has been replaced by the faster-growing baker's yeast, sometimes supplemented with longer fermentation rests to allow for some bacterial activity to build flavor. Sourdough fermentation re-emerged as a major fermentation process in bread production during the 2010s, although it is commonly used in conjunction with baker's yeast as leavening agent.
French bakers brought sourdough techniques to Northern California during the California Gold Rush, and it remains a part of the culture of San Francisco today. (The nickname remains in "Sourdough Sam", the mascot of the San Francisco 49ers.) Sourdough has long been associated with the 1849 gold prospectors, though they were more likely to make bread with commercial yeast or baking soda. The "celebrated" San Francisco sourdough is a white bread characterized by a pronounced sourness, and indeed the strain of Lactobacillus in sourdough starters is named Fructilactobacillus sanfranciscensis (previously Lactobacillus sanfranciscensis), alongside the sourdough yeast Kasachstania humilis (previously Candida milleri) found in the same cultures.
The sourdough tradition was carried into the Department of Alaska in the United States and the Yukon territory in Canada during the Klondike Gold Rush of 1898. Conventional leavenings such as yeast and baking soda were much less reliable in the conditions faced by the prospectors. Experienced miners and other settlers frequently carried a pouch of starter either around their neck or on a belt; these were fiercely guarded to keep from freezing. However, freezing does not kill a sourdough starter; excessive heat does. Old hands came to be called "sourdoughs", a term that is still applied to any Alaskan or Klondike old-timer. The significance of the nickname's association with Yukon culture was immortalized in the writings of Robert Service, particularly his collection of "Songs of a Sourdough".
In English-speaking countries, where wheat-based breads predominate, sourdough is no longer the standard method for bread leavening. It was gradually replaced, first by the use of barm from beer making, then, after the confirmation of germ theory by Louis Pasteur, by cultured yeasts. Although sourdough bread was superseded in commercial bakeries in the 20th century, it has undergone a revival among artisan bakers and, more recently, in industrial bakeries. In countries where there is no legal definition of sourdough bread, the dough for some products named or marketed as such is leavened using baker's yeast or chemical raising agents as well as, or instead of, a live sourdough starter culture. The Real Bread Campaign calls these products sourfaux.
Manufacturers of non-sourdough breads make up for the lack of yeast and bacterial culture by introducing into their dough an artificially-made mix known as bread improver or flour improver.
|
Extract the different kinds of leavening agents mentioned in this passage. Separate them with a comma.
|
sourdough, baker's yeast, barm, purpose-cultured yeast, commercial yeast, baking soda, cultured yeasts, bread improver, flour improver
|
null | false
| 223
|
Word embeddings have been widely used for natural language processing (NLP) tasks BIBREF0 . However, the large word vocabulary makes word embeddings expensive to train. Some people argue that we can model languages at the character-level BIBREF1 . For alphabetic languages such as English, where the characters are much fewer than the words, the character embeddings achieved the state-of-the-art results with much fewer parameters.
Unfortunately, for the other languages that use non-alphabetic systems, the character vocabulary can be also large. Moreover, Chinese and Japanese, two of the most widely used non-alphabetic languages, especially contain large numbers of ideographs: hanzi of Chinese and kanji of Japanese. The character vocabulary can be as scalable as the word vocabulary (e.g., see the datasets introduced in Section SECREF11 ). Hence the conventional character embedding-based method is not able to give us a slim vocabulary on Chinese and Japanese.
For convenience, let us collectively call hanzi and kanji Chinese characters. Chinese characters are ideographs composed of semantic and phonetic radicals, both of which are available for character recognition, and the semantic information may be embedded in Chinese characters by the semantic radicals BIBREF2. Besides, though the character vocabulary is huge, the number of radicals is much smaller. Accordingly, we explored a model that represents Chinese characters by their sequence of radicals. We applied our proposed model to sentiment classification tasks on Chinese and Japanese and achieved the following:
We applied our proposed model to sentiment classification tasks on Chinese and Japanese and achieved the following
|
What task did they apply the proposed model to?
|
Sentiment classification tasks on Chinese and Japanese.
|
null | false
| null |
How tall can a human being theoretically be?
|
There is no precise theoretical ceiling on human height, but the tallest verified people have approached 9 feet (2.7 meters). According to the Guinness Book of World Records, the tallest living man is Sultan Kösen (born 10 December 1982), a Turkish farmer of Kurdish ethnicity who holds the record for tallest living male at 251 cm (8 ft 2.82 in). Kösen's growth resulted from gigantism and acromegaly, caused by a tumor affecting his pituitary gland.
|
|
null | false
| 34
|
Our work is motivated by three major threads of research in machine learning and natural language processing: semantic-parsing for open-domain question answering, character-level language modeling, and encoder-decoder methods.
Semantic parsing for open-domain question answering, which translates a question into a structured KB query, is a key component in question answering with a KB. While early approaches relied on building high-quality lexicons for domain-specific databases such as GeoQuery BIBREF7 , recent work has focused on building semantic parsing frameworks for general knowledge bases such as Freebase BIBREF1 , BIBREF8 , BIBREF0 , BIBREF9 , BIBREF2 .
Semantic parsing frameworks for large-scale knowledge bases have to be able to successfully generate queries for the millions of entities and thousands of predicates in the KB, many of which are unseen during training. To address this issue, recent work relies on producing embeddings for predicates and entities in a KB based on their textual descriptions BIBREF8 , BIBREF0 , BIBREF1 , BIBREF10 . A general interaction function can then be used to measure the semantic relevance of these embedded KB entries to the question and determine the most likely KB query.
Most of these approaches use word-level embeddings to encode entities and predicates, and therefore might suffer from the out-of-vocabulary (OOV) problem when they encounter unseen words during test time. Consequently, they often rely on significant data augmentation from sources such as Paralex BIBREF2 , which contains 18 million question-paraphrase pairs scraped from WikiAnswers, to have sufficient examples for each word they encounter BIBREF11 , BIBREF1 , BIBREF0 .
As opposed to word-level modeling, character-level modeling can be used to handle the OOV issue. While character-level modeling has not been applied to factoid question answering before, it has been successfully applied to information retrieval, machine translation, sentiment analysis, classification, and named entity recognition BIBREF12 , BIBREF13 , BIBREF6 , BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 , BIBREF19 . Moreover, gflstm demonstrate that gated-feedback LSTMs on top of character-level embeddings can capture long-term dependencies in language modeling.
Lastly, encoder-decoder networks have been applied to many structured machine learning tasks. First introduced in sutskever2014sequence, in an encoder-decoder network, a source sequence is first encoded with a recurrent neural network (RNN) into a fixed-length vector which intuitively captures its meaning, and then decoded into a desired target sequence. This approach and related memory-based or attention-based approaches have been successfully applied in diverse domains such as speech recognition, machine translation, image captioning, parsing, executing programs, and conversational dialogues BIBREF20 , BIBREF21 , BIBREF22 , BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 .
Unlike previous work, we formulate question answering as a problem of decoding the KB query given the question and KB entries which are encoded in embedding spaces. We therefore integrate the learning of question and KB embeddings in a unified encoder-decoder framework, where the whole system is optimized end-to-end.
Unlike previous work, we formulate question answering as a problem of decoding the KB query given the question and KB entries which are encoded in embedding spaces.
|
How do they formulate question answering?
|
They formulate question answering as a problem of decoding the KB query given the question and KB entries which are encoded in embedding spaces.
|
1905.06512
| true
| null |
The baseline model BIBREF3 is implemented with a recurrent neural network based encoder-decoder framework. Without utilizing the information of sememes, it learns a probabilistic mapping $P(y | x)$ from the word $x$ to be defined to a definition $y = [y_1, \dots , y_T ]$ , in which $y_t$ is the $t$ -th word of definition $y$ .
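For concreteness, such a probabilistic mapping is typically factorized autoregressively over the definition words (stated here as standard background, not as a formula quoted from BIBREF3): $P(y \mid x) = \prod _{t=1}^{T} P(y_t \mid y_1, \dots , y_{t-1}, x)$, where each conditional is computed by the decoder RNN from its hidden state and training maximizes the log-likelihood of reference definitions.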
The baseline model BIBREF3 is implemented with a recurrent neural network based encoder-decoder framework.
|
Do they compare against Noraset et al. 2017?
|
Yes.
|
null | false
| null |
How long does a tennis match take?
|
A tennis match is usually played as the best of three or five sets, where each set goes to the first player to win six games with at least a two-game margin. A tiebreaker is played if a set reaches six games all. The relative skill of the players, the frequency of breaks, and other factors all affect the length of a match, but a best-of-three-set match typically takes around 90 minutes, while a best-of-five-set match takes around two and a half to three hours.
|
|
null | false
| null |
List some beverages that you can buy at Starbucks.
|
Frappuccino, shaken espresso, caramel macchiato, refresher, hot chocolate, latte, drip coffee
|
|
null | false
| null |
I'm getting into music history and want to listen to authentic analogue music from the 1980's. Which of the following should I try: ipod, vinyl, apple music, cassette, compact disc, spotify, youtube
|
Vinyl and cassette are analogue formats that were widely used in the 1980s, so those are the ones to try. Compact discs were also available in the 1980s but are a digital format, and the iPod, Apple Music, Spotify and YouTube are all digital and arrived later.
|
|
null | false
| null |
Name some of the bravolebrities from The Real Housewives of Orange County
|
Shannon Beador, Heather Dubrow, Gina Kirschenheiter and Emily Simpson
|
|
null | false
| null |
The Ryder Cup is a biennial men's golf competition between teams from Europe and the United States. The competition is contested every two years with the venue alternating between courses in the United States and Europe. The Ryder Cup is named after the English businessman Samuel Ryder who donated the trophy. The event is jointly administered by the PGA of America and Ryder Cup Europe, the latter a joint venture of the PGA European Tour (60%), the PGA of Great Britain and Ireland (20%), and the PGAs of Europe (20%).
Originally contested between Great Britain and the United States, the first official Ryder Cup took place in the United States in 1927 at Worcester Country Club in Worcester, Massachusetts. The home team won the first five contests, but with the competition's resumption after the Second World War, repeated American dominance eventually led to a decision to extend the representation of "Great Britain and Ireland" to include continental Europe from 1979. The inclusion of continental European golfers was partly prompted by the success of a new generation of Spanish golfers, led by Seve Ballesteros and Antonio Garrido. In 1973 the official title of the British Team had been changed from "Great Britain" to "Great Britain and Ireland", but this was simply a change of name to reflect the fact that golfers from the Republic of Ireland had been playing in the Great Britain Ryder Cup team since 1953, while Northern Irish players had competed since 1947. In addition to players from Great Britain and Ireland, the European team has since included players from Austria, Belgium, Denmark, France, Germany, Italy, Norway, Spain and Sweden.
|
Given a reference text, who is the Ryder Cup named after?
|
The Ryder Cup is named after English businessman Samuel Ryder.
|
null | false
| null |
Why do people like to live in Minnesota?
|
Minnesota is a great place to live for a multitude of reasons. While the winter weather can be difficult, the overall high quality of life Minnesotans enjoy heavily outweighs this perceived negative. Minnesota's education system is consistently rated as one of the best in the United States, which is reflected in strong standardized test scores and a robust college and university ecosystem. Minnesota is home to a diverse set of companies and has a higher-than-average number of Fortune 500 companies headquartered in the state, which has helped give Minnesotans one of the lowest unemployment rates in the USA. Minnesota is also highly rated for its restaurants, based on its per-capita number of James Beard Award winners, and its healthcare system (including the famous Mayo Clinic) ranks very highly nationally.
|
|
null | false
| null |
What are some ideas for businesses that cater to millennials?
|
Millennials like coffee shops, boba tea shops and cafes that have healthy foods like salads and smoothies. Restaurants that sell organic, vegetarian and gluten-free foods are good ideas. A pet store would also be a good idea because millennials spend a lot of money on their pets. Millennials also value travel and experiences, so a tour company that takes customers to unusual or exciting places could work well.
|
|
null | false
| null |
How did George II die
|
King George II of Great Britain died in 1760 of a ruptured aortic aneurysm suffered while sitting on his close stool (toilet).
|