| paper_id | yes_no | paper_index | evidence | question | answer |
|---|---|---|---|---|---|
null | false
| null |
How does the train work?
|
How Does An Electric Locomotive Work:
Electric locomotives use electricity from overhead wires to convert electrical energy into mechanical energy, making them lighter, faster, and quieter than steam or diesel engines.
How Does a Diesel Locomotive Work:
Diesel-electric locomotives provide cheaper upfront costs and improved reliability.
Train wheels are conical, so when the wheelset shifts sideways in a curve the outer wheel rolls on a larger effective radius and covers a longer distance while rotating at the same rate as the inner wheel, since both wheels are fixed to a rigid axle. This self-centering geometry benefits stability and passenger comfort. The "flange" of the wheel limits lateral movement, which is one reason railways use wide curves. The distance between the rails must be slightly greater than the distance between the wheel flanges so that the wheels can move laterally in curves, at some cost in efficiency and comfort. The "nominal gauge" of a railway is defined for the straight sections and widened in curves to allow the train to tilt.
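The conical-wheel geometry can be checked with a little arithmetic: shifting the wheelset sideways raises the rolling radius of the outer wheel and lowers that of the inner one, so one axle revolution covers more distance on the outside of the curve. A toy calculation (the taper, radius, and shift values below are illustrative, not real rolling-stock dimensions):

```python
import math

cone_slope = 0.05   # wheel taper: change in rolling radius per unit of lateral shift
r_nominal = 0.46    # nominal wheel radius in metres
shift = 0.01        # lateral shift of the wheelset in a curve, in metres

# The outer wheel rides on a larger radius, the inner on a smaller one.
r_outer = r_nominal + cone_slope * shift
r_inner = r_nominal - cone_slope * shift

# Distance covered per axle revolution by each wheel of the rigid wheelset.
d_outer = 2 * math.pi * r_outer
d_inner = 2 * math.pi * r_inner
print(round(d_outer - d_inner, 5))  # the outer wheel travels farther per revolution
```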
|
|
null | false
| null |
Ursula Martius Franklin was a German-Canadian metallurgist, research physicist, author, and educator who taught at the University of Toronto for more than 40 years. Franklin was a pioneer in the field of archaeometry, which applies modern materials analysis to archaeology. She worked, for example, on the dating of prehistoric bronze, copper, and ceramic artifacts. One small example of her work in this field concerns what was a standing question on the nature of shiny black Chinese mirrors found in large quantities in ancient tombs. Franklin's use of microscopic etching, electron microprobe, and x-ray fluorescence analyses produced evidence that what was thought by some to be a corrosive effect was in fact present in these ancient mirrors (and weapons) at their inception, in dark iron oxides intentionally added near the objects' surfaces. Franklin additionally drew on historic and literary accounts of black mirrors in Chinese literature to support these findings. Franklin's expertise was also instrumental in dating glass; she guided a study on the remains of blue glass beads in North America left by early trade relationships between American Indian tribes and Europe.
|
Provide an example of using modern materials science to study archaeological artifacts
|
Archaeometry employs modern technology (such as x-ray fluorescence) to study the origins and environments of historical artifacts. Ursula Franklin was well known in this field; she found evidence of global trade in the remains of glass beads and proved that black mirrors present in large quantities in ancient Chinese tombs were purposely crafted, and not the result of corrosion.
|
null | false
| 72
|
We tested the proposed UTCNN on two different datasets: FBFans and CreateDebate. FBFans is a privately owned, single-topic, Chinese, unbalanced social media dataset, and CreateDebate is a public, multiple-topic, English, balanced forum dataset. Results on these two datasets show the model's applicability and superiority across topics, languages, data distributions, and platforms.
The FBFans dataset contains data from anti-nuclear-power Chinese Facebook fan groups from September 2013 to August 2014, including posts and their author and liker IDs. There are a total of 2,496 authors, 505,137 likers, 33,686 commenters, and 505,412 unique users. Two annotators were asked to take into account only the post content to label the stance of the posts in the whole dataset as supportive, neutral, or unsupportive (hereafter denoted as Sup, Neu, and Uns). Sup/Uns posts were those in support of or against anti-reconstruction; Neu posts were those evincing a neutral standpoint on the topic, or were irrelevant. Raw agreement between annotators is 0.91, indicating high agreement. Specifically, Cohen’s Kappa for Neu and not Neu labeling is 0.58 (moderate), and for Sup or Uns labeling is 0.84 (almost perfect). Posts with inconsistent labels were filtered out, and the development and testing sets were randomly selected from what was left. Posts in the development and testing sets involved at least one user who appeared in the training set. The number of posts for each stance is shown on the left-hand side of Table TABREF12 . About twenty percent of the posts were labeled with a stance, and the number of supportive (Sup) posts was much larger than that of the unsupportive (Uns) ones: this is thus highly skewed data, which complicates stance classification. On average, 161.1 users were involved in one post. The maximum was 23,297 and the minimum was one (the author). For comments, on average there were 3 comments per post. The maximum was 1,092 and the minimum was zero.
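The agreement figures quoted above can be reproduced from the two annotators' label lists; a minimal sketch of Cohen's kappa (the labels below are made up for illustration, not actual FBFans annotations):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators labeling the same items."""
    n = len(labels_a)
    # Observed agreement: fraction of items given identical labels.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement, from each annotator's label marginals.
    count_a, count_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(count_a[c] * count_b[c] for c in count_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical stance labels (Sup/Neu/Uns) for six posts.
a = ["Sup", "Sup", "Neu", "Uns", "Neu", "Sup"]
b = ["Sup", "Sup", "Neu", "Uns", "Sup", "Sup"]
print(round(cohens_kappa(a, b), 3))
```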
To test whether the assumption of this paper, that posts attract users who hold the same stance to like them, is reliable, we examine the likes from authors of different stances. Posts in the FBFans dataset are used for this analysis. We calculate the like statistics of each distinct author from these 32,595 posts. As the numbers of authors in the Sup, Neu and Uns stances are largely imbalanced, these numbers are normalized by the number of users of each stance. Table TABREF13 shows the results. Posts with stances (i.e., not neutral) attract users of the same stance. Neutral posts also attract both supportive and neutral users, as supportive posts do, but neutral posts attract even more neutral likers. These results suggest that users prefer posts of the same stance, or at least posts with no obvious stance that might annoy them when reading, and hence support the user modeling in our approach.
The CreateDebate dataset was collected from an English online debate forum discussing four topics: abortion (ABO), gay rights (GAY), Obama (OBA), and marijuana (MAR). The posts are annotated as for (F) or against (A). Replies to posts in this dataset are also labeled with stance and hence use the same data format as posts. The labeling results are shown in the right-hand side of Table TABREF12 . We observe that this dataset is more balanced than the FBFans dataset. In addition, there are 977 unique users in the dataset. To compare with Hasan and Ng's work, we conducted five-fold cross-validation and report the results as the average over all folds BIBREF9 , BIBREF5 .
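Five-fold cross-validation partitions the items into five folds, trains on four, and tests on the held-out fold, rotating through all five; a minimal fold-splitting sketch (contiguous index folds, a simplification of the splits such work may actually use):

```python
def k_fold_indices(n_items, k=5):
    """Split item indices 0..n_items-1 into k contiguous, near-equal folds."""
    folds, start = [], 0
    for i in range(k):
        # The first n_items % k folds absorb the remainder items.
        size = n_items // k + (1 if i < n_items % k else 0)
        folds.append(list(range(start, start + size)))
        start += size
    return folds

# 977 here just reuses the user count from the text as an example item count.
folds = k_fold_indices(977, k=5)
print([len(f) for f in folds])  # fold sizes differ by at most one
```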
The FBFans dataset offers more types of interaction than the CreateDebate dataset; thus our model can utilize all of its linguistic and extra-linguistic features. For the CreateDebate dataset, on the other hand, the like and comment features are not available (as there is a stance label for each reply, replies are evaluated as posts, as in previous work), but we still implemented our model using the content, author, and topic information.
The FBFans dataset contains data from anti-nuclear-power Chinese Facebook fan groups from September 2013 to August 2014, including posts and their author and liker IDs.
|
Where is the data of the FBFans dataset from?
|
Anti-nuclear-power Chinese Facebook fan groups from September 2013 to August 2014.
|
null | false
| 493
|
Then, using the trajectory observations, we train the policies. We show the pseudo-code of the modified REINFORCE-based trajectory EQMRL in Algorithm 2. We set the number of resamples as m1 = 50, the size of each resample m2 = 5τ, the number of periods of return data τ = 60, the number of resampled subsets m3 = 50, and the size of each subset m4 = n^0.7, where m is the number of assets; that is, m = 25 in FF25, m = 48 in FF48, and m = 100 in FF100.
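The resampling of the return history can be sketched as a simple bootstrap with the stated sizes; the drawing scheme below (sampling whole periods with replacement, keeping the cross-section of assets intact) is an assumption for illustration, not the authors' exact Algorithm 2:

```python
import numpy as np

rng = np.random.default_rng(0)
tau = 60                # number of periods of return data
m1, m2 = 50, 5 * tau    # number of resamples, size of each resample
m = 25                  # number of assets (FF25)

# Stand-in return history: tau periods x m assets.
returns = rng.normal(size=(tau, m))

# Each resample draws m2 period indices with replacement, so every drawn
# row keeps the same-period returns of all assets together.
resamples = [returns[rng.integers(0, tau, size=m2)] for _ in range(m1)]
print(len(resamples), resamples[0].shape)
```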
|
For the real-world dataset, how have the authors trained the model.?
|
We assume the stationarity of the time series and train the policy using resampled observations, as shown in Algorithm 2 of the updated manuscript. We will add a more detailed description at the camera-ready stage.
|
null | false
| null |
What is the name of the season that begins in the fall and ends in the spring?
|
Winter
|
|
null | false
| null |
Give me a list of five films made in the 1950s whose stars are still alive
|
Rio Bravo (1959) Angie Dickinson
Brothers Karamazov (1958) William Shatner
Revenge of the Creature (1955) Clint Eastwood
An American in Paris (1951) Leslie Caron
Vertigo (1958) Kim Novak
|
|
null | false
| null |
The Needle is a fictional character, a mutant supervillain appearing in American comic books published by Marvel Comics. Created by Mark Gruenwald, Carmine Infantino, and Al Gordon, the character first appeared in Spider-Woman #9 (December 1978). Creator Mark Gruenwald brought the character back during his run on the West Coast Avengers as a member of the villain team Night Shift, which included other former Spider-Woman foes.
|
Extract the names of creators of The Needle from the text provided. Separate them with a comma.
|
Mark Gruenwald, Carmine Infantino, Al Gordon
|
null | false
| null |
How can I improve my table tennis skills?
|
To improve your table tennis skills, first you need to play more to build muscle memory. You can also watch videos online to learn specific techniques. And you can also consider taking some personal training courses. All of the above will help you improve your table tennis skills.
|
|
null | false
| null |
What is a credit card?
|
A credit card is a payment card issued to users (cardholders) to enable the cardholder to pay a merchant for goods and services based on the cardholder's accrued debt (i.e., a promise to the card issuer to pay them the amounts owed plus the other agreed charges). The card issuer (usually a bank or credit union) creates a revolving account and grants a line of credit to the cardholder, from which the cardholder can borrow money for payment to a merchant or as a cash advance. There are two credit card groups: consumer credit cards and business credit cards. Most cards are plastic, but some are metal cards (stainless steel, gold, palladium, titanium), and a few are gemstone-encrusted metal cards.
|
|
null | false
| 363
|
Understanding what people are talking about and how they feel about it is valuable, especially for industries which need to know their customers' opinions on their products. Aspect-Based Sentiment Analysis (ABSA) is a branch of sentiment analysis which deals with extracting the opinion targets (aspects) as well as the sentiment expressed towards them. For instance, in the sentence "The spaghetti was out of this world.", a positive sentiment is expressed towards the target, which is "spaghetti". Performing these tasks requires a deep understanding of the language. Traditional machine learning methods such as SVM BIBREF2, Naive Bayes BIBREF3, Decision Trees BIBREF4, and Maximum Entropy BIBREF5 have long been used to acquire such knowledge. However, in recent years, due to the abundance of available data and computational power, deep learning methods such as CNNs BIBREF6, BIBREF7, BIBREF8, RNNs BIBREF9, BIBREF10, BIBREF11, and the Transformer BIBREF12 have outperformed the traditional machine learning techniques in various tasks of sentiment analysis. Bidirectional Encoder Representations from Transformers (BERT) BIBREF13 is a deep and powerful language model which uses the encoder of the Transformer in a self-supervised manner to learn the language model. It has been shown to achieve state-of-the-art performance on the GLUE benchmark BIBREF14, including text classification. BIBREF1 show that adding domain-specific information to this model can enhance its performance in ABSA. Using their post-trained BERT (BERT-PT), we add adversarial examples to further improve BERT's performance on Aspect Extraction (AE) and Aspect Sentiment Classification (ASC), which are two major tasks in ABSA. A brief overview of these two sub-tasks is given in Section SECREF3.
Adversarial examples are a way of fooling a neural network into behaving incorrectly BIBREF15. They are created by applying small perturbations to the original inputs. In the case of images, the perturbations can be invisible to the human eye, but they can cause neural networks to output a completely different response from the true one. Since neural nets make mistakes on these examples, introducing them to the network during training can improve performance. This is called adversarial training, which acts as a regularizer to help the network generalize better BIBREF0. Due to the discrete nature of text, it is not feasible to produce perturbed examples from the original inputs. As a workaround, BIBREF16 apply this technique to the word embedding space for text classification. Inspired by them and building on the work of BIBREF1, we experiment with adversarial training for ABSA.
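The embedding-space workaround can be sketched in a few lines: the adversarial perturbation is the epsilon-scaled, L2-normalized gradient of the loss with respect to the word embeddings, added to those embeddings before a second forward pass. A minimal illustration (the gradient below is a stand-in value, not computed from an actual model):

```python
import numpy as np

def adversarial_perturbation(grad, epsilon=1.0):
    """FGM-style perturbation: a step of size epsilon along the
    normalized gradient of the loss w.r.t. the embeddings."""
    norm = np.linalg.norm(grad)
    if norm == 0.0:
        return np.zeros_like(grad)
    return epsilon * grad / norm

# Stand-in for d(loss)/d(embeddings) of a two-token sequence (seq_len x dim).
grad = np.array([[3.0, 4.0], [0.0, 0.0]])
r_adv = adversarial_perturbation(grad, epsilon=0.5)

# The adversarial example is embeddings + r_adv; its loss is added to the
# clean loss during training, acting as a regularizer.
print(r_adv)
```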
Our contributions are twofold. First, by carrying out an ablation study on the number of training epochs and the values for dropout in the classification layer, we show that there are values that outperform the specified ones for BERT-PT. Second, we introduce the application of adversarial training in ABSA by proposing a novel architecture which combines adversarial training with the BERT language model for AE and ASC tasks. Our experiments show that the proposed model outperforms the best performance of BERT-PT in both tasks.
Our experiments show that the proposed model outperforms the best performance of BERT-PT in both tasks.
|
Does the proposed model outperform the best performance of BERT-PT in both tasks?
|
Yes, it does.
|
null | false
| null |
I would like to get a camper for the family vacations. What things do I need to be aware of when buying one?
|
Top considerations for a camper are budget, size, weight, and the payload required of the tow vehicle. Payload is usually the most overlooked factor, as trailers can often push a vehicle well over the payload rating listed by the manufacturer. It is best to choose a tow vehicle with a payload buffer so that you can tow the trailer safely. Most car and truck manufacturers advertise much higher towing capabilities than are realistic, since added options reduce the available payload and tow capacity.
|
|
null | false
| null |
Which characters belong to DC or Marvel Universe? Vision, Red Tornado
|
Vision is Marvel, Red Tornado is DC
|
|
null | false
| null |
How to manage a high performance team?
|
Create a supportive environment
Encourage innovation
Acknowledge and learn from conflict
Assign specific tasks
Create incentives
Foster independence
|
|
1912.10435
| false
| null |
We constructed the ensembled predictions by choosing the answer from the network that had the highest probability and choosing no answer if any of the networks predicted no answer.
We constructed the ensembled predictions by choosing the answer from the network that had the highest probability and choosing no answer if any of the networks predicted no answer.
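The ensembling rule quoted above can be sketched directly; `predictions` below holds one (answer, probability) pair per network, with `None` standing for a no-answer prediction (this representation is illustrative, not the paper's actual code):

```python
def ensemble(predictions):
    """predictions: list of (answer, probability) pairs, one per network;
    answer is None when that network predicts no answer."""
    # If any network predicts no answer, the ensemble outputs no answer.
    if any(answer is None for answer, _ in predictions):
        return None
    # Otherwise pick the answer with the highest probability.
    return max(predictions, key=lambda pair: pair[1])[0]

print(ensemble([("Paris", 0.7), ("Paris", 0.9), ("Lyon", 0.4)]))  # Paris
print(ensemble([("Paris", 0.7), (None, 0.8)]))                    # None
```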
|
What ensemble methods are used for best model?
|
The answers are shown as follows:
* choosing the answer from the network that had the highest probability and choosing no answer if any of the networks predicted no answer
|
null | false
| null |
Who first recorded the weather?
|
It is believed that people in China made notes about wind, rain, and snow on pieces of tortoiseshell. Some of this evidence dates back 3,000 years, making the Chinese the first known people to record the weather.
|
|
null | false
| null |
Identify which animal species is alive or extinct: Meganuera, Sei Whale
|
Sei Whale is alive, Meganuera is extinct.
|
|
null | false
| null |
Make a bullet list of 5 gifts you can give someone for Valentine's Day
|
- Chocolates
- Bouquet of Roses
- Stuffed animal
- Heart shaped balloon
- Personalized Card
|
|
null | false
| 140
|
“Ché saetta previsa vien più lenta.”
– Dante Alighieri, Divina Commedia, Paradiso
Antisocial behavior is a persistent problem plaguing online conversation platforms; it is both widespread BIBREF0 and potentially damaging to mental and emotional health BIBREF1, BIBREF2. The strain this phenomenon puts on community maintainers has sparked recent interest in computational approaches for assisting human moderators.
Prior work in this direction has largely focused on post-hoc identification of various kinds of antisocial behavior, including hate speech BIBREF3, BIBREF4, harassment BIBREF5, personal attacks BIBREF6, and general toxicity BIBREF7. The fact that these approaches only identify antisocial content after the fact limits their practicality as tools for assisting pre-emptive moderation in conversational domains.
Addressing this limitation requires forecasting the future derailment of a conversation based on early warning signs, giving the moderators time to potentially intervene before any harm is done (BIBREF8, BIBREF9; see BIBREF10 for a discussion). Such a goal recognizes derailment as emerging from the development of the conversation, and belongs to the broader area of conversational forecasting, which includes future-prediction tasks such as predicting the eventual length of a conversation BIBREF11, whether a persuasion attempt will eventually succeed BIBREF12, BIBREF13, BIBREF14, whether team discussions will eventually lead to an increase in performance BIBREF15, or whether ongoing counseling conversations will eventually be perceived as helpful BIBREF16.
Approaching such conversational forecasting problems, however, requires overcoming several inherent modeling challenges. First, conversations are dynamic and their outcome might depend on how subsequent comments interact with each other. Consider the example in Figure FIGREF2: while no individual comment is outright offensive, a human reader can sense a tension emerging from their succession (e.g., dismissive answers to repeated questioning). Thus a forecasting model needs to capture not only the content of each individual comment, but also the relations between comments. Previous work has largely relied on hand-crafted features to capture such relations—e.g., similarity between comments BIBREF16, BIBREF12 or conversation structure BIBREF17, BIBREF18—, though neural attention architectures have also recently shown promise BIBREF19.
The second modeling challenge stems from the fact that conversations have an unknown horizon: they can be of varying lengths, and the to-be-forecasted event can occur at any time. So when is it a good time to make a forecast? Prior work has largely proposed two solutions, both resulting in important practical limitations. One solution is to assume (unrealistic) prior knowledge of when the to-be-forecasted event takes place and extract features up to that point BIBREF20, BIBREF8. Another compromising solution is to extract features from a fixed-length window, often at the start of the conversation BIBREF21, BIBREF15, BIBREF16, BIBREF9. Choosing a catch-all window-size is however impractical: short windows will miss information in comments they do not encompass (e.g., a window of only two comments would miss the chain of repeated questioning in comments 3 through 6 of Figure FIGREF2), while longer windows risk missing the to-be-forecasted event altogether if it occurs before the end of the window, which would prevent early detection.
In this work we introduce a model for forecasting conversational events that overcomes both these inherent challenges by processing comments, and their relations, as they happen (i.e., in an online fashion). Our main insight is that models with these properties already exist, albeit geared toward generation rather than prediction: recent work in context-aware dialog generation (or “chatbots”) has proposed sequential neural models that make effective use of the intra-conversational dynamics BIBREF22, BIBREF23, BIBREF24, while concomitantly being able to process the conversation as it develops (see BIBREF25 for a survey).
In order for these systems to perform well in the generative domain they need to be trained on massive amounts of (unlabeled) conversational data. The main difficulty in directly adapting these models to the supervised domain of conversational forecasting is the relative scarcity of labeled data: for most forecasting tasks, at most a few thousands labeled examples are available, insufficient for the notoriously data-hungry sequential neural models.
To overcome this difficulty, we propose to decouple the objective of learning a neural representation of conversational dynamics from the objective of predicting future events. The former can be pre-trained on large amounts of unsupervised data, similarly to how chatbots are trained. The latter can piggy-back on the resulting representation after fine-tuning it for classification using relatively small labeled data. While similar pre-train-then-fine-tune approaches have recently achieved state-of-the-art performance in a number of NLP tasks—including natural language inference, question answering, and commonsense reasoning (discussed in Section SECREF2)—to the best of our knowledge this is the first attempt at applying this paradigm to conversational forecasting.
To test the effectiveness of this new architecture in forecasting derailment of online conversations, we develop and distribute two new datasets. The first triples in size the highly curated `Conversations Gone Awry' dataset BIBREF9, where civil-starting Wikipedia Talk Page conversations are crowd-labeled according to whether they eventually lead to personal attacks; the second relies on in-the-wild moderation of the popular subreddit ChangeMyView, where the aim is to forecast whether a discussion will later be subject to moderator action due to “rude or hostile” behavior. In both datasets, our model outperforms existing fixed-window approaches, as well as simpler sequential baselines that cannot account for inter-comment relations. Furthermore, by virtue of its online processing of the conversation, our system can provide substantial prior notice of upcoming derailment, triggering on average 3 comments (or 3 hours) before an overtly toxic comment is posted.
To summarize, in this work we:
introduce the first model for forecasting conversational events that can capture the dynamics of a conversation as it develops;
build two diverse datasets (one entirely new, one extending prior work) for the task of forecasting derailment of online conversations;
compare the performance of our model against the current state-of-the-art, and evaluate its ability to provide early warning signs.
Our work is motivated by the goal of assisting human moderators of online communities by preemptively signaling at-risk conversations that might deserve their attention. However, we caution that any automated systems might encode or even amplify the biases existing in the training data BIBREF26, BIBREF27, BIBREF28, so a public-facing implementation would need to be exhaustively scrutinized for such biases BIBREF29.
Our main insight is that models with these properties already exist, albeit geared toward generation rather than prediction: recent work in context-aware dialog generation (or “chatbots”) has proposed sequential neural models that make effective use of the intra-conversational dynamics, while concomitantly being able to process the conversation as it develops (see for a survey).
|
What's the authors' main insight?
|
Models with these properties already exist, albeit geared toward generation rather than prediction: recent work in context-aware dialog generation (or “chatbots”) has proposed sequential neural models that make effective use of the intra-conversational dynamics, while concomitantly being able to process the conversation as it develops (see for a survey).
|
null | false
| null |
The quit India movement is also known as
|
August Kranti
|
|
1707.00995
| false
| null |
It is also worth mentioning that we use a ResNet trained on 1.28 million images for a classification task. The features used by the attention mechanism are strongly object-oriented, and the machine could miss important information for a multimodal translation task. We believe that the robust architecture of both encoders INLINEFORM0 combined with a GRU layer and word-embeddings took care of the right translation for relationships between objects and time-dependencies. Yet, we noticed a common misbehavior across all our multimodal models: if the attention loses track of the objects in the picture and "gets lost", the model still takes it into account and somehow overrides the information brought by the text-based annotations. The translation is then totally misled. We illustrate with an example:
Yet, we noticed a common misbehavior across all our multimodal models: if the attention loses track of the objects in the picture and "gets lost", the model still takes it into account and somehow overrides the information brought by the text-based annotations. The translation is then totally misled.
|
What misbehavior is identified?
|
The answers are shown as follows:
* if the attention loses track of the objects in the picture and "gets lost", the model still takes it into account and somehow overrides the information brought by the text-based annotations
|
null | false
| null |
Is it a gas, liquid, or solid
|
Solid: stone, dirt, brick, concrete. Liquid: water, lava. Gas: oxygen, hydrogen, nitrogen. A cloud is not a single phase: it is made of liquid droplets or ice crystals suspended in air.
|
|
null | false
| 188
|
For this report, we are exploring the field of natural language processing, which is the broad study of how computers and machines can understand human-to-human communication and how machines analyze texts based on contextual information.
In particular, we are using natural language processing to classify news articles as real news or “fake news”. Fake news is misinformation masked under the guise of a real news article, and is used to deceptively influence people’s beliefs.
For this report, we are classifying news articles as "real" or "fake", which is a binary classification problem: classifying each sample as positive (fake news) or negative (not fake news). Many studies have used machine learning algorithms to build classifiers based on features like the content and the author's name and job title, using models such as the convolutional neural network (CNN), recurrent neural network (RNN), feed-forward neural network (FFNN), long short-term memory (LSTM), and logistic regression to find the most optimal model and report its results. In [1], the author built a classifier using natural language processing and models like CNN, RNN, FFNN, and logistic regression, and concluded that the CNN classifiers were not as competitive as the RNN classifiers. The authors in [2] suggest that their study could be improved with more features, such as the history of lies told by the news reporter or speaker.
Moreover, apart from the traditional machine learning methods, new models have also been developed. One of the newer models, TraceMiner, builds an LSTM-RNN model that infers embeddings of social media users from the social network structure and propagates them along the path of messages, and it has provided high classification accuracy$^{5}$. FAKEDETECTOR is another inference model, developed to assess the credibility of fake news, that is considered quite reliable and accurate$^{7}$.
There also have been studies that have a different approach. A paper surveys the current state-of-the-art technologies that are imperative when adopting and developing fake news detection and provides a classification of several accurate assessment methods that analyze the text and detect anomalies$^{3}$.
These previous approaches lack a clear contextual analysis used in NLP. We considered the semantic meaning of each word and we feel that the presence of particular words influence the meaning. We reckoned this important since we felt the contextual meaning of the text needs to be preserved and analyzed for better classification. Other studies emphasize the user and features related to them. In [4], “45 features…[were used] for predicting accuracy...across four types: structural, user, content, and temporal,” so features included characteristics beyond the text. Article [6] "learn[s] the representations of news articles, creators and subjects simultaneously." In our project, we emphasize the content by working with articles whose labels only relate to the text, nothing outside that scope, and have used SVM, Logistic Regression, ANN, LSTM, and Random Forest.
We divided this problem into three phases: pre-processing, text-to-numeric representation conversion using pre-trained algorithms, and evaluation using state-of-the-art machine learning models. We analysed the data set, in particular how the text portion is distributed, and then converted each text into a numeric representation using pre-training models such as TFIDF, CV, and W2V. Finally, we evaluated the numeric data with significant machine learning algorithms, such as neural networks and classification algorithms, to perform the classification.
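The text-to-numeric conversion step can be illustrated with a bare-bones TF-IDF (a sketch of the idea only; the report's actual TFIDF/CV/W2V conversions come from pre-trained library implementations):

```python
import math
from collections import Counter

def tfidf(docs):
    """Map each tokenized document to a {term: tf-idf weight} dict,
    using the plain log(N / df) inverse document frequency."""
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append({t: (tf[t] / len(doc)) * math.log(n / df[t]) for t in tf})
    return vectors

docs = [["fake", "news", "alert"], ["real", "news", "today"]]
vecs = tfidf(docs)
# "news" occurs in every document, so its idf (and hence tf-idf) is zero.
print(vecs[0]["news"], round(vecs[0]["fake"], 3))
```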
Fake news is misinformation masked under the guise of a real news article, and is used to deceptively influence people’s beliefs.
|
What is fake news?
|
Fake news is misinformation masked under the guise of a real news article, and is used to deceptively influence people’s beliefs.
|
2003.08529
| false
| null |
In the first task, we use the SST-2 (Stanford Sentiment Treebank, version 2) dataset BIBREF25 to conduct sentiment analysis experiments. SST-2 is a sentence binary classification dataset with train/dev/test splits provided and two types of sentence labels, i.e., positive and negative.
The second task involves two essential problems in SLU, which are intent classification (IC) and slot labeling (SL). In IC, the model needs to detect the intention of a text input (i.e., utterance, conveys). For example, for an input of I want to book a flight to Seattle, the intention is to book a flight ticket, hence the intent class is bookFlight. In SL, the model needs to extract the semantic entities that are related to the intent. From the same example, Seattle is a slot value related to booking the flight, i.e., the destination. Here we experiment with the Snips dataset BIBREF26, which is widely used in SLU research. This dataset contains test spoken utterances (text) classified into one of 7 intents.
In the first task, we use the SST-2 (Stanford Sentiment Treebank, version 2) dataset BIBREF25 to conduct sentiment analysis experiments. SST-2 is a sentence binary classification dataset with train/dev/test splits provided and two types of sentence labels, i.e., positive and negative.
Here we experiment with the Snips dataset BIBREF26, which is widely used in SLU research. This dataset contains test spoken utterances (text) classified into one of 7 intents.
|
Which real-world datasets did they use?
|
The answers are shown as follows:
* SST-2 (Stanford Sentiment Treebank, version 2)
* Snips
|
null | false
| null |
The Mauritius fody (Foudia rubra) is a rare species of bird in the weaver family. It is endemic to the island of Mauritius. It is classified by BirdLife International as being endangered. It is also on the United States' Endangered Species List with an endangered status.
This bird is 14 centimeters long. Breeding males are olive brown with a red head, breast, and rump patch and black lores, while females, non-breeding males, and juveniles are olive brown with white wing bars and a brown bill.
The bird lives in several types of forest, including degraded areas, as well as plantations. Stands of Japanese cedar (Cryptomeria japonica) have replaced native vegetation and now provide protection against predators. It feeds on insects like grasshoppers, beetle larvae, caterpillars, and also spiders. Berries are eaten regularly by some individuals. It feeds on nectar regularly, using its specialised brush-tipped tongue.
The bird is a weaver, the male and female cooperating to weave each nest, from material like grass, moss and small twigs.
|
Given a reference text about the Mauritius fody, tell me what they look like.
|
Male and female Mauritius fody birds have different appearances. Breeding males are olive brown with red heads, breast and rump patches, and black lores. Females and non-breeding males are olive brown with white wing bars and brown bills.
|
null | false
| null |
The record for the most Grammy Awards won in a lifetime is held by Beyoncé, an American singer, songwriter, and dancer, who has won 32. It was previously held by Georg Solti, a Hungarian-British conductor, who won 31.
|
Given this paragraph about the Grammy Awards, who has the most Grammys of all time?
|
The current record holder for most Grammy Awards won in a lifetime is Beyoncé.
|
null | false
| null |
Who directed the most episodes of season seven of Game of Thrones?
|
Jeremy Podeswa, Mark Mylod, and Matt Shakman each directed two episodes of the seventh season of Game of Thrones.
|
|
null | false
| null |
Give me an enumerated list of Four Noble Truths mentioned in Buddhism
|
1. Suffering: Buddhism teaches that struggle and suffering are part of life; everyone suffers in one way or another. Acknowledging this promotes pragmatism and serves as an antidote to negativity: once one accepts it, it becomes easier to move forward in life and to face difficulties with confidence.
2. Cause of suffering: Desire, ignorance and attachment are often described as the causes of the suffering explained in the previous point. For instance, Buddhism further explains that various vices, such as hatred, jealousy, temper and greed, are by-products of our ignorance.
3. End of suffering: Simply put, this Truth invites a person to become aware of the above causes of suffering and hence to walk the path, i.e. lead a life, with clarity and an unshakable spirit, whose primary goal is to reach the state of Nirvana.
4. Path that leads to the end of suffering: To reach the state of Nirvana, various practices are defined for leading one's life so that, despite the suffering present in day-to-day life, one always keeps a calm and composed attitude towards life.
|
|
null | false
| 138
|
We present our results for the binary classification tasks in Tables TABREF23 – TABREF24 in terms of average precision, average recall and macro average F1 score. The results of the regression tasks are reported in Tables TABREF25 and TABREF29 in terms of the mean absolute error between the predicted and actual scores, as well as the Spearman INLINEFORM0 correlation between the rankings induced by both sets of scores. It can be clearly seen from the results that our proposed method (EGEL-All) can effectively integrate Flickr tags with the available structured information. It outperforms the baselines for all the considered tasks. Furthermore, note that the PPMI-based weighting in EGEL-Tags consistently outperforms GloVe and that both the addition of negative examples and term selection lead to further improvements. The use of term selection leads to particularly substantial improvements for the regression problems.
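The PPMI-based weighting mentioned above can be sketched as follows; this is a generic positive pointwise mutual information computation over a toy co-occurrence matrix, not the paper's exact EGEL-Tags formulation.

```python
import numpy as np

# Hedged sketch of PPMI weighting over a word-context co-occurrence
# count matrix: PMI(i, j) = log(p(i, j) / (p(i) * p(j))), clipped at zero.
def ppmi(counts):
    counts = np.asarray(counts, dtype=float)
    total = counts.sum()
    p_ij = counts / total
    p_i = counts.sum(axis=1, keepdims=True) / total
    p_j = counts.sum(axis=0, keepdims=True) / total
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log(p_ij / (p_i * p_j))
    pmi[~np.isfinite(pmi)] = 0.0  # zero counts give log(0); map them to 0
    return np.maximum(pmi, 0.0)

m = ppmi([[10, 0], [0, 10]])
```

Strongly associated word-context pairs get positive weights, while unobserved pairs are zeroed out rather than penalized.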
While our experimental results confirm the usefulness of embeddings for predicting environmental features, this is only consistently the case for the variants that use both the tags and the structured datasets. In particular, comparing BOW-Tags with EGEL-Tags, we sometimes see that the former achieves the best results. While this might seem surprising, it is in accordance with the findings in BIBREF54 , BIBREF38 , among others, where it was also found that bag-of-words representations can sometimes lead to surprisingly effective baselines. Interestingly, we note that in all cases where EGEL-KL(Tags+NS) performs worse than BOW-Tags, we also find that BOW-KL(Tags) performs worse than BOW-Tags. This suggests that for these tasks there is a very large variation in the kind of tags that can inform the prediction model, possibly including e.g. user-specific tags. Some of the information captured by such highly specific but rare tags is likely to be lost in the embedding.
To further analyze the difference in performance between BoW representations and embeddings, Figure 1 compares the performance of the GloVe model with the bag-of-words model for predicting place scenicness, as a function of the number of tag occurrences at the considered locations. What is clearly noticeable in Figure 1 is that GloVe performs better than the bag-of-words model for large corpora and worse for smaller corpora. This issue has been alleviated in our embedding method by the addition of negative examples.
To further analyze the difference in performance between BoW representations and embeddings, Figure 1 compares the performance of the GloVe model with the bag-of-words model for predicting place scenicness, as a function of the number of tag occurrences at the considered locations.
|
Which two models are compared together to further analyze the difference in performance between BoW representations and embeddings?
|
The GloVe model and the bag-of-words model.
|
null | false
| null |
How many toes does a cat have?
|
A cat typically has five toes on each front paw: four on the paw pad and a smaller one higher up on the inside of the leg called the dewclaw. The back paws usually have four toes each. Some cats are polydactyl, which means they can have more than the average number of toes.
|
|
null | false
| null |
What is Spot rate in finance?
|
This is the exchange rate for foreign-exchange transactions that settle immediately, i.e., "on the spot".
|
|
null | false
| null |
When Guettel took up music composition in his mid-teens, he was encouraged by his family. His mother said that she offered him advice for around a year, "After that, he was so far beyond anything I could ever have dreamed of, I just backed off." Richard Rodgers, who died when Guettel was 15, overheard an early composition, said he liked it and asked him to play it louder. Guettel has qualified the compliment, noting that "He was literally on his deathbed on the other side of the living-room wall." In his high school and collegiate years and into his early twenties, Guettel worked as a rock and jazz musician, singing and playing bass, before realizing "that writing for character and telling stories through music was something that I really loved to do, and that allowed me to express love."
|
List all the names of people in this paragraph. Separate the names with commas.
|
Guettel, Richard Rodgers
|
1909.04625
| false
| null |
are trained to output the probability distribution of the upcoming word given a context, without explicitly representing the structure of the context BIBREF9, BIBREF10. We trained two two-layer recurrent neural language models with long short-term memory architecture BIBREF11 on a relatively small corpus. The first model, referred as `LSTM (PTB)' in the following sections, was trained on the sentences from Penn Treebank BIBREF12. The second model, referred as `LSTM (FTB)', was trained on the sentences from French Treebank BIBREF13. We set the size of input word embedding and LSTM hidden layer of both models as 256.
We also compare LSTM language models trained on large corpora. We incorporate two pretrained English language models: one trained on the Billion Word benchmark (referred as `LSTM (1B)') from BIBREF14, and the other trained on English Wikipedia (referred as `LSTM (enWiki)') from BIBREF3. For French, we trained a large LSTM language model (referred as `LSTM (frWaC)') on a random subset (about 4 million sentences, 138 million word tokens) of the frWaC dataset BIBREF15. We set the size of the input embeddings and hidden layers to 400 for the LSTM (frWaC) model since it is trained on a large dataset.
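The language models described above all expose the same interface: given a context, output a probability distribution over the next word. As a hedged stand-in for that interface (a count-based bigram model, not the LSTMs themselves), the idea can be sketched as:

```python
from collections import Counter, defaultdict

# A bigram language model exposes the same interface as the LSTMs in the
# text: P(next word | context). Here the context is just the previous word.
def train_bigram_lm(sentences):
    ctx = defaultdict(Counter)
    for sent in sentences:
        for prev, nxt in zip(sent, sent[1:]):
            ctx[prev][nxt] += 1
    return ctx

def next_word_dist(model, word):
    """Normalize counts following `word` into a probability distribution."""
    counts = model[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

lm = train_bigram_lm([["the", "cat", "sat"], ["the", "dog", "sat"]])
dist = next_word_dist(lm, "the")
```

An LSTM replaces the count table with a learned recurrent state, but the output object, a distribution over the vocabulary, is the same.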
The first model, referred as `LSTM (PTB)' in the following sections, was trained on the sentences from Penn Treebank BIBREF12. The second model, referred as `LSTM (FTB)', was trained on the sentences from French Treebank BIBREF13.
We incorporate two pretrained English language models: one trained on the Billion Word benchmark (referred as `LSTM (1B)') from BIBREF14, and the other trained on English Wikipedia (referred as `LSTM (enWiki)') from BIBREF3. For French, we trained a large LSTM language model (referred as `LSTM (frWaC)') on a random subset (about 4 million sentences, 138 million word tokens) of the frWaC dataset BIBREF15.
|
What is the size of the datasets employed?
|
The answers are shown as follows:
* (about 4 million sentences, 138 million word tokens)
* one trained on the Billion Word benchmark
|
null | false
| null |
Which classical composer was deaf?
|
Ludwig van Beethoven
|
|
null | false
| 415
|
In recent years, there has been a significant amount of work that studies and improves both adversarial and certified robustness of neural networks. However, currently, there are two key limitations that hinder the wider adoption of robust models in practice.
Existing Models are Robustly Inaccurate First, despite substantial progress in training robust models, existing works usually only report robust accuracy, i.e., samples for which the model robustly predicts the correct label. Meanwhile, the issue of robust inaccuracy, i.e., samples that are robustly misclassified with a wrong label, is usually not even reported (we formally define robust inaccuracy in Equation). This is especially problematic for safety-critical models, where the robustness can be mistakenly used as a safety argument. We quantify the severity of this issue in Table, by evaluating recent state-of-the-art robust models. As can be seen, recent models contain up to 15% of robust inaccurate samples and the ratio of such samples worsens with smaller perturbation regions. Table: Percentage of robust and inaccurate samples for various recent robust models and datasets, which we describe in more detail in Section 7. Each model is trained for the indicated threat model and evaluated using 40-step APGD.
Robustness vs Accuracy Tradeoff Second, existing robust training methods improve the model robustness, but they also typically degrade the standard accuracy on unperturbed inputs. To address this limitation, a number of recent works study this issue in detail and propose new methods to mitigate it.
In this work, we advance the line of work that aims to boost robustness without sacrificing accuracy, but we approach the problem from a new perspective -by avoiding robust inaccuracy.
Concretely, we propose a new training method that jointly maximizes robust accuracy while minimizing robust inaccuracy. We illustrate the effect of our training on a synthetic dataset (described in Appendix A.2) in Figure, showing the decision boundaries of three models, trained using standard training Lstd, adversarial training LTRADES, and our training LERA (Equation). First, observe that while the Lstd trained model achieves 100% accuracy, only 91.1% of these samples are robust (and accurate). When using LTRADES, we can observe the robustness vs accuracy tradeoff: the robust accuracy improves to 98.4% at the expense of 1.6% (robust) inaccuracy. In contrast, using our LERA, we retain the high robust accuracy of 98.4% but avoid all robust inaccurate samples by appropriately shifting the decision boundary, rendering them non-robust.
Second, since our models are trained to be robust only if they are accurate, we leverage robustness as a principled abstain mechanism. This abstain mechanism then allows us to combine models in a compositional architecture that significantly boosts overall robustness without sacrificing accuracy. Concretely, in Figure, we would define a selector model that abstains on all non-robust samples. Then, the abstained (non-robust) samples are evaluated by the standard trained model Lstd, while the selected samples are evaluated using the robust model LERA. This allows us to achieve the best of both models - high robust accuracy (98.4%), high natural accuracy (100%), and no robust inaccuracy.
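The partition into robust-accurate, robust-inaccurate, and non-robust (abstained) samples can be sketched as follows; the per-sample flags are illustrative inputs, in practice they would come from an attack or a certification procedure.

```python
import numpy as np

# Hedged sketch of the sample partition described in the text. Each sample
# is either robust-accurate, robust-inaccurate, or non-robust; non-robust
# samples are the ones the selector routes to the standard model.
def partition(accurate, robust):
    accurate = np.asarray(accurate, dtype=bool)
    robust = np.asarray(robust, dtype=bool)
    return {
        "robust_accuracy": float(np.mean(robust & accurate)),
        "robust_inaccuracy": float(np.mean(robust & ~accurate)),
        "abstain_rate": float(np.mean(~robust)),  # handled by Lstd model
    }

stats = partition(accurate=[1, 1, 0, 1], robust=[1, 1, 1, 0])
```

The training objective in the text pushes mass out of the robust-inaccurate bucket and into the abstain bucket, rather than trying to make those samples robustly correct.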
We show the practical effectiveness of our approach by instantiating over several datasets and existing robust models for both empirical and certified robustness. Our evaluations show that our method effectively reduces robust and inaccurate samples by up to 97.28%. Further, our approach significantly improves robustness for CIFAR-10, CIFAR-100, MTSD and SBB datasets by 60.3%, 38.8%, 29.2% and 37.7%, respectively, while simultaneously decreasing natural accuracy by only 0.2%.
Figure 1: Decision regions for models trained via standard training Lstd, adversarial training LTRADES (Zhang et al., 2019a), and our training LERA (Equation 5). In this case, our training achieves the same robust accuracy as LTRADES but avoids all robust inaccurate samples by making them non-robust. Note that all three models predict over all three classes, however, the decision regions for class 2 of the LTRADES and LERA trained models happen to be too small to be visible. The considered model architecture and the training hyperparameters are provided in Appendix A.2.
|
Can you provide more discussion to explain the reason behind the effectiveness of the proposed method?
|
We have included some intuition behind our method in Figure 1. It shows a synthetic example that illustrates deficiencies of both standard training and adversarial training. The effectiveness of our work lies in combining the strengths of both by ensembling natural and robust models into one, with the goal of retaining the accuracy while improving the robustness. The key challenge lies in how these models should be combined, which we address by using robustness as a principled selection (abstain) mechanism. We hypothesize that empirically, the robust inaccurate samples lie in regions that are close by in the learned manifold, thus regularizing them to become even slightly non-robust generalizes well.
|
null | false
| null |
Yellowstone was part of a federally governed territory. With no state government that could assume stewardship of the land, the federal government took on direct responsibility for the park, the official first national park of the United States. The combined effort and interest of conservationists, politicians and the Northern Pacific Railroad ensured the passage of enabling legislation by the United States Congress to create Yellowstone National Park. Theodore Roosevelt and his group of conservationists, the Boone and Crockett Club, were active campaigners and were highly influential in convincing fellow Republicans and big business to back the bill. Yellowstone National Park soon played a pivotal role in the conservation of these national treasures, as it was suffering at the hands of poachers and others who stood at the ready to pillage what they could from the area. Theodore Roosevelt and his newly formed Boone and Crockett Club successfully took the lead in protecting Yellowstone National Park from this plight, resulting in laws designed to conserve the natural resources in Yellowstone and other parks under the Government's purview.
|
What is the Boone and Crockett Club?
|
The Boone and Crockett Club is a group of conservationists who campaigned with Theodore Roosevelt to pass a bill in the US Congress that established Yellowstone National Park.
|
null | false
| 163
|
Sarcasm, in general, has been the focus of research for quite some time. In one of the pioneering works jorgensen1984test explained how sarcasm arises when a figurative meaning is used opposite to the literal meaning of the utterance. In the words of clark1984pretense, sarcasm processing involves canceling the indirectly negated message and replacing it with the implicated one. giora1995irony, on the other hand, define sarcasm as a mode of indirect negation that requires processing of both negated and implicated messages. ivanko2003context define sarcasm as a six-tuple entity consisting of a speaker, a listener, Context, Utterance, Literal Proposition and Intended Proposition and study the cognitive aspects of sarcasm processing.
Computational linguists have previously addressed this problem using rule based and statistical techniques, that make use of : (a) Unigrams and Pragmatic features BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 (b) Stylistic patterns BIBREF4 and patterns related to situational disparity BIBREF5 and (c) Hastag interpretations BIBREF6 , BIBREF7 .
Most of the previously done work on sarcasm detection uses distant supervision based techniques (ex: leveraging hashtags) and stylistic/pragmatic features (emoticons, laughter expressions such as “lol” etc). But, detecting sarcasm in linguistically well-formed structures, in absence of explicit cues or information (like emoticons), proves to be hard using such linguistic/stylistic features alone.
With the advent of sophisticated eye-trackers and electro/magneto-encephalographic (EEG/MEG) devices, it has been possible to delve deep into the cognitive underpinnings of sarcasm understanding. Filik2014, using a series of eye-tracking and EEG experiments try to show that for unfamiliar ironies, the literal interpretation would be computed first. They also show that a mismatch with context would lead to a re-interpretation of the statement, as being ironic. Camblin2007103 show that in multi-sentence passages, discourse congruence has robust effects on eye movements. This also implies that disrupted processing occurs for discourse incongruent words, even though they are perfectly congruous at the sentence level. In our previous work BIBREF8 , we augment cognitive features, derived from eye-movement patterns of readers, with textual features to detect whether a human reader has realized the presence of sarcasm in text or not.
The recent advancements in the literature discussed above, motivate us to explore gaze-based cognition for sarcasm detection. As far as we know, our work is the first of its kind.
With the advent of sophisticated eye-trackers and electro/magneto-encephalographic (EEG/MEG) devices, it has been possible to delve deep into the cognitive underpinnings of sarcasm understanding.
|
What makes it possible to delve deep into the cognitive underpinnings of sarcasm understanding?
|
The advent of sophisticated eye-trackers and electro/magneto-encephalographic (EEG/MEG) devices.
|
null | false
| 18
|
In this paper, we present our first attempts at building a multilingual Neural Machine Translation framework. By treating words in different languages as different words and forcing the attention and translation toward the desired target language, we are able to turn an attention-enabled NMT system into a multilingual translation system. Our proposed approach alleviates the need for complicated architecture re-design when accommodating the attention mechanism. In addition, the number of free parameters to learn in our network does not go beyond the magnitude of a single NMT system. With its universality, our approach has shown its effectiveness in an under-resourced translation task with considerable improvements. In addition, the approach has achieved interesting and promising results when applied to a translation task where there is no direct parallel corpus between the source and target languages.
Nevertheless, there are issues that we can continue to address in future work. More balanced data would be helpful for this framework. The mechanism for forcing the NMT system toward the right target language could be improved. We could conduct more detailed analyses of the various strategies under the framework to show its universality.
Nevertheless, there are issues that we can continue to address in future work. More balanced data would be helpful for this framework. The mechanism for forcing the NMT system toward the right target language could be improved. We could conduct more detailed analyses of the various strategies under the framework to show its universality.
|
What do they plan to do in the future?
|
They may create more balancing data, improve the mechanism of forcing the NMT system to the right target language and analyze the various strategies in more detail under the framework to show its universality.
|
null | false
| null |
How is the The U.S. Naturalization Interview and test done?
|
During your naturalization interview, a USCIS officer will ask you questions about your application and background. Unless you qualify for an exemption, you will also take a naturalization test which is made up of two components, an English and civics test.
During the English test, you must demonstrate an understanding of the English language including the ability to read, write, and speak basic English.
During the civics test, you will answer important questions about American government and history.
|
|
null | false
| null |
Many museums turned to their existing social media presences to engage their audience online during the COVID-19 pandemic. Inspired by the Rijksmuseum in Amsterdam and Instagram accounts such as the Dutch Tussen Kunst & Quarantaine (“between art and quarantine”) and Covid Classics, the Getty sponsored the Getty Museum Challenge, inviting people to use everyday objects to recreate works of art and share their creations on social media, prompting thousands of submissions. The museum was among those singled out for particular praise by industry analysts for their successful social media content strategy during the shutdown, both for the challenge and for incorporating its works into the popular video game Animal Crossing.
|
How did the Getty Museum in Los Angeles respond to the global pandemic?
|
Like other art institutions, the Getty Museum leveraged its social media presence in a creative way to keep engagement levels high during the pandemic. The Getty invited people to reimagine classic works of art using every day objects, the results of which were posted on social media for others' enjoyment and commentary. Thousands of people were inspired to post submissions, and industry analysts praised the museum for this successful social media strategy.
|
null | false
| 198
|
The prevalent use of emoji—and their text-based precursors—is mostly unaddressed in current natural language processing (NLP) tasks. The support of the Unicode Standard BIBREF0 for emoji characters in 2010 ushered in a wide-spread, international adoption of these graphical elements in casual contexts. Interpreting the meaning of these characters has been challenging however, since they take on multiple semantic roles BIBREF1.
Whether or not emoji are used depends on the context of a text or conversation, with more formal settings generally being less tolerating. So is the popular aligned corpus Europarl BIBREF2 naturally devoid of emoji. Technical limitations, like no Unicode support, also limit its use. This in turn affects commonly used corpora, tokenizers, and pre-trained networks.
Take for example the Ubuntu Dialog Corpus by BIBREF3, a commonly used corpus for multi-turn systems. This dataset was collected from an Internet Relay Chat (IRC) room casually discussing the operating system Ubuntu. IRC nodes usually support only the ASCII text encoding, so there is no support for graphical emoji. However, in the 7,189,051 utterances, there are only 9946 happy emoticons (i.e. :-) and the cruelly de-nosed :) version) and 2125 sad emoticons.
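Counts like the ones above can be reproduced with a simple regex pass over the corpus; this sketch only covers the two happy variants and the basic sad emoticon mentioned in the text, and the sample utterances are made up.

```python
import re

# Hedged sketch of counting text-based emoticons in a corpus: the pattern
# :-?\) matches both the nosed :-) and the de-nosed :) happy variants.
HAPPY = re.compile(r":-?\)")
SAD = re.compile(r":-?\(")

def count_emoticons(utterances):
    happy = sum(len(HAPPY.findall(u)) for u in utterances)
    sad = sum(len(SAD.findall(u)) for u in utterances)
    return happy, sad

happy, sad = count_emoticons(["works now :)", "thanks :-)", "still broken :("])
```

A production count would need a much larger emoticon inventory (e.g. ;), :D, :P) and care around URLs and code snippets that happen to contain the same characters.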
Word embeddings are also handling emoji poorly: Word2vec BIBREF4 with the commonly used pre-trained Google News vectors doesn't support the graphical emoji at all and vectors for textual emoticons are inconsistent. As another example with contextualized word embeddings, there are also no emoji or textual emoticons in the vocabulary list of BERT BIBREF5 by default and support for emoji is only recently added to the tokenizer. The same is true for GPT-2 BIBREF6. As all downstream systems, ranging from multilingual résumé parsing to fallacy detection BIBREF7, rely on the completeness of these embeddings, this lack of emoji support can affect the performance of some of these systems.
Another challenge is that emoji usage isn't static. Think of shifting conventions, different cultures, and newly added emoji to the Unicode list. Several applications also use their own custom emoji, like chat application Slack and streaming service Twitch. This becomes an issue for methods that leverage the Unicode description BIBREF8 or that rely on manual annotations BIBREF9.
Our contribution with this paper is two-fold: firstly, we argue that the current use—or rather non-existing use—of emoji in the tokenizing, training, and the datasets themselves is insufficient. Secondly, we attempt to quantify the significance of incorporating emoji-based features by presenting a fine-tuned model. We then compare this model to a baseline, but without special attention to emoji.
Section SECREF2 will start with an overview of work on emoji representations, emoji-based models and analysis of emoji usage. A brief introduction in conversational systems will also be given. Section SECREF3 will then look into popular datasets with and without emoji and then introduce the dataset we used.
Our model will then be discussed in Section SECREF4, including the tokenization in Subsection SECREF4, training setup in Subsection SECREF6 and evaluation in Subsection SECREF10. This brings us to the results of our experiment, which is discussed in Section SECREF5 and finally our conclusion and future work are presented in Section SECREF6.
Secondly, we attempt to quantify the significance of incorporating emoji-based features by presenting a fine-tuned model.
|
What do they want to use their fine-tuned model to do?
|
To quantify the significance of incorporating emoji-based features in the tokenizing, training, and the datasets themselves.
|
null | false
| null |
The One Night Trilogy, comprising three games, One Night, One Night 2: The Beyond and One Night: Full Circle, is a series of 2D tile-based overhead psychological horror games. The three games tell the story of an attempt to invade Earth by a race of supernatural shadow people and a collection of protagonists who must survive the attacks and fight against them. The origins of the creatures and their motives are detailed in the prequel, One Night 2: The Beyond, while the first and third games deal with subsequent invasion attempts and the conclusion to the conflict.
|
How many versions of One Night game available?
|
There are three games in the One Night series: One Night, One Night 2: The Beyond, and One Night: Full Circle.
|
null | false
| null |
What is a difference between alligators and crocodiles?
|
A difference between alligators and crocodiles is the shape of their heads. Crocodiles have more narrow and longer heads than alligators.
|
|
null | false
| null |
Nachum Gutman was born in Teleneşti, Bessarabia Governorate, then a part of the Russian Empire (now in the Republic of Moldova). He was the fourth child of Simha Alter and Rivka Gutman. His father was a Hebrew writer and educator who wrote under the pen name S. Ben Zion. In 1903, the family moved to Odessa, and two years later, to Ottoman Palestine. In 1908, Gutman attended the Herzliya Gymnasium in what would later become Tel Aviv. In 1912, he studied at the Bezalel School in Jerusalem. In 1920–26, he studied art in Vienna, Berlin and Paris.
Gutman was married to Dora, with whom he had a son. After Gutman's death in 1980, Dora asked two Tel Aviv gallery owners, Meir Stern of Stern Gallery and Miriam Tawin of Shulamit Gallery, to appraise the value all of the works left in his estate.
|
Extract the locations where Nachum lived from the text below and list them in alphabetical order and separated by a semicolon.
|
Berlin;Jerusalem;Odessa;Palestine;Paris;Tel Aviv;Vienna
|
null | false
| null |
Best Actress Award in Drama series in 27th Screen Actors Guild(SAG) Awards was won by whom?
|
Gillian Anderson for her performance as Margaret Thatcher
|
|
null | false
| 146
|
We now describe our studies to assess the predictive power of our classification systems to decide whether visual questions will lead to answer (dis)agreement.
We capitalize on today's largest visual question answering dataset BIBREF2 to evaluate our prediction system, which includes 369,861 visual questions about real images. Of these, 248,349 visual questions (i.e., Training questions 2015 v1.0) are kept for training and the remaining 121,512 visual questions (i.e., Validation questions 2015 v1.0) are employed for testing our classification system. This separation of training and testing samples enables us to estimate how well a classifier will generalize when applied to an unseen, independent set of visual questions.
To our knowledge, no prior work has directly addressed predicting answer (dis)agreement for visual questions. Therefore, we employ as a baseline a related VQA algorithm BIBREF24 , BIBREF2 which produces for a given visual question an answer with a confidence score. This system parallels the deep learning architecture we adapt. However, it predicts the system's uncertainty in its own answer, whereas we are interested in the humans' collective disagreement on the answer. Still, it is a useful baseline to see if an existing algorithm could serve our purpose.
We evaluate the predictive power of the classification systems based on each classifier's predictions for the 121,512 visual questions in the test dataset. We first show performance of the baseline and our two prediction systems using precision-recall curves. The goals are to achieve a high precision, to minimize wasting crowd effort when their efforts will be redundant, and a high recall, to avoid missing out on collecting the diversity of accepted answers from a crowd. We also report the average precision (AP), which indicates the area under a precision-recall curve. AP values range from 0 to 1 with better-performing prediction systems having larger values.
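The average precision metric used above can be sketched as the mean of the precision values at each rank where a positive occurs; the labels and scores below are toy values, not results from the paper.

```python
# Hedged sketch of average precision (area under the precision-recall curve,
# in its rank-based form): for each positive at rank i, add precision@i,
# then divide by the number of positives.
def average_precision(labels, scores):
    ranked = [l for _, l in sorted(zip(scores, labels), key=lambda p: -p[0])]
    hits, ap = 0, 0.0
    for i, label in enumerate(ranked, start=1):
        if label:
            hits += 1
            ap += hits / i
    return ap / max(hits, 1)

ap = average_precision(labels=[1, 0, 1, 0], scores=[0.9, 0.8, 0.7, 0.1])
```

With the toy scores, the positives sit at ranks 1 and 3, giving (1/1 + 2/3) / 2 ≈ 0.833; a perfect ranking would score 1.0.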
Figure FIGREF8 a shows precision-recall curves for all prediction systems. Both our proposed classification systems outperform the VQA Algorithm BIBREF2 baseline; e.g., Ours - RF yields a 12 percentage point improvement with respect to AP. This is interesting because it shows there is value in learning the disagreement task specifically, rather than employing an algorithm's confidence in its answers. More generally, our results demonstrate it is possible to predict whether a crowd will agree on a single answer from a given image and associated question. Despite the significant variety of questions and image content, and despite the variety of reasons for which the crowd can disagree, our learned model is able to produce quite accurate results.
We observe our Random Forest classifier outperforms our deep learning classifier; e.g., Ours: RF yields a three percentage point improvement with respect to AP while consistently yielding improved precision-recall values over Ours: LSTM-CNN (Figure FIGREF8 a). In general, deep learning systems hold promise to replace handcrafted features to pick out the discriminative features. Our baselines highlight a possible value in developing a different deep learning architecture for the problem of learning answer disagreement than applied for predicting answers to visual questions.
We show examples of prediction results where our top-performing RF classifier makes its most confident predictions (Figure FIGREF8 b). In these examples, the predictor expects human agreement for “what room... ?" visual questions and disagreement for “why... ?" visual questions. These examples highlight that the classifier may have a strong language prior towards making predictions, as we will discuss in the next section.
We now explore what makes a visual question lead to crowd answer agreement versus disagreement. We examine the influence of whether visual questions lead to the three types of answers (“yes/no", “number", “other") for both our random forest (RF) and deep learning (DL) classification systems. We enrich our analysis by examining the predictive performance of both classifiers when they are trained and tested exclusively with image and question features respectively. Figure FIGREF9 shows precision-recall curves for both classification systems with question features alone (Q), image features alone (I), and both question and image features together (Q+I).
When comparing AP scores (Figure FIGREF9 ), we observe our Q+I predictors yield the greatest predictive performance for visual questions that lead to “other" answers, followed by “number" answers, and finally “yes/no" answers. One possible reason for this finding is that the question wording strongly drives whether a crowd will disagree for “other" visual questions, whereas some notion of common sense may be required to learn whether a crowd will agree for “yes/no" visual questions (e.g., Figure FIGREF7 a vs Figure FIGREF7 g).
We observe that question-based features yield greater predictive performance than image-based features for all visual questions, when comparing AP scores for Q and I classification results (Figure FIGREF9 ). In fact, image features contribute to performance improvements only for our random forest classifier for visual questions that lead to “number" answers, as illustrated by comparing AP scores for Our RF: Q+I and Our RF: Q (Figure FIGREF9 b). Our overall finding that most of the predictive power stems from language-based features parallels feature analysis findings in the automated VQA literature BIBREF2 , BIBREF21 . This does not mean, however, that the image content is not predictive. Further work improving visual content cues for VQA agreement is warranted.
Our findings suggest that our Random Forest classifier's overall advantage over our deep learning system arises because of counting questions, as indicated by higher AP scores (Figure FIGREF9 ). For example, the advantage of the initial higher precision (Figure FIGREF8 a; Ours: RF vs Ours: DL) is also observed for counting questions (Figure FIGREF9 b; Ours: RF - Q+I vs Ours: DL - Q+I). We hypothesize this advantage arises due to the strength of the Random Forest classifier in pairing the question prior (“How many?") with the image-based SOS features that indicates the number of objects in an image. Specifically, we expect “how many" to lead to agreement only for small counting problems.
Classification Performance: We evaluate the predictive power of the classification systems based on each classifier’s predictions for the 121,512 visual questions in the test dataset. We first show performance of the baseline and our two prediction systems using precision-recall curves. The goals are to achieve a high precision, to minimize wasting crowd effort when their efforts will be redundant, and a high recall, to avoid missing out on collecting the diversity of accepted answers from a crowd. We also report the average precision (AP), which indicates the area under a precision-recall curve. AP values range from 0 to 1 with better-performing prediction systems having larger values.
|
Why do they show the performance of the baseline and their two prediction systems using precision-recall curves?
|
To achieve a high precision, to minimize wasting crowd effort when their efforts will be redundant, and a high recall, to avoid missing out on collecting the diversity of accepted answers from a crowd.
|
null | false
| 164
|
We typically start by identifying the questions we wish to explore. Can text analysis provide a new perspective on a “big question” that has been attracting interest for years? Or can we raise new questions that have only recently emerged, for example about social media? For social scientists working in computational analysis, the questions are often grounded in theory, asking: How can we explain what we observe? These questions are also influenced by the availability and accessibility of data sources. For example, the choice to work with data from a particular social media platform may be partly determined by the fact that it is freely available, and this will in turn shape the kinds of questions that can be asked. A key output of this phase are the concepts to measure, for example: influence; copying and reproduction; the creation of patterns of language use; hate speech. Computational analysis of text motivated by these questions is insight driven: we aim to describe a phenomenon or explain how it came about. For example, what can we learn about how and why hate speech is used or how this changes over time? Is hate speech one thing, or does it comprise multiple forms of expression? Is there a clear boundary between hate speech and other types of speech, and what features make it more or less ambiguous? In these cases, it is critical to communicate high-level patterns in terms that are recognizable.
This contrasts with much of the work in computational text analysis, which tends to focus on automating tasks that humans perform inefficiently. These tasks range from core linguistically motivated tasks that constitute the backbone of natural language processing, such as part-of-speech tagging and parsing, to filtering spam and detecting sentiment. Many tasks are motivated by applications, for example to automatically block online trolls. Success, then, is often measured by performance, and communicating why a certain prediction was made—for example, why a document was labeled as positive sentiment, or why a word was classified as a noun—is less important than the accuracy of the prediction itself. The approaches we use and what we mean by `success' are thus guided by our research questions.
Domain experts and fellow researchers can provide feedback on questions and help with dynamically revising them. For example, they may say “we already think we know that”, “that's too naïve”, “that doesn't reflect social reality” (negative); “two major camps in the field would give different answers to that question” (neutral); “we tried to look at that back in the 1960s, but we didn't have the technology” (positive); and “that sounds like something that people who made that archive would love”, “that's a really fundamental question” (very positive).
Sometimes we also hope to connect to multiple disciplines. For example, while focusing on the humanistic concerns of an archive, we could also ask social questions such as “is this archive more about collaborative processes, culture-building or norm creation?” or “how well does this archive reflect the society in which it is embedded?" BIBREF3 used quantitative methods to tell a story about Darwin's intellectual development—an essential biographical question for a key figure in the history of science. At the same time, their methods connected Darwin's development to the changing landscape of Victorian scientific culture, allowing them to contrast Darwin's “foraging” in the scientific literature of his time to the ways in which that literature was itself produced. Finally, their methods provided a case study, and validation of technical approaches, for cognitive scientists who are interested in how people explore and exploit sources of knowledge.
Questions about potential “dual use” may also arise. Returning to our introductory example, BIBREF0 started with a deceptively simple question: if an internet platform eliminates forums for hate speech, does this impact hate speech in other forums? The research was motivated by the belief that a rising tide of online hate speech was (and is) making the internet increasingly unfriendly for disempowered groups, including minorities, women, and LBGTQ individuals. Yet the possibility of dual use troubled the researchers from the onset. Could the methodology be adopted to target the speech of groups like Black Lives Matter? Could it be adopted by repressive governments to minimize online dissent? While these concerns remained, they concluded that hypothetical dual use scenarios did not outweigh the tangible contribution this research could offer towards making the online environment more equal and just.
Murdock et al. [2017] used quantitative methods to tell a story about Darwin’s intellectual development—an essential biographical question for a key figure in the history of science. At the same time, their methods connected Darwin’s development to the changing landscape of Victorian scientific culture,
|
What does Murdock's team connect Darwin's development to in their research methods?
|
The changing landscape of Victorian scientific culture.
|
null | false
| null |
The Platinum Jubilee of Elizabeth II was the international celebration in 2022 marking the 70th anniversary of the accession of Queen Elizabeth II on 6 February 1952. It was the first time that any monarch in British history celebrated a platinum jubilee, as is the case in the histories of the other Commonwealth realms.
Initiatives to commemorate the jubilee were announced by the governments of many realms—including Australia, Canada, New Zealand, Papua New Guinea, and the United Kingdom—of territories, such as the Cayman Islands and Gibraltar, and celebrations were also held in other Commonwealth member states, like the Gambia, Malaysia, Malta, Pakistan, and Samoa. Leaders from across the world, including from China, France, Germany, Israel, and the United States, sent messages of congratulations to the Queen on reaching the milestone. In the United Kingdom, there was an extra bank holiday on 3 June and the usual spring bank holiday was moved from the end of May to 2 June to create the four-day Platinum Jubilee Central Weekend from Thursday, 2 June, to Sunday, 5 June. Commemorative stamps and coins were issued by several Commonwealth nations and beacons were lit in every Commonwealth capital for the first time. In many places, trees were planted in the Queen's honour.
The Queen died in her Platinum Jubilee year, on 8 September, at the age of 96. Her funeral was held on 19 September 2022 and she was buried at the King George VI Memorial Chapel later that day.
|
What's the Platinum Jubilee?
|
The Platinum Jubilee was a 2022 celebration commemorating British Monarch Queen Elizabeth II's 70th anniversary on the throne. No other monarch in British history has lived long enough to celebrate this milestone, and celebrations were held in her honor all over the world, and several Commonwealth nations commemorated the occasion by issuing special stamps and coins.
|
null | false
| 446
|
Table 7 supplements the semi-supervised learning results in Table 4, providing additional results using ResNet-50 as the network architecture for the encoder. FedEMA consistently outperforms all the other methods.

Table 7: Top-1 accuracy comparison using 1% and 10% of labeled data for semi-supervised learning on the non-IID settings of CIFAR datasets. FedEMA outperforms all other methods.
|
Why are baselines such as FedMoCoV1 and FedMoCoV2 missing from Table 7?
|
We have run these experiments (FedSimCLR, FedSimSiam, FedMoCoV1, and FedMoCoV2) and added them in Table 7 in Appendix. C.
|
null | false
| 269
|
Subword segmentation has become a standard preprocessing step in many neural approaches to natural language processing (NLP) tasks, e.g. Neural Machine Translation (NMT) BIBREF0 and Automatic Speech Recognition (ASR) BIBREF1. Word level modeling suffers from sparse statistics, issues with Out-of-Vocabulary (OOV) words, and heavy computational cost due to a large vocabulary. Word level modeling is particularly unsuitable for morphologically rich languages, but subwords are commonly used for other languages as well. Subword segmentation is best suited for languages with agglutinative morphology.
While rule-based morphological segmentation systems can achieve high quality, the large amount of human effort needed makes the approach problematic, particularly for low-resource languages. The systems are language dependent, necessitating use of multiple tools in multilingual setups. As a fast, cheap and effective alternative, data-driven segmentation can be learned in a completely unsupervised manner from raw corpora. Unsupervised morphological segmentation saw much research interest until the early 2010's; for a survey on the methods, see hammarstrom2011unsupervised. Semi-supervised segmentation with already small amounts of annotated training data was found to improve the accuracy significantly when compared to a linguistic segmentation; see ruokolainen2016comparative for a survey. While this line of research has been continued in supervised and more grammatically oriented tasks BIBREF2, the more recent work on unsupervised segmentation is less focused on approximating a linguistically motivated segmentation. Instead, the aim has been to tune subword segmentations for particular applications. For example, the simple substitution dictionary based Byte Pair Encoding segmentation algorithm BIBREF3, first proposed for NMT by sennrich2015neural, has become a standard in the field. Especially in the case of multilingual models, training a single language-independent subword segmentation method is preferable to linguistic segmentation BIBREF4.
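As a concrete reference point for the BPE algorithm mentioned above, here is a minimal sketch of its training loop (toy vocabulary; following the widely circulated reference snippet rather than any particular toolkit):

```python
import collections
import re

def get_stats(vocab):
    """Count frequencies of adjacent symbol pairs across the vocabulary."""
    pairs = collections.Counter()
    for word, freq in vocab.items():
        symbols = word.split()
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    return pairs

def merge_vocab(pair, vocab):
    """Replace every occurrence of the symbol pair with the merged symbol."""
    bigram = re.escape(' '.join(pair))
    pattern = re.compile(r'(?<!\S)' + bigram + r'(?!\S)')
    return {pattern.sub(''.join(pair), word): freq for word, freq in vocab.items()}

# Toy vocabulary: words as space-separated symbols with an end-of-word marker.
vocab = {'l o w </w>': 5, 'l o w e r </w>': 2,
         'n e w e s t </w>': 6, 'w i d e s t </w>': 3}
merges = []
for _ in range(4):
    pairs = get_stats(vocab)
    best = max(pairs, key=pairs.get)  # greedily take the most frequent pair
    vocab = merge_vocab(best, vocab)
    merges.append(best)
```

The learned merge list is the "simple substitution dictionary": at segmentation time the same merges are replayed, in order, on new text.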
In this study, we compare three existing and one novel subword segmentation method, all sharing the use of a unigram language model in a generative modeling framework. The previously published methods are Morfessor Baseline BIBREF5, Greedy Unigram Likelihood BIBREF6, and SentencePiece BIBREF7. The new Morfessor variant proposed in this work is called Morfessor EM+Prune.
The contributions of this article are:
- a better training algorithm for Morfessor Baseline, with reduction of search error during training, and improved segmentation quality for English, Finnish and Turkish;
- comparing four similar segmentation methods, including a close look at the SentencePiece reference implementation, highlighting details omitted from the original article BIBREF7;
- and showing that the proposed Morfessor EM+Prune with particular hyper-parameters yields SentencePiece.
The previously published methods are Morfessor Baseline, Greedy Unigram Likelihood, and SentencePiece.
|
What are the previously published methods?
|
Morfessor Baseline, Greedy Unigram Likelihood, and SentencePiece.
|
null | false
| null |
Classify these cities into the countries that they are in.
Paris, Munich, Frankfurt, Nice, Lyon, Dresden
|
France - Paris, Nice, Lyon
Germany - Munich, Frankfurt, Dresden
|
|
null | false
| null |
Utility ratemaking is the formal regulatory process in the United States by which public utilities set the prices (more commonly known as "rates") they will charge consumers. Ratemaking, typically carried out through "rate cases" before a public utilities commission, serves as one of the primary instruments of government regulation of public utilities.
|
What is the process by which utility rates are set in the US?
|
The price of a utility's products and services will affect its consumption. As with most demand curves, a price increase decreases demand. Through a concept known as rate design or rate structure, regulators set the prices (known as "rates" in the case of utilities) and thereby affect the consumption. With declining block rates, the per-unit price of utility consumption decreases as the energy consumption increases. Typically a declining block rate is offered only to very large consumers. If conservation is the goal, regulators can promote conservation by letting prices rise. A third possible rate design is a flat rate which charges the same price for all consumption.
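The difference between a declining block rate and a flat rate can be made concrete with a small billing sketch (all tariff numbers are hypothetical):

```python
def declining_block_bill(kwh, blocks):
    """blocks: list of (block_size_kwh, price_per_kwh); the last block
    may have size None (open-ended). The per-unit price falls as
    consumption rises -- the declining block structure described above."""
    bill, remaining = 0.0, kwh
    for size, price in blocks:
        used = remaining if size is None else min(remaining, size)
        bill += used * price
        remaining -= used
        if remaining <= 0:
            break
    return bill

# 350 kWh under a declining block tariff vs a flat rate.
blocks = [(100, 0.20), (200, 0.15), (None, 0.10)]
declining = declining_block_bill(350, blocks)    # 100*0.20 + 200*0.15 + 50*0.10
flat = declining_block_bill(350, [(None, 0.12)])  # same price for every kWh
```

Because marginal units get cheaper under the block structure, large consumers face a lower average price per kWh, which is why such tariffs discourage conservation relative to a flat rate.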
|
null | false
| null |
What is cloud computing in simple terms?
|
In simple terms, cloud computing allows you to use preconfigured hardware devices and software services managed by somebody else over the internet. In most cases you pay only for the resources you use, and it is very easy to procure more as you need more services.
Cloud computing makes it easy to set up technology infrastructure at a low cost and scale as you grow.
|
|
null | false
| null |
Name a Christopher Nolan movie that was remade into an Indian movie.
|
The Tamil-language movie Ghajini was the first unofficial Indian remake of Christopher Nolan's Memento.
|
|
null | false
| 143
|
Event Extraction in the Biomedical domain is a task that has gained more importance recently. Event Extraction goes beyond Relation Extraction. In Biomedical Event Extraction, events generally refer to a change in the state of biological molecules such as proteins and DNA. Generally, it includes detection of targeted event types such as gene expression, regulation, localisation and transcription. Each event type in addition can have multiple arguments that need to be detected. An additional layer of complexity comes from the fact that events can also be arguments of other events, giving rise to a nested structure. This helps to capture the underlying biology better BIBREF1 . Detecting the event type often involves recognising and classifying trigger words. Often, these words are verbs such as “activates”, “inhibits”, “phosphorylation” that may indicate a single, or sometimes multiple event types. In this section, we will discuss some of the successful models for Event Extraction in some detail.
Event Extraction gained a lot of interest with the availability of an annotated corpus with the BioNLP'09 Shared Task on Event Extraction BIBREF34 . The task involves prediction of trigger words over nine event types such as expression, transcription, catabolism, binding, etc. given only annotation of named entities (proteins, genes, etc.). For each event, its class, trigger expression and arguments need to be extracted. Since the events can be arguments to other events, the final output in general is a graph representation with events and named entities as nodes, and edges that correspond to event arguments. BIBREF33 present a pipeline based method that is heavily dependent on dependency parsing. Their pipeline approach consists of three steps: trigger detection, argument detection and semantic post-processing. While the first two components are learning based systems, the last component is a rule based system. For the BioNLP'09 corpus, only 5% of the events span multiple sentences. Hence the approach does not get affected severely by considering only single sentences. It is important to note that trigger words cannot simply be reduced to a dictionary lookup. This is because a specific word may belong to multiple classes, or may not always be a trigger word for an event. For example, “activate” is found to not be a trigger word in over 70% of the cases. A multi-class SVM is trained for trigger detection on each token, using a large feature set consisting of semantic and syntactic features. It is interesting to note that the hyperparameters of this classifier are optimised based on the performance of the entire end-to-end system.
For the second component to detect arguments, labels for edges between entities must be predicted. For the BioNLP'09 Shared Task, each directed edge from one event node to another event node, or from an event node to a named entity node are classified as “theme”, “cause”, or None. The second component of the pipeline makes these predictions independently. This is also trained using a multi-class SVM which involves heavy use of syntactic features, including the shortest dependency path between the nodes. The authors note that the precision-recall choice of the first component affects the performance of the second component: since the second component is only trained on Gold examples, any error by the first component will lead to a cascading of errors. The final component, which is a semantic post-processing step, consists of rules and heuristics to correct the output of the second component. Since the edge predictions are made independently, it is possible that some event nodes do not have any edges, or have an improper combination of edges. The rule based component corrects these and applies rules to break directed cycles in the graph, and some specific heuristics for different types of events. The final model gives a cumulative F-Score of 52% on the test set, and was the best model on the task.
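The event graph and one of the post-processing rules ("every event node must have at least one argument edge") can be sketched with a tiny data structure (hypothetical format and rule simplification, not the actual BioNLP'09 system):

```python
def prune_events(events, edges):
    """events: set of event node ids; edges: list of (src, label, dst)
    with label in {"theme", "cause"} and dst an event or entity id.
    Iteratively drop events with no outgoing argument edge, together
    with edges pointing at a dropped event (simplified rule)."""
    events, edges = set(events), list(edges)
    changed = True
    while changed:
        changed = False
        with_args = {src for src, _, _ in edges}
        orphans = events - with_args
        if orphans:
            events -= orphans
            edges = [(s, l, d) for s, l, d in edges if d not in orphans]
            changed = True
    return events, edges

# Toy graph: event E1 takes event E2 as theme, but E2 itself has no
# arguments, so both collapse; E3 points at protein P1 and survives.
events = {"E1", "E2", "E3"}
edges = [("E1", "theme", "E2"), ("E3", "theme", "P1")]
kept_events, kept_edges = prune_events(events, edges)
```

The iteration matters because of the nested event structure: removing an argument-less event can orphan another event that used it as an argument, so the rule must be applied until a fixed point.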
BIBREF35 note that previous approaches on the task suffer due to the pipeline nature and the propagation of errors. To counter this, they adopt a joint inference method based on Markov Logic Networks BIBREF36 for the same task on BioNLP'09. The Markov Logic Network jointly predicts whether each token is a trigger word, and if yes, the class it belongs to; for each dependency edge, whether it is an argument path leading to a “theme” or a “cause”. By formulating the Event Extraction problem using an MLN, the approach becomes computationally feasible and only linear in the length of the sentence. They incorporate hard constraints to encode rules such as “an argument path must have an event”, “a cause path must start with a regulation event”, etc. In addition, they also include some domain specific soft constraints as well as some linguistically-motivated context-specific soft constraints. In order to train the MLN, stochastic gradient descent was used. Certain heuristic methods are implemented in order to deal with errors due to syntactic parsing, especially ambiguities in PP-attachment and coordination. Their final system is competitive and comes very close to the system by BIBREF33 with an average F-Score of 50%. To further improve the system, they suggest leveraging additional joint-inference opportunities and integrating the syntactic parser better. Some other more recent models for Biomedical Event Extraction include BIBREF37 , BIBREF38 .
In Biomedical Event Extraction, events generally refer to a change in the state of biological molecules such as proteins and DNA.
|
What is the meaning of events in Biomedical Event Extraction?
|
In Biomedical Event Extraction, events generally refer to a change in the state of biological molecules such as proteins and DNA.
|
null | false
| null |
Do you know any other languages besides english?can you name colors in ARABIC?
|
Yes, I can speak a little bit of Arabic and a few other languages as well. Here are the colors in Arabic, with their transliteration and English meaning:
أبيض abyad (white).
أسود aswad (black).
أحمر ahmar (red).
أصفر asfar (yellow).
أخضر akhdar (green).
أزرق azraq (blue).
بنى bunni (brown).
برتقالى burtuqali (orange).
|
null | false
| 46
|
Question answering (QA) has drawn a lot of attention in the past few years. QA tasks on images BIBREF0 have been widely studied, but most focused on understanding text documents BIBREF1 . A representative dataset in text QA is SQuAD BIBREF1 , in which several end-to-end neural models have accomplished promising performance BIBREF2 . Although there is a significant progress in machine comprehension (MC) on text documents, MC on spoken content is a much less investigated field. In spoken question answering (SQA), after transcribing spoken content into text by automatic speech recognition (ASR), typical approaches use information retrieval (IR) techniques BIBREF3 to find the proper answer from the ASR hypotheses. One attempt towards QA of spoken content is TOEFL listening comprehension by machine BIBREF4 . TOEFL is an English examination that tests the knowledge and skills of academic English for English learners whose native languages are not English. Another SQA corpus is Spoken-SQuAD BIBREF5 , which is automatically generated from SQuAD dataset through Google Text-to-Speech (TTS) system. Recently ODSQA, a SQA corpus recorded by real speakers, is released BIBREF6 .
To mitigate the impact of speech recognition errors, using sub-word units is a popular approach for speech-related downstream tasks. It has been applied to spoken document retrieval BIBREF7 and spoken term detection BIBREF8. Prior work showed that using phonetic sub-word units brought improvements for both Spoken-SQuAD and ODSQA BIBREF5.
Instead of considering sub-word features, this paper proposes a novel approach to mitigate the impact of ASR errors. We consider reference transcriptions and ASR hypotheses as two domains, and adapt the source domain data (reference transcriptions) to the target domain data (ASR hypotheses) by projecting these two domains in the shared common space. Therefore, it can effectively benefit the SQA model by improving the robustness to ASR errors in the SQA model.
Domain adaptation has been successfully applied on computer vision BIBREF9 and speech recognition BIBREF10 . It is also widely studied on NLP tasks such as sequence tagging and parsing BIBREF11 , BIBREF12 , BIBREF13 . Recently, adversarial domain adaptation has already been explored on spoken language understanding (SLU). Liu and Lane learned domain-general features to benefit from multiple dialogue datasets BIBREF14 ; Zhu et al. learned to transfer the model from the transcripts side to the ASR hypotheses side BIBREF15 ; Lan et al. constructed a shared space for slot tagging and language model BIBREF16 . This paper extends the capability of adversarial domain adaptation for SQA, which has not been explored yet.
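One standard mechanism for learning such a shared space is the gradient reversal layer from adversarial domain adaptation: the layer is the identity in the forward pass, but flips the sign of the gradient flowing back from a domain discriminator, pushing the shared encoder toward domain-invariant features. A minimal sketch (plain Python on lists; a generic illustration of the technique, not the cited SQA model's exact implementation):

```python
def grl_forward(x):
    """Gradient Reversal Layer, forward pass: identity."""
    return x

def grl_backward(grad, lam=1.0):
    """Backward pass: flip and scale the discriminator's gradient, so
    the encoder below is updated to *confuse* the domain classifier."""
    return [-lam * g for g in grad]

# Toy gradient of a domain-classification loss w.r.t. a shared
# feature vector (made-up numbers).
grad = [0.2, -0.5, 0.1]
reversed_grad = grl_backward(grad, lam=0.5)
```

In a full model, the reversed gradient is what reaches the shared encoder's parameters, while the discriminator itself is trained with the unflipped gradient; the scale `lam` trades off the task loss against domain confusion.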
This paper extends the capability of adversarial domain adaptation for SQA, which has not been explored yet.
|
Has the capability of adversarial domain adaptation for SQA been explored before?
|
No.
|
null | false
| null |
Pinot noir's home is France's Burgundy region, particularly Côte-d'Or. It is also planted in Argentina, Australia, Austria, Bulgaria, Canada, Chile, northern parts of Croatia, Czech Republic, England, the Republic of Georgia, Germany, Greece, Israel, Italy, Hungary, Kosovo, the Republic of Macedonia, Moldova, New Zealand, Romania, Serbia, Slovakia, Slovenia, South Africa, Switzerland, Ukraine, United States, and Uruguay. The United States has increasingly become a major pinot noir producer, with some of the best regarded coming from Oregon's Willamette Valley, and California's Sonoma County with its Russian River Valley and Sonoma Coast appellations. Lesser-known appellations are found in Mendocino County's Anderson Valley, the Central Coast's Santa Lucia Highlands appellation, the Santa Maria Valley, and Sta. Rita Hills American Viticulture Area in Santa Barbara County. In New Zealand, it is principally grown in Martinborough, Marlborough, Waipara, and Central Otago.
|
From this paragraph please summarize the notable California regions where pinot noir is produced.
|
Notable California regions that produce pinot noir include Sonoma County (Russian River Valley and Sonoma Coast), Mendocino County (Anderson Valley), Central Coast (Santa Lucia Highlands), and Santa Barbara County (Santa Maria Valley, Sta. Rita Hills American Viticulture Area).
|
null | false
| 57
|
This work has applied a multiscale graph partitioning algorithm (Markov Stability) to extract content-based clusters of documents from a textual dataset of healthcare safety incident reports in an unsupervised manner at different levels of resolution. The method uses paragraph vectors to represent the records and obtains an ensuing similarity graph of documents constructed from their content. The framework brings the advantage of multi-resolution algorithms capable of capturing clusters without imposing a priori their number or structure. Since different levels of resolution of the clustering can be found to be relevant, the practitioner can choose the level of description and detail to suit the requirements of a specific task.
Our a posteriori analysis evaluating the similarity against the hand-coded categories and the intrinsic topic coherence of the clusters showed that the method performed well in recovering meaningful categories. The clusters of content capture topics of medical practice, thus providing complementary information to the externally imposed classification categories. Our analysis shows that some of the most relevant and persistent communities emerge because of their highly homogeneous medical content, although they are not easily mapped to the standardised external categories. This is apparent in the medically-based content clusters associated with Labour ward, Pressure ulcer, Chemotherapy, Radiotherapy, among others, which exemplify the alternative groupings that emerge from free text content.
The categories in the top level (Level 1) of the pre-defined classification hierarchy are highly diverse in size (as shown by their number of assigned records), with large groups such as `Patient accident', `Medication', `Clinical assessment', `Documentation', `Admissions/Transfer' or `Infrastructure' alongside small, specific groups such as `Aggressive behaviour', `Patient abuse', `Self-harm' or `Infection control'. Our multi-scale partitioning finds corresponding groups in content across different levels of resolution, providing additional subcategories with medical detail within some of the large categories (as shown in Fig. 4 and S1 ). An area of future research will be to confirm if the categories found by our analysis are consistent with a second level in the hierarchy of external categories (Level 2, around 100 categories) that is used less consistently in hospital settings. The use of content-driven classification of reports could also be important within current efforts by the World Health Organisation (WHO) under the framework for the International Classification for Patient Safety (ICPS) BIBREF48 to establish a set of conceptual categories to monitor, analyse and interpret information to improve patient care.
One of the advantages of a free text analytical approach is the provision, in a timely manner, of an intelligible description of incident report categories derived directly from the rich description in the 'words' of the reporter themselves. The insight from analysing the free text entry of the person reporting could play a valuable role and add rich information than would have otherwise been obtained from the existing approach of pre-defined classes. Not only could this improve the current state of play where much of the free text of these reports goes unused, but it avoids the fallacy of assigning incidents to a pre-defined category that, through a lack of granularity, can miss an important opportunity for feedback and learning. The nuanced information and classifications extracted from free text analysis thus suggest a complementary axis to existing approaches to characterise patient safety incident reports.
Currently, local incident reporting system are used by hospitals to submit reports to the NRLS and require risk managers to improve data quality of reports, due to errors or uncertainty in categorisation from reporters, before submission. The application of free text analytical approaches, like the one we have presented here, has the potential to free up risk managers time from labour-intensive tasks of classification and correction by human operators, instead for quality improvement activities derived from the intelligence of the data itself. Additionally, the method allows for the discovery of emerging topics or classes of incidents directly from the data when such events do not fit the pre-assigned categories by using projection techniques alongside methods for anomaly and innovation detection.
In ongoing work, we are currently examining the use of our characterisation of incident reports to enable comparisons across healthcare organisations and also to monitor their change over time. This part of ongoing research requires the quantification of in-class text similarities and to dynamically manage the embedding of the reports through updates and recalculation of the vector embedding. Improvements in the process of robust graph construction are also part of our future work. Detecting anomalies in the data to decide whether newer topic clusters should be created, or providing online classification suggestions to users based on the text they input are some of the improvements we aim to add in the future to aid with decision support and data collection, and to potentially help fine-tune some of the predefined categories of the external classification.
The framework brings the advantage of multi-resolution algorithms capable of capturing clusters without imposing a priori their number or structure.
|
Is the framework capable of capturing clusters without imposing a priori their number or structure?
|
Yes.
|
1903.00058
| false
| null |
Our baseline approach relies on a simple inverse document frequency (IDF) based similarity score. We define the IDF score of any token, INLINEFORM0 , as INLINEFORM1 , where INLINEFORM2 is the number of sentence pairs in training corpus and INLINEFORM3 is the number of sentences INLINEFORM4 occurs in. Let any two sentence pairs in the corpus be INLINEFORM5 and INLINEFORM6 . Then we define the similarity between INLINEFORM7 and INLINEFORM8 by, DISPLAYFORM0
Motivated by phrase based SMT, we retrieve neighbors which have high local, sub-sentence level overlap with the source sentence. We adapt our approach to retrieve n-grams instead of sentences. We note that the similarity metric defined above for sentences is equally applicable for n-gram retrieval.
Then we define the similarity between INLINEFORM7 and INLINEFORM8 by, DISPLAYFORM0
We note that the similarity metric defined above for sentences is equally applicable for n-gram retrieval.
|
Which similarity measure do they use in their n-gram retrieval approach?
|
The answers are shown as follows:
* we define the similarity between INLINEFORM7 and INLINEFORM8 by, DISPLAYFORM0
|
1908.07816
| false
| null |
Usually the probability distribution $p(\mathbf {y}\,|\,\mathbf {X})$ can be modeled by an RNN language model conditioned on $\mathbf {X}$. When generating the word $y_t$ at time step $t$, the context $\mathbf {X}$ is encoded into a fixed-sized dialog context vector $\mathbf {c}_t$ by following the hierarchical attention structure in HRAN BIBREF13. Additionally, we extract the emotion information from the utterances in $\mathbf {X}$ by leveraging an external text analysis program, and use an RNN to encode it into an emotion context vector $\mathbf {e}$, which is combined with $\mathbf {c}_t$ to produce the distribution. The overall architecture of the model is depicted in Figure FIGREF4. We are going to elaborate on how to obtain $\mathbf {c}_t$ and $\mathbf {e}$, and how they are combined in the decoding part.
When generating the word $y_t$ at time step $t$, the context $\mathbf {X}$ is encoded into a fixed-sized dialog context vector $\mathbf {c}_t$ by following the hierarchical attention structure in HRAN BIBREF13. Additionally, we extract the emotion information from the utterances in $\mathbf {X}$ by leveraging an external text analysis program, and use an RNN to encode it into an emotion context vector $\mathbf {e}$, which is combined with $\mathbf {c}_t$ to produce the distribution.
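As a hedged illustration of the final combination step — not the paper's exact decoder — the sketch below concatenates a dialog context vector c_t with an emotion context vector e and projects the result to a vocabulary distribution. The dimensions and the single linear projection are illustrative assumptions:

```python
import math
import random

random.seed(0)

def softmax(logits):
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [v / s for v in exps]

def output_distribution(c_t, e, W, b):
    """Concatenate the dialog context vector c_t with the emotion context
    vector e, apply a linear projection, and normalise to a distribution
    over the vocabulary."""
    x = c_t + e  # list concatenation acts as vector concatenation here
    logits = [sum(w * xi for w, xi in zip(row, x)) + bk
              for row, bk in zip(W, b)]
    return softmax(logits)

dim_c, dim_e, vocab = 4, 3, 5
W = [[random.uniform(-1.0, 1.0) for _ in range(dim_c + dim_e)]
     for _ in range(vocab)]
b = [0.0] * vocab
p = output_distribution([0.1] * dim_c, [0.2] * dim_e, W, b)
print(p)
```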
|
How does the multi-turn dialog system learn?
|
The answers are shown as follows:
* we extract the emotion information from the utterances in $\mathbf {X}$ by leveraging an external text analysis program, and use an RNN to encode it into an emotion context vector $\mathbf {e}$, which is combined with $\mathbf {c}_t$ to produce the distribution
|
null | false
| null |
What are the Pros and Cons of social media?
|
Social media are interactive technologies that facilitate the creation and sharing of information, ideas, interests, and other forms of expression through virtual communities and networks. Social media are great tools for making new friends or connecting with old ones, can bring together people who are interested in the same topic, and allow users to reach a wide audience in a matter of seconds. However, the nature of social media brings a range of potential issues. Too much time spent on social media can lead to social anxiety, depression, and exposure to content that is not appropriate for the age of the user. Another potential issue is cyberbullying, a form of bullying or harassment using electronic means, which has become increasingly common, especially among teenagers, as the digital sphere has expanded.
|
|
null | false
| null |
The Mo, also known simply as Mo, was a Dutch pop band best remembered for the 1980s hits "Fred Astaire" and "Cheese." The band was known to be an experimental pop band during the new wave era.
|
Name a song The Mo is best known for that is not named after a dairy product.
|
Fred Astaire
|
null | false
| 58
|
Recent studies have demonstrated that the systems trained on the human-written texts learn human-like biases BIBREF1 , BIBREF6 . In general, any predictive model built on historical data may inadvertently inherit human biases based on gender, ethnicity, race, or religion BIBREF7 , BIBREF8 . Discrimination-aware data mining focuses on measuring discrimination in data as well as on evaluating performance of discrimination-aware predictive models BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 .
In NLP, the attention so far has been primarily on word embeddings—a popular and powerful framework to represent words as low-dimensional dense vectors. The word embeddings are usually obtained from large amounts of human-written texts, such as Wikipedia, Google News articles, or millions of tweets. Bias in sentiment analysis systems has only been explored in simple systems that make use of pre-computed word embeddings BIBREF13 . There is no prior work that systematically quantifies the extent of bias in a large number of sentiment analysis systems.
This paper does not examine the differences in accuracies of systems on text produced by different races or genders, as was done by hovy2015demographic,blodgett2016demographic,jurgens2017incorporating,buolamwini2018gender. Approaches on how to mitigate inappropriate biases BIBREF14 , BIBREF1 , BIBREF15 , BIBREF16 , BIBREF13 , BIBREF17 , BIBREF18 are also beyond the scope of this paper. See also the position paper by hovy2016social, which identifies socio-ethical implications of the NLP systems in general.
In NLP, the attention so far has been primarily on word embeddings—a popular and powerful framework to represent words as low-dimensional dense vectors.
|
What has NLP focused its attention on so far?
|
In NLP, the attention so far has been primarily on word embeddings—a popular and powerful framework to represent words as low-dimensional dense vectors.
|
null | false
| null |
Decide the sentiment of the following user comments. Classify each of them as "Positive", "Neutral", "Mixed", or "Negative".
1. the latest update removed all the best features!! how could they do this to such a good app!!
2. There's not much to say; it checks emails, has a slick UI, and lets me focus on what I need to do.
3. It's pretty good.
4. The app's UI is amazing, but it doesn't seem to notify me consistently which really sucks. Not sure what's up with that but if they could get that working then it'd be amazing.
5. I can't believe how terrible this app is; it literally won't do anything I ask it to.
6. wow, im impressed
|
1. Negative
2. Positive
3. Neutral
4. Mixed
5. Negative
6. Positive
|
|
null | false
| null |
Long short-term memory (LSTM) is an artificial neural network used in the fields of artificial intelligence and deep learning. Unlike standard feedforward neural networks, LSTM has feedback connections. Such a recurrent neural network (RNN) can process not only single data points (such as images), but also entire sequences of data (such as speech or video). This characteristic makes LSTM networks well suited to processing and making predictions based on sequential data. For example, LSTM is applicable to tasks such as unsegmented, connected handwriting recognition, speech recognition, machine translation, speech activity detection, robot control, video games, and healthcare.
|
Given this paragraph, what are some examples of long short-term memory (LSTM) use cases?
|
Long short-term memory is used for handwriting recognition, speech recognition, machine translation, speech activity detection, robot control, video games and healthcare.
|
2002.00652
| false
| null |
In this paper, we try to address the above insufficiency via an exploratory study on real-world semantic parsing in context. Concretely, we present a grammar-based decoding semantic parser and adapt typical context modeling methods on top of it. Through experiments on two large complex cross-domain datasets, SParC BIBREF2 and CoSQL BIBREF6, we carefully compare and analyze the performance of different context modeling methods. Our best model achieves state-of-the-art (SOTA) performance on both datasets with significant improvements. Furthermore, we summarize and generalize the most frequent contextual phenomena, with a fine-grained analysis of representative models. Through the analysis, we obtain some interesting findings, which may benefit the community regarding potential research directions. We will open-source our code and materials to facilitate future work upon acceptance.
Through experiments on two large complex cross-domain datasets, SParC BIBREF2 and CoSQL BIBREF6, we carefully compare and analyze the performance of different context modeling methods.
|
What two large datasets are used for evaluation?
|
The answers are shown as follows:
* SParC BIBREF2 and CoSQL BIBREF6
|
null | false
| null |
A garden centre (Commonwealth English spelling; U.S. nursery or garden center) is a retail operation that sells plants and related products for the domestic garden as its primary business.
It is a development from the concept of the retail plant nursery but with a wider range of outdoor products and on-site facilities. It is now usual for garden centres to obtain their stock from plants which have been propagated elsewhere, such as by specialist nurseries or wholesalers.
|
What is a garden center?
|
A garden center sells plants and products for domestic gardens. Plants are usually obtained from other specialist nurseries or wholesalers. It is the addition of outdoor products that makes it different from the plant nurseries of the past.
|
null | false
| 45
|
Speech-to-Text translation (ST) is essential for a wide range of scenarios: for example, in emergency calls, where agents have to respond to urgent requests in a foreign language BIBREF0; or in online courses, where audiences and speakers use different languages BIBREF1. To tackle this problem, existing approaches can be categorized into the cascaded method BIBREF2, BIBREF3, where a machine translation (MT) model translates the outputs of an automatic speech recognition (ASR) system into the target language, and the end-to-end method BIBREF4, BIBREF5, where a single model learns mappings from acoustic frames to the target word sequence in one step towards the final objective of interest. Although the cascaded model remains the dominant approach due to its better performance, the end-to-end method is becoming more and more popular because it has lower latency, avoiding inference with two models, and in theory rectifies error propagation.
Since it is hard to obtain a large-scale ST dataset, multi-task learning BIBREF5, BIBREF6 and pre-training techniques BIBREF7 have been applied to end-to-end ST models to leverage large-scale ASR and MT datasets. A common practice is to pre-train two encoder-decoder models for ASR and MT respectively, and then initialize the ST model with the encoder of the ASR model and the decoder of the MT model. Subsequently, the ST model is optimized with multi-task learning by weighing the losses of ASR, MT, and ST. This approach, however, causes a large gap between pre-training and fine-tuning, which we summarize in three respects:
Subnet Waste: The ST system just reuses the ASR encoder and the MT decoder, while discards other pre-trained subnets, such as the MT encoder. Consequently, valuable semantic information captured by the MT encoder cannot be inherited by the final ST system.
Role Mismatch: The speech encoder plays different roles in pre-training and fine-tuning. The encoder is a pure acoustic model in pre-training, while it has to extract semantic and linguistic features additionally in fine-tuning, which significantly increases the learning difficulty.
Non-pre-trained Attention Module: Previous work BIBREF6 trains attention modules for ASR, MT and ST respectively, hence, the attention module of ST does not benefit from the pre-training.
To address these issues, we propose a Tandem Connectionist Encoding Network (TCEN), which is able to reuse all subnets in pre-training, keep the roles of subnets consistent, and pre-train the attention module. Concretely, the TCEN consists of three components, a speech encoder, a text encoder, and a target text decoder. Different from the previous work that pre-trains an encoder-decoder based ASR model, we only pre-train an ASR encoder by optimizing the Connectionist Temporal Classification (CTC) BIBREF8 objective function. In this way, the additional decoder of ASR is not required while keeping the ability to read acoustic features into the source language space by the speech encoder. Besides, the text encoder and decoder can be pre-trained on a large MT dataset. After that, we employ common used multi-task learning method to jointly learn ASR, MT and ST tasks.
Compared to prior works, the encoder of TCEN is a concatenation of an ASR encoder and an MT encoder, and our model does not have an ASR decoder, so the subnet waste issue is solved. Furthermore, the two encoders work in tandem, disentangling acoustic feature extraction and linguistic feature extraction, ensuring role consistency between pre-training and fine-tuning. Moreover, we reuse the pre-trained MT attention module in ST, so we can leverage the alignment information learned in pre-training.
Since the text encoder consumes word embeddings of plausible texts in MT task but uses speech encoder outputs in ST task, another question is how one guarantees the speech encoder outputs are consistent with the word embeddings. We further modify our model to achieve semantic consistency and length consistency. Specifically, (1) the projection matrix at the CTC classification layer for ASR is shared with the word embedding matrix, ensuring that they are mapped to the same latent space, and (2) the length of the speech encoder output is proportional to the length of the input frame, so it is much longer than a natural sentence. To bridge the length gap, source sentences in MT are lengthened by adding word repetitions and blank tokens to mimic the CTC output sequences.
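The length-consistency trick can be illustrated with a small sketch. The exact repetition and blank-insertion scheme used in the paper is not specified here, so the sampling ranges below are assumptions:

```python
import random

BLANK = "<blank>"

def mimic_ctc_output(words, max_repeat=3, seed=0):
    """Lengthen a source sentence by repeating each word and inserting
    blank tokens, so that MT-side inputs resemble the frame-level output
    sequences produced by a CTC speech encoder."""
    rng = random.Random(seed)
    out = []
    for w in words:
        out.extend([w] * rng.randint(1, max_repeat))   # word repetitions
        out.extend([BLANK] * rng.randint(0, 2))        # optional blanks
    return out

print(mimic_ctc_output(["hello", "world"]))
```

Collapsing adjacent repeats and removing blanks — the standard CTC decoding rule — recovers the original sentence, which is what keeps the two input styles consistent.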
We conduct comprehensive experiments on the IWSLT18 speech translation benchmark BIBREF1, demonstrating the effectiveness of each component. Our model is significantly better than previous methods by 3.6 and 2.2 BLEU scores for the subword-level decoding and character-level decoding strategies, respectively.
Our contributions are three-fold: 1) we shed light on why previous ST models cannot sufficiently utilize the knowledge learned from the pre-training process; 2) we propose a new ST model, which alleviates shortcomings in existing methods; and 3) we empirically evaluate the proposed model on a large-scale public dataset.
Our contributions are three-fold: 1) we shed light on why previous ST models cannot sufficiently utilize the knowledge learned from the pre-training process; 2) we propose a new ST model, which alleviates shortcomings in existing methods; and 3) we empirically evaluate the proposed model on a large-scale public dataset.
|
What are their contributions?
|
They explain the previous models' limitations and propose a new model that alleviates shortcomings in existing methods, and empirically evaluate the proposed model.
|
null | false
| 22
|
We now evaluate LiLi in terms of its predictive performance and strategy formulation abilities.
Data: We use two standard datasets (see Table 4): (1) Freebase FB15k, and (2) WordNet INLINEFORM0 . Using each dataset, we build a fairly large graph and use it as the original KB ( INLINEFORM1 ) for evaluation. We also augment INLINEFORM2 with inverse triples ( INLINEFORM3 , INLINEFORM4 , INLINEFORM5 ) for each ( INLINEFORM6 , INLINEFORM7 , INLINEFORM8 ) following existing KBC methods.
Parameter Settings. Unless specified, the empirically set parameters (see Table 1) of LiLi are: INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , INLINEFORM3 , INLINEFORM4 , INLINEFORM5 , INLINEFORM6 , INLINEFORM7 , INLINEFORM8 , INLINEFORM9 , INLINEFORM10 . For training RL-model with INLINEFORM11 -greedy strategy, we use INLINEFORM12 , INLINEFORM13 , pre-training steps=50000. We used Keras deep learning library to implement and train the prediction model. We set batch-size as 128, max. training epoch as 150, dropout as 0.2, hidden units and embedding size as 300 and learning rate as 5e-3 which is reduced gradually on plateau with factor 0.5 and patience 5. Adam optimizer and early stopping were used in training. We also shuffle INLINEFORM14 in each epoch and adjust class weights inversely proportional to class frequencies in INLINEFORM15 .
Labeled Dataset Generation and Simulated User Creation. We create a simulated user for each KB to evaluate LiLi. We create the labeled datasets, the simulated user’s knowledge base ( INLINEFORM0 ), and the base KB ( INLINEFORM1 ) from INLINEFORM2 . INLINEFORM3 used as the initial KB graph ( INLINEFORM4 ) of LiLi.
We followed BIBREF16 for labeled dataset generation. For Freebase, we found 86 relations with INLINEFORM0 triples and randomly selected 50 from various domains. We randomly shuffle the list of 50 relations, select 25% of them as unknown relations and consider the rest (75%) as known relations. For each known relation INLINEFORM1 , we randomly shuffle the list of distinct triples for INLINEFORM2 , choose 1000 triples and split them into 60% training, 10% validation and 20% test. The remaining 10%, along with the leftover (not included in the list of 1000) triples, are added to INLINEFORM3 . For each unknown relation INLINEFORM4 , we remove all triples of INLINEFORM5 from INLINEFORM6 and add them to INLINEFORM7 . In this process, we also randomly choose 20% triples as test instances for unknown INLINEFORM8 which are excluded from INLINEFORM9 . Note that INLINEFORM10 now has at least 10% of chosen triples for each INLINEFORM11 (known and unknown) and so the user is always able to provide clues for both cases. For each labeled dataset, we randomly choose 10% of the entities present in dataset triples, remove triples involving those entities from INLINEFORM12 and add to INLINEFORM13 . At this point, INLINEFORM14 gets reduced to INLINEFORM15 and is used as INLINEFORM16 for LiLi. The dataset stats in Table 4 show that the base KB (60% triples of INLINEFORM17 ) is highly sparse (compared to the original KB), which makes the inference task much harder. The WordNet dataset being small, we select all 18 relations for evaluation and create the labeled dataset, INLINEFORM18 and INLINEFORM19 following Freebase. Although the user may provide clues 100% of the time, it often cannot respond to MLQs and CLQs (due to lack of required triples/facts). Thus, we further enrich INLINEFORM20 with external KB triples.
Given a relation INLINEFORM0 and an observed triple ( INLINEFORM1 , INLINEFORM2 , INLINEFORM3 ) in training or testing, the pair ( INLINEFORM4 , INLINEFORM5 ) is regarded as a +ve instance for INLINEFORM6 . Following BIBREF18 , for each +ve instance ( INLINEFORM7 , INLINEFORM8 ), we generate two negative ones, one by randomly corrupting the source INLINEFORM9 , and the other by corrupting the target INLINEFORM10 . Note that, the test triples are not in INLINEFORM11 or INLINEFORM12 and none of the -ve instances overlap with the +ve ones.
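Following this scheme, here is a minimal sketch of negative-instance generation by entity corruption (in practice one would also filter out corrupted pairs that happen to be true triples, so that -ve instances never overlap with +ve ones, as the text requires):

```python
import random

def corrupt(pair, entities, rng):
    """Generate two -ve instances from a +ve instance (s, t): one with a
    randomly corrupted source and one with a corrupted target. Corrupted
    pairs that are actually true triples should be filtered out."""
    s, t = pair
    s_neg = rng.choice([e for e in entities if e != s])
    t_neg = rng.choice([e for e in entities if e != t])
    return [(s_neg, t), (s, t_neg)]

rng = random.Random(42)
entities = ["paris", "france", "berlin", "germany"]
negatives = corrupt(("paris", "france"), entities, rng)
print(negatives)
```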
Baselines. As none of the existing KBC methods can solve the OKBC problem, we choose various versions of LiLi as baselines.
Single: Version of LiLi where we train a single prediction model INLINEFORM0 for all test relations.
Sep: We do not transfer (past learned) weights for initializing INLINEFORM0 , i.e., we disable LL.
F-th): Here, we use a fixed prediction threshold 0.5 instead of relation-specific threshold INLINEFORM0 .
BG: The missing or connecting links (when the user does not respond) are filled with “@-RelatedTo-@" blindly, no guessing mechanism.
w/o PTS: LiLi does not ask for additional clues via past task selection for skillset improvement.
Evaluation Metrics. To evaluate the strategy formulation ability, we introduce a measure called Coverage( INLINEFORM0 ), defined as the fraction of total query data instances, for which LiLi has successfully formulated strategies that lead to winning. If LiLi wins on all episodes for a given dataset, INLINEFORM1 is 1.0. To evaluate the predictive performance, we use Avg. MCC and avg. +ve F1 score.
We use two standard datasets (see Table 4): (1) Freebase FB15k, and (2) WordNet.
|
What datasets are used by them?
|
Freebase FB15k and WordNet.
|
null | false
| 42
|
Neural machine translation (NMT) emerged in the last few years as a very successful paradigm BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 . While NMT is generally more fluent than previous statistical systems, adequacy is still a major concern BIBREF4 : common mistakes include dropping source words and repeating words in the generated translation.
Previous work has attempted to mitigate this problem in various ways. BIBREF5 incorporate coverage and length penalties during beam search—a simple yet limited solution, since it only affects the scores of translation hypotheses that are already in the beam. Other approaches involve architectural changes: providing coverage vectors to track the attention history BIBREF6 , BIBREF7 , using gating architectures and adaptive attention to control the amount of source context provided BIBREF8 , BIBREF9 , or adding a reconstruction loss BIBREF10 . BIBREF11 also use the notion of fertility implicitly in their proposed model. Their “fertility conditioned decoder” uses a coverage vector and an “extract gate” which are incorporated in the decoding recurrent unit, increasing the number of parameters.
In this paper, we propose a different solution that does not change the overall architecture, but only the attention transformation. Namely, we replace the traditional softmax by other recently proposed transformations that either promote attention sparsity BIBREF12 or upper bound the amount of attention a word can receive BIBREF13 . The bounds are determined by the fertility values of the source words. While these transformations have given encouraging results in various NLP problems, they have never been applied to NMT, to the best of our knowledge. Furthermore, we combine these two ideas and propose a novel attention transformation, constrained sparsemax, which produces both sparse and bounded attention weights, yielding a compact and interpretable set of alignments. While being in-between soft and hard alignments (Figure FIGREF20 ), the constrained sparsemax transformation is end-to-end differentiable, hence amenable for training with gradient backpropagation.
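As a reference point, the (unconstrained) sparsemax transformation can be sketched in a few lines — this is the standard sorting-based projection onto the probability simplex, not the constrained variant with fertility upper bounds proposed here:

```python
def sparsemax(z):
    """Sparsemax (Martins & Astudillo, 2016): Euclidean projection of the
    logits z onto the probability simplex, which can return exact zeros."""
    zs = sorted(z, reverse=True)
    cumsum, tau = 0.0, 0.0
    for k, z_k in enumerate(zs, start=1):
        cumsum += z_k
        if 1 + k * z_k > cumsum:
            tau = (cumsum - 1) / k  # threshold for the current support size
        else:
            break
    return [max(z_i - tau, 0.0) for z_i in z]

print(sparsemax([2.0, 0.0]))  # fully concentrated: [1.0, 0.0]
print(sparsemax([0.5, 0.5]))  # ties preserved: [0.5, 0.5]
```

Unlike softmax, sparsemax assigns exactly zero attention to low-scoring words, which is what yields the compact, interpretable alignments discussed above.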
To sum up, our contributions are as follows:
While NMT is generally more fluent than previous statistical systems, adequacy is still a major concern (Koehn and Knowles, 2017): common mistakes include dropping source words and repeating words in the generated translation.
|
What types of mistakes are commonly seen in NMT results?
|
Dropping source words and repeating words in the generated translation.
|
null | false
| null |
What’s a recipe for chimichurri?
|
Start with parsley as the foundation of the sauce; add salt, pepper, garlic powder, oregano, paprika and a touch of black garlic; finally, add olive oil, mixing all the spices thoroughly.
|
|
null | false
| null |
Is it faster to take the subway or drive a car?
|
Sometimes it is faster to take the subway and sometimes it is faster to drive a car. The answer for any particular trip primarily depends on the origin and destination of the trip. If the origin and destination are both near a subway station, then it is often faster to take the subway, especially if there is a lot of automobile traffic on the roads and you do not need to connect between subway lines. But if either the origin or destination, or both, are far from a subway station, then the total time for the trip may be dominated by the time it takes to get to and from the subway station on either or both ends of the journey; this may add enough time to the time spent on the subway so that the total trip end-to-end takes longer than driving the whole way.
|
|
null | false
| null |
Cisco Systems, Inc., commonly known as Cisco, is an American-based multinational digital communications technology conglomerate corporation headquartered in San Jose, California. Cisco develops, manufactures, and sells networking hardware, software, telecommunications equipment and other high-technology services and products. Cisco specializes in specific tech markets, such as the Internet of Things (IoT), domain security, videoconferencing, and energy management with leading products including Webex, OpenDNS, Jabber, Duo Security, and Jasper. Cisco is one of the largest technology companies in the world ranking 74 on the Fortune 100 with over $51 billion in revenue and nearly 80,000 employees.
|
What is Cisco's formal company name, where is their headquarters, and how many employees are there in the company?
|
Cisco Systems, Inc. is located in San Jose, California and has nearly 80,000 employees.
|
null | false
| null |
How do you train for a half marathon?
|
To train for a half marathon, build up your running volume over the course of a few months. At first, start with just a couple of 2-3 mile runs per week. Then increase mileage by no more than 10% per week, up to 15-20 miles total per week.
To reduce injury risk, do 80% of your running at a slow pace. This will allow you to build an aerobic base without putting too much load on your body. For the remaining 20%, go hard and push your limits.
On race day, make sure to get enough rest and eat a good breakfast. Try to do a warm-up mile before the race starts. Then during the race, find the pacer running at your speed and stick with the group for most of the race. In the last half mile, sprint to the finish!
|
|
null | false
| null |
What genre did Taylor Swift begin her career in?
|
Taylor Swift began her career as a country artist
|
|
null | false
| null |
What things could a high school graduate do in a gap year between high school and college?
|
Go to community college, take different classes and see which subjects they like.
Travel / backpacking in a very different part of the world.
Get a job, earn some money and get work experience.
Take on a passion project and try to make it work.
Volunteer at a non-profit in an area they might want to pursue as a career.
|
|
null | false
| null |
Categorize the following data types into personal or business data: e-mail address, financials, government identifiers, system designs, marketing materials, credit card numbers
|
E-mail Address: Personal Data
Financials: Business Data
Government Identifiers: Personal Data
System Designs: Business Data
Marketing Materials: Business Data
Credit Card Numbers: Personal Data
|
|
null | false
| null |
Identify which car manufacturer is German or American: Tesla, Elektron Motors
|
Tesla is American, Elektron Motors is German
|
|
null | false
| null |
In the series A Song of Ice and Fire, who is the founder of House Redbeard?
|
Gerrick Kingsblood was a free folk raider who founded House Redbeard.
|
|
null | false
| 277
|
AliMe Bot is a retrieval-based online service for e-commerce that collects a large number of predefined question-answering pairs. Through data analysis, we find that many variants of a question exist, which means that a set of questions can correspond to the same answer. Based on this observation, we can naturally view questions with the same answer as a bag. The bag contains diverse expressions of a question, which may provide more matching evidence than a single question, due to the rich information contained in the bag. Motivated by this fact, and different from existing query-question (Q-Q) matching methods, we propose a new query-bag matching approach for retrieval-based chatbots. Concretely, when a user raises a query, the query-bag matching model finds the most suitable bag and returns the corresponding answer of that bag. To our knowledge, no query-bag matching study exists, and we focus on this new approach in this paper.
Recalling the text matching task BIBREF0, recently, researchers have adopted the deep neural network to model the matching relationship. ESIM BIBREF1 judges the inference relationship between two sentences by enhanced LSTM and interaction space. SMN BIBREF2 performs the context-response matching for the open-domain dialog system. BIBREF3 BIBREF3 explores the usefulness of noisy pre-training in the paraphrase identification task. BIBREF4 BIBREF4 surveys the methods in query-document matching in web search which focuses on the topic model, the dependency model, etc. However, none of them pays attention to the query-bag matching which concentrates on the matching for a query and a bag containing multiple questions.
When a user poses a query to the bot, the bot searches for the most similar bag and uses the corresponding answer to reply to the user. The more of the information in the query that is covered by the bag, the more likely the bag's corresponding answer answers the query. Moreover, the bag should not contain too much information beyond the query. Thus, modelling bag-to-query and query-to-bag coverage is essential in this task.
In this paper, we propose a simple but effective mutual coverage component to model the above-mentioned problem. The coverage is based on the cross-attention matrix of the query-bag pair, which indicates the matching degree between elements of the query and the bag. The mutual coverage is computed by aggregating the cross-attention matrix along the two directions, i.e., query and bag, at the word level respectively. In addition to the mutual coverage, a word-level bag representation is introduced to help discover the main points of a bag. The bag representation then provides new matching evidence to the query-bag matching model.
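A minimal sketch of one plausible reading of this component — taking the word-level coverage in each direction as the row/column maxima of the cross-attention matrix; the paper's exact aggregation may differ:

```python
def mutual_coverage(A):
    """A[i][j] is the matching degree between query word i and bag word j.
    Query-side coverage: how well each query word is matched somewhere in
    the bag; bag-side coverage: the reverse direction."""
    query_cov = [max(row) for row in A]
    bag_cov = [max(A[i][j] for i in range(len(A))) for j in range(len(A[0]))]
    return query_cov, bag_cov

# Toy 2-word query vs. 3-word bag cross-attention matrix.
A = [[0.9, 0.1, 0.0],
     [0.2, 0.8, 0.1]]
q_cov, b_cov = mutual_coverage(A)
print(q_cov, b_cov)
```

A low value in `q_cov` flags a query word the bag fails to cover; a low value in `b_cov` flags bag content that goes beyond the query — the two failure modes discussed above.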
We conduct experiments on the AliMe and Quora datasets for query-bag matching based information-seeking conversation. Compared with baselines, we verify the effectiveness of our model. Our model obtains 0.05 and 0.03 $\text{R}_{10}@1$ gains over the strongest baseline on the two datasets. The ablation study shows the usefulness of the components. The contributions of this paper are summarized as follows: 1) To the best of our knowledge, we are the first to adopt query-bag matching in information-seeking conversation. 2) We propose the mutual coverage model to measure information coverage in query-bag matching. 3) We release the composite Quora dataset to facilitate research in this area.
Motivated by this fact, and different from existing query-question (Q-Q) matching methods, we propose a new query-bag matching approach for retrieval-based chatbots.
|
What is their new query-bag matching approach for?
|
Retrieval-based chatbots.
|
null | false
| null |
The University of Liverpool (abbreviated UOL) is a public research university based in the city of Liverpool, England. Founded as a college in 1881, it gained its Royal Charter in 1903 with the ability to award degrees, and is also known to be one of the six 'red brick' civic universities, the first to be referred to as The Original Red Brick. It comprises three faculties organised into 35 departments and schools. It is a founding member of the Russell Group, the N8 Group for research collaboration and the university management school is triple crown accredited.
|
Jot down some points about University of Liverpool from given text
|
1. The University of Liverpool (abbreviated UOL) is a public research university with its main campus in Liverpool, England.
2. It is one of the six 'red brick' civic universities and was the first to be referred to as The Original Red Brick; it was established in 1881 as a college before receiving its Royal Charter in 1903, enabling it to grant degrees.
3. It has 35 departments and schools divided into three faculties.
4. The university management school has earned the triple crown of accreditation, and it is a founding member of the Russell Group and the N8 Group for research collaboration.
|
1707.07554
| false
| null |
To verify the reliability of the transformed semantic space, we propose an evaluation benchmark on the basis of word similarity datasets. Given an enriched space INLINEFORM0 and a similarity dataset INLINEFORM1 , we compute the similarity of each word pair INLINEFORM2 as the cosine similarity of their corresponding transformed vectors INLINEFORM3 and INLINEFORM4 from the two spaces, where INLINEFORM5 and INLINEFORM6 for LS and INLINEFORM7 and INLINEFORM8 for CCA. A high performance on this benchmark shows that the mapping has been successful in placing semantically similar terms near to each other whereas dissimilar terms are relatively far apart in the space. We repeat the computation for each pair in the reverse direction.
|
How is performance measured?
|
The answers are shown as follows:
* To verify the reliability of the transformed semantic space, we propose an evaluation benchmark on the basis of word similarity datasets. Given an enriched space INLINEFORM0 and a similarity dataset INLINEFORM1 , we compute the similarity of each word pair INLINEFORM2 as the cosine similarity of their corresponding transformed vectors INLINEFORM3 and INLINEFORM4 from the two spaces, where INLINEFORM5 and INLINEFORM6 for LS and INLINEFORM7 and INLINEFORM8 for CCA.
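The cosine-similarity computation used in the benchmark above can be sketched as follows (the transformed vectors here are hypothetical placeholders, not values from the paper):

```python
import math

def cosine(u, v):
    # Cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical transformed vectors of one word pair, one from each mapped space.
t1 = [0.2, 0.8, 0.1]
t2 = [0.25, 0.75, 0.05]
similarity = cosine(t1, t2)
```

A high similarity for pairs the gold dataset marks as similar, and a low one for dissimilar pairs, is what indicates a successful mapping.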
|
null | false
| 108
|
As this corpus of annotated patient notes comprises original healthcare data which contains protected health information (PHI) per The Health Information Portability and Accountability Act of 1996 (HIPAA) BIBREF16 and can be joined to the MIMIC-III database, individuals who wish to access to the data must satisfy all requirements to access the data contained within MIMIC-III. To satisfy these conditions, an individual who wishes to access the database must take a “Data or Specimens Management” course, as well as sign a user agreement, as outlined on the MIMIC-III database webpage, where the latest version of this database will be hosted as “Annotated Clinical Texts from MIMIC” BIBREF17. This corpus can also be accessed on GitHub after completing all of the above requirements.
To satisfy these conditions, an individual who wishes to access the database must take a “Data or Specimens Management” course, as well as sign a user agreement, as outlined on the MIMIC-III database webpage, where the latest version of this database will be hosted as “Annotated Clinical Texts from MIMIC”1 (Moseley et al., 2020).
|
Could anyone access the MIMIC-III database?
|
No, they couldn't.
|
1606.08495
| false
| null |
More recently, novel applications of word2vec involving unconventional generalized “words” and training corpuses have been proposed. These powerful ideas from the NLP community have been adapted by researchers from other domains to tasks beyond representation of words, including relational entities BIBREF1 , BIBREF2 , general text-based attributes BIBREF3 , descriptive text of images BIBREF4 , nodes in graph structure of networks BIBREF5 , and queries BIBREF6 , to name a few.
These powerful ideas from the NLP community have been adapted by researchers from other domains to tasks beyond representation of words, including relational entities BIBREF1 , BIBREF2 , general text-based attributes BIBREF3 , descriptive text of images BIBREF4 , nodes in graph structure of networks BIBREF5 , and queries BIBREF6 , to name a few.
|
What domains are considered that have such large vocabularies?
|
The answers are shown as follows:
* relational entities
* general text-based attributes
* descriptive text of images
* nodes in graph structure of networks
* queries
|
null | false
| null |
The movement of electric charge is known as an electric current, the intensity of which is usually measured in amperes. Current can consist of any moving charged particles; most commonly these are electrons, but any charge in motion constitutes a current. Electric current can flow through some things, electrical conductors, but will not flow through an electrical insulator.
By historical convention, a positive current is defined as having the same direction of flow as any positive charge it contains, or to flow from the most positive part of a circuit to the most negative part. Current defined in this manner is called conventional current. The motion of negatively charged electrons around an electric circuit, one of the most familiar forms of current, is thus deemed positive in the opposite direction to that of the electrons. However, depending on the conditions, an electric current can consist of a flow of charged particles in either direction, or even in both directions at once. The positive-to-negative convention is widely used to simplify this situation.
The process by which electric current passes through a material is termed electrical conduction, and its nature varies with that of the charged particles and the material through which they are travelling. Examples of electric currents include metallic conduction, where electrons flow through a conductor such as metal, and electrolysis, where ions (charged atoms) flow through liquids, or through plasmas such as electrical sparks. While the particles themselves can move quite slowly, sometimes with an average drift velocity only fractions of a millimetre per second, the electric field that drives them itself propagates at close to the speed of light, enabling electrical signals to pass rapidly along wires.
In engineering or household applications, current is often described as being either direct current (DC) or alternating current (AC). These terms refer to how the current varies in time. Direct current, as produced by example from a battery and required by most electronic devices, is a unidirectional flow from the positive part of a circuit to the negative. If, as is most common, this flow is carried by electrons, they will be travelling in the opposite direction. Alternating current is any current that reverses direction repeatedly; almost always this takes the form of a sine wave. Alternating current thus pulses back and forth within a conductor without the charge moving any net distance over time. The time-averaged value of an alternating current is zero, but it delivers energy in first one direction, and then the reverse. Alternating current is affected by electrical properties that are not observed under steady state direct current, such as inductance and capacitance. These properties however can become important when circuitry is subjected to transients, such as when first energised.
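A small numerical sketch illustrates the last point above: over whole cycles a sine-wave current averages to zero, yet its mean-square value, which governs delivered energy, does not (the sampling setup here is illustrative):

```python
import math

N = 100_000  # samples over one full cycle of a unit-amplitude sine current

samples = [math.sin(2 * math.pi * k / N) for k in range(N)]

# Time-averaged current over whole cycles is (numerically) zero...
mean_current = sum(samples) / N

# ...but the mean-square value, which governs energy delivery, is 1/2,
# giving the familiar RMS value of peak / sqrt(2).
mean_square = sum(s * s for s in samples) / N
rms = math.sqrt(mean_square)
```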
|
What is electric current and how is it measured?
|
The movement of electric charge is known as an electric current, the intensity of which is usually measured in amperes.
|
null | false
| null |
List as many types of craft beer as you can
|
IPA, pale ale, wheat beer, lager, pilsner, stout, porter, brown ale, sour, saison, red ale, imperial stout, mild ale, english special bitter, milkshake IPA, sour IPA, cream ale
|
|
null | false
| null |
Replikator is a 1994 science fiction film directed by G. Philip Jackson and starring Michael St. Gerard, Brigitte Bako, Ned Beatty and Ilona Staller.
Cast
Michael St. Gerard as Ludo Ludovic
Brigitte Bako as Kathy Moscow
Ned Beatty as Inspector Victor Valiant
Ilona Staller as Miss Tina Show (as La Cicciolina)
Lisa Howard as Lena
Peter Outerbridge as John Cheever
Ron Lea as Byron Scott
David Hemblen as Police Chief
Mackenzie Gray as Candor
Frank Moore as Investigating Officer
|
Extract the names from the cast of the film Replikator, and separate them with a comma.
|
Michael St. Gerard as Ludo Ludovic, Brigitte Bako as Kathy Moscow, Ned Beatty as Inspector Victor Valiant, Ilona Staller as Miss Tina Show (as La Cicciolina), Lisa Howard as Lena, Peter Outerbridge as John Cheever, Ron Lea as Byron Scott, David Hemblen as Police Chief, Mackenzie Gray as Candor and Frank Moore as Investigating Officer
|
null | false
| null |
What is a remote control used for?
|
A remote control is used for controlling electronic devices. Some items you can use with a remote are a remote-control car, television, stereo receiver, lights, home automation systems, fans, and kids' toys. A remote requires batteries or some other power source in order to operate.
|
|
null | false
| null |
What makes Southwest Virginia special?
|
Southwest Virginia is a beautiful place. The area is nestled in the Appalachian Mountains, the perfect backdrop for incredible scenic views. The climate is fantastic: spring and fall bring brisk weather and the arrival or farewell of the leaves. Finally, the friendly people of Virginia are eager to share the beauty with visitors.
|
|
null | false
| null |
Who is Harry Potter?
|
Harry Potter is a series of seven fantasy novels written by British author J. K. Rowling. The novels chronicle the lives of a young wizard, Harry Potter, and his friends Hermione Granger and Ron Weasley, all of whom are students at Hogwarts School of Witchcraft and Wizardry. The main story arc concerns Harry's conflict with Lord Voldemort, a dark wizard who intends to become immortal, overthrow the wizard governing body known as the Ministry of Magic and subjugate all wizards and Muggles (non-magical people).
The series follows the life of a boy named Harry Potter. In the first book, Harry Potter and the Philosopher's Stone, Harry lives in a cupboard under the stairs in the house of the Dursleys, his aunt, uncle and cousin, Dudley. The Dursleys consider themselves perfectly normal, but at the age of 11, Harry discovers that he is a wizard. He meets a half-giant named Hagrid who invites him to attend the Hogwarts School of Witchcraft and Wizardry. Harry learns that as a baby, his parents were murdered by the dark wizard Lord Voldemort. When Voldemort attempted to kill Harry, his curse rebounded and Harry survived with a lightning-shaped scar on his forehead.
Harry becomes a student at Hogwarts and is sorted into Gryffindor House. He gains the friendship of Ron Weasley, a member of a large but poor wizarding family, and Hermione Granger, a witch of non-magical, or Muggle, parentage. Harry encounters the school's potions master, Severus Snape, who displays a dislike for him; the rich pure-blood Draco Malfoy whom he develops an enmity with; and the Defence Against the Dark Arts teacher, Quirinus Quirrell, who turns out to be allied with Lord Voldemort. The first book concludes with Harry's confrontation with Voldemort, who, in his quest to regain a body, yearns to gain the power of the Philosopher's Stone, a substance that bestows everlasting life.
|
|
null | false
| null |
Smoking is one of the leading preventable causes of death globally and is the cause of over 8 million deaths annually, 1.2 million of which are non-smokers who die due to second-hand smoke. In the United States, about 500,000 deaths per year are attributed to smoking-related diseases, and a recent study estimated that as much as 1/3 of China's male population will have significantly shortened life-spans due to smoking. Male and female smokers lose an average of 13.2 and 14.5 years of life, respectively. At least half of all lifelong smokers die earlier as a result of smoking.
|
With reference to given text, Please provide a list of dangers associated with Smoking
|
1. Smoking causes more than 8 million deaths per year, of which 1.2 million are non-smokers who die simply from inhaling second-hand smoke.
2. In the United States, nearly 500,000 people die annually because of smoking-related diseases.
3. A recent study indicates that close to one-third of China's male population will have their life spans shortened significantly because of smoking.
4. Male smokers lose an average of 13.2 years of life compared to non-smoking males; similarly, female smokers lose 14.5 years compared to non-smoking females.
5. At least 50% of smokers die earlier because of smoking.
|
null | false
| 24
|
Because it is not feasible to collect training data for every language, there is a growing interest in cross-lingual transfer learning. In this paper, we systematically explore zero-shot cross-lingual transfer learning on reading comprehension tasks with a language representation model pre-trained on multi-lingual corpus. The experimental results show that with pre-trained language representation zero-shot learning is feasible, and translating the source data into the target language is not necessary and even degrades the performance. We further explore what does the model learn in zero-shot setting.
In this paper, we systematically explore zeroshot cross-lingual transfer learning on reading comprehension tasks with a language representation model pre-trained on multi-lingual corpus.
|
What does the paper explore?
|
The cross-lingual transferring ability of multi-BERT on reading comprehension (RC) tasks.
|
null | false
| null |
Classify each of the following among a country, a continent, both, none: america, north america, australia, london, india, china
|
america: none, north america: continent, australia: both, london: none, india: country, china: country
|
|
null | false
| null |
Why is the sky blue?
|
Blue light travels in shorter waves, thus it is scattered more than any other color.
|
|
null | false
| null |
Name some NHL hockey players that were born in Arizona.
|
Matthew Tkachuk; Sean Couturier; Brady Tkachuk; Tage Thompson; Jesse Ylönen
|
|
null | false
| null |
What are some ways to improve the value of your home?
|
There are several things you can do to increase the value of your home. First, you can focus on what is known as "curb appeal", essentially making the outside and surroundings of your house attractive. Ensure that the landscaping is well maintained, fix any broken doors or windows, and paint the outside so that the house does not look dull. In the interior of your house, the best places to focus will be the bathrooms and kitchen. Further, you can replace major appliances and components such as water heaters, furnaces, or the roof to increase the value. Another way to increase the value of your home is by making it bigger. Often, pricing for homes is done by looking at the area of the house, so by adding new rooms or extra space, you can also increase the overall value of the house.
|
|
null | false
| null |
Why mobile is bad for human
|
We are always engaged with our phones, which is not good.
|
|
1802.05574
| false
| null |
We also looked at where there was complete agreement by all annotators that a triple extraction was incorrect. In total there were 138 of these triples originating from 76 unique sentences. There were several patterns that appeared in these sentences.
We also looked at where there was complete agreement by all annotators that a triple extraction was incorrect. In total there were 138 of these triples originating from 76 unique sentences.
|
What is the most common error type?
|
The answers are shown as follows:
* all annotators that a triple extraction was incorrect
|
1910.05608
| false
| null |
The fundamental idea of this system is how to make a system that has the diversity of viewing an input. That because of the variety of the meaning in Vietnamese language especially with the acronym, teen code type. To make this diversity, after cleaning raw text input, we use multiple types of word tokenizers. Each one of these tokenizers, we combine with some types of representation methods, including word to vector methods such as continuous bag of words BIBREF5, pre-trained embedding as fasttext (trained on Wiki Vietnamese language) BIBREF6 and sonvx (trained on Vietnamese newspaper) BIBREF7. Each sentence has a set of words corresponding to a set of word vectors, and that set of word vectors is a representation of a sentence. We also make a sentence embedding by using RoBERTa architecture BIBREF8. CBOW and RoBERTa models trained on text from some resources including VLSP 2016 Sentiment Analysis, VLSP 2018 Sentiment Analysis, VLSP 2019 HSD and text crawled from Facebook. After having sentence representation, we use some classification models to classify input sentences. Those models will be described in detail in the section SECREF13. With the multiply output results, we will use an ensemble method to combine them and output the final result. Ensemble method we use here is Stacking method will be introduced in the section SECREF16.
The dataset in this HSD task is really imbalance. Clean class dominates with 91.5%, offensive class takes 5% and the rest belongs to hate class with 3.5%. To make model being able to learn with this imbalance data, we inject class weight to the loss function with the corresponding ratio (clean, offensive, hate) is $(0.09, 0.95, 0.96)$. Formular DISPLAY_FORM17 is the loss function apply for all models in our system. $w_i$ is the class weight, $y_i$ is the ground truth and $\hat{y}_i$ is the output of the model. If the class weight is not set, we find that model cannot adjust parameters. The model tends to output all clean classes.
Each one of these tokenizers, we combine with some types of representation methods, including word to vector methods such as continuous bag of words BIBREF5, pre-trained embedding as fasttext (trained on Wiki Vietnamese language) BIBREF6 and sonvx (trained on Vietnamese newspaper) BIBREF7
The dataset in this HSD task is really imbalance. Clean class dominates with 91.5%, offensive class takes 5% and the rest belongs to hate class with 3.5%.
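As a hedged sketch of the class-weighting idea above (the weights are the ones quoted in the passage, but the exact loss in Formula DISPLAY_FORM17 is not reproduced here, so this weighted cross-entropy is an assumption):

```python
import math

# Class weights from the passage, ordered (clean, offensive, hate).
CLASS_WEIGHTS = [0.09, 0.95, 0.96]

def weighted_cross_entropy(y_true, y_pred, eps=1e-12):
    # y_true: one-hot ground truth; y_pred: predicted class probabilities.
    return -sum(
        w * y * math.log(p + eps)
        for w, y, p in zip(CLASS_WEIGHTS, y_true, y_pred)
    )

# A confident prediction on the dominant "clean" class...
loss_clean = weighted_cross_entropy([1, 0, 0], [0.9, 0.05, 0.05])
# ...versus the same confidence on the rare "hate" class.
loss_hate = weighted_cross_entropy([0, 0, 1], [0.05, 0.05, 0.9])
```

With the clean class down-weighted to 0.09, the same prediction confidence on the rare hate class contributes roughly ten times the loss of one on the dominant clean class, which is what keeps the model from simply predicting all clean.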
|
What dataset do they use?
|
They used Wiki Vietnamese language and Vietnamese newspapers to pretrain embeddings and dataset provided in HSD task to train model (details not mentioned in paper).
|
null | false
| null |
Explain the expression "the days are long, and the years are short" as it relates to raising children.
|
Every day you're exhausted and pushed to the limits. This can be from lack of sleep, because your child is waking up every few hours. Or it can be because whatever it is you're asking of your child, they don't want to do it. Or it can be because the strain is affecting your own ability to be kind and loving to both yourself, your spouse, and your children. Whatever the reasons, the feeling at the end of every night is one of utter exhaustion, where presumably this feeling is now your life, and will be so forever. But, in that exhaustion, it can be hard to either notice or truly appreciate beautiful little moments simultaneously occurring. For example, you might not pause to appreciate when they laugh at something you say or funny face you make. Or maybe you miss the beauty in witnessing their frustration and struggle as they persistently try to fit that round peg in the square hole. Or maybe the way they mispronounce certain words sounds so endearing, though you're more focused on helping them pronounce it correctly. These are all brief moments where you are witnessing the growth of your child. These moments are fleeting. They don't exist forever, because eventually they grow out of them. They don't find the same jokes or silly faces funny, they figure out the round peg fits in the round hole, and their word pronunciation improves. And when you notice those are past, you feel nostalgic. You wish you could relive those moments. In that nostalgia, you forget about all the exhaustion that came with the territory. Maybe the expression is best described as analogous to another expression "the grass is greener on the other side"
|