| paper_id | yes_no | paper_index | evidence | question | answer |
|---|---|---|---|---|---|
null | false
| null |
What are good finishes for a desk top made of hard wood?
|
You'll want to avoid tacky finishes because you'll be touching your desk all the time, and the finish should be fairly durable. Many people use tung oil, Danish oil, or polyurethane for desk tops. Avoid shellac or other alcohol-based finishes, which are easily damaged by spilled drinks.
|
|
null | false
| null |
Which of the following are flowers: roses, peonies, worms, carnations, tulips, dirt, lavender, and sunflowers?
|
Roses, peonies, carnations, tulips, lavender, and sunflowers are flowers. Worms and dirt are not flowers.
|
|
null | false
| null |
Tell me if these are professional soccer players or professional surfers: Lionel Messi, Kelly Slater, Neymar Jr., Kanoa Igarashi, Zinedine Zidane, Mark Occhilupo, Gabriel Medina, Zlatan Ibrahimovic, Ngolo Kante, Miguel Pupo
|
Professional Soccer Players: Lionel Messi, Neymar Jr., Zinedine Zidane, Zlatan Ibrahimovic, Ngolo Kante
Professional Surfers: Kelly Slater, Kanoa Igarashi, Mark Occhilupo, Gabriel Medina, Miguel Pupo
|
|
null | false
| null |
Why did Kim Kardashian and Ye divorce?
|
Kim and Ye were living two separate lives. Kim was in Hidden Hills, CA taking care of their four kids while Ye was developing the acres of land he bought in Wyoming. Ye began creating a whole self-sustaining community of devoted Christian followers while Kim was filming her show and studying to become a lawyer. They began to grow apart, and there were also claims that both of them had cheated. After two years of divorce proceedings, they recently finalized the divorce, with Ye required to pay Kim $200,000 per month to support their kids. Kim bought their house from Ye, and other financial matters were settled.
|
|
null | false
| null |
Kennebunkport /ˌkɛniˈbʌŋkˌpɔːrt/ is a resort town in York County, Maine, United States. The population was 3,629 people at the 2020 census. It is part of the Portland–South Portland–Biddeford metropolitan statistical area.
The town center, the area in and around Dock Square, is located along the Kennebunk River, approximately 1 mile (1.6 km) from the mouth of the river on the Atlantic Ocean. Historically a shipbuilding and fishing village, for well over a century the town has been a popular summer colony and seaside tourist destination. The Dock Square area has a district of souvenir shops, art galleries, schooner attractions, seafood restaurants, and bed and breakfasts. Cape Porpoise, while retaining its identity as a fishing harbor, has a very small village area with several restaurants, a church, grocery store, coffee shop, small library, and art gallery. Kennebunkport has a reputation as a summer haven for the upper class and is one of the wealthiest communities in the state of Maine. The Municipality of Kennebunkport includes the constituent villages of Kennebunkport Village, Cape Arundel & Colony Beach, the Cottage Coast, Wilde's District (Wildwood), Goose Rocks Beach, Turbatts Creek, Cape Porpoise Village, North Village Crossing (Townhouse Corner), among various other newer developments. The town is the home of Walker's Point, a summer estate of the Bush family.
Kennebunkport and neighboring towns Kennebunk and Arundel comprise school district RSU 21.
The Kennebunkport Christmas Prelude takes place annually in the town, beginning either in late November or early December.
|
Given a reference text about Kennebunkport, tell me its neighboring towns and where it is located.
|
Kennebunkport is located in York County, Maine and is neighbored by the towns of Kennebunk and Arundel.
|
null | false
| null |
Which film owns the record for the most Oscar wins?
|
Three films share the record for the most Oscar wins: Titanic, Ben-Hur, and The Lord of the Rings: The Return of the King, each of which won 11 awards.
|
|
null | false
| 110
|
The Transformer architecture BIBREF0 for deep neural networks has quickly risen to prominence in NLP through its efficiency and performance, leading to improvements in the state of the art of Neural Machine Translation BIBREF1, BIBREF2, as well as inspiring other powerful general-purpose models like BERT BIBREF3 and GPT-2 BIBREF4. At the heart of the Transformer lie multi-head attention mechanisms: each word is represented by multiple different weighted averages of its relevant context. As suggested by recent works on interpreting attention head roles, separate attention heads may learn to look for various relationships between tokens BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9.
The attention distribution of each head is typically predicted using the softmax normalizing transform. As a result, all context words have non-zero attention weight. Recent work on single attention architectures suggests that using sparse normalizing transforms such as sparsemax – which can yield exactly zero probabilities for irrelevant words – in attention mechanisms may improve performance and interpretability BIBREF12, BIBREF13, BIBREF14. Qualitative analysis of attention heads BIBREF0 suggests that, depending on what phenomena they capture, heads tend to favor flatter or more peaked distributions.
Recent works have proposed sparse Transformers BIBREF10 and adaptive span Transformers BIBREF11. However, the “sparsity" of those models only limits the attention to a contiguous span of past tokens, while in this work we propose a highly adaptive Transformer model that is capable of attending to a sparse set of words that are not necessarily contiguous. Figure FIGREF1 shows the relationship of these methods with ours.
Our contributions are the following:
We introduce sparse attention into the Transformer architecture, showing that it eases interpretability and leads to slight accuracy gains.
We propose an adaptive version of sparse attention, where the shape of each attention head is learnable and can vary continuously and dynamically between the dense limit case of softmax and the sparse, piecewise-linear sparsemax case.
We make an extensive analysis of the added interpretability of these models, identifying both crisper examples of attention head behavior observed in previous work, as well as novel behaviors unraveled thanks to the sparsity and adaptivity of our proposed model.
Recent works have proposed sparse Transformers (Child et al., 2019) and adaptive span Transformers (Sukhbaatar et al., 2019). However, the “sparsity” of those models only limits the attention to a contiguous span of past tokens, while in this work we propose a highly adaptive Transformer model that is capable of attending to a sparse set of words that are not necessarily contiguous.
|
What limitations do the previous models have?
|
The “sparsity” of those models only limits the attention to a contiguous span of past tokens.
|
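The contrast between softmax and sparsemax described above can be sketched in plain Python. This is a minimal illustration of the sparsemax projection onto the probability simplex; the function name and list-of-floats interface are our own, not the paper's code:

```python
def sparsemax(z):
    """Euclidean projection of scores z onto the probability simplex.

    Unlike softmax, the result can contain exact zeros, so irrelevant
    words can receive zero attention weight.
    """
    zs = sorted(z, reverse=True)
    cumsum, k_max, tau = 0.0, 0, 0.0
    for k, zk in enumerate(zs, start=1):
        cumsum += zk
        if 1 + k * zk > cumsum:      # zk is still inside the support
            k_max, tau = k, (cumsum - 1) / k
    return [max(zi - tau, 0.0) for zi in z]

print(sparsemax([2.0, 1.0, 0.1]))  # -> [1.0, 0.0, 0.0]: exact zeros
```

With a flat input such as `[0.0, 0.0]` the output matches softmax (`[0.5, 0.5]`), which is the dense limit that the adaptive variant described above can interpolate toward.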
null | false
| null |
When do you think humans can set foot on Mars?
|
NASA has set a goal of landing humans on Mars by the mid-2030s, but the timeline also depends on many factors, such as technology readiness, funding, and political will. The trip to Mars takes about 6 months, so we need a much more powerful rocket and enough supplies to support the travelers on the way. Even a short stay on Mars would require substantial life support, since the planet is extremely hostile to life at this moment. The biggest challenge might be returning the crew safely to Earth. Overall, landing humans on Mars is an audacious effort and would be a major milestone in humanity's exploration of space and, eventually, in going beyond the Solar System.
|
|
null | false
| null |
Does Minnesota have a professional hockey team?
|
Yes, the Minnesota Wild are the primary hockey team in Minnesota. They are one of 32 teams in the National Hockey League (NHL). The Minnesota Wild play at the Xcel Energy Center in St. Paul, the capital of Minnesota.
|
|
1709.05404
| false
| null |
Supervised Learning. We restrict our supervised experiments to a default linear SVM learner with Stochastic Gradient Descent (SGD) training and L2 regularization, available in the SciKit-Learn toolkit BIBREF25 . We use 10-fold cross-validation, and only two types of features: n-grams and Word2Vec word embeddings. We expect Word2Vec to be able to capture semantic generalizations that n-grams do not BIBREF26 , BIBREF27 . The n-gram features include unigrams, bigrams, and trigrams, including sequences of punctuation (for example, ellipses or "!!!"), and emoticons. We use GoogleNews Word2Vec features BIBREF28 .
We use 10-fold cross-validation, and only two types of features: n-grams and Word2Vec word embeddings.
The n-gram features include unigrams, bigrams, and trigrams, including sequences of punctuation (for example, ellipses or "!!!"), and emoticons. We use GoogleNews Word2Vec features BIBREF28 .
|
What simple features are used?
|
The answers are shown as follows:
* unigrams, bigrams, and trigrams, including sequences of punctuation
* Word2Vec word embeddings
|
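The n-gram features listed in the answer (unigrams, bigrams, and trigrams, with punctuation runs kept as tokens) can be sketched as a simple counting function. This is an illustrative helper, not the paper's actual feature extractor:

```python
def ngram_features(tokens, n_max=3):
    """Count all 1- to n_max-grams in a token sequence.

    Punctuation runs like "!!!" or "..." are ordinary tokens here,
    so they become features too, as described in the evidence.
    """
    feats = {}
    for n in range(1, n_max + 1):
        for i in range(len(tokens) - n + 1):
            gram = " ".join(tokens[i:i + n])
            feats[gram] = feats.get(gram, 0) + 1
    return feats

print(ngram_features(["i", "love", "it", "!!!"]))
```

Such count dictionaries can then be vectorized (e.g., via a vocabulary index) and fed to a linear SVM trained with SGD and L2 regularization, as in the evidence.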
null | false
| null |
When did India gain independence?
|
India gained independence from British rule on August 15th, 1947.
|
|
null | false
| 187
|
Datasets: We use two recent benchmark datasets WN18RR BIBREF17 and FB15k-237 BIBREF18 . These two datasets are created to avoid reversible relation problems, thus the prediction task becomes more realistic and hence more challenging BIBREF18 . Table TABREF7 presents the statistics of WN18RR and FB15k-237.
Evaluation protocol: Following BIBREF3 , for each valid test triple INLINEFORM0 , we replace either INLINEFORM1 or INLINEFORM2 by each of all other entities to create a set of corrupted triples. We use the “Filtered” setting protocol BIBREF3 , i.e., not taking any corrupted triples that appear in the KG into account. We rank the valid test triple and corrupted triples in descending order of their scores. We employ three evaluation metrics: mean rank (MR), mean reciprocal rank (MRR) and Hits@10 (i.e., the proportion of the valid test triples ranking in top 10 predictions). Lower MR, higher MRR or higher Hits@10 indicate better performance. Final scores on the test set are reported for the model obtaining the highest Hits@10 on the validation set.
Training protocol: We use the common Bernoulli strategy BIBREF20 , BIBREF21 when sampling invalid triples. For WN18RR, BIBREF22 found strong evidence to support the necessity of a WordNet-related semantic setup, in which they averaged pre-trained word embeddings for word surface forms within the WordNet to create synset embeddings, and then used these synset embeddings to initialize entity embeddings for training their TransE association model. We follow this evidence in using the pre-trained 100-dimensional Glove word embeddings BIBREF23 to train a TransE model on WN18RR.
We employ the TransE and ConvKB implementations provided by BIBREF24 and BIBREF15 . For ConvKB, we use a new process of training up to 100 epochs and monitor the Hits@10 score after every 10 training epochs to choose optimal hyper-parameters with the Adam initial learning rate in INLINEFORM0 and the number of filters INLINEFORM1 in INLINEFORM2 . We obtain the highest Hits@10 scores on the validation set when using N= 400 and the initial learning rate INLINEFORM3 on WN18RR; and N= 100 and the initial learning rate INLINEFORM4 on FB15k-237.
Like in ConvKB, we use the same pre-trained entity and relation embeddings produced by TransE to initialize entity and relation embeddings in our CapsE for both WN18RR and FB15k-237 ( INLINEFORM0 ). We set the batch size to 128, the number of neurons within the capsule in the second capsule layer to 10 ( INLINEFORM1 ), and the number of iterations in the routing algorithm INLINEFORM2 in INLINEFORM3 . We run CapsE up to 50 epochs and monitor the Hits@10 score after each 10 training epochs to choose optimal hyper-parameters. The highest Hits@10 scores for our CapsE on the validation set are obtained when using INLINEFORM4 , INLINEFORM5 and the initial learning rate at INLINEFORM6 on WN18RR; and INLINEFORM7 , INLINEFORM8 and the initial learning rate at INLINEFORM9 on FB15k-237.
Dataset: We use the SEARCH17 dataset BIBREF12 of query logs of 106 users collected by a large-scale web search engine. A log entity consists of a user identifier, a query, top-10 ranked documents returned by the search engine and clicked documents along with the user's dwell time. BIBREF12 constructed short-term (session-based) user profiles and used the profiles to personalize the returned results. They then employed the SAT criteria BIBREF26 to identify whether a returned document is relevant from the query logs as either a clicked document with a dwell time of at least 30 seconds or the last clicked document in a search session (i.e., a SAT click). After that, they assigned a INLINEFORM0 label to a returned document if it is a SAT click and also assigned INLINEFORM1 labels to the remaining top-10 documents. The rank position of the INLINEFORM2 labeled documents is used as the ground truth to evaluate the search performance before and after re-ranking.
The dataset was uniformly split into the training, validation and test sets. This split is for the purpose of using historical data in the training set to predict new data in the test set BIBREF12 . The training, validation and test sets consist of 5,658, 1,184 and 1,210 relevant (i.e., valid) triples; and 40,239, 7,882 and 8,540 irrelevant (i.e., invalid) triples, respectively.
Evaluation protocol: Our CapsE is used to re-rank the original list of documents returned by a search engine as follows: (i) We train our model and employ the trained model to calculate the score for each INLINEFORM0 triple. (ii) We then sort the scores in the descending order to obtain a new ranked list. To evaluate the performance of our proposed model, we use two standard evaluation metrics: mean reciprocal rank (MRR) and Hits@1. For each metric, the higher value indicates better ranking performance.
We compare CapsE with the following baselines using the same experimental setup: (1) SE: The original rank is returned by the search engine. (2) CI BIBREF27 : This baseline uses a personalized navigation method based on previously clicked documents. (3) SP BIBREF9 , BIBREF11 : A search personalization method that makes use of the session-based user profiles. (4) Following BIBREF12 , we use TransE as a strong baseline model for the search personalization task. Previous work shows that the well-known embedding model TransE, despite its simplicity, obtains very competitive results for knowledge graph completion BIBREF28 , BIBREF29 , BIBREF14 , BIBREF30 , BIBREF15 . (5) The CNN-based model ConvKB is the most closely related model to our CapsE.
Embedding initialization: We follow BIBREF12 to initialize user profile, query and document embeddings for the baselines TransE and ConvKB, and our CapsE.
We train a LDA topic model BIBREF31 with 200 topics only on the relevant documents (i.e., SAT clicks) extracted from the query logs. We then use the trained LDA model to infer the probability distribution over topics for every returned document. We use the topic proportion vector of each document as its document embedding (i.e. INLINEFORM0 ). In particular, the INLINEFORM1 element ( INLINEFORM2 ) of the vector embedding for document INLINEFORM3 is: INLINEFORM4 where INLINEFORM5 is the probability of the topic INLINEFORM6 given the document INLINEFORM7 .
We also represent each query by a probability distribution vector over topics. Let INLINEFORM0 be the set of top INLINEFORM1 ranked documents returned for a query INLINEFORM2 (here, INLINEFORM3 ). The INLINEFORM4 element of the vector embedding for query INLINEFORM5 is defined as in BIBREF12 : INLINEFORM6 , where INLINEFORM7 is the exponential decay function of INLINEFORM8 which is the rank of INLINEFORM9 in INLINEFORM10 . And INLINEFORM11 is the decay hyper-parameter ( INLINEFORM12 ). Following BIBREF12 , we use INLINEFORM13 . Note that if we learn query and document embeddings during training, the models will overfit to the data and will not work for new queries and documents. Thus, after the initialization process, we fix (i.e., not updating) query and document embeddings during training for TransE, ConvKB and CapsE.
In addition, as mentioned by BIBREF9 , the more recently clicked document expresses more about the user current search interest. Hence, we make use of the user clicked documents in the training set with the temporal weighting scheme proposed by BIBREF11 to initialize user profile embeddings for the three embedding models.
Hyper-parameter tuning: For our CapsE model, we set batch size to 128, and also the number of neurons within the capsule in the second capsule layer to 10 ( INLINEFORM0 ). The number of iterations in the routing algorithm is set to 1 ( INLINEFORM1 ). For the training model, we use the Adam optimizer with the initial learning rate INLINEFORM2 INLINEFORM3 INLINEFORM4 INLINEFORM5 INLINEFORM6 INLINEFORM7 . We also use ReLU as the activation function INLINEFORM8 . We select the number of filters INLINEFORM9 . We run the model up to 200 epochs and perform a grid search to choose optimal hyper-parameters on the validation set. We monitor the MRR score after each training epoch and obtain the highest MRR score on the validation set when using INLINEFORM10 and the initial learning rate at INLINEFORM11 .
We employ the TransE and ConvKB implementations provided by BIBREF24 and BIBREF15 and then follow their training protocols to tune hyper-parameters for TransE and ConvKB, respectively. We also monitor the MRR score after each training epoch and attain the highest MRR score on the validation set when using margin = 5, INLINEFORM0 -norm and SGD learning rate at INLINEFORM1 for TransE; and INLINEFORM2 and the Adam initial learning rate at INLINEFORM3 for ConvKB.
A log entity consists of a user identifier, a query, top-10 ranked documents returned by the search engine and clicked documents along with the user’s dwell time.
|
What does the log entity consist of?
|
A log entity consists of a user identifier, a query, top-10 ranked documents returned by the search engine and clicked documents along with the user’s dwell time.
|
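The evaluation metrics in this protocol (MR, MRR, and Hits@10 over the ranks of valid test triples among their corruptions) can be computed with a short helper. This is a hedged sketch with made-up ranks, not the paper's results:

```python
def ranking_metrics(ranks, k=10):
    """MR, MRR and Hits@k from the 1-based ranks of valid test triples.

    Lower MR, higher MRR and higher Hits@k indicate better performance.
    """
    n = len(ranks)
    mr = sum(ranks) / n
    mrr = sum(1.0 / r for r in ranks) / n
    hits_at_k = sum(1 for r in ranks if r <= k) / n
    return mr, mrr, hits_at_k

# Toy example: three valid triples ranked 1st, 2nd and 20th.
mr, mrr, hits = ranking_metrics([1, 2, 20])
print(round(mr, 2), round(mrr, 2), round(hits, 2))  # 7.67 0.52 0.67
```

In the “Filtered” setting, corrupted triples that already appear in the KG are removed before ranking, so `ranks` here are assumed to be the filtered ranks.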
1607.03542
| false
| null |
Much recent work on semantic parsing has been evaluated using the WebQuestions dataset BIBREF3 . This dataset is not suitable for evaluating our model because it was filtered to only questions that are mappable to Freebase queries. In contrast, our focus is on language that is not directly mappable to Freebase. We thus use the dataset introduced by Krishnamurthy and Mitchell krishnamurthy-2015-semparse-open-vocabulary, which consists of the ClueWeb09 web corpus along with Google's FACC entity linking of that corpus to Freebase BIBREF9 . For training data, 3 million webpages from this corpus were processed with a CCG parser to produce logical forms BIBREF10 . This produced 2.1m predicate instances involving 142k entity pairs and 184k entities. After removing infrequently-seen predicates (seen fewer than 6 times), there were 25k categories and 4.2k relations.
We also used the test set created by Krishnamurthy and Mitchell, which contains 220 queries generated in the same fashion as the training data from a separate section of ClueWeb. However, as they did not release a development set with their data, we used this set as a development set. For a final evaluation, we generated another, similar test set from a different held out section of ClueWeb, in the same fashion as done by Krishnamurthy and Mitchell. This final test set contains 307 queries.
For training data, 3 million webpages from this corpus were processed with a CCG parser to produce logical forms BIBREF10 .
We also used the test set created by Krishnamurthy and Mitchell, which contains 220 queries generated in the same fashion as the training data from a separate section of ClueWeb. However, as they did not release a development set with their data, we used this set as a development set.
This final test set contains 307 queries.
|
How big is their dataset?
|
3 million webpages processed with a CCG parser for training, 220 queries for development, and 307 queries for testing
|
null | false
| null |
What makes someone beautiful?
|
It's often said that beauty is in the eye of the beholder. What makes someone beautiful is subjective to whatever quality the individual or other party deems beautiful, attractive, or desirable. Most individuals are not good judges of their own appearance or qualities. According to the Austrian writer Kyle Krauss, the best way to be beautiful is to stop worrying about being ugly. The most attractive people are often confident and are usually not concerned with how others perceive their appearance.
|
|
null | false
| null |
Which characters belong to DC or Marvel Universe? Ghost Rider, Atomic Skull
|
Ghost Rider is Marvel, Atomic Skull is DC
|
|
null | false
| null |
Mickey Mouse is an animated cartoon character co-created in 1928 by Walt Disney and Ub Iwerks. The longtime mascot of The Walt Disney Company, Mickey is an anthropomorphic mouse who typically wears red shorts, large yellow shoes, and white gloves. Taking inspiration from silent film personalities such as Charlie Chaplin's Tramp, Mickey is traditionally characterized as a sympathetic underdog who gets by on pluck and ingenuity. The character’s status as a small mouse is personified through his diminutive stature and falsetto voice, the latter of which was originally provided by Disney. Mickey is one of the world's most recognizable and universally acclaimed fictional characters of all time.
|
Who is Micky Mouse?
|
Mickey Mouse is an animated cartoon character co-created in 1928 by Walt Disney and Ub Iwerks. The longtime mascot of The Walt Disney Company, Mickey is an anthropomorphic mouse who typically wears red shorts, large yellow shoes, and white gloves. Taking inspiration from silent film personalities such as Charlie Chaplin's Tramp, Mickey is traditionally characterized as a sympathetic underdog who gets by on pluck and ingenuity. Mickey is one of the world's most recognizable and universally acclaimed fictional characters of all time. Mickey generally appears alongside his girlfriend Minnie Mouse, his pet dog Pluto, and his friends Donald Duck and Goofy.
|
null | false
| null |
A 30-year-old member at your fitness center has been weight training for 4 months and claims to have made significant gains in strength. Twice a week, he performs 1 set of 12-15 repetitions using 8 weight machines. Over the last 6 weeks, he's noticed that he isn’t achieving the strength gains that he’s used to. How would you modify his training routine to optimize gains in muscular strength?
|
First of all, he's been performing the same routine for the last 4 months, which means his body may have adapted to the specific sets, repetition ranges, and types of exercises he performs, leading to smaller strength gains. I would recommend that he switch up his routine! He can consider doing more sets per exercise (e.g., 3 sets) with lower repetitions (e.g., 8 per set). He can also consider switching from weight machines to free weights, which can help target more muscles and larger muscle groups, and increasing the number of days he trains per week from two to three. Finally, he would do well to remember to properly warm up, cool down, stretch, and order his exercises from larger muscle groups to smaller ones.
|
|
null | false
| 191
|
Traditional approaches BIBREF0 , BIBREF1 , BIBREF2 for sentence relation modeling tasks such as paraphrase identification, question answering, recognizing textual entailment and semantic textual similarity prediction usually build supervised models using a variety of hand-crafted features. Hundreds of features generated at different linguistic levels are exploited to boost classification. With the success of deep learning, there has been much interest in applying deep neural network based techniques to further improve prediction performance BIBREF3 , BIBREF4 , BIBREF5 .
A key component of deep neural networks is the word embedding, which serves as a lookup table for word representations. Deep word representation learning has demonstrated its importance for tasks ranging from low-level NLP tasks such as language modeling, POS tagging, named entity recognition, and semantic role labeling BIBREF6 , BIBREF7 , to high-level tasks such as machine translation, information retrieval and semantic analysis BIBREF8 , BIBREF9 , BIBREF10 . All these tasks gain performance improvements by further learning either word-level or sentence-level representations. On the other hand, some researchers have found character-level convolutional networks BIBREF11 , BIBREF12 useful for extracting information from raw signals for tasks such as language modeling or text classification.
In this work, we focus on deep neural network based sentence relation modeling tasks. We explore treating each sentence as a raw signal at the character level, and apply a temporal (one-dimensional) Convolutional Neural Network (CNN) BIBREF6 , a Highway Multilayer Perceptron (HMLP) and a multi-layer bidirectional LSTM (Long Short-Term Memory) BIBREF13 to learn sentence representations. We propose a new deep neural network architecture that jointly leverages pre-trained word embeddings and character embeddings to represent the meaning of sentences. More specifically, our new approach first generates two kinds of word sequence representations. One kind is the composition of pre-trained word vectors; the other comprises word vectors generated by a character-level convolutional network. We then inject the two sequence representations into a bidirectional LSTM: the forward LSTM accepts the pre-trained word embedding output and the backward LSTM accepts the auxiliary character-CNN embedding output. The final sentence representation is the concatenation of the two directions. After that, we construct matching features, followed by another temporal CNN to learn high-level hidden matching feature representations. Figure FIGREF1 shows the neural network architecture for general sentence relation modeling.
Our model shows that, when trained on small datasets, combining pre-trained word embeddings with an auxiliary character-level embedding can improve the sentence representation. Word embeddings help capture general word semantics, whereas character-level embeddings help model task-specific word meanings. Note that the auxiliary character-level sentence representation does not require knowledge of the words or even the syntactic structure of a language. The enhanced sentence representation generated by the multi-layer bidirectional LSTM encapsulates both character- and word-level information. Furthermore, it may enhance the matching features generated by computing similarity measures on sentence pairs. Quantitative evaluations on standard datasets demonstrate the effectiveness and advantages of our method.
We propose a new deep neural network architecture that jointly leverages pre-trained word embeddings and character embeddings to represent the meaning of sentences. More specifically, our new approach first generates two kinds of word sequence representations.
|
What kind of framework does the paper propose?
|
A new deep neural network architecture that jointly leverages pre-trained word embeddings and character embeddings to represent the meaning of sentences.
|
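The temporal (one-dimensional) convolution used throughout this architecture can be illustrated in the scalar case. This is a toy sketch only; the real model convolves many filters over character or word embedding vectors:

```python
def temporal_conv1d(seq, kernel, stride=1):
    """Valid-mode 1-D convolution (cross-correlation) over a sequence."""
    k = len(kernel)
    return [
        sum(seq[i + j] * kernel[j] for j in range(k))
        for i in range(0, len(seq) - k + 1, stride)
    ]

# A difference kernel slid over a toy "character signal".
print(temporal_conv1d([1, 2, 3, 4], [1, 0, -1]))  # -> [-2, -2]
```

In the paper's setup, each output position would instead be a vector of filter responses over a window of character embeddings, followed by pooling to form the word vector fed to the backward LSTM.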
null | false
| null |
Name a style of hot yoga.
|
Bikram.
|
|
null | false
| 6
|
Recently, with the emergence of neural seq2seq models, abstractive summarization methods have seen great performance strides BIBREF0, BIBREF1, BIBREF2. However, complex neural summarization models with thousands of parameters usually require a large amount of training data. In fact, much of the neural summarization work has been trained and tested in news domains where numerous large datasets exist. For example, the CNN/DailyMail (CNN/DM) BIBREF3, BIBREF4 and New York Times (NYT) datasets are on the order of 300k and 700k documents, respectively. In contrast, in other domains such as student reflections, summarization datasets are only on the order of tens or hundreds of documents (e.g., BIBREF5). We hypothesize that training complex neural abstractive summarization models in such domains will not yield well-performing models, and we will indeed later show that this is the case for student reflections.
To improve performance in low resource domains, we explore three directions. First, we explore domain transfer for abstractive summarization. While domain transfer is not new, compared to prior summarization studies BIBREF6, BIBREF7, our training (news) and tuning (student reflection) domains are quite dissimilar, and the in-domain data is small. Second, we propose a template-based synthesis method to create synthesized summaries, then explore the effect of enriching training data for abstractive summarization using the proposed model compared to a synthesis baseline. Lastly, we combine both directions. Evaluations of neural abstractive summarization methods across four student reflection corpora show the utility of all three methods.
Second, we propose a template-based synthesis method to create synthesized summaries, then explore the effect of enriching training data for abstractive summarization using the proposed model compared to a synthesis baseline.
|
How to create synthesized summaries in this paper?
|
The authors propose a template-based synthesis method to create synthesized summaries.
|
null | false
| null |
He was a member of the Australian Broadcasting Corporation Board of Directors for five years from 1999. He was chair of the National Childcare Accreditation Council for four years from 1996 to 1999. He was president of the Association for the Blind of WA from 1995 to 1999. He was president of the League of Help for the Sick and Elderly of WA from 1995 to 1997. In December 2001 he resigned as chairman of the Fremantle Football Club in the wake of revelations that the club had exceeded the salary cap for the 2001 season. He was a member of the Board of the Western Australian Cricket Association from 2004 to 2012.
|
Here is some information about Ross McLean. How many years was Ross president of the League of Help for the Sick and Elderly?
|
2 years, from 1995 to 1997.
|
null | false
| null |
What is a SQL query?
|
SQL stands for Structured Query Language. A SQL query is a statement written in SQL syntax that returns data answering a specific question about your business data. SQL queries can be simple retrievals of data, and they can also include simple or complex aggregations over time. SQL queries run against a database such as Oracle Database, Microsoft SQL Server, Azure SQL Database, or Google BigQuery.
|
|
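As a concrete illustration of the kind of aggregation query described above, here is a runnable example using Python's built-in sqlite3 module (the `sales` table and its rows are made up for the example):

```python
import sqlite3

# In-memory database with a small, made-up sales table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("west", 100.0), ("west", 50.0), ("east", 75.0)],
)

# A simple aggregation: total sales per region.
rows = conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # [('east', 75.0), ('west', 150.0)]
```

The same SELECT statement would run, with minor dialect differences, against Oracle Database, SQL Server, or BigQuery.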
null | false
| null |
What characteristics are IPAs known for?
|
IPAs are known to be more bitter than mass-produced lagers, and typically contain floral, citrus-like, and resinous notes. This flavor is largely derived from hops, the flowers of the hop plant.
|
|
null | false
| null |
Classify each of the following as having two or four legs: human, horse, dog, cat, monkey, kangaroo, boar
|
Human: Two Legs
Horse: Four Legs
Dog: Four Legs
Cat: Four Legs
Monkey: Two Legs
Kangaroo: Two Legs
Boar: Four Legs
|
|
null | false
| null |
Why do people put ICE as a contact in their cell phone?
|
ICE is an acronym referring to "In case of emergency". It's recommended to put an emergency contact with the acronym ICE in your phone as a preventative measure.
|
|
null | false
| null |
Tell me whether you wear each of the following on your hand or face: ring, glove, glasses, lipstick, mitten, mask
|
Ring: Hand
Glove: Hand
Glasses: Face
Lipstick: Face
Mitten: Hand
Mask: Face
|
|
null | false
| null |
Schools and universities typically have a summer break to take advantage of the warmer weather and longer days. In almost all countries, children are out of school during this time of year for summer break, although dates vary. Many families will take holidays for a week or two over summer, particularly in Southern Hemisphere Western countries with statutory Christmas and New Year holidays.
In the United States, public schools usually end in late May around Memorial Day weekend, while colleges finish in early May. Public school traditionally resumes near Labor Day, while higher institutions often resume in mid-August.
In England and Wales, school ends in mid-July and resumes again in early September. In Scotland, the summer holiday begins in late June and ends in mid-to late-August. Similarly, in Canada the summer holiday starts on the last or second-last Friday in June and ends in late August or on the first Tuesday of September, with the exception of when that date falls before Labour Day, in which case, ends on the second Tuesday of the month. In Russia the summer holiday begins at the end of May and ends on 31 August.
In the Southern Hemisphere, school summer holiday dates include the major holidays of Christmas and New Year's Day. School summer holidays in Australia, New Zealand and South Africa begin in early December and end in early February, with dates varying between states. In South Africa, the new school year usually starts during the second week of January, thus aligning the academic year with the Calendar year. In India, school ends in late April and resumes in early or mid-June. In Cameroon and Nigeria, schools usually finish for summer vacation in mid-July and resume in the later weeks of September or the first week of October.
|
What is summer break?
|
Schools and universities typically have a summer break to take advantage of the warmer weather and longer days. In almost all countries, children are out of school during this time of year for summer break, although dates vary.
|
null | false
| null |
Tell me which of these foods is spicy: jalapeño, apple, five-alarm chili, marshmallow, tabasco, wasabi
|
Jalapeño: spicy
Apple: not spicy
Five-alarm chili: spicy
Marshmallow: not spicy
Tabasco: spicy
Wasabi: spicy
|
|
null | false
| null |
Alpha Phi International Women's Fraternity (ΑΦ, also known as APhi) is an international sorority with 172 active chapters and over 250,000 initiated members.
Founded at Syracuse University in Syracuse, New York on September 18, 1872, it is the fourth Greek-letter organization founded for women, and the first women's fraternity founded in the northeast.
Alpha Phi is a member of the National Panhellenic Conference, the governing council of 26 women's fraternities. Alpha Phi's international headquarters are located in Evanston, Illinois.
|
Summarize what Alpha Phi is.
|
Alpha Phi is the first women's fraternity founded in the Northeast and the fourth Greek-letter organization founded for women. It has 172 active chapters on college campuses and over 250,000 initiated members. Alpha Phi was founded at Syracuse University in Syracuse, New York on September 18, 1872.
|
null | false
| 24
|
Multi-BERT has showcased its ability to enable cross-lingual zero-shot learning on the natural language understanding tasks including XNLI BIBREF19, NER, POS, Dependency Parsing, and so on. We now seek to know if a pre-trained multi-BERT has ability to solve RC tasks in the zero-shot setting.
Multi-BERT has showcased its ability to enable cross-lingual zero-shot learning on the natural language understanding tasks including XNLI (Conneau et al., 2018), NER, POS, Dependency Parsing, and so on. We now seek to know if a pretrained multi-BERT has ability to solve RC tasks in the zero-shot setting.
|
On what kinds of tasks does Multi-BERT have cross-lingual zero-shot learning?
|
The natural language understanding tasks including XNLI (Conneau et al., 2018), NER, POS, Dependency Parsing, and so on.
|
null | false
| 31
|
Single-document summarization is the task of generating a short summary for a given document. Ideally, the generated summaries should be fluent and coherent, and should faithfully maintain the most important information in the source document. This is a very challenging task, because it arguably requires an in-depth understanding of the source document, and current automatic solutions are still far from human performance BIBREF0 .
Single-document summarization can be either extractive or abstractive. Extractive methods typically pick sentences directly from the original document based on their importance, and form the summary as an aggregate of these sentences. Usually, summaries generated in this way have a better performance on fluency and grammar, but they may contain much redundancy and lack coherence across sentences. In contrast, abstractive methods attempt to mimic what humans do by first extracting content from the source document and then produce new sentences that aggregate and organize the extracted information. Since the sentences are generated from scratch they tend to have a relatively worse performance on fluency and grammar. Furthermore, while abstractive summaries are typically less redundant, they may end up including misleading or even utterly false statements, because the methods to extract and aggregate information from the source document are still rather noisy.
In this work, we focus on extracting informative sentences from a given document (without dealing with redundancy), especially when the document is relatively long (e.g., scientific articles).
Most recent works on neural extractive summarization have been rather successful in generating summaries of short news documents (around 650 words/document) BIBREF1 by applying neural Seq2Seq models BIBREF2 . However when it comes to long documents, these models tend to struggle with longer sequences because at each decoding step, the decoder needs to learn to construct a context vector capturing relevant information from all the tokens in the source sequence BIBREF3 .
Long documents typically cover multiple topics. In general, the longer a document is, the more topics are discussed. As a matter of fact, when humans write long documents they organize them in chapters, sections, etc. Scientific papers are an example of longer documents and they follow a standard discourse structure describing the problem, methodology, experiments/results, and finally conclusions BIBREF4 .
To the best of our knowledge only one previous work in extractive summarization has explicitly leveraged section information to guide the generation of summaries BIBREF5 . However, the only information about sections fed into their sentence classifier is a categorical feature with values like Highlight, Abstract, Introduction, etc., depending on which section the sentence appears in.
In contrast, in order to exploit section information, in this paper we propose to capture a distributed representation of both the global (the whole document) and the local context (e.g., the section/topic) when deciding if a sentence should be included in the summary
Our main contributions are as follows: (i) In order to capture the local context, we are the first to apply LSTM-minus to text summarization. LSTM-minus is a method for learning embeddings of text spans, which has achieved good performance in dependency parsing BIBREF6 , in constituency parsing BIBREF7 , as well as in discourse parsing BIBREF8 . With respect to more traditional methods for capturing local context, which rely on hierarchical structures, LSTM-minus produces simpler models, i.e. with fewer parameters, which are therefore faster to train and less prone to overfitting. (ii) We test our method on the Pubmed and arXiv datasets and results appear to support our goal of effectively summarizing long documents. In particular, while overall we outperform the baseline and previous approaches only by a narrow margin on both datasets, the benefits of our method become much stronger as we apply it to longer documents. Furthermore, in an ablation study to assess the relative contributions of the global and the local model we found that, rather surprisingly, the benefits of our model seem to come exclusively from modeling the local context, even for the longest documents. (iii) In order to evaluate our approach, we have created oracle labels for both Pubmed and arXiv BIBREF9 , by applying a greedy oracle labeling algorithm. The two datasets annotated with extractive labels will be made public.
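A minimal sketch of the LSTM-minus idea described above: a span of tokens i..j is represented by subtracting the hidden state just before the span from the hidden state at its end. Here, running sums of toy token embeddings stand in for real LSTM hidden states (the actual model would use the states of a trained LSTM):

```python
def span_representation(hidden, i, j):
    """LSTM-minus: represent the span of tokens i..j (inclusive) as the
    element-wise difference h_j - h_{i-1}, with h_{-1} taken as zero."""
    h_j = hidden[j]
    h_prev = hidden[i - 1] if i > 0 else [0.0] * len(h_j)
    return [a - b for a, b in zip(h_j, h_prev)]

# Toy "hidden states": running sums of 2-d token embeddings, so the
# subtraction recovers exactly the sum of embeddings inside the span.
tokens = [[1.0, 0.0], [0.0, 2.0], [3.0, 0.0], [0.0, 4.0]]
hidden = []
acc = [0.0, 0.0]
for t in tokens:
    acc = [a + b for a, b in zip(acc, t)]
    hidden.append(acc)

# Local context for the span covering tokens 1..2.
local_context = span_representation(hidden, 1, 2)
print(local_context)  # [3.0, 2.0]
```

The appeal is that one pass over the document yields hidden states from which any span (e.g., a section) can be represented with a single subtraction, with no extra parameters.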
In order to evaluate our approach, we have created oracle labels for both Pubmed and arXiv, by applying a greedy oracle labeling algorithm.
|
How are the oracle labels created?
|
By applying a greedy oracle labeling algorithm.
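A hedged sketch of such a greedy oracle labeler: sentences are added one at a time, each time picking the sentence that most improves overlap with the abstract, stopping when no sentence helps. A crude unigram-overlap F1 stands in for a real ROUGE scorer, and the sentences and abstract below are invented for illustration:

```python
from collections import Counter

def unigram_f1(tokens, reference):
    """Crude ROUGE-1-style F1 (a real oracle would use a ROUGE scorer)."""
    overlap = sum((Counter(tokens) & Counter(reference)).values())
    if not tokens or not reference or overlap == 0:
        return 0.0
    p, r = overlap / len(tokens), overlap / len(reference)
    return 2 * p * r / (p + r)

def greedy_oracle(sentences, abstract):
    """Greedily label sentences: add the sentence that most improves the
    score against the abstract; stop when no remaining sentence helps."""
    ref = abstract.split()
    chosen, best = [], 0.0
    while True:
        candidates = [
            (unigram_f1(sum((sentences[i].split() for i in chosen + [idx]), []), ref), idx)
            for idx in range(len(sentences)) if idx not in chosen
        ]
        if not candidates:
            break
        score, idx = max(candidates)
        if score <= best:
            break
        best, chosen = score, chosen + [idx]
    return sorted(chosen)  # indices of oracle-labeled sentences

doc = ["the model improves summarization",
       "we thank the reviewers",
       "experiments on long documents"]
print(greedy_oracle(doc, "the model improves summarization of long documents"))  # [0, 2]
```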
|
null | false
| null |
When was siri for the iphone created?
|
Siri was first released by Apple in 2011, debuting on the iPhone as a virtual voice assistant powered by artificial intelligence. The original American voice of Siri was recorded by Susan Bennett, a voice-over artist, and over the years other voices have been added as well. Siri can open applications on the iPhone for you, give weather updates, set alarms and reminders, take notes for you, and automate many of the tasks that you can do on your iPhone yourself. Amazon's Alexa was released in 2014, a few years after Siri, amid the rise in popularity of virtual voice assistants.
|
|
null | false
| null |
Is brazilian jiu jitsu good for self defense?
|
Brazilian Jiu-Jitsu, or BJJ, is known as the "gentle art" among martial arts. BJJ can be practiced in two modalities: sport BJJ and self defense. Sport BJJ is competitive and scored on a point system, with a competitor awarded points for advancing to a more advantageous position; alternatively, a competitor may immediately win the match by submission. When BJJ is used in self defense, the goal is usually to subdue an attacker. Since BJJ is mostly grappling from the ground, it can be a highly effective practice for self defense, as many street fights end up on the ground. While many of the techniques are intended to immobilize an opponent, they can be less effective if the opponent has an unfair advantage such as a weapon. Practicing BJJ routinely prepares the practitioner mentally for physical combat, which can be advantageous in a self defense scenario. Another reason BJJ is viewed as good for self defense is that its techniques work well for different body types, leveling the playing field for smaller or weaker practitioners. If both opponents are trained in BJJ, size and strength become a larger factor. Routine training can also lead to increased physical fitness, as classes can be quite rigorous and physically demanding. That said, some argue that other martial arts are intentionally designed for self defense and may therefore be more effective.
|
|
null | false
| null |
Give me a list of the best places in the United States to go skiing
|
The "best" place to go skiing is very subjective based on location, skiing ability, and preferred terrain. Generally speaking, Killington in the east, Winter Park, Big Sky, and Park City in the midwest, and Pallisades in Lake Tahoe have the best combinations of accessibility, trails for all abilities, and a large variety of terrain.
|
|
null | false
| null |
What is a birdie in Golf
|
Each hole in golf is assigned a number of strokes, called par, that it would take an expert golfer to get the ball in the hole. If the golfer makes it in the designated number of strokes, it is called a par. One stroke less than the designated number results in a birdie. For example, on a par 4 hole, if a golfer takes 3 strokes to get the ball in the hole, he has made a birdie.
|
|
null | false
| null |
Give me a list of Germany's five most recent chancellors, along with the years they were in office.
|
Here are the five most recent chancellors of Germany:
1. Olaf Scholz (2021 - present)
2. Angela Merkel (2005 - 2021)
3. Gerhard Schröder (1998 - 2005)
4. Helmut Kohl (1982 - 1998)
5. Helmut Schmidt (1974 - 1982)
|
|
null | false
| null |
What are the worst top ten movies of all time, according to rotten tomatoes?
|
1 - Ballistic: Ecks vs. Sever (2002) with Lucy Liu and Antonio Banderas
2 - One Missed Call (2008)
3 - A Thousand Words (2012) with Eddie Murphy
4 - Gotti (2018) with John Travolta
5 - Pinocchio (2002) with Roberto Benigni
6 - Superbabies: Baby Geniuses 2 (2004)
7 - Gold Diggers (2003)
8 - The Last Days of American Crime (2020) with Michael Pitt and Edgar Ramirez
9 - Jaws: The Revenge (1987) with Michael Caine
10 - Dark Crimes (2016) with Jim Carrey and Charlotte Gainsbourg
|
|
null | false
| null |
Tell me whether these instruments are stringed or percussion: Cello, Cymbals, Triangle, Harp
|
The cello and harp are stringed instruments while the cymbals and triangle are percussion instruments.
|
|
null | false
| null |
Why do people say "if I had a nickel for every time"?
|
"If I had a nickle for every time" is a common saying used in the English language to describe a situation that happens repeatedly. The implication, which is meant to be an exaggeration, is that the individual would be wealthy or make substantial money due to the volume of times the event is repeated. Oftentimes, it refers to a question or comment that an individual hears repeatedly.
|
|
null | false
| null |
Name some famous Sci-Fi books
|
- Dune, by Frank Herbert
- The Martian Chronicles, by Ray Bradbury
- The Fifth Season, by N.K. Jemisin
- Solaris, by Stanislaw Lem
- Foundation, by Isaac Asimov
|
|
null | false
| 34
|
Single-relation factoid questions are the most common form of questions found in search query logs and community question answering websites BIBREF1 , BIBREF2 . A knowledge-base (KB) such as Freebase, DBpedia, or Wikidata can help answer such questions after users reformulate them as queries. For instance, the question Where was Barack Obama born? can be answered by issuing the following KB query: $\lambda (x).place\_of\_birth(Barack\_Obama, x)$
However, automatically mapping a natural language question such as Where was Barack Obama born? to its corresponding KB query remains a challenging task.
There are three key issues that make learning this mapping non-trivial. First, there are many paraphrases of the same question. Second, many of the KB entries are unseen during training time; however, we still need to correctly predict them at test time. Third, a KB such as Freebase typically contains millions of entities and thousands of predicates, making it difficult for a system to predict these entities at scale BIBREF1 , BIBREF3 , BIBREF0 . In this paper, we address all three of these issues with a character-level encoder-decoder framework that significantly improves performance over state-of-the-art word-level neural models, while also providing a much more compact model that can be learned from less data.
First, we use a long short-term memory (LSTM) BIBREF4 encoder to embed the question. Second, to make our model robust to unseen KB entries, we extract embeddings for questions, predicates and entities purely from their character-level representations. Character-level modeling has been previously shown to generalize well to new words not seen during training BIBREF5 , BIBREF6 , which makes it ideal for this task. Third, to scale our model to handle the millions of entities and thousands of predicates in the KB, instead of using a large output layer in the decoder to directly predict the entity and predicate, we use a general interaction function between the question embeddings and KB embeddings that measures their semantic relevance to determine the output. The combined use of character-level modeling and a semantic relevance function allows us to successfully produce likelihood scores for the KB entries that are not present in our vocabulary, a challenging task for standard encoder-decoder frameworks.
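As an illustrative stand-in for this interaction function, the sketch below scores KB candidates against a question span using character-bigram Jaccard overlap; the actual model learns its relevance function over character-level LSTM embeddings, and the entity names here are made up:

```python
def char_bigrams(text):
    """Character bigrams of a string (the character-level view of a token)."""
    return {text[i:i + 2] for i in range(len(text) - 1)}

def relevance(question_span, candidate):
    """Toy interaction function: character-bigram Jaccard overlap, a crude
    stand-in for the learned semantic relevance between question and KB
    entry embeddings. Works for candidates never seen during training."""
    a = char_bigrams(question_span)
    b = char_bigrams(candidate.replace("_", " "))
    return len(a & b) / max(len(a | b), 1)

def best_entity(question_span, candidates):
    # Score every KB candidate instead of predicting from a huge output layer.
    return max(candidates, key=lambda c: relevance(question_span, c))

print(best_entity("barack obama", ["barack_obama", "michelle_obama", "paris"]))
# barack_obama
```

The point this mirrors is that scoring candidates by a similarity function, rather than classifying over a fixed output vocabulary, lets the system assign likelihoods to KB entries absent from the training vocabulary.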
Our novel, character-level encoder-decoder model is compact, requires significantly less data to train than previous work, and is able to generalize well to unseen entities in test time. In particular, without use of ensembles, we achieve 70.9% accuracy in the Freebase2M setting and 70.3% accuracy in the Freebase5M setting on the SimpleQuestions dataset, outperforming the previous state-of-arts of 62.7% and 63.9% BIBREF0 by 8.2% and 6.4% respectively. Moreover, we only use the training questions provided in SimpleQuestions to train our model, which cover about 24% of words in entity aliases on the test set. This demonstrates the robustness of the character-level model to unseen entities. In contrast, data augmentation is usually necessary to provide more coverage for unseen entities and predicates, as done in previous work BIBREF0 , BIBREF1 .
In contrast, data augmentation is usually necessary to provide more coverage for unseen entities and predicates, as done in previous work.
|
Why is data augmentation usually necessary?
|
To provide more coverage for unseen entities and predicates.
|
1807.08447
| false
| null |
We evaluate LinkNBed and baselines on two real world knowledge graphs: D-IMDB (derived from large scale IMDB data snapshot) and D-FB (derived from large scale Freebase data snapshot). Table 1 provides statistics for our final dataset used in the experiments. Appendix B.1 provides complete details about dataset processing.
We evaluate LinkNBed and baselines on two real world knowledge graphs: D-IMDB (derived from large scale IMDB data snapshot) and D-FB (derived from large scale Freebase data snapshot).
|
On what data is the model evaluated?
|
The answers are shown as follows:
* D-IMDB (derived from large scale IMDB data snapshot)
* D-FB (derived from large scale Freebase data snapshot)
|
null | false
| null |
How do you make a beef brisket rub?
|
1 1/2 cups of brown sugar
1/3 cup of salt
1/4 cup of pepper
1/2 cup of smoked paprika
1 1/4 tablespoons of garlic powder
1 tablespoon of onion powder
1 tablespoon of cayenne pepper
1/2 tablespoon of chili powder
1/2 tablespoon of cumin
1/3 teaspoon of cocoa powder
|
|
null | false
| 198
|
The prevalent use of emoji—and their text-based precursors—is mostly unaddressed in current natural language processing (NLP) tasks. The support of the Unicode Standard BIBREF0 for emoji characters in 2010 ushered in a wide-spread, international adoption of these graphical elements in casual contexts. Interpreting the meaning of these characters has been challenging however, since they take on multiple semantic roles BIBREF1.
Whether or not emoji are used depends on the context of a text or conversation, with more formal settings generally being less tolerating. So is the popular aligned corpus Europarl BIBREF2 naturally devoid of emoji. Technical limitations, like no Unicode support, also limit its use. This in turn affects commonly used corpora, tokenizers, and pre-trained networks.
Take for example the Ubuntu Dialog Corpus by BIBREF3, a commonly used corpus for multi-turn systems. This dataset was collected from an Internet Relay Chat (IRC) room casually discussing the operating system Ubuntu. IRC nodes usually support the ASCII text encoding, so there's no support for graphical emoji. However, in the 7,189,051 utterances, there are only 9946 happy emoticons (i.e. :-) and the cruelly denosed :) version) and 2125 sad emoticons.
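A rough sketch of how such emoticon counts can be collected from a corpus of utterances; the regex patterns and example utterances below are our own, not from the Ubuntu Dialog Corpus:

```python
import re

# Patterns for the "nosed" :-) and "denosed" :) happy faces quoted above,
# plus the matching sad variants (illustrative, not exhaustive).
HAPPY = re.compile(r":-?\)")
SAD = re.compile(r":-?\(")

utterances = [
    "thanks, that fixed it :)",
    "no worries :-)",
    "still broken :(",
    "try rebooting",
]

happy = sum(len(HAPPY.findall(u)) for u in utterances)
sad = sum(len(SAD.findall(u)) for u in utterances)
print(happy, sad)  # 2 1
```

Even this small example shows how sparse emoticons can be relative to utterance counts, which is the imbalance the corpus statistics above point to.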
Word embeddings also handle emoji poorly: Word2vec BIBREF4 with the commonly used pre-trained Google News vectors doesn't support graphical emoji at all, and vectors for textual emoticons are inconsistent. As another example with contextualized word embeddings, there are also no emoji or textual emoticons in the vocabulary list of BERT BIBREF5 by default, and support for emoji was only recently added to the tokenizer. The same is true for GPT-2 BIBREF6. As all downstream systems, ranging from multilingual résumé parsing to fallacy detection BIBREF7, rely on the completeness of these embeddings, this lack of emoji support can affect the performance of some of these systems.
Another challenge is that emoji usage isn't static. Think of shifting conventions, different cultures, and newly added emoji to the Unicode list. Several applications also use their own custom emoji, like chat application Slack and streaming service Twitch. This becomes an issue for methods that leverage the Unicode description BIBREF8 or that rely on manual annotations BIBREF9.
Our contribution with this paper is two-fold: firstly, we argue that the current use—or rather non-existing use—of emoji in the tokenizing, training, and the datasets themselves is insufficient. Secondly, we attempt to quantify the significance of incorporating emoji-based features by presenting a fine-tuned model. We then compare this model to a baseline, but without special attention to emoji.
Section SECREF2 will start with an overview of work on emoji representations, emoji-based models and analysis of emoji usage. A brief introduction in conversational systems will also be given. Section SECREF3 will then look into popular datasets with and without emoji and then introduce the dataset we used.
Our model will then be discussed in Section SECREF4, including the tokenization in Subsection SECREF4, training setup in Subsection SECREF6 and evaluation in Subsection SECREF10. This brings us to the results of our experiment, which is discussed in Section SECREF5 and finally our conclusion and future work are presented in Section SECREF6.
Secondly, we attempt to quantify the significance of incorporating emoji-based features by presenting a fine-tuned model. We then compare this model to a baseline, but without special attention to emoji.
|
Do they pay special attention to emoji when comparing their model to a baseline?
|
No.
|
null | false
| null |
I would like to go to Mystery Spot near Santa Cruz California. What can you tell me about the mystery surrounding the Mystery Spot and how should I plan for this trip?
|
It is great to hear that you are looking to go to the Mystery Spot in Santa Cruz, California. During winter it can get chilly, so layer up. The road leading up from the South Bay is windy on Highway 17 South and then goes through thicker forest. The Mystery Spot is a tourist attraction that was opened in 1939 by George Prather. You can sign up for a guided tour at https://www.mysteryspot.com/, and the Mystery Spot is open year round. The tour is approximately 40 minutes long, and on it you discover that gravity doesn't seem to behave as it should, or that what should be higher than you turns out to be at the same height or lower. You make a steep, quick climb to a tilted cabin on a tilted road, and you feel disoriented and seem to be at a severe incline when you enter the cabin. While there are various hypotheses as to the reason for these effects, the most likely explanation is some level of optical illusion, as the road is tilted at a severe opposite angle to the tilted roof line and the inside of the cabin. A combination of a gravity hill, where you get the sensation you are climbing up the road when in reality you are going down, and the odd tilted angles of the cabin likely causes a severe optical illusion. It also causes people to fall back on their visual sense and ignore other senses that might counteract what they see.
|
|
null | false
| null |
According to some, the idea of infinite worlds was first suggested by the pre-Socratic Greek philosopher Anaximander in the sixth century BCE. However, there is debate as to whether he believed in multiple worlds, and if he did, whether those worlds were co-existent or successive. The first to whom we can definitively attribute the concept of innumerable worlds are the Ancient Greek Atomists, beginning with Leucippus and Democritus in the 5th century BCE, followed by Epicurus (341-270 BCE) and Lucretius (1st century BCE). In the third century BCE, the philosopher Chrysippus suggested that the world eternally expired and regenerated, effectively suggesting the existence of multiple universes across time. The concept of multiple universes became more defined in the Middle Ages. The American philosopher and psychologist William James used the term "multiverse" in 1895, but in a different context. The concept first appeared in the modern scientific context in the course of the debate between Boltzmann and Zermelo in 1895. In Dublin in 1952, Erwin Schrödinger gave a lecture in which he jocularly warned his audience that what he was about to say might "seem lunatic". He said that when his equations seemed to describe several different histories, these were "not alternatives, but all really happen simultaneously". This sort of duality is called "superposition".
|
Based on the given text give the name of the people and a summary of how they contributed to the notion of infinite or multiple worlds.
|
Here are the name of the people and a summary of how they contributed to the notion of infinite or multiple worlds:
Anaximander (6th century BCE): According to some, he was the first to suggest the idea of infinite worlds, though this is debated.
Leucippus (5th century BCE), Democritus (5th century BCE), Epicurus (341-270 BCE) and Lucretius (1st century BCE): They believed in the concept of innumerable worlds.
Chrysippus (3rd century BCE): He suggested the existence of multiple universes across time which means the world eternally expired and regenerated.
William James (1895): He first used the term "multiverse".
Boltzmann and Zermelo (1895): In a debate between them the multiverse concept first appeared in the modern scientific context.
Erwin Schrödinger (1952): In a lecture in Dublin, he said that when his equations seemed to describe several different histories, these were "not alternatives, but all really happen simultaneously", a sort of duality called "superposition".
|
null | false
| null |
The company was incorporated as Tesla Motors, Inc. on July 1, 2003, by Martin Eberhard and Marc Tarpenning. Eberhard and Tarpenning served as CEO and CFO, respectively. Eberhard said he wanted to build "a car manufacturer that is also a technology company", with its core technologies as "the battery, the computer software, and the proprietary motor".
Ian Wright was Tesla's third employee, joining a few months later. In February 2004, the company raised $7.5 million in series A funding, including $6.5 million from Elon Musk, who had received $100 million from the sale of his interest in PayPal two years earlier. Musk became the chairman of the board of directors and the largest shareholder of Tesla. J. B. Straubel joined Tesla in May 2004 as chief technical officer.
A lawsuit settlement agreed to by Eberhard and Tesla in September 2009 allows all five – Eberhard, Tarpenning, Wright, Musk, and Straubel – to call themselves co-founders.
|
Given this paragraph about the founding of Tesla, who was the chairman of the board of directors as of March 2004?
|
Elon Musk
|
null | false
| 107
|
The SemEval-2010 benchmark dataset BIBREF0 is composed of 244 scientific articles collected from the ACM Digital Library (conference and workshop papers). The input papers ranged from 6 to 8 pages and were converted from PDF format to plain text using an off-the-shelf tool. The only preprocessing applied is a systematic dehyphenation at line breaks and removal of author-assigned keyphrases. Scientific articles were selected from four different research areas as defined in the ACM classification, and were equally distributed into training (144 articles) and test (100 articles) sets. Gold standard keyphrases are composed of both author-assigned keyphrases collected from the original PDF files and reader-assigned keyphrases provided by student annotators.
Long documents such as those in the SemEval-2010 benchmark dataset are notoriously difficult to handle due to the large number of keyphrase candidates (i.e. phrases that are eligible to be keyphrases) that the systems have to cope with BIBREF6 . Furthermore, noisy textual content, whether due to format conversion errors or to unusable elements (e.g. equations), yield many spurious keyphrase candidates that negatively affect keyphrase extraction performance. This is particularly true for systems that make use of core NLP tools to select candidates, that in turn exhibit poor performance on degraded text. Filtering out irrelevant text is therefore needed for addressing these issues.
In this study, we concentrate our effort on re-assessing keyphrase extraction performance on three increasingly sophisticated levels of document preprocessing described below.
Table shows the average number of sentences and words along with the maximum possible recall for each level of preprocessing. The maximum recall is obtained by computing the fraction of the reference keyphrases that occur in the documents. We observe that the level 2 preprocessing succeeds in eliminating irrelevant text by significantly reducing the number of words (-19%) while maintaining a high maximum recall (-2%). Level 3 preprocessing drastically reduces the number of words to less than a quarter of the original amount while interestingly still preserving high recall.
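A minimal sketch of this maximum-recall computation, i.e. the fraction of gold keyphrases that literally occur in the (preprocessed) document text; the document and gold keyphrases below are invented for illustration:

```python
def maximum_recall(document_text, reference_keyphrases):
    """Fraction of gold keyphrases that occur verbatim in the document:
    the upper bound on recall for purely extractive keyphrase methods."""
    doc = document_text.lower()
    found = [kp for kp in reference_keyphrases if kp.lower() in doc]
    return len(found) / len(reference_keyphrases)

doc = "We study keyphrase extraction from scientific articles using preprocessing."
gold = ["keyphrase extraction", "scientific articles", "document logical structure"]
print(maximum_recall(doc, gold))  # 2 of 3 gold phrases occur -> 0.666...
```

Aggressive preprocessing can only lower this bound, which is why the table tracks it alongside the word counts.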
Level 2: Similarly to (Nguyen and Luong, 2010; Lopez and Romary, 2010), we retrieve the original PDF files from the ACM Digital Library. We then extract the enriched textual content of the PDF files using an Optical Character Recognition (OCR) system, and perform document logical structure detection using ParsCit (Kan et al., 2010). We use the detected logical structure to remove author-assigned keyphrases and select only relevant elements: title, headers, abstract, introduction, related work, body text and conclusion. We finally apply a systematic dehyphenation at line breaks and run the Stanford CoreNLP suite.
|
How is the filtering of irrelevant text arranged in Lv.2 preprocessing?
|
They perform document logical structure detection using ParsCit (Kan et al., 2010). They use the detected logical structure to remove author-assigned keyphrases and select only relevant elements: title, headers, abstract, introduction, related work, body text and conclusion. They finally apply a systematic dehyphenation at line breaks and run the Stanford CoreNLP suite.
|
null | false
| 159
|
The recurrent language generator proposed in this paper is based on a neural language generator BIBREF8 , which consists of three main components: (i) an Encoder that incorporates the target meaning representation (MR) as the model inputs, (ii) an Aligner that aligns and controls the semantic elements, and (iii) an RNN Decoder that generates output sentences. The generator architecture is shown in Figure 1 . The Encoder first encodes the MR into input semantic elements which are then aggregated and selected by utilizing an attention-based mechanism by the Aligner. The input to the RNN Decoder at each time step is a 1-hot encoding of a token $\textbf {w}_{t}$ and an attentive DA representation $\textbf {d}_{t}$ . At each time step $t$ , RNN Decoder also computes how much the feature value vector $\textbf {s}_{t-1}$ retained for the next computational steps, and adds this information to the RNN output which represents the probability distribution of the next token $\textbf {w}_{t+1}$ . At generation time, we can sample from this conditional distribution to obtain the next token in a generated sentence, and feed it as the next input to the RNN Decoder. This process finishes when an end sign is generated BIBREF17 , or some constraints are reached BIBREF16 . The model can produce a sequence of tokens which can finally be lexicalized to form the required utterance.
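A small sketch of the attention-based Aligner step described above, computing the attentive DA representation $\textbf {d}_{t}$ as a weighted sum of encoded semantic elements. The dot-product scoring and toy vectors here are illustrative assumptions; the paper's model learns its own scoring function:

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def attentive_da_representation(decoder_state, semantic_elements):
    """Aligner sketch: score each encoded MR element against the current
    decoder state (dot product here, as an assumed stand-in), then return
    the attention-weighted sum d_t of the element vectors."""
    scores = [sum(a * b for a, b in zip(decoder_state, e)) for e in semantic_elements]
    weights = softmax(scores)
    dim = len(semantic_elements[0])
    return [sum(w * e[k] for w, e in zip(weights, semantic_elements)) for k in range(dim)]

state = [1.0, 0.0]                    # toy decoder state at step t
elements = [[2.0, 0.0], [0.0, 2.0]]   # toy encoded DA slot-value vectors
d_t = attentive_da_representation(state, elements)
```

The element aligned with the current decoder state receives most of the attention mass, which is how the Aligner "selects" semantic elements at each generation step.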
The RALSTM cell proposed in this paper consists of three components: a Refinement Cell, a traditional LSTM Cell, and an Adjustment Cell.
|
What are the main components of the RALSTM cell?
|
It consists of three components: a Refinement Cell, a traditional LSTM Cell, and an Adjustment Cell.
|
null | false
| null |
Formula One (more commonly known as Formula 1 or F1) is the highest class of international racing for open-wheel single-seater formula racing cars sanctioned by the Fédération Internationale de l'Automobile (FIA). The FIA Formula One World Championship has been one of the premier forms of racing around the world since its inaugural season in 1950. The word formula in the name refers to the set of rules to which all participants' cars must conform. A Formula One season consists of a series of races, known as Grands Prix. Grands Prix take place in multiple countries and continents around the world on either purpose-built circuits or closed public roads.
|
What is F1 or Formula 1 racing?
|
Formula 1 is the highest class and most technically advanced form of open-wheel single-seater racing. It is highly technical because the cars are built with the latest, bleeding-edge technology in aerodynamics and many other fields of science to make the car as fast as possible. The races take place every year across all the continents, on dedicated circuits or on public roads converted into a race track for the race weekend.
|
null | false
| 179
|
Diagrams are a common feature of many everyday media from newspapers to school textbooks, and not surprisingly, different forms of diagrammatic representation have been studied from various perspectives. To name just a few examples, recent work has examined patterns in diagram design BIBREF0 and their interpretation in context BIBREF1, and developed frameworks for classifying diagrams BIBREF2 and proposed guidelines for their design BIBREF3. There is also a long-standing interest in processing and generating diagrams computationally BIBREF4, BIBREF5, BIBREF6, which is now resurfacing as advances emerging from deep learning for computer vision and natural language processing are brought to bear on diagrammatic representations BIBREF7, BIBREF8, BIBREF9.
From the perspective of computational processing, diagrammatic representations present a formidable challenge, as they involve tasks from both computer vision and natural language processing. On the one hand, diagrams have a spatial organisation – layout – which needs to be segmented to identify meaningful units and their position. Making sense of how diagrams exploit the 2D layout space falls arguably within the domain of computer vision. On the other hand, diagrams also have a discourse structure, which uses the layout space to set up discourse relations between instances of natural language, various types of images, arrows and lines, thus forming a unified discourse organisation. The need to parse this discourse structure shifts the focus towards the field of natural language processing.
Understanding and making inferences about the structure of diagrams and other forms of multimodal discourse may be broadly conceptualised as multimodal discourse parsing. Recent examples of work in this area include Alikhani et al. (2019) and Otto et al. (2019), who model discourse relations between natural language and photographic images, drawing on linguistic theories of coherence and text–image relations, respectively. In most cases, however, predicting a single discourse relation covers only a part of the discourse structure. Sachan et al. (2019) note that there is a need for comprehensive theories and models of multimodal communication, as they can be used to rethink tasks that have previously been considered only from the perspective of natural language processing.
Unlike many other areas, the study of diagrammatic representations is particularly well-resourced, as several multimodal resources have been published recently to support research on computational processing of diagrams BIBREF10, BIBREF8, BIBREF11. This study compares two such resources, AI2D BIBREF10 and AI2D-RST BIBREF11, which both feature the same diagrams, as the latter is an extension of the former. Whereas AI2D features crowd-sourced, non-expert annotations, AI2D-RST provides multiple layers of expert annotations, which are informed by state-of-the-art approaches to multimodal communication BIBREF12 and annotation BIBREF13, BIBREF14.
This provides an interesting setting for comparison and evaluation, as non-expert annotations are cheap to produce and easily outnumber the expert-annotated data, whose production consumes both time and resources. Expert annotations, however, incorporate domain knowledge from multimodality theory, which is unavailable via crowd-sourcing. Whether expert annotations provide better representations of diagrammatic structures and thus justify their higher cost is one question that this study seeks to answer.
Both AI2D and AI2D-RST represent the multimodal structure of diagrams using graphs. This enables learning their representations using graph neural networks, which are gaining currency as a graph is a natural choice for representing many types of data BIBREF15. This article reports on two experiments that evaluate the capability of AI2D and AI2D-RST to represent the multimodal structure of diagrams using graphs, focusing particularly on spatial layout, the hierarchical organisation of diagram elements and their connections expressed using arrows and lines.
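Since both corpora represent a diagram as a graph, a single round of neighbourhood aggregation — the basic operation behind the graph neural networks mentioned above — can be sketched as follows. Mean aggregation over undirected edges is an illustrative choice here, not necessarily the scheme used in the experiments.

```python
def gnn_layer(node_feats, edges):
    # one round of mean-aggregation message passing over the diagram graph:
    # each node's new feature is the average of itself and its neighbours
    n = len(node_feats)
    neigh = {i: [i] for i in range(n)}
    for a, b in edges:
        neigh[a].append(b)
        neigh[b].append(a)
    d = len(node_feats[0])
    return [
        [sum(node_feats[j][k] for j in neigh[i]) / len(neigh[i]) for k in range(d)]
        for i in range(n)
    ]
```

Stacking several such rounds lets information carried by arrows, lines and grouping nodes propagate across the whole diagram graph.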
This article reports on two experiments that evaluate the capability of AI2D and AI2D-RST to represent the multimodal structure of diagrams using graphs, focusing particularly on spatial layout, the hierarchical organisation of diagram elements and their connections expressed using arrows and lines.
|
What capability do the experiments evaluate?
|
The capability of AI2D and AI2D-RST to represent the multimodal structure of diagrams using graphs.
|
null | false
| 247
|
With the rapid growth of social network platforms, more and more people tend to share their experiences and emotions online. Emotion analysis of online text becomes a new challenge in Natural Language Processing (NLP). In recent years, studies in emotion analysis largely focus on emotion classification, including detection of writers' emotions BIBREF0 as well as readers' emotions BIBREF1 . There are also some information extraction tasks defined in emotion analysis BIBREF2 , BIBREF3 , such as extracting the feeler of an emotion BIBREF4 . These methods assume that emotion expressions are already observed. Sometimes, however, we care more about the stimuli, or the cause, of an emotion. For instance, Samsung wants to know why people love or hate Note 7 rather than the distribution of different emotions.
Ex.1 我的手机昨天丢了,我现在很难过。
Ex.1 Because I lost my phone yesterday, I feel sad now.
In the example shown above, “sad” is an emotion word, and the cause of “sad” is “I lost my phone”. The emotion cause extraction task aims to identify the reason behind an emotion expression. It is a more difficult task than emotion classification since it requires a deep understanding of the text that conveys an emotion.
Existing approaches to emotion cause extraction mostly rely on methods typically used in information extraction, such as rule-based template matching, sequence labeling and classification-based methods. Most of them use linguistic rules or lexicon features, but do not consider the semantic information and ignore the relation between the emotion word and the emotion cause. In this paper, we present a new method for emotion cause extraction. We consider emotion cause extraction as a question answering (QA) task. Given a text containing the description of an event which may or may not cause a certain emotion, we take an emotion word in context, such as “sad”, as a query. The question to the QA system is: “Does the described event cause the emotion of sadness?”. The expected answer is either “yes” or “no” (see Figure FIGREF1 ). We build our QA system based on a deep memory network. The memory network has two inputs: a piece of text, referred to as a story in QA systems, and a query. The story is represented using a sequence of word embeddings.
A recurrent structure is implemented to mine the deep relation between a query and a text. It measures the importance of each word in the text by an attention mechanism. Based on the learned attention result, the network maps the text into a low-dimensional vector space. This vector is then used to generate an answer. Existing memory network based approaches to QA use a weighted sum of attentions to jointly consider short text segments stored in memory. However, they do not explicitly model sequential information in the context. In this paper, we propose a new deep memory network architecture that models the context of each word simultaneously by multiple memory slots which capture sequential information using convolutional operations BIBREF5 , and achieves state-of-the-art performance compared to existing methods which use manual rules, common-sense knowledge bases or other machine learning models.
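A single-hop version of the attention step described above can be sketched as follows. The mean-of-window slots are only a crude stand-in for the convolutional memory slots proposed in the paper, and the embeddings and emotion-word query are toy values.

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    z = sum(es)
    return [e / z for e in es]

def context_slots(embs, window=1):
    # crude stand-in for the convolutional memory slots: each word's slot is
    # the mean of the embeddings in its +/-window neighbourhood
    n, d = len(embs), len(embs[0])
    slots = []
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        slots.append([sum(e[k] for e in embs[lo:hi]) / (hi - lo) for k in range(d)])
    return slots

def attend_story(query, slots):
    # attention of the emotion-word query over the story slots, then a
    # weighted sum mapping the story into a single low-dimensional vector
    weights = softmax([sum(q * s for q, s in zip(query, slot)) for slot in slots])
    d = len(slots[0])
    return weights, [sum(w * slot[k] for w, slot in zip(weights, slots)) for k in range(d)]
```

The resulting story vector would then be scored against a yes/no answer layer; the attention weights show which words the network treats as the likely cause.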
The rest of the paper is organized as follows. Section SECREF2 gives a review of related works on emotion analysis. Section SECREF3 presents our proposed deep memory network based model for emotion cause extraction. Section SECREF4 discusses evaluation results. Finally, Section SECREF5 concludes the work and outlines the future directions.
In this paper, we propose a new deep memory network architecture to model the context of each word simultaneously by multiple memory slots which capture sequential information using convolutional operations (Kim, 2014), achieving state-of-the-art performance compared to existing methods which use manual rules, common sense knowledge bases or other machine learning models.
|
How to model the context of each word simultaneously in the new deep memory network architecture proposed by the authors?
|
It models the context of each word simultaneously by multiple memory slots.
|
null | false
| null |
Describe a recurring dream you've had.
|
Thank you for asking about my dreams. I am a computer model and therefore I do not actually dream myself. I do learn from content and so I know a lot about what kinds of dreams one could have. People often have dreams about their work, their children, losing all of their teeth, and flying. Some of these can be recurring dreams, which means that someone can have the same dream repeatedly. It must be nice to dream.
|
|
1910.11769
| false
| null |
The final annotation categories for the dataset are: Joy, Sadness, Anger, Fear, Anticipation, Surprise, Love, Disgust, Neutral.
The final annotation categories for the dataset are: Joy, Sadness, Anger, Fear, Anticipation, Surprise, Love, Disgust, Neutral
|
How many emotions do they look at?
|
9
|
1801.07537
| false
| null |
Here we perform a qualitative analysis of this communication process to better understand what kind of language the agent has learned. We find that while optimizing its reformulations to adapt to the language of the QA system, AQA diverges from well structured language in favour of less fluent, but more effective, classic information retrieval (IR) query operations. These include term re-weighting (tf-idf), expansion and morphological simplification/stemming. We hypothesize that the explanation of this behaviour is that current machine comprehension tasks primarily require ranking of short textual snippets, thus incentivizing relevance more than deep language understanding.
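The term re-weighting behaviour mentioned above follows the classic tf-idf scheme, which can be sketched as follows; the exact weighting formula is an illustrative choice, not the agent's learned objective.

```python
import math
from collections import Counter

def tfidf_reweight(query_terms, doc_freq, n_docs):
    # classic IR-style term re-weighting: terms frequent in the query but
    # rare in the corpus get the highest weight, mirroring the behaviour
    # the reformulation agent appears to rediscover
    tf = Counter(query_terms)
    return {
        t: tf[t] * math.log((1 + n_docs) / (1 + doc_freq.get(t, 0)))
        for t in tf
    }
```

Terms that are frequent in the query but rare in the corpus dominate the reformulation, which matches the relevance-over-fluency behaviour observed.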
We find that while optimizing its reformulations to adapt to the language of the QA system, AQA diverges from well structured language in favour of less fluent, but more effective, classic information retrieval (IR) query operations.
|
What is the difference in findings of Buck et al? It looks like the same conclusion was mentioned in Buck et al..
|
The answers are shown as follows:
* AQA diverges from well structured language in favour of less fluent, but more effective, classic information retrieval (IR) query operations
|
null | false
| null |
Cystatin-11 is a protein that in humans is encoded by the CST11 gene.
The cystatin superfamily encompasses proteins that contain multiple cystatin-like sequences. Some of the members are active cysteine protease inhibitors, while others have lost or perhaps never acquired this inhibitory activity.
There are three inhibitory families in the superfamily, including the type 1 cystatins (stefins), type 2 cystatins and the kininogens. The type 2 cystatin proteins are a class of cysteine proteinase inhibitors found in a variety of human fluids and secretions.
The cystatin locus on chromosome 20 contains the majority of the type 2 cystatin genes and pseudogenes. This gene is located in the cystatin locus and encodes an epididymal-specific protein whose specific function has not been determined. Alternative splicing yields two variants encoding distinct isoforms.
|
What is CST 11
|
Cystatin-11 is a protein that in humans is encoded by the CST11 gene.
The cystatin superfamily encompasses proteins that contain multiple cystatin-like sequences. Some of the members are active cysteine protease inhibitors, while others have lost or perhaps never acquired this inhibitory activity.
|
1910.08772
| true
| null |
To show the effectiveness of our approach, we show results on the SICK dataset BIBREF1, a common benchmark for logic-based NLI, and find MonaLog to be competitive with more complicated logic-based approaches (many of which require full semantic parsing and more complex logical machinery). We also introduce a supplementary version of SICK that corrects several common annotation mistakes (e.g., asymmetrical inference annotations) based on previous work by kalouli2017entail,kalouli2018. Positive results on both these datasets show the ability of lightweight monotonicity models to handle many of the inferences found in current NLI datasets, hence putting a more reliable lower-bound on what results the simplest logical approach is capable of achieving on this benchmark.
To show the effectiveness of our approach, we show results on the SICK dataset BIBREF1, a common benchmark for logic-based NLI, and find MonaLog to be competitive with more complicated logic-based approaches (many of which require full semantic parsing and more complex logical machinery).
|
Do they beat current state-of-the-art on SICK?
|
No.
|
1912.05066
| false
| null |
The prediction of outcomes of debates is very interesting in our case. Most of the results seem to match with the views of some experts such as the political pundits of the Washington Post. This implies that certain rules that were used to score the candidates in the debates by said-experts were in fact reflected by reading peoples' sentiments expressed over social media. This opens up a wide variety of learning possibilities from users' sentiments on social media, which is sometimes referred to as the wisdom of crowd.
The prediction of outcomes of debates is very interesting in our case. Most of the results seem to match with the views of some experts such as the political pundits of the Washington Post. This implies that certain rules that were used to score the candidates in the debates by said-experts were in fact reflected by reading peoples' sentiments expressed over social media. This opens up a wide variety of learning possibilities from users' sentiments on social media, which is sometimes referred to as the wisdom of crowd.
|
Who is the crowd in these experiments?
|
The answers are shown as follows:
* peoples' sentiments expressed over social media
|
null | false
| null |
What are the different types of snowboarding styles?
|
Since snowboarding's inception as an established winter sport, it has developed various styles, each with its own specialized equipment and technique. The most common styles today are: freeride, freestyle, and freecarve/race. These styles are used for both recreational and professional snowboarding. While each style is unique, there is overlap between them.
|
|
null | false
| null |
Sinclair has received numerous awards for his research, including the Irving S. Wright Award of Distinction from the American Federation for Aging Research in 2018, the Advance Award in Life Sciences from the Australian government in 2017, and the Australian Society for Medical Research Medal in 2014. In 2014, Sinclair was included in Time 100 as one of the hundred most influential people in the world, and in 2018 he was included in Time magazine's 50 Most Influential People in Health Care. In 2018, Sinclair was made an officer of the Order of Australia (AO) for "distinguished service to medical research into the biology of ageing and lifespan extension, as a geneticist and academic, to biosecurity initiatives, and as an advocate for the study of science" (2018 Australia Day Honours).
|
Please extract Sinclair's awards from this paragraph and list them in bullets in order from oldest to most recent.
|
-Time's 100 most influential people in the world
-Australian Society for Medical Research Medal
-Advance Award in Life Sciences
-Irving S Wright Award of Distinction
-Time magazine's 50 Most Influential People in Health Care
-Officer of the Order of Australia
|
null | false
| null |
What are different ways people help each other?
|
People help each other in a variety of different ways. Help can be offered to those who are physically handicapped or suffering emotionally, and to those who do not understand a topic or subject. It can also take the form of knowledge transfer, active listening, physical touch, support from a community of people, and both verbal and non-verbal communication.
|
|
null | false
| null |
What is behavioural economics?
|
Behavioural economics is a method of economic analysis that applies psychological insights into human behaviour to explain economic decision-making. The application of psychological insights to economic theories and problems has been growing in influence for decades. In short, behavioural economics is a subset of the study of economics that uses psychology to explain why people make the economic decisions they do.
|
|
null | false
| null |
How early should you arrive at the airport before a flight?
|
An individual can arrive as early as they would like before a flight. It is recommended to arrive 4 hours before an international flight and 2 hours before a domestic flight. Airline check-in policies can vary: if checking a bag, some airlines allow checking it up to 6 hours before boarding time, while others only allow up to 4 hours before boarding time. It is important to check the relevant policies before planning your travel.
|
|
null | false
| null |
How can I find accurate directions between two places?
|
There are several applications that will allow you to get highly accurate directions, in terms of both time and distance, between two or more locations. You can even see real-time traffic! The two most popular apps are Apple Maps and Google Maps, but there are other services available like Waze and MapQuest. In addition, these apps let you get directions based on your mode of transport, whether it's public transportation, personal automobile, bike, or walking.
|
|
null | false
| 41
|
Performance appraisal (PA) is an important HR process, particularly for modern organizations that crucially depend on the skills and expertise of their workforce. The PA process enables an organization to periodically measure and evaluate every employee's performance. It also provides a mechanism to link the goals established by the organization to its each employee's day-to-day activities and performance. Design and analysis of PA processes is a lively area of research within the HR community BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 .
The PA process in any modern organization is nowadays implemented and tracked through an IT system (the PA system) that records the interactions that happen in various steps. Availability of this data in a computer-readable database opens up opportunities to analyze it using automated statistical, data-mining and text-mining techniques, to generate novel and actionable insights / patterns and to help in improving the quality and effectiveness of the PA process BIBREF4 , BIBREF5 , BIBREF6 . Automated analysis of large-scale PA data is now facilitated by technological and algorithmic advances, and is becoming essential for large organizations containing thousands of geographically distributed employees handling a wide variety of roles and tasks.
A typical PA process involves purposeful multi-step multi-modal communication between employees, their supervisors and their peers. In most PA processes, the communication includes the following steps: (i) in self-appraisal, an employee records his/her achievements, activities, tasks handled etc.; (ii) in supervisor assessment, the supervisor provides the criticism, evaluation and suggestions for improvement of performance etc.; and (iii) in peer feedback (aka 360° view), the peers of the employee provide their feedback. There are several business questions that managers are interested in. Examples:
In this paper, we develop text mining techniques that can automatically produce answers to these questions. Since the intended users are HR executives, ideally, the techniques should work with minimum training data and experimentation with parameter setting. These techniques have been implemented and are being used in a PA system in a large multi-national IT company.
The rest of the paper is organized as follows. Section SECREF2 summarizes related work. Section SECREF3 summarizes the PA dataset used in this paper. Section SECREF4 applies sentence classification algorithms to automatically discover three important classes of sentences in the PA corpus viz., sentences that discuss strengths, weaknesses of employees and contain suggestions for improving her performance. Section SECREF5 considers the problem of mapping the actual targets mentioned in strengths, weaknesses and suggestions to a fixed set of attributes. In Section SECREF6 , we discuss how the feedback from peers for a particular employee can be summarized. In Section SECREF7 we draw conclusions and identify some further work.
In most PA processes, the communication includes the following steps: (i) in self-appraisal, an employee records his/her achievements, activities, tasks handled etc.; (ii) in supervisor assessment, the supervisor provides the criticism, evaluation and suggestions for improvement of performance etc.; and (iii) in peer feedback (aka 360° view), the peers of the employee provide their feedback.
|
What steps do most PA processes include?
|
(i) in self-appraisal, an employee records his/her achievements, activities, tasks handled etc.; (ii) in supervisor assessment, the supervisor provides the criticism, evaluation and suggestions for improvement of performance etc.; and (iii) in peer feedback (aka 360° view), the peers of the employee provide their feedback.
|
null | false
| 331
|
With the widespread adoption of electronic health records (EHRs) in recent years, a large number of EHRs can be integrated and shared in different medical environments, which further supports clinical decision making and government health policy formulation BIBREF0. However, most of the information in current medical records is stored in natural language texts, which makes data mining algorithms unable to process these data directly. To extract relational entity triples from the text, researchers generally use entity and relation extraction algorithms, and rely on the central word to convert the triples into key-value pairs, which can be processed by conventional data mining algorithms directly. Fig. FIGREF1 shows an example of entity and relation extraction in the text of EHRs. The text contains three relational entity triples, i.e., $<$咳嗽, 程度等级, 反复$>$ ($<$cough, degree, repeated$>$), $<$咳痰, 程度等级, 反复$>$ ($<$expectoration, degree, repeated$>$) and $<$发热, 存在情况, 无$>$ ($<$fever, presence, nonexistent$>$). By using the symptom as the central word, these triples can then be converted into three key-value pairs, i.e., $<$咳嗽的程度等级, 反复$>$ ($<$degree of cough, repeated$>$), $<$咳痰的程度等级, 反复$>$ ($<$degree of expectoration, repeated$>$) and $<$发热的存在情况, 无$>$ ($<$presence of fever, nonexistent$>$).
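The triple-to-key-value conversion via the central word can be illustrated with a minimal sketch; the English key pattern `"<attribute> of <entity>"` is an assumed rendering of the Chinese `的` construction shown above.

```python
def triples_to_kv(triples):
    # flatten <entity, attribute, value> triples into key-value pairs,
    # using the clinical entity (symptom/disease) as the central word
    return {f"{attr} of {entity}": value for entity, attr, value in triples}
```

The resulting flat key-value pairs can be consumed directly by conventional data mining algorithms.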
To solve the task of entity and relation extraction, researchers usually follow a pipeline approach and split the task into two sub-tasks, namely named entity recognition (NER) BIBREF1 and relation classification (RC) BIBREF2, respectively.
However, this pipeline method usually fails to capture joint features between entity and relationship types. For example, for a valid relation “存在情况 (presence)” in Fig. FIGREF1, the types of its two relational entities must be “疾病 (disease)”, “症状 (symptom)” or “存在词 (presence word)”. To capture these joint features, a large number of joint learning models have been proposed BIBREF3, BIBREF4, among which bidirectional long short-term memory (Bi-LSTM) BIBREF5, BIBREF6 is commonly used as the shared parameter layer. However, compared with language models that benefit from abundant knowledge from pre-training and strong feature extraction capability, the Bi-LSTM model has relatively lower generalization performance.
To improve the performance, a simple solution is to incorporate a language model into joint learning as a shared parameter layer. However, existing models only introduce language models into the NER or RC task separately BIBREF7, BIBREF8. Therefore, the joint features between entity and relationship types still cannot be captured. Meanwhile, BIBREF9 considered the joint features, but it also uses Bi-LSTM as the shared parameter layer, resulting in the same problem as discussed previously.
Given the aforementioned challenges and current research, we propose a focused attention model based on the widely known BERT language model BIBREF10 to jointly learn the NER and RC tasks. Specifically, through a dynamic range attention mechanism, we construct a task-specific MASK matrix to control the attention range of the last $K$ layers in the BERT language model, so that the model focuses on the words relevant to the task. This process helps obtain the corresponding task-specific context-dependent representations. In this way, the modified BERT language model can be used as the shared parameter layer when jointly learning the NER and RC tasks. We call the modified BERT language model the shared task representation encoder (STR-encoder) in the remainder of this paper.
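The effect of a task-specific MASK matrix on one attention layer can be sketched as follows. The scores are toy values; practical implementations usually add a large negative constant to masked logits rather than literal `-inf`, but the principle is the same.

```python
import math

def masked_attention_scores(scores, mask):
    # apply a task-specific MASK matrix before the softmax: positions with
    # mask == 0 receive -inf and therefore zero attention weight, which
    # restricts each token's attention range to the words of the task
    out = []
    for row, mrow in zip(scores, mask):
        masked = [s if m else float("-inf") for s, m in zip(row, mrow)]
        mx = max(masked)
        es = [math.exp(v - mx) for v in masked]
        z = sum(es)
        out.append([e / z for e in es])
    return out
```

A NER-style mask can let every token attend to the whole sentence, while an RC-style mask can restrict attention to the candidate entities, yielding two task-specific representations from one shared encoder.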
To sum up, the main contributions of our work are summarized as follows:
We propose a focused attention model to jointly learn the NER and RC tasks. The model integrates the BERT language model as a shared parameter layer to achieve better generalization performance.
In the proposed model, we incorporate a novel structure, called STR-encoder, which changes the attention range of the last $K$ layers in BERT language model to obtain task-specific context-dependent representations. It can make full use of the original structure of BERT to produce the vector of the task, and can directly use the prior knowledge contained in the pre-trained language model.
For the RC task, we propose two different MASK matrices to extract the required feature representation. The performance of these two matrices is analyzed and compared in the experiments.
The rest of the paper is organized as follows. We briefly review the related work on NER, RC and joint entity and relation extraction in Section SECREF2. In Section SECREF3, we present the proposed focused attention model. We report the experimental results in Section SECREF4. Section SECREF5 is dedicated to studying several key factors that affect the performance of our model. Finally, conclusion and future work are given in Section SECREF6.
Given the aforementioned challenges and current research, we propose a focused attention model based on the widely known BERT language model to jointly learn the NER and RC tasks. Specifically, through the dynamic range attention mechanism, we construct a task-specific MASK matrix to control the attention range of the last $K$ layers in the BERT language model, leading the model to focus on the words of the task.
|
What model do they modify?
|
BERT language model.
|
null | false
| null |
Can you please divide these fruit and vegetables into red and green colors? Some can be either red or green, so list those as 'both red and green varieties'. Here is the list: cherries, raspberries, lettuce, celery, strawberries, apples, peppers, cabbage, beans, kale, tomatoes, capsicum and spinach.
|
Lettuce, celery, beans, kale and spinach are all green-colored. Cherries, raspberries and strawberries are red-colored. Apples, peppers, cabbage, tomatoes and capsicum have both red and green varieties.
|
|
null | false
| null |
How should I go about planning an agenda for the upcoming leadership meeting I am hosting?
|
In order to determine what a good agenda would look like, it's important to first decide what you want to accomplish during your leadership meeting. Common leadership meeting goals are reviewing team performance for the last period, aligning on the priorities and objectives you will focus on for the upcoming year, tackling problems that can best be worked on while you are all in the same room, and building personal and professional relationships through teaming exercises and having common experiences.
Next, you should prioritize which of these goals is most important to you. If you could only accomplish one of them, which would it be? Continue to identify the next most important goal until you have fully prioritized your list of goals.
The next step is to determine how much time you have to dedicate to each topic in order to ensure you can accomplish your goals fully.
Finally, identify how many breaks you will need to build into your meeting time to ensure team members do not become too mentally taxed. One best practice is to take a 15-30 minute break every 90 minutes, and to take a one hour break every 4 hours.
Once you have identified all of these parameters, you can build your leadership meeting agenda.
|
|
null | false
| null |
Classify each of the following financial assets as either liquid or illiquid; checking accounts, savings accounts, certificate of deposits (CDs), Stocks, Bonds, mutual funds, Real estate investment trusts (REITs), Exchange Traded Funds (ETFs), 401K, Investment Art, Physical Property.
|
Liquid assets are those assets that can be sold very quickly without a significant loss in value. This demands a healthy and vibrant market (demand) for these assets, the most relatable example being money. Given this definition, assets like checking accounts, savings accounts, individual stocks, Exchange Traded Funds, Real Estate Investment Trusts, and some types of highly popular short-term bonds are considered highly liquid: selling these assets immediately and without loss of face value is trivial and unrestricted. Assets such as mutual funds, which have restrictions on selling, certificates of deposit, which tie your money up for a length of time, 401Ks, which penalize early withdrawals, investment art, and physical property are often considered illiquid, as selling them is either restricted by contract or there simply isn't enough interest in the market to sell these assets in a timely fashion and without loss of value.
|
|
null | false
| 199
|
Evidence-based medicine (EBM) is of primary importance in the medical field. Its goal is to present statistical analyses of issues of clinical focus based on retrieving and analyzing numerous papers in the medical literature BIBREF0 . The PubMed database is one of the most commonly used databases in EBM BIBREF1 .
Biomedical papers, describing randomized controlled trials in medical intervention, are published at a high rate every year. The volume of these publications makes it very challenging for physicians to find the best medical intervention for a given patient group and condition BIBREF2 . Computational methods and natural language processing (NLP) could be adopted in order to expedite the process of biomedical evidence synthesis. Specifically, NLP tasks applied to well structured documents and queries can help physicians extract appropriate information to identify the best available evidence in the context of medical treatment.
Clinical questions are formed using the PIO framework, where clinical issues are broken down into four components: Population/Problem (P), Intervention (I), Comparator (C), and Outcome (O). We will refer to these categories as PIO elements, by using the common practice of merging the C and I categories. In BIBREF3 a literature screening performed in 10 systematic reviews was studied. It was found that using the PIO framework can significantly improve literature screening efficacy. Therefore, efficient extraction of PIO elements is a key feature of many EBM applications and could be thought of as a multi-label sentence classification problem.
Previous works on PIO element extraction focused on classical NLP methods, such as Naive Bayes (NB), Support Vector Machines (SVM) and Conditional Random Fields (CRF) BIBREF4 , BIBREF5 . These models are shallow and limited in terms of modeling capacity. Furthermore, most of these classifiers are trained to extract PIO elements one by one which is sub-optimal since this approach does not allow the use of shared structure among the individual classifiers.
Deep neural network models have increased in popularity in the field of NLP. They have pushed the state of the art of text representation and information retrieval. More specifically, these techniques enhanced NLP algorithms through the use of contextualized text embeddings at word, sentence, and paragraph levels BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 .
More recently, jin2018pico proposed a bidirectional long short term memory (LSTM) model to simultaneously extract PIO components from PubMed abstracts. To our knowledge, that study was the first in which a deep learning framework was used to extract PIO elements from PubMed abstracts.
In the present paper, we build a dataset of PIO elements by improving the methodology found in BIBREF12. Furthermore, we build a multi-label PIO classifier, along with a boosting framework, based on the state-of-the-art text embedding model BERT. This embedding model has been shown to offer better contextualization than a bidirectional LSTM model BIBREF9.
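As a minimal sketch (not the paper's implementation; the function name and threshold are our own illustrative choices), multi-label PIO classification can be realized by scoring each of the P, I, and O labels independently with a sigmoid, so that a single sentence may receive several labels at once:

```python
import math

def multi_label_predict(logits, threshold=0.5):
    """Independent sigmoid per label (e.g. P, I, O).

    Unlike softmax single-label classification, each label is decided
    on its own, so a sentence can belong to several PIO categories.
    """
    probs = [1.0 / (1.0 + math.exp(-z)) for z in logits]
    return [p >= threshold for p in probs]

# Example scores for (P, I, O) produced by any sentence encoder:
print(multi_label_predict([2.0, -2.0, 0.5]))  # [True, False, True]
```

In practice the logits would come from a BERT-style sentence representation followed by a linear layer, one output per PIO label.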
Evidence-based medicine (EBM) is of primary importance in the medical field. Its goal is to present statistical analyses of issues of clinical focus based on retrieving and analyzing numerous papers in the medical literature.
|
Which field in medicine is this paper relevant to?
|
Evidence-based medicine.
|
null | false
| null |
Make me a list of five cocktails made using vodka; include any other ingredients required.
|
Here are some cocktails that use Vodka as the primary ingredient:
- Bloody Mary is made using vodka, tomato juice and a variety of optional ingredients to add flavor. Popular additions are Worcestershire sauce, hot sauce, celery, olives, salt, black pepper and lemon juice.
- Vodka Martini is made with vodka and vermouth and is frequently garnished with olives or a twist of lemon rind.
- White Russian is made using vodka, coffee liqueur and either milk or cream
- Caipiroska is made with vodka, lime and raw sugar
- Cosmopolitan is made using vodka, orange liqueur and cranberry juice
|
|
null | false
| null |
Kosmos 1375 was placed into a low Earth orbit with a perigee of 986 kilometres (613 mi), an apogee of 1,003 kilometres (623 mi), 65.8 degrees of inclination, and an orbital period of 105 minutes. On 18 June 1982, it was successfully intercepted and destroyed by Kosmos 1379 in the final Soviet anti-satellite weapons test to be conducted. As of 2022, debris is still in orbit.
Kosmos 1375 was the last of ten Lira satellites to be launched, of which all but the first were successful. Lira was derived from the earlier DS-P1-M satellite, which it replaced.
|
How many of the Lira satellites were successful?
|
10 were launched, 1 was not successful, so 10-1 = 9 were successful.
|
2002.10361
| false
| null |
Demographic variations are rooted in documents, especially in social media data BIBREF26, BIBREF25, BIBREF10. Such variations could further impact the performance and fairness of document classifiers. In this study, we experiment with four different classification models including logistic regression (LR), recurrent neural network (RNN) BIBREF35, convolutional neural network (CNN) BIBREF36 and Google BERT BIBREF37. We present the baseline results of both performance and fairness evaluations across the multilingual corpus.
In this study, we experiment with four different classification models including logistic regression (LR), recurrent neural network (RNN) BIBREF35, convolutional neural network (CNN) BIBREF36 and Google BERT BIBREF37.
|
Which document classifiers do they experiment with?
|
The answers are shown as follows:
* logistic regression (LR), recurrent neural network (RNN) BIBREF35, convolutional neural network (CNN) BIBREF36 and Google BERT BIBREF37
|
null | false
| null |
Describe the main areas of assembly in an automotive factory.
|
A traditional automotive assembly plant has a number of process areas used to produce a vehicle. The process begins with chassis assembly, where the lower frame of the vehicle is welded. In parallel, the body shop produces the body shell, which then moves to the paint shop to be painted. The painted body is then married with the chassis, and components such as the engine, transmission, seats, and infotainment subsystems are assembled onto the vehicle. Lastly, the vehicle undergoes a quality assurance step to ensure the finished automobile functions as intended.
|
|
null | false
| null |
The Tour de France (French pronunciation: [tuʁ də fʁɑ̃s]) is an annual men's multiple-stage bicycle race primarily held in France, while also occasionally passing through nearby countries. Like the other Grand Tours (the Giro d'Italia and the Vuelta a España), it consists of 21 stages, each a day long, over the course of 23 days, coinciding with the Bastille Day holiday. It is the oldest of the Grand Tours and generally considered the most prestigious.
|
Given this paragraph about the Tour De France, how long is the race
|
The Tour de France is 21 stages long, with each stage taking one day to complete. The race takes 23 days from start to finish due to the 2 rest days the riders receive.
|
null | false
| null |
Everything Everywhere All At Once is the most awarded film of all time. The film received a leading 11 nominations at the 95th Academy Awards and won a leading seven awards: Best Picture, Best Director, Best Actress (Yeoh), Best Supporting Actor (Quan), Best Supporting Actress (Curtis), Best Original Screenplay, and Best Film Editing. It also won two Golden Globe Awards, five Critics' Choice Awards (including Best Picture), one BAFTA Award, a record four SAG Awards (including Best Ensemble), a record seven Independent Spirit Awards (including Best Feature), and swept the four major guild awards (DGA, PGA, SAG, and WGA).
|
Given this paragraph, in which categories did Everything Everywhere All At Once win an Oscar and how many Academy Awards did the film win in total?
|
Best Picture, Best Director, Best Actress, Best Supporting Actor, Best Supporting Actress, Best Original Screenplay, and Best Film Editing. The film won 7 Oscars.
|
null | false
| null |
If you were to weigh each one of these items on average, what would be considered heavy and light if you were to carry them: boulder, pebble, feather, bowling ball, elephant, seed, sand, dirt, water, books, papers, backpack
|
heavy: boulder, bowling ball, elephant, sand, dirt, water, backpack
light: feather, seed, books, papers
|
|
null | false
| null |
Fishing tackles are the equipment used by fishers when fishing. Almost any equipment or gear used for fishing can be called a fishing tackle, although the term is most commonly associated with gear used in angling. Some examples are hooks, lines, sinkers, floats, rods, reels, baits, lures, spears, nets, gaffs, traps, waders, and tackle boxes. Fishing techniques refer to the ways the tackles are used when fishing.
Tackles that are attached to the end of a fishing line are collectively called terminal tackles. These include hooks, sinkers, floats, leader lines, swivels, split rings, and any wires, snaps, beads, spoons, blades, spinners and clevises used to attach spinner blades to fishing lures. People also tend to use dead or live bait fish as another form of bait.
|
What is fishing tackle?
|
Fishing tackle is gear used to catch fish and includes hooks, lines, sinkers, floats, rods, reels, baits, lures, spears, nets, gaffs, traps, waders, and tackle boxes.
|
null | false
| 154
|
Four publicly available corpora are selected for our analysis. These corpora are based on Wikipedia, so they are more comparable than the others, and they have already been used for the evaluation of several QA systems.
WikiQA BIBREF6 comprises questions selected from Bing search queries, where user click data link the questions to their corresponding Wikipedia articles. The abstracts of these articles are then extracted to create answer candidates. The assumption is made that if many queries lead to the same article, it must contain the answer context; however, this assumption fails on some occasions, which makes this dataset more challenging. Since the existence of answer contexts is not guaranteed in this task, it is called answer triggering instead of answer selection.
SelQA BIBREF7 is a product of five annotation tasks through crowdsourcing. It consists of about 8K questions where a half of the questions are paraphrased from the other half, aiming to reduce contextual similarities between questions and answers. Each question is associated with a section in Wikipedia where the answer context is guaranteed, and also with five sections selected from the entire Wikipedia where the selection is made by the Lucene search engine. This second dataset does not assume the existence of the answer context, so can be used for the evaluation of answer triggering.
SQuAD BIBREF12 presents 107K+ crowdsourced questions on 536 Wikipedia articles, where the answer contexts are guaranteed to exist within the provided paragraph. It contains annotation of answer phrases as well as the pointers to the sentences including the answer phrases; thus, it can be used for both answer extraction and selection. This corpus also provides human accuracy on those questions, setting up a reasonable upper bound for machines. To avoid overfitting, the evaluation set is not publicly available although system outputs can be evaluated by their provided script.
InfoboxQA BIBREF13 gives 15K+ questions based on the infoboxes from 150 articles in Wikipedia. Each question is crowdsourced and associated with an infobox, where each line of the infobox is considered an answer candidate. This corpus emphasizes the gravity of infoboxes, which summarize arguably the most commonly asked information about those articles. Although the nature of this corpus is different from the others, it can also be used to evaluate answer selection.
It consists of about 8K questions where a half of the questions are paraphrased from the other half, aiming to reduce contextual similarities between questions and answers.
|
What does the SELQA consist of?
|
It consists of about 8K questions where a half of the questions are paraphrased from the other half, aiming to reduce contextual similarities between questions and answers.
|
null | false
| null |
Write something you may never forget.
|
One experience that has stayed with me is the time I almost got arrested at Tokyo airport. I was traveling from Dallas to Shanghai and had a layover in Tokyo. While waiting to board my next flight, I heard my name called and was brought to a small room where several police officers were waiting. They politely asked to inspect my suitcase, and I nervously agreed. I assumed the alert may have been caused by the safety shoes inside, but to my surprise, they found three bullets in my suitcase that I had kept as a memento from a hunting activity. When they asked if they were real, I truthfully replied yes. They called someone from the lab, and almost 10 people came to take a look at the bullets. I tried to explain that they were from a legal hunting activity in the USA, and that I had simply forgotten to clear them out before traveling. Luckily, they gave me a release form, but unfortunately, my flight had already departed, and it was the last one for that day. I had to re-book another flight for the following day. It was a stressful and frustrating experience, but I learned an important lesson: always check and clear my suitcase before traveling.
|
|
1807.08204
| false
| null |
FLOAT SELECTED: Figure 1. A visual depiction of the NTP’ recursive computation graph construction, applied to a toy KB (top left). Dash-separated rectangles denote proof states (left: substitutions, right: proof score -generating neural network). All the non-FAIL proof states are aggregated to obtain the final proof success (depicted in Figure 2). Colours and indices on arrows correspond to the respective KB rule application.
FLOAT SELECTED: Figure 1. A visual depiction of the NTP’ recursive computation graph construction, applied to a toy KB (top left). Dash-separated rectangles denote proof states (left: substitutions, right: proof score -generating neural network). All the non-FAIL proof states are aggregated to obtain the final proof success (depicted in Figure 2). Colours and indices on arrows correspond to the respective KB rule application.
|
What are proof paths?
|
A sequence of logical statements represented in a computational graph
|
2004.02393
| false
| null |
To alleviate the noise in the distant supervision signal $\mathcal {C}$, in addition to the conditional selection, we further propose a cooperative Reasoner model, also implemented with the MatchLSTM architecture (see Appendix SECREF6), to predict the linking entity from the selected passages. Intuitively, when the Ranker makes more accurate passage selections, the Reasoner will work with less noisy data and thus is more likely to succeed. Specifically, the Reasoner learns to extract the linking entity from chains selected by a well-trained Ranker, and it benefits the Ranker training by providing extra rewards. Taking 2-hop as an example, we train the Ranker and Reasoner alternately as a cooperative game:
Intuitively, when the Ranker makes more accurate passage selections, the Reasoner will work with less noisy data and thus is easier to succeed. Specifically, the Reasoner learns to extract the linking entity from chains selected by a well-trained Ranker, and it benefits the Ranker training by providing extra rewards.
|
How do two models cooperate to select the most confident chains?
|
The answers are shown as follows:
* Reasoner learns to extract the linking entity from chains selected by a well-trained Ranker, and it benefits the Ranker training by providing extra rewards
|
null | false
| null |
What is the difference between a fire fighter and a nurse?
|
A fire fighter is responsible for responding to emergencies that are mostly related to fires. A typical task would be to ride in a fire truck from the fire station to a home that has caught fire and use water and other fire-fighting tools to put it out. A nurse is responsible for providing medical care to patients, typically in healthcare facilities like hospitals, nursing homes and outpatient clinics. A typical task for a nurse would be to care for the wounds and help manage pain for a patient following surgery.
|
|
null | false
| null |
How reliable is public transportation?
|
Reliability of public transportation varies drastically depending on which city you are in, and depends on investment in and maintenance of existing infrastructure. For example, the public transportation network in Switzerland spans a wide network comprising trains, trams, buses, and ferries, runs on a schedule timely to the minute, and is used as a primary mode of transportation by many residents. In contrast, it can be challenging to get around many cities in the United States without a car. Public transportation ridership often depends on reliability, which in turn often determines the availability of investments in the required infrastructure. This creates an unfortunate cycle, making it very difficult for public transportation to become more reliable without external investments when the system must fund itself.
|
|
null | false
| null |
How do you build a roof?
|
Roofs are more complex than they look from the outside. Here are the basic steps to building a roof:
1. Build out the shape of the roof with a wood frame. This is the roof's skeleton.
2. Cover the frame with decking. Decking is the first layer of the roof. It's a thin piece of wood that all the other material will rest on.
3. Add an underlayment on top of the decking. This is a synthetic barrier that guards against water and fire, and helps the roof maintain its shape if the wood under it expands or contracts.
4. Add flashing around vertical elements on the roof. Flashing is a thin sheet of metal that prevents water from getting into gaps around vertical elements like chimneys and vents.
5. Apply shingles to the roof. Shingles are made of asphalt and they are the outermost layer of a roof. Shingles are the first line of defense in protecting your home, but they can also add color and design elements to your home.
|
|
null | false
| null |
"Eponymous" redirects here. For the album, see Eponymous (album). "Self-titled" redirects here. For other uses, see Self-titled (disambiguation). Not to be confused with Namesake.
An eponym is a person, a place, or a thing after whom or which someone or something is, or is believed to be, named. The adjectives derived from the word eponym include eponymous and eponymic.

Usage of the word

The term eponym functions in multiple related ways, all based on an explicit relationship between two named things. A person, place, or thing named after a particular person shares an eponymous relationship with that person. In this way, Elizabeth I of England is the eponym of the Elizabethan era, but the Elizabethan era can also be referred to as the eponym of Elizabeth I of England.

When Henry Ford is referred to as "the eponymous founder of the Ford Motor Company", his surname "Ford" and the name of the motor company have an eponymous relationship. The word "eponym" can also refer to the title character of a fictional work (such as Rocky Balboa of the Rocky film series), as well as to self-titled works named after their creators (such as the album The Doors by the band the Doors). Walt Disney created the eponymous Walt Disney Company, with his name similarly extended to theme parks such as Walt Disney World. Medical eponymous terms are often called medical eponyms, although that usage is deprecable.
|
What are the examples of eponyms provided in the passage. List them in the bulleted format.
|
- Elizabeth I of England and the Elizabethan era
- Henry Ford and the Ford Motor Company
- Rocky Balboa and the Rocky film series
- The album The Doors and the band the Doors
- Walt Disney and the Walt Disney Company and Walt Disney World
|
null | false
| null |
Where can I go to request an absentee ballot for an upcoming election in Wisconsin, or to track the status of my ballot request?
|
The MyVote.wi.gov site has several resources to help Wisconsin voters. One resource allows you to enter your name and birth date and request an absentee ballot. You should plan to do this well in advance of the election. Once you have submitted your request, you can track whether the ballot has been sent or received through the website. Not only do you have to complete the ballot correctly in advance, but you must also have it properly witnessed, and your witness must correctly complete their section of the ballot. Additionally, your ballot must be received by election day. Ballots received after the election date will not be counted--regardless of whether the delay was the responsibility of the postal service. If you are concerned that there is not enough time for the postal service to return the ballot on time, you can drop off your absentee ballot in person at your polling site on the day of the election, or you can simply vote in person at the polling site. Your ballot cannot be turned in by another person to the polling site. This website is provided by the Wisconsin Elections Commission, which is a bipartisan commission charged with implementing Wisconsin's election statutes.
|
|
null | false
| 32
|
Propaganda aims at influencing people's mindset with the purpose of advancing a specific agenda. In the Internet era, thanks to the mechanism of sharing in social networks, propaganda campaigns have the potential of reaching very large audiences BIBREF0, BIBREF1, BIBREF2.
Propagandist news articles use specific techniques to convey their message, such as whataboutism, red herring, and name calling, among many others (cf. Section SECREF3). Whereas proving intent is not easy, we can analyse the language of a claim/article and look for the use of specific propaganda techniques. Working at this fine-grained level can yield more reliable systems, and it also makes it possible to explain to the user why an article was judged as propagandist by an automatic system.
With this in mind, we organised the shared task on fine-grained propaganda detection at the NLP4IF@EMNLP-IJCNLP 2019 workshop. The task is based on a corpus of news articles annotated with an inventory of 18 propagandist techniques at the fragment level. We hope that the corpus would raise interest outside of the community of researchers studying propaganda. For example, the techniques related to fallacies and the ones relying on emotions might provide a novel setting for researchers interested in Argumentation and Sentiment Analysis.
We hope that the corpus would raise interest outside of the community of researchers studying propaganda.
|
What do the authors hope the proposed corpus does?
|
They hope that the corpus would raise interest outside of the community of researchers studying propaganda.
|
null | false
| 175
|
Electroencephalography (EEG) is a non-invasive way of measuring the electrical activity of the human brain. In BIBREF0 we demonstrated deep learning based automatic speech recognition (ASR) using EEG signals for a limited English vocabulary of four words and five vowels. In this paper we extend our work to a much larger English vocabulary, and we use state-of-the-art end-to-end continuous speech recognition models to perform recognition. In our prior work we predicted isolated words and vowels.
ASR systems form the front end or back end of many cutting-edge voice-activated technologies like Amazon Alexa, Apple Siri, Windows Cortana, Samsung Bixby, etc. Unfortunately, these systems are trained to recognize text only from acoustic features. This limits technology accessibility for people with speaking disabilities and disorders. The research work presented in this paper tries to address this issue by investigating speech recognition using only EEG signals with no acoustic input, and also by combining EEG features with traditional acoustic features to perform recognition. We believe the former will help with speech restoration for people who cannot speak at all, and the latter will help people who have speaking disabilities like broken or discontinued speech to use voice-activated technologies with a better user experience, thereby helping to improve technology accessibility.
ASR performance is degraded in presence of noisy speech and in real life situations most of the speech is noisy. Inspired from the unique robustness to environmental artifacts exhibited by the human auditory cortex BIBREF1 , BIBREF2 we used very noisy speech data for this work and demonstrated lower word error rate (WER) for smaller corpus using EEG features, concatenation of EEG features and acoustic features.
In BIBREF3 authors decode imagined speech from EEG using synthetic EEG data and a connectionist temporal classification (CTC) network, but in our work we use real EEG data recorded along with acoustics. In BIBREF4 authors perform envisioned speech recognition using a random forest classifier, but in our case we use end-to-end state-of-the-art models and perform recognition for noisy speech. In BIBREF5 authors demonstrate speech recognition using electrocorticography (ECoG) signals, which are invasive in nature, but in our work we use non-invasive EEG signals.
This work is mainly motivated by the results explained in BIBREF0, BIBREF6, BIBREF7, BIBREF3. In BIBREF6 the authors used a classification approach for identifying phonological categories in imagined and silent speech, but in our work we used state-of-the-art continuous speech recognition models, and our models predict words and characters at each time step. Similarly, in BIBREF7 a neural network based classification approach was used for predicting phonemes.
The major contribution of this paper is the demonstration of end-to-end continuous noisy speech recognition using only EEG features, and this paper further validates the concepts introduced in BIBREF0 for a much larger English corpus.
We believe the former will help with speech restoration for people who cannot speak at all, and the latter will help people who have speaking disabilities like broken or discontinued speech to use voice-activated technologies with a better user experience, thereby helping to improve technology accessibility.
|
Will the authors' work help with speech restoration for people who can not speak at all?
|
Yes, it will.
|
null | false
| null |
Radiohead are an English rock band formed in Abingdon, Oxfordshire, in 1985. The band consists of Thom Yorke (vocals, guitar, piano, keyboards); brothers Jonny Greenwood (lead guitar, keyboards, other instruments) and Colin Greenwood (bass); Ed O'Brien (guitar, backing vocals); and Philip Selway (drums, percussion). They have worked with the producer Nigel Godrich and the cover artist Stanley Donwood since 1994. Radiohead's experimental approach is credited with advancing the sound of alternative rock.
|
Extract the members of the band Radiohead in a comma separated list.
|
Thom Yorke, Jonny Greenwood, Colin Greenwood, Ed O'Brien, Philip Selway
|
1612.05310
| false
| null |
Another challenging problem is the distinction between the classes “Troll” and “Engage”. This occurs when the direct responder is so intensely flared up by the suspected comment that his own comment becomes a trolling attempt. A useful indicator for distinguishing these cases is the presence of insults, and to detect them we look for swear words; but as we noted before, there is no guarantee that swear words are used for insulting. This kind of error affects the precision and recall for the “troll” and “engage” classes. A solution to this problem may be the inclusion of longer parts of the conversation. It is typical in a troll-engaged comment scheme to observe longer than usual exchanges between two users, in which the comments evolve into very agitated remarks. One may then use this information to disambiguate between the two classes.
This kind of error affects the precision and recall for the “troll” and “engage” classes. A solution to this problem may be the inclusion of longer parts of the conversation.
|
What potential solutions are suggested?
|
The answers are shown as follows:
* inclusion of longer parts of the conversation
|
null | false
| null |
What does quantitative easing mean?
|
Quantitative easing, or QE, is a policy action taken by a central bank to increase the amount of money in circulation with the goal of stimulating economic activity. The central bank achieves this by purchasing a predetermined amount of government bonds. Money is 'created' through this process.
|
|
null | false
| null |
What rhymes with orange?
|
There is only one perfect rhyming word for orange which is "sporange." A sporange is an old botanical term for "sporangium." The sporangium is the portion of a fern where asexual spores are created.
|
|
null | false
| null |
Give me a list of white wine varieties I should try
|
- Chardonnay
- Viura (a.k.a Macabeo and Macabeu)
- Trebbiano
- Marsanne
- Savagnin
- Sauvignon Blanc
- Grenache Blanc
- Roussanne
- Riesling
- Grüner Veltliner
- Aligoté
- Jacquère
- Rkatsiteli
- Assyrtiko
- Pinot Gris (a.k.a. Pinot Grigio)
- Pinot Blanc
- Garganega
- Godello
- Clairette Blanche
- Bourboulenc
- Xarel-lo
- Gewürztraminer
- Viognier
- Chenin Blanc
- Semillon
|
|
null | false
| null |
In signal processing and related disciplines, aliasing is an effect that causes different signals to become indistinguishable (or aliases of one another) when sampled. It also often refers to the distortion or artifact that results when a signal reconstructed from samples is different from the original continuous signal.
|
Please give me an example of this phenomenon that I can understand as a layman.
|
An example of aliasing that is easy to understand as a layman is the 'wagon wheel effect'. You've probably seen the wagon wheel effect when watching a movie or TV show: the wheels of a car appear to be spinning backwards even as the car moves forwards. This is an effect of aliasing. It occurs because the sampling rate of the film is not adequate to capture the frequency of the signal (the frequency at which the tires are spinning forwards), and the signal becomes indistinguishable from another signal (the tires spinning backwards). As an example of how this happens, imagine that the camera's frame rate allows it to record an image at about 90% of the frequency at which the tires are spinning. That is, the tire spins fast enough to rotate 90% of the way around between each picture the camera takes. In this case, the wheel will look like it has rotated backwards by 10% of a rotation in each image. When you put these images together in a film, the tires will appear to be rotating backwards.
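The wagon-wheel arithmetic above can be sketched in a few lines (the function name and the wrapping convention are our own illustrative choices): the apparent motion is the true per-frame rotation wrapped into the range a viewer can perceive, so anything past half a turn looks like backwards motion.

```python
def apparent_turns_per_frame(wheel_hz, camera_fps):
    """Fraction of a turn the wheel appears to move between frames.

    The true rotation is wrapped into (-0.5, 0.5]; a negative result
    means the wheel looks like it is spinning backwards.
    """
    frac = (wheel_hz / camera_fps) % 1.0
    return frac - 1.0 if frac > 0.5 else frac

# A wheel at 27 rev/s filmed at 30 frames/s turns 0.9 of a revolution
# per frame, but on film it appears to rotate about 0.1 of a turn
# backwards each frame:
print(apparent_turns_per_frame(27, 30))  # approximately -0.1
```

When the wheel speed is an exact multiple of the frame rate, the result is 0 and the wheel appears to stand still, which is the same effect seen under stroboscopic light.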
|
null | false
| 170
|
We applied this method to discover gender-associated words in both domains. In Table TABREF9, we present a sample of the most gender-associated nouns from the celebrity domain. Several themes emerge: for example, female celebrities seem to be more associated with appearance (“gown,” “photo,” “hair,” “look”), while male celebrities are more associated with creating content (“movie,” “film,” “host,” “director”). This echoes real-world trends: for instance, on the red carpet, actresses tend to be asked more questions about their appearance –- what brands they are wearing, how long it took to get ready, etc. –- while actors are asked questions about their careers and creative processes (as an example, see BIBREF31).
Table TABREF9 also includes some of the most gender-associated verbs and adjectives from the professor domain. Female CS professors seem to be praised for being communicative and personal with students (“respond,” “communicate,” “kind,” “caring”), while male CS professors are recognized for being knowledgeable and challenging the students (“teach,”, “challenge,” “brilliant,” “practical”). These trends are well-supported by social science literature, which has found that female teachers are praised for “personalizing” instruction and interacting extensively with students, while male teachers are praised for using “teacher as expert” styles that showcase mastery of material BIBREF32.
These findings establish that there are clear differences in how people talk about women and men – even with Bonferroni correction, there are still over 500 significantly gender-associated nouns, verbs, and adjectives in the celebrity domain and over 200 in the professor domain. Furthermore, the results in both domains align with prior studies and real world trends, which validates that our methods can capture meaningful patterns and innovatively provide evidence at the large-scale. This analysis also hints that it can be helpful to abstract from words to topics to recognize higher-level patterns of gender associations, which motivates our next section on clustering.
Table TABREF11 displays a sample of our results – we find that the clusters are coherent in context and the labels seem reasonable. In the next section, we discuss human evaluations that we conducted to more rigorously evaluate the output, but first we discuss the value of these methods toward analysis.
At the word-level, we hypothesized that in the celebrity domain, women were more associated with appearance and men with creating content. Now, we can validate those hypotheses against labeled clusters – indeed, there is a cluster labeled clothing that is 100% female (i.e. 100% words are female-associated), and a 80% male cluster labeled movie. Likewise, in the professor domain, we had guessed that women are associated with communication and men with knowledge, and there is a 100% female cluster labeled communication and a 89% male cluster labeled cognition. Thus, cluster labeling proves to be very effective at pulling out the patterns that we believed we saw at the word-level, but could not formally validate.
The clusters we mentioned so far all lean heavily toward one gender association or the other, but some clusters are interesting precisely because they do not lean heavily – this allows us to see where semantic groupings do not align exactly with gender association. For example, in the celebrity domain, there is a cluster labeled lover that has a mix of female-associated words (“boyfriend,” “beau,” “hubby”) and male-associated words (“wife,” “girlfriend”). Jointly leveraging cluster labels and gender associations allows us to see that in the semantic context of having a lover, women are typically associated with male figures and men with female figures, which reflects heteronormativity in society.
At the word-level, we hypothesized that in the celebrity domain, women were more associated with appearance and men with creating content.
|
What did the authors hypothesize at the word level?
|
At the word-level, the authors hypothesized that in the celebrity domain, women were more associated with appearance and men with creating content.
|
1909.06762
| false
| null |
Since dialogue datasets are not typically annotated with retrieval results, training the KB-retriever is non-trivial. To make the training feasible, we propose two methods: 1) we use a set of heuristics to derive the training data and train the retriever in a distantly supervised fashion; 2) we use Gumbel-Softmax BIBREF14 as an approximation of the non-differentiable selecting process and train the retriever along with the Seq2Seq dialogue generation model. Experiments on two publicly available datasets (Camrest BIBREF11 and InCar Assistant BIBREF6) confirm the effectiveness of the KB-retriever. The retrievers trained with distant supervision and with the Gumbel-Softmax technique both outperform the compared systems in automatic and human evaluations. Analysis empirically verifies our assumption that more than 80% of responses in the dataset can be supported by a single KB row and that better retrieval results lead to better task-oriented dialogue generation performance.
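The Gumbel-Softmax trick mentioned above replaces a hard, non-differentiable row selection with a temperature-controlled soft sample. A minimal dependency-free sketch (the function name and logits are illustrative; the paper's actual retriever operates on learned KB-row scores inside a Seq2Seq model) might look like:

```python
import math
import random

def gumbel_softmax(logits, tau=1.0):
    """Differentiable approximation of sampling a one-hot selection.
    Adds Gumbel(0,1) noise to each logit, then applies a
    temperature-scaled softmax; as tau -> 0 the output approaches
    a one-hot vector, recovering a hard row choice."""
    noisy = [l - math.log(-math.log(random.random() + 1e-20)) for l in logits]
    exps = [math.exp(n / tau) for n in noisy]
    z = sum(exps)
    return [e / z for e in exps]

# Toy usage: three candidate KB rows with scores 2.0, 0.5, -1.0.
random.seed(0)
weights = gumbel_softmax([2.0, 0.5, -1.0], tau=0.5)
```

Because the output is a smooth distribution over rows rather than an argmax, gradients can flow from the generation loss back into the retriever.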
Experiments on two publicly available datasets (Camrest BIBREF11 and InCar Assistant BIBREF6) confirm the effectiveness of the KB-retriever.
|
Which dialog datasets did they experiment with?
|
The answers are shown as follows:
* Camrest
* InCar Assistant
|
1911.03562
| false
| null |
FLOAT SELECTED: Figure 25 Average citations for papers published 1965–2016 (left side) and 2010–2016 (right side), grouped by venue and paper type.
|
Which journal and conference are cited the most in recent years?
|
CL Journal and EMNLP conference
|
1911.09886
| true
| null |
We focus on the task of extracting multiple tuples with overlapping entities from sentences. We choose the New York Times (NYT) corpus for our experiments. This corpus has multiple versions, and we choose the following two versions as their test datasets have a significantly larger number of instances of multiple relation tuples with overlapping entities. (i) The first version is used by BIBREF6 (BIBREF6) (mentioned as NYT in their paper) and has 24 relations. We name this version as NYT24. (ii) The second version is used by BIBREF11 (BIBREF11) (mentioned as NYT10 in their paper) and has 29 relations. We name this version as NYT29. We select 10% of the original training data and use it as the validation dataset. The remaining 90% is used for training. We include statistics of the training and test datasets in Table TABREF11.
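The 10%/90% validation/training split described above is a standard hold-out procedure. A minimal sketch (function name, seed, and shuffling strategy are assumptions; the paper does not specify how the 10% was sampled):

```python
import random

def train_val_split(examples, val_fraction=0.1, seed=13):
    """Hold out a fraction of the training examples as a validation set.
    Shuffles a copy of the data with a fixed seed so the split is
    reproducible, then slices off the validation portion."""
    items = list(examples)
    rng = random.Random(seed)
    rng.shuffle(items)
    n_val = int(len(items) * val_fraction)
    return items[n_val:], items[:n_val]

# Toy usage on 100 dummy example ids: 90 train, 10 validation.
train, val = train_val_split(range(100))
```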
This corpus has multiple versions, and we choose the following two versions as their test dataset has significantly larger number of instances of multiple relation tuples with overlapping entities. (i) The first version is used by BIBREF6 (BIBREF6) (mentioned as NYT in their paper) and has 24 relations. We name this version as NYT24. (ii) The second version is used by BIBREF11 (BIBREF11) (mentioned as NYT10 in their paper) and has 29 relations.
|
Are there datasets with relation tuples annotated, how big are datasets available?
|
Yes.
|
null | false
| 8
|
Abstracting latent representations from a long sequence of words, such as a paragraph, is a challenging task. While several novel neural network models BIBREF25 , BIBREF26 have been introduced in recent years for encoding a paragraph, Recurrent Neural Network (RNN)-based methods remain the most effective approaches. RNNs, especially long short-term memory (LSTM) BIBREF27 models, have been widely used to encode a paragraph for machine translation BIBREF28 , dialogue systems BIBREF29 and text summarization BIBREF30 because of their ability to model long-distance dependencies between words. In addition, among four typical pooling methods (sum, mean, last and max) for calculating sentence representations from RNN-encoded hidden states for individual words, max-pooling along with bidirectional LSTM (Bi-LSTM) BIBREF31 yields the current best universal sentence representation method BIBREF32 . We adopted a similar neural network architecture for paragraph encoding.
In addition, among four typical pooling methods (sum, mean, last and max) for calculating sentence representations from RNN-encoded hidden states for individual words, max-pooling along with bidirectional LSTM (Bi-LSTM) (Schuster and Paliwal, 1997) yields the current best universal sentence representation method (Conneau et al., 2017).
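The four pooling methods named in the excerpt reduce a sequence of per-token hidden states to one fixed-size vector. A minimal pure-Python sketch (the function name is hypothetical; in practice the hidden states would come from a Bi-LSTM encoder, which is omitted here):

```python
def pool(hidden_states, method="max"):
    """Collapse per-token vectors into one sentence vector.
    hidden_states: list of equal-length lists of floats, one per token.
    method: one of "sum", "mean", "last", "max" (element-wise over tokens)."""
    dim = len(hidden_states[0])
    if method == "max":
        return [max(h[i] for h in hidden_states) for i in range(dim)]
    if method == "mean":
        return [sum(h[i] for h in hidden_states) / len(hidden_states)
                for i in range(dim)]
    if method == "sum":
        return [sum(h[i] for h in hidden_states) for i in range(dim)]
    if method == "last":
        return hidden_states[-1]
    raise ValueError(f"unknown pooling method: {method}")

# Toy usage: two tokens with 2-dimensional hidden states.
states = [[1.0, 4.0], [3.0, 2.0]]
sent_vec = pool(states, "max")
```

Max-pooling keeps, per dimension, the strongest activation across all tokens, which is what the cited universal sentence representation result relies on.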
|
What are the four typical pooling methods for calculating sentence representations from RNN-encoded hidden states for individual words?
|
Sum, mean, last and max.
|