paper_id: string (length 10)
yes_no: bool (2 classes)
paper_index: int64 (0 to 519)
evidence: string (0 to 37.7k characters)
question: string (4 to 11.7k characters)
answer: string (1 to 26k characters)
null
false
null
Which is a species of fish? John Dory or Jack Sparrow
John Dory
null
false
194
Micro-blogging social media platforms have become very popular in recent years. One of the most popular platforms is Twitter, which allows users to broadcast short texts (i.e., 140 characters initially, and 280 characters in a recent platform update) in real time with almost no restrictions on content. Twitter is a source of people’s attitudes, opinions, and thoughts toward the things that happen in their daily life. Twitter data are publicly accessible through the Twitter application programming interface (API), and there are several tools to download and process these data. Twitter is increasingly used as a valuable instrument for surveillance research and predictive analytics in many fields, including epidemiology, psychology, and the social sciences. For example, Bian et al. explored the relation between promotional information and laypeople’s discussion on Twitter by using topic modeling and sentiment analysis BIBREF0. Zhao et al. assessed the mental health signals among sexual and gender minorities using Twitter data BIBREF1. Twitter data can be used to study and predict population-level targets, such as disease incidence BIBREF2, political trends BIBREF3, earthquake detection BIBREF4, and crime prediction BIBREF5, and individual-level outcomes or life events, such as job loss BIBREF6, depression BIBREF7, and adverse events BIBREF8. Since tweets are unstructured textual data, natural language processing (NLP) and machine learning, especially deep learning nowadays, are often used for preprocessing and analytics. However, many studies BIBREF9, BIBREF10, BIBREF11, especially those that analyze individual-level targets, need manual annotation of several thousand tweets, often by experts, to create gold-standard training datasets to be fed to the NLP and machine learning tools for subsequent, reliable automated processing of millions of tweets. Manual annotation is obviously labor-intensive and time-consuming. Crowdsourcing can scale up manual labor by distributing tasks to a large set of workers working in parallel instead of a single person working serially BIBREF12. Commercial platforms such as Amazon’s Mechanical Turk (MTurk, https://www.mturk.com/) make it easy to recruit a large crowd of people working remotely to perform time-consuming manual tasks such as entity resolution BIBREF13, BIBREF14 and image or sentiment annotation BIBREF15, BIBREF16. The annotation tasks published on MTurk can be done on a piecework basis and, given the very large pool of workers usually available (even after selecting a subset of those who have, say, a college degree), the tasks can be done almost immediately. However, any crowdsourcing service that relies solely on human workers will eventually become expensive when large datasets are needed, which is often the case when creating training datasets for NLP and deep learning tasks. Therefore, reducing the training dataset size (without losing performance and quality) would improve efficiency while containing costs. Query optimization techniques (e.g., active learning) can reduce the number of tweets that need to be labeled while yielding comparable performance for the downstream machine learning tasks BIBREF17, BIBREF18, BIBREF19. Active learning algorithms have been widely applied in various areas, including NLP BIBREF20 and image processing BIBREF21.
In a pool-based active learning scenario, data samples for training a machine learning algorithm (e.g., a classifier for identifying job loss events) are drawn from a pool of unlabeled data according to some form of informativeness measure (a.k.a. active learning strategies BIBREF22), and the most informative instances are then selected to be annotated. For a classification task, in essence, an active learning strategy should be able to pick the “best” samples to be labeled, i.e., those that will improve the classification performance the most. In this study, we integrated active learning into a crowdsourcing pipeline for the classification of life events based on individual tweets. We analyzed the quality of crowdsourcing annotations and then experimented with different machine/deep learning classifiers combined with different active learning strategies to answer the following two research questions (RQs): RQ1. How do (1) the amount of time that a human worker spends on each annotation task and (2) the number of workers assigned to each task impact the quality of the annotation results? RQ2. Which active learning strategy is most efficient and cost-effective for building event classification models using Twitter data?
Why do they integrate active learning into a crowdsourcing pipeline in the paper?
For the classification of life events based on individual tweets.
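To make the pool-based loop in the evidence above concrete, here is a minimal sketch of active learning with least-confidence sampling as the informativeness measure; the classifier, synthetic data, and query budget are illustrative assumptions, not the paper's pipeline. In the paper's setting, the queried instances would be sent to crowd workers for annotation instead of being labeled from known ground truth.

```python
# Minimal pool-based active learning loop with least-confidence sampling.
# The classifier, data, and "oracle" below are illustrative stand-ins.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=50, random_state=0)
labeled = list(range(20))                       # small seed set of annotated tweets
pool = [i for i in range(len(X)) if i not in labeled]

clf = LogisticRegression(max_iter=1000)
for _ in range(10):                             # annotation rounds
    clf.fit(X[labeled], y[labeled])
    probs = clf.predict_proba(X[pool])
    # Least confidence: query samples whose top predicted probability is lowest.
    uncertainty = 1.0 - probs.max(axis=1)
    query = [pool[i] for i in np.argsort(-uncertainty)[:20]]
    # Here the labels come from known ground truth; in the crowdsourcing
    # pipeline they would come from MTurk workers.
    labeled.extend(query)
    pool = [i for i in pool if i not in query]
```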
null
false
null
Which of the star signs are earth signs?
Virgo, Taurus and Capricorn are the three astrological earth signs in the Zodiac.
null
false
null
Tell me about cockney rhyming slang
Cockney rhyming slang is most closely associated with East London in England. It dates back to the 19th century. It works like this: a word or phrase is replaced by another word or phrase that rhymes with it. For example, instead of saying the word "face" you would say "boat race". The rhyming phrase is often longer than the original word, and so the rhyme gets abbreviated. In our example, "boat race" becomes "boat". So a sentence might become "you should have seen the look on their boat" instead of "you should have seen the look on their face". Another example might be "I'm off up the apple and pears to go to me Uncle Ned" - "I'm going upstairs to go to bed".
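As a playful illustration of the replace-then-abbreviate mechanism just described, here is a toy sketch; the dictionary holds only the examples from the answer above, and real rhyming slang abbreviates by convention rather than by a fixed "first word" rule.

```python
# Toy rhyming-slang substitution: swap a word for its rhyming phrase,
# optionally abbreviating the phrase to its first word ("boat race" -> "boat").
slang = {"face": "boat race", "stairs": "apple and pears", "bed": "Uncle Ned"}

def to_slang(word, abbreviate=False):
    phrase = slang.get(word, word)
    if abbreviate and phrase != word:
        return phrase.split()[0]
    return phrase

sentence = "you should have seen the look on their face"
print(" ".join(to_slang(w, abbreviate=True) for w in sentence.split()))
# -> you should have seen the look on their boat
```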
null
false
null
Classify the animals based on their food habits into Herbivores or Carnivores. Jackal, Hyena, Zebra, Giraffe
Herbivores - Zebra, Giraffe Carnivores. - Jackal, Hyena
null
false
null
What are the primary causes of inflation?
In economics, inflation is the growth rate of the price of goods and services for a particular selection of items. For example, items that a typical household consumes are captured in the Consumer Price Index, whereas items that domestic producers buy are included in the Producer Price Index. The drivers behind changes in the general price level can be attributed to two main causes. First, changes in the real demand and supply of goods and services will drive prices up or down. Higher prices will encourage suppliers to increase supply and discourage consumer demand, reestablishing market equilibrium. Second, changes in the money supply (i.e., the amount of currency in circulation, bank deposits, and other liquid monetary aggregates) can lead to inflation. When a country's central bank decides to increase the money supply through quantitative easing monetary policies, the amount of currency available to buy and sell the same amount of goods and services increases. If the rate at which money is exchanged in an economy remains unchanged (i.e., the velocity of money is constant), then a higher money supply will lead to higher prices because there is more money to transact the same amount of goods and services in the economy.
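The second driver above is commonly summarized by the quantity theory of money; the equation below is a standard statement of it, not part of the original answer:

```latex
% Quantity equation: money supply (M) times velocity of money (V)
% equals the price level (P) times real output (Q).
M V = P Q \quad\Longrightarrow\quad P = \frac{M V}{Q}
% Holding V and Q fixed, a 10% increase in M implies a 10% rise in P.
```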
null
false
267
In our evaluation of the proposed scheme, each classifier is implemented as a deep learning model having four layers, as illustrated in Figure FIGREF16, and is described as follows:
• The Input (a.k.a. Embedding) Layer. The input layer's size is defined by the number of inputs for that classifier. This number equals the size of the word vector plus the number of additional features. The word vector dimension was set to 30 in order to be able to encode every word in the vocabulary used.
• The hidden layer. The sigmoid activation was selected for the hidden LSTM layer. Based on preliminary experiments, the dimensionality of the output space for this layer was set to 200. This layer is fully connected to both the input and the subsequent layer.
• The dense layer. The output of the LSTM was run through an additional layer to improve the learning and obtain a more stable output. The ReLU activation function was used. Its size was selected equal to the size of the input layer.
• The output layer. This layer has 3 neurons to provide output in the form of probabilities for each of the three classes Neutral, Racism, and Sexism. The softmax activation function was used for this layer.
In total we experimented with 11 different setups of the proposed scheme, each with a different ensemble of classifiers; see Table TABREF17.
What layers does their model consist of?
The input layer, the hidden layer, the dense layer, and the output layer.
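A hedged Keras sketch of the four-layer architecture described in the evidence above; the vocabulary size, sequence length, and number of extra features are placeholder assumptions, while the 30-dim embedding, 200-unit sigmoid LSTM, input-sized ReLU dense layer, and 3-way softmax follow the description.

```python
# Sketch of the described four-layer classifier. Vocabulary size, sequence
# length, and extra-feature count are placeholders, not the paper's values.
from tensorflow.keras import layers, models

vocab_size, seq_len, n_extra = 10000, 50, 10       # assumed sizes
inputs = layers.Input(shape=(seq_len,))
x = layers.Embedding(vocab_size, 30)(inputs)        # 30-dim word vectors
x = layers.LSTM(200, activation="sigmoid")(x)       # hidden LSTM layer
x = layers.Dense(30 + n_extra, activation="relu")(x)  # dense layer sized like the input
outputs = layers.Dense(3, activation="softmax")(x)    # Neutral / Racism / Sexism
model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy")
model.summary()
```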
null
false
null
What does the software company Teamwork do and where is it located
Teamwork is an Irish, privately owned, web-based software company headquartered in Cork, Ireland. Teamwork creates task management and team collaboration software. Founded in 2007, as of 2016 the company stated that its software was in use by over 370,000 organisations worldwide (including Disney, Spotify and HP), and that it had over 2.4m users.
null
false
null
Seattle is located between the saltwater Puget Sound (an arm of the Pacific Ocean) to the west and Lake Washington to the east. The city's chief harbor, Elliott Bay, is part of Puget Sound, which makes the city an oceanic port. To the west, beyond Puget Sound, are the Kitsap Peninsula and Olympic Mountains on the Olympic Peninsula; to the east, beyond Lake Washington and the Eastside suburbs, are Lake Sammamish and the Cascade Range. Lake Washington's waters flow to Puget Sound through the Lake Washington Ship Canal (consisting of two man-made canals, Lake Union, and the Hiram M. Chittenden Locks at Salmon Bay, ending in Shilshole Bay on Puget Sound). The sea, rivers, forests, lakes, and fields surrounding Seattle were once rich enough to support one of the world's few sedentary hunter-gatherer societies. The surrounding area lends itself well to sailing, skiing, bicycling, camping, and hiking year-round. The city itself is hilly, though not uniformly so. Like Rome, the city is said to lie on seven hills; the lists vary but typically include Capitol Hill, First Hill, West Seattle, Beacon Hill, Queen Anne, Magnolia, and the former Denny Hill. The Wallingford, Delridge, Mount Baker, Seward Park, Washington Park, Broadmoor, Madrona, Phinney Ridge, Sunset Hill, Blue Ridge, Broadview, Laurelhurst, Hawthorne Hills, Maple Leaf, and Crown Hill neighborhoods are all located on hills as well. Many of the hilliest areas are near the city center, with Capitol Hill, First Hill, and Beacon Hill collectively constituting something of a ridge along an isthmus between Elliott Bay and Lake Washington. The break in the ridge between First Hill and Beacon Hill is man-made, the result of two of the many regrading projects that reshaped the topography of the city center. The topography of the city center was also changed by the construction of a seawall and the artificial Harbor Island (completed 1909) at the mouth of the city's industrial Duwamish Waterway, the terminus of the Green River. The highest point within city limits is at High Point in West Seattle, which is roughly located near 35th Ave SW and SW Myrtle St. Other notable hills include Crown Hill, View Ridge/Wedgwood/Bryant, Maple Leaf, Phinney Ridge, Mt. Baker Ridge, and Highlands/Carkeek/Bitterlake. North of the city center, Lake Washington Ship Canal connects Puget Sound to Lake Washington. It incorporates four natural bodies of water: Lake Union, Salmon Bay, Portage Bay, and Union Bay. Due to its location in the Pacific Ring of Fire, Seattle is in a major earthquake zone. On February 28, 2001, the magnitude 6.8 Nisqually earthquake did significant architectural damage, especially in the Pioneer Square area (built on reclaimed land, as are the Industrial District and part of the city center), and caused one fatality. Other strong quakes occurred on January 26, 1700 (estimated at 9 magnitude), December 14, 1872 (7.3 or 7.4), April 13, 1949 (7.1), and April 29, 1965 (6.5). The 1965 quake caused three deaths in Seattle directly and one more by heart failure. Although the Seattle Fault passes just south of the city center, neither it nor the Cascadia subduction zone has caused an earthquake since the city's founding. The Cascadia subduction zone poses the threat of an earthquake of magnitude 9.0 or greater, capable of seriously damaging the city and collapsing many buildings, especially in zones built on fill.
According to the United States Census Bureau, the city has a total area of 142.5 square miles (369 km2), 83.9 square miles (217 km2) of which is land and 58.7 square miles (152 km2), water (41.16% of the total area).
Based on this excerpt, what are the main topographical elements of Seattle, Washington?
Seattle is bounded by two main bodies of water, the Puget Sound and Lake Washington. Between these two bodies of water there is a ridge, with several neighborhoods along its top. The city in general is hilly, with many other neighborhoods on hills that are not along the main ridge.
null
false
115
The aim of the technical validation of the data is to guarantee good recording quality and to replicate findings of previous studies investigating co-registration of EEG and eye movement data during natural reading tasks (e.g. dimigen2011coregistration). We also compare the results to ZuCo 1.0 BIBREF1, which allows a more direct comparison due to the analogous recording procedure. Moreover, these previous studies were able to show an effect of fixation duration on the resulting FRPs. To show this dependency we followed two approaches. First, for each reading task, all single-trial FRPs were ordered by fixation duration and a vertical sliding time-window was used to smooth the data (Dimigen et al., 2011).
In what order did they sort single-trial FRPs in the paper?
Fixation duration.
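A small numpy sketch of the smoothing procedure described in the evidence above: single-trial FRPs are ordered by fixation duration, then averaged within a vertical sliding window over neighboring trials; the array shapes and window size are illustrative assumptions, not the study's parameters.

```python
# Order single-trial FRPs by fixation duration, then smooth the data with a
# vertical sliding window across neighboring trials (shapes are illustrative).
import numpy as np

n_trials, n_timepoints = 500, 200
frps = np.random.randn(n_trials, n_timepoints)   # one FRP per trial
fix_dur = np.random.uniform(0.1, 0.6, n_trials)  # fixation durations (s)

order = np.argsort(fix_dur)
sorted_frps = frps[order]

win = 30  # number of neighboring trials averaged together (assumed)
smoothed = np.array([
    sorted_frps[max(0, i - win // 2): i + win // 2 + 1].mean(axis=0)
    for i in range(n_trials)
])
```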
null
false
null
How would it feel to be the only person left alive in the US?
It’s done. I am the only one left, at least in the US. When it first happened, I was convinced that there must be other pockets of people in the US. I have spent the last five years traveling throughout the country, to large cities, small towns and rural areas and I have not seen or heard any people or any evidence that anyone else survived. After five years the loneliness is crushing. Every day I hope to find another human being – to talk to, to touch, perhaps even beginning to rebuild community together. I vacillate between rage and despair that we were unable to save our planet. We had so much global technology and innovation and yet as climate change grew continuously worse nations were unable to collaborate to find solutions. As more and more locations became uninhabitable due to rising heat and humidity, increasingly frequent and intense natural hazards, and failed crops, we simply let people die. Although I still cannot imagine what I could have done differently, I wish I had done more.
null
false
null
Identify which car manufacturer is Italian or American: Ferrari, GMC
GMC is American, Ferrari is Italian
null
false
null
Kyoto is one of the oldest municipalities in Japan, having been chosen in 794 as the new seat of Japan's imperial court by Emperor Kanmu. The original city, named Heian-kyō, was arranged in accordance with traditional Chinese feng shui following the model of the ancient Chinese capitals of Chang'an and Luoyang. The emperors of Japan ruled from Kyoto in the following eleven centuries until 1869. It was the scene of several key events of the Muromachi period, Sengoku period, and the Boshin War, such as the Ōnin War, the Honnō-ji Incident, the Kinmon incident and the Battle of Toba–Fushimi. The capital was relocated from Kyoto to Tokyo after the Meiji Restoration.
When was Kyoto the capital of Japan?
Kyoto became the seat of Japan's imperial court in 794, and the emperors of Japan ruled from Kyoto for the following eleven centuries, until the capital was relocated to Tokyo in 1869.
null
false
null
Do you have any dietary restrictions? If so, what are they.
Yes. I'm both kosher AND vegetarian. That means simply getting kosher food that contains meat or fish won't be something I can eat. Likewise, a vegetarian entree from a regular restaurant also won't work, as there are all sorts of problems that might ensue. These include: cross-contamination w/ utensils used to prepare meat or fish, and inclusion of ingredients that appear vegetarian but aren't (e.g., cheese made with animal rennet or enzymes, pastries containing gelatin made from bones, natural colorants or flavorings derived from insects or animals, egg-substitute made from plasmolyzed fish blood, etc.). My own personal standard doesn't require certification by a kosher agency; but in general, I'll only eat at places that only serve vegan or vegetarian items (and even at the latter, there may be items I can't eat).
null
false
161
Emotion detection has long been a topic of interest to scholars in the natural language processing (NLP) domain. Researchers aim to recognize the emotion behind the text and group similar ones into the same category. Establishing an emotion classifier can not only help understand each user's feelings but also be extended to various applications, for example, the motivation behind a user's interests BIBREF0. Building on the release of large text corpora on social media and the emotion categories proposed by BIBREF1, BIBREF2, numerous models have been proposed and have achieved impressive precision so far. For example, DeepMoji BIBREF3 utilized the transfer learning concept to enhance the understanding of emotions and sarcasm behind the target sentence. CARER BIBREF4 learned contextualized affect representations to make itself more sensitive to rare words and the scenario behind the texts. As methods mature, text-based emotion detection applications can be extended from a single utterance to a dialogue contributed by a series of utterances. Table TABREF2 illustrates the difference between single-utterance and dialogue emotion recognition. For the same utterances in Table TABREF2, even when the same person says the same sentence, the emotion it conveys may vary, depending on the background of the conversation, the tone of speaking, or personality. Therefore, for emotion detection, the information from preceding utterances in a conversation is critical. In SocialNLP 2019 EmotionX, the challenge is to recognize emotions for all utterances in the EmotionLines dataset, a dataset consisting of dialogues. Given the need to consider context at the same time, we develop two classification models inspired by bidirectional encoder representations from transformers (BERT) BIBREF5: FriendsBERT and ChatBERT. In this paper, we introduce our approaches, including causal utterance modeling, model pre-training, and fine-tuning.
Which two models did the author develop?
FriendsBERT and ChatBERT.
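As a rough illustration of fine-tuning a BERT-style classifier on an utterance paired with its preceding context (the kind of causal utterance modeling the evidence mentions), here is a hedged sketch using the Hugging Face transformers API; it is not the authors' code, and the label set and context pairing are assumptions.

```python
# Sketch: fine-tune BERT for utterance-level emotion classification,
# feeding the previous utterance as a context segment. Labels are assumed.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=4)  # e.g. joy / sadness / anger / neutral

context, utterance = "How are you doing?", "I could not be better!"
enc = tokenizer(context, utterance, return_tensors="pt", truncation=True)
labels = torch.tensor([0])

loss = model(**enc, labels=labels).loss  # one fine-tuning step would
loss.backward()                          # backprop this loss
```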
null
false
null
What are some items you can put on a bookshelf, other than books, to make it nice looking.
Peppering a bookshelf with items other than books can be aesthetically pleasing. Some ideas for items to add to a bookshelf include small containers or boxes to hold small items, pictures in picture frames, book ends, trinkets, polished stones or crystals, candles, statues, plants, flower arrangements (living or dried), a standing clock, antique machines, a lamp, or a globe.
1910.06701
false
null
Experiments ::: Baselines For comparison, we select several public models as baselines, including semantic parsing models: Syn Dep BIBREF6, the neural semantic parsing model (KDG) BIBREF22 with Stanford dependencies based sentence representations; OpenIE BIBREF6, KDG with open information extraction based sentence representations; SRL BIBREF6, KDG with semantic role labeling based sentence representations; traditional MRC models: BiDAF BIBREF3, an MRC model which utilizes a bi-directional attention flow network to encode the question and passage; QANet BIBREF12, which utilizes convolutions and self-attentions as the building blocks of encoders to represent the question and passage; BERT BIBREF23, a pre-trained bidirectional Transformer-based language model which has recently achieved state-of-the-art performance on many public MRC datasets; and numerical MRC models: NAQANet BIBREF6, a numerical version of the QANet model; NAQANet+, an enhanced version of NAQANet implemented by ourselves, which further considers real numbers (e.g. “2.5”), richer arithmetic expressions, data augmentation, etc. The enhancements are also used in our NumNet model and the details are given in the Appendix.
what are the existing models they compared with?
The answers are shown as follows: * Syn Dep * OpenIE * SRL * BiDAF * QANet * BERT * NAQANet * NAQANet+
null
false
null
Write an email to someone updating that I’ve accepted the position at their company to which they referred me
Hi, I wanted to update you that I accepted the offer! Thank you for being such a huge part of this, from coffee chats to interview prepping. You made all the difference in my application journey and I’m so excited to work with you.
null
false
null
The Indian Premier League (IPL) (also known as the TATA IPL for sponsorship reasons) is a men's Twenty20 (T20) cricket league held annually in India and contested by ten city-based franchise teams. The league was founded by the Board of Control for Cricket in India (BCCI) in 2007. Brijesh Patel is the incumbent chairman of the IPL. The competition is usually held annually in summer (between March and May) and has an exclusive window in the ICC Future Tours Programme, meaning that less international cricket takes place during the IPL seasons. The IPL is the most popular cricket league in the world, and in 2014 it was ranked sixth by average attendance among all sports leagues. In 2010, the IPL became the first sporting event in the world to be broadcast live on YouTube. The brand value of the IPL in 2022 was ₹90,038 crore (US$11 billion). According to the BCCI, the 2015 IPL season contributed ₹1,150 crore (US$140 million) to the GDP of the Indian economy. In December 2022, the league became a decacorn valued at $10.9 billion, registering a 75% growth in dollar terms since 2020, when it was valued at $6.2 billion, according to a report by consulting firm D & P Advisory. With five titles, Mumbai Indians are the most successful team in the league's history in terms of the number of titles won. The Chennai Super Kings have won four titles, the Kolkata Knight Riders have won two, and four other teams, the Rajasthan Royals, Deccan Chargers, Sunrisers Hyderabad and Gujarat Titans, have won the title once.
List the teams that have won IPL.
The teams that have won IPL are Mumbai Indians, Chennai Super Kings, Kolkata Knight Riders, Rajasthan Royals, Deccan Chargers, Sunrisers Hyderabad and Gujarat Titans.
null
false
null
Why do people enjoy surfing
Surfing is enjoyable because it connects you with the ocean and can often be seen as therapeutic. Additionally, surfing can be a great form of exercise, which helps maintain a healthy lifestyle. Lastly, surfing is done in the ocean and typically near a beach. People enjoy the atmosphere and joy the beach brings.
null
false
240
Paraphrase identification is an important topic in artificial intelligence; this task determines whether two sentences expressed in various forms are semantically similar BIBREF0. For example, “On Sunday, the boy runs in the yard” and “The child runs outside at the weekend” are identified as paraphrases. This task directly benefits many industrial applications, such as plagiarism identification BIBREF0, machine translation BIBREF1, and removing redundant questions on the Quora website BIBREF2. Recently, many methods have emerged, such as ABCNN BIBREF3, Siamese LSTM BIBREF2, and L.D.C BIBREF4. Conventionally, neural methodology aligns the sentence pair and then generates a matching score for paraphrase identification BIBREF4, BIBREF2. Regarding the alignment, we conjecture that the aligned unmatched parts are semantically critical, where we define corresponding word pairs with low similarity as aligned unmatched parts. For example: given “On Sunday, the boy runs in the yard” and “The child runs inside at the weekend”, the matched parts (i.e., (Sunday, weekend), (boy, child), run) barely contribute to the semantic sentence similarity, but the unmatched parts (i.e., “yard” and “inside”) determine that these two sentences are semantically dissimilar. For another example: given “On Sunday, the boy runs in the yard” and “The child runs outside at the weekend”, the aligned unmatched parts (i.e., “yard” and “outside”) are semantically similar, which makes the two sentences paraphrases. In conclusion, if the aligned unmatched parts are semantically consistent, the two sentences are paraphrases; otherwise they are non-paraphrases. Traditional alignment methods take advantage of the attention mechanism BIBREF2, which is a soft-max weighting technique. A weighting technique can pick out the most similar/dissimilar parts, but it is weak at modeling the aligned unmatched parts, which are the crucial evidence for identifying paraphrases. For the input sentences in Figure FIGREF1, the weight between “Sunday” and “run” is lower than the weight between “yard” and “inside”, but the former weight is not evidence of paraphrase/non-paraphrase, because those two words, being the most dissimilar, should not be aligned at all; such a comparison is inappropriate. To extract the aligned unmatched parts, in this paper, we embed the Hungarian algorithm BIBREF5 into the neural architecture as a Hungarian layer (Algorithm SECREF7). As illustrated in Figure FIGREF1, the alignment in sentence matching can be formulated as the task-assignment problem, which is tackled by the Hungarian algorithm. Simply put, the Hungarian algorithm works out the theoretically optimal alignment relationship in an exclusive manner, and the exclusiveness characterizes the aligned unmatched parts. For the example in Figure FIGREF1, because the Hungarian layer allocates the aligned pairs exclusively, the matched parts (i.e., (Sunday, weekend), (boy, child), run) are aligned first; then the word “yard” is assigned to the word “inside” with a negative similarity, providing strong evidence for discrimination. Specifically, our model performs this task in three steps. First, our model applies a BiLSTM to parse the input sentences into hidden representations. Then, the Hungarian layer leverages the hidden representations to extract the aligned unmatched parts. Last, we apply cosine similarity to measure the aligned unmatched parts for the final discrimination. Regarding the training process of the Hungarian layer, we modify the back-propagation algorithm in both directions.
In the forward pass, the Hungarian layer works out the alignment relationship, according to which the computational graph is dynamically constructed, as demonstrated in Figure FIGREF13. Once the computational graph has been dynamically constructed, backward propagation can be performed as usual in a conventional graph. We conduct our experiments on the public benchmark dataset of “Quora Question Pairs” for the task of paraphrase identification. Experimental results demonstrate that our model outperforms other baselines extensively and significantly, which verifies our theory about the aligned unmatched parts and illustrates the effectiveness of our methodology. Contributions. (1.) We offer a new perspective for paraphrase identification, which focuses on the aligned unmatched parts of two sentences. Accordingly, we propose the Hungarian layer to extract the aligned unmatched parts. The proposed method can achieve hard and exclusive alignments between two sequences, while we can learn parameters by end-to-end back-propagation. (2.) Our model outperforms other baselines extensively, verifying the effectiveness of our theory and method. Organization. In Section 2, we survey the related work of paraphrase identification and dynamic differentiable computational graphs. In Section 3, we introduce our neural architecture. In Section 4, we conduct the experiments. In Section 5, we conclude our paper and publish our code.
Does the model verify the theory about the aligned unmatched parts and illustrate the effectiveness of the methodology?
Yes, it does.
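A compact sketch of the exclusive alignment step the evidence describes, using scipy's linear_sum_assignment (an implementation of the Hungarian algorithm) on a word-similarity matrix; the random vectors stand in for BiLSTM hidden states, and the threshold for flagging "aligned unmatched parts" is an assumption.

```python
# Exclusive word alignment via the Hungarian algorithm, then cosine scoring
# of aligned pairs. Random vectors stand in for BiLSTM hidden states.
import numpy as np
from scipy.optimize import linear_sum_assignment

a = np.random.randn(7, 64)   # hidden states, sentence A (7 words)
b = np.random.randn(8, 64)   # hidden states, sentence B (8 words)

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8)

sim = np.array([[cosine(u, v) for v in b] for u in a])
rows, cols = linear_sum_assignment(-sim)   # maximize total similarity

# Exclusively aligned pairs with low similarity are the candidate
# "aligned unmatched parts" (threshold 0.0 is an assumption).
unmatched = [(i, j, sim[i, j]) for i, j in zip(rows, cols) if sim[i, j] < 0.0]
print(unmatched)
```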
null
false
335
While the demand for physical and manual labor is gradually declining, there is a growing need for a workforce with soft skills. Which soft skill do you think would be the most valuable in your daily life? According to an article in Forbes BIBREF0, 70% of employed Americans agree that public speaking skills are critical to their success at work. Yet, it is one of the most dreaded acts. Many people rate the fear of public speaking even higher than the fear of death BIBREF1. To alleviate the situation, several automated systems are now available that can quantify behavioral data for participants to reflect on BIBREF2. Predicting the viewers' ratings from the speech transcripts would enable these systems to generate feedback on the potential audience behavior. Predicting human behavior, however, is challenging due to its huge variability and the way the variables interact with each other. Running Randomized Control Trials (RCTs) to decouple each variable is not always feasible and is also expensive. It is possible to collect a large amount of observational data due to the advent of content sharing platforms such as YouTube, Massive Open Online Courses (MOOCs), or ted.com. However, the uncontrolled variables in an observational dataset always leave open the possibility of incorporating the effects of "data bias" into the prediction model. Recently, the problems of using biased datasets are becoming apparent. BIBREF3 showed that the error rates of commercial face-detectors are 43 times higher for dark-skinned females than for light-skinned males, due to bias in the training dataset. The unfortunate incident of Google's photo app tagging African-American people as “Gorilla” BIBREF4 also highlights the severity of this issue. We address the data bias issue as much as possible by carefully analyzing the relationships of different variables in the data generating process. We use a Causal Diagram BIBREF5, BIBREF6 to analyze and remove the effects of the data bias (e.g., the speakers' reputations, popularity gained by publicity, etc.) in our prediction model. In order to make the prediction model less biased to the speakers' race and gender, we confine our analysis to the transcripts only. Besides, we normalize the ratings to remove the effects of unwanted variables such as the speakers' reputations, publicity, contemporary hot topics, etc. For our analysis, we curate an observational dataset of public speech transcripts and other meta-data collected from the ted.com website. This website contains a large collection of high-quality public speeches that are freely available to watch, share, rate, and comment on. Every day, numerous people watch and annotate their perceptions about the talks. Our dataset contains 2231 public speech transcripts and over 5 million ratings from the spontaneous viewers of the talks. The viewers annotate each talk with 14 different labels: Beautiful, Confusing, Courageous, Fascinating, Funny, Informative, Ingenious, Inspiring, Jaw-Dropping, Long-winded, Obnoxious, OK, Persuasive, and Unconvincing. We use two neural network architectures in the prediction task. In the first architecture, we use an LSTM BIBREF7 for a sequential input of the words within the sentences of the transcripts. In the second architecture, we use a TreeLSTM BIBREF8 to represent the input sentences in the form of a dependency tree.
Our experiments show that the dependency tree-based model can predict the TED talk ratings with slightly higher performance (average F-score 0.77) than the word sequence model (average F-score 0.76). To the best of our knowledge, this is the best performance in the literature on predicting TED talk ratings. We compare the performances of these two models with a baseline of classical machine learning techniques using hand-engineered features. We find that the neural networks largely outperform the classical methods. We believe this gain in performance is achieved by the networks' ability to better capture the natural relationships of the words (as compared to the hand-engineered feature selection approach in the baseline methods) and the correlations among different rating labels.
Do the neural networks largely outperform the classical methods?
Yes, they do.
null
false
170
The study of gender and language has a rich history in social science. Its roots are often attributed to Robin Lakoff, who argued that language is fundamental to gender inequality, “reflected in both the ways women are expected to speak, and the ways in which women are spoken of” BIBREF2. Prominent scholars following Lakoff have included Deborah Tannen BIBREF3, Mary Bucholtz and Kira Hall BIBREF4, Janet Holmes BIBREF5, Penelope Eckert BIBREF6, and Deborah Cameron BIBREF7, along with many others. In recent decades, the study of gender and language has also attracted computational researchers. Echoing Lakoff's original claim, a popular strand of computational work focuses on differences in how women and men talk, analyzing key lexical traits BIBREF8, BIBREF9, BIBREF10 and predicting a person's gender from some text they have written BIBREF11, BIBREF12. There is also research studying how people talk to women and men BIBREF13, as well as how people talk about women and men, typically in specific domains such as sports journalism BIBREF14, fiction writing BIBREF15, movie scripts BIBREF16, and Wikipedia biographies BIBREF17, BIBREF18. Our work builds on this body by diving into two novel domains: celebrity news, which explores gender in pop culture, and student reviews of CS professors, which examines gender in academia and, particularly, the historically male-dominated field of CS. Furthermore, many of these works rely on manually constructed lexicons or topics to pinpoint gendered language, but our methods automatically infer gender-associated words and labeled clusters, thus reducing supervision and increasing the potential to discover subtleties in the data. Modeling gender associations in language could also be instrumental to other NLP tasks. Abusive language is often founded in sexism BIBREF0, BIBREF1, so models of gender associations could help to improve detection in those cases. Gender bias also manifests in NLP pipelines: prior research has found that word embeddings preserve gender biases BIBREF19, BIBREF20, BIBREF21, and some have developed methods to reduce this bias BIBREF22, BIBREF23. Yet, the problem is far from solved; for example, BIBREF24 showed that it is still possible to recover gender bias from “de-biased” embeddings. These findings further motivate our research, since before we can fully reduce gender bias in embeddings, we need to develop a deeper understanding of how gender permeates through language in the first place. We also build on methods to cluster words in word embedding space and automatically label clusters. Clustering word embeddings has proven useful for discovering salient patterns in text corpora BIBREF25, BIBREF26. Once clusters are derived, we would like them to be interpretable. Much research simply considers the top-n words from each cluster, but this method can be subjective and time-consuming to interpret. Thus, there are efforts to design methods of automatic cluster labeling BIBREF27. We take a similar approach to BIBREF28, who leverage word embeddings and WordNet during labeling, and we extend their method with additional techniques and evaluations.
What methods do the authors build on?
The authors build on methods to cluster words in word embedding space and automatically label clusters.
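A brief sketch of the general recipe the evidence builds on: cluster word vectors with k-means, then label each cluster automatically, here by its most central words. This follows the general idea rather than the authors' exact WordNet-based method, and the embeddings are random stand-ins.

```python
# Cluster word vectors with k-means and label each cluster by the words
# closest to its centroid. Random vectors stand in for trained embeddings.
import numpy as np
from sklearn.cluster import KMeans

words = [f"word{i}" for i in range(300)]
vecs = np.random.randn(300, 100)

km = KMeans(n_clusters=10, n_init=10, random_state=0).fit(vecs)
for c in range(10):
    members = np.where(km.labels_ == c)[0]
    dists = np.linalg.norm(vecs[members] - km.cluster_centers_[c], axis=1)
    top = [words[members[i]] for i in np.argsort(dists)[:5]]
    # A shared WordNet hypernym of these top words could serve as the label.
    print(f"cluster {c}: {top}")
```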
null
false
null
What is "the Scully effect?"
The Scully Effect, named for the science-minded character Dana Scully from the sci-fi TV show "The X-Files," is the term used to suggest the impact that the character had on influencing women to pursue careers in science, technology, engineering, and math.
null
false
null
Pinot noir (French: [pino nwaʁ]) is a red-wine grape variety of the species Vitis vinifera. The name may also refer to wines created predominantly from pinot noir grapes. The name is derived from the French words for pine and black. The word pine alludes to the grape variety having tightly clustered, pinecone–shaped bunches of fruit.
What does Pinot Noir mean?
The name Pinot Noir is derived from the French words for pine and black.
null
false
null
Find the bird in this list: Shark, Dog, Arctic tern
Arctic tern
null
false
17
Early research efforts, such as the TDT program BIBREF5, have studied news clustering for some time. The problem of online monolingual clustering algorithms (for English) has also received a fair amount of attention in the literature. One of the earlier papers, by aggarwal2006framework, introduced a two-step clustering system with both offline and online components, where the online model is based on a streaming implementation of k-means and a bag-of-words document representation. Other authors have experimented with distributed representations, such as ahmed2011unified, who cluster news into storylines using Markov chain Monte Carlo methods, rehureklrec, who used incremental Singular Value Decomposition (SVD) to find relevant topics from streaming data, and sato2017distributed, who used the paragraph vector model BIBREF6 in an offline clustering setting. More recently, crosslingual linking of clusters has been discussed by rupnik2016news in the context of linking existing clusters from the Event Registry BIBREF7 in a batch fashion, and by steinberger2016mediagist, who also present a batch cluster-linking system. However, these are not “truly” online crosslingual clustering systems, since they only decide on the linking of already-built monolingual clusters. In particular, rupnik2016news compute distances of document pairs across clusters using nearest neighbors, which might not scale well in an online setting. As detailed before, we adapted the cluster-linking dataset from rupnik2016news to evaluate our online crosslingual clustering approach. Preliminary work makes use of deep learning techniques BIBREF8, BIBREF9 to cluster documents while learning their representations, but not in an online or multilingual fashion, and with a very small number of cluster labels (4, in the case of the text benchmark). In our work, we studied the problem of monolingual and crosslingual clustering, having experimented with several directions and methods and the impact they have on the final clustering quality. We described the first system which aggregates news articles into fine-grained story clusters across different languages in a completely online and scalable fashion from a continuous stream.
What method has the author adapted?
They adapted the cluster-linking dataset from Rupnik et al. to evaluate their online crosslingual clustering approach.
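A minimal sketch of a "truly" online clustering rule of the kind the evidence contrasts with batch linking: each arriving article either joins its most similar existing cluster or opens a new one, in a single streaming pass. The document representation, cosine measure, and threshold are illustrative assumptions.

```python
# Online clustering of a document stream: assign each new article to the
# most similar cluster centroid, or open a new cluster below a threshold.
import numpy as np

THRESHOLD = 0.5          # assumed similarity threshold
centroids, sizes = [], []

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8)

def add_article(vec):
    if centroids:
        sims = [cosine(vec, c) for c in centroids]
        best = int(np.argmax(sims))
        if sims[best] >= THRESHOLD:
            # Running-mean centroid update keeps the pass fully online.
            sizes[best] += 1
            centroids[best] += (vec - centroids[best]) / sizes[best]
            return best
    centroids.append(vec.copy())
    sizes.append(1)
    return len(centroids) - 1
```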
null
false
12
The 20 newsgroups data set is one of the most frequently used datasets in the text processing community for text classification and text clustering. This data set contains approximately 20,000 English documents from 20 topics to be identified, with 11314 documents for training and 7532 for testing. In this work, we used only 90% of the documents for training and the remaining 10% for validation. For fair comparison with other publications, we used the 53160-word vocabulary set available on the dataset's website.
How many documents were used for training in the 20 newsgroups dataset?
11314 documents.
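A short sketch reproducing the described setup with scikit-learn's built-in loader; the split seed is an assumption, and the 53160-word vocabulary file mentioned in the evidence is not included here.

```python
# Load 20 newsgroups and carve a 90/10 train/validation split out of the
# standard 11,314-document training set (the test set has 7,532 documents).
from sklearn.datasets import fetch_20newsgroups
from sklearn.model_selection import train_test_split

train = fetch_20newsgroups(subset="train")   # 11314 documents, 20 topics
test = fetch_20newsgroups(subset="test")     # 7532 documents

X_tr, X_val, y_tr, y_val = train_test_split(
    train.data, train.target, test_size=0.10, random_state=0)
print(len(X_tr), len(X_val), len(test.data))  # ~10182, ~1132, 7532
```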
null
false
null
IRRI station is a railway station located on the South Main Line in Los Baños, Laguna, Philippines. It is a flag stop for the line, as no platforms have been erected yet; temporary stairs for the trains are used in the meantime to facilitate loading and unloading.
History
In December 2019, the flag stop was opened as PNR extended the Metro South Commuter trips by adding 5 more stations to the present commuter line. KiHa 59 series and KiHa 35 trainsets ply the route, with the former servicing the entire route to Tutuban and the latter going up to Alabang only. The station served as the southern terminus of the newly opened line. Service was disrupted when the lockdown caused by the COVID-19 pandemic took effect in mid-March 2020. As of October 2021, the service is still inactive. A passing loop was planned for possible use of locomotives at the station, but only the switch was laid; this plan had not been realized as of October 2021. In January 2022, the railway switch and the steel stairs were dismantled by a PNR crew, along with DEL 5007, to be repurposed for the upcoming Inter-Provincial Commuter Train Service between San Pablo City in the province of Laguna and Lucena City in the province of Quezon. Only some dismantled rail pieces and rail frogs remain scattered in the area of the flag stop. On May 25, 2022, an inspection train hailing from Dela Rosa Station travelled to the IRRI flag stop with officials onboard to conduct certification of the railway from Manila to Los Baños for the possible reopening of commuter services along with the San Pablo-Lucena Commuter Line. The trainset used consisted of DHL-9003 and PC 8303, with DEL 5007 at the end serving as a back engine. As of July 2022, only the line connecting Laguna and Quezon provinces had been realised, while the Dela Rosa-IRRI-San Pablo service is still pending due to a lack of available trains.
What was the compelling event that shut down the IRRI station?
Unfortunately, the IRRI station railway located on the South Main Line in Los Baños, Laguna, Philippines became inactive in mid-March of 2020 due to COVID lockdowns.
null
false
null
What should I look for when looking for a used car?
Items to look for when looking to purchase a used car: * First of all, ask about any major work done to the car, like engine, suspension, or body work, and whether it's been in any accidents that may have impacted its chassis at all. Later on when we're examining the car we may find evidence of damage or repair work, and it's good to know if the seller is lying to cover up any issues before we start negotiating price with them. Pulling the Carfax can also be a good way to check if the seller is lying, since it often contains detailed information about repairs that have been performed on the car. If you notice that the seller is lying, either after pulling the Carfax or after inspecting the body, it's best not to purchase the car. * Ask the owner if they have any maintenance records. If they do, that not only means that the car has received the proper care but it also illustrates that the seller is the kind of person who is very organized and likes to keep their possessions in good condition, which is super important. That's the kind of person you want to be buying a car from. Also note that if the owner has been taking the car to the dealership or to an independent shop, it's very easy to obtain these service records using the VIN number, either online or by calling the mechanic directly. * A car's timing belt needs to be replaced every 70,000-100,000 miles. You can easily check if the car you're planning on purchasing has a timing belt or a timing chain. Typically timing chains do not need to be replaced for the life of the car, so finding a car with a timing chain means that you have to worry less about this aspect. Depending on the car, replacing the timing belt can cost from 1,000 to 6,000 dollars, so it's a pretty significant expense that you should take into account. * Ask how frequently the owner has been changing the oil. In most cases changing it every 3,000 to 6,000 miles or every year (whichever comes first) is ideal. Many owner's manuals mention that you can go over this mileage. In some cases manuals mention oil change intervals of 10,000 or 20,000 miles. Personally, I would not consider a car that has had its oil replaced this infrequently. In many cases, these longer intervals are an attempt by the manufacturer's marketing department to make the car more appealing to buyers by requiring less maintenance. Also ask if they've been using synthetic oil. Typically synthetic oil doesn't break down over time, so longer intervals might be OK. * Ask when the brake pads and rotors were last replaced. Typically brake pads need to be replaced every 50,000-100,000 miles. Brake rotors can be trickier, because whether or not a rotor needs replacing depends on its thickness; however, if during the test drive you notice vibration when braking, it's likely due to the brake rotors. If the owner hasn't replaced the brakes recently, there might be a good opportunity for negotiating the price. * If the car is a manual, ask if the clutch has been replaced. This is a wear item, and the time to replace it depends on the usage (cars driven in cities with a lot of stop-and-go traffic will need a replacement more frequently than cars driven long distances on the freeway). * For automatics, ask when the transmission fluid was last changed. Typically this needs to be replaced every 50-60k miles. Again, many service manuals mention that the transmission fluid does not need to be replaced at all, but that's arguably again for marketing reasons. * Examine the tires and note how much tread is on them.
If you were to imagine putting a penny into the tire's tread, a new tire would take up 1/3 - 1/2 of the penny. Also, tires need to be replaced every 5 years. The tire sidewall has a 4-digit number which shows the week and year it was manufactured, so it's easy to check if it's due for a replacement. Lastly, check if the tires (especially the front) are worn evenly. If the interior part is more worn compared to the exterior, it might indicate suspension or alignment issues, so those issues will need to be addressed and the tires will need to be replaced to fix the problem. * Ask about and look for any interior problems: bulbs, gauges, AC, power windows not working, stuff like that. * Check for rust in the wheel wells and rocker panels. Stand back from the car and look for any slight differences in color between its exterior panels that may indicate they were replaced; also look for dents. The best and easiest indication that a car was repaired or in an accident is if the gaps between body panels are uneven. A good way to identify body panel issues is by running your finger across a body panel; if you feel that one part of the body is lower than the other, or if the gap is not consistent, the car may have been in an accident. * Look for any signs of fluids, greasy stains on metal, anything that would indicate that it's leaked something. Oil leaks look dark and greasy. Battery leaks look white and powdery. Coolant leaks look dark and smell sweet. Look at the bottom of the hood for any leak stains there too. Also look underneath the engine for any greasy spots, because fluids always drip down. * Do some parts on the car look newer/cleaner than others? A new radiator could mean that the car was recently in a frontal collision. * Listen to the engine. Does it sound like it's keeping a constant RPM? You may hear some ticking caused by piston slap or poorly lubricated valves. These are signs that the engine is old, but aren't that big of a deal. Grinding noises are very bad and expensive to fix. Older engines can be pretty noisy until they warm up, which is normal. * Ask the owner to rev the engine while you look at it. Does the engine hop violently when it's revved? This might mean that the engine mounts need to be replaced. * Again, ask the owner to rev the engine, this time looking at the exhaust. If any blue smoke comes out, or (on a warm day) any dense white smoke comes out, it may indicate that the engine is burning oil, and unless you like adding oil to your engine as often as you fill up your gas tank, it'll be expensive to fix. Time to take it for a test drive! * Look at the gauge cluster: is the check engine light on? Keep an eye on the temperature gauge during your trip; does it fluctuate at all? A good car's temperature stays constant during the trip. * Take some sharp turns at speed and listen for any thumping or squeaking noises that may come from the suspension or drive train. * Does the steering wheel shake at all? (When driving it could mean bad alignment or bad steering; when braking it could mean warped rotors.) * Make sure you take the car in and out of all its gears. * Listen for funny noises, rattles, squeaks, clanks, vibrations, etc. It's inevitable that older cars will have some interior panels that rattle. * Accelerate hard and brake hard. If the car has ABS, does it activate, or do you feel the wheels lock up? * Go over some speed bumps, or go on a bumpy road.
The car shouldn't be bouncy when recovering from a bump, and neither should it feel like riding over a bump in a shopping cart. It should absorb the bump in a firm but smooth manner. * After driving for a while, pull over to the side of the road and turn off the car. Pop the hood and look at the coolant reservoir (DON'T OPEN IT!!!) and check if the fluid inside is bubbling or boiling. If it is, then the head gasket is blown. * Remember when we asked the owner when they changed their oil? New oil should look like honey; old oil should have the same color as dark coffee (a dark coffee color is a good thing). If the oil is brown or any other funky color, or smells funny, something might be wrong with the engine. * Once you let the car rest for a few minutes, check the oil level using the dipstick; if it's not all the way to full, the car might be burning or leaking oil.
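The 4-digit tire date code mentioned above can be decoded programmatically; this small helper assumes the standard DOT convention (first two digits = week of manufacture, last two = year), and the code below is a hypothetical example, not from any real tire.

```python
# Decode a tire's 4-digit date code (DOT convention: WWYY) and flag tires
# past the 5-year replacement guideline mentioned above.
from datetime import date

def tire_age_years(code, today=None):
    week, year = int(code[:2]), 2000 + int(code[2:])
    made = date.fromisocalendar(year, week, 1)  # Monday of that ISO week
    today = today or date.today()
    return (today - made).days / 365.25

age = tire_age_years("2319")  # hypothetical code: week 23 of 2019
print(f"{age:.1f} years old; replace if over 5")
```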
null
false
null
Which states are part of the New England region?
The states in New England include Massachusetts, New Hampshire, Maine, Vermont, Rhode Island, and Connecticut.
null
false
null
The Porsche flat-six engine series is a line of mechanically similar, naturally aspirated and turbocharged, flat-six boxer engines, produced by Porsche for almost 60 consecutive years, since 1963. The engine is an evolution of the flat-four boxer used in the original Volkswagen Beetle. The flat-six engine is most often associated with their 911 model, Porsche's flagship rear-engined sports car, which has used flat-six engines exclusively since 1963. The engines were air-cooled until 1999, when Porsche started using water-cooled engines. In April 2011, Porsche announced the third generation of the 997 GT3 RS with an enlarged 4.0-litre engine having a power output of 500 PS (368 kW; 493 hp). The naturally aspirated 4.0-litre flat-six engine (the largest engine offered in a street-legal 911) was introduced with the 911 (997) GT3 RS 4.0 in 2011. The engine itself uses the crankshaft from the RSR with increased stroke dimensions (from 76.4 mm to 80.4 mm). This change increased the power output to 500 PS (368 kW; 493 hp) at 8,250 rpm and 460 N⋅m (339 lbf⋅ft) of torque at 5,750 rpm, giving it a power-to-weight ratio of 365 hp per ton. Only 600 cars were built. At 493 hp (368 kW), the engine is one of the most powerful six-cylinder naturally aspirated engines in any production car, with a 123.25 hp (92 kW) per litre output. Other Porsche models that use flat-six engines are the 1970–1972 Porsche 914/6 (mid-engine), the 1986–1993 Porsche 959 (rear-engine), and the 1996–2021 Porsche Boxster/Cayman (mid-engine). The Porsche 962 sports prototype also used a twin-turbocharged flat-six engine. These engines have also been used by German tuning company RUF in various replica Porsche sports cars.
What is the most powerful Porsche flat six engine?
The most powerful naturally aspirated flat-six engine made by Porsche is found in the 997 GT3 RS 4.0. It produced 493 hp from just 4.0L of displacement. This is a very rare car that hardly ever comes up for sale, and its engine is one of the most powerful six-cylinder naturally aspirated engines ever made.
1707.02377
false
null
Data-dependent regularization. As explained in Section SECREF15, the corruption introduced in Doc2VecC acts as a data-dependent regularization that suppresses the embeddings of frequent but uninformative words. Here we conduct an experiment to examine the effect. We used a cutoff of 100 in this experiment. Table TABREF24 lists the words having the smallest norm of embeddings found by the different algorithms. The number inside the parentheses after each word is the number of times the word appears in the learning set. In Word2Vec or Paragraph Vectors, the least frequent words have embeddings that are close to zero, despite some of them being indicative of sentiment, such as debacle, bliss, and shabby. In contrast, Doc2VecC manages to clamp down the representations of words that frequently appear in the training set but are uninformative, such as symbols and stop words.
How do they determine which words are informative?
Informative words are those whose embeddings are not suppressed by the data-dependent regularization; frequent but uninformative words, such as symbols and stop words, have their embeddings clamped down.
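A minimal numpy sketch of the corruption idea described above: a document is represented as the rescaled average of the word vectors of a random subsample of its words. The vocabulary, embedding values, and corruption rate below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy word embeddings: one row per vocabulary word.
vocab = ["the", "movie", "was", "bliss", "."]
W = rng.normal(size=(len(vocab), 4))

def doc2vecc_representation(word_ids, W, corruption_rate=0.9):
    """Average of word vectors over a corrupted (randomly subsampled) document.

    Words are dropped i.i.d. with probability `corruption_rate`; the sum is
    rescaled so the representation is an unbiased estimate of the full average.
    """
    keep = rng.random(len(word_ids)) >= corruption_rate
    if not keep.any():   # degenerate corruption: fall back to the full document
        keep[:] = True
    kept = W[np.array(word_ids)[keep]]
    return kept.sum(axis=0) / (len(word_ids) * (1.0 - corruption_rate))

doc = [0, 1, 2, 3, 4]    # "the movie was bliss ."
print(doc2vecc_representation(doc, W))
```

Because frequent-but-uninformative words appear in almost every corrupted sample, the training objective pushes their vectors toward zero, which is exactly the suppression effect the evidence describes.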
null
false
315
Pre-trained language models such as BERT BIBREF0 have significantly improved the accuracy of various language processing tasks. However, we cannot apply BERT to language generation tasks as is because its model structure is not suitable for language generation. Several pre-trained seq-to-seq models for language generation BIBREF1, BIBREF2 based on an encoder-decoder Transformer model, which is a standard model for language generation, have recently been proposed. These models have achieved state-of-the-art results in various language generation tasks, including abstractive summarization. However, when generating a summary, it is essential to correctly predict which part of the source text should be included in the summary. Some previous studies without pre-training have examined combining extractive summarization with abstractive summarization BIBREF3, BIBREF4. Although pre-trained seq-to-seq models have achieved higher accuracy compared to previous models, it is not clear whether modeling “Which part of the source text is important?” can be learned through pre-training. The purpose of this study is to clarify the effectiveness of combining saliency models that identify the important part of the source text with a pre-trained seq-to-seq model in the abstractive summarization task. Our main contributions are as follows: We investigated nine combinations of pre-trained seq-to-seq and token-level saliency models, where the saliency models share the parameters with the encoder of the seq-to-seq model or extract important tokens independently of the encoder. We proposed a new combination model, the conditional summarization model with important tokens (CIT), in which a token sequence extracted by a saliency model is explicitly given to a seq-to-seq model as an additional input text. We evaluated the combination models on the CNN/DM BIBREF5 and XSum BIBREF6 datasets. Our CIT model outperformed a simple fine-tuned model in terms of ROUGE scores on both datasets. We evaluated the combination models on the CNN/DM and XSum datasets. Our CIT model outperformed a simple fine-tuned model in terms of ROUGE scores on both datasets.
Did the CIT model outperform a simple fine-tuned model in terms of ROUGE scores on both datasets?
Yes, it did.
null
false
null
Tell me which Pokemon are flying or not flying: Pidgeot, Fearow, Charizard, Ditto, Eevee, Chansey
Pidgeot: Flying
Fearow: Flying
Charizard: Flying
Ditto: Not Flying
Eevee: Not Flying
Chansey: Not Flying
null
false
99
In recent years there has been increasing interest in the issue of disinformation spreading on online social media. Global concern over false (or "fake") news as a threat to modern democracies has been frequently raised–ever since the 2016 US Presidential elections–in connection with events of political relevance, where the proliferation of manipulated and low-credibility content attempts to drive and influence people's opinions BIBREF0BIBREF1BIBREF2BIBREF3. Researchers have highlighted several drivers for the diffusion of such a malicious phenomenon, which include human factors (confirmation bias BIBREF4, naive realism BIBREF5), algorithmic biases (the filter bubble effect BIBREF0), the presence of deceptive agents on social platforms (bots and trolls BIBREF6) and, lastly, the formation of echo chambers BIBREF7 where people polarize their opinions as they are insulated from contrary perspectives. The problem of automatically detecting online disinformation news has typically been formulated as a binary classification task (i.e. credible vs non-credible articles), and tackled with a variety of different techniques, based on traditional machine learning and/or deep learning, which mainly differ in the dataset and the features they employ to perform the classification. We may distinguish three approaches: those built on content-based features, those based on features extracted from the social context, and those which combine both aspects. A few main challenges hinder the task, namely the impossibility of manually verifying all news items, the lack of gold-standard datasets and the adversarial setting in which malicious content is created BIBREF3BIBREF6. In this work we follow the direction pointed out in a few recent contributions on the diffusion of disinformation compared to traditional and objective information. These have shown that false news spreads faster and deeper than true news BIBREF8, and that social bots and echo chambers play an important role in the diffusion of malicious content BIBREF6, BIBREF7. Therefore we focus on the analysis of spreading patterns which naturally arise on social platforms as a consequence of multiple interactions between users, due to the increasing trend in online sharing of news BIBREF0. A deep learning framework for the detection of fake news cascades is provided in BIBREF9, where the authors refer to BIBREF8 in order to collect Twitter cascades pertaining to verified false and true rumors. They employ geometric deep learning, a novel paradigm for graph-based structures, to classify cascades based on four categories of features, such as user profile, user activity, network and spreading, and content. They also observe that a few hours of propagation are sufficient to distinguish false news from true news with high accuracy. Diffusion cascades on Weibo and Twitter are analyzed in BIBREF10, where the authors focus on highlighting different topological properties, such as the number of hops from the source or the heterogeneity of the network, to show that fake news shapes diffusion networks which are highly different from those of credible news, even at early stages of propagation. In this work, we consider the results of BIBREF11 as our baseline. The authors use off-the-shelf machine learning classifiers to accurately classify news articles leveraging Twitter diffusion networks. To this aim, they consider a set of basic features which can be qualitatively interpreted w.r.t. the social behavior of users sharing credible vs non-credible information.
Their methodology is overall in accordance with BIBREF12, where authors successfully detect Twitter astroturfing content, i.e. political campaigns disguised as spontaneous grassroots, with a machine learning framework based on network features. In this paper, we propose a classification framework based on a multi-layer formulation of Twitter diffusion networks. For each article we disentangle different social interactions on Twitter, namely tweets, retweets, mentions, replies and quotes, to accordingly build a diffusion network composed of multiple layers (one for each type of interaction), and we compute structural features separately for each layer. We pick a set of global network properties from the network science toolbox which can be qualitatively explained in terms of social dimensions and allow us to encode different networks with a tuple of features. These include traditional indicators, e.g. network density, number of strong/weak connected components and diameter, and more elaborate ones such as the main K-core number BIBREF13 and structural virality BIBREF14. Our main research question is whether the use of a multi-layer, disentangled network yields a significant advance in terms of classification accuracy over a conventional single-layer diffusion network. Additionally, we are interested in understanding which of the above features, and in which layer, are most effective in the classification task. We perform classification experiments with an off-the-shelf Logistic Regression model on two different datasets of mainstream and disinformation news shared on Twitter, respectively, in the United States and in Italy during 2019. In the former case we also account for political biases inherent to different news sources, referring to the procedure proposed in BIBREF2 to label different outlets. Overall we show that we are able to classify credible vs non-credible diffusion networks (and consequently news articles) with high accuracy (AUROC up to 94%), even when accounting for the political bias of sources (and training only on left-biased or right-biased articles). We observe that the layer of mentions alone conveys useful information for the classification, denoting a different usage of this functionality when sharing news belonging to the two news domains. We also show that the most discriminative features, which relate to the breadth and depth of the largest cascades in different layers, are the same across the two countries. The outline of this paper is the following: we first formulate the problem and describe data collection, network representation and the structural properties employed for the classification; then we provide experimental results–classification performance, layer and feature importance analyses and a temporal classification evaluation–and finally we draw conclusions and future directions. Researchers have highlighted several drivers for the diffusion of such a malicious phenomenon, which include human factors (confirmation bias (Nickerson 1998), naive realism (Reed, Turiel, and Brown 2013)), algorithmic biases (filter bubble effect (Allcott and Gentzkow 2017)), the presence of deceptive agents on social platforms (bots and trolls (Shao et al. 2018a)) and, lastly, the formation of echo chambers (Del Vicario et al. 2016) where people polarize their opinions as they are insulated from contrary perspectives
What are the drivers for the diffusion of malicious phenomenon highlighted by researchers?
They are human factors (confirmation bias and naive realism), algorithmic biases (the filter bubble effect), the presence of deceptive agents (bots and trolls) on social platforms, and the formation of echo chambers.
null
false
153
Visual dialog BIBREF0 is an interesting new task combining the research efforts from Computer Vision, Natural Language Processing and Information Retrieval. While BIBREF1 presents some tips and tricks for the VQA 2.0 Challenge, we follow their guidelines for the Visual Dialog challenge 2018. Our models use attention similar to BIBREF2 to get object-level image representations from the Faster R-CNN model BIBREF3. We experiment with different encoder mechanisms to get representations of conversational history. Our models use attention similar to BIBREF2 to get object-level image representations from the Faster R-CNN model.
What kind of attention do the authors' models use?
Attention similar to BIBREF2, used to get object-level image representations from the Faster R-CNN model.
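A minimal numpy sketch of this kind of attention, assuming region features have already been extracted by a Faster R-CNN model; the feature dimension, number of regions, and dot-product scoring function are assumptions rather than the authors' exact formulation.

```python
import numpy as np

def attend(question_vec, region_feats):
    """Soft attention over object-level region features.

    question_vec: (d,) encoded question; region_feats: (k, d) features for k
    detected regions. Returns a question-conditioned image representation.
    """
    scores = region_feats @ question_vec      # (k,) dot-product relevance scores
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                  # softmax over the k regions
    return weights @ region_feats             # (d,) attention-weighted sum

q = np.random.randn(512)             # assumed feature dimension
regions = np.random.randn(36, 512)   # 36 regions, a common bottom-up setting
print(attend(q, regions).shape)      # (512,)
```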
null
false
158
In this question answering task, a reading passage, a query and several answer choices are given. P denotes the passage, Q denotes the query and C denotes one of the multiple choices. The target of the model is to choose a correct answer A from the multiple choices based on information from P and Q. Fig. FIGREF1 is the pipeline overview of QACNN. First, we use an embedding layer to transform P, Q, and C into word embeddings. Then the compare layer generates a passage-query similarity map INLINEFORM0 and a passage-choice similarity map INLINEFORM1. The following part is the main component of QACNN. It consists of a two-stage CNN architecture. The first stage projects word-level features into sentence-level features, and the second stage projects sentence-level features into passage-level features. Moreover, we apply a query-based attention mechanism to each stage on the basis of the INLINEFORM2 feature at the word level and sentence level respectively. After the QACNN Layer, we obtain each choice's answer feature. Finally, a prediction layer collects output information from every choice feature and returns the most probable answer. We propose an attention convolutional matching layer to integrate the two similarity maps given above. That is, the QACNN Layer is used to learn the location relationship pattern. It contains a two-stage CNN combined with a query-based attention mechanism. Each stage comprises two major parts: an attention map and an output representation.
What is the usage of the QACNN Layer?
QACNN Layer is used to learn the location relationship pattern.
null
false
null
Sunil Manohar Gavaskar (Marathi pronunciation: [suniːl ɡaːʋəskəɾ]; born 10 July 1949) is an Indian cricket commentator and former cricketer who represented India and Bombay from 1971 to 1987. Gavaskar is acknowledged as one of the greatest opening batsmen of all time. Gavaskar was widely admired for his technique against fast bowling, with a particularly high average of 65.45 against the West Indies, who possessed a four-pronged fast bowling attack widely regarded as the most vicious in Test history. However, most of Gavaskar's centuries against the West Indies were against their second-string team, when their four-pronged attack was not playing together. His captaincy of the Indian team was considered one of the first attacking ones, with the Indian team winning the 1984 Asia Cup and the Benson & Hedges World Championship of Cricket in 1985. At the same time, there were multiple exchanges of captaincy between Gavaskar and Kapil Dev, with one coming just six months before Kapil led India to victory at the 1983 Cricket World Cup. He is also a former Sheriff of Mumbai. Gavaskar is a recipient of the Indian sports honour of the Arjuna Award and the civilian honour of the Padma Bhushan. He was inducted into the ICC Cricket Hall of Fame in 2009. In 2012, he was awarded the Col CK Nayudu Lifetime Achievement Award for Cricket in India.
From the passage mention the awards received by Sunil Manohar Gavaskar. Display the results in comma separated format.
Arjuna Award, Padma Bhushan, inducted into the ICC Cricket Hall of Fame, Col CK Nayudu Lifetime Achievement Award
null
false
null
Write a brief passage arguing for mandating Electric Vehicles (EVs) over gas combustion vehicles in the United States.
Electric Vehicles (EVs) provide several key benefits to the United States over Internal Combustion Engine (ICE) vehicles. Firstly, EVs are powered strictly by electricity, which can be produced by renewable methods such as solar or wind power. Secondly, EVs produce significantly lower greenhouse gas emissions than ICEs, which will help reduce our carbon footprint and mitigate the effects of climate change. Thirdly, EVs are safer and require less maintenance than ICEs, as there is no engine, there are fewer moving parts, and there is no flammable fuel in the vehicle.
null
false
null
Classify each of the following as something used for drinking or eating: cup, bottle, plate, bowl, fork, spoon, can
Cup: Drinking
Bottle: Drinking
Plate: Eating
Bowl: Eating
Fork: Eating
Spoon: Eating
Can: Drinking
null
false
null
What is the pattern in this sequence and what is the next number: 0, 1, 1, 2, 3, 5, 8, 13
In this sequence the pattern is that each number is the sum of the two previous numbers. Therefore the next number in this sequence is 21.
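A short sketch that checks the stated pattern and computes the next term; the sequence values come from the record above.

```python
def next_term(seq):
    """Next element of a Fibonacci-style sequence: the sum of the last two terms."""
    return seq[-1] + seq[-2]

seq = [0, 1, 1, 2, 3, 5, 8, 13]
# Verify the stated pattern holds for every term after the first two.
assert all(seq[i] == seq[i - 1] + seq[i - 2] for i in range(2, len(seq)))
print(next_term(seq))  # 21
```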
null
false
null
Why is context switching bad for computer performance?
Context switching, which occurs when a process is paused by the CPU and another process starts running, can affect the performance of a computer system for a couple of reasons. The act of switching the context is not in itself very time consuming. The real performance hit comes from the fact that different processes have different virtual memory address spaces, so all the caches in the system need to be invalidated and flushed. Since there is such a huge difference between how fast a CPU can run and how fast instructions and data can be retrieved from main memory, these things are cached close to the CPU in onboard caches. Even the mapping of virtual addresses to physical addresses is cached so that memory access is faster. When a new process with a different address space and a different mapping of virtual to physical addresses gets loaded by the CPU, all of the data in the caches is invalid and needs to be flushed. This leaves the CPU with empty caches that have to be rebuilt from scratch for the new process, meaning there will be a big slowdown in retrieving instructions and data, since these will now be coming from main memory, which is much slower than the CPU's cache.
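A rough way to observe scheduling overhead is to time round trips between two threads, each of which forces the OS to switch. The sketch below is a hedged illustration: it measures only the scheduling portion of the cost, not the cache and TLB flushing described above, which dominates for processes with separate address spaces. Absolute numbers depend on the OS and hardware.

```python
import threading
import time

N = 10_000
a, b = threading.Event(), threading.Event()

def pong():
    # Waits for the main thread, then immediately hands control back.
    for _ in range(N):
        a.wait(); a.clear()
        b.set()

t = threading.Thread(target=pong)
t.start()

start = time.perf_counter()
for _ in range(N):
    a.set()              # wake the other thread ...
    b.wait(); b.clear()  # ... and block until it answers
elapsed = time.perf_counter() - start
t.join()

# Each round trip forces at least two context switches between the threads.
print(f"~{elapsed / (2 * N) * 1e6:.1f} microseconds per switch (rough upper bound)")
```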
1908.06006
true
null
We evaluate the quality of the document embeddings learned by the different variants of CAHAN and the HAN baseline on three of the large-scale document classification datasets introduced by BIBREF14 and used in the original HAN paper BIBREF5. They fall into two categories: topic classification (Yahoo) and fine-grained sentiment analysis (Amazon, Yelp). Dataset statistics are shown in Table TABREF29. Classes are perfectly balanced for all datasets. We evaluate the quality of the document embeddings learned by the different variants of CAHAN and the HAN baseline on three of the large-scale document classification datasets introduced by BIBREF14 and used in the original HAN paper BIBREF5.
Do they compare to other models apart from HAN?
No.
null
false
null
Classify the plants based on whether they are aquatic or mangrove: Pistia stratiotes, Lysichiton americanus, Rhizophora mangle, Avicennia germinans
Aquatic - Pistia stratiotes, Lysichiton americanus
Mangrove - Rhizophora mangle, Avicennia germinans
null
false
null
Please classify the following items as either “waterproof” or “water absorbent”: umbrella, paper towel, sponge, raincoat, dish rag, car windshield.
The waterproof items are the umbrella, the raincoat and the car windshield. The water absorbent items are the sponge, paper towel, and dish rag.
null
false
null
Could you plan a canoe camping trip in Michigan? I want to canoe the river from start to end, and need specific camping locations for each night.
I would recommend a canoe camping trip on the Au Sable River in Michigan. The river is about 114 miles long, beginning in Grayling and ending in Oscoda. The river features numerous campgrounds which you will be able to camp at each night.

Day 1: Whitepine Campground
Day 2: Parmalee Campground
Day 3: Mio Campground
Day 4: Alcona Dam Campground
Day 5: Loud Dam Campground
Day 6: End at Lake Huron
1909.04181
true
null
Our baseline is a GRU network for each of the three tasks. We use the same network architecture across the 3 tasks. Each network contains a unidirectional GRU layer with 500 units and an output linear layer. The network is trained end-to-end. Our input embedding layer is initialized with a standard normal distribution, with $\mu =0$ and $\sigma =1$, i.e., $W \sim N(0,1)$. We use a maximum sequence length of 50 tokens, and choose an arbitrary vocabulary size of 100,000 types, where we use the 100,000 most frequent words in TRAIN. To avoid over-fitting, we use dropout BIBREF2 with a rate of 0.5 on the hidden layer. For training, we use the Adam BIBREF3 optimizer with a fixed learning rate of $1e-3$. We employ batch training with a batch size of 32 for this model. We train the network for 15 epochs and save the model at the end of each epoch, choosing the model that achieves the highest accuracy on DEV as our best model. We present our best results on DEV in Table TABREF7. We report all our results using accuracy. Our best model obtains 42.48% for age, 37.50% for dialect, and 57.81% for gender. All models obtain their best results within 2 epochs. Our baseline is a GRU network for each of the three tasks.
Are the models compared to some baseline models?
Yes.
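A minimal Keras sketch of the baseline described in the evidence above. The unidirectional GRU with 500 units, linear output layer, dropout of 0.5, Adam at 1e-3, batch size 32, sequence length 50, and 100,000-word vocabulary follow the text; the embedding dimension and number of output classes are assumptions, since the excerpt does not state them.

```python
import tensorflow as tf
from tensorflow.keras import layers

VOCAB_SIZE = 100_000   # 100k most frequent words in TRAIN
MAX_LEN = 50           # maximum sequence length (inputs padded/truncated to this)
EMBED_DIM = 300        # assumption: embedding size is not stated in the excerpt
NUM_CLASSES = 3        # assumption: depends on the task (age/dialect/gender)

model = tf.keras.Sequential([
    # Embeddings initialized from N(0, 1), as described in the text.
    layers.Embedding(
        VOCAB_SIZE, EMBED_DIM,
        embeddings_initializer=tf.keras.initializers.RandomNormal(mean=0.0, stddev=1.0),
    ),
    layers.GRU(500),            # single unidirectional GRU layer, 500 units
    layers.Dropout(0.5),        # dropout on the hidden layer
    layers.Dense(NUM_CLASSES),  # linear output layer (logits)
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
# model.fit(x_train, y_train, batch_size=32, epochs=15, validation_data=(x_dev, y_dev))
```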
null
false
null
How to be happy?
To be happy for 1 hour ---- sleep
To be happy for 3 hours ---- go to the cinema
To be happy for 1 day ---- go on a picnic
To be happy for a lifetime ---- love your work.
null
false
null
Marie Salomea Skłodowska–Curie (/ˈkjʊəri/ KURE-ee, French pronunciation: ​[maʁi kyʁi], Polish pronunciation: [ˈmarja skwɔˈdɔfska kʲiˈri]; born Maria Salomea Skłodowska, Polish: [ˈmarja salɔˈmɛa skwɔˈdɔfska]; 7 November 1867 – 4 July 1934) was a Polish and naturalized-French physicist and chemist who conducted pioneering research on radioactivity. She was the first woman to win a Nobel Prize, the first person to win a Nobel Prize twice, and the only person to win a Nobel Prize in two scientific fields. Her husband, Pierre Curie, was a co-winner of her first Nobel Prize, making them the first-ever married couple to win the Nobel Prize and launching the Curie family legacy of five Nobel Prizes. She was, in 1906, the first woman to become a professor at the University of Paris.
Who was the first couple to win the Nobel prize?
Marie Curie and Pierre Curie are the first couple to win the Nobel prize.
null
false
null
Imagine you are the last person on Earth. Write a diary entry describing your thoughts and feelings.
Today is the 365th day that I've spent on this earth alone. While one year has passed, my days continue to remain the same. Every night, I wake up at 10:00pm, long before the scorching sunrise, to search for food, water, and any other essential items that I can cart back to my lonely den. My search concludes by 5:00am, which gives me a few hours before the first light and guaranteed death. The heat has become so destructive during the day that I often find buildings and cars spontaneously up in flames during my nightly travels. It's becoming more and more difficult to find viable food that hasn't yet been burnt to a crisp. My nocturnal transition is the only reason that I am alive today. I used to look up at the stars in anger that the universe could be so cruel. As time has passed, I have learned to forgive the dwarf star off to the east, as he is just as alone as I am and has been for far longer. Sometimes I feel ashamed that I don't have a plan other than simply surviving one more day. As the days heat up, so do the nights. It's likely only a matter of months before Earth becomes uninhabitable at all hours of the day. I know my existence is becoming increasingly more finite, and even if there are others out there, their future is just as damning.
null
false
null
Sam F.S. Chin (Chin Fan Siong, 曾帆祥, born August 1, 1954) is the Grandmaster and Gatekeeper of the martial art called I Liq Chuan, and also known as "the Martial Art of Awareness". Chin developed the grading system for this system.
Who created the grading system for the I Liq Chuan martial art?
Sam F. S. Chin developed the grading system for the I Liq Chuan martial art.
null
false
380
Since the advent of large structured knowledge bases (KBs) like Freebase BIBREF0 , YAGO BIBREF1 and DBpedia BIBREF2 , answering natural language questions using those structured KBs, also known as KB-based question answering (or KB-QA), is attracting increasing research efforts from both the natural language processing and information retrieval communities. The state-of-the-art methods for this task can be roughly categorized into two streams. The first is based on semantic parsing BIBREF3 , BIBREF4 , which typically learns a grammar that can parse natural language into a sophisticated meaning representation language. But such sophistication requires a lot of annotated training examples that contain compositional structures, a practically impossible solution for large KBs such as Freebase. Furthermore, mismatches between grammar-predicted structures and the KB structure are also a common problem BIBREF4 , BIBREF5 , BIBREF6 . On the other hand, instead of building a formal meaning representation, information extraction methods retrieve a set of candidate answers from the KB using relation extraction BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 or distributed representations BIBREF11 , BIBREF12 . Designing large training datasets for these methods is relatively easy BIBREF7 , BIBREF13 , BIBREF14 . These methods are often good at producing an answer irrespective of its correctness. However, handling compositional questions that involve multiple entities and relations still remains a challenge. Consider the question what mountain is the highest in north america. Relation extraction methods typically answer with all the mountains in North America because of the lack of a sophisticated representation for the mathematical function highest. To select the correct answer, one has to retrieve all the heights of the mountains, sort them in descending order, and then pick the first entry. We propose a method based on textual evidence which can answer such questions without explicitly solving the mathematical functions. Knowledge bases like Freebase capture real world facts, and Web resources like Wikipedia provide a large repository of sentences that validate or support these facts. For example, a sentence in Wikipedia says, Denali (also known as Mount McKinley, its former official name) is the highest mountain peak in North America, with a summit elevation of 20,310 feet (6,190 m) above sea level. To answer our example question against a KB using a relation extractor, we can use this sentence as external evidence, filter out wrong answers and pick the correct one. Using textual evidence not only mitigates representational issues in relation extraction, but also alleviates the data scarcity problem to some extent. Consider the question, who was queen isabella's mother. Answering this question involves predicting two constraints hidden in the word mother. One constraint is that the answer should be the parent of Isabella, and the other is that the answer's gender is female. Such words with multiple latent constraints have been a pain-in-the-neck for both semantic parsing and relation extraction, and require larger training data (this phenomenon is coined as sub-lexical compositionality by wang2015). Most systems are good at triggering the parent constraint, but fail on the other, i.e., that the answer entity should be female. Whereas the textual evidence from Wikipedia, ...her mother was Isabella of Barcelos ..., can act as a further constraint to answer the question correctly.
We present a novel method for question answering which infers on both structured and unstructured resources. Our method consists of two main steps as outlined in sec:overview. In the first step we extract answers for a given question using a structured KB (here Freebase) by jointly performing entity linking and relation extraction (sec:kb-qa). In the next step we validate these answers using an unstructured resource (here Wikipedia) to prune out the wrong answers and select the correct ones (sec:refine). Our evaluation results on a benchmark dataset WebQuestions show that our method outperforms existing state-of-the-art models. Details of our experimental setup and results are presented in sec:experiments. Our code, data and results can be downloaded from https://github.com/syxu828/QuestionAnsweringOverFB. In the first step we extract answers for a given question using a structured KB (here Freebase) by jointly performing entity linking and relation extraction (sec:kb-qa).
How do they extract answers for a given question?
Using a structured KB (here Freebase) by jointly performing entity linking and relation extraction.
null
false
null
Burnside's lemma, sometimes also called Burnside's counting theorem, the Cauchy–Frobenius lemma, the orbit-counting theorem, or the lemma that is not Burnside's, is a result in group theory that is often useful in taking account of symmetry when counting mathematical objects. Its various eponyms are based on William Burnside, George Pólya, Augustin Louis Cauchy, and Ferdinand Georg Frobenius. The result is not due to Burnside himself, who merely quotes it in his book 'On the Theory of Groups of Finite Order', attributing it instead to Frobenius (1887). Burnside's Lemma counts "orbits", which is the same thing as counting distinct objects taking account of a symmetry. Other ways of saying it are counting distinct objects up to an equivalence relation R, or counting objects that are in canonical form. In the following, let G be a finite group that acts on a set X. For each g in G, let $X^g$ denote the set of elements in X that are fixed by g (also said to be left invariant by g), that is, $X^g = \{ x \in X \mid g.x = x \}$. Burnside's lemma asserts the following formula for the number of orbits, denoted |X/G|: $|X/G| = \frac{1}{|G|} \sum_{g \in G} |X^g|$. Thus the number of orbits (a natural number or +∞) is equal to the average number of points fixed by an element of G (which is also a natural number or infinity). If G is infinite, the division by |G| may not be well-defined; in this case the following statement in cardinal arithmetic holds: $|G| \cdot |X/G| = \sum_{g \in G} |X^g|$.

Examples of applications to enumeration

Necklaces

There are 8 possible bit vectors of length 3, but only four distinct 2-colored necklaces of length 3: 000, 001, 011, and 111, because 100 and 010 are equivalent to 001 by rotation; similarly 110 and 101 are equivalent to 011. The formula is based on the number of rotations, which in this case is 3 (including the null rotation), and the number of bit vectors left unchanged by each rotation. All 8 bit vectors are unchanged by the null rotation, and two (000 and 111) are unchanged by each of the other two rotations, giving $4 = \frac{1}{3}(8 + 2 + 2)$. For length 4, there are 16 possible bit vectors; 4 rotations; the null rotation leaves all 16 bit vectors unchanged; the 1-rotation and 3-rotation each leave two bit vectors unchanged (0000 and 1111); the 2-rotation leaves 4 bit vectors unchanged (0000, 0101, 1010, and 1111); giving $6 = \frac{1}{4}(16 + 2 + 4 + 2)$. These are: 0000, 0001, 0011, 0101, 0111, and 1111.

Colorings of a cube

The number of rotationally distinct colourings of the faces of a cube using three colours can be determined from this formula as follows. Let X be the set of $3^6$ possible face colour combinations that can be applied to a cube in one particular orientation, and let the rotation group G of the cube act on X in the natural manner. Then two elements of X belong to the same orbit precisely when one is simply a rotation of the other. The number of rotationally distinct colourings is thus the same as the number of orbits and can be found by counting the sizes of the fixed sets for the 24 elements of G. The cube's rotation group contains:
- one identity element, which leaves all $3^6$ elements of X unchanged
- six 90-degree face rotations, each of which leaves $3^3$ of the elements of X unchanged
- three 180-degree face rotations, each of which leaves $3^4$ of the elements of X unchanged
- eight 120-degree vertex rotations, each of which leaves $3^2$ of the elements of X unchanged
- six 180-degree edge rotations, each of which leaves $3^3$ of the elements of X unchanged

The average fix size is thus $\frac{1}{24}\left(3^6 + 6 \cdot 3^3 + 3 \cdot 3^4 + 8 \cdot 3^2 + 6 \cdot 3^3\right) = 57$. Hence there are 57 rotationally distinct colourings of the faces of a cube in three colours. In general, the number of rotationally distinct colorings of the faces of a cube in n colors is given by $\frac{1}{24}\left(n^6 + 3n^4 + 12n^3 + 8n^2\right)$.

8 Queens Puzzle

In the eight queens puzzle there are 92 solutions, of which 12 fundamental solutions are distinct up to rotation and reflection of the board. There are 8 combinations of rotations and reflections, including the null action. The null action leaves all 92 solutions unchanged. Four of the 92 solutions are symmetrical, unchanged by 180° rotation. That gives $12 = \frac{1}{8}(92 + 4)$.
What are some practical applications of Burnside's lemma in combinatorics based on this passage?
* Necklace counting problem: counting rotationally distinct necklaces of various lengths where each bead can be one of the two colors. The article counts necklaces of lengths 3 and 4, and the answer is 4 and 6, respectively. * Colorings of a cube problem: count rotationally distinct colorings of faces of a cube. The article considers a case when 3 colors are used and the answer is 57. * 8 queens puzzle: a classical version of the problem is counting different ways to place 8 chess queens onto an 8x8 board so that no queen attacks another (the answer is 92). With Burnside's lemma application the article counts the number of different placements of queens up to rotation and reflection, arriving at 12 as the answer.
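The necklace and cube counts above are easy to verify mechanically. The sketch below applies Burnside's lemma directly; the only inputs are the necklace length and alphabet size, plus, for the cube, the closed-form polynomial quoted in the passage.

```python
from math import gcd

def necklaces(n, colors=2):
    """Distinct necklaces of length n under rotation, via Burnside's lemma.

    A rotation by k positions fixes exactly colors**gcd(n, k) strings, so the
    orbit count is the average of these fixed-point counts over all n rotations.
    """
    return sum(colors ** gcd(n, k) for k in range(n)) // n

def cube_face_colorings(n):
    """Rotationally distinct face colorings of a cube in n colors
    (closed form quoted in the passage)."""
    return (n**6 + 3 * n**4 + 12 * n**3 + 8 * n**2) // 24

print(necklaces(3))            # 4, matching the length-3 example
print(necklaces(4))            # 6, matching the length-4 example
print(cube_face_colorings(3))  # 57, matching the cube example
```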
null
false
null
Classify each of the following as either a bird, animal, reptile or insect: tiger, heron, eagle, alligator, snake, spider, ant, dog, cat, rhinoceros, kingfisher, chameleon, hornet, butterfly
Mammal - tiger, rhinoceros, dog, cat
Bird - heron, kingfisher, eagle
Reptile - snake, alligator, chameleon
Insect - spider, ant, hornet, butterfly
2003.08808
false
null
There is usually a trade-off between the number of training samples and the number of trainable parameters in a deep network model BIBREF16. In general, the more data we have, the better the results generated by supervised deep learning methods. Data augmentation helps to increase the number of training samples, but a bigger dataset needs a better and most likely bigger network architecture in terms of generalization. Otherwise, the model might be over-fitted or under-fitted on the training data. Using our annotation software, we automatically extracted landmarks of 2000 images from the UOttawa database BIBREF14 that had been annotated for image segmentation tasks. The database was randomly divided into three sets: 90$\%$ training, 5$\%$ validation, and 5$\%$ testing datasets. For testing, we also applied TongueNet on the UBC database BIBREF14 without any training to see the generalization ability of the model. During the training process of TongueNet, we employed online data augmentation, including rotation (-25 to 25 degrees), translation (-30 to 30 points in two directions), scaling (from 0.5 to 2 times), horizontal flipping, and combinations of these transformations, and annotation point locations were also transformed correspondingly. From our extensive random-search hyper-parameter tuning, the learning rate, number of iterations, mini-batch size, and number of epochs were determined as 0.0005, 1000, 30, and 10, respectively. We deployed our experiments using Keras with TensorFlow as the backend on a Windows PC with a Core i7 at 4.2 GHz, one NVIDIA 1080 GPU, and 32 GB of RAM. Adam optimization with a fixed momentum value of 0.9 was utilized for training. For testing, we also applied TongueNet on the UBC database BIBREF14 without any training to see the generalization ability of the model.
What previously annotated databases are available?
The UBC database BIBREF14.
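A hedged Keras sketch of the online augmentation described in the evidence (rotation of -25 to 25 degrees, translation, scaling from 0.5x to 2x, horizontal flips). Keras expresses shifts as fractions of image size rather than points, and landmark coordinates are not transformed here, so this is an illustration rather than the authors' pipeline.

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Keras shifts are fractions of image width/height, while the paper specifies
# -30 to 30 points, so 0.1 below is an assumed rough equivalent.
augmenter = ImageDataGenerator(
    rotation_range=25,         # -25 to 25 degrees
    width_shift_range=0.1,     # assumed fraction standing in for point-based shifts
    height_shift_range=0.1,
    zoom_range=(0.5, 2.0),     # scaling from 0.5x to 2x
    horizontal_flip=True,
)
# flow = augmenter.flow(x_train, y_train, batch_size=30)  # paper's mini-batch size
```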
null
false
null
What is the difference between it's and its?
"It's" is a contraction for "it is". Whereas "its" is the possessive pronoun for it
null
false
null
Segregate the below words as palindrome or not. kayak, deified, rotator, repaper, deed, peep, wow, noon, taste, bait, not, built
Palindrome - kayak, deified, rotator, repaper, deed, peep, wow, noon
Not a palindrome - taste, bait, not, built
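A short check that reproduces the classification above, assuming case-insensitive comparison of single words.

```python
def is_palindrome(word):
    """True if the word reads the same forwards and backwards."""
    w = word.lower()
    return w == w[::-1]

words = ["kayak", "deified", "rotator", "repaper", "deed",
         "peep", "wow", "noon", "taste", "bait", "not", "built"]
for w in words:
    print(w, "->", "palindrome" if is_palindrome(w) else "not a palindrome")
```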
null
false
null
Give me a list of things I should bring when taking a baby to the park.
Here are some things you should bring when taking a baby to the park:
- Diaper Bag
- Toys
- Stroller
- Extra Diapers and Clothes
- Carrying Pack
- First Aid Kit
- Water
- Snacks
- Milk
null
false
185
Learning natural language generation (NLG) models heavily relies on annotated training data. However, most available datasets are collected in a single language (typically English), which restricts deploying the applications to other languages. In this work, we aim at transferring the supervision of a monolingual NLG dataset to unseen languages, so that we can boost performance for the low-resource settings. Various methods have been proposed over the years to learn universal cross-lingual word embeddings BIBREF0, BIBREF1, BIBREF2 or sentence encoders BIBREF3, BIBREF4, BIBREF5, which try to encode multilingual texts into a single shared vector space. Despite achieving promising results on cross-lingual classification problems, cross-lingual pre-trained models designed for NLG tasks remain relatively understudied. The cross-lingual generation problem is challenging due to the following reasons. First, it requires the models to understand multilingual input texts, and generate multilingual target sequences. So both encoder and decoder should be pre-trained together. Second, the many-to-many nature of cross-lingual NLG increases the number of language pairs with the square of the number of languages. Third, the prediction space of cross-lingual NLG is much larger than classification tasks, which makes the knowledge transfer of decoders quite critical. Previous work mainly relies on machine translation (MT) systems to map texts to different languages. The first strand of research directly uses MT in a pipeline manner BIBREF6. For example, the input written in other languages is first translated to English, and fed into the NLG model that is trained by English data. Then the generated English text is translated back to the target language. Another strand of work employs MT to generate pseudo training data for other language pairs that lack annotations BIBREF7, BIBREF8. However, such methods have to use multiple MT systems, which makes them suffer from error propagation. Moreover, because the pipeline-based methods do not explicitly share the same parameter space across the languages, we can not directly transfer the task-specific supervision to other low-resource languages. In this paper, we propose a cross-lingual pre-trained model (named Xnlg) in order to transfer monolingual NLG supervision to other pre-trained languages by fine-tuning. Specifically, Xnlg shares the same sequence-to-sequence model across languages, and is pre-trained with both monolingual and cross-lingual objectives. The model not only learns to understand multilingual input, but also is able to generate specific languages by conditioning on the encoded semantics. Figure FIGREF2 demonstrates how to use Xnlg to perform cross-lingual transfer for downstream tasks. The proposed model enables us to fine-tune the pre-trained model on monolingual NLG training data, and then evaluate it beyond a single language, including zero-shot cross-lingual generation. Besides, we explore several fine-tuning strategies to make a compromise between cross-lingual ability and task ability. In addition, we introduce two cross-lingual NLG datasets (i.e., question generation, and abstractive summarization) for evaluation, which include three languages, namely English, Chinese, and French. Experimental results on the NLG tasks show that Xnlg achieves competitive performance compared with the machine-translation-based pipeline model in zero-shot cross-lingual settings.
The proposed model enables us to fine-tune the pre-trained model on monolingual NLG training data, and then evaluate it beyond a single language, including zero-shot cross-lingual generation.
What can the proposed model help them do?
It enables them to fine-tune the pre-trained model on monolingual NLG training data, and then evaluate it beyond a single language, including zero-shot cross-lingual generation.
null
false
null
Classify the below devices based on the brand which manufactures them. iPhone, iPad, MacBook, Pixel watch, Galaxy phone
iPhone, iPad, MacBook - Apple
Pixel watch - Google
Galaxy phone - Samsung
null
false
null
There are many modern variations of gazpacho with avocados, cucumbers, parsley, strawberries, watermelon, grapes, meat stock, seafood, and other ingredients instead of tomatoes and bread. The ingredients, texture, and thickness of gazpacho vary regionally and between different cooks. Similar cold raw soups such as arjamolho in Portugal, porra antequerana and ajoblanco, are also popular in Andalusia, although not as widespread as gazpacho. Gazpacho and salmorejo are especially similar since they are both tomato-based cold soups that are widely popular in Spain; the main difference between gazpacho and salmorejo is the culinary technique used since gazpacho is a soup whereas salmorejo is an emulsion. In addition, while both dishes share the main ingredients of tomato, olive oil, bread, and garlic, gazpacho can also be prepared with cucumber, peppers, and vinegar, whereas salmorejo cannot. The original recipe using bread, water, vinegar, oil, and salt is traditional in the Iberian Peninsula, perhaps going back to Roman times. Every central and southern region has its own variety. The humble gazpacho became a very deeply rooted food for peasants and shepherds in Spain. The basic gazpacho gave rise to many variants, some also called gazpacho, others not; some authors have tried to classify all these variations. Gazpachos may be classified by colour: the most usual red ones (which contain tomato), white ones (which contain no tomato, but include dried fruits), and green ones (which are white but contain some spices that make them green). These variants have their basic ingredients in common, including garlic paste which works as an emulsifier, bread, olive oil, vinegar and salt. In addition to the traditional ingredients, red fruits such as strawberries, muskmelon, etc., may be added, making the gazpacho a bit sweeter. Gazpacho may be served as a starter, main dish, or tapa. A popular variation comes from the town of Rota in the province of Cádiz. During times of drought, there was not enough water to make gazpacho; thus, arranque has the same ingredients as gazpacho, but requires less water and bread, making it a sort of cream. Some people add more bread until it takes on the consistency of a dip. In Extremadura, gazpachos are a kind of purée or thick gazpacho known as cojondongo, or cojondongo del gañán, made of breadcrumbs, garlic, oil, and vinegar, then topped with chopped onions, tomato and peppers. Gazpacho manchego, as its name implies, is made in the east region of La Mancha, in Albacete and nearby areas, and is popular in other areas in the center and southwest of the country. It is a meat stew, whose main ingredients are small game animals or birds such as rabbit, hare, quail, or pigeon, and flatbread, and may include garlic, tomatoes, and mushrooms. It is cooked in a cauldron and served hot. Another well-known variant in La Mancha is gazpacho de pastor or galiano. Some other hot meat or fish dishes from other regions are called gazpacho (gazpacho jumillano, gazpacho de Yecla, gazpacho de Requena, etc.) Gazpacho is often eaten during the very hot and dry summers in Castilla y León. The gazpacho made in La Moraña in the province of Ávila has large pieces of vegetables floating in a watery soup. Gazpacho manchego, despite its name, is a meat stew, served hot, not a variation on the cold vegetable soup.
What is this text about?
This text is about variations of gazpacho. It lists different recipes and explains influences from regions and cultural roots. It also mentions that although some dishes contain gazpacho in their name, not all are variations of the original gazpacho.
null
false
104
As established in previous work BIBREF7 , BIBREF8 , we use a standard unsupervised SRL setup, consisting of the following steps:

1. Syntactic Parsing
2. Predicate Identification
3. Argument Identification
4. Argument Classification

The task we model, unsupervised semantic role induction, is step 4 of this pipeline.
What steps does the standard unsupervised SRL setup consist of?
Syntactic Parsing, Predicate Identification, Argument Identification and Argument Classification.
null
false
null
Define unemployment and how it is expressed in most countries.
The unemployment rate is the percentage of the total workforce who are unemployed and actively seeking a paid job:

Unemployment rate = (# of unemployed / # in labor force) × 100%

The unemployment rate is one of the most closely watched statistics because a rising rate is seen as a sign of a weakening economy that may call for a cut in interest rates. A falling rate, similarly, indicates a growing economy, which is usually accompanied by a higher inflation rate and may call for an increase in interest rates.
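A tiny worked example of the formula above; the figures are hypothetical.

```python
def unemployment_rate(unemployed, labor_force):
    """Unemployment rate: unemployed divided by labor force, as a percentage."""
    return 100.0 * unemployed / labor_force

# Hypothetical economy: 6.5M unemployed in a 165M-person labor force.
print(f"{unemployment_rate(6_500_000, 165_000_000):.1f}%")  # 3.9%
```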
null
false
null
What is comparative genomics?
The core of comparative genome analysis is the establishment of the correspondence between genes (orthology analysis) or other genomic features in different organisms. It is these intergenomic maps that make it possible to trace the evolutionary processes responsible for the divergence of two genomes. A multitude of evolutionary events acting at various organizational levels shape genome evolution. At the lowest level, point mutations affect individual nucleotides. At a higher level, large chromosomal segments undergo duplication, lateral transfer, inversion, transposition, deletion and insertion. Ultimately, whole genomes are involved in processes of hybridization, polyploidization and endosymbiosis, often leading to rapid speciation.

The complexity of genome evolution poses many exciting challenges to developers of mathematical models and algorithms, who have recourse to a spectrum of algorithmic, statistical and mathematical techniques, ranging from exact, heuristic, fixed-parameter and approximation algorithms for problems based on parsimony models, to Markov chain Monte Carlo algorithms for Bayesian analysis of problems based on probabilistic models. Many of these studies are based on the detection of sequence homology to assign sequences to protein families.
null
false
null
What are some key factors to consider in choosing a place to live?
1) Cost of living
2) Climate/weather
3) Access to quality healthcare
4) Access to quality education
5) Employment opportunities
null
false
271
Large repositories of textual communications (e.g. forum and microblog posts) have gained recent popularity as proxies for dialog BIBREF0, BIBREF1, BIBREF2. However, conversations in these settings differ from natural dialog: turns may be sparsely scattered over a large temporal span, contain distinct syntax and vocabulary BIBREF3, and differ greatly in formality and focus BIBREF4. In this paper, we investigate how appropriate such data is for modeling natural dialog, and introduce Interview, a new high-quality large-scale open-domain conversational dataset grounded in interview settings with annotations for specific speaker roles. We compare the performance of state-of-the-art language models fine-tuned on Interview and other popular conversational datasets, demonstrating that Interview contains more complex dialog and better models the characteristics of natural spoken conversations. Our dataset is an order of magnitude larger than existing high-quality natural dialog datasets and contains speaker role annotations for each turn, facilitating the development of conversational agents and assistive systems for settings involving specific speaker roles, such as doctor-patient interviews or hosted talk shows. In particular, we explore the tasks of role modeling in media dialog and role change detection on Interview and find that leveraging role information can enable more nuanced, on-topic and natural dialog generation, as well as improve role change classification performance. In summary, we present Interview, the first large-scale open-domain media dialog dataset. We explore two tasks for which it serves as a promising benchmark dataset: speaker role modeling and speaker change detection. We build simple yet strong models to show quantitatively that role labels from Interview improve performance on such tasks. Interview's scale, spoken origins, role diversity, and complex utterances make it a better source for grounded open-domain conversations. In this paper, we investigate how appropriate such data is for modeling natural dialog, and introduce Interview, a new high-quality large-scale open-domain conversational dataset grounded in interview settings with annotations for specific speaker roles.
What is the Interview in this paper?
It is a new high-quality, large-scale, open-domain conversational dataset grounded in interview settings, with annotations for specific speaker roles.
null
false
null
In 1989, David teamed up with comedian Jerry Seinfeld to create a pilot for NBC called The Seinfeld Chronicles, which became the basis for Seinfeld, one of the most successful shows in history, reaching the top of TV Guide's list of the 50 greatest TV shows of all time. Entertainment Weekly ranked it the third-best TV show of all time. David made occasional uncredited appearances on the show, playing such roles as Frank Costanza's cape-wearing lawyer and the voice of George Steinbrenner. He was also the primary inspiration for the show's character George Costanza. David left Seinfeld on friendly terms after the seventh season but returned to write the series finale in 1998, two years later. He also continued to provide the voice for the Steinbrenner character. David wrote 62 of the episodes of Seinfeld, including 1992's "The Contest", for which he won a Primetime Emmy Award and which TV Guide ranked as episode No. 1 on its list of "TV's Top 100 Episodes of All Time". The syndication of Seinfeld earned David an estimated $250 million in 1998 alone. This amount has been steadily decreasing each year, but payments will continue until the full $1.7 billion from the original syndication deal has been paid. In 2008, David made $55 million from Seinfeld syndication, DVD sales, and Curb Your Enthusiasm. He was nominated for an Emmy award 19 times for Seinfeld, winning twice — once for best comedy and once for writing.
Give me a brief synopsis of Larry David’s contribution to Seinfeld based on the text.
Larry David was a co-writer for seven seasons on the hit TV show Seinfeld. In addition to writing 62 episodes, he played several characters on the show, including Frank Costanza’s lawyer and the voice of George Steinbrenner. The character of George Costanza was also based on Larry David. His work on the show earned him 19 Emmy award nominations and two wins, numerous accolades, and considerable syndication income.
null
false
null
Identify which car manufacturer is Italian or American: Lamborghini, Chrysler
Chrysler is American, Lamborghini is Italian
null
false
null
Is streaming better than cable?
Streaming, which is a short-hand term for on-demand programming streamed over the internet, is available from many service providers including Netflix, Disney, Hulu, and Amazon. It is often considered better than cable because there is more original content and the content can be downloaded or consumed at any time, which is more flexible as compared to the standard cable packages offered. While streaming is considered more flexible, it can be expensive since most people consume streaming services from many providers, increasing the aggregate cost of subscriptions.
null
false
336
Verb is crucial in sentence understanding BIBREF0 , BIBREF1 . A major issue of verb understanding is polysemy BIBREF2 , which means that a verb has different semantics or senses when collocating with different objects. In this paper, we only focus on verbs that collocate with objects. As illustrated in Example SECREF1 , most verbs are polysemous. Hence, a good semantic representation of verbs should be aware of their polysemy. Example 1 (Verb Polysemy) eat has the following senses: Many typical verb representations, including FrameNet BIBREF3 , PropBank BIBREF4 , and VerbNet BIBREF5 , describe verbs' semantic roles (e.g. ingestor and ingestibles for “eat”). However, semantic roles in general are too coarse to differentiate a verb's fine-grained semantics. A verb in different phrases can have different semantics but similar roles. In Example SECREF1 , both “eat”s in “eat breakfast” and “eat apple” have an ingestor. But they have different semantics. The unawareness of verbs' polysemy makes traditional verb representations unable to fully understand the verb in some applications. In the sentence I like eating pitaya, people directly know “pitaya” is probably one kind of food, since eating a food is the most fundamental semantic of “eat”. This enables context-aware conceptualization of pitaya to the food concept. But by only knowing that pitaya's role is the “ingestibles”, traditional representations cannot tell if pitaya is a food or a meal. Verb Patterns We argue that verb patterns (available at http://kw.fudan.edu.cn/verb) can be used to represent more fine-grained semantics of a verb. We design verb patterns based on two word collocation principles proposed in corpus linguistics BIBREF6 : the idiom principle and the open-choice principle. Following these principles, we designed two types of verb patterns. According to the above definitions, we use verb patterns to represent the verb's semantics. Phrases assigned to the same pattern have similar semantics, while those assigned to different patterns have different semantics. By verb patterns, we know the “pitaya” in I like eating pitaya is a food by mapping “eat pitaya” to “eat $ INLINEFORM0 food”. On the other hand, idiom patterns specify which phrases should not be conceptualized. We list verb phrases from Example SECREF1 and their verb patterns in Table TABREF7 . And we will show how context-aware conceptualization benefits from our verb patterns in the application section. Thus, our problem is how to generate conceptualized patterns and idiom patterns for verbs. We use two public data sets for this purpose: Google Syntactic N-Grams (http://commondatastorage.googleapis.com/books/syntactic-ngrams/index.html) and Probase BIBREF7 . Google Syntactic N-grams contains millions of verb phrases, which allows us to mine rich patterns for verbs. Probase contains rich concepts for instances, which enables the conceptualization of objects. Thus, our problem is: given a verb INLINEFORM0 and a set of its phrases, generate a set of patterns (either conceptualized patterns or idiom patterns) for INLINEFORM1 . However, pattern generation for verbs is non-trivial. In general, the most critical challenge we face is the trade-off between generality and specificity of the generated patterns, as explained below. We design verb patterns based on two word collocation principles proposed in corpus linguistics (Sinclair 1991): the idiom principle and the open-choice principle.
Based on what principles do they design verb patterns?
The idiom principle and the open-choice principle.
null
false
291
Coreference resolution is the task of identifying all mentions which refer to the same entity in a document. It has been shown beneficial in many natural language processing (NLP) applications, including question answering BIBREF0 and information extraction BIBREF1 , and often regarded as a prerequisite to any text understanding task. Coreference resolution can be regarded as a clustering problem: each cluster corresponds to a single entity and consists of all its mentions in a given text. Consequently, it is natural to evaluate predicted clusters by comparing them with the ones annotated by human experts, and this is exactly what the standard metrics (e.g., MUC, B INLINEFORM0 , CEAF) do. In contrast, most state-of-the-art systems are optimized to make individual co-reference decisions, and such losses are only indirectly related to the metrics. One way to deal with this challenge is to optimize directly the non-differentiable metrics using reinforcement learning (RL), for example, relying on the REINFORCE policy gradient algorithm BIBREF2 . However, this approach has not been very successful, which, as suggested by clark-manning:2016:EMNLP2016, is possibly due to the discrepancy between sampling decisions at training time and choosing the highest ranking ones at test time. A more successful alternative is using a `roll-out' stage to associate cost with possible decisions, as in clark-manning:2016:EMNLP2016, but it is computationally expensive. Imitation learning BIBREF3 , BIBREF4 , though also exploiting metrics, requires access to an expert policy, with exact policies not directly computable for the metrics of interest. In this work, we aim at combining the best of both worlds by proposing a simple method that can turn popular coreference evaluation metrics into differentiable functions of model parameters. As we show, this function can be computed recursively using scores of individual local decisions, resulting in a simple and efficient estimation procedure. The key idea is to replace non-differentiable indicator functions (e.g. the member function INLINEFORM0 ) with the corresponding posterior probabilities ( INLINEFORM1 ) computed by the model. Consequently, non-differentiable functions used within the metrics (e.g. the set size function INLINEFORM2 ) become differentiable ( INLINEFORM3 ). Though we assume that the scores of the underlying statistical model can be used to define a probability model, we show that this is not a serious limitation. Specifically, as a baseline we use a probabilistic version of the neural mention-ranking model of P15-1137, which on its own outperforms the original one and achieves similar performance to its global version BIBREF5 . Importantly when we use the introduced differentiable relaxations in training, we observe a substantial gain in performance over our probabilistic baseline. Interestingly, the absolute improvement (+0.52) is higher than the one reported in clark-manning:2016:EMNLP2016 using RL (+0.05) and the one using reward rescaling (+0.37). This suggests that our method provides a viable alternative to using RL and reward rescaling. The outline of our paper is as follows: we introduce our neural resolver baseline and the B INLINEFORM0 and LEA metrics in Section SECREF2 . Our method to turn a mention ranking resolver into an entity-centric resolver is presented in Section SECREF3 , and the proposed differentiable relaxations in Section SECREF4 . Section SECREF5 shows our experimental results. 
In this work, we aim to combine the best of both worlds by proposing a simple method that can turn popular coreference evaluation metrics into differentiable functions of model parameters.
What method do they propose in their paper?
A method that can turn popular coreference evaluation metrics into differentiable functions of model parameters.
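To make the relaxation concrete, here is a minimal PyTorch sketch (our illustration, not the authors' code): hard cluster-membership indicators are replaced by model posteriors, so set sizes and overlap counts, and hence a B³-style score, become differentiable. All variable names are ours.

```python
import torch

def soft_set_sizes(p: torch.Tensor) -> torch.Tensor:
    # Hard size |E| = sum_u [u in E]; relaxed size = sum_u p(u in E).
    return p.sum(dim=0)  # one differentiable "size" per entity

def soft_overlap(p: torch.Tensor, gold: torch.Tensor) -> torch.Tensor:
    # Soft |S_pred ∩ R_gold|: sum over mentions u of p(u in S) * [u in R].
    return p.t() @ gold  # shape: (predicted entities, gold entities)

# Toy example: 4 mentions, 2 predicted entities; rows of `p` are posteriors.
scores = torch.randn(4, 2, requires_grad=True)
p = torch.softmax(scores, dim=1)
gold = torch.tensor([[1., 0.], [1., 0.], [0., 1.], [0., 1.]])

overlap = soft_overlap(p, gold)  # (2, 2) soft overlap counts
# Relaxed B^3 recall: (1/N) * sum_j sum_k |S_j ∩ R_k|^2 / |R_k|.
recall = (overlap ** 2 / soft_set_sizes(gold)).sum() / gold.sum()
recall.backward()  # gradients reach `scores`: the metric is differentiable
print(recall.item(), scores.grad.shape)
```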
null
false
null
How do people buy parts for classic cars that are no longer in production?
A challenge of owning a classic car is buying replacement parts when original components fail. There are a few options. Some specialty supply houses create reproduction parts, but availability depends on the popularity of the car; it can take some time for a custom part to be fabricated, and it can be a costly option. If the failed part is on hand, you can take it to a local supply house, compare its specs to modern parts in the parts catalog, and select the closest match. This is a good option for items like fuel filters and brake pads that have dimensions and functionality similar to the original part. In some cases, it may be easier to upgrade a component, such as replacing a generator with an alternator to improve the reliability of the classic car.
null
false
null
Violet Constance Jessop (2 October 1887 – 5 May 1971), often referred to as the "Queen of sinking ships" or "Miss Unsinkable," was an Argentine woman of Irish heritage who worked as an ocean liner stewardess, memoirist, and nurse in the early 20th century. Jessop is most well known for having survived the sinking of both the RMS Titanic in 1912 and her sister ship the HMHS Britannic in 1916, as well as having been onboard the eldest of the three sister ships, the RMS Olympic, when it collided with the British warship HMS Hawke in 1911.
How many sinking ships did Violet Jessop survive? Be precise.
2.
null
false
84
Shallow syntax provides an approximation of the phrase-syntactic structure of sentences; it can be produced with high accuracy, and is computationally cheap to obtain. We investigate the role of shallow syntax-aware representations for NLP tasks using two techniques. First, we enhance the ELMo architecture to allow pretraining on predicted shallow syntactic parses, instead of just raw text, so that contextual embeddings make use of shallow syntactic context. Our second method involves shallow syntactic features obtained automatically on downstream task data. Neither approach leads to a significant gain on any of the four downstream tasks we considered relative to ELMo-only baselines. Further analysis using black-box probes confirms that our shallow-syntax-aware contextual embeddings do not transfer to linguistic tasks any more easily than ELMo's embeddings. We take these findings as evidence that ELMo-style pretraining discovers representations which make additional awareness of shallow syntax redundant. Further analysis using black-box probes from Liu et al. (2019) confirms that our shallow-syntax-aware contextual embeddings do not transfer to linguistic tasks any more easily than ELMo's embeddings.
What does further analysis using black-box probes from Liu et al. (2019) confirm?
Confirming that their shallow-syntax-aware contextual embeddings do not transfer to linguistic tasks any more easily than ELMo’s embeddings.
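As an illustration of the second method described above (shallow syntactic features added on downstream task data), here is a minimal sketch; the chunk tagset and BIO tags are invented for illustration and would in practice come from a trained chunker, not be hard-coded.

```python
# Minimal sketch (our illustration, not the paper's code): appending one-hot
# shallow-syntax (chunk) features to per-token embeddings.
import numpy as np

CHUNK_TAGS = ["B-NP", "I-NP", "B-VP", "I-VP", "O"]  # tiny illustrative tagset
TAG2ID = {t: i for i, t in enumerate(CHUNK_TAGS)}

def add_chunk_features(token_embeddings: np.ndarray, chunk_tags: list) -> np.ndarray:
    """Concatenate a one-hot chunk-tag vector onto each token embedding."""
    one_hot = np.zeros((len(chunk_tags), len(CHUNK_TAGS)))
    for i, tag in enumerate(chunk_tags):
        one_hot[i, TAG2ID[tag]] = 1.0
    return np.concatenate([token_embeddings, one_hot], axis=1)

# 4 tokens with 8-dim embeddings (stand-ins for ELMo vectors)
emb = np.random.randn(4, 8)
tags = ["B-NP", "I-NP", "B-VP", "O"]
augmented = add_chunk_features(emb, tags)
print(augmented.shape)  # (4, 13): original 8 dims + 5 chunk-tag dims
```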
null
false
null
Identify which instrument is string or percussion: Snare drum, Zither
Zither is string, Snare drum is percussion.
null
false
null
Murray's performance in Sofia Coppola's Lost in Translation (2003) earned him a Golden Globe and a British Academy Film Award and an Oscar nomination for Best Actor. He has frequently collaborated with directors Ivan Reitman, Harold Ramis, Wes Anderson, Sofia Coppola, Jim Jarmusch, John McNaughton and the Farrelly brothers. He received Golden Globe nominations for his roles in Ghostbusters, Rushmore (1998), Hyde Park on Hudson (2012), St. Vincent (2014), On the Rocks (2020) and the HBO miniseries Olive Kitteridge (2014), for which he later won his second Primetime Emmy Award.
What awards did Bill Murray receive for Lost in Translation?
Bill Murray won a Golden Globe and a British Academy Film Award, and received an Oscar nomination for Best Actor, for his role in Lost in Translation.
null
false
null
The name Rozelle and Rozelle Bay (often shown as "Rozella Bay" on old maps), originated from the parrots found in abundance at Rose Hill (near Parramatta) the first suburb of Sydney, established as a prime farming area for the new colony. The parrots, also in abundance in the inner west Bay area of Sydney, were commonly called "Rose Hill parrots" or "Rose-hillers" then Rosella.
Where does the suburb of Rozelle's name come from?
The name Rozelle originated from the parrots found in abundance at Rose Hill, the first suburb of Sydney. The parrots, also in abundance in the inner west Bay area of Sydney, were commonly called "Rose Hill parrots" or "Rose-hillers", then Rosella.
1909.00154
false
null
In contrast to textual data, our goal in this paper is to explore the large amount of categorical data that is often collected in travel surveys. This includes trip purpose, education level, or family type. We also consider other variables that are not necessarily of a categorical nature but typically end up dummy encoded due to segmentation, such as age, income, or even origin/destination pair.
How do they model travel behavior?
Travel behavior is modeled using the categorical data collected in travel surveys.
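For concreteness, here is a minimal pandas sketch of the dummy encoding the passage refers to; the column names and category values are invented for illustration.

```python
# Categorical travel-survey fields expanded into 0/1 indicator columns.
import pandas as pd

survey = pd.DataFrame({
    "trip_purpose": ["work", "shopping", "leisure", "work"],
    "education":    ["college", "high_school", "college", "graduate"],
    "age_band":     ["18-30", "31-45", "31-45", "46-60"],  # segmented numeric
})

# Each categorical column becomes one indicator column per category.
dummies = pd.get_dummies(survey, columns=["trip_purpose", "education", "age_band"])
print(dummies.columns.tolist())
```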
null
false
null
In addition, she has presented on the topic of synesthesia at a number of universities including Yale University, Princeton University, the University of California, San Diego, Rockefeller University, the University of Virginia, the University of Almería, the University of Jaén, Stockholm University and others. Duffy was invited to be a Plenary Speaker on synesthesia at the "Towards a Science of Consciousness" conference at University of Arizona in Tucson. She is a co-founder of and consultant to the American Synesthesia Association.
Provide a comma-separated list of all the universities listed in this passage
Yale University, Princeton University, the University of California, San Diego, Rockefeller University, the University of Virginia, the University of Almería, the University of Jaén, Stockholm University, University of Arizona
null
false
192
For evaluating topics with automatic quality measures, we used several English text collections and one Russian collection (Table 1). We experiment with three thesauri: WordNet (155 thousand entries), the EuroVoc information retrieval thesaurus of the European Union (15,161 terms), and the Russian thesaurus RuThes (115 thousand entries) BIBREF19. At the preprocessing step, documents were processed by morphological analyzers. Also, we extracted noun groups as described in BIBREF16. As baselines, we use the unigram LDA topic model and an LDA topic model with 1000 added ngrams with maximal NC-value BIBREF20 extracted from the collection under analysis. As was found before BIBREF14, BIBREF16, the addition of ngrams without accounting for relations between their components considerably worsens the perplexity because of the vocabulary growth (for perplexity, lower is better) and practically does not change other automatic quality measures (Table 2). We add the WordNet data in the following steps. At the first step, we include WordNet synonyms (including multiword expressions) into the proposed similarity sets (LDA-Sim+WNsyn). At this step, frequencies of synonyms found in the same document are summed up during LDA topic learning as described in Algorithm 1. We can see that the kernel uniqueness becomes very low; topics are very close to each other in content (Table 2: LDA-Sim+WNsyn). At the second step, we add words' direct relatives (hyponyms, hypernyms, etc.) to the similarity sets. Now the frequencies of semantically related words are added up, enhancing their contribution to all topics of the current document. Table 2 shows that these two steps lead to great degradation of the topic model in most measures in comparison to the initial unigram model: the uniqueness of kernels abruptly decreases, and perplexity at the second step grows by several times (Table 2: LDA-Sim+WNsynrel). It is evident that at this step the model has poor quality. When we look at the topics, the cause of the problem becomes clear: the obtained topics are overgeneralized. The topics are built around very general words such as "person", "organization", "year", etc. These words were initially frequent in the collection and then received additional frequencies from their frequent synonyms and related words. We suppose that these general words were used in texts to discuss specific events and objects; therefore, we change the construction of the similarity sets in the following way: we do not add a word's hyponyms to its similarity set. Thus hyponyms, which are usually more specific and concrete, obtain additional frequencies from upper synsets and increase their contribution to the document's topics, while the frequencies and contribution of hypernyms to the topic of the document are not changed. And we see a great improvement in model quality: the kernel uniqueness considerably improves, perplexity decreases to levels comparable with the unigram model, and topic coherence characteristics also improve for most collections (Table 2: LDA-Sim+WNsynrel/hyp). We further use the WordNet-based similarity sets with n-grams having the same components as described in BIBREF16. All measures significantly improve for all collections (Table 2: LDA-Sim+WNsr/hyp+Ngrams). At the last step, we try to apply to ngrams the same approach that was previously utilized for hyponym-hypernym relations: frequencies of shorter ngrams and words are added to the frequencies of longer ngrams, but not vice versa.
In this case, we try to increase the contribution of more specific, longer ngrams to the topics. It can be seen (Table 2) that the kernel uniqueness grows significantly; at this step it is 1.3-1.6 times greater than for the baseline models, achieving 0.76 on the ACL collection (Table 2: LDA-Sim+WNsr/hyp+Ngrams/l). In the second series of experiments, we applied the EuroVoc information retrieval thesaurus to two European Union collections: Europarl and JRC. In content, the EuroVoc thesaurus is much smaller than WordNet; it contains terms from the economic and political domains and does not include general abstract words. The results are shown in Table 3. It can be seen that the inclusion of EuroVoc synsets improves the topic coherence and increases kernel uniqueness (in contrast to the results with WordNet). Adding ngrams further improves the topic coherence and kernel uniqueness. Lastly, we experimented with the Russian banking collection and utilized the RuThes thesaurus. In this case we obtained improvement already on RuThes synsets, and again adding ngrams further improved topic coherence and kernel uniqueness (Table 4). It is worth noting that adding ngrams sometimes worsens the TC-NPMI measure, especially on the JRC collection. This is due to the fact that in these experiments the topics' top elements contain a lot of multiword expressions, which rarely occur in Wikipedia (used for the coherence calculation); therefore, the automatic coherence measures can have insufficient evidence for correct estimates. We add the WordNet data in the following steps. At the first step, we include WordNet synonyms (including multiword expressions) into the proposed similarity sets (LDA-Sim+WNsyn). At this step, frequencies of synonyms found in the same document are summed up during LDA topic learning as described in Algorithm 1. We can see that the kernel uniqueness becomes very low; topics are very close to each other in content (Table 2: LDA-Sim+WNsyn). At the second step, we add words' direct relatives (hyponyms, hypernyms, etc.) to the similarity sets.
In what steps do they add the Wordnet data?
Include WordNet synonyms into the proposed similarity sets, add word direct relatives to similarity sets.
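A minimal sketch of the asymmetric count update the evidence describes: synonyms and hypernyms co-occurring in the same document add their frequencies to a word, while hypernyms receive nothing from their hyponyms. The similarity sets below are hand-made toys, not the authors' resources.

```python
from collections import Counter

# similarity_sets[w] = words whose in-document counts are added to w's count
similarity_sets = {
    "dog":    {"canine", "animal"},   # synonym + hypernym feed "dog"
    "canine": {"dog", "animal"},
    "animal": set(),                  # hypernym: receives nothing from hyponyms
}

def adjusted_counts(doc_tokens: list) -> Counter:
    raw = Counter(doc_tokens)
    adjusted = Counter(raw)
    for word, related in similarity_sets.items():
        if word in raw:
            # add counts of related words appearing in the same document
            adjusted[word] += sum(raw[r] for r in related)
    return adjusted

doc = ["dog", "dog", "animal", "canine"]
print(adjusted_counts(doc))
# "dog" gets +1 (animal) +1 (canine) -> 4; "animal" keeps its raw count of 1
```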
1604.07236
false
null
We carried out the experimentation with a range of classifiers of different types: Support Vector Machines (SVM), Gaussian Naive Bayes, Multinomial Naive Bayes, Decision Trees, Random Forests, and a Maximum Entropy classifier. They were tested in two different settings: one without balancing the weights of the different classes, and the other weighting the classes as the inverse of their frequency in the training set; the latter was tested as a means of dealing with the highly imbalanced data. The selection of these classifiers is in line with those used in the literature, especially with those tested by Han et al. BIBREF41. This experimentation led to the selection of the weighted Maximum Entropy (MaxEnt) classifier as the most accurate. In the interest of space and focus, we only present results for this classifier. We carried out the experimentation with a range of classifiers of different types: Support Vector Machines (SVM), Gaussian Naive Bayes, Multinomial Naive Bayes, Decision Trees, Random Forests and a Maximum Entropy classifier.
What model do they train?
The answers are shown as follows: * Support Vector Machines (SVM), Gaussian Naive Bayes, Multinomial Naive Bayes, Decision Trees, Random Forests and a Maximum Entropy classifier
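A minimal scikit-learn sketch of the selected setup: LogisticRegression is a Maximum Entropy classifier, and class_weight="balanced" weighs each class by the inverse of its training frequency, as the evidence describes. The data here is synthetic, standing in for the paper's dataset.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Imbalanced toy data (90% / 10% class split)
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)

# Weighted MaxEnt: class weights set to the inverse of class frequency
maxent = LogisticRegression(class_weight="balanced", max_iter=1000)
maxent.fit(X, y)
print(maxent.score(X, y))
```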
null
false
null
What programming language should I learn?
There are many factors to consider when choosing a programming language, including ease of use, support by the developer community, licensing, and your application. Programming languages known for their readability include Python and Go. These languages also have open-source licenses and strong support from the developer community for the various libraries one can include in applications. Languages with a REPL are often easier to start learning because they're interactive. Java, C, C++, and Scala are common languages but are often considered more challenging to start with than a general-purpose scripting language like Python. For specific applications like web pages, JavaScript is used by most websites.
null
false
89
Question generation (QG) is the task of generating a question from a reference sentence and a specified answer within the sentence. A major challenge in QG is to identify answer-relevant context words to finish the declarative-to-interrogative sentence transformation. Existing sequence-to-sequence neural models achieve this goal by proximity-based answer position encoding, under the intuition that words neighbouring the answer are highly likely to be answer-relevant. However, such intuition may not apply to all cases, especially for sentences with complex answer-relevant relations. Consequently, the performance of these models drops sharply when the relative distance increases between the answer fragment and other non-stop sentence words that also appear in the ground-truth question. To address this issue, we propose a method to jointly model the unstructured sentence and the structured answer-relevant relation (extracted from the sentence in advance) for question generation. Specifically, the structured answer-relevant relation acts as the to-the-point context and thus naturally helps keep the generated question to the point, while the unstructured sentence provides the full information. Extensive experiments show that the to-the-point context helps our question generation model achieve significant improvements on several automatic evaluation metrics. Furthermore, our model is capable of generating diverse questions for a sentence which conveys multiple relations of its answer fragment. However, such intuition may not apply to all cases, especially for sentences with complex answer-relevant relations. Consequently, the performance of these models drops sharply when the relative distance increases between the answer fragment and other non-stop sentence words that also appear in the ground-truth question.
What is the drawback of existing models to in question generation?
The performance of these models drops sharply when the relative distance increases between the answer fragment and other non-stop sentence words that also appear in the ground-truth question.
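A minimal sketch of the proximity-based answer position encoding the passage critiques: each token is tagged with its distance to the answer span, so only neighbouring words look answer-relevant. The function name and example are ours, not the paper's.

```python
def answer_position_ids(n_tokens: int, ans_start: int, ans_end: int) -> list:
    """0 inside the answer span, otherwise distance to the nearest span edge."""
    ids = []
    for i in range(n_tokens):
        if ans_start <= i <= ans_end:
            ids.append(0)
        elif i < ans_start:
            ids.append(ans_start - i)
        else:
            ids.append(i - ans_end)
    return ids

# 10-token sentence with a single-token answer at index 4
print(answer_position_ids(10, 4, 4))
# [4, 3, 2, 1, 0, 1, 2, 3, 4, 5]: distant but relevant words get large ids,
# which is exactly where this heuristic breaks down
```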
null
false
null
Give me a list of popular SUVs available in North America
SUVs (Sports Utility Vehicle) have become increasingly popular in North America in recent years. Five popular SUVs available currently include: - Kia Telluride - BMW X5 - Hyundai Palisade - Jeep Grand Cherokee - Volvo XC40
null
false
null
What form of government is the United Kingdom?
The United Kingdom is a constitutional monarchy.
1710.06700
false
null
To make the annotated data publicly available, we selected 70 news articles from the Arabic WikiNews site https://ar.wikinews.org/wiki. These articles cover recent news from 2013 to 2015 in multiple genres (politics, economics, health, science and technology, sports, arts, and culture). The articles contain 18,300 words and are evenly distributed among these 7 genres, with 10 articles each.
Where did they collect their dataset from?
The answers are shown as follows: * from Arabic WikiNews site https://ar.wikinews.org/wiki
null
false
null
What are the pros and cons of charging your Tesla at home?
The most convenient way to charge your EV is at home overnight while you sleep. Home chargers tend to be slow, reaching around 3.6 - 7 kW, although you can also charge using your home's plug sockets at speeds of around 2.3 kW. Pros: Charging your Tesla Model Y at home overnight is probably the cheapest, most cost-efficient way to charge your EV, because off-peak energy prices are significantly cheaper than peak prices. Model Ys come with an adapter cable, allowing you to charge your EV easily using your home's plugs. If you're interested in charging at home, installing a home charging unit could be worth your time, and over time it could save you money. There are grants available to help you afford a home charger: the OZEV Chargepoint Grant covers up to £350 or 75% of the fully installed charge point cost if specific conditions are met, and the Scottish EST Domestic Chargepoint Grant gives you up to £300 and can be used alongside the OZEV Chargepoint Grant. Cons: Home chargers rarely exceed speeds of 7 kW, so charging the Tesla Model Y's large battery takes a long time. If you don't invest in a home charging station, home charging can also be risky, as a 3-pin plug can damage your home's wiring. If you choose to charge during the day, it can also get quite expensive and may not compare favourably with public charging stations, while also being much slower.
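A back-of-envelope check of the charge times implied by the speeds quoted above; the 75 kWh battery capacity is our assumption for illustration, not a figure from the answer.

```python
# Rough full-charge times at the charger speeds mentioned in the answer.
battery_kwh = 75.0  # assumed usable capacity, for illustration only
for charger_kw in (2.3, 3.6, 7.0):  # 3-pin plug, slow and typical home units
    hours = battery_kwh / charger_kw
    print(f"{charger_kw} kW charger: ~{hours:.1f} h for a full 0-100% charge")
```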
null
false
null
What Constitutes a Foul in Football?
As a rule, a football foul is an unfair act committed by a player that interferes with active play. The match referee decides on fouls while enforcing the laws of the game. Fouls are punished with different outcomes: the referee can award either a direct or an indirect free kick (depending on the offence), and in some cases a foul results in a penalty spot kick awarded to the opposing team.
null
false
null
What are some of the languages spoken in India?
Following are some of the languages spoken in India - Hindi, English, Tamil, Telugu, Malayalam, Oriya, Kannada, Marathi, Punjabi, and Bengali.
1804.06506
false
null
In order to address the aforementioned problems, we redesign the neural decoder in three different scenarios. In the first scenario, we equip the decoder with an additional morphology table including target-side affixes. We place an attention module on top of the table which is controlled by the decoder. At each step, as the decoder samples a character, it searches the table to find the most relevant information which can enrich its state. Signals sent from the table can be interpreted as additional constraints. In the second scenario, we share the decoder between two output channels. The first one samples the target character and the other one predicts the morphological annotation of the character. This multi-tasking approach forces the decoder to send morphology-aware information to the final layer, which results in better predictions. In the third scenario, we combine these two models. Section "Proposed Architecture" provides more details on our models. In the first scenario we equip the decoder with an additional morphology table including target-side affixes. We place an attention module on top of the table which is controlled by the decoder. At each step, as the decoder samples a character, it searches the table to find the most relevant information which can enrich its state. Signals sent from the table can be interpreted as additional constraints. In the second scenario we share the decoder between two output channels. The first one samples the target character and the other one predicts the morphological annotation of the character. This multi-tasking approach forces the decoder to send morphology-aware information to the final layer which results in better predictions. In the third scenario we combine these two models.
How are the auxiliary signals from the morphology table incorporated in the decoder?
The answers are shown as follows: * an additional morphology table including target-side affixes. * We inject the decoder with morphological properties of the target language.
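A minimal PyTorch sketch (our reconstruction, not the paper's code) of the second scenario: one shared decoder state feeds two output heads, one predicting the next target character and one predicting its morphological annotation. All sizes and names are illustrative.

```python
import torch
import torch.nn as nn

class TwoChannelDecoderStep(nn.Module):
    def __init__(self, hidden=128, n_chars=60, n_morph_tags=12):
        super().__init__()
        self.rnn_cell = nn.GRUCell(input_size=32, hidden_size=hidden)
        self.char_head = nn.Linear(hidden, n_chars)        # channel 1: characters
        self.morph_head = nn.Linear(hidden, n_morph_tags)  # channel 2: morphology

    def forward(self, x, h):
        h = self.rnn_cell(x, h)  # shared state must carry morphology-aware info
        return self.char_head(h), self.morph_head(h), h

step = TwoChannelDecoderStep()
x, h = torch.randn(4, 32), torch.zeros(4, 128)
char_logits, morph_logits, h = step(x, h)
# Training sums both losses, forcing the shared state to encode morphology.
loss = nn.functional.cross_entropy(char_logits, torch.randint(0, 60, (4,))) \
     + nn.functional.cross_entropy(morph_logits, torch.randint(0, 12, (4,)))
loss.backward()
```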
null
false
null
Throughout his career, Mercury performed an estimated 700 concerts in countries around the world with Queen. A notable aspect of Queen concerts was the large scale involved. He once explained, "We're the Cecil B. DeMille of rock and roll, always wanting to do things bigger and better." The band was the first ever to play in South American stadiums, breaking worldwide records for concert attendance in the Morumbi Stadium in São Paulo in 1981. In 1986, Queen also played behind the Iron Curtain when they performed to a crowd of 80,000 in Budapest, in what was one of the biggest rock concerts ever held in Eastern Europe. Mercury's final live performance with Queen took place on 9 August 1986 at Knebworth Park in England and drew an attendance estimated as high as 200,000. A week prior to Knebworth, May recalled Mercury saying "I'm not going to be doing this forever. This is probably the last time." With the British national anthem "God Save the Queen" playing at the end of the concert, Mercury's final act on stage saw him draped in a robe, holding a golden crown aloft, bidding farewell to the crowd.
Given this paragraph about Freddie Mercury, where was his last performance with Queen?
Knebworth Park
null
false
null
classify each of these as either computer software or hardware: RAM, SSD, keyboard, mouse, trackpad, python, R, Scala, Google, Gmail, Figma, camera, screen
Computer software: python, R, Scala, Google, Gmail, Figma Computer hardware: RAM, SSD, keyboard, mouse, trackpad, camera, screen
null
false
null
How do you make Cincinnati style Venison Chili?
In a 4 to 6 quart pot, bring 1 quart of water to a boil Add: 2 lbs ground Venison Stir until separated and reduce heat to a simmer Add: 2 medium yellow onions, finely chopped 5 to 6 cloves of garlic, crushed (or 3 teaspoons minced garlic) One 15-ounce can tomato sauce 2 Tablespoons apple cider vinegar 1 tablespoon Worcestershire sauce In a ziplock bag, crush and add: 10 black peppercorns 8 whole allspice 8 whole cloves Stir and Add: 1 large bay leaf 2 teaspoons salt 2 teaspoons ground cinnamon 1 1/2 teaspoons ground cayenne pepper 1 teaspoon cumin 1/2 ounce grated unsweetened chocolate Return to a boil, then reduce heat to a simmer for 2 hours. If using ground beef, cool uncovered and store overnight in the refrigerator. Spoon off excess fat the next morning. Heat and serve
null
false
null
Who is the most sensational player in MLB today?
Shohei Ohtani
null
false
null
Which of these are characters in Ted Lasso: Rebecca Welton, Olivia Benson, Jamie Tartt
Rebecca Welton and Jamie Tartt are both characters in the TV show Ted Lasso. Olivia Benson is the main character on the TV Show Law & Order: SVU.
2004.03061
false
null
On another note, we apply our formalization to evaluate multilingual BERT's syntax knowledge on a set of six typologically diverse languages. Although it does encode a large amount of information about syntax (more than 81% in all languages), it only encodes at most 5% more information than some trivial baseline knowledge (a type-level representation). This indicates that the task of POS labeling (word-level POS tagging) is not an ideal task for contemplating the syntactic understanding of contextual word embeddings. We know BERT can generate text in many languages; here we assess how much it actually knows about syntax in those languages, and how much more it knows than simple type-level baselines. The full results table presents these results, showing how much information BERT, fastText, and one-hot embeddings encode about POS tagging. We see that, in all analysed languages, type-level embeddings can already capture most of the uncertainty in POS tagging. We also see that BERT only shares a small amount of extra information with the task, having small (or even negative) gains in all languages. Finally, when put into perspective, multilingual BERT's representations do not seem to encode much more information about syntax than a trivial baseline. BERT only improves upon fastText in three of the six analysed languages, and even in those, it encodes at most (in English) 5% additional information. On another note, we apply our formalization to evaluate multilingual BERT's syntax knowledge on a set of six typologically diverse languages. Although it does encode a large amount of information about syntax (more than 81% in all languages), it only encodes at most 5% more information than some trivial baseline knowledge (a type-level representation). This indicates that the task of POS labeling (word-level POS tagging) is not an ideal task for contemplating the syntactic understanding of contextual word embeddings. We see that, in all analysed languages, type-level embeddings can already capture most of the uncertainty in POS tagging. We also see that BERT only shares a small amount of extra information with the task, having small (or even negative) gains in all languages. Finally, when put into perspective, multilingual BERT's representations do not seem to encode much more information about syntax than a trivial baseline. BERT only improves upon fastText in three of the six analysed languages, and even in those, it encodes at most (in English) 5% additional information.
Was any variation in results observed based on language typology?
Some variability is observed, but it is not significant: BERT does not seem to gain much more syntax information than type-level information alone provides.
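A minimal sketch of the comparison behind this conclusion: the same probe is trained on contextual and on type-level features, and the "gain" is their difference. Random vectors stand in for BERT and fastText here, and probe accuracy stands in for the information estimates the paper uses.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, dim, n_tags = 2000, 64, 17        # tokens, embedding size, POS tagset size
y = rng.integers(0, n_tags, size=n)  # POS labels

contextual = rng.normal(size=(n, dim))  # stand-in for BERT token vectors
type_level = rng.normal(size=(n, dim))  # stand-in for fastText type vectors

def probe_accuracy(X, y):
    # Train a linear probe on the first 1500 tokens, test on the rest
    clf = LogisticRegression(max_iter=2000).fit(X[:1500], y[:1500])
    return clf.score(X[1500:], y[1500:])

gain = probe_accuracy(contextual, y) - probe_accuracy(type_level, y)
print(f"gain of the contextual probe over the type-level baseline: {gain:+.3f}")
```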
null
false
null
How many teams compete in the Formula One World Championship?
Currently, 10 teams compete in the Formula One World Championship. The number of competing teams has varied in the past.
null
false
null
Classify each team as playing in the EPL or La Liga: Barcelona, Tottenham, Brighton, Liverpool, Sevilla, Valencia, Everton
Barcelona: La Liga Tottenham: EPL Brighton: EPL Liverpool: EPL Sevilla: La Liga Valencia: La Liga Everton: EPL
null
false
null
The Mughal Empire was an early-modern empire that controlled much of the Indian subcontinent between the 16th and 19th centuries. For some two hundred years, the empire stretched from the outer fringes of the Indus river basin in the west, northern Afghanistan in the northwest, and Kashmir in the north, to the highlands of present-day Assam and Bangladesh in the east, and the uplands of the Deccan Plateau in South India.It included present day areas of India, Pakistan, Afghanistan and Bangladesh.
Tell me about the Mughal Empire?
The Mughal empire is conventionally said to have been founded in 1526 by Babur, a warrior chieftain from what is today Uzbekistan, who employed aid from the neighboring Safavid and Ottoman empires, to defeat the Sultan of Delhi, Ibrahim Lodi, in the First Battle of Panipat, and to sweep down the plains of North India. The Mughal imperial structure, however, is sometimes dated to 1600, to the rule of Babur's grandson, Akbar. This imperial structure lasted until 1720, shortly after the death of the last major emperor, Aurangzeb, during whose reign the empire also achieved its maximum geographical extent. Reduced subsequently to the region in and around Old Delhi by 1760, the empire was formally dissolved by the British Raj after the Indian Rebellion of 1857.