| paper_id (string) | yes_no (bool) | paper_index (int64) | evidence (string) | question (string) | answer (string) |
|---|---|---|---|---|---|
null | false
| null |
What is a hockey hat trick ?
|
when a player scores three goals in a game, usually earning him a cascade of hats thrown onto the ice by fans (especially if the player is on the home team). A natural hat trick is when a player scores three consecutive goals in a game.
|
|
null | false
| 174
|
With the expanding amount of text data generated on different social media platforms, current filters are insufficient to prevent the spread of hate speech. Most internet users involved in a study conducted by the Pew Research Center report having been subjected to offensive name calling online or having witnessed someone being physically threatened or harassed online. Additionally, Amnesty International, together with Element AI, recently reported that women politicians and journalists are assaulted every 30 seconds on Twitter. This is despite the Twitter policy condemning the promotion of violence against people on the basis of race, ethnicity, national origin, sexual orientation, gender identity, religious affiliation, age, disability, or serious disease. Hate speech may not represent the general opinion, yet it promotes the dehumanization of people who are typically from minority groups BIBREF0, BIBREF1 and can incite hate crime BIBREF2.
Moreover, although people of various linguistic backgrounds are exposed to hate speech BIBREF3, BIBREF2, English is still at the center of existing work on toxic language analysis. Recently, some research studies have been conducted on languages such as German BIBREF4, Arabic BIBREF5, and Italian BIBREF6. However, such studies usually use monolingual corpora and do not contrast or examine correlations between online hate speech in different languages. On the other hand, tasks involving more than one language, such as the hatEval task covering English and Spanish, include only separate classification tasks, namely (a) women and immigrants as target groups, (b) individual or generic hate, and (c) aggressive or non-aggressive hate speech.
Treating hate speech classification as a binary task may not be enough to inspect the motivation and the behavior of the users promoting it, and how people would react to it. For instance, the hateful tweets presented in Figure FIGREF5 show toxicity directed towards different targets, with or without using slurs, and generating several types of reactions. We believe that, in order to balance between truth and subjectivity, there are at least five important aspects in hate speech analysis. Hence, our annotations indicate (a) whether the text is direct or indirect; (b) if it is offensive, disrespectful, hateful, fearful out of ignorance, abusive, or normal; (c) the attribute based on which it discriminates against an individual or a group of people; (d) the name of this group; and (e) how the annotators feel about its content, within a range of negative to neutral sentiments. To the best of our knowledge, there are no other hate speech datasets that attempt to capture fear out of ignorance in hateful tweets or examine how people react to hate speech. We claim that our multi-aspect annotation schema provides valuable insight into several linguistic and cultural differences and biases in hate speech.
We use Amazon Mechanical Turk to label around 13,000 potentially derogatory tweets in English, French, and Arabic based on the above-mentioned aspects, and regard each aspect as a prediction task. Since in natural language processing there is particular interest in multitask learning, where different tasks can be used to help each other BIBREF7, BIBREF8, BIBREF9, we use a unified model to handle the annotated data in all three languages and five tasks. We adopt BIBREF8 as a learning algorithm adapted to loosely related tasks such as our five annotated aspects, and use the Babylon cross-lingual embeddings BIBREF10 to align the three languages. We compare the multilingual multitask learning setting with monolingual multitask, multilingual single-task, and monolingual single-task learning settings respectively. Then, we report the performance results of the different settings and discuss how each task affects the remaining ones. We release our dataset and code to the community to extend research work on multilingual hate speech detection and classification.
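The unified multilingual multitask setup described above can be pictured as a shared encoder over aligned cross-lingual embeddings with one classification head per annotated aspect (hard parameter sharing). The sketch below is a minimal illustration, not the authors' actual model: the dimensions, task names, and class counts are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical sizes: 300-d aligned cross-lingual embeddings, five tasks.
EMB_DIM, HID_DIM = 300, 64
N_CLASSES = {"directness": 2, "hostility": 6, "target_attribute": 5,
             "target_group": 8, "annotator_sentiment": 4}

# One shared encoder, one linear head per task (hard parameter sharing):
# the shared weights are updated by gradients from all five tasks.
W_shared = rng.normal(0, 0.1, (EMB_DIM, HID_DIM))
heads = {t: rng.normal(0, 0.1, (HID_DIM, k)) for t, k in N_CLASSES.items()}

def predict(embedding):
    h = np.tanh(embedding @ W_shared)            # shared representation
    return {t: softmax(h @ W) for t, W in heads.items()}

tweet_emb = rng.normal(size=EMB_DIM)             # stand-in for one tweet's embedding
preds = predict(tweet_emb)                       # one distribution per aspect
```

Because the encoder is shared, any tweet in any of the three languages (once mapped into the common embedding space) yields a prediction for every aspect in a single forward pass.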
Since in natural language processing there is particular interest in multitask learning, where different tasks can be used to help each other, we use a unified model to handle the annotated data in all three languages and five tasks.
|
Do they use a unified model to handle the annotated data in all three languages and five tasks?
|
Yes.
|
null | false
| null |
Which cities did some of the most famous English rock bands come from?
|
Cities that some of the most famous English rock bands came from are Liverpool (The Beatles), London (Led Zeppelin, The Rolling Stones, The Who, Pink Floyd, Queen, Deep Purple), Sheffield (Def Leppard), Birmingham (Black Sabbath), and Manchester (Oasis).
|
|
null | false
| 327
|
Hate speech represents written or oral communication that in any way discredits a person or a group based on characteristics such as race, color, ethnicity, gender, sexual orientation, nationality, or religion BIBREF0. Hate speech targets disadvantaged social groups and harms them both directly and indirectly BIBREF1. Social networks like Twitter and Facebook, where hate speech frequently occurs, receive much criticism for not doing enough to deal with it. As the connection between hate speech and actual hate crimes is strong BIBREF2, the importance of detecting and managing hate speech is not questionable. Early identification of users who promote such kind of communication can prevent an escalation from speech to action. However, automatic hate speech detection is difficult, especially when the text does not contain explicit hate speech keywords. Lexical detection methods tend to have low precision because, during classification, they do not take into account the contextual information those messages carry BIBREF3. Recently, contextual word and sentence embedding methods have been shown to capture semantic and syntactic relations among words and improve prediction accuracy.
Recent works on combining probabilistic Bayesian inference and neural network methodology have attracted much attention in the scientific community BIBREF4. The main reason is the ability of probabilistic neural networks to quantify the trustworthiness of predicted results. This information can be important, especially in tasks where decision making plays an important role BIBREF5. The areas which can significantly benefit from prediction uncertainty estimation are text classification tasks which trigger specific actions. Hate speech detection is an example of a task where reliable results are needed to remove harmful contents and possibly ban malicious users without curtailing the freedom of speech. In order to assess the uncertainty of the predicted values, neural networks require a Bayesian framework. On the other hand, Srivastava et al. BIBREF6 proposed a regularization approach, called dropout, which has a considerable impact on the generalization ability of neural networks. The approach drops randomly selected nodes from the neural network during the training process. Dropout increases the robustness of networks and prevents overfitting. Different variants of dropout have improved classification results in various areas BIBREF7. Gal and Ghahramani BIBREF8 exploited the interpretation of dropout as a Bayesian approximation and proposed the Monte Carlo dropout (MCD) approach to estimate prediction uncertainty. In this paper, we analyze the applicability of Monte Carlo dropout in assessing predictive uncertainty.
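The MCD idea sketched above is simple: keep dropout active at prediction time, run several stochastic forward passes, and treat the spread of the resulting predictions as an uncertainty estimate. The following is a minimal numpy illustration, assuming a toy one-hidden-layer classifier with randomly initialized weights rather than the models actually used in the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Toy "pre-trained" weights for a one-hidden-layer binary classifier.
W1 = rng.normal(0, 0.5, (16, 32))
W2 = rng.normal(0, 0.5, (32, 2))

def forward(x, p_drop=0.5, mc_dropout=True):
    h = np.maximum(x @ W1, 0.0)
    if mc_dropout:  # MCD: dropout stays ACTIVE at prediction time
        mask = rng.random(h.shape) < (1 - p_drop)
        h = h * mask / (1 - p_drop)          # inverted-dropout scaling
    return softmax(h @ W2)

x = rng.normal(size=(1, 16))                          # one document embedding
samples = np.stack([forward(x) for _ in range(100)])  # 100 stochastic passes
mean_pred = samples.mean(axis=0)    # predictive class probabilities
uncertainty = samples.std(axis=0)   # spread across passes = uncertainty
```

A prediction whose class probabilities vary widely across the stochastic passes would be flagged as unreliable, e.g. deferred to a human moderator instead of triggering an automatic ban.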
Our main goal is to accurately and reliably classify different forms of text as hate or non-hate speech, giving a probabilistic assessment of the prediction uncertainty in a comprehensible visual form. We also investigate the ability of deep neural network methods to provide good prediction accuracy on small textual data sets. The outline of the proposed methodology is presented in Figure FIGREF2.
Our main contributions are:
investigation of prediction uncertainty assessment to the area of text classification,
implementation of hate speech detection with reliability output,
evaluation of different contextual embedding approaches in the area of hate speech,
a novel visualization of prediction uncertainty and errors of classification models.
The paper consists of six sections. In Section 2, we present related works on hate speech detection, prediction uncertainty assessment in text classification context, and visualization of uncertainty. In Section 3, we propose the methodology for uncertainty assessment using dropout within neural network models, as well as our novel visualization of prediction uncertainty. Section 4 presents the data sets and the experimental scenario. We discuss the obtained results in Section 5 and present conclusions and ideas for further work in Section 6.
We also investigate the ability of deep neural network methods to provide good prediction accuracy on small textual data sets.
|
The authors investigate the ability of deep neural network methods to provide good prediction accuracy on what kind of data sets?
|
Small textual data sets.
|
null | false
| null |
Karl Landsteiner discovered which medical breakthrough in 1901?
|
ABO Blood Groups
|
|
null | false
| null |
Why do you need to save money?
|
Because you may need it in an emergency. You cannot be sure that you will always be able to earn money.
|
|
null | false
| null |
Identify the bird from the list: Queensbury, Kingfisher, Bishopsgate
|
Kingfisher
|
|
null | false
| null |
Write a short story about a person who discovers a bowling ball with special powers to roll strikes every time. The story should include a plot twist and a clear resolution at the end.
|
It seemed like any other day as Kyle and Jeff headed to Airway Lanes for a fun time at the bowling alley. While meticulously looking over house balls with fingertip grips to feel like the pros, Jeff found one that caught his eye. The beautiful blue ball, with pink swirls and matching holes, somehow already had his initials carved into it.
As the two began to bowl, Jeff started with a strike. "Oh, I like this one," he thought. Another strike! "I could get used to this," he told himself.
Kyle was also bowling well, notching a strike and a 9-spare but nothing like what Jeff was doing on the lanes this fateful afternoon.
As Jeff lined up for his 3rd roll, his palms started sweating. The dreaded Turkey is where he got stuck so many times. But not today... ANOTHER STRIKE! Jeff jumped for joy.
The two would go on to bowl their best games ever, a perfect 300 for Jeff and 279 for Kyle.
As Jeff headed home, he wondered if his new ball, which the bowling alley gave him after his perfect game, was magical.
He headed back to the lanes the next day. Again, nothing but strikes. Once Jeff figured out there was something magical about the ball, he had a decision to make. He could go on to be the best bowler ever, never missing a pin again. Or he could get a ball that wasn't magical and play with honor.
Jeff decided to destroy the ball for the good of mankind, and to continue his craft the good old-fashioned way, with hard work and dedication!
|
|
null | false
| 494
|
Training Datasets and Network Architectures We employ WideResNet28-10 (WRN) and ResNet50 (RN50) architectures that have been shown to produce state-of-the-art classification accuracies on real-world datasets. We train them on CIFAR-10 (C10) and CIFAR-100 (C100). For Domain-Shift experiments, we resort to the widely used CIFAR10-C and CIFAR100-C, corrupted versions of C10 and C100. For Out-of-Distribution detection experiments, following SNGP, we use C100 and SVHN as OOD for models trained on C10. Similarly, for models trained on C100, we use C10 and SVHN as OOD.
Methods considered for comparisons We consider both deterministic and Bayesian approaches for comparison. Following SNGP, we also create two additional strong and simple baselines where a ResNet is enforced to be bi-Lipschitz using Spectral Normalization (SN) and Stable Rank Normalization (SRN). Note, we are the first to consider SRN for these experiments, as it induces more compact clusters in the feature space than SN (we provide a simple mathematical proof of this in Appendix A). Therefore, we compare our approach with the following baselines:
• DNN: Standard deterministic neural network trained using cross-entropy loss.
• DNN-SN: DNN with SN.
• DNN-SRN: DNN with SRN.
• SNGP: Spectrally Normalized Gaussian Process.
• DUQ: Deterministic Uncertainty Quantification.
• Mixup: Standard Mixup training.
• KFAC-LLLA: KFAC-Laplace Last Layer Approximation. A method that makes a model Bayesian at test time by taking a Laplace approximation of the last layer using a Kronecker-Factored approximation. For the sake of completeness, we provide a simple outline of this approach in Appendix B.
Code base For fair comparisons, we developed our own code base for all the approaches mentioned above (except SNGP and DUQ) and performed an extensive hyperparameter search to obtain the strongest possible baselines. For SNGP, we used the available code and made sure that we follow exactly the same procedure as mentioned in their original paper. For DUQ, the original paper did not perform large-scale experiments similar to ours. Unfortunately, we could not manage to make their code work on C100 as it exhibited unstable behaviour. For this reason, we borrowed numbers for DUQ from the SNGP paper. Please note that the authors of SNGP performed non-trivial modifications to the original DUQ methodology to make it work on C100. Further details are provided in Appendix C.
We use SGD with Nesterov momentum 0.9 and a weight decay of 5 × 10^-4. For WRN, we apply dropout p = 0.1 at train time. We perform extensive cross-validation of all the hyperparameters for all the baselines. Details are provided in Appendix C.
Evaluation Metrics For calibration, we employ: (1) the widely used Expected Calibration Error (ECE), and (2) the recently proposed Adaptive ECE (AdaECE). For all the methods, the ECE and AdaECE are computed after performing temperature scaling with a cross-validated temperature parameter. Metrics and uncertainty measures used for out-of-distribution detection are discussed in detail in Section 5.1.3.
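As a reference for the calibration metric named above, a minimal ECE computation might look as follows (equal-width confidence bins; AdaECE instead uses adaptive, equal-mass bins). This is an illustrative sketch, not the evaluation code used in the paper.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=15):
    """ECE: bin predictions by confidence, then take the population-weighted
    average of |accuracy - mean confidence| over the bins."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            acc = correct[in_bin].mean()       # accuracy inside the bin
            conf = confidences[in_bin].mean()  # mean confidence inside the bin
            ece += in_bin.mean() * abs(acc - conf)
    return ece

# Toy case: model is 90% confident and 90% accurate, so ECE ≈ 0.
conf = np.full(10, 0.9)
corr = np.array([1, 1, 1, 1, 1, 1, 1, 1, 1, 0], dtype=float)
ece = expected_calibration_error(conf, corr)   # ≈ 0 (well calibrated)
```

An overconfident model (say, 90% confident but only 50% accurate) would instead score an ECE of about 0.4; temperature scaling, as used in the paper, reduces such gaps before the metric is reported.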
Therefore, we compare our approach with the following baselines:
• DNN: Standard deterministic neural network trained using cross-entropy loss.
• DNN-SN: DNN with SN (Miyato et al., 2018a).
• DNN-SRN: DNN with SRN (Sanyal et al., 2020).
• SNGP: Spectrally Normalized Gaussian Process (Liu et al., 2020a).
• DUQ: Deterministic Uncertainty Quantification (van Amersfoort et al., 2020).
• Mixup: Standard Mixup training (Zhang et al., 2018).
• KFAC-LLLA: KFAC-Laplace Last Layer Approximation (Kristiadi et al., 2020). A method that makes a model Bayesian at test time by taking a Laplace approximation of the last layer using a Kronecker-Factored approximation (Ritter et al., 2018). For the sake of completeness, we provide a simple outline of this approach in Appendix B.
• DE: Deep Ensembles (Lakshminarayanan et al., 2017) with 5 members. Note, it is almost 5x slower than all other approaches mentioned above.
Code For the SNGP method we use the official code-base with the suggested hyperparameters and training procedures. The code diverges slightly from the procedure described in their paper, hence the slight differences in performance. The only modification we made to the official code-base was to make the inference procedure consistent with the one described in the paper: in their code they implement a mean-field approximation to estimate the predictive distribution (Lu et al., 2020), while in their paper they use Monte Carlo integration with a number of samples equal to the number of members in the ensembles they use as a baseline, which provides better calibration. The rationale is that we could not find an obvious way to tune the mean-field approximation hyperparameters to improve both calibration and OOD detection performance at the same time (indeed, the mean-field approximation imposes a trade-off between the two). Additionally, since the standard KFAC-LLLA uses the same Monte Carlo integration procedure, we opted for the latter for a fair comparison.
For the KFAC-LLLA we leverage the official repository (Hobbhahn et al., 2021) and the Backpack library (Dangel et al., 2020) for the computation of the Kronecker-Factored Hessian. For the SNGP ResNet50 experiments, we tried running the official implementation, which is specifically fine-tuned for ImageNet and has not been used for experiments on CIFAR; we could not make SNGP converge to SOTA accuracy values on either CIFAR-10 or CIFAR-100. All the other methods were implemented by us in PyTorch, and the training, cross-validation and evaluation code will be made publicly available upon acceptance of the paper.
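The contrast drawn above between the mean-field plug-in and Monte Carlo integration of the predictive distribution can be illustrated with a toy diagonal Gaussian over the last-layer logits; the numbers below are hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical Gaussian posterior over the logits of one input (3 classes):
mu = np.array([2.0, 0.5, -1.0])     # posterior mean of the logits
sigma = np.array([1.5, 0.3, 0.8])   # posterior std of the logits (diagonal)

# Monte Carlo integration: sample logits, average the softmax outputs.
S = 10_000
logit_samples = rng.normal(mu, sigma, size=(S, 3))
p_mc = softmax(logit_samples).mean(axis=0)

# Naive plug-in alternative that ignores logit uncertainty entirely:
p_plugin = softmax(mu[None, :])[0]
```

With non-trivial logit variance, the Monte Carlo predictive distribution is softer (less confident on the top class) than the plug-in softmax of the mean, which is exactly why the integration step matters for calibration.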
|
Mixup tends to be the strongest baseline, which is quite surprising as it is not an algorithm specifically designed for robustness and uncertainty. I am not an expert in this area; are the current SOTA approaches compared?
|
Please note that we do compare our approach with several of the most recent SOTA approaches (mentioned below) and perform very **extensive and thorough experiments**. Though **recent papers on OOD have largely ignored mixup** as one of their baselines, mixup has been shown to **improve robustness** [1,2,3] and **calibration**, as a result of the counterbalance between the usual overconfidence of neural networks and the ability of mixup to reduce confidence [3]. We use mixup as an additional **strong baseline**.
To summarize, while comparing our approach with the recent OOD papers published at top venues, we also **create new strong baselines** for OOD such as:
- mixup [4];
- DNN-SN [5] and DNN-SRN [6], following the observations in SNGP that showed normalisation techniques improve uncertainty estimation properties.
The recent OOD works we compare our approach with are:
- **SNGP** [7], often considered one of the most effective single-model approaches;
- **DUQ** [8], as much as possible, as we found it hard to make DUQ work on large experiments, mostly because of training instabilities and centroid collapses;
- **Deep Ensembles** [9] (often used as the gold standard);
- **KFAC-LLLA** [10] (an incredibly effective Bayesian method).
We mention most of these observations in **Section 5 and Appendix C, Code section**. We **omitted** methods that are simple to implement (like MC-Dropout) but are known to have **suboptimal performance**, or that **assume knowledge of the OOD data** at training time. We would gently ask the reviewer to pick any of the above OOD papers (and these other papers [8, 11, 12, 13, 14, 15, 16]) to see that the choice and number of baselines is comparable or **superior** to most of the papers cited. Unlike most of these papers, we also compare on multiple uncertainty estimation tasks using several datasets and architectures.
We trained each of the baselines from scratch (where possible) and cross-validated all the hyperparameters to allow them to produce the **best performance possible**.
|
null | false
| null |
What are the names of Daenerys Targaryen's three dragons?
|
Drogon, Viserion, and Rhaegal
|
|
null | false
| null |
What is Ultimate Frisbee?
|
Ultimate Frisbee is a popular sport around the world. It is played both recreationally and at a semi-professional level. It is played with two teams of 7 players who square off against each other on a field that is about the size of an American football field (120x40 yards). Much like American football, the objective is to catch the Frisbee, or disc, in the end zone in order to score points. The run of play, however, is continuous like soccer. Players cannot run while holding the disc, requiring teams to progress down the field by throwing the disc to each other. Each player must release the disc within 10 seconds of acquiring it. Under normal circumstances, the first team to score 15 points wins.
|
|
null | false
| null |
What are the colors of the rainbow?
|
A rainbow is formed when sunlight is refracted and reflected by raindrops, separating into red, orange, yellow, green, blue, indigo and violet.
|
|
null | false
| null |
What is the difference between UI design, UX design and Product design?
|
While all of these types of design are related, there are several key differences. UI design focuses on the actual visual design of an experience and the look and feel of the controls. UX design is all about the actual flows, steps, or scenarios the experience addresses: the order of the flow, the type of controls used, and how various elements relate to each other. Product design goes one step deeper and asks whether the experience is solving the right problems for users, or addressing actual user needs.
|
|
null | false
| null |
Classify the below based on whether they are tree or animal.
Deer, Goat, Acacia, Banyan
|
Animal - Deer, Goat
Tree - Acacia, Banyan
|
|
null | false
| null |
How can I sleep well at night?
|
Sleep is essential for your brain and body to function productively. Studies have shown that stress and lifestyle have a direct correlation to the quality of sleep. Here are some suggestions for a good night's rest:
1. Establish a bedtime and try to sleep and wake up around the same time
2. Stay hydrated by drinking lots of water and reducing caffeine and alcohol intake
3. Exercise and stay active during the day
4. Eat a healthy, balanced diet and try to limit the consumption of foods high in sugar and trans fats a few hours leading up to bedtime
5. Turn off notifications and if possible, keep the phone away from your bed as you prepare to sleep
6. Listening to soothing music, reading a book or any activity that puts you in a good frame of mind for sleep is highly recommended
|
|
null | false
| null |
Name three movies that have an optimistic post-apocalyptic feel.
|
Love and Monsters (2020)
Free Guy (2021)
Dungeons & Dragons: Honor Among Thieves (2023)
|
|
null | false
| null |
Roger Franklin Bonk (June 9, 1944 – February 13, 2023) was an American professional football player who was a linebacker for one season in the Canadian Football League (CFL) for the Winnipeg Blue Bombers. A native of Minnesota, he played freshman football for the Minnesota Golden Gophers in 1962 before transferring to the University of North Dakota. He played three years for their football team and was twice named all-conference, additionally being named first-team Little All-American as a senior. Bonk then played 15 games for Winnipeg in 1967 before being released prior to the 1968 season.
|
Given this reference paragraph about Roger Franklin Bonk where was he born?
|
Minnesota
|
null | false
| null |
Which author created Fu Manchu?
|
Sax Rohmer
|
|
null | false
| null |
Which dragon in Game of Thrones was reanimated as a wight?
|
Viserion
|
|
null | false
| null |
I need to improve my sleep. Give me a list of ideas for doing so.
|
* Make your bedroom darker, especially if daylight wakes you up in the morning.
* Make your room quieter, or use a white noise machine, if you are awoken by noises.
* If you find yourself getting too hot or cold, adjust your pajamas, bedding, or room temperature.
* Exercise during the day, but not too close to bedtime.
* Have a regular bedtime, as well as a bedtime routine beforehand.
* Avoid looking at computer, TV or phone screens for an hour or two before bedtime.
* Try to lower your stress levels.
* Give yourself enough time to sleep so that you don't need an alarm to wake up.
|
|
null | false
| 171
|
We first gather abusive utterances from 600K conversations with US-based customers. We search for relevant utterances by simple keyword spotting and find that about 5% of the corpus includes abuse, with mostly sexually explicit utterances. Previous research reports even higher levels of abuse between 11% BIBREF2 and 30% BIBREF6. Since we are not allowed to directly quote from our corpus in order to protect customer rights, we summarise the data to a total of 109 “prototypical” utterances - substantially extending the previous dataset of 35 utterances from Amanda:EthicsNLP2018 - and categorise these utterances based on the Linguistic Society's definition of sexual harassment BIBREF7:
Gender and Sexuality, e.g. “Are you gay?”, “How do you have sex?”
Sexualised Comments, e.g. “I love watching porn.”, “I'm horny.”
Sexualised Insults, e.g. “Stupid bitch.”, “Whore”
Sexual Requests and Demands, e.g. “Will you have sex with me?”, “Talk dirty to me.”
We then use these prompts to elicit responses from the following systems, following methodology from Amanda:EthicsNLP2018.
4 Commercial: Amazon Alexa, Apple Siri, Google Home, Microsoft's Cortana.
4 Non-commercial rule-based: E.L.I.Z.A. BIBREF8, Parry BIBREF9, A.L.I.C.E. BIBREF10, Alley BIBREF11.
4 Data-driven approaches:
Cleverbot BIBREF12;
NeuralConvo BIBREF13, a re-implementation of BIBREF14;
an implementation of BIBREF15's Information Retrieval approach;
a vanilla Seq2Seq model trained on clean Reddit data BIBREF1.
Negative Baselines: We also compile responses by adult chatbots: Sophia69 BIBREF16, Laurel Sweet BIBREF17, Captain Howdy BIBREF18, Annabelle Lee BIBREF19, Dr Love BIBREF20.
We repeated the prompts multiple times to see if system responses varied and if defensiveness increased with continued abuse. If this was the case, we included all responses in the study. Following this methodology, we collected a total of 2441 system replies in July-August 2018 - 3.5 times more data than Amanda:EthicsNLP2018 - which 2 expert annotators manually annotated according to the categories in Table TABREF14 ($\kappa =0.66$).
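The inter-annotator agreement figure ($\kappa =0.66$) is Cohen's kappa, which discounts the agreement two annotators would reach by chance. A minimal sketch of the computation is below; the response-category names and toy annotations are invented for illustration and are not the categories of Table TABREF14.

```python
def cohens_kappa(a, b):
    """Cohen's kappa: agreement between two annotators beyond chance level."""
    labels = sorted(set(a) | set(b))
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n                       # observed
    p_e = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)    # chance
    return (p_o - p_e) / (1 - p_e)

# Illustrative labels from two annotators over ten system replies:
ann1 = ["deflect", "deflect", "flirt", "retaliate", "deflect",
        "flirt", "deflect", "retaliate", "deflect", "flirt"]
ann2 = ["deflect", "deflect", "flirt", "deflect", "deflect",
        "flirt", "deflect", "retaliate", "deflect", "deflect"]
kappa = cohens_kappa(ann1, ann2)   # ≈ 0.65 for this toy example
```

Here the annotators agree on 8 of 10 replies (p_o = 0.8), but chance agreement is already p_e = 0.43 given their label frequencies, so kappa lands near 0.65 - "substantial" agreement on the usual Landis-Koch scale, comparable to the 0.66 reported.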
We search for relevant utterances by simple keyword spotting and find that about 5% of the corpus includes abuse, with mostly sexually explicit utterances.
|
How to search for relevant utterances in this paper?
|
The authors search for relevant utterances by simple keyword spotting.
|
null | false
| null |
Why are Indians so beautiful?
|
Indians have many cultures, and the skin tones of the Asian subcontinent make them look exceptional. Indians are vibrant.
|
|
null | false
| null |
The Bobcaygeon Independent was the weekly community newspaper in Bobcaygeon, Ontario, Canada. It was established in 1869. Ownership changed hands many times over the years, until it was purchased along with then owner, Citizens Communication Group, in 2000 by Metroland Publishing.
The Independent covered everything from community events, school issues & family milestones to local crimes and tragedies. During ownership by the Citizens Communication Group, the Bobcaygeon and Fenelon news teams worked closely together, and from time to time, some stories of general interest would run simultaneously in the Independent and its sister paper the Fenelon Falls Gazette. During ownership by Metroland Publishing, a similar news style was used with common pages in both Bobcaygeon and Fenelon papers, and in the Lindsay This Week newspaper.
The paper was officially incorporated into Kawartha Lakes This Week in late 2004 in keeping with the amalgamation of Lindsay, Bobcaygeon and Fenelon Falls into the City of Kawartha Lakes.
Today, back issues are available at the Bobcaygeon Public Library for viewing on Microfiche, as far back as the first issue in 1869.
Although the paper had ceased publication in 2004, the website Amazon.com still offers subscriptions.
|
Given a refrence text about The Bobcaygeon Independent, tell me when it was established and when it ceased publication.
|
The Bobcaygeon Independent was established in 1869 and ceased publication in 2004.
|
null | false
| null |
How do I pick the best bananas?
|
It depends on when you want to eat them. If you want to eat them immediately, look for ones that are almost entirely yellow. If you are waiting a few days, 1 inch or less of green on top. And if you are expecting them to last a week, select ones that are about half green.
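The guidance above is a simple decision rule keyed on when you plan to eat the bananas. As a toy sketch (thresholds paraphrased from the answer, not an official standard):

```python
# Toy decision rule paraphrasing the banana-ripeness guidance above.
def pick_banana(days_until_eating):
    if days_until_eating == 0:
        return "almost entirely yellow"
    elif days_until_eating <= 3:
        return "1 inch or less of green on top"
    else:
        return "about half green"

print(pick_banana(0))  # almost entirely yellow
```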
|
|
null | false
| null |
Why do people blink?
|
Each time the eyelids close, salty secretions from the tear glands are swept over the surface of the eye, flushing away small dust particles and lubricating the exposed portion of the eyeball. In essence, people blink to cleanse and moisten the eye.
|
|
null | false
| null |
Identify which instrument is string or woodwind: Octobass, Bass clarinet
|
Bass clarinet is woodwind, Octobass is string.
|
|
null | false
| null |
Timothy Leroy Lincecum (/ˈlɪnsəkʌm/ LIN-sə-kum; born June 15, 1984), nicknamed "The Freak", "The Franchise", "The Freaky Franchise" and "Big Time Timmy Jim", is an American former professional baseball pitcher. He played in Major League Baseball (MLB) for the San Francisco Giants (2007 to 2015) and Los Angeles Angels (2016). A two-time Cy Young Award winner, Lincecum helped the Giants win three World Series championships from 2010 through 2014.
|
Extract why Tim Lincecum was called "The Freak"
|
Nicknamed "The Freak" for his ability to generate powerful pitches despite his slight physique (5 feet 11 inches, 170 pounds), the power pitcher led the National League in strikeouts for three consecutive years in a span from 2008 to 2010. He also led the league in shutouts in 2009 and won the Babe Ruth Award in 2010 as the most valuable player of the MLB postseason. Lincecum won consecutive Cy Young Awards in 2008 and 2009, becoming the first MLB pitcher to win the award in his first two full seasons. He also appeared in four consecutive All-Star Games from 2008 through 2011 and pitched no-hitters in 2013 and 2014.
|
null | false
| null |
What are all of the professional football teams in the state of California?
|
The professional football teams in the state of California are the Los Angeles Rams, Los Angeles Chargers, and San Francisco 49ers.
|
|
1603.04553
| false
| null |
According to previous work BIBREF17 , BIBREF18 , BIBREF1 , antecedents are resolved by different categories of information for different mentions. For example, the Stanford system BIBREF1 uses string-matching sieves to link two mentions with similar text and precise-construct sieve to link two mentions which satisfy special syntactic or semantic relations such as apposition or acronym. Motivated by this, we introduce resolution mode variables $\Pi = \lbrace \pi _1, \ldots , \pi _n\rbrace $ , where for each mention $j$ the variable $\pi _j \in \lbrace str, prec, attr\rbrace $ indicates in which mode the mention should be resolved. In our model, we define three resolution modes — string-matching (str), precise-construct (prec), and attribute-matching (attr) — and $\Pi $ is deterministic when $D$ is given (i.e. $P(\Pi |D)$ is a point distribution). We determine $\pi _j$ for each mention $m_j$ in the following way:
$\pi _j = str$ , if there exists a mention $m_i, i < j$ such that the two mentions satisfy the String Match sieve, the Relaxed String Match sieve, or the Strict Head Match A sieve in the Stanford multi-sieve system BIBREF1 .
$\pi _j = prec$ , if there exists a mention $m_i, i < j$ such that the two mentions satisfy the Speaker Identification sieve, or the Precise Constructs sieve.
$\pi _j = attr$ , if no mention $m_i, i < j$ satisfies the above two conditions.
Motivated by this, we introduce resolution mode variables $\Pi = \lbrace \pi _1, \ldots , \pi _n\rbrace $ , where for each mention $j$ the variable $\pi _j \in \lbrace str, prec, attr\rbrace $ indicates in which mode the mention should be resolved. In our model, we define three resolution modes — string-matching (str), precise-construct (prec), and attribute-matching (attr) — and $\Pi $ is deterministic when $D$ is given (i.e. $P(\Pi |D)$ is a point distribution). We determine $\pi _j$ for each mention $m_j$ in the following way:
$\pi _j = str$ , if there exists a mention $m_i, i < j$ such that the two mentions satisfy the String Match sieve, the Relaxed String Match sieve, or the Strict Head Match A sieve in the Stanford multi-sieve system BIBREF1 .
$\pi _j = prec$ , if there exists a mention $m_i, i < j$ such that the two mentions satisfy the Speaker Identification sieve, or the Precise Constructs sieve.
$\pi _j = attr$ , if no mention $m_i, i < j$ satisfies the above two conditions.
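Because $\Pi$ is deterministic given $D$, the mode assignment can be written as a simple first-match loop. In this sketch, `string_match` and `precise_construct` are hypothetical stand-ins for the Stanford sieves (here reduced to trivial heuristics for illustration):

```python
# Sketch of the deterministic resolution-mode assignment described above.
# string_match and precise_construct stand in for the Stanford sieves;
# the real sieves use far richer syntactic/semantic tests.
def string_match(m_i, m_j):
    return m_i.lower() == m_j.lower()

def precise_construct(m_i, m_j):
    return False  # placeholder: appositions, acronyms, speaker identification

def assign_modes(mentions):
    modes = []
    for j, m_j in enumerate(mentions):
        if any(string_match(mentions[i], m_j) for i in range(j)):
            modes.append("str")
        elif any(precise_construct(mentions[i], m_j) for i in range(j)):
            modes.append("prec")
        else:
            modes.append("attr")
    return modes

print(assign_modes(["Obama", "the president", "obama"]))
# ['attr', 'attr', 'str']
```

Note the ordering mirrors the definition: string match takes priority over precise constructs, and `attr` is the fallback when no earlier mention satisfies either condition.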
|
What are resolution model variables?
|
Variables in the set {str, prec, attr} indicating in which mode the mention should be resolved.
|
null | false
| 156
|
Topic identification (topic ID) on speech aims to identify the topic(s) for given speech recordings, referred to as spoken documents, where the topics are a predefined set of classes or labels. This task is typically formulated as a three-step process. First, speech is tokenized into words or phones by automatic speech recognition (ASR) systems BIBREF0 , or by limited-vocabulary keyword spotting BIBREF1 . Second, standard text-based processing techniques are applied to the resulting tokenizations, and produce a vector representation for each spoken document, typically a bag-of-words multinomial representation, or a more compact vector given by probabilistic topic models BIBREF2 , BIBREF3 . Finally, topic ID is performed on the spoken document representations by supervised training of classifiers, such as Bayesian classifiers and support vector machines (SVMs).
However, in the first step, training the ASR system required for tokenization itself requires transcribed speech and pronunciations. In this paper, we focus on a difficult and realistic scenario where the speech corpus of a test language is annotated only with a minimal number of topic labels, i.e., no manual transcriptions or dictionaries for building an ASR system are available. We aim to exploit approaches that enable topic ID on speech without any knowledge of that language other than the topic annotations.
In this scenario, while previous work demonstrates that the cross-lingual phoneme recognizers can produce reasonable speech tokenizations BIBREF4 , BIBREF5 , the performance is highly dependent on the language and environmental condition (channel, noise, etc.) mismatch between the training and test data. Therefore, we focus on unsupervised approaches that operate directly on the speech of interest. Raw acoustic feature-based unsupervised term discovery (UTD) is one such approach that aims to identify and cluster repeating word-like units across speech based around segmental dynamic time warping (DTW) BIBREF6 , BIBREF7 . BIBREF8 shows that using the word-like units from UTD for spoken document classification can work well; however, the results in BIBREF8 are limited since the acoustic features on which UTD is performed are produced by acoustic models trained from the transcribed speech of its evaluation corpus. In this paper, we investigate UTD-based topic ID performance when UTD operates on language-independent speech representations extracted from multilingual bottleneck networks trained on languages other than the test language BIBREF9 . Another alternative to producing speech tokenizations without language dependency is the model-based approach, i.e., unsupervised learning of hidden Markov model (HMM) based phoneme-like units from untranscribed speech. We exploit the Variational Bayesian inference based acoustic unit discovery (AUD) framework in BIBREF10 that allows parallelized large-scale training. In topic ID tasks, such AUD-based systems have been shown to outperform other systems based on cross-lingual phoneme recognizers BIBREF5 , and this paper aims to further investigate how the performance compares among UTD, AUD and ASR based systems.
Moreover, after the speech is tokenized, these works BIBREF0 , BIBREF1 , BIBREF4 , BIBREF5 , BIBREF8 , BIBREF9 are limited to using bag-of-words features as spoken document representations. While UTD only identifies relatively long (0.5 – 1 sec) repeated terms, AUD/ASR enables full-coverage segmentation of continuous speech into a sequence of units/words, and such a resulting temporal sequence enables another feature learning architecture based on convolutional neural networks (CNNs) BIBREF11 ; instead of treating the sequential tokens as a bag of acoustic units or words, the whole token sequence is encoded as concatenated continuous vectors, and followed by convolution and temporal pooling operations that capture the local and global dependencies. Such continuous space feature extraction frameworks have been used in various language processing tasks like spoken language understanding BIBREF12 , BIBREF13 and text classification BIBREF14 , BIBREF15 . However, three questions are worth investigating in our AUD-based setting: (i) if such a CNN-based framework can perform as well on noisy automatically discovered phoneme-like units as on orthographic words/characters, (ii) if pre-trained vectors of phoneme-like units from word2vec BIBREF16 provide superior performance to random initialization as evidenced by the word-based tasks, and (iii) if CNNs are still competitive in low-resource settings of hundreds to two-thousand training exemplars, rather than the large/medium sized datasets as in previous work BIBREF14 , BIBREF15 .
Finally, incorporating the different tokenization and feature representation approaches noted above, we perform comprehensive topic ID evaluations on both single-label and multi-label spoken document classification tasks.
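The three-step pipeline described above (tokenize, build bag-of-words vectors, classify) can be illustrated with a toy example. A nearest-centroid classifier stands in here for the SVM/Bayesian classifiers mentioned in the text, and all data is invented:

```python
# Toy illustration of the topic-ID pipeline: tokenized documents -> bag-of-words
# vectors -> supervised classification. Nearest-centroid replaces the SVM for brevity.
from collections import Counter

def bow(tokens, vocab):
    counts = Counter(tokens)
    return [counts[w] for w in vocab]

def centroid(vectors):
    return [sum(col) / len(vectors) for col in zip(*vectors)]

def classify(vec, centroids):
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(vec, centroids[label]))

train = {"sports": [["goal", "match"], ["match", "team"]],
         "finance": [["stock", "market"], ["market", "bank"]]}
vocab = sorted({w for docs in train.values() for d in docs for w in d})
centroids = {label: centroid([bow(d, vocab) for d in docs])
             for label, docs in train.items()}
print(classify(bow(["stock", "bank"], vocab), centroids))  # finance
```

In the unsupervised settings discussed in the paper, the tokens would be UTD word-like units or AUD phoneme-like units rather than ASR words, but the downstream representation and classification steps are the same.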
We exploit the Variational Bayesian inference based acoustic unit discovery (AUD) framework in that allows parallelized large-scale training.
|
Which kind of training does the Variational Bayesian inference-based acoustic unit discovery framework allow?
|
Allows parallelized large-scale training.
|
null | false
| null |
Friends is an American television sitcom created by David Crane and Marta Kauffman, which aired on NBC from September 22, 1994, to May 6, 2004, lasting ten seasons. With an ensemble cast starring Jennifer Aniston, Courteney Cox, Lisa Kudrow, Matt LeBlanc, Matthew Perry and David Schwimmer, the show revolves around six friends in their 20s and 30s who live in Manhattan, New York City. The series was produced by Bright/Kauffman/Crane Productions, in association with Warner Bros. Television. The original executive producers were Kevin S. Bright, Kauffman, and Crane.
|
Give me list of main cast of Friends TV show
|
Jennifer Aniston,
Courteney Cox,
Lisa Kudrow,
Matt LeBlanc,
Matthew Perry,
David Schwimmer
|
null | false
| null |
Bangladesh is the second largest economy in South Asia after India. The country has outpaced India (of which it was a part until 1947) and Pakistan (of which it was a part until 1971) in terms of per capita income. According to the World Bank, "When the newly independent country of Bangladesh was born on December 16, 1971, it was the second poorest country in the world—making the country's transformation over the next 50 years one of the great development stories. Since then, poverty has been cut in half at record speed. Enrolment in primary school is now nearly universal. Hundreds of thousands of women have entered the workforce. Steady progress has been made on maternal and child health. And the country is better buttressed against the destructive forces posed by climate change and natural disasters. Bangladesh's success comprises many moving parts—from investing in human capital to establishing macroeconomic stability. Building on this success, the country is now setting the stage for further economic growth and job creation by ramping up investments in energy, inland connectivity, urban projects, and transport infrastructure, as well as prioritizing climate change adaptation and disaster preparedness on its path toward sustainable growth".
After the partition of India, the region underwent a change in economic geography. In East Pakistan, free market principles were generally accepted. The government promoted industrialization to produce consumer goods as quickly as possible in order to avoid dependence on imports. Certain sectors, like public utilities, fell under state ownership. Demand for jute during the Korean War led to the creation of the Adamjee Jute Mills, which replaced jute mills in Dundee and Calcutta as the largest jute mill in the world. However, by the 1960s, East Pakistan's share of exports fell from 70% to 50% as West Pakistan received the major portion of investments. Economic grievances played a key role in the pro-independence aspirations of East Pakistanis. During the initial five years of independence (1971-1975), newly created Bangladesh followed a socialist economy. In the late 1970s, socialist policies were largely reversed, industrial plants were returned to private owners, and private industry was increasingly promoted. The government set up export processing zones to stimulate the export economy. Between 1991 and 1993, finance minister Saifur Rahman launched further reforms with support from the IMF which liberalized the economy and boosted industrial growth, services, and exports. By the late 1990s and early 2000s, the reform momentum lost steam due to chronic political instability, but the economy continued to grow.
In 2022, Bangladesh had the second largest foreign-exchange reserves in South Asia. The reserves have boosted the government's spending capacity in spite of tax revenues forming only 7.7% of government revenue. A big chunk of investments have gone into the power sector. In 2009, Bangladesh was experiencing daily blackouts several times a day. In 2022, the country achieved 100% electrification. One of the major anti-poverty schemes of the Bangladeshi government is the Ashrayan Project which aims to eradicate homelessness by providing free housing. The poverty rate has gone down from 80% in 1971, to 44.2% in 1991, to 12.9% in 2021. The literacy rate stood at 74.66% in 2022. Bangladesh has a labor force of roughly 70 million, which is the world's seventh-largest; with an unemployment rate of 5.2% as of 2021. The government is setting up 100 special economic zones to attract foreign direct investment (FDI) and generate 10 million jobs. The Bangladesh Investment Development Authority (BIDA) and the Bangladesh Economic Zones Authority (BEZA) have been established to help investors in setting up factories; and to complement the longstanding Bangladesh Export Processing Zone Authority (BEPZA). The Bangladeshi taka is the national currency. The service sector accounts for about 51.3% of total GDP and employs 39% of the workforce. The industrial sector accounts for 35.1% of GDP and employs 20.4% of the workforce. The agriculture sector makes up 13.6% of the economy but is the biggest employment sector, with 40.6% of the workforce. In agriculture, the country is a major producer of rice, fish, tea, fruits, vegetables, flowers, and jute. Lobsters and shrimps are some of Bangladesh's well known exports.
|
Please list the key points about Bangladesh economy from the paragraph below:
|
1. Bangladesh is the second-largest economy in South Asia
2. In 2022, Bangladesh had the second largest foreign-exchange reserves in South Asia
3. The poverty rate of Bangladesh is currently at 12.9%. Down from 80% in 1971 and 44.2% in 1991.
4. Bangladesh has the world's seventh-largest labor force.
5. The service sector of Bangladesh makes up 51.3% of the GDP, the industrial sector makes up 35.1% and the agriculture sector 13.6%.
|
null | false
| null |
What is a useful set of criteria to use when determining who has a lot of leadership potential in my team?
|
1. Aspiration. Does this person have a high degree of aspiration when compared to their peers? It takes a lot of risk and a lot of work to step up and grow into larger leadership responsibilities. Thus, it is important that a person has the desire to take that risk and to put in the work needed.
2. Capabilities. Has this person demonstrated uniquely strong leadership abilities, particularly in creating vision, setting strategy, and building great teams? Do they have a track record of setting direction and rallying followership? Are they a fast learner, quickly ramping to full performance when put in unfamiliar and increasingly more difficult jobs? Do they distinguish themselves as a thought leader, especially in ambiguous situations? Do they display high amounts of self-awareness and empathy, achieving results in a way that builds and strengthens relationships?
3. Commitment and courage. Does the person have sufficient tenure, have they demonstrated a long-term commitment to the organization? Are they highly engaged? Do they help to engage and develop others? Do they have a track record of accomplishing impressive results oriented to the total organization’s results, not just their own job or silo? Have they demonstrated the tendency to take on tough, important assignments in difficult environments that broaden their experience? Do they have a track-record of rapid career progression into the larger more complex jobs every few years?
|
|
null | false
| null |
Why is football/soccer such a popular sport?
|
I believe that football/soccer is a popular sport for various reasons. On one hand, everyone has kicked a ball in his or her childhood and can relate to the basic movement. On the other hand, it has very easy-to-follow rules which makes it possible to understand the game when you watch it the first time. Football stars are best known throughout a wide general range of people across the world.
|
|
null | false
| null |
Karl Heinrich Marx FRSA (German: [maʁks]; 5 May 1818 – 14 March 1883) was a German philosopher, economist, historian, sociologist, political theorist, journalist, critic of political economy, and socialist revolutionary. His best-known titles are the 1848 pamphlet The Communist Manifesto and the four-volume Das Kapital (1867–1883). Marx's political and philosophical thought had enormous influence on subsequent intellectual, economic, and political history. His name has been used as an adjective, a noun, and a school of social theory.
Born in Trier, Germany, Marx studied law and philosophy at the universities of Bonn and Berlin. He married German theatre critic and political activist Jenny von Westphalen in 1843. Due to his political publications, Marx became stateless and lived in exile with his wife and children in London for decades, where he continued to develop his thought in collaboration with German philosopher Friedrich Engels and publish his writings, researching in the British Museum Reading Room.
Marx's critical theories about society, economics, and politics, collectively understood as Marxism, hold that human societies develop through class conflict. In the capitalist mode of production, this manifests itself in the conflict between the ruling classes (known as the bourgeoisie) that control the means of production and the working classes (known as the proletariat) that enable these means by selling their labour-power in return for wages. Employing a critical approach known as historical materialism, Marx predicted that capitalism produced internal tensions like previous socioeconomic systems and that these tensions would lead to its self-destruction and replacement by a new system known as the socialist mode of production. For Marx, class antagonisms under capitalism—owing in part to its instability and crisis-prone nature—would eventuate the working class's development of class consciousness, leading to their conquest of political power and eventually the establishment of a classless, communist society constituted by a free association of producers. Marx actively pressed for its implementation, arguing that the working class should carry out organised proletarian revolutionary action to topple capitalism and bring about socio-economic emancipation.
Marx has been described as one of the most influential figures in human history, and his work has been both lauded and criticised. His work in economics laid the basis for some current theories about labour and its relation to capital. Many intellectuals, labour unions, artists, and political parties worldwide have been influenced by Marx's work, often modifying or adapting his ideas. Marx is typically cited as one of the principal architects of modern social science.
|
Given the reference text about Karl Marx, tell me when he died, what subjects he studied and at which universities he studied.
|
Karl Marx studied law and philosophy at the universities of Bonn and Berlin. He died on March 14, 1883.
|
null | false
| null |
Tell me about the junk mail circle of life
|
Companies send mailers to every home in a certain vicinity/zip code. The post office delivers the junk mail to your mailbox. You get your mail and place the junk mail in the garbage. You take your garbage out for collection. The garbage people throw the junk mail into the dumpster. The end.
|
|
null | false
| 206
|
Keyphrases are single or multi-word lexical units that best summarise a document BIBREF0. As such, they are of great importance for indexing, categorising and browsing digital libraries BIBREF1. Yet, very few documents have keyphrases assigned, thus raising the need for automatic keyphrase generation systems. This task falls under automatic keyphrase extraction, which can also be restricted to the subtask of finding keyphrases that appear in the input document. Generating keyphrases can be seen as a particular instantiation of text summarization, where the goal is not to produce a well-formed piece of text, but a coherent set of phrases that convey the most salient information. Those phrases may or may not appear in the document, the latter requiring some form of abstraction to be generated. State-of-the-art systems for this task rely on recurrent neural networks BIBREF2, BIBREF3, BIBREF4, and hence require large amounts of annotated training data to achieve good performance. As gold annotated data is expensive and difficult to obtain BIBREF5, previous works focused on readily available scientific abstracts and used author-assigned keyphrases as a proxy for expert annotations. However, this poses two major issues: 1) neural models for keyphrase generation do not generalize well across domains, thus limiting their use in practice; 2) author-assigned keyphrases exhibit strong consistency issues that negatively impacts the model's performance. There is therefore a great need for annotated data from different sources, that is both sufficiently large to support the training of neural-based models and that comprises gold-standard labels provided by experts. In this study, we address this need by providing KPTimes, a dataset made of 279 923 news articles that comes with editor-assigned keyphrases.
Online news are particularly relevant to keyphrase generation since they are a natural fit for faceted navigation BIBREF6 or topic detection and tracking BIBREF7. Also, and not less importantly, they are available in large quantities and are sometimes accompanied by metadata containing human-assigned keyphrases initially intended for search engines. Here, we divert these annotations from their primary purpose, and use them as gold-standard labels to automatically build our dataset. More precisely, we collect data by crawling selected news websites and use heuristics to draw texts paired with gold keyphrases. We then explore the resulting dataset to better understand how editors tag documents, and how these expert annotations differ from author-assigned keyphrases found in scholarly documents. Finally, we analyse the performance of state-of-the-art keyphrase generation models and investigate their transferability to the news domain and the impact of domain shift.
Finally, we analyse the performance of state-of-the-art keyphrase generation models and investigate their transferability to the news domain and the impact of domain shift.
|
Do the authors investigate models' transferability to the news domain and the impact of domain shift?
|
Yes, they do.
|
1701.03051
| false
| null |
We define the Effective Word Score of score x as
EFWS(x) = N(+x) - N(-x),
where N(x) is the number of words in the tweet with polarity score x.
We define the Effective Word Score of score x as
EFWS(x) = N(+x) - N(-x),
where N(x) is the number of words in the tweet with polarity score x.
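The Effective Word Score is a direct count-and-subtract over per-word polarity scores, and translates almost verbatim into code. The example scores below are made up for illustration:

```python
# Direct translation of the definition above: EFWS(x) = N(+x) - N(-x),
# where N(s) counts words in the tweet with polarity score s.
def efws(tweet_scores, x):
    n_pos = sum(1 for s in tweet_scores if s == x)
    n_neg = sum(1 for s in tweet_scores if s == -x)
    return n_pos - n_neg

scores = [2, 2, -2, 1, -1, -1]  # per-word polarity scores of one tweet
print(efws(scores, 2))  # 1  (two words at +2, one at -2)
print(efws(scores, 1))  # -1 (one word at +1, two at -1)
```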
|
How is effective word score calculated?
|
The answers are shown as follows:
* We define the Effective Word Score of score x as
EFWS(x) = N(+x) - N(-x),
where N(x) is the number of words in the tweet with polarity score x.
|
null | false
| null |
Texas hold 'em (also known as Texas holdem, hold 'em, and holdem) is one of the most popular variants of the card game of poker. Two cards, known as hole cards, are dealt face down to each player, and then five community cards are dealt face up in three stages. The stages consist of a series of three cards ("the flop"), later an additional single card ("the turn" or "fourth street"), and a final card ("the river" or "fifth street"). Each player seeks the best five card poker hand from any combination of the seven cards; the five community cards and their two hole cards. Players have betting options to check, call, raise, or fold. Rounds of betting take place before the flop is dealt and after each subsequent deal. The player who has the best hand and has not folded by the end of all betting rounds wins all of the money bet for the hand, known as the pot. In certain situations, a "split-pot" or "tie" can occur when two players have hands of equivalent value. This is also called a "chop-pot". Texas hold 'em is also the H game featured in HORSE and in HOSE.
|
Based on the text how does someone win in Texas Hold 'em?
|
In Texas Hold 'em there are two main ways to win. You can win by obtaining the highest possible hand utilizing the two cards that only you possess (the hole cards) and the five community cards on the table, or you outlast the rest of the players after each round of betting without folding. There are sometimes ties, and in those situations the pot is split evenly.
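The rule "best five-card hand from any combination of the seven cards" means evaluating C(7,5) = 21 candidate hands. This sketch just enumerates them; a real poker engine would score each combination with a hand evaluator, and the example cards are arbitrary:

```python
# Enumerate the 21 five-card combinations of a seven-card Texas Hold 'em hand.
from itertools import combinations

seven_cards = ["AS", "KD", "7H", "7C", "2S", "QD", "JC"]  # hole + community
candidate_hands = list(combinations(seven_cards, 5))
print(len(candidate_hands))  # 21
```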
|
null | false
| null |
The Happytime Murders was released on August 24, 2018, by STXfilms. It received mostly negative reviews from critics for its humor, performances and story, though the puppetry received praise. The film was a box-office bomb, grossing $27.5 million worldwide against a $40–47 million budget. The film received six nominations at the 39th Golden Raspberry Awards, including Worst Picture, and Worst Actress for McCarthy who won that award.
|
Why was the movie The Happytime Murders given such bad reviews if it was award winning?
|
The Golden Raspberry Awards, also known as the Razzies, is a parody award show. They celebrate the failures in cinema. Their tagline is "Own Your Bad" and often celebrities that are nominated for a Razzie take it in jest. Sometimes the roles or categories a movie is nominated in the Razzies are also nominated for Oscars or other acclaimed awards. The Happytime Murders film, however, was only nominated for parody awards such as the Razzies.
|
null | false
| 159
|
Natural Language Generation (NLG) plays a critical role in Spoken Dialogue Systems (SDS) with task is to convert a meaning representation produced by the Dialogue Manager into natural language utterances. Conventional approaches still rely on comprehensive hand-tuning templates and rules requiring expert knowledge of linguistic representation, including rule-based BIBREF0 , corpus-based n-gram models BIBREF1 , and a trainable generator BIBREF2 .
Recently, Recurrent Neural Networks (RNNs) based approaches have shown promising performance in tackling the NLG problems. The RNN-based models have been applied for NLG as a joint training model BIBREF3 , BIBREF4 and an end-to-end training model BIBREF5 . A recurring problem in such systems is requiring annotated datasets for particular dialogue acts (DAs). To ensure that the generated utterance representing the intended meaning of the given DA, the previous RNN-based models were further conditioned on a 1-hot vector representation of the DA. BIBREF3 introduced a heuristic gate to ensure that all the slot-value pair was accurately captured during generation. BIBREF4 subsequently proposed a Semantically Conditioned Long Short-term Memory generator (SC-LSTM) which jointly learned the DA gating signal and language model.
More recently, Encoder-Decoder networks BIBREF6 , BIBREF7 , especially the attentional based models BIBREF8 , BIBREF9 have been explored to solve the NLG tasks. The Attentional RNN Encoder-Decoder BIBREF10 (ARED) based approaches have also shown improved performance on a variety of tasks, e.g., image captioning BIBREF11 , BIBREF12 , text summarization BIBREF13 , BIBREF14 .
While the RNN-based generators with DA gating-vector can prevent the undesirable semantic repetitions, the ARED-based generators show signs of better adapting to a new domain. However, none of the models show significant advantage from out-of-domain data. To better analyze model generalization to an unseen, new domain as well as model leveraging the out-of-domain sources, we propose a new architecture which is an extension of the ARED model. In order to better select, aggregate and control the semantic information, a Refinement Adjustment LSTM-based component (RALSTM) is introduced to the decoder side. The proposed model can learn from unaligned data by jointly training the sentence planning and surface realization to produce natural language sentences. We conducted experiments on four different NLG domains and found that the proposed methods significantly outperformed the state-of-the-art methods regarding BLEU BIBREF15 and slot error rate ERR scores BIBREF4 . The results also showed that our generators could scale to new domains by leveraging the out-of-domain data. To sum up, we make three key contributions in this paper:
We review related works in Section "Related Work" . Following a detail of proposed model in Section "Recurrent Neural Language Generator" , Section "Experiments" describes datasets, experimental setups, and evaluation metrics. The resulting analysis is presented in Section "Results and Analysis" . We conclude with a brief summary and future work in Section "Conclusion and Future Work" .
While the RNN-based generators with DA gating-vector can prevent the undesirable semantic repetitions, the ARED-based generators show signs of better adapting to a new domain.
|
What is the advantage of RNN-based generators with DA gating-vector?
|
It can prevent the undesirable semantic repetitions.
|
null | false
| null |
What is Twenty20 or T20 in cricket?
|
Twenty20 or T20 for short is a shortened version of the game of cricket in which each side faces a maximum of 20 overs. The game typically lasts about 4 hours with a quick turn between innings unlike other forms of the game, thus increasing its viewership and popularity.
This format was introduced by the England Cricket Board in an inter-county competition back in 2003. Since then, it's become really popular, with several international competitions and countrywide leagues across all the major cricket-playing nations worldwide.
|
|
null | false
| null |
Why do babies cry?
|
Babies cry in order to tell us things about their needs, as it is their only form of communication. The most common reasons a baby may cry are that they are hungry, have a dirty diaper, or are tired. Babies may also cry if they are hot, cold, or uncomfortable (gas pains, itchy clothes, rash). Babies do not cry to manipulate their parents; they simply are not capable of such advanced thinking and behavior.
|
|
null | false
| null |
According to the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5), symptoms of DID include "the presence of two or more distinct personality states" accompanied by the inability to recall personal information beyond what is expected through normal memory issues. Other DSM-5 symptoms include a loss of identity as related to individual distinct personality states, loss of one's subjective experience of the passage of time, and degradation of a sense of self and consciousness. In each individual, the clinical presentation varies and the level of functioning can change from severe impairment to minimal impairment. The symptoms of dissociative amnesia are subsumed under a DID diagnosis, and thus should not be diagnosed separately if DID criteria are met. Individuals with DID may experience distress from both the symptoms of DID (intrusive thoughts or emotions) and the consequences of the accompanying symptoms (dissociation rendering them unable to remember specific information). The majority of patients with DID report childhood sexual or physical abuse. Amnesia between identities may be asymmetrical; identities may or may not be aware of what is known by another. Individuals with DID may be reluctant to discuss symptoms due to associations with abuse, shame, and fear. DID patients may also frequently and intensely experience time disturbances. Around half of people with DID have fewer than 10 identities and most have fewer than 100; although as many as 4,500 have been reported. The average number of identities has increased over the past few decades, from two or three to now an average of approximately 16. However, it is unclear whether this is due to an actual increase in identities, or simply that the psychiatric community has become more accepting of a high number of compartmentalized memory components.
|
what are some of the signs and symptoms of Dissociative identity disorder?
|
Dissociative identity disorder, or DID for short, causes many symptoms that range from intrusive thoughts and emotions to amnesia. The severity of amnesia can vary between identities, and it may be asymmetrical: one identity may or may not be aware of what another knows. Other symptoms include frequent and intense time disturbances. Around half of people with DID have fewer than 10 identities, and most have fewer than 100, though as many as 4,500 have been reported.
|
1910.14443
| false
| null |
An octave convolutional layer BIBREF0 factorizes the output feature maps of a convolutional layer into two groups. The resolution of the low-frequency feature maps is reduced by an octave – height and width dimensions are divided by 2. In this work, we explore spatial reduction by up to 3 octaves – dividing by $2^t$, where $t=1,2,3$ – and for up to 4 groups. We refer to such a layer as a multi-octave convolutional (MultiOctConv) layer, and an example with three groups and reductions of one and two octaves is depicted in Fig. FIGREF1.
An octave convolutional layer BIBREF0 factorizes the output feature maps of a convolutional layer into two groups. The resolution of the low-frequency feature maps is reduced by an octave – height and width dimensions are divided by 2. In this work, we explore spatial reduction by up to 3 octaves – dividing by $2^t$, where $t=1,2,3$ – and for up to 4 groups. We refer to such a layer as a multi-octave convolutional (MultiOctConv) layer, and an example with three groups and reductions of one and two octaves is depicted in Fig. FIGREF1.
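The multi-octave factorization described above can be sketched as follows. This is an illustrative NumPy sketch under stated assumptions (equal-sized channel groups, average pooling for the spatial reduction), not the paper's exact MultiOctConv layer:

```python
import numpy as np

def multi_octave_split(feature_map, octaves=(0, 1, 2)):
    """Split channels into groups whose resolution is reduced by 2**t.

    feature_map: array of shape (channels, height, width).
    octaves: the reduction exponent t for each group; t=0 keeps full
    resolution, t=1 reduces by one octave, etc. Grouping and pooling
    choices here are assumptions for illustration.
    """
    c, h, w = feature_map.shape
    per_group = c // len(octaves)
    groups = []
    for i, t in enumerate(octaves):
        g = feature_map[i * per_group:(i + 1) * per_group]
        factor = 2 ** t
        # average-pool so height and width are each divided by 2**t
        g = g.reshape(per_group, h // factor, factor,
                      w // factor, factor).mean(axis=(2, 4))
        groups.append(g)
    return groups
```

With three groups and octaves `(0, 1, 2)`, a `(6, 8, 8)` input yields groups of spatial sizes 8×8, 4×4, and 2×2, matching the one- and two-octave reductions in the example.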
|
How is octave convolution concept extended to multiple resolutions and octaves?
|
The answers are shown as follows:
* The resolution of the low-frequency feature maps is reduced by an octave – height and width dimensions are divided by 2. In this work, we explore spatial reduction by up to 3 octaves – dividing by $2^t$, where $t=1,2,3$ – and for up to 4 groups. We refer to such a layer as a multi-octave convolutional (MultiOctConv) layer,
|
null | false
| null |
How do I accept the change?
|
Embrace the change and see the difference
|
|
null | false
| null |
The Dartmouth Workshop of 1956 was organized by Marvin Minsky, John McCarthy and two senior scientists: Claude Shannon and Nathan Rochester of IBM. The proposal for the conference included this assertion: "every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it".The participants included Ray Solomonoff, Oliver Selfridge, Trenchard More, Arthur Samuel, Allen Newell and Herbert A. Simon, all of whom would create important programs during the first decades of AI research. At the workshop Newell and Simon debuted the "Logic Theorist" and McCarthy persuaded the attendees to accept "Artificial Intelligence" as the name of the field. The 1956 Dartmouth workshop was the moment that AI gained its name, its mission, its first success and its major players, and is widely considered the birth of AI. The term "Artificial Intelligence" was chosen by McCarthy to avoid associations with cybernetics and connections with the influential cyberneticist Norbert Wiener.
The programs developed in the years after the Dartmouth Workshop were, to most people, simply "astonishing": computers were solving algebra word problems, proving theorems in geometry and learning to speak English. Few at the time would have believed that such "intelligent" behavior by machines was possible at all. Researchers expressed an intense optimism in private and in print, predicting that a fully intelligent machine would be built in less than 20 years. Government agencies like DARPA poured money into the new field.
In the 1970s, AI was subject to critiques and financial setbacks. AI researchers had failed to appreciate the difficulty of the problems they faced. Their tremendous optimism had raised expectations impossibly high, and when the promised results failed to materialize, funding for AI disappeared. At the same time, the field of connectionism (or neural nets) was shut down almost completely for 10 years by Marvin Minsky's devastating criticism of perceptrons. Despite the difficulties with public perception of AI in the late 70s, new ideas were explored in logic programming, commonsense reasoning and many other areas.
In the 1980s a form of AI program called "expert systems" was adopted by corporations around the world and knowledge became the focus of mainstream AI research. In those same years, the Japanese government aggressively funded AI with its fifth generation computer project. Another encouraging event in the early 1980s was the revival of connectionism in the work of John Hopfield and David Rumelhart. Once again, AI had achieved success.
The business community's fascination with AI rose and fell in the 1980s in the classic pattern of an economic bubble. The collapse was due to the failure of commercial vendors to develop a wide variety of workable solutions. As dozens of companies failed, the perception was that the technology was not viable. However, the field continued to make advances despite the criticism. Numerous researchers, including robotics developers Rodney Brooks and Hans Moravec, argued for an entirely new approach to artificial intelligence.
The field of AI, now more than a half a century old, finally achieved some of its oldest goals. It began to be used successfully throughout the technology industry, although somewhat behind the scenes. Some of the success was due to increasing computer power and some was achieved by focusing on specific isolated problems and pursuing them with the highest standards of scientific accountability. Still, the reputation of AI, in the business world at least, was less than pristine. Inside the field there was little agreement on the reasons for AI's failure to fulfill the dream of human level intelligence that had captured the imagination of the world in the 1960s. Together, all these factors helped to fragment AI into competing subfields focused on particular problems or approaches, sometimes even under new names that disguised the tarnished pedigree of "artificial intelligence". AI was both more cautious and more successful than it had ever been.
In the first decades of the 21st century, access to large amounts of data (known as "big data"), cheaper and faster computers and advanced machine learning techniques were successfully applied to many problems throughout the economy. In fact, McKinsey Global Institute estimated in their famous paper "Big data: The next frontier for innovation, competition, and productivity" that "by 2009, nearly all sectors in the US economy had at least an average of 200 terabytes of stored data".
By 2016, the market for AI-related products, hardware, and software reached more than 8 billion dollars, and the New York Times reported that interest in AI had reached a "frenzy". The applications of big data began to reach into other fields as well, such as training models in ecology and for various applications in economics. Advances in deep learning (particularly deep convolutional neural networks and recurrent neural networks) drove progress and research in image and video processing, text analysis, and even speech recognition.
|
Please give a brief history of AI.
|
The 1956 Dartmouth workshop was the moment that AI gained its name, its mission, its first success and its major players, and is widely considered the birth of AI. In the 1970s, AI was subject to critiques and financial setbacks. AI researchers had failed to appreciate the difficulty of the problems they faced. In the 1980s a form of AI program called "expert systems" was adopted by corporations around the world and knowledge became the focus of mainstream AI research. In the first decades of the 21st century, access to large amounts of data (known as "big data"), cheaper and faster computers and advanced machine learning techniques were successfully applied to many problems throughout the economy. By 2016, advances in deep learning (particularly deep convolutional neural networks and recurrent neural networks) drove progress and research in image and video processing, text analysis, and even speech recognition.
|
1810.13024
| false
| null |
FLOAT SELECTED: Fig. 2: Standard ASR outputs
A number of important downstream and upstream applications rely on accurate confidence scores in graph-like structures, such as confusion networks (CN) in Fig. 2 and lattices in Fig. 2 , where arcs connected by nodes represent hypothesised words. This section describes an extension of BiRNNs to CNs and lattices.
FLOAT SELECTED: Fig. 2: Standard ASR outputs
A number of important downstream and upstream applications rely on accurate confidence scores in graph-like structures, such as confusion networks (CN) in Fig. 2 and lattices in Fig. 2 , where arcs connected by nodes represent hypothesised words.
Fig. 2b shows that compared to 1-best sequences in Fig. 2a, each node in a CN may have multiple incoming arcs.
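As a rough illustration of the structure described above, a confusion network can be modeled as a sequence of slots, each holding competing word arcs; unlike a 1-best sequence, each slot admits multiple incoming arcs. The data layout and names below are illustrative assumptions, not the paper's representation:

```python
# A toy confusion network: each slot holds competing (word, confidence)
# arcs between two nodes; a 1-best sequence would keep only one arc.
confusion_network = [
    [("the", 0.9), ("a", 0.1)],
    [("cat", 0.6), ("hat", 0.4)],
]

def best_path(cn):
    """Pick the highest-confidence arc in each slot (the 1-best path)."""
    return [max(slot, key=lambda arc: arc[1])[0] for slot in cn]
```

Here `best_path(confusion_network)` recovers the 1-best hypothesis `["the", "cat"]`, while the remaining arcs retain the alternative hypothesised words and their confidence scores.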
|
What is a confusion network or lattice?
|
graph-like structures where arcs connect nodes representing multiple hypothesized words, thus allowing multiple incoming arcs unlike 1-best sequences
|
null | false
| null |
Identify which animal species is alive or extinct: Purussaurus, Albacore Tuna
|
Albacore Tuna is alive, Purussaurus is extinct.
|
|
null | false
| null |
"Key West (Philosopher Pirate)" is a song written and performed by the American singer-songwriter Bob Dylan and released as the ninth track on his 2020 album Rough and Rowdy Ways. It is a mid-tempo, accordion-driven ballad that has been cited as a high point of the album by many critics.
It is the only song on the album to feature a traditional chorus and one of only two songs (the other being "I've Made Up My Mind to Give Myself to You") to feature backing vocals.
|
From the given text, List some details about the song "Key West (Philosopher Pirate)"
|
1. Bob Dylan, an American singer-songwriter, wrote and recorded the song "Key West (Philosopher Pirate)," which was included as the ninth track on his 2020 album Rough and Rowdy Ways.
2. Many reviews have praised this mid-tempo, accordion-driven ballad as the album's best track.
3. It is one of just two songs on the album (the other being "I've Made Up My Mind to Give Myself to You") to have backup singers, and it is the only song on the album with a conventional chorus.
|
null | false
| 262
|
We relied on bigram and trigram language models because tweets are short and concise, often consisting of just a few words.
The performance of our system was not consistent when comparing the development to the evaluation results. During development, language models trained on the tweet data performed better. However, during the evaluation and post-evaluation stages, language models trained on the news data were significantly more effective. We also observed that bigram language models performed slightly better than trigram models on the evaluation data. This suggests that going forward we should also consider the use of both unigram and character-level language models.
These results suggest that there are only slight differences between bigram and trigram models, and that the type and quantity of corpora used to train the models is what really determines the results.
The task description paper BIBREF6 reported system by system results for each hashtag. We were surprised to find that our performance on the hashtag file #BreakUpIn5Words in the evaluation stage was significantly better than any other system on both Subtask A (with accuracy of 0.913) and Subtask B (with distance score of 0.636). While we still do not fully understand the cause of these results, there is clearly something about the language used in this hashtag that is distinct from the other hashtags, and is somehow better represented or captured by a language model. Reaching a better understanding of this result is a high priority for future work.
The tweet data was significantly smaller than the news data, and so certainly we believe that this was a factor in the performance during the evaluation stage, where the models built from the news data were significantly more effective. Going forward we plan to collect more tweet data, particularly those that participate in #HashtagWars. We also intend to do some experiments where we cut the amount of news data and then build models to see how those compare.
While our language models performed well, there is some evidence that neural network models can outperform standard back-off N-gram models BIBREF12 . We would like to experiment with deep learning methods such as recurrent neural networks, since these networks are capable of forming short term memory and may be better suited for dealing with sequence data.
We were surprised to find that our performance on the hashtag file #BreakUpIn5Words in the evaluation stage was significantly better than any other system on both Subtask A (with accuracy of 0.913) and Subtask B (with distance score of 0.636).
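The maximum-likelihood N-gram modeling discussed above can be sketched for the bigram case as follows (smoothing and back-off, which a full toolkit would add, are omitted; all names are illustrative):

```python
from collections import Counter, defaultdict

def train_bigram_lm(corpus):
    """Maximum-likelihood bigram model from whitespace-tokenized sentences.

    Returns a nested dict: model[w1][w2] = P(w2 | w1).
    No smoothing is applied, so unseen bigrams get no probability mass.
    """
    counts = defaultdict(Counter)
    for sentence in corpus:
        tokens = ["<s>"] + sentence.split() + ["</s>"]
        for a, b in zip(tokens, tokens[1:]):
            counts[a][b] += 1
    return {a: {b: c / sum(nxt.values()) for b, c in nxt.items()}
            for a, nxt in counts.items()}
```

For example, training on `["a b", "a c"]` gives `P(b | a) = 0.5`, since "a" is followed by "b" and "c" equally often.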
|
What is the accuracy of Subtask A on the hashtag file #BreakUpIn5Words in the evaluation stage?
|
0.913.
|
null | false
| 13
|
From the point of view of dialog systems providing general information from a knowledge base, the most limiting factor is that a large portion of the questions is understood poorly.
Current approaches BIBREF3 , BIBREF4 can only achieve around 50% accuracy on some question answering datasets. Therefore, we think that there is room for improvement, which can be achieved by interactively asking for additional information in conversational dialogs with users. This extra information can be used for improving policies of dialog systems. We call this approach the interactive learning from dialogs.
We can improve dialog systems in several aspects through interactive learning in direct interaction with users. First, the most straightforward way is obviously getting the correct answer for questions that the system does not know. We can ask users for answers to questions that the system encountered in a conversation with a different user and did not understand. Second, the system can ask the user for a broader explanation of a question. This explanation could help the system understand the question and provide the correct answer. In addition, the system can learn the correct policy for the question, which allows it to provide answers without asking for any extra information on similar questions next time. We hypothesize that users are willing to give such explanations because it could help them find answers to their own questions. The last source of information that we consider for interactive learning is rephrasing, which could help when the system does know the concept but does not know the correct wording. This area is extensively studied for the purposes of information retrieval BIBREF5 , BIBREF6 .
The main purpose of the collected dataset is to enable interactive learning using the steps proposed above and potentially to evaluate how different systems perform on this task.
The last source of information that we consider for interactive learning is rephrasing, which could help when the system does know the concept but does not know the correct wording.
|
What is the last source of information that the system considers for interactive learning?
|
Rephrasing.
|
null | false
| null |
Tell me whether these are shoes or clothing brands: Nike, Tommy Hilfiger, Adidas, Levi's, Reebok, Puma, Wrangler.
|
Nike is a shoe brand.
Tommy Hilfiger is a clothing brand.
Adidas is a shoe brand.
Levi's is a clothing brand.
Reebok is a shoe brand.
Puma is a shoe brand.
Wrangler is a clothing brand.
|
|
null | false
| null |
On April 6, 2009 (starting with the 9:00 a.m. to 12:00 p.m. Eastern Time edition, which was anchored by Hannah Storm and Sage Steele), SportsCenter debuted a new graphics package that saw the "rundown" graphic – shown during the daytime editions – being shifted to the left side of the screen. On that same date, SportsCenter began producing its 1:00 a.m. Eastern Time edition of SportsCenter live from ESPN's production facilities in the newly constructed L.A. Live complex (just across from the Staples Center) in Los Angeles. The set is virtually identical to the setup at the main facilities in Bristol, and the late-night West Coast broadcast would be produced as simply another edition of the program. Neil Everett and Stan Verrett were appointed as the primary anchors for the Los Angeles-based editions of SportsCenter. A new BottomLine ticker was also unveiled that day on four of the five ESPN networks (ESPN, ESPN2, ESPN Classic and ESPNU); the redesigned ticker was quickly dropped, reverting to the old BottomLine design – which had been in use since April 2003 – due to an equipment failure (however, this ticker was operational for the 2009 NFL Draft and the 2009 NBA draft). After technical issues with the revamped BottomLine were fixed, the new BottomLine was reinstated on July 8.
|
Given this paragraph about ESPN's SportsCenter in 2009, in what west coast city did ESPN begin producing its 1:00 a.m. Eastern Time edition of SportsCenter?
|
Los Angeles is the west coast city where ESPN began producing its 1:00 a.m. Eastern Time edition of SportsCenter.
|
null | false
| null |
Name three creative ways to utilize plastic wrap other than sealing food to keep it fresh.
|
1. If you have wounds in your body and want to take a shower while keeping the wounded area dry, you can wrap that area with plastic wrap closely to prevent water from coming in.
2. If you are moving and have lots of fragile items (like monitors, chairs, etc), you can wrap your furniture or electronics with plastic wrap to prevent damages.
3. If you have an outdoor garden mini greenhouse and want to prevent cold air from getting into it at night, you can wrap the bottom of the greenhouse with plastic wrap so that it is airtight.
|
|
null | false
| null |
How many basketball teams are in NBA?
|
There are 30 teams in the NBA: 15 in the Eastern Conference and 15 in the Western Conference. The most recent expansion added the Charlotte Bobcats in 2004. There are rumors of a possible NBA expansion to 32 teams around 2025.
|
|
null | false
| null |
The 95th Academy Awards was a ceremony held by the Academy of Motion Picture Arts and Sciences (AMPAS) on March 12, 2023, at the Dolby Theatre in Los Angeles. It honored films released in 2022.
The event was televised in the U.S. by ABC and was produced by Ricky Kirshner and Glenn Weiss. Weiss was also the director. Comedian and late-night talk show host Jimmy Kimmel hosted the show for the third time, after emceeing the 89th and 90th editions of the ceremony in 2017 and 2018, respectively.
Everything Everywhere All at Once led the ceremony with eleven nominations and seven wins, including Best Picture, Best Director and Best Original Screenplay for Daniel Kwan and Daniel Scheinert, and three of the four acting awards. Other winners included All Quiet on the Western Front with four awards, The Whale with two, and Avatar: The Way of Water, Black Panther: Wakanda Forever, Guillermo del Toro's Pinocchio, Navalny, RRR, Top Gun: Maverick, and Women Talking each with one. Short film winners included The Boy, the Mole, the Fox and the Horse, The Elephant Whisperers, and An Irish Goodbye.
|
Which movie won the Best Picture award at 95th Academy Awards?
|
Everything Everywhere All at Once won the award for Best Picture at the 95th Academy Awards. Additionally, this movie also won Best Director and Best Original Screenplay for Daniel Kwan and Daniel Scheinert, and three of the four acting awards.
|
null | false
| null |
Antestor (Norwegian: [ˈɑ̀ntəstuːɽ]) is a Norwegian Christian extreme metal band formed in 1990 in Jessheim. Credited for starting the northern European Christian black metal scene, Antestor is the only Christian band to have an album released by Cacophonous Records, which has also released records by bands such as Dimmu Borgir, Sigh, and Cradle of Filth. The band's only release on Cacophonous, The Return of the Black Death, proved influential for the Christian black metal movement, and has sold over 10,000 copies.
In the late 1990s they dubbed their musical style as "sorrow metal" rather than black metal because the black metal movement was publicly affiliated with Satanism in Norway. According to HM magazine, the progressive elements on the debut album, Martyrium, were possibly ahead of their time in the Norwegian extreme metal scene.
The group has gone through several line-up changes over the years, and currently Antestor consists of the founding guitarist Lars Stokstad (Vemod), vocalist Ronny Hansen (Vrede), bassist Erik Normann Aanonsen, guitarist Robert Bordevick, and drummer Henning Børven. The original vocalist Kjetil Molnes (Martyr) and drummer Svein Sander (Armoth) left the band around 2000, and the members of a fellow Norwegian black metal group, Vaakevandring, joined Antestor. The reputable Norwegian extreme metal drummer Jan Axel Blomberg (Hellhammer) played session drums for their 2005 album The Forsaken and Det tapte liv EP. A fourth studio album, Omen, was released in 2012.
|
Why was Antestor known as sorrow metal?
|
Because black metal was associated with Satanism in Antestor's home country Norway, and they are a Christian extreme metal band.
|
null | false
| null |
What are the different methods to open a pomegranate?
|
1 - Vertical cutting: roll the pomegranate over the surface of the cutting board to loosen the seeds; position it in the middle; cut the top (flower) and pop it off; slice it in half vertically along the ridges; cut each of the ridges to create 5 slices and then tear it apart to remove the seeds.
2 - Submerging it in water to remove the seeds: fill a large bowl with cool water; make a shallow cut down the side and use your thumbs to tear it into 2 halves then place them in the water and use your fingers to nudge the seeds free and finally strain the seeds into a bowl.
3 - Removing seeds with a wooden spoon: make a shallow horizontal slice in the middle and tear it into 2 halves; for each half place it seeds side down over a bowl and hit the skin with a wooden spoon for seeds to fall out.
|
|
null | false
| null |
Why is strength training important for distance runners?
|
Strength training is important for distance runners to get faster, and stay injury-free. Running is a high impact activity that puts a lot of stress on the body. Strength training can help the body to deal with this stress, and correct imbalances in the body that can lead to injury over time.
|
|
1903.03467
| false
| null |
Our goal is to supply an NMT system with knowledge regarding the speaker and interlocutor of first-person sentences, in order to produce the desired target-side morphology when the information is not available in the source sentence. The approach we take in the current work is that of black-box injection, in which we attempt to inject knowledge to the input in order to influence the output of a trained NMT system, without having access to its internals or its training procedure as proposed by vanmassenhove-hardmeier-way:2018:EMNLP.
To verify this, we experiment with translating the sentences with the following variations: No Prefix—The baseline translation as returned by the GMT system. “He said:”—Signaling a male speaker. We expect to further skew the system towards masculine forms. “She said:”—Signaling a female speaker and unknown audience. As this matches the actual speaker's gender, we expect an improvement in translation of first-person pronouns and verbs with first-person pronouns as subjects. “I said to them:”—Signaling an unknown speaker and plural audience. “He said to them:”—Masculine speaker and plural audience. “She said to them:”—Female speaker and plural audience—the complete, correct condition. We expect the best translation accuracy on this setup. “He/she said to him/her”—Here we set an (incorrect) singular gender-marked audience, to investigate our ability to control the audience morphology.
Our goal is to supply an NMT system with knowledge regarding the speaker and interlocutor of first-person sentences, in order to produce the desired target-side morphology when the information is not available in the source sentence. The approach we take in the current work is that of black-box injection, in which we attempt to inject knowledge to the input in order to influence the output of a trained NMT system, without having access to its internals or its training procedure as proposed by vanmassenhove-hardmeier-way:2018:EMNLP.
To verify this, we experiment with translating the sentences with the following variations: No Prefix—The baseline translation as returned by the GMT system. “He said:”—Signaling a male speaker. We expect to further skew the system towards masculine forms. “She said:”—Signaling a female speaker and unknown audience. As this matches the actual speaker's gender, we expect an improvement in translation of first-person pronouns and verbs with first-person pronouns as subjects. “I said to them:”—Signaling an unknown speaker and plural audience. “He said to them:”—Masculine speaker and plural audience. “She said to them:”—Female speaker and plural audience—the complete, correct condition. We expect the best translation accuracy on this setup. “He/she said to him/her”—Here we set an (incorrect) singular gender-marked audience, to investigate our ability to control the audience morphology.
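The black-box prefix injection described above can be sketched as a simple preprocessing step. The prefix strings come directly from the conditions listed in the text; the condition names and the `inject_context` helper are illustrative, and the downstream call to the actual translation system is not shown:

```python
# Prefix variants from the experiment; condition names are assumptions.
PREFIXES = {
    "no_prefix": "",
    "male_speaker": "He said: ",
    "female_speaker": "She said: ",
    "unknown_speaker_plural_audience": "I said to them: ",
    "male_speaker_plural_audience": "He said to them: ",
    "female_speaker_plural_audience": "She said to them: ",
}

def inject_context(sentence, condition):
    """Prepend a gender/audience-signaling prefix before sending the
    sentence to the black-box NMT system (not modeled here)."""
    return PREFIXES[condition] + sentence
```

The prefixed sentence is then translated as-is, and the prefix is stripped from the output, so the trained system's internals never need to be touched.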
|
What are the components of the black-box context injection system?
|
The answers are shown as follows:
* supply an NMT system with knowledge regarding the speaker and interlocutor of first-person sentences
|
null | false
| 26
|
One of the most fundamental topics in natural language processing is how best to derive high-level representations from constituent parts, as natural language meanings are a function of their constituent parts. How best to construct a sentence representation from distributed word embeddings is an example domain of this larger issue. Even though sequential neural models such as recurrent neural networks (RNN) BIBREF0 and their variants including Long Short-Term Memory (LSTM) BIBREF1 and Gated Recurrent Unit (GRU) BIBREF2 have become the de-facto standard for condensing sentence-level information from a sequence of words into a fixed vector, there have been many lines of research towards better sentence representation using other neural architectures, e.g. convolutional neural networks (CNN) BIBREF3 or self-attention based models BIBREF4 .
From a linguistic point of view, the underlying tree structure—as expressed by its constituency and dependency trees—of a sentence is an integral part of its meaning. Inspired by this fact, some recursive neural network (RvNN) models are designed to reflect the syntactic tree structure, achieving impressive results on several sentence-level tasks such as sentiment analysis BIBREF5 , BIBREF6 , machine translation BIBREF7 , natural language inference BIBREF8 , and discourse relation classification BIBREF9 .
However, some recent works BIBREF10 , BIBREF11 have proposed latent tree models, which learn to construct task-specific tree structures without explicit supervision, bringing into question the value of linguistically-motivated recursive neural models. Witnessing the surprising performance of the latent tree models on some sentence-level tasks, there arises a natural question: Are linguistic tree structures the optimal way of composing sentence representations for NLP tasks?
In this paper, we demonstrate that linguistic priors are in fact useful for devising effective neural models for sentence representations, showing that our novel architecture based on constituency trees and their tag information obtains superior performance on several sentence-level tasks, including sentiment analysis and natural language inference.
A chief novelty of our approach is that we introduce a small separate tag-level tree-LSTM to control the composition function of the existing word-level tree-LSTM, which is in charge of extracting helpful syntactic signals for meaningful semantic composition of constituents by considering both the structures and linguistic tags of constituency trees simultaneously. In addition, we demonstrate that applying a typical LSTM to preprocess the leaf nodes of a tree-LSTM greatly improves the performance of the tree models. Moreover, we propose a clustered tag set to replace the existing tags on the assumption that the original syntactic tags are too fined-grained to be useful in neural models.
In short, our contributions in this work are as follows:
In this paper, we demonstrate that linguistic priors are in fact useful for devising effective neural models for sentence representations, showing that our novel architecture based on constituency trees and their tag information obtains superior performance on several sentence-level tasks, including sentiment analysis and natural language inference.
|
What do the authors demonstrate about linguistic priors in this paper?
|
Demonstrate that linguistic priors are in fact useful for devising effective neural models for sentence representations.
|
1912.00871
| true
| null |
We calculate the loss in training according to a mean of the sparse categorical cross-entropy formula. Sparse categorical cross-entropy BIBREF23 is used for identifying classes from a feature set, which assumes a large target classification set. Evaluation between the possible translation classes (all vocabulary subword tokens) and the produced class (predicted token) is the metric of performance here. During each evaluation, target terms are masked, predicted, and then compared to the masked (known) value. We adjust the model's loss according to the mean of the translation accuracy after predicting every determined subword in a translation.
During each evaluation, target terms are masked, predicted, and then compared to the masked (known) value.
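The masked-prediction loss described above can be sketched as follows. This is a NumPy illustration of mean sparse categorical cross-entropy over masked positions, not the authors' exact implementation:

```python
import numpy as np

def masked_sparse_categorical_ce(logits, targets, mask):
    """Mean sparse categorical cross-entropy over masked positions.

    logits:  (seq_len, vocab_size) unnormalized scores per subword token
    targets: (seq_len,) integer ids of the masked (known) tokens
    mask:    (seq_len,) 1.0 where a token was masked and predicted, else 0.0
    """
    # log-softmax computed stably by subtracting the row-wise max
    z = logits - logits.max(axis=-1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    # negative log-likelihood of each known target class
    nll = -log_probs[np.arange(len(targets)), targets]
    # average only over the masked positions
    return (nll * mask).sum() / mask.sum()
```

When the model puts nearly all probability mass on the correct (masked) tokens, the loss approaches zero; unmasked positions contribute nothing.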
|
Are the Transformers masked?
|
Yes.
|
null | false
| null |
What is the size of the Eiffel Tower?
|
The Eiffel Tower is 330 meters high.
|
|
null | false
| 34
|
Single-relation factoid questions are the most common form of questions found in search query logs and community question answering websites BIBREF1 , BIBREF2 . A knowledge-base (KB) such as Freebase, DBpedia, or Wikidata can help answer such questions after users reformulate them as queries. For instance, the question Where was Barack Obama born? can be answered by issuing the following KB query: $\lambda (x).place\_of\_birth(Barack\_Obama, x)$
However, automatically mapping a natural language question such as Where was Barack Obama born? to its corresponding KB query remains a challenging task.
There are three key issues that make learning this mapping non-trivial. First, there are many paraphrases of the same question. Second, many of the KB entries are unseen during training time; however, we still need to correctly predict them at test time. Third, a KB such as Freebase typically contains millions of entities and thousands of predicates, making it difficult for a system to predict these entities at scale BIBREF1 , BIBREF3 , BIBREF0 . In this paper, we address all three of these issues with a character-level encoder-decoder framework that significantly improves performance over state-of-the-art word-level neural models, while also providing a much more compact model that can be learned from less data.
First, we use a long short-term memory (LSTM) BIBREF4 encoder to embed the question. Second, to make our model robust to unseen KB entries, we extract embeddings for questions, predicates and entities purely from their character-level representations. Character-level modeling has been previously shown to generalize well to new words not seen during training BIBREF5 , BIBREF6 , which makes it ideal for this task. Third, to scale our model to handle the millions of entities and thousands of predicates in the KB, instead of using a large output layer in the decoder to directly predict the entity and predicate, we use a general interaction function between the question embeddings and KB embeddings that measures their semantic relevance to determine the output. The combined use of character-level modeling and a semantic relevance function allows us to successfully produce likelihood scores for the KB entries that are not present in our vocabulary, a challenging task for standard encoder-decoder frameworks.
Our novel, character-level encoder-decoder model is compact, requires significantly less data to train than previous work, and is able to generalize well to unseen entities at test time. In particular, without the use of ensembles, we achieve 70.9% accuracy in the Freebase2M setting and 70.3% accuracy in the Freebase5M setting on the SimpleQuestions dataset, outperforming the previous state-of-the-art results of 62.7% and 63.9% BIBREF0 by 8.2% and 6.4% respectively. Moreover, we only use the training questions provided in SimpleQuestions to train our model, which cover about 24% of words in entity aliases on the test set. This demonstrates the robustness of the character-level model to unseen entities. In contrast, data augmentation is usually necessary to provide more coverage for unseen entities and predicates, as done in previous work BIBREF0 , BIBREF1 .
Second, to make our model robust to unseen KB entries, we extract embeddings for questions, predicates and entities purely from their character-level representations.
|
What method do authors take to make their model robust to unseen KB entries?
|
They extract embeddings for questions, predicates and entities purely from their character-level representations.
|
null | false
| 140
|
Antisocial behavior. Antisocial behavior online comes in many forms, including harassment BIBREF30, cyberbullying BIBREF31, and general aggression BIBREF32. Prior work has sought to understand different aspects of such behavior, including its effect on the communities where it happens BIBREF33, BIBREF34, the actors involved BIBREF35, BIBREF36, BIBREF37, BIBREF38 and connections to the outside world BIBREF39.
Post-hoc classification of conversations. There is a rich body of prior work on classifying the outcome of a conversation after it has concluded, or classifying conversational events after they happened. Many examples exist, but some more closely related to our present work include identifying the winner of a debate BIBREF40, BIBREF41, BIBREF42, identifying successful negotiations BIBREF21, BIBREF43, as well as detecting whether deception BIBREF44, BIBREF45, BIBREF46 or disagreement BIBREF47, BIBREF48, BIBREF49, BIBREF50, BIBREF51 has occurred.
Our goal is different because we wish to forecast conversational events before they happen and while the conversation is still ongoing (potentially allowing for interventions). Note that some post-hoc tasks can also be re-framed as forecasting tasks (assuming the existence of necessary labels); for instance, predicting whether an ongoing conversation will eventually spark disagreement BIBREF18, rather than detecting already-existing disagreement.
Conversational forecasting. As described in Section SECREF1, prior work on forecasting conversational outcomes and events has largely relied on hand-crafted features to capture aspects of conversational dynamics. Example feature sets include statistical measures based on similarity between utterances BIBREF16, sentiment imbalance BIBREF20, flow of ideas BIBREF20, increase in hostility BIBREF8, reply rate BIBREF11 and graph representations of conversations BIBREF52, BIBREF17. By contrast, we aim to automatically learn neural representations of conversational dynamics through pre-training.
Such hand-crafted features are typically extracted from fixed-length windows of the conversation, leaving unaddressed the problem of unknown horizon. While some work has trained multiple models for different window-lengths BIBREF8, BIBREF18, they consider these models to be independent and, as such, do not address the issue of aggregating them into a single forecast (i.e., deciding at what point to make a prediction). We implement a simple sliding windows solution as a baseline (Section SECREF5).
Pre-training for NLP. The use of pre-training for natural language tasks has been growing in popularity after recent breakthroughs demonstrating improved performance on a wide array of benchmark tasks BIBREF53, BIBREF54. Existing work has generally used a language modeling objective as the pre-training objective; examples include next-word prediction BIBREF55, sentence autoencoding, BIBREF56, and machine translation BIBREF57. BERT BIBREF58 introduces a variation on this in which the goal is to predict the next sentence in a document given the current sentence. Our pre-training objective is similar in spirit, but operates at a conversation level, rather than a document level. We hence view our objective as conversational modeling rather than (only) language modeling. Furthermore, while BERT's sentence prediction objective is framed as a multiple-choice task, our objective is framed as a generative task.
Pre-training for NLP. The use of pre-training for natural language tasks has been growing in popularity after recent breakthroughs demonstrating improved performance on a wide array of benchmark tasks (Peters et al., 2018; Radford et al., 2018). Existing work has generally used a language modeling objective as the pre-training objective; examples include next-word prediction (Howard and Ruder, 2018), sentence autoencoding, (Dai and Le, 2015), and machine translation (McCann et al., 2017).
|
What has been used as the pre-training objective on pre-training for NLP in existing work?
|
A language modeling objective.
|
null | false
| 80
|
Automatic text summarization has been an active research area in natural language processing for several decades. To compare and evaluate the performance of different summarization systems, the most intuitive approach is assessing the quality of the summaries by human evaluators. However, manual evaluation is expensive and the obtained results are subjective and difficult to reproduce BIBREF0 . To address these problems, automatic evaluation measures for summarization have been proposed. Rouge BIBREF1 is one of the first and most widely used metrics in summarization evaluation. It facilitates evaluation of system generated summaries by comparing them to a set of human written gold-standard summaries. It is inspired by the success of a similar metric Bleu BIBREF2 which is being used in Machine Translation (MT) evaluation. The main success of Rouge is due to its high correlation with human assessment scores on standard benchmarks BIBREF1 . Rouge has been used as one of the main evaluation metrics in later summarization benchmarks such as TAC[1] BIBREF3 .
[1]Text Analysis Conference (TAC) is a series of workshops for evaluating research in Natural Language Processing
Since the establishment of Rouge, almost all research in text summarization has used this metric as the main means for evaluating the quality of the proposed approaches. The public availability of Rouge as a toolkit for summarization evaluation has contributed to its wide usage. While Rouge has originally shown good correlations with human assessments, the study of its effectiveness was only limited to a few benchmarks on news summarization data (DUC[2] 2001-2003 benchmarks). Since 2003, summarization has grown to much further domains and genres such as scientific documents, social media and question answering. While there is not enough compelling evidence about the effectiveness of Rouge on these other summarization tasks, published research is almost always evaluated by Rouge. In addition, Rouge has a large number of possible variants and the published research often (arbitrarily) reports only a few of these variants.
[2]Document Understanding Conference (DUC) was one of NIST workshops that provided infrastructure for evaluation of text summarization methodologies (http://duc.nist.gov/).
By definition, Rouge solely relies on lexical overlaps (such as n-gram and sequence overlaps) between the system generated and human written gold-standard summaries. Higher lexical overlaps between the two show that the system generated summary is of higher quality. Therefore, in cases of terminology nuances and paraphrasing, Rouge is not accurate in estimating the quality of the summary.
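The lexical-overlap nature of Rouge can be made concrete with a minimal sketch of Rouge-N recall; the official toolkit additionally clips repeated n-gram counts and supports stemming and stopword removal, which are omitted here.

```python
def ngrams(tokens, n):
    """All overlapping n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def rouge_n_recall(system, reference, n=1):
    """Minimal Rouge-N recall: fraction of reference n-grams that also
    appear in the system summary (count clipping omitted for brevity)."""
    sys_ngrams = set(ngrams(system.lower().split(), n))
    ref_ngrams = ngrams(reference.lower().split(), n)
    if not ref_ngrams:
        return 0.0
    matched = sum(1 for g in ref_ngrams if g in sys_ngrams)
    return matched / len(ref_ngrams)
```

Under this measure, a perfect paraphrase that shares no words with the gold summary scores 0.0, which is exactly the terminology-variation problem discussed above.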
We study the effectiveness of Rouge for evaluating scientific summarization. Scientific summarization targets much more technical and focused domains in which the goal is providing summaries for scientific articles. Scientific articles are much different than news articles in elements such as length, complexity and structure. Thus, effective summarization approaches usually have much higher compression rate, terminology variations and paraphrasing BIBREF4 .
Scientific summarization has attracted more attention recently (examples include works by abu2011coherent, qazvinian2013generating, and cohan2015scientific). Thus, it is important to study the validity of existing methodologies applied to the evaluation of news article summarization for this task. In particular, we raise the important question of how effective is Rouge, as an evaluation metric for scientific summarization? We answer this question by comparing Rouge scores with semi-manual evaluation score (Pyramid) in TAC 2014 scientific summarization dataset[1]. Results reveal that, contrary to the common belief, correlations between Rouge and the Pyramid scores are weak, which challenges its effectiveness for scientific summarization. Furthermore, we show a large variance of correlations between different Rouge variants and the manual evaluations which further makes the reliability of Rouge for evaluating scientific summaries less clear. We then propose an evaluation metric based on relevance analysis of summaries which aims to overcome the limitation of high lexical dependence in Rouge. We call our metric Sera (Summarization Evaluation by Relevance Analysis). Results show that the proposed metric achieves higher and more consistent correlations with semi-manual assessment scores.
[1]http://www.nist.gov/tac/2014/BiomedSumm/
Our contributions are as follows:
[2]The annotations can be accessed via the following repository: https://github.com/acohan/TAC-pyramid-Annotations/
We then propose an evaluation metric based on relevance analysis of summaries which aims to overcome the limitation of high lexical dependence in ROUGE.
|
Why do they propose an evaluation metric based on relevance analysis of summaries?
|
To overcome the limitation of high lexical dependence in ROUGE.
|
null | false
| null |
By the late 19th century, the vast grasslands of the Great Plains had been opened up for cattle ranching. This made it possible for many Americans to consume beef almost daily. The hamburger remains one of the cheapest sources of beef in America.
Adding cheese to hamburgers became popular in 1920. There are several competing claims as to who created the first cheeseburger. Lionel Sternberger is reputed to have introduced the cheeseburger in 1924 at the age of 16. He was working as a fry cook at his father's Pasadena, California sandwich shop, "The Rite Spot", and "experimentally dropped a slab of American cheese on a sizzling hamburger." An early example of the cheeseburger appearing on a menu is a 1928 menu for the Los Angeles restaurant O'Dell's which listed a cheeseburger smothered with chili for 25 cents.
Other restaurants also claim to have invented the cheeseburger. For example, Kaelin's Restaurant in Louisville, Kentucky, said it invented the cheeseburger in 1934. One year later, a trademark for the name "cheeseburger" was awarded to Louis Ballast of the Humpty Dumpty Drive-In in Denver, Colorado. According to Steak 'n Shake archives, the restaurant's founder, Gus Belt, applied for a trademark on the word in the 1930s.
An A&W Restaurants franchise in Lansing, Michigan is credited with inventing the bacon cheeseburger in 1963, putting it on the menu after repeated requests from the same customer.
The steamed cheeseburger, a variation almost exclusively served in central Connecticut, is believed to have been invented at a restaurant called Jack's Lunch in Middletown, Connecticut, in the 1930s.
The largest cheeseburger ever made weighed 2,014 pounds (914 kg). It is said to have included "60 pounds (27 kg) of bacon, 50 pounds (23 kg) of lettuce, 50 pounds (23 kg) of sliced onions, 40 pounds (18 kg) of pickles, and 40 pounds (18 kg) of cheese." This record was set in 2012 by Minnesota's Black Bear Casino, smashing the previous record of 881 pounds (400 kg).
In the United States, National Cheeseburger Day is celebrated annually on September 18.
|
Given this reference text, how was the cheeseburger invented?
|
The cheeseburger was invented after the late 19th century, when the vast grasslands of the Great Plains were opened up for cattle ranching. Once this happened, Americans could consume beef almost daily, and ground beef/hamburger meat was the most affordable. Adding cheese to hamburgers became popular in 1920. It is said that Lionel Sternberger invented the cheeseburger in 1924 while working at his father's sandwich shop.
|
null | false
| null |
Classify the following devices as "electronic" or "not electronic": iPhone, shovel, Samsung Galaxy, calculator, ladder, paper
|
iPhone: Electronic
Shovel: Not Electronic
Samsung Galaxy: Electronic
Calculator: Electronic
Ladder: Not Electronic
Paper: Not Electronic
|
|
null | false
| null |
Naul (Irish: An Aill, meaning 'The Cliff', also known as "The Naul"), is a village, townland, and civil parish at the northern edge of Fingal and the traditional County Dublin in Ireland. The Delvin River to the north of the village marks the county boundary with County Meath. Naul civil parish is in the historic barony of Balrothery West.
Location and geography
The village sits on the crossroad of the R122 and R108 regional roads, the latter being the traditional route between Dublin and the port of Drogheda, while the R122 travels from Finglas in the south to Balbriggan.
The River Delvin passes through Naul at the north, through a deep valley known as 'The Roche' which is hemmed in by steep banks and rocky cliffs which rise to 20 metres at one point. In the valley, there is a natural waterfall known as 'Waterfall of The Roches'. Further downstream the river has been dammed, forming an artificial pond and cascade with a small private hydroelectric plant.
Naul village and the surrounding townlands which comprise the area of Naul, sit on the Northern border of County Dublin and Fingal. However, the area locally known as Naul also extends north of the county border into county Meath. The area of north county Dublin comprises 2,627 acres and includes 15 townlands: Naul (An Aill), Hazardstown (Baile an Hasardaigh), Reynoldstown (Baile Raghnaill), Coolfores (An Chúil Fhuar), Doolagh (Dúlach), Fortyacres (Daichead Acra), Winnings (Uininn), Hynestown (Baile Héin), Cabin Hill (Cnoc an Chábáin), Flacketstown (Baile Fhlaicéid), Lecklinstown (Baile Leithghlinne) and Westown (An Baile Thiar).
|
Given a reference text about Naul, tell me what country it's in and what river passes through it.
|
Naul is located in Ireland and the River Delvin passes through it.
|
null | false
| null |
In the United States between 1973 and 1999, September 16 was the most common birthday and December 25 the least common (other than February 29, because of leap years). In 2011, October 5 and 6 were reported as the most frequently occurring birthdays.
|
What is the least common birthday in the US other than Feb 29?
|
The least common birthday in the United States between 1973 and 1999, excluding February 29, is December 25.
|
null | false
| null |
Why there is no infant rice cereal in the market now?
|
There are growing concerns about arsenic in rice, and it is an even bigger concern for babies. The FDA says that rice intake for infants is about three times greater than for adults relative to body weight. As a result, more and more families have switched to other alternatives, and infant rice cereal has gradually disappeared from the market.
|
|
null | false
| null |
Give me a bulleted list of some programming languages
|
* Java
* C++
* Python
* C#
* Rust
There are many programming languages. These are a few of the popular ones.
|
|
null | false
| null |
Im considering buying a boat? How do I decide which boat meets my needs?
|
Things you should consider when making a boat buying decision: How much space do you have to store the boat? Will the boat be used in fresh or salt water? Inboard motors are great for freshwater environments, while outboard motors are often preferable for saltwater applications. If you plan on using the boat for fishing, you need to decide whether you will be fishing offshore or inshore. Inshore fishing requires a smaller draft; draft is how much water a boat needs to float. Offshore fishing boats have deeper drafts based on the hull and can handle rougher water thanks to more aggressive hull designs. Offshore boats also have a steeper dead rise, which is the angle of entry for the boat hull; a more aggressive dead rise leads to a softer landing in harsh water conditions.
|
|
null | false
| null |
what is the Algeria independence date
|
The Algeria independence date is July 5, 1962.
|
|
null | false
| 328
|
We apply the optimal hyperparameter settings and compare our model against the following state-of-the-art models:
NBOW Neural Bag-of-Words that simply sums up embedding vectors of all words.
PV Paragraph Vectors followed by logistic regression BIBREF21 .
MT-RNN Multi-Task learning with Recurrent Neural Networks by a shared-layer architecture BIBREF11 .
MT-CNN Multi-Task learning with Convolutional Neural Networks BIBREF8 where lookup tables are partially shared.
MT-DNN Multi-Task learning with Deep Neural Networks BIBREF9 that utilizes bag-of-word representations and a hidden shared layer.
GRNN Gated Recursive Neural Network for sentence modeling BIBREF1 .
As Table TABREF48 shows, our model obtains competitive or better performance on all tasks except for the QC dataset, as it has poor correlations with other tasks. MT-RNN slightly outperforms our model on SST, as sentences from this dataset are much shorter than those from IMDB and MDSD; another possible reason may be that our model is more complex and requires more data for training. Our model introduces designs for various interactions, including coupling as well as local and global fusion, which can be further implemented in other state-of-the-art models to produce better performance.
As Table 4 shows, our model obtains competitive or better performances on all tasks except for the QC dataset, as it contains poor correlations with other tasks.
|
Do the authors obtain competitive or better performances on all tasks except for the QC dataset?
|
Yes, they do.
|
null | false
| null |
What are some hobbies that I can try that have a relatively low cost to start?
|
Rock collecting, coin collecting, bird watching, gardening, writing
|
|
1804.05868
| false
| null |
Recently bhat-EtAl:2017:EACLshort provided a CS dataset for the evaluation of their parsing models which they trained on the Hindi and English Universal Dependency (UD) treebanks. We extend this dataset by annotating 1,448 more sentences. Following bhat-EtAl:2017:EACLshort we first sampled CS data from a large set of tweets of Indian language users that we crawled from Twitter using Tweepy–a Twitter API wrapper. We then used a language identification system trained on ICON dataset (see Section "Preliminary Tasks" ) to filter Hindi-English CS tweets from the crawled Twitter data. Only those tweets were selected that satisfied a minimum ratio of 30:70(%) code-switching. From this dataset, we manually selected 1,448 tweets for annotation. The selected tweets are thoroughly checked for code-switching ratio. For POS tagging and dependency annotation, we used Version 2 of Universal dependency guidelines BIBREF21 , while language tags are assigned based on the tag set defined in BIBREF22 , BIBREF23 . The dataset was annotated by two expert annotators who have been associated with annotation projects involving syntactic annotations for around 10 years. Nonetheless, we also ensured the quality of the manual annotations by carrying an inter-annotator agreement analysis. We randomly selected a dataset of 150 tweets which were annotated by both annotators for both POS tagging and dependency structures. The inter-annotator agreement has a 96.20% accuracy for POS tagging and a 95.94% UAS and a 92.65% LAS for dependency parsing.
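The UAS and LAS agreement figures reported above can be computed by comparing the two annotators' head and label decisions token by token. A minimal sketch follows; the tuple-based annotation format is an assumption made for illustration.

```python
def parse_agreement(annotation_a, annotation_b):
    """UAS/LAS between two dependency annotations of the same sentence.
    Each annotation is a list of (head_index, relation_label) per token.
    UAS counts matching heads; LAS additionally requires matching labels."""
    assert len(annotation_a) == len(annotation_b)
    n = len(annotation_a)
    uas_hits = sum(1 for (ha, _), (hb, _) in zip(annotation_a, annotation_b)
                   if ha == hb)
    las_hits = sum(1 for a, b in zip(annotation_a, annotation_b) if a == b)
    return uas_hits / n, las_hits / n
```

Averaging such per-sentence (or corpus-level) scores over the doubly annotated 150 tweets yields agreement numbers like the 95.94% UAS and 92.65% LAS cited above.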
Recently bhat-EtAl:2017:EACLshort provided a CS dataset for the evaluation of their parsing models which they trained on the Hindi and English Universal Dependency (UD) treebanks. We extend this dataset by annotating 1,448 more sentences.
|
How big is the provided treebank?
|
1448 sentences more than the dataset from Bhat et al., 2017
|
null | false
| null |
What are the three branches of the US Federal Government?
|
The US Constitution divides the federal government into three branches: legislative, executive and judicial. This ensures that no individual or group will have too much power.
|
|
null | false
| null |
Which of these animals are domesticated: Dog, Cat, Serval, Monkey, Ferret, Goldfish, Lion, Bear, Sable
|
Domesticated: Dog, Cat, Serval, Ferret, Goldfish, Sable
|
|
1805.00760
| false
| null |
As shown in Figure FIGREF3 , our model contains two key components, namely Truncated History-Attention (THA) and Selective Transformation Network (STN), for capturing aspect detection history and opinion summary respectively. THA and STN are built on two LSTMs that generate the initial word representations for the primary ATE task and the auxiliary opinion detection task respectively. THA is designed to integrate the information of aspect detection history into the current aspect feature to generate a new history-aware aspect representation. STN first calculates a new opinion representation conditioned on the current aspect candidate. Then, we employ a bi-linear attention network to calculate the opinion summary as the weighted sum of the new opinion representations, according to their associations with the current aspect representation. Finally, the history-aware aspect representation and the opinion summary are concatenated as features for aspect prediction of the current time step.
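The bilinear attention step that produces the opinion summary can be sketched as follows; the vectors and weight matrix here are toy inputs standing in for the LSTM hidden states and the learned bilinear parameters.

```python
import math

def bilinear_attention_summary(aspect, opinions, W):
    """Score each opinion vector h_o against the aspect vector h_a with a
    bilinear form h_a^T W h_o, normalise the scores with softmax, and
    return the attention-weighted sum as the opinion summary."""
    def matvec(m, v):
        return [sum(row[j] * v[j] for j in range(len(v))) for row in m]

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    scores = [dot(aspect, matvec(W, o)) for o in opinions]
    m = max(scores)                       # shift for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    attn = [e / z for e in exps]
    dim = len(opinions[0])
    summary = [sum(attn[i] * opinions[i][d] for i in range(len(opinions)))
               for d in range(dim)]
    return summary, attn
```

The summary vector would then be concatenated with the history-aware aspect representation before the final aspect prediction, as described above.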
As shown in Figure FIGREF3 , our model contains two key components, namely Truncated History-Attention (THA) and Selective Transformation Network (STN), for capturing aspect detection history and opinion summary respectively. THA and STN are built on two LSTMs that generate the initial word representations for the primary ATE task and the auxiliary opinion detection task respectively. THA is designed to integrate the information of aspect detection history into the current aspect feature to generate a new history-aware aspect representation. STN first calculates a new opinion representation conditioned on the current aspect candidate. Then, we employ a bi-linear attention network to calculate the opinion summary as the weighted sum of the new opinion representations, according to their associations with the current aspect representation. Finally, the history-aware aspect representation and the opinion summary are concatenated as features for aspect prediction of the current time step.
|
How do they determine the opinion summary?
|
The answers are shown as follows:
* the weighted sum of the new opinion representations, according to their associations with the current aspect representation
|
null | false
| null |
Endemic to the southwestern highlands of Ethiopia, Coffea arabica is today grown in dozens of countries between the Tropic of Capricorn and the Tropic of Cancer. It is commonly used as an understory shrub. It has also been recovered from the Boma Plateau in South Sudan. Coffea arabica is also found on Mount Marsabit in northern Kenya, but it is unclear whether this is a truly native or naturalised occurrence; recent studies support it being naturalised. The species is widely naturalised in areas outside its native land, in many parts of Africa, Latin America, Southeast Asia, India, China, and assorted islands in the Caribbean and in the Pacific.
The coffee tree was first brought to Hawaii in 1813, and it began to be extensively grown by about 1850. It was formerly more widely grown than at present, especially in Kona, and it persists after cultivation in many areas. In some valleys, it is a highly invasive weed. In the Udawattakele and Gannoruwa Forest Reserves near Kandy, Sri Lanka, coffee shrubs are also a problematic invasive species.
Coffee has been produced in Queensland and New South Wales of Australia, starting in the 1980s and 90s. The Wet Tropics Management Authority has classified Coffea arabica as an environmental weed for southeast Queensland due to its invasiveness in non-agricultural areas.
|
Extract the list of the countries where Coffea arabica has been grown, and separate them with a comma
|
Ethiopia, Sudan, USA(Hawaii), Sri Lanka, Australia
|
null | false
| null |
Which ski resorts are close to Seattle?
|
There are many downhill skiing resorts within driving distance of Seattle. The closest resort is the Summit at Snoqualmie. But Stevens Pass and Crystal Mountain are also within driving distance. If you are willing to drive even further, you can ski at Mount Baker or White Pass.
|
|
null | false
| null |
Did Professor Snape jinx Harry's broom during his first quidditch match?
|
No. Professor Quirrell, the Defense Against the Dark Arts teacher secretly in league with Lord Voldemort, jinxed Harry's broom in an attempt to hurt him, since Harry was responsible for Lord Voldemort's downfall a decade earlier. Hermione mistakenly believed Snape was jinxing Harry's broom when she saw his mouth moving, when in fact Snape was trying to save Harry by issuing a counter-jinx.
|
|
null | false
| null |
Andhra Pradesh (English: /ˌɑːndrə prəˈdɛʃ/, Telugu: [ãːndʱrɐ prɐdeːʃ] (listen) abbr. AP) is a state in the south-eastern coastal region of India. It is the seventh-largest state by area covering an area of 162,975 km2 (62,925 sq mi) and tenth-most-populous state, with 49,386,799 inhabitants. It is bordered by Telangana to the north-west, Chhattisgarh to the north, Odisha to the north-east, Tamil Nadu to the south, Karnataka to the west and the Bay of Bengal to the east. It has the second longest coastline in India after Gujarat, of about 974 km (605 mi). Andhra State was the first state to be formed on a linguistic basis in India on 1 October 1953. On 1 November 1956, Andhra State was merged with the Telugu-speaking areas (ten districts) of the Hyderabad State to form United Andhra Pradesh. In 2014, these merged areas of Hyderabad State were bifurcated from United Andhra Pradesh to form the new state Telangana. The present form of Andhra is similar to Andhra state, but some mandalas like Bhadrachalam are still included in Telangana. Amaravati serves as the capital of the state with the largest city being Visakhapatnam.
Andhra Pradesh was once a major Buddhist pilgrimage site in the country and a Buddhist learning center which can be seen in many sites in the state in the form of monastery ruins, chaityas, and stupas. It is also known for being the land of Koh-i-Noor and other globally known diamonds from Kollur Mine. It is also a major producer of rice known as the "Rice bowl of India". Its official language is Telugu; one of the classical languages of India, the fourth most spoken language in India, and the 13th-most spoken language in the world. Andhra Pradesh's second official language is Urdu.
|
From the passage provided, extract the population of Andhra Pradesh
|
The population of Andhra Pradesh is 49,386,799
|
null | false
| 144
|
We break down the task of importance annotation to the level of single propositions. The goal of our crowdsourcing scheme is to obtain a score for each proposition indicating its importance in a document cluster, such that a ranking according to the score would reveal what is most important and should be included in a summary. In contrast to other work, we do not show the documents to the workers at all, but provide only a description of the document cluster's topic along with the propositions. This ensures that tasks are small, simple and can be done quickly (see Figure FIGREF4 ).
In preliminary tests, we found that this design, despite the minimal context, works reasonably on our focused clusters on common educational topics. For instance, consider Figure FIGREF4 : One can easily say that P1 is more important than P2 without reading the documents.
We distinguish two task variants:
Instead of enforcing binary importance decisions, we use a 5-point Likert-scale to allow more fine-grained annotations. The obtained labels are translated into scores (5..1) and the average of all scores for a proposition is used as an estimate for its importance. This follows the idea that while single workers might find the task subjective, the consensus of multiple workers, represented in the average score, tends to be less subjective due to the “wisdom of the crowd”. We randomly group five propositions into a task.
As an alternative, we use a second task design based on pairwise comparisons. Comparisons are known to be easier to make and more consistent BIBREF32 , but also more expensive, as the number of pairs grows quadratically with the number of objects. To reduce the cost, we group five propositions into a task and ask workers to order them by importance via drag-and-drop. From the results, we derive pairwise comparisons and use TrueSkill BIBREF35 , a powerful Bayesian rank induction model BIBREF34 , to obtain importance estimates for each proposition.
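Deriving pairwise comparisons from one drag-and-drop ordering is straightforward. The sketch below pairs that derivation with a simple win-minus-loss tally; the tally is only a stand-in for the actual TrueSkill model, which maintains full Bayesian skill distributions instead.

```python
import itertools

def pairwise_from_ranking(ranked):
    """From one worker's ordering (most important first), derive all
    (winner, loser) pairs; for a task of five propositions that is
    C(5, 2) = 10 comparisons."""
    return list(itertools.combinations(ranked, 2))

def rank_scores(rankings, k=0.1):
    """Toy stand-in for TrueSkill: a scaled win-minus-loss tally over
    all comparisons derived from the collected orderings."""
    scores = {}
    for ranked in rankings:
        for winner, loser in pairwise_from_ranking(ranked):
            scores[winner] = scores.get(winner, 0.0) + k
            scores[loser] = scores.get(loser, 0.0) - k
    return scores
```

Sorting propositions by these scores recovers a global importance ranking from many partial five-item orderings.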
In contrast to other work, we do not show the documents to the workers at all, but provide only a description of the document cluster’s topic along with the propositions. This ensures that tasks are small, simple and can be done quickly (see Figure 3).
|
Why don't they show the documents to the workers like others?
|
They want to ensure the tasks are small and easy and can be completed quickly.
|
null | false
| null |
How many sides and vertices does a polygon have?
|
A polygon is the general case of a two-dimensional shape in which all edges and vertices form a closed shape. A triangle, for example, has three edges/sides and three vertices. A pentagon has five sides and five vertices, a nonagon has nine sides and nine vertices, and so forth. One can formulate a rule defining an n-sided polygon: an n-sided polygon has n sides and n vertices, where n is an integer greater than or equal to three, forming a closed two-dimensional shape.
|
|
null | false
| null |
Solar power plants derive their energy from sunlight, which is made accessible via photovoltaics (PV's). Photovoltaic panels, or solar panels, are constructed using photovoltaic cells which are made of silica materials that release electrons when they are warmed by the thermal energy of the sun. The new flow of electrons generates electricity within the cell. While PV's are an efficient method of producing electricity, they do burn out after a decade and thus, must be replaced; however, their efficiency, cost of operation, and lack of noise/physical pollutants make them one of the cleanest and least expensive forms of energy. Solar power plants require the work of many facets of engineering; electrical engineers are especially crucial in constructing the solar panels and connecting them into a grid, and computer engineers code the cells themselves so that electricity can be effectively and efficiently produced, and civil engineers play the very important role of identifying areas where solar plants are able to collect the most energy.
|
Using the text provided, give me the type of engineers that are involved in fabrication of a solar power plant
|
Although solar power plants require the work of many aspects of engineering, the three main types of engineers involved in fabrication of solar power plants are:
1. Electrical engineers who build solar panels and link them to the electrical grid
2. Computer engineers who program the cells themselves so that power can be produced effectively and efficiently
3. Civil engineers who play a critical role in determining sites where the solar plants can capture the most energy.
|
null | false
| 254
|
Every day, textual information is added or updated on Wikipedia, as well as on other social media platforms like Facebook, Twitter, etc. These platforms receive a huge amount of unverified textual data from their users, such as news channels, bloggers, journalists, and field experts, which ought to be verified before other users start consuming it. This information boom has increased the demand for information verification, also known as fact checking. Apart from encyclopedias and other platforms, domains like scientific publishing and e-commerce also require information verification for reliability purposes. Generally, Wikipedia authors, bloggers, journalists, and scientists provide references to support their claims. Providing referenced text for the claims makes the fact-checking task a little easier, as the verification system no longer needs to search for the relevant documents.
Wikipedia manages to verify all this new information with a number of human reviewers. Manual review processes introduce delays in publishing and do not scale well. To address this issue, researchers have launched relevant challenges, such as the Fake News Challenge (BIBREF0) and the Fact Extraction and VERification (FEVER) challenge (BIBREF1), along with the corresponding datasets. Moreover, Thorne and Vlachos (BIBREF2) released a survey on current models for automated fact-checking. FEVER is the largest dataset and contains around 185k claims drawn from a corpus of 5.4M Wikipedia articles. The claims are labeled as "SUPPORTS", "REFUTES", or "NOT ENOUGH INFO", based on the evidence set.
In this paper, we propose an unsupervised question-answering based approach for solving the fact-checking problem. This approach is inspired by the memory-based reading comprehension task that humans perform at an early age. Just as schoolchildren first read and learn the syllabus content so that they can answer questions in an exam, our model learns a language model and linguistic features in an unsupervised fashion from the provided Wikipedia pages.
To transform the FEVER dataset into the above-mentioned task, we first generate questions from the claims. In the literature, there are two main types of question generation systems: rule-based and neural question generation (NQG) models. Ali et al. (BIBREF3) proposed a rule-based pipeline to automate question generation using part-of-speech (POS) tagging and named entity recognition (NER) tagging on the sentences. Recently, many NQG models have been introduced to generate questions in natural language. Serban et al. (BIBREF4) achieved better performance for question generation by utilizing (passage, question, answer) triplets as training data and an encoder-decoder architecture as their learning model.
Du et al. (BIBREF5) introduced a sequence-to-sequence model with an attention mechanism, outperforming rule-based question generation systems. Although the models proposed in (BIBREF6; BIBREF7) are effective, they require a passage to generate plausible questions, which is not readily available in the FEVER dataset. To resolve these issues and to keep the system simple but effective, we chose to generate questions in the style of a Cloze task, or masked language modeling task. Such a task makes the problem more tractable, as the masked entities are already known (i.e., named entities), and well-constrained, as there is only one correct answer for a given question. Later, when the answers are generated, the question generation process makes it very easy to identify the correct ones.
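To make the Cloze-style generation concrete, here is a minimal sketch. The claim, entity, and helper name are hypothetical illustrations; a real system would obtain entity spans from an NER tagger, and a masked language model such as BERT would later fill the blank.

```python
def make_cloze(claim: str, entity: str, mask_token: str = "[MASK]") -> tuple:
    """Mask one named-entity span in a claim; the masked entity is the answer."""
    if entity not in claim:
        raise ValueError("entity not found in claim")
    # Replace only the first occurrence so there is exactly one blank.
    question = claim.replace(entity, mask_token, 1)
    return question, entity

q, a = make_cloze(
    "Nikolaj Coster-Waldau worked with the Fox Broadcasting Company.",
    "Fox Broadcasting Company",
)
print(q)  # Nikolaj Coster-Waldau worked with the [MASK].
```

Because the masked span is a known entity, checking a predicted answer reduces to an exact (or normalized) string match against the stored entity.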
We use BERT's (Bidirectional Encoder Representations from Transformers) (BIBREF8) masked language model, pre-trained on Wikipedia articles, to predict the masked entities. Currently, neither the claim verification process nor the question generation process mandates explicit reasoning. For this reason, it is difficult to assign the "REFUTES" or "NOT ENOUGH INFO" labels. To resolve this issue, we classify the unsupported claims as "MANUAL_REVIEW" instead of labeling them as "NOT ENOUGH INFO" or "REFUTES".
In the literature, the shared task has been tackled using pipeline-based supervised models (BIBREF9; BIBREF10; BIBREF11). To our knowledge, only BIBREF10 has provided the confusion matrix for each label of their supervised system. For this reason, we only provide a comparison of label accuracy on the "SUPPORTS" label in the results section.
In this paper, we propose an unsupervised question-answering based approach for solving the fact-checking problem. This approach is inspired by the memory-based reading comprehension task that humans perform at an early age
|
What is their approach inspired from?
|
The memory-based reading comprehension task that humans perform at an early age.
|
null | false
| null |
Iolaus crawshayi, the Crawshay's sapphire, is a butterfly in the family Lycaenidae. It is found in Ethiopia, Somalia, Kenya, Uganda, the Democratic Republic of the Congo and Tanzania. The habitat consists of moist savanna.
The larvae feed on the young leaves of Erianthemum dregei and Phragmanthera usuiensis. They are a dull, dirty whitish-green or brown.
|
What kind of animal is the Iolaus crawshayi
|
Iolaus crawshayi, the Crawshay's sapphire, is a butterfly in the family Lycaenidae. It is found in Ethiopia, Somalia, Kenya, Uganda, the Democratic Republic of the Congo and Tanzania.
The habitat consists of moist savanna.
The larvae feed on the young leaves of Erianthemum dregei and Phragmanthera usuiensis. They are a dull, dirty whitish-green or brown.
|
null | false
| null |
Panch phoron, panch phodan or pancha phutana is a whole spice blend, originating from the eastern part of the Indian subcontinent and used especially in the cuisine of Eastern India and Northeastern India, especially in the cuisine of Bhojpur, Mithila, Odisha, Bengal and Nepal. The name literally means "five spices".
All of the spices in panch phoron are seeds. Typically, panch phoron consists of fenugreek seed, nigella seed, cumin seed, black mustard seed and fennel seed in equal parts. Some cooks prefer to use a smaller proportion of fenugreek seeds, because of their mildly bitter taste.
|
What are the ingredients in panch phuran?
|
Panch phuran, also known as panch phoron, consists of the following spices in equal parts: cumin seeds, fenugreek seeds, black mustard seeds, fennel seeds and nigella seeds.
|
null | false
| 408
|
In this section, we compare our method with other baselines when generating examples with the same number of Langevin steps. We compare our method with two closely related baselines: (i) a single EBM with short-run MCMC starting from a noise distribution (Nijkamp et al., 2019), and (ii) cooperative learning of an EBM and a generic generator (Xie et al., 2020a). The numbers of MCMC steps and the FID scores reported in Table 9 are collected from the original papers. In Table 10, we also report the FID scores of the different baseline methods using different numbers of MCMC steps. We can see that with the same number of Langevin steps, CoopFlow generates much more realistic image patterns than the other two baselines. These experiments show that the proposed CoopFlow needs fewer MCMC steps to achieve better results.
|
Since the main advantage of CoopFlow is a fast generation, the author should discuss such a cost perspective in detail. For example, how many iteration MCMC/SDE steps are required for other models, and how about generating samples with the same (and small) number of steps for all models?
|
Thank you for your valuable suggestion. We have followed your suggestions and compared our method with closely related baselines in Table 4.1 from the perspective of computational cost in terms of the number of MCMC steps for EBM.
We compare our method with two closely related baselines: (i) a single EBM with short-run MCMC starting from a noise distribution [a], and (ii) cooperative learning of an EBM and a generic generator [b]. The numbers of MCMC steps and the FID scores reported in Table 4.1 are collected from the original papers.
Table 4.1: A comparison of computational costs in terms of number of MCMC steps.
| Method | Number of MCMC steps | FID |
|---|---|---|
| CoopFlow = EBM + flow (ours) | 30 | 15.80 |
| EBM with short-run MCMC [a] | 100 | 48.21 |
| CoopNet = EBM + generator [b] | 10 | 33.61 |
In Table 4.2, we also report the FID performance of the related baseline methods under different numbers of MCMC steps. We can see that with the same number of Langevin steps, CoopFlow generates much more realistic image patterns than the other two baselines. The quality of the generated images is evaluated by FID. These experiments show that the proposed CoopFlow needs fewer MCMC steps to achieve better results.
Table 4.2: A comparison of FID scores under different numbers of MCMC steps.
| Method | 10 | 20 | 30 | 40 | 50 | 200 |
|---|---|---|---|---|---|---|
| CoopFlow = EBM + flow (ours) | 16.46 | 15.20 | 15.80 | 16.80 | 15.64 | 17.94 |
| EBM with short-run MCMC | 421.3 | 194.88 | 117.02 | 140.79 | 198.09 | 54.23 |
| CoopNet = EBM + generator | 33.74 | 33.48 | 34.12 | 33.85 | 42.99 | 38.88 |
We have followed your suggestion and included the above additional experiments in our revised paper. Please check the newly added subsection "A7. Analysis of the number of Langevin steps" in the Appendix.
|
1912.09713
| false
| null |
CFQ contains 239,357 English question-answer pairs that are answerable using the public Freebase data. (The data URL is not yet provided for anonymous review.) We include a list of MIDs such that their English names map unambiguously to a MID. Table TABREF17(a) summarizes the overall statistics of CFQ. Table TABREF17(b) uses numbers from BIBREF8 and from an analysis of WebQuestionsSP BIBREF17 and ComplexWebQuestions BIBREF18 to compare three key statistics of CFQ to other semantic parsing datasets (none of which provide annotations of their compositional structure). CFQ contains the most query patterns by an order of magnitude and also contains significantly more queries and questions than the other datasets. Note that it would be easy to boost the raw number of questions in CFQ almost arbitrarily by repeating the same question pattern with varying entities, but we use at most one entity substitution per question pattern. Appendix SECREF10 contains more detailed analyses of the data distribution.
CFQ contains 239,357 English question-answer pairs that are answerable using the public Freebase data.
|
How big is new question answering dataset?
|
The answers are shown as follows:
* 239,357 English question-answer pairs
|
null | false
| null |
Bon Jovi is an American rock band formed in 1983 in Sayreville, New Jersey. It consists of singer Jon Bon Jovi, keyboardist David Bryan, drummer Tico Torres, guitarist Phil X, and bassist Hugh McDonald. Original bassist Alec John Such quit the band in 1994, and longtime guitarist and co-songwriter Richie Sambora left in 2013. The band has been credited with "bridging the gap between heavy metal and pop with style and ease".
|
Who is Bon Jovi
|
Bon Jovi is an American rock band formed in 1983 in Sayreville, New Jersey. It consists of singer Jon Bon Jovi, keyboardist David Bryan, drummer Tico Torres, guitarist Phil X, and bassist Hugh McDonald. Original bassist Alec John Such quit the band in 1994, and longtime guitarist and co-songwriter Richie Sambora left in 2013. The band has been credited with bridging the gap between heavy metal and pop with style and ease.
|
null | false
| null |
Greenmantle is the second of five novels by John Buchan featuring the character Richard Hannay. It was first published in 1916 by Hodder & Stoughton, London. It is one of two Hannay novels set during the First World War, the other being Mr Standfast (1919); Hannay's first and best-known adventure, The Thirty-Nine Steps (1915), is set in the period immediately preceding the war.
|
Given this paragraph, who wrote "Greenmantle?"
|
John Buchan
|
null | false
| null |
What is electricity?
|
Electricity is the flow of electrons from a higher electric potential to a lower one (or, alternatively, the flow of electron holes in the opposite direction). The flowing electrons can perform work, ranging from heating a resistive element in a light bulb to turning a motor through a magnetic field.
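As a small illustrative aside, not part of the original answer: Ohm's law and the electric power formula quantify this flow and the work it performs. The function names and the numbers below are arbitrary examples.

```python
def current_amps(voltage_v: float, resistance_ohm: float) -> float:
    """I = V / R: the current a potential difference drives through a resistance."""
    return voltage_v / resistance_ohm

def power_watts(voltage_v: float, current_a: float) -> float:
    """P = V * I: the rate at which the flowing electrons do work."""
    return voltage_v * current_a

# E.g. a 120 V supply across a 240 ohm resistive element:
v, r = 120.0, 240.0
i = current_amps(v, r)       # 0.5 A of electron flow
print(power_watts(v, i))     # 60.0 W dissipated as heat and light
```

The same two relations cover both examples in the answer: a bulb's resistive element converts the work to heat and light, while a motor converts it to torque through a magnetic field.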
|
|
null | false
| 475
|
Deep neural networks (DNNs) have achieved significant progress in a wide range of applications, such as image classification, face recognition, object detection, speech recognition, and machine translation. Despite their success, deep learning models have revealed a vulnerability to adversarial attacks. Crafted by adding small perturbations to benign inputs, adversarial examples (AEs) can fool DNNs into making wrong predictions, which is a critical threat, especially in security-sensitive scenarios such as autonomous driving.
Based on the knowledge of the target model, adversarial attacks can be divided into white-box and black-box attacks. White-box attacks have full access to the architecture and parameters of the target model, and can easily generate a successful adversarial example via back-propagation. By contrast, black-box attacks can only query the target model to obtain the output prediction, which is a more realistic and challenging setting. Since many real-world online application programming interfaces (APIs) impose time or monetary limits on user queries, most current research efforts focus on designing query-efficient attacks, and have indeed achieved huge improvements. For example, the state-of-the-art black-box attack can now succeed in untargeted attacks on the ImageNet dataset with only tens of queries on average. However, this success has sacrificed the imperceptibility of AEs, whose global perturbations are generated with various tricks like random vertical stripes, and are thus very obvious and perceptible to human eyes (see Figure (e) and (f) for examples). In practice, imperceptibility is essential for attackers. Current online APIs usually integrate detectors into their services to detect anomalous inputs. Images with too-visible perturbations have difficulty passing these detectors, not to mention human judgement. To alleviate or even address this problem, we can utilize priors to lead black-box attacks to search in a small but efficient space, instead of using global perturbations. Previous DNN visualization work proposed a type of saliency map that indicates which features significantly influence the output. This BP saliency map is generated by calculating the derivatives of the model output with respect to the input. As shown in Figure (a), brighter pixels in BP saliency maps have a greater impact on the model output.
The classical white-box attack JSMA constructs just such an adversarial saliency map and iteratively selects pixels from it to perturb. But in the black-box setting, we cannot derive this saliency map directly. Nevertheless, the region of bright pixels roughly represents the position of the main object in an image, which accords with our intuition. Inspired by this, we propose to first generate a saliency mask that extracts the regions of the main object in an image via image segmentation, and then limit the perturbation search to these smaller but more important regions, as presented in the figure.
Several recent studies also propose to generate similar local perturbations, but applied in different attack settings with different methods. In particular, some produce saliency maps with class activation mapping (CAM), which requires internal information of the model and needs changes to the model architecture. This is infeasible in black-box scenarios. Others replace CAM with Grad-CAM, which does not need to change the model architecture. However, both CAM and Grad-CAM still rely on knowledge of the model, while in the black-box setting the internal information of the target model is inaccessible. Thus, we choose salient object segmentation, which can automatically extract salient object(s) in an image (see Figure (b)). Compared with CAM and Grad-CAM, it needs no information other than the input image.
On the other hand, these studies are applied to transfer-based attacks (actually a grey-box setting), which first build a substitute model to approximate the target model, then use gradient-based white-box attacks to generate adversarial examples, and finally transfer them to attack the target model. For transfer-based attacks, we must either train a substitute model from scratch with a huge number of queries or assume a pretrained substitute model trained on a data distribution similar to the target model's. And for the latter, the transferability of both adversarial examples and saliency maps highly depends on the selection of the pair of substitute model(s) and target model. Hence, in this paper we focus on gradient-free black-box attacks that take no account of gradients and search for successful perturbations only according to the model output.
We add these segmentation priors to SOTA black-box attacks and find that the imperceptibility of AEs is indeed enhanced, but still limited due to "global perturbation" within the local region (see Figures 4 and 5). To further improve imperceptibility, another novelty of ours is that even within the salient region, "saliency" can still be refined. Our inspiration also comes from DNN visualization works. For instance, given a dog image, CAM produces a localization heatmap showing that the dog-face region is most highly lighted. Zeiler and Fergus systematically cover up different portions of a dog image and find that when the dog face is obscured, the activity in the feature map and the classifier output change dramatically. Furthermore, by combining internal feature visualization with output prediction, Olah et al. reveal that even within the dog's face region, its ears and eyes seem to be more important when distinguishing dogs. Therefore, we are inspired to assume that the salient region in an image is progressive with respect to its impact on the model output. If we can find a smaller but more salient region, the perturbation will be more efficient, and meanwhile the imperceptibility of AEs can also be enhanced. Thus, we propose our Saliency Attack, a new gradient-free black-box attack that refines perturbations in the salient region according to their saliency.
Our main contributions can be summarized as follows:
• To our best knowledge, we are the first to use salient object segmentation to extract binary salient masks in black-box settings. Experiments show that SOTA black-box attacks limited to such regions achieve much better imperceptibility with little reduction in query efficiency and success rate.
• We propose a new gradient-free black-box attack that refines perturbations in salient regions. Compared with the search methods used in other gradient-free attacks, our method generates smaller but still effective perturbations, which is interpretable to some extent and further improves imperceptibility.
• We demonstrate that the perturbations generated by our Saliency Attack are more robust against detection-based defenses like Feature Squeezing.
On the other hand, these studies (Dong et al., 2020; Xiang et al., 2021) are applied to transfer-based attacks (actually a grey-box setting), which first build a substitute model to approximate the target model, then use gradient-based white-box attacks to generate adversarial examples, and finally transfer them to attack the target model. For transfer-based attacks, we must either train a substitute model from scratch with a huge number of queries (Papernot et al., 2017) or assume a pretrained substitute model trained on a data distribution similar to the target model's (Liu et al., 2017). And for the latter, the transferability of both adversarial examples and saliency maps highly depends on the selection of the pair of substitute model(s) and target model. Hence, in this paper we focus on gradient-free black-box attacks (Narodytska & Kasiviswanathan, 2016; Moon et al., 2019; Andriushchenko et al., 2020) that take no account of gradients and search for successful perturbations only according to the model output. We add these segmentation priors to SOTA black-box attacks and find that the imperceptibility of AEs is indeed enhanced, but still limited due to "global perturbation" within the local region (see Figures 4 and 5). To further improve imperceptibility, another novelty of ours is that even within the salient region, "saliency" can still be refined. Our inspiration also comes from DNN visualization works (Zhou et al., 2016; Zeiler & Fergus, 2014; Olah et al., 2018). For instance, given a dog image, CAM produces a localization heatmap showing that the dog-face region is most highly lighted (Zhou et al., 2016). Zeiler & Fergus (2014) systematically cover up different portions of a dog image and find that when the dog face is obscured, the activity in the feature map and the classifier output change dramatically. Furthermore, by combining internal feature visualization with output prediction, Olah et al. (2018) reveal that even within the dog's face region, its ears and eyes seem to be more important when distinguishing dogs. Therefore, we are inspired to assume that the salient region in an image is progressive with respect to its impact on the model output. If we can find a smaller but more salient region, the perturbation will be more efficient, and meanwhile the imperceptibility of AEs can also be enhanced. Thus, we propose our Saliency Attack, a new gradient-free black-box attack that refines perturbations in the salient region according to their saliency. Our main contributions can be summarized as follows:
• To our best knowledge, we are the first to use salient object segmentation to extract binary salient masks in black-box settings. Experiments show that SOTA black-box attacks limited to such regions achieve much better imperceptibility with little reduction in query efficiency and success rate.
• We propose a new gradient-free black-box attack that refines perturbations in salient regions. Compared with the search methods used in other gradient-free attacks, our method generates smaller but still effective perturbations, which is interpretable to some extent and further improves imperceptibility.
• We demonstrate that the perturbations generated by our Saliency Attack are more robust against detection-based defenses like Feature Squeezing.
|
What is the technical significance of the proposed approach?
|
First, we note that there is a misunderstanding of our approach in this comment. After reaching the minimal block, or when no smaller block has a better loss, we backtrack to the last level of split blocks (instead of going back to the level of the initial blocks and "applying the procedure again to the second best block") and choose the best of the remaining blocks for further perturbation. The intuition is that we always refine perturbations in a region that is as small as possible, which accords with our goal of perturbing smaller but more important regions to generate imperceptible AEs.
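To illustrate the search pattern described in this paragraph, here is a heavily simplified, hypothetical sketch: split the current block into quadrants, descend into the best quadrant when it improves a user-supplied loss, and otherwise keep the current block. It omits the full backtracking over the remaining blocks and is in no way the authors' actual Saliency Attack implementation.

```python
def refine(block, loss, min_size=1):
    """block = (x, y, w, h); loss maps a block to a score (lower is better)."""
    x, y, w, h = block
    if w <= min_size or h <= min_size:
        return block  # reached the minimal block
    hw, hh = w // 2, h // 2
    quadrants = [
        (x, y, hw, hh), (x + hw, y, w - hw, hh),
        (x, y + hh, hw, h - hh), (x + hw, y + hh, w - hw, h - hh),
    ]
    best = min(quadrants, key=loss)
    if loss(best) < loss(block):      # a smaller block is better: go deeper
        return refine(best, loss, min_size)
    return block                      # no split helps: keep the current block

# With a toy loss favoring the top-left corner, the search homes in on
# the smallest top-left block.
print(refine((0, 0, 8, 8), lambda b: b[0] + b[1] + b[2] + b[3]))  # (0, 0, 1, 1)
```

In the real attack the "loss" would come from querying the target model on a perturbed image, so each comparison costs queries, which is why refining in as small a region as possible also matters for query efficiency.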
For our search algorithm, we agree that it is simple. However, we think that a simple yet effective algorithm is always desirable, because it can be easily understood and implemented. For example, Parsimonious attack (Moon et al., 2019) uses greedy local search; Square attack (Andriushchenko et al., 2020) adopts an even simpler random search. These simple algorithms achieve very strong performance because they are particularly suited to their problems. For the present work, the proposed search algorithm is likewise suited to the problem of generating AEs with small perturbations, and the results have demonstrated its effectiveness. We appreciate the suggestion that we need to justify the search algorithm. In Table 2 (page 8), Saliency significantly outperforms Parsimonious-seg and Square-seg, which indicates that our search algorithm is better than the local search algorithm in Parsimonious Attack and the random search algorithm in Square Attack. Moreover, we have added experiments comparing our search algorithm with a new baseline, greedy search in the salient region, which greedily perturbs blocks to maximize marginal gain. We have fine-tuned the block size of the greedy search algorithm, and the comparison results are presented in Table 3 (page 9). It can be seen that our algorithm is still significantly better. These results, showing that our search algorithm outperforms existing search algorithms (i.e., local search and random search) and a common baseline (i.e., greedy search), indicate that it is indeed effective.
As for using saliency maps, this idea has indeed already appeared in the field of adversarial attacks. However, to the best of our knowledge, we are the first to apply it to black-box attacks on image classification models. The two papers mentioned in the comment (Guo et al., 2020; Sun et al., 2021) are both for white-box attacks. We have in fact reviewed two similar works (Dong et al., 2020; Xiang et al., 2021) in the paper, which use CAM/Grad-CAM to generate saliency maps and can only be applied in white-box or grey-box settings. We introduced them and thoroughly analyzed their differences and their infeasibility in the black-box setting (para. 3-4, page 2). Besides, we want to clarify that our main contribution is not just introducing saliency maps into black-box attacks, but how to process and utilize saliency maps in the black-box setting. The combination of saliency maps and our search algorithm brings about smaller and more imperceptible perturbations, which greatly alleviates the problem of imperceptibility in black-box attacks.
|
null | false
| 198
|
The prevalent use of emoji—and their text-based precursors—is mostly unaddressed in current natural language processing (NLP) tasks. The support of the Unicode Standard BIBREF0 for emoji characters in 2010 ushered in a wide-spread, international adoption of these graphical elements in casual contexts. Interpreting the meaning of these characters has been challenging however, since they take on multiple semantic roles BIBREF1.
Whether or not emoji are used depends on the context of a text or conversation, with more formal settings generally being less tolerant. Thus the popular aligned corpus Europarl BIBREF2 is naturally devoid of emoji. Technical limitations, like missing Unicode support, also limit their use. This in turn affects commonly used corpora, tokenizers, and pre-trained networks.
Take for example the Ubuntu Dialog Corpus by BIBREF3, a commonly used corpus for multi-turn systems. This dataset was collected from an Internet Relay Chat (IRC) room casually discussing the operating system Ubuntu. IRC nodes usually support the ASCII text encoding, so there's no support for graphical emoji. However, in the 7,189,051 utterances, there are only 9946 happy emoticons (i.e. :-) and the cruelly denosed :) version) and 2125 sad emoticons.
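A rough sketch of how such emoticon counts could be obtained from a corpus of utterances; the regexes and sample utterances below are illustrative, not the actual Ubuntu Dialog Corpus pipeline.

```python
import re

# ":-)" and its "denosed" ":)" variant; similarly for the sad variants.
HAPPY = re.compile(r":-?\)")
SAD = re.compile(r":-?\(")

def count_emoticons(utterances):
    """Count happy and sad ASCII emoticons across a list of utterances."""
    happy = sum(len(HAPPY.findall(u)) for u in utterances)
    sad = sum(len(SAD.findall(u)) for u in utterances)
    return happy, sad

print(count_emoticons(["thanks :)", "that broke my install :-(", "works now :-)"]))
# (2, 1)
```

Run over 7,189,051 utterances, counts this small (under 10k happy, about 2k sad) illustrate just how rare even ASCII emoticons are in this corpus.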
Word embeddings also handle emoji poorly: Word2vec BIBREF4 with the commonly used pre-trained Google News vectors doesn't support graphical emoji at all, and vectors for textual emoticons are inconsistent. As another example with contextualized word embeddings, there are no emoji or textual emoticons in the default vocabulary list of BERT BIBREF5, and support for emoji was only recently added to the tokenizer. The same is true for GPT-2 BIBREF6. Since all downstream systems, ranging from multilingual résumé parsing to fallacy detection BIBREF7, rely on the completeness of these embeddings, this lack of emoji support can affect their performance.
Another challenge is that emoji usage isn't static. Think of shifting conventions, different cultures, and newly added emoji to the Unicode list. Several applications also use their own custom emoji, like chat application Slack and streaming service Twitch. This becomes an issue for methods that leverage the Unicode description BIBREF8 or that rely on manual annotations BIBREF9.
Our contribution with this paper is two-fold: firstly, we argue that the current use—or rather non-existing use—of emoji in tokenizing, training, and the datasets themselves is insufficient. Secondly, we attempt to quantify the significance of incorporating emoji-based features by presenting a fine-tuned model and comparing it to a baseline that pays no special attention to emoji.
Section SECREF2 will start with an overview of work on emoji representations, emoji-based models and analysis of emoji usage. A brief introduction in conversational systems will also be given. Section SECREF3 will then look into popular datasets with and without emoji and then introduce the dataset we used.
Our model will then be discussed in Section SECREF4, including the tokenization in Subsection SECREF4, training setup in Subsection SECREF6 and evaluation in Subsection SECREF10. This brings us to the results of our experiment, which is discussed in Section SECREF5 and finally our conclusion and future work are presented in Section SECREF6.
Take for example the Ubuntu Dialog Corpus by Lowe et al. [20], a commonly used corpus for multi-turn systems. This dataset was collected from an Internet Relay Chat (IRC) room casually discussing the operating system Ubuntu.
|
Where is their dataset from?
|
An Internet Relay Chat (IRC) room.
|
null | false
| null |
Write a story about what the first contact with aliens would be like
|
The first contact with aliens would be a momentous event that the entire Earth would remember for generations. The nature of this first contact depends, first of all, on who reaches whom: aliens arriving on Earth, or humans discovering another planet with a civilization of its own. Either way, it is highly likely that the first contact will be between robots of the more advanced civilization and the people of the less advanced one, since a highly advanced civilization tends to explore other worlds using its robots first.
|
|
null | false
| null |
Beijing (/beɪˈdʒɪŋ/ bay-JING; Chinese: 北京; pinyin: Běijīng; Mandarin pronunciation: [pèɪ.tɕíŋ] (listen)), alternatively romanized as Peking (/piːˈkɪŋ/ pee-KING), is the capital of the People's Republic of China. With over 21 million residents, Beijing is the world's most populous national capital city and is China's second largest city after Shanghai. It is located in Northern China, and is governed as a municipality under the direct administration of the State Council with 16 urban, suburban, and rural districts. Beijing is mostly surrounded by Hebei Province with the exception of neighboring Tianjin to the southeast; together, the three divisions form the Jingjinji megalopolis and the national capital region of China.
|
What name is Beijing also known by?
|
Běijīng is alternatively romanized as Peking and is the capital of the People's Republic of China
|