| paper_id (stringlengths 10–10, ⌀) | yes_no (bool, 2 classes) | paper_index (int64, 0–519, ⌀) | evidence (stringlengths 0–37.7k) | question (stringlengths 4–11.7k) | answer (stringlengths 1–26k) |
|---|---|---|---|---|---|
null | false
| null |
What is the most popular sport in Canada?
|
The most popular sport in Canada is ice hockey. It is also Canada's official national winter sport (lacrosse is the national summer sport).
|
|
null | false
| null |
Which of these movies were nominated for Best Picture? "All Quiet on the Western Front", "Everything Everywhere All at Once", "Avatar: The Way of Water", "Glass Onion", "Pinocchio", "Navalny", "The Whale".
|
The following films were nominated for Best Picture in the 2023 Academy Awards: "All Quiet on the Western Front", "Everything Everywhere All at Once", and "Avatar: The Way of Water". "Everything Everywhere All at Once" won Best Picture.
While the other films weren't nominated for Best Picture, they received the following Academy Award nominations and wins: "Glass Onion: A Knives Out Mystery" was nominated for Best Adapted Screenplay. "Guillermo del Toro's Pinocchio" won Best Animated Feature Film. "Navalny" won Best Documentary Feature Film. And "The Whale" won twice: Brendan Fraser for Best Actor in a Leading Role, and Adrien Morot, Judy Chin and Annemarie Bradley for Best Makeup and Hairstyling.
|
|
null | false
| null |
Different theorists have different categorizations and conceptualizations of defence mechanisms. Large reviews of theories of defence mechanisms are available from Paulhus, Fridhandler and Hayes (1997) and Cramer (1991). The Journal of Personality published a special issue on defence mechanisms (1998).
In the first definitive book on defence mechanisms, The Ego and the Mechanisms of Defence (1936), Anna Freud enumerated the ten defence mechanisms that appear in the works of her father, Sigmund Freud: repression, regression, reaction formation, isolation, undoing, projection, introjection, turning against one's own person, reversal into the opposite, and sublimation or displacement.
Sigmund Freud posited that defence mechanisms work by distorting id impulses into acceptable forms, or by unconscious or conscious blockage of these impulses. Anna Freud considered defence mechanisms as intellectual and motor automatisms of various degrees of complexity that arose in the process of involuntary and voluntary learning.
Anna Freud introduced the concept of signal anxiety; she stated that it was "not directly a conflicted instinctual tension but a signal occurring in the ego of an anticipated instinctual tension". The signalling function of anxiety was thus seen as crucial, and biologically adapted to warn the organism of danger or a threat to its equilibrium. The anxiety is felt as an increase in bodily or mental tension, and the signal that the organism receives in this way allows for the possibility of taking defensive action regarding the perceived danger.
Both Freuds studied defence mechanisms, but Anna spent more of her time and research on five main mechanisms: repression, regression, projection, reaction formation, and sublimation. All defence mechanisms are responses to anxiety and how the consciousness and unconscious manage the stress of a social situation.
Repression: when a feeling is hidden and forced from the consciousness to the unconscious because it is seen as socially unacceptable
Regression: falling back into an early state of mental/physical development seen as "less demanding and safer"
Projection: possessing a feeling that is deemed as socially unacceptable and instead of facing it, that feeling or "unconscious urge" is seen in the actions of other people
Reaction formation: acting the opposite way that the unconscious instructs a person to behave, "often exaggerated and obsessive". For example, if a wife is infatuated with a man who is not her husband, reaction formation may cause her to – rather than cheat – become obsessed with showing her husband signs of love and affection.
Sublimation: seen as the most acceptable of the mechanisms, an expression of anxiety in socially acceptable ways
Otto F. Kernberg (1967) developed a theory of borderline personality organization of which one consequence may be borderline personality disorder. His theory is based on ego psychological object relations theory. Borderline personality organization develops when the child cannot integrate helpful and harmful mental objects together. Kernberg views the use of primitive defence mechanisms as central to this personality organization. Primitive psychological defences are projection, denial, dissociation or splitting and they are called borderline defence mechanisms. Also, devaluation and projective identification are seen as borderline defences.
In George Eman Vaillant's (1977) categorization, defences form a continuum related to their psychoanalytical developmental level. They are classified into pathological, immature, neurotic and "mature" defences.
Robert Plutchik's (1979) theory views defences as derivatives of basic emotions, which in turn relate to particular diagnostic structures. According to his theory, reaction formation relates to joy (and manic features), denial relates to acceptance (and histrionic features), repression to fear (and passivity), regression to surprise (and borderline traits), compensation to sadness (and depression), projection to disgust (and paranoia), displacement to anger (and hostility) and intellectualization to anticipation (and obsessionality).
The Diagnostic and Statistical Manual of Mental Disorders (DSM-IV) published by the American Psychiatric Association (1994) includes a tentative diagnostic axis for defence mechanisms. This classification is largely based on Vaillant's hierarchical view of defences, but has some modifications. Examples include: denial, fantasy, rationalization, regression, isolation, projection, and displacement.
|
What are the defense mechanisms?
|
There are multiple definitions of defense mechanism based on different theories. Generally, defense mechanisms are human responses to anxiety and the ways the consciousness and unconscious manage the stress of a social situation. Some of the most common defense mechanisms include:
Repression: forcing a feeling from the consciousness to the unconscious because it is seen as socially unacceptable
Regression: falling back into an early state of mental/physical development seen as less demanding and safer
Projection: possessing a feeling that is deemed as socially unacceptable and instead of facing it, that feeling or "unconscious urge" is seen in the actions of other people
Reaction formation: acting the opposite way that the unconscious instructs a person to behave, "often exaggerated and obsessive". For example, if a wife is infatuated with a man who is not her husband, reaction formation may cause her to – rather than cheat – become obsessed with showing her husband signs of love and affection.
Sublimation: seen as the most acceptable of the mechanisms, an expression of anxiety in socially acceptable ways
|
null | false
| null |
Identify which instrument is string or percussion: Geophone, Penorcon
|
Penorcon is string, Geophone is percussion.
|
|
null | false
| null |
Taylor Alison Swift (born December 13, 1989) is an American singer-songwriter. Her genre-spanning discography, songwriting abilities and artistic reinventions have received critical praise and wide media coverage. Born in West Reading, Pennsylvania, Swift moved to Nashville at age 14 to become a country artist. She signed a songwriting deal with Sony/ATV Music Publishing in 2004 and a recording contract with Big Machine Records in 2005. Her 2006 self-titled debut album made her the first female country artist to write a U.S. platinum-certified album.
Swift's next albums, Fearless (2008) and Speak Now (2010), explored country pop. The former's "Love Story" and "You Belong with Me" were the first country songs to top the U.S. pop and all-genre airplay charts, respectively. She experimented with rock and electronic styles on Red (2012), which featured her first Billboard Hot 100 number-one song, "We Are Never Ever Getting Back Together", and eschewed her country image in her synth-pop album, 1989 (2014), supported by chart-topping songs "Shake It Off", "Blank Space", and "Bad Blood". Media scrutiny inspired the urban-flavored Reputation (2017) and its number-one single "Look What You Made Me Do".
Exiting Big Machine, Swift signed with Republic Records in 2018 and released her seventh studio album, Lover (2019), followed by the autobiographical documentary Miss Americana (2020). She ventured into indie folk and alternative rock in her 2020 albums Folklore and Evermore, whose singles "Cardigan" and "Willow" topped the Hot 100. Swift began re-recording her first six albums after a dispute over their masters, re-releasing two in 2021—Fearless (Taylor's Version) and Red (Taylor's Version). The latter's "All Too Well (10 Minute Version)" became the longest song to top the Hot 100. Her 2022 album Midnights and single "Anti-Hero" broke all-time streaming records. Swift has directed music videos and films, such as All Too Well: The Short Film (2021), and played supporting roles in others.
Having sold over 200 million records globally, Swift is one of the best-selling musicians in history. She is the most streamed woman on Spotify and the only act to have five albums open with over one million copies sold in the US. Among her accolades are 12 Grammy Awards, including three Album of the Year wins; an Emmy Award; 40 American Music Awards; 29 Billboard Music Awards; three IFPI Global Recording Artist of the Year awards; and 92 Guinness World Records. Swift has been featured in rankings such as Rolling Stone's 100 Greatest Songwriters of All Time, Billboard's Greatest of All Time Artists, the Time 100 and Forbes Celebrity 100. Honored with titles such as Artist of the Decade and Woman of the Decade, Swift is an advocate for artists' rights and women's empowerment. Her music is credited with influencing a generation of singers and songwriters.
|
Mention Taylor’s work in terms of Music videos and acting.
|
Taylor Swift has directed music videos and films, such as All Too Well: The Short Film (2021). She also played supporting roles in other ventures.
|
null | false
| 425
|
In this subsection, we discuss our main experimental results. As we observe that the results are heavily influenced by the hyper-parameter tuning strategy, our main goal throughout our experiments is the validation of the proposed sample probing idea by integrating it into strong generative GZSL baselines and then comparing results using the same tuning methodology. Using this principle, we present two main types of analysis: (i) the evaluation of the proposed approach using ESZSL as the probe model in combination with a number of generative GZSL models, and (ii) the evaluation of alternative closed-form probe models within our framework.
Generative GZSL models with sample probing. To evaluate the sample probing approach as a general technique to improve generative model training, we integrate it into four recent generative GZSL approaches: conditional Wasserstein GAN (cWGAN), LisGAN, TF-VAEGAN and FREE. We additionally report results for the variant of TF-VAEGAN with fine-tuned representations (TF-VAEGAN-FT), as it is the only one among them with reported fine-tuning results.
For the cWGAN, we follow the implementation details described in Sariyildiz & Cinbis (2019), and tune hyper-parameters using our policy. For LisGAN, TF-VAEGAN and FREE models, we use the official repositories shared by their respective authors. We use the version of TF-VAEGAN without feedback loop, for simplicity, as the model yields excellent performance with and without the feedback loop. In all models (except cWGAN), we only re-tune the number of training iterations of the original models using our hyper-parameter tuning policy, to make the results comparable, as it is unclear how the original values were obtained. We keep all remaining hyper-parameters unchanged to remain as close as possible to the original implementations.
The results over the four benchmark datasets are presented in Table. In terms of the h-scores, we observe improvements in 17 out of 19 cases, at varying degrees (up to 4.6 points). Only in two cases do we observe a slight degradation (maximum of 0.2 points) in performance. Overall, these improvements over already strong and state-of-the-art (or competitive) baselines validate the effectiveness of the proposed sample probing approach, suggesting that it is a valid method towards end-to-end learning of generative GZSL models directly optimized for synthetic training data generation purposes.
Sample probing with alternative closed-form models. We now evaluate our approach with different closed-form probe models, specifically ESZSL, Sem2Vis and Vis2Sem, as described in Section 3.3. For these experiments, we use the TF-VAEGAN as the base generative model.
The results with four configurations over four benchmark datasets are presented in Table. First of all, in terms of h-scores, we observe considerable performance variations across the probe models and datasets: Sem2Vis performs the best on CUB (+0.9 over the baseline), ESZSL provides a clear gain on FLO (+4.6) and a relative improvement on AWA (+0.3), and Vis2Sem improves the most on SUN (+1.8). These results suggest that sample probe alternatives have their own advantages and disadvantages, and their performances can be data dependent. Therefore, in a practical application, probe model options can be incorporated into the model selection process. More in-depth understanding of closed-form model characteristics for sample probing purposes, and the formulation and evaluation of other probe models can be important future work directions. Overall, the fact that we observe equivalent (2) or better (9) h-scores in 11 out of 12 sample probing experiments indicates the versatility of the approach in terms of compatibility with various closed-form probe models.
Comparison to other generative GZSL approaches. Performance comparisons across independent experiment results can be misleading due to differences in formulation-agnostic implementation and model selection details. Nevertheless, an overall comparison can be found in Appendix A.
Effect of probe loss type. The zsl-loss and gzsl-loss variants compute the probe loss on the zsl-validation and probe-validation classes, respectively. We observe that using either one during the evaluation of the sample probing ZSL model brings its own characteristic results (Table). On all datasets, using zsl-loss increases the seen accuracy while using gzsl-loss increases the unseen accuracy. We choose among these two options using our same hyper-parameter tuning policy, on the validation set.
Effect of sample probing loss weight. Figure shows the validation and test set h-score values as a function of the sample probing loss weight. In the test set results, we observe an overall increasing performance trend with larger loss weights, up to the weight 6, highlighting the contribution of sample probing. The optimal weight with respect to the validation and the test sets, however, differs. This observation is an example of the difficulty of tuning the ZSL model based on a validation set. We set the loss weight to 5 following our hyper-parameter tuning policy, which yields a score almost 0.5 lower than the maximum test-set score observed for this single hyper-parameter.
Quantitative analysis of sample quality. We quantitatively evaluate the sample quality using the Fréchet (Wasserstein-2) distance, which is also used in the FID metric for evaluating GAN models. In Table, we provide a comparison of TF-VAEGAN and our approach in terms of the mean per-class Fréchet distance between real and synthetic samples (200 synthetic samples per class) of unseen classes on three datasets. The results show that sample probing helps the generator produce more realistic samples compared to TF-VAEGAN. We further provide a qualitative analysis of sample quality via t-SNE visualizations in Appendix B.
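To make the metric concrete, here is a minimal numpy/scipy sketch of the Fréchet (Wasserstein-2) distance between Gaussians fitted to real and synthetic feature sets, as used in FID; the per-class protocol above would average this over the unseen classes. The function and variable names are illustrative, not from the paper.

```python
import numpy as np
from scipy import linalg

def frechet_distance(real_feats: np.ndarray, fake_feats: np.ndarray) -> float:
    """Fréchet distance between Gaussians fitted to two (N x D) feature matrices."""
    mu1, mu2 = real_feats.mean(axis=0), fake_feats.mean(axis=0)
    sigma1 = np.cov(real_feats, rowvar=False)
    sigma2 = np.cov(fake_feats, rowvar=False)
    covmean = linalg.sqrtm(sigma1 @ sigma2)   # matrix square root of the product
    if np.iscomplexobj(covmean):              # drop tiny imaginary parts from numerics
        covmean = covmean.real
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))
```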
Table 1: Evaluation of sample probing with multiple generative GZSL models on four benchmark datasets. Each row pair shows the effect of adding sample probing to a particular generative GZSL model, using ESZSL as the closed-form probe model. We use the same hyper-parameter optimization policy in all cases to make results comparable. We observe h-score improvements at varying degrees in 17 out of 19 model, feature & dataset variations.
|
Can your method be applied to other generative ZSL models?
|
Many thanks for this suggestion. In the revised manuscript, we now evaluate the proposed approach using two additional closed-form ZSL models as the probe models, and report the results in the newly added Table 2. As discussed in Section 4.1, we observe improvements via sample probing in 11 out of 12 cases, which supports the general applicability of the method.
|
null | false
| null |
Who was Else Schmitz-Gohr?
|
Else Schmitz-Gohr (12 August 1901 – 13 December 1987) was a German composer, pianist, and teacher who is best remembered for her Elegy for the Left Hand for piano, her successful students, and her recordings of Max Reger’s works for piano.
Schmitz-Gohr was born in Cologne. She studied at the Cologne Conservatory and the Stern Conservatory in Berlin. Her teachers included Franz Bolsche, Wilhelm Klatte, Dr. Otto Klauwell, James Kwast, and Fritz Hans Rehbold. She socialized with sculptor Lili Graf. In 1918, she debuted as a pianist under conductor Hermann Abendroth, and in 1922 won the Gustav Hollaender medal.
|
|
null | false
| 294
|
Although internet users accept unilateral contracts such as terms of service on a regular basis, it is well known that these users rarely read them. Nonetheless, these are binding contractual agreements. A recent study suggests that up to 98% of users do not fully read the terms of service before accepting them BIBREF0 . Additionally, they find that two of the top three factors users reported for not reading these documents were that they are perceived as too long ("information overload") and too complicated ("difficult to understand"). This can be seen in Table TABREF3 , where a section of the terms of service for a popular phone app includes a 78-word paragraph that can be distilled down to a 19-word summary.
The European Union's BIBREF1 , the United States' BIBREF2 , and New York State's BIBREF3 show that many levels of government have recognized the need to make legal information more accessible to non-legal communities. Additionally, due to recent social movements demanding accessible and transparent policies on the use of personal data on the internet BIBREF4 , multiple online communities have formed that are dedicated to manually annotating various unilateral contracts.
We propose the task of the automatic summarization of legal documents in plain English for a non-legal audience. We hope that such a technological advancement would enable a greater number of people to enter into everyday contracts with a better understanding of what they are agreeing to. Automatic summarization is often used to reduce information overload, especially in the news domain BIBREF5 . Summarization has been largely missing in the legal genre, with notable exceptions of judicial judgments BIBREF6 , BIBREF7 and case reports BIBREF8 , as well as information extraction on patents BIBREF9 , BIBREF10 . While some companies have conducted proprietary research in the summarization of contracts, this information sits behind a large pay-wall and is geared toward law professionals rather than the general public.
In an attempt to motivate advancement in this area, we have collected 446 sets of contract sections and corresponding reference summaries which can be used as a test set for such a task. We have compiled these sets from two websites dedicated to explaining complicated legal documents in plain English.
Rather than attempt to summarize an entire document, these sources summarize each document at the section level. In this way, the reader can reference the more detailed text if need be. The summaries in this dataset are reviewed for quality by the first author, who has 3 years of professional contract drafting experience.
The dataset we propose contains 446 sets of parallel text. We show the level of abstraction through the number of novel words in the reference summaries, which is significantly higher than the abstractive single-document summaries created for the shared tasks of the Document Understanding Conference (DUC) in 2002 BIBREF11 , a standard dataset used for single document news summarization. Additionally, we utilize several common readability metrics to show that there is an average of a 6 year reading level difference between the original documents and the reference summaries in our legal dataset.
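As a concrete illustration of the readability comparison, the sketch below computes a grade-level gap with one common metric (Flesch-Kincaid); the excerpt does not name the exact metrics or tooling used, so the textstat library here is an assumption.

```python
import textstat  # assumed readability library, not named by the paper

def reading_level_gap(document: str, summary: str) -> float:
    """Grade-level difference between an original section and its reference summary."""
    return (textstat.flesch_kincaid_grade(document)
            - textstat.flesch_kincaid_grade(summary))

# A positive gap means the original reads at a higher grade level; the paper
# reports an average gap of about 6 grade levels on its legal dataset.
```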
In initial experimentation using this dataset, we employ popular unsupervised extractive summarization models such as TextRank BIBREF12 and Greedy KL BIBREF13 , as well as lead baselines. We show that such methods do not perform well on this dataset when compared to the same methods on DUC 2002. These results highlight the fact that this is a very challenging task. As there is not currently a dataset in this domain large enough for supervised methods, we suggest the use of methods developed for simplification and/or style transfer.
In this paper, we begin by discussing how this task relates to the current state of text summarization and similar tasks in Section SECREF2 . We then introduce the novel dataset and provide details on the level of abstraction, compression, and readability in Section SECREF3 . Next, we provide results and analysis on the performance of extractive summarization baselines on our data in Section SECREF5 . Finally, we discuss the potential for unsupervised systems in this genre in Section SECREF6 .
We propose the task of the automatic summarization of legal documents in plain English for a non-legal audience.
|
What task do they propose in the paper?
|
The task of the automatic summarization of legal documents in plain English for a non-legal audience.
|
2003.05995
| false
| null |
For the initial data collection using the CRWIZ platform, 145 unique dialogues were collected (each dialogue consists of a conversation between two participants). All the dialogues were manually checked by one of the authors and those where the workers were clearly not partaking in the task or collaborating were removed from the dataset. The average time per assignment was 10 minutes 47 seconds, very close to our initial estimate of 10 minutes, and the task was available for 5 days in AMT. Out of the 145 dialogues, 14 (9.66%) obtained the bonus of $0.2 for resolving the emergency. We predicted that only a small portion of the participants would be able to resolve the emergency in less than 6 minutes, thus it was framed as a bonus challenge rather than a requirement to get paid. The fastest time recorded to resolve the emergency was 4 minutes 13 seconds with a mean of 5 minutes 8 seconds. Table TABREF28 shows several interaction statistics for the data collected compared to the single lab-based WoZ study BIBREF4.
Data Analysis ::: Subjective Data
Table TABREF33 gives the results from the post-task survey. We observe that subjective and objective task success are similar in that the dialogues that resolved the emergency were rated consistently higher than the rest.
Mann-Whitney-U one-tailed tests show that the scores of the Emergency Resolved Dialogues for Q1 and Q2 were significantly higher than the scores of the Emergency Not Resolved Dialogues at the 95% confidence level (Q1: $U = 1654.5$, $p < 0.0001$; Q2: $U = 2195$, $p = 0.009$, both $p < 0.05$). This indicates that effective collaboration and information ease are key to task completion in this setting.
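For reference, the same one-tailed comparison could be sketched with scipy as below; the score lists are hypothetical stand-ins for the per-dialogue survey ratings.

```python
from scipy.stats import mannwhitneyu

# Hypothetical Q1 ratings; the real data are the per-dialogue survey scores.
resolved_q1 = [7, 6, 7, 5, 6]      # dialogues where the emergency was resolved
unresolved_q1 = [4, 5, 3, 5, 4]    # dialogues where it was not

# One-tailed test: are the resolved-dialogue scores significantly higher?
u_stat, p_value = mannwhitneyu(resolved_q1, unresolved_q1, alternative="greater")
print(f"U = {u_stat}, p = {p_value:.4f}")
```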
Regarding the qualitative data, one of the objectives of the Wizard-of-Oz technique was to make the participant believe that they were interacting with an automated agent, and the qualitative feedback seemed to reflect this: “The AI in the game was not helpful at all [...]” or “I was talking to Fred a bot assistant, I had no other partner in the game”.
Data Analysis ::: Single vs Multiple Wizards
In Table TABREF28, we compare various metrics from the dialogues collected with crowdsourcing with the dialogues previously collected in a lab environment for a similar task. Most figures are comparable, except the number of emergency assistant turns (and consequently the total number of turns). To further understand these differences, we first grouped the dialogue acts into four broader types: Updates, Actions, Interactions and Requests, and computed the relative frequency of each of these types in both data collections. In addition, Figures FIGREF29 and FIGREF30 show the distribution of the most frequent dialogue acts in the different settings. It is visible that in the lab setting, where the interaction was face-to-face with a robot, the Wizard used more Interaction dialogue acts (Table TABREF32). These were often used in contexts where the Wizard needed to hold the turn while looking for the appropriate prompt or waiting for the robot to arrive at the specified goal in the environment. On the other hand, in the crowdsourced data collection, situation-update utterances were a more common choice while the assistant was waiting for the robot to travel to the specified goal in the environment.
Perhaps not surprisingly, the data shows a moderately strong positive correlation between task success and the number of Action-type dialogue acts the Wizard performs, triggering events in the world leading to success ($R=0.475$). There is also a positive correlation between task success and the number of Request dialogue acts requesting confirmation before actions ($R=0.421$), e.g., “Which robot do you want to send?”. As Table 3 shows, these are relatively rare but perhaps reflect a level of collaboration needed to further the task to completion. Table TABREF40 shows one of the dialogues collected where the Emergency Assistant continuously engaged with the Operator through these types of dialogue acts.
The task success rate was also very different between the two set-ups. In the experiments reported in BIBREF4, 96% of the dialogues led to the extinction of the fire, whereas in the crowdsourcing setting only 9.66% achieved the same goal. In the crowdsourced setting, the robots were slower, moving at realistic speeds, unlike in the lab setting. A higher bonus and more time for the task might lead to a higher task success rate.
Data Analysis ::: Limitations
It is important to consider the number of available participants ready and willing to perform the task at any one time. This type of crowdsourcing requires two participants to connect within a few minutes of each other to be partnered together. As mentioned above, there were some issues with participants not collaborating and these dialogues had to be discarded as they were not of use.
|
Is CRWIZ already used for data collection, what are the results?
|
Yes, CRWIZ has been used for data collection and its initial use resulted in 145 dialogues. The average time taken for the task was close to the estimate of 10 minutes, 14 dialogues (9.66%) resolved the emergency in the scenario, and these dialogues rated consistently higher in subjective and objective ratings than those which did not resolve the emergency. Qualitative results showed that participants believed that they were interacting with an automated assistant.
|
null | false
| null |
Born in Domont, France, Djidonou has played club football for Racing Paris, Entente SSG, Red Star, Libourne, Racing Paris and Mulhouse. In June 2013, he signed for Championnat National 2 side Romorantin.
|
Extract the name of the country where Djidonou was born.
|
France
|
null | false
| 349
|
To enhance annotation efficiency, we present a web-based application for caption annotation. Fig. FIGREF10 is the annotation screen of the application.
Our tool helps annotators conveniently load images into a display and store the captions they create in a new dataset. With the saving function, annotators can save and load written captions for review. Furthermore, users can look back at their own work or that of others by searching for images by image id.
The tool also supports content suggestions that take advantage of existing information from MS-COCO. First, there are category hints for each image, displayed as friendly icons. Second, the original English captions are displayed if the annotator needs them. These content suggestions are helpful for annotators who cannot clearly understand an image, especially when there are issues with image quality.
To enhance annotation efficiency, we present a web-based application for caption annotation. Fig. 2 is the annotation screen of the application.
|
What is used to enhance annotation efficiency?
|
They present a web-based application for caption annotation.
|
null | false
| null |
In domestic cats, many signals that occur when interacting with humans seem to originate from when the animal was dependent on the mother. Cats have been observed to purr for most of their lifespan, starting from when they were young and suckling from their mother. Purring may have developed as an evolutionary advantage as a signaling mechanism of reassurance between mother cats and nursing kittens. Post-nursing cats often purr as a sign of contentment: when being petted, becoming relaxed, or eating. Some purring may be a signal to another animal that the purring cat does not pose a threat.
|
Why do cats purr?
|
Cats purr for a variety of reasons. Purring can be a sign of contentment or comfort, for example when being pet, while relaxing or eating. It is thought that purring may have developed as an evolutionary trait signaling reassurance between the mother cat and its nursing kittens.
|
null | false
| null |
Do RVs have bathrooms?
|
Whether or not RVs have bathrooms, and the type of bathroom, depends on the RV. Most larger RVs have a “dry bath”. “Dry” means that there is a stand-alone shower. Some showers in dry baths are very small, and some are the size of residential showers. Many smaller RVs have “wet baths”, meaning there is no separate shower. There is a shower nozzle in the bathroom space, and when taking a shower everything in the space gets wet (including the toilet, shelves, walls, etc). Wet baths are usually carefully designed so that everything in the bathroom can get wet without damage. Some RVs do not have bathrooms, but have toilets that are hidden in a cabinet and slide out. This is more common in DIY conversions.
|
|
null | false
| null |
Prokaryotic ribosomes are around 20 nm (200 Å) in diameter and are composed of 65% rRNA and 35% ribosomal proteins. Eukaryotic ribosomes are between 25 and 30 nm (250–300 Å) in diameter with an rRNA-to-protein ratio that is close to 1. Crystallographic work has shown that there are no ribosomal proteins close to the reaction site for polypeptide synthesis. This suggests that the protein components of ribosomes do not directly participate in peptide bond formation catalysis, but rather that these proteins act as a scaffold that may enhance the ability of rRNA to synthesize protein (See: Ribozyme).
|
From the passage provided, extract the diameter of eukaryotic ribosomes.
|
Eukaryotic ribosomes are between 25 and 30 nm (250–300 Å) in diameter.
|
null | false
| null |
Since the eyes are believed to be expressive, the eyes portrayed in Maitum jars are given high regard in describing the dead. The eyes are classified into four major shapes: (1) almond, (2) ovaloid, (3) round and (4) rectangular.
|
Find the four eye shapes on Maitum anthropomorphic pottery and separate them with a comma
|
almond, ovaloid, round, rectangular
|
null | false
| null |
What is a medallion architecture in data engineering?
|
A medallion architecture is a data engineering design pattern. Source-system or system-of-record data is first landed in a bronze layer; bronze data is not changed and should match the source system exactly, even if it has data quality issues. In the silver layer, the raw data from bronze is transformed, joined, and cleaned with data quality rules applied. Finally, the gold layer contains produced datasets: silver datasets joined together and enriched with other data, such as machine learning model output. These gold datasets are what the business consumes in dashboards and reporting.
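As a rough illustration, here is a minimal PySpark sketch of that bronze → silver → gold flow. Table names, paths, columns, and the Delta format are assumptions for the example, not a prescribed implementation.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion").getOrCreate()

# Bronze: land raw source data unchanged, even if it has quality issues.
raw = spark.read.json("/landing/orders/")  # hypothetical source path
raw.write.format("delta").mode("append").saveAsTable("bronze.orders")

# Silver: transform and clean bronze data, with quality rules applied.
silver = (spark.table("bronze.orders")
               .dropDuplicates(["order_id"])
               .filter(F.col("order_total").isNotNull()))
silver.write.format("delta").mode("overwrite").saveAsTable("silver.orders")

# Gold: business-ready datasets, e.g. silver joined with ML model output.
scores = spark.table("silver.churn_scores")  # hypothetical model-output table
gold = (silver.join(scores, "customer_id")
              .groupBy("customer_id")
              .agg(F.sum("order_total").alias("lifetime_value"),
                   F.max("churn_probability").alias("churn_risk")))
gold.write.format("delta").mode("overwrite").saveAsTable("gold.customer_value")
```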
|
|
null | false
| null |
Give me a list of the world's most popular leaders at the moment.
|
As per the "Morning Consult," here are the top five most popular leaders, arranged by percentage of popularity.
1. Narendra Modi (India)
2. Andrés Manuel López Obrador (Mexico)
3. Anthony Albanese (Australia)
4. Alain Berset (Switzerland)
5. Luiz Inácio Lula da Silva (Brazil)
|
|
null | false
| null |
Tell me whether these pokemon are electric or grass type: Pikachu, Bulbasaur, Zapdos, Voltorb, Oddish, Bellsprout
|
Pikachu: Electric
Bulbasaur: Grass
Zapdos: Electric
Voltorb: Electric
Oddish: Grass
Bellsprout: Grass
|
|
null | false
| null |
what is one good example on a hackathon project for a middle school student for programming?
|
After months of lockdowns and social distancing, schools have finally reopened for in-person classes, and parents are thrilled to attend their children's concerts and performances once again. Many parents missed these events and are eager to support their children's artistic pursuits. However, many parents quickly realize that they don't know much about classical music, which makes it difficult for them to fully appreciate a performance.
As a kid who loves music, I always feel so proud and happy when my parents attend my concerts and performances. However, I often wonder if they are truly able to enjoy the music as much as I do, especially since they may not know much about the background and context of the pieces that are being played.
I believe that knowing the background of the music can make a huge difference in how much someone enjoys it. When you understand the story and history behind a piece of music, it becomes more than just a series of notes and melodies; it becomes a powerful expression of emotion and meaning. Knowing the background of the music can also help us appreciate and connect with it on a deeper level, allowing us to use it as a powerful tool for stress relief and relaxation. By actively listening and understanding the expressive qualities of the music, we can harness its potential to enhance our mood and promote emotional wellbeing. This is why a good hackathon project for a middle school student could be a simple app that gives concert audiences the story and background of each piece on the program.
|
|
null | false
| null |
Why you need to be honest?
|
Because you should not cheat anyone, and being honest will give you a good life.
|
|
null | false
| null |
Can you provide a list of foods that are both high in protein and low in fat?
|
Certainly! Here is a list of foods, ranked by their protein-fat ratio from the USDA, which can be a great source of protein while helping you maintain a fit and healthy body:
1. Shrimp: 80
2. Egg white: 55
3. Nonfat Milk: 34
4. Tuna: 21.5
5. Rabbit meat: 9.4
6. Chicken breast: 8.6
However, it is important to keep in mind that healthy fats are also essential in a balanced diet. Incorporating sources of healthy fats like fatty fish, nuts, and seeds is important for overall health and well-being.
|
|
null | false
| null |
Since the 1980s, while Colnago continued to produce high-end steel bikes, they began to produce cycle frames using material other than steel, including titanium, aluminum, carbon fiber and mixed materials. One unique frame from this period, the Bititan, has a dual titanium down tube. Crimped and oversize tubes were used on the Tecnos–one of the lightest production steel bikes produced. Similarly crimped oversized aluminum tubes were used on the Dream frame. In 1981 Colnago prototyped the CX Pista–a full monocoque carbon fiber cycle with disc wheels, which was shown at the Milan bike show. Subsequently, Colnago worked with Ferrari to develop new carbon fiber technology, and Ernesto also credits their engineers for challenging him regarding fork design, which led to Colnago's innovative Precisa straight-bladed steel fork (1987). Colnago also experimented with multi-material frames, including the CT-1 and CT-2 constructed with titanium main tubes, carbon fiber forks and rear stays, and a similarly constructed (although short-lived) Master frame constructed with steel main tubes, carbon forks and stays.
|
Given this paragraph about bicycle manufacturer Colnago, what Italian automobile company did Colnago work with to develop new carbon fiber technology?
|
Colnago worked with Ferrari to develop new carbon fiber technology.
|
null | false
| null |
The Goa Civil Code, also called the Goa Family Law, is the set of civil laws that governs the residents of the Indian state of Goa. The Goan civil code was introduced after Portuguese Goa and Damaon were elevated from being mere Portuguese colonies to the status of a Província Ultramarina (overseas possession) in 1869 AD. The Goan civil code is an Indianised variant of the Portuguese legal system that draws largely from the Code Napoléon, a common legal system in a number of Continental European nations. Indian law mostly derives from English common law that was formulated and applied in British India, and remains pegged to developments in the "Charter of the British Commonwealth". With a number of amendments post the Partition of India, Indian laws as a whole have religion-specific civil codes that separately govern adherents of different religions (like the Muslim and Hindu personal laws), and also have caste reservations. Goa and Damaon are an exception to that rule, in that a single code governs all the native Goans and the native Damanese of Damaon, Diu & Silvassa, irrespective of affiliation to religion, ethnicity or social strata. The English translation of the civil code is available on the Government of Goa's e-Gazette dated 19/10/2018.
|
When was the Goan civil code established?
|
It was established in 1869 AD, after Portuguese Goa and Damaon were elevated from being mere colonies to the status of a Província Ultramarina (overseas possession).
|
null | false
| null |
The South Western Mail was introduced as a twice-weekly service in 1910. Upon the opening of the line to Dirranbandi, the train departed Brisbane at 20:45 and arrived at Warwick at 04:00, Goondiwindi at 10:00 and Dirranbandi at 16:30. The return service departed at 11:30 and arrived at Goondiwindi at 17:35, Warwick at 23:35 and Brisbane at 07:10 the following morning.
|
Please extract the stops of the South Western Mail in order in a list delimited by underscores and exclude return service.
|
Brisbane_Warwick_Goondiwindi_Dirranbandi
|
null | false
| 83
|
Disclaimer: Due to the nature of the paper, some examples contain highly offensive language and hate speech. They don't reflect the views of the authors in any way, and the point of the paper is to help fight such speech. Much recent interest has focused on the detection of offensive language and hate speech in online social media. Such language is often associated with undesirable online behaviors such as trolling, cyberbullying, online extremism, political polarization, and propaganda. Thus, offensive language detection is instrumental for a variety of applications such as: quantifying polarization BIBREF0, BIBREF1, troll and propaganda account detection BIBREF2, detecting the likelihood of hate crimes BIBREF3; and predicting conflict BIBREF4. In this paper, we describe our methodology for building a large dataset of Arabic offensive tweets. Given that roughly 1-2% of all Arabic tweets are offensive BIBREF5, targeted annotation is essential for efficiently building a large dataset. Since our methodology does not use a seed list of offensive words, it is not biased by topic, target, or dialect. Using our methodology, we tagged a dataset of 10,000 Arabic tweets for offensiveness, where offensive tweets account for roughly 19% of the tweets. Further, we labeled tweets as vulgar or hate speech. To date, this is the largest available dataset, which we plan to make publicly available along with annotation guidelines. We use this dataset to characterize Arabic offensive language to ascertain the topics, dialects, and users' gender that are most associated with the use of offensive language. Though we suspect that there are common features that span different languages and cultures, some characteristics of Arabic offensive language are language- and culture-specific. Thus, we conduct a thorough analysis of how Arabic users use offensive language. Next, we use the dataset to train strong Arabic offensive language classifiers using state-of-the-art representations and classification techniques. Specifically, we experiment with static and contextualized embeddings for representation along with a variety of classifiers such as a deep neural network classifier and Support Vector Machine (SVM).
The contributions of this paper are as follows:
We built the largest Arabic offensive language dataset to date that includes special tags for vulgar language and hate speech. We describe the methodology for building it along with annotation guidelines.
We performed a thorough analysis of the dataset and described the peculiarities of Arabic offensive language.
We experimented with Support Vector Machine classifiers using character and word n-gram classification techniques to provide strong results on Arabic offensive language classification.
Using our methodology, we tagged a dataset of 10,000 Arabic tweets for offensiveness, where offensive tweets account for roughly 19% of the tweets.
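As an illustration of the last contribution, a character/word n-gram SVM could be sketched with scikit-learn as below; the n-gram ranges and the LinearSVC choice are assumptions, since the excerpt does not give the exact settings.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import FeatureUnion, Pipeline
from sklearn.svm import LinearSVC

model = Pipeline([
    ("features", FeatureUnion([
        ("word_ngrams", TfidfVectorizer(analyzer="word", ngram_range=(1, 3))),
        ("char_ngrams", TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 5))),
    ])),
    ("svm", LinearSVC()),
])

# tweets, labels = load_annotated_tweets()  # hypothetical loader for the 10k dataset
# model.fit(tweets, labels)
```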
|
What is the percentage of offensive tweets in the authors' tagged tweets?
|
The offensive tweets account for roughly 19%.
|
null | false
| null |
I have a question regarding git. I first have a master branch, based on which i created a working branch called feature; we normally send pull request to merge the changes in feature branch to master branch. But then the team has renamed master branch to main, how should i deal with my feature branch, in order to sync with the new main branch and later pull request to merge the changes in feature to main branch instead of master branch.
|
Here are some suggested steps:
1. Fetch the renamed branch and point your local repository at it (rename your local “master” if you have one):
git fetch origin
git branch -m master main
git branch -u origin/main main
git remote set-head origin -a
2. Switch to your local “feature” branch (create it from the remote branch if you don't have it locally):
git checkout feature
(or: git checkout -b feature origin/feature)
3. Merge the local “main” branch into “feature” and resolve any conflicts, so that feature is in sync with main:
git merge main
4. Commit the merge and push to the remote “feature” branch:
git push origin feature
5. Create a pull request to merge the remote “feature” branch into the remote “main” branch (most hosting services also let you change an existing PR's base branch from “master” to “main”).
|
|
null | false
| null |
Please describe a fantasy animal
|
A unicorn: a white, horse-like animal with a horn on its forehead.
|
|
null | false
| null |
Is it football or soccer that they play in the Netherlands?
|
In the Netherlands they play football.
|
|
null | false
| null |
Tell me which of the following have been known to attack humans: mouse, goldfish, bears, mugs, cactus, moss, nudibranches, corals, hippos, wolves.
|
bears, hippos, wolves
|
|
null | false
| null |
Who was the first person to do a spacewalk?
|
Alexei Leonov, a Soviet cosmonaut, was the first person to perform a spacewalk, in 1965.
|
|
null | false
| null |
What is the difference between Kuchipudi and Kathak dance forms of India?
|
Though both are Indian classical dance forms, the difference lies in their places of origin, the associated music, and the mudras (body movements).
Kuchipudi originated in the South Indian region of Andhra and is performed to Carnatic music; its movements can involve bending the knee.
Kathak originated in the North Indian regions of UP and Bihar and is performed to Hindustani gharana music; its movements are fast and involve many rotations, but the knee is not bent.
|
|
null | false
| null |
What are some ways in which I can use a pen for other than writing?
|
Here are some ways in which you could use a pen for anything other than writing:
- you could use it to punch holes into bubble wrap
- you could use it as a weapon
- you could use a hollow pen as a straw
- you could use it as a cover for pointed objects
- you could use it for practicing pen spinning
- you could use the nib as a toothpick
- you could use the nib to open the SIM card tray on your phone
|
|
null | false
| 369
|
Stylometry is defined as "the statistical analysis of variations in literary style between one writer or genre and another". It is a centuries-old practice, dating back to the early Renaissance. It is most often used to attribute authorship to disputed or anonymous documents. Stylometry techniques have also been successfully applied to other, non-linguistic fields, such as paintings and music. The main principles of stylometry were compiled and laid out by the philosopher Wincenty Lutosławski in 1890 in his work "Principes de stylométrie" BIBREF0 .
Today, there are millions of users with accounts and profiles on many different social media and networking sites. It is not uncommon for users to have multiple accounts on different social media and networking sites. With so many networking, emailing, and photo sharing sites on the Web, a user often accumulates an abundance of account profiles. There is an increasing focus from the academic and business worlds on aggregating user information across different sites, allowing for the development of more complete user profiles. There currently exist several businesses that focus on this task BIBREF1 , BIBREF2 , BIBREF3 . These businesses use the aggregate profiles for advertising, background checks or customer service related tasks. Moreover, profile matching across social networks, can assist the growing field of social media rumor detection BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , since many malicious rumors are spread on different social media platforms by the same people, using different accounts and usernames.
Motivated by traditional stylometry and the growing interest in matching user accounts across Internet services, we created models for Digital Stylometry, which fuses traditional stylometry techniques with big-data driven social informatics methods used commonly in analyzing social networks. Our models use linguistic and temporal activity patterns of users on different accounts to match accounts belonging to the same person. We evaluated our models on $11,224$ accounts belonging to $5,612$ distinct users on two of the largest social media networks, Twitter and Facebook. The only information that was used in our models was the time and the linguistic content of posts by the users. We intentionally did not use any other information, especially the potentially personally identifiable information that was explicitly provided by the user, such as the screen name, birthday or location. This is in accordance with traditional stylometry techniques, since people could misstate, omit, or lie about this information. Also, we wanted to show that there are implicit clues about the identities of users in the content (language) and context (time) of the users' interactions with social networks that can be used to link their accounts across different services.
Other than the obvious technical goal, the purpose of this paper is to shed light on the relative ease with which seemingly innocuous information can be used to track users across social networks, even when signing up on different services using completely different account and profile information (such as name and birthday). This paper is as much of a technical contribution, as it is a warning to users who increasingly share a large part of their private lives on these services.
The rest of this paper is structured as follows. In the next sections we will review related work on linking profiles, followed by a description of our data collection and annotation efforts. After that, we discuss the linguistic, temporal and combined temporal-linguistic models developed for linking user profiles. Finally, we discuss and summarize our findings and contributions and discuss possible paths for future work.
Stylometry is defined as, ”the statistical analysis of variations in literary style between one writer or genre and another”.
|
What is Stylometry?
|
The statistical analysis of variations in literary style between one writer or genre and another.
|
null | false
| 208
|
Neural machine translation has attracted much attention in recent years BIBREF0, BIBREF1, BIBREF2, BIBREF3. Given a sentence $x=(x_1, \dots ,x_{T_x})$ from the source language, the straightforward way to translate is to generate the target words $y=(y_1, \dots , y_{T_y})$ one by one from left to right. This is also known as the AutoRegressive Translation (ART) models, in which the joint probability is decomposed into a chain of conditional probabilities: $P(y|x)=\prod_{t=1}^{T_y} P(y_t \mid y_{<t}, x)$.
While the ART models have achieved great success in terms of translation quality, the time consumption during inference is still far from satisfactory. During training, the predictions at different positions can be estimated in parallel since the ground truth pair $(x,y)$ is exposed to the model. However, during inference, the model has to generate tokens sequentially as $y_{<t}$ must be inferred on the fly. Such autoregressive behavior becomes the bottleneck of the computational time BIBREF4.
In order to speed up the inference process, a line of work has begun to develop non-autoregressive translation (NART) models. These models break the autoregressive dependency by decomposing the joint probability as $P(y|x)=\prod_{t=1}^{T_y} P(y_t \mid x)$.
The loss of autoregressive dependency largely hurts the consistency of the output sentences, increases the difficulty of the learning process, and thus leads to low-quality translations. Previous works mainly focus on adding different components to the NART model to improve the expressiveness of the network structure to overcome the loss of autoregressive dependency BIBREF5, BIBREF6, BIBREF7. However, the computational overhead of new components hurts the inference speed, contradicting the goal of NART models: to parallelize and speed up neural machine translation.
To tackle this, we propose a novel hint-based method for NART model training. We first investigate the causes of the poor performance of the NART model. Comparing with the ART model, we find that: (1) the positions where the NART model outputs incoherent tokens have very high hidden-state similarity; (2) the attention distributions of the NART model are more ambiguous than those of the ART model. Therefore, we design two kinds of hints from the hidden states and attention distributions of the ART model to help the training of the NART model. The experimental results show that our model achieves significant improvement over the NART baseline models and is even comparable to a strong ART baseline in BIBREF4.
We first investigate the causes of the poor performance of the NART model.
|
What do the authors first investigate in the paper?
|
The causes of the poor performance of the NART model.
|
null | false
| null |
How to make a cup of spiced chai?
|
Among all the ingredients, the most important element is cardamom. Of course, you cannot skimp on loose black tea leaves either!
|
|
null | false
| null |
Identify which instrument is string or woodwind: Wheelharp, Clarinet
|
Wheelharp is string, Clarinet is woodwind.
|
|
null | false
| 295
|
The AUD model described in BIBREF0 , BIBREF1 is a non-parametric Bayesian Hidden Markov Model (HMM). This model is topologically equivalent to a phone-loop model with two major differences:
In this work, we have used two variants of this original model. The first one (called the HMM model in the remainder of this paper), following the analysis led in BIBREF8 , approximates the Dirichlet Process prior by a mere symmetric Dirichlet prior. This approximation, while retaining the sparsity constraint, avoids the complication of dealing with the variational treatment of the stick-breaking process frequent in Bayesian non-parametric models. The second variant, which we shall denote Structured Variational AutoEncoder (SVAE) AUD, is based upon the work of BIBREF4 and embeds the HMM model into the Variational AutoEncoder framework BIBREF9 . A very similar version of the SVAE for AUD was developed independently and presented in BIBREF5 . The main noteworthy difference between BIBREF5 and our model is that we consider a fully Bayesian version of the HMM embedded in the VAE; the posterior distribution and the VAE parameters are trained jointly using Stochastic Variational Bayes BIBREF4 , BIBREF10 . For both variants, the prior over the HMM parameters was set to the conjugate of the likelihood density: a Normal-Gamma prior for the mean and variance of the Gaussian components, a symmetric Dirichlet prior over the HMM states' mixture weights, and a symmetric Dirichlet prior over the acoustic units' weights. For the uninformative case, the prior was set to a vague prior with one pseudo-observation BIBREF11 .
This model is topologically equivalent to a phone-loop model with two major differences:
• since it is trained in an unsupervised fashion, the elements of the loop cannot directly be interpreted as the actual phones of the target language but rather as acoustic units (defined as 3-state left-to-right sub-HMMs) whose time scale approximately corresponds to the phonetic time scale.
• to cope with the unknown number of acoustic units needed to properly describe speech, the model assumes a theoretically infinite number of potential acoustic units.
|
What are the differences between the phone-loop model and the AUD model?
|
This model is topologically equivalent to a phone-loop model with two major differences:
• since it is trained in an unsupervised fashion, the elements of the loop cannot directly be interpreted as the actual phones of the target language but rather as acoustic units (defined as 3-state left-to-right sub-HMMs) whose time scale approximately corresponds to the phonetic time scale.
• to cope with the unknown number of acoustic units needed to properly describe speech, the model assumes a theoretically infinite number of potential acoustic units.
|
null | false
| null |
Arnold Boate, originally called de Boot (1606–1653) was a Dutch physician, writer and Hebraist who spent much of his life abroad, and lived for several years in Dublin. There he married Margaret Dongan, a judge's daughter, whom he portrayed lovingly in his book The Character of a Truly Virtuous and Pious Woman. He was the brother of Gerard Boate, author of The Natural History of Ireland, for which Arnold supplied much of the material. Both Gerard and Arnold were members of the Hartlib circle.
|
Given the paragraph summarizing the life of Arnold Boate, list the countries in which he lived.
|
Arnold Boate was born in the Netherlands and spent much of his life abroad, most notably in Ireland.
|
null | false
| 49
|
Our systems are attentional encoder-decoder networks BIBREF0 . We base our implementation on the dl4mt-tutorial, which we enhanced with new features such as ensemble decoding and pervasive dropout.
We use minibatches of size 80, a maximum sentence length of 50, word embeddings of size 500, and hidden layers of size 1024. We clip the gradient norm to 1.0 BIBREF4 . We train the models with Adadelta BIBREF5 , reshuffling the training corpus between epochs. We validate the model every 10000 minibatches via BLEU on a validation set (newstest2013, newstest2014, or half of newsdev2016 for EN↔RO). We perform early stopping for single models, and use the last 4 saved models (with models saved every 30000 minibatches) for the ensemble results. Note that ensemble scores are the result of a single training run. Due to resource limitations, we did not train ensemble components independently, which could result in more diverse models and better ensembles.
Decoding is performed with beam search with a beam size of 12. For some language pairs, we used the AmuNMT C++ decoder as a more efficient alternative to the theano implementation of the dl4mt tutorial.
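For reference, the training and decoding settings above can be collected in one place. A minimal sketch (the dictionary layout and key names are ours, not the actual dl4mt configuration format):

```python
# Hyperparameters as reported in the text; the key names are illustrative.
nmt_config = {
    "batch_size": 80,            # minibatch size
    "max_sentence_length": 50,   # maximum training sentence length
    "embedding_dim": 500,        # word embedding size
    "hidden_dim": 1024,          # recurrent hidden layer size
    "grad_clip_norm": 1.0,       # gradient norm clipping threshold
    "optimizer": "adadelta",
    "validate_every": 10_000,    # minibatches between BLEU validations
    "save_every": 30_000,        # minibatches between checkpoints
    "ensemble_size": 4,          # last N checkpoints used for ensembling
    "beam_size": 12,             # beam search width at decoding time
}
```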
We validate the model every 10 000 minibatches via BLEU on a validation set (newstest2013, newstest2014, or half of newsdev2016 for EN↔RO).
|
What metric is used to validate the model?
|
BLEU.
|
1911.03514
| true
| null |
The biggest difference between discourse parsing for well-written documents and for dialogues is that, in dialogues, discourse relations can hold between two nonadjacent utterances. When we annotate dialogues, we read each dialogue from beginning to end. For each utterance, we find at least one parent node among all its previous utterances. We assume that the discourse structure is a connected graph and that no utterance is isolated.
We propose three questions for each dialogue and annotate the span of each answer in the input dialogue. To the best of our knowledge, our dataset is the first corpus for multi-party dialogue reading comprehension.
We construct the following questions and answers for the dialogue in Example 1:
Q1: When does Bdale leave?
A1: Fri morning
Q2: How to get people to love Mark, in Mjg59's opinion?
A2: Hire people to work on reverse-engineering closed drivers.
On the other hand, to increase the difficulty of the task, we make $ \frac{1}{6}$ to $ \frac{1}{3}$ of the questions in our dataset unanswerable. We annotate unanswerable questions and their plausible answers (PA). Each plausible answer comes from the input dialogue, but is not a correct answer to the question.
Q1: What is the email of daniels?
PA: +61 403 505 896
When we annotate dialogues, we read each dialogue from beginning to end. For each utterance, we find at least one parent node among all its previous utterances. We assume that the discourse structure is a connected graph and that no utterance is isolated.
We propose three questions for each dialogue and annotate the span of each answer in the input dialogue. To the best of our knowledge, our dataset is the first corpus for multi-party dialogue reading comprehension.
We construct the following questions and answers for the dialogue in Example 1:
Q1: When does Bdale leave?
A1: Fri morning
Q2: How to get people to love Mark, in Mjg59's opinion?
A2: Hire people to work on reverse-engineering closed drivers.
On the other hand, to increase the difficulty of the task, we make $ \frac{1}{6}$ to $ \frac{1}{3}$ of the questions in our dataset unanswerable. We annotate unanswerable questions and their plausible answers (PA). Each plausible answer comes from the input dialogue, but is not a correct answer to the question.
Q1: What is the email of daniels?
PA: +61 403 505 896
|
Is annotation done manually?
|
Yes.
|
null | false
| null |
In computing, a materialized view is a database object that contains the results of a query. For example, it may be a local copy of data located remotely, or may be a subset of the rows and/or columns of a table or join result, or may be a summary using an aggregate function.
|
What is Materialized View?
|
In databases, a materialized view (MV) is a database object that stores the results of a query. It can be a copy of data from a remote location, a subset of rows and/or columns from a table or join result, or a summary using an aggregate function. MVs are typically used to speed up frequent queries over base tables.
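As an illustrative sketch, assuming PostgreSQL and the psycopg2 driver (the `orders` table and `daily_sales_mv` view are hypothetical examples), a materialized view can be created once and refreshed on demand:

```python
# A sketch under the assumptions above; not tied to any particular system.
import psycopg2

conn = psycopg2.connect("dbname=shop user=app")
with conn, conn.cursor() as cur:
    # Materialize an aggregate query once, so frequent reads become cheap.
    cur.execute("""
        CREATE MATERIALIZED VIEW IF NOT EXISTS daily_sales_mv AS
        SELECT order_date, SUM(amount) AS total_sales
        FROM orders
        GROUP BY order_date
    """)
    # Unlike a plain view, the stored results go stale; refresh on demand.
    cur.execute("REFRESH MATERIALIZED VIEW daily_sales_mv")
conn.close()
```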
|
null | false
| 340
|
* Equal contribution. Listing order is random.
Lemmatization is a core NLP task that involves a string-to-string transduction from an inflected word form to its citation form, known as the lemma. More concretely, consider the English sentence: The bulls are running in Pamplona. A lemmatizer will seek to map each word to a form you may find in a dictionary—for instance, mapping running to run. This linguistic normalization is important in several downstream NLP applications, especially for highly inflected languages. Lemmatization has previously been shown to improve recall for information retrieval BIBREF0 , BIBREF1 , to aid machine translation BIBREF2 , BIBREF3 and is a core part of modern parsing systems BIBREF4 , BIBREF5 .
However, the task is quite nuanced as the proper choice of the lemma is context dependent. For instance, in the sentence A running of the bulls took place in Pamplona, the word running is its own lemma, since, here, running is a noun rather than an inflected verb. Several counter-examples exist to this trend, as discussed in depth in haspelmath2013understanding. Thus, a good lemmatizer must make use of some representation of each word's sentential context. The research question in this work is, then, how do we design a lemmatization model that best extracts the morpho-syntax from the sentential context?
Recent work BIBREF7 has presented a system that directly summarizes the sentential context using a recurrent neural network to decide how to lemmatize. As N18-1126's system currently achieves state-of-the-art results, it must implicitly learn a contextual representation that encodes the necessary morpho-syntax, as such knowledge is requisite for the task. We contend, however, that rather than expecting the network to implicitly learn some notion of morpho-syntax, it is better to explicitly train a joint model to morphologically disambiguate and lemmatize. Indeed, to this end, we introduce a joint model for the introduction of morphology into a neural lemmatizer. A key feature of our model is its simplicity: Our contribution is to show how to stitch existing models together into a joint model, explaining how to train and decode the model. However, despite the model's simplicity, it still achieves a significant improvement over the state of the art on our target task: lemmatization.
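One natural way to write down such a joint model (our own sketch; the paper's exact parameterization may differ) is to factor lemmatization through the morphological tag:

```latex
% w: inflected word form; c: its sentential context;
% m: morphological tag; \ell: lemma.
p(\ell, m \mid w, c)
  = \underbrace{p(\ell \mid m, w)}_{\text{lemmatizer}}
    \,\underbrace{p(m \mid w, c)}_{\text{morphological tagger}}
```

Decoding then searches for the pair $(\ell, m)$ that maximizes this joint probability, so the lemmatizer is explicitly conditioned on the disambiguated morphology.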
Experimentally, our contributions are threefold. First, we show that our joint model achieves state-of-the-art results, outperforming (on average) all competing approaches on a 20-language subset of the Universal Dependencies (UD) corpora BIBREF8 . Second, by providing the joint model with gold morphological tags, we demonstrate that we are far from achieving the upper bound on performance—improvements on morphological tagging could lead to substantially better lemmatization. Finally, we provide a detailed error analysis indicating when and why morphological analysis helps lemmatization. We offer two tangible recommendations: one is better off using a joint model (i) for languages with less training data available and (ii) for languages that have richer morphology.
Our system and pre-trained models on all languages in the latest version of the UD corpora are released at https://sigmorphon.github.io/sharedtasks/2019/task2/.
We contend, however, that rather than expecting the network to implicitly learn some notion of morpho-syntax, it is better to explicitly train a joint model to morphologically disambiguate and lemmatize. Indeed, to this end, we introduce a joint model for the introduction of morphology into a neural lemmatizer.
|
Why do the authors introduce a joint model?
|
The authors introduce a joint model for the introduction of morphology into a neural lemmatizer.
|
null | false
| null |
What is the best watch in the world ?
|
No one watch qualifies as the best; it depends on an individual's preferences. Broadly, watches can be classified into quartz watches (digital or analog) and mechanical watches. Mechanical watches can be further categorized into automatic and manual-wind.
Digital quartz watches use an electronic circuit to keep track of time. They are generally mass-produced and inexpensive, but highly accurate at keeping time.
Mechanical automatic watches, on the other hand, typically use a rotor, or metal weight, to power the timepiece and keep time. The rotor is wound by the movement of the wearer's wrist. Premium watches are handmade and can be quite expensive. They are also the least accurate at keeping time. Nevertheless, watch enthusiasts still love and collect these watches.
Analog quartz watches use a battery-powered quartz crystal to keep track of time. Like digital watches, they are inexpensive and very accurate. Manual-wind mechanical watches, by contrast, are powered by a mainspring that the wearer winds by hand.
So the best watch for you depends on what you are looking for. If you have a tight budget and care about accurate timekeeping, then a quartz watch (digital or analog) might work well. On the other hand, if you are a watch enthusiast, then a mechanical automatic watch is a great buy.
|
|
null | false
| null |
Write a friendly, convincing e-mail to my landlord asking whether I can get a dog.
|
Dear landlord,
I hope you've had a great start to your day and all is well! I've really enjoyed living in your building and using its amenities. In my time residing here, I've been a clean and respectful tenant. I'm very interested in getting a dog and would like to stay in your building; however, I wanted to check with you first whether the building is dog friendly. If so, are there any restrictions, such as size or breed? Also, are there any additional costs or changes to my lease I should expect?
I look forward to hearing from you and I hope we can make this work!
Best,
Tenant
|
|
null | false
| null |
Homelessness in the San Francisco Bay Area
One of the primary causes of homelessness in the Bay Area is an insufficient supply of affordable housing. Mass homelessness has several contributing factors, including: "Economic Dislocation", "Reduced Social Safety Nets", "Failed Housing Policy", "Mass Incarceration", "Family Instability", and other "Individual Causes" including mental health and physical wellness. Reasons cited for homelessness in the 2019 survey commissioned by the City of San Francisco include job loss (26%), alcohol/drug use (18%), eviction (13%), argument/asked to leave by friend/family (12%), mental health issues (8%), and divorce/separation (5%). 70% of people homeless in San Francisco in 2019 reported most recently becoming homeless while living in San Francisco: 22% came from another county within California, and 8% came from another state. However, of the 70% who had become homeless while living in San Francisco, 45% had only been in San Francisco for ten years or less, and 6% had only been in San Francisco for one year or less. Reasons for coming from outside San Francisco at the time of homelessness include seeking a job (25%), LGBTQ acceptance (11%), accessing homeless services (22%), was visiting and decided to stay (17%), accessing VA services or clinic (5%), and family/friends are here (13%).
|
Based on the reference text, what were the main reasons homeless people from outside of San Francisco went to San Francisco?
|
The reasons homeless people not from San Francisco went to San Francisco include:
1. Seeking a job
2. LGBTQ acceptance
3. Accessing homeless services
4. Was visiting and decided to stay
5. Family or friends live in San Francisco
6. Accessing VA services or clinic
|
null | false
| null |
The Bjorck Family (Swedish: Familjen Björck) is a 1940 Swedish comedy film directed by Anders Henrikson and starring Olof Winnerstrand, Frida Winnerstrand and Birgitta Arman. It was shot at the Centrumateljéerna Studios in Stockholm. The film's sets were designed by the art director Arthur Spjuth. It was based on a radio show of the same name that ran from 1936 to 1943.
|
Can you please show some details regarding the comedy film The Bjorck Family, using the given text as a reference?
|
1. The 1940 Swedish comedy film The Bjorck Family, also known as Familjen Björck, was directed by Anders Henrikson and starred Olof Winnerstrand, Frida Winnerstrand, and Birgitta Arman.
2. It was filmed in Stockholm at Centrumateljéerna Studios.
3. Arthur Spjuth, the film's art director, created the sets.
4. It was based on the 1936–1943 radio programme of the same name.
|
null | false
| null |
What is the main difference between ORC and Parquet?
|
Both file formats are columnar, with similar support for encoding and compression. Their key difference is in how nested data is represented. In Parquet, nested data is fully shredded into columns, with additional metadata to reconstruct the nesting structure. In ORC, nested structures are not decomposed into columns; rather, the nested value is stored as a regular column value.
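A small sketch of writing the same nested record to both formats with PyArrow (assuming a `pyarrow` build with ORC support; the schema is a made-up example):

```python
import pyarrow as pa
import pyarrow.parquet as pq
import pyarrow.orc as orc

# A column of nested (struct) values: each row holds a {city, zip} record.
addresses = pa.array(
    [{"city": "Boston", "zip": "02101"}, {"city": "Austin", "zip": "73301"}],
    type=pa.struct([("city", pa.string()), ("zip", pa.string())]),
)
table = pa.table({"name": ["ann", "bob"], "address": addresses})

# Parquet shreds `address` into separate city/zip columns, plus
# repetition/definition metadata to rebuild the nesting on read.
pq.write_table(table, "people.parquet")

# ORC keeps `address` as a nested column value rather than fully
# decomposing it in the same way.
orc.write_table(table, "people.orc")
```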
|
|
null | false
| null |
Formal discussions on inference date back to Arab mathematicians and cryptographers, during the Islamic Golden Age between the 8th and 13th centuries. Al-Khalil (717–786) wrote the Book of Cryptographic Messages, which contains one of the first uses of permutations and combinations, to list all possible Arabic words with and without vowels. Al-Kindi's Manuscript on Deciphering Cryptographic Messages gave a detailed description of how to use frequency analysis to decipher encrypted messages, providing an early example of statistical inference for decoding. Ibn Adlan (1187–1268) later made an important contribution on the use of sample size in frequency analysis.
The earliest writing containing statistics in Europe dates back to 1663, with the publication of Natural and Political Observations upon the Bills of Mortality by John Graunt. Early applications of statistical thinking revolved around the needs of states to base policy on demographic and economic data, hence its stat- etymology. The scope of the discipline of statistics broadened in the early 19th century to include the collection and analysis of data in general. Today, statistics is widely employed in government, business, and natural and social sciences.
|
Given this reference text about the history of statistics, what was the earliest publication of writing on statistics?
|
The earliest publication containing statistics is Natural and Political Observations upon the Bills of Mortality by John Graunt.
|
null | false
| null |
Provide a bullet point list of all primary matter phase transitions, with the name of the process following by the matter states involved Use the format: {process name}: {initial state} -> {final state}
|
- melting: solid -> liquid
- sublimation: solid -> gas
- freezing: liquid -> solid
- vaporization: liquid -> gas
- ionization: gas -> plasma
- condensation: gas -> liquid
- deposition: gas -> solid
- deionization: plasma -> gas
|
|
null | false
| null |
What are some of the best places to eat in Boston?
|
There are many types of restaurants in Boston. Boston is well known for its local seafood: lobster, scallops, shrimp, and many types of fish. Boston is also a waterfront city, so there are many dining options available on the water.
|
|
null | false
| 157
|
The task of interpreting and following natural language (NL) navigation instructions involves interleaving different signals, at the very least the linguistic utterance and the representation of the world. For example, in turn right on the first intersection, the instruction needs to be interpreted, and a specific object in the world (the intersection) needs to be located in order to execute the instruction. In NL navigation studies, the representation of the world may be provided via visual sensors BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4 or as a symbolic world representation. This work focuses on navigation based on a symbolic world representation (referred to as a map).
Previous datasets for NL navigation based on a symbolic world representation, HCRC BIBREF5, BIBREF6, BIBREF7 and SAIL BIBREF8, BIBREF9, BIBREF10, BIBREF11, BIBREF12, BIBREF13, BIBREF14, BIBREF15 present relatively simple worlds, with a small fixed set of entities known to the navigator in advance. Such representations bypass the great complexity of real urban navigation, which consists of long paths and an abundance of previously unseen entities of different types.
In this work we introduce Realistic Urban Navigation (RUN), where we aim to interpret navigation instructions relative to a rich symbolic representation of the world, given by a real, dense urban map. To address RUN, we designed and collected a new dataset based on OpenStreetMap, in which we align NL instructions to their corresponding routes. Using Amazon Mechanical Turk, we collected 2515 instructions over 3 regions of Manhattan, all specified (and verified) by (respective) sets of human workers. This task raises several challenges. First of all, we assume a large world, providing long routes, vulnerable to error propagation; secondly, we assume a rich environment, with entities of various different types, most of which are unseen during training and are not known in advance; finally, we evaluate on the full route intended, rather than on the last position only.
We then propose a strong neural baseline for RUN where we augment a standard encoder-decoder architecture with an entity abstraction layer, attention over words and worlds, and a constantly updating world-state. Our experimental results and ablation study show that this architecture is indeed better-equipped to treat grounding in realistic urban settings than standard sequence-to-sequence architectures. Given this RUN benchmark, empirical results, and evaluation procedure, we hope to encourage further investigation into the topic of interpreting NL instructions in realistic and previously unseen urban domains.
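As a rough illustration of the "attention over words and worlds" component (a plain-NumPy toy of our own devising; the actual model additionally includes the entity abstraction layer and a constantly updating world-state):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def dual_attention(dec_state, word_states, entity_states):
    """Attend separately over instruction words and map entities,
    then concatenate the two context vectors for the decoder."""
    word_scores = word_states @ dec_state      # one score per word
    entity_scores = entity_states @ dec_state  # one score per map entity
    word_ctx = softmax(word_scores) @ word_states
    world_ctx = softmax(entity_scores) @ entity_states
    return np.concatenate([word_ctx, world_ctx])

# Toy dimensions: 5 instruction words, 7 map entities, hidden size 16.
rng = np.random.default_rng(0)
ctx = dual_attention(rng.normal(size=16),
                     rng.normal(size=(5, 16)),
                     rng.normal(size=(7, 16)))
print(ctx.shape)  # (32,)
```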
[1] The task defined by BIBREF6 is of moving between entities only.
We then propose a strong neural baseline for RUN where we augment a standard encoder-decoder architecture with an entity abstraction layer, attention over words and worlds, and a constantly updating world-state.
|
What's the function of the strong neural baseline?
|
The authors augment a standard encoder-decoder architecture with an entity abstraction layer, attention over words and worlds, and a constantly updating world-state.
|
null | false
| null |
Who create Ben & Jerry's icecream?
|
Ben & Jerry's ice cream was founded by Ben Cohen and Jerry Greenfield in South Burlington, Vermont. The ice cream grew in popularity largely due to its exotic flavors, which included large chunks of ingredients mixed into regular ice cream. This was a result of Cohen's severe anosmia (a lack of the sense of smell), which led him to rely on texture when creating new flavors.
|
|
null | false
| null |
Identify which animal species is alive or extinct: Archelon Turtle, Bornean Orangutan
|
Archelon Turtle is extinct, Bornean Orangutan is alive.
|
|
null | false
| null |
Brazilian jiu-jitsu was initially developed in 1925 by Brazilian brothers Carlos, Oswaldo, Gastão Jr., George, and Hélio Gracie, after Carlos was taught jiu-jitsu by a travelling Japanese judoka, Mitsuyo Maeda, who himself mastered his ground fighting while interacting with Taro Miyake (Tanabe student), Sadakazu Uyenishi (Handa, Tanabe) and Yukio Tani (Tenjin Shinyo-ryu) and catch wrestlers in Europe.
|
Given this paragraph about Jiu-Jitsu, who taught Carlos Gracie.
|
Mitsuyo Maeda
|
null | false
| 246
|
The major advantage of our system is that we only leverage NLTK tokenization and a POS tagger to preprocess our training dataset. When implementing our neural network based clinical information extraction system, we found it is not easy to construct high-quality training data due to the noisy format of clinical notes. Choosing the proper tokenizer is quite important for span identification. After several experiments, we found that "RegexpTokenizer" matches our needs. This tokenizer can generate spans for each token via a sophisticated regular expression, as in the sketch below.
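A minimal sketch of span-generating tokenization with NLTK's RegexpTokenizer (the pattern below is illustrative, not the exact expression used in the system):

```python
from nltk.tokenize import RegexpTokenizer

# Illustrative pattern: decimal numbers, word runs, or any other
# non-whitespace run (tried in that order at each position).
tokenizer = RegexpTokenizer(r"\d+\.\d+|\w+|\S+")

text = "BP 120/80, temp 98.6F"
# span_tokenize yields (start, end) character offsets for each token.
for start, end in tokenizer.span_tokenize(text):
    print(start, end, text[start:end])
```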
The major advantage of our system is that we only leverage NLTK tokenization and a POS tagger to preprocess our training dataset. When implementing our neural network based clinical information extraction system, we found it is not easy to construct high quality training data due to the noisy format of clinical notes.
|
Is it easy to construct high quality training data?
|
No, it isn't.
|
null | false
| null |
List the ingredients and step by step instructions to make a glass of AMF.
|
Ingredients:
3/4 oz vodka
3/4 oz rum
3/4 oz tequila
3/4 oz gin
3/4 oz blue curaçao
3/4 oz simple syrup
3/4 oz lemon juice
Sprite or 7up
Preserved cherry
Steps:
1. Mix the vodka, rum, tequila, gin, blue curaçao, simple syrup, and lemon juice in a glass filled with ice.
2. Top the glass off with the Sprite or 7up and stir.
3. Garnish with the preserved cherry.
|
|
2001.11381
| false
| null |
The evaluation results are presented in Table TABREF42, in the form of normalized averages in [0,1] and their standard deviation $\sigma$.
The evaluation results are presented in Table TABREF42, in the form of normalized averages in [0,1] and their standard deviation $\sigma$.
|
What evaluation metrics did they look at?
|
accuracy with standard deviation
|
null | false
| null |
Identify which instrument is string or percussion: Tubular bells, Cuataro
|
Cuataro is string, Tubular bells is percussion.
|
|
null | false
| null |
Prior to the emergence of Kievan Rus' in the 9th century, most of the area north of the Black Sea, which roughly overlaps with modern-day Ukraine and Belarus, was primarily populated by eastern Slavic tribes. In the northern region around Novgorod were the Ilmen Slavs and neighboring Krivichi, who occupied territories surrounding the headwaters of the West Dvina, Dnieper and Volga rivers. To their north, in the Ladoga and Karelia regions, were the Finnic Chud tribe. In the south, in the area around Kiev, were the Poliane, a group of Slavicized tribes with Iranian origins, the Drevliane to the west of the Dnieper, and the Severiane to the east. To their north and east were the Vyatichi, and to their south was forested land settled by Slav farmers, giving way to steppe lands populated by nomadic herdsmen.
There was once controversy over whether the Rus' were Varangians or Slavs, however, more recently scholarly attention has focused more on debating how quickly an ancestrally Norse people assimilated into Slavic culture. This uncertainty is due largely to a paucity of contemporary sources. Attempts to address this question instead rely on archaeological evidence, the accounts of foreign observers, and legends and literature from centuries later. To some extent, the controversy is related to the foundation myths of modern states in the region. This often unfruitful debate over origins has periodically devolved into competing nationalist narratives of dubious scholarly value being promoted directly by various government bodies, in a number of states. This was seen in the Stalinist period, when Soviet historiography sought to distance the Rus' from any connection to Germanic tribes, in an effort to dispel Nazi propaganda claiming the Russian state owed its existence and origins to the supposedly racially superior Norse tribes. More recently, in the context of resurgent nationalism in post-Soviet states, Anglophone scholarship has analyzed renewed efforts to use this debate to create ethno-nationalist foundation stories, with governments sometimes directly involved in the project. Conferences and publications questioning the Norse origins of the Rus' have been supported directly by state policy in some cases, and the resultant foundation myths have been included in some school textbooks in Russia.
While Varangians were Norse traders and Vikings, some Russian and Ukrainian nationalist historians argue that the Rus' were themselves Slavs (see Anti-Normanism). Normanist theories focus on the earliest written source for the East Slavs, the Primary Chronicle, which was produced in the 12th century. Nationalist accounts on the other hand have suggested that the Rus' were present before the arrival of the Varangians, noting that only a handful of Scandinavian words can be found in Russian and that Scandinavian names in the early chronicles were soon replaced by Slavic names.
Nevertheless, the close connection between the Rus' and the Norse is confirmed both by extensive Scandinavian settlements in Belarus, Russia, and Ukraine and by Slavic influences in the Swedish language. Though the debate over the origin of the Rus' remains politically charged, there is broad agreement that if the proto-Rus' were indeed originally Norse, they were quickly nativized, adopting Slavic languages and other cultural practices. This position, roughly representing a scholarly consensus (at least outside of nationalist historiography), was summarized by the historian F. Donald Logan: "in 839, the Rus were Swedes; in 1043 the Rus were Slavs". Recent scholarship has attempted to move past the narrow and politicized debate on origins, to focus on how and why assimilation took place so quickly. Some modern DNA testing also points to Viking origins, not only of some of the early Rus' princely family and/or their retinues but also links to possible brethren from neighboring countries like Sviatopolk I of Kiev.
Ahmad ibn Fadlan, an Arab traveler during the 10th century, provided one of the earliest written descriptions of the Rus': "They are as tall as a date palm, blond and ruddy, so that they do not need to wear a tunic nor a cloak; rather the men among them wear garments that only cover half of his body and leaves one of his hands free." Liutprand of Cremona, who was twice an envoy to the Byzantine court (949 and 968), identifies the "Russi" with the Norse ("the Russi, whom we call Norsemen by another name") but explains the name as a Greek term referring to their physical traits ("A certain people made up of a part of the Norse, whom the Greeks call [...] the Russi on account of their physical features, we designate as Norsemen because of the location of their origin."). Leo the Deacon, a 10th-century Byzantine historian and chronicler, refers to the Rus' as "Scythians" and notes that they tended to adopt Greek rituals and customs. But 'Scythians' in Greek parlance is used predominantly as a generic term for nomads.
|
Summarise the history of the origin of the Kievan Rus' name
|
The origin of the name Kievan Rus' is much debated, especially in countries with strong nationalist currents (for more information, see ethno-nationalist foundation stories). There was once controversy over whether the Rus' were Varangians or Slavs. Some historians argue that the Rus' were themselves Slavs, while the Varangians were Norse traders and Vikings. The close connection between the Rus' and the Norse is confirmed both by the extensive Scandinavian settlements found in the present-day territories of Belarus, Russia, and Ukraine and by Slavic influences in the Swedish language.
Putting aside the pro-nationalist politico-historical view, we can follow the historical and archaeological findings that explain how the tribes formed on the territory that was later named Kievan Rus'.
Liutprand of Cremona identifies the "Russi" with the Norse but explains the name as a Greek term referring to their physical traits. Leo the Deacon, a 10th-century Byzantine historian and chronicler, refers to the Rus' as "Scythians" and notes that they tended to adopt Greek rituals and customs. But 'Scythians' in Greek parlance is used predominantly as a generic term for nomads.
Before the emergence of Kievan Rus', most of the area north of the Black Sea was primarily populated by eastern Slavic tribes. The region was also surrounded and populated by many other peoples, such as the Ilmen Slavs and the neighboring Krivichi, the Finnic Chud tribe, the Poliane (a group of Slavicized tribes with Iranian origins), the Drevliane, and the Severiane, together with the Vyatichi; to their south lay forested land settled by Slav farmers, giving way to steppe lands populated by nomadic herdsmen.
|
null | false
| null |
What is Project Mosaic?
|
Mosaic is an extension to the Apache Spark framework that allows easy and fast processing of very large geospatial datasets.
Mosaic provides:
- easy conversion between common spatial data encodings (WKT, WKB and GeoJSON);
- constructors to easily generate new geometries from Spark native data types;
- many of the OGC SQL standard ST_ functions implemented as Spark Expressions for transforming, aggregating and joining spatial datasets;
- high performance through implementation of Spark code generation within the core Mosaic functions;
- optimisations for performing point-in-polygon joins using an approach co-developed with Ordnance Survey; and
- the choice of a Scala, SQL and Python API.
|
|
2002.01030
| false
| null |
Table TABREF21 shows the performance of non-static capsule network for fake news detection in comparison to other methods. The accuracy of our model is 7.8% higher than the best result achieved by LSVM.
As mentioned in Section SECREF13, the LIAR dataset is a multi-label dataset with short news statements. In comparison to the ISOT dataset, the classification task for this dataset is more challenging. We evaluate the proposed model while using different metadata, which is considered as speaker profiles. Table TABREF30 shows the performance of the capsule network for fake news detection by adding every metadata. The best result of the model is achieved by using history as metadata. The results show that this model can perform better than state-of-the-art baselines including hybrid CNN BIBREF15 and LSTM with attention BIBREF16 by 3.1% on the validation set and 1% on the test set.
The accuracy of our model is 7.8% higher than the best result achieved by LSVM.
The results show that this model can perform better than state-of-the-art baselines including hybrid CNN BIBREF15 and LSTM with attention BIBREF16 by 3.1% on the validation set and 1% on the test set.
|
What are state of the art methods authors compare their work with?
|
ISOT dataset: LSVM
Liar dataset: Hybrid CNN and LSTM with attention
|
null | false
| null |
Which of these sports leagues are considered part of Big 4 Sports leagues in the United States: MLS, NFL, La Liga, IPL, NHL
|
The sports leagues considered part of Big 4 Sports leagues in the United States are the NFL and the NHL
|
|
1809.04960
| false
| null |
We select a large-scale Chinese dataset BIBREF0 with millions of real comments and a human-annotated test set to evaluate our model. The dataset is collected from Tencent News, which is one of the most popular Chinese websites for news and opinion articles. The dataset consists of 198,112 news articles. Each piece of news contains a title, the content of the article, and a list of the users' comments. Following the previous work BIBREF0 , we tokenize all text with the popular python package Jieba, and filter out short articles with less than 30 words in content and those with less than 20 comments. The dataset is split into training/validation/test sets, and they contain 191,502/5,000/1,610 pieces of news, respectively. The whole dataset has a vocabulary size of 1,858,452. The average lengths of the article titles and content are 15 and 554 Chinese words. The average comment length is 17 words.
We analyze the performance of the proposed method under the semi-supervised setting. We train the supervised IR model with different amounts of paired data. Figure FIGREF39 shows the curve (blue) of the recall@1 score. As expected, the performance grows as the paired dataset becomes larger. We further combine the supervised IR with our unsupervised model, which is trained with the full unpaired data (4.8M) and different amounts of paired data (from 50K to 4.8M). It shows that IR+Proposed can outperform the supervised IR model given the same paired dataset. This indicates that the proposed model can exploit the unpaired data to further improve the performance of the supervised model.
We select a large-scale Chinese dataset BIBREF0 with millions of real comments and a human-annotated test set to evaluate our model.
We further combine the supervised IR with our unsupervised model, which is trained with the full unpaired data (4.8M) and different amounts of paired data (from 50K to 4.8M).
|
Which paired corpora did they use in the other experiment?
|
The answers are shown as follows:
* Chinese dataset BIBREF0
|
null | false
| null |
A chair is a type of seat, typically designed for one person and consisting of one or more legs, a flat or slightly angled seat and a back-rest. They may be made of wood, metal, or synthetic materials, and may be padded or upholstered in various colors and fabrics. Chairs vary in design. An armchair has armrests fixed to the seat; a recliner is upholstered and features a mechanism that lowers the chair's back and raises into place a footrest; a rocking chair has legs fixed to two long curved slats; and a wheelchair has wheels fixed to an axis under the seat.
|
What type of chair would I want to design to give an option to lean all the way back?
|
As the name suggests, a recliner features a mechanism that lowers the chair’s back and raises a footrest into place
|
null | false
| null |
What are Gummi Bears called in England?
|
Jelly Babies are what people call Gummi Bears in England
|
|
null | false
| null |
Why practicing sport is important ?
|
Studies show that practicing sport regularly can reduce the risk of cancer and improve your overall health. You can also rely on what is called NEAT (non-exercise activity thermogenesis), meaning you don't have to swim or run for an hour a day: you can already be physically active by walking 30 minutes to an hour per day, taking the stairs instead of the elevator, going shopping, cleaning the house, and so on.
|
|
null | false
| 121
|
Knowledge and/or data is often modeled in a structure, such as indexes, tables, key-value pairs, or triplets. These data, by their nature (e.g., raw data or long time-series data), are not easily usable by humans, underlining their crucial need to be synthesized. Recently, numerous works have focused on leveraging structured data in various applications, such as question answering BIBREF0, BIBREF1 or table retrieval BIBREF2, BIBREF3. One emerging research field consists in transcribing data-structures into natural language in order to ease their understandability and usability. This field is referred to as “data-to-text" BIBREF4 and has its place in several application domains (such as journalism BIBREF5 or medical diagnosis BIBREF6) or wide-audience applications (such as financial BIBREF7 and weather reports BIBREF8, or sport broadcasting BIBREF9, BIBREF10). As an example, Figure FIGREF1 shows a data-structure containing statistics on NBA basketball games, paired with its corresponding journalistic description.
Designing data-to-text models gives rise to two main challenges: 1) understanding structured data and 2) generating associated descriptions. Recent data-to-text models BIBREF11, BIBREF12, BIBREF13, BIBREF10 mostly rely on an encoder-decoder architecture BIBREF14 in which the data-structure is first encoded sequentially into a fixed-size vectorial representation by an encoder. Then, a decoder generates words conditioned on this representation. With the introduction of the attention mechanism BIBREF15 on one hand, which computes a context focused on important elements from the input at each decoding step, and the copy mechanism BIBREF16, BIBREF17 on the other hand, to deal with unknown or rare words, these systems produce fluent and domain-comprehensive texts. For instance, Roberti et al. BIBREF18 train a character-wise encoder-decoder to generate descriptions of restaurants based on their attributes, while Puduppully et al. BIBREF12 design a more complex two-step decoder: they first generate a plan of elements to be mentioned, and then condition text generation on this plan. Although previous work yields overall good results, we identify two important caveats that hinder precision (i.e. factual mentions) in the descriptions:
Linearization of the data-structure. In practice, most works focus on introducing innovative decoding modules, and still represent the data as a single sequence of elements to be encoded. For example, the table from Figure FIGREF1 would be linearized to [(Hawks, H/V, H), ..., (Magic, H/V, V), ...], effectively losing the distinction between rows, and therefore between entities. To the best of our knowledge, only Liu et al. BIBREF19, BIBREF11 propose encoders constrained by the structure, but these approaches are designed for single-entity structures.
Arbitrary ordering of unordered collections in recurrent networks (RNN). Most data-to-text systems use RNNs as encoders (such as GRUs or LSTMs); however, these architectures have some limitations. In practice, they require their input to be fed sequentially. This way of encoding unordered sequences (i.e. collections of entities) implicitly assumes an arbitrary order within the collection which, as demonstrated by Vinyals et al. BIBREF20, significantly impacts the learning performance.
To address these shortcomings, we propose a new structured-data encoder assuming that structures should be hierarchically captured. Our contribution focuses on the encoding of the data-structure, thus the decoder is chosen to be a classical module as used in BIBREF12, BIBREF10. Our contribution is threefold:
We model the general structure of the data using a two-level architecture, first encoding all entities on the basis of their elements, then encoding the data structure on the basis of its entities (see the sketch after this list);
We introduce the Transformer encoder BIBREF21 in data-to-text models to ensure robust encoding of each element/entities in comparison to all others, no matter their initial positioning;
We integrate a hierarchical attention mechanism to compute the hierarchical context fed into the decoder.
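A heavily simplified sketch of the two-level encoding in PyTorch (module names and dimensions are ours; the full model also includes the hierarchical attention and the decoder):

```python
import torch
import torch.nn as nn

class HierarchicalEncoder(nn.Module):
    """Level 1 encodes each entity (row) from its elements (cells);
    level 2 encodes the whole data structure from the entity vectors."""
    def __init__(self, d_model=128, nhead=4):
        super().__init__()
        make_layer = lambda: nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.element_encoder = nn.TransformerEncoder(make_layer(), num_layers=2)
        self.entity_encoder = nn.TransformerEncoder(make_layer(), num_layers=2)

    def forward(self, elements):
        # elements: (num_entities, num_elements, d_model) embedded cells.
        encoded = self.element_encoder(elements)   # encode cells per entity
        entity_vecs = encoded.mean(dim=1)          # pool cells -> entity vector
        # No positional encoding is added, so the second level treats the
        # entities as an order-insensitive collection.
        return self.entity_encoder(entity_vecs.unsqueeze(0))

encoder = HierarchicalEncoder()
table = torch.randn(13, 22, 128)   # 13 entities (rows), 22 elements each
print(encoder(table).shape)        # torch.Size([1, 13, 128])
```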
We report experiments on the RotoWire benchmark BIBREF10 which contains around $5K$ statistical tables of NBA basketball games paired with human-written descriptions. Our model is compared to several state-of-the-art models. Results show that the proposed architecture outperforms previous models on BLEU score and is generally better on qualitative metrics.
In the following, we first present a state-of-the art of data-to-text literature (Section 2), and then describe our proposed hierarchical data encoder (Section 3). The evaluation protocol is presented in Section 4, followed by the results (Section 5). Section 6 concludes the paper and presents perspectives.
To address these shortcomings, we propose a new structured-data encoder assuming that structures should be hierarchically captured.
|
What kind of encoder does the paper propose?
|
A new structured-data encoder assuming that structures should be hierarchically captured.
|
null | false
| 18
|
An NMT system consists of an encoder, which automatically encodes the characteristics of a source sentence into fixed-length context vectors, and a decoder, which recursively combines the produced context vectors with the previous target word to generate the most probable word from a target vocabulary.
More specifically, a bidirectional recurrent encoder reads every word $x_i$ of a source sentence $x = (x_1, \dots, x_n)$ and encodes a representation of the sentence into fixed-length annotation vectors $h_i$, each concatenated from the hidden states of the forward and backward directions:

$h_i = \big[\overrightarrow{h}_i \,;\, \overleftarrow{h}_i\big], \qquad \overrightarrow{h}_i = f\big(\overrightarrow{h}_{i-1}, E_x x_i\big)$

Here $x_i$ is the one-hot vector of the $i$-th word and $E_x$ is the word embedding matrix, which is shared across the source words. $f$ is the recurrent unit computing the current hidden state of the encoder based on the previous hidden state. $h_i$ is then called an annotation vector, which encodes the source sentence up to time $i$ from both forward and backward directions. Recurrent units in NMT can be a simple recurrent neural network unit (RNN), a Long Short-Term Memory unit (LSTM) BIBREF3 or a Gated Recurrent Unit (GRU) BIBREF4.

Similar to the encoder, the recurrent decoder generates one target word $y_t$ at a time, eventually forming a translated target sentence $y = (y_1, \dots, y_m)$. At time $t$, it takes the previous hidden state of the decoder $s_{t-1}$, the previous embedded word representation $E_y y_{t-1}$ and a time-specific context vector $c_t$ as inputs to calculate the current hidden state $s_t$:

$s_t = g\big(s_{t-1}, E_y y_{t-1}, c_t\big)$

Again, $g$ is the recurrent activation function of the decoder and $E_y$ is the shared word embedding matrix of the target sentences. The context vector $c_t$ is calculated from the annotation vectors of the encoder. Before the annotation vectors are fed into the decoder, an attention mechanism is set up in between, in order to choose which annotation vectors should contribute to the prediction of the next target word. Intuitively, a relevance between the previous target word and the annotation vectors can be used to form such an attention scenario. There exist several ways to calculate the relevance, as shown in BIBREF5, but what we describe here follows the proposed method of BIBREF2:

$e_{ti} = v_a^{\top} \tanh\big(W_a s_{t-1} + U_a h_i\big), \qquad \alpha_{ti} = \frac{\exp(e_{ti})}{\sum_{j=1}^{n} \exp(e_{tj})}, \qquad c_t = \sum_{i=1}^{n} \alpha_{ti} h_i$

In BIBREF2, this attention mechanism, originally called an alignment model, has been employed as a simple feedforward network whose first layer is learnable via $v_a$, $W_a$ and $U_a$. The relevance scores $e_{ti}$ are then normalized into attention weights $\alpha_{ti}$, and the context vector $c_t$ is calculated as the weighted sum of all annotation vectors $h_i$. Depending on how much attention the target word at time $t$ puts on the source states $h_i$, a soft alignment is learned. Employed this way, word alignment is not a latent variable but a parametrized function, making the alignment model differentiable. Thus, it can be trained together with the whole architecture using backpropagation.
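To make the attention computation concrete, here is a toy NumPy rendering of the equations above (a sketch with our own variable names, not code from the systems described):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def additive_attention(s_prev, H, W_a, U_a, v_a):
    """Bahdanau-style attention: score each annotation vector h_i
    against the previous decoder state, then build the context c_t."""
    scores = np.tanh(s_prev @ W_a + H @ U_a) @ v_a   # e_{t,i} for all i
    alpha = softmax(scores)                          # attention weights
    return alpha @ H                                 # context vector c_t

d = 8
rng = np.random.default_rng(1)
c_t = additive_attention(rng.normal(size=d),        # s_{t-1}
                         rng.normal(size=(6, d)),   # 6 annotation vectors
                         rng.normal(size=(d, d)),   # W_a
                         rng.normal(size=(d, d)),   # U_a
                         rng.normal(size=d))        # v_a
print(c_t.shape)  # (8,)
```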
One of the most severe problems of NMT is handling of the rare words, which are not in the short lists of the vocabularies, i.e. out-of-vocabulary (OOV) words, or do not appear in the training set at all. In BIBREF6 , the rare target words are copied from their aligned source words after the translation. This heuristic works well for OOV words and named entities but is unable to translate unseen words. In BIBREF7 , the proposed NMT models have been shown not only to be effective at reducing vocabulary sizes but also to be able to generate unseen words. This is achieved by segmenting the rare words into subword units and translating them. State-of-the-art translation systems essentially employ subword NMT BIBREF7 .
One of the most severe problems of NMT is handling of the rare words, which are not in the short lists of the vocabularies, i.e. out-of-vocabulary (OOV) words, or do not appear in the training set at all.
|
What is one of the most severe problems of NMT?
|
How to handle the rare words which do not appear in the short lists of the vocabulary.
|
null | false
| null |
How are sub-lets traditionally they handled?
|
Many properties try to be as hands-off as possible when dealing with sub-lets, and others go as far as to deter sub-letting and make it as difficult as possible: no follow-up calls to replacement tenants, massive sub-let fees, and even trying to persuade replacement tenants to sign a new lease with move-in specials. While all this may help your property in the short run by maintaining and driving up your current occupancy, mishandling sub-lets can be detrimental to your property in the long run. Reputation and brand management are the most important factors in generating new leads and driving traffic to your property. Word-of-mouth reputation, online reviews, and social media platforms all suffer from the hands-off/no-sub-letting mindset. No one wants to lease at an apartment that will try to keep them in their lease when they need to move back home to tend to a family emergency or transfer schools to better their education. Sub-lets will ALWAYS be a part of your property, regardless of how great your customer service may be.
|
|
null | false
| null |
What's the best career advise you could give a new graduate entering into a technical or analytics field?
|
A key feature of any technical or analytics job is that you will be constantly challenged to try something new. You will need to learn how to learn, i.e. to teach yourself new techniques and new technology, and to have the courage to apply these things to complex challenges where it's not always clear that you have the right answer (and where very few, if any, people can fully validate your work). For many people, the uncertainty and risk of failure associated with these situations tends to push them towards safe and known opportunities with well-trodden paths to follow. If you crave a fulfilling career, have the courage to pursue the paths less traveled, acknowledge and learn from your failures, and surround yourself with others similarly willing to stretch and grow. You will find such choices lead to a more satisfying career over the longer term.
|
|
null | false
| 203
|
We base our experiments on a large collection of Bible translations crawled from the web, coming from various sources and periods of time. Any other multilingual data collection would work as well, but with the selected corpus we have the advantage of covering the same genre, with roughly the same coverage, for each language involved. It is also easy to divide the data into training and test sets by using Bible verse numbers, which allows us to control for semantic similarity between languages in a way that would have been difficult in a corpus that is not multi-parallel. Altogether we have 1,303 translations in 990 languages that we can use for our purposes. These were chosen so that the model alphabet size is below 1000 symbols, which was satisfied by choosing only translations in Latin, Cyrillic or Greek script.
Certainly, there are disadvantages as well, such as the limited size (roughly 500 million tokens in total, with most languages having only one translation of the New Testament each, with roughly 200 thousand tokens), the narrow domain and the high overlap of named entities. The latter can lead to some unexpected effects when using nonsensical language vectors, as the model will then generate a sequence of random names.
The corpus deviates in some ways from an ideal multi-parallel corpus. Most translations are of the complete New Testament, whereas around 300 also contain the Old Testament (thus several times longer), and around ten contain only portions of the New Testament. Additionally, several languages have multiple translations, which are then concatenated. These translations may vary in age and style, but historical versions of languages (with their own ISO 639-3 code) are treated as distinct languages. During training we enforce a uniform distribution between languages when selecting training examples.
We base our experiments on a large collection of Bible translations crawled from the web, coming from various sources and periods of times.
|
Where is their data from?
|
A large collection of Bible translations.
|
null | false
| 298
|
At a high level, as shown in Figure FIGREF4 , a product search engine works as follows: a customer issues a query, which is passed to a lexical matching engine (typically an inverted index BIBREF0 , BIBREF1 ) to retrieve all products that contain words in the query, producing a match set. The match set passes through stages of ranking, wherein top results from the previous stage are re-ranked before the most relevant items are finally displayed. It is imperative that the match set contain a relevant and diverse set of products that match the customer intent in order for the subsequent rankers to succeed. However, inverted index-based lexical matching falls short in several key aspects:
In this paper, we address the question: Given rich customer behavior data, can we train a deep learning model to retrieve matching products in response to a query? Intuitively, there is reason to believe that customer behavior logs contain semantic information; customers who are intent on purchasing a product circumvent the limitations of lexical matching by query reformulation or by deeper exploration of the search results. The challenge is the sheer magnitude of the data as well as the presence of noise, a challenge that modern deep learning techniques address very effectively.
Product search is different from web search as the queries tend to be shorter and the positive signals (purchases) are sparser than clicks. Models based on conversion rates or click-through-rates may incorrectly favor accessories (like a phone cover) over the main product (like a cell phone). This is further complicated by shoppers maintaining multiple intents during a single search session: a customer may be looking for a specific television model while also looking for accessories for this item at the lowest price and browsing additional products to qualify for free shipping. A product search engine should reduce the effort needed from a customer with a specific mission (narrow queries) while allowing shoppers to explore when they are looking for inspiration (broad queries).
As mentioned, product search typically operates in two stages: matching and ranking. Products that contain words in the query are the primary candidates. Products that have prior behavioral associations (products bought or clicked after issuing the query) are also included in the candidate set. The ranking step takes these candidates and orders them using a machine-learned rank function to optimize for customer satisfaction and business metrics.
We present a neural network trained with large amounts of purchase and click signals to complement a lexical search engine in ad hoc product retrieval. Our first contribution is a loss function with a built-in threshold to differentiate between random negative, impressed but not purchased, and purchased items. Our second contribution is the empirical result that recommends average pooling in combination with $n$-grams that capture short-range linguistic patterns instead of more complex architectures. Third, we show the effectiveness of consistent token hashing in Siamese networks for zero-shot learning and handling out-of-vocabulary tokens.
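A sketch of what such a three-way threshold loss could look like (this is our illustrative reconstruction, not the paper's exact formulation): purchased pairs are pushed above a high similarity threshold, impressed-but-not-purchased pairs are held in an intermediate band, and random negatives are pushed below a low threshold.

```python
def threshold_hinge_loss(sim, label, t_pos=0.9, t_imp=0.55, t_neg=0.2):
    """sim: similarity between query and product embeddings (e.g. cosine).
    label: 'purchased', 'impressed' (seen but not bought), or 'random'.
    The threshold values here are made-up examples."""
    if label == "purchased":           # pull purchased pairs above t_pos
        return max(0.0, t_pos - sim)
    if label == "impressed":           # hold impressed pairs in a middle band
        return max(0.0, t_imp - sim) + max(0.0, sim - t_pos)
    return max(0.0, sim - t_neg)       # push random negatives below t_neg

print(threshold_hinge_loss(0.4, "purchased"))   # ~0.5
print(threshold_hinge_loss(0.4, "impressed"))   # ~0.15
print(threshold_hinge_loss(0.4, "random"))      # ~0.2
```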
In Section SECREF2 , we highlight related work. In Section SECREF3 , we describe our model architecture, loss functions, and tokenization techniques including our approach for unseen words. We then introduce the readers to the data and our input representations for queries and products in Section SECREF4 . Section SECREF5 presents the evaluation metrics and our results. We provide implementation details and optimizations to efficiently train the model with large amounts of data in Section SECREF6 . Finally, we conclude in Section SECREF7 with a discussion of future work.
Our first contribution is a loss function with a built-in threshold to differentiate between random negative, impressed but not purchased, and purchased items.
|
What is presented to differentiate between random negative, impressed but not purchased, and purchased items?
|
A loss function with a built-in threshold.
|
null | false
| null |
give me a list of all the ways a person can hydrate
|
- drink water from a glass
- drink water using a straw
- drink water from a hose
- suck on an ice cube
- have a cucumber
- get an IV
- drink an electrolyte
|
|
null | false
| null |
Identify which instrument is string or percussion: Ikembe, Cimbalom
|
Cimbalom is string, Ikembe is percussion.
|
|
null | false
| null |
Individuals whom the media refers to as members of the PayPal Mafia include:
Peter Thiel, PayPal founder and former chief executive officer who is sometimes referred to as the "don" of the PayPal Mafia
Max Levchin, founder and chief technology officer at PayPal
Elon Musk, co-founder of Zip2, founder of X.com which merged with Confinity to form PayPal. Musk later founded SpaceX, co-founded Tesla, Inc., co-founded OpenAI, Neuralink, founded The Boring Company, and became owner of Twitter, Inc.
David O. Sacks, former PayPal COO who later founded Geni.com and Yammer
Scott Banister, early advisor and board member at PayPal.
Roelof Botha, former PayPal CFO who later became a partner and Senior Steward of venture capital firm Sequoia Capital
Steve Chen, former PayPal engineer who co-founded YouTube.
Reid Hoffman, former executive vice president who later founded LinkedIn and was an early investor in Facebook and Aviary
Ken Howery, former PayPal CFO who became a partner at Founders Fund
Chad Hurley, former PayPal web designer who co-founded YouTube
Eric M. Jackson, who wrote the book The PayPal Wars and became chief executive officer of WND Books and co-founded CapLinked
Jawed Karim, former PayPal engineer who co-founded YouTube
Jared Kopf, former PayPal (executive assistant to Peter Thiel) who co-founded Slide, HomeRun and NextRoll
Dave McClure, former PayPal marketing director, a super angel investor for start up companies
Andrew McCormack, co-founder of Valar Ventures
Luke Nosek, PayPal co-founder and former vice president of marketing and strategy, became a partner at Founders Fund with Peter Thiel and Ken Howery
Keith Rabois, former executive at PayPal who later worked at LinkedIn, Slide, Square, Khosla Ventures, and currently with Peter Thiel at Founders Fund, and personally invested in Tokbox, Xoom, Slide, LinkedIn, Geni, Room 9 Entertainment, YouTube, and Yelp
Jack Selby, former vice president of corporate and international development at PayPal who co-founded Clarium Capital with Peter Thiel, later becoming managing director of Grandmaster Capital Management
Premal Shah, former product manager at PayPal, became the founding president of Kiva.org
Russel Simmons, former PayPal engineer who co-founded Yelp Inc.
Jeremy Stoppelman, former vice president of technology at PayPal who later co-founded Yelp
Yishan Wong, former engineering manager at PayPal, later worked at Facebook and became the CEO of Reddit
|
What did the CFO do after Paypal?
|
Roelof Botha, one of the former CFOs, became partner and Senior Steward of venture capital firm Sequoia Capital. Ken Howery, the other CFO, became a partner at Founders Fund.
|
null | false
| null |
In the series A Song of Ice and Fire, who is the founder of House Foote?
|
Philip Foote
|
|
null | false
| null |
Give me a list of things to do when travelling by plane
|
1. Eat little
2. Stay hydrated with plenty of water
3. Take a travel pillow for your neck
4. Take an eye mask to wear for sleep
5. Ask to upgrade if in coach
6. Be prepared to not be comfortable
|
|
null | false
| null |
Give me a random bulleted list of ten landlocked countries
|
- Uzbekistan
- Liechtenstein
- Bhutan
- Botswana
- Laos
- Mongolia
- Switzerland
- Uganda
- Zambia
- Vatican City
|
|
null | false
| 21
|
We compare our two exploration strategies to the following baselines and ablations:
KG-A2C This is the exact same method presented in BIBREF6 with no modifications.
A2C Represents the same approach as KG-A2C but with all the knowledge graph components removed. The state representation is text only, encoded using recurrent networks.
A2C-chained Is a variation on KG-A2C-chained where we use our policy chaining approach with the A2C method to train the agent instead of KG-A2C.
A2C-Explore Uses A2C in addition to the exploration strategy seen in KG-A2C-Explore. The cell representations here are defined in terms of the recurrent network based encoding of the textual observation.
Figure FIGREF10 shows that agents utilizing knowledge-graphs in addition to either enhanced exploration method far outperform the baseline A2C and KG-A2C. KG-A2C-chained and KG-A2C-Explore both pass the bottleneck of a score of 40, whereas A2C-Explore gets to the bottleneck but cannot surpass it.
There are a couple of key insights that can be drawn from these results. The first is that the knowledge graph appears to be critical; it is theorized to help with partial observability. However, the knowledge graph representation alone is not sufficient: without enhanced exploration methods it cannot surpass the bottleneck. A2C-chained, which explores without a knowledge graph, fails to even outperform the baseline A2C. We hypothesize that this is because the knowledge graph implicitly aids the sample efficiency of bottleneck detection and subsequent exploration. That is, exploring after backtracking from a potentially detected bottleneck is much more efficient in the knowledge-graph-based agent.
The Go-Explore based exploration algorithm sees less of a difference between agents. A2C-Explore converges more quickly, but to a lower reward trajectory that fails to pass the bottleneck, whereas KG-A2C-Explore takes longer to reach a similar reward but consistently makes it through the bottleneck. The knowledge graph cell representation appears to thus be a better indication of what a promising state is as opposed to just the textual observation.
Comparing the advanced exploration methods when using the knowledge graph, we see that both agents successfully pass the bottleneck corresponding to entering the cellar and lighting the lamp and reach comparable scores within a margin of error. KG-A2C-chained is significantly more sample efficient and converges faster. We can infer that chaining policies by explicitly detecting bottlenecks lets us pass it more quickly than attempting to find promising cell representations with Go-Explore. This form of chained exploration with backtracking is particularly suited to sequential decision making problems that can be represented as acyclic directed graphs as in Figure FIGREF1.
KG-A2C-chained is significantly more sample efficient and converges faster. We can infer that chaining policies by explicitly detecting bottlenecks lets us pass it more quickly than attempting to find promising cell representations with Go-Explore.
|
What can be inferred from the fact that KG-A2C-chained is significantly more sample efficient and converges faster?
|
They can infer that chaining policies by explicitly detecting bottlenecks lets them pass it more quickly than attempting to find promising cell representations with Go-Explore.
|
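For illustration, a minimal sketch of the policy-chaining idea described in this record: train until the reward plateaus (a crude bottleneck signal), freeze the current policy, and chain a fresh one starting from the backtracked best state. The `env`, `make_policy`, and `train_episode` interfaces and the plateau heuristic are assumptions, not the authors' code.

```python
def chained_training(env, make_policy, train_episode,
                     patience=50, max_episodes=10_000):
    """Sketch of policy chaining with bottleneck detection (assumed API)."""
    policies = []
    policy = make_policy()
    best_score = float("-inf")
    best_state = env.reset()          # state to backtrack to
    stale = 0                         # episodes without improvement
    for _ in range(max_episodes):
        score, end_state = train_episode(policy, env, start=best_state)
        if score > best_score:
            best_score, best_state, stale = score, end_state, 0
        else:
            stale += 1
        if stale >= patience:         # bottleneck: reward has plateaued
            policies.append(policy)   # freeze the policy that reached it
            policy = make_policy()    # chain a fresh policy from the
            stale = 0                 # backtracked best state
    return policies + [policy]
```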
null | false
| 78
|
Social media platforms have made the spreading of fake news easier and faster, and able to reach a wider audience. Social media offer another feature, anonymity for authors, which opens the door for suspicious individuals or organizations to exploit these platforms. Recently, there has been an increase in the spread of fake news and rumors over the web and social media BIBREF0. Fake news in social media varies in its intention to mislead. Some of it is spread with the intention to be ironic or to deliver the news in an ironic way (satirical news). Other kinds, such as propaganda, hoaxes, and clickbait, are spread to mislead the audience or to manipulate their opinions. In the case of Twitter, suspicious news annotations should be done at the tweet rather than the account level, since some accounts mix fake with real news. However, these annotations are extremely costly and time consuming due to the high volume of available tweets. Consequently, a first step in this direction, e.g., as a pre-filtering step, can be viewed as the task of detecting fake news at the account level.
The main obstacle for detecting suspicious Twitter accounts is due to the behavior of mixing some real news with the misleading ones. Consequently, we investigate ways to detect suspicious accounts by considering their tweets in groups (chunks). Our hypothesis is that suspicious accounts have a unique pattern in posting tweet sequences. Since their intention is to mislead, the way they transition from one set of tweets to the next has a hidden signature, biased by their intentions. Therefore, reading these tweets in chunks has the potential to improve the detection of the fake news accounts.
In this work, we investigate the problem of discriminating between factual and non-factual accounts in Twitter. To this end, we collect a large dataset of tweets using a list of propaganda, hoax and clickbait accounts and compare different versions of sequential chunk-based approaches using a variety of feature sets against several baselines. Several approaches have been proposed for news verification, whether in social media (rumors detection) BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, or in news claims BIBREF5, BIBREF6, BIBREF7, BIBREF8. The main orientation in the previous works is to verify the textual claims/tweets but not their sources. To the best of our knowledge, this is the first work aiming to detect factuality at the account level, and especially from a textual perspective. Our contributions are:
We propose an approach to detect non-factual Twitter accounts by treating post streams as a sequence of tweets' chunks. We test several semantic and dictionary-based features together with a neural sequential approach, and apply an ablation test to investigate their contribution.
We benchmark our approach against other approaches that discard the chronological order of the tweets or read the tweets individually. The results show that our approach produces superior results at detecting non-factual accounts.
Our contributions are: We propose an approach to detect non-factual Twitter accounts by treating post streams as a sequence of tweets' chunks. We test several semantic and dictionary-based features together with a neural sequential approach, and apply an ablation test to investigate their contribution. We benchmark our approach against other approaches that discard the chronological order of the tweets or read the tweets individually. The results show that our approach produces superior results at detecting non-factual accounts.
|
What contribution does the author make in this paper?
|
They propose an approach to detect non-factual Twitter accounts by treating post streams as a sequence of tweets' chunks. They test several semantic and dictionary-based features together with a neural sequential approach, and apply an ablation test to investigate their contribution. They benchmark their approach against other approaches that discard the chronological order of the tweets or read the tweets individually. The results show that their approach produces superior results at detecting non-factual accounts.
|
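As a rough illustration of the chunk-based reading described here, the sketch below groups an account's chronologically ordered tweets into fixed-size chunks and classifies the chunk sequence with an LSTM. The chunk size, feature dimensions, and the abstract chunk-feature step are assumptions; the paper's actual features are the semantic and dictionary-based ones mentioned above.

```python
import torch
import torch.nn as nn

def chunk_tweets(tweets, chunk_size=20):
    """Group an account's chronologically ordered tweets into fixed-size
    chunks, preserving posting order (chunk size is an assumption)."""
    return [tweets[i:i + chunk_size] for i in range(0, len(tweets), chunk_size)]

class ChunkSequenceClassifier(nn.Module):
    """Reads a sequence of chunk feature vectors with an LSTM and predicts
    an account-level label; feature extraction is left abstract."""
    def __init__(self, feat_dim=128, hidden=64, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_classes)

    def forward(self, chunk_feats):          # (batch, n_chunks, feat_dim)
        _, (h, _) = self.lstm(chunk_feats)
        return self.out(h[-1])               # account-level prediction

feats = torch.randn(4, 10, 128)   # 4 accounts, 10 chunks each, 128-d features
logits = ChunkSequenceClassifier()(feats)    # (4, 2)
```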
1911.06171
| false
| null |
The diversity of NLG applications poses challenges on the employment of unsupervised pre-training, yet it also raises more scientific questions for us to explore. In terms of the future development of this technology, we emphasize the importance of answering four questions: 1) How to introduce unsupervised pre-training into NLG tasks with cross-modal context? 2) How to design a generic pre-training algorithm to fit a wide range of NLG tasks? 3) How to reduce the computing resources required for large-scale pre-training? 4) What aspect of knowledge do the pre-trained models provide for better language generation?
In terms of the future development of this technology, we emphasize the importance of answering four questions: 1) How to introduce unsupervised pre-training into NLG tasks with cross-modal context? 2) How to design a generic pre-training algorithm to fit a wide range of NLG tasks? 3) How to reduce the computing resources required for large-scale pre-training? 4) What aspect of knowledge do the pre-trained models provide for better language generation?
|
Which future direction in NLG are discussed?
|
The answers are shown as follows:
* 1) How to introduce unsupervised pre-training into NLG tasks with cross-modal context?
* 2) How to design a generic pre-training algorithm to fit a wide range of NLG tasks?
* 3) How to reduce the computing resources required for large-scale pre-training?
* 4) What aspect of knowledge do the pre-trained models provide for better language generation?
|
null | false
| null |
Who is the Disney character named Grogu?
|
Grogu is a character in the fictional Star Wars universe otherwise known as "baby Yoda." This character first appears in The Mandalorian and then also in the Book of Boba Fett. Grogu is a young member of the same species as Yoda from the original Star Wars movies.
|
|
null | false
| null |
Identify which animal species is alive or extinct: Arctodus, Cross River Gorilla
|
Cross River Gorilla is alive, Arctodus is extinct
|
|
null | false
| null |
What does the term March Madness refer to in college basketball?
|
March Madness is another name for the NCAA tournament, which is a single-elimination basketball tournament that takes place each year in the month of March among the 64 teams (plus two additional "play-in" teams) selected as the top teams by a tournament committee.
|
|
1910.11790
| false
| null |
To create a final metric, we combine the individual components from Section 2 as features into a Support Vector Machine. The final results for our F1-score from this classification technique are 0.52 and 0.31 for the single and multi-turn data respectively.
We compare our results for both the single-turn and multi-turn experiments to the accuracies on the test data based on the BLEU score. We see an increase of 6% for our method with respect to the BLEU score in the single-turn data, and no change when using the multi-turn test set.
The final results for our F1-score from this classification technique are 0.52 and 0.31 for the single and multi-turn data respectively.
We compare our results for both the single-turn and multi-turn experiments to the accuracies on the test data based on the BLEU score.
|
which existing metrics do they compare with?
|
The answers are shown as follows:
* F1-score
* BLEU score
|
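A minimal scikit-learn sketch of this final step: the individual metric components are stacked into one feature vector per example and fed to an SVM. The number of components and the placeholder data are assumptions for illustration only.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
# Placeholder data: each row stacks the individual metric components
# from Section 2 into one feature vector (6 components is an assumption).
X_train, y_train = rng.random((500, 6)), rng.integers(0, 2, 500)
X_test, y_test = rng.random((100, 6)), rng.integers(0, 2, 100)

clf = SVC(kernel="rbf").fit(X_train, y_train)       # combine components in an SVM
print("F1:", f1_score(y_test, clf.predict(X_test)))
```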
null | false
| 214
|
The growing popularity of online interactions through social media has been shown to have both positive and negative impacts. While social media improves information sharing, it also facilitates the propagation of online harassment, including hate speech. These negative experiences can have a measurable negative impact on users. Recently, the Pew Research Center BIBREF0 reported that “roughly four-in-ten Americans have personally experienced online harassment, and 63% consider it a major problem.”
To address the growing problem of online hate, an extensive body of work has focused on developing automatic hate speech detection models and datasets BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8. However, simply detecting and blocking hate speech or suspicious users often has limited ability to prevent these users from turning to other social media platforms to continue to engage in hate speech, as can be seen in the large move of individuals blocked from Twitter to Gab BIBREF9. What's more, such a strategy is often at odds with the concept of free speech. As reported by the Pew Research Center BIBREF0, “Despite this broad concern over online harassment, 45% of Americans say it is more important to let people speak their minds freely online; a slightly larger share (53%) feels that it is more important for people to feel welcome and safe online.” The special rapporteurs representing the Office of the United Nations High Commissioner for Human Rights (OHCHR) have recommended that “The strategic response to hate speech is more speech.” BIBREF10 They encourage changing what people think instead of merely changing what they do, so they advocate more speech that educates about cultural differences, diversity, and minorities as a better strategy to counter hate speech.
Therefore, in order to encourage strategies of countering online hate speech, we propose a novel task of generative hate speech intervention and introduce two new datasets for this task. Figure FIGREF5 illustrates the task. Our datasets consist of 5K conversations retrieved from Reddit and 12K conversations retrieved from Gab. Distinct from existing hate speech datasets, our datasets retain their conversational context and introduce human-written intervention responses. The conversational context and intervention responses are critical in order to build generative models to automatically mitigate the spread of these types of conversations.
To summarize, our contributions are three-fold:
We introduce the generative hate speech intervention task and provide two fully-labeled hate speech datasets with human-written intervention responses.
Our data is collected in the form of conversations, providing better context.
The two data sources, Gab and Reddit, are not well studied for hate speech. Our datasets fill this gap.
Due to our data collecting strategy, all the posts in our datasets are manually labeled as hate or non-hate speech by Mechanical Turk workers, so they can also be used for the hate speech detection task. The performance of commonly-used classifiers on our datasets is shown in Section SECREF6.
Therefore, in order to encourage strategies of countering online hate speech, we propose a novel task of generative hate speech intervention and introduce two new datasets for this task.
|
What do the authors do to encourage strategies for countering online hate speech?
|
The authors propose a novel task of generative hate speech intervention and introduce two new datasets for this task.
|
null | false
| null |
Which of the following are rock bands (e.g., garage rock, alternative, indie, etc.): Pavement, Nirvana, The Beatles, The Who, Led Zeppelin, Lynyrd Skynyrd, Pearl Jam, The Doors, dirt, grass, co2, hamburgers
|
The following are rock bands: Pavement, Nirvana, The Beatles, The Who, Led Zeppelin, Lynyrd Skynyrd, Pearl Jam, The Doors
|
|
null | false
| null |
What changes need to be made in the H-1B lottery process?
|
The H-1B filing procedure needs to be modified to include some reservation requirements. A certain number of slots should be set aside for candidates with higher merit who are hired by tier-1 businesses. Additionally, the government should not recognize multiple submissions, because the same people are filing large numbers of H-1B applications. H-1B applicants must, by law, be working on a project, and their customer information and pay stubs from the previous six months should be verified during the application process. Instead of conducting additional draws later, consider selecting a few more candidates during the initial draw and providing them with a waitlist number so they will at least be aware of any changes to their initial selection.
|
|
null | false
| null |
Which are the U.S. cities and which are U.S. states in this list? Montana, Chicago, New York, Atlanta, Honolulu, Florida, Austin, South Dakota
|
U.S. cities in the list: Chicago, New York, Atlanta, Honolulu, Austin
U.S. states in the list: Montana, New York, Florida, South Dakota
|
|
null | false
| null |
John Watson was an early American painter, born in Scotland.
|
Change this sentence to present tense
|
John Watson is an early American painter, born in Scotland.
|
null | false
| 272
|
With the rapid growth of the internet, huge amounts of text data are generated in social networks, online shopping and news websites, etc. These data create demand for powerful and efficient text analysis techniques. Probabilistic topic models such as Latent Dirichlet Allocation (LDA) BIBREF0 are popular approaches for this task, discovering latent topics from text collections. Many conventional topic models discover topics purely based on word occurrences, ignoring the meta information (a.k.a. side information) associated with the content. In contrast, when we humans read text it is natural to leverage meta information to improve our comprehension, including categories, authors, timestamps, the semantic meanings of the words, etc. Therefore, topic models capable of using meta information should yield improved modelling accuracy and topic quality.
In practice, various kinds of meta information are available at the document level and the word level in many corpora. At the document level, labels of documents can be used to guide topic learning so that more meaningful topics can be discovered. Moreover, it is highly likely that documents with common labels discuss similar topics, which could further result in similar topic distributions. For example, if we use authors as labels for scientific papers, the topics of the papers published by the same researcher can be closely related.
At the word level, different semantic/syntactic features are also accessible. For example, there are features regarding word relationships, such as synonyms obtained from WordNet BIBREF1, word co-occurrence patterns obtained from a large corpus, and linked concepts from knowledge graphs. It is preferable that words having similar meaning but different morphological forms, like “dog” and “puppy”, are assigned to the same topic, even if they barely co-occur in the modelled corpus. Recently, word embeddings generated by GloVe BIBREF2 and word2vec BIBREF3 have attracted a lot of attention in natural language processing and related fields. It has been shown that word embeddings can capture both the semantic and syntactic features of words so that similar words are close to each other in the embedding space. It seems reasonable to expect that these word embeddings will improve topic modelling BIBREF4, BIBREF5.
Conventional topic models can suffer from a large performance degradation over short texts (e.g., tweets and news headlines) because of insufficient word co-occurrence information. In such cases, meta information of documents and words can play an important role in analysing short texts by compensating for the lost information in word co-occurrences. At the document level, for example, tweets are usually associated with hashtags, users, locations, and timestamps, which can be used to alleviate the data sparsity problem. At the word level, word semantic similarity and embeddings obtained or trained on a large external corpus (e.g., Google News or Wikipedia) have proven useful in learning meaningful topics from short texts BIBREF6, BIBREF7.
The benefit of using document and word meta information separately is shown in several models such as BIBREF8, BIBREF9, BIBREF5. However, in existing models this is usually not efficient enough due to non-conjugacy and/or complex model structures. Moreover, only one kind of meta information (either at document level or at word level) is used in most existing models. In this paper, we propose MetaLDA, a topic model that can effectively and efficiently leverage arbitrary document and word meta information encoded in binary form. Specifically, the labels of a document in MetaLDA are incorporated in the prior of the per-document topic distributions. If two documents have similar labels, their topic distributions should be generated with similar Dirichlet priors. Analogously, at the word level, the features of a word are incorporated in the prior of the per-topic word distributions, which encourages words with similar features to have similar weights across topics. Therefore, both document and word meta information, if and when they are available, can be flexibly and simultaneously incorporated using MetaLDA. MetaLDA has the following key properties:
We conduct extensive experiments with several real datasets including regular and short texts in various domains. The experimental results demonstrate that MetaLDA achieves improved performance in terms of perplexity, topic coherence, and running time.
In this paper, we propose MetaLDA, a topic model that can effectively and efficiently leverage arbitrary document and word meta information encoded in binary form.
|
What topic model do the authors propose?
|
It is a topic model that can effectively and efficiently leverage arbitrary document and word meta information encoded in binary form.
|
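To illustrate the document-level half of this idea, the sketch below derives a per-document Dirichlet prior from its binary label vector via a log-linear combination of per-label topic weights, so documents sharing labels get similar priors. The log-linear form and all dimensions are assumptions based on the description above, not MetaLDA's exact parameterization.

```python
import numpy as np

def doc_topic_prior(doc_labels, label_weights):
    """Build a per-document Dirichlet prior from binary labels (a sketch).
    doc_labels:    (n_labels,) binary vector for one document
    label_weights: (n_labels, n_topics) positive weights per label/topic
    """
    # Log-linear combination over the document's active labels
    alpha = np.exp(doc_labels @ np.log(label_weights))
    return alpha  # parameters of the per-document Dirichlet prior

rng = np.random.default_rng(0)
labels = np.array([1, 0, 1])                  # document has labels 0 and 2
weights = rng.gamma(2.0, 1.0, size=(3, 5))    # 3 labels, 5 topics (assumed)
theta = rng.dirichlet(doc_topic_prior(labels, weights))  # topic distribution
```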
null | false
| null |
Who are the top 5 non-fiction writers of all time?
|
1) Margaret Atwood
2) James Baldwin
3) Mark Bowden
4) Ta-Nehisi Coates
5) Robin Cook
|
|
null | false
| null |
What are the words of House Tarly?
|
"First in Battle"
|
|
null | false
| null |
What is a lepton?
|
A lepton is an elementary particle that interacts via the weak force and has a half-integer spin. Examples of leptons include the electron, muon and tau particles and their corresponding neutrinos - the electron neutrino, the muon neutrino and the tau neutrino.
|
|
null | false
| null |
Give me a list of different cat species that are good for people with allergies.
|
There are a few different breeds of cats who produce less of the common allergens. Some of these include the Hairless Sphynx, Siberian, Balinese, Bengal, Cornish Rex, Devon Rex, Russian Blue, Oriental Shorthair, LaPerm, and the Siamese breeds.
|
|
null | false
| null |
Daal dhokli (Gujarati: દાળ ઢોકળી) is an Indian dish common in Rajasthani and Gujarati cuisine, made by boiling wheat flour pieces in a pigeon pea stew. A similar preparation is called varanfal (Marathi: वरण फळ), or chakolyaa (Marathi: चकोल्या) in Marathi.
Preparation
Daal dhokli can be made with various types of lentils (or daal). Some commonly used daals include toor (pigeon peas), masoor (red lentil), and moong (mung bean). The daal is then pressure-cooked with water and tempered with spices to create a stew. The dhokli, or wheat flour pieces, are made by kneading a dough of wheat flour, salt, and water, rolling it, and cutting into pieces.
Some variations add peanuts. Other flavors in the dish can come from kokum, jaggery, and spices like cumin and asafoetida.
|
Given a reference text about Daal dhokli, tell me three commonly used daals.
|
Three commonly used daals are toor, masoor, and moong.
|
null | false
| null |
Tell me whether these are pasta types or pizza types: macaroni, rigatoni, margherita, pepperoni, orzo, spaghetti, stuffed crust, Hawaiian
|
Pasta: macaroni, rigatoni, orzo, spaghetti
Pizza: margherita, pepperoni, stuffed crust, Hawaiian
|
|
null | false
| null |
Which Baseball team won the 2016 World Series?
|
The Chicago Cubs won the World Series of Baseball in 2016.
|
|
null | false
| null |
The New Joint Endeavor for Welfare, Education, and Liberation, or New JEWEL Movement (NJM), was a Marxist–Leninist vanguard party in the Caribbean island nation of Grenada that was led by Maurice Bishop.
Established in 1973, the NJM issued its manifesto prior to the granting of independence to Grenada in 1974. The movement took control of the country with a successful bloodless revolution in 1979 and ruled by decree as the People's Revolutionary Government until 1983. In 1983, Bishop was killed by paramilitaries affiliated with hard-liners in his own party. This led to a military government, which was deposed by the US military in a 1983 invasion.
|
Extract all the dates from this paragraph provided.
|
1973, 1974, 1979, 1983
|
null | false
| 134
|
We evaluate our method on natural language inference (NLI), paraphrase identification (PI), and sentiment classification. We also conduct analysis on gate values and experiments on model variants. For detailed experimental settings, we refer readers to the supplemental material.
For the NLI and PI tasks, there exists recent work specializing in sentence pair classification. However in this work we confine our model to the architecture that encodes each sentence using a shared encoder without any inter-sentence interaction, in order to focus on the effectiveness of the models in extracting semantics. But note that the applicability of CAS-LSTM is not limited to sentence encoding based approaches.
We use the Quora Question Pairs dataset in evaluating the performance of our method on the PI task.
|
What dataset do the authors use in evaluating the performance of their method on the PI task?
|
Quora Question Pairs dataset
|
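The setup described in this record (a shared sentence encoder with no inter-sentence interaction) might look like the sketch below, with a plain LSTM standing in for the CAS-LSTM encoder and standard matching features feeding a classifier. All sizes and the feature combination are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class SharedEncoderPairClassifier(nn.Module):
    """One shared encoder maps each sentence to a vector independently;
    a classifier reads concatenation, difference, and product features."""
    def __init__(self, vocab=10_000, emb=100, hidden=128, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.encoder = nn.LSTM(emb, hidden, batch_first=True)
        self.classifier = nn.Linear(4 * hidden, n_classes)

    def encode(self, x):                     # (batch, seq_len) token ids
        _, (h, _) = self.encoder(self.embed(x))
        return h[-1]                         # (batch, hidden)

    def forward(self, s1, s2):
        u, v = self.encode(s1), self.encode(s2)
        feats = torch.cat([u, v, torch.abs(u - v), u * v], dim=-1)
        return self.classifier(feats)

s1 = torch.randint(0, 10_000, (8, 20))
s2 = torch.randint(0, 10_000, (8, 20))
logits = SharedEncoderPairClassifier()(s1, s2)   # (8, 2)
```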