| paper_id (stringlengths, 10 to 10, ⌀) | yes_no (bool, 2 classes) | paper_index (int64, 0 to 519, ⌀) | evidence (stringlengths, 0 to 37.7k) | question (stringlengths, 4 to 11.7k) | answer (stringlengths, 1 to 26k) |
|---|---|---|---|---|---|
null | false
| null |
Fuldera is a village in the Val Müstair municipality in the district of Inn in the Swiss canton of Graubünden. In 2009 Fuldera merged with Lü, Müstair, Santa Maria Val Müstair, Tschierv and Valchava to form the municipality of Val Müstair.
History
Fuldera is first mentioned in 1322 as Faldiera.
Geography
Fuldera village
Fuldera had an area, as of 2006, of 13.2 km2 (5.1 sq mi). Of this area, 24.9% is used for agricultural purposes, while 37.2% is forested. Of the rest of the land, 1.4% is settled (buildings or roads) and the remainder (36.4%) is non-productive (rivers, glaciers or mountains).
The village is located in the Val Müstair sub-district (now Val Müstair municipality) of the Inn district. It is located on the right side of the Rombach. It consists of the larger Fuldera-Dora and the smaller Fuldera-Daint.
|
What's the area of Fuldera village?
|
The Fuldera village encompasses an area of 13.2 km2 or 5.1 sq mi.
|
null | false
| null |
Name some famous active NBA players
|
Some famous active NBA players are LeBron James, Luka Doncic, Joel Embiid, Nikola Jokic, Kawhi Leonard, Zion Williamson, and Kevin Durant.
|
|
null | false
| null |
Lucy Thurber is an American playwright based in New York City. She is the recipient of the first Gary Bonasorte Memorial Prize for Playwriting, a Lilly Award and a 2014 OBIE Award for The Hill Town Plays.
Lucy Thurber is the author of: Where We're Born, Ashville, Killers & Other Family, Stay, Bottom of the World, Monstrosity, Scarcity, The Locus, The Insurgents, Dillingham City and other plays.
Five of her plays, while standing alone as individual works, also form a cycle known as The Hill Town Plays. Each play in the cycle considers an important moment in the life of the main character, beginning with childhood in rural Western Massachusetts, and then progressing through college, coming to terms with her sexual identity, and onto adulthood. The five plays are: Where We're Born, Ashville, Killers and Other Family, Scarcity, and Stay. In 2013 they were produced all together by David Van Asselt of Rattlestick Playwrights Theater and ran simultaneously at several theatres in the West Village in New York City. She, along with Rattlestick Playwrights Theater received a special citation from the Obie Awards in 2014 for their collaborative presentation of the works.
|
Name the five plays of The Hill Town Plays by Lucy Thurber.
|
The five plays are: Where We're Born, Ashville, Killers and Other Family, Scarcity, and Stay.
|
null | false
| 32
|
Propaganda has been tackled mostly at the article level. BIBREF3 created a corpus of news articles labelled as propaganda, trusted, hoax, or satire. BIBREF4 experimented with a binarized version of that corpus: propaganda vs. the other three categories. BIBREF5 annotated a large binary corpus of propagandist vs. non-propagandist articles and proposed a feature-based system for discriminating between them. In all these cases, the labels were obtained using distant supervision, assuming that all articles from a given news outlet share the label of that outlet, which inevitably introduces noise BIBREF6.
A related field is that of computational argumentation which, among others, deals with some logical fallacies related to propaganda. BIBREF7 presented a corpus of Web forum discussions with instances of ad hominem fallacy. BIBREF8, BIBREF9 introduced Argotario, a game to educate people to recognize and create fallacies, a by-product of which is a corpus with $1.3k$ arguments annotated with five fallacies such as ad hominem, red herring and irrelevant authority, which directly relate to propaganda.
Unlike BIBREF8, BIBREF9, BIBREF7, our corpus uses 18 techniques annotated on the same set of news articles. Moreover, our annotations aim at identifying the minimal fragments related to a technique instead of flagging entire arguments.
The most relevant related work is our own, which is published in parallel to this paper at EMNLP-IJCNLP 2019 BIBREF10 and describes a corpus that is a subset of the one used for this shared task.
Propaganda has been tackled mostly at the article level.
|
At what level has propaganda been tackled mostly?
|
At the article level.
|
null | false
| 176
|
The pre-trained language model BERT BIBREF0 has led to a big breakthrough in various kinds of natural language understanding tasks. Ideally, people can start from a pre-trained BERT checkpoint and fine-tune it on a specific downstream task. However, the original BERT models are too memory-exhaustive and latency-prohibitive to be served in embedded devices or CPU-based online environments. As the memory and latency constraints vary in different scenarios, the pre-trained BERT model should be adaptive to different requirements with accuracy retained to the largest extent. Existing BERT-oriented model compression solutions largely depend on knowledge distillation BIBREF1, which is inefficient and resource-consuming because a large training corpus is required to learn the behaviors of a teacher. For example, DistilBERT BIBREF2 is re-trained on the same corpus as pre-training a vanilla BERT from scratch; and TinyBERT BIBREF3 utilizes expensive data augmentation to fit the distillation target. The costs of these model compression methods are as large as pre-training and unaffordable for low-resource settings. Therefore, it is straightforward to ask: can we design a lightweight method to generate adaptive models with comparable accuracy using significantly less time and resource consumption? In this paper, we propose LadaBERT (Lightweight adaptation of BERT through hybrid model compression) to tackle this question. Specifically, LadaBERT is based on an iterative hybrid model compression framework consisting of weight pruning, matrix factorization and knowledge distillation. Initially, the architecture and weights of the student model are inherited from the BERT teacher. In each iteration, the student model is first compressed by a small ratio based on weight pruning and matrix factorization, and is then fine-tuned under the guidance of the teacher model through knowledge distillation. Because weight pruning and matrix factorization help to generate better initial and intermediate status in the knowledge distillation iterations, the accuracy and efficiency of model compression can be greatly improved.
We conduct extensive experiments on five public datasets of natural language understanding. As an example, the performance comparison of LadaBERT and state-of-the-art models on MNLI-m dataset is illustrated in Figure FIGREF1. We can see that LadaBERT outperforms other BERT-oriented model compression baselines at various model compression ratios. Especially, LadaBERT-1 outperforms BERT-PKD significantly under $2.5\times $ compression ratio, and LadaBERT-3 outperforms TinyBERT under $7.5\times $ compression ratio while the training speed is accelerated by an order of magnitude.
The rest of this paper is organized as follows. First, we summarize related work on model compression and its applications to BERT in Section SECREF2. Then, the methodology of LadaBERT is introduced in Section SECREF3, and experimental results are presented in Section SECREF4. Finally, we conclude this work and discuss future work in Section SECREF5.
We can see that LadaBERT outperforms other BERT-oriented model compression baselines at various model compression ratios.
|
Does the LadaBERT outperform other BERT-oriented model compression baselines at various model compression ratios?
|
Yes, it does.
|
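The LadaBERT row above describes an iterative loop of small weight-pruning and matrix-factorization steps followed by distillation-guided fine-tuning. As a rough illustration only, the following Python sketch shows the two weight-compression operations on a single toy matrix; the matrix size, pruning ratio, and rank are arbitrary assumptions, and the knowledge-distillation step from the paper is omitted.

```python
import numpy as np

def magnitude_prune(w: np.ndarray, ratio: float) -> np.ndarray:
    """Zero out the fraction `ratio` of entries with the smallest magnitude."""
    threshold = np.quantile(np.abs(w), ratio)
    return np.where(np.abs(w) < threshold, 0.0, w)

def svd_factorize(w: np.ndarray, rank: int):
    """Approximate w by two low-rank factors so that w ~= a @ b."""
    u, s, vt = np.linalg.svd(w, full_matrices=False)
    return u[:, :rank] * s[:rank], vt[:rank, :]

# Toy stand-in for one linear layer of a BERT-like student model (size is made up).
w = np.random.randn(768, 768)
w = magnitude_prune(w, ratio=0.1)    # small compression step in one iteration
a, b = svd_factorize(w, rank=384)    # the factorized pair replaces the dense layer
print("relative approximation error:", np.linalg.norm(w - a @ b) / np.linalg.norm(w))
```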
null | false
| null |
How fast do you move when riding a wave?
|
The average human runs about 6 to 8 mph depending on age, gender, fitness, and training. Average humans can sprint about 14 mph (well trained individuals run much faster).
The speed of a breaking wave varies depending on the size and period of the particular wave energy; however, on average a wave moves about 10 to 15 mph.
You ride waves about as fast as you can run, but while standing on a hydroplaning board, which is why the Beach Boys sing "Catch a wave and you're sitting on top of the world".
|
|
null | false
| null |
What is the nickname for Spokane, Washington?
|
Spokane's official nickname is "Hooptown USA" which it received from hosting Hoopfest which is the world's largest basketball tournament. Spokane is also known as the "Lilac City."
|
|
null | false
| null |
last 5 prime ministers of India
|
Narendra Modi
Manmohan Singh
Atal Bihari Vajpayee
H. D. Deve Gowda
P. V. Narasimha Rao
|
|
null | false
| null |
Racial categories have historically been used as a way to enable an oppressive figure or group to discriminate against other groups or individuals which were seen as different from that of the oppressor. In nineteenth and early twentieth century Europe, artwork was a common form of racialization which targeted countries in the Middle East and Asia. The artwork, predominantly paintings, were portrayed in order to instill prejudice in the Western populations through sexualizing and manipulating images. One of the most prominent examples of Orientalist work in art is a piece by Eugène Delacroix titled Women of Algiers in their Apartment. Dating back to 1834, it portrays three women resting in a harem in exotic clothing while an African woman is dressed in plain clothing, depicting her role as a servant. Fine textiles, hookahs, and other paraphernalia adorn the room, which represents a European fantasy of an exotic scene. Attempts to portray these cultures as strange, foreign and exotic through Orientalism led to intolerance towards the Arab and Asian communities in Europe and the United States. Others argue that Delacroix, who travelled in North Africa sketching extensively, was depicting a realistic scene of the era based on his first-hand knowledge and experience. In such an interpretation the clothing, for example, is consistent with the times, as Arab North Africans dressed differently from Europeans, and kept black slaves who would not have been treated as equals.
|
How are racial categories used historically?
|
Historically, oppressive groups use racial categories to discriminate against other groups that are different from them.
|
null | false
| null |
Do humans have stripes?
|
Humans have "stripes" called Blaschko's lines. They are generally invisible to the human eye but some people's lines may be visible due to skin conditions or if they are a chimera, which will cause the lines to express different levels of melanin and make them more readily visible.
|
|
null | false
| 299
|
Extracting data elements such as study descriptors from publication full texts is an essential step in a number of tasks including systematic review preparation BIBREF0 , construction of reference databases BIBREF1 , and knowledge discovery BIBREF2 . These tasks typically involve domain experts identifying relevant literature pertaining to a specific research question or a topic being investigated, identifying passages in the retrieved articles that discuss the sought after information, and extracting structured data from these passages. The extracted data is then analyzed, for example to assess adherence to existing guidelines BIBREF1 . Figure FIGREF2 shows an example text excerpt with information relevant to a specific task (assessment of adherence to existing guidelines BIBREF1 ) highlighted.
Extracting the data elements needed in these tasks is a time-consuming and at present a largely manual process which requires domain expertise. For example, in systematic review preparation, information extraction generally constitutes the most time consuming task BIBREF4 . This situation is made worse by the rapidly expanding body of potentially relevant literature with more than one million papers added into PubMed each year BIBREF5 . Therefore, data annotation and extraction presents an important challenge for automation.
A typical approach to automated identification of relevant information in biomedical texts is to infer a prediction model from labeled training data – such a model can then be used to assign predicted labels to new data instances. However, obtaining training data for creating such prediction models can be very costly as it involves the step which these models are trying to automate – manual data extraction. Furthermore, depending on the task at hand, the types of information being extracted may vary significantly. For example, in systematic reviews of randomized controlled trials this information generally includes the patient group, the intervention being tested, the comparison, and the outcomes of the study (PICO elements) BIBREF4 . In toxicology research the extraction may focus on routes of exposure, dose, and necropsy timing BIBREF1 . Previous work has largely focused on identifying specific pieces of information such as biomedical events BIBREF6 or PICO elements BIBREF0 . However, depending on the domain and the end goal of the extraction, these may be insufficient to comprehensively describe a given study.
Therefore, in this paper we focus on unsupervised methods for identifying text segments (such as sentences or fixed length sequences of words) relevant to the information being extracted. We develop a model that can be used to identify text segments from text documents without labeled data and that only requires the current document itself, rather than an entire training corpus linked to the target document. More specifically, we utilize representation learning methods BIBREF7 , where words or phrases are embedded into the same vector space. This allows us to compute semantic relatedness among text fragments, in particular sentences or text segments in a given document and a short description of the type of information being extracted from the document, by using similarity measures in the feature space. The model has the potential to speed up identification of relevant segments in text and therefore to expedite annotation of domain specific information without reliance on costly labeled data.
We have developed and tested our approach on a reference database of rodent uterotropic bioassays BIBREF1 which are labeled according to their adherence to test guidelines set forth in BIBREF3 . Each study in the database is assigned a label determining whether or not it met each of six main criteria defined by the guidelines; however, the database does not contain sentence-level annotations or any information about where the criteria was mentioned in each publication. Due to the lack of fine-grained annotations, supervised learning methods cannot be easily applied to aid annotating new publications or to annotate related but distinct types of studies. This database therefore presents an ideal use-case for unsupervised approaches.
While our approach doesn't require any labeled data to work, we use the labels available in the dataset to evaluate the approach. We train a binary classification model for identifying publications which satisfied given criteria and show the model performs better when trained on relevant sentences identified by our method than when trained on sentences randomly picked from the text. Furthermore, for three out of the six criteria, a model trained solely on the relevant sentences outperforms a model which utilizes full text. The results of our evaluation support the intuition that semantic relatedness to criteria descriptions can help in identifying text sequences discussing sought after information.
There are two main contributions of this work. We present an unsupervised method that employs representation learning to identify text segments from publication full text which are relevant to/contain specific sought after information (such as number of dose groups). In addition, we explore a new dataset which hasn't been previously used in the field of information extraction.
The remainder of this paper is organized as follows. In the following section we provide more details of the task and the dataset used in this study. In Section SECREF3 we describe our approach. In Section SECREF4 we evaluate our model and discuss our results. In Section SECREF5 we compare our work to existing approaches. Finally, in Section SECREF6 we provide ideas for further study.
We have developed and tested our approach on a reference database of rodent uterotropic bioassays2 (Kleinstreuer et al., 2016) which are labeled according to their adherence to test guidelines set forth in (OECD, 2007).
|
What kind of database did they test the approach on?
|
A reference database of rodent uterotropic bioassays.
|
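The row above ranks text segments by their semantic relatedness to a short description of the sought-after information. Purely as an illustration of that ranking step, the sketch below scores toy sentences against a made-up criterion description using TF-IDF vectors and cosine similarity; the paper itself uses learned Doc2Vec-style embeddings rather than TF-IDF, and the sentences here are invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy document sentences and a short description of the sought-after criterion.
sentences = [
    "Animals were assigned to one of six dose groups plus a vehicle control.",
    "The study was funded by a national research agency.",
    "Uterine weights were recorded at necropsy 24 hours after the final dose.",
]
criterion = "number of dose groups used in the study"

# Stand-in for the embedding step: TF-IDF vectors instead of learned embeddings.
vectorizer = TfidfVectorizer().fit(sentences + [criterion])
sent_vecs = vectorizer.transform(sentences)
crit_vec = vectorizer.transform([criterion])

# Rank sentences by similarity to the criterion description.
scores = cosine_similarity(sent_vecs, crit_vec).ravel()
for score, sent in sorted(zip(scores, sentences), reverse=True):
    print(f"{score:.3f}  {sent}")
```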
null | false
| null |
On 19 March 2023, Swiss investment bank UBS Group AG agreed to buy Credit Suisse for CHF 3 billion (US$3.2 billion) in an all-stock deal brokered by the government of Switzerland and the Swiss Financial Market Supervisory Authority. The Swiss National Bank supported the deal by providing more than CHF 100 billion (US$104 billion) in liquidity to UBS following its takeover of Credit Suisse's operations, while the Swiss government provided a guarantee to UBS to cover losses of up to CHF 9 billion (US$9.6 billion) over the short term. Additionally, CHF 16 billion (US$17.2 billion) of Additional Tier 1 bonds were written down to zero.
Credit Suisse is a globally systemically important bank whose investment banking unit, First Boston, had been recently tarnished by a series of high-profile scandals. The banking crisis in the United States had caused fear among global investors and led to panic over other possibly troubled banks. Credit Suisse's share price plunged after the leading shareholder ruled out further investment into the bank due to regulatory issues. The deal was rapidly agreed upon and announced just before the Asian financial markets opened on Monday morning in order to prevent "market shaking" turmoil in the global financial markets. Soon afterward, central banks across the world announced USD liquidity measures to try and ease wider market panic and avoid a wider banking crisis.
|
Why are UBS and Credit Suisse merging?
|
On 19 March 2023, the Swiss wealth management and investment bank UBS Group AG agreed to buy Credit Suisse for CHF 3 billion (US$3.2 billion) in an all-stock deal brokered by the government of Switzerland and the Swiss Financial Market Supervisory Authority. The Swiss National Bank supported the deal by providing more than CHF 100 billion (US$104 billion) in liquidity to UBS following its takeover of Credit Suisse's operations, while the Swiss government provided a guarantee to UBS to cover losses of up to CHF 9 billion (US$9.6 billion) over the short term. Additionally, CHF 16 billion (US$17.2 billion) of Additional Tier 1 bonds were written down to zero.
Credit Suisse is a globally systemically important bank whose investment banking unit, First Boston, had been recently tarnished by a series of high-profile scandals. The banking crisis in the United States had caused fear among global investors and led to panic over other possibly troubled banks. Credit Suisse's share price plunged after the leading shareholder ruled out further investment into the bank due to regulatory issues. The deal was rapidly agreed upon and announced just before the Asian financial markets opened on Monday morning in order to prevent "market shaking" turmoil in the global financial markets. Soon afterward, central banks across the world announced USD liquidity measures to try and ease wider market panic and avoid a wider banking crisis.
|
null | false
| null |
What is the tallest building in the San Francisco Bay Area?
|
The tallest building in the SF Bay Area is the Salesforce Tower.
|
|
null | false
| null |
In the 1050s and early 1060s, William became a contender for the throne of England held by the childless Edward the Confessor, his first cousin once removed. There were other potential claimants, including the powerful English earl Harold Godwinson, whom Edward named as king on his deathbed in January 1066. Arguing that Edward had previously promised the throne to him and that Harold had sworn to support his claim, William built a large fleet and invaded England in September 1066. He decisively defeated and killed Harold at the Battle of Hastings on 14 October 1066. After further military efforts, William was crowned king on Christmas Day, 1066, in London. He made arrangements for the governance of England in early 1067 before returning to Normandy. Several unsuccessful rebellions followed, but William's hold was mostly secure on England by 1075, allowing him to spend the greater part of his reign in continental Europe.
|
Why did William the Conqueror, a Frenchman, become the King of England?
|
William the Conqueror was from Normandy, located in northwest France. The King of England in the late 1050s and early 1060s was Edward the Confessor. Edward was William's cousin. Since Edward was childless, there were multiple people who claimed that they were the rightful heir and should become king. These conflicting claims gave rise to armed conflict. William invaded England, where he won the Battle of Hastings, defeating earl Harold Godwinson. A number of battles followed, but William became King of England late in 1066. He dealt with multiple rebellions over the next decade, but retained his power. Ultimately he was able to return to Normandy and rule England from there.
|
null | false
| null |
Tell me whether the following are Pokemon or Overwatch characters: Hanzo, Flareon, Mewtwo, Mei, Dratini, Kiriko
|
Flareon, Mewtwo, and Dratini are Pokemon. Hanzo, Mei, and Kiriko are Overwatch characters.
|
|
1804.04225
| false
| null |
The clinical notes we used for the experiment are provided by domain experts, consisting of 1,160 physician logs of Medical ICU admission requests at a tertiary care center affiliated to Mount Sinai. Prospectively collected over one year, these semi-structured logs contain free-text descriptions of patients' clinical presentations, medical history, and required critical care-level interventions. We identify 818 abbreviations and find 42,506 candidates using domain-specific knowledge (i.e., www.allacronym.com/_medical). The enriched corpus contains 42,506 Wikipedia articles, each of which corresponds to one candidate, 6 research papers and 2 critical care medicine textbooks, besides our raw ICU data.
The clinical notes we used for the experiment are provided by domain experts, consisting of 1,160 physician logs of Medical ICU admission requests at a tertiary care center affiliated to Mount Sinai.
The enriched corpus contains 42,506 Wikipedia articles, each of which corresponds to one candidate, 6 research papers and 2 critical care medicine textbooks, besides our raw ICU data.
|
Which dataset do they use to build their model?
|
The answers are shown as follows:
* 1,160 physician logs of Medical ICU admission requests
* 42,506 Wikipedia articles
* 6 research papers and 2 critical care medicine textbooks
|
null | false
| null |
Write an email to Visa alerting them that I am no longer interested in the position I’m interviewing for since I have accepted another offer. I have a lot of respect for the company and their employee experience is high!
|
Thank you so much for this opportunity to continue interviewing! I have really enjoyed getting to know Visa more during this process, but unfortunately must remove myself from the candidate pool at this time. As I shared earlier, I had competing offers with earlier timelines.
This in no way reflects my interest in Visa-- now is such an amazing time to join a function and company in the midst of transformation. It is also so heartening to hear how positive the employee experience is. This is to say that I sincerely hope to stay in touch and that our paths can cross again later down the line.
Thanks again for your coordination!
|
|
null | false
| null |
Write a list of items that are “must buys” at Costco
|
1. Toilet Paper
2. Paper Towels
3. TVs
4. Jewelry
5. Water Bottles
6. Any Meat Product
7. Avocados
8. Alcohol
9. Giant Shampoo & Conditioner
10. Clothes
|
|
null | false
| 95
|
Conflating hatred content with offensive or harmless language causes online automatic hate speech detection tools to flag user-generated content incorrectly. Not addressing this problem may bring about severe negative consequences for both platforms and users, such as a decrease in platforms' reputation or user abandonment. Here, we propose a transfer learning approach that takes advantage of the pre-trained language model BERT to enhance the performance of a hate speech detection system and to generalize it to new datasets. To that end, we introduce new fine-tuning strategies to examine the effect of different layers of BERT in the hate speech detection task. The evaluation results indicate that our model outperforms previous works by exploiting the syntactic and contextual information embedded in different transformer encoder layers of the BERT model using a CNN-based fine-tuning strategy. Furthermore, examining the results shows the ability of our model to detect some biases in the process of collecting or annotating datasets. It can be a valuable clue for using the pre-trained BERT model to alleviate bias in hate speech datasets in future studies, by investigating a mixture of contextual information embedded in the BERT's layers and a set of features associated with the different types of biases in the data.
Furthermore, examining the results shows the ability of our model to detect some biases in the process of collecting or annotating datasets.
|
Does the model proposed by the author have the ability to detect some biases?
|
Yes, it does.
|
1610.05243
| false
| null |
Both systems were optimized on the tst2014 using Minimum error rate training BIBREF20 . A detailed description of the systems can be found in BIBREF21 .
Both systems were optimized on the tst2014 using Minimum error rate training BIBREF20 . A detailed description of the systems can be found in BIBREF21 .
|
How is the PBMT system trained?
|
The answers are shown as follows:
* systems were optimized on the tst2014 using Minimum error rate training BIBREF20
|
null | false
| null |
Give me a list of things to buy a 12 year old boy
|
1. Vouchers
2. An experience
3. Movie tickets
4. Cash is always king!
|
|
null | false
| null |
Write a review of Neil Gaiman's book American Gods.
|
What a phenomenal book. I can see how people would call this a "more mature Percy Jackson", but to me American Gods was much more than just mythology fanfiction. Overall, the imaginative story, noir setting, mysterious plot, witty dialogue, and incredible characters made this one of the best books I've read this year.
Starting with the setting and premise - the idea of gods in the modern world has become a setting of its own. Most notably, this story reminds me of a western version of Noragami, where a god's strength stems from their worshipers. It's hilarious to see deities of old living mundane lives, frankly quite charming. Our characters navigate through a spooky setting, but it's never overbearing nor depressing.
But what really makes this book special are the characters and dialogue. Shadow and Wednesday are an amazing duo; between Shadow's quiet reflections and Wednesday's blunt terrorizing, the pair travel across rural America, running through a plethora of interesting icons and people.
|
|
null | false
| null |
Where should I go this summer?
|
It depends. For many people, a summer destination is all about going to a nice beach, enjoying the sunshine and relaxing. For others, summer vacation is about going on a new adventure to explore a nice country, city or a cool nearby village.
Here are some ideas for a summer vacation if you are interested in traveling to Europe:
Road trip across the Amalfi coast for one week.
Cruising around the Croatian islands.
Ancient history tour in Rome, Vatican City & Athens.
Enjoy the beautiful beaches in Southern France.
There are many other beautiful places to visit but this will depend on your budget, duration and your interests.
|
|
1803.03786
| false
| null |
We feed the above-described hand-crafted features together with the task-specific embeddings learned by the deep neural network (a total of 1,892 attributes combined) into a Support Vector Machines (SVM) classifier BIBREF37. SVMs have proven to perform well in different classification settings, including in the case of small and noisy datasets.
We feed the above-described hand-crafted features together with the task-specific embeddings learned by the deep neural network (a total of 1,892 attributes combined) into a Support Vector Machines (SVM) classifier BIBREF37.
|
what classifiers were used in this paper?
|
The answers are shown as follows:
* Support Vector Machines (SVM) classifier
|
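The row above describes concatenating hand-crafted features with task-specific neural embeddings (1,892 attributes in total) and feeding them into an SVM. The sketch below illustrates only that final classification step; the two feature blocks are random stand-ins and their individual dimensions are invented, chosen so that their sum matches the 1,892 figure.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_examples = 200

# Stand-ins for the two feature blocks described in the row (dimensions are made up).
hand_crafted = rng.normal(size=(n_examples, 300))      # lexical/stylistic features
task_embeddings = rng.normal(size=(n_examples, 1592))  # learned by the neural network
labels = rng.integers(0, 2, size=n_examples)

# Concatenate the blocks (1,892 attributes combined) and train the SVM.
features = np.hstack([hand_crafted, task_embeddings])
clf = SVC(kernel="linear").fit(features, labels)
print("training accuracy:", clf.score(features, labels))
```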
null | false
| null |
What are tacos?
|
Tacos are Mexican food, made up of a tortilla with a stuffing of protein (meats) or veggies. You add toppings like cilantro, onions, and hot sauce to the tacos. Tacos are delicious and make a fabulous meal.
|
|
null | false
| null |
How many people were present in West Berlin when US President JFK delivered his famous "Ich bin ein Berliner" speech?
|
Approximately one million people were in attendance on June 26, 1963.
|
|
null | false
| null |
What is recency bias?
|
Recency bias is the overweighting of the results of more recent events when predicting what will happen in the future. For example, if I recently experienced a pandemic that dramatically affected my quality of life, I may tend to overestimate the likelihood of another pandemic in the near future.
|
|
null | false
| null |
Give me 5 ways to relax for an hour?
|
Here are a few suggestions
1. Go for a walk outside
2. Catch up on sleep
3. Read a book
4. Watch Netflix
5. Call a friend
|
|
null | false
| null |
Why are Dr. Seuss books so good?
|
Dr. Seuss books are written in a different style from traditional books. They are written with a play on words, have made-up words that are tongue twisters, and use carefully crafted silly rhyming and poetry to make the reading fun. Some of the most famous Dr. Seuss books are Green Eggs and Ham, The Cat in the Hat, and How the Grinch Stole Christmas. An example of the poetry and rhyming from a Dr. Seuss book is "one fish, two fish, red fish, blue fish" and "look at me, look at me, look at me now, it's fun to have fun, but you have to know how."
|
|
null | false
| 122
|
Cost-sensitive classification can be used when the “cost” of mislabelling one class is higher than that of mislabelling other classes BIBREF9, BIBREF10. For example, the real cost to a bank of miscategorising a large fraudulent transaction as authentic is potentially higher than miscategorising (perhaps only temporarily) a valid transaction as fraudulent. Cost-sensitive learning tackles the issue of class imbalance by changing the cost function of the model such that misclassification of training examples from the minority class carries more weight and is thus more `expensive'. This is achieved by simply multiplying the loss of each example by a certain factor. This cost-sensitive learning technique takes misclassification costs into account during model training, and does not modify the imbalanced data distribution directly.
Cost-sensitive learning tackles the issue of class imbalance by changing the cost function of the model such that misclassification of training examples from the minority class carries more weight and is thus more ‘expensive'. This is achieved by simply multiplying the loss of each example by a certain factor. This cost-sensitive learning technique takes misclassification costs into account during model training, and does not modify the imbalanced data distribution directly.
|
What is the strength of Cost-sensitive classification in model training?
|
It reduces the influence of class imbalance by taking misclassification costs into account, and it does not modify the imbalanced data distribution directly.
|
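The row above describes cost-sensitive learning as multiplying each training example's loss by a class-dependent factor. A minimal, self-contained sketch of that weighting (with toy predictions and an arbitrary 5x cost for the minority class, not taken from the paper) is:

```python
import numpy as np

def weighted_cross_entropy(probs, labels, class_weights):
    """Per-example cross-entropy, scaled by the cost of each example's true class."""
    per_example = -np.log(probs[np.arange(len(labels)), labels])
    return np.mean(class_weights[labels] * per_example)

# Toy predictions for a 2-class problem where class 1 (e.g. fraud) is the rare class.
probs = np.array([[0.9, 0.1],
                  [0.3, 0.7],
                  [0.8, 0.2]])
labels = np.array([0, 1, 1])
class_weights = np.array([1.0, 5.0])  # misclassifying the minority class costs 5x more

print(weighted_cross_entropy(probs, labels, class_weights))
```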
null | false
| null |
How many dwarf planets are there in our solar system?
|
Currently, only 5 objects are officially recognized as dwarf planets in our solar system. These are Ceres, Pluto, Eris, Haumea, and Makemake.
|
|
null | false
| null |
Give me 5 examples of why using a prime lens is better than a zoom lens for photography?
|
-Better low light photography even without a flash
-Prime lenses have fewer moving parts
-Better performance
-Sharper Images
-High chances of getting Bokeh
|
|
null | false
| 3
|
In this section, we describe the languages experimented with, the datasets used, and the network hyper-parameters used in our experiments.
Bengali, Gujarati, Marathi, Malayalam and Tamil are the source languages, and translation from these to Hindi constitute the child tasks.
|
What are the child tasks in the experiment?
|
Bengali, Gujarati, Marathi, Malayalam and Tamil are the source languages, and translation from these to Hindi constitute the child tasks.
|
null | false
| 250
|
Learning semantic representations (embeddings) of different entities, such as textual, commercial, and physical, has been a recent and active area of research. Such representations can facilitate applications that rely on a notion of similarity, for example recommendation systems and ranking algorithms in e-commerce.
In natural language processing, word2vec BIBREF0 learns vector representations of words from large quantities of text, where each word is mapped to a $d$-dimensional vector such that semantically similar words have geometrically closer vectors. This is achieved by predicting either the context words appearing in a window around a given target word (skip-gram model), or the target word given the context (CBOW model). The main assumption is that words appearing frequently in similar contexts share statistical properties (the distributional hypothesis). Crucially, word2vec models, like many other word embedding models, preserve sequential information encoded in text so as to leverage word co-occurrence statistics. The skip-gram model has been adapted to other domains in order to learn dense representations of items other than words. For example, product embeddings in e-commerce BIBREF1 or vacation rental embeddings in the hospitality domain BIBREF2 can be learned by treating purchase histories or user click sequences as sentences and applying a word2vec approach.
Most of the prior work on item embedding exploit the co-occurrence of items in a sequence as the main signal for learning the representation. One disadvantage of this approach is that it fails to incorporate rich structured information associated with the embedded items. For example, in the travel domain, where we seek to embed hotels and other travel-related entities, it could be helpful to encode explicit information such as user ratings, star ratings, hotel amenities, and location in addition to implicit information encoded in the click-stream.
In this work, we propose an algorithm for learning hotel embeddings that combines sequential user click information in a word2vec approach with additional structured information about hotels. We propose a neural architecture that adopts and extends the skip-gram model to accommodate arbitrary relevant information of embedded items, including but not limited to geographic information, ratings, and item attributes. In experimental results, we show that enhancing the neural network to jointly encode click and supplemental structured information outperforms a skip-gram model that encodes the click information alone. The proposed architecture also naturally handles the cold-start problem for hotels with little or no historical clicks. Specifically, we can infer an embedding for these properties by leveraging their supplemental structured metadata.
Compared to previous work on item embeddings, the novel contributions of this paper are as follows:
We propose a novel framework for fusing multiple sources of information about an item (such as user click sequences and item-specific information) to learn item embeddings via self-supervised learning.
We generate an interpretable embedding which can be decomposed into sub-embeddings for clicks, location, ratings, and attributes, and employed either as separate component embeddings or a single, unified embedding.
It is also dynamic, meaning it is easy to reflect future changes in attributes such as star-rating or addition of amenities in the embedding vectors without retraining.
We address the cold-start problem by including hotel metadata which are independent of user click-stream interactions and available for all hotels. This helps us to better impute embeddings for sparse items/hotels.
We show significant gains over previous work based on click-embedding in several experimental studies.
The structure of the remainder of this paper is as follows. Section 2 gives an overview of some of the recent works on neural embedding. Section 3 provides details of the proposed framework, including the neural network architecture, training methodology, and how the cold-start problem is addressed. In Section 4, we present experimental results on several different tasks and a comparison with previous state-of-the-art work. Section 5 concludes the paper.
In experimental results, we show that enhancing the neural network to jointly encode click and supplemental structured information outperforms a skip-gram model that encodes the click information alone.
|
Does enhancing the neural network to jointly encode click and supplemental structured information outperform a skip-gram model that encodes the click information alone?
|
Yes, it does.
|
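The click-stream part of the hotel-embedding row above follows the common pattern of treating each user's click session as a "sentence" and training a skip-gram model over it. The gensim sketch below shows that baseline step only; the hotel IDs and hyper-parameters are invented, and the paper's extension that fuses structured metadata (ratings, location, amenities) into the embedding is not reproduced here.

```python
from gensim.models import Word2Vec

# Toy click sessions: each "sentence" is one user's sequence of viewed hotel IDs.
sessions = [
    ["hotel_12", "hotel_7", "hotel_31", "hotel_7"],
    ["hotel_7", "hotel_31", "hotel_55"],
    ["hotel_12", "hotel_55", "hotel_31"],
]

# Skip-gram (sg=1) over click sequences, analogous to word2vec over sentences.
model = Word2Vec(sessions, vector_size=32, window=3, sg=1, min_count=1, epochs=50)
print(model.wv.most_similar("hotel_31", topn=2))
```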
null | false
| null |
What are some quick ways to lose all of my money?
|
Here are some quick ways to lose all of your money:
1. Place all of your money on red at the roulette table.
2. Play high-stakes blackjack everyday for the next week.
3. Buy random cryptocurrencies like Solana.
4. Buy extremely out of the money options on trading applications like Robinhood.
5. Loan it to friends and families with a history of not paying you back.
|
|
null | false
| null |
Write a bullet-pointed list of updates from a college club fundraising team lead about their fundraising work from last week.
|
* Planned fundraising efforts for the next week, as well as implemented donut sales on campus to increase fundraising streams.
* Sent out a Google Form to gauge the interest of the club in buying club-branded shirts and stickers. By mid-week, we will make final decisions on apparel and begin sending out order forms.
* Continued with burger and chicken wings food sales on the central campus plaza.
|
|
null | false
| null |
Who composed the 'Moonlight Sonata', and when?
|
Commonly known as the Moonlight Sonata, the Piano Sonata No. 14 in C-sharp minor "Quasi una fantasia", Op. 27, No. 2, was composed by Ludwig van Beethoven.
The composition was completed in 1801 and dedicated to the Countess Giulietta Guicciardi in 1802. It's one of the most popular musical compositions for the piano.
|
|
null | false
| null |
How are the teams which qualify for the National Football League Playoffs determined?
|
The 32 teams that comprise the National Football League are organized by two conferences, the American and National, with four divisions of four teams in each conference. Each year the team with the best record in their division automatically qualifies for the playoffs. In addition, the three remaining teams with the best records in each conference also qualify and are designated as "wildcards". Tiebreakers are used to determine who makes the playoffs in scenarios where two or more teams finish with the same number of wins. Tiebreakers can include head-to-head competition, record in conference, and record against common opponents.
|
|
2001.05672
| false
| null |
Moreover, this work also handles the cases of singular/plural, subject pronoun/object pronoun, etc. For instance, the pronoun "he" is used for the subject as "he" but is used for the object as "him".
Moreover, this work also handles the cases of singular/plural, subject pronoun/object pronoun, etc. For instance, the pronoun "he" is used for the subject as "he" but is used for the object as "him".
|
What else is addressed besides the 12 tenses, modal verbs, and the negative form?
|
The answers are shown as follows:
* cases of singular/plural, subject pronoun/object pronoun, etc.
|
null | false
| null |
Can I provide commercial software or services based on Spark?
|
Yes, as long as you respect the Apache Software Foundation's software license and trademark policy. In particular, note that there are strong restrictions about how third-party products use the "Spark" name (names based on Spark are generally not allowed). Please also refer to our trademark policy summary.
|
|
null | false
| null |
Traditional Estonian cuisine has substantially been based on meat and potatoes, and on fish in coastal and lakeside areas, but now bears influence from many other cuisines, including a variety of international foods and dishes, with a number of contributions from the traditions of nearby countries. Scandinavian, German, Russian, Latvian, Lithuanian and other influences have played their part. The most typical foods in Estonia have been rye bread, pork, potatoes and dairy products. Estonian eating habits have historically been closely linked to the seasons. In terms of staples, Estonia belongs firmly to the beer, vodka, rye bread and pork "belt" of Europe.
|
What kind of food comprises Estonian cuisine?
|
Estonian cuisine is generally based on meat, potatoes, and fish, and now bears influence from many other cuisines. The most typical Estonian foods include rye bread, pork, potatoes, and dairy products.
|
1910.11471
| false
| null |
SMT techniques require a parallel corpus in the source and the target language. A text-code parallel corpus similar to Fig. FIGREF12 is used in training. This parallel corpus has 18,805 aligned entries. In the source data, the expression of each line of code is written in English. In the target data, the code is written in the Python programming language.
SMT techniques require a parallel corpus in the source and the target language. A text-code parallel corpus similar to Fig. FIGREF12 is used in training. This parallel corpus has 18,805 aligned entries. In the source data, the expression of each line of code is written in English. In the target data, the code is written in the Python programming language.
|
What dataset do they use?
|
A parallel corpus where the source is an English expression of code and the target is Python code.
|
null | false
| null |
Thomas Attewell (7 November 1869 – 6 July 1937) was an English first-class cricketer. Attewell was a right-handed batsman who bowled right-arm medium pace. He was born at Keyworth, Nottinghamshire.
His brother William played first-class cricket for Nottinghamshire and the Marylebone Cricket Club, as well as Test cricket for England. His cousin Walter Attewell played first-class cricket for Nottinghamshire.
|
What was the first name of Thomas Attewell's brother and cousin?
|
William and Walter
|
null | false
| null |
What are cons of getting a dog?
|
1. It is hard to make long travel plans while owning a dog.
2. Depending on the breed seasonal shedding could be a problem.
3. Responsible ownership of a dog is a big financial investment.
4. Dogs live shorter lives than humans, so the eventual emotional toll could be a detractor.
|
|
null | false
| null |
Cy Young was the oldest child born to Nancy (Mottmiller) and McKinzie Young, Jr., and was christened Denton True Young. He was of part German descent. The couple had four more children: Jesse Carlton, Alonzo, Ella, and Anthony. When the couple married, McKinzie's father gave him the 54 acres (220,000 m2) of farm land he owned. Young was born in Gilmore, a tiny farming community located in Washington Township, Tuscarawas County, Ohio. He was raised on one of the local farms and went by the name Dent Young in his early years. Young was also known as "Farmer Young" and "Farmboy Young". Young stopped his formal education after he completed the sixth grade so he could help out on the family's farm. In 1885, Young moved with his father to Nebraska, and in the summer of 1887, they returned to Gilmore.
https://en.wikipedia.org/wiki/Cy_Young
|
Given this reference text about Cy Young, what type of work did his family do while he was growing up, how many older siblings did he have, and what two states did he live in as a child?
|
Cy Young grew up in a farming family, did not have any older siblings (he was the oldest of five children), and lived primarily in Ohio but also spent two years in Nebraska as a child.
|
null | false
| null |
MPT assumes that investors are risk averse, meaning that given two portfolios that offer the same expected return, investors will prefer the less risky one. Thus, an investor will take on increased risk only if compensated by higher expected returns. Conversely, an investor who wants higher expected returns must accept more risk. The exact trade-off will not be the same for all investors. Different investors will evaluate the trade-off differently based on individual risk aversion characteristics. The implication is that a rational investor will not invest in a portfolio if a second portfolio exists with a more favorable risk-expected return profile—i.e., if for that level of risk an alternative portfolio exists that has better expected returns.
Under the model:
Portfolio return is the proportion-weighted combination of the constituent assets' returns.
Portfolio return volatility $\sigma_p$ is a function of the correlations $\rho_{ij}$ of the component assets, for all asset pairs $(i, j)$. The volatility gives insight into the risk which is associated with the investment. The higher the volatility, the higher the risk.
|
Extract the formulas in math latex markdown format to calculate the portfolio return and variance in Modern Portfolio Theory https://en.wikipedia.org/wiki/Modern_portfolio_theory
|
- Expected return:
$$
\mathrm{E}\left(R_p\right)=\sum_i w_i \mathrm{E}\left(R_i\right)
$$
where $R_p$ is the return on the portfolio, $R_i$ is the return on asset $i$ and $w_i$ is the weighting of component asset $i$ (that is, the proportion of asset "i" in the portfolio, so that $\sum_i w_i=1$ ).
- Portfolio return variance:
$$
\sigma_p^2=\sum_i w_i^2 \sigma_i^2+\sum_i \sum_{j \neq i} w_i w_j \sigma_i \sigma_j \rho_{i j}
$$
where $\sigma_i$ is the (sample) standard deviation of the periodic returns on an asset $i$, and $\rho_{i j}$ is the correlation coefficient between the returns on assets $i$ and $j$. Alternatively the expression can be written as:
$$
\sigma_p^2=\sum_i \sum_j w_i w_j \sigma_i \sigma_j \rho_{i j}
$$
where $\rho_{i j}=1$ for $i=j$, or
$$
\sigma_p^2=\sum_i \sum_j w_i w_j \sigma_{i j}
$$
where $\sigma_{i j}=\sigma_i \sigma_j \rho_{i j}$ is the (sample) covariance of the periodic returns on the two assets, or alternatively denoted as $\sigma(i, j)$, $\operatorname{cov}{ }_{i j}$ or $\operatorname{cov}(i, j)$.
- Portfolio return volatility (standard deviation):
$$
\sigma_p=\sqrt{\sigma_p^2}
$$
For a two-asset portfolio:
- Portfolio return:
$$\mathrm{E}\left(R_p\right)=w_A \mathrm{E}\left(R_A\right)+w_B \mathrm{E}\left(R_B\right)=w_A \mathrm{E}\left(R_A\right)+\left(1-w_A\right) \mathrm{E}\left(R_B\right)$$
- Portfolio variance:
$$\sigma_p^2=w_A^2 \sigma_A^2+w_B^2 \sigma_B^2+2 w_A w_B \sigma_A \sigma_B \rho_{A B}$$
For a three-asset portfolio:
- Portfolio return:
$$\mathrm{E}\left(R_p\right)=w_A \mathrm{E}\left(R_A\right)+w_B \mathrm{E}\left(R_B\right)+w_C \mathrm{E}\left(R_C\right)$$
- Portfolio variance:
$$\sigma_p^2=w_A^2 \sigma_A^2+w_B^2 \sigma_B^2+w_C^2 \sigma_C^2+2 w_A w_B \sigma_A \sigma_B \rho_{A B}+2 w_A w_C \sigma_A \sigma_C \rho_{A C}+2 w_B w_C \sigma_B \sigma_C \rho_{B C}$$
|
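As a numerical companion to the formulas above, the following sketch evaluates portfolio return, variance, and volatility for an arbitrary three-asset example (the weights, expected returns, and covariance matrix are made up):

```python
import numpy as np

# Arbitrary three-asset example: weights, expected returns, and covariance matrix.
w = np.array([0.5, 0.3, 0.2])
mu = np.array([0.08, 0.05, 0.12])
cov = np.array([[0.040, 0.006, 0.010],
                [0.006, 0.020, 0.004],
                [0.010, 0.004, 0.090]])

expected_return = w @ mu        # E(R_p) = sum_i w_i E(R_i)
variance = w @ cov @ w          # sigma_p^2 = sum_i sum_j w_i w_j sigma_ij
volatility = np.sqrt(variance)  # sigma_p

print(f"E(R_p) = {expected_return:.4f}, var = {variance:.4f}, vol = {volatility:.4f}")
```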
null | false
| null |
How many NBA basketball teams are in California?
|
There are 4 NBA basketball teams in California: Golden State Warriors, Los Angeles Lakers, Los Angeles Clippers, and Sacramento Kings.
|
|
null | false
| 295
|
In order to provide an extrinsic metric to evaluate the quality of the acoustic units discovered by our different methods, we performed an unsupervised word segmentation task on the acoustic units sequences, and evaluated the accuracy of the discovered word boundaries. We also wanted to experiment using lattices as an input for the word segmentation task, instead of using single sequences of units, so as to better mitigate the uncertainty of the AUD task and provide a companion metric that would be more robust to noise. A model capable of performing word segmentation both on lattices and text sequences was introduced by BIBREF6 . Building on the work of BIBREF17 , BIBREF18 they combine a nested hierarchical Pitman-Yor language model with a Weighted Finite State Transducer approach. Both for lattices and acoustic units sequences, we use the implementation of the authors with a bigram language model and a unigram character model. Word discovery is evaluated using the Boundary metric from the Zero Resource Challenge 2017 BIBREF20 and BIBREF21 . This metric measures the quality of a word segmentation and the discovered boundaries with respect to a gold corpus (Precision, Recall and F-score are computed).
We also wanted to experiment using lattices as an input for the word segmentation task, instead of using single sequences of units, so as to better mitigate the uncertainty of the AUD task and provide a companion metric that would be more robust to noise.
|
How to better mitigate the uncertainty of the AUD task and provide a companion metric that would be more robust to noise?
|
Using lattices as an input for the word segmentation task, instead of using single sequences of units to better mitigate the uncertainty of the AUD task and provide a companion metric that would be more robust to noise.
|
null | false
| null |
Give me a list of the 10 most important soccer teams in Brazil.
|
Sure! Some of the most important soccer teams in Brazil are:
1. Corinthians
2. Flamengo
3. São Paulo
4. Santos
5. Fluminense
6. Botafogo
7. Cruzeiro
8. Atlético Mineiro
9. Gremio
10. Palmeiras
|
|
null | false
| 2
|
There has been significant progress on Named Entity Recognition (NER) in recent years using models based on machine learning algorithms BIBREF0 , BIBREF1 , BIBREF2 . As with other Natural Language Processing (NLP) tasks, building NER systems typically requires a massive amount of labeled training data which are annotated by experts. In real applications, we often need to consider new types of entities in new domains where we do not have existing annotated data. For such new types of entities, however, it is very hard to find experts to annotate the data within short time limits and hiring experts is costly and non-scalable, both in terms of time and money.
In order to quickly obtain new training data, we can use crowdsourcing as one alternative way at lower cost in a short time. But as an exchange, crowd annotations from non-experts may be of lower quality than those from experts. It is a big challenge to build a powerful NER system on such low-quality annotated data. Although we can obtain high quality annotations for each input sentence by majority voting, it can be a waste of human labor to achieve such a goal, especially for some ambiguous sentences which may require a number of annotations to reach an agreement. Thus, most work directly builds models on crowd annotations, trying to model the differences among annotators, for example, the fact that some of the annotators may be more trustworthy BIBREF3 , BIBREF4 .
Here we focus mainly on Chinese NER, which is more difficult than NER for other languages such as English due to the lack of morphological variations such as capitalization and, in particular, the uncertainty in word segmentation. Chinese NE taggers trained on the news domain often perform poorly in other domains. Although we can alleviate the problem by using character-level tagging to resolve the problem of poor word segmentation performance BIBREF5 , there still exists a large gap when the target domain changes, especially for the texts of social media. Thus, in order to get a good tagger for new domains and also for the conditions of new entity types, we require large amounts of labeled data. Therefore, crowdsourcing is a reasonable solution for these situations.
In this paper, we propose an approach to training a Chinese NER system on the crowd-annotated data. Our goal is to extract additional annotator independent features by adversarial training, alleviating the annotation noises of non-experts. The idea of adversarial training in neural networks has been used successfully in several NLP tasks, such as cross-lingual POS tagging BIBREF6 and cross-domain POS tagging BIBREF7 . They use it to reduce the negative influences of the input divergences among different domains or languages, while we use adversarial training to reduce the negative influences brought by different crowd annotators. To our best knowledge, we are the first to apply adversarial training for crowd annotation learning.
In the learning framework, we perform adversarial training between the basic NER and an additional worker discriminator. We have a common Bi-LSTM for representing annotator-generic information and a private Bi-LSTM for representing annotator-specific information. We build another label Bi-LSTM by the crowd-annotated NE label sequence which reflects the mind of the crowd annotators who learn entity definitions by reading the annotation guidebook. The common and private Bi-LSTMs are used for NER, while the common and label Bi-LSTMs are used as inputs for the worker discriminator. The parameters of the common Bi-LSTM are learned by adversarial training, maximizing the worker discriminator loss and meanwhile minimizing the NER loss. Thus the resulting features of the common Bi-LSTM are worker invariant and NER sensitive.
For evaluation, we create two Chinese NER datasets in two domains: dialog and e-commerce. We require the crowd annotators to label the types of entities, including person, song, brand, product, and so on. Identifying these entities is useful for chatbot and e-commerce platforms BIBREF8 . Then we conduct experiments on the newly created datasets to verify the effectiveness of the proposed adversarial neural network model. The results show that our system outperforms very strong baseline systems. In summary, we make the following contributions:
The experimental results show that the proposed approach performs the best among all the comparison systems.
|
Does the approach proposed by the authors perform better than all the comparison systems?
|
Yes, it does.
|
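The adversarial objective in the row above (maximize the worker-discriminator loss while minimizing the NER loss through a shared Bi-LSTM) is commonly implemented with a gradient reversal layer. The PyTorch sketch below shows that layer in isolation on a toy feature tensor; it is not the full common/private Bi-LSTM model from the paper, and the feature and discriminator sizes are assumptions.

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negates (and scales) gradients in the backward pass."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# Stand-in for features from the shared (common) encoder; here just a toy tensor.
shared = torch.randn(4, 8, requires_grad=True)
discriminator = torch.nn.Linear(8, 3)          # e.g. 3 crowd annotators
worker_logits = discriminator(grad_reverse(shared))
loss = worker_logits.sum()
loss.backward()
print(shared.grad[0, :3])  # gradients flowing into `shared` have been reversed
```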
null | false
| null |
What links Catalonia, Andalusia, Cantabria, Galicia
|
Regions of Spain
|
|
null | false
| 57
|
Our framework consists of a series of steps for which there are choices and alternatives. Although it is not possible to provide comparisons to the myriad of methods and possibilities available, we have examined quantitatively the robustness of the results to parametric and methodological choices in different steps of the framework: (i) the importance of using Doc2Vec embeddings instead of BoW vectors, (ii) the size of training corpus for Doc2Vec; (iii) the sparsity of the MST-kNN similarity graph construction. We have also carried out quantitative comparisons to other methods, including: (i) LDA-BoW, and (ii) clustering with other community detection methods. We provide a brief summary here and additional material in the SI.
The use of fixed-sized vector embeddings (Doc2Vec) instead of standard bag of words (BoW) is an integral part of our pipeline. Doc2Vec produces lower dimensional vector representations (as compared to BoW) with higher semantic and syntactic content. It has been reported that Doc2Vec outperforms BoW representations in practical benchmarks of semantic similarity, as well as being less sensitive to hyper-parameters BIBREF22 .
To quantify the improvement provided by Doc2Vec in our framework, we constructed a MST-kNN graph following the same steps but starting with TF-iDF vectors for each document. We then ran MS on this TF-iDF similarity graph, and compared the results to those obtained from the Doc2Vec similarity graph. Figure 7 shows that the Doc2Vec version outperforms the BoW version across all resolutions in terms of both $NMI$ and $\widehat{PMI}$ scores.
As shown in Table 1 , we have tested the effect of the size of the training corpus on the Doc2Vec model. We trained Doc2Vec on two additional training sets of 1 million and 2 million records (randomly chosen from the full set of $\sim $ 13 million records). We then followed the same procedure to construct the MST-kNN similarity graph and carried out the MS analysis. The results, presented in Figure S3 in the SI, show that the performance is affected only mildly by the size of the Doc2Vec training set.
To examine the effect of sparsification in the graph construction, we have studied the dependence of quality of the partitions against the number of neighbours, $k$ , in the MST-kNN graph. Our numerics, shown in Figure S4 in the SI, indicate that both the $NMI$ and $\widehat{PMI}$ scores of the MS clusterings reach a similar level of quality for values of $k$ above 13-16, with minor improvement after that. Hence our results are robust to the choice of $k$ , provided it is not too small. Due to computational efficiency, we thus favour a relatively small $k$ , but not too small.
We carried out a comparison with LDA, a widely used methodology for text analysis. A key difference between standard LDA and our MS method is the fact that a different LDA model needs to be trained separately for each number of topics pre-determined by the user. To offer a comparison across the methods, we obtained five LDA models corresponding to the five MS levels we considered in detail. The results in Table 2 show that MS and LDA give partitions that are comparably similar to the hand-coded categories (as measured with $NMI$ ), with some differences depending on the scale, whereas the MS clusterings have higher topic coherence (as given by $\widehat{PMI}$ ) across all scales.
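The $NMI$ against the hand-coded categories can be computed directly with scikit-learn; a toy sketch with made-up label assignments:

```python
from sklearn.metrics import normalized_mutual_info_score

# hand-coded categories vs. cluster assignments for the same documents (toy labels)
hand_coded = [0, 0, 1, 1, 2, 2]
ms_clusters = [0, 0, 1, 1, 1, 2]
lda_clusters = [0, 1, 1, 2, 2, 2]

print("NMI (MS vs hand-coded): ", normalized_mutual_info_score(hand_coded, ms_clusters))
print("NMI (LDA vs hand-coded):", normalized_mutual_info_score(hand_coded, lda_clusters))
```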
To give an indication of the computational cost, we ran both methods on the same servers. Our method takes approximately 13 hours in total to compute both the Doc2Vec model on 13 million records (11 hours) and the full MS scan with 400 partitions across all resolutions (2 hours). The time required to train just the 5 LDA models on the same corpus amounts to 30 hours (with timings ranging from $\sim $ 2 hours for the 3 topic LDA model to 12.5 hours for the 44 topic LDA model).
This comparison also highlights the conceptual difference between our multi-scale methodology and LDA topic modelling. While LDA computes topics at a pre-determined level of resolution, our method obtains partitions at all resolutions in one sweep of the Markov time, from which relevant partitions are chosen based on their robustness. However, the MS partitions at all resolutions are available for further investigation if so needed.
We have used several algorithms readily available in code libraries (i.e., the iGraph module for Python) to cluster/partition the same kNN-MST graph. Figure S5 in the SI shows the comparison against several well-known partitioning methods (Modularity Optimisation BIBREF44 , InfoMap BIBREF4 , Walktrap BIBREF45 , Label Propagation BIBREF46 , and Multi-resolution Louvain BIBREF35 ) which give just one partition (or two in the case of the Louvain implementation in iGraph) into a particular number of clusters, in contrast with our multiscale MS analysis. Our results show that MS provides improved or equal results to other graph partitioning methods for both $NMI$ and $\widehat{PMI}$ across all scales. Only at very fine resolutions with more than 50 clusters does Infomap, which partitions graphs into small clique-like subgraphs BIBREF32 , BIBREF47 , provide a slightly improved $NMI$ for that particular scale. Therefore, MS allows us to find relevant, yet high quality clusterings across all scales by sweeping the Markov time parameter.
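For reference, the partitioning methods mentioned above are exposed by python-igraph; a toy sketch on a built-in graph (standing in for the actual kNN-MST graph) might look as follows:

```python
import igraph as ig

# toy graph standing in for the kNN-MST similarity graph
g = ig.Graph.Famous("Zachary")

partitions = {
    "Louvain (multilevel)": g.community_multilevel(),
    "Infomap": g.community_infomap(),
    "Walktrap": g.community_walktrap().as_clustering(),
    "Label propagation": g.community_label_propagation(),
    "Greedy modularity": g.community_fastgreedy().as_clustering(),
}
for name, p in partitions.items():
    print(f"{name}: {len(p)} clusters, modularity={p.modularity:.3f}")
```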
To examine the effect of sparsification in the graph construction, we have studied the dependence of quality of the partitions against the number of neighbours, k, in the MST-kNN graph.
|
How do the authors examine the effect of sparsification in graph construction?
|
They studied the dependence of quality of the partitions against the number of neighbours, k, in the MST-kNN graph.
|
null | false
| null |
The names "laptop" and "notebook" refer to the fact that the computer can be practically placed on (or on top of) the user's lap and can be used similarly to a notebook. As of 2022, in American English, the terms "laptop" and "notebook" are used interchangeably; in other dialects of English, one or the other may be preferred. Although the term "notebook" originally referred to a specific size of laptop (originally smaller and lighter than mainstream laptops of the time), the term has come to mean the same thing and no longer refers to any specific size.
|
Given the following paragraph about laptops, why are laptops called "laptop"?
|
Laptops are called "laptop" because the computer can be practically placed on (or on top of) the user's lap.
|
null | false
| null |
In which episode does Jon Snow lead a battle in which he is outnumbered by Ramsay Bolton?
|
"Battle of the Bastards" is the ninth episode of the sixth season of Game of Thrones which had the massive battle between Jon Snow and Ramsey Bolton. Ramsey Bolton ultimately lost, thanks to Lord Baelish bringing fighters from the Vale to support Jon Snow.
|
|
null | false
| null |
What is the Premier League?
|
The Premier League (legal name: The Football Association Premier League Limited) is the highest level of the English football league system. Contested by 20 clubs, it operates on a system of promotion and relegation with the English Football League (EFL). Seasons typically run from August to May with each team playing 38 matches (playing all other teams both home and away). Most games are played on Saturday and Sunday afternoons, with occasional weekday evening fixtures.
The competition was founded as the FA Premier League on 20 February 1992 following the decision of clubs in the Football League First Division to break away from the Football League founded in 1888; however, teams may still be relegated into and promoted from the EFL Championship. The Premier League takes advantage of a lucrative television rights sale to Sky: from 2019 to 2020, the league's accumulated television rights deals were worth around £3.1 billion a year, with Sky and BT Group securing the domestic rights to broadcast 128 and 32 games respectively. The Premier League is a corporation where chief executive Richard Masters is responsible for its management, whilst the member clubs act as shareholders. Clubs were apportioned central payment revenues of £2.4 billion in 2016–17, with a further £343 million in solidarity payments to English Football League (EFL) clubs.
The Premier League is the most-watched sports league in the world, broadcast in 212 territories to 643 million homes and a potential TV audience of 4.7 billion people. For the 2018–19 season, the average Premier League match attendance was at 38,181, second to the German Bundesliga's 43,500, while aggregated attendance across all matches is the highest of any association football league at 14,508,981. Most stadium occupancies are near capacity. The Premier League ranks first in the UEFA coefficients of leagues based on performances in European competitions over the past five seasons as of 2021. The English top-flight has produced the second-highest number of UEFA Champions League/European Cup titles, with five English clubs having won fourteen European trophies in total.
Fifty clubs have competed since the inception of the Premier League in 1992: forty-eight English and two Welsh clubs. Seven of them have won the title: Manchester United (13), Manchester City (6), Chelsea (5), Arsenal (3), Blackburn Rovers (1), Leicester City (1) and Liverpool (1).
|
|
null | false
| null |
What is a p-value in statistics?
|
In statistics, a p-value is used to test the significance of a hypothesis. It is the probability of obtaining results at least as extreme as those observed, assuming the null hypothesis is true. A small p-value indicates that the observed result would be very unlikely if the null hypothesis were true.
However, whether the alternative hypothesis should be accepted solely because the p-value is below 0.05, without further evidence, is controversial. Therefore, to properly quantify the statistical significance of results, contextual factors must also be considered, such as the quality of the measurements, the external evidence, and the validity of the assumptions underlying the data analysis.
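As a concrete illustration, a p-value for comparing two groups can be obtained with a two-sample t-test in SciPy; the measurements below are made up for the example:

```python
from scipy import stats

# toy measurements from two groups
group_a = [5.1, 4.9, 5.3, 5.0, 5.2]
group_b = [5.6, 5.8, 5.5, 5.9, 5.7]

t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value (e.g. below 0.05) means data this extreme would be
# unlikely if the two groups actually had the same mean.
```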
|
|
null | false
| null |
The Egyptian pyramids are ancient masonry structures located in Egypt. Sources cite at least 118 identified "Egyptian" pyramids. Approximately 80 pyramids were built within the Kingdom of Kush, now located in the modern country of Sudan. Of those located in modern Egypt, most were built as tombs for the country's pharaohs and their consorts during the Old and Middle Kingdom periods.
The earliest known Egyptian pyramids are found at Saqqara, northwest of Memphis, although at least one step-pyramid-like structure has been found at Saqqara, dating to the First Dynasty: Mastaba 3808, which has been attributed to the reign of Pharaoh Anedjib, with inscriptions, and other archaeological remains of the period, suggesting there may have been others. The otherwise earliest among these is the Pyramid of Djoser built c. 2630–2610 BCE during the Third Dynasty. This pyramid and its surrounding complex are generally considered to be the world's oldest monumental structures constructed of dressed masonry.
The most famous Egyptian pyramids are those found at Giza, on the outskirts of Cairo. Several of the Giza pyramids are counted among the largest structures ever built. The Pyramid of Khufu is the largest Egyptian pyramid. It is the only one of the Seven Wonders of the Ancient World still in existence, despite its being the oldest wonder by about 2,000 years.
|
Given this paragraph about pyramids, when was the first pyramid built?
|
2630–2610 BCE
|
null | false
| 3
|
In this section, we describe the languages experimented with, the datasets used, and the network hyper-parameters used in our experiments.
We use OpenNMT-Torch (Klein et al., 2018) to train the NMT system.
|
How to train the NMT system in this paper?
|
The authors use OpenNMT-Torch to train the NMT system.
|
null | false
| null |
What is an offside in football/soccer?
|
The offside rule in football states that, at the moment of a pass (i.e. when the ball leaves the passer's foot), the attacking player receiving the pass must have at least two opposing players (usually including the goalkeeper) between them and the opponents' goal line, or be level with the second-last opponent. If this condition is not met and the player becomes involved in play, the referee calls offside, which results in an indirect free kick for the defending team.
|
|
null | false
| null |
Stochastic (from Greek, 'aim, guess') refers to the property of being well described by a random probability distribution. Although stochasticity and randomness are distinct in that the former refers to a modeling approach and the latter refers to phenomena themselves, these two terms are often used synonymously. Furthermore, in probability theory, the formal concept of a stochastic process is also referred to as a random process.
Stochasticity is used in many different fields, including the natural sciences such as biology, chemistry, ecology, neuroscience, and physics, as well as technology and engineering fields such as image processing, signal processing, information theory, computer science, cryptography, and telecommunications. It is also used in finance, due to seemingly random changes in financial markets as well as in medicine, linguistics, music, media, color theory, botany, manufacturing, and geomorphology.
|
What does it mean by Stochastic
|
The word "stochastic" is used in many different fields and describes processes that involve randomness.
|
null | false
| null |
Why did the US join WW2?
|
The Japanese attack on Pearl Harbor on December 7, 1941, ended the debate over American intervention in both the Pacific and European theaters of World War II. The day after the attack, Congress declared war on Imperial Japan with only a single dissenting vote. Germany and Italy, Japan's allies, responded by declaring war against the United States. Faced with these realities and incensed by the attack on Pearl Harbor, everyday Americans enthusiastically supported the war effort. Isolation was no longer an option.
|
|
null | false
| null |
The Samsung E1200 (also known as Samsung Pusha and Samsung Keystone 2) is a mobile phone made by Samsung. This phone was released in 2012 through Tesco in the UK. It was discontinued in 2015 when the Samsung Guru FM E1202 was released as its successor.
|
If the year is 2013, is the Samsung E1200 being produced?
|
Yes, the phone was released in 2012 and was not discontinued until 2015.
|
null | false
| null |
What is a PCR test?
|
PCR stands for "polymerase chain reaction". This test uses cycles of heating and cooling to detect photons that are emitted when a reagent (binder) RNA binds to the viral RNA that has been extracted from the human sample. Each heating and cooling cycle increases (amplifies) the amount of the targeted genetic material in the test tube. If the intensity of the photons does not reach a threshold after a certain number of cycles, the PCR test will result in a "Not Detected" test result.
|
|
null | false
| null |
What is a forest?
|
A forest is a geographical region where trees predominate.
|
|
null | false
| 18
|
Neural Machine Translation (NMT) has shown its effectiveness in translation tasks, with NMT systems performing best in recent machine translation campaigns BIBREF0 , BIBREF1 . Compared to phrase-based Statistical Machine Translation (SMT), which is basically an ensemble of different features trained and tuned separately, NMT directly models the translation relationship between source and target sentences. Unlike SMT, NMT does not require much linguistic information or large monolingual data to achieve good performance.
An NMT consists of an encoder, which recursively reads and represents the whole source sentence as a context vector, and a recurrent decoder, which takes the context vector and its previous state to predict the next target word. It is then trained in an end-to-end fashion to learn parameters which maximize the likelihood between the outputs and the references. Recently, attention-based NMT has been featured in most state-of-the-art systems. First introduced by BIBREF2 , the attention mechanism is integrated into the decoder side as feedforward layers. It allows the NMT to decide which source words should take part in predicting the next target words, and it improves NMT significantly. Nevertheless, since the attention mechanism is specific to a particular source sentence and the target word under consideration, it is also specific to particular language pairs.
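A minimal sketch of such an additive (Bahdanau-style) attention layer in PyTorch is shown below; the dimensions and module names are illustrative rather than taken from any particular system.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdditiveAttention(nn.Module):
    """Bahdanau-style attention: scores each encoder state against the
    current decoder state and returns a weighted context vector."""
    def __init__(self, enc_dim, dec_dim, attn_dim=128):
        super().__init__()
        self.W_enc = nn.Linear(enc_dim, attn_dim, bias=False)
        self.W_dec = nn.Linear(dec_dim, attn_dim, bias=False)
        self.v = nn.Linear(attn_dim, 1, bias=False)

    def forward(self, enc_states, dec_state):
        # enc_states: (batch, src_len, enc_dim); dec_state: (batch, dec_dim)
        scores = self.v(torch.tanh(self.W_enc(enc_states) + self.W_dec(dec_state).unsqueeze(1)))
        weights = F.softmax(scores.squeeze(-1), dim=-1)      # which source words to attend to
        context = torch.bmm(weights.unsqueeze(1), enc_states).squeeze(1)
        return context, weights
```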
Some recent work has focused on extending the NMT framework to multilingual scenarios. By training such a network using parallel corpora in a number of different languages, NMT could benefit from additional information embedded in a common semantic space across languages. Basically, the proposed NMTs are required to employ multiple encoders or multiple decoders to deal with multilinguality. Furthermore, in order to avoid the tight dependency of the attention mechanism on specific language pairs, they also need to modify their architecture to combine either the encoders or the attention layers. These modifications are specific to the purpose of the tasks as well. Thus, those multilingual NMTs are more complicated, have many more free parameters to learn, and are more difficult to train in the standard way compared to the original NMT.
In this paper, we introduce a unified approach to seamlessly extend the original NMT to multilingual settings. Our approach allows us to integrate any language on either side of the encoder-decoder architecture with only one encoder and one decoder for all the languages involved. Moreover, no network modification is necessary to enable the attention mechanism in our NMT systems. We then apply our proposed framework in two demanding scenarios: under-resourced translation and zero-resourced translation. The results show that bringing multilinguality to NMT helps to improve individual translations. With some insightful analyses of the results, we set our goal toward a fully multilingual NMT framework.
The paper starts with a detailed introduction to attention-based NMT. In Section SECREF3 , related work about multi-task NMT is reviewed. Section SECREF5 describes our proposed approach and thorough comparisons to the related work. It is followed by a section evaluating our systems in the two aforementioned scenarios, in which different strategies have been employed under a unified approach (Section SECREF4 ). Finally, the paper ends with conclusions and future work.
First introduced by (Bahdanau et al., 2014), attention mechanism is integrated in decoder side as feedforward layers. It allows the NMT to decide which source words should take part in the predicting process of the next target words. It helps to improve NMTs significantly.
|
What benefits does the attention mechanism bring NMT?
|
It can improve NMT significantly by allowing the model to decide which source words should take part in predicting the next target words.
|
null | false
| 296
|
Gender bias describes an inherent prejudice against a gender, captured both by individuals and larger social systems. Word embeddings, a popular machine-learnt semantic space, have been shown to retain gender bias present in corpora used to train them BIBREF0. This results in gender-stereotypical vector analogies à la NIPS20135021, such as man:computer programmer :: woman:homemaker BIBREF1, and such bias has been shown to materialise in a variety of downstream tasks, e.g. coreference resolution BIBREF2, BIBREF3.
By operationalising gender bias in word embeddings as a linear subspace, DBLP:conf/nips/BolukbasiCZSK16 are able to debias with simple techniques from linear algebra. Their method successfully mitigates direct bias: man is no longer more similar to computer programmer in vector space than woman. However, the structure of gender bias in vector space remains largely intact, and the new vectors still evince indirect bias: associations which result from gender bias between not explicitly gendered words, for example a possible association between football and business resulting from their mutual association with explicitly masculine words BIBREF4. In this paper we continue the work of BIBREF4, and show that another paradigm for gender bias mitigation proposed by BIBREF5, Counterfactual Data Augmentation (CDA), is also unable to mitigate indirect bias. We also show, using a new test we describe (non-biased gender analogies), that WED might be removing too much gender information, casting further doubt on its operationalisation of gender bias as a linear subspace.
To improve CDA we make two proposals. The first, Counterfactual Data Substitution (CDS), is designed to avoid text duplication in favour of substitution. The second, the Names Intervention, is a method which can be applied to either CDA or CDS, and treats bias inherent in first names. It does so using a novel name pairing strategy that accounts for both name frequency and gender-specificity. Using our improvements, the clusters of the most biased words exhibit a reduction of cluster purity by an average of 49% across both corpora following treatment, thereby offering a partial solution to the problem of indirect bias as formalised by BIBREF4. Additionally, although one could expect that the debiased embeddings might suffer performance losses in computational linguistic tasks, our embeddings remain useful for at least two such tasks, word similarity and sentiment classification BIBREF6.
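To make the substitution idea concrete, the sketch below flips gendered words and paired first names in place with some probability, rather than duplicating the text as CDA does. The word lists and name pairs are tiny illustrative stand-ins (a real intervention would use frequency- and gender-matched name pairs and disambiguate ambiguous forms such as "her" with POS tags), and this is not the authors' implementation.

```python
import random

# illustrative gender-word pairs and name pairs (stand-ins for the Names Intervention)
GENDER_PAIRS = {"he": "she", "she": "he", "him": "her", "her": "him",
                "man": "woman", "woman": "man", "father": "mother", "mother": "father"}
NAME_PAIRS = {"john": "mary", "mary": "john", "james": "linda", "linda": "james"}

def counterfactual_substitution(tokens, p_swap=0.5, seed=None):
    """Substitute (rather than duplicate) a sentence with its gender-flipped
    counterpart with probability p_swap, flipping first names via a paired list."""
    rng = random.Random(seed)
    if rng.random() >= p_swap:
        return tokens                           # leave the original text intact
    flipped = []
    for tok in tokens:
        low = tok.lower()
        # naive token-level flip; "her" -> "him" ignores the possessive reading
        flipped.append(GENDER_PAIRS.get(low, NAME_PAIRS.get(low, tok)))
    return flipped

print(counterfactual_substitution("John said he met her father".lower().split(), p_swap=1.0))
```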
Using our improvements, the clusters of the most biased words exhibit a reduction of cluster purity by an average of 49% across both corpora following treatment, thereby offering a partial solution to the problem of indirect bias as formalised by BIBREF4.
|
Do the clusters of the most biased words exhibit a reduction of cluster purity?
|
Yes, they do.
|
null | false
| null |
Why do people travel do different places?
|
One reason for people to travel is business. They meet customers, or colleagues in different cities, possibly different countries. Even when the main purpose of travel is business, there is also a social aspect to it. Meeting colleagues or customers in a casual environment, e.g. for lunch or dinner helps with creating stronger relationships.
|
|
null | false
| null |
I have got assigned a task to classify the following items as flammable or inflammable. Could you please help me with the classification?
Here is the list of items: water, alcohol, gasoline, and orange juice
|
Here is the classification. Note that in standard usage "inflammable" actually means the same as "flammable" (able to catch fire); materials that do not burn are called "non-flammable":
- Water: non-flammable; it does not catch fire.
- Alcohol: flammable; it can easily catch fire.
- Gasoline: flammable; it is a highly volatile liquid that can ignite easily.
- Orange juice: non-flammable; it is mostly water and does not burn.
|
|
null | false
| null |
Classify if the following are hollywood or bollywood movies: Zanjeer; The Shawshank redemption; Sholay; Pride & prejudice
|
Zanjeer and Sholay are Bollywood movies. The Shawshank Redemption and Pride & Prejudice are Hollywood movies.
|
|
null | false
| null |
The women's individual competition at the 2018 Asian Games in Jakarta, Indonesia was held from 23 August to 26 August (inclusive) at the Pondok Indah Golf & Country Club.
|
How many days was the women's individual competition?
|
4 days. 26-23+1 = 4. Counting 23, 24, 25, 26 equals 4 days.
|
null | false
| null |
Why are there speed limits on public roads?
|
Governments create and enforce speed limits for humans driving vehicles on public roads in the interest of public safety. The speed limits generally depend on the type of road, including whether it is in a residential area or a freeway. Police officers enforce posted limits by patrolling the roads and checking the speeds of vehicles around them using radar technology. By preventing vehicles from traveling at excessive speeds, speed limits help keep roadways safer for everyone.
|
|
null | false
| null |
Identify which instrument is string or percussion: Maraca, Valiha
|
Valiha is string, Maraca is percussion.
|
|
null | false
| null |
Where is Lebanon ?
|
Lebanon is a Middle Eastern country that is located on the eastern shore of the Mediterranean Sea. Lebanon's southern border is shared with Israel. Syria borders Lebanon to the east and north, and the whole western edge of the country is bordered by the Mediterranean Sea.
|
|
null | false
| null |
What is Luxembourg?
|
Luxembourg is a country in Western Europe surrounded by Belgium, France and Germany
|
|
null | false
| 15
|
Robotic Process Automation (RPA) is a type of software bot that simulates hand-operated human activities like entering data into a system, registering into accounts, and accomplishing straightforward but repetitive workflows BIBREF0. However, one of the drawbacks of RPA-bots is their susceptibility to changes in defined scenarios: being designed for a particular task, the RPA-bot is usually not adaptable to other domains or even light modifications in a workflow BIBREF0. This inability to readjust to shifting conditions gave rise to Intelligent Process Automation (IPA) systems. IPA-bots combine RPA with Artificial Intelligence (AI) and thus are able to execute more cognitively demanding tasks that require i.a. reasoning and language understanding. Hence, IPA-bots advance beyond automating shallow “click tasks” and can perform jobs more intelligently, by means of machine learning algorithms. Such IPA-systems undertake time-consuming and routine tasks, and thus enable smart workflows and free up skilled workers to accomplish higher-value activities.
One of the potential applications of Natural Language Processing (NLP) within the IPA domain are conversational interfaces that enable human-to-machine interaction. The main benefit of conversational systems is their ability to give attention to several users simultaneously while supporting natural communication. A conventional dialogue system comprises multiple stages and involves different types of NLP subtasks, starting with Natural Language Understanding (NLU) (e.g., intent classification, named entity extraction) and going towards dialogue management (i.e., determining the next possible bot action, considering the dialogue history) and response generation (e.g., converting the semantic representation of the next system action into a natural language utterance). A typical dialogue system for IPA purposes undertakes shallow customer support requests (e.g., answering of FAQs), allowing human workers to focus on more sophisticated inquiries.
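As an illustration of the NLU stage of such a pipeline, a minimal intent classifier can be built with scikit-learn; the utterances and intent labels below are invented for the example.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# toy labelled utterances for intent classification (NLU stage)
utterances = ["how do I reset my password", "my video lecture will not load",
              "I forgot my password", "the video keeps buffering"]
intents = ["password_reset", "video_issue", "password_reset", "video_issue"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(utterances, intents)
print(clf.predict(["cannot log in, password not working"]))
```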
Recent research in the dialogue generation domain is conducted by employing AI techniques like machine and deep learning BIBREF1, BIBREF2. However, conventional supervised methods have limitations when applied to real-world data and industrial tasks. The primary challenge here concerns the training phase, since a robust model requires an extensive amount of structured and labeled data that is often not available for domain-specific problems. This is especially true for dialogue data, which has to be appropriately structured as well as labeled and annotated with additional information. Therefore, despite the popularity of deep learning end-to-end models, one still needs to rely on conventional pipelines in practical dialogue engineering, especially while setting up a new domain. However, with few structured data available, transfer learning methods can be used. Such algorithms enable training of the systems with less or even a minimal amount of data, and are able to transfer the knowledge obtained during the training on existing data to the unseen domain.
The system has two purposes: First, it reduces repetitive and time-consuming activities and, therefore, allows workers of the e-learning platform to focus solely on complex questions; Second, by interacting with users, it augments the resources with structured and to some extent labeled training data for further possible implementation of learnable dialogue components (see Section SECREF5);
|
What are the purposes of the dialogue system in this paper?
|
The system has two purposes: first, it reduces repetitive and time-consuming activities and, therefore, allows workers of the e-learning platform to focus solely on complex questions; second, by interacting with users, it augments the resources with structured and to some extent labeled training data for further possible implementation of learnable dialogue components (see Section 4).
|
null | false
| null |
Give me a bulleted list of the major religious groups
|
* Christianity
* Islam
* Atheists or Irreligion
* Hinduism
* Buddhism
* Sikhism
* Judaism
|
|
null | false
| null |
What checks should be done when a data pipeline fails
|
Generally data pipelines have sources, transformations and targets. First, check for errors on sources. Second, check for errors on transformations built in pipeline. Third, check for errors on targets. Further, check for errors on underlying infrastructure of the pipeline such as network, authentication and memory systems.
|
|
2002.10361
| false
| null |
FLOAT SELECTED: Table 1: Statistical summary of multilingual corpora across English, Italian, Polish, Portuguese and Spanish. We present number of users (Users), documents (Docs), and average tokens per document (Tokens) in the corpus, plus the label distribution (HS Ratio, percent of documents labeled positive for hate speech).
|
How large is the corpus?
|
It contains 106,350 documents
|
null | false
| null |
Why is the sky blue?
|
Because the Earth's atmosphere scatters the light of the sun, and blue light scatters more readily than other colors in the spectrum, we tend to see the sky as blue.
Light can be affected by the medium it passes through. For example, a prism separates white light into a spectrum of colors, and a mirror reflects the light entirely. When it passes through the atmosphere, light gets scattered. If unimpeded, the light of the sun is white.
Blue light has a shorter wavelength than other colors of light, and it scatters more easily. It is this scattering which makes it perceptible to the human eye. When the sun starts to set, the light is passing through even more atmosphere, and yet further colors are visible (such as orange and red).
You can read more at https://spaceplace.nasa.gov/blue-sky/en/, where I researched this answer.
|
|
null | false
| 197
|
The newspaper articles were converted into the proper format for training and fed into the spaCy library. Different parameters were tested in order to get the optimal result. The dataset was shuffled, using the same seed for all the experiments, and was split into a train set (70%), a test set (20%) and a validation set (10%). Information was passed through the training algorithm in batches, with the batch size increasing from 4 to 32 with a compounding step of 1.001. Additionally, a dropout rate was configured for every batch, initialized to 0.6 and decaying to 0.4 during the training process. Most of the experiments were trained for 30 epochs.
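A training loop matching this schedule could be written with spaCy's v2-style API, using compounding for the batch size and decaying for the dropout; the blank pipeline, toy data and decay rate below are illustrative assumptions, not the authors' code.

```python
import random
import spacy
from spacy.util import minibatch, compounding, decaying

# illustrative training data: (text, {"tags": [...]}) pairs for a tagger
TRAIN_DATA = [
    ("the dog runs", {"tags": ["DET", "NOUN", "VERB"]}),
    ("the cat sleeps", {"tags": ["DET", "NOUN", "VERB"]}),
]

nlp = spacy.blank("en")
tagger = nlp.create_pipe("tagger")
for _, ann in TRAIN_DATA:
    for tag in ann["tags"]:
        tagger.add_label(tag)
nlp.add_pipe(tagger)

optimizer = nlp.begin_training()
dropout = decaying(0.6, 0.4, 1e-4)            # dropout decays from 0.6 towards 0.4
for epoch in range(30):                        # 30 epochs, as in the experiments
    random.shuffle(TRAIN_DATA)
    batches = minibatch(TRAIN_DATA, size=compounding(4.0, 32.0, 1.001))
    for batch in batches:
        texts, annotations = zip(*batch)
        nlp.update(texts, annotations, sgd=optimizer, drop=next(dropout))
```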
The main area of study for the experiments focuses on three important components. At first, we investigate the difference in results between part-of-speech taggers that classify morphological features and taggers that detect only the part of speech. Moreover, we explore the significance of pretrained vectors used by a model and their effect on the extraction of better results. Most importantly, the usage of subwords of tokens from a tagger as embeddings is examined. For the experiments, precision, recall and F1 score are used as evaluation metrics.
The dataset was shuffled, using the same seed for all the experiments and was split into a train set (70%), a test set (20%) and a validation set (10%).
|
What is the ratio of the test set, training set, and validation set?
|
7:2:1, i.e., the training set is 70%, the test set 20% and the validation set 10%.
|
null | false
| null |
Church was born on May 3, 1977, in Granite Falls, North Carolina (Dudley Shoals area), to Ken and Rita Church. Church worked with his father at Clayton Marcus, a furniture upholstery company where his father was president. At 13, he bought a guitar and began writing songs of his own. By his senior year of high school, he had found a gig at a local bar, which occupied most of his time. He played many Jimmy Buffett cover songs and a few of his own original songs in some dive bars. Some of these places were so rough that he got into a few altercations from the stage. For a few years, the band played often in bars and restaurants throughout North Carolina. The band "Mountain Boys" consisted of his college roommate, brother, and a fellow guitarist.
Before moving to Nashville, Eric graduated from South Caldwell High School and then Appalachian State University with a degree in marketing. Upon graduation, Church became engaged to a Spanish teacher in the town of Lenoir, North Carolina. The future bride's father attempted to persuade Church into a corporate career, which he rejected as an aspiring musician (and later referenced in his song "What I Almost Was" from his debut album). She broke the engagement and he headed to Nashville with his father's financial backing. His father also provided the opportunity to make contacts, and more importantly, time to focus on developing his songwriting ability.
|
Where was Eric Church born?
|
Church was born on May 3, 1977, in Granite Falls, North Carolina (Dudley Shoals area), to Ken and Rita Church
|
null | false
| null |
What is Amazon Web Services?
|
AWS, or Amazon Web Services, is the cloud platform offered by Amazon. It provides services like compute, virtual machines, databases, storage, serverless SQL and serverless functions, artificial intelligence, machine learning and data science, data lakes, and the Internet of Things. A platform like AWS has all the tools for seamless integration to build an entire analytics platform for your business. AWS also serves the most businesses that have moved to the cloud, compared to its competitors Microsoft Azure and Google Cloud Platform.
|
|
null | false
| null |
Beavers (genus Castor) are large, semiaquatic rodents of the Northern Hemisphere. There are two existing species: the North American beaver (Castor canadensis) and the Eurasian beaver (C. fiber). Beavers are the second-largest living rodents, after capybaras, weighing up to 30 kilograms (66 lb). Beavers have stout bodies with large heads, long chisel-like incisors, brown or gray fur, hand-like front feet, webbed back feet, and tails that are flat and scaly. The two species differ in skull and tail shape and fur color. Beavers can be found in a number of freshwater habitats, such as rivers, streams, lakes and ponds. They are herbivorous, consuming tree bark, aquatic plants, grasses and sedges.
|
What genus are beavers a part of?
|
Castor genus
|
null | false
| 102
|
Rendering natural language descriptions from structured data is required in a wide variety of commercial applications such as generating descriptions of products, hotels, furniture, etc., from a corresponding table of facts about the entity. Such a table typically contains {field, value} pairs where the field is a property of the entity (e.g., color) and the value is a set of possible assignments to this property (e.g., color = red). Another example of this is the recently introduced task of generating one line biography descriptions from a given Wikipedia infobox BIBREF0 . The Wikipedia infobox serves as a table of facts about a person and the first sentence from the corresponding article serves as a one line description of the person. Figure FIGREF2 illustrates an example input infobox which contains fields such as Born, Residence, Nationality, Fields, Institutions and Alma Mater. Each field further contains some words (e.g., particle physics, many-body theory, etc.). The corresponding description is coherent with the information contained in the infobox.
Note that the number of fields in the infobox and the ordering of the fields within the infobox varies from person to person. Given the large size (700K examples) and heterogeneous nature of the dataset which contains biographies of people from different backgrounds (sports, politics, arts, etc.), it is hard to come up with simple rule-based templates for generating natural language descriptions from infoboxes, thereby making a case for data-driven models. Based on the recent success of data-driven neural models for various other NLG tasks BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , one simple choice is to treat the infobox as a sequence of {field, value} pairs and use a standard seq2seq model for this task. However, such a model is too generic and does not exploit the specific characteristics of this task as explained below. First, note that while generating such descriptions from structured data, a human keeps track of information at two levels. Specifically, at a macro level, she would first decide which field to mention next and then at a micro level decide which of the values in the field needs to be mentioned next. For example, she first decides that at the current step, the field occupation needs attention and then decides which is the next appropriate occupation to attend to from the set of occupations (actor, director, producer, etc.). To enable this, we use a bifocal attention mechanism which computes an attention over fields at a macro level and over values at a micro level. We then fuse these attention weights such that the attention weight for a field also influences the attention over the values within it. Finally, we feed a fused context vector to the decoder which contains both field level and word level information. Note that such two-level attention mechanisms BIBREF6 , BIBREF7 , BIBREF8 have been used in the context of unstructured data (as opposed to structured data in our case), where at a macro level one needs to pay attention to sentences and at a micro level to words in the sentences.
Next, we observe that while rendering the output, once the model pays attention to a field (say, occupation) it needs to stay on this field for a few timesteps (till all the occupations are produced in the output). We refer to this as the stay on behavior. Further, we note that once the tokens of a field are referred to, they are usually not referred to later. For example, once all the occupations have been listed in the output we will never visit the occupation field again because there is nothing left to say about it. We refer to this as the never look back behavior. To model the stay on behaviour, we introduce a forget (or remember) gate which acts as a signal to decide when to forget the current field (or equivalently to decide till when to remember the current field). To model the never look back behaviour we introduce a gated orthogonalization mechanism which ensures that once a field is forgotten, subsequent field context vectors fed to the decoder are orthogonal to (or different from) the previous field context vectors.
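One way to realise the never-look-back idea is to remove from the current field context its component along an accumulated history of previously attended field contexts, with a learned gate controlling how strongly this applies. The PyTorch sketch below is one such formulation; it is an interpretation of the mechanism described above, with illustrative shapes and names, not the authors' exact equations.

```python
import torch
import torch.nn as nn

class GatedOrthogonalization(nn.Module):
    """Gate decides how much of the component along previously attended
    field contexts should be removed from the current field context."""
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, field_ctx, history):
        # field_ctx, history: (batch, dim); history accumulates past field contexts
        eps = 1e-8
        proj = (field_ctx * history).sum(-1, keepdim=True) / \
               (history.norm(dim=-1, keepdim=True) ** 2 + eps)
        orthogonal = field_ctx - proj * history          # remove component along history
        g = torch.sigmoid(self.gate(torch.cat([field_ctx, history], dim=-1)))
        return g * orthogonal + (1 - g) * field_ctx      # gate between orthogonalized and raw context
```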
We experiment with the WikiBio dataset BIBREF0 which contains around 700K {infobox, description} pairs and has a vocabulary of around 400K words. We show that the proposed model gives a relative improvement of 21% and 20% as compared to current state of the art models BIBREF0 , BIBREF9 on this dataset. The proposed model also gives a relative improvement of 10% as compared to the basic seq2seq model. Further, we introduce new datasets for French and German on the same lines as the English WikiBio dataset. Even on these two datasets, our model outperforms the state of the art methods mentioned above.
Next, we observe that while rendering the output, once the model pays attention to a field (say, occupation) it needs to stay on this field for a few timesteps (till all the occupations are produced in the output).
|
What is "stay on" behavior?
|
It is the behavior the authors observe while the model renders the output: once the model pays attention to a field (say, occupation), it needs to stay on this field for a few timesteps (until all the occupations are produced in the output).
|
null | false
| null |
Dynamite!! 2010 was a mixed martial arts and kickboxing event in the annual New Year's Eve event promoted by Fighting and Entertainment Group that took place on December 31, 2010 at the Saitama Super Arena in Saitama, Japan. The event included bouts that encompass the DREAM, K-1 and K-1 World MAX banners. Just as the previous Dynamite!! 2009, with the official commentator Nozomi Sasaki, the event aired on TBS in Japan and HDNet in North America.
Background
FEG president Tanikawa had stated that the company planned on holding an event together with Sengoku Raiden Championship and this event may have a similar DREAM vs SRC format as Dynamite!! 2009. However, this did not happen as SRC held its own event, World Victory Road Presents: Soul of Fight.
Tanikawa has also stated that this Dynamite!! event will most likely be shortened (to around three hours) and the annual K-1 Koshien tournament will not take place at this event.
The title fight between Bibiano Fernandes and Hiroyuki Takaya was the last in the 63 kg. (139 lbs) weight class before it gets broken into the standard MMA bantamweight and featherweight divisions. DREAM plans on holding a Grand Prix in whichever of the two weight classes the champion does not decide to fight in. After winning the title, Hiroyuki Takaya stated that he would become the Featherweight (145 lbs) Champion.
It was first announced that the Aoki/Nagashima fight would alternate over four rounds, with one round under MMA rules and the next under kickboxing rules, the fighters changing gloves between rounds. A coin toss would determine which rules they started under. However, it was later announced that the fight would feature one three-minute kickboxing round, followed by a standard five-minute MMA round. The fighters used open-fingered gloves throughout. If no winner were decided in the eight minutes of competition, the fight would have been declared an automatic draw.
A fight between Bob Sapp and Shinichi Suzukawa in a super-heavyweight “IGF rules” bout was set to feature a rule set similar to MMA, but with no closed fist strikes. However, the fight was canceled due to Sapp attempting to re-negotiate his contract backstage prior to the event and eventually electing to withdraw from the contest.
|
What is Dynamite!! 2010?
|
The Dynamite!! 2010 was a mixed martial arts and kickboxing event in the annual New Year's Eve event promoted by Fighting and Entertainment Group that took place on December 31, 2010 at the Saitama Super Arena in Saitama, Japan.
The event included bouts that encompass the DREAM, K-1 and K-1 World MAX banners. Just as the previous Dynamite!! 2009, with the official commentator Nozomi Sasaki, the event aired on TBS in Japan and HDNet in North America.
|
null | false
| 368
|
Recently, there has been a significant increase in the investment for autonomous driving (AD) research and development, with the goal of achieving full autonomy in the next few years. Realizing this vision requires robust ML/AI algorithms that are trained on massive amounts of data. Thousands of cars, equipped with various types of sensors (e.g., LIDAR, RGB, RADAR), are now deployed around the world to collect this heterogeneous data from real-world driving scenes. The primary objective for AD is to use these data to optimize the vehicle's perception pipeline on such tasks as: 3D object detection, obstacle tracking, object trajectory forecasting, and learning an ideal driving policy. Fundamental to all of these tasks will be the vehicle's context understanding capability, which requires knowledge of the time, location, detected objects, participating events, weather, and various other aspects of a driving scene. Even though state-of-the-art AI technologies are used for this purpose, their current effectiveness and scalability are insufficient to achieve full autonomy. Humans naturally exhibit context understanding behind the wheel, where the decisions we make are the result of a continuous evaluation of perceptual cues combined with background knowledge. For instance, human drivers generally know which area of a neighborhood might have icy road conditions on a frigid winter day, where flooding is more frequent after a heavy rainfall, which streets are more likely to have kids playing after school, and which intersections have poor lighting. Currently, this type of common knowledge is not being used to assist self-driving cars and, due to the sample-inefficiency of current ML/AI algorithms, vehicle models cannot effectively learn these phenomena through statistical observation alone. On March 18, 2018, Elaine Herzberg’s death was reported as the first fatality incurred from a collision with an autonomous vehicle. An investigation into the collision, conducted by The National Transportation Safety Board (NTSB), remarks on the shortcomings of current AD and context understanding technologies. Specifically, NTSB found that the autonomous vehicle incorrectly classified Herzberg as an unknown object, a vehicle, and then a bicycle within the complex scene as she walked across the road. Further investigation revealed that the system design did not include consideration for pedestrians walking outside of a crosswalk, or jaywalking BIBREF9. Simply put, the current AD technology lacks fundamental understanding of the characteristics of objects and events within common scenes; this suggests that more research is required in order to achieve the vision of autonomous driving.
Knowledge Graphs (KGs) have been successfully used to manage heterogeneous data within various domains. They are able to integrate and structure data and metadata from multiple modalities into a unified semantic representation, encoded as a graph. More recently, KGs are being translated into latent vector space representations, known as Knowledge Graph Embeddings (KGEs), that have been shown to improve the performance of machine learning models when applied to certain downstream tasks, such as classification BIBREF10, BIBREF11. Given a KG as a set of triples, KGE algorithms learn to create a latent representation of the KG entities and relations as continuous KGE vectors. This encoding allows KGEs to be easily manipulated and integrated with machine learning algorithms. Motivated by the shortcomings of current context understanding technologies, along with the promising outcomes of KGEs, our research focuses on the generation and evaluation of KGEs on AD data. Before directly applying KGEs on critical AD applications, however, we evaluate the intrinsic quality of KGEs across multiple metrics and KGE algorithms BIBREF12. Additionally, we present an early investigation of using KGEs for a selected use-case from the AD domain.
Dataset. To promote and enable further research on autonomous driving, several benchmark datasets have been made publicly available by companies in this domain BIBREF13. NuScenes is a benchmark dataset of multimodal vehicular data, recently released by Aptiv BIBREF14 and used for our experiments. NuScenes consists of a collection of 20-second driving scenes, with $\sim $40 sub-scenes sampled per driving scene (i.e., one every 0.5 seconds). In total, NuScenes includes 850 driving scenes and 34,149 sub-scenes. Each sub-scene is annotated with detected objects and events, each defined within a taxonomy of 23 object/event categories.
Scene Ontology. In autonomous driving, a scene is defined as an observable volume of time and space BIBREF15. On the road, a vehicle may encounter many different situations, such as merging onto a divided highway, stopping at a traffic light, and overtaking another vehicle, all of which are considered as common driving scenes. A scene encapsulates all relevant information about a particular situation, including data from vehicular sensors, objects, events, time and location. A scene can also be divided into a sequence of sub-scenes. As an example, a 20-second drive consisting primarily of the vehicle merging into a highway could be considered as a scene. In addition, all the different situations the vehicle encounters within these 20 seconds can also be represented as (sub-)scenes. In this case, a scene may be associated with a time interval and spatial region while a sub-scene may be associated with a specific timestamp and a set of spatial coordinates. This semantic representation of a scene is formally defined in the Scene Ontology (see figure FIGREF8(a), depicted in Protege). To enable the generation of a KG from the data within NuScenes, the Scene Ontology is extended to include all the concepts (i.e., objects and event categories) found in the NuScenes dataset.
Generating Knowledge Graphs. The Scene Ontology identifies events and features-of-interests (FoIs) as top-level concepts. An event or a FoI may be associated with a Scene via the includes relation. FoIs are associated with events through the isParticipantOf relation. Figure FIGREF8(b) shows a subset of the FoIs and events defined by the Scene Ontology. In generating the scenes' KG, each scene and sub-scene found in NuScenes is annotated using the Scene Ontology. Table TABREF9 shows some basic statistics of the generated KG.
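Concretely, annotating a sub-scene then amounts to emitting triples such as a scene includes a feature-of-interest, a scene includes an event, and a FoI isParticipantOf an event. A small rdflib sketch is given below; the namespace and identifiers are invented for illustration and are not the actual Scene Ontology IRIs.

```python
from rdflib import Graph, Namespace, RDF

# illustrative namespace; not the actual Scene Ontology IRI
DS = Namespace("http://example.org/driving-scene#")

g = Graph()
scene = DS["scene_0061"]
car = DS["car_17"]
parked = DS["parkedCar_event_3"]

g.add((scene, RDF.type, DS.Scene))
g.add((car, RDF.type, DS.Car))            # Car is a feature-of-interest (FoI)
g.add((parked, RDF.type, DS.ParkedCar))   # event category from the NuScenes taxonomy
g.add((scene, DS.includes, car))          # the scene includes both the FoI and the event
g.add((scene, DS.includes, parked))
g.add((car, DS.isParticipantOf, parked))  # the FoI participates in the event

print(g.serialize(format="turtle"))
```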
KGE Algorithms. KGE algorithms enable the ability to easily feed knowledge into ML algorithms and improve the performance of learning tasks, by translating the knowledge contained in knowledge graphs into latent vector space representation of KGEs BIBREF16. To select candidate KGE algorithms for our evaluation, we referred to the classification of KGE algorithms provided by Wang et al. BIBREF17. In this work, KGE algorithms are classified into two primary categories: (1) Transitional distance-based algorithms and (2) Semantic matching-based models. Transitional distance-based algorithms define the scoring function of the model as a distance-based measure, while semantic matching-based algorithms define it as a similarity measure. Here, entity and relation vectors interact via addition and subtraction in the case of Transitional distance-based models; in semantic matching-based models, the interaction between entity and relation vectors is captured by multiplicative score functions BIBREF18.
Initially, for our study we had selected one algorithm from each class: TransE BIBREF19 to represent the transitional distance-based algorithms and RESCAL BIBREF20 to represent the semantic matching-based algorithms. However, after experimentation, RESCAL did not scale well for handling large KGs in our experiments. Therefore, we also included HolE BIBREF21, an efficient successor of RESCAL, in the evaluation. A brief summary of each algorithm is provided for each model, below:
TransE: the TransE model is often considered to be the most-representative of the class of transitional distance-based algorithms BIBREF17. Given a triple (h, r, t) from the KG, TransE encodes h, r and t as vectors, with r represented as a transition vector from h to t: $\mathbf {h} + \mathbf {r} \approx \mathbf {t}$. Since both entities and relations are represented as vectors, TransE is one of the most efficient KGE algorithms, with $\mathcal {O}(n d + m d)$ space complexity and $\mathcal {O}(n_t d)$ time complexity ($n_t$ is the number of training triples). A minimal code sketch of this translational scoring is given after these three summaries.
RESCAL: RESCAL is capable of generating an expressive knowledge graph embedding space, due to its ability to capture complex patterns over multiple hops in the KG. RESCAL encodes relations as matrices and captures the interaction between entities and relations using a bi-linear scoring function. Though the use of a matrix to encode each relation yields improved expressivity, it also limits RESCAL’s ability to scale with large KGs. It has $\mathcal {O}(n d + m d^2)$ space complexity and $\mathcal {O}(n_t d^2)$ time complexity.
HolE: HolE is a more efficient successor of RESCAL, addressing its space and time complexity issues, by encoding relations as vectors without sacrificing the expressivity of the model. By using the circular correlation operation BIBREF21, it captures the pairwise interaction of entities as composable vectors. This optimization yields $\mathcal {O} (n d + m d)$ space complexity and $\mathcal {O}(n_t d \log d)$ time complexity.
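The following is a minimal PyTorch sketch of the TransE scoring function and its margin-based ranking loss, as summarised above; the embedding dimension and margin are illustrative defaults rather than the settings used in the experiments.

```python
import torch
import torch.nn as nn

class TransE(nn.Module):
    def __init__(self, n_entities, n_relations, dim=100, margin=1.0):
        super().__init__()
        self.ent = nn.Embedding(n_entities, dim)
        self.rel = nn.Embedding(n_relations, dim)
        self.margin = margin

    def score(self, h, r, t):
        # plausibility of (h, r, t): a smaller distance ||h + r - t|| is better
        return (self.ent(h) + self.rel(r) - self.ent(t)).norm(p=1, dim=-1)

    def loss(self, pos, neg):
        # margin ranking loss between a true triple and a corrupted one
        pos_score = self.score(*pos)
        neg_score = self.score(*neg)
        return torch.relu(self.margin + pos_score - neg_score).mean()
```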
Visualizing KGEs. In order to visualize the generated KGE, a “mini" KG from the NuScenes-mini dataset was created. Specifically, 10 scenes were selected (along with their sub-scenes) to generate the KG, and the TransE algorithm was used to learn the embeddings. When training the KGEs, we chose the dimension of the vectors to be 100. To visualize the embeddings in 2-dimensional (2D) space, the dimensions are reduced using the t-Distributed Stochastic Neighbor Embedding (t-SNE) BIBREF22 projection. Figure FIGREF11(a) shows the resulting embeddings of the NuScenes dataset. To denote interesting patterns that manifest in the embeddings, instances of Car (a FoI) and the events in which they participate are highlighted. In this image, events such as parked car, moving car, and stopped car are clustered around entities of type Car. This shows that the isParticipantOf relations defined in the KG are maintained within the KG embeddings.
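The 2D projection step can be reproduced with scikit-learn's TSNE; the random vectors below merely stand in for the learned 100-dimensional KGE vectors.

```python
import numpy as np
from sklearn.manifold import TSNE

# toy 100-dimensional vectors standing in for the learned TransE embeddings
embeddings = np.random.rand(200, 100)
projected = TSNE(n_components=2, random_state=0).fit_transform(embeddings)
print(projected.shape)   # (200, 2) points that can be scattered and coloured by type
```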
Here, we deviate slightly from the prior work in evaluating KGE algorithms, which evaluate KGEs based downstream task performance. Instead, we focus on an evaluation that uses only metrics that quantify the intrinsic quality of KGEs BIBREF23: categorization measure, coherence measure, and semantic transition distance. Categorization measures how well instances of the same type cluster together. To quantify this quality, all vectors of the same type are averaged together and the cosine similarity is computed between the averaged vector and the typed class. The Coherence measure quantifies the proportion of neighboring entities that are of the same type; the evaluation framework proposes that, if a set of entities are typed by the class, those entities should form a cluster in the embedding space with the typed class as the centroid. Adapted from the word embedding literature, Semantic Transitional Distance captures the relational semantics of the KGE: if a triple $(h,r,t)$ is correctly represented in the embedding space, the transition distance between the vectors representing $(\mathbf {h+r})$ should be close to $\mathbf {t}$. This is quantified by computing the cosine similarity between $(\mathbf {h+r})$ and $\mathbf {t}$.
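The categorization measure and the semantic transition distance can both be written as small cosine-similarity computations over the learned vectors; a sketch with placeholder vectors:

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def semantic_transition_distance(h, r, t):
    # cosine similarity between (h + r) and t for a correctly embedded triple
    return cosine(h + r, t)

def categorization_measure(instance_vectors, class_vector):
    # cosine similarity between the averaged instance vectors and the typed class vector
    return cosine(np.mean(instance_vectors, axis=0), class_vector)

# toy vectors standing in for learned KGE vectors
car, is_participant_of, parked_car = (np.random.rand(100) for _ in range(3))
print(semantic_transition_distance(car, is_participant_of, parked_car))
```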
Results. Evaluation results are reported with respect to each algorithm and metric. Figure FIGREF13 shows the evaluation results of the categorization measure, coherence measure, and semantic transitional distance for each KGE algorithm. The NuScenes KG, generated from the NuScenes-trainval dataset, is large in terms of both the number of triples and number of entities (see Table TABREF9). Hence, RESCAL did not scale well to this dataset. For this reason, we only report the evaluation results for TransE and HolE. When considering the KGE algorithms, TransE's performance is consistently better across metrics, compared to HolE's performance. However, it is interesting to note that HolE significantly outperforms TransE for some classes/relations. When considering the evaluation metrics, it is evident that the categorization measure and semantic transitional distance are able to capture the quality of type semantics and relational semantics, respectively. The value of the coherence measure, however, is zero for HolE in most cases and close to zero for TransE in some cases. In our experimental setting, the poor performance with respect to the coherence measure may suggest that it may not be a good metric for evaluating KGEs in the AD domain.
We report preliminary results from our investigation into using KGEs for a use-case in the AD domain. More specifically, we apply KGEs for computing scene similarity. In this case, the goal is to find (sub-)scenes that are characteristically similar, using the learned KGEs. Given a set of scene pairs, we choose the pair with the highest cosine similarity as the most similar. Figure FIGREF15 shows an illustration of the two most similar sub-scenes, when the list of pairs include sub-scenes from different scenes. An interesting observation is that the black string of objects in sub-scene (a) are Barriers (a Static Object), and the orange string of objects in sub-scene (b) are Stopped Cars. This example suggests that the KGE-based approach could identify sub-scenes that share similar characteristics even though the sub-scenes are visually dissimilar.
We presented an investigation of using KGEs for AD context understanding, along with an evaluation of the intrinsic quality of KGEs. The evaluation suggests that KGEs are specifically able to capture the semantic properties of a scene knowledge graph (e.g., isParticipantOf relation between objects and events). More generally, KGE algorithms are capable of translating semantic knowledge, such as type and relational semantics to KGEs. When considering the different KGE algorithms, we report that the transitional distance-based algorithm, TransE, shows consistent performance across multiple quantitative KGE-quality metrics. Our evaluation further suggests that some quality metrics currently in use, such as the coherence measure, may not be effective in measuring the quality of the type semantics from KGEs, in the AD domain. Finally, in applying the learned KGEs to a use-case of importance in the AD domain, we shed some light on the effectiveness of leveraging KGEs in capturing AD scene similarity.
We suggest neuro-symbolic fusion as one solution and, indeed, our results show that our embedding space preserves the semantic properties of the conceptual elements that make up visual scenes.
|
Does their embedding space preserve the semantic properties of the conceptual elements that make up visual scenes?
|
Yes.
|
null | false
| null |
The United States of America is a federal republic consisting of 50 states, a federal district (Washington, D.C., the capital city of the United States), five major territories, and various minor islands. Both the states and the United States as a whole are each sovereign jurisdictions. The Tenth Amendment to the United States Constitution allows states to exercise all powers of government not delegated to the federal government. Each state has its own constitution and government, and all states and their residents are represented in the federal Congress, a bicameral legislature consisting of the Senate and the House of Representatives. Each state is represented by two senators, while representatives are distributed among the states in proportion to the most recent constitutionally mandated decennial census. Additionally, each state is entitled to select a number of electors to vote in the Electoral College, the body that elects the president of the United States, equal to the total of representatives and senators in Congress from that state. The federal district does not have representatives in the Senate, but has a non-voting delegate in the House, and it is also entitled to electors in the Electoral College. Congress can admit more states, but it cannot create a new state from territory of an existing state or merge of two or more states into one without the consent of all states involved, and each new state is admitted on an equal footing with the existing states.
The United States has control over fourteen territories. Five of them (American Samoa, Guam, the Northern Mariana Islands, Puerto Rico, and the U.S. Virgin Islands) have a permanent, nonmilitary population, while nine of them (the United States Minor Outlying Islands) do not. With the exception of Navassa Island, Puerto Rico, and the U.S. Virgin Islands, which are located in the Caribbean, all territories are located in the Pacific Ocean. One territory, Palmyra Atoll, is considered to be incorporated, meaning the full body of the Constitution has been applied to it; the other territories are unincorporated, meaning the Constitution does not fully apply to them. Ten territories (the Minor Outlying Islands and American Samoa) are considered to be unorganized, meaning they have not had an organic act enacted by Congress; the four other territories are organized, meaning an organic act has been enacted by Congress. The five inhabited territories each have limited autonomy in addition to having territorial legislatures and governors, but residents cannot vote in federal elections, although all are represented by non-voting delegates in the House.
The largest state by population is California, with a population of 39,538,223 people, while the smallest is Wyoming, with a population of 576,851 people; the federal district has a larger population (689,545) than both Wyoming and Vermont. The largest state by area is Alaska, encompassing 665,384 square miles (1,723,337 square kilometers), while the smallest is Rhode Island, encompassing 1,545 square miles (4,001 square kilometers). The most recent states to be admitted, Alaska and Hawaii, were admitted in 1959. The largest territory by population is Puerto Rico, with a population of 3,285,874 people (larger than 21 states), while the smallest is the Northern Mariana Islands, with a population of 47,329 people. Puerto Rico is the largest territory by area, encompassing 5,325 square miles (13,791 square kilometers); the smallest territory, Kingman Reef, encompasses only 0.005 square miles (0.01 square kilometers).
|
From the passage, note down the name and population of the largest and smallest US states by population. List the results in the format 'name of state: population count'.
|
Largest state by population - California: 39,538,223
Smallest state by population - Wyoming: 576,851
|
null | false
| 33
|
We use two reference standards to evaluate the semantic similarity and relatedness measures. UMNSRS was annotated for both similarity and relatedness by medical residents. MiniMayoSRS was annotated for relatedness by medical doctors (MD) and medical coders (coder). In this section, we describe these data sets and a few of their differences.
MiniMayoSRS: The MayoSRS, developed by PakhomovPMMRC10, consists of 101 clinical term pairs whose relatedness was determined by nine medical coders and three physicians from the Mayo Clinic. The relatedness of each term pair was assessed based on a four point scale: (4.0) practically synonymous, (3.0) related, (2.0) marginally related and (1.0) unrelated. MiniMayoSRS is a subset of the MayoSRS and consists of 30 term pairs on which a higher inter–annotator agreement was achieved. The average correlation between physicians is 0.68. The average correlation between medical coders is 0.78. We evaluate our method on the mean of the physician scores, and the mean of the coders scores in this subset in the same manner as reported by PedersenPPC07.
UMNSRS: The University of Minnesota Semantic Relatedness Set (UMNSRS) was developed by PakhomovMALPM10, and consists of 725 clinical term pairs whose semantic similarity and relatedness was determined independently by four medical residents from the University of Minnesota Medical School. The similarity and relatedness of each term pair was annotated based on a continuous scale by having the resident touch a bar on a touch-sensitive computer screen to indicate the degree of similarity or relatedness. The Intraclass Correlation Coefficient (ICC) for the reference standard tagged for similarity was 0.47, and 0.50 for relatedness. Therefore, as suggested by Pakhomov and colleagues, we use a subset of the ratings consisting of 401 pairs for the similarity set and 430 pairs for the relatedness set, which each have an ICC of 0.73.
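As a hedged illustration of how such reference standards are typically used (the exact evaluation protocol is an assumption here, not stated in the excerpt), a similarity or relatedness measure can be scored against the mean human ratings with Spearman's rank correlation:

```python
# Hedged sketch: compare a measure's scores with mean human ratings.
from scipy.stats import spearmanr

def evaluate_measure(term_pairs, measure, human_means):
    # term_pairs: list of (term1, term2); measure: similarity/relatedness function;
    # human_means: {(term1, term2): mean human rating}
    scores = [measure(a, b) for a, b in term_pairs]
    gold = [human_means[(a, b)] for a, b in term_pairs]
    return spearmanr(scores, gold).correlation
```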
We use two reference standards to evaluate the semantic similarity and relatedness measures. UMNSRS was annotated for both similarity and relatedness by medical residents. MiniMayoSRS was annotated for relatedness by medical doctors (MD) and medical coders (coder).
|
What are the two reference standards to evaluate the semantic similarity and relatedness measures?
|
UMNSRS and MiniMayoSRS.
|
1808.02113
| false
| null |
Our application is the escalation of Internet chats. To maintain quality of service, users are transferred to human representatives when their conversations with an intelligent virtual assistant (IVA) fail to progress. These transfers are known as escalations. We apply Han to such conversations in a sequential manner by feeding each user turn to Han as they occur, to determine if the conversation should escalate. If so, the user will be transferred to a live chat representative to continue the conversation. To help the human representative quickly determine the cause of the escalation, we generate a visualization of the user's turns using the attention weights to highlight the turns influential in the escalation decision. This helps the representative quickly scan the conversation history and determine the best course of action based on problematic turns.
Our application requires that the visualizations be generated in real-time at the point of escalation. The user must wait for the human representative to review the IVA chat history and resume the failed task. Therefore, we seek visualization methods that do not add significant latency to the escalation transfer. Using the attention weights for turn influence is fast as they were already computed at the time of classification. However, these weights will not generate useful visualizations for the representatives when their values are similar across all turns (see Han Weight in Table TABREF1 ). To overcome this problem, we develop a visualization method to be applied in the instances where the attention weights are uniform. Our method produces informative visuals for determining influential samples in a sequence by observing the changes in sample importance over the cumulative sequence (see Our Weight in Table TABREF1 ). Note that we present a technique that only serves to resolve situations when the existing attention weights are ambiguous; we are not developing a new attention mechanism, and, as our method is external, it does not require any changes to the existing model to apply.
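The following Python sketch illustrates the idea in hedged form; the exact definition of per-turn importance is an assumption, and `escalation_score` is a hypothetical wrapper around the trained classifier, so the existing model remains untouched:

```python
# Hedged sketch of the fallback visualization: when attention weights are
# near-uniform, attribute influence to each turn based on how much the
# classifier's view changes as that turn is appended to the cumulative sequence.
def turn_influence(turns, escalation_score):
    influences, history, prev = [], [], 0.0
    for turn in turns:
        history.append(turn)
        score = escalation_score(history)     # P(escalate | turns so far)
        influences.append(abs(score - prev))  # change attributed to this turn
        prev = score
    total = sum(influences) or 1.0
    return [w / total for w in influences]    # normalized per-turn weights
```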
We apply Han to such conversations in a sequential manner by feeding each user turn to Han as they occur, to determine if the conversation should escalate. If so, the user will be transferred to a live chat representative to continue the conversation.
The user must wait for the human representative to review the IVA chat history and resume the failed task.
|
How do they gather human reviews?
|
The answers are shown as follows:
* human representative to review the IVA chat history and resume the failed task
|
null | false
| 45
|
Speech-to-Text translation (ST) is essential for a wide range of scenarios: for example in emergency calls, where agents have to respond to emergent requests in a foreign language BIBREF0; or in online courses, where audiences and speakers use different languages BIBREF1. To tackle this problem, existing approaches can be categorized into the cascaded method BIBREF2, BIBREF3, where a machine translation (MT) model translates outputs of an automatic speech recognition (ASR) system into the target language, and the end-to-end method BIBREF4, BIBREF5, where a single model learns acoustic frames to target word sequence mappings in one step towards the final objective of interest. Although the cascaded model remains the dominant approach due to its better performance, the end-to-end method is becoming more and more popular because it has lower latency by avoiding inference with two models and, in theory, rectifies error propagation.
Since it is hard to obtain a large-scale ST dataset, multi-task learning BIBREF5, BIBREF6 and pre-training techniques BIBREF7 have been applied to the end-to-end ST model to leverage large-scale datasets of ASR and MT. A common practice is to pre-train two encoder-decoder models for ASR and MT respectively, and then initialize the ST model with the encoder of the ASR model and the decoder of the MT model. Subsequently, the ST model is optimized with multi-task learning by weighting the losses of ASR, MT, and ST. This approach, however, causes a huge gap between pre-training and fine-tuning, which we summarize in three respects:
Subnet Waste: The ST system just reuses the ASR encoder and the MT decoder, while discarding other pre-trained subnets, such as the MT encoder. Consequently, valuable semantic information captured by the MT encoder cannot be inherited by the final ST system.
Role Mismatch: The speech encoder plays different roles in pre-training and fine-tuning. The encoder is a pure acoustic model in pre-training, while it has to extract semantic and linguistic features additionally in fine-tuning, which significantly increases the learning difficulty.
Non-pre-trained Attention Module: Previous work BIBREF6 trains attention modules for ASR, MT and ST respectively; hence, the attention module of ST does not benefit from the pre-training.
To address these issues, we propose a Tandem Connectionist Encoding Network (TCEN), which is able to reuse all subnets in pre-training, keep the roles of subnets consistent, and pre-train the attention module. Concretely, the TCEN consists of three components: a speech encoder, a text encoder, and a target text decoder. Different from previous work that pre-trains an encoder-decoder based ASR model, we only pre-train an ASR encoder by optimizing the Connectionist Temporal Classification (CTC) BIBREF8 objective function. In this way, the additional ASR decoder is not required, while the speech encoder keeps the ability to map acoustic features into the source-language space. Besides, the text encoder and decoder can be pre-trained on a large MT dataset. After that, we employ the commonly used multi-task learning method to jointly learn ASR, MT and ST tasks.
Compared to prior works, the encoder of TCEN is a concatenation of an ASR encoder and an MT encoder, and our model does not have an ASR decoder, so the subnet waste issue is solved. Furthermore, the two encoders work in tandem, disentangling acoustic feature extraction and linguistic feature extraction, ensuring role consistency between pre-training and fine-tuning. Moreover, we reuse the pre-trained MT attention module in ST, so we can leverage the alignment information learned in pre-training.
Since the text encoder consumes word embeddings of plausible texts in MT task but uses speech encoder outputs in ST task, another question is how one guarantees the speech encoder outputs are consistent with the word embeddings. We further modify our model to achieve semantic consistency and length consistency. Specifically, (1) the projection matrix at the CTC classification layer for ASR is shared with the word embedding matrix, ensuring that they are mapped to the same latent space, and (2) the length of the speech encoder output is proportional to the length of the input frame, so it is much longer than a natural sentence. To bridge the length gap, source sentences in MT are lengthened by adding word repetitions and blank tokens to mimic the CTC output sequences.
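As a rough illustration of the length-consistency trick (the repetition factor and blank symbol below are assumptions, not values from the paper), an MT source sentence can be padded with word repetitions and blank tokens so that it resembles a CTC output sequence:

```python
# Illustrative sketch of lengthening an MT source sentence to mimic a CTC
# output sequence; repetition counts and the blank token are hypothetical.
import random

def lengthen_for_ctc(tokens, target_len, blank="<blank>"):
    out = []
    for tok in tokens:
        out.extend([tok] * random.randint(1, 3))  # mimic CTC label repetitions
        out.append(blank)                         # mimic CTC blank symbols
    while len(out) < target_len:
        out.insert(random.randrange(len(out) + 1), blank)
    return out[:target_len]

print(lengthen_for_ctc(["we", "must", "act"], target_len=12))
```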
We conduct comprehensive experiments on the IWSLT18 speech translation benchmark BIBREF1, demonstrating the effectiveness of each component. Our model is significantly better than previous methods by 3.6 and 2.2 BLEU scores for the subword-level decoding and character-level decoding strategies, respectively.
Our contributions are three-folds: 1) we shed light on why previous ST models cannot sufficiently utilize the knowledge learned from the pre-training process; 2) we propose a new ST model, which alleviates shortcomings in existing methods; and 3) we empirically evaluate the proposed model on a large-scale public dataset.
Our model is significantly better than previous methods by 3.6 and 2.2 BLEU scores for the subword-level decoding and character-level decoding strategies, respectively.
|
How many BlEU scores does their model better than previous methods for the subword-level decoding?
|
3.6 BLEU scores.
|
null | false
| 54
|
Knowledge graphs BIBREF0 enable structured access to world knowledge and form a key component of several applications like search engines, question answering systems and conversational assistants. Knowledge graphs are typically interpreted as comprising discrete triples of the form (entityA, relationX, entityB), thus representing a relation (relationX) between entityA and entityB. However, one limitation of only a discrete representation of triples is that it does not easily enable one to infer similarities and potential relations among entities which may be missing in the knowledge graph. Consequently, one popular alternative is to learn dense continuous representations of entities and relations by embedding them in latent continuous vector spaces, while seeking to model the inherent structure of the knowledge graph. Most knowledge graph embedding methods can be classified into two major classes: one class which operates purely on triples like RESCAL BIBREF1 , TransE BIBREF2 , DistMult BIBREF3 , TransD BIBREF4 , ComplEx BIBREF5 , ConvE BIBREF6 and the second class which seeks to incorporate additional information (like multi-hops) BIBREF7 . Learning high-quality knowledge graph embeddings can be quite challenging given that (a) they need to effectively model the contextual usages of entities and relations and (b) they would need to be useful for a variety of predictive tasks on knowledge graphs.
In this paper, we present a new type of knowledge graph embeddings called Dolores that are both deep and contextualized. Dolores learns both context-independent and context-dependent embeddings of entities and relations through a deep neural sequential model. Figure 1 illustrates the deep contextualized representations learned. Note that the context-independent entity embeddings (see Figure 1 ) reveal three clusters of entities: writers, philosophers, and musicians. The context-dependent embeddings in turn effectively account for specific relations. In particular, the context-dependent representations under the relation nationality now nicely cluster the above entities by nationality, namely Austrians, Germans, and British/Irish. Similarly, Figure 1 shows contextual embeddings given the relation place-lived. Note that these embeddings correctly capture that even though Beethoven and Brahms were German, they lived in Vienna and are closer to other Austrian musicians like Schubert.
Unlike most knowledge graph embeddings such as TransE and TransD BIBREF2 , BIBREF4 , which are typically learned using shallow models, the representations learned by Dolores are deep: they depend on an entire path (rather than just a triple), are functions of the internal states of a Bi-Directional LSTM, and are composed of representations learned at various layers, potentially capturing varying degrees of abstraction. Dolores is inspired by recent advances in learning word representations (word embeddings) from deep neural language models using Bi-Directional LSTMs BIBREF8 . In particular, we derive connections between the work of Peters et al. ( BIBREF8 ), who learn deep contextualized word embeddings from sentences using a Bi-Directional LSTM-based language model, and random walks on knowledge graphs. These connections enable us to propose new “deep contextualized” knowledge graph embeddings which we call Dolores embeddings.
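A minimal sketch of this connection, not the released Dolores code, is to generate entity-relation random walks over the knowledge graph and treat them as sentences for a Bi-Directional LSTM language model, whose per-token hidden states then serve as contextualized embeddings; the adjacency format and toy graph below are assumptions:

```python
# Hedged sketch: random walks over a knowledge graph as "sentences" for a
# BiLSTM language model; the graph format here is hypothetical.
import random

def random_walk(graph, start, hops):
    # graph: {entity: [(relation, neighbor), ...]}
    path, node = [start], start
    for _ in range(hops):
        if not graph.get(node):
            break
        rel, nxt = random.choice(graph[node])
        path.extend([rel, nxt])
        node = nxt
    return path  # e.g. [e1, r1, e2, r2, e3], fed to the BiLSTM like a sentence

g = {"Beethoven": [("nationality", "Germany"), ("place_lived", "Vienna")],
     "Vienna": [("located_in", "Austria")]}
print(random_walk(g, "Beethoven", hops=2))
```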
Knowledge Embeddings learned using Dolores can easily be used as input representations for predictive models on knowledge graphs. More importantly, when existing predictive models use input representations for entities and relations, we can easily replace those representations with Dolores representations and significantly improve the performance of existing models. Specifically, we show that Dolores embeddings advance the state-of-the-art models on various tasks like link prediction, triple classification and missing relation type prediction.
To summarize, our contributions are as follows:
Most knowledge graph embedding methods can be classified into two major classes: one class which operates purely on triples like RESCAL (Nickel et al., 2011), TRANSE (Bordes et al., 2013), DISTMULT (Yang et al., 2015), TRANSD (Ji et al., 2015), COMPLEX (Trouillon et al., 2016), CONVE (Dettmers et al., 2018) and the second class which seeks to incorporate additional information (like multi-hops) (Wang et al., 2017).
|
What are the two main categories of embedding methods for most knowledge graphs?
|
One class which operates purely on triples like RESCAL, TRANSE, DISTMULT, TRANSD, COMPLEX, CONVE and the second class which seeks to incorporate additional information (like multi-hops).
|
null | false
| null |
Athena (Ἀθηνᾶ, Athēnâ)
Goddess of reason, wisdom, intelligence, skill, peace, warfare, battle strategy, and handicrafts. According to most traditions, she was born from Zeus's forehead, fully formed and armored, after Zeus swallowed her mother, Metis, whole. She is depicted as being crowned with a crested helm, armed with shield and spear, and wearing the aegis over a long dress. Poets describe her as "grey-eyed" or having especially bright, keen eyes. She is a special patron of heroes such as Odysseus. She is the patron of the city Athens (from which she takes her name) and is credited with various inventions in arts and literature. Her symbol is the olive tree. She is commonly shown as being accompanied by her sacred animal, the owl. Her Roman counterpart is Minerva.
|
From the passage, identify the domains of which Athena is considered the goddess. Display the results in a comma-separated format.
|
reason, wisdom, intelligence, skill, peace, warfare, battle strategy, handicrafts
|
null | false
| null |
What are the names of the lines on Hong Kong's MTR network?
|
- Island Line
- Tsuen Wan Line
- Kwun Tong Line
- South Island Line
- Tseung Kwan O Line
- Tung Chung Line & Disneyland Resort Line
- East Rail Line
- Tuen Ma Line
- Airport Express
|
|
null | false
| 369
|
There are several recent works that attempt to match profiles across different Internet services. Some of these works utilize private user data, while some, like ours, use publicly available data. An example of a work that uses private data is Balduzzi et al. BIBREF8 . They use data from the Friend Finder system (which includes some private data) provided by various social networks to link users across services. Though one can achieve a relatively high level of success by using private data to link user accounts, we are interested in using only publicly available data for this task. In fact, as mentioned earlier, we do not even consider publicly available information that could explicitly identify a user, such as names, birthdays and locations.
Several methods have been proposed for matching user profiles using public data BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 . These works differ from ours in two main aspects. First, in some of these works, the ground truth data is collected by assuming that all profiles that have the same screen name are from the same users BIBREF15 , BIBREF16 . This is not a valid assumption. In fact, it has been suggested that close to 20% of accounts with the same screen name in Twitter and Facebook are not matching BIBREF17 . Second, almost all of these works use features extracted from the user profiles BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 . Our work, on the other hand, is blind to the profile information and only utilizes users' activity patterns (linguistic and temporal) to match their accounts across different social networks. Using profile information to match accounts is contrary to the best practices of stylometry since it assumes and relies on the honesty, consistency and willingness of the users to explicitly share identifiable information about themselves (such as location).
An example of a work that uses private data is Balduzzi et al. [2]. They use data from the Friend Finder system (which includes some private data) provided by various social networks to link users across services.
|
Where is the data used by Balduzzi et al. from?
|
The Friend Finder system.
|
null | false
| null |
The Yayhoos formed as a "temporary band" in 1993 and consisted of "an all-star lineup of country punk veterans," including Eric "Rosco" Ambel on guitar/vocals, Dan Baird on guitar/vocals, Keith Christopher on bass, and Terry Anderson on drums/vocals. With three solo artists from singer-songwriter backgrounds—Anderson, Ambel, and Baird—the band features three lead vocalists as well as three songwriters.
|
Extract all of the dates mentioned in this paragraph and list them using bullets in the format {Date} - {Description}
|
• 1993 - The Yayhoos were formed during this year.
|
null | false
| 267
|
We now present the most interesting results from our experiments. For the evaluation we used standard metrics for classification accuracy, suitable for studying problems such as sentiment analysis. In particular we used Precision and Recall, with the former calculated as the ratio of the number of tweets correctly classified to a given class over the total number of tweets classified to that class, while the latter measures the ratio of messages correctly classified to a given class over the number of messages from that class. Additionally, the F-score is the harmonic mean of precision and recall, expressed as INLINEFORM0 . For our particular case with three classes, P, R and F are computed for each class separately, with the final F value derived as the weighted mean of the separate INLINEFORM1 -scores: INLINEFORM2 ; recall that INLINEFORM3 , INLINEFORM4 and INLINEFORM5 . The results are shown in Table TABREF24 , along with the reported results from state of the art approaches proposed by other researchers in the field. Note that the performance numbers P,R and F of the other state of the art approaches are based on the authors' reported data in the cited works. Additionally, we report the performance of each individual LSTM classifier as if used alone over the same data (that is, without the ensemble logic). The F-score for our proposed approaches shown in the last column, is the weighted average value over the 3 classes (Neutral,Sexism,Racism). Moreover, all the reported values are average values produced for a number of runs of the same tested scheme over the same data. Figure FIGREF23 shows the F-Score as a function of the number of training samples for each ensemble of classifiers. We clearly see that the models converge. For the final run the F-score has standard deviation value not larger than 0.001, for all classifiers.
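For concreteness, the weighted F-score can be sketched as follows; the use of class proportions as weights and the illustrative counts below are assumptions, not the paper's figures:

```python
# Sketch of a weighted F-score over the three classes (Neutral, Sexism, Racism).
def f1(p, r):
    return 2 * p * r / (p + r) if (p + r) else 0.0

def weighted_f(per_class, counts):
    # per_class: {label: (precision, recall)}; counts: {label: gold examples}
    total = sum(counts.values())
    return sum(counts[c] / total * f1(*per_class[c]) for c in per_class)

print(weighted_f({"Neutral": (0.95, 0.96), "Sexism": (0.90, 0.88), "Racism": (0.80, 0.75)},
                 {"Neutral": 11000, "Sexism": 3400, "Racism": 2000}))
```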
As can be seen in Table TABREF24 , the work by BIBREF12 , in which character n-grams and gender information were used as features, obtained the quite low F-score of INLINEFORM0 . Later work by the same author BIBREF5 investigated the impact of the experience of the annotator in the performance, but still obtaining a lower F-score than ours. Furthermore, while the first part of the two step classification BIBREF16 performs quite well (reported an F-score of 0.9520), it falls short in detecting the particular class the abusive text belongs to. Finally, we observe that applying a simple LSTM classification with no use of additional features (denoted `single classifier (i)' in Table TABREF24 ), achieves an F-score that is below 0.93, something that is in line with other researchers in the field, see BIBREF15 .
Very interestingly, the incorporation of features related to users' behaviour into the classification has provided a significant increase in performance vs. using the textual content alone (INLINEFORM0 vs. INLINEFORM1).
Another interesting finding is the observed performance improvement by using an ensemble instead of a single classifier; some ensembles outperform the best single classifier. Furthermore, the NRS classifier, which produces the best score in relation to other single classifiers, is the one included in the best performing ensemble.
In comparison to the approach by BIBREF13 , which focuses on various classes of Sexism, the results show that our deep learning model does better at detecting Sexism in general, outperforming the FastText algorithm they include in their experiments (F=0.87). The inferiority of FastText to LSTM is also reported in the work by BIBREF15 , as well as to CNN in BIBREF16 . In general, our ensemble schemes confirm that deep learning can outperform the NLP-based approaches known so far in the task of abusive language detection.
We also present the performance of each of the tested models per class label in Table TABREF25 . Results by other researchers have not been included, as these figures are not reported in the existing literature. As can be seen, sexism is quite easy to classify in hate-speech, while racism seems to be harder; similar results were reported by BIBREF7 . This result is consistent across all ensembles.
For completeness, the confusion matrices of the best performing approach that employs 3 classifiers (ensemble viii), as well as of the ensemble of the 5 classifiers (xi), are provided in Table TABREF26 . The presented values are sums over multiple runs.
The code and results associated with this paper will be available on-line soon at: https://github.com/gpitsilis/hate-speech/
For the evaluation we used standard metrics for classification accuracy, suitable for studying problems such as sentiment analysis. In particular we used Precision and Recall, with the former calculated as the ratio of the number of tweets correctly classified to a given class over the total number of tweets classified to that class, while the latter measures the ratio of messages correctly classified to a given class over the number of messages from that class.
|
What did they calculate precision as?
|
The ratio of the number of tweets correctly classified to a given class over the total number of tweets classified to that class.
|
null | false
| null |
How do interest rates and inflation impact bond prices?
|
Interest rates have an inverse effect on bond prices. Typically, as interest rates go up, existing bonds that were previously sold at a certain maturity and coupon become devalued, because investors can get a newer bond with higher interest payments. Similarly, during inflation, as the Fed tries to control inflation by raising interest rates, this has the same inverse effect on bond prices. It is important to note that not all bonds react to interest rates in the same way. For example, short-term bonds may feel the impact almost immediately, while long-term bonds may see a more gradual but often more pronounced price change. Note also that there are Treasury Inflation-Protected Securities, a type of bond whose principal value is adjusted based on inflation.
|
|
null | false
| null |
In what ways can having children improve your quality of life?
|
This is my own opinion, and completely anecdotal, but having children can help improve your quality of life in several ways. Children can provide parents with a sense of purpose. Parents are responsible for the well-being of their children, and this obligation in turn forces parents to become less self-centered and focus on something other than themselves. The relationship parents develop with their children is also unlike any other bond. Children represent a piece of you and are completely dependent on you for everything. That dependence, coupled with the love they have for you, can give your life new meaning and joy.
|
|
null | false
| 322
|
NLG is the process of automatically generating coherent NL text from non-linguistic data BIBREF0. Recently, the field has seen an increased interest in the development of NLG systems focusing on verbalizing resources from SW data BIBREF1. The SW aims to make information available on the Web easier to process for machines and humans. However, the languages underlying this vision, i.e., RDF, SPARQL and OWL, are rather difficult to understand for non-expert users. For example, while the meaning of the OWL class expression Class: Professor SubClassOf: worksAt SOME University is obvious to every SW expert, this expression (“Every professor works at a university”) is rather difficult to fathom for lay persons.
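A toy sketch of such a verbalization, not the LD2NL implementation itself, maps an OWL subclass axiom with an existential restriction to an English sentence via a simple template; the function name and labels are hypothetical:

```python
# Toy template-based verbalization of: Class: <sub_class> SubClassOf: <property> SOME <filler>
def verbalize_subclass_some(sub_class, property_label, filler):
    return f"Every {sub_class.lower()} {property_label} a {filler.lower()}."

print(verbalize_subclass_some("Professor", "works at", "University"))
# -> Every professor works at a university.
```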
Previous works such as SPARQL2NL BIBREF2 and SPARTIQULATION BIBREF3 have already shown the usefulness of the verbalization of SPARQL and RDF in areas such as question answering BIBREF4 and the explanation of the output of systems based on SW technologies BIBREF5. However, other SW languages, such as OWL, are rarely investigated.
In this paper, we present an open-source holistic NLG framework for the SW, named LD2NL, which facilitates the verbalization of the three key languages of the SW, i.e., RDF, OWL, and SPARQL, into NL. Our framework is based on a bottom-up paradigm for verbalizing SW data. Additionally, LD2NL builds upon SPARQL2NL as it is open-source and the paradigm it follows can be reused and ported to RDF and OWL. Thus, LD2NL is capable of generating either a single sentence or a summary of a given resource, rule, or query. To validate our framework, we evaluated LD2NL using 66 experts in NLP and SW as well as 20 non-experts who were lay users or non-users of SW. The results suggest that LD2NL generates texts which can be easily understood by humans. The version of LD2NL used in this paper, as well as all experimental results, will be made publicly available.
To validate our framework, we evaluated LD2NL using 66 experts in Natural Language Processing (NLP) and SW as well as 20 non-experts who were lay users or non-users of SW.
|
How was LD2NL evaluated?
|
The authors evaluated LD2NL using 66 experts in Natural Language Processing (NLP) and SW, as well as 20 non-experts who were lay users or non-users of SW.
|
1906.00378
| false
| null |
We carry out experiments on multiple language pairs including German-English, French-English, and Japanese-English. The experimental results show that the proposed multi-lingual caption model not only achieves better caption performance than independent mono-lingual models for data-scarce languages, but can also induce the two types of features, linguistic and visual, for different languages in joint spaces. Our proposed method consistently outperforms previous state-of-the-art vision-based bilingual word induction approaches on different languages. The contributions of this paper are as follows:
We carry out experiments on multiple language pairs including German-English, French-English, and Japanese-English.
|
Which languages are used in the multi-lingual caption model?
|
The answers are shown as follows:
* German-English, French-English, and Japanese-English
|