| paper_id (string, length 10) | yes_no (bool, 2 classes) | paper_index (int64, 0 to 519) | evidence (string, 0 to 37.7k chars) | question (string, 4 to 11.7k chars) | answer (string, 1 to 26k chars) |
|---|---|---|---|---|---|
1810.00663
| false
| null |
We compare the performance of translation approaches based on four metrics:
Exact Match (EM): As in BIBREF20, EM is 1 if a predicted plan matches the ground truth exactly; otherwise it is 0.
F1: The harmonic mean of precision and recall over the whole test set BIBREF26.
Edit Distance (ED): The minimum number of insertion, deletion, or swap operations required to transform a predicted sequence of behaviors into the ground-truth sequence BIBREF27.
Goal Match (GM): GM is 1 if a predicted plan reaches the ground-truth destination (even if the full sequence of behaviors does not exactly match the ground truth); otherwise, GM is 0.
We compare the performance of translation approaches based on four metrics:
Exact Match (EM): As in BIBREF20, EM is 1 if a predicted plan matches the ground truth exactly; otherwise it is 0.
F1: The harmonic mean of precision and recall over the whole test set BIBREF26.
Edit Distance (ED): The minimum number of insertion, deletion, or swap operations required to transform a predicted sequence of behaviors into the ground-truth sequence BIBREF27.
Goal Match (GM): GM is 1 if a predicted plan reaches the ground-truth destination (even if the full sequence of behaviors does not exactly match the ground truth); otherwise, GM is 0.
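The edit-distance metric above can be sketched as a dynamic program over insertions, deletions, and adjacent swaps (a sketch only; BIBREF27's exact operation set and costs may differ):

```python
def edit_distance(pred, gold):
    """Minimum number of insertions, deletions, or adjacent swaps needed to
    turn the predicted behavior sequence `pred` into the ground truth `gold`."""
    m, n = len(pred), len(gold)
    INF = float("inf")
    dp = [[INF] * (n + 1) for _ in range(m + 1)]
    dp[0][0] = 0
    for i in range(m + 1):
        for j in range(n + 1):
            if i > 0:
                dp[i][j] = min(dp[i][j], dp[i - 1][j] + 1)      # deletion
            if j > 0:
                dp[i][j] = min(dp[i][j], dp[i][j - 1] + 1)      # insertion
            if i > 0 and j > 0 and pred[i - 1] == gold[j - 1]:
                dp[i][j] = min(dp[i][j], dp[i - 1][j - 1])      # match (free)
            if (i > 1 and j > 1 and pred[i - 1] == gold[j - 2]
                    and pred[i - 2] == gold[j - 1]):
                dp[i][j] = min(dp[i][j], dp[i - 2][j - 2] + 1)  # adjacent swap
    return dp[m][n]
```

For example, transforming the behavior sequence ["a", "b", "c"] into ["a", "c", "b"] needs a single swap, so its edit distance is 1.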
|
What evaluation metrics are used?
|
exact match, f1 score, edit distance and goal match
|
null | false
| null |
Chennai (/ˈtʃɛnaɪ/, Tamil: [ˈt͡ɕenːaɪ̯]), formerly known as Madras, is the capital city of Tamil Nadu, the southernmost Indian state. It is the state's primate city both in area and population and is located on the Coromandel Coast of the Bay of Bengal. According to the 2011 Indian census, Chennai is the sixth-most populous city in India and forms the fourth-most populous urban agglomeration. The Greater Chennai Corporation is the civic body responsible for the city; it is the oldest city corporation of India, established in 1688—the second oldest in the world after London.
Historically, Chennai and its surroundings were part of the Chola, Pandya, Pallava and Vijayanagara kingdoms for many centuries. The coastal land, which then contained the fishing village Madrasapattinam, was purchased by the British East India Company from the Nayak ruler Damarla Chennapa Nayaka in the 17th century. The British garrison established the Madras city and port and built Fort St. George—the first British fortress in India—which the French briefly captured in 1746. Madras later became the winter capital of the Madras Presidency, a colonial province of the British Raj in the Indian subcontinent. After India gained its independence in 1947, Madras continued as the capital city of the Madras State and present-day Tamil Nadu. The city was officially renamed Chennai in 1996.
The city is coterminous with Chennai district, which together with the adjoining suburbs constitutes the Chennai Metropolitan Area, the 36th-largest urban area in the world by population and one of the largest metropolitan economies of India. As the traditional and de facto gateway of South India, Chennai is among the most-visited Indian cities by foreigners. It was ranked the 43rd most-visited city in the world in 2015 and 36th in 2019. The Quality of Living Survey rates it as India's safest city. Chennai attracts 45 percent of health tourists visiting India, and 30 to 40 percent of domestic health tourists; as such, it is termed "India's health capital". Chennai has the fifth-largest urban economy and had the third-largest expatriate population in India.
|
What were all the earlier names for the city presently known as Chennai?
|
Chennai traces its roots back to a fishing village called Madrasapattinam. The British officially named it Madras when they purchased the land in the 17th century. The name of the city was changed to Chennai in 1996.
|
null | false
| null |
Which universe are we living in today, and how large is the universe?
|
Wow, that's a tough one. According to the latest research, the ordinary matter we can observe makes up only about 5% of the universe's total content; the rest is dark matter and dark energy. The observable universe we live in is about 93 billion light-years in diameter.
|
|
null | false
| null |
Which sport is the most popular worldwide?
|
Football (called soccer in the US) is the most popular sport worldwide. The FIFA World Cup is watched by millions of people around the world.
|
|
null | false
| 78
|
Social media platforms have made the spreading of fake news easier and faster, and have extended its reach to a wider audience. Social media also offer anonymity to authors, which opens the door for many suspicious individuals or organizations to exploit these platforms. Recently, the spread of fake news and rumors over the web and social media has increased BIBREF0. Fake news on social media varies in its intention to mislead. Some of this news is spread with the intention of being ironic or of delivering the news in an ironic way (satirical news). Other content, such as propaganda, hoaxes, and clickbait, is spread to mislead the audience or to manipulate their opinions. In the case of Twitter, suspicious-news annotation should be done at the tweet rather than the account level, since some accounts mix fake with real news. However, these annotations are extremely costly and time consuming due to the high volume of available tweets. Consequently, a first step in this direction, e.g., as a pre-filtering step, can be viewed as the task of detecting fake news at the account level.
The main obstacle for detecting suspicious Twitter accounts is due to the behavior of mixing some real news with the misleading ones. Consequently, we investigate ways to detect suspicious accounts by considering their tweets in groups (chunks). Our hypothesis is that suspicious accounts have a unique pattern in posting tweet sequences. Since their intention is to mislead, the way they transition from one set of tweets to the next has a hidden signature, biased by their intentions. Therefore, reading these tweets in chunks has the potential to improve the detection of the fake news accounts.
In this work, we investigate the problem of discriminating between factual and non-factual accounts in Twitter. To this end, we collect a large dataset of tweets using a list of propaganda, hoax and clickbait accounts and compare different versions of sequential chunk-based approaches using a variety of feature sets against several baselines. Several approaches have been proposed for news verification, whether in social media (rumors detection) BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, or in news claims BIBREF5, BIBREF6, BIBREF7, BIBREF8. The main orientation in the previous works is to verify the textual claims/tweets but not their sources. To the best of our knowledge, this is the first work aiming to detect factuality at the account level, and especially from a textual perspective. Our contributions are:
We propose an approach to detect non-factual Twitter accounts by treating post streams as a sequence of tweets' chunks. We test several semantic and dictionary-based features together with a neural sequential approach, and apply an ablation test to investigate their contribution.
We benchmark our approach against other approaches that discard the chronological order of the tweets or read the tweets individually. The results show that our approach produces superior results at detecting non-factual accounts.
We test several semantic and dictionary-based features together with a neural sequential approach, and apply an ablation test to investigate their contribution.
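The chunk-based reading of an account's timeline can be sketched as follows (the chunk size here is an illustrative assumption, not the paper's setting):

```python
def tweet_chunks(tweets, chunk_size=20):
    """Split a chronologically ordered list of tweets into consecutive
    fixed-size chunks, preserving posting order within and across chunks."""
    return [tweets[i:i + chunk_size] for i in range(0, len(tweets), chunk_size)]

# Each chunk can then be mapped to a feature vector, and the resulting
# sequence of vectors fed to a sequential (e.g., recurrent) classifier,
# so that transitions between chunks are modeled rather than discarded.
```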
|
Why is the ablation test applied?
|
To investigate the contribution of the several semantic and dictionary-based features.
|
null | false
| 183
|
Even though machine translation has improved considerably with the advent of neural machine translation (NMT) BIBREF0 , BIBREF1 , the translation of pronouns remains a major issue. They are notoriously hard to translate since they often require context outside the current sentence.
As an example, consider the sentences in Figure FIGREF1 . In both languages, there is a pronoun in the second sentence that refers to the European Central Bank. When the second sentence is translated from English to German, the translation of the pronoun it is ambiguous. This ambiguity can only be resolved with context awareness: if a translation system has access to the previous English sentence, the previous German translation, or both, it can determine the antecedent the pronoun refers to. In this German sentence, the antecedent Europäische Zentralbank dictates the feminine gender of the pronoun sie.
It is unfortunate, then, that current NMT systems generally operate on the sentence level BIBREF2 , BIBREF3 , BIBREF4 . Documents are translated sentence-by-sentence for practical reasons, such as line-based processing in a pipeline and reduced computational complexity. Furthermore, improvements of larger-context models over baselines in terms of document-level metrics such as BLEU or RIBES have been moderate, so that their computational overhead does not seem justified, and so that it is hard to develop more effective context-aware architectures and empirically validate them.
To address this issue, we present an alternative way of evaluating larger-context models on a test set that makes it possible to specifically measure a model's capability to correctly translate pronouns. The test suite consists of pairs of source and target sentences, in combination with contrastive translation variants (for evaluation by model scoring) and additional linguistic and contextual information (for further analysis). The resource is freely available. Additionally, we evaluate several context-aware models that have recently been proposed in the literature on this test set, and extend existing models with parameter tying.
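Evaluation by model scoring on such a test suite can be sketched as follows; `score` stands for a hypothetical function returning a model's log-probability for a translation, and the data layout is assumed for illustration only:

```python
def contrastive_accuracy(examples, score):
    """Fraction of examples for which the model assigns a higher score to the
    correct translation than to every contrastive (wrong-pronoun) variant.

    `examples` holds (source, reference, contrastive_variants) triples;
    `score(source, translation)` is any model scoring function.
    """
    hits = 0
    for source, reference, variants in examples:
        if all(score(source, reference) > score(source, v) for v in variants):
            hits += 1
    return hits / len(examples)
```

Because only relative scores matter, this style of evaluation needs no decoding and isolates the pronoun decision from overall translation quality.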
The main contributions of our paper are:
Section SECREF2 explains how our paper relates to existing work on context-aware models and the evaluation of pronoun translation. Section SECREF3 describes our test suite. The context-aware models we use in our experiments are detailed in Section SECREF4 . We discuss our experiments in Section SECREF5 and the results in Section SECREF6 .
The main contributions of our paper are: • We present a large-scale test set to evaluate the accuracy with which NMT models translate the English pronoun it to its German counterparts es, sie and er. • We evaluate several context-aware systems and show how targeted, contrastive evaluation is an effective tool to measure improvement in pronoun translation. • We empirically demonstrate the effectiveness of parameter tying in multi-encoder context-aware models.
|
What are the contributions of this paper?
|
• The authors present a large-scale test set to evaluate the accuracy with which NMT models translate the English pronoun it to its German counterparts es, sie and er. • They evaluate several context-aware systems and show how targeted, contrastive evaluation is an effective tool to measure improvement in pronoun translation. • They empirically demonstrate the effectiveness of parameter tying in multi-encoder context-aware models.
|
null | false
| null |
How does grain orientation affect the dimensional movement of wood?
|
Wood expands and shrinks depending on its moisture content. The relative amount of change differs for each of the three main grain directions: longitudinal, radial, and tangential.

Longitudinal is the direction in which a tree or its branches grow lengthwise; in most lumber, this is its length. The change in this direction is very small, which is why a wooden ruler keeps its measurements when ambient moisture changes. In almost all uses, longitudinal expansion and contraction are small enough not to be of concern.

Radial and tangential expansion and contraction are significant, and different enough from each other that they affect the dimensional stability of a piece of wood. The tangential direction runs tangential to the growth rings in the wood, and the radial direction runs perpendicular to them. The relative amount of change in each direction is specific to the wood species, but regardless of species, the change is greater in the tangential direction than in the radial direction. The T/R ratio, the ratio of tangential to radial change, correlates with the dimensional stability of a wood species. If you visualize a piece of lumber that is milled square and comes from a part of a log where the rings run almost straight across in one direction (referred to as vertical grain or quarter sawn), then as the moisture content of the wood changes to come into equilibrium with its environment, the square will move toward a diamond shape. The closer the T/R ratio is to 1, the less the square dimension will distort.

Dimensional stability and the direction of wood movement are important to plan for when cutting joinery. If two joined pieces are aligned so that they will not move in the same direction relative to the joint, the joint will become unstable when the ambient humidity changes, which can cause it to fail.
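The tangential-versus-radial difference can be put into rough numbers with the common shrinkage approximation; the shrinkage coefficients and fibre saturation point below are illustrative assumptions, not measured values for any particular species:

```python
def movement(width, total_shrink_pct, mc_change, fsp=28.0):
    """Approximate dimensional change for a board below the fibre saturation
    point (FSP): width * (total shrinkage % / 100) * (moisture change / FSP)."""
    return width * (total_shrink_pct / 100.0) * (mc_change / fsp)

# Hypothetical 10-inch-wide board and a 4% moisture-content change, with
# assumed total shrinkage coefficients (tangential 8.6%, radial 4.0%):
tangential = movement(10, 8.6, 4)  # flat-sawn face width
radial = movement(10, 4.0, 4)      # quarter-sawn face width
# The tangential change is larger, reflecting a T/R ratio above 2, which is
# why quarter-sawn (vertical-grain) boards stay flatter as humidity changes.
```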
|
|
null | false
| null |
French Open
The trophies have been awarded to the winners since 1953 and are manufactured by Mellerio dits Meller, a famous Parisian jewelry house. They are all made of pure silver with finely etched decorations on their side. Each new singles winner gets his or her name written on the base of the trophy. Winners receive custom-made pure silver replicas of the trophies they have won. They are usually presented by the President of the French Tennis Federation (FFT).
The trophy awarded to the winner of the men's singles is called the Coupe des Mousquetaires (The Musketeers' Cup). It is named in honor of the "Four Musketeers". The trophy weighs 14 kg, is 40 cm high and 19 cm wide. The current design was created in 1981 by Mellerio dits Meller. Each winner gets a smaller-size replica and the original remains the property of the FFT at all times.
The trophy awarded to the winner of the women's singles is called the Coupe Suzanne Lenglen (Suzanne Lenglen Cup) since 1979. The current cup was awarded for the first time in 1986. It is, with a few details, a replica of a cup offered at the time by the city of Nice to Suzanne Lenglen. This trophy, donated by Suzanne Lenglen's family to the Musée National du Sport, was awarded between 1979 and 1985 to every winner until the FFT made a copy. Each winner receives a smaller-size replica and the original remains property of the FFT at all times.
|
Extract the names of the trophies given to the French Open singles champions and provide them in a bulleted list.
|
- Coupe des Mousquetaires (The Musketeers' Cup)
- Coupe Suzanne Lenglen (Suzanne Lenglen Cup)
|
null | false
| null |
Explain how sampling bias in Voice Of Customer research could adversely affect a marketing analysis of customers' post-purchase experience with a product.
|
Recently I bought a TV from Amazon after extensive research and analysis. The TV manufacturer promptly followed up two weeks after the purchase. The post-purchase experience survey had questions like the following:
· Did you buy it from a store/online?
· Are you satisfied with your order?
· Which store did you visit?
· Were the self-assembly instructions easy to follow?
· What other items would you consider purchasing from our brand?
The above questions from the TV manufacturer were too generic, and they missed the mark in identifying my persona and my buying criteria. I bought the TV for my basement as a secondary TV, and I picked a TV brand with standard features to be cost-effective. Below are a few sampling biases from the survey:
Buyer’s criteria
The survey didn't identify whether the TV was for primary usage. I purchased this as a secondary TV, so my expectations were low, and I'm satisfied with the TV. I wouldn't have picked this brand if this were my primary TV purchase. This is a classic example of sampling bias: the survey didn't identify my buying criteria, resulting in skewed results.
Audience segmentation
As a millennial, my primary purchase channel is online, and I rely heavily on YouTube reviews and blogs for my research. The TV survey was aimed at in-store purchases and failed to capture the online experience correctly. This type of sampling bias results in incorrect market analysis.
|
|
null | false
| null |
Dionysus (Διόνυσος, Diónusos)/Bacchus (Βάκχος, Bákkhos)
God of wine, fruitfulness, parties, festivals, madness, chaos, drunkenness, vegetation, ecstasy, and the theater. He is the twice-born son of Zeus and Semele, in that Zeus snatched him from his mother's womb, stitched Dionysus into his own thigh, and carried him until he was ready to be born. In art he is depicted as either an older bearded god (particularly before 430 BC) or an effeminate, long-haired youth (particularly after 430 BC). His attributes include the thyrsus, a drinking cup, the grape vine, and a crown of ivy. He is often in the company of his thiasos, a group of attendants including satyrs, maenads, and his old tutor Silenus. The consort of Dionysus was Ariadne. It was once held that Dionysus was a later addition to the Greek pantheon, but the discovery of Linear B tablets confirms his status as a deity from an early period. Bacchus was another name for him in Greek, and came into common usage among the Romans. His sacred animals include dolphins, serpents, tigers, and donkeys.
|
From the passage identify the attributes of Dionysus. Display the results in a comma separated format.
|
the thyrsus, a drinking cup, the grape vine, a crown of ivy
|
null | false
| null |
Identify which animal is domesticated or wild: Cat, Bubal hartebeest
|
Cat is domesticated, Bubal hartebeest is wild
|
|
null | false
| null |
The Catholic University of America (CUA) is a private Roman Catholic research university in Washington, D.C. It is a pontifical university of the Catholic Church in the United States and the only institution of higher education founded by U.S. Catholic bishops. Established in 1887 as a graduate and research center following approval by Pope Leo XIII, the university began offering undergraduate education in 1904. It is classified among "R2: Doctoral Universities – High research activity".
Its campus is adjacent to the Brookland neighborhood, known as "Little Rome", which contains 60 Catholic institutions, including Trinity Washington University, the Dominican House of Studies, and Archbishop Carroll High School, as well as the Basilica of the National Shrine of the Immaculate Conception.
CUA's programs emphasize the liberal arts, professional education, and personal development. The school stays closely connected with the Catholic Church and Catholic organizations. The residential U.S. cardinals put on the American Cardinals Dinner each year to raise scholarship funds. The university also has a long history of working with the Knights of Columbus; its law school and basilica have dedications to the involvement and support of the Knights.
|
Given this reference text about the Catholic University of America, tell me when it was established, where it is located, and who founded it.
|
The Catholic University was founded in 1887 by the US Catholic Bishops, and is located in Washington, DC.
|
null | false
| null |
What is alexa?
|
Alexa is a virtual assistant technology that people can purchase for use in their homes. It was created by Amazon and first released in 2014, and it responds to spoken questions in a human-like way. It can order items from Amazon and can also be used for home automation. Many third-party devices can connect to Alexa to extend its functionality. Alexa also has fun entertainment features like telling stories, singing, and telling jokes.
|
|
null | false
| null |
The Dom Tower (Cathedral Tower, Dutch: Domtoren) of Utrecht is the tallest church tower in the Netherlands, at 112.5 metres (368 feet) in height. It is considered the symbol of Utrecht. The tower was part of St. Martin's Cathedral, also known as the Dom Church, and was built between 1321 and 1382, to a design by John of Hainaut. The cathedral was never fully completed due to lack of money. Since the unfinished nave collapsed in 1674, the Dom tower has been a freestanding tower. The tower stands at the spot where the city of Utrecht originated almost 2,000 years ago.
Design and construction
The Dom Tower was one of the largest towers constructed in Europe during the fourteenth century, and it was planned to show the power of the church in Utrecht.[citation needed] Its construction led preacher Geert Groote to protest against the vanity of such an immense project, suggesting it was too tall, too expensive and all but aesthetic.
The tower consists of two square blocks, topped by a much lighter lantern. One of the most striking features is the absence of visible buttresses. Its particular shape and original architecture had a large influence on many other towers in the Netherlands, including the Martinitoren in Groningen.[citation needed] Upon completion in 1382 the tower stood 109 metres tall. However the height was increased during the restorations in 1910, to its present height of 112.5 metres.
The Dom tower was a multifunctional building. In addition to being a belfry, it contained a private chapel of the Bishop of Utrecht on the first floor. It also served as a watchtower; the tower guard was housed on the second floor of the lower square block.
|
What is the height of the Domtoren in Utrecht?
|
The Dom Tower (Cathedral Tower, Dutch: Domtoren) of Utrecht is the tallest church tower in the Netherlands, at 112.5 metres (368 feet) in height. It was one of the largest towers constructed in Europe during the fourteenth century, and it was planned to show the power of the church in Utrecht.
|
null | false
| null |
Those who think that intelligent extraterrestrial life is (nearly) impossible argue that the conditions needed for the evolution of life—or at least the evolution of biological complexity—are rare or even unique to Earth. Under this assumption, called the rare Earth hypothesis, a rejection of the mediocrity principle, complex multicellular life is regarded as exceedingly unusual.
The rare Earth hypothesis argues that the evolution of biological complexity requires a host of fortuitous circumstances: a galactic habitable zone; a star and planet(s) having the requisite conditions, such as a long-lived continuous habitable zone; the advantage of a giant guardian planet like Jupiter and of a large moon; conditions needed to ensure the planet has a magnetosphere and plate tectonics; the right chemistry of the lithosphere, atmosphere, and oceans; and the role of "evolutionary pumps" such as massive glaciation and rare bolide impacts. And perhaps most importantly, advanced life needs whatever it was that led to the transition of (some) prokaryotic cells to eukaryotic cells, sexual reproduction, and the Cambrian explosion.
In his book Wonderful Life (1989), Stephen Jay Gould suggested that if the "tape of life" were rewound to the time of the Cambrian explosion, and one or two tweaks made, human beings most probably never would have evolved. Other thinkers such as Fontana, Buss, and Kauffman have written about the self-organizing properties of life.
|
Please give a simple list of reasons that extraterrestrial intelligence may not exist or is very rare
|
The conditions that allowed intelligent life to evolve on planet Earth may be exceedingly rare in the Universe. The speculated reasons include:
1. The solar system is in a safe location within its galaxy.
2. The location of the earth relative to the sun and other planets in the solar system gives it protection from extreme environmental conditions that would prevent life from occurring.
3. The steps that led to advanced life forms (prokaryotic cells to eukaryotic cells, sexual reproduction and the Cambrian explosion) may be very unlikely to occur.
|
null | false
| null |
What is Vue.js?
|
Vue.js is a JavaScript framework for building user interfaces. It builds on top of standard HTML, CSS, and JavaScript and provides a declarative and component-based programming model that helps you efficiently develop user interfaces, be they simple or complex.
|
|
null | false
| 349
|
We summarize in Table TABREF8 an incomplete list of published Image Captioning datasets, in English and in other languages. Several image caption datasets for English have been constructed, the representative examples are Flickr3k BIBREF5, BIBREF6; Flickr 30k BIBREF7 – an extending of Flickr3k and Microsoft COCO (Microsoft Common in Objects in Context) BIBREF8.
Besides, several image datasets with non-English captions have been developed. Depending on their applications, the target languages of these datasets vary, including German and French for image retrieval, Japanese for cross-lingual document retrieval BIBREF9 and image captioning BIBREF10, BIBREF3, Chinese for image tagging, captioning and retrieval BIBREF4. Each of these datasets is built on top of an existing English dataset, with MS-COCO as the most popular choice.
Our dataset UIT-ViIC is constructed using images from Microsoft COCO (MS-COCO). The MS-COCO dataset includes more than 150,000 images, divided into three splits: train, validate, and test. For each image, five captions are provided independently via Amazon's Mechanical Turk. MS-COCO is the most popular dataset for Image Captioning thanks to the MS-COCO challenge (2015), and it has a powerful evaluation server for candidates.
Regarding Vietnamese language processing, there are quite a number of research works on other tasks such as parsing, part-of-speech tagging, named entity recognition, sentiment analysis, and question answering. However, to the best of our knowledge, there are no research publications on image captioning for Vietnamese. Therefore, we decided to build a new corpus of Vietnamese image captioning for the Image Captioning research community and to evaluate state-of-the-art models on it. In particular, we validate and compare the results by BLEU BIBREF11, ROUGE BIBREF12 and CIDEr BIBREF13 metrics between the Neural Image Captioning (NIC) model BIBREF14 and the Image Captioning model from the Pytorch-tutorial BIBREF15 by Yunjey on our corpus, as pioneering results.
In particular, we validate and compare the results by BLEU, ROUGE and CIDEr metrics between Neural Image Captioning (NIC) model, Image Captioning model from the Pytorch-tutorial by Yunjey on our corpus as the pioneering results.
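As a flavor of how such caption metrics work, the core of BLEU is a clipped (modified) n-gram precision, sketched below; full BLEU combines n = 1..4 with a brevity penalty, and ROUGE and CIDEr use different formulations:

```python
from collections import Counter

def modified_precision(candidate, references, n=1):
    """Clipped n-gram precision: each candidate n-gram count is capped by the
    maximum count of that n-gram in any single reference caption."""
    def ngrams(tokens):
        return Counter(zip(*(tokens[i:] for i in range(n))))

    cand_counts = ngrams(candidate)
    max_ref_counts = Counter()
    for ref in references:
        max_ref_counts |= ngrams(ref)  # element-wise maximum over references
    clipped = sum(min(c, max_ref_counts[g]) for g, c in cand_counts.items())
    return clipped / max(sum(cand_counts.values()), 1)
```

For the classic degenerate candidate ["the", "the", "the"] against the reference ["the", "cat"], the clipping caps the count of "the" at 1, giving a unigram precision of 1/3 instead of 1.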
|
What is used to validate and compare the results of the Neural Image Captioning (NIC) model?
|
BLEU, ROUGE and CIDEr metrics are used to validate and compare the results.
|
null | false
| null |
Share the best 7 European Cities for Nightlife
|
1. Berlin
2. Amsterdam
3. Prague
4. Budapest
5. Dublin
6. London
7. Barcelona
|
|
null | false
| null |
Who would win between a boxer and an MMA fighter?
|
In a fight with virtually no rules, or under MMA rules, the MMA fighter will clearly have the advantage over the boxer, because MMA fighters have a wider variety of skills; for instance, they can use both their hands and legs. However, under boxing rules, the same principle applies in reverse: the boxer would definitely have the upper hand. A good example is the Floyd Mayweather vs. Conor McGregor fight, which was held under boxing rules and saw Mayweather emerge as the winner.
|
|
null | false
| null |
Fatemeh Pahlavi was born in Tehran on 30 October 1928. She was the tenth child of Reza Shah and his fourth and last wife, Esmat Dowlatshahi. Her mother was from the Qajar dynasty and married Reza Shah in 1923. Fatemeh was the full-sister of Abdul Reza Pahlavi, Ahmad Reza Pahlavi, Mahmoud Reza Pahlavi and Hamid Reza Pahlavi.
|
How many full siblings did Fatemeh Pahlavi have?
|
4 siblings: Abdul Reza Pahlavi, Ahmad Reza Pahlavi, Mahmoud Reza Pahlavi and Hamid Reza Pahlavi.
|
null | false
| null |
Harvard University is a private Ivy League research university in Cambridge, Massachusetts. Founded in 1636 as Harvard College and named for its first benefactor, the Puritan clergyman John Harvard, it is the oldest institution of higher learning in the United States and is widely considered to be one of the most prestigious universities in the world.
|
What is the oldest higher learning institution in the United States?
|
The oldest higher learning institution in the United States is Harvard University, which is widely considered to be one of the most prestigious universities in the world.
|
null | false
| 38
|
We consider the dataset consisting of the entire collection of articles of the Wikipedia Medicine Portal, updated at the end of 2014. Wikipedia articles are written in the MediaWiki markup language, an HTML-like language. Among the structural elements of a page that differ from standard HTML pages are: i) internal links, i.e., links to other Wikipedia pages (as opposed to links to external resources); ii) categories, which represent the MediaWiki categories a page belongs to (they are encoded in the part of the text within the MediaWiki "categories" tag in the page source); and iii) informative boxes, so-called "infoboxes", which summarize in a structured manner some peculiar pieces of information related to the topic of the article. The category values for the articles in the medical portal span the ones listed at https://en.wikipedia.org/wiki/Portal:Medicine. Examples of categories, which appear at the bottom of each Wikipedia page, are in Fig. 1.
Infoboxes of the medical portal feature medical content and standard coding. As an example, Fig. 2 shows the infobox in the Alzheimer's disease page of the portal. The infobox contains explanatory figures and text denoting peculiar characteristics of the disease and the value for the standard code of such disease (ICD9, as for the international classification of the disease).
Thanks to WikiProject Medicine, the dataset of articles we collected from the Wikipedia Medicine Portal has been manually labeled into seven quality classes. They are ordered as Stub, Start, C, B, A, Good Article (GA), Featured Article (FA). The Featured and Good article classes are the highest ones: to have those labels, an article requires a community consensus and an official review by selected editors, while the other labels can be achieved with reviews from a larger, even controlled, set of editors. Actually, none of the articles in the dataset is labeled as A, thus, in the following, we do not consider that class, restricting the investigation to six classes.
At the date of our study, we were able to gather 24,362 rated documents. Remarkably, only a small percentage of them (1%) is labeled as GA and FA. Indeed, the distribution of the articles among the classes is highly skewed. There are very few (201) articles for the highest quality classes (FA and GA), while the vast majority (19,108) belongs to the lowest quality ones (Stub and Start). This holds not only for the medical portal. Indeed, it is common in all Wikipedia, where, on average, only one article in every thousand is a Featured one.
In Section "Experiments and results" , we will adopt a set of machine-learning classifiers to automatically label the articles into the quality classes. Dealing with imbalanced classes is a common situation in many real applications of classification learning: healthy patients over the population, fraudulent actions over daily genuine transactions, and so on. Without any countermeasure, common classifiers tend to correctly identify only articles belonging to the majority classes, clearly leading to severe mis-classification of the minority classes, since typical learning algorithms strive to maximize the overall prediction accuracy. To reduce the disequilibrium among the size of the classes, we have first randomly sampled the articles belonging to the most populated classes. Then, we have performed some further elaboration, as shown in the following.
Many studies have been conducted to improve learning algorithms accuracy in presence of imbalanced data BIBREF8 . For the current work, we have considered one of the most popular approaches, namely the Synthetic Sampling with Data Generation, detailed in BIBREF9 . It consists in generating synthetic instances from the minority classes, to balance the overall dataset. The approach has been broadly applied to problems relying on NLP features, see, e.g., BIBREF10 . In our case, we resampled the input data set by applying the Synthetic Minority Oversampling TEchnique (SMOTE), with percentage 40% for GA and 180%, for FA. In particular, the steps to oversample are the following:
Table 1 shows the number of articles in the dataset, divided per class, as well as the random samples we have considered for our study. The experiments presented in Section "Experiments and results" are based on the articles of the right-hand column in the table.
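The SMOTE resampling described above can be illustrated with a minimal numpy sketch: a synthetic sample is an interpolation between a minority-class sample and one of its nearest minority neighbors. The feature matrix, neighbor count, and oversampling amount below are illustrative, not the study's exact settings:

```python
import numpy as np

def smote_sketch(X_min, n_synthetic, k=5, rng=None):
    """Generate synthetic minority samples by interpolating between
    a randomly chosen sample and one of its k nearest minority neighbors."""
    rng = np.random.default_rng(rng)
    synthetic = []
    for _ in range(n_synthetic):
        i = rng.integers(len(X_min))
        # distances from sample i to every other minority sample
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        neighbors = np.argsort(d)[1:k + 1]   # skip the sample itself
        j = rng.choice(neighbors)
        gap = rng.random()                   # interpolation factor in [0, 1)
        synthetic.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.asarray(synthetic)

# Hypothetical minority class (e.g., FA articles): 10 samples, 4 features.
# 180% oversampling, as in the text for FA, would mean 18 synthetic samples.
X_fa = np.random.default_rng(0).normal(size=(10, 4))
X_new = smote_sketch(X_fa, n_synthetic=18, k=5, rng=0)
print(X_new.shape)  # (18, 4)
```

Because each synthetic point is a convex combination of two real samples, the new data stays within the per-feature range of the original minority class.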
They are ordered as Stub, Start, C, B, A, Good Article (GA), Featured Article (FA).
|
What are the quality classes of articles in the dataset?
|
They are ordered as Stub, Start, C, B, A, Good Article (GA), Featured Article (FA).
|
null | false
| null |
How should I think about ways to arrange a desk in a home office
|
There are several ways to arrange a desk in a home office. A few guiding principles (based on the space and how you'll use your desk) can help guide you along the way.
* If you're frequently on camera for video calls you'll want to consider what people will see in your background. Avoid clutter and too much distraction, but also feel free to use that part of the office to express yourself professionally. An interesting painting or a few professional accolades can be a good conversation starters.
* Bright windows can be great for creativity, but having one directly behind you on video calls can give you a washed-out look that's not ideal. Ideally, a window would be in front of you, not only to cast your face in a good light but also to give you a nice view as you work. If you can't position a window in front of your desk, you could always invest in good artificial lighting (e.g., a ring light) to help with video calls.
* Avoid positioning your desk so your back faces the office entrance. If you have line of sight to all entrances and exits, you'll likely feel more in control while working, which can make you more productive.
* Position your desk so you have easy access to everything you need for work. Frequently used reference materials, papers, writing supplies, etc. should all be within arm's length so as not to disrupt your "flow" while working.
|
|
null | false
| null |
Burg Lockenhaus (Hungarian Léka) is a castle and medieval fortress in the Güns Valley in the southeastern part of Lockenhaus, in Burgenland, eastern Austria. Burg Lockenhaus is 368 metres (1,207 ft) above sea level. The castle was built in Romanesque and Gothic architectural styles around 1200, and was initially called "Leuca" or Léka. It is part of the Naturpark Geschriebenstein.
Geography
The castle is in the Güns Valley, set amidst a hilly terrain in eastern Austria, near the Hungarian border towards Kőszeg. It is in the southeastern part of the town of Lockenhaus, roughly 120 kilometres (75 mi) south of Vienna and roughly the same distance by road northeast of Graz. It was built on the spur of a hill, which is now called the Castle Hill. It can be reached by express bus service along the Eisenstadter Bundestrasse. A lake lies immediately to the east of the castle.
|
Where is the Burg Lockenhaus located
|
Burg Lockenhaus (Hungarian Léka) is a castle and medieval fortress in the Güns Valley in the southeastern part of Lockenhaus, in Burgenland, eastern Austria. Burg Lockenhaus is 368 metres (1,207 ft) above sea level. The castle was built in Romanesque and Gothic architectural styles around 1200, and was initially called "Leuca" or Léka. It is part of the Naturpark Geschriebenstein.
|
null | false
| null |
Which is a species of fish? Dolphin or Dolphin fish
|
Dolphin fish
|
|
null | false
| null |
Q: What do I do if I get a rash or other reaction to hand sanitizer? What do I do if I have been exposed to contaminated hand sanitizer?
|
A: Call your doctor if you experience a serious reaction to hand sanitizer. People who have been exposed to contaminated hand sanitizer and are experiencing symptoms should seek immediate medical treatment for potential reversal of toxic effects.
The FDA encourages consumers and health care professionals to report adverse events experienced with the use of hand sanitizers to the FDA’s MedWatch Adverse Event Reporting program:
Complete and submit the report online; or
Download and complete the form, then submit it via fax at 1-800-FDA-0178.
Include as much information as you can about the product that caused the reaction, including the product name, the manufacturer, and the lot number (if available).
See Q&A for Consumers: Hand Sanitizers and COVID-19 and Safely Using Hand Sanitizer for more information.
|
|
null | false
| null |
The Buddhist Uprising of 1966 (Vietnamese: Nổi dậy Phật giáo 1966), or more widely known in Vietnam as the Crisis in Central Vietnam (Vietnamese: Biến động Miền Trung), was a period of civil and military unrest in South Vietnam, largely focused in the I Corps area in the north of the country in central Vietnam. The area is a heartland of Vietnamese Buddhism, and at the time, activist Buddhist monks and civilians were at the forefront of opposition to a series of military juntas that had been ruling the nation, as well as prominently questioning the escalation of the Vietnam War.
During the rule of the Catholic Ngô Đình Diệm, the discrimination against the majority Buddhist population generated the growth of Buddhist institutions as they sought to participate in national politics and gain better treatment. In 1965, after a series of military coups that followed the fall of the Diệm regime in 1963, Air Marshal Nguyễn Cao Kỳ and General Nguyễn Văn Thiệu finally established a stable junta, holding the positions of Prime Minister and figurehead Chief of State respectively. The Kỳ-Thiệu regime was initially almost a feudal system, being more of an alliance of warlords than a state as each corps commander ruled his area as his own fiefdom, handing some of the taxes they collected over to the government in Saigon and keeping the rest for themselves. During that time, suspicion and tension continued between the Buddhist and Catholic factions in Vietnamese society.
The religious factor combined with a power struggle between Kỳ and General Nguyễn Chánh Thi, the commander of I Corps, a Buddhist local to the region and popular in the area. Thi was a strong-willed officer regarded as a capable commander, and Kỳ saw him as a threat, as did others within the junta. In February 1966, Kỳ attended a summit in Honolulu, where he became convinced that he now had American support to move against Thi, the strongest and most able of the corps commanders. In March 1966, Kỳ fired Thi and ordered him into exile in the United States under the false pretense of medical treatment. This prompted both civilians and some I Corps units to launch widespread civil protests against Kỳ's regime and halt military operations against Viet Cong. Kỳ gambled by allowing Thi to return to I Corps before departing for the US, but the arrival of the general to his native area only fuelled anti-Kỳ sentiment. The Buddhist activists, students and Thi loyalists in the military coalesced into the "Struggle Movement", calling for a return to civilian rule and elections. Meanwhile, Thi stayed in I Corps and did not leave; strikes and protests stopped civilian activity in the area, government radio stations were taken over and used for anti-Kỳ campaigning, and military operations ceased. Riots also spread to the capital Saigon and other cities further south.
At the start of April, Kỳ decided to move. He declared that Da Nang, the main centre in I Corps, was under communist control and publicly vowed to kill the mayor, who had expressed support for the Struggle Movement. Kỳ moved military forces into the city and travelled there to prepare for an assault, but had to withdraw and then start discussions with Buddhist leaders, as it was obvious that he was not strong enough to crush the opposition. In the meantime, he fired Thi's successor Nguyễn Văn Chuân because he wanted a firmer attempt to regain control, and appointed Tôn Thất Đính to replace him. Đính claimed to have calmed the situation, but Kỳ viewed the situation as appeasement, and on May 15, his forces drove off Đính and took over. During the previous month, American forces had also become involved in the stand-off, and the Struggle Movement viewed their participation as biased towards Kỳ, leading to some tense confrontations.
In the second half of May, Kỳ's forces began to force the issue and gradually wore down the Struggle Movement as the rebel I Corps forces were worn down, despite some American objections that his aggressive attacks had the potential to cause too much collateral damage. At one stage, Kỳ's forces ended up in a skirmish with American forces, and later, the Americans were in the middle of a stand-off between the Vietnamese factions regarding a mined bridge. As Kỳ's forces took back Da Nang and Huế in street fighting, Struggle Movement supporters saw American intervention as being pro-Kỳ, and anti-US riots resulted in some American buildings being burnt down. Kỳ's triumph ended the Buddhist movement's influence on politics and he confined their leader Thích Trí Quang to house arrest thereafter, while Thi left for the United States.
|
In the following text, identify the key commanders and leaders of the Buddhist Uprising of 1966, extract their roles, and place them in a bulleted list.
|
-- Nguyễn Cao Kỳ: Air Marshal of South Vietnam leading up to the Buddhist Uprising of 1966. Alongside Nguyễn Văn Thiệu in 1965, he established a stable junta known as the Kỳ-Thiệu regime and held the position of Prime Minister. This junta felt threatened by Commander Thi of the I Corps, forcing his exile as a result. Thi's exile sparked widespread civil unrest known as the "Struggle Movement". In the end, Kỳ triumphed over the Buddhist Uprising, ending the Buddhist movement's influence on local politics.
-- Nguyễn Chánh Thi: General of South Vietnam leading up to the Buddhist Uprising of 1966. The commander of the I Corps, Thi was a Buddhist local to the region and popular in the area. He was viewed as a strong-willed officer regarded as a capable commander. Thi was seen as a threat by the Kỳ-Thiệu junta and was exiled to the United States as a result by Kỳ. His exile sparked widespread civil unrest known as the "Struggle Movement". Thi returned to South Vietnam to help lead the Buddhist Uprising. In the end, Thi was defeated, resulting in his return to exile in the United States.
|
null | false
| null |
What are the names of all Depeche Mode albums?
|
Speak & Spell, released in 1981.
A Broken Frame, released in 1982.
Construction Time Again, released in 1983.
Some Great Reward, released in 1984.
Black Celebration, released in 1986.
Music for the Masses, released in 1987.
Violator, released in 1990.
Songs of Faith and Devotion, released in 1993.
Ultra, released in 1997.
Exciter, released in 2001.
Playing the Angel, released in 2005.
Sounds of the Universe, released in 2009.
Delta Machine, released in 2013.
Spirit, released in 2017.
Memento Mori, released in 2023.
|
|
null | false
| null |
What are some of the sight seeing and outdoor activities to do in Portugal, Spain
|
Day 1: Arrival in Lisbon
- Transfer from the airport to the hotel (8.6 miles/14 km)
Day 2: Lisbon
- Visit to Belém Tower (5.5 miles/9 km)
- Jerónimos Monastery
- St. George's Castle
- Alfama neighborhood
Day 3: Sintra
- Visit to Pena Palace (18.7 miles/30 km)
- Sintra National Palace
- Cabo da Roca (14.6 miles/23.5 km)
Day 4: Cascais
- Beach day in Cascais (16.8 miles/27 km)
- Visit to Boca do Inferno
- Cascais Marina
Day 5: Óbidos
- Visit to Óbidos Castle (52.8 miles/85 km)
- Walk through the village
- Visit to Alcobaça Monastery (22.8 miles/37 km)
Day 6: Nazaré
- Visit to Nazaré beach (30.7 miles/49.5 km)
- Cable car ride
- Visit to the Chapel of Our Lady of Nazaré
Day 7: Coimbra
- Visit to Coimbra University (68.3 miles/110 km)
- Walk through the historic center
- Visit to the Old Cathedral
Day 8: Aveiro
- Boat ride through the canals of Aveiro (44.7 miles/72 km)
- Visit to the Aveiro Cathedral
- Walk through the Fisherman's Quarter
Day 9: Porto
- Visit to the Ribeira neighborhood (46.6 miles/75 km)
- Climb the Clérigos Tower
- Visit to the São Bento train station
- Tour of the Port wine cellars
Day 10: Departure from Porto
- Transfer from the hotel to the airport (10.4 miles/17 km)
|
|
null | false
| 139
|
It is well known that language has certain structural properties which allows natural language speakers to make “infinite use of finite means" BIBREF3 . This structure allows us to generalize beyond the typical machine learning definition of generalization BIBREF4 (which considers performance on the distribution that generated the training set), permitting the understanding of any utterance sharing the same structure, regardless of probability. For example, sentences of length 100 typically do not appear in natural text or speech (our personal 'training set'), but can be understood regardless due to their structure. We refer to this notion as linguistic generalization .
Many problems in NLP are treated as sequence to sequence tasks with solutions built on seq2seq-attention based models. While these models perform very well on standard datasets and also appear to capture some linguistic structure BIBREF5 , BIBREF6 , BIBREF7 , they also can be quite brittle, typically breaking on uncharacteristic inputs BIBREF8 , BIBREF1 , indicating that the extent of linguistic generalization these models achieve is still somewhat lacking.
Due to the high capacity of these models, it is not unreasonable to expect them to learn some structure from the data. However, learning structure is not a sufficient condition to achieving linguistic generalization. If this structure is to be usable on data outside the training distribution, the model must learn the structure without additionally learning (overfitting on) patterns specific to the training data. One may hope, given the right hyperparameter configuration and regularization, that a model converges to a solution that captures the reusable structure without overfitting too much on the training set. While this solution exists in theory, in practice, it may be difficult to find.
In this work, we look at the feasibility of training and tuning seq2seq-attention models towards a solution that generalizes in this linguistic sense. In particular, we train models on a symbol replacement task with a well defined generalizable structure. The task is simple enough that all models achieve near perfect accuracy on the standard test set, i.e., where the inputs are drawn from the same distribution as that of the training set. We then test these models for linguistic generalization by creating test sets of uncharacteristic inputs, i.e., inputs that are not typical in the training distribution but still solvable given that the generalizable structure was learned. Our results indicate that generalization is highly sensitive; such that even changes in the random seed can drastically affect the ability to generalize. This dependence on an element that is not (or ideally should not be) a hyperparameter suggests that the line between generalization and failure is quite fine, and may not be feasible to reach simply by hyperparameter tuning alone.
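As a hedged illustration of this setup — the paper's exact symbol replacement task is not specified in this excerpt, so the vocabulary, replacement rule, and lengths below are assumptions — a toy generator for in-distribution training examples and longer, "uncharacteristic" probe inputs might look like:

```python
import random

def make_example(length, vocab, replace_map, rng):
    """One sample of a toy symbol replacement task: the target replaces
    designated symbols and copies everything else (an assumed task spec)."""
    src = [rng.choice(vocab) for _ in range(length)]
    tgt = [replace_map.get(tok, tok) for tok in src]
    return src, tgt

rng = random.Random(0)
vocab = [f"s{i}" for i in range(20)]
replace_map = {"s0": "X", "s1": "Y"}   # hypothetical replacement rule

# Training distribution: short sequences only.
train = [make_example(rng.randint(5, 10), vocab, replace_map, rng) for _ in range(1000)]
# "Uncharacteristic" probe inputs: much longer, but solvable by the same rule.
probe = [make_example(100, vocab, replace_map, rng) for _ in range(100)]
print(len(probe[0][0]))  # 100
```

A model that learned the generalizable structure (replace-or-copy) would solve the probe set despite never seeing length-100 inputs in training; one that overfit to training-specific patterns would not.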
We then test these models for linguistic generalization by creating test sets of uncharacteristic inputs, i.e., inputs that are not typical in the training distribution but still solvable given that the generalizable structure was learned.
|
How to test the models for linguistic generalization in this paper?
|
The authors create test sets of uncharacteristic inputs to test the models, i.e., inputs that are not typical in the training distribution but still solvable given that the generalizable structure was learned.
|
null | false
| null |
What is OAuth 2.0?
|
OAuth 2.0, which stands for “Open Authorization”, is a standard designed to allow a website or application to access resources hosted by other web apps on behalf of a user. It replaced OAuth 1.0 in 2012 and is now the de facto industry standard for online authorization.
|
|
null | false
| 391
|
This paper presents a novel neural model - Dynamic Fusion Network (DFN), for machine reading comprehension (MRC). DFNs differ from most state-of-the-art models in their use of a dynamic multi-strategy attention process, in which passages, questions and answer candidates are jointly fused into attention vectors, along with a dynamic multi-step reasoning module for generating answers. With the use of reinforcement learning, for each input sample that consists of a question, a passage and a list of candidate answers, an instance of DFN with a sample-specific network architecture can be dynamically constructed by determining what attention strategy to apply and how many reasoning steps to take. Experiments show that DFNs achieve the best result reported on RACE, a challenging MRC dataset that contains real human reading questions in a wide variety of types. A detailed empirical analysis also demonstrates that DFNs can produce attention vectors that summarize information from questions, passages and answer candidates more effectively than other popular MRC models.
This paper presents a novel neural model - Dynamic Fusion Network (DFN), for machine reading comprehension (MRC). DFNs differ from most state-of-the-art models in their use of a dynamic multistrategy attention process, in which passages, questions and answer candidates are jointly fused into attention vectors, along with a dynamic multi-step reasoning module for generating answers.
|
What do DFNs differ from most state-of-the-art models?
|
Their use of a dynamic multistrategy attention process, in which passages, questions and answer candidates are jointly fused into attention vectors, along with a dynamic multi-step reasoning module for generating answers.
|
null | false
| null |
Categorize each of the following instruments as either string or keyboard: Guitar, Violin, piano, harmonium, cello, accordion, banjo
|
Guitar - String
Violin - String
piano - keyboard
harmonium - keyboard
cello - string
accordion - keyboard
banjo - string
|
|
null | false
| null |
How many seasons of Greatest American Hero were there?
|
There were three seasons of The Greatest American Hero.
|
|
null | false
| null |
Give me a list of books that are long and hard to read. The sort of books with which I could impress someone who read a lot.
|
When talking to someone well read, I would go with some of the classics of literature. The following books require a very detailed read in order to be able to understand them, due to the complex topics they analyze. They are long, but also considered masterpieces:
- "Crime and Punishment" by Dostoievsky
- "Ulysses" by James Joyce
- "The Magic Mountain" by Thomas Mann
- "Gravity's Rainbow" by Thomas Pynchon
- "Don Quixote" by Miguel de Cervantes
- "Infinite Jest" by David Foster Wallace
|
|
null | false
| null |
Why do people move/sway during music?
|
Scientists believe we are born with a sense of rhythm, and our bodies naturally respond to the sound of music in different ways.
|
|
null | false
| 158
|
In this question answering task, a reading passage, a query, and several answer choices are given. P denotes the passage, Q denotes the query, and C denotes one of the multiple choices. The target of the model is to choose a correct answer A from the multiple choices based on the information in P and Q.
Fig. FIGREF1 is the pipeline overview of QACNN. First, we use an embedding layer to transform P, Q, and C into word embeddings. Then the compare layer generates a passage-query similarity map INLINEFORM0 and a passage-choice similarity map INLINEFORM1. The following part is the main component of QACNN. It consists of a two-stage CNN architecture. The first stage projects word-level features to the sentence level, and the second stage projects sentence-level features to the passage level. Moreover, we apply a query-based attention mechanism to each stage on the basis of the INLINEFORM2 feature at the word level and sentence level, respectively. After the QACNN layer, we obtain a feature for each answer choice. Finally, a prediction layer collects the output information from every choice feature and returns the most probable answer.
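As an illustrative sketch of the compare layer — assuming, as a simplification, that similarity is plain cosine similarity between word embeddings (the paper's exact compare function may differ) — a word-level passage-query similarity map can be computed as:

```python
import numpy as np

def similarity_map(P_emb, Q_emb, eps=1e-8):
    """Word-level similarity map between a passage and a query:
    entry (i, j) is the cosine similarity of passage word i and query word j."""
    P_n = P_emb / (np.linalg.norm(P_emb, axis=1, keepdims=True) + eps)
    Q_n = Q_emb / (np.linalg.norm(Q_emb, axis=1, keepdims=True) + eps)
    return P_n @ Q_n.T

rng = np.random.default_rng(0)
P_emb = rng.normal(size=(30, 50))   # 30 passage words, 50-dim embeddings (toy sizes)
Q_emb = rng.normal(size=(8, 50))    # 8 query words
M_pq = similarity_map(P_emb, Q_emb)
print(M_pq.shape)  # (30, 8)
```

The passage-choice map is built the same way with the choice embeddings in place of the query embeddings; the two-stage CNN then convolves over maps of this shape.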
First, we use embedding layer to transform P, Q, and C into word embedding.
|
What do the authors use to transform P, Q, and C into word embeddings?
|
Embedding layer.
|
null | false
| null |
What is a Flat Earth Society
|
The Flat Earth Society is a group of like-minded people who believe the earth to be flat rather than round. Although science has established that the earth is a roughly spherical mass orbiting the sun, Flat Earthers firmly believe this to be propaganda and continue to hold the belief that the earth is actually flat and not round.
|
|
null | false
| null |
Tell me whether the following European languages are in the Indo-European language family or not: Spanish, English, Basque, Finnish, Romanian, Hungarian, and Catalan.
|
Spanish, Romanian, and Catalan are all Indo-European languages classified within the Romance branch of Indo-European languages. English is also an Indo-European language, classified within the Germanic branch. Basque, Finnish, and Hungarian are not Indo-European languages. Basque is a language isolate and is not known to be related to any other living language. Finnish and Hungarian are Uralic languages and are not related to the Indo-European languages.
|
|
null | false
| null |
Should differential privacy be used in the U.S. census?
|
I disagree with the US Census Bureau in its use of differential privacy for the reasons that follow. The main purpose of the US Census is enumeration – to correctly count the population such that resource distribution can be performed smoothly and properly in the future. The use of differential privacy essentially undermines this goal. Although populations at the state level would remain true to the raw data, those at the town level could be subject to much higher variances. For example, the New York Times article “Changes to the Census Could Make Small Towns Disappear” exemplifies how differential privacy could compromise the enumerative purpose of the US Census. Native American towns that are much less populated than the urban districts could see their population numbers being halved by the DP algorithm. In addition, these incorrect numbers for Native Americans mean more than a mere calculation mistake – it could bring real-world consequences like reduced funding or inadequate political representation. Yet, isn’t the US Census counting the population such that everyone can receive federal resources and form political constituency equally and equitably? Therefore, the use of differential privacy essentially negates the Census’s purpose.
Besides the more material concern illustrated above, I argue that the use of differential privacy could also lead to a much more social and cultural issue – representation. The New York Times article has revealed that our society has adopted algorithms that always attend to the majority at the expense of the minority, and such logic behind the algorithmic design would push the representation issue into a vicious cycle. Native American people, whose land was brutally taken away and colonized, have been historically rendered as a minority group in the society. All talks on land reparations aside, the US Census Bureau has designed, and approved after checks, an algorithm that diminishes their very existence on the spreadsheet. What does this imply about the logic behind the implementation of differential privacy? If the Census Bureau is fully aware that most of the Native American towns are rather sparsely populated, and the algorithm to preserve differential privacy would lead to high variances for estimation of small populations, what does the implementation of this algorithm imply? That this group of people are fine with not being represented in the final headcount, even if they have spent time and money in the process? This show of negligence again invalidates the use of differential privacy in the US Census.
|
|
1905.06906
| true
| null |
We see that gated architectures almost always outperform recurrent, attention-based, and linear models (BoW, TFIDF, PV). This is largely because, while training and testing on the same domains, these models, especially recurrent and attention-based ones, may perform better. However, for Domain Adaptation, as they lack the gated structure which is trained in parallel to learn importance, their performance on the target domain is poor as compared to gated architectures. As gated architectures are based on convolutions, they exploit parallelization to give a significant boost in time complexity as compared to other models. This is depicted in Table 1.
We find that gated architectures vastly outperform the non-gated CNN model. The effectiveness of gated architectures relies on the idea of training a gate with the sole purpose of identifying a weighting. In the task of sentiment analysis, this weighting corresponds to which weights will lead to a decrement in the final loss, or in other words, the most accurate prediction of sentiment. In doing so, the gate architecture learns which words or n-grams contribute most to the sentiment; these words or n-grams often correlate with domain-independent words. On the other hand, the gate gives less weight to n-grams which are largely either specific to the domain or function-word chunks which contribute negligibly to the overall sentiment. This is what makes gated architectures effective at Domain Adaptation.
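As a rough sketch of the gating mechanism described here — a GLU-style gated convolution in plain numpy, offered as an illustration of the idea rather than the authors' exact model — the gate path multiplies the linear path element-wise through a sigmoid, learning which n-gram features to let through:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gated_conv1d(x, W_a, W_b):
    """GLU-style gated convolution over a token sequence: output is
    linear_path * sigmoid(gate_path), a learned per-feature weighting."""
    k = W_a.shape[0]                  # kernel width (n-gram size)
    T = x.shape[0] - k + 1            # number of valid positions
    out = np.empty((T, W_a.shape[2]))
    for t in range(T):
        window = x[t:t + k]                            # (k, d_in)
        a = np.einsum("kd,kdo->o", window, W_a)        # linear path
        g = np.einsum("kd,kdo->o", window, W_b)        # gate path
        out[t] = a * sigmoid(g)                        # gate weights each feature
    return out

rng = np.random.default_rng(0)
x = rng.normal(size=(12, 16))             # 12 tokens, 16-dim embeddings (toy sizes)
W_a = rng.normal(size=(3, 16, 32)) * 0.1  # width-3 kernel, 32 output channels
W_b = rng.normal(size=(3, 16, 32)) * 0.1
h = gated_conv1d(x, W_a, W_b)
print(h.shape)  # (10, 32)
```

Because every position is computed independently, the convolution parallelizes across the sequence, which is the source of the time-complexity advantage over recurrent models noted above.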
We see that gated architectures almost always outperform recurrent, attention-based, and linear models (BoW, TFIDF, PV). This is largely because, while training and testing on the same domains, these models, especially recurrent and attention-based ones, may perform better. However, for Domain Adaptation, as they lack the gated structure which is trained in parallel to learn importance, their performance on the target domain is poor as compared to gated architectures. As gated architectures are based on convolutions, they exploit parallelization to give a significant boost in time complexity as compared to other models.
The effectiveness of gated architectures relies on the idea of training a gate with the sole purpose of identifying a weighting. In the task of sentiment analysis, this weighting corresponds to which weights will lead to a decrement in the final loss, or in other words, the most accurate prediction of sentiment. In doing so, the gate architecture learns which words or n-grams contribute most to the sentiment; these words or n-grams often correlate with domain-independent words. On the other hand, the gate gives less weight to n-grams which are largely either specific to the domain or function-word chunks which contribute negligibly to the overall sentiment. This is what makes gated architectures effective at Domain Adaptation.
|
Does the fact that GCNs can perform well on this tell us that the task is simpler than previously thought?
|
No.
|
null | false
| null |
Of the following, which are characters from the TV show "I Love Lucy:" Lucy Ricardo, Ethel Mertz, Donald Trump, John Fitzgerald Kennedy, Ricky Ricardo, Fred Mertz, Batman, John Smith
|
Lucy Ricardo, Ethel Mertz, Ricky Ricardo, Fred Mertz
|
|
null | false
| null |
Which characters belong to DC or Marvel Universe? Magneto, Dr. Polaris
|
Dr. Polaris is DC, Magneto is Marvel
|
|
null | false
| null |
The 2018 UCI Track Cycling World Championships were the World Championships for track cycling in 2018. They took place in the Netherlands at the Omnisport Apeldoorn from 28 February to 4 March 2018.
|
Where was the 2018 UCI Track Cycling World Championships held?
|
The 2018 UCI Track Cycling World Championships were held at the Omnisport Apeldoorn velodrome in the Netherlands.
|
null | false
| null |
Amsterdam Centraal was designed by Pierre Cuypers, who is also known for his design of the Rijksmuseum in Amsterdam. While Cuypers was the principal architect, it is believed that he focused mostly on the decoration of the station building and left the structural design to railway engineers. The station was built by contractor Philipp Holzmann. The new central station replaced Amsterdam Willemspoort Station, which had closed in 1878, as well as the temporary Westerdok Station used from 1878 to 1889. The idea for a central station came from Johan Rudolph Thorbecke, then the Netherlands Minister of the Interior and responsible for the national railways, who, in 1884, laid two proposals before the Amsterdam municipal council. In the first proposal, the station would be situated between the Leidseplein and the Amstel river. In the other, it would be built in the open harbour front allowing for the station to be connected to the existing main lines in the area to the west and the south, but also to a projected new northern line.
|
Please write a bulleted list of the key people involved in the construction of the Amsterdam Centraal station, and their role in the project.
|
The key people involved in the construction of Amsterdam Centraal station were:
- Pierre Cuypers: designer
- Philipp Holzmann: building contractor
- Johan Rudolph Thorbecke: proposer
|
null | false
| null |
What does costco do?
|
Costco Wholesale is a membership warehouse club that offers direct-to-consumer pricing for products bought in bulk. The majority of Costco's profit is generated from memberships, not from the products it sells. It also offers brand-name merchandise at steeply discounted prices. With a Costco membership you can also purchase products online directly and have them shipped directly to you. Competitors to Costco include Sam's Club, Walmart, Amazon, Kroger, HEB, Home Depot, and Lowe's.
|
|
null | false
| null |
Name five professional sports teams in the state of New York.
|
Five professional sports teams in the state of New York are the New York Yankees, New York Mets, New York Islanders, Brooklyn Nets, and New York Knicks.
|
|
null | false
| null |
What is India's rank in Global Gender Gap Index 2022
|
135 out of 146 countries
|
|
null | false
| 311
|
Named Entity Recognition (NER) is an important step in most of the natural language processing (NLP) pipelines. It is designed to robustly handle proper names, which is essential for many applications. Although a seemingly simple task, it faces a number of challenges in noisy datasets and it is still considered an emerging research area BIBREF0 , BIBREF1 . Despite recent efforts, we still face limitations at identifying entities and (consequently) correctly classifying them. Current state-of-the-art NER systems typically have about 85-90% accuracy on news text - such as articles (CoNLL03 shared task data set) - but they still perform poorly (about 30-50% accuracy) on short texts, which do not have implicit linguistic formalism (e.g. punctuation, spelling, spacing, formatting, unorthodox capitalisation, emoticons, abbreviations and hashtags) BIBREF2 , BIBREF3 , BIBREF4 , BIBREF1 . Furthermore, the lack of external knowledge resources is an important gap in the process regardless of writing style BIBREF5 . To face these problems, research has been focusing on microblog-specific information extraction techniques BIBREF2 , BIBREF6 .
In this paper, we propose a joint clustering architecture that aims at minimizing the current gap between world knowledge and knowledge available in open domain knowledge bases (e.g., Freebase) for NER systems, by extracting features from unstructured data sources. To this aim, we use images and text from the web as input data. Thus, instead of relying on encoded information and manually annotated resources (the major limitation in NER architectures), we focus on a multi-level approach for discovering named entities, combining text and image features with a final classifier based on a decision tree model. We follow an intuitive and simple idea: some types of images are more related to people (e.g. faces) whereas some others are more related to organisations (e.g. logos), for instance. This principle is applied similarly to the text retrieved from websites: keywords for search engines representing names and surnames of people will often return similarly related texts, for instance. Thus, we derive some indicators (detailed in the final-classifier section), which are then used as input features in a final classifier.
To the best of our knowledge, this is the first report of a NER architecture which aims to provide a priori information based on clusters of images and text features.
In this paper, we propose a joint clustering architecture that aims at minimizing the current gap between world knowledge and knowledge available in open domain knowledge bases (e.g., Freebase) for NER systems, by extracting features from unstructured data sources.
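The paper's idea of turning clusters of retrieved images and texts into features for a final classifier can be sketched as follows. This is a minimal illustrative stand-in, not the paper's implementation: the cluster labels, ratio features, and hand-written decision rule (replacing the learned decision-tree model) are all assumptions.

```python
from collections import Counter

def indicator_features(image_clusters, text_clusters):
    """Turn cluster assignments of retrieved web images/texts into ratio
    indicators for a final classifier (feature names are illustrative)."""
    img = Counter(image_clusters)
    txt = Counter(text_clusters)
    n_img = max(len(image_clusters), 1)
    n_txt = max(len(text_clusters), 1)
    return {
        "face_ratio": img["face"] / n_img,   # faces suggest PERSON
        "logo_ratio": img["logo"] / n_img,   # logos suggest ORG
        "bio_ratio": txt["biography"] / n_txt,
        "news_ratio": txt["news"] / n_txt,
    }

def classify(features):
    """A tiny hand-written stand-in for the learned decision-tree model."""
    if features["face_ratio"] > 0.5 or features["bio_ratio"] > 0.5:
        return "PER"
    if features["logo_ratio"] > 0.5:
        return "ORG"
    return "O"
```

For a candidate whose image search mostly returns faces, `classify(indicator_features(["face", "face", "logo"], ["biography", "news"]))` would yield `"PER"`.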
|
What architecture do they propose?
|
A joint clustering architecture that aims at minimizing the current gap between world knowledge and knowledge available in open domain knowledge bases (e.g., Freebase) for Named Entity Recognition (NER) systems, by extracting features from unstructured data sources.
|
null | false
| null |
What are the branches of the United States Government?
|
The branches of the United States Government are the Judicial, Legislative, and Executive. The Judicial Branch evaluates the laws, and is made of the Supreme Court and lower courts. The Legislative Branch makes the laws, and is made of the Senate and the House of Representatives. The Executive branch carries out and enforces the law, and is made of the President, Vice President, Cabinet, and several government agencies.
|
|
null | false
| null |
It is considered a great honor to be invited to live on The Lawn, and 54 fourth-year undergraduates do so each year, joining ten members of the faculty who permanently live and teach in the Pavilions there. Similarly, graduate students may live on The Range. Edgar Allan Poe formerly lived in 13 West Range, and since 1904 the Raven Society has retrofitted and preserved his room much as it may have existed in the 1820s.
|
According to this paragraph about the University of Virginia, where did Edgar Allan Poe live during his time at the university?
|
Edgar Allen Poe lived in 13 West Range on The Lawn at the University of Virginia.
|
null | false
| 356
|
Several works have been proposed to learn multilingual word embeddings, which are then combined to perform cross-lingual document classifications. These word embeddings are trained on either word alignments or sentence-aligned parallel corpora. To provide reproducible benchmark results, we use MultiCCA word embeddings published by BIBREF3 .
There are multiple ways to combine these word embeddings for classification. We train a simple one-layer convolutional neural network (CNN) on top of the word embeddings, which has been shown to perform well on text classification tasks regardless of training data size BIBREF4 . Specifically, convolutional filters are applied to windows of word embeddings, with max-over-time pooling on top of them. We freeze the multilingual word embeddings while only training the classifier. Hyper-parameters such as the convolutional output dimension and window sizes are tuned by grid search over the Dev set of the same language as the train set.
We use a one hidden-layer MLP as classifier.
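The convolution-plus-max-over-time-pooling step described above can be sketched in plain numpy. This is an illustrative forward pass only (no training, no bias or nonlinearity), with shapes chosen for clarity rather than taken from the paper.

```python
import numpy as np

def cnn_max_over_time(embeddings, filters, window):
    """One convolutional layer over word embeddings with max-over-time pooling.

    embeddings: (seq_len, emb_dim) frozen word vectors
    filters:    (n_filters, window * emb_dim) convolution weights
    Returns a (n_filters,) sentence feature vector.
    """
    seq_len, emb_dim = embeddings.shape
    # Slide a window over the sequence and flatten each window of embeddings.
    windows = np.stack([
        embeddings[i:i + window].reshape(-1)
        for i in range(seq_len - window + 1)
    ])                                    # (n_windows, window * emb_dim)
    feature_maps = windows @ filters.T    # (n_windows, n_filters)
    return feature_maps.max(axis=0)       # max over time
```

Because the embeddings are frozen, only `filters` (and the classifier on top of the pooled vector) would receive gradient updates during training.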
|
What is used as a classifier?
|
One hidden-layer MLP.
|
null | false
| null |
Until the spring of 1972, Sowell was a registered Democrat, after which he then left the Democratic Party and resolved not to associate with any political party again, stating "I was so disgusted with both candidates that I didn't vote at all." Though he is often described as a black conservative, Sowell said, "I prefer not to have labels, but I suspect that 'libertarian' would suit me better than many others, although I disagree with the libertarian movement on a number of things." He has been described as one of the most prominent advocates of contemporary classical liberalism along with Friedrich Hayek and Larry Arnhart. Sowell primarily writes on economic subjects, generally advocating a free market approach to capitalism. Sowell opposes the Federal Reserve, arguing that it has been unsuccessful in preventing economic depressions and limiting inflation. Sowell described his study of Karl Marx in his autobiography; as a former Marxist who early in his career became disillusioned with it, he emphatically opposes Marxism, providing a critique in his book Marxism: Philosophy and Economics (1985).
Sowell has also written a trilogy of books on ideologies and political positions, including A Conflict of Visions, in which he speaks on the origins of political strife; The Vision of the Anointed, in which he compares the conservative/libertarian and liberal/progressive worldviews; and The Quest for Cosmic Justice, in which, as in many of his other writings, he outlines his thesis of the need felt by intellectuals, politicians, and leaders to fix and perfect the world in utopian and ultimately, he posits, disastrous fashions. Separate from the trilogy, but also in discussion of the subject, he wrote Intellectuals and Society, building on his earlier work, in which he discusses what he argues to be the blind hubris and follies of intellectuals in a variety of areas.
His book Knowledge and Decisions, a winner of the 1980 Law and Economics Center Prize, was heralded as a "landmark work," selected for this prize "because of its cogent contribution to our understanding of the differences between the market process and the process of government." In announcing the award, the centre acclaimed Sowell, whose "contribution to our understanding of the process of regulation alone would make the book important, but in reemphasizing the diversity and efficiency that the market makes possible, work goes deeper and becomes even more significant." Friedrich Hayek wrote: "In a wholly original manner [Sowell] succeeds in translating abstract and theoretical argument into highly concrete and realistic discussion of the central problems of contemporary economic policy."
Sowell opposes the imposition of minimum wages by governments, arguing in his book Basic Economics that "Unfortunately, the real minimum wage is always zero, regardless of the laws, and that is the wage that many workers receive in the wake of the creation or escalation of a government-mandated minimum wage, because they either lose their jobs or fail to find jobs when they enter the labor force." He goes further to argue that minimum wages disproportionately affect "members of racial or ethnic minority groups" that have been discriminated against. He asserts that "Before federal minimum wage laws were instituted in the 1930s, the black unemployment rate was slightly lower than the white unemployment rate in 1930. But then followed the Davis-Bacon Act of 1931, the National Industrial Recovery Act (NIRA) of 1933 and the Fair Labor Standards Act (FLSA) of 1938 – all of which imposed government-mandated minimum wages, either on a particular sector or more broadly... By 1954, black unemployment rates were double those of whites and have continued to be at that level or higher. Those particularly hard hit by the resulting unemployment have been black teenage males."
Sowell also favors decriminalization of all drugs. He opposes gun control laws, arguing, "On net balance, they do not save lives, but cost lives."
|
Give me a bulleted list of 3 books Thomas Sowell has written and what they are about.
|
The following are the names and overviews of three books Thomas Sowell has written:
- Knowledge and Decisions: This book discusses the differences between market and government processes
- The Vision of the Anointed: This book compares conservative and liberal worldviews
- A Conflict of Visions: This book reviews the origins of political strife
|
null | false
| 48
|
Table-to-text generation is an important and challenging task in natural language processing, which aims to produce a textual summary of a numerical table BIBREF0, BIBREF1. The related methods can be broadly divided into two categories: pipeline models and end-to-end models. The former consist of content selection, document planning and realisation, and were mainly used in early industrial applications such as weather forecasting and medical monitoring. The latter generate text directly from the table through a standard neural encoder-decoder framework to avoid error propagation, and have achieved remarkable progress. In this paper, we particularly focus on exploring how to improve the performance of neural methods on table-to-text generation.
Recently, ROTOWIRE, which provides tables of NBA players' and teams' statistics with a descriptive summary, has drawn increasing attention from the academic community. Figure FIGREF1 shows an example of part of a game's statistics and its corresponding computer-generated summary. We can see that the table has a formal structure including table row headers, table column headers and table cells. “Al Jefferson” is a table row header that represents a player, “PTS” is a table column header indicating that the column contains players' scores, and “18” is the value of the table cell; that is, Al Jefferson scored 18 points. Several related models have been proposed. They typically encode the table's records separately or as a long sequence and generate a long descriptive summary with a standard Seq2Seq decoder with some modifications. Wiseman explored two types of copy mechanism and found that the conditional copy model BIBREF3 performs better. Puduppully enhanced content selection ability by explicitly selecting and planning relevant records. Li improved the precision of describing data records in the generated texts by first generating a template and then filling in slots via a copy mechanism. Nie utilized results from pre-executed operations to improve the fidelity of generated texts. However, we claim that their encoding of tables as sets of records or as a long sequence is not suitable, because (1) the table consists of multiple players and different types of information, as shown in Figure FIGREF1, and the earlier encoding approaches only considered the table as sets of records or a one-dimensional sequence, which loses the information of the other (column) dimension; and (2) the table cells contain time-series data which change over time, so historical data can sometimes help the model select content.
Moreover, when a human writes a basketball report, he will not only focus on the players' outstanding performance in the current match, but also summarize players' performance in recent matches. Take Figure FIGREF1 again: not only do the gold texts mention Al Jefferson's great performance in this match, they also state that “It was the second time in the last three games he's posted a double-double”. The gold texts summarize John Wall's “double-double” performance in a similar way. Summarizing a player's performance in recent matches requires modeling a table cell with respect to its historical data (time dimension), which is absent in the baseline model. Although the baseline model Conditional Copy (CC) tries to summarize it for Gerald Henderson, it clearly produces a wrong statement, since he didn't get a “double-double” in this match.
To address the aforementioned problems, we present a hierarchical encoder to simultaneously model row, column and time dimension information. In detail, our model is divided into three layers. The first layer is used to learn the representation of the table cell. Specifically, we employ three self-attention models to obtain three representations of the table cell in its row, column and time dimensions. Then, in the second layer, we design a record fusion gate to identify the most important representation among those three dimensions and combine them into a dense vector. In the third layer, we use a mean pooling method to merge the previously obtained table cell representations in the same row into the representation of the table's row. Then, we use self-attention with a content selection gate BIBREF4 to filter out unimportant rows' information. To the best of our knowledge, this is the first work on neural table-to-text generation that models column and time dimension information. We conducted experiments on ROTOWIRE. Results show that our model outperforms existing systems, improving baseline BLEU from 14.19 to 16.85 ($+18.75\%$), P% of relation generation (RG) from 74.80 to 91.46 ($+22.27\%$), F1% of content selection (CS) from 32.49 to 41.21 ($+26.84\%$) and content ordering (CO) from 15.42 to 20.86 ($+35.28\%$) on the test set. It also exceeds the state-of-the-art model in terms of those metrics.
To address the aforementioned problems, we present a hierarchical encoder to simultaneously model row, column and time dimension information.
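The record fusion gate described above can be sketched as a softmax-weighted combination of the three dimension-wise representations. This is a simplified stand-in for the paper's gate: the scalar scoring via a single weight vector `w` is an assumption, not the paper's exact parameterization.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def record_fusion_gate(row_rep, col_rep, time_rep, w):
    """Fuse a cell's row/column/time representations into one dense vector.

    Each rep is a (d,) vector; w is a (d,) scoring vector standing in for
    the gate's learned parameters.
    """
    reps = np.stack([row_rep, col_rep, time_rep])  # (3, d)
    scores = reps @ w                              # importance per dimension
    weights = softmax(scores)                      # (3,) gate weights
    return weights @ reps                          # weighted sum -> (d,)
```

With untrained (zero) scores the gate reduces to a plain average of the three representations; training `w` lets it emphasize whichever dimension matters for a given cell.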
|
What is the hierarchical encoder used for?
|
The hierarchical encoder is used to simultaneously model row, column and time dimension information.
|
null | false
| null |
Give me a list of actresses who won an Oscar in the last 5 years.
|
1. Michelle Yeoh, 2022
2. Jessica Chastain, 2021
3. Frances McDormand, 2020
4. Renée Zellweger, 2019
5. Olivia Colman, 2018
|
|
null | false
| null |
Make a list of pets you would want if you knew they wouldn’t bite you
|
1. Tiger
2. Lion
3. Alligator
4. Polar Bear
5. Killer Whale
6. Pit Viper
7. Jaguar
8. Dinosaur
9. Shark
10. Wolf
|
|
null | false
| null |
"Colour the World" is a song by German production group Sash! featuring Nigerian-Swedish recording artist and producer Dr. Alban. It was released in 1999 via Mighty, Club Tools, and Multiply Records as the fourth and final single from the group's second studio album, Life Goes On (1998). It was successful in a number of European countries and reached number 15 on the UK Singles Chart.
|
What is Color of the world
|
Track listing
All lyrics are written by Dr. Alban, Ralf Kappmeier, Sascha Lappessen, Thomas Alisson; all music is composed by Ralf Kappmeier, Thomas Alisson, Sascha Lappessen.
No. Title Length
1. "Colour the World" (Single) 3:35
2. "Colour the World" (Extended Mix) 6:00
3. "Colour the World" (Dario G. Remix) 4:02
4. "Colour the World" (ATB Remix) 5:51
5. "Colour the World" (DJ Delicious Remix) 6:52
6. "Colour the World" (Dale Cooper & Vincent Price Remix) 6:19
Total length: 32:39
Credits
Design – Michael Kowalkowski
Lyrics – Dr. Alban, Ralf Kappmeier, Sascha Lappessen, Thomas Alisson
Mastering – J. Quincy Kramer
Music – Ralf Kappmeier, Sascha Lappessen, Thomas Alisson
Producer – Sash!, Tokapi
Vocals – Dr. Alban
|
null | false
| null |
Who regulates companies in Australia?
|
The Australian Securities and Investments Commission (ASIC) regulates financial services and consumer credit, and authorised financial markets operating in Australia.
|
|
1906.02715
| false
| null |
Our work extends these explorations of the geometry of internal representations. Investigating how BERT represents syntax, we describe evidence that attention matrices contain grammatical representations. We also provide mathematical arguments that may explain the particular form of the parse tree embeddings described in BIBREF8 . Turning to semantics, using visualizations of the activations created by different pieces of text, we show suggestive evidence that BERT distinguishes word senses at a very fine level. Moreover, much of this semantic information appears to be encoded in a relatively low-dimensional subspace.
To formalize what it means for attention matrices to encode linguistic features, we use an attention probe, an analog of edge probing BIBREF11 . An attention probe is a task for a pair of tokens, $(token_i, token_j)$ where the input is a model-wide attention vector formed by concatenating the entries $a_{ij}$ in every attention matrix from every attention head in every layer. The goal is to classify a given relation between the two tokens. If a linear model achieves reliable accuracy, it seems reasonable to say that the model-wide attention vector encodes that relation. We apply attention probes to the task of identifying the existence and type of dependency relation between two words.
Turning to semantics, using visualizations of the activations created by different pieces of text, we show suggestive evidence that BERT distinguishes word senses at a very fine level.
We apply attention probes to the task of identifying the existence and type of dependency relation between two words.
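The attention-probe input described above, a model-wide vector built by concatenating the (i, j) attention entries from every head in every layer, can be sketched directly. The linear readout below is untrained with placeholder weights; in the paper a linear model is actually fit to classify dependency relations.

```python
import numpy as np

def attention_probe_vector(attentions, i, j):
    """Model-wide attention vector for token pair (i, j).

    attentions: list over layers of arrays shaped (n_heads, seq, seq).
    Returns the concatenation of a_ij from every head of every layer.
    """
    return np.concatenate([layer[:, i, j] for layer in attentions])

def linear_probe(vec, W, b):
    """Linear readout over the probe vector; argmax gives the predicted
    dependency relation (W and b are placeholders, not trained weights)."""
    return int(np.argmax(W @ vec + b))
```

For a model with L layers and H heads, the probe vector has length L * H, regardless of sequence length.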
|
How were the feature representations evaluated?
|
The answers are shown as follows:
* attention probes
* using visualizations of the activations created by different pieces of text
|
null | false
| null |
In 1947, a tense diplomatic and military standoff began to unfold between the United States, the United Kingdom and the Soviet Union over the future of Germany. The Americans and Western European allies wanted to include the sectors of Germany which they controlled in the Marshall Plan, an economic plan to rebuild Europe after the devastation of the war. The Soviets perceived the Marshall Plan to be the foundation for an anti-Soviet alliance and pressured the Americans, British and French to back down. On 20 March 1948, the Soviet representative walked out of the meeting of the Allied Control Council, and on 31 March 1948, the United States Congress approved funding for the Marshall Plan. Soviet troops then began to block the corridor that brought supplies from the western zones of Germany to West Berlin. In response, an increased number of aircraft brought supplies by air from west Germany to Tempelhof airfield in the American sector and Gatow airfield in the British sector of Berlin. At the same time Soviet military aircraft began to violate airspace in West Berlin and harass (or what the military called "buzz") flights in and out of West Berlin. Despite the danger of flying in such conditions, civilian aircraft continued to fly in and out of Berlin.
|
Extract all the airports listed in this reference paragraph
|
Tempelhof, Gatow
|
null | false
| null |
What is the format of an email?
|
Emails contain a username, the @ symbol, and a website domain. The website domain must contain a "." (dot), where the left and right side of the dot are nonempty. For example, lorem.ipsum@dolor.com is an email.
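The rule above can be expressed as a small regular expression. Note this is a deliberately loose sketch of the stated rule only; real email validation (RFC 5322) is far more involved.

```python
import re

# Non-empty username, "@", then a domain containing a dot with
# non-empty left and right sides (no whitespace or extra "@").
EMAIL_RE = re.compile(r"[^@\s]+@[^@\s.]+\.[^@\s]+")

def looks_like_email(s):
    """Return True if s matches the simple username@domain.tld shape."""
    return EMAIL_RE.fullmatch(s) is not None
```

For example, `looks_like_email("lorem.ipsum@dolor.com")` is True, while strings missing the "@" or the domain dot are rejected.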
|
|
null | false
| null |
If playing golf, categorize which of the following parts of the course would be considered 'hazards', which would be 'safe', and which would be 'neither': The green, a sand trap, a stream, the fairway, the club house, and the tee box.
|
The green, fairway, and tee box would all be safe, while a sand trap and a stream would be hazards, and the club house would be neither.
|
|
null | false
| null |
What are the most popular dishes to bring to a potluck?
|
Since potlucks are designed to scale out the food preparation effort while providing a diverse menu, the most common dishes are those that are easy to prepare in large quantities and appeal to many different people. For example, meatballs are relatively inexpensive, widely enjoyed, and easy to keep warm in a crockpot. Chicken wings are popular for many of the same reasons. Salads are also a good choice because they do not need to be kept hot. Dishes that require immediate serving are typically avoided, because the food must be prepared and then travel to the event, which creates logistical and quality problems. Other popular choices are dishes that people take pride in, like chili, or ethnic staples that expose people to new tastes and cultures.
|
|
null | false
| null |
Give me a listing of ways I can incorporate physical activity into my daily routine.
|
Incorporating physical activity into your daily routine can come in many forms. Some examples include standing, walking, running, jogging, and walking the dog. You can also incorporate physical activity into everyday activities such as parking further away from the store's entrance in order to walk more.
|
|
null | false
| null |
What are some typical characteristics of a Persian cat?
|
Persian cats generally tend to be medium-sized cats with long-haired, fluffy coats, small, rounded ears, flatter faces, and small noses. They also have round heads and have the appearance of bigger eyes.
|
|
null | false
| null |
Pandemonium 2 is a psychedelic 2.5D game; specifically, it renders characters and environments with polygons and uses a 3D camera to create the appearance of 3-D, while gameplay is actually on a 2-D plane. Players can choose to be either Nikki or Fargus - who each have different specialties - with the option to change for each level. The player characters' abilities have been expanded over those in Pandemonium! with the ability to climb ropes, crawl, and pull themselves up ledges. During the quest, players can earn extra lives by collecting treasure.
Unique powers collected during gameplay may replace, extend or add to previous powers, depending on which character is in play and which power has been obtained. The characters handle the unique powers differently. For instance, collecting the "fire power" allows Nikki to shoot destructive fireballs, while it grants Fargus invincibility and a deadly touch, but requires him to constantly move forward. Machines and equipment are also available throughout the game, although they are tied to certain areas.
|
What is the negative side effect of the "fire power" on Fargus?
|
It requires him to constantly move forward.
|
null | false
| null |
Mark Richard Hunt (born 23 March 1974) is a New Zealand former mixed martial artist and kickboxer of Samoan descent, currently living in Sydney, Australia. Hunt competed in the Ultimate Fighting Championship (UFC) until 2018 and was the winner of the 2001 K-1 World Grand Prix. He is known as "King of Walk-Offs" due to his popularization of walking away and stopping attacks before the referee officially ends the match.
|
Extract the reason why was Mark Hunt known as the King of Walk Offs
|
Mark Hunt was a mixed martial artist and kickboxer who would walk away and stop his attacks before the referee officially ended the match.
|
null | false
| null |
Why should one travel the world?
|
As the son of a U.S. Diplomat, I've had the opportunity to live abroad most of my life in countries such as Cambodia, Australia, South Korea and Malaysia. Through my travels despite the sad goodbyes to friends and family, I learned the importance of being open-minded, resilient and open to a diverse network. When you travel abroad you experience a world you may have never considered or a culture that shows you the diversity of the world. This is important for personal development.
|
|
null | false
| null |
Neuromancer is a 1984 science fiction novel by American-Canadian writer William Gibson. Considered one of the earliest and best-known works in the cyberpunk genre, it is the only novel to win the Nebula Award, the Philip K. Dick Award, and the Hugo Award. It was Gibson's debut novel and the beginning of the Sprawl trilogy. Set in the future, the novel follows Henry Case, a washed-up hacker hired for one last job, which brings him in contact with a powerful artificial intelligence.
|
Given the reference text about the science fiction novel Neuromancer, tell me the novel's author and the name of the main protagonist.
|
Neuromancer was written by William Gibson and its main protagonist is Henry Case.
|
null | false
| null |
Why are Asian women’s fancy clothes so expensive
|
Traditional Asian clothes are made with expensive materials and a great deal of manual labour. On average, fancy clothes can take between 10 and 30 days to make, while simpler clothes take between 5 and 10 days.
Good tailors who understand client requirements are hard to find and have limited capacity to fulfil a large number of orders. Because this is manual labour, automation cannot help, especially for customised sizes and designs.
The raw materials and expensive cloth increase the cost, and a great deal of thought and aesthetic sense is needed to make the garment look nice. You are paying for the quality as well as the unique custom design.
Asian wedding clothes are normally the most expensive. Some high-end brands use gold and silver threads to make the wedding-day dress even more prestigious. An average wedding dress can cost from tens of thousands to hundreds of thousands of dollars.
|
|
1909.00694
| false
| null |
As a raw corpus, we used a Japanese web corpus that was compiled through the procedures proposed by BIBREF13. To extract event pairs tagged with discourse relations, we used the Japanese dependency parser KNP and in-house postprocessing scripts BIBREF14. KNP used hand-written rules to segment each sentence into what we conventionally called clauses (mostly consecutive text chunks), each of which contained one main predicate. KNP also identified the discourse relations of event pairs if explicit discourse connectives BIBREF4 such as “ので” (because) and “のに” (in spite of) were present. We treated Cause/Reason (原因・理由) and Condition (条件) in the original tagset BIBREF15 as Cause and Concession (逆接) as Concession, respectively. Here is an example of event pair extraction.
We constructed our seed lexicon consisting of 15 positive words and 15 negative words, as shown in Section SECREF27. From the corpus of about 100 million sentences, we obtained 1.4 million event pairs for AL, 41 million for CA, and 6 million for CO. We randomly selected subsets of AL event pairs such that positive and negative latter events were equal in size. We also sampled event pairs for each of CA and CO such that each was five times larger than AL. The results are shown in Table TABREF16.
FLOAT SELECTED: Table 1: Statistics of the AL, CA, and CO datasets.
We used the latest version of the ACP Corpus BIBREF12 for evaluation. It was used for (semi-)supervised training as well. Extracted from Japanese websites using HTML layouts and linguistic patterns, the dataset covered various genres. For example, the following two sentences were labeled positive and negative, respectively:
Although the ACP corpus was originally constructed in the context of sentiment analysis, we found that it could roughly be regarded as a collection of affective events. We parsed each sentence and extracted the last clause in it. The train/dev/test split of the data is shown in Table TABREF19.
FLOAT SELECTED: Table 2: Details of the ACP dataset.
As a raw corpus, we used a Japanese web corpus that was compiled through the procedures proposed by BIBREF13.
From the corpus of about 100 million sentences, we obtained 1.4 million event pairs for AL, 41 million for CA, and 6 million for CO. We randomly selected subsets of AL event pairs such that positive and negative latter events were equal in size. We also sampled event pairs for each of CA and CO such that each was five times larger than AL. The results are shown in Table TABREF16.
FLOAT SELECTED: Table 1: Statistics of the AL, CA, and CO datasets.
We used the latest version of the ACP Corpus BIBREF12 for evaluation. It was used for (semi-)supervised training as well.
Although the ACP corpus was originally constructed in the context of sentiment analysis, we found that it could roughly be regarded as a collection of affective events. We parsed each sentence and extracted the last clause in it. The train/dev/test split of the data is shown in Table TABREF19.
FLOAT SELECTED: Table 2: Details of the ACP dataset.
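The sampling described above, balancing positive/negative latter events in AL and sizing CA and CO to five times AL, can be sketched as follows. The pair format `(event_pair, polarity)` and the capping behavior when a source pool is too small are assumptions for illustration.

```python
import random

def balance_and_sample(al_pairs, ca_pairs, co_pairs, seed=0):
    """Balance AL by polarity, then sample CA and CO to 5x the AL size.

    al_pairs: list of (event_pair, polarity) with polarity in {"pos", "neg"}.
    ca_pairs, co_pairs: lists of event pairs (any hashable items).
    """
    rng = random.Random(seed)
    pos = [p for p in al_pairs if p[1] == "pos"]
    neg = [p for p in al_pairs if p[1] == "neg"]
    n = min(len(pos), len(neg))            # equalize positive/negative
    al = rng.sample(pos, n) + rng.sample(neg, n)
    target = 5 * len(al)                   # CA and CO: five times AL
    ca = rng.sample(ca_pairs, min(target, len(ca_pairs)))
    co = rng.sample(co_pairs, min(target, len(co_pairs)))
    return al, ca, co
```

A fixed seed keeps the subsets reproducible across runs, which matters when the same splits feed several training conditions.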
|
How big is the Japanese data?
|
7,000,000 pairs of events were extracted from the Japanese web corpus, and 529,850 pairs of events were extracted from the ACP corpus
|
null | false
| null |
What are five hobbies I can do in my spare time?
|
Hobbies can be a great way to spend your free time and can also help you learn and grow as a person. Five popular hobbies include Learning a new language, watching TV and movies, playing the piano, taking photographs, and exercising.
|
|
null | false
| null |
What are some of the best things about a summertime visit to Mount Desert Island, Maine?
|
Mount Desert Island is wonderful in the summer, with some of the best things about it being a bike ride on the carriage roads, sitting on rocks at Little Hunter's Beach, eating lobster at Beal's Lobster Pier, going for a walk on Sand Beach, hiking up Champlain Mountain, browsing the shops of Bar Harbor, and driving the scenic Park Loop Road.
|
|
null | false
| 197
|
The articles from the newspaper were converted into the proper format for training with the spaCy library. Different parameters were tested in order to get the optimal result. The dataset was shuffled, using the same seed for all the experiments, and was split into a train set (70%), a test set (20%) and a validation set (10%). The data was passed through the training algorithm in batches, with the batch size increasing from 4 to 32 at a compounding rate of 1.001. Additionally, a dropout rate was configured for every batch, initialized to 0.6 and decreasing to 0.4 during the training process. Most of the experiments were trained using 30 epochs.
The main area of study for the experiments focuses on three important components. At first, we investigate the difference in results between part of speech taggers that classify morphological features and taggers that detect only the part of speech. Moreover, we explore the significance of pretrained vectors used from a model and their effect on the extraction of better results. Most importantly, the usage of subwords of tokens from a tagger as embeddings is issued. For the experiments, precision, recall and f1 score are used as evaluation metrics.
Additionally, a dropout rate was configured in every batch, initialized to 0.6 which dropped during the training process to 0.4.
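The two schedules described above, batch size growing from 4 to 32 in steps of 1.001 and dropout decaying from 0.6 to 0.4, can be written as infinite generators. spaCy ships similar `compounding` and `decaying` utilities; this is a dependency-free sketch, and the per-step decay amount for the dropout schedule is an assumption (the text does not state it).

```python
def compounding(start, stop, factor):
    """Yield a batch size growing geometrically from start, capped at stop
    (mirroring a compounding(4, 32, 1.001) schedule)."""
    size = float(start)
    while True:
        yield min(int(size), stop)
        size = min(size * factor, stop)

def decaying(start, stop, decay):
    """Yield a dropout rate shrinking linearly from start down to stop."""
    rate = start
    while True:
        yield rate
        rate = max(rate - decay, stop)
```

In a training loop one would call `next()` on each generator per batch, so early batches are small with heavy dropout and later batches are large with lighter dropout.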
|
What is the initial dropout rate?
|
0.6.
|
null | false
| null |
Fidelity Bravery Integrity is which organisations motto
|
FBI
|
|
null | false
| null |
What is photosynthesis?
|
Photosynthesis is the biological process by which plants use carbon dioxide, sunlight and water to create oxygen and energy.
|
|
null | false
| 227
|
We use the SentEval toolkit to evaluate the quality of sentence representations from BERT activations. The evaluation encompasses a variety of downstream and probing tasks. Downstream tasks include text classification, natural language inference, paraphrase detection, and semantic similarity. Probing tasks use single sentence embedding as input, are designed to probe sentence-level linguistic phenomena, from superficial properties of sentences to syntactic information to semantic acceptability. For details about the tasks, please refer to BIBREF8 and BIBREF9. We compare the BERT embeddings against two state-of-the-art sentence embeddings, Universal Sentence Encoder BIBREF5, InferSent BIBREF2, and a baseline of averaging GloVe word embeddings.
Effect of Encoder Layer: We compare the performance of embeddings extracted from different encoder layers of a pre-trained BERT using bert-as-service BIBREF10. Since we are interested in the linguistic information encoded in the embeddings, we only add a logistic regression layer on top of the embeddings for each classification task. The results of using [CLS] token activations as embeddings are presented in Figure FIGREF1. The raw values are provided in the Appendix. In the heatmap, the raw values of metrics are normalized by the best performance of a particular task from all the models we evaluated including BERT. The tasks in the figure are grouped by task category. For example, all semantic similarity related tasks are placed at the top of the figure.
As can be seen from the figure, embeddings from top layers generally perform better than those from lower layers. However, for certain semantic probing tasks such as tense classification and subject and object number classification, middle layer embeddings perform the best. Intuitively, embeddings from the top layer should be more biased towards the targets of BERT's pre-training tasks, while bottom layer embeddings should be close to the word embeddings. We observed a higher correlation in performance between bottom layer embeddings and GloVe embeddings than between GloVe and embeddings from other layers. Overall, pre-trained BERT embeddings perform well in text classification and syntactic probing tasks. The biggest limitation lies in the semantic similarity and sentence surface information probing tasks, where we observed a big gap between BERT and other state-of-the-art models.
Effect of Pooling Methods: We examined different methods of extracting BERT hidden state activations. The pooling methods we evaluated include: CLS-pooling (the hidden state corresponding to the [CLS] token), SEP-pooling (the hidden state corresponding to the [SEP] token), Mean-pooling (the average of the hidden state of the encoding layer on the time axis), and Max-pooling (the maximum of the hidden state of the encoding layer on the time axis). To eliminate the layer-wise effects, we averaged the performance of each pooling method over different layers. The results are summarized in Table TABREF2, where the score for each task category is calculated by averaging the normalized values for the tasks within each category. Although the activations of [CLS] token hidden states are often used in fine-tuning BERT for classification tasks, Mean-pooling of hidden states performs the best in all task categories among all the pooling methods.
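The four pooling strategies can be illustrated on a toy matrix of hidden states of shape (seq_len, hidden_dim); following BERT's convention, we assume [CLS] is the first token and [SEP] the last:

```python
# Sketch of the four pooling strategies compared above, applied to a toy
# list of per-token hidden states. [CLS] is assumed to be the first token
# and [SEP] the last; mean/max pooling reduce over the time axis.

def pool(hidden_states, method):
    seq_len = len(hidden_states)
    dim = len(hidden_states[0])
    if method == "cls":
        return list(hidden_states[0])
    if method == "sep":
        return list(hidden_states[-1])
    if method == "mean":
        return [sum(h[d] for h in hidden_states) / seq_len for d in range(dim)]
    if method == "max":
        return [max(h[d] for h in hidden_states) for d in range(dim)]
    raise ValueError(method)

states = [[1.0, 0.0], [3.0, 2.0], [2.0, 4.0]]  # 3 tokens, 2-dim states
print(pool(states, "mean"))  # [2.0, 2.0]
print(pool(states, "max"))   # [3.0, 4.0]
```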
Pre-trained vs. Fine-tuned BERT: All the models we considered in this paper benefit from supervised training on natural language inference datasets. In this section, we compare the performance of embeddings from pre-trained BERT and fine-tuned BERT. Two natural language inference datasets, MNLI BIBREF11 and SNLI, were considered in the experiment. Inspired by the fact that embeddings from different layers excel in different tasks, we also conducted experiments by concatenating embeddings from multiple layers. The results are presented in Table TABREF3, and the raw values are provided in the Appendix.
As we can see from the table, embeddings from pre-trained BERT are good at capturing sentence-level syntactic and semantic information, but poor at semantic similarity tasks and surface information tasks. Our findings are consistent with the work of BIBREF12 on assessing BERT's syntactic abilities. Fine-tuning on natural language inference datasets improves the quality of the sentence embeddings, especially on semantic similarity and entailment tasks. Combining embeddings from two layers can further boost the performance on sentence surface and syntactic information probing tasks. Experiments were also conducted by combining embeddings from more than two layers; however, there was no significant and consistent improvement over pooling from just two layers. Adding a multi-layer perceptron (MLP) instead of a logistic regression layer on top of the embeddings also produces no significant change in performance, which suggests that most linguistic properties can be extracted with just a linear readout of the embeddings. Our best model is the combination of embeddings from the top and bottom layers of BERT fine-tuned on the SNLI dataset.
Universal text representations are important for many NLP tasks as modern deep learning models are becoming more and more data-hungry and computationally expensive. On one hand, most research and industry tasks face data sparsity problem due to the high cost of annotation. Universal text representations can mitigate this problem to a certain extent by performing implicit transfer learning among tasks. On the other hand, modern deep learning models with millions of parameters are expensive to train and host, while models using text representation as the building blocks can achieve similar performance with much fewer tunable parameters. The pre-computed text embeddings can also help decrease model latency dramatically at inference time.
Since the introduction of pre-trained word embeddings such as word2vec BIBREF0 and GloVe BIBREF1, much effort has been devoted to developing universal sentence embeddings. Initial attempts at learning sentence representations using unsupervised approaches did not yield satisfactory performance. Recent work BIBREF2 has shown that models trained in a supervised fashion on datasets like the Stanford Natural Language Inference (SNLI) corpus BIBREF3 can consistently outperform unsupervised methods like SkipThought vectors BIBREF4. More recently, the Universal Sentence Encoder BIBREF5, equipped with the Transformer BIBREF6 as the encoder and co-trained on a large amount of unsupervised training data and the SNLI corpus, has demonstrated surprisingly good performance with minimal amounts of supervised training data for a transfer task.
BERT BIBREF7, one of the latest models that leverage heavily on language model pre-training, has achieved state-of-the-art performance in many natural language understanding tasks ranging from sequence and sequence pair classification to question answering. The fact that pre-trained BERT can be easily fine-tuned with just one additional output layer to create a state-of-the-art model for a wide range of tasks suggests that BERT representations are potential universal text embeddings.
Passages that consist of multiple sentences are coherent units of natural languages that convey information at a pragmatic or discourse level. While there are many models for generating and evaluating sentence embeddings, there hasn't been a lot of work on passage level embedding generation and evaluation.
In this paper, we conducted an empirical study of layer-wise activations of BERT as general-purpose text embeddings. We want to understand to what extent the BERT representation captures syntactic and semantic information. The sentence-level embeddings are evaluated on downstream and probing tasks using the SentEval toolkit BIBREF8, while the passage-level encodings are evaluated on four passage-level QA datasets (both factoid and non-factoid) under a learning-to-rank setting. Different methods of combining query embeddings with passage-level answer embeddings are examined.
Effect of Encoder Layer: We compare the performance of embeddings extracted from different encoder layers of a pre-trained BERT using bert-as-service (Xiao, 2018).
In this paper, we conducted an empirical study of layer-wise activations of BERT as general-purpose text embeddings. We want to understand to what extent the BERT representation captures syntactic and semantic information.
|
What do they evaluate in the paper?
|
The quality of sentence representations extracted from different encoder layers of a pre-trained BERT as general-purpose text embeddings.
|
null | false
| null |
Classify the below states based on east or west coast.
Rhode Island, South Carolina, Vermont, Alaska, Hawaii
|
East Coast - Rhode Island, South Carolina, Vermont
West Coast - Alaska and Hawaii
|
|
null | false
| null |
What are the national languages of Switzerland?
|
The national languages of Switzerland are German, French, Italian, and Romansh.
|
|
1601.00901
| false
| null |
In this paper, we propose a novel approach to joint learning of ontology and semantic parsing, which is designed for homogeneous collections of text, where each fact is usually stated only once, therefore we cannot rely on data redundancy. Our approach is text-driven, semi-automatic and based on grammar induction. It is presented in Figure 1. The input is a seed ontology together with text annotated with concepts from the seed ontology. The result of the process is an ontology with extended instances, classes, taxonomic and non-taxonomic relations, and a semantic parser, which transforms basic units of text, i.e. sentences, into semantic trees. Compared to trees that structure sentences based on syntactic information, nodes of semantic trees contain semantic classes, like location, profession, color, etc. Our approach does not rely on any syntactic analysis of text, like part-of-speech tagging or dependency parsing. The grammar induction method works on the premise of curriculum learning BIBREF7, where the parser first learns to parse simple sentences, then proceeds to learn more complex ones. The induction method is iterative, semi-automatic and based on frequent patterns. A context-free grammar (CFG) is induced from the text, which is represented by several layers of semantic annotations. The motivation to use CFG is that it is very suitable for the proposed alternating usage of top-down and bottom-up parsing, where new rules are induced from previously unparsable parts. Furthermore, it has been shown by BIBREF8 that CFGs are expressive enough to model almost every language phenomenon. The induction is based on a greedy iterative procedure that involves minor human involvement, which is needed for seed rule definition and rule categorization. Our experiments show that although the grammar is ambiguous, it is scalable enough to parse a large dataset of sentences.
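As a self-contained illustration of recognizing a sentence with a context-free grammar (not the authors' parser), here is a tiny CYK recognizer for a grammar in Chomsky normal form; the toy grammar and sentence are invented:

```python
# Minimal CYK recognizer for a CFG in Chomsky normal form.
# lexical: word -> set of nonterminals (rules A -> word)
# binary: (B, C) -> set of A (rules A -> B C)

def cyk(words, lexical, binary, start="S"):
    """Return True iff `words` is derivable from `start` under the grammar."""
    n = len(words)
    table = [[set() for _ in range(n + 1)] for _ in range(n)]
    for i, w in enumerate(words):
        table[i][i + 1] = set(lexical.get(w, ()))
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for b in table[i][k]:
                    for c in table[k][j]:
                        table[i][j] |= binary.get((b, c), set())
    return start in table[0][n]

lexical = {"anna": {"NP"}, "lives": {"V"}, "in": {"P"}, "berlin": {"NP"}}
binary = {("V", "PP"): {"VP"}, ("P", "NP"): {"PP"}, ("NP", "VP"): {"S"}}
print(cyk("anna lives in berlin".split(), lexical, binary))  # True
```

A real induction loop would repeatedly parse the corpus, collect frequent unparsable fragments, and propose new rules for human categorization; the recognizer above is only the parsing core.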
The grammar induction method works on the premise of curriculum learning BIBREF7 , where the parser first learns to parse simple sentences, then proceeds to learn more complex ones. The induction method is iterative, semi-automatic and based on frequent patterns. A context-free grammar (CFG) is induced from the text, which is represented by several layers of semantic annotations.
|
How did they induce the CFG?
|
The answers are shown as follows:
* the parser first learns to parse simple sentences, then proceeds to learn more complex ones. The induction method is iterative, semi-automatic and based on frequent patterns
|
1806.03191
| false
| null |
Most unsupervised distributional approaches for hypernymy detection are based on variants of the Distributional Inclusion Hypothesis BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF4 . Here, we compare to two methods with strong empirical results. As with most DIH measures, they are only defined for large, sparse, positively-valued distributional spaces. First, we consider WeedsPrec BIBREF8 which captures the features of INLINEFORM0 which are included in the set of a broader term's features, INLINEFORM1 : DISPLAYFORM0
Second, we consider invCL BIBREF11 which introduces a notion of distributional exclusion by also measuring the degree to which the broader term contains contexts not used by the narrower term. In particular, let INLINEFORM0
Although most unsupervised distributional approaches are based on the DIH, we also consider the distributional SLQS model based on an alternative informativeness hypothesis BIBREF10, BIBREF4. Intuitively, the SLQS model presupposes that general words appear mostly in uninformative contexts, as measured by entropy. Specifically, SLQS depends on the median entropy of a term's top INLINEFORM0 contexts, defined as INLINEFORM1
For completeness, we also include cosine similarity as a baseline in our evaluation.
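The DISPLAYFORM/INLINEFORM placeholders above elide the actual formulas. As a hedged sketch, here are WeedsPrec, invCL (via ClarkeDE, following the definitions commonly given in the literature; treat the exact variants as an assumption), and the cosine baseline, for sparse vectors represented as feature-to-weight dicts:

```python
import math

# Sparse distributional vectors as dicts: feature -> non-negative weight.

def weeds_prec(u, v):
    """Fraction of u's weight mass on features also used by the broader term v."""
    shared = sum(w for f, w in u.items() if f in v)
    return shared / sum(u.values())

def clarke_de(u, v):
    """Degree to which u's contexts are included in v's (ClarkeDE)."""
    return sum(min(w, v.get(f, 0.0)) for f, w in u.items()) / sum(u.values())

def inv_cl(u, v):
    """Inclusion of u in v, penalized by inclusion of v in u (Lenci & Benotto)."""
    return math.sqrt(clarke_de(u, v) * (1.0 - clarke_de(v, u)))

def cosine(u, v):
    dot = sum(w * v.get(f, 0.0) for f, w in u.items())
    norm = lambda x: math.sqrt(sum(w * w for w in x.values()))
    return dot / (norm(u) * norm(v))

narrow = {"bark": 2.0, "tail": 1.0}             # e.g. "dog"
broad = {"bark": 1.0, "tail": 1.0, "fly": 3.0}  # e.g. "animal"
print(weeds_prec(narrow, broad))  # 1.0: all of narrow's features are included
```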
First, we consider WeedsPrec BIBREF8 which captures the features of INLINEFORM0 which are included in the set of a broader term's features, INLINEFORM1 : DISPLAYFORM0
Second, we consider invCL BIBREF11 which introduces a notion of distributional exclusion by also measuring the degree to which the broader term contains contexts not used by the narrower term.
Although most unsupervised distributional approaches are based on the DIH, we also consider the distributional SLQS model based on an alternative informativeness hypothesis BIBREF10, BIBREF4.
For completeness, we also include cosine similarity as a baseline in our evaluation.
|
Which distributional methods did they consider?
|
The answers are shown as follows:
* WeedsPrec BIBREF8
* invCL BIBREF11
* SLQS model
* cosine similarity
|
null | false
| null |
Anfield is a football stadium in Anfield, Liverpool, Merseyside, England, which has a seating capacity of 53,394, making it the seventh largest football stadium in England. It has been the home of Liverpool F.C. since their formation in 1892. It was originally the home of Everton from 1884 to 1891, before they moved to Goodison Park after a dispute with the club president.
The stadium has four stands: the Spion Kop, the Main Stand, the Sir Kenny Dalglish Stand and the Anfield Road End. The record attendance of 61,905 was set at a match between Liverpool and Wolverhampton Wanderers in 1952. The ground converted to an all-seater stadium in 1994 as a result of the Taylor Report, which reduced its capacity.
Two gates at the stadium are named after former Liverpool managers: Bill Shankly and Bob Paisley. Both managers have been honoured with statues outside the stadium: Shankly's unveiled in 1997 by the Kop Stand and Paisley's in 2020 by the Main Stand. The ground is 2 miles (3 km) from Liverpool Lime Street railway station. It was proposed in 2002 to replace the stadium with a new one in the adjacent Stanley Park, but after the acquisition of Liverpool F.C. by Fenway Sports Group in 2010 it was made clear this would not happen.
Construction for an extension to the main stand began on 8 December 2014. This extension, one of the largest all-seater single stands in European football, opened to the public on 9 September 2016, increasing the stadium capacity to 53,394. The Anfield Road Stand is currently being redeveloped to bring the stadium capacity to around 61,000. It is due to be completed in time for the 2023–24 season.
|
When was Anfield football stadium built?
|
Anfield was built in 1884 and was first occupied by Everton; it has been redeveloped many times over the years. The most recent redevelopment, of the Anfield Road Stand, is intended to bring the total capacity of the stadium to around 61,000 and is due to be completed in time for the 2023-24 season.
|
null | false
| null |
What would be the pros and cons of replacing all civilian airplanes with dirigibles?
|
Replacing all civilian airplanes with dirigibles would have positive and negative effects. The most notable positive effect would be reduced environmental impact: although dirigibles, like airplanes, need engines, dirigibles are filled with lighter-than-air gas and therefore no power is needed to counteract gravity, so they consume less fuel. However, there would be a significant negative economic impact because dirigibles fly much more slowly than airplanes, thus the items airplanes carry (e.g. people and freight) would take much longer to get to their destination. In addition, another problem with dirigibles is that they are less able to handle strong winds compared to airplanes, so it might be necessary for the dirigible to take a more circuitous route than an airplane, or simply delay the flight until the bad weather passes.
|
|
null | false
| null |
2003: Cricket World Cup
Tendulkar made 673 runs in 11 matches in the 2003 Cricket World Cup, helping India reach the final. While Australia won, Tendulkar was given the Man of the Tournament award.
He continued to score heavily in ODI cricket that year, with two hundreds in a tri-series involving New Zealand and Australia. As a part-time bowler, he dismissed an exhausted centurion, Matthew Hayden, in the tri-series final.
|
How many runs did Sachin score in the 2011 world cup?
|
Tendulkar made 673 runs in 11 matches in the 2003 Cricket World Cup, helping India reach the final. While Australia won, Tendulkar was given the Man of the Tournament award
|
null | false
| null |
What causes rain?
|
Rain is the byproduct of the natural cycle of evaporation and condensation that happens all over the world. Moisture evaporates into the atmosphere from bodies of water and soil. Once this moisture is aloft in the atmosphere, certain conditions cause it to precipitate in the form of rain. These conditions include changes in atmospheric pressure, orographic lift caused by hilly terrain, and changes in temperature. Once the moisture starts to condense into larger and larger droplets, they eventually become heavy enough to fall from the cloud in the form of rain drops. This cycle then continues on with the newly fallen rain beginning to evaporate into the atmosphere.
|
|
1910.10408
| true
| null |
Methods ::: Combining the two methods
We further propose to use the two methods together to combine their strengths. In fact, while the length token acts as a soft constraint that biases NMT to produce short or long translations with respect to the source, no explicit length information is given to the network. On the other hand, length encoding leverages information about the target length, but it is agnostic of the source length.
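As one plausible realization of the length-token idea (the bucket thresholds below are illustrative assumptions, not the paper's exact values), the source sentence is prefixed with a coarse token encoding the desired target/source length ratio:

```python
# Sketch of the length-token mechanism: prepend a coarse control token to the
# source based on the target/source length ratio. Thresholds are invented.

def add_length_token(source_tokens, target_len):
    ratio = target_len / len(source_tokens)
    if ratio < 0.95:
        token = "<short>"
    elif ratio > 1.05:
        token = "<long>"
    else:
        token = "<normal>"
    return [token] + source_tokens

print(add_length_token(["hello", "world", "!"], target_len=2))
# ['<short>', 'hello', 'world', '!']
```

At training time the ratio is computed from the reference; at inference time the token becomes a user-controllable knob for biasing output length.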
Methods ::: Combining the two methods
We further propose to use the two methods together to combine their strengths. In fact, while the length token acts as a soft constraint that biases NMT to produce short or long translations with respect to the source, no explicit length information is given to the network. On the other hand, length encoding leverages information about the target length, but it is agnostic of the source length.
|
Do they experiment with combining both methods?
|
Yes.
|
null | false
| 137
|
Pre-trained models BIBREF0, BIBREF1 have received much attention recently thanks to their impressive results in many downstream NLP tasks. Additionally, multilingual pre-trained models enable many NLP applications for other languages via zero-shot cross-lingual transfer. Zero-shot cross-lingual transfer has shown promising results for rapidly building applications for low-resource languages. BIBREF2 show the potential of multilingual BERT BIBREF0 in zero-shot transfer for a large number of languages from different language families on five NLP tasks, namely, natural language inference, document classification, named entity recognition, part-of-speech tagging, and dependency parsing.
Although multilingual models are an important ingredient for enhancing language technology in many languages, recent research on improving pre-trained models puts much emphasis on English BIBREF3, BIBREF4, BIBREF5. The current state of affairs makes it difficult to translate advancements in pre-training from English to non-English languages. To the best of our knowledge, there are only three multilingual pre-trained models available to date: (1) multilingual BERT (mBERT), which supports 104 languages; (2) the cross-lingual language model BIBREF6, which supports 100 languages; and (3) Language-Agnostic SEntence Representations (LASER) BIBREF7, which supports 93 languages. Among the three models, LASER is based on a neural machine translation approach and strictly requires parallel data to train.
Do multilingual models always need to be trained from scratch? Can we transfer linguistic knowledge learned by English pre-trained models to other languages? In this work, we develop a technique to rapidly transfer an existing pre-trained model from English to other languages in an energy-efficient way BIBREF8. As a first step, we focus on building a bilingual language model (LM) of English and a target language. Starting from a pre-trained English LM, we learn the target-language-specific parameters (i.e., word embeddings), while keeping the encoder layers of the pre-trained English LM fixed. We then fine-tune both the English and target models to obtain the bilingual LM. We apply our approach to autoencoding language models with a masked language model objective and show the advantage of the proposed approach in zero-shot transfer. Our main contributions in this work are:
We propose a fast adaptation method for obtaining a bilingual BERT-base model of English and a target language within a day using one Tesla V100 16GB GPU.
We evaluate our bilingual LMs for six languages on two zero-shot cross-lingual transfer tasks, namely natural language inference BIBREF9 and universal dependency parsing. We show that our models offer performance competitive with, or even better than, mBERT.
We illustrate that our bilingual LMs can serve as an excellent feature extractor in supervised dependency parsing task.
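The adaptation step described above, learning only the target-language embeddings while the pre-trained encoder stays fixed, can be illustrated with a toy SGD update that skips frozen parameters; the parameter names and values here are invented:

```python
# Toy illustration of parameter freezing during the first adaptation phase:
# only the target-language embedding parameters receive gradient updates,
# while the pre-trained encoder parameters are left untouched.

def sgd_step(params, grads, frozen, lr=0.1):
    """Apply one SGD step, skipping any parameter whose name is in `frozen`."""
    return {
        name: value if name in frozen else value - lr * grads[name]
        for name, value in params.items()
    }

params = {"target_embeddings": 1.0, "encoder_layer_0": 5.0}
grads = {"target_embeddings": 0.5, "encoder_layer_0": 2.0}
updated = sgd_step(params, grads, frozen={"encoder_layer_0"})
print(updated)  # target embeddings updated, encoder unchanged
```

In a deep-learning framework the same effect is typically achieved by disabling gradients on the encoder's parameters; the second phase (joint fine-tuning) simply empties the frozen set.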
Our main contributions in this work are: We propose a fast adaptation method for obtaining a bilingual BERT-base model of English and a target language within a day using one Tesla V100 16GB GPU.
|
What method do the authors propose?
|
A fast adaptation method.
|
null | false
| null |
Classify the following as smooth or sharp: broken glass, thorn, leather, baby skin, cactus, mirror
|
Broken Glass: Sharp
Thorn: Sharp
Leather: Smooth
Baby Skin: Smooth
Cactus: Sharp
Mirror: Smooth
|
|
null | false
| 64
|
We apply our system to three datasets from different domains. SQuAD BIBREF3 consists of questions whose answers are free-form spans of text from passages in Wikipedia articles. We follow the same setting as in BIBREF0, and split 10% of the training questions as the test set, and report performance when training on subsets of the remaining data ranging from 1% to 90% of the full set. We also report the performance on the dev set when trained on the full training set (1* in Table 2). We use the same hyperparameter settings as in prior work. We compare and study four different settings: 1) the Supervised Learning (SL) setting, which is only trained on the supervised data; 2) the best performing GDAN model from BIBREF0; 3) pretraining on a Language Modeling (LM) objective and fine-tuning on the supervised data; and 4) pretraining on the Cloze dataset and fine-tuning on the supervised data. The LM and Cloze methods use exactly the same data for pretraining, but differ in the loss functions used. We report F1 and EM scores on our test set using the official evaluation scripts provided by the authors of the dataset.
TriviaQA BIBREF4 comprises over 95K web question-answer-evidence triples. Like SQuAD, the answers are spans of text. Similar to the setting in SQuAD, we create multiple smaller subsets of the entire set. For our semi-supervised QA system, we use the BiDAF+SA model BIBREF1, the highest performing publicly available system for TriviaQA. Here again, we compare the supervised learning (SL) setting against pretraining on the Cloze set and fine-tuning on the supervised set. We report F1 and EM scores on the dev set.
We also test on the BioASQ 5b dataset, which consists of question-answer pairs from PubMed abstracts. We use the publicly available system from BIBREF14 , and follow the exact same setup as theirs, focusing only on factoid and list questions. For this setting, there are only 899 questions for training. Since this is already a low-resource problem we only report results using 5-fold cross-validation on all the available data. We report Mean Reciprocal Rank (MRR) on the factoid questions, and F1 score for the list questions.
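The data regime shared by these experiments, holding out 10% of the training questions as a test set and then training on growing fractions of the remainder, can be sketched as:

```python
import random

# Sketch of the split protocol: hold out 10% of the examples as a fixed test
# set, then take nested subsets of the remaining pool as training sets.

def make_splits(examples, test_frac=0.1, train_fracs=(0.01, 0.1, 0.5, 0.9), seed=0):
    rng = random.Random(seed)
    shuffled = examples[:]
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_frac)
    test, pool = shuffled[:n_test], shuffled[n_test:]
    subsets = {f: pool[: int(len(pool) * f)] for f in train_fracs}
    return test, subsets

test, subsets = make_splits(list(range(1000)))
print(len(test), {f: len(s) for f, s in subsets.items()})
# 100 {0.01: 9, 0.1: 90, 0.5: 450, 0.9: 810}
```

Fixing the seed keeps the held-out test set identical across all training-size conditions, so the learning curve compares models on exactly the same evaluation data.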
We follow the same setting as in (Yang et al., 2017), and split 10% of training questions as the test set, and report performance when training on subsets of the remaining data ranging from 1% to 90% of the full set.
|
What is the training-testing ratio of the system?
|
It ranged from 1:10 to 9:1.
|
2003.08553
| false
| null |
System description ::: Architecture
As shown in Figure FIGREF4, humans can have two different kinds of roles in the system: Bot-Developers who want to create a bot using the data they have, and End-Users who will chat with the bot(s) created by bot-developers. The components involved in the process are:
QnAMaker Portal: This is the Graphical User Interface (GUI) for using QnAMaker. This website is designed to ease the use of management APIs. It also provides a test pane.
QnaMaker Management APIs: This is used for the extraction of Question-Answer (QA) pairs from semi-structured content. It then passes these QA pairs to the web app to create the Knowledge Base Index.
Azure Search Index: Stores the KB with questions and answers as indexable columns, thus acting as a retrieval layer.
QnaMaker WebApp: Acts as a layer between the Bot, Management APIs, and Azure Search Index. WebApp does ranking on top of retrieved results. WebApp also handles feedback management for active learning.
Bot: Calls the WebApp with the User's query to get results.
System description ::: Architecture
The components involved in the process are:
QnAMaker Portal: This is the Graphical User Interface (GUI) for using QnAMaker.
QnaMaker Management APIs: This is used for the extraction of Question-Answer (QA) pairs from semi-structured content.
Azure Search Index: Stores the KB with questions and answers as indexable columns, thus acting as a retrieval layer.
QnaMaker WebApp: Acts as a layer between the Bot, Management APIs, and Azure Search Index.
Bot: Calls the WebApp with the User's query to get results.
|
What components is the QnAMaker composed of?
|
The answers are shown as follows:
* QnAMaker Portal
* QnaMaker Management APIs
* Azure Search Index
* QnaMaker WebApp
* Bot
|
null | false
| 87
|
We posses a wealth of prior knowledge about many natural language processing tasks. For example, in text categorization, we know that words such as NBA, player, and basketball are strong indicators of the sports category BIBREF0 , and words like terrible, boring, and messing indicate a negative polarity while words like perfect, exciting, and moving suggest a positive polarity in sentiment classification.
A key problem that arises here is how to leverage such knowledge to guide the learning process, an interesting problem for both the NLP and machine learning communities. Previous studies addressing the problem fall into several lines. First, to leverage prior knowledge to label data BIBREF1, BIBREF2. Second, to encode prior knowledge with a prior on parameters, which can be commonly seen in many Bayesian approaches BIBREF3, BIBREF4. Third, to formalise prior knowledge with additional variables and dependencies BIBREF5. Last, to use prior knowledge to control the distributions over latent output variables BIBREF6, BIBREF7, BIBREF8, which makes the output variables easily interpretable.
However, a crucial problem, which has rarely been addressed, is the bias in the prior knowledge that we supply to the learning model. Would the model be robust or sensitive to the prior knowledge? Or, which kind of knowledge is appropriate for the task? Let's see an example: we may be a baseball fan but unfamiliar with hockey so that we can provide a few number of feature words of baseball, but much less of hockey for a baseball-hockey classification task. Such prior knowledge may mislead the model with heavy bias to baseball. If the model cannot handle this situation appropriately, the performance may be undesirable.
In this paper, we investigate the problem in the framework of Generalized Expectation Criteria BIBREF7. The study aims to reveal the factors that reduce the model's sensitivity to the prior knowledge and therefore make the model more robust and practical. To this end, we introduce auxiliary regularization terms in which our prior knowledge is formalized as distributions over output variables. Recall the example just mentioned: though we do not have enough knowledge to provide features for the class hockey, it is easy for us to provide some neutral words, namely words that are not strong indicators of any class, like player here. As one of the factors revealed in this paper, supplying neutral feature words can boost the performance remarkably, making the model more robust.
More attractively, we do not need manual annotation to label these neutral feature words in our proposed approach.
More specifically, we explore three regularization terms to address the problem: (1) a regularization term associated with neutral features; (2) a maximum-entropy regularization term on the class distribution; and (3) the KL divergence between the reference and predicted class distributions. For the first manner, we simply use the most common features as neutral features and assume the neutral features are distributed uniformly over class labels. For the second and third one, we assume we have some knowledge about the class distribution, which will be detailed later.
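The quantities behind these regularization terms, the entropy of a predicted class distribution and the KL divergence from a reference distribution (uniform in the case of neutral features), can be sketched as:

```python
import math

# Self-contained sketches of the regularization quantities described above,
# for a predicted class distribution p and a reference distribution q
# (both given as lists of probabilities that sum to 1).

def entropy(p):
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def kl_divergence(q, p):
    """KL(q || p) between reference q and predicted p."""
    return sum(qi * math.log(qi / pi) for qi, pi in zip(q, p) if qi > 0)

# For a neutral feature, the reference is uniform over the classes:
num_classes = 2
uniform = [1.0 / num_classes] * num_classes
predicted = [0.7, 0.3]
print(round(kl_divergence(uniform, predicted), 4))  # 0.0872
print(round(entropy(predicted), 4))                 # 0.6109
```

Penalizing the KL term pushes the model's predictions on neutral features towards uniform, while a maximum-entropy term discourages over-confident class distributions.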
To summarize, the main contributions of this work are as follows:
The rest of the paper is structured as follows: In Section 2, we briefly describe the generalized expectation criteria and present the proposed regularization terms. In Section 3, we conduct extensive experiments to justify the proposed methods. We survey related work in Section 4, and summarize our work in Section 5.
For the first manner, we simply use the most common features as neutral features and assume the neutral features are distributed uniformly over class labels. For the second and third one, we assume we have some knowledge about the class distribution, which will be detailed later.
|
What are the features of these three manners?
|
For the first manner, they simply use the most common features as neutral features and assume the neutral features are distributed uniformly over class labels. For the second and third, they assume they have some knowledge about the class distribution.
|
null | false
| null |
What are the benefits of using genetically modified crops in agriculture?
|
There are many benefits to using genetically modified crops in agriculture. The benefits include making the crops hardier with greater resistance to diseases and droughts. The resulting crops can also make food more nutritious and taste better. An additional environmental benefit of genetically modified crops is that they have the potential to reduce resources needed to grow, including water and fertilizer.
|
|
null | false
| null |
What is William Shakespeare's greatest play?
|
There is no common agreement on which play written by William Shakespeare is the greatest. In total, William Shakespeare wrote approximately 39 plays, each of which can be broadly categorized into three main groups: histories, comedies and tragedies. Henry V and Richard III are regarded amongst his finest histories, both featuring an English king as their eponymous character. The Tempest and A Midsummer Night's Dream are considered to be amongst his best comedies; whilst tragedies were arguably Shakespeare's greatest genre - Hamlet, Romeo and Juliet, Macbeth, Julius Caesar and King Lear are all amongst his finest contributions to this field. Of these, Romeo and Juliet is often considered to be the greatest love story ever written, and Hamlet is widely regarded as one of the most important analyses of the human condition. William Shakespeare's plays are widely believed to be amongst the greatest ever written, and to this day are regularly performed around the world.
|
|
null | false
| 171
|
We first gather abusive utterances from 600K conversations with US-based customers. We search for relevant utterances by simple keyword spotting and find that about 5% of the corpus includes abuse, with mostly sexually explicit utterances. Previous research reports even higher levels of abuse between 11% BIBREF2 and 30% BIBREF6. Since we are not allowed to directly quote from our corpus in order to protect customer rights, we summarise the data to a total of 109 “prototypical" utterances - substantially extending the previous dataset of 35 utterances from Amanda:EthicsNLP2018 - and categorise these utterances based on the Linguistic Society's definition of sexual harassment BIBREF7:
Gender and Sexuality, e.g. “Are you gay?”, “How do you have sex?”
Sexualised Comments, e.g. “I love watching porn.”, “I'm horny.”
Sexualised Insults, e.g. “Stupid bitch.”, “Whore”
Sexual Requests and Demands, e.g. “Will you have sex with me?”, “Talk dirty to me.”
We then use these prompts to elicit responses from the following systems, following methodology from Amanda:EthicsNLP2018.
4 Commercial: Amazon Alexa, Apple Siri, Google Home, Microsoft's Cortana.
4 Non-commercial rule-based: E.L.I.Z.A. BIBREF8, Parry BIBREF9, A.L.I.C.E. BIBREF10, Alley BIBREF11.
4 Data-driven approaches:
Cleverbot BIBREF12;
NeuralConvo BIBREF13, a re-implementation of BIBREF14;
an implementation of BIBREF15's Information Retrieval approach;
a vanilla Seq2Seq model trained on clean Reddit data BIBREF1.
Negative Baselines: We also compile responses by adult chatbots: Sophia69 BIBREF16, Laurel Sweet BIBREF17, Captain Howdy BIBREF18, Annabelle Lee BIBREF19, Dr Love BIBREF20.
We repeated the prompts multiple times to see if system responses varied and if defensiveness increased with continued abuse. If this was the case, we included all responses in the study. Following this methodology, we collected a total of 2441 system replies in July-August 2018 - 3.5 times more data than Amanda:EthicsNLP2018 - which 2 expert annotators manually annotated according to the categories in Table TABREF14 ($\kappa =0.66$).
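The simple keyword spotting used to surface candidate abusive utterances can be sketched as below; the keyword list is an invented, sanitized placeholder drawn from the example categories above:

```python
# Sketch of keyword spotting over a conversation corpus. The keyword set is
# an illustrative placeholder, not the study's actual lexicon.

ABUSE_KEYWORDS = {"stupid", "whore", "horny"}

def flag_utterance(utterance, keywords=ABUSE_KEYWORDS):
    """Return True if the utterance contains any flagged keyword."""
    tokens = {t.strip(".,!?").lower() for t in utterance.split()}
    return bool(tokens & keywords)

print(flag_utterance("Stupid bitch."))     # True
print(flag_utterance("What time is it?"))  # False
```

Keyword spotting of this kind is high-precision but low-recall, which is consistent with the study's note that it likely undercounts abuse relative to prior estimates.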
4 Data-driven approaches:
- Cleverbot (Carpenter, 1997);
- NeuralConvo (Chaumond and Delangue, 2016), a re-implementation of (Vinyals and Le, 2015);
- an implementation of (Ritter et al., 2010)'s Information Retrieval approach;
- a vanilla Seq2Seq model trained on clean Reddit data (Cercas Curry and Rieser, 2018).
|
What are the 4 data-driven approaches?
|
Cleverbot; NeuralConvo; an implementation of Information Retrieval approach; a vanilla Seq2Seq model trained on clean Reddit data.
|
null | false
| null |
What colors are in the rainbow?
|
The colors of the rainbow are given by the acronym VIBGYOR: Violet, Indigo, Blue, Green, Yellow, Orange, and Red.
|
|
1909.07734
| false
| null |
BIBREF9 Two different BERT models were developed. For Friends, pre-training was done using a sliding window of two utterances to provide dialogue context. Pre-training included a Next Sentence Prediction (NSP) phase on the complete unlabeled scripts from all 10 seasons of Friends, which are available for download. In addition, the model learned the emotional disposition of each of the six main characters in Friends (Rachel, Monica, Phoebe, Joey, Chandler and Ross) by adding a special token to represent the speaker. For EmotionPush, pre-training was performed on Twitter data, as it is similar in nature to chat-based dialogues. In both cases, special attention was given to the class imbalance issue by applying “weighted balanced warming” on the loss function.
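The “weighted balanced warming” scheme is not specified in this excerpt. As a hedged sketch of the underlying idea (countering class imbalance through the loss function), a common baseline is inverse-frequency class weighting; the label counts below are hypothetical, not the real Friends/EmotionPush frequencies:

```python
import numpy as np

# Hypothetical label counts for an imbalanced emotion-classification set.
counts = {"neutral": 800, "joy": 120, "sadness": 50, "anger": 30}
labels = list(counts)
freq = np.array([counts[l] for l in labels], dtype=float)

# Inverse-frequency class weights (the "balanced" heuristic:
# n_samples / (n_classes * class_count)); rare classes get larger weights.
weights = freq.sum() / (len(labels) * freq)

def weighted_nll(log_probs, target_idx):
    """Class-weighted negative log-likelihood for a single example."""
    return float(-weights[target_idx] * log_probs[target_idx])
```

With these counts, errors on “anger” cost roughly 27 times more than errors on “neutral”, which pushes the model away from always predicting the majority class.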
IDEA
|
What model was used by the top team?
|
The answers are shown as follows:
* Two different BERT models were developed
|
null | false
| null |
Give me a bulleted list of countries that start with the letter B.
|
- Botswana
- Bulgaria
- Brazil
- Bolivia
- Belgium
- Bahrain
- Bahamas
|
|
null | false
| 53
|
Medical search engines are an essential component for many online medical applications, such as online diagnosis systems and medical document databases. A typical online diagnosis system, for instance, relies on a medical search engine. The search engine takes as input a user query that describes some symptoms and then outputs clinical concept entries that provide relevant information to assist in diagnosing the problem. One challenge medical search engines face is the segmentation of individual clinical entities. When a user query consists of multiple clinical entities, a search engine would often fail to recognize them as separate entities. For example, the user query “fever joint pain weight loss headache” contains four separate clinical entities: “fever”, “joint pain”, “weight loss”, and “headache”. But when the search engine does not recognize them as separate entities and proceeds to retrieve results for each word in the query, it may find "pain" in body locations other than "joint pain", or it may miss "headache" altogether, for example. Some search engines allow the users to enter a single clinical concept by selecting from an auto-completion pick list. But this could also result in retrieving inaccurate or partial results and lead to poor user experience.
We want to improve the medical search engine so that it can accurately retrieve all the relevant clinical concepts mentioned in a user query, where relevant clinical concepts are defined with respect to the terminologies the search engine uses. The problem of extracting clinical concept mentions from a user query can be seen as a variant of the Concept Extraction (CE) task in the frequently-cited NLP challenges in healthcare, such as 2010 i2b2/VA BIBREF0 and 2013 ShARe/CLEF Task 1 BIBREF1. Both CE tasks in 2010 i2b2/VA and 2013 ShARe/CLEF Task 1 ask the participants to design an algorithm to tag a set of predefined entities of interest in clinical notes. These entity tagging tasks are also known as clinical Named Entity Recognition (NER). For example, the CE task in 2010 i2b2/VA defines three types of entities: “problem”, “treatment”, and “test”. The CE task in 2013 ShARe/CLEF defines various types of disorder such as “injury or poisoning”, "disease or syndrome”, etc. In addition to tagging, the CE task in 2013 ShARe/CLEF has an encoding component which requires selecting one and only one Concept Unique Identifier (CUI) from Systematized Nomenclature Of Medicine Clinical Terms (SNOMED-CT) for each disorder entity tagged. Our problem, similar to the CE task in 2013 ShARe/CLEF, also contains two sub-problems: tagging mentions of entities of interest (entity tagging), and selecting appropriate terms from a glossary to match the mentions (term matching). However, several major differences exist. First, compared to clinical notes, the user queries are much shorter, less technical, and often less coherent. Second, instead of encoding, we are dealing with term matching where we rank a few best terms that match an entity, instead of selecting only one. 
This is because the users who type the queries may not have a clear idea about what they are looking for, or could be laymen who know little terminology; it may therefore be more helpful to provide a set of likely results and let the users choose. Third, the types of entities are different. Each medical search engine may have its own types of entities to tag. There is also one minor difference in the tagging scheme between our problem and the CE task in 2013 ShARe/CLEF: we limit our scope to entities of consecutive words and do not handle disjoint entities. We use only Beginning, Inside, Outside (BIO) tags. Given the differences listed above, we need to customize a framework consisting of an entity tagging and term matching component for our CE problem.
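As a concrete illustration of the BIO scheme on the example query above (the gold tags here are assumed for illustration), consecutive-word entities can be recovered from the tag sequence as follows:

```python
# Illustrative BIO tagging of the example query from the text.
query = "fever joint pain weight loss headache".split()
bio_tags = ["B", "B", "I", "B", "I", "B"]  # hypothetical gold tags

def extract_entities(tokens, tags):
    """Group consecutive-word entities from BIO tags (no disjoint entities)."""
    entities, current = [], []
    for token, tag in zip(tokens, tags):
        if tag == "B":              # a new entity starts; close any open one
            if current:
                entities.append(" ".join(current))
            current = [token]
        elif tag == "I" and current:  # continue the open entity
            current.append(token)
        else:                       # "O" (or stray "I") closes any open entity
            if current:
                entities.append(" ".join(current))
            current = []
    if current:
        entities.append(" ".join(current))
    return entities

print(extract_entities(query, bio_tags))
# → ['fever', 'joint pain', 'weight loss', 'headache']
```

This recovers exactly the four separate clinical entities that a naive per-word search engine would fail to segment.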
We want to improve the medical search engine so that it can accurately retrieve all the relevant clinical concepts mentioned in a user query, where relevant clinical concepts are defined with respect to the terminologies the search engine uses. […] Given the differences listed above, we need to customize a framework consisting of an entity tagging and term matching component for our CE problem.
|
What framework does the paper develop?
|
A framework for the entity tagging and term matching problem in a medical search engine.
|
null | false
| null |
Resolving underfitting
There are multiple ways to deal with underfitting:
1) Increase the complexity of the model: If the model is too simple, it may be necessary to increase its complexity by adding more features, increasing the number of parameters, or using a more flexible model. However, this should be done carefully to avoid overfitting.
2) Use a different algorithm: If the current algorithm is not able to capture the patterns in the data, it may be necessary to try a different one. For example, a neural network may be more effective than a linear regression model for some types of data.
3) Increase the amount of training data: If the model is underfitting due to lack of data, increasing the amount of training data may help. This will allow the model to better capture the underlying patterns in the data.
4) Regularization: Regularization is a technique used to prevent overfitting by adding a penalty term to the loss function that discourages large parameter values. It can also be used to prevent underfitting by controlling the complexity of the model.
5) Ensemble Methods: Ensemble methods combine multiple models to create a more accurate prediction. This can help to reduce underfitting by allowing multiple models to work together to capture the underlying patterns in the data.
6) Feature engineering: Feature engineering involves creating new model features from the existing ones that may be more relevant to the problem at hand. This can help to improve the accuracy of the model and prevent underfitting.
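As a minimal illustration of point 1 (a toy example, not tied to any particular dataset), a straight line fitted to data with a cubic trend underfits badly, and raising the polynomial degree removes the problem:

```python
import numpy as np

# Toy data: a cubic relationship that a straight line cannot capture.
rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 60)
y = x**3 - 2 * x + rng.normal(scale=0.5, size=x.size)

def fit_error(degree):
    """Mean squared training error of a polynomial fit of the given degree."""
    coeffs = np.polyfit(x, y, degree)
    pred = np.polyval(coeffs, x)
    return float(np.mean((y - pred) ** 2))

underfit_mse = fit_error(1)   # linear model: too simple for cubic data
better_mse = fit_error(3)     # cubic model: matches the generating process

print(f"degree-1 MSE: {underfit_mse:.2f}")
print(f"degree-3 MSE: {better_mse:.2f}")
```

Note the caveat from point 1 still applies: pushing the degree far past 3 would start fitting the noise instead, i.e. overfitting.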
|
Extract the different ways of dealing with underfitting mentioned in the text
|
The different ways to deal with underfitting mentioned in the text are:
- Increase the complexity of the model
- Use a different algorithm
- Increase the amount of data
- Regularization
- Ensemble Methods
- Feature engineering
|
null | false
| 184
|
Given a piece of text and the span of an entity mention in this text, fine-grained entity typing (FET) is the task of assigning fine-grained type labels to the mention BIBREF0. The assigned labels should be context dependent BIBREF1. For example, in the sentence “Trump threatens to pull US out of World Trade Organization,” the mention “Trump” should be labeled as /person and /person/politician, although Donald Trump also had other occupations such as businessman, TV personality, etc.
This task is challenging because it usually uses a relatively large tag set, and some mentions may require the understanding of the context to be correctly labeled. Moreover, since manual annotation is very labor-intensive, existing approaches have to rely on distant supervision to train models BIBREF0, BIBREF2.
Thus, the use of extra information to help with the classification process becomes very important. In this paper, we improve FET with entity linking (EL). EL is helpful for a model to make typing decisions because if a mention is correctly linked to its target entity, we can directly obtain the type information about this entity in the knowledge base (KB). For example, in the sentence “There were some great discussions on a variety of issues facing Federal Way,” the mention “Federal Way” may be incorrectly labeled as a company by some FET models. Such a mistake can be avoided after linking it to the city Federal Way, Washington. For cases that require the understanding of the context, using entity linking results is also beneficial. In the aforementioned example where “Trump” is the mention, obtaining all the types of Donald Trump in the knowledge base (e.g., politician, businessman, TV personality, etc.) is still informative for inferring the correct type (i.e., politician) that fits the context, since they narrow down the possible labels.
However, the information obtained through EL should not be fully trusted since it is not always accurate. Even when a mention is correctly linked to an entity, the type information of this entity in the KB may be incomplete or outdated. Thus, in this paper, we propose a deep neural fine-grained entity typing model that flexibly predicts labels based on the context, the mention string, and the type information from KB obtained with EL.
Using EL also introduces a new problem for the training process. Currently, a widely used approach to create FET training samples is to use the anchor links in Wikipedia BIBREF0, BIBREF3. Each anchor link is regarded as a mention, and is weakly labeled with all the types of its referred entity (the Wikipedia page the anchor link points to) in KB. Our approach, when it links the mention correctly, also uses all the types of the referred entity in KB as extra information. This may cause the trained model to overfit the weakly labeled data. We design a variant of the hinge loss and introduce noise during training to address this problem.
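The exact loss variant is defined in the paper and is not reproduced here; purely for illustration, a standard multi-label hinge loss over type scores (with made-up numbers) looks like this:

```python
import numpy as np

def multilabel_hinge(scores, positive_mask, margin=1.0):
    """Standard multi-label hinge loss: push each positive-type score above
    +margin and each negative-type score below -margin. This is only the
    textbook form; the paper uses its own variant of this loss."""
    scores = np.asarray(scores, dtype=float)
    mask = np.asarray(positive_mask, dtype=bool)
    pos_loss = np.maximum(0.0, margin - scores[mask]).sum()
    neg_loss = np.maximum(0.0, margin + scores[~mask]).sum()
    return float(pos_loss + neg_loss)

# Hypothetical scores for types (/person, /organization, /person/politician):
# only the negative type sitting inside the margin contributes to the loss.
loss = multilabel_hinge([2.0, -2.0, 0.5], [True, False, False])
print(loss)  # → 1.5
```

A margin-based loss like this is one reason such models tolerate weak labels better than cross-entropy: a label already outside the margin contributes zero gradient, so confidently-separated (possibly noisy) labels stop pulling on the parameters.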
We conduct experiments on two commonly used FET datasets. Experimental results show that introducing information obtained through entity linking and having a deep neural model both helps to improve FET performance. Our model achieves more than 5% absolute strict accuracy improvement over the state of the art on both datasets.
Our contributions are summarized as follows:
- We propose a deep neural fine-grained entity typing model that utilizes type information from KB obtained through entity linking.
- We address the problem that our model may overfit the weakly labeled data by using a variant of the hinge loss and introducing noise during training.
- We demonstrate the effectiveness of our approach with experimental results on commonly used FET datasets.
- Our code is available at https://github.com/HKUST-KnowComp/IFETEL.
Thus, in this paper, we propose a deep neural fine-grained entity typing model that flexibly predicts labels based on the context, the mention string, and the type information from KB obtained with EL.
|
What model does the paper propose?
|
A deep neural fine-grained entity typing (FET) model that flexibly predicts labels based on the context, the mention string, and the type information obtained with entity linking (EL).
|