| paper_id (string, length 10, nullable) | yes_no (bool, 2 classes) | paper_index (int64, 0-519, nullable) | evidence (string, length 0-37.7k) | question (string, length 4-11.7k) | answer (string, length 1-26k) |
|---|---|---|---|---|---|
null | false
| null |
ABB Ltd. is a Swedish-Swiss multinational corporation headquartered in Zürich, Switzerland. The company was formed in 1988 when Sweden's Allmänna Svenska Elektriska Aktiebolaget (ASEA) and Switzerland's Brown, Boveri & Cie merged to create ASEA Brown Boveri, later simplified to the initials ABB. Both companies were established in the late 1800s and were major electrical equipment manufacturers, a business that ABB remains active in today. The company has also since expanded to robotics and automation technology.
|
Who is the company ABB
|
ABB Ltd. is a Swedish-Swiss multinational corporation headquartered in Zürich, Switzerland. The company was formed in 1988 when Sweden's Allmänna Svenska Elektriska Aktiebolaget (ASEA) and Switzerland's Brown, Boveri & Cie merged to create ASEA Brown Boveri, later simplified to the initials ABB.
Both companies were established in the late 1800s and were major electrical equipment manufacturers, a business that ABB remains active in today. The company has also since expanded to robotics and automation technology.
|
null | false
| null |
What is Spatial Analysis?
|
Spatial analysis is any of the formal techniques which study entities using their topological, geometric, or geographic properties. Spatial analysis includes a variety of techniques using different analytic approaches, especially spatial statistics. It may be applied in fields as diverse as astronomy, with its studies of the placement of galaxies in the cosmos, or chip fabrication engineering, with its use of "place and route" algorithms to build complex wiring structures. In a more restricted sense, spatial analysis is geospatial analysis, the technique applied to structures at the human scale, most notably in the analysis of geographic data. It may also be applied to genomics, as in transcriptomics data.
Complex issues arise in spatial analysis, many of which are neither clearly defined nor completely resolved, but form the basis for current research. The most fundamental of these is the problem of defining the spatial location of the entities being studied. Classification of the techniques of spatial analysis is difficult because of the large number of different fields of research involved, the different fundamental approaches which can be chosen, and the many forms the data can take.
|
|
null | false
| null |
The Great Barrier Reef is the world's largest coral reef system, composed of over 2,900 individual reefs and 900 islands stretching for over 2,300 kilometres (1,400 mi) over an area of approximately 344,400 square kilometres (133,000 sq mi). The reef is located in the Coral Sea, off the coast of Queensland, Australia, separated from the coast by a channel 100 miles wide in places and over 200 feet deep. The Great Barrier Reef can be seen from outer space and is the world's biggest single structure made by living organisms. This reef structure is composed of and built by billions of tiny organisms, known as coral polyps. It supports a wide diversity of life and was selected as a World Heritage Site in 1981. CNN labelled it one of the Seven Natural Wonders of the World in 1997. Australian World Heritage places included it in its list in 2007. The Queensland National Trust named it a state icon of Queensland in 2006.
A large part of the reef is protected by the Great Barrier Reef Marine Park, which helps to limit the impact of human use, such as fishing and tourism. Other environmental pressures on the reef and its ecosystem include runoff of manmade pollutants, climate change accompanied by mass coral bleaching, dumping of dredging sludge and cyclic population outbreaks of the crown-of-thorns starfish. According to a study published in October 2012 by the Proceedings of the National Academy of Sciences, the reef has lost more than half its coral cover since 1985, a finding reaffirmed by a 2020 study which found over half of the reef's coral cover to have been lost between 1995 and 2017, with the effects of a widespread 2020 bleaching event not yet quantified.
The Great Barrier Reef has long been known to and used by the Aboriginal Australian and Torres Strait Islander peoples, and is an important part of local groups' cultures and spirituality. The reef is a very popular destination for tourists, especially in the Whitsunday Islands and Cairns regions. Tourism is an important economic activity for the region, generating over AUD$3 billion per year. In November 2014, Google launched Google Underwater Street View in 3D of the Great Barrier Reef.
|
What is the largest coral reef system in the world?
|
The world's largest coral reef system is the Great Barrier Reef, located in the Coral Sea off the coast of Queensland, Australia. It consists of over 900 islands and 2,900 individual reefs stretching for over 2,300 kilometres (1,400 mi) over an area of approximately 344,400 square kilometres (133,000 sq mi). The Great Barrier Reef can be seen from outer space and is the world's biggest single structure made by living organisms. This reef structure is composed of and built by billions of tiny organisms, known as coral polyps. In 1981, the Great Barrier Reef was selected as a World Heritage Site.
|
null | false
| null |
Coming in at first place is Sweden. Sweden ranks highly across the studies, coming in 5th place on the EPI, 9th for the GFI, and 4th for the lowest annual average PM2.5 concentration at 6.6 μg/m³.
However, like most European countries Sweden’s main downfall is its CO2 emissions per capita, ranking only 28th place with 4.18 tonnes (4th in Europe).
Some of Sweden’s highlights are that it has a perfect score in the EPI for protecting marine ecosystems and it ranks second in the GFI’s CO2 emissions growth in transport sector indicator.
|
What is the greenest country in the world?
|
Sweden
|
null | false
| null |
Kostas Georgakis (Greek: Κώστας Γεωργάκης) (23 August 1948 – 19 September 1970) was a Greek student of geology, who in the early hours of 19 September 1970, set himself ablaze in Matteotti square in Genoa in a fatal protest against the dictatorial regime of Georgios Papadopoulos.
Early life
Georgakis grew up in Corfu in a family of five. His father was a self-employed tailor of modest means. Both his father and grandfather distinguished themselves in the major wars that Greece fought in the 20th century. He attended the second lyceum in Corfu where he excelled in his studies. In August 1967, a few months after the 21 April coup in Greece, Georgakis went to Italy to study as a geologist in Genoa. He received 5,000 drachmas per month from his father and this, according to friends' testimony, made him feel guilty for the financial burden his family endured so that he could attend a university. In Italy he met Rosanna, an Italian girl of the same age and they got engaged. In 1968 Georgakis became a member of the Center Union party of Georgios Papandreou.
|
Given a reference text about Kostas Georgakis, when did he go to Italy and what did he do there?
|
Kostas Georgakis went to Genoa, Italy in 1967 to study geology.
|
null | false
| 45
|
Table TABREF29 shows the results on four test sets as well as the average performance. Our method significantly outperforms the strong `many-to-many+pretrain' baseline by 3.6 and 2.2 BLEU scores respectively, indicating that the proposed method is very effective and substantially improves the translation quality. Besides, both pre-training and multi-task learning can improve translation quality, and the pre-training settings (2nd-4th rows) are more effective compared to multi-task settings (5th-8th rows). We observe a performance degradation in the `triangle+pretrain' baseline. Compared to our method, where the decoder receives higher-level syntactic and semantic linguistic knowledge extracted from the text encoder, their ASR decoder can only provide lower word-level linguistic information. Besides, since their model lacks a text encoder and the architecture of the ST decoder is different from the MT decoder, their model cannot utilize the large-scale MT data in all the training stages. Interestingly, we find that the char-level models outperform the subword-level models in all settings, especially in the vanilla baseline. A similar phenomenon is observed by BIBREF6. A possible explanation is that learning the alignments between speech frames and subword units in another language is notoriously difficult. Our method can bring more gains in the subword setting since our model is good at learning the text-to-text alignment and the subword-level alignment is more helpful to the translation quality.
Table 4 shows the comparison between our best model and the cascaded systems, which combine the ASR model and the MT model. In addition to a simple combination system, we also re-segment the ASR outputs before feeding them to the MT system, denoted as ‘cascaded+re-seg’. Specifically, we train a seq2seq model (Bahdanau, Cho, and Bengio 2015) on the MT dataset, where the source side is a sentence without punctuation and the target side is a natural sentence. After that, we use the seq2seq model to add sentence boundaries and punctuation to the ASR outputs. It can be seen that our end-to-end model outperforms the simple cascaded model by over 2 BLEU scores, and achieves a comparable performance with the ‘cascaded+re-seg’ system.
|
Is their best model better than the cascaded systems?
|
Yes.
|
null | false
| null |
What is a metaverse?
|
In science fiction, the "metaverse" is a hypothetical iteration of the Internet as a single, universal, and immersive virtual world that is facilitated by the use of virtual reality (VR) and augmented reality (AR) headsets. In colloquial usage, a "metaverse" is a network of 3D virtual worlds focused on social and economic connection.
The term "metaverse" originated in the 1992 science fiction novel Snow Crash as a portmanteau of "meta" and "universe". Metaverse development is often linked to advancing virtual reality technology due to the increasing demands for immersion. Recent interest in metaverse development is influenced by Web3, a concept for a decentralized iteration of the internet. Web3 and metaverse have been used as buzzwords to exaggerate the development progress of various related technologies and projects for public relations purposes. Information privacy, user addiction, and user safety are concerns within the metaverse, stemming from challenges facing the social media and video game industries as a whole.
|
|
null | false
| null |
What is the best way to lose weight?
|
The best way to lose weight is to be in a calorie deficit, meaning that you consume fewer calories than you expend. An easy way to confirm this deficit is to track your calorie intake with a calorie-tracking application like MyFitnessPal. This helps you better understand the macronutrient composition (protein, carbohydrates, fat) of your food, and which foods are more calorically dense than others. Protein has 4 calories per gram, carbohydrates have 4 calories per gram, and fat has 9 calories per gram. Each food that you consume is one or a combination of these three macronutrients. It is possible to expend calories to create a larger deficit through exercise. There are many types of exercise, but most can be categorized as either cardiovascular training or weight training. Both types of exercise are important: cardiovascular exercises, such as walking, running, and biking, train your heart and burn calories during exercise. Weight-based training involves challenging your muscles with loads in order to make them stronger so they can handle larger loads and more volume. Weight-based training does not burn as many calories during exercise; however, it taxes your muscles so that they need to repair, and thus burns more calories throughout the day after your training session. Weight-based training is important for strong muscles, bones, and tendons. The best way to lose weight involves a combination of a calorie-controlled diet, cardiovascular exercise, and weight-based training sustained over a duration of time until you meet your weight goal. Once you meet your goal, it is important to sustain these good habits and not revert back to the habits and behaviors that previously contributed to your weight gain.
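As a small illustration, the 4/4/9 macronutrient arithmetic above maps directly to a tiny function. This is only a sketch; the function name and the example meal are made up.

```python
def calories(protein_g: float, carbs_g: float, fat_g: float) -> float:
    # 4 kcal/g protein, 4 kcal/g carbohydrate, 9 kcal/g fat (from the text)
    return 4 * protein_g + 4 * carbs_g + 9 * fat_g

# Example: a hypothetical meal with 30 g protein, 40 g carbs, and 15 g fat
print(calories(30, 40, 15))  # 120 + 160 + 135 = 415 kcal
```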
|
|
null | false
| null |
Do we live in a simulation? Argue both sides
|
We do not know if we live in a simulation. Both sides of the argument have their merits and shortcomings. On the one hand, it is possible that we are living in a simulation, as it could explain certain aspects of the universe that are not easily explained otherwise. On the other hand, it could be that we are not living in a simulation, as there is no evidence to support this claim. Ultimately, it is up to us to decide what we believe.
|
|
null | false
| 187
|
Knowledge graphs (KGs) containing relationship triples (subject, relation, object), denoted as (s, r, o), are the useful resources for many NLP and especially information retrieval applications such as semantic search and question answering BIBREF0 . However, large knowledge graphs, even containing billions of triples, are still incomplete, i.e., missing a lot of valid triples BIBREF1 . Therefore, much research efforts have focused on the knowledge graph completion task which aims to predict missing triples in KGs, i.e., predicting whether a triple not in KGs is likely to be valid or not BIBREF2 , BIBREF3 , BIBREF4 . To this end, many embedding models have been proposed to learn vector representations for entities (i.e., subject/head entity and object/tail entity) and relations in KGs, and obtained state-of-the-art results as summarized by BIBREF5 and BIBREF6 . These embedding models score triples (s, r, o), such that valid triples have higher plausibility scores than invalid ones BIBREF2 , BIBREF3 , BIBREF4 . For example, in the context of KGs, the score for (Melbourne, cityOf, Australia) is higher than the score for (Melbourne, cityOf, United Kingdom).
Triple modeling is applied not only to the KG completion, but also for other tasks which can be formulated as a triple-based prediction problem. An example is in search personalization, one would aim to tailor search results to each specific user based on the user's personal interests and preferences BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 . Here the triples can be formulated as (submitted query, user profile, returned document) and used to re-rank documents returned to a user given an input query, by employing an existing KG embedding method such as TransE BIBREF3 , as proposed by BIBREF12 . Previous studies have shown the effectiveness of modeling triple for either KG completion or search personalization. However, there has been no single study investigating the performance on both tasks.
Conventional embedding models, such as TransE BIBREF3 , DISTMULT BIBREF13 and ComplEx BIBREF14 , use addition, subtraction or simple multiplication operators, and thus only capture the linear relationships between entities. Recent research has raised interest in applying deep neural networks to triple-based prediction problems. For example, BIBREF15 proposed ConvKB—a convolutional neural network (CNN)-based model for KG completion which achieved state-of-the-art results. Most KG embedding models are constructed to model the entries at the same dimension of the given triple, where presumably each dimension captures some relation-specific attribute of entities. To the best of our knowledge, however, none of the existing models has a “deep” architecture for modeling the entries in a triple at the same dimension.
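As a hedged illustration of how such conventional models score a triple, here is a minimal numpy sketch of TransE- and DISTMULT-style scoring functions. The embedding dimension and the random toy embeddings are made up for illustration; in a real system the embeddings are learned.

```python
import numpy as np

rng = np.random.default_rng(0)
k = 50
s, r, o = rng.normal(size=(3, k))  # toy embeddings for subject, relation, object

def transe_score(s, r, o):
    # TransE: plausible triples satisfy s + r ≈ o, so score by negative distance
    return -np.linalg.norm(s + r - o)

def distmult_score(s, r, o):
    # DISTMULT: a simple trilinear (element-wise) product
    return float(np.sum(s * r * o))

print(transe_score(s, r, o), distmult_score(s, r, o))
```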
BIBREF16 introduced capsule networks (CapsNet) that employ capsules (i.e., each capsule is a group of neurons) to capture entities in images and then uses a routing process to specify connections from capsules in a layer to those in the next layer. Hence CapsNet could encode the intrinsic spatial relationship between a part and a whole constituting viewpoint invariant knowledge that automatically generalizes to novel viewpoints. Each capsule accounts for capturing variations of an object or object part in the image, which can be efficiently visualized. Our high-level hypothesis is that embedding entries at the same dimension of the triple also have these variations, although it is not straightforward to be visually examined.
To that end, we introduce CapsE to explore a novel application of CapsNet on triple-based data for two problems: KG completion and search personalization. Different from the traditional modeling design of CapsNet where capsules are constructed by splitting feature maps, we use capsules to model the entries at the same dimension in the entity and relation embeddings. In our CapsE, INLINEFORM0 , INLINEFORM1 and INLINEFORM2 are unique INLINEFORM3 -dimensional embeddings of INLINEFORM4 , INLINEFORM5 and INLINEFORM6 , respectively. The embedding triple [ INLINEFORM7 , INLINEFORM8 , INLINEFORM9 ] of (s, r, o) is fed to the convolution layer where multiple filters of the same INLINEFORM10 shape are repeatedly operated over every row of the matrix to produce INLINEFORM11 -dimensional feature maps. Entries at the same dimension from all feature maps are then encapsulated into a capsule. Thus, each capsule can encode many characteristics in the embedding triple to represent the entries at the corresponding dimension. These capsules are then routed to another capsule which outputs a continuous vector whose length is used as a score for the triple. Finally, this score is used to predict whether the triple (s, r, o) is valid or not.
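The following is a rough, non-authoritative sketch of the scoring flow just described, in PyTorch. It is not the authors' implementation: the dynamic routing step is replaced by a simple linear map, and all sizes (k, number of filters, output capsule dimension) are invented for illustration.

```python
import torch
import torch.nn as nn

class CapsEScore(nn.Module):
    """Sketch of the CapsE scoring idea; routing is simplified to a linear map."""
    def __init__(self, k=50, n_filters=32, out_dim=10):
        super().__init__()
        # 1x3 filters slide over every row of the [k, 3] triple matrix
        self.conv = nn.Conv2d(1, n_filters, kernel_size=(1, 3))
        self.route = nn.Linear(n_filters, out_dim)  # stand-in for dynamic routing

    def forward(self, s, r, o):                        # each: [batch, k]
        x = torch.stack([s, r, o], dim=2).unsqueeze(1)  # [batch, 1, k, 3]
        fmaps = torch.relu(self.conv(x)).squeeze(3)     # [batch, n_filters, k]
        caps = fmaps.transpose(1, 2)                    # k capsules of size n_filters
        out = self.route(caps).mean(dim=1)              # pooled output capsule
        return out.norm(dim=1)                          # vector length as triple score

scorer = CapsEScore()
s, r, o = torch.randn(3, 4, 50).unbind(0)  # batch of 4 toy triples
print(scorer(s, r, o).shape)               # torch.Size([4])
```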
In summary, our main contributions from this paper are as follows:
INLINEFORM0 We propose an embedding model CapsE using the capsule network BIBREF16 for modeling relationship triples. To the best of our knowledge, our work is the first to explore the capsule network for knowledge graph completion and search personalization.
INLINEFORM0 We evaluate our CapsE for knowledge graph completion on two benchmark datasets WN18RR BIBREF17 and FB15k-237 BIBREF18 . CapsE obtains the best mean rank on WN18RR and the highest mean reciprocal rank and highest Hits@10 on FB15k-237.
INLINEFORM0 We restate the prospective strategy of expanding the triple embedding models to improve the ranking quality of the search personalization systems. We adapt our model to search personalization and evaluate on SEARCH17 BIBREF12 – a dataset of the web search query logs. Experimental results show that our CapsE achieves the new state-of-the-art results with significant improvements over strong baselines.
To that end, we introduce CapsE to explore a novel application of CapsNet on triple-based data for two problems: KG completion and search personalization.
|
What problems did the novel application of capsule networks (CapsNet) solve?
|
KG completion and search personalization.
|
2004.03685
| false
| null |
Guidelines for Evaluating Faithfulness
We propose the following guidelines for evaluating the faithfulness of explanations. These guidelines address common pitfalls and sub-optimal practices we observed in the literature.
Guidelines for Evaluating Faithfulness ::: Be explicit in what you evaluate.
Conflating plausibility and faithfulness is harmful. You should be explicit on which one of them you evaluate, and use suitable methodologies for each one. Of course, the same applies when designing interpretation techniques—be clear about which properties are being prioritized.
Guidelines for Evaluating Faithfulness ::: Faithfulness evaluation should not involve human-judgement on the quality of interpretation.
We note that: (1) humans cannot judge if an interpretation is faithful or not: if they understood the model, interpretation would be unnecessary; (2) for similar reasons, we cannot obtain supervision for this problem, either. Therefore, human judgement should not be involved in evaluation for faithfulness, as human judgement measures plausibility.
Guidelines for Evaluating Faithfulness ::: Faithfulness evaluation should not involve human-provided gold labels.
We should be able to interpret incorrect model predictions, just the same as correct ones. Evaluation methods that rely on gold labels are influenced by human priors on what the model should do, and again push the evaluation in the direction of plausibility.
Guidelines for Evaluating Faithfulness ::: Do not trust “inherent interpretability” claims.
Inherent interpretability is a claim until proven otherwise. Explanations provided by “inherently interpretable” models must be held to the same standards as post-hoc interpretation methods, and be evaluated for faithfulness using the same set of evaluation techniques.
Guidelines for Evaluating Faithfulness ::: Faithfulness evaluation of IUI systems should not rely on user performance.
End-task user performance in HCI settings is merely indicative of correlation between plausibility and model performance, however small this correlation is. While important to evaluate the utility of the interpretations for some use-cases, it is unrelated to faithfulness.
|
Which are key points in guidelines for faithfulness evaluation?
|
The answers are shown as follows:
* Be explicit in what you evaluate.
* Faithfulness evaluation should not involve human-judgement on the quality of interpretation.
* Faithfulness evaluation should not involve human-provided gold labels.
* Do not trust “inherent interpretability” claims.
* Faithfulness evaluation of IUI systems should not rely on user performance.
|
null | false
| null |
Write a paragraph about AI governance.
|
The AI arms race is heating up, and breakthroughs are happening at an accelerating pace.
The release of ChatGPT by OpenAI represents a profound leap forward in how humans interface with machines, showcasing the startling progress in large language models. Meanwhile, generative AI capabilities such as DALL-E, Stable Diffusion, and Midjourney are able to generate highly realistic and detailed images from text descriptions, demonstrating a level of creativity and imagination that was once thought to be exclusively human.
Humans seem fundamentally wired to continuously advance technology and improve our knowledge and capabilities. Also, the human brain tends to think linearly, causing us to underestimate the exponential progress of technology. Companies and nations are incentivized by market forces and geopolitical game theory to pursue better intelligence through the advancement of AI.
The Future of Life Institute recently published Pause Giant AI Experiments: An Open Letter. The letter — with notable signatories including Elon Musk, Steve Wozniak and Andrew Yang — caused a stir, calling for a 6 month pause on advanced AI development:
“Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”
Much of the media and public discourse in response to this letter has focused on who signed it and pushing back on the notion that humanity faces an imminent existential threat of artificial superintelligence. Dystopian claims of runaway artificial intelligence seem hyperbolic to many people, and calling for a 6 month moratorium is not realistic. Good luck convincing China to “pause” their efforts in the AI arms race.
But are there no boundaries? Should we proceed with no guidelines?
For example …
Are we comfortable outsourcing decisions to black box AI systems that lack transparency and explainability, making it impossible for humans to understand the reasoning behind decisions?
Should we be worried about the development of AI-powered autonomous weapons that make decisions about the use of lethal force without human input?
Should we be worried about the potential for malicious actors to use AI for nefarious purposes, such as sophisticated propaganda campaigns?
Are our current laws, regulations and political systems equipped to handle the rapid influx of new AI alignment questions that society will grapple with in the very near future?
As AI becomes more advanced, it may become difficult to understand, which could lead to unintended outcomes. AI systems can behave in ways that are unforeseen and difficult to control. The AI alignment problem is a societal challenge that requires collaboration between researchers, engineers, entrepreneurs, policymakers, and the public. It will also require international cooperation between governments and the private sector. This is not just a technical challenge, but also a philosophical and ethical one.
The open letter mentioned above goes on to recommend:
“AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.”
This is certainly a worthy goal, and it can be achieved by doing AI in the open. What we currently lack is a framework. Society needs a set of procedures and protocols to make the recommendation from The Future of Life Institute actionable.
Jointly, we must consider and debate the pros and cons of many ideas, including but not limited to:
Mandatory disclosure of model details, including training datasets, evaluation methodologies, and known biases
Development of a framework that establishes model monitoring and audit requirements for advanced AI systems
Implementation of laws that impose liability for AI-caused harm
Establishment of a regulatory authority for oversight and tracking of highly capable AI systems
The first step in achieving a productive framework for safe AI development is an open dialogue among the many stakeholders involved, which includes everyone. We must rise above the hyper-politicized discourse that our dishonest and broken media often forces upon us. This topic is too important and the ramifications are too profound. Join me in advocating for an intelligent and respectful conversation on AI — one that solicits input and open debate from a diverse set of voices to help ensure a path forward that is in our collective best interest.
|
|
null | false
| null |
As per WHO, Marburg Virus Disease (MVD) is a severe, often fatal haemorrhagic fever. What was it earlier known as?
|
Marburg Haemorrhagic Fever
|
|
null | false
| null |
Categorize each of the following as being beneficial or not beneficial to making a home protected from burglars: alarm system, open doors, locks, cameras, open windows
|
Alarm System: Beneficial
Open Doors: Not Beneficial
Locks: Beneficial
Cameras: Beneficial
Open Windows: Not Beneficial
|
|
null | false
| null |
The primary temple in the city, it was dedicated to the cult of the Capitoline Triad. It was built in 73 AD and consists of three cellae that have preserved much of the original polychrome marble floors, while their interior walls are now a lapidarium displaying ancient Roman epigraphs collected in the 19th century. In front of the cellae, is a fragmentary portico, composed of Corinthian columns that support a pediment containing a dedication to the Emperor Vespasian. Almost entirely buried by a landslide of the Cidneo Hill, it was rediscovered in 1823 through various archaeological campaigns. During excavation in 1826, a splendid bronze statue of a winged Victory was found inside it, likely hidden in late antiquity to preserve it from pillage. After restoration completed in 2013, the site reopened as a new archaeological park.
|
From the given text, please give me a short bulleted list of the key characteristics of the Capitolium of Brixia in Brescia (Italy)
|
Primary temple of the city, built in 73 AD
Dedicated to the cult of the Capitoline Triad
Three cellae preserve the original polychrome marble floors
Interior walls form a lapidarium displaying ancient Roman epigraphs collected in the 19th century
Almost fully buried by a landslide, and rediscovered in 1823.
Restoration completed in 2013.
|
null | false
| null |
What are the differences between the New York Giants and New York Jets?
|
The New York Giants and New York Jets are both professional football teams in the NFL who play in the New York metropolitan area. Both teams play in MetLife Stadium in East Rutherford, NJ, which replaced the old Giants Stadium. Their fan bases live primarily in New Jersey and New York. The Jets' primary color is green, while the Giants' is blue.
|
|
null | false
| null |
What is Occupational Therapy?
|
Occupational therapy helps you become independent in the daily tasks that are important to your quality of life. Occupational therapists strive to help their patients improve functional independence through various activities related to strengthening, balance, and endurance.
|
|
null | false
| null |
Miles Davis, Rihanna, Elton John, John Coltrane, Taylor Swift, Amy Winehouse, Elvis, Billie Holiday. These are all musicians - list which of these are jazz musicians, and which are not.
|
Miles Davis, John Coltrane, Amy Winehouse and Billie Holiday are considered to be jazz musicians. Rihanna, Taylor Swift, Elvis and Elton John are not jazz musicians.
|
|
null | false
| null |
An aurora (pl: auroras or aurorae), also commonly known as the polar lights, is a natural light display in Earth's sky, predominantly seen in high-latitude regions (around the Arctic and Antarctic). Auroras display dynamic patterns of brilliant lights that appear as curtains, rays, spirals, or dynamic flickers covering the entire sky.
Auroras are the result of disturbances in the magnetosphere caused by the solar wind. Major disturbances result from enhancements in the speed of the solar wind from coronal holes and coronal mass ejections. These disturbances alter the trajectories of charged particles in the magnetospheric plasma. These particles, mainly electrons and protons, precipitate into the upper atmosphere (thermosphere/exosphere). The resulting ionization and excitation of atmospheric constituents emit light of varying colour and complexity. The form of the aurora, occurring within bands around both polar regions, is also dependent on the amount of acceleration imparted to the precipitating particles.
In northern latitudes, the effect is known as the aurora borealis or the northern lights. The former term was coined by Galileo in 1619, from the Roman goddess of the dawn and the Greek name for the north wind. The southern counterpart, the aurora australis or the southern lights, has features almost identical to the aurora borealis and changes simultaneously with changes in the northern auroral zone. The aurora australis is visible from high southern latitudes in Antarctica, Chile, Argentina, South Africa, New Zealand and Australia. The aurora borealis is visible from areas around the Arctic such as Alaska, the Canadian Territories, Iceland, Greenland, Norway, Sweden, Finland and Siberia. On rare occasions the aurora borealis can be seen as far south as the Mediterranean and the southern states of the US.
|
From which countries is the aurora borealis visible?
|
The aurora borealis is visible from areas around the Arctic such as Alaska, the Canadian Territories, Iceland, Greenland, Norway, Sweden, Finland and Siberia. On rare occasions the aurora borealis can be seen as far south as the Mediterranean and the southern states of the US.
|
null | false
| null |
In the english language what is alliteration?
|
Alliteration, as I learned it in school from my second grade teacher, is the repetition of consonant sounds at the beginning of nearby words. An example of alliteration is the following: "people are so perplexing when they are pushy."
|
|
1902.11049
| false
| null |
For the human evaluation, we follow the standard approach in evaluating machine translation systems BIBREF23 , as used for question generation by BIBREF9 . We asked three workers to rate 300 generated questions between 1 (poor) and 5 (good) on two separate criteria: the fluency of the language used, and the relevance of the question to the context document and answer.
|
What human evaluation metrics were used in the paper?
|
rating questions on a scale of 1-5 based on fluency of language used and relevance of the question to the context
|
null | false
| null |
What is the Forever War about?
|
The Forever War was written by Joe Haldeman and published in 1974. It follows the story of the main protagonist, William Mandella, as he is conscripted to fight in an interstellar war against the Taurans. The book was awarded the Nebula Award in 1975 and the Hugo and Locus awards in 1976.
The Forever War deals with themes of a soldier's experience of war and reflects the author's own experience in the Vietnam War. Two related novels followed: Forever Peace (1997), which is linked thematically rather than by plot, and Forever Free (1999), a direct sequel to the original novel.
|
|
null | false
| null |
WFSD-LP (107.9 FM) is a low-power FM radio station broadcasting a Christian inspirational format. Licensed to Tallahassee, Florida, United States, the station is currently owned by Tallahassee First Seventh-day Adventist Church, affiliated with LifeTalk Radio.
|
What city is WFSD Radio licensed to?
|
WFSD Radio is licensed to Tallahassee, Florida, United States
|
null | false
| null |
Flying model craft and stories of manned flight go back many centuries; however, the first manned ascent — and safe descent — in modern times took place by larger hot-air balloons developed in the 18th century. Each of the two World Wars led to great technical advances. Consequently, the history of aircraft can be divided into five eras:
- Pioneers of flight, from the earliest experiments to 1914.
- First World War, 1914 to 1918.
- Aviation between the World Wars, 1918 to 1939.
- Second World War, 1939 to 1945.
- Postwar era, also called the Jet Age, 1945 to the present day.
|
What are the first and third eras of aircraft history mentioned in the text below and when did they end? Use a bullet list and the format {Name} ({Year}).
|
- Pioneers of flight (1914)
- Aviation between the World Wars (1939)
|
null | false
| null |
Which teams have the most NCAA Division I men's basketball championships?
|
1. UCLA has 11 championship wins
2. Kentucky has 8 championship wins
3. North Carolina has 6 championship wins
4. Duke has 5 championship wins
5. Indiana has 5 championship wins
|
|
null | false
| null |
The Doomguy (also spelt Doom Guy, as well as referred to as the Doom Marine, Doom Slayer or just the Slayer in Doom (2016) and Doom Eternal) is a fictional character and the protagonist of the Doom video game franchise of first-person shooters created by id Software. He was created by American video game designer John Romero. He was introduced as the player character in the original 1993 video game Doom. Within the Doom series, Doomguy is a demon hunter space marine dressed in green combat armor who rarely speaks onscreen, and his personality and backstory were intentionally vague to reinforce his role as a player avatar. In Doom Eternal, he is voiced by American voice actor Matthew Waterson, while Jason Kelley voices the character in that game's downloadable content The Ancient Gods: Part Two. He has appeared in several other games developed by id Software, including Quake Champions and Quake III Arena.
|
Who is Doom Guy?
|
Doom Guy was created by John Romero as the fictional protagonist of the Doom video game. In the game he is a demon hunter space marine dressed in green combat armor.
|
null | false
| null |
What were the top 3 best-selling PlayStation 2 video games?
|
Top 3 best-selling PlayStation 2 video games:
1. Grand Theft Auto: San Andreas (17.33 million)
2. Gran Turismo 3: A-Spec (14.89 million)
3. Gran Turismo 4 (11.76 million)
|
|
null | false
| null |
How do you operate a car with a manual transmission?
|
Through a combination of a shifter and three pedals: gas, brake, and clutch. Press down the clutch pedal with one foot and the brake with your other foot first, and then turn on the engine of the car. It is recommended to shift to neutral immediately after turning on your car and prior to departing. To shift between one of the gears, typically marked 1, 2, 3, 4, 5, N (for neutral) and R (for reverse), press the clutch all the way down to the floor and move the shifter to the desired position. Once the car is on and you're in neutral, if you want to move forward, press and hold the clutch pedal to the floor and move the shifter to the "1" position. Slowly release the clutch pedal partway, and while releasing it, slowly start to press down on the gas pedal. After gradually pressing more on the gas pedal, slowly remove your foot from the clutch entirely. This will start you in forward motion, restricted to the speed range of 1st gear. You can track this limit through the RPM (revolutions per minute) gauge on the dashboard of your car. Typically you want to keep the RPMs between 1500-4000. If the RPMs start to decrease to 1500-2000, that is a sign you want to shift from a higher gear to a lower gear (i.e. 2nd gear to 1st gear), and as the RPMs increase to 4000, that is an indication you want to shift to a higher gear (i.e. 2nd gear to 3rd gear).
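As an illustrative aside, the RPM-based shifting rule described above can be written as a tiny decision function. This is only a sketch of the rough thresholds from the text, not advice for any specific car; the function name and gear limits are made up.

```python
def shift_advice(rpm: int, gear: int, min_gear: int = 1, max_gear: int = 5) -> int:
    # Shift up near 4000 RPM, shift down as RPMs fall toward 1500-2000
    if rpm >= 4000 and gear < max_gear:
        return gear + 1
    if rpm <= 2000 and gear > min_gear:
        return gear - 1
    return gear

print(shift_advice(4200, 2))  # 3: RPMs high, move up a gear
print(shift_advice(1700, 3))  # 2: RPMs dropping, move down a gear
```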
|
|
null | false
| null |
Tell me the genre of each of the following books, choosing between science fiction, fantasy, or neither.
The Name of the Wind by Patrick Rothfuss, The Player of Games by Iain M. Banks, Crying in H Mart by Michelle Zauner, Crime and Punishment by Fyodr Dostoevsky, and Hyperion by Dan Simmons.
|
The Player of Games and Hyperion are science fiction, The Name of the Wind is fantasy, and both Crying in H Mart and Crime and Punishment are neither of those genres.
|
|
null | false
| null |
My chickens keep getting eaten by foxes. What should I do?
|
Buy a coop for your chickens and put your chickens into their coop at night. Build a fence around the coop so that the foxes cannot enter the coop. Examine the coop and the fence for any holes that foxes can use to slip in and patch up any of these holes.
|
|
null | false
| 70
|
We used a Bi-directional LSTM to capture the word representation in the forward as well as reverse direction of a sentence. Generally, LSTMs take inputs from the left (past) of the sentence and compute the hidden state. However, it has proven beneficial BIBREF23 to use a bi-directional LSTM, where hidden states are also computed from the right (future) of the sentence, and both of these hidden states are concatenated to produce the final output as $h_t$=[$\overrightarrow{h_t}$;$\overleftarrow{h_t}$], where $\overrightarrow{h_t}$, $\overleftarrow{h_t}$ are the hidden states computed in the forward and backward directions respectively.
We used a Bi-directional LSTM to capture the word representation in the forward as well as reverse direction of a sentence.
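For illustration, a minimal PyTorch sketch of the bi-directional LSTM described above. The sizes are made up; the final output at each step is the concatenation $h_t$=[$\overrightarrow{h_t}$;$\overleftarrow{h_t}$]:

```python
import torch
import torch.nn as nn

emb_dim, hidden = 100, 64        # hypothetical sizes
lstm = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)

x = torch.randn(2, 7, emb_dim)   # [batch, seq_len, emb_dim] toy word embeddings
out, _ = lstm(x)                 # out: [batch, seq_len, 2 * hidden]

# out[:, t, :hidden] is the forward state and out[:, t, hidden:] the backward
# state, i.e. h_t = [forward h_t ; backward h_t] as in the text
print(out.shape)                 # torch.Size([2, 7, 128])
```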
|
How is word representation captured?
|
They used Bi-directional LSTM to capture the word representation in forward as well as reverse direction of a sentence.
|
null | false
| 517
|
Three public datasets for domain adaptation are used to evaluate STU-KD. Office-31 (Saenko et al., 2010) is a widely used dataset for domain adaptation. It has 4,652 images and 31 classes collected from three domains: Amazon (A), DSLR (D), and Webcam (W). ImageCLEF-DA has three domains: ImageNet ILSVRC 2012 (I), Caltech-256 (C), and Pascal VOC 2012 (P). For each domain, there are 12 classes and 50 images in each class. Office-Home (Venkateswara et al., 2017) has 15,500 images and 65 classes from four distinct domains: Artistic images (Ar), Clip Art (Cl), Product images (Pr), and Real-World images (Rw).
|
Does the data split consider the balance of classes?
|
The data split does not consider the balance of classes on purpose, which is more realistic under the federated setting.
|
null | false
| 44
|
Table TABREF5 reflects that the BOW model provided the best results, both in terms of accuracy and F1-score. The best result achieved by a sentiment model was the Stanford Sentiment ($0.63$ F1-score), with others within the same range and linguistic-based features collectively giving the best performance.
Table 1 reflects that the BOW model provided the best results, both in terms of accuracy and F1-score. The best result achieved by a sentiment model was the Stanford Sentiment (0.63 F1-score), with others within the same range and linguistic-based features collectively giving the best performance.
|
Did the BOW model provide the best results in terms of accuracy and F1 score?
|
Yes, it did.
|
null | false
| null |
What are the most common meals?
|
The most common meals are breakfast, lunch and dinner. Breakfast is eaten when a person wakes up, lunch is eaten in the afternoon after 4-5 hours of being awake, and dinner is eaten in the evening before bedtime. Each of the meals is important to give the body energy to function during the day. Skipping meals is a form of fasting, which is very common in certain cultures and religions. Eating in between meals is not a meal but a form of snacking.
|
|
null | false
| null |
I need to write a deck summarising the benefits of delta in the lakehouse, can you give me some good slide suggestions?
|
Sure, here you go -
1 - Challenges of data warehousing and analytics
2 - Why Delta, what problem does it solve
3 - What is Delta - key values points
4 - References and customer stories
5 - Enablement and Training
6 - Next Steps and QA
|
|
null | false
| null |
What is the difference between a red delicious and a Granny Smith apple?
|
A red delicious apple is red and sweet whereas a Granny Smith apple is green and tart.
|
|
null | false
| null |
In 1931 the Ford Motor Company relocated from Trafford Park in Manchester, to a larger new plant in Dagenham, which was already the location of supplier Briggs Motor Bodies. A 500-acre (200 ha) riverside site was developed to become Europe's largest car plant, a vast vertically integrated site with its own blast furnaces and power station, importing iron ore and exporting finished vehicles. By the 1950s Ford had taken over Briggs at Dagenham and its other sites at Doncaster, Southampton, Croydon and Romford. At its peak the Dagenham plant had 4,000,000 square feet (370,000 m2) of floor space and employed over 40,000 people, although this number gradually fell during the final three decades of the 20th century as production methods advanced and Ford invested in other European factories as well. Some of Britain's best-selling cars, including the Fiesta, Escort, Cortina and Sierra, were produced at the plant over the next 71 years.
|
To which town in the UK did the Ford Motor Company relocate to in 1931?
|
Dagenham
|
null | false
| null |
Identify which animal species is alive or extinct: Polacanthus, Olive Ridley Turtle
|
Olive Ridley Turtle is alive, Polacanthus is extinct.
|
|
null | false
| null |
Classify each of the following activities as "active" or "lazy": Running, jogging, biking, watching TV, drinking beer, laying down, lifting weights
|
Running: Active
Jogging: Active
Biking: Active
Watching TV: Lazy
Drinking Beer: Lazy
Laying Down: Lazy
Lifting Weights: Active
|
|
1711.00106
| false
| null |
FLOAT SELECTED: Table 2: Ablation study on the development set of SQuAD.
The contributions of each part of our model are shown in Table 2 . We note that the deep residual coattention yielded the highest contribution to model performance, followed by the mixed objective. The sparse mixture of experts layer in the decoder added minor improvements to the model performance.
FLOAT SELECTED: Table 2: Ablation study on the development set of SQuAD.
The contributions of each part of our model are shown in Table 2 .
|
How much is the gap between using the proposed objective and using only cross-entropy objective?
|
The mixed objective improves EM by 2.5% and F1 by 2.2%
|
null | false
| null |
Which is a species of fish? Sturgeon or Surgeon
|
Sturgeon
|
|
null | false
| null |
What is an ATM?
|
An automated teller machine (ATM) is an electronic telecommunications device that enables customers of financial institutions to perform financial transactions, such as cash withdrawals, deposits, funds transfers, balance inquiries or account information inquiries, at any time and without the need for direct interaction with bank staff.
ATMs are known by a variety of names, including automatic teller machine (ATM) in the United States (sometimes redundantly as "ATM machine"). In Canada, the term automated banking machine (ABM) is also used, although ATM is also very commonly used in Canada, with many Canadian organizations using ATM over ABM. In British English, the terms cashpoint, cash machine and hole in the wall are most widely used. Other terms include any time money, cashline, tyme machine, cash dispenser, cash corner, bankomat, or bancomat. ATMs that are not operated by a financial institution are known as "white-label" ATMs.
Using an ATM, customers can access their bank deposit or credit accounts in order to make a variety of financial transactions, most notably cash withdrawals and balance checking, as well as transferring credit to and from mobile phones. ATMs can also be used to withdraw cash in a foreign country. If the currency being withdrawn from the ATM is different from that in which the bank account is denominated, the money will be converted at the financial institution's exchange rate. Customers are typically identified by inserting a plastic ATM card (or some other acceptable payment card) into the ATM, with authentication being by the customer entering a personal identification number (PIN), which must match the PIN stored in the chip on the card (if the card is so equipped), or in the issuing financial institution's database.
|
|
null | false
| 57
|
The vast amounts of data collected by healthcare providers in conjunction with modern data analytics techniques present a unique opportunity to improve health service provision and the quality and safety of medical care for patient benefit BIBREF0 . Much of the recent research in this area has been on personalised medicine and its aim to deliver better diagnostics aided by the integration of diverse datasets providing complementary information. Another large source of healthcare data is organisational. In the United Kingdom, the National Health Service (NHS) has a long history of documenting extensively the different aspects of healthcare provision. The NHS is currently in the process of increasing the availability of several databases, properly anonymised, with the aim of leveraging advanced analytics to identify areas of improvement in NHS services.
One such database is the National Reporting and Learning System (NRLS), a central repository of patient safety incident reports from the NHS in England and Wales. Set up in 2003, the NRLS now contains more than 13 million detailed records. The incidents are reported using a set of standardised categories and contain a wealth of organisational and spatio-temporal information (structured data), as well as, crucially, a substantial component of free text (unstructured data) where incidents are described in the `voice' of the person reporting. The incidents are wide ranging: from patient accidents to lost forms or referrals; from delays in admission and discharge to serious untoward incidents, such as retained foreign objects after operations. The review and analysis of such data provides critical insight into the complex functioning of different processes and procedures in healthcare towards service improvement for safer care.
Although statistical analyses are routinely performed on the structured component of the data (dates, locations, assigned categories, etc), the free text remains largely unused in systematic processes. Free text is usually read manually but this is time-consuming, meaning that it is often ignored in practice, unless a detailed review of a case is undertaken because of the severity of harm that resulted. There is a lack of methodologies that can summarise content and provide content-based groupings across the large volume of reports submitted nationally for organisational learning. Methods that could provide automatic categorisation of incidents from the free text would sidestep problems such as difficulties in assigning an incident category by virtue of a priori pre-defined lists in the reporting system or human error, as well as offering a unique insight into the root cause analysis of incidents that could improve the safety and quality of care and efficiency of healthcare services.
Our goal in this work is to showcase an algorithmic methodology that detects content-based groups of records in a given dataset in an unsupervised manner, based only on the free and unstructured textual description of the incidents. To do so, we combine recently developed deep neural-network high-dimensional text-embedding algorithms with network-theoretical methods. In particular, we apply multiscale Markov Stability (MS) community detection to a sparsified geometric similarity graph of documents obtained from text vector similarities. Our method departs from traditional natural language processing tools, which have generally used bag-of-words (BoW) representations of documents and statistical methods based on Latent Dirichlet Allocation (LDA) to cluster documents BIBREF1 . More recent approaches have used deep neural network based language models clustered with k-means, without a full multiscale graph analysis BIBREF2 . There have been some previous applications of network theory to text analysis. For example, Lancichinetti and co-workers BIBREF3 used a probabilistic graph construction analysed with the InfoMap algorithm BIBREF4 ; however, their community detection was carried out at a single scale and the representation of text as BoW arrays lacks the power of neural network text embeddings. The application of multiscale community detection allows us to find groups of records with consistent content at different levels of resolution; hence the content categories emerge from the textual data, rather than fitting with pre-designed classifications. The obtained results could thus help mitigate possible human error or effort in finding the right category in complex category classification trees.
We showcase the methodology through the analysis of a dataset of patient incidents reported to the NRLS. First, we use the 13 million records collected by the NRLS since 2004 to train our text embedding (although a much smaller corpus can be used). We then analyse a subset of 3229 records reported from St Mary's Hospital, London (Imperial College Healthcare NHS Trust) over three months in 2014 to extract clusters of incidents at different levels of resolution in terms of content. Our method reveals multiple levels of intrinsic structure in the topics of the dataset, as shown by the extraction of relevant word descriptors from the grouped records and a high level of topic coherence. Originally, the records had been manually coded by the operator upon reporting with up to 170 features per case, including a two-level manual classification of the incidents. Therefore, we also carried out an a posteriori comparison against the hand-coded categories assigned by the reporter (healthcare personnel) at the time of the report submission. Our results show good overall correspondence with the hand-coded categories across resolutions and, specifically, at the medium level of granularity. Several of our clusters of content correspond strongly to well-defined categories, yet our results also reveal complementary categories of incidents not defined in the external classification. In addition, the tuning of the granularity afforded by the method can be used to provide a distinct level of resolution in certain areas corresponding to specialised or particular sub-themes.
The application of multiscale community detection allows us to find groups of records with consistent content at different levels of resolution; hence the content categories emerge from the textual data, rather than fitting with pre-designed classifications. The obtained results could thus help mitigate possible human error or effort in finding the right category in complex category classification trees.
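A hedged sketch of the overall pipeline described above (document embeddings, a sparsified similarity graph, community detection). This is not the authors' code: random vectors stand in for the trained text embeddings, and networkx's single-scale modularity communities stand in for multiscale Markov Stability.

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# assume doc_vecs holds one embedding per document (e.g., from a Doc2Vec model)
rng = np.random.default_rng(0)
doc_vecs = rng.normal(size=(200, 50))
doc_vecs /= np.linalg.norm(doc_vecs, axis=1, keepdims=True)
sim = doc_vecs @ doc_vecs.T                  # cosine similarities

# sparsify into a k-nearest-neighbour geometric similarity graph
k = 10
G = nx.Graph()
for i in range(sim.shape[0]):
    for j in np.argsort(sim[i])[::-1][1:k + 1]:
        G.add_edge(i, int(j), weight=float(sim[i, j]))

# stand-in for multiscale Markov Stability: single-scale modularity communities
clusters = greedy_modularity_communities(G, weight="weight")
print(len(clusters))
```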
|
What are the advantages of multiscale community detection?
|
It allows them to find groups of records with consistent content at different levels of resolution; hence the content categories emerge from the textual data, rather than fitting with pre-designed classifications. The obtained results could thus help mitigate possible human error or effort in finding the right category in complex category classification trees.
|
null | false
| null |
Who are your 3 favorite Star Wars characters and why?
|
There are so many great Star Wars characters, but if I have to narrow it down to 3, I'd have to say Chewy, Rey, and Grogu. Chewy is a classic Star Wars character who has always been on the 'good' side (the Rebellion). His communication mode is so adorable, he is extremely loyal, and he wears his emotions on his (hairy) sleeve. While Princess Leia was an inspirational character, I think Rey is the more impressive of the two strong female protagonists. Rey is mentally and physically strong, passionate about her beliefs, and I hope she will continue to show up in Star Wars movies. Grogu is from The Mandalorian series on Disney Plus. Lots of people call him Baby Yoda because he looks like a tiny version of the famous and wise Yoda character. Grogu appeals to women because he makes baby-like sounds and has huge inquisitive eyes. His curiosity gets him into trouble, but he doesn't know any better yet. He has impressive super powers for such a tiny creature, and I hope that we will get to watch Grogu grow up as the series continues.
|
|
null | false
| null |
It was with the arrival of the Price Brothers and Company (of William Evan Price) that the village of Price was founded at the end of the 19th century. It was a suburb of Saint-Octave-de-Métis and constituted a community of workers who were employed at the saw mill.
|
Who founded the city of Price, Quebec?
|
The Price Brothers and Company
|
null | false
| null |
The Go Between Bridge, formerly known as the Hale Street Link, is a toll bridge for vehicles, pedestrians and cyclists over the Brisbane River in inner-city Brisbane, Queensland, Australia. The bridge connects Merivale and Cordelia Streets in West End to Hale Street and the Inner City Bypass at Milton. It is Brisbane's first inner-city traffic bridge in 40 years and carries around 12,000 vehicles per day. The bridge opened to traffic on 5 July 2010 and is now operated by Transurban Queensland.
|
What is the old name for The Go Between Bridge?
|
Hale Street Link
|
1911.11698
| false
| null |
The goal here being to assess whether D2V could effectively replace the related-document function on PubMed, five different document similarity evaluations were designed, as seen in figure FIGREF9. These tasks were designed to cover all levels of similarity, from the most general (the context) down to character-level similarity.
Methods ::: Evaluation ::: String length
To assess whether a similar length could lead to the convergence of two documents, the size of the query document $D_{x}$ was compared with that of the top-close document $C_{x}$ for 10,000 documents randomly selected from the TeS after some pre-processing steps (stopwords and spaces were removed from both documents).
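A minimal sketch of this length comparison, under the stated preprocessing (stopwords and spaces removed); function and variable names are assumptions:

```python
# Hedged sketch: pair the cleaned character length of each query document
# D_x with that of its top-close document C_x; correlate the pairs downstream.
def cleaned_length(text, stopwords):
    tokens = [t for t in text.lower().split() if t not in stopwords]
    return len("".join(tokens))  # character count with spaces removed

def length_pairs(queries, top_close, stopwords):
    return [(cleaned_length(d, stopwords), cleaned_length(c, stopwords))
            for d, c in zip(queries, top_close)]
```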
Methods ::: Evaluation ::: Words co-occurrences
A matrix of word co-occurrences was constructed on the total corpus from PubMed. Briefly, each document was lowercased and tokenized. A matrix was filled with the number of times two words co-occur in a single document. Then, for 5,000 documents $D_{x}$ from the TeS, all models were queried for the top-close document $C_{x}$. All possible combinations between all words $WD_{x} \in D_{x}$ and all words $WC_{x} \in C_{x}$ (excluding stopwords) were extracted, 500 couples were randomly selected, and the number of times each of them co-occurs was extracted from the matrix. The average value of this list was calculated, reflecting the proximity between $D_{x}$ and $C_{x}$ regarding their word content. This score was also calculated between each $D_{x}$ and the top-close document $C_{x}$ returned by the pmra algorithm.
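A sketch of the resulting score, assuming stopwords are already removed and the co-occurrence matrix is stored as a dictionary keyed by sorted word pairs (all names illustrative):

```python
# Hedged sketch: average co-occurrence count over 500 sampled word couples
# between a query document D_x and a top-close document C_x.
import random
from itertools import product

def cooccurrence_score(d_words, c_words, cooc, n_pairs=500, seed=0):
    pairs = list(product(set(d_words), set(c_words)))
    random.Random(seed).shuffle(pairs)
    sample = pairs[:n_pairs]
    return sum(cooc.get(tuple(sorted(p)), 0) for p in sample) / max(len(sample), 1)
```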
Methods ::: Evaluation ::: Stems co-occurrences
The evaluation task explained above was also applied on 10,000 stemmed texts (using Gensim’s PorterStemmer to keep only word roots). This allows the influence of conjugation forms or other suffixes to be assessed.
Methods ::: Evaluation ::: MeSH similarity
It is possible to compare the ability of both pmra and D2V to bring closer articles that were indexed with common labels. To do so, 5,000 documents $D_{x}$ randomly selected from the TeS were sent to both the pmra and D2V architectures, and the top-five closest articles $C_{x}$ were extracted. The following rules were then applied to each MeSH term associated with $D_{x}$ for each document $C_{x_i}$: add 1 to the score if this MeSH term is found in both $D_{x}$ and $C_{x_i}$, add 3 if this MeSH term is defined as a major topic, and add 1 for each qualifier in common between $D_{x}$ and $C_{x_i}$ regarding this particular MeSH term. Then, the mean of these five scores was calculated for both pmra and D2V.
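A sketch of the scoring rules above; the data layout (a dict mapping each MeSH term to a major-topic flag and a qualifier set) and the choice of which document's major-topic flag triggers the +3 are assumptions made for illustration:

```python
# Hedged sketch of the MeSH agreement score between D_x and one C_xi.
def mesh_score(d_mesh, c_mesh):
    # d_mesh / c_mesh: {term: (is_major_topic, set_of_qualifiers)}
    score = 0
    for term, (d_major, d_quals) in d_mesh.items():
        if term in c_mesh:
            _, c_quals = c_mesh[term]
            score += 1                      # term shared by D_x and C_xi
            if d_major:                     # term flagged as a major topic
                score += 3
            score += len(set(d_quals) & set(c_quals))  # shared qualifiers
    return score

def mean_top5_score(d_mesh, top5_c_mesh):
    return sum(mesh_score(d_mesh, c) for c in top5_c_mesh) / len(top5_c_mesh)
```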
|
What four evaluation tasks are defined to determine what influences proximity?
|
The answers are shown as follows:
* String length
* Words co-occurrences
* Stems co-occurrences
* MeSH similarity
|
null | false
| 328
|
Neural network based models have been widely exploited with the prosperity of Deep Learning BIBREF0 and have achieved inspiring performance on many NLP tasks, such as text classification BIBREF1 , BIBREF2 , semantic matching BIBREF3 , BIBREF4 and machine translation BIBREF5 . These models are adept at automatic feature engineering and can represent words, sentences and documents as fixed-length vectors, which contain rich semantic information and are ideal for subsequent NLP tasks.
One formidable constraint of deep neural networks (DNN) is their strong reliance on large amounts of annotated corpora due to the substantial number of parameters to train. A DNN trained on limited data is prone to overfitting and incapable of generalizing well. However, the construction of large-scale high-quality labeled datasets is extremely labor-intensive. To mitigate the problem, these models usually employ a pre-trained lookup table, also known as Word Embedding BIBREF6 , to map words into vectors with semantic implications. However, this method just introduces extra knowledge and does not directly optimize the targeted task. The problem of insufficient annotated resources is not solved either.
Multi-task learning leverages potential correlations among related tasks to extract common features, implicitly increase corpus size and yield classification improvements. Inspired by BIBREF7 , there is a large literature dedicated to multi-task learning with neural network based models BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 . These models basically share some lower layers to capture common features and further feed them to subsequent task-specific layers, which can be classified into three types:
In this paper, we propose a generalized multi-task learning architecture with four types of recurrent neural layers for text classification. The architecture focuses on Type-III, which involves more complicated interactions but has not been researched yet. All the related tasks are jointly integrated into a single system and samples from different tasks are trained in parallel. In our model, every two tasks can directly interact with each other and selectively absorb useful information, or communicate indirectly via a shared intermediate layer. We also design a global memory storage to share common features and collect interactions among all tasks.
We conduct extensive experiments on five benchmark datasets for text classification. Compared to learning separately, jointly learning multiple related tasks in our model demonstrates significant performance gains for each task.
Our contributions are threefold:
The architecture focuses on Type-III, which involves more complicated interactions but has not been researched yet.
|
Which type does the architecture focus on?
|
Type-III
|
null | false
| null |
Who was given the EY Entrepreneur of the Year award in 2020?
|
Kiran Mazumdar Shaw
|
|
null | false
| null |
Uncontacted peoples are groups of indigenous peoples living without sustained contact with neighbouring communities and the world community. Groups who decide to remain uncontacted are referred to as indigenous peoples in voluntary isolation. Legal protections make estimating the total number of uncontacted peoples challenging, but estimates from the Inter-American Commission on Human Rights in the UN and the non-profit group Survival International point to between 100 and 200 uncontacted peoples numbering up to 10,000 individuals total. A majority of uncontacted peoples live in South America, particularly northern Brazil, where the Brazilian government and National Geographic estimate between 77 and 84 tribes reside.
Knowledge of uncontacted peoples comes mostly from encounters with neighbouring indigenous communities and aerial footage.
|
What are Uncontacted peoples?
|
Uncontacted peoples generally refers to indigenous peoples who have remained largely isolated to the present day, maintaining their traditional lifestyles and functioning mostly independently from any political or governmental entities. However, European exploration and colonization during the early modern period brought indigenous peoples worldwide into contact with colonial settlers and explorers. As such, most indigenous groups have had some form of contact with other peoples. The term "uncontacted" therefore refers to a lack of sustained contact with the majority of non-indigenous society at the present time.
The Inter-American Commission on Human Rights refers to uncontacted peoples as "indigenous peoples in voluntary isolation." These groups are defined by their general rejection of contact with anyone outside of their own people. This definition also includes groups who have previously had sustained contact with the majority non-indigenous society but have chosen to return to isolation and no longer maintain contact. As such uncontacted peoples are understood not as living in an anachronistic state of nature but rather as contemporaries of modernity.
A 2009 United Nations report also classified "peoples in initial contact" as sharing the same characteristics but beginning to regularly communicate with and integrate into mainstream society.
To highlight their agency in staying uncontacted or isolated, international organizations emphasize calling them "indigenous peoples in isolation" or "in voluntary isolation". Otherwise they have also been called "hidden peoples" or "uncontacted tribes".
Historically European colonial ideas of uncontacted peoples, and their colonial claims over them, were informed by the imagination of and search for Prester John, king of a wealthy Christian realm in isolation, as well as the Ten Lost Tribes of Israel, identifying uncontacted peoples as "lost tribes".
International organizations have highlighted the importance of protecting indigenous peoples' environment and lands, the importance of protecting them from exploitation or abuse, and the importance of no contact in order to prevent the spread of modern diseases.
Historic exploitation and abuse at the hands of the majority group have led many governments to give uncontacted people their lands and legal protection. Many indigenous groups live on national forests or protected grounds, such as the Vale do Javari in Brazil or the North Sentinel Island in India.
Much of the contention over uncontacted peoples has stemmed from governments' desire to extract natural resources. In the 1960s and 1970s, Brazil's federal government attempted to assimilate and integrate native groups living in the Amazon jungle in order to use their lands for farming. Their efforts were met with mixed success and criticism until, in 1987, Brazil created the Department of Isolated Indians inside of FUNAI (Fundação Nacional do Índio), Brazil's Indian Agency. FUNAI was successful in securing protected lands which have allowed certain groups to remain relatively uncontacted until the present day.
A different outcome occurred in Colombia when the Nukak tribe of indigenous people was contacted by an evangelical group. The tribe was receptive to trade and eventually moved in order to have closer contact with settlers. This led to an outbreak of respiratory infections, violent clashes with narco-traffickers, and the death of hundreds of the Nukak, more than half of the tribe. Eventually, the Colombian government forcibly relocated the tribe to a nearby town where they received food and government support but were reported as living in poverty.
The threats to the Nukak tribe are generally shared by all peoples in isolation, particularly the outside world's desire to exploit their lands. This can include lumbering, ranching and farming, land speculation, oil prospecting and mining, and poaching. For example, then Peruvian President Alan García claimed in 2007 that uncontacted groups were only a "fabrication of environmentalists bent on halting oil and gas exploration". As recently as 2016, a Chinese subsidiary mining company in Bolivia ignored signs that they were encroaching on uncontacted tribes, and attempted to cover it up. In addition to commercial pursuits, other people such as missionaries can inadvertently cause great damage.
It was those threats, combined with attacks on their tribe by illegal cocaine traffickers, that led a group of Acre Indians to make contact with a village in Brazil and subsequently with the federal government in 2014. This behaviour suggests that many tribes are aware of the outside world and choose not to make contact unless motivated by fear or self-interest. Satellite images suggest that some tribes intentionally migrate away from roads or logging operations in order to remain secluded.
Indigenous rights activists have often advocated that indigenous peoples in isolation be left alone, saying that contact will interfere with their right to self-determination as peoples. On the other hand, experience in Brazil suggests isolating peoples might want to have trading relationships and positive social connections with others, but choose isolation out of fear of conflict or exploitation. The Brazilian state organization National Indian Foundation (FUNAI) in collaboration with anthropological experts has chosen to make controlled initial contact with tribes. The organization operates 15 trading posts throughout protected territory where tribes can trade for metal tools and cooking instruments. The organization also steps in to prevent some conflicts and deliver vaccinations. However, FUNAI has been critical of political will in Brazil, reporting that it only received 15% of its requested budget in 2017. In 2018, after consensus among field agents, FUNAI released videos and images of several tribes under their protection. Although the decision was criticized, the director of the Isolated Indian department, Bruno Pereira, responded that "The more the public knows and the more debate around the issue, the greater the chance of protecting isolated Indians and their lands". He shared that the organization has been facing mounting political pressure to open up lands to commercial companies. He also justified the photography by explaining that FUNAI was investigating a possible massacre against the Flechieros tribe.
Recognizing the myriad problems with contact, the United Nations Human Rights Council in 2009 and the Inter-American Commission on Human Rights in 2013 introduced guidelines and recommendations that included a right to choose self-isolation.
There have been reports of human safaris in India's Andaman Islands and in Peru, where tourism companies attempt to help tourists see uncontacted or recently contacted peoples. This practice is controversial.
|
null | false
| 127
|
Most existing adversarial attack methods for text inputs are derived from those for image inputs. These methods can be categorised into three types: gradient-based attacks, optimisation-based attacks and model-based attacks.
Gradient-based attacks are mainly white-box attacks that rely on calculating the gradients of the target classifier with respect to the input representation. This class of attack methods BIBREF6, BIBREF7, BIBREF6 is mainly derived from the fast gradient sign method (FGSM) BIBREF1, which has been shown to be effective in attacking CV classifiers. However, these gradient-based methods cannot be applied to text directly because perturbed word embeddings do not necessarily map to valid words. Other methods such as DeepFool BIBREF8 that rely on perturbing the word embedding space face similar roadblocks. BIBREF5 propose to use nearest neighbour search to find the closest word to the perturbed embedding.
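A hedged sketch of that workaround: take an FGSM-style step on a word's embedding, then snap the perturbed vector back to the nearest valid word. This is a PyTorch-flavoured illustration with assumed names, not any specific paper's implementation:

```python
# Hedged sketch: FGSM step in embedding space + nearest-neighbour projection.
import torch

def fgsm_word_substitute(embed_matrix, word_id, grad_wrt_embedding, eps=0.5):
    e = embed_matrix[word_id]
    e_adv = e + eps * grad_wrt_embedding.sign()      # FGSM perturbation
    dists = torch.cdist(e_adv.unsqueeze(0), embed_matrix).squeeze(0)
    dists[word_id] = float("inf")                    # forbid the original word
    return int(dists.argmin())                       # id of the closest valid word
```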
Both optimisation-based and model-based attacks treat adversarial attack as an optimisation problem where the constraints are to maximise the loss of the target classifier and to minimise the difference between original and adversarial examples. Between these two, the former uses optimisation algorithms directly, while the latter trains a separate model to generate adversarial examples and therefore involves a training process. Some of the most effective attacks for images are achieved by optimisation-based methods, such as the L-BFGS attack BIBREF1 and the C&W attack BIBREF9 in white-box attacks and the ZOO method BIBREF10 in black-box attacks. For texts, the white-box attack HotFlip BIBREF3 and the black-box attacks DeepWordBug BIBREF11 and TextBugger BIBREF12 have been proposed in this category.
In a similar vein, a few model-based attacks have been proposed for images, e.g. BIBREF13 design a generative adversarial network (GAN) to generate the image perturbation from a noise map. The attacking method and target classifier typically form a single large network and the attacking method is trained using the loss from the target classifier. For this reason, it is not very straightforward to use these model-based techniques for text because there is a discontinuity in the network (since words in the adversarial examples are discrete) and so it is not fully differentiable.
Model-based attacks are generally seen as grey-box attacks, as they require access to the target classifier during the training phase; once trained, however, they can generate adversarial examples independently.
|
Why are model-based attacks generally seen as grey box attacks?
|
Because they require access to the target classifier during the training phase.
|
null | false
| null |
What is Intel Xeon good for?
|
Intel's Xeon line of processors is intended for business workstations and servers. For servers and workstations running data-intensive applications, particularly those that are mission-critical, Xeon CPUs are the best option.
|
|
null | false
| null |
Derby Museum and Art Gallery is a museum and art gallery in Derby, England. It was established in 1879, along with Derby Central Library, in a new building designed by Richard Knill Freeman and given to Derby by Michael Thomas Bass. The collection includes a gallery displaying many paintings by Joseph Wright of Derby; there is also a large display of Royal Crown Derby and other porcelain from Derby and the surrounding area. Further displays include archaeology, natural history, geology, military collections and world cultures. The Art Gallery was opened in 1882.
|
Given a reference text about Derby Museum and Art Gallery, tell me when the it was established and what the gallery contains.
|
The Derby Museum and Art Gallery was established in 1879 and includes a gallery with many paintings by Joseph Wright of Derby and other displays with archaeology, natural history, geology, military collections, and world cultures.
|
null | false
| 63
|
In this section we compare the results of the different segmentation systems through automatic evaluations. First of all, we evaluate the human segmentations, using the subcorpus $D$ composed of the common documents. The results are presented in the table tab:humains. The first row shows the performance of the $I$ segments, taking the experts as a reference, while the second presents the process in the opposite direction.
We have found that segmentation by experts and by naive annotators produces two subcorpora $E$ and $N$ with very similar characteristics. This surprised us, as we expected a more marked difference between them. In any case, we deduce that, at least in this corpus, it is not necessary to be an expert in linguistics to segment the documents discursively. As far as system evaluations are concerned, we use the 78 $E$ documents as reference. Table TABREF26 shows the results.
In the case of the Experts, the grammatical verb-nominal version (V-N) had the better F-score performance. The verbal version (V) obtained a better precision $P$ than the verb-nominal version (V-N). In the case of the Naive, the F-score, $P$ and $R$ performance is very similar to that of the Experts.
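One standard way to compute such scores is boundary-level precision, recall and F-score against the expert segmentation; the sketch below assumes exact matching of boundary positions (an illustrative simplification):

```python
# Hedged sketch: P/R/F of predicted segment boundaries vs. a reference.
def boundary_prf(predicted, reference):
    pred, ref = set(predicted), set(reference)
    tp = len(pred & ref)
    p = tp / len(pred) if pred else 0.0
    r = tp / len(ref) if ref else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

# Example: boundary_prf([5, 12, 20], [5, 13, 20]) -> (0.667, 0.667, 0.667)
```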
|
In the case of Experts, which version had better F-score performance?
|
Verb-nominal version.
|
null | false
| 37
|
The resource is comprised of 142 hours of spoken Mapudungun that was recorded during the AVENUE project BIBREF6 in 2001 to 2005. The data was recorded under a partnership between the AVENUE project, funded by the US National Science Foundation at Carnegie Mellon University, the Chilean Ministry of Education (Mineduc), and the Instituto de Estudios Indígenas at Universidad de La Frontera, originally spanning 170 hours of audio. We have recently cleaned the data and are releasing it publicly for the first time (although it has been shared with individual researchers in the past) along with NLP baselines.
The recordings were transcribed and translated into Spanish at the Instituto de Estudios Indígenas at Universidad de La Frontera. The corpus covers three dialects of Mapudungun: about 110 hours of Nguluche, 20 hours of Lafkenche and 10 hours of Pewenche. The three dialects are quite similar, with some minor semantic and phonetic differences. The fourth traditionally distinguished dialect, Huilliche, has several grammatical differences from the other three and is classified by Ethnologue as a separate language, iso 639-3: huh, and as nearly extinct.
The recordings are restricted to a single domain: primary, preventive, and treatment health care, including both Western and Mapuche traditional medicine. The recording sessions were conducted as interactive conversations so as to be natural in Mapuche culture, and they were open-ended, following an ethnographic approach. The interviewer was trained in these methods along with the use of the digital recording systems that were available at the time. We also followed human subject protocol. Each person signed a consent form to release the recordings for research purposes and the data have been accordingly anonymized. Because Machi (traditional Mapuche healers) were interviewed, we asked the transcribers to delete any culturally proprietary knowledge that a Machi may have revealed during the conversation. Similarly, we deleted any names or any information that may identify the participants.
The corpus is culturally relevant because it was created by Mapuche people, using traditional ways of relating to each other in conversations. They discussed personal experiences with primary health care in the traditional Mapuche system and the Chilean health care system, talking about illnesses and the way they were cured. The participants ranged from 16 years old to 100 years old, almost in equal numbers of men and women, and they were all native speakers of Mapudungun.
We created two training sets, one appropriate for single-speaker speech synthesis experiments, and one appropriate for multiple-speaker speech recognition and machine translation experiments.
|
What are the two training sets in this paper?
|
One appropriate for single-speaker speech synthesis experiments, and one appropriate for multiple-speaker speech recognition and machine translation experiments.
|
null | false
| null |
What countries are these names most likely to be found among native-born citizens? John Smith, Elise Monet, Helmut Schneider, Jing Li, Raj Patel
|
England or United States, France, Germany, China, India
|
|
null | false
| null |
The exact cause of SPD is not known. However, it is known that the midbrain and brainstem regions of the central nervous system are early centers in the processing pathway for multisensory integration; these brain regions are involved in processes including coordination, attention, arousal, and autonomic function. After sensory information passes through these centers, it is then routed to brain regions responsible for emotions, memory, and higher level cognitive functions. Damage in any part of the brain involved in multisensory processing can cause difficulties in adequately processing stimuli in a functional way.
|
What are the causes for Sensory processing disorder?
|
The exact cause of Sensory processing disorder is not known. However, it is known that the midbrain and brainstem regions of the central nervous system are early centers in the processing pathway for multisensory integration; these brain regions are involved in processes including coordination, attention, arousal, and autonomic function. After sensory information passes through these centers, it is then routed to brain regions responsible for emotions, memory, and higher level cognitive functions. Damage in any part of the brain involved in multisensory processing can cause difficulties in adequately processing stimuli in a functional way.
|
null | false
| null |
What is Akshardham?
|
Akshardham in New Delhi, India is recognized by the Guinness Book of World Records as the World's largest comprehensive Hindu Temple. The 100-acre Akshardham complex on the banks of the Yamuna has become an architectural landmark of India. The monument is a fusion of pink stone and pure white marble, where pink stone symbolizes bhakti (devotion) in eternal bloom and white marble absolute purity and eternal peace. It measures 356 ft. long, 316 ft. wide and 141 ft. high, covering an area of 86,342 sq. ft. The marvelous, ornately hand-carved stone temple has been built without structural steel. It took five years for the 11,000 artisans and volunteers to build this grand structure. It cost about 200 crore Indian rupees. It was built by the Bochasanwasi Akshar Purushottam Swaminarayan Sanstha (BAPS) organization.
|
|
null | false
| null |
Jokic or Embiid for the MVP?
|
I would give the MVP award to Nikola Jokic.
|
|
null | false
| null |
What is Debezium?
|
Debezium is an open-source change data capture (CDC) tool for capturing change logs from databases relating to updates, deletes, and inserts on source tables. Debezium can access these logs, and that information can be used downstream to apply the CDC changes to copies of the source tables in a different platform or system. FiveTran is a competitor to Debezium, but they charge for the CDC service they offer.
|
|
null | false
| null |
Classify each item as "soft" or "hard": bowling ball, cinder block, tissue, memory foam, croissant, hammer
|
Bowling Ball: Hard
Cinder Block: Hard
Tissue: Soft
Memory Foam: Soft
Croissant: Soft
Hammer: Hard
|
|
null | false
| null |
What are some different ways I can use many fresh lemons?
|
If you have multiple lemons and want to use them before they go bad, consider the following uses:
1) Lemon cleaning spray. Mix 1 part water, 1 part white vinegar, and the juice of one lemon in a spray bottle. A great non-toxic cleaner for around the house!
2) Limoncello. Add 1 cup simple syrup to 4 cups vodka and pour over the peels from 12 lemons. Leave in a sealed glass container for 30 days. Serve over ice and enjoy. (Drink responsibly!)
3) Lemon flavored olive oil. Slice 4 lemons and put in 4 cups of extra-virgin olive oil. Leave in a sealed container for 30 days. Enjoy in salads and soups!
4) Lemon bars. There are many ways to make these using all parts of a lemon. Check online for a recipe to make these deliciously tart bars.
5) Lemon-aid. Mix 1 part fresh-squeezed lemon juice with 6 parts water. Sweeten to taste. Serve over ice.
6) Lemon Centerpiece. Brighten up the room by arranging lemons in a decorative bowl. Place on the table or counter to add a pop of color.
7) Lemon Detox water. Start your morning with 16oz of warm water, the juice of one lemon, and a pinch of cayenne.
|
|
null | false
| null |
All planets in our solar system rotate on their axis either in a clockwise or anti-clockwise direction. Tell me which planets have clockwise rotation, and which have anti-clockwise rotations. The planets are: Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, and Neptune.
|
Viewed from above the Sun's north pole, Mercury, Earth, Mars, Jupiter, Saturn, and Neptune rotate anti-clockwise on their axes. Venus rotates clockwise, and Uranus, which is tipped almost onto its side, is also technically a clockwise (retrograde) rotator. In summary, all planets except Venus and Uranus rotate anti-clockwise.
|
|
1909.09484
| false
| null |
BPRA: Belief Per-Response Accuracy (BPRA) tests the ability to generate the correct user intents during the dialogue. This metric is used to evaluate the accuracy of dialogue belief tracker BIBREF1.
APRA: Action Per-Response Accuracy (APRA) evaluates the per-turn accuracy of the dialogue actions generated by dialogue policy maker. For baselines, APRA evaluates the classification accuracy of the dialogue policy maker. But our model actually generates each individual token of actions, and we consider a prediction to be correct only if every token of the model output matches the corresponding token in the ground truth.
BLEU BIBREF19: The metric evaluates the quality of the final response generated by natural language generator. The metric is usually used to measure the performance of the task-oriented dialogue system.
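A rough sketch of the strict APRA rule and a common BLEU computation (NLTK's sentence_bleu is one possible choice; all names are assumptions):

```python
# Hedged sketch: APRA counts a prediction as correct only when every token
# matches the ground truth; BLEU scores the generated response.
from nltk.translate.bleu_score import sentence_bleu

def apra(predicted_actions, gold_actions):
    correct = sum(pred == gold  # exact token-for-token sequence equality
                  for pred, gold in zip(predicted_actions, gold_actions))
    return correct / len(gold_actions)

def response_bleu(generated_tokens, reference_tokens):
    return sentence_bleu([reference_tokens], generated_tokens)
```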
|
What metrics are used to measure performance of models?
|
The answers are shown as follows:
* BPRA
* APRA
* BLEU
|
null | false
| 231
|
In a preliminary task, we looked for words which may designate sentences associated with controversial concepts. To this end, we ranked the words appearing in positive sentences according to their information gain for this task. The top of the list comprises the following: that, sexual, people, movement, religious, issues, rights.
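A sketch of this ranking step, using mutual information as the information-gain criterion over binary word features (scikit-learn names; an illustrative stand-in for the authors' exact computation):

```python
# Hedged sketch: rank vocabulary words by information gain for the
# controversial / non-controversial sentence labels.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import mutual_info_classif

def rank_words_by_info_gain(sentences, labels, top_n=10):
    vec = CountVectorizer(binary=True)
    x = vec.fit_transform(sentences)
    scores = mutual_info_classif(x, labels, discrete_features=True)
    vocab = vec.get_feature_names_out()
    order = scores.argsort()[::-1][:top_n]
    return [(vocab[i], float(scores[i])) for i in order]
```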
The Wikipedia list of controversial issues specifies categories for the listed concepts, like Politics and economics, Religion, History, and Sexuality (some concepts are associated with two or more categories). While some top-ranked words - that, people, issues - do seem to directly indicate controversiality BIBREF12, BIBREF13, others seem to have more to do with the category they belong to. Although these categories may indeed indicate controversiality, we consider this an indirect or implicit indication, since it is more related to the controversial theme than to controversiality per se.
To control for this effect, we performed a second experiment where we set the concepts from one category as the test set, and used the others for training (concepts associated with the excluded category are left out, regardless of whether they are also associated with one of the training categories). We did this for 5 categories: History, Politics and economics, Religion, Science, and Sexuality. This way, thematic relatedness observed in the training set should have little or no effect on correctly estimating the level of controversy associated of concepts in the test set, and may even “mislead” the estimator. We note that previous work on controversiality does not seem to address this issue, probably because the meta-data used is less sensitive to it.
|
What are the 5 categories involved in their second experiment?
|
History, Politics and economics, Religion, Science, and Sexuality.
|
null | false
| null |
How many cards are there in a standard deck of cards?
|
A standard deck of cards contains fifty-two cards. The cards are divided into four equal suits of thirteen cards, the suits being Hearts, Diamonds, Clubs and Spades. The thirteen cards have different values on them, typically starting with a two and including a card for each number up to ten. There are also cards which show either a Jack, a Queen or a King, which nominally represent values of eleven, twelve and thirteen respectively. The final card is known as an Ace, which represents a value of one, though in many card games it is considered the highest value card. The Heart and Diamond suits are coloured red while the Club and Spade suits are coloured black.
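Purely as an illustration, a few lines of code enumerate the structure just described and confirm the counts:

```python
# Hedged sketch: build a standard 52-card deck from 4 suits x 13 ranks.
from itertools import product

suits = ["Hearts", "Diamonds", "Clubs", "Spades"]
ranks = ["Ace"] + [str(n) for n in range(2, 11)] + ["Jack", "Queen", "King"]
deck = [f"{rank} of {suit}" for suit, rank in product(suits, ranks)]
assert len(ranks) == 13 and len(deck) == 52
```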
|
|
null | false
| 137
|
Initializing foreign embeddings is the backbone of our approach. A good initialization leads to better zero-shot transfer results and enables fast adaptation. To verify the importance of a good initialization, we train a RAMEN$_{\textsc {base}}$+RoBERTa whose foreign word embeddings are initialized randomly from $\mathcal {N}(0, \frac{1}{d^2})$. For a fair comparison, we use the same hyper-parameters as in §SECREF27. Table TABREF36 shows the results of XNLI and UD parsing with random initialization. In comparison to initialization from aligned fastText vectors, random initialization decreases the zero-shot performance of RAMEN$_{\textsc {base}}$ by 15.9% for XNLI and 27.8 points for UD parsing on average. We also see that zero-shot parsing of SOV languages (Arabic and Hindi) suffers under random initialization.
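A minimal sketch of this random baseline, assuming a PyTorch embedding table with placeholder sizes:

```python
# Hedged sketch: foreign embeddings drawn from N(0, 1/d^2) instead of
# aligned fastText vectors; standard deviation 1/d gives variance 1/d^2.
import torch

def random_foreign_embeddings(vocab_size, d):
    return torch.randn(vocab_size, d) * (1.0 / d)
```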
All the RAMEN models are built from English and tuned on English for zero-shot cross-lingual tasks. It is reasonable to expect RAMENs do well in those tasks as we have shown in our experiments. But are they also a good feature extractor for supervised tasks? We offer a partial answer to this question by evaluating our model for supervised dependency parsing on UD datasets.
We used train/dev/test splits provided in UD to train and evaluate our RAMEN-based parser. Table TABREF38 summarizes the results (LAS) of our supervised parser. For a fair comparison, we choose mBERT as the baseline and all the RAMEN models are initialized from aligned fastText vectors. With the same architecture of 12 Transformer layers, RAMEN$_{\textsc {base}}$+BERT performs competitive to mBERT and outshines mBERT by +1.2 points for Vietnamese. The best LAS results are obtained by RAMEN$_{\textsc {large}}$+RoBERTa with 24 Transformer layers. Overall, our results indicate the potential of using contextual representations from RAMEN for supervised tasks.
We evaluate the performance of RAMEN+RoBERTa$_{\textsc {base}}$ (initialized from monolingual data) at each training steps: initialization of word embeddings (0K update), fine-tuning target embeddings (25K), and fine-tuning the model on both English and target language (at each 25K updates). The results are presented in Figure FIGREF40.
Without fine-tuning, the average accuracy on XNLI is 39.7% for a three-way classification task, and the average LAS score is 3.6 for dependency parsing. We see the biggest leap in performance after 50K updates. While the semantic similarity task profits significantly from 25K updates of the target embeddings, the syntactic task benefits from further fine-tuning of the encoder. This is expected since the target languages might exhibit different syntactic structures than English, and fine-tuning the encoder helps to capture language-specific structures. We observe a substantial gain of 19-30 LAS for all languages except French after 50K updates.
Language similarities have more impact on transferring syntax than semantics. Without tuning the English encoder, French enjoys 50.3 LAS for being closely related to English, whereas Arabic and Hindi, SOV languages, modestly reach 4.2 and 6.4 points using the SVO encoder. Although Chinese has SVO order, it is often seen as head-final while English is strongly head-initial. Perhaps this explains the poor performance for Chinese.
|
Are contextual representations from RAMEN also good for supervised parsing?
|
Overall, their results indicate the potential of using contextual representations from RAMEN for supervised tasks.
|
null | false
| null |
Our War (Italian: La nostra guerra) is a 2016 Italian-American documentary film directed by Bruno Chiaravalloti, Claudio Jampaglia and Benedetta Argentieri. It premiered out of competition at the 73rd edition of the Venice Film Festival. It tells the story of three foreign fighters joining the Kurdish forces to fight Daesh.
|
Given the following paragraph about a film, who directed "Our War"?
|
Bruno Chiaravalloti
|
null | false
| null |
Classify each of the following as a green vegetable or a yellow vegetable: bell pepper, zucchini, broccoli, kale, summer squash, sweet potato, corn, onion, golden beet, spinach.
|
Vegetables that are green: Broccoli, kale, spinach
Vegetables that are yellow: Summer squash, corn, golden beet
Vegetables that can be green or yellow: Bell pepper, zucchini, onion
Vegetables that are neither green nor yellow: Sweet potato, which is an orange root vegetable
|
|
null | false
| null |
What are the four standard sizes of beds in the United States?
|
From smallest to largest, the four bed sizes are twin, full, queen, and king.
|
|
null | false
| null |
Hawaii (/həˈwaɪi/ (listen) hə-WY-ee; Hawaiian: Hawaiʻi [həˈvɐjʔi] or [həˈwɐjʔi]) is a state in the Western United States, about 2,000 miles (3,200 km) from the U.S. mainland in the Pacific Ocean. It is the only U.S. state outside North America, the only state that is an archipelago, and the only state in the tropics.
Hawaii comprises nearly the entire Hawaiian archipelago, 137 volcanic islands spanning 1,500 miles (2,400 km) that are physiographically and ethnologically part of the Polynesian subregion of Oceania. The state's ocean coastline is consequently the fourth-longest in the U.S., at about 750 miles (1,210 km). The eight main islands, from northwest to southeast, are Niʻihau, Kauaʻi, Oʻahu, Molokaʻi, Lānaʻi, Kahoʻolawe, Maui, and Hawaiʻi—the last of these, after which the state is named, is often called the "Big Island" or "Hawaii Island" to avoid confusion with the state or archipelago. The uninhabited Northwestern Hawaiian Islands make up most of the Papahānaumokuākea Marine National Monument, the United States' largest protected area and the fourth-largest in the world.
|
What are the names of the main islands of Hawaiʻi in alphabetical order?
|
Hawaiʻi, Kahoʻolawe, Kauaʻi, Lānaʻi, Maui, Molokaʻi, and Niʻihau.
|
null | false
| null |
Tell me which of the following are dogs: Beagle, Labrador, Siamese, Chihuahua, Parakeet, Persian.
|
Beagle, Labrador, Chihuahua
|
|
null | false
| null |
Guinness World Records, known from its inception in 1955 until 1999 as The Guinness Book of Records and in previous United States editions as The Guinness Book of World Records, is a reference book published annually, listing world records both of human achievements and the extremes of the natural world. The brainchild of Sir Hugh Beaver, the book was co-founded by twin brothers Norris and Ross McWhirter in Fleet Street, London, in August 1955.
|
Is the Guinness Book of World Records legitimate?
|
The Guinness Book of World Records is a real organization that was started in 1955. The organization certifies world records by sending an official certifier to witness and record world records. Recently, the organization has come under fire for its practices of receiving money from dictatorships in exchange for certifying "seemingly random" world records, such as "The Largest Gathering of Jump Ropers Wearing Green" or "The Longest Pizza Baked in Under 10 Hours".
|
null | false
| null |
Who is playing in the 2023 women's NCAA championship game?
|
The Iowa Hawkeyes face the LSU Tigers in the Women’s NCAA Division 1 Championship game. The game will be played on April 2, 2023 in Dallas, TX.
|
|
null | false
| null |
Classify each of these car manufacturers as either Italian, German, American, Japanese, or other: Renault, Dodge, Opel, Ferrari, Lancia, Axiam, Nissan
|
Italian: Ferrari, Lancia
German: Opel
American: Dodge
Japanese: Nissan
other: Renault, Axiam
|
|
null | false
| null |
How can sunshine lift your mood?
|
Well, whenever there is sunshine, I feel a lot more optimistic, have more clarity of mind, feel more peaceful and full of energy.
Why is that happening? If you look at people's mood in winter, they're always more gloomy and introverted, and don't socialise as much as in summer. In summer, however, people go out, enjoy nature, get more fresh air and natural light, interact more with other people, and eat more fruits and veggies which, being in season, are tastier. All these are no-brainers for boosting our wellbeing and therefore our positivity overall.
No wonder that sun is considered the source of life not only in contemporary science but also in many ancient cultures.
I wish we had more sunny days and time to enjoy them!
|
|
null | false
| null |
David Griffiths Dicks, OAM, CitWA, (born 6 October 1978) is an Australian sailor.
|
Who is David Dicks?
|
David Griffiths Dicks, OAM, CitWA, (born 6 October 1978) is an Australian sailor. He became the youngest person to sail non-stop and solo around the world. In February 1996, at the age of 17, he set out from Fremantle, Western Australia in his family's 10m S&S 34 sloop named 'Seaflight'. During his 9-month circumnavigation, he faced many challenges such as numerous knockdowns, bad weather, equipment failure, and food poisoning. Because of accepting a bolt to fix his rig near the Falkland Islands, his circumnavigation was not considered unassisted. He returned safely to Fremantle in November 1996 amid great fanfare, including a ticker-tape parade and being given the 'keys' to Perth City.
|
1704.08424
| false
| null |
In this paper, we propose to represent each word with an expressive multimodal distribution, for multiple distinct meanings, entailment, heavy tailed uncertainty, and enhanced interpretability. For example, one mode of the word `bank' could overlap with distributions for words such as `finance' and `money', and another mode could overlap with the distributions for `river' and `creek'. It is our contention that such flexibility is critical for both qualitatively learning about the meanings of words, and for optimal performance on many predictive tasks.
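One concrete instantiation of such a representation is a mixture of spherical Gaussians per word, compared with an expected likelihood kernel; the sketch below is an illustration under that assumption rather than the authors' exact formulation:

```python
# Hedged sketch: expected likelihood kernel between two Gaussian-mixture
# word representations, sum_ij w_i w'_j N(mu_i; mu'_j, (var_i + var'_j) I).
import numpy as np

def expected_likelihood(mu1, var1, w1, mu2, var2, w2):
    d = mu1.shape[1]
    total = 0.0
    for i in range(len(w1)):
        for j in range(len(w2)):
            v = var1[i] + var2[j]                  # spherical covariances add
            diff = mu1[i] - mu2[j]
            log_n = -0.5 * (d * np.log(2 * np.pi * v) + diff @ diff / v)
            total += w1[i] * w2[j] * np.exp(log_n)
    return total
```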
|
How does this compare to contextual embedding methods?
|
The answers are shown as follows:
* represent each word with an expressive multimodal distribution, for multiple distinct meanings, entailment, heavy tailed uncertainty, and enhanced interpretability. For example, one mode of the word `bank' could overlap with distributions for words such as `finance' and `money', and another mode could overlap with the distributions for `river' and `creek'.
|
null | false
| 384
|
Language model pre-training has proven to be highly effective in learning universal language representations from large-scale unlabeled data. ELMo BIBREF0, GPT BIBREF1 and BERT BIBREF2 have achieved great success in many NLP tasks, such as sentiment classification BIBREF3, natural language inference BIBREF4, and question answering BIBREF5.
Despite its empirical success, BERT's computational efficiency is a widely recognized issue because of its large number of parameters. For example, the original BERT-Base model has 12 layers and 110 million parameters. Training from scratch typically takes four days on 4 to 16 Cloud TPUs. Even fine-tuning the pre-trained model with task-specific dataset may take several hours to finish one epoch. Thus, reducing computational costs for such models is crucial for their application in practice, where computational resources are limited.
Motivated by this, we investigate the redundancy issue of learned parameters in large-scale pre-trained models, and propose a new model compression approach, Patient Knowledge Distillation (Patient-KD), to compress the original teacher (e.g., BERT) into a lightweight student model without sacrificing performance. In our approach, the teacher model outputs probability logits and predicts labels for the training samples (extendable to additional unannotated samples), and the student model learns from the teacher network to mimic the teacher's prediction.
Different from previous knowledge distillation methods BIBREF6, BIBREF7, BIBREF8, we adopt a patient learning mechanism: instead of learning parameters from only the last layer of the teacher, we encourage the student model to extract knowledge also from previous layers of the teacher network. We call this `Patient Knowledge Distillation'. This patient learner has the advantage of distilling rich information through the deep structure of the teacher network for multi-layer knowledge distillation.
We also propose two different strategies for the distillation process: ($i$) PKD-Last: the student learns from the last $k$ layers of the teacher, under the assumption that the top layers of the original network contain the most informative knowledge to teach the student; and ($ii$) PKD-Skip: the student learns from every $k$ layers of the teacher, suggesting that the lower layers of the teacher network also contain important information and should be passed along for incremental distillation.
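A hedged sketch of the two layer-selection strategies and the patient term they feed (normalized MSE between [CLS] hidden states); the exact layer indexing and loss weighting are assumptions for illustration, not the paper's precise recipe:

```python
# Hedged sketch: PKD-Last vs. PKD-Skip teacher-layer selection, plus the
# patient distillation term over normalized [CLS] states.
import torch.nn.functional as F

def select_teacher_layers(n_teacher, n_student, strategy="skip"):
    if strategy == "last":                         # PKD-Last: top layers
        return list(range(n_teacher - n_student, n_teacher))
    step = n_teacher // n_student                  # PKD-Skip: every k-th layer
    return list(range(step - 1, n_teacher, step))

def patient_loss(student_cls, teacher_cls, layer_map):
    # student_cls / teacher_cls: per-layer [batch, hidden] CLS tensors.
    loss = 0.0
    for s_idx, t_idx in enumerate(layer_map):
        s = F.normalize(student_cls[s_idx], dim=-1)
        t = F.normalize(teacher_cls[t_idx], dim=-1)
        loss = loss + F.mse_loss(s, t)
    return loss
```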
We evaluate the proposed approach on several NLP tasks, including Sentiment Classification, Paraphrase Similarity Matching, Natural Language Inference, and Machine Reading Comprehension. Experiments on seven datasets across these four tasks demonstrate that the proposed Patient-KD approach achieves superior performance and better generalization than standard knowledge distillation methods BIBREF6, with significant gain in training efficiency and storage reduction while maintaining comparable model accuracy to original large models. To the authors' best knowledge, this is the first known effort for BERT model compression.
|
What are the two strategies they propose for the distillation process?
|
($i$) PKD-Last: the student learns from the last $k$ layers of the teacher, under the assumption that the top layers of the original network contain the most informative knowledge to teach the student; and ($ii$) PKD-Skip: the student learns from every $k$ layers of the teacher, suggesting that the lower layers of the teacher network also contain important information and should be passed along for incremental distillation.
|
null | false
| null |
The episode focuses on two people: an elderly hospital doctor (Aleksander Bardini), who lives by himself in the ubiquitous Dekalog apartment block; and Dorota Geller (Krystyna Janda), a woman in her 30s who lives in the same building and is a violinist with the Philharmonic Orchestra.
|
Change the following passage to past tense
|
The episode focused on two people: an elderly hospital doctor (Aleksander Bardini), who lived by himself in the ubiquitous Dekalog apartment block; and Dorota Geller (Krystyna Janda), a woman in her 30s who lived in the same building and was a violinist with the Philharmonic Orchestra.
|
null | false
| null |
I have some vermouth and bitters. What cocktails can I make with those ingredients?
|
You can add whiskey to your vermouth and bitters to make a Manhattan. Alternatively, if you have gin, you can mix it with dry vermouth to make a martini, or combine it with sweet vermouth and Campari to make a negroni.
|
|
null | false
| null |
Postmodernism is an intellectual stance or mode of discourse characterized by skepticism toward the "grand narratives" of modernism; rejection of epistemic certainty or the stability of meaning; and sensitivity to the role of ideology in maintaining political power. Claims to objectivity are dismissed as naïve realism, with attention drawn to the conditional nature of knowledge claims within particular historical, political, and cultural discourses. The postmodern outlook is characterized by self-referentiality, epistemological relativism, moral relativism, pluralism, irony, irreverence, and eclecticism; it rejects the "universal validity" of binary oppositions, stable identity, hierarchy, and categorization.
Initially emerging from a mode of literary criticism, postmodernism developed in the mid-twentieth century as a rejection of modernism, and has been observed across many disciplines. Postmodernism is associated with the disciplines deconstruction and post-structuralism. Various authors have criticized postmodernism as promoting obscurantism, as abandoning Enlightenment rationalism and scientific rigor, and as adding nothing to analytical or empirical knowledge.
|
What is postmodernism?
|
Postmodernism is a philosophical position that questions the certainty and immutability of human knowledge. It emerges in opposition to the main tenets of modernism and rejects claims of objective universal truths that are not bound and shaped by the social, historical, political and cultural context in which they arise. Postmodernist attitudes are characterized by moral relativism, a view where opposite moral precepts and world views are equally valid regardless of their consequences or scientific underpinning. This has led to criticism from thinkers that subscribe to rationalism and the notion that objective truths can be derived from scientific rigor.
|
null | false
| null |
Machado managed to rise in his bureaucratic career, first in the Agriculture Department. Three years later, he became the head of a section in it. He published two poetry books: Falenas, in 1870, and Americanas, in 1875. Their weak reception made him explore other literary genres.
He wrote four romantic novels: Ressurreição, A Mão e a Luva, Helena and Iaiá Garcia. The books were a success with the public, but literary critics considered them mediocre. Machado suffered repeated attacks of epilepsy, apparently related to hearing of the death of his old friend José de Alencar. He was left melancholic, pessimistic and fixated on death. His next book, Memórias Póstumas de Brás Cubas (Posthumous Memoirs of Brás Cubas, also translated as Epitaph of a Small Winner), marked by "a skeptical and realistic tone", is widely considered a masterpiece. By the end of the 1880s, Machado had gained wide renown as a writer.
Although he was opposed to slavery, he never spoke against it in public. He avoided discussing politics. He was criticized by the abolitionist José do Patrocínio and by the writer Lima Barreto for staying away from politics, especially the cause of abolition. He was also criticized by them for having married a white woman. Machado was caught by surprise when the monarchy was overthrown on 15 November 1889. Machado had no sympathy towards republicanism, as he considered himself a liberal monarchist and venerated Pedro II, whom he perceived as "a humble, honest, well-learned and patriotic man, who knew how to make of a throne a chair [for his simplicity], without diminishing its greatness and respect." When a commission went to the public office where he worked to remove the picture of the former emperor, the shy Machado defied them: "The picture got in here by an order and it shall leave only by another order."
The birth of the Brazilian republic made Machado become more critical and an observer of the Brazilian society of his time. From then on, he wrote "not only the greatest novels of his time, but the greatest of all time of Brazilian literature." Works such as Quincas Borba (Philosopher or Dog?) (1891), Dom Casmurro (1899), Esaú e Jacó (1904) and Memorial de Aires (1908), considered masterpieces, were successes with both critics and the public. In 1893 he published "A Missa do Galo" ("Midnight Mass"), considered his greatest short story.
|
Give me the best books written by Machado de Assis
|
Memórias Póstumas de Brás Cubas, Quincas Borba, Dom Casmurro, Esaú e Jacó, Memorial de Aires and A Missa do Galo
|
null | false
| 192
|
For evaluating topics with automatic quality measures, we used several English text collections and one Russian collection (Table TABREF7 ). We experiment with three thesauri: WordNet (155 thousand entries), information-retrieval thesaurus of the European Union EuroVoc (15161 terms), and Russian thesaurus RuThes (115 thousand entries) BIBREF19 .
At the preprocessing step, documents were processed by morphological analyzers. Also, we extracted noun groups as described in BIBREF16 . As baselines, we use the unigram LDA topic model and LDA topic model with added 1000 ngrams with maximal NC-value BIBREF20 extracted from the collection under analysis.
As found previously BIBREF14, BIBREF16, the addition of ngrams without accounting for relations between their components considerably worsens the perplexity because of vocabulary growth (for perplexity, lower is better), while leaving other automatic quality measures practically unchanged (Table 2).
We add the WordNet data in the following steps. At the first step, we include WordNet synonyms (including multiword expressions) in the proposed similarity sets (LDA-Sim+WNsyn). At this step, frequencies of synonyms found in the same document are summed up during LDA topic learning, as described in Algorithm SECREF3 . We can see that the kernel uniqueness becomes very low and topics are very close to each other in content (Table 2: LDA-Sim+WNsyn). At the second step, we add each word's direct relatives (hyponyms, hypernyms, etc.) to its similarity set. Now the frequencies of semantically related words are added up, enhancing their contribution to all topics of the current document.
Table 2 shows that these two steps lead to a severe degradation of the topic model on most measures in comparison to the initial unigram model: uniqueness of kernels abruptly decreases, and perplexity at the second step grows several times over (Table 2: LDA-Sim+WNsynrel). It is evident that at this step the model has poor quality. When we look at the topics, the cause of the problem seems clear: the obtained topics are overgeneralized. The topics are built around very general words such as "person", "organization", "year", etc. These words were initially frequent in the collection and then received additional frequencies from their frequent synonyms and related words.
We then hypothesize that these general words are used in texts to discuss specific events and objects, and therefore change the construction of the similarity sets in the following way: we do not add a word's hyponyms to its similarity set. Thus hyponyms, which are usually more specific and concrete, obtain additional frequencies from upper synsets and increase their contributions to the document topics, while the frequencies and contributions of hypernyms are left unchanged. We see a great improvement in model quality: the kernel uniqueness considerably improves, perplexity decreases to levels comparable with the unigram model, and topic coherence characteristics also improve for most collections (Table 2: LDA-Sim+WNsynrel/hyp).
We further use the WordNet-based similarity sets with n-grams having the same components, as described in BIBREF16 . All measures improve significantly for all collections (Table 2: LDA-Sim+WNsr/hyp+Ngrams). At the last step, we apply to ngrams the same approach previously used for hyponym-hypernym relations: frequencies of shorter ngrams and words are added to the frequencies of longer ngrams, but not vice versa. In this way we increase the contribution of more specific, longer ngrams to the topics. It can be seen (Table 2) that the kernel uniqueness grows significantly; at this step it is 1.3-1.6 times greater than for the baseline models, reaching 0.76 on the ACL collection (Table 2: LDA-Sim+WNsr/hyp+Ngrams/l).
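For concreteness, the asymmetric similarity sets described above could be sketched as follows with NLTK's WordNet interface. This is an illustrative reading, not the authors' implementation: the function names are hypothetical, and per-document counts are assumed to be available as a dictionary.

```python
from nltk.corpus import wordnet as wn  # requires nltk's wordnet corpus

def similarity_set(word):
    """Asymmetric similarity set: synonyms and hypernym lemmas, but no
    hyponyms, so specific words inherit frequency mass from general ones
    and not vice versa (as in the LDA-Sim+WNsynrel/hyp variant)."""
    related = set()
    for synset in wn.synsets(word):
        related.update(l.name().replace("_", " ") for l in synset.lemmas())
        for hyper in synset.hypernyms():  # hyponyms deliberately excluded
            related.update(l.name().replace("_", " ") for l in hyper.lemmas())
    related.discard(word)
    return related

def boosted_count(word, doc_counts):
    """Hypothetical per-document count: the word's own frequency plus the
    frequencies of its similarity-set members found in the same document."""
    return doc_counts.get(word, 0) + sum(doc_counts.get(r, 0)
                                         for r in similarity_set(word))

# "canine" (a hypernym lemma) contributes to "dog"; "puppy" (a hyponym) does not.
print(boosted_count("dog", {"dog": 3, "canine": 2, "puppy": 5}))
```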
In the second series of experiments, we applied the EuroVoc information retrieval thesaurus to two European Union collections: Europarl and JRC. In content, the EuroVoc thesaurus is much smaller than WordNet: it contains terms from economic and political domains and does not include general abstract words. The results are shown in Table 3. It can be seen that inclusion of EuroVoc synsets improves the topic coherence and increases kernel uniqueness (in contrast to the results with WordNet). Adding ngrams further improves both topic coherence and kernel uniqueness.
Finally, we experimented with the Russian banking collection and the RuThes thesaurus. In this case we obtained improvements already with RuThes synsets, and again adding ngrams further improved topic coherence and kernel uniqueness (Table 4).
It is worth noting that adding ngrams sometimes worsens the TC-NPMI measure, especially on the JRC collection. This is because the topics' top elements then contain many multiword expressions, which rarely occur in Wikipedia (used for the coherence calculation); the automatic coherence measures can therefore lack sufficient evidence for correct estimates.
As baselines, we use the unigram LDA topic model and LDA topic model with added 1000 ngrams with maximal NC-value [21] extracted from the collection under analysis.
|
What are their baselines?
|
The unigram LDA topic model and the LDA topic model with 1000 added ngrams with maximal NC-value.
|
null | false
| null |
Who did the French national football team play against in the last 3 matches of the 2018 World Cup and how did the matches turn out?
|
On 06/07/2018, France won 2-0 against Uruguay in the quarter-finals.
On 10/07/2018, France won 1-0 against Belgium in the semi-finals.
On 15/07/2018, France beat Croatia 4-2 in the final to become world champions. This made the French national football team world champions for the second time; the first title came in 1998.
|
|
null | false
| 170
|
First, to operationalize, we say that term $i$ is associated with gender $j$ if, when discussing individuals of gender $j$, $i$ is used with unusual frequency – which we can check with statistical hypothesis tests. Let $f_i$ represent the likelihood of $i$ appearing when discussing women or men. $f_i$ is unknown, but we can model the distribution of all possible $f_i$ using the corpus of texts that we have from the domain. We construct a gender-balanced version of the corpus by randomly undersampling the more prevalent gender until the proportions of each gender are equal. Assuming a non-informative prior distribution on $f_i$, the posterior distribution is Beta($k_i$, $N - k_i$), where $k_i$ is the count of $i$ in the gender-balanced corpus and $N$ is the total count of words in that corpus.
As BIBREF10 discuss, “the distribution of the gender-specific counts can be described by an integral over all possible $f_i$. This integral defines the Beta-Binomial distribution BIBREF29, and has a closed form solution.” We say that term $i$ is significantly associated with gender $j$ if the cumulative distribution at $k_{ij}$ (the count of $i$ in the $j$ portion of the gender-balanced corpus) is $p \le 0.05$. As in the original work, we apply the Bonferroni correction BIBREF30 for multiple comparisons because we are computing statistical tests for thousands of hypotheses.
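As a hedged sketch (assuming an upper-tail test for over-representation and two equal halves of $N/2$ tokens, neither of which the text states explicitly), the test can be written with SciPy's Beta-Binomial distribution:

```python
from scipy.stats import betabinom

def gender_associated(k_i, k_ij, N, vocab_size, alpha=0.05):
    """k_i: count of term i in the gender-balanced corpus; k_ij: its count
    in gender j's half; N: total tokens; vocab_size: number of tests for
    the Bonferroni correction."""
    n_j = N // 2  # assumed: each gender's half holds N/2 tokens
    # Integrating the Binomial over the Beta(k_i, N - k_i) posterior on f_i
    # gives a Beta-Binomial; test the upper tail P(count >= k_ij).
    p = betabinom.sf(k_ij - 1, n_j, k_i, N - k_i)
    return p <= alpha / vocab_size  # Bonferroni-corrected threshold

print(gender_associated(k_i=500, k_ij=320, N=1_000_000, vocab_size=10_000))
```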
First, we trained domain-specific word embeddings using the Word2Vec BIBREF33 CBOW model ($w \in \mathbb{R}^{100}$). Then, we used k-means clustering to cluster the embeddings of the gender-associated words. Since k-means may converge at local optima, we ran the algorithm 50 times and kept the model with the lowest sum of squared errors.
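A minimal sketch of this step with gensim and scikit-learn, where the toy corpus and word list are placeholders; sklearn's n_init parameter implements exactly the keep-the-best-of-50-runs strategy:

```python
import numpy as np
from gensim.models import Word2Vec
from sklearn.cluster import KMeans

# Toy corpus and word list; in practice the corpus is the domain text and
# the words come from the gender-association test described earlier.
sentences = [["doctor", "treated", "the", "patient"],
             ["the", "nurse", "helped", "the", "doctor"]]
gendered_words = ["doctor", "nurse", "patient"]

w2v = Word2Vec(sentences, vector_size=100, sg=0, min_count=1)  # sg=0 -> CBOW
X = np.stack([w2v.wv[w] for w in gendered_words])

# 50 random restarts; sklearn keeps the run with the lowest inertia (SSE).
km = KMeans(n_clusters=2, n_init=50).fit(X)
print(dict(zip(gendered_words, km.labels_)))
```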
To automatically label the clusters, we combined the grounded knowledge of WordNet BIBREF34 and context-sensitive strengths of domain-specific word embeddings. Our algorithm is similar to BIBREF28's approach, but we extend their method by introducing domain-specific word embeddings for clustering as well as a new technique for sense disambiguation. Given a cluster, our algorithm proceeds with the following three steps:
Sense disambiguation: The goal is to assign each cluster word to one of its WordNet synsets; let $S$ represent the collection of chosen synsets. We know that these words have been clustered in domain-specific embedding space, which means that in the context of the domain, these words are very close semantically. Thus, we choose $S^*$ that minimizes the total distance between its synsets.
Candidate label generation: In this step, we generate $L$, the set of possible cluster labels. Our approach is simple: we take the union of all hypernyms of the synsets in $S^*$.
Candidate label ranking: Here, we rank the synsets in $L$. We want labels that are as close to all of the synsets in $S^*$ as possible; thus, we score the candidate labels by the sum of their distances to each synset in $S^*$ and we rank them from least to most distance.
In steps 1 and 3, we use WordNet pathwise distance, but we encourage the exploration of other distance representations as well.
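A hedged sketch of these three steps, using NLTK WordNet with 1 - path_similarity as the distance function (an assumption; the text only specifies pathwise distance) and a brute-force search over a capped number of senses per word, which is only practical for small clusters:

```python
from itertools import product
from nltk.corpus import wordnet as wn

def dist(s1, s2):
    """WordNet pathwise distance; falls back to 1.0 when undefined."""
    sim = s1.path_similarity(s2)
    return 1.0 - sim if sim is not None else 1.0

def label_cluster(words, senses_per_word=5, top_k=3):
    # Step 1 (sense disambiguation): pick one synset per word so that the
    # total pairwise distance within the cluster is minimal.
    options = [wn.synsets(w)[:senses_per_word] for w in words if wn.synsets(w)]
    best = min(product(*options),
               key=lambda S: sum(dist(a, b) for a in S for b in S))
    # Step 2 (candidate generation): union of hypernyms of chosen synsets.
    candidates = {h for s in best for h in s.hypernyms()}
    # Step 3 (ranking): score labels by total distance to the chosen
    # synsets, least distance first.
    return sorted(candidates,
                  key=lambda c: sum(dist(c, s) for s in best))[:top_k]

print(label_cluster(["dress", "skirt", "blouse"]))
```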
To automatically label the clusters, we combined the grounded knowledge of WordNet (Miller, 1995) and context-sensitive strengths of domain-specific word embeddings.
|
What do the authors combine to automatically label the clusters?
|
To automatically label the clusters, the authors combined the grounded knowledge of WordNet and context-sensitive strengths of domain-specific word embeddings.
|
null | false
| null |
Long before any knowledge of electricity existed, people were aware of shocks from electric fish. Ancient Egyptian texts dating from 2750 BCE referred to these fish as the "Thunderer of the Nile", and described them as the "protectors" of all other fish. Electric fish were again reported millennia later by ancient Greek, Roman and Arabic naturalists and physicians. Several ancient writers, such as Pliny the Elder and Scribonius Largus, attested to the numbing effect of electric shocks delivered by electric catfish and electric rays, and knew that such shocks could travel along conducting objects. Patients with ailments such as gout or headache were directed to touch electric fish in the hope that the powerful jolt might cure them.
Ancient cultures around the Mediterranean knew that certain objects, such as rods of amber, could be rubbed with cat's fur to attract light objects like feathers. Thales of Miletus made a series of observations on static electricity around 600 BCE, from which he believed that friction rendered amber magnetic, in contrast to minerals such as magnetite, which needed no rubbing.
Thales was incorrect in believing the attraction was due to a magnetic effect, but later science would prove a link between magnetism and electricity. According to a controversial theory, the Parthians may have had knowledge of electroplating, based on the 1936 discovery of the Baghdad Battery, which resembles a galvanic cell, though it is uncertain whether the artifact was electrical in nature.
Electricity would remain little more than an intellectual curiosity for millennia until 1600, when the English scientist William Gilbert wrote De Magnete, in which he made a careful study of electricity and magnetism, distinguishing the lodestone effect from static electricity produced by rubbing amber. He coined the New Latin word electricus ("of amber" or "like amber", from ἤλεκτρον, elektron, the Greek word for "amber") to refer to the property of attracting small objects after being rubbed. This association gave rise to the English words "electric" and "electricity", which made their first appearance in print in Thomas Browne's Pseudodoxia Epidemica of 1646.
Further work was conducted in the 17th and early 18th centuries by Otto von Guericke, Robert Boyle, Stephen Gray and C. F. du Fay. Later in the 18th century, Benjamin Franklin conducted extensive research in electricity, selling his possessions to fund his work. In June 1752 he is reputed to have attached a metal key to the bottom of a dampened kite string and flown the kite in a storm-threatened sky. A succession of sparks jumping from the key to the back of his hand showed that lightning was indeed electrical in nature. He also explained the apparently paradoxical behavior of the Leyden jar as a device for storing large amounts of electrical charge in terms of electricity consisting of both positive and negative charges.
In 1775, Hugh Williamson reported a series of experiments to the Royal Society on the shocks delivered by the electric eel; that same year the surgeon and anatomist John Hunter described the structure of the fish's electric organs. In 1791, Luigi Galvani published his discovery of bioelectromagnetics, demonstrating that electricity was the medium by which neurons passed signals to the muscles. Alessandro Volta's battery, or voltaic pile, of 1800, made from alternating layers of zinc and copper, provided scientists with a more reliable source of electrical energy than the electrostatic machines previously used. The recognition of electromagnetism, the unity of electric and magnetic phenomena, is due to Hans Christian Ørsted and André-Marie Ampère in 1819–1820. Michael Faraday invented the electric motor in 1821, and Georg Ohm mathematically analysed the electrical circuit in 1827. Electricity and magnetism (and light) were definitively linked by James Clerk Maxwell, in particular in his "On Physical Lines of Force" in 1861 and 1862.
While the early 19th century had seen rapid progress in electrical science, the late 19th century would see the greatest progress in electrical engineering. Through such people as Alexander Graham Bell, Ottó Bláthy, Thomas Edison, Galileo Ferraris, Oliver Heaviside, Ányos Jedlik, William Thomson, 1st Baron Kelvin, Charles Algernon Parsons, Werner von Siemens, Joseph Swan, Reginald Fessenden, Nikola Tesla and George Westinghouse, electricity turned from a scientific curiosity into an essential tool for modern life.
In 1887, Heinrich Hertz discovered that electrodes illuminated with ultraviolet light create electric sparks more easily. In 1905, Albert Einstein published a paper that explained experimental data from the photoelectric effect as being the result of light energy being carried in discrete quantized packets, energising electrons. This discovery led to the quantum revolution. Einstein was awarded the Nobel Prize in Physics in 1921 for "his discovery of the law of the photoelectric effect". The photoelectric effect is also employed in photocells such as can be found in solar panels.
The first solid-state device was the "cat's-whisker detector" first used in the 1900s in radio receivers. A whisker-like wire is placed lightly in contact with a solid crystal (such as a germanium crystal) to detect a radio signal by the contact junction effect. In a solid-state component, the current is confined to solid elements and compounds engineered specifically to switch and amplify it. Current flow can be understood in two forms: as negatively charged electrons, and as positively charged electron deficiencies called holes. These charges and holes are understood in terms of quantum physics. The building material is most often a crystalline semiconductor.
Solid-state electronics came into its own with the emergence of transistor technology. The first working transistor, a germanium-based point-contact transistor, was invented by John Bardeen and Walter Houser Brattain at Bell Labs in 1947, followed by the bipolar junction transistor in 1948.
|
Name late 19th century scientists credited with making electricity an essential tool for modern life.
|
While the early 19th century had seen rapid progress in electrical science, the late 19th century would see the greatest progress in electrical engineering. Through such people as Alexander Graham Bell, Ottó Bláthy, Thomas Edison, Galileo Ferraris, Oliver Heaviside, Ányos Jedlik, William Thomson, 1st Baron Kelvin, Charles Algernon Parsons, Werner von Siemens, Joseph Swan, Reginald Fessenden, Nikola Tesla and George Westinghouse, electricity turned from a scientific curiosity into an essential tool for modern life.
|
null | false
| 57
|
The vast amounts of data collected by healthcare providers in conjunction with modern data analytics techniques present a unique opportunity to improve health service provision and the quality and safety of medical care for patient benefit BIBREF0 . Much of the recent research in this area has been on personalised medicine and its aim to deliver better diagnostics aided by the integration of diverse datasets providing complementary information. Another large source of healthcare data is organisational. In the United Kingdom, the National Health Service (NHS) has a long history of documenting extensively the different aspects of healthcare provision. The NHS is currently in the process of increasing the availability of several databases, properly anonymised, with the aim of leveraging advanced analytics to identify areas of improvement in NHS services.
One such database is the National Reporting and Learning System (NRLS), a central repository of patient safety incident reports from the NHS in England and Wales. Set up in 2003, the NRLS now contains more than 13 million detailed records. The incidents are reported using a set of standardised categories and contain a wealth of organisational and spatio-temporal information (structured data), as well as, crucially, a substantial component of free text (unstructured data) where incidents are described in the `voice' of the person reporting. The incidents are wide-ranging: from patient accidents to lost forms or referrals; from delays in admission and discharge to serious untoward incidents, such as retained foreign objects after operations. The review and analysis of such data provides critical insight into the complex functioning of different processes and procedures in healthcare towards service improvement for safer care.
Although statistical analyses are routinely performed on the structured component of the data (dates, locations, assigned categories, etc), the free text remains largely unused in systematic processes. Free text is usually read manually but this is time-consuming, meaning that it is often ignored in practice, unless a detailed review of a case is undertaken because of the severity of harm that resulted. There is a lack of methodologies that can summarise content and provide content-based groupings across the large volume of reports submitted nationally for organisational learning. Methods that could provide automatic categorisation of incidents from the free text would sidestep problems such as difficulties in assigning an incident category by virtue of a priori pre-defined lists in the reporting system or human error, as well as offering a unique insight into the root cause analysis of incidents that could improve the safety and quality of care and efficiency of healthcare services.
Our goal in this work is to showcase an algorithmic methodology that detects content-based groups of records in a given dataset in an unsupervised manner, based only on the free and unstructured textual description of the incidents. To do so, we combine recently developed deep neural-network high-dimensional text-embedding algorithms with network-theoretical methods. In particular, we apply multiscale Markov Stability (MS) community detection to a sparsified geometric similarity graph of documents obtained from text vector similarities. Our method departs from traditional natural language processing tools, which have generally used bag-of-words (BoW) representations of documents and statistical methods based on Latent Dirichlet Allocation (LDA) to cluster documents BIBREF1 . More recent approaches have used deep neural network based language models clustered with k-means, without a full multiscale graph analysis BIBREF2 . There have been some previous applications of network theory to text analysis. For example, Lancichinetti and co-workers BIBREF3 used a probabilistic graph construction analysed with the InfoMap algorithm BIBREF4 ; however, their community detection was carried out at a single scale, and the representation of text as BoW arrays lacks the power of neural network text embeddings. The application of multiscale community detection allows us to find groups of records with consistent content at different levels of resolution; hence the content categories emerge from the textual data, rather than fitting pre-designed classifications. The obtained results could thus help mitigate possible human error or effort in finding the right category in complex category classification trees.
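For illustration only, the pipeline of text embedding, graph sparsification and multi-resolution community detection could be approximated as below. This sketch substitutes networkx's Louvain algorithm with a resolution sweep for the Markov Stability analysis used in the paper, and uses a toy Doc2Vec setup; all names and parameters are placeholders, not the authors' code.

```python
import networkx as nx
from sklearn.neighbors import kneighbors_graph
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# Toy records standing in for the free-text incident descriptions.
docs = [["patient", "fell", "in", "the", "ward"],
        ["referral", "form", "was", "lost"],
        ["delay", "in", "discharge", "from", "ward"]]
model = Doc2Vec([TaggedDocument(d, [i]) for i, d in enumerate(docs)],
                vector_size=100, min_count=1)
X = model.dv.vectors

# Sparsified geometric similarity graph: keep each record's nearest
# neighbours under cosine distance.
A = kneighbors_graph(X, n_neighbors=2, metric="cosine", mode="connectivity")
G = nx.from_scipy_sparse_array(A)

# Coarse-to-fine partitions via a resolution sweep (a single-scale proxy
# for the multiscale Markov Stability scan used in the paper).
for gamma in (0.5, 1.0, 2.0):
    communities = nx.community.louvain_communities(G, resolution=gamma, seed=0)
    print(gamma, [sorted(c) for c in communities])
```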
We showcase the methodology through the analysis of a dataset of patient incidents reported to the NRLS. First, we use the 13 million records collected by the NRLS since 2004 to train our text embedding (although a much smaller corpus can be used). We then analyse a subset of 3229 records reported from St Mary's Hospital, London (Imperial College Healthcare NHS Trust) over three months in 2014 to extract clusters of incidents at different levels of resolution in terms of content. Our method reveals multiple levels of intrinsic structure in the topics of the dataset, as shown by the extraction of relevant word descriptors from the grouped records and a high level of topic coherence. Originally, the records had been manually coded by the operator upon reporting with up to 170 features per case, including a two-level manual classification of the incidents. Therefore, we also carried out an a posteriori comparison against the hand-coded categories assigned by the reporter (healthcare personnel) at the time of the report submission. Our results show good overall correspondence with the hand-coded categories across resolutions and, specifically, at the medium level of granularity. Several of our clusters of content correspond strongly to well-defined categories, yet our results also reveal complementary categories of incidents not defined in the external classification. In addition, the tuning of the granularity afforded by the method can be used to provide a distinct level of resolution in certain areas corresponding to specialise or particular sub-themes.
First, we use the 13 million records collected by the NRLS since 2004 to train our text embedding (although a much smaller corpus can be used).
|
How many records do they use to train their text embedding?
|
13 million records.
|
null | false
| 106
|
The art of argumentation has been studied since the early work of Aristotle, dating back to the 4th century BC BIBREF0 . It has been exhaustively examined from different perspectives, such as philosophy, psychology, communication studies, cognitive science, formal and informal logic, linguistics, computer science, educational research, and many others. In a recent and critically well-acclaimed study, Mercier.Sperber.2011 even claim that argumentation is what drives humans to perform reasoning. From the pragmatic perspective, argumentation can be seen as a verbal activity oriented towards the realization of a goal BIBREF1 or more in detail as a verbal, social, and rational activity aimed at convincing a reasonable critic of the acceptability of a standpoint by putting forward a constellation of one or more propositions to justify this standpoint BIBREF2 .
Analyzing argumentation from the computational linguistics point of view has very recently led to a new field called argumentation mining BIBREF3 . Despite the lack of an exact definition, researchers within this field usually focus on analyzing discourse on the pragmatics level and applying a certain argumentation theory to model and analyze textual data at hand.
Our motivation for argumentation mining stems from a practical information seeking perspective from the user-generated content on the Web. For example, when users search for information in user-generated Web content to facilitate their personal decision making related to controversial topics, they lack tools to overcome the current information overload. One particular use-case example dealing with a forum post discussing private versus public schools is shown in Figure FIGREF4 . Here, the lengthy text on the left-hand side is transformed into an argument gist on the right-hand side by (i) analyzing argument components and (ii) summarizing their content. Figure FIGREF5 shows another use-case example, in which users search for reasons that underpin certain standpoint in a given controversy (which is homeschooling in this case). In general, the output of automatic argument analysis performed on the large scale in Web data can provide users with analyzed arguments to a given topic of interest, find the evidence for the given controversial standpoint, or help to reveal flaws in argumentation of others.
Satisfying the above-mentioned information needs cannot be directly tackled by current methods for, e.g., opinion mining, question answering, or summarization, and requires novel approaches within the argumentation mining field. Although user-generated Web content has already been considered in argumentation mining, many limitations and research gaps can be identified in the existing works. First, the scope of the current approaches is restricted to a particular domain or register, e.g., hotel reviews BIBREF5 , Tweets related to local riot events BIBREF6 , student essays BIBREF7 , airline passenger rights and consumer protection BIBREF8 , or renewable energy sources BIBREF9 . Second, not all the related works are tightly connected to argumentation theories, resulting in a gap between the substantial research in argumentation itself and its adaptation in NLP applications. Third, as an emerging research area, argumentation mining still suffers from a lack of labeled corpora, which is crucial for designing, training, and evaluating the algorithms. Although some works have dealt with creating new data sets, the reliability (in terms of inter-annotator agreement) of the annotated resources is often unknown BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 .
Annotating and automatically analyzing arguments in unconstrained user-generated Web discourse represent challenging tasks. So far, the research in argumentation mining “has been conducted on domains like news articles, parliamentary records and legal documents, where the documents contain well-formed explicit arguments, i.e., propositions with supporting reasons and evidence present in the text” BIBREF8 . Boltuzic.Snajder.2014 [p. 50] point out that “unlike in debates or other more formal argumentation sources, the arguments provided by the users, if any, are less formal, ambiguous, vague, implicit, or often simply poorly worded.” Another challenge stems from the different nature of argumentation theories and computational linguistics. Whereas computational linguistics is mainly descriptive, the empirical research that is carried out in argumentation theories does not constitute a test of the theoretical model that is favored, because the model of argumentation is a normative instrument for assessing the argumentation BIBREF15 . So far, no fully fledged descriptive argumentation theory based on empirical research has been developed; thus the feasibility of adapting argumentation models to Web discourse remains an open issue.
These challenges can be formulated into the following research questions:
In this article, we push the boundaries of the argumentation mining field by focusing on several novel aspects. We tackle the above-mentioned research questions as well as the previously discussed challenges and issues. First, we target user-generated Web discourse from several domains across various registers, to examine how argumentation is communicated in different contexts. Second, we bridge the gap between argumentation theories and argumentation mining by selecting the argumentation model based on research into argumentation theories and related fields in communication studies or psychology. In particular, we adapt normative models from argumentation theory to perform empirical research in NLP and support our application of argumentation theories with an in-depth reliability study. Finally, we use state-of-the-art NLP techniques in order to build robust computational models for analyzing arguments that are capable of dealing with a variety of genres on the Web.
Our motivation for argumentation mining stems from a practical information seeking perspective from the user-generated content on the Web.
|
What is the authors' motivation for argumentation mining stems?
|
The motivation for argumentation mining stems from a practical information seeking perspective from the user-generated content on the Web.
|
null | false
| null |
Tell me whether these sports use a ball or not: golf, gymnastics, boxing, soccer, football, wrestling
|
Golf: Ball
Gymnastics: No Ball
Boxing: No Ball
Soccer: Ball
Football: Ball
Wrestling: No Ball
|
|
null | false
| 369
|
We developed several linguistic, temporal and combined temporal-linguistic models for our task. These models take as input a user, $u$, from one of the sites (i.e., Twitter or Facebook) and a list of $N$ users from the other service, where one of the $N$ users, $u^{\prime }$, is the same as $u$. The models then provide a ranking among candidate matches between $u$ and each of the $N$ users. We used two criteria to evaluate our models:
A baseline random choice ranker would have an accuracy of $1/N$ and an average rank of $N/2$ (since $u^{\prime }$ may appear anywhere in the list of $N$ items).
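As an illustration (not the authors' code), the two evaluation criteria and the random baseline can be sketched in a few lines of Python; the 1-indexed rank convention is an assumption:

```python
import random

def evaluate(ranks):
    """ranks: 1-indexed position of the true match u' in each ranked list."""
    accuracy = sum(r == 1 for r in ranks) / len(ranks)
    average_rank = sum(ranks) / len(ranks)
    return accuracy, average_rank

# Random-choice baseline over N candidates: accuracy tends to 1/N and the
# average rank to (N + 1) / 2, i.e. roughly N / 2 as stated above.
N = 100
baseline = [random.randint(1, N) for _ in range(10_000)]
print(evaluate(baseline))
```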
We developed several linguistic, temporal and combined temporal-linguistic models for our task.
|
Did they develop models for their task?
|
Yes, they developed several linguistic, temporal and combined temporal-linguistic models.
|
1812.06038
| false
| null |
In this work we compare causal attribution networks derived from three datasets. A causal attribution dataset is a collection of text pairs that reflect cause-effect relationships proposed by humans (for example, “virus causes sickness”). These written statements identify the nodes of the network (see also our graph fusion algorithm for dealing with semantically equivalent statements) while cause-effect relationships form the directed edges (“virus” $\rightarrow $ “sickness”) of the causal attribution network.
A causal attribution dataset is a collection of text pairs that reflect cause-effect relationships proposed by humans (for example, “virus causes sickness”). These written statements identify the nodes of the network (see also our graph fusion algorithm for dealing with semantically equivalent statements) while cause-effect relationships form the directed edges (“virus” $\rightarrow $ “sickness”) of the causal attribution network.
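As a toy illustration (not the paper's fusion algorithm), such a network can be represented as a directed graph, with a hypothetical alias table standing in for the fusion of semantically equivalent statements:

```python
import networkx as nx

statements = [("virus", "sickness"), ("viruses", "sickness"),
              ("smoking", "cancer")]

aliases = {"viruses": "virus"}  # toy stand-in for the graph fusion step
def canonical(term):
    return aliases.get(term, term)

# Nodes are (normalized) written statements; directed edges run cause -> effect.
G = nx.DiGraph()
G.add_edges_from((canonical(c), canonical(e)) for c, e in statements)
print(sorted(G.edges()))  # [('smoking', 'cancer'), ('virus', 'sickness')]
```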
|
What are causal attribution networks?
|
networks where nodes represent causes and effects, and directed edges represent cause-effect relationships proposed by humans
|
null | false
| null |
Why do Dutch people always ride a bicycle?
|
Dutch people use bicycles as a form of transportation, not as a sport. One interesting fact is that in Amsterdam, the capital of the Netherlands, there are more bicycles than inhabitants.
|
|
null | false
| null |
Name some famous Hindi film actors
|
Here is a list of some famous Hindi film actors - Shahrukh Khan, Salman Khan, Amitabh Bachchan, Aamir Khan, Varun Dhawan, Vicky Kaushal and Ranbir Kapoor
|
|
null | false
| null |
What is the alphabet?
|
The alphabet consists of 26 characters, or letters, that can be used to form words. Words put together create sentences, sentences put together create paragraphs, and paragraphs put together create stories. There are rules for how to form words in the English language. The vowels A, E, I, O, U are the most used characters in words formed from the letters of the alphabet. When spoken, these words form the sounds of the language we call English. English is spoken as a primary language in countries like Canada, Great Britain, Australia and the United States.
|
|
null | false
| null |
Why do humans like golf?
|
Humans like golf because of the game's ability to keep a person humble. Golf is, by nature, an individual sport, so the human is essentially playing against themselves. This nature gives humans the ability to become at peace as they accept defeat in a game that truly cannot be won. Success is only measured against one's self.
|
|
null | false
| null |
Hifz-ur Rahman (Ottoman Turkish: Merciful Protector) was the second of two Lüft-ü Celil-class ironclads built for the Ottoman Navy in the late 1860s. Originally ordered by the Khedivate of Egypt, an autonomous vassal state of the Ottoman Empire, the central Ottoman government forced Egypt to surrender Hifz-ur Rahman while she was still under construction at the French Forges et Chantiers de la Gironde shipyard. The vessel was a turret ship, armed with two 229 mm (9 in) Armstrong guns and two 178 mm (7 in) Armstrong guns, both pairs in revolving gun turrets.
Hifz-ur Rahman saw action during the Russo-Turkish War in 1877–1878, where she operated on the Danube to try to prevent Russian forces from crossing the river. While defending the port of Sulina, she engaged Russian gunboats in an inconclusive action. She was laid up for twenty years, until the outbreak of the Greco-Turkish War in 1897, which highlighted the badly deteriorated state of the Ottoman fleet. A large-scale reconstruction program was put in place, and Hifz-ur Rahman was rebuilt in the Imperial Arsenal in the early 1890s. Nevertheless, she saw no further service of any significance, and she was sold for scrap in 1909.
|
What was the Ottoman ironclad Hifz-ur Rahman?
|
The Hifz-ur Rahman was a big ship in the old days of the Ottoman Empire. It had many powerful guns. It fought in a war against Russia. Later in its life it was restored but never saw battle again. Sadly, it was eventually scrapped.
|
null | false
| null |
In golf, what is the proper technique when hitting a bunker shot? Assume the bunker is green adjacent.
|
If the bunker is near the pin, use your sand wedge. Also, ensure you have properly examined the green to understand its shape and slope. Combine this with an understanding of your own skill level to decide where you want the ball to land. For example, the more skilled you are at these shots, the more precise a location you can identify. Conversely, the less skilled you are, the less precise you can be in your chosen location, and consequently the more you need to consider a range of potential directions. In those latter cases, it is best to aim towards the middle of the green rather than the exact placement of the pin. Once you've made a decision on location, walk up to your ball in the bunker. (Pro tip: don't forget to bring the rake with you.) Choose an open stance, with your legs aiming roughly 45 degrees to the left of the selected landing location. If you're left-handed, the reverse is true, and your legs aim to the right of the intended direction. Dig your shoes roughly an inch into the sand to get a solid footing. Next, bring your club face down to the ball without actually touching the sand (it is not legal to make contact with the sand prior to contact with the ball). Open the club face so that it aims towards the intended location. Look one more time at the intended direction, then look down at the ball and commence your backswing. The plane of your backswing should be out, in line with the direction of your legs, and roughly three quarters the length of a full swing. Firmly and confidently accelerate through your downswing, and aim to make contact with the sand roughly half an inch behind the ball. The sand between the ball and the club head will blunt some of the force from the swing, reducing ball travel distance while still conveying spin. A well-hit shot can cause the ball to land on the green and immediately stop, possibly even spinning back towards the player. The force of the downswing, combined with the amount of sand caught by the club head, are the main contributors to the amount of backspin on the ball. Once you're done, rake up your marks as you exit the bunker, leaving no trace that you were ever there.
|
|
null | false
| null |
What side of the road do pedestrians walk on?
|
In India, cars and other vehicles keep left, and pedestrians are encouraged to walk on the right so that they can see oncoming traffic. In the USA, all traffic, both vehicles and pedestrians, usually keeps to the right.
|
|
null | false
| null |
What does democratizing data mean?
|
Making data accessible to all people and educating people on how to work with data, no matter their technical background.
|