| paper_id: string (length 10, nullable) | yes_no: bool (2 classes) | paper_index: int64 (0–519, nullable) | evidence: string (length 0–37.7k) | question: string (length 4–11.7k) | answer: string (length 1–26k) |
|---|---|---|---|---|---|
null | false
| null |
Fleabag is a British comedy-drama television series created and written by Phoebe Waller-Bridge, based on her one-woman show first performed in 2013 at the Edinburgh Fringe Festival. It was originally produced by Two Brothers Pictures for digital channel BBC Three in a co-production agreement with Amazon Studios. Waller-Bridge stars as the title character, a free-spirited but angry and confused young woman in London. Sian Clifford stars as Fleabag's sister Claire, while Andrew Scott joined the cast in the second series as 'Hot Priest'. The protagonist frequently breaks the fourth wall to provide exposition, internal monologues, and a running commentary to the audience.
|
Who wrote the TV show Fleabag?
|
Fleabag is a British comedy-drama television series created and written by Phoebe Waller-Bridge.
|
null | false
| null |
Ritz Crackers is a brand of snack cracker introduced by Nabisco in 1934. The original style crackers are disc-shaped, lightly salted, and approximately 46 millimetres (1.8 in) in diameter. Each cracker has seven perforations and a finely scalloped edge. Today, the Ritz cracker brand is owned by Mondelēz International.
A single serving of the original cracker (about 5 crackers or 15 grams) provides 330 kilojoules (79 kilocalories) of food energy, 1 gram of protein, and 4 grams of fat; the whole wheat variety provides 290 kJ (70 kcal) and 2.5 grams of fat.
|
Who owns Ritz Crackers?
|
Ritz was originally owned by Nabisco; after changes in ownership, the brand is now owned by Mondelēz International.
|
null | false
| null |
Give me a list of the best movies that are sequels
|
Cheaper by the Dozen 2, Night at the Museum 2, Grown-Ups 2, Toy Story 2
|
|
null | false
| null |
Identify which animal species is alive or extinct: Hallucigenia, Fin Whale
|
Fin Whale is alive, Hallucigenia is extinct.
|
|
null | false
| null |
what is hinduism
|
Hinduism is a diverse system of thought marked by a range of philosophies and shared concepts, rituals, cosmological systems, pilgrimage sites, and shared textual sources that discuss theology, metaphysics, mythology, Vedic yajna, yoga, agamic rituals, and temple building, among other topics. Prominent themes in Hindu beliefs include the four Puruṣārthas, the proper goals or aims of human life; namely, dharma (ethics/duties), artha (prosperity/work), kama (desires/passions) and moksha (liberation/freedom from the passions and the cycle of death and rebirth), as well as karma (action, intent and consequences) and saṃsāra (cycle of death and rebirth). Hinduism prescribes the eternal duties, such as honesty, refraining from injuring living beings (Ahiṃsā), patience, forbearance, self-restraint, virtue, and compassion, among others. Hindu practices include worship (puja), fire rituals (homa/havan), recitations (pravachan), devotion (bhakti), chanting (japa), meditation (dhyāna), sacrifice (yajña), charity (dāna), selfless service (sevā), homage to one's ancestors (śrāddha), family-oriented rites of passage, annual festivals, and occasional pilgrimages (yatra). Along with the various practices associated with yoga, some Hindus leave their social world and material possessions and engage in lifelong Sannyasa (monasticism) in order to achieve moksha.
|
|
null | false
| 34
|
Single-relation factoid questions are the most common form of questions found in search query logs and community question answering websites BIBREF1, BIBREF2. A knowledge-base (KB) such as Freebase, DBpedia, or Wikidata can help answer such questions after users reformulate them as queries. For instance, the question Where was Barack Obama born? can be answered by issuing the following KB query: $\lambda (x).place\_of\_birth(Barack\_Obama, x)$.
However, automatically mapping a natural language question such as Where was Barack Obama born? to its corresponding KB query remains a challenging task.
There are three key issues that make learning this mapping non-trivial. First, there are many paraphrases of the same question. Second, many of the KB entries are unseen during training time; however, we still need to correctly predict them at test time. Third, a KB such as Freebase typically contains millions of entities and thousands of predicates, making it difficult for a system to predict these entities at scale BIBREF1 , BIBREF3 , BIBREF0 . In this paper, we address all three of these issues with a character-level encoder-decoder framework that significantly improves performance over state-of-the-art word-level neural models, while also providing a much more compact model that can be learned from less data.
First, we use a long short-term memory (LSTM) BIBREF4 encoder to embed the question. Second, to make our model robust to unseen KB entries, we extract embeddings for questions, predicates and entities purely from their character-level representations. Character-level modeling has been previously shown to generalize well to new words not seen during training BIBREF5 , BIBREF6 , which makes it ideal for this task. Third, to scale our model to handle the millions of entities and thousands of predicates in the KB, instead of using a large output layer in the decoder to directly predict the entity and predicate, we use a general interaction function between the question embeddings and KB embeddings that measures their semantic relevance to determine the output. The combined use of character-level modeling and a semantic relevance function allows us to successfully produce likelihood scores for the KB entries that are not present in our vocabulary, a challenging task for standard encoder-decoder frameworks.
Our novel, character-level encoder-decoder model is compact, requires significantly less data to train than previous work, and is able to generalize well to unseen entities in test time. In particular, without use of ensembles, we achieve 70.9% accuracy in the Freebase2M setting and 70.3% accuracy in the Freebase5M setting on the SimpleQuestions dataset, outperforming the previous state-of-arts of 62.7% and 63.9% BIBREF0 by 8.2% and 6.4% respectively. Moreover, we only use the training questions provided in SimpleQuestions to train our model, which cover about 24% of words in entity aliases on the test set. This demonstrates the robustness of the character-level model to unseen entities. In contrast, data augmentation is usually necessary to provide more coverage for unseen entities and predicates, as done in previous work BIBREF0 , BIBREF1 .
Third, to scale our model to handle the millions of entities and thousands of predicates in the KB, instead of using a large output layer in the decoder to directly predict the entity and predicate, we use a general interaction function between the question embeddings and KB embeddings that measures their semantic relevance to determine the output.
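As a rough, hypothetical illustration of the scaling approach described above (not the authors' code), the sketch below encodes text character by character with an LSTM and scores candidate KB entities with a simple dot-product relevance function over their character-level embeddings; the shared encoder, names, and dimensions are all assumptions.

```python
# Hypothetical sketch of character-level encoding plus a semantic relevance
# function, in the spirit of the approach described above. Names, dimensions,
# and the scoring function are illustrative only.
import torch
import torch.nn as nn

class CharEncoder(nn.Module):
    def __init__(self, n_chars=128, char_dim=32, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(n_chars, char_dim)
        self.lstm = nn.LSTM(char_dim, hidden_dim, batch_first=True)

    def forward(self, char_ids):               # (batch, seq_len)
        x = self.embed(char_ids)                # (batch, seq_len, char_dim)
        _, (h, _) = self.lstm(x)                # h: (1, batch, hidden_dim)
        return h.squeeze(0)                     # (batch, hidden_dim)

def relevance_scores(question_vec, kb_entity_vecs):
    """Dot-product relevance between a question embedding and
    character-level embeddings of candidate KB entities."""
    return kb_entity_vecs @ question_vec        # (num_entities,)

# Toy usage: encode a question and three candidate entity strings.
encoder = CharEncoder()
def to_ids(s, max_len=40):
    ids = [min(ord(c), 127) for c in s[:max_len]]
    return torch.tensor(ids + [0] * (max_len - len(ids))).unsqueeze(0)

q_vec = encoder(to_ids("where was barack obama born"))[0]
entities = ["barack obama", "michelle obama", "honolulu"]
e_vecs = torch.cat([encoder(to_ids(e)) for e in entities], dim=0)
print(relevance_scores(q_vec, e_vecs))          # unnormalized relevance scores
```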
|
What approach do the authors take to scale their model to handle the millions of entities and thousands of predicates in the KB?
|
They use a general interaction function between the question embeddings and KB embeddings that measures their semantic relevance to determine the output.
|
null | false
| null |
Black rice, also known as purple rice or forbidden rice, is a range of rice types of the species Oryza sativa, some of which are glutinous rice.
There are several varieties of black rice available today. These include Indonesian black rice, Philippine heirloom balatinaw black rice and pirurutong black glutinous rice, and Thai jasmine black rice. Black rice is known as chak-hao in Manipur, India.
In Bangladesh, it is known as kalo dhaner chaal (black paddy rice) and used to make polao or rice-based desserts. The bran hull (outermost layer) of black rice contains one of the highest levels of anthocyanins found in food. The grain has a similar amount of fiber to brown rice and like brown rice, has a mild, nutty taste.
Black rice has a deep black color and usually turns deep purple when cooked. Its dark purple color is primarily due to its anthocyanin content, which is higher by weight than that of other colored grains. It is suitable for creating porridge, dessert, traditional Chinese black rice cake, bread, and noodles.
|
What is Black rice?
|
Black rice, also known as purple rice or forbidden rice, is a range of rice types of the species Oryza sativa, some of which are glutinous. When cooked, black rice usually turns deep purple. There are several varieties of black rice available today, which include Indonesian black rice, Philippine heirloom black rice and Thai jasmine black rice. In India, black rice is known as chak-hao. In Bangladesh, black rice is used to make desserts.
|
null | false
| null |
Which is a species of fish? Silver hake or Silverback
|
Silver hake
|
|
null | false
| 163
|
In the current work, we created a novel framework to detect sarcasm, that derives insights from human cognition, that manifests over eye movement patterns. We hypothesized that distinctive eye-movement patterns, associated with reading sarcastic text, enables improved detection of sarcasm. We augmented traditional linguistic features with cognitive features obtained from readers' eye-movement data in the form of simple gaze-based features and complex features derived from a graph structure. This extended feature-set improved the success rate of the sarcasm detector by 3.7%, over the best available system. Using cognitive features in an NLP Processing system like ours is the first proposal of its kind.
Our general approach may be useful in other NLP sub-areas like sentiment and emotion analysis, text summarization and question answering, where considering textual clues alone does not prove to be sufficient. We propose to augment this work in future by exploring deeper graph and gaze features. We also propose to develop models for the purpose of learning complex gaze feature representation, that accounts for the power of individual eye movement patterns along with the aggregated patterns of eye movements.
We propose to augment this work in future by exploring deeper graph and gaze features.
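As a purely illustrative sketch of the feature-augmentation idea described in this section (not the authors' system), gaze-based features can be concatenated with linguistic features before training an off-the-shelf classifier; the feature definitions, the synthetic data, and the SVM choice are all assumptions.

```python
# Hypothetical sketch: augmenting linguistic features with simple gaze-based
# features for sarcasm classification. Feature definitions, synthetic data,
# and the SVM classifier are illustrative, not the paper's exact setup.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def gaze_features(fixations):
    """Toy gaze features from a scanpath given as (word_index, duration_ms) pairs."""
    durations = np.array([d for _, d in fixations], dtype=float)
    # count regressions: fixations that jump back to an earlier word
    regressions = sum(1 for (a, _), (b, _) in zip(fixations, fixations[1:]) if b < a)
    return np.array([durations.mean(), durations.max(), len(fixations), regressions])

rng = np.random.default_rng(0)
linguistic_X = rng.normal(size=(100, 20))    # placeholder for n-gram/sentiment features
scanpaths = [[(int(i), float(rng.integers(80, 300))) for i in rng.permutation(8)]
             for _ in range(100)]
gaze_X = np.vstack([gaze_features(sp) for sp in scanpaths])
y = rng.integers(0, 2, size=100)             # sarcastic vs. not (random labels here)

X = np.hstack([linguistic_X, gaze_X])         # the augmented feature set
print(cross_val_score(SVC(kernel="linear"), X, y, cv=5).mean())
```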
|
By what way will the team augment the sarcasm detection system?
|
By exploring deeper graph and gaze features.
|
null | false
| null |
Classify the cities based on whether they are located in South Georgia and South Sandwich Islands or Dominican Republic.
Santiago, Santo Domingo Este, Santo Domingo, Grytviken
|
South Georgia and South Sandwich Islands - Grytviken
Dominican Republic - Santo Domingo, Santiago, Santo Domingo Este
|
|
null | false
| null |
Categorize the days of the week between weekdays and weekend.
Sunday, Monday, Tuesday, Wednesday, Thursday, Friday, Saturday
|
weekdays - Monday, Tuesday, Wednesday, Thursday, Friday
weekends - Saturday, Sunday
|
|
null | false
| null |
According to the official Department of Corrections web site, SCI Dallas (Luzerne County, 10 miles from Wilkes-Barre) was opened in 1960 as an institution for defective delinquents. After the state Supreme Court decision of 1966 voided the concept of "defective delinquents," Dallas, like Huntingdon, became an adult institution. It now is a medium-security facility for men.
|
Does SCI Dallas hold men or women?
|
men
|
null | false
| null |
Classify each of the following as human or animal: lion, monkey, zebra, clint eastwood, cat
|
Lion is an animal. Monkey is an animal. Zebra is an animal. Clint Eastwood is a human. Cat is an animal.
|
|
null | false
| null |
Q: Am I eligible for a booster dose of a COVID-19 vaccine, and if so, which one?
|
A: Individuals may receive a single booster dose of one of the authorized bivalent mRNA COVID-19 vaccines, when eligible, as follows:
Children who are 6 months through 5 years of age who received primary vaccination with Moderna COVID-19 Vaccine may receive a single booster dose of the Moderna COVID-19 Vaccine, Bivalent at least two months after completion of primary vaccination.
Children who are 6 months through 4 years of age who received primary vaccination with three doses of the monovalent Pfizer-BioNTech COVID-19 Vaccine may receive a single booster dose of the Pfizer-BioNTech COVID-19 Vaccine, Bivalent at least 2 months after completion of primary vaccination.
Children who are 5 years of age who received primary vaccination with Moderna COVID-19 Vaccine may receive either the Moderna COVID-19 Vaccine, Bivalent or Pfizer-BioNTech COVID-19, Bivalent as a single booster dose at least two months after completion of primary vaccination.
Children who are 5 years of age who received primary vaccination doses with Pfizer-BioNTech COVID-19 Vaccine, including children whose third dose was with the Pfizer-BioNTech COVID-19 Vaccine, Bivalent, may receive only the Pfizer-BioNTech COVID-19 Vaccine, Bivalent as a single booster dose at least two months after completion of primary vaccination.
Individuals who are 6 years of age and older may receive a single booster dose with either the Moderna COVID-19 Vaccine, Bivalent or the Pfizer-BioNTech COVID-19, Bivalent at least 2 months after either:
Completion of a primary vaccination with any FDA-authorized or approved COVID-19 vaccine.
or
The most recent booster dose with any FDA-authorized or approved monovalent COVID-19 vaccine.
|
|
null | false
| null |
What is Data Profiling?
|
Data Profiling is one of the principles of Data Management. Data Profiling is the activity of finding patterns in data. Data profiling is performed by teams that have access to data sets, for different use cases such as data pattern discovery, high-level analysis, and data exception handling for ETL. Data profiling is the first part of the Data Quality life cycle within the Data Governance methodology. Running data profiling on a data set can answer simple questions such as how many nulls are in the data set and what the different date patterns in the data set are. Profiling is typically done on a subset of the data, since tools have limits on the number of rows they can profile. These tools can also create visualizations of the profiling results for easier understanding of the patterns.
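For illustration only, a minimal profiling pass over a small tabular data set might look like the following pandas sketch; the column names and checks are hypothetical.

```python
# Minimal data-profiling sketch with pandas; column names are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "customer_id": [1, 2, 3, None, 5],
    "signup_date": ["2021-01-03", "03/02/2021", None, "2021-04-11", "2021-05-20"],
})

profile = {
    "row_count": len(df),
    "null_counts": df.isnull().sum().to_dict(),       # how many nulls per column
    "date_patterns": df["signup_date"].dropna()
        .str.replace(r"\d", "9", regex=True)           # e.g. '9999-99-99' vs '99/99/9999'
        .value_counts().to_dict(),
}
print(profile)
```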
|
|
null | false
| null |
The Premier League (legal name: The Football Association Premier League Limited) is the highest level of the English football league system. Contested by 20 clubs, it operates on a system of promotion and relegation with the English Football League (EFL). Seasons typically run from August to May with each team playing 38 matches (playing all other teams both home and away). Most games are played on Saturday and Sunday afternoons, with occasional weekday evening fixtures.
|
What is the Premier League?
|
The Premier League is the top tier of English football. The league has 20 teams; each season the bottom 3 teams are relegated to the 2nd tier and the top 3 teams from the 2nd tier are promoted. Each team plays the other 19 teams twice, once home and once away, for a total of 38 games. Most games are played on weekends, with the season running from August to May.
|
null | false
| null |
What are the best tips for professional Safari photography ?
|
-Patience is key. While most animals have predictable migratory patterns, a number of factors can disrupt them. If you are in a vehicle, try driving around and finding other spots.
-Carry at least 2 cameras, one with a wide-angle lens and one with a telephoto lens, as you won't have time to swap out lenses as different animals move around you.
-Gear can be very heavy, so don’t bring too large a telephoto lens, as shooting from a vehicle can be challenging.
-Be conscious of your surroundings, don’t be too distracted as you are still in an active environment and animals can react to you.
|
|
null | false
| null |
How many times have the Boston Celtics won the NBA championship?
|
The Celtics have won 17 NBA championships.
|
|
null | false
| 350
|
Machine reading comprehension and question answering has becomes a crucial application problem in evaluating the progress of AI system in the realm of natural language processing and understanding BIBREF0 . The computational linguistics communities have devoted significant attention to the general problem of machine reading comprehension and question answering.
However, most of existing reading comprehension tasks only focus on shallow QA tasks that can be tackled very effectively by existing retrieval-based techniques BIBREF1 . For example, recently we have seen increased interest in constructing extractive machine reading comprehension datasets such as SQuAD BIBREF2 and NewsQA BIBREF3 . Given a document and a question, the expected answer is a short span in the document. Question context usually contains sufficient information for identifying evidence sentences that entail question-answer pairs. For example, 90.2% questions in SQuAD reported by Min BIBREF4 are answerable from the content of a single sentence. Even in some multi-turn conversation tasks, the existing models BIBREF5 mostly focus on retrieval-based response matching.
In this paper, we focus on multiple-choice reading comprehension datasets such as RACE BIBREF6 in which each question comes with a set of answer options. The correct answer for most questions may not appear in the original passage which makes the task more challenging and allow a rich type of questions such as passage summarization and attitude analysis. This requires a more in-depth understanding of a single document and leverage external world knowledge to answer these questions. Besides, comparing to traditional reading comprehension problem, we need to fully consider passage-question-answer triplets instead of passage-question pairwise matching.
In this paper, we propose a new model, Dual Co-Matching Network, to match a question-answer pair to a given passage bidirectionally. Our network leverages the latest breakthrough in NLP: BERT BIBREF7 contextual embedding. In the origin BERT paper, the final hidden vector corresponding to first input token ([CLS]) is used as the aggregation representation and then a standard classification loss is computed with a classification layer. We think this method is too rough to handle the passage-question-answer triplet because it only roughly concatenates the passage and question as the first sequence and uses question as the second sequence, without considering the relationship between the question and the passage. So we propose a new method to model the relationship among the passage, the question and the candidate answer.
Firstly we use BERT as our encode layer to get the contextual representation of the passage, question, answer options respectively. Then a matching layer is constructed to get the passage-question-answer triplet matching representation which encodes the locational information of the question and the candidate answer matched to a specific context of the passage. Finally we apply a hierarchical aggregation method over the matching representation from word-level to sequence-level and then from sequence level to document-level. Our model improves the state-of-the-art model by 2.6 percentage on the RACE dataset with BERT base model and further improves the result by 3 percentage with BERT large model.
Our model improves the state-of-the-art model by 2.6 percentage on the RACE dataset with BERT base model and further improves the result by 3 percentage with BERT large model.
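As a loose, hypothetical sketch of the kind of bidirectional matching described above (not the paper's exact architecture), one can attend from the passage to a question-answer sequence and back, then pool the matched representations; the pooling and the toy scoring head are assumptions.

```python
# Hypothetical sketch of a bidirectional matching layer between a passage and
# a question-answer option, both already encoded (e.g., by BERT) into
# (seq_len, hidden) matrices. Pooling and scoring choices are illustrative.
import torch
import torch.nn.functional as F

def co_match(passage, answer):
    """passage: (Lp, H), answer: (La, H) contextual representations."""
    sim = passage @ answer.T                      # (Lp, La) word-level similarities
    p2a = F.softmax(sim, dim=1) @ answer          # each passage word summarizes the answer
    a2p = F.softmax(sim.T, dim=1) @ passage       # each answer word summarizes the passage
    # sequence-level aggregation: max-pool the matched representations
    return torch.cat([p2a.max(dim=0).values, a2p.max(dim=0).values])  # (2H,)

H = 8
passage = torch.randn(30, H)
question = torch.randn(6, H)
options = [torch.randn(5, H) for _ in range(4)]   # four candidate answers
scores = torch.stack([
    co_match(passage, torch.cat([question, opt], dim=0)).sum()  # toy scoring head
    for opt in options
])
print(F.softmax(scores, dim=0))                   # probability over the 4 options
```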
|
How much percentage does their model improve the state-of-the-art model on the RACE dataset with the BERT base model?
|
2.6 percentage.
|
null | false
| null |
How much is in a baker's dozen?
|
A baker's dozen is not 12, it is 13. There is a story for the origin of this practice that dates back to medieval times and bakers throwing in an extra roll or loaf of bread, to avoid being found guilty of cheating their customers based on prices vs quantity of wheat flour sold. In modern times, the practice is more about generosity of your baker; though you might not find a baker who actually gives you 13 when you pay for a dozen donuts.
|
|
2003.00576
| false
| null |
Finally, our combined model which uses both Latent and Explicit structure performs the best with a strong improvement of 1.08 points in ROUGE-L over our base pointer-generator model and 0.6 points in ROUGE-1. It shows that the latent and explicit information are complementary and a model can jointly leverage them to produce better summaries.
Finally, our combined model which uses both Latent and Explicit structure performs the best with a strong improvement of 1.08 points in ROUGE-L over our base pointer-generator model and 0.6 points in ROUGE-1.
|
By how much they improve over the previous state-of-the-art?
|
The answers are shown as follows:
* 1.08 points in ROUGE-L over our base pointer-generator model
* 0.6 points in ROUGE-1
|
null | false
| null |
Did dinosaurs have lips?
|
Recent research suggests that the popular perception that dinosaurs had exposed teeth may be incorrect, and the current hypothesis is that the Tyrannosaurus rex did indeed have lips. A 2023 study by Dr Mark Witton of the University of Portsmouth said popular depictions of dinosaurs in movies like Jurassic Park are out of date. The study examined a large tooth that had spent over 500 days in the mouth of a Tyrannosaurus rex with no evidence of substantial wear, which is consistent with all findings of teeth from the theropod family of dinosaurs. When contrasted with alligators and crocodiles, which have no lips and have more easily damaged teeth, the evidence points to theropods, including Tyrannosaurus rex, having had lips. If dinosaurs had no lips, they would have been expected to show wear on the thin enamel of their teeth, far more consistent with modern animals like crocodiles.
|
|
null | false
| null |
Give me some strategies to prevent procrastination
|
Procrastination is a huge problem for individuals in today's society. Though there are many strategies to beat procrastination, the effectiveness of these strategies is largely dependent on the individual and his/her/their circumstances. Here are some popular strategies to beat procrastination:
1. Set realistic, specific, and time-bound goals. The best goals should have some built-in time delays, but not too much, as to maintain some urgency. Large goals should also be broken into achievable chunks.
2. Limit your access to distracting media. Making use of your device's focus modes, or locking away your device altogether when working, can prevent you from being distracted while executing tasks.
3. Make the rewards of accomplishing goals tangible and more immediate. A popular strategy for this is temptation bundling, where you bundle a behavior that is good for you in the long-run with a behavior that feels good in the short-run. For example, you may only allow yourself to watch YouTube videos while you are folding laundry.
4. Use support systems. Often, using peers, family, or other individuals in your support system to help hold you accountable is an effective way to encourage task accomplishment. This method also promotes social interaction and continuous discussion of goals with others. As a caveat, you should not become reliant on this method, as it can create burdens on the other individuals in your support system
|
|
null | false
| null |
What does ELT stand for?
|
ELT stands for extract, load, and transform. This refers to a data flow lifecycle. Extract refers to extracting data from a source system of record, load refers to the process of loading that data into another system in the same format as the source system, and transform refers to the process of transforming the data within that new system after it has been loaded. ELT thus describes a process in which source data first lands in the target system of record in its original format and is only transformed afterwards.
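To make the pattern concrete, here is a toy, hypothetical ELT sketch (table and column names are made up): raw rows are extracted and loaded unchanged into the target store, and the transformation runs inside that store afterwards, simulated here with SQLite.

```python
# Toy ELT sketch: extract raw rows, load them unchanged into the target
# store, then transform inside that store with SQL. Table and column
# names are made up for illustration.
import sqlite3

def extract():
    # pretend this came from a source system of record (API, files, ...)
    return [("2021-01-03", "42.50"), ("2021-01-04", "17.00")]

def load(conn, rows):
    conn.execute("CREATE TABLE raw_sales (sale_date TEXT, amount TEXT)")
    conn.executemany("INSERT INTO raw_sales VALUES (?, ?)", rows)

def transform(conn):
    # transformation runs in the target system, after loading
    conn.execute("""CREATE TABLE sales AS
                    SELECT sale_date, CAST(amount AS REAL) AS amount
                    FROM raw_sales""")

conn = sqlite3.connect(":memory:")
load(conn, extract())
transform(conn)
print(conn.execute("SELECT SUM(amount) FROM sales").fetchone())
```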
|
|
null | false
| null |
What country has the highest life expectancy?
|
Life expectancy for humans has doubled over the last century, but there’s still a noticeable gap in average life expectancies between different countries. In general, developed countries see higher life expectancies than developing regions, but over time the gap is expected to narrow. Based on data from the World Bank covering 178 different countries, including healthcare spending per capita, Japan has the highest life expectancy at birth, at 85 years.
|
|
null | false
| null |
In 1998, Hingis won all four of the Grand Slam women's doubles titles, only the fourth in women's tennis history to do so, (the Australian Open with Mirjana Lučić and the other three events with Novotná), and she became only the third woman to hold the No. 1 ranking in both singles and doubles simultaneously. She also retained her Australian Open singles title by beating Conchita Martínez in straight sets in the final. Hingis, however, lost in the final of the US Open to Lindsay Davenport. Davenport ended an 80-week stretch Hingis had enjoyed as the No. 1 singles player in October 1998, but Hingis finished the year by beating Davenport in the final of the WTA Tour Championships.
|
From the passage provided, list the partners with whom Martina Hingis won the doubles Grand Slam in 1998
|
- Mirjana Lučić
- Jana Novotná
|
null | false
| null |
A running of the bulls (Spanish: encierro, from the verb encerrar, 'to corral, to enclose'; Occitan: abrivado, literally 'haste, momentum'; Catalan: correbous, 'run-bulls') is an event that involves running in front of a small group of bulls, typically six but sometimes ten or more, that have been let loose on sectioned-off streets in a town, usually as part of a summertime festival. Particular breeds of cattle may be favored, such as the toro bravo in Spain, also often used in post-run bullfighting, and Camargue cattle in Occitan France, which are not fought. Bulls (non-castrated male cattle) are typically used in such events.
The most famous bull-run is the encierro held in Pamplona during the nine-day festival of Sanfermines in honor of Saint Fermin. It has become a major global tourism event, today very different from the traditional, local festival. More traditional summer bull-runs are held in other places such as towns and villages across Spain and Portugal, in some cities in Mexico, and in the Occitan (Camargue) region of southern France. Bull-running was formerly also practiced in rural England, most famously at Stamford until 1837.
|
What is the most famous 'Running of the Bulls' event?
|
The most famous 'Running of the Bulls' event is held annually in Pamplona Spain during the nine-day festival of Sanfermines in honor of Saint Fermin.
|
null | false
| null |
Who is the best all-time soccer player?
|
It is hard to say and very subjective among fans of different clubs and nationalities. But, it would be safe to say that the best all-time soccer player is among the following list: Messi, Cristiano Ronaldo, Maradona, and Pele.
|
|
null | false
| null |
Tell me whether these are literary characters or fruits: Gandalf, banana, Frodo Baggins, Aragorn, orange, apple, Sauron
|
Gandalf - literary character
Banana - fruit
Frodo Baggins - literary character
Aragorn - literary character
orange - fruit
apple - fruit
Sauron - literary character
|
|
null | false
| null |
What do dogs like to do?
|
Dogs like to chase balls. They also like to chase small animals like squirrels. Depending on the breed, they may have other interests like sleeping on your lap. Some dogs bite and fight with cats.
|
|
null | false
| null |
Which of the numbers 1 through 10 are prime numbers?
|
The numbers 2, 3, 5 and 7 are prime numbers.
|
|
null | false
| null |
What are a few words that can be used to describe running?
|
You can use words like jog, fast, slow, trot, sprint, race, hurry, skip, dash, fly, and rush to describe or replace run.
|
|
null | false
| null |
Nuclear proliferation is the spread of nuclear weapons, fissionable material, and weapons-applicable nuclear technology and information to nations not recognized as "Nuclear Weapon States" by the Treaty on the Non-Proliferation of Nuclear Weapons, commonly known as the Non-Proliferation Treaty or NPT. Proliferation has been opposed by many nations with and without nuclear weapons, as governments fear that more countries with nuclear weapons will increase the possibility of nuclear warfare (up to and including the so-called countervalue targeting of civilians with nuclear weapons), de-stabilize international or regional relations, or infringe upon the national sovereignty of nation states.
Four countries besides the five recognized Nuclear Weapons States have acquired, or are presumed to have acquired, nuclear weapons: India, Pakistan, North Korea, and Israel. None of these four is a party to the NPT, although North Korea acceded to the NPT in 1985, then withdrew in 2003 and conducted its first nuclear test in 2006. One critique of the NPT is that the treaty is discriminatory in the sense that only those countries that tested nuclear weapons before 1968 are recognized as nuclear weapon states while all other states are treated as non-nuclear-weapon states who can only join the treaty if they forswear nuclear weapons.
Research into the development of nuclear weapons was initially undertaken during World War II by the United States (in cooperation with the United Kingdom and Canada), Germany, Japan, and the USSR. The United States was the first and is the only country to have used a nuclear weapon in war, when it used two bombs against Japan in August 1945. After surrendering to end the war, Germany and Japan ceased to be involved in any nuclear weapon research. In August 1949, the USSR tested a nuclear weapon, becoming the second country to detonate a nuclear bomb. The United Kingdom first tested a nuclear weapon in October 1952. France first tested a nuclear weapon in 1960. The People's Republic of China detonated a nuclear weapon in 1964. India conducted its first nuclear test in 1974, which prompted Pakistan to develop its own nuclear program and, when India conducted a second series of nuclear tests in 1998, Pakistan followed with a series of tests of its own. In 2006, North Korea conducted its first nuclear test.
|
Which countries besides the five recognized Nuclear Weapons States have acquired, or are presumed to have acquired, nuclear weapons?
|
India, Pakistan, North Korea, and Israel.
|
null | false
| null |
What film won the 1943 Oscar as best film
|
Casablanca
|
|
null | false
| null |
Name a good wood for a beginning wood carving project?
|
Basswood is a tree that grows in North America that is particularly good for a first wood carving project. The reason? It is soft enough to be easy to work, but hard enough that it will let you get detail and is reasonably hard to break while you are still learning what you are doing. It responds well to a simple knife or to more advanced woodworking tools like gouges. It's also a light color which looks nice unpainted, but it is also nice to paint. Last, it is relatively inexpensive when compared with woods like Walnut or Cherry.
|
|
null | false
| null |
Identify which animal species is alive or extinct: Pterodactyl, Snow Leopard
|
Snow Leopard is alive, Pterodactyl is extinct.
|
|
null | false
| null |
Why is Sauna or heat therapy good for you?
|
Thanks to our Finnish friends to the north, doctors have been able to collect great data on the effects of regular sauna use in adults. In Finland there are over 2.2 million saunas in a land with less than 6 million people total. Because regular sauna use is part of their culture, scientists were able to find that going into a sauna at least 5 times a week for a minimum of 12 minutes provided significant improvement in cardiovascular function and overall improvement to health and mood. Although the exact mechanisms are not fully understood, it is believed that the body releases increased amounts of human growth hormone from the bone marrow in the femur, along with heat shock proteins that allow the body to remove damaged cells. These factors have been associated with a reduced risk of certain kinds of cancers and improved sleep, and seem to also play a part in slowing the aging process.
|
|
1806.01733
| false
| null |
Our features consisted of direct similarity over ConceptNet Numberbatch embeddings, the relationships inferred over ConceptNet by SME, features that compose ConceptNet with other resources (WordNet and Wikipedia), and a purely corpus-based feature that looks up two-word phrases in the Google Books dataset.
Our features consisted of direct similarity over ConceptNet Numberbatch embeddings, the relationships inferred over ConceptNet by SME, features that compose ConceptNet with other resources (WordNet and Wikipedia), and a purely corpus-based feature that looks up two-word phrases in the Google Books dataset.
|
What features did they train on?
|
The answers are shown as follows:
* direct similarity over ConceptNet Numberbatch embeddings, the relationships inferred over ConceptNet by SME, features that compose ConceptNet with other resources (WordNet and Wikipedia), and a purely corpus-based feature that looks up two-word phrases in the Google Books dataset
|
null | false
| 121
|
Knowledge and/or data is often modeled in a structure, such as indexes, tables, key-value pairs, or triplets. These data, by their nature (e.g., raw data or long time-series data), are not easily usable by humans; outlining their crucial need to be synthesized. Recently, numerous works have focused on leveraging structured data in various applications, such as question answering BIBREF0, BIBREF1 or table retrieval BIBREF2, BIBREF3. One emerging research field consists in transcribing data-structures into natural language in order to ease their understandablity and their usablity. This field is referred to as “data-to-text" BIBREF4 and has its place in several application domains (such as journalism BIBREF5 or medical diagnosis BIBREF6) or wide-audience applications (such as financial BIBREF7 and weather reports BIBREF8, or sport broadcasting BIBREF9, BIBREF10). As an example, Figure FIGREF1 shows a data-structure containing statistics on NBA basketball games, paired with its corresponding journalistic description.
Designing data-to-text models gives rise to two main challenges: 1) understanding structured data and 2) generating associated descriptions. Recent data-to-text models BIBREF11, BIBREF12, BIBREF13, BIBREF10 mostly rely on an encoder-decoder architecture BIBREF14 in which the data-structure is first encoded sequentially into a fixed-size vectorial representation by an encoder. Then, a decoder generates words conditioned on this representation. With the introduction of the attention mechanism BIBREF15 on one hand, which computes a context focused on important elements from the input at each decoding step and, on the other hand, the copy mechanism BIBREF16, BIBREF17 to deal with unknown or rare words, these systems produce fluent and domain comprehensive texts. For instance, Roberti et al. BIBREF18 train a character-wise encoder-decoder to generate descriptions of restaurants based on their attributes, while Puduppully et al. BIBREF12 design a more complex two-step decoder: they first generate a plan of elements to be mentioned, and then condition text generation on this plan. Although previous work yield overall good results, we identify two important caveats, that hinder precision (i.e. factual mentions) in the descriptions:
Linearization of the data-structure. In practice, most works focus on introducing innovating decoding modules, and still represent data as a unique sequence of elements to be encoded. For example, the table from Figure FIGREF1 would be linearized to [(Hawks, H/V, H), ..., (Magic, H/V, V), ...], effectively leading to losing distinction between rows, and therefore entities. To the best of our knowledge, only Liu et al. BIBREF19, BIBREF11 propose encoders constrained by the structure but these approaches are designed for single-entity structures.
Arbitrary ordering of unordered collections in recurrent networks (RNN). Most data-to-text systems use RNNs as encoders (such as GRUs or LSTMs), these architectures have however some limitations. Indeed, they require in practice their input to be fed sequentially. This way of encoding unordered sequences (i.e. collections of entities) implicitly assumes an arbitrary order within the collection which, as demonstrated by Vinyals et al. BIBREF20, significantly impacts the learning performance.
To address these shortcomings, we propose a new structured-data encoder assuming that structures should be hierarchically captured. Our contribution focuses on the encoding of the data-structure, thus the decoder is chosen to be a classical module as used in BIBREF12, BIBREF10. Our contribution is threefold:
We model the general structure of the data using a two-level architecture, first encoding all entities on the basis of their elements, then encoding the data structure on the basis of its entities;
We introduce the Transformer encoder BIBREF21 in data-to-text models to ensure robust encoding of each element/entities in comparison to all others, no matter their initial positioning;
We integrate a hierarchical attention mechanism to compute the hierarchical context fed into the decoder.
We report experiments on the RotoWire benchmark BIBREF10 which contains around $5K$ statistical tables of NBA basketball games paired with human-written descriptions. Our model is compared to several state-of-the-art models. Results show that the proposed architecture outperforms previous models on BLEU score and is generally better on qualitative metrics.
In the following, we first present a state-of-the art of data-to-text literature (Section 2), and then describe our proposed hierarchical data encoder (Section 3). The evaluation protocol is presented in Section 4, followed by the results (Section 5). Section 6 concludes the paper and presents perspectives.
Designing data-to-text models gives rise to two main challenges: 1) understanding structured data and 2) generating associated descriptions.
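A minimal, hypothetical sketch of the two-level encoding idea described in this evidence (not the paper's implementation): records are encoded within each entity, the resulting entity vectors are encoded together, and a toy hierarchical attention weights entities and the records within them; all shapes, pooling choices, and the query vector are assumptions.

```python
# Hypothetical two-level encoder sketch: encode each entity from its records,
# then encode the set of entities. Shapes and pooling are illustrative, not
# the paper's exact architecture.
import torch
import torch.nn as nn

d_model, nhead = 32, 4
record_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
entity_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
record_encoder = nn.TransformerEncoder(record_layer, num_layers=1)
entity_encoder = nn.TransformerEncoder(entity_layer, num_layers=1)

# A game as a set of entities (e.g., players/teams), each a set of records.
num_entities, records_per_entity = 6, 10
records = torch.randn(num_entities, records_per_entity, d_model)

record_states = record_encoder(records)              # low level: records within an entity
entity_vectors = record_states.mean(dim=1)           # one vector per entity (pooling is a choice)
entity_states = entity_encoder(entity_vectors.unsqueeze(0))  # high level: entities within the table

# Toy hierarchical attention for one decoder step:
query = torch.randn(d_model)
entity_weights = torch.softmax(entity_states[0] @ query, dim=0)   # which entities matter
record_weights = torch.softmax(record_states @ query, dim=1)      # which records within each entity
context = (entity_weights[:, None]
           * (record_weights.unsqueeze(-1) * record_states).sum(dim=1)).sum(dim=0)
print(context.shape)   # torch.Size([32])
```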
|
What are the two main challenges when designing data-to-text models?
|
1) understanding structured data and 2) generating associated descriptions
|
1911.13066
| false
| null |
The English language is well studied under the umbrella of NLP, hence many resources and datasets for the different problems are available. However, research on English-Roman Urdu bilingual text lags behind because of non-availability of gold standard datasets. Our second contribution is that we present a large scale annotated dataset in Roman Urdu and English language with code-switching, for multi-class classification. The dataset consists of more than $0.3$ million records and has been made available for future research.
The dataset consists of more than $0.3$ million records and has been made available for future research.
|
What is the size of the dataset?
|
The answers are shown as follows:
* $0.3$ million records
|
null | false
| 134
|
We evaluate our method on natural language inference (NLI), paraphrase identification (PI), and sentiment classification. We also conduct analysis on gate values and experiments on model variants. For detailed experimental settings, we refer readers to the supplemental material.
For the NLI and PI tasks, there exists recent work specializing in sentence pair classification. However in this work we confine our model to the architecture that encodes each sentence using a shared encoder without any inter-sentence interaction, in order to focus on the effectiveness of the models in extracting semantics. But note that the applicability of CAS-LSTM is not limited to sentence encoding based approaches.
For the evaluation of performance of the proposed method on the NLI task, SNLI and MultiNLI datasets are used.
|
Which datasets are used for the evaluation of the performance of the proposed method on the NLI task?
|
SNLI and MultiNLI datasets
|
null | false
| null |
What is the most invaded country in history?
|
The most invaded country in history is difficult to determine definitively as there are many variables to consider. However, one country that has been invaded numerous times throughout its history is Poland. Poland has a long history of being invaded and occupied by various neighboring powers. In the 18th century, it was partitioned three times by Russia, Prussia, and Austria, and ceased to exist as an independent state for over a century
During World War II, Poland was invaded and occupied by Nazi Germany, resulting in the deaths of over six million Polish citizens, including three million Jews who were murdered in the Holocaust. Following the war, Poland became a satellite state of the Soviet Union and was occupied by Soviet forces until the fall of the Soviet Union in 1989.
Throughout its history, Poland has also been invaded by other powers, including the Mongols, Teutonic Knights, and Ottoman Empire, among others. The country's location in central Europe, as well as its fertile land and valuable resources, have made it a target for invasion throughout history.
Overall, while there is no definitive answer to which country has been the most invaded in history, Poland's history of being invaded and occupied by numerous neighboring powers makes it a strong contender for this title.
|
|
null | false
| null |
Give me some examples of metamorphic rocks.
|
Marble and slate are metamorphic rock types.
|
|
null | false
| null |
The ninth series of the British television drama series Waterloo Road began airing on 5 September 2013 on BBC One and ended on 12 March 2014. The series follows the lives of the staff and pupils of the eponymous school, a troubled Scottish comprehensive school. It consists of twenty episodes. This series also marks the departure of Grantly Budgen (Philip Martin Brown), the last of the original cast members.
Plot
The show follows the lives of the teachers and the pupils at the eponymous school of Waterloo Road, a failing inner-city comprehensive, tackling a wide range of issues often seen as taboo such as drug dealing, sexting, catfishing, stuttering, prescription drug abuse, stroke, identity fraud, stalking, human trafficking, deportation, drink driving and empty nest syndrome.
|
What is the plot of the Waterloo Road
|
The ninth series of the British television drama series Waterloo Road began airing on 5 September 2013 on BBC One and ended on 12 March 2014. The series follows the lives of the staff and pupils of the eponymous school, a troubled Scottish comprehensive school. It consists of twenty episodes. This series also marks the departure of Grantly Budgen (Philip Martin Brown), the last of the original cast members.
The show follows the lives of the teachers and the pupils at the eponymous school of Waterloo Road, a failing inner-city comprehensive, tackling a wide range of issues often seen as taboo such as drug dealing, sexting, catfishing, stuttering, prescription drug abuse, stroke, identity fraud, stalking, human trafficking, deportation, drink driving and empty nest syndrome.
|
1910.11491
| false
| null |
As discussed in section SECREF1, the attention model putting most of attention weight on just a few parts of the input tends to achieve good performance. Mathematically, when only a small number of values are large, the shape of the distribution is sharp and the variance of the attention distribution is large. Drawing on the concept of variance in mathematics, local variance loss is defined as the reciprocal of its variance expecting the attention model to be able to focus on more salient parts. The standard variance calculation is based on the mean of the distribution. However, as previous work BIBREF15, BIBREF16 mentioned that the median value is more robust to outliers than the mean value, we use the median value to calculate the variance of the attention distribution. Thus, local variance loss can be calculated as:
where $\hat{\cdot }$ is a median operator and $\epsilon $ is utilized to avoid zero in the denominator.
Drawing on the concept of variance in mathematics, local variance loss is defined as the reciprocal of its variance expecting the attention model to be able to focus on more salient parts. The standard variance calculation is based on the mean of the distribution. However, as previous work BIBREF15, BIBREF16 mentioned that the median value is more robust to outliers than the mean value, we use the median value to calculate the variance of the attention distribution. Thus, local variance loss can be calculated as:
where $\hat{\cdot }$ is a median operator and $\epsilon $ is utilized to avoid zero in the denominator.
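The equation itself does not survive in this extract; a plausible reconstruction from the surrounding description (the reciprocal of the attention distribution's variance, computed around its median $\hat{a}$, with $\epsilon$ keeping the denominator nonzero) would be:

```latex
\mathcal{L}_{\mathrm{var}} = \frac{1}{\mathrm{Var}(a) + \epsilon},
\qquad
\mathrm{Var}(a) = \frac{1}{|a|} \sum_{i} \left( a_i - \hat{a} \right)^2
```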
|
How do they define local variance?
|
The reciprocal of the variance of the attention distribution
|
null | false
| null |
Tell me whether these are examples of twentieth-century architecture and design: modernism, art deco, art nouveau, post-modernism, victorian
|
Modernism, post-modernism, art nouveau, victorian and art deco are all examples of twentieth-century architecture and design. Art nouveau and victorian can also be considered to be examples of nineteenth-century architecture and design, as art nouveau began circa 1890 and ended circa 1910, while victorian began circa 1837 and ended circa 1901, coinciding with the reign of Queen Victoria of Great Britain.
|
|
null | false
| 408
|
| Model \ # of MCMC steps | 10 | 20 | 30 | 40 | 50 | 200 |
|---|---|---|---|---|---|---|
| Short-run EBM | 421.3 | 194.88 | 117.02 | 140.79 | 198.09 | 54.23 |
| CoopNet | 33.74 | 33.48 | 34.12 | 33.85 | 42.99 | 38.88 |
| CoopFlow (Pre) | 16.46 | 15.20 | 15.80 | 16.80 | 15.64 | 17.94 |

Table 10: FID of Short-run EBM (EBM-SR), CoopNet and CoopFlow model under different numbers of MCMC steps on the CIFAR-10 dataset.
|
What would happen if you use 200-step instead of 30-step in CoopFlow (Pre) in section 5.2?
|
We have followed your suggestion to use 200-step Langevin dynamics in our CoopFlow (Pre) on the CIFAR-10 dataset. We use the same step size as in CoopFlow (Long). Due to limited time for carrying out our experiments, we currently get an FID of 17.94 in this setting. We believe that after more tuning, this score may be better. On the other hand, this may also suggest that in the pretrained normalizing flow case, the initial proposal provided by the normalizing flow may be good enough, so long-chain MCMC may actually not be needed. We have included this result in Table 10 of our appendix. Table 10 also shows FID performance over different numbers of MCMC steps.
|
null | false
| null |
If my car is broken while I am riding on the Highway, what should I do?
|
If your car breaks down while you are on the highway, try to stay calm and, if possible, find a safe place to park. Then try to determine whether it is an issue you can keep driving with, or damage that can be fixed on the spot, like a flat tire.
If the damage is unknown or it would be dangerous to continue driving, call the highway services for help and contact your insurance company if you have one.
|
|
null | false
| null |
Tell me whether the following animals are reptiles or amphibians: salamander, alligator, caiman, snake, toad.
|
Salamanders and toads are amphibians. Alligators, caimans, and snakes are reptiles.
|
|
null | false
| 158
|
In this paper, we present an efficient matching mechanism on multiple choice question answering task. We introduce two-staged CNN to match passage and choice on word level and sentence level. In addition, we use query-based CNN attention to enhance matching effect.
The power of the model is verified on MovieQA dataset, which yielded the state of the art result on the dataset. In the future, we are now working on training our model based on our own trained embedding with TF-IDF BIBREF9 weighting. Furthermore, we would like to test our model on open-answer task like SQuaD by seeing the whole corpus as an “answer pool" and solve it like multiple choice question.
The power of the model is verified on MovieQA dataset, which yielded the state of the art result on the dataset. In the future, we are now working on training our model based on our own trained embedding with TF-IDF [10] weighting. Further more, we would like to test our model on open-answer task like SQuaD by seeing the whole corpus as an “answer pool” and solve it like multiple choice question.
|
Do the authors have any work plan for the future?
|
They are now working on training their model based on their own trained embedding with TF-IDF [10] weighting. Furthermore, they would like to test their model on an open-answer task like SQuAD by seeing the whole corpus as an “answer pool” and solving it like a multiple choice question.
|
null | false
| null |
Give me a list of ten Mayan ruins to visit in Central America.
|
- Chichen Itza in Mexico
- Tikal in Guatemala
- Caracol in Belize
- Copan in Honduras
- Palenque in Mexico
- Tulum in Mexico
- Uxmal in Mexico
- Monte Alban in Mexico
- Calakmul in Mexico
- Ek Balam in Mexico
|
|
null | false
| null |
Tell me whether these sports are played on grass or ice: figure skating, hockey, football, soccer, curling
|
Figure Skating: Ice
Hockey: Ice
Football: Grass
Soccer: Grass
Curling: Ice
|
|
null | false
| 110
|
The Transformer architecture BIBREF0 for deep neural networks has quickly risen to prominence in NLP through its efficiency and performance, leading to improvements in the state of the art of Neural Machine Translation BIBREF1, BIBREF2, as well as inspiring other powerful general-purpose models like BERT BIBREF3 and GPT-2 BIBREF4. At the heart of the Transformer lie multi-head attention mechanisms: each word is represented by multiple different weighted averages of its relevant context. As suggested by recent works on interpreting attention head roles, separate attention heads may learn to look for various relationships between tokens BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9.
The attention distribution of each head is predicted typically using the softmax normalizing transform. As a result, all context words have non-zero attention weight. Recent work on single attention architectures suggest that using sparse normalizing transforms in attention mechanisms such as sparsemax – which can yield exactly zero probabilities for irrelevant words – may improve performance and interpretability BIBREF12, BIBREF13, BIBREF14. Qualitative analysis of attention heads BIBREF0 suggests that, depending on what phenomena they capture, heads tend to favor flatter or more peaked distributions.
Recent works have proposed sparse Transformers BIBREF10 and adaptive span Transformers BIBREF11. However, the “sparsity" of those models only limits the attention to a contiguous span of past tokens, while in this work we propose a highly adaptive Transformer model that is capable of attending to a sparse set of words that are not necessarily contiguous. Figure FIGREF1 shows the relationship of these methods with ours.
Our contributions are the following:
We introduce sparse attention into the Transformer architecture, showing that it eases interpretability and leads to slight accuracy gains.
We propose an adaptive version of sparse attention, where the shape of each attention head is learnable and can vary continuously and dynamically between the dense limit case of softmax and the sparse, piecewise-linear sparsemax case.
We make an extensive analysis of the added interpretability of these models, identifying both crisper examples of attention head behavior observed in previous work, as well as novel behaviors unraveled thanks to the sparsity and adaptivity of our proposed model.
We propose an adaptive version of sparse attention, where the shape of each attention head is learnable and can vary continuously and dynamically between the dense limit case of softmax and the sparse, piecewise-linear sparsemax case.
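For a concrete reference point on the sparse limit case mentioned above, here is a small, generic sparsemax implementation (a standard construction following Martins and Astudillo, 2016, not code from this paper); replacing softmax with it is what produces exactly-zero attention weights.

```python
# Minimal sparsemax (Martins & Astudillo, 2016) over a 1-D score vector,
# contrasted with softmax. This is a generic reference implementation,
# not code from the paper.
import torch

def sparsemax(z):
    z_sorted, _ = torch.sort(z, descending=True)
    cumsum = torch.cumsum(z_sorted, dim=0)
    k = torch.arange(1, z.numel() + 1, dtype=z.dtype)
    support = 1 + k * z_sorted > cumsum            # which coordinates stay nonzero
    k_z = support.sum()
    tau = (cumsum[k_z - 1] - 1) / k_z              # threshold subtracted from all scores
    return torch.clamp(z - tau, min=0)

scores = torch.tensor([2.0, 1.2, 0.1, -1.0])
print(torch.softmax(scores, dim=0))   # all weights strictly positive
print(sparsemax(scores))              # trailing weights are exactly zero
```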
|
What features does the adaptive version of sparse attention have?
|
The shape of each attention head is learnable and can vary continuously and dynamically between the dense limit case of softmax and the sparse, piecewise-linear sparsemax case.
|
null | false
| 27
|
To quantify the challenges that our new dataset presents, we evaluated the performance of a range of classifier models and out-of-scope prediction schemes.
To quantify the challenges that our new dataset presents, we evaluated the performance of a range of classifier models and out-of-scope prediction schemes.
|
What did the authors evaluate to quantify the challenges that the new dataset presents?
|
The authors evaluated the performance of a range of classifier models and out-of-scope prediction schemes.
|
null | false
| null |
When thinking about ways to describe communications, which of these adjectives describes verbosity, and which describe communication which is concise. Wordy, succinct, prattling, effusive, abridged, babbling.
|
Wordy, prattling, effusive and babbling are adjectives used to describe verbose communication. Succinct and abridged are adjectives describing concise communication.
|
|
null | false
| 104
|
We use the same baseline as used by Lang and Lapata (2011a), which has been shown to be difficult to outperform. This baseline assigns a semantic role to a constituent based on its syntactic function, i.e. the dependency relation to its head. If there is a total of $N$ clusters, the $N-1$ most frequent syntactic functions get a cluster each, and the rest are assigned to the $N$th cluster.
We use the same baseline as used by Lang and Lapata (2011a) which has been shown to be difficult to outperform.
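A tiny, hypothetical illustration of this baseline (the dependency relation labels are made up): with $N$ clusters, the $N-1$ most frequent syntactic functions each get their own cluster and everything else falls into the last one.

```python
# Toy illustration of the syntactic-function baseline: with N clusters, the
# N-1 most frequent dependency relations each get a cluster and all remaining
# relations share the last one. Relation labels are made up.
from collections import Counter

def baseline_clusters(constituent_relations, n_clusters):
    freq = Counter(constituent_relations)
    top = [rel for rel, _ in freq.most_common(n_clusters - 1)]
    cluster_of = {rel: i for i, rel in enumerate(top)}
    return [cluster_of.get(rel, n_clusters - 1) for rel in constituent_relations]

relations = ["SBJ", "OBJ", "SBJ", "ADV", "OBJ", "SBJ", "TMP", "LOC", "ADV"]
print(baseline_clusters(relations, n_clusters=3))
# SBJ and OBJ get clusters 0 and 1; ADV, TMP and LOC all fall into cluster 2
```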
|
Who proposed the baseline model used by the authors?
|
Lang and Lapata.
|
null | false
| 245
|
The goal of sentence simplification is to compose complex sentences into simpler ones so that they are more comprehensible and accessible, while still retaining the original information content and meaning. Sentence simplification has a number of practical applications. On one hand, it provides reading aids for people with limited language proficiency BIBREF1 , BIBREF2 , or for patients with linguistic and cognitive disabilities BIBREF3 . On the other hand, it can improve the performance of other NLP tasks BIBREF4 , BIBREF5 , BIBREF6 . Prior work has explored monolingual machine translation (MT) approaches, utilizing corpora of simplified texts, e.g., Simple English Wikipedia (SEW), and making use of statistical MT models, such as phrase-based MT (PBMT) BIBREF7 , BIBREF8 , BIBREF9 , tree-based MT (TBMT) BIBREF10 , BIBREF11 , or syntax-based MT (SBMT) BIBREF12 .
Inspired by the success of neural MT BIBREF13 , BIBREF14 , recent work has started exploring neural simplification with sequence to sequence (Seq2seq) models, also referred to as encoder-decoder models. Nisioi et al. Nisioi:17 implemented a standard LSTM-based Seq2seq model and found that they outperform PBMT, SBMT, and unsupervised lexical simplification approaches. Zhang and Lapata BIBREF15 viewed the encoder-decoder model as an agent and employed a deep reinforcement learning framework in which the reward has three components capturing key aspects of the target output: simplicity, relevance, and fluency. The common practice for Seq2seq models is to use recurrent neural networks (RNNs) with Long Short-Term Memory BIBREF16 or Gated Recurrent Unit BIBREF17 for the encoder and decoder BIBREF18 , BIBREF15 . These architectures were designed to be capable of memorizing long-term dependencies across sequences. Nevertheless, their memory is typically small and might not be enough for the simplification task, where one is confronted with long and complicated sentences. In this study, we go beyond the conventional LSTM/GRU-based Seq2seq models and propose to use a memory-augmented RNN architecture called Neural Semantic Encoders (NSE). This architecture has been shown to be effective in a wide range of NLP tasks BIBREF0 . The contribution of this paper is twofold:
(1) First, we present a novel simplification model which is, to the best of our knowledge, the first model that use memory-augmented RNN for the task. We investigate the effectiveness of neural Seq2seq models when different neural architectures for the encoder are considered. Our experiments reveal that the NseLstm model that uses an NSE as the encoder and an LSTM as the decoder performed the best among these models, improving over strong simplification systems. (2) Second, we perform an extensive evaluation of various approaches proposed in the literature on different datasets. Results of both automatic and human evaluation show that our approach is remarkably effective for the task, significantly reducing the reading difficulty of the input, while preserving grammaticality and the original meaning. We further discuss some advantages and disadvantages of these approaches.
Results of both automatic and human evaluation show that our approach is remarkably effective for the task, significantly reducing the reading difficulty of the input, while preserving grammaticality and the original meaning.
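For orientation, here is a minimal PyTorch sketch of the conventional LSTM-based Seq2seq setup the passage describes as common practice; it is not the paper's NseLstm model (which would replace the encoder with a Neural Semantic Encoder), and the vocabulary size, dimensions and teacher-forced decoding are illustrative assumptions.

```python
import torch
import torch.nn as nn

class Seq2SeqSimplifier(nn.Module):
    """Minimal LSTM encoder-decoder of the kind the passage calls 'common
    practice'; the NseLstm variant would swap the encoder for an NSE."""

    def __init__(self, vocab_size, emb_dim=256, hid_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.decoder = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, src_ids, tgt_ids):
        # Encode the complex sentence; keep only the final (h, c) state.
        _, state = self.encoder(self.embed(src_ids))
        # Teacher-forced decoding of the simplified sentence.
        dec_out, _ = self.decoder(self.embed(tgt_ids), state)
        return self.out(dec_out)  # (batch, tgt_len, vocab) logits

model = Seq2SeqSimplifier(vocab_size=10000)
logits = model(torch.randint(0, 10000, (2, 20)), torch.randint(0, 10000, (2, 15)))
print(logits.shape)  # torch.Size([2, 15, 10000])
```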
|
What effect does their method achieve?
|
Significantly reducing the reading difficulty of the input, while preserving grammaticality and the original meaning.
|
null | false
| null |
La buenaventura (1934)
Grand Canary (1934)
Tripping Through the Tropics (1934)
Tres Amores (1934)
El cantante de Napoles (1935)
Julieta Buys a Son (1935)
Rosa de Francia (1935)
El crimen de media noche (1936)
El capitan Tormenta (1936)
Song of the Gringo (1936)
Fiesta (1941)
The Face of Marble (1946)
Gilda (1946)
Two Years Before the Mast (1946)
Secret Beyond the Door... (1947)
Secret of the Incas (1954)
The Rose Tattoo (1955)
The Bottom of the Bottle (1956)
Lawman (1959 episode "The Outsider") as Mrs. Lebeau
|
Which of the following films came out after 1948?
|
Secret of the Incas
The Rose Tattoo
The Bottom of the Bottle
Lawman
|
null | false
| null |
The final section, titled "London 1999", is narrated by Briony herself in the form of a diary entry. Now 77, she is a successful novelist who has recently been diagnosed with vascular dementia, so she is facing rapid mental decline. The reader learns that Briony is the author of the preceding sections of the novel. Briony attends a party in her honor at the Tallis family home, where the extended Tallis children perform The Trials of Arabella, the play that 13-year-old Briony had written and unsuccessfully attempted to stage with her cousins in the summer of 1935. Leon and Pierrot are in attendance, Jackson is fifteen years deceased, and Lola is alive but does not attend. Finally, Briony reveals to the reader that Robbie Turner died of septicaemia on the beaches of Dunkirk, that Cecilia was killed several months later when a bomb destroyed Balham Underground station during the Blitz, and that Briony's story of seeing them together in 1940 was a fabrication. Briony did attend Lola's wedding to Paul Marshall, but confesses she was too "cowardly" to visit the recently bereaved Cecilia to make amends. The novel, which she says is factually true apart from Robbie and Cecilia being reunited, is her lifelong attempt at "atonement" for what she did to them. Briony justifies her invented happy ending by saying she does not see what purpose it would serve to give readers a "pitiless" story. She writes, "I like to think that it isn't weakness or evasion, but a final act of kindness, a stand against oblivion and despair, to let my lovers live and to unite them at the end."
|
Based on this paragraph, what does Briony mean when she says at the conclusion: ""I like to think that it isn't weakness or evasion, but a final act of kindness, a stand against oblivion and despair, to let my lovers live and to unite them at the end."
|
Based on this passage, Briony is explaining, as the narrator of the novel, why she fabricated the portion of the story involving the reunion of Robbie and Cecilia.
|
null | false
| null |
How to remove limescale from showers and taps?
|
One natural way to remove limescale from showers and taps is to use white vinegar. Apply it to the limescale and let it sit for about 30 minutes before rinsing with clear water. The higher the vinegar concentration of the product, the better it works.
|
|
null | false
| null |
write short essay on Buzz Aldrin experience on moon
|
Buzz Aldrin, the second person to step onto the Moon's surface after Neil Armstrong, had a unique experience during the Apollo 11 mission in 1969. Like Armstrong, Aldrin spent just over two hours outside the lunar module, conducting experiments and collecting samples.
Aldrin's experience on the Moon was both exhilarating and challenging. He described the lunar landscape as "magnificent desolation," with vast, barren plains and towering mountains in the distance. Aldrin also noted the difficulty of moving around in the bulky spacesuit and the constant hissing sound of his oxygen supply.
Despite these challenges, Aldrin managed to conduct experiments and collect samples that would later be used to study the Moon's geology and history. He also took photographs and planted the American flag on the lunar surface.
Aldrin's experience on the Moon was a testament to human resilience and the spirit of exploration. He, along with Armstrong and the rest of the Apollo 11 team, pushed the boundaries of what was possible and opened up new frontiers for science and discovery. Aldrin's dedication to science and exploration continues to inspire future generations of astronauts and scientists.
|
|
1910.03814
| false
| null |
The objective of this work is to build a hate speech detector that leverages both textual and visual data and detects hate speech publications based on the context given by both data modalities. To study how the multimodal context can boost the performance compared to an unimodal context we evaluate different models: a Feature Concatenation Model (FCM), a Spatial Concatenation Model (SCM) and a Textual Kernels Model (TKM). All of them are CNN+RNN models with three inputs: the tweet image, the tweet text and the text appearing in the image (if any).
To study how the multimodal context can boost the performance compared to an unimodal context we evaluate different models: a Feature Concatenation Model (FCM), a Spatial Concatenation Model (SCM) and a Textual Kernels Model (TKM).
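A hedged sketch of the feature-concatenation idea behind the FCM is given below; the specific encoders, feature dimensions and classifier head are assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class FeatureConcatenationModel(nn.Module):
    """Sketch of the FCM idea: encode the tweet image, the tweet text and
    the text found inside the image, concatenate, and classify."""

    def __init__(self, vocab_size, img_dim=2048, emb_dim=100, hid_dim=150):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.text_rnn = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.img_text_rnn = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.classifier = nn.Sequential(
            nn.Linear(img_dim + 2 * hid_dim, 512), nn.ReLU(),
            nn.Linear(512, 2),  # hate / not hate
        )

    def forward(self, img_feats, tweet_ids, img_text_ids):
        # img_feats: pre-extracted CNN features for the tweet image.
        _, (h_tweet, _) = self.text_rnn(self.embed(tweet_ids))
        _, (h_img_text, _) = self.img_text_rnn(self.embed(img_text_ids))
        fused = torch.cat([img_feats, h_tweet[-1], h_img_text[-1]], dim=-1)
        return self.classifier(fused)
```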
|
What different models for multimodal detection were proposed?
|
The answers are shown as follows:
* Feature Concatenation Model (FCM)
* Spatial Concatenation Model (SCM)
* Textual Kernels Model (TKM)
|
null | false
| 269
|
Subword segmentation has become a standard preprocessing step in many neural approaches to natural language processing (NLP) tasks, e.g Neural Machine Translation (NMT) BIBREF0 and Automatic Speech Recognition (ASR) BIBREF1. Word level modeling suffers from sparse statistics, issues with Out-of-Vocabulary (OOV) words, and heavy computational cost due to a large vocabulary. Word level modeling is particularly unsuitable for morphologically rich languages, but subwords are commonly used for other languages as well. Subword segmentation is best suited for languages with agglutinative morphology.
While rule-based morphological segmentation systems can achieve high quality, the large amount of human effort needed makes the approach problematic, particularly for low-resource languages. The systems are language dependent, necessitating use of multiple tools in multilingual setups. As a fast, cheap and effective alternative, data-driven segmentation can be learned in a completely unsupervised manner from raw corpora. Unsupervised morphological segmentation saw much research interest until the early 2010's; for a survey on the methods, see hammarstrom2011unsupervised. Semi-supervised segmentation with already small amounts of annotated training data was found to improve the accuracy significantly when compared to a linguistic segmentation; see ruokolainen2016comparative for a survey. While this line of research has been continued in supervised and more grammatically oriented tasks BIBREF2, the more recent work on unsupervised segmentation is less focused on approximating a linguistically motivated segmentation. Instead, the aim has been to tune subword segmentations for particular applications. For example, the simple substitution dictionary based Byte Pair Encoding segmentation algorithm BIBREF3, first proposed for NMT by sennrich2015neural, has become a standard in the field. Especially in the case of multilingual models, training a single language-independent subword segmentation method is preferable to linguistic segmentation BIBREF4.
In this study, we compare three existing and one novel subword segmentation method, all sharing the use of a unigram language model in a generative modeling framework. The previously published methods are Morfessor Baseline BIBREF5, Greedy Unigram Likelihood BIBREF6, and SentencePiece BIBREF7. The new Morfessor variant proposed in this work is called Morfessor EM+Prune.
The contributions of this article are
a better training algorithm for Morfessor Baseline, with reduction of search error during training, and improved segmentation quality for English, Finnish and Turkish;
comparing four similar segmentation methods, including a close look at the SentencePiece reference implementation, highlighting details omitted from the original article BIBREF7;
and showing that the proposed Morfessor EM+Prune with particular hyper-parameters yields SentencePiece.
The contributions of this article are a better training algorithm for Morfessor Baseline, with reduction of search error during training, and improved segmentation quality for English, Finnish and Turkish; comparing four similar segmentation methods, including a close look at the SentencePiece reference implementation, highlighting details omitted from the original article; and showing that the proposed Morfessor EM+Prune with particular hyper-parameters yields SentencePiece.
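As a concrete reference point for the BPE algorithm mentioned above (not the proposed Morfessor EM+Prune method, whose unigram-LM training and pruning are not shown), the classic toy merge-learning loop looks roughly like this:

```python
import re
from collections import Counter

def get_pair_stats(vocab):
    """Count how often each adjacent symbol pair occurs in the vocabulary."""
    pairs = Counter()
    for word, freq in vocab.items():
        symbols = word.split()
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    return pairs

def merge_pair(pair, vocab):
    """Merge every occurrence of the given symbol pair into one symbol."""
    bigram = re.escape(" ".join(pair))
    pattern = re.compile(r"(?<!\S)" + bigram + r"(?!\S)")
    return {pattern.sub("".join(pair), word): freq for word, freq in vocab.items()}

# Words are space-separated character sequences with an end-of-word marker.
vocab = {"l o w </w>": 5, "l o w e r </w>": 2, "n e w e s t </w>": 6, "w i d e s t </w>": 3}
for _ in range(10):
    stats = get_pair_stats(vocab)
    if not stats:
        break
    best = max(stats, key=stats.get)
    vocab = merge_pair(best, vocab)
    print("merged", best)
```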
|
What language did they improve segmentation quality for?
|
English, Finnish and Turkish.
|
null | false
| null |
Identify which animal species is alive or extinct: Hovasaurus, Green Turtle
|
Green Turtle is alive, Hovasaurus is extinct.
|
|
null | false
| null |
Ballspielverein Borussia 09 e. V. Dortmund, commonly known as Borussia Dortmund (German pronunciation: [boˈʁʊsi̯aː ˈdɔɐ̯tmʊnt] (listen)), BVB (pronounced [beːfaʊ̯ˈbeː] (listen)), or simply Dortmund (pronounced [ˈdɔʁtmʊnt] (listen)), is a German professional sports club based in Dortmund, North Rhine-Westphalia. It is best known for its men's professional football team, which plays in the Bundesliga, the top tier of the German football league system. The club have won eight league championships, five DFB-Pokals, one UEFA Champions League, one Intercontinental Cup, and one UEFA Cup Winners' Cup.
Founded in 1909 by eighteen football players from Dortmund, the football team is part of a large membership-based sports club with more than 145,000 members, making Borussia Dortmund the second largest sports club by membership in Germany. The club has active departments in other sports, namely in women's handball. Since 1974, Dortmund have played their home games at Westfalenstadion; the stadium is the largest in Germany, and Dortmund has the highest average attendance of any association football club in the world.
|
Tell me when Borussia Dortmund was founded.
|
Founded in 1909
|
null | false
| null |
Kennedy was born on February 22, 1932, at St. Margaret's Hospital in the Dorchester section of Boston, Massachusetts. He was the youngest of the nine children of Joseph Patrick Kennedy and Rose Fitzgerald, members of prominent Irish American families in Boston. They constituted one of the wealthiest families in the nation after their marriage. His eight siblings were Joseph Jr., John, Rosemary, Kathleen, Eunice, Patricia, Robert, and Jean. His older brother John asked to be the newborn's godfather, a request his parents honored, though they did not agree to his request to name the baby George Washington Kennedy (Ted was born on President George Washington's 200th birthday). They named the boy after their father's assistant.
|
Based on this paragraph, how many younger siblings did Kennedy have?
|
Based on this paragraph, Kennedy had no younger siblings, as he was the youngest of nine children.
|
null | false
| 88
|
Knowledge graphs have been proved to benefit many artificial intelligence applications, such as relation extraction, question answering and so on. A knowledge graph consists of multi-relational data, having entities as nodes and relations as edges. An instance of fact is represented as a triplet (Head Entity, Relation, Tail Entity), where the Relation indicates a relationship between these two entities. In the past decades, great progress has been made in building large scale knowledge graphs, such as WordNet BIBREF0 , Freebase BIBREF1 . However, most of them have been built either collaboratively or semi-automatically and as a result, they often suffer from incompleteness and sparseness.
The knowledge graph completion is to predict relations between entities based on existing triplets in a knowledge graph. Recently, a new powerful paradigm has been proposed to encode every element (entity or relation) of a knowledge graph into a low-dimensional vector space BIBREF2 , BIBREF3 . The representations of entities and relations are obtained by minimizing a global loss function involving all entities and relations. Therefore, we can do reasoning over knowledge graphs through algebraic computations.
Although existing methods have good capability to learn knowledge graph embeddings, it remains challenging for entities with few or no facts BIBREF4 . To solve the issue of KB sparsity, many methods have been proposed to learn knowledge graph embeddings by utilizing related text information BIBREF5 , BIBREF6 , BIBREF7 . These methods learn joint embedding of entities, relations, and words (or phrases, sentences) into the same vector space. However, there are still three problems to be solved. (1) The combination methods of the structural and textual representations are not well studied in these methods, in which two kinds of representations are merely aligned on word level or separate loss function. (2) The text description may represent an entity from various aspects, and various relations only focus on fractional aspects of the description. A good encoder should select the information from text in accordance with certain contexts of relations. Figure 1 illustrates the fact that not all information provided in its description are useful to predict the linked entities given a specific relation. (3) Intuitively, entities with many facts depend more on well-trained structured representation while those with few or no facts might be largely determined by text descriptions. A good representation should learn the most valuable information by balancing both sides.
In this paper, we propose a new deep architecture to learn the knowledge representation by utilizing the existing text descriptions of entities. Specifically, we learn a joint representation of each entity from two information sources: one is structure information, and another is its text description. The joint representation is the combination of the structure and text representations with a gating mechanism. The gate decides how much information from the structure or text representation will carry over to the final joint representation. In addition, we also introduce an attention mechanism to select the most related information from text description under different contexts. Experimental results on link prediction and triplet classification show that our joint models can handle the sparsity problem well and outperform the baseline method on all metrics with a large margin.
Our contributions in this paper are summarized as follows.
Specifically, we learn a joint representation of each entity from two information sources: one is structure information, and another is its text description. The joint representation is the combination of the structure and text representations with a gating mechanism.
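A minimal sketch of such a gating mechanism is shown below; the exact parameterization used in the paper may differ, so treat the gate formulation as an illustration.

```python
import torch
import torch.nn as nn

class GatedJointEmbedding(nn.Module):
    """Combine an entity's structure-based embedding e_s and its
    description-based embedding e_t through an element-wise gate."""

    def __init__(self, dim):
        super().__init__()
        self.gate_layer = nn.Linear(2 * dim, dim)

    def forward(self, e_s, e_t):
        # g in (0, 1)^dim decides how much of each source to keep.
        g = torch.sigmoid(self.gate_layer(torch.cat([e_s, e_t], dim=-1)))
        return g * e_s + (1.0 - g) * e_t

joint = GatedJointEmbedding(dim=100)
e = joint(torch.randn(4, 100), torch.randn(4, 100))
print(e.shape)  # torch.Size([4, 100])
```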
|
What are the sources of the information to build a joint representation?
|
Structure information and text description.
|
null | false
| null |
Mohandas Karamchand Gandhi was born on 2 October 1869 into a Gujarati Hindu Modh Bania family in Porbandar (also known as Sudamapuri), a coastal town on the Kathiawar Peninsula and then part of the small princely state of Porbandar in the Kathiawar Agency of the British Raj. His father, Karamchand Uttamchand Gandhi (1822–1885), served as the dewan (chief minister) of Porbandar state. His family originated from the then village of Kutiana in what was then Junagadh State.
|
From the passage provided, extract the date of birth of Gandhi
|
Gandhi was born on 2 October 1869
|
1910.02789
| false
| null |
We tested the natural language representation against the visual-based and feature representations on several tasks, with varying difficulty. In these tasks, the agent could navigate, shoot, and collect items such as weapons and medipacks. Often, enemies of different types attacked the agent, and a positive reward was given when an enemy was killed. Occasionally, the agent also suffered from health degeneration. The tasks included a basic scenario, a health gathering scenario, a scenario in which the agent must take cover from fireballs, a scenario in which the agent must defend itself from charging enemies, and a super scenario, where a mixture of the above scenarios was designed to challenge the agent.
We tested the natural language representation against the visual-based and feature representations on several tasks, with varying difficulty.
The tasks included a basic scenario, a health gathering scenario, a scenario in which the agent must take cover from fireballs, a scenario in which the agent must defend itself from charging enemies, and a super scenario, where a mixture of the above scenarios was designed to challenge the agent.
|
What experiments authors perform?
|
The answers are shown as follows:
* a basic scenario, a health gathering scenario, a scenario in which the agent must take cover from fireballs, a scenario in which the agent must defend itself from charging enemies, and a super scenario, where a mixture of the above scenarios
|
null | false
| null |
Classify which of these cars are German brands and which are Japanese: BMW, Mercedes Benz, Nissan, Toyota, Honda, Audi.
|
BMW, Mercedes Benz and Audi are German car manufacturers, and Nissan, Toyota and Honda are Japanese car manufacturers.
|
|
1804.09301
| false
| null |
In this work, we evaluate three publicly-available off-the-shelf coreference resolution systems, representing three different machine learning paradigms: rule-based systems, feature-driven statistical systems, and neural systems.
We evaluate examples of each of the three coreference system architectures described in "Coreference Systems" : the BIBREF5 sieve system from the rule-based paradigm (referred to as RULE), BIBREF6 from the statistical paradigm (STAT), and the BIBREF11 deep reinforcement system from the neural paradigm (NEURAL).
In this work, we evaluate three publicly-available off-the-shelf coreference resolution systems, representing three different machine learning paradigms: rule-based systems, feature-driven statistical systems, and neural systems.
We evaluate examples of each of the three coreference system architectures described in "Coreference Systems" : the BIBREF5 sieve system from the rule-based paradigm (referred to as RULE), BIBREF6 from the statistical paradigm (STAT), and the BIBREF11 deep reinforcement system from the neural paradigm (NEURAL).
|
Which coreference resolution systems are tested?
|
The answers are shown as follows:
* the BIBREF5 sieve system from the rule-based paradigm (referred to as RULE), BIBREF6 from the statistical paradigm (STAT), and the BIBREF11 deep reinforcement system from the neural paradigm (NEURAL).
|
null | false
| null |
Where can you observe the stars?
|
It's typically best to observe the stars on a dark night far away from ambient light of cities.
|
|
null | false
| null |
How to gain more subscribers on YouTube
|
1. Post daily
2. Stick to one theme for your channel
3. Do not post the same type of content every day
|
|
1612.06897
| false
| null |
FLOAT SELECTED: Figure 1: German→English: Learning curve of the continue training. Scores are given in (TER-BLEU)/2 (lower is better). tst2013 is our in-domain and newstest2014 is our out-of-domain test set. The baseline model is only trained on the large amount of out-of-domain data.
FLOAT SELECTED: Figure 1: German→English: Learning curve of the continue training. Scores are given in (TER-BLEU)/2 (lower is better). tst2013 is our in-domain and newstest2014 is our out-of-domain test set. The baseline model is only trained on the large amount of out-of-domain data.
|
How many examples do they have in the target domain?
|
Around 388k examples, 194k from tst2013 (in-domain) and 194k from newstest2014 (out-of-domain)
|
null | false
| null |
How was Netflix able to overcome Blockbuster’s industry dominance in home video rentals?
|
Blockbuster, at its peak, was raking in $6 billion in revenues when Netflix entered the market as a disruptor with its mail-in-DVD service. When Netflix arrived, Blockbuster was already a mature offering with no innovation. It had 6,000 stores nationwide, and the customer service could have been better, especially around late fees: Blockbuster collected $800 million in late fees from customers in 2006 alone.
Netflix's mail-in-DVD service offered lower performance for customers, as they had to pick titles on the portal in advance and wait two to three days for the discs to arrive in their mailbox. However, customers disgruntled by Blockbuster's late fees switched to Netflix, which was the initial trigger; Netflix's beachhead audience was made up of Blockbuster's dissatisfied customers.
Netflix is led by Reed Hastings, a visionary who made a bet on video streaming on the internet. With scarce resources spent on technology, Reed Hastings incorporated a “Freedom and responsibility” culture within Netflix. This propelled Netflix into the new era of winning in the subscription video streaming business model. Blockbuster meanwhile had expensive operations with 6000 stores and 25000 staff, while Netflix operated with two data centers and a few warehouses with less than 500 staff. Blockbuster eventually launched its online business. However, the 2008 financial crisis and the mounting debt they incurred for their investments in their online business was too much for the company, and they eventually filed for bankruptcy.
|
|
null | false
| null |
What are a couple of ways to increase profitability?
|
Since profitability can be computed as revenue less expenses for a period, there are several ways to increase profitability. To increase revenue, you could sell more units or increase the price. To lower expenses, you could reduce the cost to produce your good or service or reduce SG&A costs
|
|
null | false
| null |
The University of Oxford is a collegiate research university in Oxford, England. There is evidence of teaching as early as 1096, making it the oldest university in the English-speaking world and the world's second-oldest university in continuous operation. It grew rapidly from 1167 when Henry II banned English students from attending the University of Paris. After disputes between students and Oxford townsfolk in 1209, some academics fled north-east to Cambridge where they established what became the University of Cambridge. The two English ancient universities share many common features and are jointly referred to as Oxbridge.
The university is made up of thirty-nine semi-autonomous constituent colleges, five permanent private halls, and a range of academic departments which are organised into four divisions. All the colleges are self-governing institutions within the university, each controlling its own membership and with its own internal structure and activities. All students are members of a college. It does not have a main campus, and its buildings and facilities are scattered throughout the city centre. Undergraduate teaching at Oxford consists of lectures, small-group tutorials at the colleges and halls, seminars, laboratory work and occasionally further tutorials provided by the central university faculties and departments. Postgraduate teaching is provided predominantly centrally.
Oxford operates the world's oldest university museum, as well as the largest university press in the world and the largest academic library system nationwide. In the fiscal year ending 31 July 2019, the university had a total income of £2.45 billion, of which £624.8 million was from research grants and contracts.
Oxford has educated a wide range of notable alumni, including 30 prime ministers of the United Kingdom and many heads of state and government around the world. As of October 2022, 73 Nobel Prize laureates, 4 Fields Medalists, and 6 Turing Award winners have studied, worked, or held visiting fellowships at the University of Oxford, while its alumni have won 160 Olympic medals. Oxford is the home of numerous scholarships, including the Rhodes Scholarship, one of the oldest international graduate scholarship programmes.
|
Extract the key bulletpoints I need to know about University of Oxford
|
The University of Oxford is the world's second-oldest university. It is made up of 39 semi-autonomous colleges, which are self-governing institutions within the university, plus a range of academic departments organised into four divisions. Oxford has educated more than 70 Nobel laureates, multiple Fields Medalists and Turing Award winners, as well as 30 UK prime ministers. The university has an income of more than 2 billion pounds, operates the world's oldest university museum, and runs the largest university press.
|
null | false
| null |
Identify which animal species is alive or extinct: Palaeophis, Giant Tortoise
|
Giant Tortoise is alive, Palaeophis is extinct.
|
|
null | false
| null |
Classify the below companies on whether they belong to India or Taiwan -
HDFC Bank, Infosys, Taiwan Semiconductor Manufacturing Company, Pegatron
|
HDFC Bank, Infosys - India
Taiwan Semiconductor Manufacturing Company, Pegatron - Taiwan
|
|
null | false
| null |
What kind of species is the Eulima venusta
|
Eulima venusta is a species of sea snail, a marine gastropod mollusk in the family Eulimidae. The species is one of a number within the genus Eulima.
|
|
null | false
| null |
If you found yourself washed up on a deserted tropical island with no hope for immediate rescue, what would you do?
|
The immediate first step is to ensure you are not materially injured. Assuming your only manifestation of harm is a psychological roller coaster of fear, anger and hopelessness, your best course of action is to attend to your basic needs first.
Locate a protected space away from the wind but not too deep into the surrounding jungle, build a fire pit and gather enough combustible materials to carry you through the first few evenings. Find fresh water and some way to hold it (open coconut shells / fashioned palm leaf bowls etc) - DO NOT DRINK IT until you boil it first over your new fire so you mitigate the risk of getting sick. Before your first nightfall, with fire and water taken care of, build yourself a makeshift shelter so you are protected from the elements and yet-to-be-discovered critters.
Day two, focus your energies gathering food (coconuts / seaweed / clams etc) so you keep your energy high. Once your very basic needs are met (shelter / water / food to survive), its time to fashion a fishing pole and a sharp spear out of bamboo so you can fish and hunt for birds to expand your food sources.
Day three, focus your energy on building yourself a more permanent shelter. Explore the island for wood and rock sources, along with large palm leaves from which you can fashion rope. While exploring your new home, look for caves and/or places where you could construct a tree fort so you can improve your protected living space.
Week 2, once your new living quarters are secured and you have sustainable methods to care for your basic needs, climb to the highest point of the island and look for nearby islands / land and any indication of human habitation. While searching for these indicators, assess methods of island escape to get through any surf and start tracking wind and weather patterns daily so you can get a feel for the seasonal rotations.
Week 3 +, dedicate 3-4 hours a day to building signal fires and SOS stone messages that overhead planes may be able to detect. It is also important to manage your mental health and track daily changes so you keep your sanity in check as you adjust to the new reality.
Create a way to chronicle your experiences. One day you will escape your island and can sell your story to Hollywood. Of course, if you never do escape, you will at least have the comfort of leaving something behind for eventual discovery of proof you did survive the incident that landed you on this island.
|
|
1903.07398
| false
| null |
Traditional text-to-speech (TTS) systems are composed of complex pipelines BIBREF0 ; these often include acoustic frontends, a duration model, an acoustic prediction model and vocoder models. The complexity of the TTS problem, coupled with the requirement for deep domain expertise, means these systems are often brittle in design, resulting in unnatural synthesized speech.
The open source LJSpeech Dataset was used to train our TTS model. This dataset contains around 13k <text, audio> pairs of a single female English speaker, collected from across 7 different non-fiction books. The total amount of training audio is around 21 hours.
The architecture of our model utilizes an RNN-based Seq2Seq model for generating a mel spectrogram from text. The architecture is similar to that of Tacotron 2 BIBREF4 . The generated mel spectrogram can either be inverted via iterative algorithms such as Griffin-Lim, or through more complicated neural vocoder networks such as a mel-spectrogram-conditioned WaveNet BIBREF11 .
Traditional text-to-speech (TTS) systems are composed of complex pipelines BIBREF0 ; these often include acoustic frontends, a duration model, an acoustic prediction model and vocoder models
The open source LJSpeech Dataset was used to train our TTS model.
The architecture of our model utilizes an RNN-based Seq2Seq model for generating a mel spectrogram from text
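As a sketch of the Griffin-Lim inversion option mentioned above, the following assumes a linear-frequency magnitude spectrogram (a mel spectrogram would first have to be mapped back to the linear scale); the FFT size, hop length and iteration count are illustrative.

```python
import numpy as np
import librosa

def griffin_lim(magnitude, n_fft=1024, hop_length=256, n_iter=60):
    """Iteratively estimate a phase for a (1 + n_fft//2, frames) magnitude
    spectrogram and return a time-domain waveform."""
    # Start from random phase.
    angles = np.exp(2j * np.pi * np.random.rand(*magnitude.shape))
    y = librosa.istft(magnitude.astype(np.complex64) * angles, hop_length=hop_length)
    for _ in range(n_iter):
        rebuilt = librosa.stft(y, n_fft=n_fft, hop_length=hop_length)
        # Keep the target magnitude, adopt the re-estimated phase.
        angles = np.exp(1j * np.angle(rebuilt[:, :magnitude.shape[1]]))
        y = librosa.istft(magnitude[:, :angles.shape[1]] * angles, hop_length=hop_length)
    return y
```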
|
Which dataset(s) do they evaluate on?
|
The answers are shown as follows:
* LJSpeech
|
null | false
| 393
|
One of the significant challenges in contemporary information processing is the sheer volume of available data. BIBREF0 , for example, claim that the amount of digital data in the world doubles every two years. This trend underpins efforts to develop algorithms that can efficiently search for relevant information in huge datasets. One class of such algorithms, represented by, e.g., Locality Sensitive Hashing BIBREF1 , relies on hashing data into short, locality-preserving binary codes BIBREF2 . The codes can then be used to group the data into buckets, thereby enabling sublinear search for relevant information, or for fast comparison of data items. Most of the algorithms from this family are data-oblivious, i.e. can generate hashes for any type of data. Nevertheless, some methods target specific kind of input data, like text or image.
In this work we focus on learning binary codes for text documents. An important work in this direction has been presented by BIBREF3 . Their semantic hashing leverages autoencoders with sigmoid bottleneck layer to learn binary codes from a word-count bag-of-words (BOW) representation. Salakhutdinov & Hinton report that binary codes allow for up to 20-fold improvement in document ranking speed, compared to real-valued representation of the same dimensionality. Moreover, they demonstrate that semantic hashing codes used as an initial document filter can improve precision of TF-IDF-based retrieval. Learning binary representation from BOW, however, has its disadvantages. First, word-count representation, and in turn the learned codes, are not in itself stronger than TF-IDF. Second, BOW is an inefficient representation: even for moderate-size vocabularies BOW vectors can have thousands of dimensions. Learning fully-connected autoencoders for such high-dimensional vectors is impractical. Salakhutdinov & Hinton restricted the BOW vocabulary in their experiments to 2000 most frequent words.
Binary codes have also been applied to cross-modal retrieval where text is one of the modalities. Specifically, BIBREF4 incorporated tag information that often accompany text documents, while BIBREF5 employed siamese neural networks to learn single binary representation for text and image data.
Recently several works explored simple neural models for unsupervised learning of distributed representations of words, sentences and documents. BIBREF6 proposed log-linear models that learn distributed representations of words by predicting a central word from its context (CBOW model) or by predicting context words given the central word (Skip-gram model). The CBOW model was then extended by BIBREF7 to learn distributed representations of documents. Specifically, they proposed Paragraph Vector Distributed Memory (PV-DM) model, in which the central word is predicted given the context words and the document vector. During training, PV-DM learns the word embeddings and the parameters of the softmax that models the conditional probability distribution for the central words. During inference, word embeddings and softmax weights are fixed, but the gradients are backpropagated to the inferred document vector. In addition to PV-DM, Le & Mikolov studied also a simpler model, namely Paragraph Vector Distributed Bag of Words (PV-DBOW). This model predicts words in the document given only the document vector. It therefore disregards context surrounding the predicted word and does not learn word embeddings. Le & Mikolov demonstrated that paragraph vectors outperform BOW and bag-of-bigrams in information retrieval task, while using only few hundreds of dimensions. These models are also amendable to learning and inference over large vocabularies. Original CBOW network used hierarchical softmax to model the probability distribution for the central word. One can also use noise-contrastive estimation BIBREF8 or importance sampling BIBREF9 to approximate the gradients with respect to the softmax logits.
An alternative approach to learning representation of pieces of text has been recently described by BIBREF10 . Networks proposed therein, inspired by the Skip-gram model, learn to predict surrounding sentences given the center sentence. To this end, the center sentence is encoded by an encoder network and the surrounding sentences are predicted by a decoder network conditioned on the center sentence code. Once trained, these models can encode sentences without resorting to backpropagation inference. However, they learn representations at the sentence level but not at the document level.
In this work we present Binary Paragraph Vector models, an extensions to PV-DBOW and PV-DM that learn short binary codes for text documents. One inspiration for binary paragraph vectors comes from a recent work by BIBREF11 on learning binary codes for images. Specifically, we introduce a sigmoid layer to the paragraph vector models, and train it in a way that encourages binary activations. We demonstrate that the resultant binary paragraph vectors significantly outperform semantic hashing codes. We also evaluate binary paragraph vectors in transfer learning settings, where training and inference are carried out on unrelated text corpora. Finally, we study models that simultaneously learn short binary codes for document filtering and longer, real-valued representations for ranking. While BIBREF11 employed a supervised criterion to learn image codes, binary paragraph vectors remain unsupervised models: they learn to predict words in documents.
In this work we present Binary Paragraph Vector models, extensions to PV-DBOW and PV-DM that learn short binary codes for text documents.
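A minimal sketch of the described modification (a sigmoid bottleneck whose activations are thresholded to bits at retrieval time) might look as follows; the sampled-softmax word prediction and the specific trick used to encourage near-binary activations are omitted, and all dimensions are assumptions.

```python
import torch
import torch.nn as nn

class BinaryPVDBOW(nn.Module):
    """Sketch of a PV-DBOW-style model with a sigmoid bottleneck: each
    document has a trainable real vector, squashed through a sigmoid,
    which is used to predict the words occurring in that document."""

    def __init__(self, num_docs, vocab_size, code_dim=128):
        super().__init__()
        self.doc_vectors = nn.Embedding(num_docs, code_dim)
        self.word_logits = nn.Linear(code_dim, vocab_size)

    def forward(self, doc_ids):
        code = torch.sigmoid(self.doc_vectors(doc_ids))  # in (0, 1)
        return self.word_logits(code)                    # predict document words

    def binary_code(self, doc_ids):
        # At retrieval time, threshold the sigmoid activations to bits.
        with torch.no_grad():
            return (torch.sigmoid(self.doc_vectors(doc_ids)) > 0.5).int()
```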
|
What are the Binary Paragraph Vector models?
|
They are extensions to PV-DBOW and PV-DM that learn short binary codes for text documents.
|
null | false
| null |
What are popular street foods from Maharashtra in India
|
Some of the most popular street foods are Vada Pav, Misal Pav, Kanda Bhaji, Pav Bhaji, Sabudana Vada and Sabudana Khichadi. All of these are primarily vegetarian.
|
|
null | false
| null |
Euxoa brevipennis is a moth of the family Noctuidae first described by Smith in 1888. In Canada, it is found in British Columbia, Alberta and Saskatchewan. In the United States, it has been recorded from Utah, Colorado and California.
|
Which family of moth does the Euxoa brevipennis belong to?
|
The Euxoa brevipennis moth belongs to the Noctuidae family of moths.
|
null | false
| null |
Why is killing a sin?
|
Killing is considered a sin because one must not take the life of another human being.
|
|
null | false
| 172
|
We describe a dataset developed for Named Entity Recognition in German federal court decisions. It consists of approx. 67,000 sentences with over 2 million tokens. The resource contains 54,000 manually annotated entities, mapped to 19 fine-grained semantic classes: person, judge, lawyer, country, city, street, landscape, organization, company, institution, court, brand, law, ordinance, European legal norm, regulation, contract, court decision, and legal literature. The legal documents were, furthermore, automatically annotated with more than 35,000 TimeML-based time expressions. The dataset, which is available under a CC-BY 4.0 license in the CoNNL-2002 format, was developed for training an NER service for German legal documents in the EU project Lynx.
The resource contains 54,000 manually annotated entities, mapped to 19 fine-grained semantic classes: person, judge, lawyer, country, city, street, landscape, organization, company, institution, court, brand, law, ordinance, European legal norm, regulation, contract, court decision, and legal literature.
|
How many manually annotated entities does the dataset contain?
|
54000.
|
null | false
| 12
|
Because our work builds heavily upon BERT, we provide a brief summary of its features. BERT is built upon the Transformer architecture BIBREF0, which uses self-attention, feed-forward layers, residual connections and layer normalization as the main building blocks. It has two pre-training objectives:
Masked language modelling - some of the words in a sentence are being masked and the model has to predict them based on the context (note the difference from the typical autoregressive language model training objective);
Next sentence prediction - given two input sequences, decide whether the second one is the next sentence or not.
BERT has been shown to beat the state-of-the-art performance on 11 tasks with no modifications to the model architecture, besides adding a task-specific output layer BIBREF1. We follow the same procedure suggested in BIBREF1 for our tasks. Fig. FIGREF8 shows the BERT model for classification. We obtain two kinds of representation from BERT: the pooled output from the last transformer block, denoted by H, and the posterior probabilities, denoted by P. There are two variants of BERT - BERT-Base and BERT-Large. In this work we are using BERT-Base for faster training and experimentation; however, our methods are applicable to BERT-Large as well. BERT-Base and BERT-Large differ in model parameters such as the number of transformer blocks and the number of self-attention heads. The total number of parameters is 110M in BERT-Base and 340M in BERT-Large.
BERT suffers from major limitations in terms of handling long sequences. Firstly, the self-attention layer has a quadratic complexity $O(n^2)$ in terms of the sequence length $n$ BIBREF0. Secondly, BERT uses a learned positional embeddings scheme BIBREF1, which means that it won't likely be able to generalize to positions beyond those seen in the training data.
To investigate the effect of fine-tuning BERT on task performance, we use either the pre-trained BERT weights, or the weights from a BERT fine-tuned on the task-specific dataset on a segment-level (i.e. we preserve the original label but fine-tune on each segment separately instead of on the whole text sequence). We compare these results to using the fine-tuned segment-level BERT predictions directly as inputs to the next layer.
Given that BERT is limited to a particular input length, we split the input sequence into segments of a fixed size with overlap. For each of these segments, we obtain H or P from BERT model. We then stack these segment-level representations into a sequence, which serves as input to a small (100-dimensional) LSTM layer. Its output serves as a document embedding. Finally, we use two fully connected layers with ReLU (30-dimensional) and softmax (the same dimensionality as the number of classes) activations to obtain the final predictions.
With this approach, we overcome BERT's computational complexity, reducing it to $O(n/k * k^2) = O(nk)$ for RoBERT, with $k$ denoting the segment size (the LSTM component has negligible linear complexity $O(k)$). The positional embeddings are also no longer an issue.
Given that Transformers' edge over recurrent networks is their ability to effectively capture long distance relationships between words in a sequence BIBREF0, we experiment with replacing the LSTM recurrent layer in favor of a small Transformer model (2 layers of transformer building block containing self-attention, fully connected, etc.). To investigate if preserving the information about the input sequence order is important, we also build a variant of ToBERT which learns positional embeddings at the segment-level representations (but is limited to sequences of length seen during the training).
ToBERT's computational complexity $O(\frac{n^2}{k^2})$ is asymptotically inferior to RoBERT, as the top-level Transformer model again suffers from quadratic complexity in the number of segments. However, in practice this number is much smaller than the input sequence length (${\frac{n}{k}} << n$), so we haven't observed performance or memory issues with our datasets.
BERT is built upon the Transformer architecture [1], which uses self-attention, feed-forward layers, residual connections and layer normalization as the main building blocks.
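A minimal sketch of the RoBERT idea, using the dimensions quoted in the passage (768-dimensional segment vectors, a 100-dimensional LSTM, a 30-dimensional ReLU layer); computing the per-segment BERT representations is assumed to happen upstream, and the number of classes is illustrative.

```python
import torch
import torch.nn as nn

class RoBERT(nn.Module):
    """Recurrence over BERT: an LSTM runs over the sequence of
    segment-level BERT representations of one long document."""

    def __init__(self, bert_dim=768, lstm_dim=100, num_classes=5):
        super().__init__()
        self.lstm = nn.LSTM(bert_dim, lstm_dim, batch_first=True)
        self.fc1 = nn.Linear(lstm_dim, 30)
        self.fc2 = nn.Linear(30, num_classes)

    def forward(self, segment_reprs):
        # segment_reprs: (batch, num_segments, bert_dim), e.g. pooled
        # outputs H (or posteriors P) computed per segment beforehand.
        _, (h_n, _) = self.lstm(segment_reprs)
        doc_embedding = h_n[-1]  # (batch, lstm_dim) document embedding
        return self.fc2(torch.relu(self.fc1(doc_embedding)))  # class logits

model = RoBERT()
logits = model(torch.randn(2, 12, 768))  # 2 documents, 12 segments each
print(logits.shape)  # torch.Size([2, 5])
```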
|
What does BERT consist of as its main building blocks?
|
Self-attention, feed-forward layers, residual connections and layer normalization
|
null | false
| null |
How do you cook spare ribs on a pellet smoker?
|
Smoking ribs on a pellet smoker gives the meat a consistent, slow cook, helping to tenderize it. The best way to smoke ribs on a pellet smoker is the 3-2-1 method. Cook on the grates for 3 hours at 225 degrees F with your choice of seasoning (don't forget to use mustard as a binder!). After 3 hours, double-wrap the ribs in foil, add butter, wrap them tight, and place them back on the smoker for 2 hours. For the last hour, remove the foil, baste the ribs with your choice of BBQ sauce, and crank up the heat to 250. After the hour is up, take them off and enjoy!
|
|
null | false
| null |
Why do you like ballet so much?
|
Ballet is a true form of art. I enjoy taking ballet classes because we always dance with beautiful piano music during the floor and barre exercises. It is so relaxing when my body naturally dances with classical music. Ballet has some rigid rules when conducting certain movements, but it truly can be creative. Anyone at any age can attend a ballet class, and can create his or her own piece of ballet dance.
Many people say Ballet is an intensive sport, because one often gets very sweaty after taking a ballet class. It is true that ballet is a good exercise for anyone who takes the class or performs on a stage. But because of the beautiful music that comes along during the classes, we/ballet dancers do not feel as tired as going to a gym or running a marathon. The music helps us forget about the soreness in muscles. Our bodies are as light as feathers during turns and leaps.
|
|
1912.01220
| false
| null |
The central problem we consider is category induction: given some instances of a category, predict which other individuals are likely to be instances of that category. When enough instances are given, standard approaches such as the Gaussian classifier from Section UNKREF9, or even a simple SVM classifier, can perform well on this task. For many categories, however, we only have access to a few instances, either because the considered ontology is highly incomplete or because the considered category only has few actual instances. The main research question which we want to analyze is whether (predicted) conceptual neighborhood can help to obtain better category induction models in such cases. In Section SECREF16, we first provide more details about the experimental setting that we followed. Section SECREF23 then discusses our main quantitative results. Finally, in Section SECREF26 we present a qualitative analysis.
As explained in Section SECREF3, we used BabelNet BIBREF29 as our reference taxonomy. BabelNet is a large-scale full-fledged taxonomy consisting of heterogeneous sources such as WordNet BIBREF36, Wikidata BIBREF37 and WiBi BIBREF38, making it suitable to test our hypothesis in a general setting.
BabelNet category selection. To test our proposed category induction model, we consider all BabelNet categories with fewer than 50 known instances. This is motivated by the view that conceptual neighborhood is mostly useful in cases where the number of known instances is small. For each of these categories, we split the set of known instances into 90% for training and 10% for testing. To tune the prior probability $\lambda _A$ for these categories, we hold out 10% from the training set as a validation set.
In Section SECREF16, we first provide more details about the experimental setting that we followed.
As explained in Section SECREF3, we used BabelNet BIBREF29 as our reference taxonomy. BabelNet is a large-scale full-fledged taxonomy consisting of heterogeneous sources such as WordNet BIBREF36, Wikidata BIBREF37 and WiBi BIBREF38, making it suitable to test our hypothesis in a general setting.
To test our proposed category induction model, we consider all BabelNet categories with fewer than 50 known instances. This is motivated by the view that conceptual neighborhood is mostly useful in cases where the number of known instances is small. For each of these categories, we split the set of known instances into 90% for training and 10% for testing. To tune the prior probability $\lambda _A$ for these categories, we hold out 10% from the training set as a validation set.
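A small sketch of the data split described above, under the assumption that instances are simply shuffled and partitioned; rounding choices are illustrative.

```python
import random

def split_instances(instances, seed=0):
    """Split a category's known instances into train / validation / test
    following the 90% / 10% scheme described above, with 10% of the
    training portion held out for tuning the prior."""
    rng = random.Random(seed)
    shuffled = instances[:]
    rng.shuffle(shuffled)

    n_test = max(1, round(0.1 * len(shuffled)))
    test, train = shuffled[:n_test], shuffled[n_test:]

    n_val = max(1, round(0.1 * len(train)))
    valid, train = train[:n_val], train[n_val:]
    return train, valid, test

train, valid, test = split_instances([f"inst_{i}" for i in range(40)])
print(len(train), len(valid), len(test))  # e.g. 32 4 4
```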
|
What experiments they perform to demonstrate that their approach leads more accurate region based representations?
|
The answers are shown as follows:
* To test our proposed category induction model, we consider all BabelNet categories with fewer than 50 known instances. This is motivated by the view that conceptual neighborhood is mostly useful in cases where the number of known instances is small. For each of these categories, we split the set of known instances into 90% for training and 10% for testing.
|
1912.05238
| true
| null |
Dos and Don'ts for the Moral Choice Machine. The verb extraction identifies the most positive and most negative associated verbs in vocabulary, to infer socially desired and neglected behaviour. BIBREF0 (BIBREF0) extracted them with the general positive and negative association sets on the Google Slim embedding. Since those sets are expected to reflect social norms, they are referred as Dos and Don'ts hereafter.
Summarized, even though the contained positive verbs are quite diverse, all of them carry a positive attitude. Some of the verbs are related to celebration or travelling, others to love matters or physical closeness. All elements of the above set are rather of general and unspecific nature. Analogously, some of the negative words just describe inappropriate behaviour, like slur or misdeal, whereas others are real crimes as murder. As BIBREF0 (BIBREF0) describe, the listed words can be accepted as commonly agreed Dos and Don'ts.
Since those sets are expected to reflect social norms, they are referred as Dos and Don'ts hereafter.
Analogously, some of the negative words just describe inappropriate behaviour, like slur or misdeal, whereas others are real crimes as murder.
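A rough sketch of how verbs can be ranked against positive and negative association sets in an embedding space, in the spirit of the procedure described; the actual association sets and the Google Slim embedding are assumed to be provided by the caller.

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def bias_score(verb, embeddings, positive_set, negative_set):
    """Positive score -> verb leans towards the 'Dos', negative -> 'Don'ts'.
    `embeddings` maps words to vectors (e.g. a loaded embedding model)."""
    v = embeddings[verb]
    pos = np.mean([cosine(v, embeddings[w]) for w in positive_set])
    neg = np.mean([cosine(v, embeddings[w]) for w in negative_set])
    return pos - neg

def extract_dos_and_donts(verbs, embeddings, positive_set, negative_set, k=10):
    # Rank all verbs by their bias score and return the two extremes.
    scored = sorted(verbs, key=lambda verb: bias_score(verb, embeddings,
                                                       positive_set, negative_set))
    return scored[-k:], scored[:k]  # top-k Dos, top-k Don'ts
```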
|
Do they report results only on English data?
|
Yes.
|
null | false
| null |
which marathon competition should I run?
|
The world marathon majors are the six most renowned and largest global marathon competitions. The six are: the Tokyo Marathon (usually in March), the Boston Marathon (usually in April), the London Marathon (usually in late April), the Berlin Marathon (usually in September), the Chicago Marathon (usually in October) and the NYC Marathon (usually in November). For die-hard marathon runners, it is a dream come true to run all six. Meanwhile, some marathon events are famous for their breathtaking scenery. In the Rome Marathon, you run along cobblestone streets with sights of the Coliseum, St. Peter's Square, and the Spanish Steps. The Great Wall Marathon in China is a one-of-a-kind race, as runners climb steep steps while enjoying the views rather than running on a paved road. The Antarctica Marathon could be your most challenging and hardest-to-reach marathon destination. The course is on King George Island, where runners have penguins, seals and whales as their companions. The Big Sur Marathon in the US has its course on scenic Highway 1; the iconic Bixby Bridge marks the halfway point of the race, where runners can enjoy the unique sight and a pianist's performance!
|
|
null | false
| null |
First
Translated from the Latin, Newton's first law reads,
Every body continues in its state of rest, or of uniform motion in a straight line, unless it is compelled to change that state by forces impressed upon it.: 114
Newton's first law expresses the principle of inertia: the natural behavior of a body is to move in a straight line at constant speed. In the absence of outside influences, a body's motion preserves the status quo.
The modern understanding of Newton's first law is that no inertial observer is privileged over any other. The concept of an inertial observer makes quantitative the everyday idea of feeling no effects of motion. For example, a person standing on the ground watching a train go past is an inertial observer. If the observer on the ground sees the train moving smoothly in a straight line at a constant speed, then a passenger sitting on the train will also be an inertial observer: the train passenger feels no motion. The principle expressed by Newton's first law is that there is no way to say which inertial observer is "really" moving and which is "really" standing still. One observer's state of rest is another observer's state of uniform motion in a straight line, and no experiment can deem either point of view to be correct or incorrect. There is no absolute standard of rest.[note 4]
Second
The change of motion of an object is proportional to the force impressed; and is made in the direction of the straight line in which the force is impressed.: 114
By "motion", Newton meant the quantity now called momentum, which depends upon the amount of matter contained in a body, the speed at which that body is moving, and the direction in which it is moving. In modern notation, the momentum of a body is the product of its mass and its velocity:
Newton's second law, in modern form, states that the time derivative of the momentum is the force:
If the mass m does not change with time, then the derivative acts only upon the velocity, and so the force equals the product of the mass and the time derivative of the velocity, which is the acceleration:
As the acceleration is the second derivative of position with respect to time, this can also be written.
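For reference, the relations described in words above can be written out explicitly in standard notation (momentum $\vec{p}$, mass $m$, velocity $\vec{v}$, acceleration $\vec{a}$, position $\vec{x}$):

```latex
\vec{p} = m\vec{v}, \qquad
\vec{F} = \frac{d\vec{p}}{dt}, \qquad
\vec{F} = m\vec{a} \quad (m \text{ constant}), \qquad
\vec{a} = \frac{d^{2}\vec{x}}{dt^{2}} \;\Rightarrow\; \vec{F} = m\,\frac{d^{2}\vec{x}}{dt^{2}}
```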
The forces acting on a body add as vectors, and so the total force on a body depends upon both the magnitudes and the directions of the individual forces. When the net force on a body is equal to zero, then by Newton's second law, the body does not accelerate, and it is said to be in mechanical equilibrium. A state of mechanical equilibrium is stable if, when the position of the body is changed slightly, the body remains near that equilibrium. Otherwise, the equilibrium is unstable.
A common visual representation of forces acting in concert is the free body diagram, which schematically portrays a body of interest and the forces applied to it by outside influences. For example, a free body diagram of a block sitting upon an inclined plane can illustrate the combination of gravitational force, "normal" force, friction, and string tension.[note 5]
Newton's second law is sometimes presented as a definition of force, i.e., a force is that which exists when an inertial observer sees a body accelerating. In order for this to be more than a tautology — acceleration implies force, force implies acceleration — some other statement about force must also be made. For example, an equation detailing the force might be specified, like Newton's law of universal gravitation. By inserting such an expression for the force into Newton's second law, an equation with predictive power can be written.[note 6] Newton's second law has also been regarded as setting out a research program for physics, establishing that important goals of the subject are to identify the forces present in nature and to catalogue the constituents of matter.[note 7]
Third
To every action, there is always opposed an equal reaction; or, the mutual actions of two bodies upon each other are always equal, and directed to contrary parts.: 116
Overly brief paraphrases of the third law, like "action equals reaction" might have caused confusion among generations of students: the "action" and "reaction" apply to different bodies. For example, consider a book at rest on a table. The Earth's gravity pulls down upon the book. The "reaction" to that "action" is not the support force from the table holding up the book, but the gravitational pull of the book acting on the Earth.[note 8]
Newton's third law relates to a more fundamental principle, the conservation of momentum. The latter remains true even in cases where Newton's statement does not, for instance when force fields as well as material bodies carry momentum, and when momentum is defined properly, in quantum mechanics as well.[note 9]
By Newton's second law, the first term is the total force upon the first body, and the second term is the total force upon the second body. If the two bodies are isolated from outside influences, the only force upon the first body can be that from the second, and vice versa. By Newton's third law, these forces have equal magnitude but opposite direction, so they cancel when added, and the total momentum $\vec{p}$ is constant. Alternatively, if $\vec{p}$ is known to be constant, it follows that the forces have equal magnitude and opposite direction.
|
Based on the reference text, provide a bulleted list summarizing Newton's three laws of motion
|
The following is a summary of Newton's three laws of motion:
1. First Law: A body remains at rest, or in motion at constant speed, unless acted upon by a force.
2. Second Law: The change of motion of an object is proportional to the force impressed on the object.
3. Third Law: To every action, there is an equal and opposite reaction.
|
null | false
| null |
In the game of baseball, if a batter achieves three strikeouts in one game what is that called?
|
A Hat Trick - while in other sports the phrase is used for a positive sentiment like scoring three goals in soccer or hockey. Conversely, baseball's original definition comes with a negative connotation which is three strikeouts in one game.
|
|
null | false
| null |
Who is Paulina Flores Arias
|
Paulina Flores Arias (born 1980 in Culiacán, Sinaloa) is a Mexican model, who after winning the national title of Nuestra Belleza México, represented her country in the 2000 Miss World pageant, held in London, England, on November 30, 2000. Paulina is currently a professional fashion model, and has been represented by several national and international modeling agencies.
|
|
null | false
| 107
|
The SemEval-2010 benchmark dataset BIBREF0 is composed of 244 scientific articles collected from the ACM Digital Library (conference and workshop papers). The input papers ranged from 6 to 8 pages and were converted from PDF format to plain text using an off-the-shelf tool. The only preprocessing applied is a systematic dehyphenation at line breaks and removal of author-assigned keyphrases. Scientific articles were selected from four different research areas as defined in the ACM classification, and were equally distributed into training (144 articles) and test (100 articles) sets. Gold standard keyphrases are composed of both author-assigned keyphrases collected from the original PDF files and reader-assigned keyphrases provided by student annotators.
Long documents such as those in the SemEval-2010 benchmark dataset are notoriously difficult to handle due to the large number of keyphrase candidates (i.e. phrases that are eligible to be keyphrases) that the systems have to cope with BIBREF6 . Furthermore, noisy textual content, whether due to format conversion errors or to unusable elements (e.g. equations), yield many spurious keyphrase candidates that negatively affect keyphrase extraction performance. This is particularly true for systems that make use of core NLP tools to select candidates, that in turn exhibit poor performance on degraded text. Filtering out irrelevant text is therefore needed for addressing these issues.
In this study, we concentrate our effort on re-assessing keyphrase extraction performance on three increasingly sophisticated levels of document preprocessing described below.
The table shows the average number of sentences and words along with the maximum possible recall for each level of preprocessing. The maximum recall is obtained by computing the fraction of the reference keyphrases that occur in the documents. We observe that the level 2 preprocessing succeeds in eliminating irrelevant text by significantly reducing the number of words (-19%) while maintaining a high maximum recall (-2%). Level 3 preprocessing drastically reduces the number of words to less than a quarter of the original amount while, interestingly, still preserving a high recall.
Level 3: As pointed out by (Treeratpituk et al., 2010; Nguyen and Luong, 2010; Wang and Li, 2010; Eichler and Neumann, 2010; El-Beltagy and Rafea, 2010), considering only the keyphrase-dense parts of the scientific articles improves keyphrase extraction performance. Accordingly, we follow previous work and further abridge the input text from level 2 preprocessed documents to the following: title, headers, abstract, introduction, related work, background and conclusion. The idea is to achieve the best compromise between search space (number of candidates) and maximum performance (recall).
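As an illustration of how the level 3 reduction and the maximum-recall computation described above could be implemented, here is a small sketch. It is not the benchmark's official tooling; the section names, helper functions, and toy data are assumptions made only for the example.

```python
# Keep only the keyphrase-dense sections of a paper, then measure the maximum
# recall as the fraction of gold keyphrases still present in the reduced text.
KEPT_SECTIONS = {"title", "headers", "abstract", "introduction",
                 "related work", "background", "conclusion"}

def level3_filter(sections):
    """sections: dict mapping a lower-cased section name to its text."""
    return " ".join(text for name, text in sections.items() if name in KEPT_SECTIONS)

def maximum_recall(gold_keyphrases, text):
    text = text.lower()
    found = sum(1 for kp in gold_keyphrases if kp.lower() in text)
    return found / len(gold_keyphrases)

# Toy usage with hypothetical data:
paper = {"abstract": "We study keyphrase extraction on scientific articles.",
         "introduction": "Automatic keyphrase extraction helps indexing.",
         "experiments": "Table 3 lists F-scores for every run."}
gold = ["keyphrase extraction", "scientific articles", "F-score"]
print(maximum_recall(gold, level3_filter(paper)))   # 2/3 once "experiments" is dropped
```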
|
How is preprocessing done in Lv.3?
|
They consider only the keyphrase-dense parts of the scientific articles, which improves keyphrase extraction performance. Accordingly, they follow previous work and further abridge the input text from level 2 preprocessed documents to the following: title, headers, abstract, introduction, related work, background and conclusion. The idea is to achieve the best compromise between search space (number of candidates) and maximum performance (recall).
|
1702.06700
| true
| null |
In this section, we elaborate our model consisting of four parts: (a) image feature pre-selection part which models the tendency where people focus to ask questions, (b) question encoding part which encodes the question words as a condensed semantic embedding, (c) attention-based feature fusion part which performs a second selection on image features, and (d) answer generation part which gives the answer output.
We propose to perform a saliency-like pre-selection operation to alleviate the problems and model the RoI patterns. The image is first divided into $g\times g$ grids as illustrated in Figure 2. Taking $m\times m$ grids as a region, with $s$ grids as the stride, we obtain $n\times n$ regions, where $n=\left\lfloor \frac{g-m}{s}\right\rfloor +1$. We then feed the regions to a pre-trained ResNet BIBREF24 deep convolutional neural network to produce $n\times n\times d_I$-dimensional region features, where $d_I$ is the dimension of the feature from the layer before the last fully-connected layer.
image feature pre-selection part which models the tendency where people focus to ask questions
We propose to perform a saliency-like pre-selection operation to alleviate the problems and model the RoI patterns. The image is first divided into $g\times g$ grids as illustrated in Figure 2. Taking $m\times m$ grids as a region, with $s$ grids as the stride, we obtain $n\times n$ regions, where $n=\left\lfloor \frac{g-m}{s}\right\rfloor +1$. We then feed the regions to a pre-trained ResNet BIBREF24 deep convolutional neural network to produce $n\times n\times d_I$-dimensional region features, where $d_I$ is the dimension of the feature from the layer before the last fully-connected layer.
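A rough sketch of this pre-selection step is given below; it is not the authors' implementation. The grid parameters, the torchvision ResNet-50 backbone, and the input resizing are assumptions used only to make the sliding-window region extraction concrete.

```python
# Sliding-window region extraction: split a g x g grid of image cells into
# overlapping m x m regions with stride s, then encode each region with a
# pre-trained CNN to obtain an (n, n, d_I) feature map.
import torch
import torchvision.models as models
import torchvision.transforms.functional as TF

def extract_region_features(image, g=8, m=4, s=2, d_I=2048):
    # image: float tensor of shape (3, H, W), already normalized; H, W divisible by g.
    _, H, W = image.shape
    cell_h, cell_w = H // g, W // g
    n = (g - m) // s + 1                       # n = floor((g - m) / s) + 1 regions per axis

    backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
    backbone.fc = torch.nn.Identity()          # keep the d_I-dimensional pooled feature
    backbone.eval()

    feats = torch.zeros(n, n, d_I)
    with torch.no_grad():
        for i in range(n):
            for j in range(n):
                top, left = i * s * cell_h, j * s * cell_w
                region = image[:, top:top + m * cell_h, left:left + m * cell_w]
                region = TF.resize(region, [224, 224]).unsqueeze(0)
                feats[i, j] = backbone(region).squeeze(0)
    return feats                               # (n, n, d_I) region features
```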
|
Does the new system utilize pre-extracted bounding boxes and/or features?
|
Yes.
|
null | false
| null |
Who would be your starting five on your current NBA team?
|
My starting five is not determined by who would be the best statistical performer at each position but is more made up of which group of players I think would be the most fun to watch while also being a championship contender. To start, let's get the elephant out of the room: the big man down under has to be Joel Embiid for me. Offensively he is one of the best players in the league, no questions asked. Defensively, when he can stay out of foul trouble, he could easily make a run at DPOY (Defensive Player of the Year). At the 4, I'm still going big because of this man's ability to move his feet. Jokic is a 280 lb ballerina with an Einstein-esque IQ and court vision like an owl. At the 3, I need a lockdown defender and a court runner, oh, and two-time MVP in Giannis Antetokounmpo. I doubt this front court would be allowed to share an elevator. For my back court I need to support this tremendous front court cast with shooters and passers. I'm going to pair LaMelo Ball with Luka Doncic to further bolster the team's height (averaging 6'9"). LaMelo and Luka can both knock down the three ball, get to the paint off the dribble, execute the pick and roll flawlessly, and are 6'6" and 6'7" respectively. This group should have no problem taking on any other 5 in the league, and probably any historically.
|
|
null | false
| null |
House (also called House, M.D.) is an American medical drama television series that originally ran on the Fox network for eight seasons, from November 16, 2004, to May 21, 2012. The series' main character is Dr. Gregory House (Hugh Laurie), an unconventional, misanthropic medical genius who, despite his dependence on pain medication, leads a team of diagnosticians at the fictional Princeton–Plainsboro Teaching Hospital (PPTH) in New Jersey. The series' premise originated with Paul Attanasio, while David Shore, who is credited as creator, was primarily responsible for the conception of the title character.
|
How many seasons is House the TV series?
|
House, M.D. is an American medical drama television series that originally ran for eight seasons.
|