| paper_id (stringlengths 10-10, ⌀) | yes_no (bool, 2 classes) | paper_index (int64 0-519, ⌀) | evidence (stringlengths 0-37.7k) | question (stringlengths 4-11.7k) | answer (stringlengths 1-26k) |
|---|---|---|---|---|---|
null | false
| null |
Should I pursue a PhD in STEM?
|
Whether or not to pursue a PhD depends on a number of factors. These days, many professions require ever more certification and proof of education. However, in STEM fields a PhD is usually only required for positions that are either focused on research, such as a research scientist role, or on a specialized area, such as many 'wet labs' in biological research or other specialized knowledge jobs. Outside these roles, a Master's degree or other accreditation, such as professional certificates or an MBA, can be used for career advancement. For those interested in an academic career, a PhD is still the fundamental requirement.
If you do choose to do a PhD, where you do it will influence the amount and type of work involved. For example, in the EU, the UK, Australia, and some other places, a PhD is usually three years of research with little or no extra coursework. In the US, though, a PhD typically takes more than four years and requires the student to take several graduate-level classes.
While in your program, you will face significant publication pressure. PhD students these days are typically expected to produce one to three papers in high-impact journals during their PhD. At many institutions this is a high-stress environment and can take a toll on mental health. If you are passionate and build a strong support network, it can be an enjoyable experience, albeit a trying one.
|
|
null | false
| null |
The many other causes of speech delay include phonological disorders in bilingual children, autism spectrum conditions, childhood apraxia, auditory processing disorder, prematurity, cognitive impairment, and hearing loss. In addition, it has been claimed that when children spend excessive time on screens, they are not stimulated to take part in conversations, causing speech delays. Broomfield and Dodd (2004a) found in a survey that 6.4% of otherwise typical children showed speech difficulty; children with these disorders often show early signs and are at times identified as "at risk" when the speech delay is diagnosed. However, a study done in Saudi Arabia showed no relationship between smart device use and speech delay, although 64.8% of the parents believed that smart devices are a "problem". A review study from Indonesia points out that results on this issue are contradictory.
|
Does extended screen time cause speech delay in children?
|
There is no definitive proof that extended screen time causes speech delay in children, because study results are contradictory.
|
null | false
| null |
Tell me whether these cities are in Texas: Austin, Houston, New York, Chicago, Miami, Dallas
|
Austin, Houston, Dallas
|
|
null | false
| null |
What are some common data warehouse challenges?
|
Common data warehouses can have slow query performance, especially when all data operations happen on the same clusters. Also, many data warehouses have closed architectures, which limits choices and drives up egress fees to use the data. Additionally, data warehouses aren't built to handle real-time analytics, which usually requires the ability to handle small writes. Lastly, data warehouses often duplicate the storage of data, which balloons costs and hinders the ability to maintain a single source of truth.
|
|
null | false
| null |
Patrick James Mara (1 February 1942 – 15 January 2016) was an Irish public affairs consultant and politician who served as Senator from July 1981 to October 1981 and May 1982 to December 1982. He was most renowned for being the political adviser to former Taoiseach Charles Haughey. He also served as Press Secretary to Fianna Fáil from 1983 to 1987 and Government Press Secretary from 1987 to 1992.
He was appointed the Fianna Fáil Director of Elections for the 1997, 2002, and 2007 general elections. He was a member of the board of Digicel, the company founded by Denis O'Brien to build mobile networks in the Caribbean.
He was regularly lampooned on the satirical Raidió Teilifís Éireann (RTÉ) radio sketch show Scrap Saturday, which ran from 1989 to 1991.
|
Given a reference text about Patrick James Mara, tell me what he was known for.
|
Patrick James Mara was most known for being a political adviser to Taoiseach Charles Haughey and serving as Press Secretary and Government Press Secretary.
|
null | false
| null |
What is the best way to get to Philadelphia from Downingtown if you do not have a car?
|
The first thing to do is look up where the Downingtown train station is relative to your location. Find the train schedule and allow enough time to walk to the station. Then board the SEPTA train and ride from there into Philadelphia.
|
|
null | false
| null |
Why is it good to fly direct?
|
When flying direct to a location, you cut down on potential negative variables while traveling. If you have a layover, there is a chance you might miss your connecting flight if your first flight is delayed. Additionally, if you check your luggage, there is a chance it gets lost between flights. This doesn't always happen, but the risk is there whenever you book a one- or two-stop flight. Finally, it is almost always faster to fly to a destination directly.
|
|
null | false
| null |
How do I save for retirement?
|
The first thing is to make a plan, and the key is to start saving today, no matter how small the amount might be. You will refine this plan as you move toward your financial goals. You should always "pay yourself first" by allocating a percentage of each paycheck to a company 401(k) retirement plan or to an IRA or Roth IRA. With a traditional 401(k) or IRA, that portion of your pay is set aside pre-tax; with a Roth IRA, contributions are made after tax but grow tax-free. You should consult a tax professional to understand the tax implications. You can also work with a professional planner to help create a comprehensive long-term plan. This retirement plan will take into account your target financial goals, desired annual spending, investments, Social Security, pensions, and other financial sources.
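As a rough illustration of why starting to save early matters, here is a small Python sketch (the 7% annual return and $500 monthly contribution are assumed figures for illustration, not advice) computing the future value of fixed monthly contributions:

```python
def future_value(monthly_contribution, annual_return, years):
    """Future value of fixed monthly contributions, compounded monthly."""
    r = annual_return / 12  # monthly rate
    n = years * 12          # number of contributions
    # Standard annuity formula: each payment compounds until retirement.
    return monthly_contribution * (((1 + r) ** n - 1) / r)

print(round(future_value(500, 0.07, 30)))  # 30 years of $500/month
print(round(future_value(500, 0.07, 20)))  # 20 years of $500/month
```

Under these assumptions, ten extra years of contributions more than doubles the final balance, which is the compounding argument for paying yourself first as early as possible.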
|
|
null | false
| null |
What is the 80/20 rule?
|
The 80/20 rule can be applied in many contexts. Casually, it may refer to the observation that often 20% of the effort yields 80% of the benefit and the corollary that the last 20% of the benefit requires 80% of the effort. The 80/20 rule also describes a property that is empirically true of the frequency distribution of many sets, such as 20% of unique words accounting for 80% of words in a book or 20% of companies in a stock portfolio accounting for 80% of the return.
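The word-frequency version of the rule can be checked with a few lines of Python; the corpus below is a toy, hand-built distribution (an assumption for illustration), not real text:

```python
from collections import Counter

def top_share(words, fraction=0.2):
    """Share of total occurrences covered by the top `fraction` of unique words."""
    counts = Counter(words)
    ranked = sorted(counts.values(), reverse=True)
    k = max(1, int(len(ranked) * fraction))
    return sum(ranked[:k]) / sum(ranked)

# A skewed toy distribution: a few words dominate, as in natural text.
corpus = ["the"] * 40 + ["of"] * 20 + ["a"] * 15 + ["cat"] * 5 + \
         ["dog"] * 4 + ["sat"] * 3 + ["mat"] * 3 + ["ran"] * 2 + \
         ["big"] * 1 + ["red"] * 1
print(top_share(corpus))  # the top 2 of 10 unique words cover well over half
```

On real books the exact share varies, but heavily skewed distributions of this shape are the norm.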
|
|
null | false
| null |
What makes a good teacher?
|
Throughout history ‘what makes a good teacher?’ has constantly been questioned by both parents and children. Many consider a good teacher to be one who is caring, helpful, yet also strict when necessary.
One book that focuses on what makes a good teacher is Our Day Out, by Willy Russell. The book provided people in the 1970s with a perspective on what both children and parents were looking for, inspiring many British teachers. According to Russell, a good teacher cares for the learning of their students and demonstrates it. If a teacher does not demonstrate caring behavior, their students will sense it. Carol, one of the students, says to Mr. Briggs: “Don't lie, you! I know you hate me. I've seen you goin' home in your car, passin' us on the street. You hate all the kids.” (Russell 48). Carol believes that Mr. Briggs does not care about his students, even though he does, as shown earlier in the book: “Briggs: What? About you? Listen ... if I didn't care, why am I here, now, trying to stop you doing something stupid.” (Russell 48). Mr. Briggs believes that caring for students also implies disciplining them at times: “Listen, I'm not going to spend any more time arguing with you. You may have organized this visit, but I'm the one who's been sent by the Headmaster to supervise. Now, either you take control of the children in your charge or I'll be forced to abandon this visit and order everyone home.” Thus, even if a teacher cares about their students, they should demonstrate it. Opinions from students of different ages will help further illustrate the image of a good teacher.
Many people think that a good teacher is primarily kind and loving. However, multiple interviews suggest this is only partially true. According to most students, a good teacher is not overly kind and should apply discipline when needed. For example, 12-year-old Monon Mukherjee said that he looks for guidance and the ability to teach in a teacher. On the other hand, 40-year-old Bilal Aslam looks for discipline and the ability to teach. 42-year-old Cristian Molina also agrees with the need for discipline; however, he also expects teachers to feel an urge to help students no matter who they are. The book and the interviews intersect in many ways. Both highlight the importance of kindness, caring about learning, and a teacher's ability to discipline. However, they also differ in important ways. In the book, teachers were encouraged to be kind and to apply discipline when necessary, but the interviews showed that providing help and guidance is also expected of a good teacher.
In conclusion, a good teacher cares about their students' learning, is prepared to help them, yet also disciplines them when necessary. A good teacher should feel an urge to teach students and guide them through the complex path of life.
|
|
null | false
| 39
|
Typical speech-to-text translation systems pipeline automatic speech recognition (ASR) and machine translation (MT) BIBREF0 . But high-quality ASR requires hundreds of hours of transcribed audio, while high-quality MT requires millions of words of parallel text—resources available for only a tiny fraction of the world's estimated 7,000 languages BIBREF1 . Nevertheless, there are important low-resource settings in which even limited speech translation would be of immense value: documentation of endangered languages, which often have no writing system BIBREF2 , BIBREF3 ; and crisis response, for which text applications have proven useful BIBREF4 , but only help literate populations. In these settings, target translations may be available. For example, ad hoc translations may be collected in support of relief operations. Can we do anything at all with this data?
In this exploratory study, we present a speech-to-text translation system that learns directly from source audio and target text pairs, and does not require intermediate ASR or MT. Our work complements several lines of related recent work. For example, duong2015attentional and antonios+chiang+duongEMNLP2016 presented models that align audio to translated text, but neither used these models to try to translate new utterances (in fact, the latter model cannot make such predictions). berard+etalnipsworkshop16 did develop a direct speech to translation system, but presented results only on a corpus of synthetic audio with a small number of speakers. Finally, Adams et al. adams+etalinterspeech16,adams+etalemnlp16 targeted the same low-resource speech-to-translation task, but instead of working with audio, they started from word or phoneme lattices. In principle these could be produced in an unsupervised or minimally-supervised way, but in practice they used supervised ASR/phone recognition. Additionally, their evaluation focused on phone error rate rather than translation. In contrast to these approaches, our method can make translation predictions for audio input not seen during training, and we evaluate it on real multi-speaker speech data.
Our simple system (§ SECREF2 ) builds on unsupervised speech processing BIBREF5 , BIBREF6 , BIBREF7 , and in particular on unsupervised term discovery (UTD), which creates hard clusters of repeated word-like units in raw speech BIBREF8 , BIBREF9 . The clusters do not account for all of the audio, but we can use them to simulate a partial, noisy transcription, or pseudotext, which we pair with translations to learn a bag-of-words translation model. We test our system on the CALLHOME Spanish-English speech translation corpus BIBREF10 , a noisy multi-speaker corpus of telephone calls in a variety of Spanish dialects (§ SECREF3 ). Using the Spanish speech as the source and English text translations as the target, we identify several challenges in the use of UTD, including low coverage of audio and difficulty in cross-speaker clustering (§ SECREF4 ). Despite these difficulties, we demonstrate that the system learns to translate some content words (§ SECREF5 ).
In this exploratory study, we present a speech-to-text translation system that learns directly from source audio and target text pairs, and does not require intermediate ASR or MT.
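A minimal sketch of the bag-of-words idea (the cluster labels and sentence pairs below are hypothetical, and the paper's actual model is more involved): score (pseudotext cluster, target word) pairs by how often they co-occur across utterance/translation pairs:

```python
from collections import Counter

def cooccurrence_scores(pairs):
    """Count how often each (pseudotext cluster, target word) pair co-occurs.

    `pairs` is a list of (cluster_labels, translation_words) tuples, one per
    utterance. Presence, not frequency, is counted within each utterance.
    """
    joint = Counter()
    for clusters, words in pairs:
        for c in set(clusters):
            for w in set(words):
                joint[(c, w)] += 1
    return joint

# Hypothetical UTD output: cluster ids paired with English translations.
data = [
    (["c17", "c04"], ["the", "house", "is", "big"]),
    (["c17"], ["a", "white", "house"]),
    (["c09"], ["good", "morning"]),
]
scores = cooccurrence_scores(data)
print(scores[("c17", "house")])  # "c17" co-occurs with "house" twice
```

A real system would normalize these counts into translation probabilities, e.g. with an EM procedure, rather than using raw co-occurrence.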
|
What system does the paper present?
|
A speech-to-text translation system.
|
null | false
| null |
Traditional dried fruit such as raisins, figs, dates, apricots and apples have been a staple of Mediterranean diets for millennia. This is due partly to their early cultivation in the Middle Eastern region known as the Fertile Crescent, made up by parts of modern Iran, Iraq, southwest Turkey, Syria, Lebanon, Palestine, Israel, and northern Egypt. Drying or dehydration also happened to be the earliest form of food preservation: grapes, dates, and figs that fell from the tree or vine would dry in the hot sun. Early hunter-gatherers observed that these fallen fruit took on an edible form, and valued them for their stability as well as their concentrated sweetness.
The earliest recorded mention of dried fruits can be found in Mesopotamian tablets dating to about 1500 BC, which contain what are probably the oldest known written recipes. These clay slabs, written in Akkadian, the daily language of Babylonia, were inscribed in cuneiform and tell of diets based on grains (barley, millet, wheat), vegetables and fruits such as dates, figs, apples, pomegranates, and grapes. These early civilizations used dates, date juice evaporated into syrup and raisins as sweeteners. They included dried fruits in their breads for which they had more than 300 recipes, from simple barley bread for the workers to very elaborate, spiced cakes with honey for the palaces and temples.
The date palm was one of the first cultivated trees. It was domesticated in Mesopotamia more than 5,000 years ago. It grew abundantly in the Fertile Crescent and it was so productive (an average date palm produces 50 kg (100 lbs) of fruit a year for 60 years or more) that dates were the cheapest of staple foods. Because they were so valuable, they were well recorded in Assyrian and Babylonian monuments and temples. The villagers in Mesopotamia dried them and ate them as sweets. Whether fresh, soft-dried or hard-dried, they helped to give character to meat dishes and grain pies. They were valued by travelers for their energy and were recommended as stimulants against fatigue.
Figs were also prized in early Mesopotamia, Palestine, Israel, and Egypt where their daily use was probably greater than or equal to that of dates. As well as appearing in wall paintings, many specimens have been found in Egyptian tombs as funerary offerings. In Greece and Crete, figs grew very readily and they were the staple of poor and rich alike, particularly in their dried form.
Grape cultivation first began in Armenia and the eastern regions of the Mediterranean in the 4th century BC. Raisins were produced by drying grapes in the hot desert sun. Very quickly, viticulture and raisin production spread across northern Africa including Morocco and Tunisia. The Phoenicians and the Egyptians popularized the production of raisins, probably due to the perfect arid environment for sun drying. They put them in jars for storage and allotted them to the different temples by the thousands. They also added them to breads and various pastries, some made with honey, some with milk and eggs.
From the Middle East, these fruits spread through Greece to Italy where they became a major part of the diet. Ancient Romans consumed raisins in spectacular quantities and at all levels of society, including them as a key part of their common meals, along with olives and fresh fruits. Raisined breads were common for breakfast and were consumed with their grains, beans, and cultured milks. Raisins were so valued that they transcended the food realm and became rewards for successful athletes, as well as premium barter currency.
Having dried fruits was essential in ancient Rome as these instructions for housekeepers around 100 BC tell: "She must keep a supply of cooked food on hand for you and the servants. She must keep many hens and have plenty of eggs. She must have a large store of dried pears, sorbs, figs, raisins, sorbs in must, preserved pears and grapes and quinces. She must also keep preserved grapes in grape-pulp and in pots buried in the ground, as well as fresh Praenestine nuts kept in the same way, and Scantian quinces in jars, and other fruits that are usually preserved, as well as wild fruits. All these she must store away diligently every year."
Figs were also extremely popular in Rome. Dried figs were added to bread and formed a major part of the winter food of country people. They were rubbed with spices such as cumin, anise and fennel seeds, or toasted sesame, wrapped in fig leaves and stored in jars. Today, major producing regions include Israel, Jerusalem, Gaza and many other Arabic countries. Dried figs are rich in vitamins, phosphorus and various other important minerals.
Plums, apricots and peaches had their origins in Asia. They were domesticated in China in the 3rd millennium BC and spread to the Fertile Crescent where they were also very popular, fresh and dried alike. They arrived in Greece and Italy much later and were very expensive but valued in the preparation of gourmet dishes with port or stewed with honey and spices.
|
From the following passage extract information on how dates were used in the Fertile Crescent.
|
The date palm was one of the first cultivated trees. It was domesticated in Mesopotamia more than 5,000 years ago. It grew abundantly in the Fertile Crescent and it was so productive (an average date palm produces 50 kg (100 lbs) of fruit a year for 60 years or more) that dates were the cheapest of staple foods. Because they were so valuable, they were well recorded in Assyrian and Babylonian monuments and temples. The villagers in Mesopotamia dried them and ate them as sweets. Whether fresh, soft-dried or hard-dried, they helped to give character to meat dishes and grain pies. They were valued by travelers for their energy and were recommended as stimulants against fatigue.
|
null | false
| null |
Classify each of the following as either pasta or bread: casarecce, gnocco fritto, gnocchi, focaccia, grissini, ciabatta, tonnarelli, orecchiette, conchiglie, piadina
|
Pasta = casarecce, gnocchi, tonnarelli, orecchiette, conchiglie
Bread = gnocco fritto, focaccia, grissini, ciabatta, piadina
|
|
null | false
| 410
|
Next, we introduce a simple but effective online-offline replay buffer (OORB) to distinguish between near-on-policy online interaction data, and the offline data. OORB consists of two replay buffers.
One is the online buffer that collects the online interaction data. Besides, to ensure the data in the online buffer is near-on-policy, we set it to be very small, and fresh online interaction data are stored into it by following a first-in-first-out rule. The other is the offline buffer consisting of the newly generated online interaction data and the offline dataset which may come from any policies.
Data are sampled from OORB following a Bernoulli distribution, which means that with a probability p, they are sampled from the online buffer, and with probability 1 − p, they are sampled from the offline buffer. To benefit from both online and offline data in a balanced way, we empirically set p to 0.5, and its effect on the final performance is further tested via ablation studies in Section 5.4. Results show that p = 0.5 works best overall, which confirms our claim that offline data and online interaction data are both crucial for policy learning.
Next, we introduce a simple but effective online-offline replay buffer (OORB) to distinguish between near-on-policy online interaction data, and the offline data. OORB consists of two replay buffers. One is the online buffer that collects the online interaction data. Besides, to ensure the data in the online buffer is near-on-policy, we set it to be very small, and fresh online interaction data are stored into it by following a first-in-first-out rule. The other is the offline buffer consisting of the newly generated online interaction data and the offline dataset which may come from any policies. Data are sampled from OORB following a Bernoulli distribution, which means that with a probability p, they are sampled from the online buffer, and with probability 1 − p, they are sampled from the offline buffer. To benefit from both online and offline data in a balanced way, we empirically set p to 0.5, and its effect on the final performance is further tested via ablation studies in Section 5.4. Results show that p = 0.5 works best overall, which confirms our claim that offline data and online interaction data are both crucial for policy learning
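A minimal sketch of the OORB scheme described above (buffer capacity and the transition format are assumptions; p = 0.5 follows the text):

```python
import random
from collections import deque

class OORB:
    """Online-offline replay buffer: a small FIFO online buffer plus a large offline buffer."""

    def __init__(self, offline_data, online_capacity=1000, p=0.5):
        self.online = deque(maxlen=online_capacity)  # FIFO eviction keeps data near-on-policy
        self.offline = list(offline_data)            # offline dataset; may come from any policies
        self.p = p                                   # probability of sampling the online buffer

    def add(self, transition):
        # Fresh interaction data is stored in both buffers: the online buffer
        # evicts old entries, while the offline buffer keeps everything.
        self.online.append(transition)
        self.offline.append(transition)

    def sample(self):
        # Bernoulli choice: with probability p draw from the online buffer,
        # otherwise from the offline buffer.
        if self.online and random.random() < self.p:
            return random.choice(list(self.online))
        return random.choice(self.offline)

buf = OORB(offline_data=["offline_step"], online_capacity=2)
for t in ["s1", "s2", "s3"]:
    buf.add(t)
print(list(buf.online))   # only the two freshest transitions remain
print(len(buf.offline))   # the offline buffer kept all four
```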
|
Is my understanding correct that newly collected samples are added to both the online and offline buffers?
|
Yes. As we explained in Section 4.3, the online buffer stores near-on-policy data to stabilize the learning process, which is why we keep it relatively small. By contrast, the offline buffer stores all samples.
|
null | false
| null |
How did Robert Baratheon die?
|
King Robert I Baratheon was gored by a boar while on a hunting trip. Lancel Lannister, one of his squires, provided the king with fortified wine to ensure he was overly drunk and would make a mistake. Had he not died from the boar, the king would have been deliberately knocked off his horse or shot with an arrow, according to Lord Varys. The king's death was arranged by his wife, Cersei Lannister, so he would not learn the truth about who had actually fathered the children he believed he shared with her.
|
|
null | false
| null |
Classify each of the following as amphibian or mammal. Dog, Toad, Salamander, Elephant, Frog, Bear
|
Mammals are Dog, Elephant, Bear. Amphibians are Toad, Salamander, Frog.
|
|
null | false
| null |
What are good female celebrity costumes for an 80s party?
|
There are many amazing 80s female costume options to choose from. See the list below for a few top choices:
- Madonna
- Jem and the Holograms
- Pamela Anderson
- Punky Brewster
- Lady Diana
- Alex Owens from Flashdance
- The Golden Girls
- She-Ra
- Tina Turner
|
|
null | false
| 107
|
Recent years have seen a surge of interest in automatic keyphrase extraction, thanks to the availability of the SemEval-2010 benchmark dataset BIBREF0 . This dataset is composed of documents (scientific articles) that were automatically converted from PDF format to plain text. As a result, most documents contain irrelevant pieces of text (e.g. muddled sentences, tables, equations, footnotes) that require special handling, so as to not hinder the performance of keyphrase extraction systems. In previous work, these are usually removed at the preprocessing step, but using a variety of techniques ranging from simple heuristics BIBREF1 , BIBREF2 , BIBREF3 to sophisticated document logical structure detection on richly-formatted documents recovered from Google Scholar BIBREF4 . Under such conditions, it may prove difficult to draw firm conclusions about which keyphrase extraction model performs best, as the impact of preprocessing on overall performance cannot be properly quantified.
While previous work clearly states that efficient document preprocessing is a prerequisite for the extraction of high quality keyphrases, there is, to our best knowledge, no empirical evidence of how preprocessing affects keyphrase extraction performance. In this paper, we re-assess the performance of several state-of-the-art keyphrase extraction models at increasingly sophisticated levels of preprocessing. Three incremental levels of document preprocessing are experimented with: raw text, text cleaning through document logical structure detection, and removal of keyphrase sparse sections of the document. In doing so, we present the first consistent comparison of different keyphrase extraction models and study their robustness over noisy text. More precisely, our contributions are:
In this paper, we re-assess the performance of several state-of-the-art keyphrase extraction models at increasingly sophisticated levels of preprocessing.
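As an illustration of the "simple heuristics" end of the preprocessing spectrum (the rules below are assumptions for illustration, not the heuristics used in the cited work), one can drop lines that look like equations, lone table captions, or other PDF-conversion noise:

```python
def clean_line(line):
    """Return False for lines that look like PDF-conversion noise."""
    stripped = line.strip()
    if not stripped:
        return False
    letters = sum(ch.isalpha() for ch in stripped)
    # Heuristic 1: mostly non-alphabetic content (equations, table cells).
    if letters / len(stripped) < 0.5:
        return False
    # Heuristic 2: very short fragments left over from layout elements.
    if len(stripped.split()) < 3:
        return False
    return True

doc = [
    "Keyphrase extraction identifies the main topics of a document.",
    "x = 3.1 + 0.2 * y   (4)",
    "Table 2",
    "We evaluate several models on the SemEval-2010 dataset.",
]
print([l for l in doc if clean_line(l)])
```

Document logical structure detection replaces such thresholds with an explicit model of the article's sections, which is the more sophisticated end of the spectrum studied here.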
|
Are there any baseline models?
|
No.
|
null | false
| null |
Who explored New World?
|
Christopher Columbus
|
|
null | false
| 356
|
The Reuters Corpus Volume 2 BIBREF2 , in short RCV2, is a multilingual corpus with a collection of 487,000 news stories. Each news story was manually classified into four hierarchical groups: CCAT (Corporate/Industrial), ECAT (Economics), GCAT (Government/Social) and MCAT (Markets). Topic codes were assigned to capture the major subject of the news story. The entire corpus covers thirteen languages, i.e. Dutch, French, German, Chinese, Japanese, Russian, Portuguese, Spanish, Latin American Spanish, Italian, Danish, Norwegian, and Swedish, written by local reporters in each language. The news stories are not parallel. Single-label stories, i.e. those labeled with only one topic out of the four top categories, are often used for evaluations. However, the class distributions vary significantly across all the thirteen languages (see Table 1 ). Therefore, using random samples to extract evaluation corpora may lead to very imbalanced test sets, i.e. undesired and misleading variability among the languages when the main focus is to evaluate cross-lingual transfer.
The entire corpus covers thirteen languages, i.e. Dutch, French, German, Chinese, Japanese, Russian, Portuguese, Spanish, Latin American Spanish, Italian, Danish, Norwegian, and Swedish, written by local reporters in each language.
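The imbalance problem described above is commonly addressed with stratified sampling; here is a small sketch (toy documents and a skewed label distribution, not the actual RCV2 data) that draws an equal number of documents per class:

```python
import random
from collections import defaultdict

def stratified_sample(items, labels, per_class, seed=0):
    """Draw `per_class` items from each label, so the test set is balanced."""
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for item, label in zip(items, labels):
        by_label[label].append(item)
    sample = []
    for label, group in sorted(by_label.items()):
        sample.extend(rng.sample(group, per_class))
    return sample

# Toy corpus with the four top categories, heavily skewed on purpose.
docs = [f"doc{i}" for i in range(100)]
labels = ["CCAT"] * 70 + ["ECAT"] * 15 + ["GCAT"] * 10 + ["MCAT"] * 5
balanced = stratified_sample(docs, labels, per_class=5)
print(len(balanced))  # 20: five documents per category
```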
|
What languages does the corpus cover?
|
Dutch, French, German, Chinese, Japanese, Russian, Portuguese, Spanish, Latin American Spanish, Italian, Danish, Norwegian, and Swedish
|
null | false
| null |
La buenaventura (1934)
Grand Canary (1934)
Tripping Through the Tropics (1934)
Tres Amores (1934)
El cantante de Napoles (1935)
Julieta Buys a Son (1935)
Rosa de Francia (1935)
El crimen de media noche (1936)
El capitan Tormenta (1936)
Song of the Gringo (1936)
Fiesta (1941)
The Face of Marble (1946)
Gilda (1946)
Two Years Before the Mast (1946)
Secret Beyond the Door... (1947)
Secret of the Incas (1954)
The Rose Tattoo (1955)
The Bottom of the Bottle (1956)
Lawman (1959 episode "The Outsider") as Mrs. Lebeau
|
Which of the following films came out in the 40s?
|
Fiesta
The Face of Marble
Gilda
Two Years Before the Mast
Secret Beyond the Door...
|
null | false
| null |
Bulbophyllum concatenatum is a species of orchid in the genus Bulbophyllum.
Kingdom: Plantae
Clade: Tracheophytes
Clade: Angiosperms
Clade: Monocots
Order: Asparagales
Family: Orchidaceae
Subfamily: Epidendroideae
Tribe: Dendrobieae
Genus: Bulbophyllum
Species: B. concatenatum
|
What is the Family of the orchid Bulbophyllum concatenatum?
|
Orchidaceae
|
null | false
| 192
|
In our approach we develop the idea of BIBREF16, which proposed constructing similarity sets that link ngram phrases to each other and to single words. Phrases and words are included in the same similarity set if they share a component word, for example: weapon – nuclear weapon – weapon of mass destruction; discrimination – racial discrimination. It was supposed that if expressions from the same similarity set co-occur in the same document, then their contribution to the document's topics is greater than their frequencies alone suggest, and therefore their frequencies should be increased. In such an approach, the algorithm can "see" similarities between different multiword expressions that share a component word.
In our approach, at first, we include related single words and phrases from a thesaurus such as WordNet or EuroVoc in these similarity sets. Then, we add preliminarily extracted ngrams into these sets and, this way, we use two different sources of external knowledge. We use the same LDA-SIM algorithm as described in BIBREF16 but study what types of semantic relations can be introduced into such similarity sets and be useful for improving topic models. The pseudocode of LDA-SIM algorithm is presented in Algorithm SECREF3 , where INLINEFORM0 is a similarity set, expressions in similarity sets can comprise single words, thesaurus phrases or generated noun compounds.
We can compare this approach with the approaches applying the generalized Polya urn model BIBREF8 , BIBREF9 , BIBREF10 . To add prior knowledge, those approaches change topic distributions for related words globally in the collection. We modify topic probabilities for related words and phrases locally, in specific texts, only when related words (phrases) co-occur in these texts.
[Algorithm: LDA-SIM. Inputs: collection, vocabulary, number of topics, initial distributions, sets of similar expressions, hyperparameters, and the frequency of each expression in each document; outputs: the topic distributions, updated iteratively until a stop criterion is met.]
In our approach we develop the idea of [17], which proposed constructing similarity sets that link ngram phrases to each other and to single words.
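The shared-component-word rule behind these similarity sets can be sketched in a few lines (toy vocabulary; the grouping rule is a simplification of the cited approach):

```python
from collections import defaultdict

def build_similarity_sets(expressions):
    """Group expressions that share a component word, e.g. 'weapon' with
    'nuclear weapon' and 'weapon of mass destruction'."""
    by_word = defaultdict(set)
    for expr in expressions:
        for word in expr.split():
            by_word[word].add(expr)
    # Keep only words that actually link several expressions together.
    return {w: exprs for w, exprs in by_word.items() if len(exprs) > 1}

vocab = ["weapon", "nuclear weapon", "weapon of mass destruction",
         "discrimination", "racial discrimination", "mass media"]
sets_ = build_similarity_sets(vocab)
print(sorted(sets_["weapon"]))
```

When expressions from the same set co-occur in a document, LDA-SIM boosts their frequencies before the topic-distribution update.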
|
What idea do they develop in their method?
|
To construct similarity sets that link ngram phrases to each other and to single words.
|
null | false
| null |
What language is easy to learn?
|
English is usually considered an easier language to learn than Chinese, Russian, Arabic, and others. Its grammar has fewer exceptions, and its pronunciation is manageable for people from many different parts of the world.
Also, since English is widely used in today's world, you may naturally find more TV programs, commercials, films, classes etc. that could help you quickly learn English as well.
|
|
null | false
| null |
Write a concise and somewhat funny email about making a betting pool for the 2023 Masters golf tournament
|
From: Me
To: Friend
Subject: 2023 Masters Betting Pool
Hi Closest Friends,
April has arrived, the Easter Bunny is coming and although the azaleas have all died from blooming too early and then freezing, the Masters will go on!
It should be an eventful year, with Tiger (aka the GOAT) making his return, Koepka coming off a huge win on the LIV tour, the incredible golf program that is the University of Florida (aka Billy Horschel) looking for its first Masters champion, and many more. We plan on holding the 101st annual Masters draft on Tuesday around 10 PM EST.
See you there. Cheers!
- Me
|
|
null | false
| 203
|
Neural language models BIBREF0 , BIBREF1 , BIBREF2 have become an essential component in several areas of natural language processing (NLP), such as machine translation, speech recognition and image captioning. They have also become a common benchmarking application in machine learning research on recurrent neural networks (RNN), because producing an accurate probabilistic model of human language is a very challenging task which requires all levels of linguistic analysis, from pragmatics to phonology, to be taken into account.
A typical language model is trained on text in a single language, and if one needs to model multiple languages the standard solution is to train a separate model for each language. This presupposes large quantities of monolingual data in each of the languages that needs to be covered and each model with its parameters is completely independent of any of the other models.
We propose instead to use a single model with real-valued vectors to indicate the language used, and to train this model with a large number of languages. We thus get a language model whose predictive distribution INLINEFORM0 is a continuous function of the language vector INLINEFORM1 , a property that is trivially extended to other neural NLP models. In this paper, we explore the “language space” containing these vectors, and in particular explore what happens when we move beyond the points representing the languages of the training corpus.
The motivation of combining languages into one single model is at least two-fold: First of all, languages are related and share many features and properties, a fact that is ignored when using independent models. The second motivation is data sparseness, an issue that heavily influences the reliability of data-driven models. Resources are scarce for most languages in the world (and also for most domains in otherwise well-supported languages), which makes it hard to train reasonable parameters. By combining data from many languages, we hope to mitigate this issue.
In contrast to related work, we focus on massively multilingual data sets to cover for the first time a substantial amount of the linguistic diversity in the world in a project related to data-driven language modeling. We do not presuppose any prior knowledge about language similarities and evolution and let the model discover relations on its own purely by looking at the data. The only supervision that is given during training is a language identifier as a one-hot encoding. From that and the actual training examples, the system learns dense vector representations for each language included in our data set along with the character-level RNN parameters of the language model itself.
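To make the mechanism concrete, here is a toy Python sketch (our own illustration, not the paper's code) of how a one-hot language identifier selects a trainable dense language vector, which is then concatenated to each character embedding before entering the RNN. All dimensions and names here are assumptions for illustration:

```python
# Toy sketch: a one-hot language identifier only selects a row of a
# trainable embedding matrix; the dense language vector is concatenated
# to each character embedding that feeds the character-level RNN.
import random

random.seed(0)

NUM_LANGS, LANG_DIM, CHAR_DIM = 3, 4, 8

# Trainable language embeddings, randomly initialized.
lang_embeddings = [[random.uniform(-0.1, 0.1) for _ in range(LANG_DIM)]
                   for _ in range(NUM_LANGS)]

def language_vector(one_hot):
    """Map a one-hot language identifier to its dense vector."""
    lang_id = one_hot.index(1)
    return lang_embeddings[lang_id]

def rnn_input(char_embedding, one_hot_lang):
    """Concatenate the language vector to a character embedding."""
    return char_embedding + language_vector(one_hot_lang)

char_emb = [0.0] * CHAR_DIM
x = rnn_input(char_emb, [0, 1, 0])
print(len(x))  # CHAR_DIM + LANG_DIM = 12
```

Because the predictive distribution is a continuous function of the language vector, one can also feed in vectors that lie between the trained rows, which is what the paper's exploration of "language space" relies on.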
We do not presuppose any prior knowledge about language similarities and evolution and let the model discover relations on its own purely by looking at the data. The only supervision that is given during training is a language identifier as a one-hot encoding.
|
What is the only supervision during training?
|
A language identifier as a one-hot encoding.
|
null | false
| null |
Who is the greatest musical theatre composer of all time?
|
While art is subjective, there are a few luminary composers who have made significant contributions to musical theatre and whose work has stood the test of time. One could make a case for any of the following. Jerome Kern wrote more than 700 songs and wrote the music to "Showboat", one of the first Broadway musicals with a serious plot and themes. The songwriting team of Rodgers & Hammerstein was the first to use songs to further the plot vs. break the plot for a song and dance. Their musicals are still enormously popular and performed constantly. They include "Oklahoma", "The King & I", and "The Sound of Music". Leonard Bernstein, while most known for classical music, made a major contribution to musicals with the groundbreaking "West Side Story", an updated Romeo and Juliet story with spectacular music, every song a classic. Bernstein collaborated with Stephen Sondheim on this musical. Sondheim ushered in the modern-day musical, in which shows might explore a theme vs. a linear plot. While he is often criticized for not having "hummable" melodies, his genius is undeniable, and he did have one bona fide hit with "Send in the Clowns". Jule Styne composed what is often considered the greatest musical of all time, "Gypsy". The dynamite songwriting team of John Kander and Fred Ebb composed the longest-running American musical, "Chicago", and the dark yet entertaining "Cabaret". Finally, while often criticized by musical-theatre purists as blatantly commercial, the success of Andrew Lloyd Webber cannot be denied. He composed the enormously successful "Cats", the longest-running Broadway musical of all time "Phantom of the Opera", and even in his mid-70s is still composing, with the recently opened "Bad Cinderella" on Broadway.
|
|
null | false
| null |
Missy Elliott references what infamous celebrity couple in "Let Me Fix My Weave?"
|
Ben Affleck and Jennifer Lopez
|
|
null | false
| null |
Give me a list of golf ball options to use this weekend.
|
-Bridgestone Tour B X
-Titleist Pro V1
-Bridgestone Tour B XS
-Titleist Pro V1x
-Bridgestone Tour B RX
-Titleist Pro V1x Left Dash
-Bridgestone Tour B RXS
-Titleist AVX
-Bridgestone e12 Contact
-Callaway Chrome Soft
-Bridgestone e12 Long Drive
-Callaway Chrome Soft X
-Callaway Supersoft
|
|
null | false
| null |
Mark's innie awakens in Devon's home and finds himself hugging Cobel. While excusing himself to find Devon, he calls Cobel by name, alerting her that the overtime contingency has been activated. Cobel calls Milchick and has him check the security office. Mark privately reveals to Devon that he is in innie form; Devon tells him of Gemma's death, and learns that "Mrs. Selvig" is Mark's boss. Mark encourages Devon to report Lumon's misdeeds to the press as Lumon likely controls the police. Irving wakes up in his apartment, discovering his outie's paintings and background in the U.S. Navy, and finds a map and employee directory in the closet which he uses to locate Burt. Helly wakes up at a Lumon gala where she learns that her outie is Helena Eagan – daughter of Lumon CEO Jame Eagan – who underwent severance to build public support for legalizing the procedure. Cobel races to the gala and attempts to stop Helly from making a scheduled speech. Milchick reaches the security office and cuts his way through the makeshift restraints Dylan has placed on the door. Helly gets onstage and tells the crowd of the innies' subjugation and torment. Irving arrives at Burt's house only to find he is already in a relationship. Mark finds a photo confirming Ms. Casey to be Gemma. He rushes to tell Devon, but is only able to say "She's alive!" before Milchick tackles Dylan, deactivating the overtime contingency and reverting the three to their outie state.
|
What is an "outie" in the context of this paragraph?
|
Based on the provided paragraph, it's not entirely clear what an "outie" is. However, based on this sentence, "He rushes to tell Devon, but is only able to say "She's alive!" before Milchick tackles Dylan, deactivating the overtime contingency and reverting the three to their outie state.", it seems that an "outie" is a state a person can be in, and that the state can be switched on by deactivating the overtime contingency.
|
null | false
| null |
Classify each instrument as a woodwind, string, brass, or percussion: trumpet, violin, oboe, french horn, bassoon, cymbal, cello, clarinet, trombone, triangle, piccolo, flute, harp, tuba, snare drum
|
The violin, cello, and harp are classified as string instruments. The oboe, bassoon, clarinet, piccolo, and flute are classified as woodwind instruments. The trumpet, french horn, trombone, and tuba are classified as brass instruments. The triangle, cymbal, and snare drum are classified as percussion instruments.
|
|
null | false
| null |
Give me a bulleted list of the 5 oldest countries in the world:
|
Here are the 5 oldest countries in the world:
- Egypt
- Greece
- Hungary
- France
- Japan
|
|
null | false
| null |
What are some of the highest-rated episodes of The X-Files, based on viewer's ratings?
|
According to IMDB, users on the site have ranked the following as the top ten episodes from The X-Files: Bad Blood, Clyde Bruckman's Final Repose, Anasazi, Dreamland, Musings of a Cigarette Smoking Man, Paper Clip, Redux II, The Erlenmeyer Flask, Jose Chung's From Outer Space, and Small Potatoes.
|
|
null | false
| null |
Give me some movie/tv series to learn English
|
There are several tv series that can be used to learn English.
1. "Friends" is an American television sitcom that ran for ten seasons, from September 22, 1994 to May 6, 2004. The show revolves around six friends in their 20s and 30s who live in Manhattan, New York City. The language used in Friends is casual and not challenging to understand. Watching Friends to learn English will help you to build up your vocabulary.
2. "The Big Bang Theory" is an American television sitcom about two Caltech physicists who share an apartment, and their friends. The show has some challenging English words that require some domain knowledge to understand. This show is recommended for people who have an intermediate English level.
|
|
null | false
| null |
What is the difference between guitar and ukulele?
|
Guitar and ukulele are similar instruments. But there are many differences between them:
- The biggest difference is that a ukulele is much smaller than a guitar.
- Second, the ukulele has four strings while the guitar has six. The ukulele is tuned G-C-E-A, while the guitar is tuned E-A-D-G-B-E.
- Next, the sound is different. The guitar sounds much fuller than the smaller-body ukulele.
- The ukulele usually comes with nylon strings, while the acoustic guitar usually comes with steel ones.
- Scale length. The length from the nut to the saddle is much longer on a guitar.
- Difficulty. The six strings make the guitar a little more difficult to play than a ukulele.
|
|
null | false
| null |
What are the key components of Hadoop?
|
There are three key components of Hadoop:
- Hadoop HDFS (Hadoop Distributed File System) - the storage unit.
- Hadoop MapReduce - the processing unit.
- Hadoop YARN (Yet Another Resource Negotiator) - the resource management unit.
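For illustration, the processing model behind Hadoop MapReduce can be sketched in pure Python (a toy word count, not Hadoop's actual Java API; the function names are our own):

```python
# Pure-Python simulation of the MapReduce word-count flow that Hadoop's
# processing unit runs at scale: map -> shuffle (group by key) -> reduce.
from collections import defaultdict

def map_phase(line):
    # Emit (word, 1) for every word, as a Hadoop mapper would.
    return [(word.lower(), 1) for word in line.split()]

def shuffle(pairs):
    # Group all values by key, as the framework does between phases.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    # Sum the counts for each word, as a reducer would.
    return {word: sum(counts) for word, counts in grouped.items()}

lines = ["hadoop stores data", "hadoop processes data"]
pairs = [pair for line in lines for pair in map_phase(line)]
counts = reduce_phase(shuffle(pairs))
print(counts["hadoop"], counts["data"])  # 2 2
```

In real Hadoop, HDFS would hold the input splits, YARN would schedule the map and reduce containers, and the shuffle would happen across the cluster network.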
|
|
null | false
| null |
Dementia is a disorder which manifests as a set of related symptoms, which usually surfaces when the brain is damaged by injury or disease. The symptoms involve progressive impairments in memory, thinking, and behavior, which negatively affects a person's ability to function and carry out everyday activities. Aside from memory impairment and a disruption in thought patterns, the most common symptoms include emotional problems, difficulties with language, and decreased motivation. The symptoms may be described as occurring in a continuum over several stages. Consciousness is not affected. Dementia ultimately has a significant effect on the individual, caregivers, and on social relationships in general. A diagnosis of dementia requires the observation of a change from a person's usual mental functioning and a greater cognitive decline than what is caused by normal aging.
|
Based on this paragraph, please list the symptoms of dementia separating them with bullets.
|
-Progressive impairments in memory, thinking and behavior
-Emotional problems
-Difficulties with language
-Decreased motivation
|
null | false
| null |
The University of Virginia (UVA) is a public research university in Charlottesville, Virginia. It was founded in 1819 by Thomas Jefferson. Set within the Academical Village, a UNESCO World Heritage Site, the university is referred to as a "Public Ivy" for offering an academic experience similar to that of an Ivy League university. It is known in part for certain rare characteristics among public universities such as its historic foundations, student-run honor code, and secret societies.
|
According to this paragraph, when was the university of virginia founded?
|
The University of Virginia was founded in 1819.
|
null | false
| null |
Sauvignon blanc is a green-skinned grape variety that originates from the city of Bordeaux in France. The grape most likely gets its name from the French words sauvage ("wild") and blanc ("white") due to its early origins as an indigenous grape in South West France. It is possibly a descendant of Savagnin. Sauvignon blanc is planted in many of the world's wine regions, producing a crisp, dry, and refreshing white varietal wine. The grape is also a component of the famous dessert wines from Sauternes and Barsac. Sauvignon blanc is widely cultivated in France, Chile, Romania, Canada, Australia, New Zealand, South Africa, Bulgaria, the states of Oregon, Washington, and California in the US. Some New World Sauvignon blancs, particularly from California, may also be called "Fumé Blanc", a marketing term coined by Robert Mondavi in reference to Pouilly-Fumé.
Depending on the climate, the flavor can range from aggressively grassy to sweetly tropical. In cooler climates, the grape has a tendency to produce wines with noticeable acidity and "green flavors" of grass, green bell peppers and nettles with some tropical fruit (such as passion fruit) and floral (such as elderflower) notes. In warmer climates, it can develop more tropical fruit notes but risks losing much aroma from over-ripeness, leaving only slight grapefruit and tree fruit (such as peach) notes.
|
What is Sauvignon blanc?
|
Sauvignon blanc is a green-skinned grape variety that originates from the city of Bordeaux in France. Sauvignon blanc is planted in many of the world's wine regions, producing a crisp, dry, and refreshing white varietal wine.
|
null | false
| null |
Who is the primary female antagonist on "The X-Files"?
|
Special Agent Dana Katherine Scully
|
|
null | false
| null |
How much wood could a woodchuck chuck if a woodchuck could chuck wood?
|
The answer to this depends on various factors. If the wood is very heavy, a woodchuck may not be capable of chucking it at all. If the pieces are smaller and the woodchuck is relatively young and spritely, then it could probably chuck at least 30 to 40 lbs within a few hours. An older woodchuck may no longer have the dexterity or strength needed to chuck any piece of wood, regardless of its size or weight.
|
|
null | false
| null |
What is the air speed velocity of an unladen swallow?
|
The European swallow, weighing on average 20 grams, flaps its wings 12 times every second. This results in an estimated air speed of 9 meters per second, which is roughly 20.1 miles per hour.
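A quick arithmetic check of the unit conversion (9 m/s expressed in miles per hour):

```python
# Convert the estimated air speed of 9 m/s to miles per hour.
METERS_PER_MILE = 1609.344
SECONDS_PER_HOUR = 3600

speed_ms = 9
speed_mph = speed_ms * SECONDS_PER_HOUR / METERS_PER_MILE
print(round(speed_mph, 1))  # 20.1
```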
|
|
null | false
| 192
|
To estimate the quality of topic models, we use two main automatic measures: topic coherence and kernel uniqueness. For human content analysis, measures of topic coherence and kernel uniqueness are both important and complement each other. Topics can be coherent but have a lot of repetitions. On the other hand, generated topics can be very diverse, but incoherent within each topic.
Topic coherence is an automatic metric of interpretability. It was shown that the coherence measure has a high correlation with the expert estimates of topic interpretability BIBREF9 , BIBREF17 . Mimno BIBREF9 described an experiment comparing expert evaluation of LDA-generated topics and automatic topic coherence measures. It was found that most "bad" topics consisted of words without clear relations between each other.
Newman et al. BIBREF6 asked users to score topics on a 3-point scale, where 3=“useful” (coherent) and 1=“useless” (less coherent). They instructed the users that one indicator of usefulness is the ease by which one could think of a short label to describe a topic. Then several automatic measures, including WordNet-based measures and corpus co-occurrence measures, were compared. It was found that the best automatic measure having the largest correlation with human evaluation is word co-occurrence calculated as point-wise mutual information (PMI) on Wikipedia articles. Later Lau et al. BIBREF17 showed that normalized pointwise mutual information (NPMI) BIBREF18 calculated on Wikipedia articles correlates even more strongly with human scores.
We calculate automatic topic coherence using two measure variants. The coherence of a topic is the median PMI (or NPMI) of word pairs representing the topic; it is usually calculated over the n most probable elements of the topic (ten elements in our study). The coherence of the model is the median of the topic coherences. To make this measure more objective, it should be calculated on an external corpus BIBREF17. In our case, we use Wikipedia dumps. The pairwise scores follow the standard definitions: PMI(w_i, w_j) = log [ p(w_i, w_j) / ( p(w_i) p(w_j) ) ] and NPMI(w_i, w_j) = PMI(w_i, w_j) / ( -log p(w_i, w_j) ).
Human-constructed topics usually have unique main words. The measure of kernel uniqueness shows to what extent topics differ from each other; it is calculated as the number of unique elements among the most probable elements of topics (the kernels) divided by the total number of elements in the kernels: U = |unique words across all kernels| / (total number of words in all kernels).
If the uniqueness of the topic kernels is close to zero, then many topics are similar to each other and contain the same words in their kernels. In this paper the kernel of a topic means the ten most probable words in the topic. We also calculated perplexity, a standard measure for language models, as an additional check of model quality.
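As an illustration, the coherence computation described above can be sketched in Python; a tiny toy corpus stands in for the Wikipedia dumps, and the names (`corpus`, `p`, `eps`) are our own:

```python
# Topic coherence as the median PMI over pairs of a topic's top words,
# estimated from document co-occurrence counts in a reference corpus.
import math
from itertools import combinations
from statistics import median

# Toy reference corpus: each document as a set of words.
corpus = [
    {"apple", "fruit", "tree"},
    {"apple", "fruit", "juice"},
    {"tree", "forest"},
    {"apple", "tree"},
]
N = len(corpus)

def p(*words):
    # Fraction of documents containing all the given words.
    return sum(1 for doc in corpus if set(words) <= doc) / N

def pmi(w1, w2, eps=1e-12):
    return math.log((p(w1, w2) + eps) / (p(w1) * p(w2) + eps))

def topic_coherence(top_words):
    # Median PMI over all pairs of the topic's top words.
    return median(pmi(a, b) for a, b in combinations(top_words, 2))

print(round(topic_coherence(["apple", "fruit", "tree"]), 3))  # -0.118
```

In the paper's setting the top-word lists come from the topic model's ten most probable words per topic, and the co-occurrence statistics come from Wikipedia rather than this toy corpus.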
The coherence of the model is the median of the topic coherence. To make this measure more objective, it should be calculated on an external corpus [19]. In our case, we use Wikipedia dumps.
|
What external corpus do they use in their case?
|
Wikipedia dumps.
|
null | false
| null |
Malhar Rao Holkar (1694–1766), a Maratha chief serving Peshwa Baji Rao, established the dynasty's rule over Indore. In the 1720s, he led Maratha armies in Malwa region, and in 1733 was granted 9 parghanas in the vicinity of Indore by the Peshwa. The township of Indore had already existed as an independent principality established by Nandlal Mandloi of Kampel, Nandlal Mandloi was won by the Maratha force and allowed them to camp across the Khan River. In 1734, Malhar Rao established a camp later called Malharganj. In 1747, he started the construction of his royal palace, the Rajwada. By the time of his death, he ruled much of Malwa, and was acknowledged as one of the five virtually independent rulers of the Maratha Confederacy.
He was succeeded by Ahilyabai Holkar (r. 1767–1795), his daughter-in-law. She was born in the Chaundi village in Maharashtra. She moved the capital to Maheshwar, south of Indore on the Narmada River. Rani Ahilyabai was a prolific builder and patron of Hindu temples in Maheshwar and Indore. She also built temples at sacred sites outside her kingdom, from Dwarka in Gujarat east to the Kashi Vishwanath Temple at Varanasi on the Ganges.
|
Can you please tell some details about Malhar Rao Holkar from the given text?
|
1. Malhar Rao Holkar (1694–1766), a Maratha chief, established the dynasty's rule over Indore.
2. In the 1720s, he led Maratha armies in the Malwa region, and in 1733 he was granted 9 parghanas in the vicinity of Indore by the Peshwa.
3. The township of Indore had already existed as an independent principality established by Nandlal Mandloi of Kampel.
4. Nandlal Mandloi was won over by the Maratha force and allowed them to camp across the Khan River.
5. In 1734, Malhar Rao established a camp later called Malharganj.
6. In 1747, he started the construction of his royal palace, the Rajwada.
|
null | false
| null |
Salesforce, Inc. is an American cloud-based software company headquartered in San Francisco, California. It provides customer relationship management (CRM) software and applications focused on sales, customer service, marketing automation, e-commerce, analytics, and application development.
Founded by former Oracle executive Marc Benioff, Salesforce quickly grew into one of the largest companies in the world, making its IPO in 2004. Salesforce's continued growth makes it the first cloud computing company to reach US$1 billion in annual revenue by fiscal year 2009, and the world's largest enterprise software firm by 2022.
Today, Salesforce is one of the largest technology companies in the world, and as of September 19, 2022, is the 61st largest company in the world by market cap with a value of nearly US$153 billion. Salesforce ranked 136th on the most recent edition of the Fortune 500, making US$26.5 billion in 2022. Since 2020, Salesforce has also been a component of the Dow Jones Industrial Average.
|
Extract the name of the founder of Salesforce from the text
|
Marc Benioff
|
null | false
| null |
American singer-songwriter Taylor Swift has come in dispute with her former record label, Big Machine Records, its founder Scott Borchetta, and new owner Scooter Braun, over the ownership of the master recordings of her first six studio albums. It is a highly publicized conflict, drawing widespread attention and media coverage since its ignition in 2019.
|
Given this paragraph about Taylor Swift re-recording her masters, who was responsible for the controversy?
|
Scooter Braun.
|
null | false
| null |
Procrastination is the action of unnecessarily and voluntarily delaying or postponing something despite knowing that there will be negative consequences for doing so. The word has originated from the Latin word procrastinatus, which itself evolved from the prefix pro-, meaning "forward," and crastinus, meaning "of tomorrow." Oftentimes, it is a habitual human behaviour. It is a common human experience involving delay in everyday chores or even putting off salient tasks such as attending an appointment, submitting a job report or academic assignment, or broaching a stressful issue with a partner. Although typically perceived as a negative trait due to its hindering effect on one's productivity often associated with depression, low self-esteem, guilt and inadequacy, it can also be considered a wise response to certain demands that could present risky or negative outcomes or require waiting for new information to arrive.
From a cultural and a social perspective, students from both Western and non-Western cultures are found to exhibit academic procrastination, but for different reasons. Students from Western cultures tend to procrastinate in order to avoid doing worse than they have done before or from failing to learn as much as they should have, whereas students from non-Western cultures tend to procrastinate in order to avoid looking incompetent, or to avoid demonstrating a lack of ability in front of their peers. It is also important to consider how different cultural perspectives of time management can impact procrastination. For example, in cultures that have a multi-active view of time, people tend to place a higher value on making sure a job is done accurately before finishing. In cultures with a linear view of time, people tend to designate a certain amount of time on a task and stop once the allotted time has expired.
|
From the passage provided, list the traits which are often associated with Procrastination (in a comma separated manner).
|
Based on the passage provided, here are the traits which are often associated with Procrastination: depression, low self-esteem, guilt, inadequacy.
|
null | false
| null |
What is epistasis?
|
Epistasis is a phenomenon in genetics in which the effect of a gene mutation is dependent on the presence or absence of mutations in one or more other genes, respectively termed modifier genes. In other words, the effect of the mutation is dependent on the genetic background in which it appears. Epistatic mutations therefore have different effects on their own than when they occur together. Originally, the term epistasis specifically meant that the effect of a gene variant is masked by that of a different gene.
The concept of epistasis originated in genetics in 1907 but is now used in biochemistry, computational biology and evolutionary biology. The phenomenon arises due to interactions, either between genes (such as mutations also being needed in regulators of gene expression) or within them (multiple mutations being needed before the gene loses function), leading to non-linear effects. Epistasis has a great influence on the shape of evolutionary landscapes, which leads to profound consequences for evolution and for the evolvability of phenotypic traits.
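For illustration, epistasis is often quantified as the deviation of a double mutant's effect from the sum of the single-mutant effects under an additive model. The numbers below are hypothetical, chosen only to show the calculation:

```python
# Hypothetical log-fitness effects of two mutations, A and B, measured
# alone and together. Under a purely additive (non-epistatic) model the
# double mutant's effect would equal the sum of the single effects;
# epistasis is the deviation from that sum.
effect = {"A": -0.10, "B": -0.05, "AB": -0.40}

expected_additive = effect["A"] + effect["B"]   # -0.15
epistasis = effect["AB"] - expected_additive    # -0.25

# Here the double mutant is worse than expected: negative (synergistic)
# epistasis. A zero deviation would mean no epistasis.
print(epistasis < 0)  # True
```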
|
|
null | false
| 356
|
There are many tasks in natural language processing which require the classification of sentences or longer paragraphs into a set of predefined categories. Typical applications are for instance topic identification (e.g. sports, news, $\ldots $ ) or product reviews (positive or negative). There is a large body of research on approaches for document classification. An important aspect to compare these different approaches is the availability of high quality corpora to train and evaluate them. Unfortunately, most of these evaluation tasks focus on the English language only, while there is an ever increasing need to perform document classification in many other languages. One could of course collect and label training data for other languages, but this would be costly and time consuming. An interesting alternative is “cross-lingual document classification”. The underlying idea is to use a representation of the words or whole documents which is independent of the language. By these means, a classifier trained on one language can be transferred to a different one, without the need of resources in that transfer language. Ideally, the performance obtained by cross-lingual transfer should be as close as possible to training the entire system on language specific resources. Such a task was first proposed by BIBREF0 using the Reuters Corpus Volume 2. The aim was to first train a classifier on English and then to transfer it to German, and vice versa. An extension to the transfer between English and French and Spanish respectively was proposed by BIBREF1 . However, only few comparative results are available for these transfer directions.
The contributions of this work are as follows. We extend previous works and use the data in the Reuters Corpus Volume 2 to define new cross-lingual document classification tasks for eight very different languages, namely English, French, Spanish, Italian, German, Russian, Chinese and Japanese. For each language, we define a train, development and test corpus. We also provide strong reference results for all transfer directions between the eight languages, i.e., not limited to the transfer between a foreign language and English. We compare two approaches, based on multilingual word embeddings and multilingual sentence embeddings, respectively. By these means, we hope to define a clear evaluation environment for highly multilingual document classification.
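The cross-lingual transfer idea can be sketched with a toy example: once documents in every language are mapped into one shared vector space, a classifier fitted on English vectors can score, say, German vectors directly. The vectors, labels, and nearest-centroid classifier below are our own illustration, not the paper's models:

```python
# Toy sketch of cross-lingual document classification via a shared,
# language-independent embedding space.
def centroid(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def nearest_centroid(vec, centroids):
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(vec, centroids[label]))

# "Train" on English documents embedded in the shared space.
english_train = {
    "sports": [[0.9, 0.1], [0.8, 0.2]],
    "finance": [[0.1, 0.9], [0.2, 0.8]],
}
centroids = {label: centroid(vecs) for label, vecs in english_train.items()}

# Classify a German document via its embedding in the same space,
# with no German training data at all.
german_doc_vector = [0.85, 0.15]
print(nearest_centroid(german_doc_vector, centroids))  # sports
```

The quality of such transfer hinges entirely on how well the multilingual embeddings align the languages, which is what the benchmark is designed to measure.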
We extend previous works and use the data in the Reuters Corpus Volume 2 to define new cross-lingual document classification tasks for eight very different languages, namely English, French, Spanish, Italian, German, Russian, Chinese and Japanese.
|
What languages did the authors define a new cross-lingual document classification task for?
|
English, French, Spanish, Italian, German, Russian, Chinese and Japanese.
|
null | false
| null |
What are the best hairstyles for a round face?
|
Long hair, angled cut, beach waves
|
|
1901.03860
| false
| null |
We took inspiration from these works to design our experiments to solve the CSKS task.
We took inspiration from these works to design our experiments to solve the CSKS task.
|
What problem do they apply transfer learning to?
|
The answers are shown as follows:
* CSKS task
|
null | false
| null |
Categorize the following countries based on the language their majority speaks: Iraq, Iran, Turkey, UAE, Qatar, Afghanistan, Syria
|
Iran and Afghanistan are majority Farsi-speaking (or a variation of it). Turkey is majority Turkish-speaking. The majority of people in Iraq, the UAE, Qatar, and Syria speak Arabic. Contrary to the generally held belief that Iran and Afghanistan are Arabic-speaking countries, they are not; Farsi is a totally different language.
|
|
null | false
| null |
I am working on a blog post on the effect of Snapchat on teenagers. Can you help me think of a few titles for the post? For background, I think Snapchat has a negative effect on our teenagers ability to focus, maintain healthy in-person relationships, and develop into productive adults.
|
Sure, here are a few ideas: "Snapchat negatively affects our teens' focus and ability to maintain healthy relationships," "Snapchat and the decline of youth productivity," "Snapchat filters hide the complexities of teenage reality," "Snapchat wins when our teenagers lose themselves online," and "But first, let me take a selfie: teens put off college and the workforce to become social influencers."
|
|
null | false
| null |
What is the difference between a mechanical watch and a quartz watch?
|
Mechanical watches are not powered by batteries. They are driven by gears and springs that need manual winding, maintenance, and support. This technology was developed in a period before quartz watch technology existed. Since mechanical watches do not rely on a battery, they typically drift by ±2 seconds every day and need regular resetting.
|
|
null | false
| null |
What are the 3 branches of science?
|
Modern science is mainly constituted of three main branches, i.e. Natural Sciences, Social Sciences, and Formal Sciences, as these three areas explore the nature of our world and the universe most broadly. Thus, the three main branches of science are:
Natural Sciences
Social Sciences
Formal Sciences
|
|
null | false
| 310
|
We minimize the binary cross-entropy loss between the predicted outputs and the targets. We train our neural model with mini-batch size 8 using Adam BIBREF12 with learning rate INLINEFORM0, and decay the learning rate by multiplying it by INLINEFORM1 if the F1-Multi score on the validation set decreases during validation. Gradient norms are clipped to 5 to prevent gradient explosion in the feed-forward and recurrent networks. Since the training corpus is rather small, we use dropout BIBREF13 with probability 0.3 to prevent overfitting.
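For illustration, two pieces of this setup, the binary cross-entropy loss and gradient-norm clipping at 5, can be sketched in pure Python with toy numbers (this is our own sketch, not the paper's model):

```python
# Binary cross-entropy over a batch, and clipping a gradient vector so
# its L2 norm does not exceed 5, as described in the training setup.
import math

def binary_cross_entropy(predictions, targets, eps=1e-12):
    # Mean of -[t*log(p) + (1-t)*log(1-p)] over the batch.
    return -sum(t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps)
                for p, t in zip(predictions, targets)) / len(targets)

def clip_gradient(grad, max_norm=5.0):
    # Rescale the gradient if its L2 norm exceeds max_norm.
    norm = math.sqrt(sum(g * g for g in grad))
    if norm > max_norm:
        grad = [g * max_norm / norm for g in grad]
    return grad

loss = binary_cross_entropy([0.9, 0.2], [1, 0])
clipped = clip_gradient([6.0, 8.0])  # norm 10 -> rescaled to norm 5
print(round(loss, 3), clipped)  # 0.164 [3.0, 4.0]
```

In a deep-learning framework these would be a built-in loss and a gradient-clipping utility applied between the backward pass and the optimizer step.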
Since the training corpus is rather small, we use dropout (Srivastava et al., 2014) with probability 0.3 to prevent overfitting.
|
What is the probability of dropout?
|
0.3.
|
null | false
| null |
In 1876 De Geer became the first Prime Minister of Sweden following a reform where the previous offices of Prime Minister for Justice (which he held at the time) and Prime Minister for Foreign Affairs were changed into Minister for Justice and a Minister for Foreign Affairs. He served until April 1880, when the failure of his repeated efforts to settle the armaments issue again induced him to resign. From 1881 to 1888 he was Chancellor for the Universities of Uppsala and Lund. He was an advocate of free trade and economic liberalism. Some argue that it was De Geer who laid the foundations for the strong economic growth in Sweden from 1870 to 1970.
|
As Chancellor for the Universities of Uppsala and Lund, what did De Geer promote?
|
While holding this position, De Geer promoted free trade and economic liberalism.
|
null | false
| null |
Sum 41 is a Canadian rock band from Ajax, Ontario. Originally called Kaspir, the band was formed in 1996 and currently consists of Deryck Whibley (lead vocals, guitars, keyboards), Dave Baksh (lead guitar, backing vocals), Jason "Cone" McCaslin (bass, backing vocals), Tom Thacker (guitars, keyboards, backing vocals), and Frank Zummo (drums, percussion, occasional backing vocals).
In 1999, Sum 41 signed an international record deal with Island Records and released its first EP, Half Hour of Power, in 2000. The band released its debut album, All Killer No Filler, in 2001. The album achieved mainstream success with its first single, "Fat Lip", which reached number one on the Billboard Modern Rock Tracks chart and remains the band's most successful single to date. The album's next singles "In Too Deep" and "Motivation" also achieved commercial success. All Killer No Filler was certified platinum in both the United States and the United Kingdom and triple platinum in Canada. In 2002, the band released Does This Look Infected?, which was also a commercial and critical success. The singles "The Hell Song" and "Still Waiting" both charted highly on the modern rock charts.
The band released its next album, Chuck, in 2004, led by singles "We're All to Blame" and "Pieces". The album proved successful, peaking at number 10 on the Billboard 200. In 2007, the band released Underclass Hero, which was met with a mixed reception, but became the band's highest-charting album to date. It was also the band's last album on Aquarius Records. The band released the album Screaming Bloody Murder, on Island Records in 2011 to a generally positive reception, though it fell short of its predecessors' commercial success. The band's sixth studio album, 13 Voices was released in 2016. IMPALA awarded the album with a double gold award for 150,000 sold copies across Europe. The band's seventh studio album Order in Decline was released on July 19, 2019. The band is in production of their eighth studio album titled Heaven and Hell. A release date has yet to be announced.
|
Retrieve and list the Sum41 albums mentioned in the text
|
The albums mentioned in the text are:
- All Killer No Filler
- Does This Look Infected?
- Chuck
- Underclass Hero
- Screaming Bloody Murder
- 13 Voices
- Order in Decline
- Heaven and Hell
|
null | false
| null |
Who was Hitler's mistress?
|
Eva Braun
|
|
null | false
| 284
|
We experimented with a dataset of 16K annotated tweets made available by the authors of BIBREF0 . Of the 16K tweets, 3383 are labeled as sexist, 1972 as racist, and the remaining are marked as neither sexist nor racist. For the embedding based methods, we used the GloVe BIBREF5 pre-trained word embeddings. GloVe embeddings have been trained on a large tweet corpus (2B tweets, 27B tokens, 1.2M vocab, uncased). We experimented with multiple word embedding sizes for our task. We observed similar results with different sizes, and hence due to lack of space we report results using embedding size=200. We performed 10-Fold Cross Validation and calculated weighted macro precision, recall and F1-scores.
We use `adam' for CNN and LSTM, and `RMS-Prop' for FastText as our optimizer. We perform training in batches of size 128 for CNN & LSTM and 64 for FastText. More details on the experimental setup can be found from our publicly available source code.
We perform training in batches of size 128 for CNN & LSTM and 64 for FastText.
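The evaluation protocol described above (10-fold cross-validation with weighted macro precision, recall, and F1) can be sketched as follows; the features, labels, and classifier below are placeholders, not the paper's tweet data or CNN/LSTM/FastText models:

```python
# Sketch of 10-fold CV with weighted precision/recall/F1, as described above.
# X, y, and the classifier are stand-ins for the paper's actual setup.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_recall_fscore_support

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))        # placeholder features
y = rng.integers(0, 3, size=300)      # 3 classes: neither / sexist / racist

scores = []
for train_idx, test_idx in StratifiedKFold(n_splits=10).split(X, y):
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    p, r, f1, _ = precision_recall_fscore_support(
        y[test_idx], clf.predict(X[test_idx]),
        average="weighted", zero_division=0,
    )
    scores.append((p, r, f1))

print(np.mean(scores, axis=0))  # fold-averaged weighted precision, recall, F1
```

With random labels the averaged scores hover around chance; on real annotated data the same loop yields the weighted macro metrics reported in the paper.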
|
What is the batch size for FastText?
|
64.
|
null | false
| null |
Choose the word which is different from the rest: Cream, Oil, Butter, Curd, Cheese.
|
All except Oil are products obtained from milk.
|
|
null | false
| null |
What is a ceiling joist?
|
Ceiling joists are horizontal framing members that span the open space between walls or beams and support the ceiling; they are among the main framing members used in modern building construction.
|
|
null | false
| null |
What is the difference between stress and strain?
|
Stress is the force applied to an object per unit area: the applied force divided by the cross-sectional area over which it acts. It is expressed in units of pressure, such as pascals (Pa).
Strain is the deformation that results from stress: the change in a dimension (such as length or height) divided by the original dimension. It is unitless.
Young's Modulus is a mechanical property computed by dividing stress by strain. It is typically expressed in GPa (gigapascals).
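The three quantities above can be sketched numerically; the rod dimensions and load below are made-up illustrative values, not measurements:

```python
# Toy example: stress, strain, and Young's modulus for a rod under tension.
# All input numbers are hypothetical illustrative values.
force_n = 10_000.0           # applied force, newtons
area_m2 = 0.0005             # cross-sectional area, square metres
length_m = 2.0               # original length, metres
delta_length_m = 0.0002      # elongation under load, metres

stress_pa = force_n / area_m2                    # stress = force / area
strain = delta_length_m / length_m               # strain = ΔL / L (unitless)
youngs_modulus_gpa = (stress_pa / strain) / 1e9  # E = stress / strain, in GPa

print(stress_pa)             # 20000000.0 Pa
print(strain)                # 0.0001
print(youngs_modulus_gpa)    # ≈ 200 GPa, typical of steel
```

Note that strain comes out dimensionless because it is a ratio of two lengths, which is why Young's modulus carries the units of stress.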
|
|
1805.02400
| true
| null |
We noticed some variation in the detection of different fake review categories. The respondents in our MTurk survey had the most difficulty recognizing reviews of category $(b=0.3, \lambda=-5)$, where the true positive rate was $40.4\%$, while the true negative rate of the real class was $62.7\%$. The precision values were $16\%$ and $86\%$, respectively. The class-averaged F-score is $47.6\%$, which is close to random. Detailed classification reports are shown in Table~\ref{table:MTurk_sub} in the Appendix. Our MTurk study shows that \emph{our NMT-Fake reviews pose a significant threat to review systems}, since \emph{ordinary native English speakers have great difficulty separating real reviews from fake reviews}. We use the review category $(b=0.3, \lambda=-5)$ for future user tests in this paper, since MTurk participants had the most difficulty detecting these reviews. We refer to this category as NMT-Fake* in this paper.
Figure~\ref{fig:adaboost_matrix_b_lambda} shows our AdaBoost classifier's class-averaged F-score at detecting different kind of fake reviews. The classifier is very effective in detecting reviews that humans have difficulties detecting. For example, the fake reviews MTurk users had most difficulty detecting ($b=0.3, \lambda=-5$) are detected with an excellent 97\% F-score.
The respondents in our MTurk survey had the most difficulty recognizing reviews of category $(b=0.3, \lambda=-5)$, where the true positive rate was $40.4\%$, while the true negative rate of the real class was $62.7\%$. The precision values were $16\%$ and $86\%$, respectively. The class-averaged F-score is $47.6\%$, which is close to random. Detailed classification reports are shown in Table~\ref{table:MTurk_sub} in the Appendix. Our MTurk study shows that \emph{our NMT-Fake reviews pose a significant threat to review systems}, since \emph{ordinary native English speakers have great difficulty separating real reviews from fake reviews}. We use the review category $(b=0.3, \lambda=-5)$ for future user tests in this paper, since MTurk participants had the most difficulty detecting these reviews. We refer to this category as NMT-Fake* in this paper.
The classifier is very effective in detecting reviews that humans have difficulties detecting. For example, the fake reviews MTurk users had most difficulty detecting ($b=0.3, \lambda=-5$) are detected with an excellent 97\% F-score.
|
Does their detection tool work better than human detection?
|
Yes.
|
null | false
| null |
"Dona nobis pacem" (Ecclesiastical Latin: [ˈdona ˈnobis ˈpatʃem], "Give us peace") is a round for three parts to a short Latin text from the Agnus Dei. The melody has been passed orally. The round is part of many hymnals and songbooks. Beyond use at church, the round has been popular for secular quests for peace, such as the reunification of Germany.
|
Given the following paragraph about Latin Text, what does "Dona nobis pacem" mean in English?
|
"Give us peace"
|
null | false
| null |
What is SoulCycle?
|
SoulCycle is a 45-minute indoor cycling class. SoulCycle implements choreography on a stationary bike, making it rhythm-based. When you go to a SoulCycle class, you are expected to rent or bring a pair of cycling shoes that clip into your bike, and you then enter a dark, candle-lit room with loud music.
|
|
null | false
| null |
I need to buy a Vinyl record for my friend's birthday. What are some suggestions of classic vinyl records.
|
Here are some classic records that are very popular on vinyl: Fleetwood Mac - Rumours, Pink Floyd - Dark Side of the Moon, Daft Punk - Random Access Memories, Radiohead - OK Computer, The Beatles - Sgt. Pepper's Lonely Hearts Club Band
|
|
null | false
| null |
Who is the greatest tennis player of all time?
|
In the modern era, based on total titles and overall impact on the game, both Serena Williams and Roger Federer are considered the greatest players of their generation.
|
|
null | false
| null |
Will you say sorry to your kids?
|
When I was a child, my mom never apologized to me, making it seem like she could never make a mistake. I didn't like the dynamic of our relationship then; and even now, as an adult, I feel like I should be able to talk to her as equals. Unfortunately, she still believes that she can never be wrong simply because she's my mom. That's why I've made a conscious decision to be a different kind of parent - one who is willing to apologize when they make a mistake. I believe that it's important to treat our children with fairness and respect, and that includes acknowledging when we are in the wrong and taking responsibility for our actions.
|
|
null | false
| 393
|
To assess the performance of binary paragraph vectors, we carried out experiments on three datasets: 20 Newsgroups, a cleansed version (also called v2) of Reuters Corpus Volume 1 (RCV1) and English Wikipedia. As paragraph vectors can be trained with relatively large vocabularies, we did not perform any stemming of the source text. However, we removed stop words as well as words shorter than two characters and longer than 15 characters. Results reported by BIBREF15 indicate that performance of PV-DBOW can be improved by including n-grams in the model. We therefore evaluated two variants of Binary PV-DBOW: one predicting words in documents and one predicting words and bigrams. Since 20 Newsgroups is a relatively small dataset, we used all words and bigrams from its documents. This amounts to a vocabulary with slightly over one million elements. For the RCV1 dataset we used words and bigrams with at least 10 occurrences in the text, which gives a vocabulary with approximately 800 thousands elements. In case of English Wikipedia we used words and bigrams with at least 100 occurrences, which gives a vocabulary with approximately 1.5 million elements.
The 20 Newsgroups dataset comes with reference train/test sets. In case of RCV1 we used half of the documents for training and the other half for evaluation. In case of English Wikipedia we held out for testing randomly selected 10% of the documents. We perform document retrieval by selecting queries from the test set and ordering other test documents according to the similarity of the inferred codes. We use Hamming distance for binary codes and cosine similarity for real-valued representations. Results are averaged over queries. We assess the performance of our models with precision-recall curves and two popular information retrieval metrics, namely mean average precision (MAP) and the normalized discounted cumulative gain at the 10th result (NDCG@10) BIBREF16 . The results depend, of course, on the chosen document relevancy measure. Relevancy measure for the 20 Newsgroups dataset is straightforward: a retrieved document is relevant to the query if they both belong to the same newsgroup. In RCV1 each document belongs to a hierarchy of topics, making the definition of relevancy less obvious. In this case we adopted the relevancy measure used by BIBREF3 . That is, the relevancy is calculated as the fraction of overlapping labels in a retrieved document and the query document. Overall, our selection of test datasets and relevancy measures for 20 Newsgroups and RCV1 follows BIBREF3 , enabling comparison with semantic hashing codes. To assess the relevancy of articles in English Wikipedia we can employ categories assigned to them. However, unlike in RCV1, Wikipedia categories can have multiple parent categories and cyclic dependencies. Therefore, for this dataset we adopted a simplified relevancy measure: two articles are relevant if they share at least one category. We also removed from the test set categories with less than 20 documents as well as documents that were left with no categories. 
Overall, the relevancy is measured over more than INLINEFORM0 categories, making English Wikipedia harder than the other two benchmarks.
We use AdaGrad BIBREF17 for training and inference in all experiments reported in this work. During training we employ dropout BIBREF18 in the embedding layer. To facilitate models with large vocabularies, we approximate the gradients with respect to the softmax logits using the method described by BIBREF9 . Binary PV-DM networks use the same number of dimensions for document codes and word embeddings.
Performance of 128- and 32-bit binary paragraph vector codes is reported in Table TABREF8 and in Figure FIGREF7 . For comparison we also report performance of real-valued paragraph vectors. Note that the binary codes perform very well, despite their far lower capacity: on 20 Newsgroups and RCV1 the 128-bit Binary PV-DBOW trained with bigrams approaches the performance of the real-valued paragraph vectors, while on English Wikipedia its performance is slightly lower. Furthermore, Binary PV-DBOW with bigrams outperforms semantic hashing codes: comparison of precision-recall curves from Figures FIGREF7 a and FIGREF7 b with BIBREF3 shows that 128-bit codes learned with this model outperform 128-bit semantic hashing codes on 20 Newsgroups and RCV1. Moreover, the 32-bit codes from this model outperform 128-bit semantic hashing codes on the RCV1 dataset, and on the 20 Newsgroups dataset give similar precision up to approximately 3% recall and better precision for higher recall levels. Note that the difference in this case lies not only in retrieval precision: the short 32-bit Binary PV-DBOW codes are more efficient for indexing than long 128-bit semantic hashing codes.
We also compared binary paragraph vectors against codes constructed by first inferring short, real-valued paragraph vectors and then using a separate hashing algorithm for binarization. When the dimensionality of the paragraph vectors is equal to the size of binary codes, the number of network parameters in this approach is similar to that of Binary PV models. We experimented with two standard hashing algorithms, namely random hyperplane projection BIBREF19 and iterative quantization BIBREF20 . Paragraph vectors in these experiments were inferred using PV-DBOW with bigrams. Results reported in Table TABREF9 show no benefit from using a separate algorithm for binarization. On the 20 Newsgroups and RCV1 datasets Binary PV-DBOW yielded higher MAP than the two baseline approaches. On English Wikipedia iterative quantization achieved MAP equal to Binary PV-DBOW, while random hyperplane projection yielded lower MAP. Some gain in precision of top hits can be observed for iterative quantization, as indicated by NDCG@10. However, precision of top hits can also be improved by querying with Real-Binary PV-DBOW model (Section SECREF15 ). It is also worth noting that end-to-end inference in Binary PV models is more convenient than inferring real-valued vectors and then using another algorithm for hashing.
BIBREF15 argue that PV-DBOW outperforms PV-DM on a sentiment classification task, and demonstrate that the performance of PV-DBOW can be improved by including bigrams in the vocabulary. We observed similar results with Binary PV models. That is, including bigrams in the vocabulary usually improved retrieval precision. Also, codes learned with Binary PV-DBOW provided higher retrieval precision than Binary PV-DM codes. Furthermore, to choose the context size for the Binary PV-DM models, we evaluated several networks on validation sets taken out of the training data. The best results were obtained with a minimal one-word, one-sided context window. This is the distributed memory architecture most similar to the Binary PV-DBOW model.
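The retrieval protocol described above can be sketched as follows: rank test documents by Hamming distance to the query's binary code, then score the ranking with average precision. The codes and relevance labels below are toy values, not trained paragraph-vector codes:

```python
# Sketch of binary-code retrieval: Hamming-distance ranking plus average
# precision. All codes and relevance labels are made-up toy values.
import numpy as np

query_code = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)
doc_codes = np.array([
    [1, 0, 1, 1, 0, 0, 1, 1],   # Hamming distance 1 from the query
    [0, 1, 0, 0, 1, 1, 0, 1],   # distance 8
    [1, 0, 1, 0, 0, 0, 1, 0],   # distance 1
], dtype=np.uint8)
relevant = np.array([True, False, True])   # toy ground-truth relevance

hamming = (doc_codes != query_code).sum(axis=1)
order = np.argsort(hamming, kind="stable")   # closest codes first

# Average precision: mean of precision@k over ranks k where a relevant doc sits.
hits = relevant[order]
precision_at_hits = np.cumsum(hits)[hits] / (np.flatnonzero(hits) + 1)
average_precision = precision_at_hits.mean()
print(average_precision)   # 1.0 here: both relevant docs are ranked first
```

Averaging this quantity over all queries gives the MAP figure used in the tables; for real-valued vectors the ranking step would use cosine similarity instead of Hamming distance.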
To assess the performance of binary paragraph vectors, we carried out experiments on three datasets: 20 Newsgroups, a cleansed version (also called v2) of Reuters Corpus Volume 1 (RCV1) and English Wikipedia.
|
What dataset is used to carry out experiments?
|
20 Newsgroups, a cleansed version (also called v2) of Reuters Corpus Volume 1 (RCV1), and English Wikipedia.
|
null | false
| null |
When driving a car, categorize the following as either "increasing safety" or "decreasing safety": Wearing Seatbelt, Drinking Alcohol, Texting, Aligning Mirrors, Airbags
|
Wearing Seatbelt: Increasing Safety
Drinking Alcohol: Decreasing Safety
Texting: Decreasing Safety
Aligning Mirrors: Increasing Safety
Airbags: Increasing Safety
|
|
1706.04815
| false
| null |
In this paper, we present an extraction-then-synthesis framework for machine reading comprehension shown in Figure 1 , in which the answer is synthesized from the extraction results. We build an evidence extraction model to predict the most important sub-spans from the passages as evidence, and then develop an answer synthesis model which takes the evidence as additional features along with the question and passage to further elaborate the final answers.
In this paper, we present an extraction-then-synthesis framework for machine reading comprehension shown in Figure 1 , in which the answer is synthesized from the extraction results. We build an evidence extraction model to predict the most important sub-spans from the passages as evidence, and then develop an answer synthesis model which takes the evidence as additional features along with the question and passage to further elaborate the final answers.
|
Which framework they propose in this paper?
|
The answers are shown as follows:
* extraction-then-synthesis framework
|
null | false
| null |
Why do kids have to take tests in school?
|
Children and adolescents take tests in school because testing is a good way to measure their current knowledge. Students study certain topics, and testing measures how much of that knowledge is retained. Tests are also a way to motivate students to try harder, because tests are typically graded with a letter grade: an A is associated with a perfect score, an F is considered failing, and a C is considered average. These grades can act as motivators for students and create friendly competition in schools and other external study programs to see who can do the best.
|
|
1710.06700
| false
| null |
To make the annotated data publicly available, we selected 70 news articles from Arabic WikiNews site https://ar.wikinews.org/wiki. These articles cover recent news from year 2013 to year 2015 in multiple genres (politics, economics, health, science and technology, sports, arts, and culture.) Articles contain 18,300 words, and they are evenly distributed among these 7 genres with 10 articles per each.
To make the annotated data publicly available, we selected 70 news articles from Arabic WikiNews site https://ar.wikinews.org/wiki. These articles cover recent news from year 2013 to year 2015 in multiple genres (politics, economics, health, science and technology, sports, arts, and culture.) Articles contain 18,300 words, and they are evenly distributed among these 7 genres with 10 articles per each.
|
What is the size of the dataset?
|
The answers are shown as follows:
* Articles contain 18,300 words, and they are evenly distributed among these 7 genres with 10 articles per each
|
null | false
| 102
|
Rendering natural language descriptions from structured data is required in a wide variety of commercial applications such as generating descriptions of products, hotels, furniture, etc., from a corresponding table of facts about the entity. Such a table typically contains {field, value} pairs where the field is a property of the entity (e.g., color) and the value is a set of possible assignments to this property (e.g., color = red). Another example of this is the recently introduced task of generating one line biography descriptions from a given Wikipedia infobox BIBREF0 . The Wikipedia infobox serves as a table of facts about a person and the first sentence from the corresponding article serves as a one line description of the person. Figure FIGREF2 illustrates an example input infobox which contains fields such as Born, Residence, Nationality, Fields, Institutions and Alma Mater. Each field further contains some words (e.g., particle physics, many-body theory, etc.). The corresponding description is coherent with the information contained in the infobox.
Note that the number of fields in the infobox and the ordering of the fields within the infobox varies from person to person. Given the large size (700K examples) and heterogeneous nature of the dataset which contains biographies of people from different backgrounds (sports, politics, arts, etc.), it is hard to come up with simple rule-based templates for generating natural language descriptions from infoboxes, thereby making a case for data-driven models. Based on the recent success of data-driven neural models for various other NLG tasks BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , one simple choice is to treat the infobox as a sequence of {field, value} pairs and use a standard seq2seq model for this task. However, such a model is too generic and does not exploit the specific characteristics of this task as explained below. First, note that while generating such descriptions from structured data, a human keeps track of information at two levels. Specifically, at a macro level, she would first decide which field to mention next and then at a micro level decide which of the values in the field needs to be mentioned next. For example, she first decides that at the current step, the field occupation needs attention and then decides which is the next appropriate occupation to attend to from the set of occupations (actor, director, producer, etc.). To enable this, we use a bifocal attention mechanism which computes an attention over fields at a macro level and over values at a micro level. We then fuse these attention weights such that the attention weight for a field also influences the attention over the values within it. Finally, we feed a fused context vector to the decoder which contains both field level and word level information. 
Note that such two-level attention mechanisms BIBREF6 , BIBREF7 , BIBREF8 have been used in the context of unstructured data (as opposed to structured data in our case), where at a macro level one needs to pay attention to sentences and at a micro level to words in the sentences.
Next, we observe that while rendering the output, once the model pays attention to a field (say, occupation) it needs to stay on this field for a few timesteps (till all the occupations are produced in the output). We refer to this as the stay on behavior. Further, we note that once the tokens of a field are referred to, they are usually not referred to later. For example, once all the occupations have been listed in the output we will never visit the occupation field again because there is nothing left to say about it. We refer to this as the never look back behavior. To model the stay on behaviour, we introduce a forget (or remember) gate which acts as a signal to decide when to forget the current field (or equivalently to decide till when to remember the current field). To model the never look back behaviour we introduce a gated orthogonalization mechanism which ensures that once a field is forgotten, subsequent field context vectors fed to the decoder are orthogonal to (or different from) the previous field context vectors.
We experiment with the WikiBio dataset BIBREF0 which contains around 700K {infobox, description} pairs and has a vocabulary of around 400K words. We show that the proposed model gives a relative improvement of 21% and 20% as compared to current state of the art models BIBREF0 , BIBREF9 on this dataset. The proposed model also gives a relative improvement of 10% as compared to the basic seq2seq model. Further, we introduce new datasets for French and German on the same lines as the English WikiBio dataset. Even on these two datasets, our model outperforms the state of the art methods mentioned above.
Further, we note that once the tokens of a field are referred to, they are usually not referred to later. For example, once all the occupations have been listed in the output we will never visit the occupation field again because there is nothing left to say about it. We refer to this as the never look back behavior.
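A minimal numpy illustration of the "never look back" idea: project the forgotten field's direction out of the next field context vector, scaled by a forget gate. The vectors and gate value here are hand-set toys; in the actual model both the context vectors and the gate are learned:

```python
# Sketch of gated orthogonalization: once a field is forgotten, remove its
# direction from subsequent field context vectors. Toy values throughout.
import numpy as np

def gated_orthogonalize(context, prev_context, forget_gate):
    """Subtract the component of `context` along `prev_context`,
    scaled by the forget gate (0 = keep the field, 1 = fully forget it)."""
    projection = (context @ prev_context) / (prev_context @ prev_context) * prev_context
    return context - forget_gate * projection

prev = np.array([1.0, 0.0, 0.0])   # context of the forgotten field
curr = np.array([0.5, 1.0, 0.0])   # raw field context at the next step

out = gated_orthogonalize(curr, prev, forget_gate=1.0)
print(out)          # [0. 1. 0.] -- orthogonal to the forgotten field
print(out @ prev)   # 0.0
```

With the gate fully open the result is exactly orthogonal to the previous field context, which is the "never look back" behavior; intermediate gate values interpolate between remembering and forgetting.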
|
What is "never look back " behavior?
|
It is the behavior they have observed that the model never reffers to tokens that were referred to.
|
null | false
| null |
An early example of an interest in hiking in the United States is Abel Crawford and his son Ethan's clearing of a trail to the summit of Mount Washington, New Hampshire in 1819. This 8.5-mile path is the oldest continually used hiking trail in the United States. The influence of British and European Romanticism reached North America through the transcendentalist movement, and both Ralph Waldo Emerson (1803–82) and Henry David Thoreau (1817-62) were important influences on the outdoors movement in North America. Thoreau's writing on nature and on walking include the posthumously published "Walking" (1862)". His earlier essay "A Walk to Wachusett" (1842) describes a four-day walking tour Thoreau took with a companion from Concord, Massachusetts to the summit of Mount Wachusett, Princeton, Massachusetts and back. In 1876 the Appalachian Mountain Club, America’s earliest recreation organization, was founded to protect the trails and mountains in the northeastern United States.
|
What is the oldest hiking trail in the US?
|
The oldest hiking trail in the US is an 8.5 mile long route leading to the summit of Mount Washington in New Hampshire. The route was first cleared in 1819 by Abel Crawford and his son.
|
null | false
| null |
Provide a bulleted list of easy accommodations for autistic individuals in the workplace.
|
- Noise-canceling headphones
- Ability to control lighting -- dimming the lights, lamp light instead of overhead lights, or the ability to let in or reduce natural light
- Make a quiet, sensory-free room available for breaks or individual work
- Allow pets in the office
|
|
null | false
| null |
Which is a bird or fish: Red grouse, Red grouper
|
Red grouse is a bird, Red grouper is a fish
|
|
null | false
| null |
Article I describes the Congress, the legislative branch of the federal government. Section 1, reads, "All legislative powers herein granted shall be vested in a Congress of the United States, which shall consist of a Senate and House of Representatives." The article establishes the manner of election and the qualifications of members of each body. Representatives must be at least 25 years old, be a citizen of the United States for seven years, and live in the state they represent. Senators must be at least 30 years old, be a citizen for nine years, and live in the state they represent.
Article I, Section 8 enumerates the powers delegated to the legislature. Financially, Congress has the power to tax, borrow, pay debt and provide for the common defense and the general welfare; to regulate commerce, bankruptcies, and coin money. To regulate internal affairs, it has the power to regulate and govern military forces and militias, suppress insurrections and repel invasions. It is to provide for naturalization, standards of weights and measures, post offices and roads, and patents; to directly govern the federal district and cessions of land by the states for forts and arsenals. Internationally, Congress has the power to define and punish piracies and offenses against the Law of Nations, to declare war and make rules of war. The final Necessary and Proper Clause, also known as the Elastic Clause, expressly confers incidental powers upon Congress without the Articles' requirement for express delegation for each and every power. Article I, Section 9 lists eight specific limits on congressional power.
|
Without quoting directly from the text give me a summary of Article 1 of the U.S. Constitution.
|
Article 1 of the U.S. Constitution defines the legislative branch of our government, or Congress, and establishes the qualifications to serve as part of that body. It goes on to define the powers and responsibilities delegated to Congress in providing for the defense and general welfare of the United States.
|
null | false
| null |
How would you build me a plane out of duct tape? You can leverage other materials, but the exterior of both the fuselage and the wings must be duct tape.
|
We can break down our plane into a few different components, which we will build independently and then piece together during final assembly. They are as follows:
- Fuselage
- Wings
- Tail
- Engine & Propeller
- Landing Gear
The key thing to remember in this plane design is that we won't be achieving altitudes above a few thousand feet and will travel slowly, since our materials cannot handle extreme conditions.
For simplicity, we will start by building a skeleton of the different aircraft components using a light, strong wood (pine). It is essential that we design the wings appropriately to generate lift, so we will pay additional attention to that aspect here. This wooden frame will include the fuselage, wings, and tail. We will then precisely apply our duct tape coating to provide a sleek, strong finish on top of the frame.
The engine will be outsourced to a specialty manufacturer, who can provide us a quality propeller, which will generate enough thrust to achieve flight.
Finally, the landing gear will be a duct tape balloon, which will soften our landing and safely place us on the ground.
This plane will endure a rigorous flight testing routine. To ensure the safety of passengers we will use a variety of testing practices including stall testing, side-slip testing, and many additional tests.
Final assembly will take place by a highly specialized crew who have built many duct tape airplanes previously and have expertise in this domain, ensuring a quality final product that is safe for flight.
|
|
null | false
| null |
Central America (Spanish: América Central [aˈmeɾika senˈtɾal] (listen) or Centroamérica [sentɾoaˈmeɾika] (listen)) is a subregion of the Americas. Its boundaries are defined as bordering Mexico to the north, Colombia to the south, the Caribbean Sea to the east, and the Pacific Ocean to the west. Central America usually consists of seven countries: Belize, Costa Rica, El Salvador, Guatemala, Honduras, Nicaragua, and Panama. Within Central America is the Mesoamerican biodiversity hotspot, which extends from northern Guatemala to central Panama. Due to the presence of several active geologic faults and the Central America Volcanic Arc, there is a high amount of seismic activity in the region, such as volcanic eruptions and earthquakes which has resulted in death, injury, and property damage.
|
Locate 7 countries which are part of Central America.
|
Seven countries: Belize, Costa Rica, El Salvador, Guatemala, Honduras, Nicaragua, and Panama.
|
null | false
| null |
LaFollette was born near Morristown, New Jersey in 1781. His father, Joseph, and grandfather, Jean, were Huguenots who had escaped the persecution in France, traveling first to Jersey and then to the colonies where they operated a small farm near the Wallkill River in northern New Jersey. Jean was killed during the French and Indian Wars. Joseph married Phoebe Gobel of Morristown, New Jersey, whose father's farm along with other neighboring farms in Jockey Hollow was used by George Washington and his troops during the winter of 1780. After serving with Count Casimir Pulaski during the Revolutionary War, Joseph and his family joined the pioneers who trekked westward through the Cumberland Gap.
|
Based on this paragraph, what was the name of Jesse LaFollette's mother?
|
Jesse LaFollette's mother was Phoebe Gobel, who was married to his father Joseph.
|
null | false
| 100
|
Understanding what a question is asking is one of the first steps that humans use to work towards an answer. In the context of question answering, question classification allows automated systems to intelligently target their inference systems to domain-specific solvers capable of addressing specific kinds of questions and problem solving methods with high confidence and answer accuracy BIBREF0 , BIBREF1 .
To date, question classification has primarily been studied in the context of open-domain TREC questions BIBREF2 , with smaller recent datasets available in the biomedical BIBREF3 , BIBREF4 and education BIBREF5 domains. The open-domain TREC question corpus is a set of 5,952 short factoid questions paired with a taxonomy developed by Li and Roth BIBREF6 that includes 6 coarse answer types (such as entities, locations, and numbers), and 50 fine-grained types (e.g. specific kinds of entities, such as animals or vehicles). While a wide variety of syntactic, semantic, and other features and classification methods have been applied to this task, culminating in near-perfect classification performance BIBREF7 , recent work has demonstrated that QC methods developed on TREC questions generally fail to transfer to datasets with more complex questions such as those in the biomedical domain BIBREF3 , likely due in part to the simplicity and syntactic regularity of the questions, and the ability for simpler term-frequency models to achieve near-ceiling performance BIBREF8 . In this work we explore question classification in the context of multiple choice science exams. Standardized science exams have been proposed as a challenge task for question answering BIBREF9 , as most questions contain a variety of challenging inference problems BIBREF10 , BIBREF11 , require detailed scientific and common-sense knowledge to answer and explain the reasoning behind those answers BIBREF12 , and questions are often embedded in complex examples or other distractors. Question classification taxonomies and annotation are difficult and expensive to generate, and because of the unavailability of this data, to date most models for science questions use one or a small number of generic solvers that perform little or no question decomposition BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 . 
Our long-term interest is in developing methods that intelligently target their inferences to generate both correct answers and compelling human-readable explanations for the reasoning behind those answers. The lack of targeted solving – using the same methods for inferring answers to spatial questions about planetary motion, chemical questions about photosynthesis, and electrical questions about circuit continuity – is a substantial barrier to increasing performance (see Figure FIGREF1 ).
To address this need for developing methods of targeted inference, this work makes the following contributions:
To date, question classification has primarily been studied in the context of open-domain TREC questions (Voorhees and Tice, 2000), with smaller recent datasets available in the biomedical (Roberts et al., 2014; Wasim et al., 2019) and education (Godea and Nielsen, 2018) domains.
|
In what areas have problem classifications been used so far?
|
To date, question classification has primarily been studied in the context of open-domain TREC questions, with smaller recent datasets available in the biomedical and education domains.
|
null | false
| null |
Classify the movies based on their original language.
Bommarillu, Zeher, Lagaan, Karthikeya, Magadheera, PK
|
Telugu - Karthikeya, Bommarillu, Magadheera
Hindi - Zeher, Lagaan, PK
|
|
null | false
| 15
|
Robotic Process Automation (RPA) is a type of software bots that simulates hand-operated human activities like entering data into a system, registering into accounts, and accomplishing straightforward but repetitive workflows BIBREF0. However, one of the drawbacks of RPA-bots is their susceptibility to changes in defined scenarios: being designed for a particular task, the RPA-bot is usually not adaptable to other domains or even light modifications in a workflow BIBREF0. This inability to readjust to shifting conditions gave rise to Intelligent Process Automation (IPA) systems. IPA-bots combine RPA with Artificial Intelligence (AI) and thus are able to execute more cognitively demanding tasks that require i.a. reasoning and language understanding. Hence, IPA-bots advanced beyond automating shallow “click tasks” and can perform jobs more intelligently – by means of machine learning algorithms. Such IPA-systems undertake time-consuming and routine tasks, and thus enable smart workflows and free up skilled workers to accomplish higher-value activities.
One of the potential applications of Natural Language Processing (NLP) within the IPA domain are conversational interfaces that enable human-to-machine interaction. The main benefit of conversational systems is their ability to give attention to several users simultaneously while supporting natural communication. A conventional dialogue system comprises multiple stages and involves different types of NLP subtasks, starting with Natural Language Understanding (NLU) (e.g., intent classification, named entity extraction) and going towards dialogue management (i.e., determining the next possible bot action, considering the dialogue history) and response generation (e.g., converting the semantic representation of the next system action into a natural language utterance). A typical dialogue system for IPA purposes undertakes shallow customer support requests (e.g., answering of FAQs), allowing human workers to focus on more sophisticated inquiries.
Recent research in the dialogue generation domain is conducted by employing AI-techniques like machine and deep learning BIBREF1, BIBREF2. However, conventional supervised methods have limitations when applied to real-world data and industrial tasks. The primary challenge here refers to a training phase since a robust model requires an extensive amount of structured and labeled data that is often not available for domain-specific problems. Especially if it concerns dialogue data, which has to be appropriately structured as well as labeled and annotated with additional information. Therefore, despite the popularity of deep learning end-to-end models, one still needs to rely on conventional pipelines in practical dialogue engineering, especially when setting up a new domain. However, with few structured data available, transfer learning methods can be used. Such algorithms enable training of the systems with less or even a minimal amount of data, and are able to transfer the knowledge obtained during the training on existing data to the unseen domain.
This paper addresses the challenge of implementing a dialogue system for IPA purposes within the practical e-learning domain with the initial absence of training data.
|
Which problem did the author want to challenge?
|
The challenge of implementing a robust dialogue system for Intelligent Process Automation (IPA) purposes within the practical e-learning domain with the initial absence of training data.
|
null | false
| null |
Identify which instrument is string or percussion: Kakko, Akonting
|
Akonting is string, Kakko is percussion.
|
|
2004.03685
| false
| null |
We argue that a way out of this standstill is in a more practical and nuanced methodology for defining and evaluating faithfulness. We propose the following challenge to the community: We must develop formal definition and evaluation for faithfulness that allows us the freedom to say when a method is sufficiently faithful to be useful in practice.
We note two possible approaches to this end:
Across models and tasks: The degree (as grayscale) of faithfulness at the level of specific models and tasks. Perhaps some models or tasks allow sufficiently faithful interpretation, even if the same is not true for others.
For example, the method may not be faithful for some question-answering task, but faithful for movie review sentiment, perhaps based on various syntactic and semantic attributes of those tasks.
Across input space: The degree of faithfulness at the level of subspaces of the input space, such as neighborhoods of similar inputs, or singular inputs themselves. If we are able to say with some degree of confidence whether a specific decision's explanation is faithful to the model, even if the interpretation method is not considered universally faithful, it can be used with respect to those specific areas or instances only.
We propose the following challenge to the community: We must develop formal definition and evaluation for faithfulness that allows us the freedom to say when a method is sufficiently faithful to be useful in practice.
We note two possible approaches to this end:
Across models and tasks: The degree (as grayscale) of faithfulness at the level of specific models and tasks. Perhaps some models or tasks allow sufficiently faithful interpretation, even if the same is not true for others.
For example, the method may not be faithful for some question-answering task, but faithful for movie review sentiment, perhaps based on various syntactic and semantic attributes of those tasks.
Across input space: The degree of faithfulness at the level of subspaces of the input space, such as neighborhoods of similar inputs, or singular inputs themselves. If we are able to say with some degree of confidence whether a specific decision's explanation is faithful to the model, even if the interpretation method is not considered universally faithful, it can be used with respect to those specific areas or instances only.
|
What approaches they propose?
|
The answers are shown as follows:
* Across models and tasks: The degree (as grayscale) of faithfulness at the level of specific models and tasks.
* Across input space: The degree of faithfulness at the level of subspaces of the input space, such as neighborhoods of similar inputs, or singular inputs themselves.
|
null | false
| null |
I want to get in shape. What nutrition and exercise program should I follow?
|
There are several diets and training programs depending on your specific goals. To improve overall physical fitness it is recommended to combine some sort of low impact aerobic training (e.g. swimming, biking, pilates, gym cardio equipment) with stretching exercises such as yoga. You can combine these workout routines with a healthy diet high in fiber and protein and low in sugar and carbohydrates. For strength and muscle gain try high load resistance training of all muscle groups combined with protein supplements. For weight loss you should avoid extreme one-time diets and exercise routines, on the contrary, focus on developing long term healthy eating and exercising habits that will not only help you control your weight and cardiovascular fitness but also maintain a healthy lifestyle in the long run.
|
|
null | false
| null |
Why should I visit Netherlands?
|
There are many museums in the Netherlands that you can visit. The Netherlands has produced many famous painters. The city center of Amsterdam is also very nice. It is also considered one of the best party places in Europe. There are many famous events that you can join and enjoy.
|
|
null | false
| null |
What is the meaning of RCA in software
|
RCA stands for Root Cause Analysis, a process that identifies the underlying cause of an outage and describes the solution to it.
|
|
null | false
| null |
Tell me the capital of the UK. Then tell me a famous song about that city.
|
The capital of the United Kingdom is London. One song about that city is "London Bridge is Falling Down".
|
|
1607.05408
| false
| null |
The tweetLID workshop shared task requires systems to identify the language of tweets written in Spanish (es), Portuguese (pt), Catalan (ca), English (en), Galician (gl) and Basque (eu). Some language pairs are similar (es and ca; pt and gl) and this poses a challenge to systems that rely on content features alone. We use the supplied evaluation corpus, which has been manually labelled with six languages and evenly split into training and test collections. We use the official evaluation script and report precision, recall and F-score, macro-averaged across languages. This handles ambiguous tweets by permitting systems to return any of the annotated languages. Table TABREF10 shows that using the content model alone is more effective for languages that are distinct in our set of languages (i.e. English and Basque). For similar languages, adding the social model helps discriminate them (i.e. Spanish, Portuguese, Catalan and Galician), particularly those where a less-resourced language is similar to a more popular one. Using the social graph almost doubles the F-score for undecided (und) languages, either not in the set above or hard-to-identify, from 18.85% to 34.95%. Macro-averaged, our system scores 76.63%, higher than the best score in the competition: 75.2%.
The tweetLID workshop shared task requires systems to identify the language of tweets written in Spanish (es), Portuguese (pt), Catalan (ca), English (en), Galician (gl) and Basque (eu).
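The macro-averaged F-score reported above is the unweighted mean of the per-language F1 scores, so less-resourced languages count as much as popular ones. A minimal sketch of that metric (the raw counts in the usage example are illustrative, not from the shared task):

```python
def precision_recall_f1(tp, fp, fn):
    """Per-language precision, recall and F1 from raw true/false positive/negative counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1


def macro_f1(counts_per_language):
    """Macro-average: compute F1 independently per language, then take the unweighted mean."""
    scores = [precision_recall_f1(tp, fp, fn)[2] for tp, fp, fn in counts_per_language]
    return sum(scores) / len(scores)


# Two hypothetical languages: one classified perfectly, one missed entirely.
print(macro_f1([(10, 0, 0), (0, 10, 10)]))  # → 0.5
```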
|
What shared task does this system achieve SOTA in?
|
The answers are shown as follows:
* tweetLID workshop shared task
|
null | false
| null |
Give me a list of the five best ways to take care of your houseplants.
|
1. Get on a watering schedule so you can make sure you are not over-watering or under-watering the plants.
2. Choose plants that will do well in the lighting conditions that you have available in your home. Place plants that like bright, direct light in front of windows, and choose plants that require less light for the darker areas of your home.
3. Consider getting a humidifier if you are going to decorate your home with tropical plants. Certain tropical plants tend to like a more humid environment.
4. If you don't have sufficiently bright natural light, consider getting grow lights to help your plants thrive.
5. Periodically fertilize your plants to make sure they have all of the nutrients that they need to continue growing.
|
|
null | false
| null |
When is the best time to visit Chicago?
|
The best time to visit Chicago is during the Summer months - also known as "Summertime Chi." There are activities and events to check out including: art shows, running and aquatic events, and several outdoor music festivals. Chicago offers a robust shopping and dining scene as well. It's popular to visit in the Summer based on weather conditions; the "Windy City" can be uncomfortably cold during the Winter months.
|
|
null | false
| null |
Classify the following into animals and plants: dog, cat, tiger, roses, tulips, pencil, sofa
|
The classification can be seen below:
- Animals: dog, cat, tiger
- Plants: tulips, roses
- Not included: pencil, sofa
|
|
null | false
| null |
In response to confusion over the book's theme, Hofstadter emphasized that Gödel, Escher, Bach is not about the relationships of mathematics, art, and music—but rather about how cognition emerges from hidden neurological mechanisms. One point in the book presents an analogy about how individual neurons in the brain coordinate to create a unified sense of a coherent mind by comparing it to the social organization displayed in a colony of ants.
|
According to Hofstadter what is the theme of the book Gödel, Escher, Bach?
|
According to Hofstadter, the main theme of Gödel, Escher, Bach is how cognition emerges from hidden neurological mechanisms.
|
null | false
| null |
How do I apply for a single-entry e-visa to Uzbekistan as a US citizen for tourism? What potential pitfalls do I need to look out for?
|
US citizens traveling for tourism who are under the age of 16 or over the age of 55 do not need an e-visa to enter Uzbekistan. Those between the ages of 16 and 55 will need to apply for an e-visa at the following website: https://e-visa.gov.uz/main. Applicants will need a US passport valid for at least 90 days from the date of the e-visa issuance. Applicants will also need to upload a passport photo (35x40mm size, 300 dpi resolution, and less than 1MB file size), a photo (also less than 1MB file size) of page 2 of the passport, and $20 USD plus a service fee chargeable to a VISA credit card.
Please note that although the e-visa application will ask the applicant to enter dates they will be in Uzbekistan, the e-visa will be valid beginning 30 days after the application date. Therefore, it is recommended that applicants apply between 30-60 days before they plan to enter Uzbekistan. If the applicant plans to visit Uzbekistan starting June 1, they should apply for the e-visa no later than May 1.
Uzbekistan e-visa windows are 90 days, but the visa is only valid for 30 days once the applicant enters Uzbekistan. For example, if the applicant applies for an e-visa on May 1, they will receive an e-visa window permitting a single entry into the country anytime between June 1 – August 31. If the applicant enters Uzbekistan on June 1, the visa will be valid until June 30.
It is recommended that an applicant not apply more than 90 days before the date the applicant plans to exit Uzbekistan. For example, if the applicant intends to visit Uzbekistan between June 1 – June 15, the earliest the applicant can apply is March 15. The e-visa issued would be valid between April 15 – June 15.
Once the application is submitted, the applicant will receive an email to submit payment for the visa. Uzbekistan e-visas are nonrefundable. If there is a mistake in the details or dates, the e-visa should be considered invalid and the applicant must begin a new application and pay the fee again.
Please allow at least 3 business days for processing after payment. Afterward, the applicant will receive a link to download the pdf of their e-visa to Uzbekistan. The applicant should verify for accuracy and ensure that the e-visa dates are valid during their dates of travel.
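The date rules above can be sketched in code. This is a rough planning aid only, treating "30 days" and "90 days" literally, so it can differ by a day or two from the calendar-month examples in the text (which round a month to 30 days); always verify against the dates printed on the issued e-visa:

```python
from datetime import date, timedelta


def evisa_window(application_date):
    """Entry window: opens 30 days after the application date and spans 90 days."""
    opens = application_date + timedelta(days=30)
    closes = opens + timedelta(days=90) - timedelta(days=1)  # inclusive last entry day
    return opens, closes


def stay_deadline(entry_date):
    """Once in country, the visa is valid for 30 days counting the entry day itself."""
    return entry_date + timedelta(days=29)


opens, closes = evisa_window(date(2024, 5, 1))
print(opens, closes)                      # entry window for a May 1 application
print(stay_deadline(date(2024, 6, 1)))   # last valid day for a June 1 entry
```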
|
|
null | false
| null |
Which is a species of fish? Loach or Poach
|
Loach
|
|
null | false
| null |
Stefan Richarz (25 November 1874 – 13 July 1934) was a German priest, missionary and geologist.
Richarz was born in Richrath near Cologne, Germany in 1874. He joined the Divine Word Missionaries in 1893 at Steyl, Netherlands. He studied theology at the school of the order Sankt Gabriel in Maria Enzersdorf near Mödling and received his ordination to the priesthood in 1901. Richarz studied geology, paleontology, petrology and chemistry at the University of Vienna and later mineralogy, philosophy and geology at the University of Munich. He received his PhD in 1919 from the University of Munich for work with Ernst Weinschenk and Ernst Freiherr Stromer von Reichenbach on basalts from a quarry near Groschlattengrün.
In the following years he taught geology and astronomy at the school of the order Sankt Gabriel. In 1921 he moved to the school of the Divine Word Missionaries in Techny, Illinois, where he lectured on geology, chemistry, mathematics and astronomy. In 1933 Richarz became dean of the science department of the Catholic University of Peking. In his first year, he went on a geological field trip to Mengyin County, South Shantung, China, where he died on 13 July 1934.
|
How old was Stefan Richarz when he received his PhD?
|
Stefan was 45 years old in 1919 when he received his PhD from the University of Munich, given that he was born in 1874.
|
null | false
| null |
"Let's Go" is a song by Canadian singer Shawn Desman. It was released in 2005 as the second single from his album Back for More. The songs heavily features a synthesizer phrase sampled from teh 1982 hits "Don't Go" by British new wave du Yazoo.
|
Fix the typos and grammar in this paragraph
|
"Let's Go" is a song by Canadian singer Shawn Desman. It was released in 2005 as the second single from his album Back for More. The song heavily features a synthesizer phrase sampled from the 1982 hit "Don't Go" by British new wave duo Yazoo.
|