paper_id       stringlengths   10 – 10
yes_no         bool            2 classes
paper_index    int64           0 – 519
evidence       stringlengths   0 – 37.7k
question       stringlengths   4 – 11.7k
answer         stringlengths   1 – 26k
1910.11949
true
null
Our conversational agent uses two architectures to simulate a specialized reminiscence therapist. The block in charge of generating questions is based on the work Show, Attend and Tell BIBREF13. This work generates descriptions from pictures, also known as image captioning. In our case, we focus on generating questions from pictures. Our second architecture is inspired by the Neural Conversational Model from BIBREF14, where the author presents an end-to-end approach to generating simple conversations. Building an open-domain conversational agent is a challenging problem. As addressed in BIBREF15 and BIBREF16, the lack of a consistent personality and the lack of long-term memory, which produce some meaningless responses in these models, are still unresolved problems. Our conversational agent uses two architectures to simulate a specialized reminiscence therapist. The block in charge of generating questions is based on the work Show, Attend and Tell BIBREF13. This work generates descriptions from pictures, also known as image captioning.
Is the machine learning system underneath similar to image-captioning ML systems?
Yes.
null
false
null
How many members are in FATF?
39 members
null
false
9
Machine reading comprehension (MRC) requires reasoning about both the knowledge involved in a document and knowledge about the world. However, existing datasets are typically dominated by questions that can be well solved by context matching, which fail to test this capability. To encourage progress on knowledge-based reasoning in MRC, we present knowledge-based MRC in this paper, and build a new dataset consisting of 40,047 question-answer pairs. The annotation of this dataset is designed so that successfully answering the questions requires understanding the knowledge involved in a document. We implement a framework consisting of both a question answering model and a question generation model, both of which take as input the knowledge extracted from the document as well as relevant facts from an external knowledge base such as Freebase/ProBase/Reverb/NELL. Results show that incorporating side information from an external KB improves the accuracy of the baseline question answering system. We compare it with a standard MRC model, BiDAF, and also report the difficulty of the dataset and lay out remaining challenges. Machine reading comprehension (MRC) requires reasoning about both the knowledge involved in a document and knowledge about the world.
What is MRC?
It refers to Machine reading comprehension.
null
false
null
What would you do if you had all the resources in the world and nothing to worry about?
Wow, as dreamy as it sounds, I would love to live that dream. I would like to live by the sea, on a cliff in a glass house that shows both the sunrise and the sunset, somewhere windy with a big garden where my dog can run and play. I can already imagine basking in the sun, knowing that I never have to go back from this vacation, that this is my home and I can just do this alone for the rest of my life :)
null
false
null
Why can't I wake up early in the morning?
It is a problem of geography. You just have to travel to a place that is a couple of hours behind your current timezone.
null
false
null
The Mona Lisa example pictured here provides a simple illustration. Each point on the painting can be represented as a vector pointing from the center of the painting to that point. The linear transformation in this example is called a shear mapping. Points in the top half are moved to the right, and points in the bottom half are moved to the left, proportional to how far they are from the horizontal axis that goes through the middle of the painting. The vectors pointing to each point in the original image are therefore tilted right or left, and made longer or shorter by the transformation. Points along the horizontal axis do not move at all when this transformation is applied. Therefore, any vector that points directly to the right or left with no vertical component is an eigenvector of this transformation, because the mapping does not change its direction. Moreover, these eigenvectors all have an eigenvalue equal to one, because the mapping does not change their length either.
From the example given in this paragraph, what is an eigenvector?
An eigenvector of a mapping is any vector that does not change direction when the mapping is applied to it.
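As an illustrative aside (an editorial addition, not part of the dataset row), the shear mapping can be written as a matrix, which makes the eigenvector claim easy to verify:

% A horizontal shear with factor k maps (x, y) to (x + ky, y):
\[
A = \begin{pmatrix} 1 & k \\ 0 & 1 \end{pmatrix}, \qquad
A \begin{pmatrix} 1 \\ 0 \end{pmatrix}
  = \begin{pmatrix} 1 \\ 0 \end{pmatrix}
  = 1 \cdot \begin{pmatrix} 1 \\ 0 \end{pmatrix},
\]
% so every horizontal vector is an eigenvector with eigenvalue 1, exactly as
% the passage describes for points along the horizontal axis.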
null
false
null
What essential tools do I need in my toolbox?
Hammer, screwdriver, adjustable wrench, pliers, level
null
false
null
Classify each as either an amphibian, bird, or fish: salmon, trout, salamander, cardinal, owl, falcon, frog
Salmon: Fish Trout: Fish Salamander: Amphibian Cardinal: Bird Owl: Bird Falcon: Bird Frog: Amphibian
1703.05916
false
null
We extracted Japanese word pairs from the Evaluation Dataset of Japanese Lexical Simplification kodaira. It targeted content words (nouns, verbs, adjectives, adverbs). It included 10 contexts about target words annotated with their lexical substitutions and rankings. Figure FIGREF1 shows an example of the dataset. A word in square brackets in the text is represented as a target word of simplification. A target word is not only recorded in the lemma form but also in the conjugated form. We built a Japanese similarity dataset from this dataset using the following procedure. We extracted Japanese word pairs from the Evaluation Dataset of Japanese Lexical Simplification kodaira.
where does the data come from?
The answers are shown as follows: * Evaluation Dataset of Japanese Lexical Simplification kodaira
null
false
null
In physics, gravity (from Latin gravitas 'weight') is a fundamental interaction which causes mutual attraction between all things with mass or energy. Gravity is, by far, the weakest of the four fundamental interactions, approximately 10^38 times weaker than the strong interaction, 10^36 times weaker than the electromagnetic force and 10^29 times weaker than the weak interaction. As a result, it has no significant influence at the level of subatomic particles. However, gravity is the most significant interaction between objects at the macroscopic scale, and it determines the motion of planets, stars, galaxies, and even light.
What is gravity?
Gravity can be described as the fundamental interaction which causes mutual attraction between all objects with mass or energy. While it is the weakest of the four fundamental interactions (the others being the electromagnetic force, the strong interaction, and the weak interaction), it is the most significant force at the macroscopic scale and can influence the motion of planets, stars, galaxies, and light.
null
false
null
What are the primary ingredients and spices that go into Greek cooking?
1) Salt 2) Pepper 3) Extra virgin olive oil 4) Fresh oregano 5) Lemon juice 6) Garlic Technique matters; however, the addition of these six ingredients to any dish will give it a Greek cuisine flavor
null
false
360
Neural machine translation has recently achieved impressive results BIBREF0, BIBREF1, while learning from raw, sentence-aligned parallel text and using little in the way of external linguistic information. However, we hypothesize that various levels of linguistic annotation can be valuable for neural machine translation. Lemmatisation can reduce data sparseness, and allow inflectional variants of the same word to explicitly share a representation in the model. Other types of annotation, such as parts-of-speech (POS) or syntactic dependency labels, can help in disambiguation. In this paper we investigate whether linguistic information is beneficial to neural translation models, or whether their strong learning capability makes explicit linguistic features redundant. Let us motivate the use of linguistic features using examples of actual translation errors by neural MT systems. In translation out of English, one problem is that the same surface word form may be shared between several word types, due to homonymy or word formation processes such as conversion. For instance, close can be a verb, adjective, or noun, and these different meanings often have distinct translations into other languages. Consider the following English→German example: For the English source sentence in Example SECREF4 (our translation in Example SECREF5), a neural MT system (our baseline system from Section SECREF4) mistranslates close as a verb, and produces the German verb schließen (Example SECREF6), even though close is an adjective in this sentence, which has the German translation nah. Intuitively, part-of-speech annotation of the English input could disambiguate between verb, noun, and adjective meanings of close. As a second example, consider the following German→English example: German main clauses have a verb-second (V2) word order, whereas English word order is generally SVO. The German sentence (Example UID7; English reference in Example UID8) topicalizes the predicate gefährlich 'dangerous', putting the subject die Route 'the route' after the verb. Our baseline system (Example UID9) retains the original word order, which is highly unusual in English, especially for prose in the news domain. A syntactic annotation of the source sentence could support the attentional encoder-decoder in learning which words in the German source to attend (and translate) first. We will investigate the usefulness of linguistic features for the language pair German↔English, considering the following linguistic features: The inclusion of lemmas is motivated by the hope for a better generalization over inflectional variants of the same word form. The other linguistic features are motivated by disambiguation, as discussed in our introductory examples. We will investigate the usefulness of linguistic features for the language pair German↔English, considering the following linguistic features: The inclusion of lemmas is motivated by the hope for a better generalization over inflectional variants of the same word form.
What is motivated by disambiguation?
The other linguistic features are motivated by disambiguation, as discussed in the introductory examples.
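For illustration, one common way to expose such features to a neural model is to embed each linguistic factor separately and concatenate the vectors. The sketch below is an editorial addition under that assumption (PyTorch, with made-up vocabulary sizes and dimensions), not the paper's actual architecture:

import torch
import torch.nn as nn

class FactoredEmbedding(nn.Module):
    """Embed a word together with linguistic factors (lemma, POS,
    dependency label) by concatenating one embedding per factor."""
    def __init__(self, vocab_sizes, dims):
        super().__init__()
        self.tables = nn.ModuleList(
            nn.Embedding(v, d) for v, d in zip(vocab_sizes, dims)
        )

    def forward(self, factors):
        # factors: one LongTensor of shape (batch, seq) per factor
        return torch.cat(
            [table(f) for table, f in zip(self.tables, factors)], dim=-1
        )

# Hypothetical sizes: word, lemma, POS, and dependency-label vocabularies.
emb = FactoredEmbedding([50000, 30000, 50, 40], [256, 128, 16, 16])
word = torch.zeros(2, 7, dtype=torch.long)  # dummy indices
out = emb([word, word.clone(), word.clone(), word.clone()])
print(out.shape)  # torch.Size([2, 7, 416]), i.e. 256 + 128 + 16 + 16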
null
false
null
Schönbrunn Palace (German: Schloss Schönbrunn [ˈʃlɔs ʃøːnˈbʁʊn] (listen); Central Bavarian: Schloss Scheenbrunn) was the main summer residence of the Habsburg rulers, located in Hietzing, Vienna. The name Schönbrunn (meaning "beautiful spring") has its roots in an artesian well from which water was consumed by the court. The 1,441-room Rococo palace is one of the most important architectural, cultural, and historic monuments in the country. The history of the palace and its vast gardens spans over 300 years, reflecting the changing tastes, interests, and aspirations of successive Habsburg monarchs. It has been a major tourist attraction since the mid-1950s.
Tell me about the Schoenbrunn Palace
Schoenbrunn Palace was the main summer residence of the Habsburg rulers, located in Hietzing, Vienna. The name Schoenbrunn (meaning "beautiful spring") has its roots in an artesian well from which water was consumed by the court. The 1,441-room Rococo palace is one of the most important architectural, cultural, and historic monuments in the country.
null
false
null
The marbled newt is typically found in habitats characterized by hilly and forestry terrain, away from open and exposed areas. Marbled newts live in temporary habitats, such as ponds, that are subject to change depending on the climate conditions of the region. When rainfall is high and the temperature is lower, typically in the fall and winter months, adult marbled newts stay in the ponds. However, these shallow ponds are subject to drought in warmer spring and summer months, which forces the marbled newts to remain on land. Due to this constant change in the habitat conditions, marbled newts have the ability to adapt to different climatic conditions and habitat changes.
Given the reference text below, what problem do marbled newts face?
Marbled newts are forced to remain on land due to droughts in the warmer spring and summer months.
1912.11602
false
null
Pretraining. We collect three years of online news articles from June 2016 to June 2019. We filter out articles overlapping with the evaluation data on media domain and time range. We then conduct several data cleaning strategies. Pretraining. We collect three years of online news articles from June 2016 to June 2019. We filter out articles overlapping with the evaluation data on media domain and time range. We then conduct several data cleaning strategies.
What did they pretrain the model on?
The answers are shown as follows: * three years of online news articles from June 2016 to June 2019
null
false
null
Why is open source so important?
Open source is so important because it allows developers to contribute their skills and ideas to coding projects in the programming community. This democratizes coding and is a way to crowdsource development. It is one of the best and quickest ways to innovate and create new technology.
null
false
null
What cause did the Suffragettes advance?
The Suffragettes advocated for the right to vote for women
null
false
null
RMS Titanic was a British passenger liner, operated by the White Star Line, which sank in the North Atlantic Ocean on 15 April 1912 after striking an iceberg during her maiden voyage from Southampton, England, to New York City, United States. Of the estimated 2,224 passengers and crew aboard, more than 1,500 died, making it the deadliest sinking of a single ship up to that time. It remains the deadliest peacetime sinking of an ocean liner or cruise ship. The disaster drew public attention, provided foundational material for the disaster film genre, and has inspired many artistic works.
How did the Titanic sink?
The Titanic sank after striking an iceberg in the North Atlantic Ocean. It was the deadliest sinking of a single ship up to that time, and it remains the deadliest peacetime sinking of an ocean liner.
null
false
null
Give me a list of things that could be found on a desk
Plants, Phone, Computer, Speakers, Cup holders, headphones, mouse, keyboard
null
false
null
I have 13 identical balls that weigh the same, and out of them, one is an odd one weighing less or more. How can I find the odd one out with a simple balance? What is the least number of times I require a simple balance to find the odd one out?
Weighing the balls against each other one at a time would take too many turns. Instead, we can group them in fours so that each weighing rules out as many candidates as possible. The first weighing (four balls against four) narrows the field to at most eight candidates, the second to at most three, and the third to one. Three is also a lower bound: 13 balls that may each be heavy or light give 26 possibilities, while each weighing has only three outcomes, so two weighings can distinguish at most 3² = 9 cases. Therefore, the minimum number of weighings required is three.
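A short sanity check of the counting argument (an editorial illustration in Python, not part of the original answer):

# Each weighing of a balance has three outcomes (left heavy, right heavy,
# balanced), so k weighings distinguish at most 3**k cases. With 13 balls,
# each possibly heavy or light, there are 13 * 2 = 26 cases to tell apart.
cases = 13 * 2
k = 0
while 3 ** k < cases:
    k += 1
print(f"need at least {k} weighings: 3**{k} = {3 ** k} >= {cases}")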
null
false
353
One of the defining traits of previous event-data projects is the method through which they were generated. The original datasets such as WEIS and COPDAB were created by human coders who read news stories and coded events. Later datasets such as KEDS and Phil Schrodt's Levant dataset were created using automated coding software, such as KEDS or TABARI, and news stories downloaded from content aggregators such as Lexis Nexis or Factiva. Both pieces of coding software made use of a technique referred to as shallow parsing BIBREF13. Shallow parsing is best understood in contrast to a deep parsing method. In deep parsing, the entire syntactic structure of a sentence is used and understood. This syntactic structure includes things such as prepositional phrases, direct and indirect objects, and other grammatical structures. A shallow parse, however, focuses solely on, as the name implies, shallow aspects such as the part of speech of the words within the sentence. The second major dimension that differentiates event datasets is how news content was acquired. For WEIS and COPDAB this was as simple as subscribing to the New York Times and coding from there. Later datasets, such as those created in conjunction with the Kansas Event Data Project, obtained historical content from aggregators, as mentioned above. The difficulty of this process changed at various points in time, with something like full automation possible at some points while human downloading of stories was required at others. There are often gaps in this historical content since the content aggregators' catalogs of different news services change at various points and are often fairly limited. Updating datasets based on this type of content was also fairly labor intensive since new content had to be downloaded, cleaned, and run for every update. While orders of magnitude faster than human coding, this remained an involved process. Taken together, these two aspects of event data generation, shallow parsing and content acquisition, form the basis for where the next generation of political event data can improve upon previous efforts. In short, a shift to deep parsing based on relatively recent advances in open-source natural language processing software, combined with realtime acquisition of news content and aggressive strategies for acquiring historical material, provide the motivation for the next generation of political event data. The following section provides greater detail regarding the implementation of these new features. The original datasets such as WEIS and COPDAB were created by human coders who read news stories and coded events.
What kind of people was the WEIS created by?
Human coders who read news stories and coded events.
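To make the shallow/deep distinction concrete, here is a small editorial sketch using spaCy, a modern library chosen purely for illustration (KEDS/TABARI are pattern-based coders that predate it):

import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("The rebels attacked the convoy near the border.")

# Shallow view: only parts of speech, roughly what a shallow parser exposes.
print([(t.text, t.pos_) for t in doc])

# Deep view: the full dependency structure, including subjects, objects,
# and prepositional attachments.
print([(t.text, t.dep_, t.head.text) for t in doc])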
null
false
29
Over the years, a handful of QA systems have gained popularity around the world. One of the oldest QA systems is BASEBALL (created in 1961) BIBREF4, which answers questions related to the baseball league in America for a particular season. The LUNAR BIBREF5 system answers questions about soil samples taken from the Apollo lunar exploration. Some of the most popular QA Systems are IBM Watson, Apple Siri and Wolfram Alpha. Examples of some QA systems based on different languages are: Zhang Yu's Chinese question classification BIBREF6 based on Incremental Modified Bayes, the Arabic QA system (AQAS) BIBREF7 by F. A. Mohammed, K. Nasser, & H. M. Harb, and a syntactic open-domain Arabic QA system for factoid questions BIBREF8 by Fareed et al. QA systems have been built on different analysis methods such as morphological analysis BIBREF9, syntactic analysis BIBREF10, semantic analysis BIBREF11 and expected answer type analysis BIBREF12. Research on question classification, question taxonomies and QA systems has been undertaken in recent years. There are two types of approaches for question classification according to Banerjee et al. in BIBREF13: rule-based and machine-learning-based. Rule-based approaches use some hard-coded grammar rules to map the question to an appropriate answer type BIBREF14 BIBREF15. Machine-learning-based approaches have been used by Zhang et al. and Md. Aminul Islam et al. in BIBREF16 and BIBREF0. Many classifiers have been used in machine learning for QC, such as Support Vector Machines (SVM) BIBREF16 BIBREF17, Support Vector Machines and Maximum Entropy Models BIBREF18, Naive Bayes (NB), Kernel Naive Bayes (KNB), Decision Trees (DT) and Rule Induction (RI) BIBREF13. In BIBREF0, they claimed to achieve an average precision of 0.95562 for the coarse class and 0.87646 for the finer class using Stochastic Gradient Descent (SGD). A Bengali QC system was built by Somnath Banerjee and Sivaji Bandyopadhyay BIBREF13 BIBREF19 BIBREF20. They proposed a two-layer taxonomy classification with 9 coarse-grained classes and 69 fine-grained classes. There are other research works BIBREF0 BIBREF21 in the Bengali language. A survey was performed on text QA techniques BIBREF22 in which there was an analysis conducted in the Bengali language. Syed Mehedi Hasan Nirob et al. achieved 88.62% accuracy by using the 380 most frequent words as features in their work BIBREF17. Some of the most popular QA Systems are IBM Watson, Apple Siri and Wolfram Alpha.
What are the most popular QA Systems mentioned in this paper?
IBM Watson, Apple Siri and Wolfram Alpha.
null
false
53
We adopt the tagging - encoding pipeline framework from the CE task in 2013 ShARe/CLEF. We first tag the clinical entities in the user query and then select relevant terms from a glossary in dermatology to match the entities. We adopt the tagging - encoding pipeline framework from the CE task in 2013 ShARe/CLEF.
What framework do they adopt?
The tagging - encoding pipeline framework from the CE task in 2013 ShARe/CLEF.
null
false
null
What are the main reasons for doing a PhD?
A PhD degree provides advanced and specialized knowledge in a specific field of science. People who pursue a PhD are usually passionate about their subject and wish to invest a significant amount of time to research and contribute to that field.
null
false
null
Observer bias is one of the types of detection bias and is defined as any kind of systematic divergence from accurate facts during observation and the recording of data and information in studies. The definition can be further expanded upon to include the systematic difference between what is observed due to variation in observers, and what the true value is. Observer bias is the tendency of observers to not see what is there, but instead to see what they expect or want to see. This is a common occurrence in the everyday lives of many and is a significant problem that is sometimes encountered in scientific research and studies. Observation is critical to scientific research and activity, and as such, observer bias may be as well. When such biases exist, scientific studies can result in an over- or underestimation of what is true and accurate, which compromises the validity of the findings and results of the study, even if all other designs and procedures in the study were appropriate. Observational data forms the foundation of a significant body of knowledge. Observation is a method of data collection and falls into the category of qualitative research techniques. There are a number of benefits of observation, including its simplicity as a data collection method and its usefulness for hypotheses. Simultaneously, there are many limitations and disadvantages in the observation process, including the potential lack of reliability, poor validity, and faulty perception. Participant observation is widely used in sociological and anthropological studies, while systematic observation is used where researchers need to collect data without direct interaction with participants. The most common observation method is naturalistic observation, where subjects are observed in their natural environments with the goal of assessing behaviour in an intervention-free, natural setting. Observer bias is especially probable when the investigator or researcher has vested interests in the outcome of the research or has strong preconceptions. Coupled with ambiguous underlying data and a subjective scoring method, these three factors contribute heavily to the incidence of observer bias. Examples of cognitive biases include: Anchoring – a cognitive bias that causes humans to place too much reliance on the initial pieces of information they are provided with for a topic. This causes a skew in judgement and prevents humans and observers from updating their plans and predictions as appropriate. Bandwagon effect – the tendency for people to “jump on the bandwagon” with certain behaviours and attitudes, meaning that they adopt particular ways of doing things based on what others are doing. Bias blind spot – the tendency for people to recognize the impact of bias on others and their judgements, while simultaneously failing to acknowledge and recognize the impact that their own biases have on their own judgement. Confirmation bias – the tendency for people to look for, interpret, and recall information in such a way that their preconceived beliefs and values are affirmed. Guilt and innocence by association bias – the tendency for people to hold an assumption that individuals within a group share similar characteristics and behaviours, including those that would hail them as innocent or guilty. Halo effect – the tendency for the positive impressions and beliefs in one area around a person, brand, company, product or the like to influence an observer's opinions or feelings in other unrelated areas.
Framing effect – the tendency for people to form conclusions and opinions based on whether the pertinent information is presented to them with positive or negative connotations. Recency effect – the tendency for more recent pieces of information, ideas, or arguments to be remembered more clearly than those that preceded them.
What is observer bias?
Observer bias is one of the types of detection bias and is defined as any kind of systematic divergence from accurate facts during observation and the recording of data and information in studies. The definition can be further expanded upon to include the systematic difference between what is observed due to variation in observers, and what the true value is. Observer bias is the tendency of observers to not see what is there, but instead to see what they expect or want to see. This is a common occurrence in the everyday lives of many and is a significant problem that is sometimes encountered in scientific research and studies. Observation is critical to scientific research and activity, and as such, observer bias may be as well. When such biases exist, scientific studies can result in an over- or underestimation of what is true and accurate, which compromises the validity of the findings and results of the study, even if all other designs and procedures in the study were appropriate. Observational data forms the foundation of a significant body of knowledge. Observation is a method of data collection and falls into the category of qualitative research techniques. There are a number of benefits of observation, including its simplicity as a data collection method and its usefulness for hypotheses. Simultaneously, there are many limitations and disadvantages in the observation process, including the potential lack of reliability, poor validity, and faulty perception. Participant observation is widely used in sociological and anthropological studies, while systematic observation is used where researchers need to collect data without direct interaction with participants. The most common observation method is naturalistic observation, where subjects are observed in their natural environments with the goal of assessing behaviour in an intervention-free, natural setting. Observer bias is especially probable when the investigator or researcher has vested interests in the outcome of the research or has strong preconceptions. Coupled with ambiguous underlying data and a subjective scoring method, these three factors contribute heavily to the incidence of observer bias. Examples of cognitive biases include: Anchoring – a cognitive bias that causes humans to place too much reliance on the initial pieces of information they are provided with for a topic. This causes a skew in judgement and prevents humans and observers from updating their plans and predictions as appropriate. Bandwagon effect – the tendency for people to “jump on the bandwagon” with certain behaviours and attitudes, meaning that they adopt particular ways of doing things based on what others are doing. Bias blind spot – the tendency for people to recognize the impact of bias on others and their judgements, while simultaneously failing to acknowledge and recognize the impact that their own biases have on their own judgement. Confirmation bias – the tendency for people to look for, interpret, and recall information in such a way that their preconceived beliefs and values are affirmed. Guilt and innocence by association bias – the tendency for people to hold an assumption that individuals within a group share similar characteristics and behaviours, including those that would hail them as innocent or guilty. Halo effect – the tendency for the positive impressions and beliefs in one area around a person, brand, company, product or the like to influence an observer's opinions or feelings in other unrelated areas.
Framing effect – the tendency for people to form conclusions and opinions based on whether the pertinent information is presented to them with positive or negative connotations. Recency effect – the tendency for more recent pieces of information, ideas, or arguments to be remembered more clearly than those that preceded them.
null
false
null
Tell me whether each animal has feathers or hair: falcon, pigeon, chimpanzee, cat, cardinal, hamster, horse
Falcon: Feathers Pigeon: Feathers Chimpanzee: Hair Cat: Hair Cardinal: Feathers Hamster: Hair Horse: Hair
null
false
null
Which of the following items does not look the same as the others? Banana, orange, plum, pomegranate, watermelon.
The correct answer is banana; all of the fruits listed are round except for banana.
null
false
null
Who won six consecutive Wimbledon titles in the 1980s?
Martina Navratilova
null
false
null
What does airplane glide ratio mean?
Airplane glide ratio is a measure of an airplane's glide performance. It is the ratio between distance forward to distance downward for an airplane in free, unpowered flight. Glide ratio is affected by properties such as aerodynamical drag, wing construction, and more. Sailplanes are among the most efficient airplanes, achieving glide ratios over 70:1.
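As a worked illustration (an editorial addition):

% The glide ratio is forward distance over altitude lost, which in steady
% unpowered flight equals the lift-to-drag ratio:
\[
\text{glide ratio} = \frac{\text{distance forward}}{\text{distance downward}} \approx \frac{L}{D},
\]
% so a sailplane with a 70:1 glide ratio released 1 km above flat ground
% can cover roughly 70 km before landing.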
null
false
null
What is the history of the saxophone?
The saxophone was invented by Adolphe Sax, a Belgian instrument designer who was born in 1814 and died in 1894. While the instrument is predominantly made of brass, it is classified as a woodwind instrument due to the fact that it uses a reed as a means by which sound is produced. The saxophone is a transposing instrument that is commonly keyed in B flat and E flat, and less commonly being keyed in concert C. There are four types of saxophones that are widely used including Soprano, Alto, Tenor, and Baritone. Two versions, namely the Sopranino and the Bass, are produced by a very few manufacturers and are far less common and rarely used in most modern music. The instrument is used in a variety of musical genres including classical, jazz, fusion, funk, rock and roll, R&B, Hiphop, and even country music. Well known players of the saxophone include Charlie Parker, Cannonball Adderley, John Coltrane, Stan Getz, Sonny Stitt, Gerry Mulligan, Michael Brecker, David Sanborn, Art Pepper, Maceo Parker, Dexter Gordon, Candy Dulfer, Phil Woods, Mindi Abair, Chris Potter, Josh Redman, and many, many more.
null
false
null
What are the top 10 Hollywood movies liked by audiences?
Gone with the Wind (1939) was voted the favorite film of Americans in a poll of 2,279 adults taken by Harris Interactive in 2008, and again in a follow-up poll of 2,276 adults in 2014. Roman Holiday (1953) was voted the best foreign (i.e. non-Japanese) film of all time in a 1990 poll of about a million people organized by Japanese public broadcaster NHK. The Godfather (1972) was voted number 1 by Entertainment Weekly's readers in 1999 and voted as number 1 in a Time Out readers' poll in 1998. The film was voted the "Greatest Movie of All Time" in September 2008 by 10,000 readers of Empire magazine, 150 people from the movie business, and 50 film critics. It also topped Empire's June 2017 poll of 20,000 readers. The Empire Strikes Back (1980) was voted the best film of all time by over 250,000 readers of the Empire film magazine in 2015. The Shawshank Redemption (1994) was voted the greatest film of all time by Empire readers in "The 201 Greatest Movies of All Time" poll taken in March 2006. Titanic (1997) was voted the greatest hit of all time in a poll of 6,000 movie fans conducted by English-language newspaper China Daily in March 2008. Shiri (1999) was voted the favorite film of Koreans with 11,918 votes in a 2002 online poll of 54,013 people conducted by Korean movie channel Orion Cinema Network. The Lord of the Rings trilogy (2001–2003) was voted the favorite film of Australians in an audience poll for the Australian television special My Favourite Film in 2005. It was also voted the best film in a poll of 120,000 German voters for the TV special Die besten Filme aller Zeiten ("The best films of all time") in 2004.
null
false
null
Which is the oldest dynasty still ruling?
Japan
null
false
null
Jill, a 60-year-old personal training client, wants to improve her hip flexibility. Can a foam roller target hip flexibility? Suggest an exercise she could perform with this equipment to meet that objective.
Yes! According to research, performing foam rolling for the lower back, buttocks, quadriceps, hamstrings, calves and iliotibial bands really helps with hip flexibility. She can perform the “IT Band Release”, which essentially foam rolls the side of the leg between hip and knee. This rolls the iliotibial bands, which help to extend, abduct, and laterally rotate the hip; releasing tightness here helps target hip flexibility.
null
false
null
Independence Day is celebrated annually on 15 August as a public holiday in India commemorating the nation's independence from the United Kingdom on 15 August 1947, the day when the provisions of the Indian Independence Act, which transferred legislative sovereignty to the Indian Constituent Assembly, came into effect. India retained King George VI as head of state until its transition to a republic, when the Constitution of India came into effect on 26 January 1950 (celebrated as Indian Republic Day) and replaced the dominion prefix, Dominion of India, with the enactment of the sovereign law Constitution of India. India attained independence following the independence movement noted for largely non-violent resistance and civil disobedience.
Given this paragraph about India's Independence Day, on what date did India win independence?
August 15, 1947
null
false
244
In Yugoslavia, organized effort in machine translation started in 1959, but the first individual effort was made by Vladimir Matković from the Institute for Telecommunications in Zagreb in 1957 in his PhD thesis on entropy in the Croatian language BIBREF10. The main research group in machine translation was formed in 1958, at the Circle for Young Linguists in Zagreb, initiated by a young linguist Bulcsu Laszlo, who graduated in Russian language, Southern Slavic languages and English language and literature at the University of Zagreb in 1952. The majority of the group members came from different departments of the Faculty of Humanities and Social Sciences of the University of Zagreb, with several individuals from other institutions. The members from the Faculty of Humanities and Social Sciences were: Svetozar Petrović (Department of Comparative Literature), Stjepan Babić (Department of Serbo-Croatian Language and Literature), Krunoslav Pranjić (Department of Serbo-Croatian Language and Literature), Željko Bujas (Department of English Language and Literature), Malik Mulić (Department of Russian Language and Literature) and Bulcsu Laszlo (Department of Comparative Slavistics). The members of the research group from outside the Faculty of Humanities and Social Sciences were: Božidar Finka (Institute for Language of the Yugoslav Academy of Sciences and Arts), Vladimir Vranić (Center for Numerical Research of the Yugoslav Academy of Sciences and Arts), Vladimir Matković (Institute for Telecommunications), Vladimir Muljević (Institute for Regulatory and Signal Devices) BIBREF10. Laszlo and Petrović BIBREF11 also commented on the state of the art of the time, noting the USA prototype efforts from 1954 and the publication of a collection of research papers in 1955 as well as the USSR efforts starting from 1955 and the UK prototype from 1956. They do not detail or cite the articles they mention. However, the fact that they referred to them in a text published in 1959 (probably prepared for publishing in 1958, based on BIBREF11, where Laszlo and Petrović described that the group had started its work in 1958) leads us to the conclusion that the poorly funded Croatian research was lagging only a couple of years behind the research of the superpowers (which invested heavily in this effort). Another interesting moment, which they delineated in BIBREF11, is that the group soon discovered that some experimental work had already been done in 1957 at the Institute of Telecommunications (today a part of the Faculty of Electrical Engineering and Computing at the University of Zagreb) by Vladimir Matković. Because of this, they decided to include him in the research group of the Faculty of Humanities and Social Sciences at the University of Zagreb. The work done by Matković was documented in his doctoral dissertation but remained unpublished until 1959. The Russian machine translation pioneer Andreev expressed hope that the Yugoslav (Croatian) research group could create a prototype, but sadly, due to the lack of federal funding, this never happened BIBREF10. Unlike their colleagues in the USA and the USSR, Laszlo’s group had to manage without an actual computer (which is painfully obvious in BIBREF12), and the results remained mainly theoretical. Appealing probably to the political circles of the time, Laszlo and Petrović note that, although it sounds strange, research in computational linguistics is mainly a top-priority military effort in other countries BIBREF11. 
There is a quote from BIBREF10 which perhaps best delineates the optimism and energy that the researchers in Zagreb had: "[...] The process of translation has to be mechanized as soon as possible, and this is only possible if a competent, fast and inexhaustible machine which could inherit the translation task is created, even if just schematic. The machine needs to think for us. If machines help humans in physical tasks, why would they not help them in mental tasks with their mechanical memory and automated logic?" (p. 118). Finka [6] notes that Matković, in his dissertation from 1957, considered the use of bigrams and trigrams to “help model the word context”.
How is the word context modeled in the dissertation of Matković?
He considered the use of bigrams and trigrams to “help model the word context”.
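For concreteness, a minimal sketch of bigram/trigram extraction in modern Python (an editorial illustration of the idea, obviously not Matković's 1957 method):

from collections import Counter

def ngrams(tokens, n):
    # every contiguous run of n tokens
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

tokens = "the machine needs to think for us".split()
print(Counter(ngrams(tokens, 2)).most_common(3))  # bigrams
print(Counter(ngrams(tokens, 3)).most_common(3))  # trigrams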
null
false
45
Speech-to-Text translation (ST) is essential for a wide range of scenarios: for example in emergency calls, where agents have to respond to emergent requests in a foreign language BIBREF0; or in online courses, where audiences and speakers use different languages BIBREF1. To tackle this problem, existing approaches can be categorized into the cascaded method BIBREF2, BIBREF3, where a machine translation (MT) model translates outputs of an automatic speech recognition (ASR) system into the target language, and the end-to-end method BIBREF4, BIBREF5, where a single model learns mappings from acoustic frames to target word sequences in one step towards the final objective of interest. Although the cascaded model remains the dominant approach due to its better performance, the end-to-end method is becoming more and more popular because it has lower latency by avoiding inference with two models and, in theory, rectifies error propagation. Since it is hard to obtain a large-scale ST dataset, multi-task learning BIBREF5, BIBREF6 and pre-training techniques BIBREF7 have been applied to end-to-end ST models to leverage large-scale datasets of ASR and MT. A common practice is to pre-train two encoder-decoder models for ASR and MT respectively, and then initialize the ST model with the encoder of the ASR model and the decoder of the MT model. Subsequently, the ST model is optimized with multi-task learning by weighing the losses of ASR, MT, and ST. This approach, however, causes a huge gap between pre-training and fine-tuning, which can be summarized in three respects: Subnet Waste: the ST system just reuses the ASR encoder and the MT decoder, while discarding other pre-trained subnets, such as the MT encoder. Consequently, valuable semantic information captured by the MT encoder cannot be inherited by the final ST system. Role Mismatch: the speech encoder plays different roles in pre-training and fine-tuning. The encoder is a pure acoustic model in pre-training, while it additionally has to extract semantic and linguistic features in fine-tuning, which significantly increases the learning difficulty. Non-pre-trained Attention Module: previous work BIBREF6 trains attention modules for ASR, MT and ST respectively; hence, the attention module of ST does not benefit from the pre-training. To address these issues, we propose a Tandem Connectionist Encoding Network (TCEN), which is able to reuse all subnets in pre-training, keep the roles of the subnets consistent, and pre-train the attention module. Concretely, the TCEN consists of three components: a speech encoder, a text encoder, and a target text decoder. Different from previous work that pre-trains an encoder-decoder based ASR model, we only pre-train an ASR encoder by optimizing the Connectionist Temporal Classification (CTC) BIBREF8 objective function. In this way, the additional decoder of ASR is not required while keeping the ability of the speech encoder to read acoustic features into the source language space. Besides, the text encoder and decoder can be pre-trained on a large MT dataset. After that, we employ the commonly used multi-task learning method to jointly learn the ASR, MT and ST tasks. Compared to prior works, the encoder of TCEN is a concatenation of an ASR encoder and an MT encoder and our model does not have an ASR decoder, so the subnet waste issue is solved. Furthermore, the two encoders work in tandem, disentangling acoustic feature extraction and linguistic feature extraction, ensuring role consistency between pre-training and fine-tuning.
Moreover, we reuse the pre-trained MT attention module in ST, so we can leverage the alignment information learned in pre-training. Since the text encoder consumes word embeddings of plausible texts in the MT task but uses speech encoder outputs in the ST task, another question is how one guarantees that the speech encoder outputs are consistent with the word embeddings. We further modify our model to achieve semantic consistency and length consistency. Specifically, (1) the projection matrix at the CTC classification layer for ASR is shared with the word embedding matrix, ensuring that they are mapped to the same latent space, and (2) the length of the speech encoder output is proportional to the length of the input frames, so it is much longer than a natural sentence. To bridge the length gap, source sentences in MT are lengthened by adding word repetitions and blank tokens to mimic the CTC output sequences. We conduct comprehensive experiments on the IWSLT18 speech translation benchmark BIBREF1, demonstrating the effectiveness of each component. Our model is significantly better than previous methods by 3.6 and 2.2 BLEU scores for the subword-level decoding and character-level decoding strategies, respectively. Our contributions are three-fold: 1) we shed light on why previous ST models cannot sufficiently utilize the knowledge learned from the pre-training process; 2) we propose a new ST model, which alleviates shortcomings in existing methods; and 3) we empirically evaluate the proposed model on a large-scale public dataset. Concretely, the TCEN consists of three components: a speech encoder, a text encoder, and a target text decoder.
What encoder and decoder does the TCEN consist of?
A speech encoder, a text encoder, and a target text decoder.
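A heavily simplified editorial sketch of the tandem encoders and the tied CTC projection described above (PyTorch; layer types and dimensions are placeholder assumptions, and the attention decoder and training objectives are omitted):

import torch
import torch.nn as nn

class TandemEncoders(nn.Module):
    # The speech encoder reads acoustic frames into the source-language
    # space; the text encoder then reads its outputs, keeping acoustic
    # and linguistic feature extraction disentangled.
    def __init__(self, feat_dim=80, hid=256, vocab=10000):
        super().__init__()
        self.speech_enc = nn.LSTM(feat_dim, hid, batch_first=True)
        self.embed = nn.Embedding(vocab, hid)
        self.ctc_proj = nn.Linear(hid, vocab, bias=False)
        self.ctc_proj.weight = self.embed.weight  # share the CTC projection
                                                  # with the word embeddings
        self.text_enc = nn.LSTM(hid, hid, batch_first=True)

    def forward(self, frames):             # frames: (batch, T, feat_dim)
        h, _ = self.speech_enc(frames)     # source-space representations
        ctc_logits = self.ctc_proj(h)      # auxiliary ASR/CTC head
        s, _ = self.text_enc(h)            # linguistic features for ST
        return ctc_logits, s

m = TandemEncoders()
logits, states = m(torch.randn(2, 120, 80))
print(logits.shape, states.shape)  # (2, 120, 10000) and (2, 120, 256)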
null
false
null
What is a Melanochromis simulans
Melanochromis simulans is a species of cichlid endemic to Lake Malawi where it occurs in the Masinje Rocks and at Cape Ngombo. This species can reach a length of 7.4 centimetres (2.9 in) SL. It can also be found in the aquarium trade.
null
false
89
Question Generation (QG) is the task of automatically creating questions from a range of inputs, such as natural language text BIBREF0, knowledge bases BIBREF1 and images BIBREF2. QG is an increasingly important area in NLP with various application scenarios such as intelligent tutor systems, open-domain chatbots and question answering dataset construction. In this paper, we focus on question generation from reading comprehension materials like SQuAD BIBREF3. As shown in Figure FIGREF1, given a sentence in the reading comprehension paragraph and the text fragment (i.e., the answer) that we want to ask about, we aim to generate a question that is asked about the specified answer. Question generation for reading comprehension was first formalized as a declarative-to-interrogative sentence transformation problem with predefined rules or templates BIBREF4, BIBREF0. With the rise of neural models, Du et al. (2017) propose to model this task under the sequence-to-sequence (Seq2Seq) learning framework BIBREF5 with attention mechanism BIBREF6. However, question generation is a one-to-many sequence generation problem, i.e., several aspects can be asked about given a sentence. Zhou et al. (2017) propose the answer-aware question generation setting, which assumes the answer, a contiguous span inside the input sentence, is already known before question generation. To capture answer-relevant words in the sentence, they adopt a BIO tagging scheme to incorporate the answer position embedding in Seq2Seq learning. Furthermore, Sun et al. (2018) propose that tokens close to the answer fragments are more likely to be answer-relevant. Therefore, they explicitly encode the relative distance between sentence words and the answer via position embedding and position-aware attention. Although existing proximity-based answer-aware approaches achieve reasonable performance, we argue that such intuition may not apply to all cases, especially for sentences with a complex structure. For example, Figure FIGREF1 shows such an example where those approaches fail. This sentence contains a few facts and, due to the parenthesis (i.e. “the area's coldest month”), some facts intertwine: “The daily mean temperature in January is 0.3°C” and “January is the area's coldest month”. From the question generated by a proximity-based answer-aware baseline, we find that it wrongly uses the word “coldest” but misses the correct word “mean” because “coldest” has a shorter distance to the answer “0.3°C”. In summary, their intuition that “the neighboring words of the answer are more likely to be answer-relevant and have a higher chance to be used in the question” is not reliable. To quantitatively show this drawback of these models, we implement the approach proposed by Sun et al. (2018) and analyze its performance under different relative distances between the answer and other non-stop sentence words that also appear in the ground truth question. The results are shown in Table TABREF2. We find that the performance drops by at most 36% when the relative distance increases from “0∼10” to “>10”. In other words, when the useful context is located far away from the answer, current proximity-based answer-aware approaches become less effective, since they overly emphasize neighboring words of the answer. To address this issue, we extract the structured answer-relevant relations from sentences and propose a method to jointly model such structured relations and the unstructured sentence for question generation.
The structured answer-relevant relation is likely to be to-the-point context and thus can help keep the generated question to the point. For example, Figure FIGREF1 shows our framework can extract the right answer-relevant relation (“The daily mean temperature in January”, “is”, “32.6°F (0.3°C)”) among multiple facts. With the help of such structured information, our model is less likely to be confused by sentences with a complex structure. Specifically, we first extract multiple relations with an off-the-shelf Open Information Extraction (OpenIE) toolbox BIBREF7, then we select the relation that is most relevant to the answer with carefully designed heuristic rules. Nevertheless, it is challenging to train a model to effectively utilize both the unstructured sentence and the structured answer-relevant relation because both of them could be noisy: the unstructured sentence may contain multiple facts which are irrelevant to the target question, while the limitations of the OpenIE tool may produce less accurate extracted relations. To explore their advantages simultaneously and avoid the drawbacks, we design a gated attention mechanism and a dual copy mechanism based on the encoder-decoder framework, where the former learns to control the information flow between the unstructured and structured inputs, while the latter learns to copy words from two sources to maintain the informativeness and faithfulness of generated questions. In the evaluations on the SQuAD dataset, our system achieves significant and consistent improvement as compared to all baseline methods. In particular, we demonstrate that the improvement is more significant with a larger relative distance between the answer and other non-stop sentence words that also appear in the ground truth question. Furthermore, our model is capable of generating diverse questions for a single sentence-answer pair where the sentence conveys multiple relations of its answer fragment. Zhou et al. (2017) propose the answer-aware question generation setting which assumes the answer, a contiguous span inside the input sentence, is already known before question generation.
What is the assumption of Zhou's team in their answer-aware question generation setting?
Zhou et al. (2017) propose the answer-aware question generation setting which assumes the answer, a contiguous span inside the input sentence, is already known before question generation.
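For concreteness, the BIO answer-position scheme mentioned in the evidence can be sketched in a few lines (a toy editorial illustration, not the authors' implementation):

def bio_tags(tokens, answer_start, answer_len):
    # Mark the answer span with B/I and everything else with O.
    tags = ["O"] * len(tokens)
    tags[answer_start] = "B"
    for i in range(answer_start + 1, answer_start + answer_len):
        tags[i] = "I"
    return tags

sentence = "The daily mean temperature in January is 0.3 C".split()
# the answer "0.3 C" starts at token 7 and spans 2 tokens
print(list(zip(sentence, bio_tags(sentence, 7, 2))))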
null
false
null
What is the difference between affect and effect?
"Affect" acts as a verb while "effect" acts as a noun. For example, you would use the former spelling of the word when writing the sentence, "Alcohol consumption affects cognitive ability." Alternatively, you would use the latter spelling of the word when writing the sentence, "Alcohol consumption leads to negative effects on cognitive ability."
1610.04377
false
null
We employ a multi-class Naive Bayes classifier as the second stage classification mechanism, for categorizing tweets appropriately, depending on the type of emergencies they indicate. This multi-class classifier is trained on data manually labeled with classes. We tokenize the training data using “NgramTokenizer” and then apply a filter to create word vectors of strings before training. We use “trigrams” as features to build a model which later classifies tweets into appropriate categories, in real time. We then perform cross validation using standard techniques to calculate the results, which are shown under the label “Stage 2”, in table TABREF20. We employ a multi-class Naive Bayes classifier as the second stage classification mechanism, for categorizing tweets appropriately, depending on the type of emergencies they indicate.
What classifier is used for emergency categorization?
The answers are shown as follows: * multi-class Naive Bayes
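A minimal scikit-learn sketch of such a second-stage classifier (an editorial illustration; the described system used an NgramTokenizer-style pipeline, and the tweets and labels below are invented):

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

tweets = [
    "house on fire near main street send help",
    "huge flames spreading to the next building",
    "car crash on highway two people injured",
    "multi vehicle collision lanes blocked ambulance needed",
]
labels = ["fire", "fire", "accident", "accident"]

# word trigrams as features, multi-class Naive Bayes on top
clf = make_pipeline(CountVectorizer(ngram_range=(3, 3)), MultinomialNB())
print(cross_val_score(clf, tweets, labels, cv=2))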
null
false
null
The Night Buffalo (orig. Spanish El Búfalo de la Noche) is a novel by Guillermo Arriaga. Plot summary After Gregorio commits suicide, his friend Manuel finds himself unraveling his late friend’s world, and what led him to suicide. Gregorio’s tortuous relationship with his girlfriend is now inherited by Manuel; he becomes involved with his late friend’s girlfriend. Gregorio has missed appointments, left strange messages, and has been harassed by a vengeful policeman.
Given this plot summary from The Night Buffalo, what caused Manuel to discover these occurrences caused by his late friend?
Gregorio committed suicide, which caused Manuel to go down a path of discovery, uncovering what led his friend to suicide.
null
false
null
Give me a list of luxury watches to buy
Depending on the price you want to pay, here is a list of 10 very popular luxury watches: - Rolex Submariner with date or no date. - Rolex GMT Master II (Pepsi or Batman) - Rolex Cosmograph Daytona (white, black and other dial colours) - Audemars Piguet Royal Oak - Audemars Piguet Royal Oak Chronograph - F.P. Journe: Chronomètre Souverain - Patek Philippe Nautilus (5711 or 5712) - Patek Philippe Calatrava - Cartier Tank - Jaeger-LeCoultre Reverso
null
false
null
Classify the following as either a painting or a sculpture: Mona Lisa, The Starry Night, Venus of Willendorf, Bust of Nefertiti, The Terracotta Army, The Last Supper, The Burghers of Calais
Painting, painting, sculpture, sculpture, sculpture, painting, sculpture
null
false
null
Why is Rafael Nadal so successful on clay courts?
There are a lot of factors contributing to Rafael Nadal's success on clay. It starts with his preferred play style: heavy topspin. A heavily topspun ball bites into the surface more and jumps up higher, making it harder for opponents to get the ball back. Next is his physical prowess -- clay courts are slow, which leads to longer points, so players have to be in fantastic physical shape to keep playing at their best level for 3-4 hours on average. Finally, Rafael Nadal's attitude and competitive spirit -- he never gives up, always plays with high intensity point in and point out, and plays the sport with humility -- make him the ultimate champion he is on clay courts.
null
false
null
Walter K. Whitman (April 25, 1859 – March 27, 1928) was an American character actor of the stage and screen who was active during Hollywood's silent era. He is not to be confused with the influential poet of the same name. Whitman had a long career on the stage in cities like New York, Boston, and Chicago before he began appearing in Triangle films in the 1910s, at which point he was already an older man. He died on March 27, 1928, in Santa Monica, California. "I was only a country lad, but I had seen so many real good shows at our local opera house that I made up my mind I would be an actor," he'd later tell reporters of his beginnings. He also recounted that in 1896, he leapt from the fifth floor of the Hotel Richelieu in Montreal when it caught on fire. Selected filmography Missing Daughters (1924) Long Live the King (1923) The Grub-Stake (1923) The Love Letter (1923) Wasted Lives (1923) Hearts Aflame (1923) A Question of Honor (1922) The Fire Bride (1922) The Girl from Rocky Point (1922) The New Disciple (1921) The Mysterious Rider (1921) His Nibs (1921) The Girl from God's Country (1921) The Three Musketeers (1921) The Home Stretch (1921) The Mark of Zorro (1920) Darling Mine (1920) Passion's Playground (1920) Dangerous Hours (1919) John Petticoats (1919) When Bearcat Went Dry (1919) Pretty Smooth (1919) Destiny (1919) The Cry of the Weak (1919) Whom the Gods Would Destroy (1919) The Heart of Humanity (1918) Desert Law (1918) Daughter Angele (1918) They're Off (1918) The Price of Applause (1918) False Ambition (1918) Everywoman's Husband (1918) His Enemy, the Law (1918) The Last Rebel (1918) Old Hartwell's Cub (1918) Captain of His Soul (1918) The Hopper (1918) Without Honor (1918) The Regenerates (1917) The Firefly of Tough Luck (1917) The Tar Heel Warrior (1917) Polly Ann (1917) Wee Lady Betty (1917) The Girl, Glory (1917) The Millionaire Vagrant (1917) The Desert Man (1917) Paddy O'Hara (1917) The Boss of the Lazy Y (1917) The Dark Road (1917) The Last of the Ingrams (1917) Princess of the Dark (1917) The Sin Ye Do (1916) The Criminal (1916) The Honorable Algy (1916) The Three Musketeers (1916) The Mating (1915)
Give me a list of films in which Walter Whitman appeared while he was under the age of 60.
Walter Whitman, an American actor in silent films, was born in 1859, so he was age 60 in 1919. Here is a list of the films he appeared in before 1919: The Heart of Humanity (1918) Desert Law (1918) Daughter Angele (1918) They're Off (1918) The Price of Applause (1918) False Ambition (1918) Everywoman's Husband (1918) His Enemy, the Law (1918) The Last Rebel (1918) Old Hartwell's Cub (1918) Captain of His Soul (1918) The Hopper (1918) Without Honor (1918) The Regenerates (1917) The Firefly of Tough Luck (1917) The Tar Heel Warrior (1917) Polly Ann (1917) Wee Lady Betty (1917) The Girl, Glory (1917) The Millionaire Vagrant (1917) The Desert Man (1917) Paddy O'Hara (1917) The Boss of the Lazy Y (1917) The Dark Road (1917) The Last of the Ingrams (1917) Princess of the Dark (1917) The Sin Ye Do (1916) The Criminal (1916) The Honorable Algy (1916) The Three Musketeers (1916) The Mating (1915)
null
false
null
Where are most apples grown?
China is the largest apple producer worldwide, with the United States second. U.S. production is followed by Poland, Italy and France. The United States grows approximately 200 unique apple varieties.
null
false
null
Worcester Park railway station serves the Worcester Park area in south-west London, England. It is 10 miles 53 chains (17.2 km) down the line from London Waterloo. It opened in 1859 when the London and South Western Railway completed the Epsom branch. It was originally known as "Old Malden" and was renamed "Worcester Park" in 1862. Following substantial local housing development, the station was refurbished in the 1930s.
Please provide some information about Worcester Park railway station from the given passage
1. The Worcester Park neighbourhood in south-west London, England, is served by Worcester Park railway station. 2. It is 10 miles 53 chains (17.2 km) down the line from London Waterloo. 3. It opened in 1859, after the London and South Western Railway completed the Epsom branch. 4. The name "Worcester Park" replaced the earlier designation "Old Malden" in 1862. 5. The station was refurbished in the 1930s following substantial local housing development.
null
false
null
Horsfieldia nervosa is a species of plant in the family Myristicaceae. It is a tree endemic to Borneo where it is confined to Sarawak.
Where is Horsfieldia nervosa endemic to?
Borneo
null
false
null
Name some popular computer games from early this millennium.
World of Warcraft, Grand Theft Auto, Final Fantasy X and Guitar Hero were all popular games in the first decade of the 2000s.
null
false
null
According to the Iyengar Yoga Institute, unlike more experiential approaches where students are encouraged to independently "find their way" to the asanas by imitating the teacher, an Iyengar Yoga class is precise, with misalignments and errors actively explained and corrected. It states that the style "emphasizes precision and alignment", and prioritizes correct movement over quantity, i.e. moving a small amount in the right direction is preferred to moving more but in other directions. Postures are held for a relatively long period of time compared to other schools of yoga; this allows the muscles to relax and lengthen, and encourages awareness in the pose. Props including belts, blocks and blankets are freely used to assist students in correct working in the asanas.
Without quoting directly from the text give me a summary of the differences between Iyengar Yoga and other yoga practices.
Based on the given text, Iyengar Yoga differs from other types of yoga in the following ways: it emphasizes precision and alignment, giving precise instructions and actively explaining and correcting misalignments and errors rather than letting students find their own way to the poses. It prioritizes correct movement over quantity, preferring a small movement in the right direction to a larger one in the wrong direction. Postures are also held for a longer time than in other yoga styles, which lets the muscles relax and lengthen and builds awareness in the pose. Finally, practitioners freely use props such as belts, blocks and blankets to help themselves work correctly in the asanas.
null
false
null
What is the state capital of Nevada?
The state capital of Nevada is Carson City, which was founded in 1858 and is named after Kit Carson.
null
false
null
Parker's style of composition involved interpolation of original melodies over existing jazz forms and standards, a practice known as contrafact and still common in jazz today. Examples include "Ornithology" (which borrows the chord progression of jazz standard "How High the Moon" and is said to be co-written with trumpet player Little Benny Harris), and "Moose The Mooche" (one of many Parker compositions based on the chord progression of "I Got Rhythm"). The practice was not uncommon prior to bebop, but it became a signature of the movement as artists began to move away from arranging popular standards and toward composing their own material. Perhaps Parker's most well-known contrafact is "Koko," which is based on the chord changes of the popular bebop tune "Cherokee," written by Ray Noble.
What is contrafact?
Contrafact is a type of jazz composition in which the composer takes the harmonic structure (chord progression) of an existing song and writes a new melody over it. Charlie Parker, the jazz saxophonist and composer, used this style of composition extensively. Some of the best-known examples of Parker's contrafacts are: "Ornithology", based on the popular song "How High the Moon"; "Moose The Mooche", based on the standard "I Got Rhythm"; and "Koko", based on the chord changes of "Cherokee", written by Ray Noble.
1907.01413
true
null
There are various possible extensions for this work. For example, using all frames assigned to a phone, rather than using only the middle frame. Recurrent architectures are natural candidates for such systems. Additionally, if using these techniques for speech therapy, the audio signal will be available. An extension of these analyses should not be limited to the ultrasound signal, but instead evaluate whether audio and ultrasound can be complementary. Further work should aim to extend the four classes to a more fine-grained place of articulation, possibly based on phonological processes. Similarly, investigating which classes lead to classification errors might help explain some of the observed results. Although we have looked at variables such as age, gender, or amount of data to explain speaker variation, there may be additional factors involved, such as the general quality of the ultrasound image. Image quality could be affected by probe placement, dry mouths, or other factors. Automatically identifying or measuring such cases could be beneficial for speech therapy, for example, by signalling the therapist that the data being collected is sub-optimal. There are various possible extensions for this work. For example, using all frames assigned to a phone, rather than using only the middle frame.
Do they propose any further additions that could be made to improve generalisation to unseen speakers?
Yes.
null
false
null
In the main four professional sports leagues, what professional sports teams don't end in "S?"
In total there are eleven teams across the major four professional sports leagues that do not end in "S." There are four NBA teams that do not end in "S": the Miami Heat, Oklahoma City Thunder, Orlando Magic and Utah Jazz. There are also four NHL teams: the Colorado Avalanche, Minnesota Wild, Seattle Kraken and Tampa Bay Lightning. Two MLB teams do not end in "S": the Boston Red Sox and Chicago White Sox. The Washington Football Team is the only NFL team that does not end in "S."
null
false
null
What is a computer?
A computer is an electronic device that generates output from input provided by the user. A computer can have a visual display to show both the user's input and the resulting output. Modern computers have many sophisticated parts, including a monitor for display, a keyboard for typing input, a mouse for navigation, and a CPU for processing the input.
null
false
null
How can you take good star photos?
Taking good star photos requires three critical elements: good observation conditions, good equipment and plenty of time. Picking a dark site without much light pollution, away from cities and other strong sources of light, is a must. The weather also has to be clear, without clouds, and preferably on a night without a full moon, which is another source of light pollution. On the equipment side, if you're planning to use a cellphone, make sure that it has a night mode and that you have a way to keep it immobile for the duration of the photo, which could take 4 or more minutes. You will get better results with a digital camera, as those have larger lenses which can capture more light. Also remember that the longer the exposure, the more the earth will turn, causing your stars to appear as trails in the photo - which may or may not be desired. For professional-looking photos you'll need a telescope, and remember that no matter what you do, your photos won't look anything like those coming from Hubble and other large telescopes. Lastly, plan on spending plenty of time - the longer the exposure, the better you can see the stars - and you might want to take several photos: try different parts of the sky, see what happens when you include the ground or not, and remember to let your eyes adjust to the dark and look around yourself while you're taking the photos!
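To make the star-trail point above concrete, hobbyist photographers often use the so-called "500 rule", a rough rule of thumb (not mentioned in the answer above) for estimating the longest exposure before stars visibly trail; the function name and sample focal lengths below are illustrative assumptions.

```python
# Rough sketch of the "500 rule" for estimating the longest exposure
# before stars visibly trail (a rule of thumb, not an exact law).
def max_exposure_seconds(focal_length_mm: float, crop_factor: float = 1.0) -> float:
    """Approximate longest shutter time in seconds before star trails appear."""
    return 500.0 / (focal_length_mm * crop_factor)

# Example: a 24 mm lens on a full-frame camera (crop factor 1.0)
print(f"{max_exposure_seconds(24):.1f} s")        # ~20.8 s
# Example: the same lens on an APS-C body (crop factor ~1.5)
print(f"{max_exposure_seconds(24, 1.5):.1f} s")   # ~13.9 s
```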
null
false
null
Write a script about a dystopian past where wars are fought between religion and technology. A younger challenges the ways of the elders.
The Order of the Skvll is holding a sacred meeting, with all members present. As per tradition, they are wearing hooded cloaks and masks. The entire auditorium is packed, and the high council of elders is leading the dialogue. Quo: "The tides of war are shifting and we stand on the edge of a perilous era. We must adapt, or be left behind. We cannot afford to be complacent." Masked Elder 1 (Male): "Centuries of warfare have honed our blades and sharpened our senses. This coming conflict shall be no different." Quo: "Blinded by pride, you fail to see the threat that looms ahead. The 'machines' have evolved. They will soon possess powers that surpass our own. Our sole prospect for defeating them rests in our ability to match their strength, to attain a power that rivals their own." Masked Elder 2 (Female): "It's not through technology that we find our strength, for true power comes from the divine within us. It is found in our ability to connect and draw from its boundless source. The Gods have granted us protection and with their aid, we shall vanquish our foes, as we always have." Quo: "The idea of divine strength is a fascinating one, but the Gods cannot protect us from the harsh realities upon us. Our duty, as protectors of our realm and the outer worlds, is to remain vigilant against the many threats that seek to undermine our safety. My oath remains resolute." Masked Elder 3 leans in next to her and says, with worry in his eyes: "Quo, I beg you, don't do this..."
1911.12569
false
null
We implement our model in Python using Tensorflow on a single GPU. We experiment with six different BiLSTM-based architectures. The first three architectures correspond to BiLSTM-based systems without primary attention, i.e. only with secondary attention, for sentiment analysis (S1), emotion analysis (E1) and the multi-task system (M1) for joint sentiment and emotion analysis. The remaining three architectures correspond to the systems for sentiment analysis (S2), emotion analysis (E2) and the multi-task system (M2), with both primary and secondary attention. The weight matrices were initialized randomly using numbers from a truncated normal distribution. The batch size was 64 and the dropout BIBREF34 was 0.6 with the Adam optimizer BIBREF35. The hidden state vectors of both the forward and backward LSTM were 300-dimensional, whereas the context vector was 150-dimensional. Relu BIBREF36 was used as the activation for the hidden layers, whereas in the output layer we used sigmoid as the activation function. Sigmoid cross-entropy was used as the loss function. F1-score was reported for the sentiment analysis BIBREF7 and precision, recall and F1-score were used as the evaluation metrics for emotion analysis BIBREF15. Therefore, we report the F1-score for sentiment and precision, recall and F1-score for emotion analysis. We compare the performance of our proposed system with the state-of-the-art systems of SemEval 2016 Task 6 and the systems of BIBREF15. Experimental results show that the proposed system improves on the existing state-of-the-art systems for sentiment and emotion analysis. We summarize the results of evaluation in Table TABREF18. FLOAT SELECTED: TABLE II F-SCORE OF VARIOUS MODELS ON SENTIMENT AND EMOTION TEST DATASET. F1-score was reported for the sentiment analysis BIBREF7 and precision, recall and F1-score were used as the evaluation metric for emotion analysis BIBREF15. We compare the performance of our proposed system with the state-of-the-art systems of SemEval 2016 Task 6 and the systems of BIBREF15. Experimental results show that the proposed system improves the existing state-of-the-art systems for sentiment and emotion analysis. We summarize the results of evaluation in Table TABREF18. FLOAT SELECTED: TABLE II F-SCORE OF VARIOUS MODELS ON SENTIMENT AND EMOTION TEST DATASET.
What performance did they obtain on the SemEval dataset?
F1 score of 82.10%
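As a rough illustration of the BiLSTM setup described in the evidence above (300-dimensional hidden states per direction, dropout of 0.6, sigmoid outputs trained with sigmoid cross-entropy and Adam), here is a minimal PyTorch sketch. It omits the paper's primary and secondary attention mechanisms, and the vocabulary size, label count and mean-pooling choice are assumptions.

```python
import torch
import torch.nn as nn

class BiLSTMClassifier(nn.Module):
    """Minimal sketch echoing the setup above: BiLSTM with 300-dim
    hidden states per direction, dropout of 0.6, sigmoid outputs."""
    def __init__(self, vocab_size: int, embed_dim: int = 300,
                 hidden_dim: int = 300, num_labels: int = 8):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.bilstm = nn.LSTM(embed_dim, hidden_dim,
                              batch_first=True, bidirectional=True)
        self.dropout = nn.Dropout(0.6)
        self.out = nn.Linear(2 * hidden_dim, num_labels)

    def forward(self, token_ids):
        emb = self.embedding(token_ids)              # (batch, seq, embed)
        states, _ = self.bilstm(emb)                 # (batch, seq, 2*hidden)
        pooled = self.dropout(states.mean(dim=1))    # mean pooling stands in for attention
        return self.out(pooled)                      # raw logits

model = BiLSTMClassifier(vocab_size=20000)
loss_fn = nn.BCEWithLogitsLoss()                     # sigmoid cross-entropy
optimizer = torch.optim.Adam(model.parameters())
```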
1910.01363
false
null
In order to integrate automatically labeled examples into a network analysis that studies the flow of polarized information in the network, we need to produce high precision predictions for the pro-Russian and the pro-Ukrainian class. Polarized tweets that are incorrectly classified as neutral will hurt an analysis much less than neutral tweets that are erroneously classified as pro-Russian or pro-Ukrainian. However, the worst type of confusion is between the pro-Russian and pro-Ukrainian class. In order to gain insights into why these confusions happen, we manually inspect incorrectly predicted examples that are confused between the pro-Russian and pro-Ukrainian class. We analyse the misclassifications in the development set of all 10 runs, which results in 73 False Positives of pro-Ukrainian tweets being classified as pro-Russian (referred to as pro-Russian False Positives), and 88 False Positives of pro-Russian tweets being classified as pro-Ukrainian (referred to as pro-Ukrainian False Positives). We can identify three main cases for which the model produces an error: (1) the correct class can be directly inferred from the text content easily, even without background knowledge; (2) the correct class can be inferred from the text content, given that event-specific knowledge is provided; (3) the correct class can be inferred from the text content if the text is interpreted correctly. We can identify three main cases for which the model produces an error: (1) the correct class can be directly inferred from the text content easily, even without background knowledge; (2) the correct class can be inferred from the text content, given that event-specific knowledge is provided; (3) the correct class can be inferred from the text content if the text is interpreted correctly.
What type of errors do the classifiers make?
The answers are shown as follows: * correct class can be directly inferred from the text content easily, even without background knowledge * correct class can be inferred from the text content, given that event-specific knowledge is provided * correct class can be inferred from the text content if the text is interpreted correctly
1908.08419
false
null
To select the most appropriate sentences in a large number of unlabeled corpora, we propose a scoring model based on information entropy and neural network as the sampling strategy of active learning, which is inspired by Cai and Zhao BIBREF32. The score of a segmented sentence is computed as follows. First, the segmented sentence is mapped to a sequence of candidate word embeddings. Then, the scoring model takes the word embedding sequence as input, scoring over each individual candidate word from two perspectives: (1) the possibility that the candidate word itself can be regarded as a legal word; (2) the rationality of the link that the candidate word directly follows previous segmentation history. Fig. FIGREF10 illustrates the entire scoring model. A gated neural network is employed over character embeddings to generate distributed representations of candidate words, which are sent to an LSTM model. A gated neural network is employed over character embeddings to generate distributed representations of candidate words, which are sent to an LSTM model.
Which neural network architectures are employed?
The answers are shown as follows: * gated neural network
null
false
null
How to do forecasting with small dataset?
Forecasting with small datasets can be challenging, as many traditional methods require larger amounts of data for accurate predictions. Here are a few approaches that can help. Simple moving average: This involves taking the average of a certain number of past data points, and using it as the forecast for the next period. This method can be effective for stable time series with no trend or seasonality. Exponential smoothing: This method assigns exponentially decreasing weights to past observations, with more recent observations being given more weight. This method can be effective for time series with trend and/or seasonality. ARIMA: The AutoRegressive Integrated Moving Average (ARIMA) model is a popular choice for time series forecasting. It can be effective for small datasets if the underlying data follows a pattern that can be modeled with ARIMA.
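A minimal sketch of the first two methods above, using only the Python standard library; the sample series and parameter values are made up for illustration, and the commented-out ARIMA call shows the typical statsmodels entry point.

```python
# Minimal sketches of simple moving average and exponential smoothing.
def simple_moving_average(series, window):
    """Forecast the next value as the mean of the last `window` points."""
    return sum(series[-window:]) / window

def exponential_smoothing(series, alpha):
    """Single exponential smoothing: recent points get more weight."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

sales = [12, 13, 12, 14, 15, 14, 16]                # tiny illustrative dataset
print(simple_moving_average(sales, window=3))       # 15.0
print(exponential_smoothing(sales, alpha=0.5))

# For ARIMA, a library such as statsmodels is typically used, e.g.:
#   from statsmodels.tsa.arima.model import ARIMA
#   forecast = ARIMA(sales, order=(1, 1, 1)).fit().forecast(1)
```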
null
false
null
What was discovered in 1922 by Howard Carter?
Tutankhamun's tomb
null
false
null
Voyager 1 is a space probe launched by NASA on September 5, 1977, as part of the Voyager program to study the outer Solar System and interstellar space beyond the Sun's heliosphere. Launched 16 days after its twin Voyager 2, Voyager 1 has been operating for 45 years, 7 months and 1 day as of April 6, 2023 UTC . It communicates through NASA's Deep Space Network to receive routine commands and to transmit data to Earth. Real-time distance and velocity data is provided by NASA and JPL. At a distance of 159.20 AU (23.816 billion km; 14.799 billion mi) from Earth as of March 27, 2023, it is the most distant human-made object from Earth. The probe made flybys of Jupiter, Saturn, and Saturn's largest moon, Titan. NASA had a choice of either doing a Pluto or Titan flyby; exploration of the moon took priority because it was known to have a substantial atmosphere. Voyager 1 studied the weather, magnetic fields, and rings of the two gas giants and was the first probe to provide detailed images of their moons. As part of the Voyager program and like its sister craft Voyager 2, the spacecraft's extended mission is to locate and study the regions and boundaries of the outer heliosphere and to begin exploring the interstellar medium. Voyager 1 crossed the heliopause and entered interstellar space on August 25, 2012, making it the first spacecraft to do so. Two years later, Voyager 1 began experiencing a third "tsunami wave" of coronal mass ejections from the Sun that continued to at least December 15, 2014, further confirming that the probe is indeed in interstellar space. In a further testament to the robustness of Voyager 1, the Voyager team tested the spacecraft's trajectory correction maneuver (TCM) thrusters in late 2017 (the first time these thrusters had been fired since 1980), a project enabling the mission to be extended by two to three years. Voyager 1's extended mission is expected to continue until about 2025, when its radioisotope thermoelectric generators (RTGs) will no longer supply enough electric power to operate its scientific instruments.
Without quoting directly from the text, give me a summary of the Voyager 1 space mission
The Voyager 1 space mission began on September 5, 1977, when NASA launched the probe to study the outer Solar System and the interstellar space beyond the Sun's heliosphere. The mission is now in its 45th year and has provided significant learning about gas giants such as Jupiter and Saturn and their moons; in 2012 Voyager 1 became the first spacecraft to enter interstellar space, and it continues to return scientific data from regions never before encountered.
null
false
94
In seeking to answer the question of whether the occupancy rate of a listing could be extracted from the listing's summary, we ran a number of experiments on our first model. Two parameterizations which we present here are (1) whether the word vectors used in the embedding layer are trained on our corpus or come pretrained from Wikipedia and Gigaword and (2) whether ensembling or the final hidden state in isolation are used to make a prediction for the sequence. Common to all experiments was our decision to use an Adam optimizer, 16 LSTM units, 50-dimensional GloVe vectors, and a 70-30 split in train and test data. Over ten epochs, the model parameterization which performs the best uses GloVe vectors trained on a corpus consisting of all listing descriptions and ensembling to make its class prediction. As a result, our findings are well in line with those presented by Radford et al., who underscore the importance of training word embeddings on a data-specific corpus for best results on generative tasks [9]. That said, these results, though they do show a marginal increase in dev accuracy and a decrease in CE loss, suggest that perhaps the listing description is not too predictive of occupancy rate given our parameterizations. While the listing description is surely an influential metric in determining the quality of a listing, other factors such as location, amenities, and home type might play a larger role in the consumer's decision. We were hopeful that these factors would be represented in the price per bedroom of the listing – our control variable – but the relationship may not have been strong enough. However, should a strong relationship actually exist and there be instead a problem with our method, there are a few possibilities of what went wrong. We assumed that listings with similar occupancy rates would have similar listing descriptions regardless of price, which is not necessarily a strong assumption. This is coupled with an unexpected sparseness of clean data. With over 40,000 listings, we did not expect to see such poor attention to orthography in what are essentially public advertisements of the properties. In this way, our decision to use a window size of 5, a minimum occurrence count of 2, and a dimensionality of 50 when training our GloVe vectors was ad hoc. Seeking to create a model which could generate and discriminate a "high-occupancy listing description", we wanted to evaluate the capabilities of a generative adversarial network trained on either the standard binary cross-entropy loss or the DMK loss proposed above. Common to both models was the decision to alternate between training the generator for 50 steps and the discriminator for 2000 steps. We leave further tuning of the models to future research as each occasionally falls into unideal local optima within 20 iterations. One potential culprit is the step imbalance between generator and discriminator – should either learn at a much faster rate than the other, one component is liable to be "defeated" and cease to learn the training data. Qualitatively, the network trained on the DMK loss shows great promise. With respect to the two experiments presented here, we have shown that it is possible to introduce a measure of suggestion in the text produced by the generator. While this model is also subject to a rapid deadlock between generator and discriminator, it is interesting to see how the introduction of keywords is gradual and affects the proximal tokens included in the output.
This behavior was made possible by paying close attention to the hyperparameter $\gamma$, the weight given to the dot-product attention term of the DMK loss. After manual tuning, we settle on $\gamma = 0.00045$ for this weight. Below, we illustrate model outputs using different values of $\gamma$. As is apparent, for a hyper-parameter value less than roughly $\gamma = 0.0004$, the model tends to ignore the importance of the keyword weights. Conversely, with a $\gamma$ value higher than $0.0005$, the model tends towards overweighting the representation of the keywords in the model output. In seeking to answer the question of whether the occupancy rate of a listing could be extracted from the listing's summary, we ran a number of experiments on our first model. Two parameterizations which we present here are (1) whether the word vectors used in the embedding layer are trained on our corpus or come pretrained from Wikipedia and Gigaword and (2) whether ensembling or the final hidden state in isolation are used to make a prediction for the sequence. Common to all experiments was our decision to use an Adam optimizer, 16 LSTM units, 50-dimensional GloVe vectors, and a 70-30 split in train and test data.
What parameterizations do they present in the first model?
(1) whether the word vectors used in the embedding layer are trained on their corpus or come pretrained from Wikipedia and Gigaword and (2) whether ensembling or the final hidden state in isolation are used to make a prediction for the sequence.
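The excerpt above does not reproduce the paper's actual DMK loss, so the sketch below is only a generic stand-in for the idea it describes: a standard cross-entropy term plus a keyword-attention term weighted by $\gamma$ (0.00045 being the value the authors report). The attention term, tensor shapes and names here are illustrative assumptions, not the authors' formulation.

```python
import torch
import torch.nn.functional as F

def dmk_style_loss(logits, targets, token_embs, keyword_embs, gamma=0.00045):
    """Illustrative stand-in for a DMK-style objective: cross-entropy
    plus a gamma-weighted dot-product term rewarding similarity between
    generated-token embeddings and keyword embeddings. The real loss in
    the paper may differ; gamma=0.00045 is the value reported above.

    Assumed shapes: logits (N, vocab), targets (N,),
    token_embs (batch, time, dim), keyword_embs (num_keywords, dim)."""
    ce = F.cross_entropy(logits, targets)
    # mean dot-product attention between output tokens and keywords
    attn = torch.einsum("btd,kd->btk", token_embs, keyword_embs).mean()
    return ce - gamma * attn  # higher keyword similarity lowers the loss
```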
1907.12108
false
null
We apply the Generative Pre-trained Transformer (GPT) BIBREF2 as our pre-trained language model. GPT is a multi-layer Transformer decoder with a causal self-attention which is pre-trained, unsupervised, on the BooksCorpus dataset. The BooksCorpus dataset contains over 7,000 unique unpublished books from a variety of genres. Pre-training on such a large contiguous text corpus enables the model to capture long-range dialogue context information. Furthermore, as the existing EmpatheticDialogue dataset BIBREF4 is relatively small, fine-tuning only on such a dataset will limit the chitchat topics of the model. Hence, we first integrate persona into CAiRE, and pre-train the model on PersonaChat BIBREF3, following a previous transfer-learning strategy BIBREF1. This pre-training procedure allows CAiRE to have a more consistent persona, thus improving the engagement and consistency of the model. We refer interested readers to the code repository recently released by HuggingFace. Finally, in order to optimize empathy in CAiRE, we fine-tune this pre-trained model using the EmpatheticDialogue dataset to help CAiRE understand users' feelings. We apply the Generative Pre-trained Transformer (GPT) BIBREF2 as our pre-trained language model.
What pretrained LM is used?
The answers are shown as follows: * Generative Pre-trained Transformer (GPT)
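The record above describes fine-tuning a pre-trained GPT on dialogue data via HuggingFace. A minimal sketch of one fine-tuning step with the transformers library follows; "openai-gpt" is the public checkpoint of the original GPT, while the sample dialogue, separator and data preparation are simplified assumptions.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the original GPT checkpoint, then continue training it on
# dialogue text (real data preparation and batching are omitted).
tokenizer = AutoTokenizer.from_pretrained("openai-gpt")
model = AutoModelForCausalLM.from_pretrained("openai-gpt")

dialogue = "I just lost my job. </s> I'm so sorry to hear that."
inputs = tokenizer(dialogue, return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])  # causal LM loss
outputs.loss.backward()  # one illustrative gradient step of fine-tuning
```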
null
false
null
In 1872 the Boundary Commission travelled through this area while surveying the border between Canada and the United States. During the 1874 March West trek of the North-West Mounted Police, the force established its first camp (called Short Creek Camp) at this location. The community began being served by the Soo Line in 1893, permitting the coal mines in the region to become operational. Roche Percée incorporated as a village on January 12, 1909.
Please summarize the history of the development of Roche Percee based on this article
Roche Percee was originally explored in 1872 by the Boundary Commission before a small camp was built in the area in 1874 by the North-West Mounted Police. The area then grew further after the establishment of the Soo Line train, with coal mining helping to grow the area. Eventually, Roche Percee was finally incorporated in the beginning of 1909.
null
false
null
Classify each of the following as either a US state or a country in the world: Illinois, Arizona, Iran, Washington, India, Bulgaria, Taiwan, New Mexico, Brazil
The US states are: Illinois, Arizona, Washington, New Mexico Countries of the world are: Iran, India, Bulgaria, Brazil
null
false
12
In this paper, we presented two methods for long documents using the BERT model: RoBERT and ToBERT. We evaluated our experiments on two classification tasks - customer satisfaction prediction and topic identification - using 3 datasets: CSAT, 20newsgroups and Fisher. We observed that ToBERT outperforms RoBERT on pre-trained BERT features and fine-tuned BERT features for all our tasks. Also, we noticed that fine-tuned BERT performs better than pre-trained BERT. We have shown that both RoBERT and ToBERT improved the simple baselines of taking an average (or the most frequent) of segment-wise predictions for long documents to obtain the final prediction. Position embeddings did not significantly affect our models' performance, but slightly improved the accuracy on the CSAT task. We obtained the best results on the Fisher dataset and good improvements for the CSAT task compared to the CNN baseline. It is interesting to note that the longer the average input in a given task, the bigger the improvement we observe w.r.t. the baseline for that task. Our results confirm that both RoBERT and ToBERT can be used for long sequences with competitive performance and a quick fine-tuning procedure. For future work, we shall focus on training models on long documents directly (i.e. in an end-to-end manner). For future work, we shall focus on training models on long documents directly (i.e. in an end-to-end manner).
What work will the team focus on in the future?
They shall focus on training models on long documents directly (i.e. in an end-to-end manner).
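For context, here is a minimal sketch of the simple baseline the conclusion above says RoBERT and ToBERT improve on: split a long document into segments, classify each with BERT, and average the segment-wise predictions. Segment length, model checkpoint and label count are assumptions; the learned recurrence (RoBERT) or transformer (ToBERT) over segment representations is not shown.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)
model.eval()

def predict_long_document(text: str, words_per_segment: int = 150) -> torch.Tensor:
    """Baseline: average class probabilities over fixed-size segments."""
    words = text.split()
    segments = [" ".join(words[i:i + words_per_segment])
                for i in range(0, len(words), words_per_segment)]
    seg_probs = []
    with torch.no_grad():
        for seg in segments:
            inputs = tokenizer(seg, return_tensors="pt",
                               truncation=True, max_length=512)
            logits = model(**inputs).logits
            seg_probs.append(logits.softmax(dim=-1))
    return torch.cat(seg_probs).mean(dim=0)  # averaged segment predictions
```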
null
false
null
Which Philip K. Dick novel was the basis for the movie "Blade Runner"?
Do Androids Dream of Electric Sheep?
null
false
null
What are the best Sherlock Holmes short stories?
While "best" can be subjective, the author of the Sherlock Holmes series, Sir Arthur Conan Doyle, selected his own list of 12 short stories that he thought were the best. His list was this: 1. The Speckled Band 2. The Red-Headed League 3. The Dancing Men 4. The Final Problem 5. A Scandal in Bohemia 6. The Empty House 7. The Five Orange Pips 8. The Second Stain 9. The Devil’s Foot 10. The Priory School 11. The Musgrave Ritual 12. The Reigate Squires
null
false
null
What is IMAP?
Internet Message Access Protocol (IMAP) is a standard email retrieval protocol. It stores email messages on a mail server and enables the recipient to view and manipulate them as though they were stored locally on their device.
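A minimal sketch of IMAP in practice, using Python's standard-library imaplib client; the server address, credentials and search criterion below are placeholders.

```python
import imaplib

# Connect over SSL, open a mailbox, and fetch unread messages; the
# messages remain stored on the server, as the description above notes.
with imaplib.IMAP4_SSL("imap.example.com") as client:
    client.login("user@example.com", "app-password")
    client.select("INBOX")                         # open the server-side mailbox
    status, data = client.search(None, "UNSEEN")   # find unread messages
    for num in data[0].split():
        status, msg = client.fetch(num, "(RFC822)")  # retrieve a full message
        # msg[0][1] holds the raw message bytes; parse with the email module
```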
null
false
null
How many time zones does Alaska have?
Alaska has two time zones: Alaska Time Zone (UTC-09:00 standard and UTC-08:00 daylight saving) and Hawaii-Aleutian Time Zone (UTC-10:00 standard and UTC-09:00 daylight saving). The latter is used only by islands on the Aleutian chain that are west of 169°30′W.
null
false
null
Can existing Delta tables be converted into DLT?
Delta Live Tables (DLT) is actually the pipeline that manages the table, so you don't need to "convert" a Delta table; you just migrate the pipeline to a Live Tables pipeline. Migrating an existing pipeline to Delta Live Tables is mostly a matter of refactoring the code that writes to a Delta table. It still uses your existing transformations and data sources.
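A minimal sketch of the refactor described above: a plain batch write to a Delta table becomes a Delta Live Tables table definition. The table name, source path and transformation are placeholders, and the dlt module is only available inside a Databricks DLT pipeline, not in a standalone script.

```python
# Before: a plain job writing to a Delta table, roughly:
#   spark.read.json("/raw/events").write.format("delta") \
#        .mode("overwrite").saveAsTable("clean_events")

# After: the same transformation expressed as a Delta Live Tables table.
import dlt
from pyspark.sql import functions as F

@dlt.table(name="clean_events", comment="Cleaned events (illustrative)")
def clean_events():
    # Same source and transformations as before; DLT now manages the table.
    # `spark` is provided by the pipeline runtime.
    return (
        spark.read.json("/raw/events")
             .withColumn("ingested_at", F.current_timestamp())
    )
```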
null
false
null
Who is the best baseball player of all time?
There is no single answer to this question. The most common candidates are Babe Ruth, Barry Bonds, Willie Mays, or Hank Aaron.
null
false
null
Drangajökull (pronounced [ˈtrauŋkaˌjœːkʏtl̥], regionally also [ˈtraŋk-]) is the northernmost glacier of Iceland. It is situated southwest of the peninsula Hornstrandir in the Westfjords region. The glacier covers an area of 160–200 km2 (62–77 sq mi), at an altitude of up to 925 m (3,035 ft). It is the only Icelandic glacier which lies entirely below an altitude of 1000 metres and also the only one that has not shrunk in recent years.
Given this paragraph about Drangajökull, tell me where it's located.
Drangajökull is located in Iceland in the Westfjords region.
null
false
null
A battery storage power station is a type of energy storage power station that uses a group of batteries to store electrical energy. Battery storage is the fastest responding dispatchable source of power on electric grids, and it is used to stabilise those grids, as battery storage can transition from standby to full power in under a second to deal with grid contingencies. At full rated power, battery storage power stations are generally designed to output for up to a few hours. Battery storage can be used for short-term peak power and ancillary services, such as providing operating reserve and frequency control to minimize the chance of power outages. They are often installed at, or close to, other active or disused power stations and may share the same grid connection to reduce costs. Since battery storage plants require no deliveries of fuel, are compact compared to generating stations and have no chimneys or large cooling systems, they can be rapidly installed and placed if necessary within urban areas, close to customer load.
What dispatchable source of power on electric grids are faster than battery storage power stations?
None. Battery storage is the fastest responding dispatchable source of power on electric grids.
null
false
null
Tell me whether these are cars or CA Amtrak lines: Tesla Model 3, Capitol Corridor, Chevy Nova, Honda Accord, California Zephyr, Subaru Impreza, Coast Starlight, Pacific Surfliner, San Joaquins
The Amtrak routes are the Capitol Corridor, California Zephyr, Coast Starlight, Pacific Surfliner, and San Joaquins, and the cars are the Tesla Model 3, Chevy Nova, Honda Accord, and Subaru Impreza.
null
false
123
Language models can be optimized to recognize syntax and semantics with great accuracy BIBREF0. However, the output generated can be repetitive and generic leading to monotonous or uninteresting responses (e.g “I don't know”) regardless of the input BIBREF1. While application of attention BIBREF2, BIBREF3 and advanced decoding mechanisms like beam search and variation sampling BIBREF4 have shown improvements, it does not solve the underlying problem. In creative text generation, the objective is not strongly bound to the ground truth—instead the objective is to generate diverse, unique or original samples. We attempt to do this through a discriminator which can give feedback to the generative model through a cost function that encourages sampling of creative tokens. The contributions of this paper are in the usage of a GAN framework to generate creative pieces of writing. Our experiments suggest that generative text models, while very good at encapsulating semantic, syntactic and domain information, perform better with external feedback from a discriminator for fine-tuning objectiveless decoding tasks like that of creative text. We show this by evaluating our model on three very different creative datasets containing poetry, metaphors and lyrics. Previous work on handling the shortcomings of MLE include length-normalizing sentence probability BIBREF5, future cost estimation BIBREF6, diversity-boosting objective function BIBREF7, BIBREF1 or penalizing repeating tokens BIBREF8. When it comes to poetry generation using generative text models, Zhang and Lapata BIBREF9, Yi et al. BIBREF10 and Wang et al. BIBREF11 use language modeling to generate Chinese poems. However, none of these methods provide feedback on the quality of the generated sample and hence, do not address the qualitative objective required for creative decoding. For the task of text generation, MaskGAN BIBREF12 uses a Reinforcement Learning signal from the discriminator, FMD-GAN BIBREF13 uses an optimal transport mechanism as an objective function. GumbelGAN BIBREF14 uses Gumbel-Softmax distribution that replaces the non-differentiable sample from a categorical distribution with a differentiable sample to propagate stronger gradients. Li et al. BIBREF1 use a discriminator for a diversity promoting objective. Yu et al. BIBREF15 use SeqGAN to generate poetry and comment on the performance of SeqGAN over MLE in human evaluations, encouraging our study of GANs for creative text generation. However, these studies do not focus solely on creative text. Using GANs, we can train generative models in a two-player game setting between a discriminator and a generator, where the discriminator (a binary classifier) learns to distinguish between real and fake data samples and the generator tries to fool the discriminator by generating authentic and high quality output.
In the two-player game setting, what are the functions of the discriminator and the generator, respectively?
The discriminator (a binary classifier) learns to distinguish between real and fake data samples and the generator tries to fool the discriminator by generating authentic and high quality output.
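A minimal sketch of the two-player objective described above: the discriminator learns to separate real from generated samples, and the generator is updated to fool it. The toy networks here operate on continuous vectors; text GANs like those cited need extra machinery (e.g. RL signals or Gumbel-Softmax) to pass gradients through discrete tokens, which this sketch deliberately glosses over.

```python
import torch
import torch.nn as nn

D = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))  # discriminator
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 16))  # generator
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(64, 16)   # stand-in for embeddings of real samples
noise = torch.randn(64, 8)

# Discriminator step: label real samples 1 and generated samples 0.
fake = G(noise).detach()
loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
opt_d.zero_grad()
loss_d.backward()
opt_d.step()

# Generator step: try to make the discriminator label fakes as real.
loss_g = bce(D(G(noise)), torch.ones(64, 1))
opt_g.zero_grad()
loss_g.backward()
opt_g.step()
```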
null
false
null
Palm Cove is located in Far North Queensland on the Australian coast. It has a long sandy beach along most of its seafront except for the rocky headland around Buchan Point in the north of the suburb.
Where in Australia is Palm Cove located?
Palm Cove is located in Far North Queensland on the Australian coast.
null
false
null
What is the reported biodiversity of Costa Rica?
Costa Rica is home to approximately 5% to 6% of the world's biodiversity.
null
false
null
What is a good second language to learn beyond English, and why?
The most popular second languages to learn for English speakers are Spanish, French, and Italian, although from a utility perspective the most widely spoken languages aside from English are Mandarin and Hindi. You might choose a second language to learn based on upcoming travel plans or for work reasons, or purely for fun!
null
false
null
What is the field of study that looks at tissues and their structure?
Histology is the field of study that examines tissues and their structure under a microscope.
null
false
null
Why is wine country in Northern California a great place to travel?
Wine country in Northern California is an excellent place to travel. There are countless vineyards to visit, some of the best restaurants in the country, and picturesque views everywhere. Visiting wine country is a relaxing vacation where travelers can slow down their daily pace and simply enjoy the beautiful surroundings. Wine country is also rich in history, and its quaint towns are fun to just walk around.
1612.07843
false
null
Note that the BoW/SVM model being a linear predictor relying directly on word frequency statistics, it lacks expressive power in comparison to the CNN model which additionally learns intermediate hidden layer representations and convolutional filters. Moreover the CNN model can take advantage of the semantic similarity encoded in the distributed word2vec representations, while for the BoW/SVM model all words are “equidistant” in the bag-of-words semantic space. As our experiments will show, these limitations lead the BoW/SVM model to sometimes identify spurious words as relevant for the classification task. In analogy to the semantic extraction proposed in section "Word Relevance and Vector-Based Document Representation" for the CNN model, we can build vectors $d$ representing documents by leveraging the word relevances obtained with the BoW/SVM model. To this end, we introduce a binary vector $\tilde{x} \in \mathbb {R}^{V} $ whose entries are equal to one when the corresponding word from the vocabulary is present in the document and zero otherwise (i.e. $\tilde{x}$ is a binary bag-of-words representation of the document). Thereafter, we build the document summary vector $d$ component-wise, so that $d$ is just a vector of word relevances: Note that the BoW/SVM model being a linear predictor relying directly on word frequency statistics, it lacks expressive power in comparison to the CNN model which additionally learns intermediate hidden layer representations and convolutional filters. Moreover the CNN model can take advantage of the semantic similarity encoded in the distributed word2vec representations, while for the BoW/SVM model all words are “equidistant” in the bag-of-words semantic space
According to the authors, why does the CNN model exhibit a higher level of explainability?
The answers are shown as follows: * CNN model which additionally learns intermediate hidden layer representations and convolutional filters. Moreover the CNN model can take advantage of the semantic similarity encoded in the distributed word2vec representations
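The evidence above cuts off just before the component-wise definition of the document summary vector d, so the exact equation is not available here. A plausible reading, sketched below purely as an assumption, is that d masks the per-word relevance scores with the binary bag-of-words indicator, so that d_i is word i's relevance when the word occurs in the document and 0 otherwise.

```python
import numpy as np

def document_summary_vector(x_binary: np.ndarray, relevance: np.ndarray) -> np.ndarray:
    """Assumed reading: d_i = relevance_i if word i occurs in the
    document, else 0. The excerpt cuts off before the paper's actual
    equation, so this component-wise form is a guess, not a quote."""
    return x_binary * relevance

x_tilde = np.array([1, 0, 1, 1, 0])           # binary bag-of-words indicator
r = np.array([0.7, 0.2, -0.1, 0.05, 0.9])     # per-word relevance scores (toy)
print(document_summary_vector(x_tilde, r))    # [ 0.7   0.   -0.1   0.05  0. ]
```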
null
false
null
Give me a list of 7 of the most popular Pokémon
Pikachu, Charizard, Mew, Mewtwo, Squirtle, Jigglypuff, Snorlax
null
false
null
The company was founded in 2006. The aim of the company is to develop Synterra's media projects. In September 2009, Synterra Group of Companies (GC) got operational control of Synterra Media when GC bought 40.01% of its stock. GC "Synterra" owns 60% of "Synterra Media", which gives GC the right to make decisions on further development of this project. In 2010, Megafon, under the leadership of Sergey Soldatenkov (Russian: Сергей Владимирович Солдатенков), gained control of Synterra Media with a 60% stake and spun off to Rostelecom several assets including RTKomm.ru, Telecom-Center (renamed Synterra-Media) and Globus-Telecom. On 22 October 2020, PJSC Rostelecom (Russian: ПАО "Ростелеком") agreed to buy a 100% stake in ZAO Synterra Media (Russian: ЗАО "Синтерра Медиа") from MTT (Russian: АО "Межрегиональный Транзиттелеком" («МТТ»)) for 1.5 billion rubles, which allows MTT to focus on the integration of real-time telecom services or Unified Communications and Telecom API. Synterra delivers and distributes media content from event venues to television broadcasting companies such as Channel One, VGTRK, NTV, Russia Today, TV Center, and Match TV. Both directly and through its subsidiaries, MTT had consolidated its 100% stake in Synterra Media in 2013. As of 2014, MTT, which has 25 million subscribers, competes with Inoventika (Russian: ООО "Иновентика Технолоджес") for cloud computing.
Extract the dates of change of ownership and who the new owner was
2009 - Synterra Group of Companies; 2010 - Megafon; October 22, 2020 - PJSC Rostelecom
null
false
null
Oji Hiroi (広井 王子, Hiroi Ōji, born 8 February 1954), real name Teruhisa Hiroi (廣井 照久, Hiroi Teruhisa), is an author and video game developer. He co-authored Samurai Crusader with Ryoichi Ikegami. He also created the Far East of Eden and Sakura Wars role-playing video game franchises, and wrote the Sakura Wars manga.
Who is Oji Hiroi?
Oji Hiroi (born 8 February 1954), real name Teruhisa Hiroi, is an author and video game developer. He co-authored Samurai Crusader with Ryoichi Ikegami. He also created the Far East of Eden and Sakura Wars role-playing video game franchises, and wrote the Sakura Wars manga.
null
false
140
Antisocial behavior. Antisocial behavior online comes in many forms, including harassment BIBREF30, cyberbullying BIBREF31, and general aggression BIBREF32. Prior work has sought to understand different aspects of such behavior, including its effect on the communities where it happens BIBREF33, BIBREF34, the actors involved BIBREF35, BIBREF36, BIBREF37, BIBREF38 and connections to the outside world BIBREF39. Post-hoc classification of conversations. There is a rich body of prior work on classifying the outcome of a conversation after it has concluded, or classifying conversational events after they happened. Many examples exist, but some more closely related to our present work include identifying the winner of a debate BIBREF40, BIBREF41, BIBREF42, identifying successful negotiations BIBREF21, BIBREF43, as well as detecting whether deception BIBREF44, BIBREF45, BIBREF46 or disagreement BIBREF47, BIBREF48, BIBREF49, BIBREF50, BIBREF51 has occurred. Our goal is different because we wish to forecast conversational events before they happen and while the conversation is still ongoing (potentially allowing for interventions). Note that some post-hoc tasks can also be re-framed as forecasting tasks (assuming the existence of necessary labels); for instance, predicting whether an ongoing conversation will eventually spark disagreement BIBREF18, rather than detecting already-existing disagreement. Conversational forecasting. As described in Section SECREF1, prior work on forecasting conversational outcomes and events has largely relied on hand-crafted features to capture aspects of conversational dynamics. Example feature sets include statistical measures based on similarity between utterances BIBREF16, sentiment imbalance BIBREF20, flow of ideas BIBREF20, increase in hostility BIBREF8, reply rate BIBREF11 and graph representations of conversations BIBREF52, BIBREF17. By contrast, we aim to automatically learn neural representations of conversational dynamics through pre-training. Such hand-crafted features are typically extracted from fixed-length windows of the conversation, leaving unaddressed the problem of unknown horizon. While some work has trained multiple models for different window-lengths BIBREF8, BIBREF18, they consider these models to be independent and, as such, do not address the issue of aggregating them into a single forecast (i.e., deciding at what point to make a prediction). We implement a simple sliding windows solution as a baseline (Section SECREF5). Pre-training for NLP. The use of pre-training for natural language tasks has been growing in popularity after recent breakthroughs demonstrating improved performance on a wide array of benchmark tasks BIBREF53, BIBREF54. Existing work has generally used a language modeling objective as the pre-training objective; examples include next-word prediction BIBREF55, sentence autoencoding, BIBREF56, and machine translation BIBREF57. BERT BIBREF58 introduces a variation on this in which the goal is to predict the next sentence in a document given the current sentence. Our pre-training objective is similar in spirit, but operates at a conversation level, rather than a document level. We hence view our objective as conversational modeling rather than (only) language modeling. Furthermore, while BERT's sentence prediction objective is framed as a multiple-choice task, our objective is framed as a generative task. Conversational forecasting. 
As described in Section 1, prior work on forecasting conversational outcomes and events has largely relied on handcrafted features to capture aspects of conversational dynamics.
What kind of features does the prior work rely on to capture aspects of conversational dynamics in forecasting conversational outcomes and events?
Handcrafted features.
null
false
null
Why do people like Red Light Therapy?
Red Light Therapy is good for anti-aging as well as other benefits. It has been shown to increase the amount of collagen your body produces, thus helping to decrease fine lines and wrinkles. Additionally, it can help reduce the appearance of scars and stretch marks. It's easy, painless and can even be done at home in as little as 3-10 minutes each day.
null
false
null
How would you build a winning basketball team?
Basketball is a team sport that requires both individual skill and players who complement one another. A winning team builds this through specialization: smaller players who handle the ball and can make shots farther from the basket, and taller players who can rebound and defend the basket against the opposing team's attempts to get easier shots. The team must also learn to communicate, build trust, and play together unselfishly. Putting together too many alpha players who are good at the same things is much worse than fielding a less talented team that plays as a unit and complements each other.
null
false
null
Classify the below aircraft models based on the companies which created them. Next-Generation 737, A220, 737 MAX, A350, A330, 747-8, 767, A320, 777, 777X, A380, 787
Boeing- Next-Generation 737, 737 MAX, 747-8, 767, 777, 777X, 787 Airbus- A220, A320, A330, A350, A380
null
false
null
What is the difference between pork roll and Taylor ham?
Pork roll and Taylor ham are both a processed breakfast meat popular primarily in New Jersey. The primary difference between the two is the geographic location of the person talking about it. They are the same meat, but people from northern New Jersey call it Taylor ham, while people from southern or central New Jersey call it pork roll. Either way, it is the same breakfast meat.
null
false
null
The Leaning Tower of Pisa (Italian: torre pendente di Pisa), or simply, the Tower of Pisa (torre di Pisa [ˈtorre di ˈpiːza; ˈpiːsa]), is the campanile, or freestanding bell tower, of Pisa Cathedral. It is known for its nearly four-degree lean, the result of an unstable foundation. The tower is one of three structures in the Pisa's Cathedral Square (Piazza del Duomo), which includes the cathedral and Pisa Baptistry. The height of the tower is 55.86 metres (183 feet 3 inches) from the ground on the low side and 56.67 m (185 ft 11 in) on the high side. The width of the walls at the base is 2.44 m (8 ft 0 in). Its weight is estimated at 14,500 tonnes (16,000 short tons). The tower has 296 or 294 steps; the seventh floor has two fewer steps on the north-facing staircase.
Given the following paragraph, how tall is the Leaning Tower of Pisa?
The height of the tower is 55.86 metres (183 feet 3 inches) from the ground on the low side and 56.67 m (185 ft 11 in) on the high side.
null
false
null
What is social media and how is it used?
Social media are technologies that let people share things like photos, documents, information, ideas, politics, and interests with other people. They are a way for people to make new friends who have similar interests, and they are also used in dating and finding love. Social media has even been used to discover talent all over the world, including in developing nations where only a camera and an internet connection are needed to showcase a talent globally. Social media is also used for buying and selling.
null
false
null
Nuclear fission of heavy elements was discovered on Monday 19 December 1938 in Berlin, by German chemist Otto Hahn and his assistant Fritz Strassmann in cooperation with Austrian-Swedish physicist Lise Meitner. Hahn understood that a "burst" of the atomic nuclei had occurred. Meitner explained it theoretically in January 1939 along with her nephew Otto Robert Frisch. Frisch named the process by analogy with biological fission of living cells. In their second publication on nuclear fission in February of 1939, Hahn and Strassmann predicted the existence and liberation of additional neutrons during the fission process, opening up the possibility of a nuclear chain reaction.
Based on this paragraph, tell me who was responsible for the discovery of nuclear fission.
Otto Hahn and his assistant Fritz Strassmann achieved nuclear fission in the laboratory, but they could not fully explain what had happened. It was Lise Meitner and her nephew Otto Robert Frisch who interpreted the laboratory results and established that nuclear fission had occurred.
null
false
null
Why are plants green?
While not every plant is green, many are green because of the presence of chlorophyll which typically occurs in the leaves of plants. Chlorophyll is a green pigment which assists plants in the process of photosynthesis, where the plant converts light into energy. Chlorophyll absorbs all light except green, which is reflected to the observer.
1909.04181
false
null
For the purpose of our experiments, we use data released by the APDA shared task organizers. The dataset is divided into train and test by the organizers. The training set is distributed with labels for the three tasks of age, dialect, and gender. Following the standard shared task setup, the test set is distributed without labels and participants were expected to submit their predictions on test. The shared task predictions are expected by organizers at the level of users. The distribution has 100 tweets for each user, and so each tweet is distributed with a corresponding user id. As such, in total, the distributed training data has 2,250 users, contributing a total of 225,000 tweets. The official task test set contains 72,000 tweets posted by 720 users. For our experiments, we split the training data released by organizers into a 90% TRAIN set (202,500 tweets from 2,025 users) and a 10% DEV set (22,500 tweets from 225 users). The age task labels come from the tagset {under-25, between-25 and 34, above-35}. For dialects, the data are labeled with 15 classes, from the set {Algeria, Egypt, Iraq, Kuwait, Lebanon-Syria, Lybia, Morocco, Oman, Palestine-Jordan, Qatar, Saudi Arabia, Sudan, Tunisia, UAE, Yemen}. The gender task involves binary labels from the set {male, female}. For the purpose of our experiments, we use data released by the APDA shared task organizers. The dataset is divided into train and test by organizers. The training set is distributed with labels for the three tasks of age, dialect, and gender.
What are the three datasets used in the paper?
The data released for the APDA shared task contains 3 datasets.
null
false
null
Eclectochromis lobochilus is a species of fish in the family Cichlidae. It is found in Malawi, Mozambique, and Tanzania. It is endemic to Lake Malawi. This species is found along rocky shores and sometimes in the intermediate habitat between sandy and rocky areas. It occurs at depths of 5–15 metres (16–49 ft). It feeds on invertebrates, mainly adult and larval insects, that hide among aquatic vegetation or algae. The breeding males defend a cave among the rocks. The females mouthbrood the eggs and fry, guarding them after they have been released. This species is known as "Haplochromis Hertae" in the aquarium trade.
Where is the Eclectochromis lobochilus fish generally found?
The Eclectochromis lobochilus fish is generally found in Malawi, Mozambique, and Tanzania.
null
false
null
On which national flag is there an eagle and a snake?
Mexico
null
false
65
Typical word embeddings models or representations, such as word2vec BIBREF0, GloVe BIBREF1, or FastText BIBREF2, are fast to train and have been pre-trained for a number of different languages. They do not capture the context, though, so each word is always given the same vector, regardless of its context or meaning. This is especially problematic for polysemous words. ELMo (Embeddings from Language Models) embedding BIBREF3 is one of the state-of-the-art pretrained transfer learning models, that remedies the problem and introduces a contextual component. ELMo model's architecture consists of three neural network layers. The output of the model after each layer gives one set of embeddings, altogether three sets. The first layer is a CNN layer, which operates on a character level. It is context independent, so each word always gets the same embedding, regardless of its context. It is followed by two biLM layers. A biLM layer consists of two concatenated LSTMs. In the first LSTM, we try to predict the following word, based on the given past words, where each word is represented by the embeddings from the CNN layer. In the second LSTM, we try to predict the preceding word, based on the given following words. It is equivalent to the first LSTM, just reading the text in reverse. In NLP tasks, any set of these embeddings may be used; however, a weighted average is usually used. The weights of the average are learned during the training of the model for the specific task. Additionally, an entire ELMo model can be fine-tuned on a specific end task. Although ELMo is trained on character level and is able to handle out-of-vocabulary words, a vocabulary file containing most common tokens is used for efficiency during training and embedding generation. The original ELMo model was trained on a one billion word large English corpus, with a given vocabulary file of about 800,000 words. Later, ELMo models for other languages were trained as well, but limited to larger languages with many resources, like German and Japanese. Standard word embeddings models or representations, such as word2vec (Mikolov et al., 2013b), GloVe (Pennington et al., 2014), or fastText (Bojanowski et al., 2017), are fast to train and have been pre-trained for a number of different languages. They do not capture the context, though, so each word is always given the same vector, regardless of its context or meaning. This is especially problematic for polysemous words. ELMo (Embeddings from Language Models) embedding (Peters et al., 2018) is one of the state-of-the-art pretrained transfer learning models, that remedies the problem and introduces a contextual component.
What is the key improvement of ELMo over baseline approaches?
Standard word embeddings models or representations, such as word2vec (Mikolov et al., 2013b), GloVe (Pennington et al., 2014), or fastText (Bojanowski et al., 2017), do not capture the context, so each word is always given the same vector, regardless of its context or meaning. This is especially problematic for polysemous words. ELMo (Embeddings from Language Models) embedding (Peters et al., 2018) is one of the state-of-the-art pretrained transfer learning models that remedies the problem and introduces a contextual component.
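A minimal sketch of the task-weighted average of the three ELMo layer outputs described above, following the standard ELMo recipe of softmax-normalized layer weights plus a global scale; tensor shapes and the 1024-dimensional layer size are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ELMoCombiner(nn.Module):
    """Combine the three ELMo layer outputs with task-learned weights."""
    def __init__(self):
        super().__init__()
        self.layer_weights = nn.Parameter(torch.zeros(3))  # one weight per layer
        self.gamma = nn.Parameter(torch.ones(1))           # global scale

    def forward(self, layers):
        # layers: list of 3 tensors, each (batch, seq_len, dim)
        w = torch.softmax(self.layer_weights, dim=0)
        stacked = torch.stack(layers, dim=0)               # (3, B, T, D)
        return self.gamma * (w.view(3, 1, 1, 1) * stacked).sum(dim=0)

combiner = ELMoCombiner()
layers = [torch.randn(2, 5, 1024) for _ in range(3)]       # toy layer outputs
emb = combiner(layers)                                     # (2, 5, 1024)
```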