paper_id | yes_no | paper_index | evidence | question | answer
|---|---|---|---|---|---|
null | false
| null |
Characters
The original Street Fighter II features a roster of eight playable World Warriors: Ryu and Ken, the main protagonists from Street Fighter, plus six international newcomers. In the single-player tournament, the player fights the other seven main fighters, then a group of four CPU-only final opponents known as the Grand Masters, which includes Sagat from the original Street Fighter.
Playable characters:
Ryu, a Japanese martial artist who seeks neither fame nor the title of champion, only to hone his Shotokan Karate skills with the inner power of Chi. He dedicates his life to perfecting his own potential and abandons everything else: he has no family, few friends, and his only close bond is with Ken. He is the winner of the previous tournament, but is not convinced that he is the greatest fighter in the world and enters this tournament in search of fresh competition.
E. Honda, a sumo wrestler from Japan. He aims to improve the negative reputation of sumo wrestling by proving its competitors to be legitimate athletes.
Blanka, a beast-like mutant from Brazil who was raised in the jungle. He enters the tournament to uncover the origins of his forgotten past.
Guile, a former United States Air Force special forces operative seeking to defeat M. Bison, who killed his best friend Charlie.
Ken, Ryu's best friend, greatest rival, and former training partner, from the United States. Having grown complacent about his fighting potential after spending too much time with his fiancée, Ken had his fighting spirit rekindled by Ryu's personal challenge, which persuaded him to enter the World Warrior tournament.
Chun-Li, a Chinese martial artist who works as an Interpol officer. Much like Guile, she does not enter the World Warrior tournament for personal glory, beyond proving that she can defeat any man who challenges her. She has long tracked the movements of the smuggling operation known as Shadaloo, and her trail has now led her to the tournament, where she seeks to avenge her deceased father by holding the crime syndicate's Grand Master leader responsible.
Zangief, a professional wrestler and sambo fighter from the Soviet Union. He aims to prove "Soviet Strength" is the strongest form of strength, particularly by defeating American opponents with his bare hands.
Dhalsim, a fire-breathing yoga master from India. Even though he is a pacifist, he uses the money earned from fighting in order to lift people out of poverty.
CPU-exclusive characters, in the order of appearance:
Balrog, an American boxer with a similar appearance to Mike Tyson. Called M. Bison in Japan. Once one of the world's greatest heavyweight boxers, he began working for Shadaloo for easy money.
Vega, a Spanish bullfighter who wields a claw and uses a unique style of ninjutsu. Called Balrog in Japan. He is vain and wishes to eliminate ugly people from the world.
Sagat, a Muay Thai kickboxer from Thailand and former World Warrior champion from the original Street Fighter. He was once known as the King of Street Fighters, until a narrow defeat in his own tournament at the hands of Ryu's shoryuken (rising dragon punch), which left a deep gash across his chest, demoted him to the King of Muay Thai. He has felt disgraced ever since and will do anything for a grudge match with Ryu to win his title back, even joining forces with Shadaloo.
M. Bison, the leader of the criminal organization Shadaloo, who uses a mysterious power known as Psycho Power, and the final opponent of the game. Called Vega in Japan.
|
Extract the names of the playable characters in Street Fighter II from the passage, and list them out separated by a comma.
|
Ryu, E. Honda, Blanka, Guile, Ken, Chun-Li, Zangief, Dhalsim
|
null | false
| 377
|
Reasoning is part of our everyday routine: we hear Natural Language (NL) sentences, we participate in dialogues, we read books or legal documents. Successfully understanding, participating, or communicating with others in these situations presupposes some form of reasoning: about individual sentences, whole paragraphs of legal documents, small or bigger pieces of dialogue, and so on. The human reasoning performed in these different situations cannot be explained by a single rigid system of reasoning, plainly because reasoning is performed in different ways in each of them. Consider the following example:
Three representatives are needed.
If a human reasoner with expert knowledge were to interpret the above utterance in a legal context, they would most probably judge that a situation where more than three representatives are provided is compatible with the semantics of the utterance. By contrast, if the same reasoner were to interpret the utterance as part of a casual, everyday conversation, then three would most likely be interpreted as exactly three, making the same situation incompatible with the utterance. In this paper, we focus on precise inference: inference performed either by experts or by ordinary people who take some time to consider which inferences do or do not follow from a set of premises. The problem one encounters in testing systems for this type of reasoning is that no large-scale datasets of this sort of inference exist. The commonly used datasets for systems suited to this type of precise reasoning, i.e. logical systems based on some model of formal semantics for Natural Language (NL), are the FraCaS test suite and the Recognizing Textual Entailment (RTE) datasets.
The problem one encounters in testing systems for this type of reasoning is that no large-scale datasets of this sort of inference exist.
|
What problem do the authors encounter when testing systems?
|
There are no large-scale datasets of this sort of inference.
|
null | false
| null |
What were five of the best English-language situation comedies from the 1990s?
|
1. Seinfeld
2. Friends
3. Cheers
4. Frasier
5. The Simpsons
|
|
null | false
| null |
Who was the first woman to do a spacewalk?
|
Svetlana Savitskaya in the year 1984
|
|
null | false
| null |
World War II or the Second World War, often abbreviated as WWII or WW2, was a global conflict that lasted from 1939 to 1945. The vast majority of the world's countries, including all of the great powers, fought as part of two opposing military alliances: the Allies and the Axis. Many participants threw their economic, industrial, and scientific capabilities behind this total war, blurring the distinction between civilian and military resources. Aircraft played a major role, enabling the strategic bombing of population centres and the delivery of the only two nuclear weapons ever used in war.
World War II was by far the deadliest conflict in history; it resulted in an estimated 70 to 85 million fatalities, mostly among civilians. Tens of millions died due to genocides (including the Holocaust), starvation, massacres, and disease. In the wake of the Axis defeat, Germany and Japan were occupied, and war crimes tribunals were conducted against German and Japanese leaders.
The causes of World War II are debated, but contributing factors included the Second Italo-Ethiopian War, Spanish Civil War, Second Sino-Japanese War, Soviet–Japanese border conflicts, the rise of fascism in Europe, and European tensions in the aftermath of World War I. World War II is generally considered to have begun on 1 September 1939, when Nazi Germany, under Adolf Hitler, invaded Poland. The United Kingdom and France subsequently declared war on Germany on 3 September. Under the Molotov–Ribbentrop Pact of August 1939, Germany and the Soviet Union had partitioned Poland and marked out their "spheres of influence" across Finland, Estonia, Latvia, Lithuania and Romania. From late 1939 to early 1941, in a series of campaigns and treaties, Germany conquered or controlled much of continental Europe, in a military alliance with Italy, Japan and other countries called the Axis. Following the onset of campaigns in North Africa and East Africa, and the fall of France in mid-1940, the war continued primarily between the European Axis powers and the British Empire, with war in the Balkans, the aerial Battle of Britain, the Blitz of the United Kingdom, and the Battle of the Atlantic. On 22 June 1941, Germany led the European Axis powers in an invasion of the Soviet Union, opening the Eastern Front, the largest land theatre of war in history.
Japan, which aimed to dominate Asia and the Pacific, was at war with the Republic of China by 1937. In December 1941, Japan attacked American and British territories with near-simultaneous offensives against Southeast Asia and the Central Pacific, including an attack on the US fleet at Pearl Harbor which resulted in the United States and United Kingdom declaring war against Japan. The European Axis powers declared war on the United States in solidarity. Japan soon captured much of the western Pacific, but its advances were halted in 1942 after losing the critical Battle of Midway; later, Germany and Italy were defeated in North Africa and at Stalingrad in the Soviet Union. Key setbacks in 1943—including a series of German defeats on the Eastern Front, the Allied invasions of Sicily and the Italian mainland, and Allied offensives in the Pacific—cost the Axis powers their initiative and forced them into strategic retreat on all fronts. In 1944, the Western Allies invaded German-occupied France, while the Soviet Union regained its territorial losses and pushed Germany and its allies back. During 1944 and 1945, Japan suffered reversals in mainland Asia, while the Allies crippled the Japanese Navy and captured key western Pacific islands.
The war in Europe concluded with the liberation of German-occupied territories and the invasion of Germany by the Western Allies and the Soviet Union, culminating in the Fall of Berlin to Soviet troops, Hitler's suicide, and the German unconditional surrender on 8 May 1945. Following the refusal of Japan to surrender on the terms of the Potsdam Declaration (issued 26 July 1945), the United States dropped the first atomic bombs on the Japanese cities of Hiroshima on 6 August and Nagasaki on 9 August. Faced with an imminent invasion of the Japanese archipelago, the possibility of additional atomic bombings, and the Soviet Union's declared entry into the war against Japan on the eve of invading Manchuria, Japan announced on 10 August its intention to surrender, signing a surrender document on 2 September 1945.
World War II changed the political alignment and social structure of the globe and set the foundation for the international order of the world's nations during the Cold War and into present day. The United Nations was established to foster international co-operation and prevent future conflicts, with the victorious great powers—China, France, the Soviet Union, the United Kingdom, and the United States—becoming the permanent members of its Security Council. The Soviet Union and the United States emerged as rival superpowers, setting the stage for the nearly half-century-long Cold War. In the wake of European devastation, the influence of its great powers waned, triggering the decolonisation of Africa and Asia. Most countries whose industries had been damaged moved towards economic recovery and expansion. Political and economic integration, especially in Europe, began as an effort to forestall future hostilities, end pre-war enmities, and forge a sense of common identity.
|
Based on the reference text, what were the main countries and alliances of World War II?
|
The main countries involved in World War II were Germany, Japan, Italy, the United Kingdom, the United States, the Soviet Union, and China, and they were divided into two opposing alliances called the Allies and the Axis. The Axis alliance was composed of Germany, Japan, and Italy, while the Allies alliance was composed of the United Kingdom, the United States, the Soviet Union, and China.
|
null | false
| null |
Zenthoefer Furs were an amateur U.S. soccer club which played in St. Louis, Missouri during the late 1940s and early 1950s.
They had played as a junior squad named Schumachers in the 1947-48 season where they fell short of national honors when they lost in the final round of the National Junior Cup to Lighthouse Boys of Philadelphia. The Zenthoefers turned senior the next season. Under the guidance of manager George E. McGann they made their way to the 1949 National Amateur Cup final losing to SC Elizabeth by a 6-1 score. They also lost to the Chicago Polish-Americans 5-1 in the 1949 National Challenge Cup quarterfinals. In 1951, they won the St. Louis Major League by ten points over St. Louis Simpkins-Ford. In 1952, they entered the American Soccer League of St. Louis.
|
What is Zenthoefer Furs?
|
Zenthoefer Furs were an amateur U.S. soccer club which played in St. Louis, Missouri during the late 1940s and early 1950s.
|
null | false
| null |
Who do you think is the most powerful Mandalorian in Star Wars history?
|
It would have to be Te Ani'la Mand'alor, also known as Mandalore the Ultimate.
|
|
null | false
| null |
Stiftung Lesen (Reading Foundation) is a non-profit organization based in Mainz, Germany under the patronage of Joachim Gauck. Stiftung Lesen acts as a stakeholder for reading promotion on a national and international level. It contributes to reading promotion and reading education through programmes, scientific research, and political recommendations. To create a basis for the development of reading skills, and to enhance the overall level of literacy, Stiftung Lesen established adequate and accessible programmes for every member of society – regardless of financial, cultural or social background. In 2006 Stiftung Lesen established its own Institute for Research on Reading and Media in order to align its projects with the latest scientific findings. The institute is tasked with conducting scientific research on the use of media, reading and reading socialisation, the supervision and evaluation of the foundation’s projects and the organisation of conferences on topics concerning literacy and media research (including reading in the digital age), publishing some of its reports for the German government.
|
What is Stiftung Lesen?
|
Stiftung Lesen or Reading Foundation is a non-profit organization based in Germany that acts as a stakeholder for reading promotion on a national and international level
|
null | false
| 162
|
Experimental results are shown in Table TABREF28. For the baselines, the best scores reported in each original paper are shown. MPAD reaches the best performance on 7 out of 10 datasets, and is a close second elsewhere. Moreover, the 7 datasets on which MPAD ranks first differ widely in training set size, number of categories, and prediction task (topic, sentiment, subjectivity), which indicates that MPAD can perform well in different settings.
MPAD vs. hierarchical variants. On 9 datasets out of 10, one or more of the hierarchical variants outperform the vanilla MPAD architecture, highlighting the benefit of explicitly modeling the hierarchical nature of documents.
However, on Subjectivity, standard MPAD outperforms all hierarchical variants. On TREC, it reaches the same accuracy. We hypothesize that in some cases, using a different graph to separately encode each sentence might be worse than using one single graph to directly encode the document. Indeed, in the single document graph, some words that never appear in the same sentence can be connected through common neighbors, as was explained in subsection SECREF7. So, this way, some notion of cross-sentence context is captured while learning representations of words, bigrams, etc. at each MP iteration. This creates better informed representations, resulting in a better document embedding. With the hierarchical variants, on the other hand, each sentence vector is produced in isolation, without any contextual information about the other sentences in the document. Therefore, the final sentence embeddings might be of lower quality, and as a group might also contain redundant/repeated information. When the sentence vectors are finally combined into a document representation, it is too late to take context into account.
To understand the impact of some hyperparameters on performance, we conducted additional experiments on the Reuters, Polarity, and IMDB datasets, with the non-hierarchical version of MPAD. Results are shown in Table TABREF29.
Number of MP iterations. First, we varied the number of message passing iterations from 1 to 4. We can clearly see in Table TABREF29 that having more iterations improves performance. We attribute this to the fact that we are reading out at each iteration from 1 to $T$ (see Eq. DISPLAY_FORM18), which enables the final graph representation to encode a mixture of low-level and high-level features. Indeed, in initial experiments involving readout at $t$=$T$ only, setting $T\ge 2$ always decreased performance, despite the GRU-based updates (Eq. DISPLAY_FORM14). These results are consistent with those of BIBREF53 and BIBREF9, both of whom read out only at $t$=$T$. We hypothesize that node features at $T\ge 2$ are too diffuse to be entirely relied upon during readout. More precisely, initially at $t$=0, node representations capture information about words; at $t$=1, about their 1-hop neighborhood (bigrams); at $t$=2, about compositions of bigrams; and so on. Thus, node features quickly become general and diffuse. In such cases, also considering the lower-level, more precise features of the earlier iterations when reading out may be necessary.
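A minimal sketch of this per-iteration readout, with toy data and simplified stand-ins (`mp_step`, `readout`) for the paper's actual message-passing and attentional readout operations:

```python
import numpy as np

def mp_step(A, H):
    # One message-passing iteration: aggregate neighbor features;
    # tanh stands in for the paper's MLP + GRU update.
    return np.tanh(A @ H)

def readout(H):
    # Mean over nodes, a placeholder for MPAD's attentional readout.
    return H.mean(axis=0)

rng = np.random.default_rng(0)
A = rng.random((5, 5))               # toy adjacency matrix (5 nodes)
H = rng.standard_normal((5, 8))      # toy node features

summaries = []
for t in range(4):                   # T = 4 iterations
    H = mp_step(A, H)
    summaries.append(readout(H))     # read out at every t = 1..T

graph_repr = np.concatenate(summaries)  # mixes low- and high-level features
```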
Undirected edges. On Reuters, using an undirected graph leads to better performance, while on Polarity and IMDB, it is the opposite. This can be explained by the fact that Reuters is a topic classification task, for which the presence or absence of some patterns is important, but not necessarily the order in which they appear, while Polarity and IMDB are sentiment analysis tasks. To capture sentiment, modeling word order is crucial, e.g., in detecting negation.
No master node. Removing the master node deteriorates performance across all datasets, clearly showing the value of having such a node. We hypothesize that since the special document node is connected to all other nodes, it is able to encode during message passing a summary of the document.
No renormalization. Here, we do not use the renormalization trick of BIBREF9 during MP (see subsection SECREF10). That is, Eq. DISPLAY_FORM11 becomes $\mathbf {M}^{t+1} = \textsc {MLP}^{t+1}\big (\mathbf {A}\mathbf {H}^{t}\big )$. In other words, instead of computing a weighted average of the incoming neighbors' feature vectors, we compute a weighted sum of them. Unlike the mean, which captures distributions, the sum captures structural information BIBREF14. As shown in Table TABREF29, using the sum instead of the mean decreases performance everywhere, suggesting that in our application, capturing the distribution of neighbor representations is more important than capturing their structure. We hypothesize that this is the case because statistical word co-occurrence networks tend to have similar structural properties, regardless of the topic, polarity, sentiment, etc. of the corresponding documents.
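To make the sum-versus-mean distinction concrete, here is a toy illustration (the adjacency and feature matrices are made up, and simple row normalization only approximates the renormalization of BIBREF9):

```python
import numpy as np

# A: weighted adjacency of a word co-occurrence graph; H: node features.
A = np.array([[0., 2., 1.],
              [2., 0., 0.],
              [1., 0., 0.]])
H = np.arange(12, dtype=float).reshape(3, 4)

# Without renormalization: weighted *sum* of neighbor features
# (captures structural information).
M_sum = A @ H

# With (row) renormalization: weighted *average* of neighbor features
# (captures the distribution of neighbor representations).
M_mean = np.diag(1.0 / A.sum(axis=1)) @ A @ H

# Either message matrix would then go through the MLP of Eq. DISPLAY_FORM11.
```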
Neighbors-only. In this experiment, we replaced the GRU combine function (see Eq. DISPLAY_FORM14) with the identity function. That is, we simply have $\mathbf {H}^{t+1}$=$\mathbf {M}^{t+1}$. Since $\mathbf {A}$ has zero diagonal, by doing so, we completely ignore the previous feature of the node itself when updating its representation. That is, the update is based entirely on its neighbors. Except on Reuters (almost no change), performance always suffers, stressing the need to take into account the root node during updates, not only its neighborhood.
MPAD vs. hierarchical variants. On 9 datasets out of 10, one or more of the hierarchical variants outperform the vanilla MPAD architecture, highlighting the benefit of explicitly modeling the hierarchical nature of documents.
|
Do variants of MPAD behave better than the original MPAD?
|
On 9 datasets out of 10, one or more of the hierarchical variants outperform the vanilla MPAD architecture.
|
null | false
| null |
The following are non-definitive lists of the actors with the highest career-film grosses. There are box-office gross revenue lists, which are not adjusted for inflation, as well as lists for the number of box-office ticket sales. Because of the long-term effects of inflation on movie theater ticket prices, a list unadjusted for inflation gives far more weight to later actors. Therefore, the gross revenue lists are largely meaningless for comparing acting careers widely separated in time, as many actors from earlier eras will never appear on a modern unadjusted list, despite their films' achieving higher commercial success when adjusted for price increases.
|
List the 10 highest-grossing actors in the world.
|
The following are non-definitive lists of the actors with the highest career-film grosses. There are box-office gross revenue lists, which are not adjusted for inflation, as well as lists for the number of box-office ticket sales. Because of the long-term effects of inflation on movie theater ticket prices, a list unadjusted for inflation gives far more weight to later actors. Therefore, the gross revenue lists are largely meaningless for comparing acting careers widely separated in time, as many actors from earlier eras will never appear on a modern unadjusted list, despite their films' achieving higher commercial success when adjusted for price increases.
Lead roles
This list only includes leading roles and lead ensemble roles, including voice acting. The list is updated as of 12 January 2023.
| Rank | Actor | Worldwide total | Top-grossing film | Films | Average |
|---|---|---|---|---|---|
| 1 | Scarlett Johansson | $14,519,031,650 | Avengers: Endgame ($2,794,731,755) | 33 | $439,970,656 |
| 2 | Robert Downey Jr. | $14,393,065,668 | Avengers: Endgame ($2,794,731,755) | 43 | $334,722,457 |
| 3 | Samuel L. Jackson | $14,376,505,937 | Avengers: Endgame ($2,794,731,755) | 64 | $224,632,905 |
| 4 | Zoe Saldaña | $12,805,874,340 | Avatar ($2,899,384,102) | 30 | $426,862,478 |
| 5 | Chris Hemsworth | $11,958,567,765 | Avengers: Endgame ($2,794,731,755) | 25 | $478,342,711 |
| 6 | Chris Pratt | $11,742,796,476 | Avengers: Endgame ($2,794,731,755) | 23 | $510,556,369 |
| 7 | Tom Cruise | $11,547,725,996 | Top Gun: Maverick ($1,487,575,965) | 43 | $268,551,767 |
| 8 | Chris Evans | $11,277,890,989 | Avengers: Endgame ($2,794,731,755) | 29 | $388,892,793 |
| 9 | Tom Hanks | $10,704,310,520 | Toy Story 4 ($1,073,064,540) | 55 | $194,623,828 |
| 10 | Vin Diesel | $10,537,096,327 | Avengers: Infinity War ($2,048,359,754) | 27 | $390,262,827 |
|
null | false
| 37
|
With this work we present a resource that will be extremely useful for building language systems in an endangered, under-represented language, Mapudungun. We benchmark NLP systems for speech synthesis, speech recognition, and machine translation, providing strong baseline results. The size of our resource (142 hours, more than 260k total sentences) has the potential to alleviate many of the issues faced when building language technologies for Mapudungun, in contrast to other indigenous languages of the Americas that unfortunately remain low-resource.
Our resource could also be used for ethnographic and anthropological research into the Mapuche culture, and has the potential to contribute to intercultural bilingual education, preservation activities and further general advancement of the Mapudungun-speaking community.
With this work we present a resource that will be extremely useful for building language systems in an endangered, under-represented language, Mapudungun.
|
Is the resource presented by the authors useful for building language systems in an endangered, under-represented language, Mapudungun?
|
Yes, it is.
|
null | false
| null |
Why are In-N-Out cheeseburgers so good?
|
In-N-Out cheeseburgers are seen as good for a few reasons. The meat is never frozen, and tight quality controls at the company's processing plants ensure a consistent, high-quality patty. The employee incentive structure keeps workers at the fast-food chain longer, with managers frequently in charge of both the employees and the actual cooking of the burger patties; competing fast-food restaurants do the opposite, assigning burger cooking to entry-level employees and customer management to higher levels. In-N-Out also maintains a small menu focused on a few key items, rather than an expansive or seasonally themed one, making it easier to maintain high standards of quality. Lastly, slow growth and enthusiastic fans create a positive atmosphere that leads people to think more highly of In-N-Out than they otherwise might.
|
|
null | false
| null |
Where is the best place to view the Northern Lights in Europe?
|
In Europe, most people travel to Iceland, Greenland, Norway, Finland or Sweden in the hope of seeing the Northern Lights. The Northern Lights tend to be visible only between the months of September and April each year. Clear weather with little cloud coverage is required to get the best view.
|
|
1809.10644
| false
| null |
We tokenize the data using Spacy BIBREF10. We use 300-dimensional GloVe Common Crawl embeddings (840B tokens) BIBREF11 and fine-tune them for the task. We experimented extensively with pre-processing variants, and our results showed better performance without lemmatization and lower-casing (see supplement for details). We pad each input to 50 words. We train using RMSprop with a learning rate of 0.001 and a batch size of 512. We add dropout with a drop rate of 0.1 in the final layer to reduce overfitting BIBREF12, and chose the dropout rate, batch size, and input length empirically through random hyperparameter search.
We use 300-dimensional GloVe Common Crawl embeddings (840B tokens) BIBREF11 and fine-tune them for the task.
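A hedged sketch of this training setup in Keras; the encoder body, vocabulary size, and output head are assumptions, while the embedding dimension, padding length, optimizer, learning rate, batch size, and dropout rate come from the passage:

```python
import tensorflow as tf

vocab_size, embed_dim, max_len = 20000, 300, 50           # vocab size is an assumption
glove_matrix = tf.random.normal((vocab_size, embed_dim))  # stand-in for GloVe 840B vectors

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(
        vocab_size, embed_dim,
        embeddings_initializer=tf.keras.initializers.Constant(glove_matrix),
        trainable=True),                       # fine-tune the embeddings
    tf.keras.layers.GlobalAveragePooling1D(),  # placeholder encoder body
    tf.keras.layers.Dropout(0.1),              # dropout in the final layer
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate=0.001),
              loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(inputs_padded_to_50, labels, batch_size=512, ...)
```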
|
What data are the embeddings trained on?
|
The answers are shown as follows:
* Common Crawl
|
null | false
| null |
The Atlas moth is one of the largest lepidopterans, with a wingspan measuring up to 24 cm (9.4 in) and a wing surface area of about 160 cm2 (≈25 in2). It is only surpassed in wingspan by the white witch (Thysania agrippina) and Attacus caesar, and in wing surface area by the Hercules moth (Coscinocera hercules). As in most silk moths, females are noticeably larger and heavier than males, while males have broader antennae.
|
How big is the Atlas Moth?
|
The Atlas Moth has a wingspan measuring up to 24 cm.
|
null | false
| null |
Should I take extra fuel when traveling to Alaska?
|
There are limited roads in Alaska, as well as northern Canada. While there are some gas stops along the way, options are limited and you might run out of gas in the wilderness. As Alaska is an extremely large state, being stranded in the middle of nowhere is potentially very harmful. It’s a very good idea to take extra fuel when taking a road trip to or through Alaska. Make sure you buy a proper gas can that will not dissolve when in contact with gasoline or diesel.
|
|
1903.03467
| false
| null |
To demonstrate our method in a black-box setting, we focus our experiments on Google's machine translation system (GMT), accessed through its Cloud API. To test the method on real-world sentences, we consider a monologue from the stand-up comedy show “Sarah Silverman: A Speck of Dust”. The monologue consists of 1,244 English sentences, all by a female speaker addressing a plural, gender-neutral audience. Our parallel corpus consists of the 1,244 English sentences from the transcript and their corresponding Hebrew translations based on the Hebrew subtitles. We translate the monologue one sentence at a time through the Google Cloud API. Eyeballing the results suggests that most of the translations use the incorrect, but default, masculine and singular forms for the speaker and the audience, respectively. We expect that by adding the relevant condition of “female speaking to an audience” we will get better translations, affecting both the gender of the speaker and the number of the audience.
To demonstrate our method in a black-box setting, we focus our experiments on Google's machine translation system (GMT), accessed through its Cloud API.
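A minimal sketch of the sentence-by-sentence translation loop through the Google Cloud Translation API (the v2 client is shown; the example sentence is a stand-in, and credentials are assumed to be configured):

```python
from google.cloud import translate_v2 as translate

client = translate.Client()
sentences = ["I grew up doing stand-up."]  # stand-ins for the 1,244 transcript sentences

translations = [
    client.translate(s, source_language="en", target_language="he")["translatedText"]
    for s in sentences
]
```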
|
Which neural machine translation system is used?
|
The answers are shown as follows:
* Google's machine translation system (GMT)
|
null | false
| null |
5 best places to visit near Bangalore
|
Ooty
Sakleshpur
Coorg
Udupi
Pondicherry
|
|
null | false
| null |
Why do we like aquariums?
|
Aquariums, like zoos, provide fun-filled leisure and entertainment activities that double up as centres for animal rescue, care and rehabilitation, scientific research, education and conservation.
|
|
null | false
| null |
Fernando António Nogueira Pessoa (Portuguese: [fɨɾˈnɐ̃du pɨˈsoɐ]; 13 June 1888 – 30 November 1935) was a Portuguese poet, writer, literary critic, translator, publisher, and philosopher, described as one of the most significant literary figures of the 20th century and one of the greatest poets in the Portuguese language. He also wrote in and translated from English and French.
Pessoa was a prolific writer, and not only under his own name, for he created approximately seventy-five others, of which three stand out, Alberto Caeiro, Álvaro de Campos, and Ricardo Reis. He did not call them pseudonyms because he felt that this did not capture their true independent intellectual life and instead called them heteronyms. These imaginary figures sometimes held unpopular or extreme views.
Pessoa's earliest heteronym, at the age of six, was Chevalier de Pas. Other childhood heteronyms included Dr. Pancrácio and David Merrick, followed by Charles Robert Anon, a young Englishman who became Pessoa's alter ego. In 1905/7, when Pessoa was a student at the University of Lisbon, Alexander Search took the place of Anon. The main reason for this was that, although Search was English, he was born in Lisbon, as was his author. But Search represents a transition heteronym that Pessoa used while searching to adapt to the Portuguese cultural reality. After the republican revolution, in 1910, and consequent patriotic atmosphere, Pessoa created another alter ego, Álvaro de Campos, supposedly a Portuguese naval and mechanical engineer, who was born in Tavira, hometown of Pessoa's ancestors, and graduated in Glasgow. Translator and literary critic Richard Zenith notes that Pessoa eventually established at least seventy-two heteronyms. According to Pessoa himself, there are three main heteronyms: Alberto Caeiro, Álvaro de Campos and Ricardo Reis. Pessoa's heteronyms differ from pen names, because they possess distinct biographies, temperaments, philosophies, appearances, writing styles and even signatures. Thus, heteronyms often disagree on various topics, argue and discuss with each other about literature, aesthetics, philosophy, etc.
|
Who were Fernando Pessoa's main heteronyms?
|
Fernando Pessoa created approximately seventy-five heteronyms, of which three stand out, Alberto Caeiro, Álvaro de Campos, and Ricardo Reis.
Pessoa's heteronyms differ from pen names, because they possess distinct biographies, temperaments, philosophies, appearances, writing styles and even signatures.
|
null | false
| null |
Who is the most beautiful lady in the world now?
|
Chiara Ferragni, for her intelligence and elegant look.
|
|
null | false
| 26
|
Most existing recursive neural network (RvNN) architectures utilize only the structure of parse trees, ignoring syntactic tags which are provided as by-products of parsing. We present a novel RvNN architecture that can provide dynamic compositionality by considering comprehensive syntactic information derived from both the structure and linguistic tags. Specifically, we introduce a structure-aware tag representation constructed by a separate tag-level tree-LSTM. With this, we can control the composition function of the existing word-level tree-LSTM by augmenting the representation as a supplementary input to the gate functions of the tree-LSTM. In extensive experiments, we show that models built upon the proposed architecture obtain superior or competitive performance on several sentence-level tasks such as sentiment analysis and natural language inference when compared against previous tree-structured models and other sophisticated neural models.
We present a novel RvNN architecture that can provide dynamic compositionality by considering comprehensive syntactic information derived from both the structure and linguistic tags.
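A toy sketch of the core idea: a gate whose value is steered by a structure-aware tag representation alongside the usual hidden-state input (all names and weights here are illustrative, not the paper's architecture):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

d = 4
rng = np.random.default_rng(0)
W_h = rng.standard_normal((d, d))    # weight on the word-level hidden states
W_e = rng.standard_normal((d, d))    # weight on the tag representation
b = np.zeros(d)

h_children = rng.standard_normal(d)  # summed child hidden states (word-level tree-LSTM)
e_tag = rng.standard_normal(d)       # output of the separate tag-level tree-LSTM

# The tag representation enters the gate as a supplementary input,
# letting syntactic tags modulate the composition function.
gate = sigmoid(W_h @ h_children + W_e @ e_tag + b)
```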
|
What method does the author propose?
|
They present a novel RvNN architecture that can provide dynamic compositionality by considering comprehensive syntactic information derived from both the structure and linguistic tags.
|
null | false
| null |
Jesse LaFollette (1781 NJ-1843 Ind) was the grandfather of Robert Marion La Follette, Sr., William La Follette, and Harvey Marion LaFollette. His family lived next to the Knob Creek Farm, Kentucky owned by Thomas Lincoln during Abraham Lincoln's boyhood years. LaFollette is shown in one of the limestone panels at the Lincoln Boyhood National Memorial in Lincoln City, Indiana.
|
Extract the names of Jesse LaFollette's grandchildren from the provided passage
|
Robert Marion La Follette, Sr., William La Follette, and Harvey Marion LaFollette
|
null | false
| null |
Shawboro is an unincorporated community located in Currituck County, North Carolina, United States.
The area takes its name from Henry Marchmore Shaw.
The post office is located close to the intersection of Shawboro Road (NC 34) and North Indian Town Road (SR 1147). The ZIP Code is 27973. The Shawboro area is covered by three different telephone exchanges: Elizabeth City, Moyock, and Shiloh.
Shawboro is served by Crawford Township fire and rescue services as well as Shawboro Elementary School, a K–5 school with approximately 500 students.
Culong, the Shaw House, and Twin Houses are listed on the National Register of Historic Places.
|
Where is the post office of Shawboro located?
|
The post office of Shawboro, North Carolina is located close to the intersection of NC 34 Shawboro road and North Indian Town Road (SR 1147).
|
null | false
| null |
Carol Claire Aguilar Banawa-Crisostomo (born March 4, 1981), better known in the Philippines as Carol Banawa, is a US-based Filipina singer, actress, and nurse. She is a Star Magic Batch 4 alumna.
|
Who is Carol Banawa?
|
Carol Claire Aguilar Banawa-Crisostomo (born March 4, 1981), better known in the Philippines as Carol Banawa, is a US-based Filipina singer, actress, and nurse. She is a Star Magic Batch 4 alumna.
Raised in Batangas, she has two siblings, Alexander and Cherry. Banawa started singing when she was a little girl while her family lived in Saudi Arabia. Her family then moved back to the Philippines. There, she was invited to join the new roster of ABS-CBN stars through the children-oriented show Ang TV.
She has performed at the Madison Square Garden in New York City at the 2003 New York Music Festival.
In 2010, she made her comeback in the music industry and starred in the hit Philippine TV series Mara Clara, for which she sang the opening theme song. She also covered "Till My Heartaches End", the theme song of the 2010 film of the same name.
In 2016, her song "Bakit 'Di Totohanin" was featured in one of the scenes of the U.S. drama series The Vampire Diaries.
|
null | false
| null |
In the series A Song of Ice and Fire, who is the founder of House Towers?
|
Walton Towers
|
|
null | false
| null |
Bánh xèo (Vietnamese: [ɓǎjŋ̟ sɛ̂w], lit. 'sizzling pancake') is a crispy, stuffed rice pancake popular in Vietnam. The name refers to the sound (from xèo – 'sizzling') a thin layer of rice batter makes when it is poured into the hot skillet. It is a savoury fried pancake made of rice flour, water, and turmeric powder, and can also be called a Vietnamese crêpe. Common stuffings include pork, prawns, diced green onion, mung bean, and bean sprouts. Bánh xèo is served with sides: leafy greens such as lettuce or perilla leaves; herbs used as flavor enhancers, such as mint leaves and Thai basil; cucumbers; and pickled vegetables, usually carrots and radishes. Its dipping sauce is nước chấm (sweet and sour garlic fish sauce). The elements of each side and the sauce add to the fresh taste of fried bánh xèo.
|
Would you like to try some Bánh Xèo, a popular Vietnamese dish?
|
Bánh xèo is my favorite quick food. It's light, but full of flavor.
|
null | false
| null |
What is the meaning of the Information Age?
|
The Information Age (also known as the Computer Age, Digital Age, Silicon Age, or New Media Age) is a historical period that began in the mid-20th century. It is characterized by a rapid shift from traditional industries, as established during the Industrial Revolution, to an economy centered on information technology. The onset of the Information Age has been linked to the development of the transistor in 1947, the optical amplifier in 1957, and Unix time, which began on January 1, 1970. These technological advances have had a significant impact on the way information is processed and transmitted.
According to the United Nations Public Administration Network, the Information Age was formed by capitalizing on computer microminiaturization advances, which led to modernized information systems and internet communications as the driving force of social evolution.
|
|
null | false
| 133
|
We proposed to use visual renderings of documents to capture implicit document quality indicators, such as font choices, images, and visual layout, which are not captured in textual content. We applied neural network models to capture visual features given visual renderings of documents. Experimental results show that we achieve a 2.9% higher accuracy than state-of-the-art approaches based on textual features over Wikipedia, and performance competitive with or surpassing state-of-the-art approaches over arXiv. We further proposed a joint model, combining textual and visual representations, to predict the quality of a document. Experimental results show that our joint model outperforms the visual-only model in all cases, and the text-only model on Wikipedia and two subsets of arXiv. These results underline the feasibility of assessing document quality via visual features, and the complementarity of visual and textual document representations for quality assessment.
We proposed to use visual renderings of documents to capture implicit document quality indicators, such as font choices, images, and visual layout, which are not captured in textual content.
|
What do the authors find about the function of the image classifier?
|
The authors find that implicit quality indicators, such as images and visual layout, can be captured by an image classifier, at a level comparable to a text classifier.
|
null | false
| 486
|
Affordance-aware navigation is a major challenge in solving complex and long-horizon indoor tasks such as ALFRED with both navigation and object interactions. Specifically, given each object of interest in the scene, the agent is required not only to find and approach it but also to end up at a pose (x, y, r, h) that is feasible for subsequent interactions with the object. For instance, to open a fridge, the robot should approach the fridge closely enough (so the door is within reach), look at it (so that the fridge is in the field of view), and leave enough room to open the door. To solve a long-horizon task involving multiple navigation and object interaction subgoals, it is natural to use an explicit semantic map, either 2D or 3D, of the environment (similar to Neural Active SLAM Chaplot et al. (2020a)), together with model-based planning (e.g. as in). This line of work tends to generalize better than models that directly learn mappings from human instructions to navigation & interaction actions (e.g., E.T.). With perfect knowledge of the environment, it is possible to achieve (nearly) perfect performance. In practice, however, the semantic map acquired at inference time is usually far from ideal, primarily due to Incompleteness (missing information due to insufficient exploration of the scene) and Inaccuracy (erroneous object location prediction on the map, especially for small objects).
To improve exploration performance, we propose a multimodal module that, at each step, predicts an exploration action a ∈ {MoveAhead, RotateLeft, RotateRight} by taking visual observations & actions in the past, step-by-step language instructions, and the explored-area map, which indicates where the agent has visited. We show that, compared to existing model-based approaches on ALFRED (e.g., which applies random exploration), our use of low-level language instructions leads to more efficient exploration. The proposed exploration module operates at the subgoal level and only predicts exploration actions (in contrast to E.T., which directly predicts actions for the entire task). The extra modality (the explored area) facilitates exploration by providing the agent with explicit spatial information. We illustrate the exploration module in Figure, elaborate its details in Section 4.3, and empirically demonstrate its advantages in Section 5.
To improve exploration performance, we propose a multimodal module that, at each step, predicts an exploration action a ∈ {MoveAhead, RotateLeft, RotateRight} by taking visual observations & actions in the past, step-by-step language instructions, and the explored-area map, which indicates where the agent has visited.
|
Where is the explored area coming from (my understanding is that the environment does not provide the explored area)?
|
The “explored area” comes from the agent’s prediction of its exploration actions.
Given such exploration actions, we can directly compute where the agent has visited.
|
null | false
| null |
The United States is a highly developed country, and its economy accounts for approximately a quarter of global GDP and is the world's largest by GDP at market exchange rates. By value, the United States is the world's largest importer and second-largest exporter. Although it accounts for just over 4.2% of the world's total population, the U.S. holds over 30% of the total wealth in the world, the largest share held by any country.
|
What country has the highest GDP?
|
The United States has the world's largest GDP at market exchange rates, accounting for approximately 25% of global GDP. It is the largest importer and second-largest exporter. Despite having only 4.2% of the world's population, the United States holds over 30% of the total wealth in the world, the largest share held by any country.
|
null | false
| null |
What is the movie Forrest Gump about?
|
Forrest Gump is a 1994 movie directed by Robert Zemeckis and starring Tom Hanks in the titular role. The movie follows the life of Forrest, a simple and naive boy, as he grows up in Greenbow, Alabama. Forrest unwittingly witnesses or becomes involved in numerous notable events in American history: he fights in the Vietnam War, becomes a global ping-pong champion, and founds one of the largest shrimp-boating companies in the nation, among other things. Forrest Gump is a sweet and nostalgic movie that won numerous awards and remains highly popular.
|
|
null | false
| null |
Australia Pacific Airports Corporation Limited (APAC) is an unlisted company and owner of two Australian airports: Melbourne Airport and Launceston Airport. Each airport is operated by an airport lessee company, in which APAC has a controlling interest: Australia Pacific Airports (Melbourne) Pty Ltd and Australia Pacific Airports (Launceston) Pty Ltd respectively.
|
Please give me the list of airports that are owned by Australia Pacific Airports Corporation.
|
Australia Pacific Airports Corporation Limited owns two airports in Australia called Launceston Airport and Melbourne Airport.
|
1906.03538
| false
| null |
There are two aspects that we defer to future work. First, the systems designed here assume that the inputs are valid claim sentences. To make use of such systems, one needs to develop mechanisms to recognize valid argumentative structures. In addition, we ignore trustworthiness and credibility issues, important research issues that are addressed in other works.
There are two aspects that we defer to future work. First, the systems designed here assume that the inputs are valid claim sentences. To make use of such systems, one needs to develop mechanisms to recognize valid argumentative structures. In addition, we ignore trustworthiness and credibility issues, important research issues that are addressed in other works.
|
What challenges are highlighted?
|
The answers are shown as follows:
* one needs to develop mechanisms to recognize valid argumentative structures
* we ignore trustworthiness and credibility issues
|
null | false
| 8
|
The Penn Discourse Treebank (PDTB): We experimented with PDTB v2.0 BIBREF7, which is the largest annotated corpus, containing 36k discourse relations in 2,159 Wall Street Journal (WSJ) articles. In this work, we focus on the top-level discourse relation senses, which consist of four major semantic classes: Comparison (Comp), Contingency (Cont), Expansion (Exp) and Temporal (Temp). We followed the same PDTB section partition BIBREF12 as previous work and used sections 2-20 as the training set, sections 21-22 as the test set, and sections 0-1 as the development set. Table 1 presents the data distributions we collected from PDTB.
Preprocessing: The PDTB dataset documents its annotations as a list of discourse relations, with each relation associated with its two discourse units. To recover the paragraph context for a discourse relation, we match the contents of its two annotated discourse units against all paragraphs in the corresponding raw WSJ article. Once all the matching was completed, each paragraph was split into a sequence of discourse units, with one discourse relation (implicit or explicit) between each two adjacent discourse units. Following this method, we obtained 14,309 paragraphs in total, each containing 3.2 discourse units on average. Table 2 shows the distribution of paragraphs based on the number of discourse units in a paragraph.
We followed the same PDTB section partition (Rutherford and Xue, 2015) as previous work and used sections 2-20 as the training set, sections 21-22 as the test set, and sections 0-1 as the development set.
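An illustrative sketch of this context-recovery step, attaching each annotated relation to the raw-article paragraph that contains both of its discourse units (the field names and substring-matching criterion are hypothetical simplifications of the authors' procedure):

```python
def attach_relations(relations, paragraphs):
    by_paragraph = [[] for _ in paragraphs]
    for rel in relations:
        for i, para in enumerate(paragraphs):
            if rel["arg1"] in para and rel["arg2"] in para:
                by_paragraph[i].append(rel)  # relation belongs to this paragraph
                break
    return by_paragraph  # paragraph index -> its sequence of discourse relations

paras = ["The market fell. Traders panicked.",
         "Bonds rallied because stocks sank."]
rels = [{"arg1": "The market fell", "arg2": "Traders panicked", "sense": "Temporal"},
        {"arg1": "Bonds rallied", "arg2": "stocks sank", "sense": "Contingency"}]
print(attach_relations(rels, paras))
```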
|
How did the team divide the PDTB section partition in their experiment?
|
The team used sections 2-20 as the training set, sections 21-22 as the test set, and sections 0-1 as the development set.
|
null | false
| null |
Short for "simultaneous", simul climbing is when two climbers move at the same time. The pseudo-lead climber places gear that the pseudo-follower collects. When the leader runs low on gear they construct a belay station where the follower can join them to exchange gear. The stronger climber is often the pseudo-follower since a fall by the follower would pull the leader from below towards the last piece of gear—a potentially devastating fall for the leader. In contrast, a fall from the leader would pull the follower from above, resulting in a less serious fall. Most speed ascents involve some form of simul climbing but may also include sections of standard free climbing and the use of placed gear for advancement (i.e. partial aid or pulling on gear).
|
Is simul climbing more dangerous for the leader or the follower?
|
Simul climbing is more dangerous for the leader, since a fall by the follower would pull the leader from below towards the last piece of gear.
|
null | false
| null |
Tell me which of the following are science fiction TV shows: Lost, The X-Files, The Mandalorian, Millennium, Game of Thrones.
|
All except Game of Thrones are classified as science fiction. Game of Thrones is considered high fantasy.
|
|
1904.08386
| false
| null |
As the book is too small to train any models, we leverage recent advances in large-scale language model-based representations BIBREF5, BIBREF6 to compute a representation of each city. We feed these representations into a clustering algorithm that produces exactly eleven clusters of five cities each and evaluate them against both Calvino's original labels and crowdsourced human judgments. While the overall correlation with Calvino's labels is low, both computers and humans can reliably identify some thematic groups associated with concrete objects.
We feed these representations into a clustering algorithm that produces exactly eleven clusters of five cities each and evaluate them against both Calvino's original labels and crowdsourced human judgments. While the overall correlation with Calvino's labels is low, both computers and humans can reliably identify some thematic groups associated with concrete objects.
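A minimal sketch of the pipeline (embeddings in, cluster labels out): plain k-means is shown, which does not enforce the exactly-five-cities-per-cluster constraint described above, and the embedding matrix is a random stand-in for the language-model representations:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
city_vectors = rng.standard_normal((55, 768))  # stand-in for 55 LM city representations

labels = KMeans(n_clusters=11, n_init=10, random_state=0).fit_predict(city_vectors)
# A constrained clustering step would additionally balance clusters to size 5.
```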
|
How do they obtain human judgements?
|
Using crowdsourcing
|
null | false
| null |
What kind of cancer did Agent Dana Scully get diagnosed with in "The X-Files"?
|
She developed an inoperable nasopharyngeal tumor.
|
|
null | false
| 163
|
In this paper, we propose a novel mechanism for enriching the feature vector for the task of sarcasm detection with cognitive features extracted from the eye-movement patterns of human readers. Sarcasm detection has been a challenging research problem, and its importance for NLP applications such as review summarization, dialog systems and sentiment analysis is well recognized. Sarcasm can often be traced to incongruity that becomes apparent as the full sentence unfolds. This presence of incongruity, implicit or explicit, affects the way readers' eyes move through the text. We observe a difference in the behaviour of the eye while reading sarcastic and non-sarcastic sentences. Motivated by this observation, we augment traditional linguistic and stylistic features for sarcasm detection with the cognitive features obtained from readers' eye-movement data. We perform statistical classification using the enhanced feature set so obtained. The augmented cognitive features improve sarcasm detection by 3.7% (in terms of F-score) over the performance of the best reported system.
In this paper, we propose a novel mechanism for enriching the feature vector, for the task of sarcasm detection, with cognitive features extracted from eye-movement patterns of human readers.
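A toy sketch of the feature-augmentation idea: concatenating textual features with gaze-derived cognitive features before statistical classification (all data and dimensions are made up, and the SVM is an assumed classifier choice):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_text = rng.random((100, 20))   # linguistic and stylistic features
X_gaze = rng.random((100, 8))    # eye-movement (fixation/saccade) features
y = rng.integers(0, 2, 100)      # sarcastic vs. non-sarcastic labels

X = np.hstack([X_text, X_gaze])  # the enhanced feature set
clf = SVC().fit(X, y)
```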
|
What does this paper focus on?
|
They propose a novel mechanism for enriching the feature vector, for the task of sarcasm detection, with cognitive features extracted from eye-movement patterns of human readers.
|
null | false
| 371
|
The recent increase in digitally available language corpora made it possible to extend the traditional linguistic tools to a vast amount of often user-generated texts. Understanding how these corpora differ from traditional texts is crucial in developing computational methods for web search, information retrieval or machine translation BIBREF0. The amount of these texts enables the analysis of language on a previously unprecedented scale BIBREF1, BIBREF2, BIBREF3, including the dynamics, geography and time scale of language change BIBREF4, BIBREF5, social media cursing habits BIBREF6, BIBREF7, BIBREF8 or dialectal variations BIBREF9.
From online user activity and content, it is often possible to infer different socio-economic variables on various aggregation scales. Ranging from showing correlation between the main language features on Twitter and several demographic variables BIBREF10, through predicting heart-disease rates of an area based on its language use BIBREF11 or relating unemployment to social media content and activity BIBREF12, BIBREF13, BIBREF14, to forecasting stock market moves from search semantics BIBREF15, many studies have attempted to connect online media language and metadata to real-world outcomes. Various studies have analyzed spatial variation in the text of OSN messages and its applicability to several different questions, including user localization based on the content of their posts BIBREF16, BIBREF17, empirical analysis of the geographic diffusion of novel words, phrases, trends and topics of interest BIBREF18, BIBREF19, and measuring public mood BIBREF20.
While many of the above-cited studies exploit the fact that language use or social media activity varies in space, it is hard to capture the impact of the geographic environment on the words or concepts used. There is a growing literature on how the sheer size of a settlement influences the number of patents, GDP or the total road length, driven by universal laws BIBREF21. These observations led to the establishment of the theory of urban scaling BIBREF22, BIBREF23, BIBREF24, BIBREF25, BIBREF26, BIBREF27, BIBREF28, BIBREF29, BIBREF30, where scaling laws with city size have been observed in various measures such as economic productivity BIBREF31, human interactions BIBREF32, urban economic diversification BIBREF33, election data BIBREF34, building heights BIBREF35, crime concentration BIBREF36, BIBREF37 or touristic attractiveness BIBREF38.
In our paper, we aim to capture the effect of city size on language use via individual urban scaling laws of words. By examining the so-called scaling exponents, we are able to connect geographical size effects to systematic variations in word use frequencies. We show that the sensitivity of words to population size is also reflected in their meaning. We also investigate how social media language and city size affect the parameters of Zipf's law BIBREF39, and how the exponent of Zipf's law differs from the literature value BIBREF39, BIBREF40. We also show that the number of new words needed in longer texts (Heaps' law BIBREF1) exhibits a power-law form on Twitter, indicating a decelerating growth of distinct tokens with city size.
We show that the sensitivity of words to population size is also reflected in their meaning.
|
Is the sensitivity of words to population size also reflected in their meaning?
|
Yes, it is.
|
null | false
| null |
Give me a list of the best movie review websites.
|
Here is a list of some popular movie review websites.
1. Guardian
2. Rotten Tomatoes
3. Yahoo Movies
4. IMDB
5. Metacritic
|
|
null | false
| null |
How should I invest my money?
|
There are many different ways to invest your money. You can leave it in a savings account or a high-yield savings account, or put it into the stock market, real estate, bonds, notes, gold, silver, cryptocurrency and many other assets. Each of these asset classes comes with a different level of risk. A diverse portfolio has advantages over a non-diverse portfolio.
|
|
null | false
| 85
|
In recent years, emotion detection in text has become more popular due to its vast potential applications in marketing, political science, psychology, human-computer interaction, artificial intelligence, etc. In this work, we argue that current methods, which are based on conventional machine learning models, cannot grasp the intricacy of emotional language because they ignore the sequential nature of the text and the context. These methods, therefore, are not sufficient to create an applicable and generalizable emotion detection methodology. Understanding these limitations, we present a new network based on a bidirectional GRU model to show that capturing more meaningful information from text can significantly improve the performance of these models. The results show significant improvement, with an average increase of 26.8 points in F-measure on our test data and 38.6 points on a totally new dataset.
In this work, we argue that current methods, which are based on conventional machine learning models, cannot grasp the intricacy of emotional language because they ignore the sequential nature of the text and the context.
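A hedged sketch of a bidirectional GRU classifier of the kind described (vocabulary size, layer dimensions, and the number of emotion classes are assumptions, not the paper's configuration):

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(20000, 128),
    tf.keras.layers.Bidirectional(tf.keras.layers.GRU(64)),  # context from both directions
    tf.keras.layers.Dense(6, activation="softmax"),          # e.g., six emotion classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```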
|
In the author's opinion, what are the shortcomings of text sentiment detection in recent years?
|
They argue that current methods, which are based on conventional machine learning models, cannot grasp the intricacy of emotional language because they ignore the sequential nature of the text and the context.
|
null | false
| null |
What is the best hand in poker?
|
The highest value hand in poker is the Royal Flush. It consists of a run of five cards all in the same suit. The hand needs to include an Ace, King, Queen, Jack and a Ten, all in the same suit.
|
|
null | false
| 391
|
To evaluate the proposed DFN model, we conducted experiments on the RACE dataset. Statistics of the training/dev/test data are provided in Table 2. In this section, we present the experimental results, with a detailed analysis of the dynamic selection of strategies and multi-step reasoning. An ablation study is also provided to demonstrate the effectiveness of dynamic fusion and reasoning in DFN.
In this section, we present the experimental results, with a detailed analysis of the dynamic selection of strategies and multi-step reasoning. An ablation study is also provided to demonstrate the effectiveness of dynamic fusion and reasoning in DFN.
|
Do they conduct an ablation study in the experiment?
|
Yes.
|
null | false
| null |
Tourism to the Andaman and Nicobar Islands is increasing due to the popularity of beaches and adventure sports like snorkelling and sea-walking. Plans to develop various islands under NITI Aayog (National Institution for Transforming India) are also in progress. Luxury resorts with participation from the Government are planned to be set up on Avis Island, Smith Island and Long Island.
In Port Blair, the main places to visit are the Cellular Jail, Mahatma Gandhi Marine National Park, Andaman Water sports complex, Chatham Saw Mill, Mini Zoo, Corbyn's cove, Chidiya Tapu, Wandoor Beach, Forest Museum, Anthropological Museum, Fisheries Museum, Naval Museum (Samudrika), Ross Island and North Bay Island. Viper Island, which was earlier open to visitors, is now kept closed by the administration. Other places include Havelock island famous for Radhanagar Beach, Neil Island for scuba diving/snorkelling/sea walking, Cinque Island, Saddle peak, Mt Harriet, and Mud Volcano. Diglipur, located in North Andaman, also became popular in 2018, and many tourists have started visiting North Andaman as well. The southern group (Nicobar islands) is mostly inaccessible to tourists.
Indian tourists do not require a permit to visit the Andaman Islands, but if they wish to visit any tribal areas they need a special permit from the Deputy Commissioner in Port Blair. Permits are required for foreign nationals. For foreign nationals arriving by air, these are granted upon arrival at Port Blair.
|
What are the main places to visit in Port Blair?
|
In Port Blair, the main places to visit are the Cellular Jail, Mahatma Gandhi Marine National Park, Andaman Water sports complex, Chatham Saw Mill, Mini Zoo, Corbyn's cove, Chidiya Tapu, Wandoor Beach, Forest Museum, Anthropological Museum, Fisheries Museum, Naval Museum (Samudrika), Ross Island and North Bay Island.
|
null | false
| null |
Parks and Recreation (also known as Parks and Rec) is an American political satire mockumentary sitcom television series created by Greg Daniels and Michael Schur. The series aired on NBC from April 9, 2009, to February 24, 2015, for 125 episodes, over seven seasons. A special reunion episode aired on April 30, 2020. The series stars Amy Poehler as Leslie Knope, a perky, mid-level bureaucrat in the Parks Department of Pawnee, a fictional town in Indiana. The ensemble and supporting cast features Rashida Jones as Ann Perkins, Paul Schneider as Mark Brendanawicz, Aziz Ansari as Tom Haverford, Nick Offerman as Ron Swanson, Aubrey Plaza as April Ludgate, Chris Pratt as Andy Dwyer, Adam Scott as Ben Wyatt, Rob Lowe as Chris Traeger, Jim O'Heir as Garry "Jerry" Gergich, Retta as Donna Meagle, and Billy Eichner as Craig Middlebrooks.
|
Extract the main actors and actresses who played in the show Parks and Recreation.
|
Amy Poehler, Rashida Jones, Paul Schneider, Aziz Ansari, Nick Offerman, Aubrey Plaza, Chris Pratt, Adam Scott, Rob Lowe, Jim O'Heir, Retta, Billy Eichner
|
null | false
| null |
Q: What should I do with hand sanitizer on the FDA's list of hand sanitizers that consumers should not use?
|
A: If you have one of the products on the FDA's list of hand sanitizers consumers should not use, immediately stop using it and dispose of the product, ideally in a hazardous waste container. Do not pour these products down the drain or flush them. Contact your local waste management and recycling center for more information on hazardous waste disposal. Learn how to search the FDA’s hand sanitizer list, including a description of how to search for manufacturers and distributors on the label.
|
|
null | false
| null |
Choose the word which is different from the rest: Raid, Attack, Assault, Defence, Ambush
|
All except Defence are forms of attack.
|
|
null | false
| 30
|
Recently, neural machine translation (NMT) has gained popularity in the field of machine translation. The conventional encoder-decoder NMT proposed by Cho2014 uses two recurrent neural networks (RNN): one is an encoder, which encodes a source sequence into a fixed-length vector, and the other is a decoder, which decodes the vector into a target sequence. A newly proposed attention-based NMT by DzmitryBahdana2014 can predict output words using the weights of each hidden state of the encoder by the attention mechanism, improving the adequacy of translation.
Even with the success of attention-based models, a number of open questions remain in NMT. Tu2016 argued that two of the common problems are over-translation (some words are repeatedly translated unnecessarily) and under-translation (some words are mistakenly left untranslated). This is due to the fact that NMT cannot completely convert the information from the source sentence to the target sentence. Mi2016a and Feng2016 pointed out that NMT lacks the notion of the coverage vector used in phrase-based statistical machine translation (PBSMT), so unless otherwise specified, there is no way to prevent missing translations.
Another problem in NMT is an objective function. NMT is optimized by cross-entropy; therefore, it does not directly maximize the translation accuracy. Shen2016 pointed out that optimization by cross-entropy is not appropriate and proposed a method of optimization based on a translation accuracy score, such as expected BLEU, which led to improvement of translation accuracy. However, BLEU is an evaluation metric based on n-gram precision; therefore, repetition of some words may be present in the translation even though the BLEU score is improved.
To address the problem of repeating and missing words in the translation, tu2016neural introduce an encoder-decoder-reconstructor framework that optimizes NMT by back-translation from the output sentences into the original source sentences. In their method, after training the forward translation in a manner similar to the conventional attention-based NMT, they train a back-translation model from the hidden state of the decoder into the source sequence by a new decoder to enforce agreement between source and target sentences.
In order to confirm the language independence of the framework, we experiment on two parallel corpora for English-Japanese and Japanese-English translation tasks using the encoder-decoder-reconstructor. Our experiments show that their method offers significant improvement in BLEU scores and alleviates the problem of repeating and missing words in the translation on the English-Japanese translation task, though the difference is not significant on the Japanese-English translation task.
In addition, we jointly train a model of forward translation and back-translation without pre-training, and then evaluate this model. As a result, the encoder-decoder-reconstructor cannot be trained well without pre-training, which shows that the forward translation model must first be trained in a manner similar to the conventional attention-based NMT as pre-training.
The main contributions of this paper are as follows:
Even with the success of attention-based models, a number of open questions remain in NMT. Tu et al. (2016) argued that two of the common problems are over-translation (some words are repeatedly translated unnecessarily) and under-translation (some words are mistakenly left untranslated).
|
What are the unanswered questions about NMT?
|
Over-translation and under-translation.
|
null | false
| null |
How do I build my own computer?
|
To build a computer you will need to set the scope as to what you would like your computer to do. If you will be working on animation, you may want to budget for a stronger graphics processing unit (GPU). Similarly, if you are building a gaming computer, a powerful GPU and a solid state drive (SSD) will improve your gaming performance. Additionally, you will need to check the compatibility of all items that you purchase to ensure that the components will actually work with each other. While this may be daunting, it also allows you much creative freedom over your computer needs. To find more information on how to build a computer and on what materials to purchase, you should check out available resources online such as the YouTube channel "Linus Tech Tips" for build inspiration.
|
|
null | false
| null |
Why is the World Cup the best sporting competition?
|
The World Cup is a global event that happens only once every four years. It always has a host nation (or nations), which is the only team that automatically qualifies for the tournament. Various qualifying tournaments lead up to the event. From there, FIFA ranks the qualifiers in different tiers and draws them via lottery into groups.
The qualification process is open to all teams across the globe. Regional tournaments hosted by confederations such as CONCACAF, UEFA, CAF and CONMEBOL lead up to the World Cup tournament.
Once the groups are drawn, the four teams in each group (from different regions) play against each other, and the two teams with the highest total points advance to the elimination rounds, which include a round of 16, quarterfinals, semifinals, and the final. From the elimination rounds onward the matches are single elimination: the loser is out of the tournament and the winner advances to the next round.
|
|
null | false
| null |
What are the words of House Merryweather?
|
"Behold Our Beauty"
|
|
null | false
| null |
Detective Hieronymus "Harry" Bosch is a fictional character created by American author Michael Connelly. Bosch debuted as the lead character in the 1992 novel The Black Echo, the first in a best-selling police procedural series now numbering 24 novels.
The novels are more or less coincident in timeframe with the year in which they were published. Harry, as he is commonly known by his associates, is a veteran police homicide detective with the Los Angeles Police Department. He was named after the 15th-century Dutch artist Hieronymus Bosch.
Titus Welliver portrayed the title character from 2015 to 2021 in Bosch, a television series adapted from the novels, and from 2022 in its spin-off series Bosch: Legacy.
|
Who played Harry Bosch in the eponymous TV series?
|
Titus Welliver played Detective Harry Bosch in "Bosch" and in "Bosch: Legacy".
|
null | false
| null |
What was the first full-length animated movie that Disney released?
|
"Snow White and the Seven Dwarfs" was the first full-length animated Disney movie, released in 1937.
|
|
1607.06025
| false
| null |
FLOAT SELECTED: Table 4: The performance of classifiers trained on the original and generated datasets. The classifiers were tested on original test set. The generated datasets were generated by the models from Table 3. The generated datasets were filtered with threshold 0.6.
FLOAT SELECTED: Table 4: The performance of classifiers trained on the original and generated datasets. The classifiers were tested on original test set. The generated datasets were generated by the models from Table 3. The generated datasets were filtered with threshold 0.6.
|
What is the highest accuracy score achieved?
|
82.0%
|
null | false
| null |
What is Gingham fabric?
|
Gingham, also called Vichy check, is a medium-weight balanced plain-woven fabric typically with striped, check or plaid duotone patterns, in bright colour and in white made from dyed cotton or cotton-blend yarns. It is made of carded, medium or fine yarns.
|
|
null | false
| null |
Battlefield 3's multiplayer matches see players take on one of four roles: Assault, Support, Engineer and Recon. The Assault class focuses on assault rifles and healing teammates. The Support class focuses on light machine guns and supplying ammunition. The Engineer class focuses on supporting and destroying vehicles. The Recon class focuses on sniping and spotting enemies. The mechanics of the weapons have been changed to utilize the new engine: compatible weapons may have bipods attached which can then be deployed when in the prone position or near suitable scenery, and provide a significant boost to accuracy and recoil reduction. Suppressive fire from weapons blurs the vision and reduces the accuracy of those under fire, as well as health regeneration. The Recon class can put a radio beacon anywhere on the map and all squad members will be able to spawn on the location of the beacon.
|
Given this text about Battlefield 3 on Wikipedia, what were the classes you could play as in the game?
|
In Battlefield 3 you could play as one of four classes: Support, Assault, Engineer, and Recon. They all had different equipment and capabilities.
|
null | false
| null |
Name the movies made by Quentin Tarantino.
|
As of March 2023, Quentin Tarantino has made these movies:
1. Reservoir Dogs
2. Pulp Fiction
3. Jackie Brown
4. Kill Bill: Volume 1
5. Kill Bill: Volume 2
6. Death Proof
7. Inglourious Basterds
8. Django Unchained
9. The Hateful Eight
10. Once Upon a Time in Hollywood
|
|
null | false
| null |
How may I find a charging port outdoors for my laptop?
|
There are many ways to find a charging port outdoors for your laptop. For example, one may find charging ports in cafes, although usually those are reserved for paying customers. It's not uncommon to pay for a cup of latte and charge your laptop at the same time. Charging ports are also commonly seen in public places, such as public libraries, city hall, and parks. Public transportation is another way to get your laptop charged, as many trains have charging ports near the seats.
|
|
null | false
| null |
Classify the below items based on the type of current they use, AC or DC.
charging of batteries, fans, bulbs, air conditioners
|
AC: fans, bulbs, air conditioners
DC: charging of batteries
|
|
null | false
| null |
In online transaction processing (OLTP), information systems typically facilitate and manage transaction-oriented applications. This is contrasted with online analytical processing.
The term "transaction" can have two different meanings, both of which might apply: in the realm of computers or database transactions it denotes an atomic change of state, whereas in the realm of business or finance, the term typically denotes an exchange of economic entities (as used by, e.g., Transaction Processing Performance Council or commercial transactions.): 50 OLTP may use transactions of the first type to record transactions of the second.
OLTP has also been used to refer to processing in which the system responds immediately to user requests. An automated teller machine (ATM) for a bank is an example of a commercial transaction processing application. Online transaction processing applications have high throughput and are insert- or update-intensive in database management. These applications are used concurrently by hundreds of users. The key goals of OLTP applications are availability, speed, concurrency and recoverability (durability). Reduced paper trails and the faster, more accurate forecast for revenues and expenses are both examples of how OLTP makes things simpler for businesses. However, like many modern online information technology solutions, some systems require offline maintenance, which further affects the cost-benefit analysis of an online transaction processing system.
OLTP is typically contrasted to OLAP (online analytical processing), which is generally characterized by much more complex queries, in a smaller volume, for the purpose of business intelligence or reporting rather than to process transactions. Whereas OLTP systems process all kinds of queries (read, insert, update and delete), OLAP is generally optimized for read only and might not even support other kinds of queries. OLTP also operates differently from batch processing and grid computing.
In addition, OLTP is often contrasted to OLEP (online event processing), which is based on distributed event logs to offer strong consistency in large-scale heterogeneous systems. Whereas OLTP is associated with short atomic transactions, OLEP allows for more flexible distribution patterns and higher scalability, but with increased latency and without guaranteed upper bound to the processing time.
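To illustrate the first, atomic sense of "transaction", here is a minimal sketch using Python's built-in sqlite3 module; the accounts table and amounts are invented, and the point is only the all-or-nothing commit/rollback behavior typical of short OLTP writes.

```python
import sqlite3

# A transfer between two accounts either fully commits or fully rolls back.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 100), (2, 50)])
conn.commit()

try:
    with conn:  # opens a transaction; commits on success, rolls back on error
        conn.execute("UPDATE accounts SET balance = balance - 30 WHERE id = 1")
        conn.execute("UPDATE accounts SET balance = balance + 30 WHERE id = 2")
except sqlite3.Error:
    print("transfer rolled back, balances unchanged")

print(conn.execute("SELECT id, balance FROM accounts").fetchall())  # [(1, 70), (2, 80)]
```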
|
Based on the below Wikipedia content, what is OLTP?
|
OLTP stands for "online transaction processing" and refers to information systems that process transactional data. The applications generating such data often require a high volume of transactions -- such as being used concurrently by hundreds of users. An example of a commercial transaction processing application is an automated teller machine (ATM). OLTP is often contrasted with OLAP (online analytical processing) which is typically characterized by more complex queries but with much smaller volume (for example, for business intelligence reporting).
|
null | false
| null |
In the series A Song of Ice and Fire, who is the founder of House Blackfyre?
|
House Blackfyre, a cadet branch of House Targaryen, was founded by Daemon I Blackfyre, a legitimized bastard son of King Aegon IV Targaryen.
|
|
1911.02821
| false
| null |
We carried out experiments on four Chinese NLP tasks, including Emotion Classification (EC), Named Entity Recognition (NER), Sentence Pair Matching (SPM) and Natural Language Inference (NLI). The detail of those tasks and the corresponding datasets are introduced in Appendix.
We carried out experiments on four Chinese NLP tasks, including Emotion Classification (EC), Named Entity Recognition (NER), Sentence Pair Matching (SPM) and Natural Language Inference (NLI).
|
What benchmarks did they experiment on?
|
The answers are shown as follows:
* Emotion Classification (EC)
* Named Entity Recognition (NER)
* Sentence Pair Matching (SPM)
* Natural Language Inference (NLI)
|
null | false
| null |
Moroccan music is of Arabic, Berber and sub-Saharan origins. Rock-influenced chaabi bands are widespread, as is trance music with historical origins in Islamic music.
Morocco is home to Andalusian classical music that is found throughout Northwest Africa. It probably evolved under the Moors in Cordoba, and the Persian-born musician Ziryab is usually credited with its invention. A genre known as Contemporary Andalusian Music and art is the brainchild of Morisco visual artist/composer/oudist Tarik Banzi, founder of the Al-Andalus Ensemble.
A group of Jilala musicians in 1900
Aita is a Bedouin musical style sung in the countryside.
Chaabi ("popular") is a music consisting of numerous varieties which are descended from the multifarious forms of Moroccan folk music. Chaabi was originally performed in markets, but is now found at any celebration or meeting.
Popular Western forms of music are becoming increasingly popular in Morocco, such as fusion, rock, country, metal and, in particular, hip hop.
|
What is a famous type of music in Morocco?
|
Morocco is home to Andalusian classical music, which evolved under the Moors in Cordoba, and to chaabi ("popular") music. Chaabi consists of numerous varieties descended from the multifarious forms of Moroccan folk music. Aita is a Bedouin musical style sung in the countryside.
|
null | false
| null |
Who is the author of Atomic Habits?
|
James Clear is the author of the bestselling book titled Atomic Habits.
|
|
1909.05855
| false
| null |
Machine-machine Interaction A related line of work explores simulation-based dialogue generation, where the user and system roles are simulated to generate a complete conversation flow, which can then be converted to natural language using crowd workers BIBREF1. Such a framework may be cost-effective and error-resistant since the underlying crowd worker task is simpler, and semantic annotations are obtained automatically.
It is often argued that simulation-based data collection does not yield natural dialogues or sufficient coverage, when compared to other approaches such as Wizard-of-Oz. We argue that simulation-based collection is a better alternative for collecting datasets like this owing to the factors below.
Machine-machine Interaction A related line of work explores simulation-based dialogue generation, where the user and system roles are simulated to generate a complete conversation flow, which can then be converted to natural language using crowd workers BIBREF1. Such a framework may be cost-effective and error-resistant since the underlying crowd worker task is simpler, and semantic annotations are obtained automatically.
It is often argued that simulation-based data collection does not yield natural dialogues or sufficient coverage, when compared to other approaches such as Wizard-of-Oz. We argue that simulation-based collection is a better alternative for collecting datasets like this owing to the factors below.
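To make the simulation idea concrete, the toy sketch below has a rule-based user and system exchange dialogue acts, yielding an annotated conversation skeleton that crowd workers would then paraphrase into natural language; the schema and acts here are invented for illustration and are far simpler than the paper's pipeline.

```python
import random

# Toy machine-machine simulation: semantic annotations come for free,
# since the dialogue acts themselves are generated.
SLOTS = {"cuisine": ["thai", "mexican"], "time": ["6pm", "8pm"]}

def simulate_dialogue():
    goal = {slot: random.choice(values) for slot, values in SLOTS.items()}
    turns = []
    for slot, value in goal.items():
        turns.append(("SYSTEM", f"REQUEST({slot})"))
        turns.append(("USER", f"INFORM({slot}={value})"))
    turns.append(("SYSTEM", "CONFIRM(" + ", ".join(f"{s}={v}" for s, v in goal.items()) + ")"))
    return turns

for speaker, act in simulate_dialogue():
    print(speaker, act)
```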
|
How did they gather the data?
|
The answers are shown as follows:
* simulation-based dialogue generation, where the user and system roles are simulated to generate a complete conversation flow, which can then be converted to natural language using crowd workers
|
null | false
| null |
In the series A Song of Ice and Fire, who is the founder of House Gardener?
|
Garth the Gardener
|
|
null | false
| null |
How is Delta Lake related to Apache Spark?
|
Delta Lake sits on top of Apache Spark. The format and the compute layer help to simplify building big data pipelines and increase the overall efficiency of your pipelines.
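As a minimal sketch of that relationship, the PySpark snippet below writes and reads a Delta table; it assumes the delta-spark package is installed and on the classpath, and the path and schema are illustrative.

```python
from pyspark.sql import SparkSession

# Configure a Spark session with the Delta Lake extensions.
spark = (SparkSession.builder
         .appName("delta-demo")
         .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
         .config("spark.sql.catalog.spark_catalog",
                 "org.apache.spark.sql.delta.catalog.DeltaCatalog")
         .getOrCreate())

df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "value"])
df.write.format("delta").mode("overwrite").save("/tmp/demo-table")  # illustrative path

spark.read.format("delta").load("/tmp/demo-table").show()
```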
|
|
null | false
| null |
Trains are up to sixteen cars long. With each car measuring 25 m (82 ft) in length, the longest trains are 400 m (1⁄4 mile) end to end. Stations are similarly long to accommodate these trains. Some of Japan's high-speed maglev trains are considered Shinkansen, while other slower maglev trains (such as the Linimo maglev train line serving local community near the city of Nagoya in Aichi, Japan) are intended as alternatives to conventional urban rapid transit systems.
https://en.wikipedia.org/wiki/Shinkansen
|
Based on this paragraph on Japanese bullet trains, how many cars do the longest trains have?
|
Based on this paragraph, the longest Japanese bullet trains have 16 cars, which is determined by dividing the length of the longest trains (400 m) by the length of each car (25 m).
|
null | false
| null |
Tell me whether those are countries or cities: Paris, Amsterdam, South Africa, Australia, Vienna, Singapore, Malaysia, Brunei, Argentina
|
Paris, Amsterdam and Vienna are cities. South Africa, Australia, Malaysia, Brunei and Argentina are countries. Singapore is actually both a city and a country.
|
|
null | false
| 451
|
A kernel machine uses the kernel trick to map the non-linear problem into a feature space where the problem may be linearly separable with an appropriate kernel function. Next we give the formal definition of the kernel machines discussed in this paper. Given a training data set {X, y} of n training instances, where {X ∈ ℝ^{n×d}, y ∈ ℝ^n} = {(x_1, y_1), (x_2, y_2), ..., (x_n, y_n)} and (x_i, y_i) denotes the instance x_i ∈ ℝ^d with its label y_i, the objective of kernel machine training is to find an optimal ω* which minimizes the structural risk as follows:

min_ω Σ_{i=1}^n l(f(ω, x_i), y_i) + λ‖ω‖²    (1)

where λ denotes the regularization constant and f(ω, x_i) = ⟨ω, φ(x_i)⟩. The variable ω is defined on the reproducing kernel Hilbert space (RKHS) and ⟨·, ·⟩ is the inner product on the RKHS. The function φ(·) maps the instances from their original data space to a higher-dimensional feature space induced by the kernel function. Assume the loss l(·, ·) is an affine function of ω. The representer theorem shows that a minimizer of the optimization problem (1) admits the form

ω* = Σ_{i=1}^n α_i φ(x_i),  so that  f(ω*, x) = Σ_{i=1}^n α_i k(x_i, x)    (2)

where k(x_i, x_j) denotes a positive definite kernel function and k(x_i, x_j) = ⟨φ(x_i), φ(x_j)⟩. By substituting the expressions of f(ω, x_i) and ω into Equation (1), we obtain the objective with respect to α:

min_α Σ_{i=1}^n l(Σ_{j=1}^n α_j k(x_i, x_j), y_i) + λ Σ_{i=1}^n Σ_{j=1}^n α_i α_j k(x_i, x_j)    (3)

where α = [α_1 ... α_n]^T is an n-dimensional vector, each dimension of which corresponds to the contribution of a training instance to the kernel machine.

Then, we can derive that the Hessian matrix H = [H_ij]_{n×n} of Problem (3) is equal to the kernel matrix; the element in the i-th row and j-th column of H is H_ij = k(x_i, x_j). The derivations of the first and second derivatives can be found in the supplementary material. For clarity, we use the kernel matrix H to denote the Hessian matrix of the kernel machine in the rest of the paper.
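For concreteness, the short numpy sketch below builds the Gram/Hessian matrix H for an RBF kernel (an illustrative choice, not necessarily the paper's kernel) and checks the properties used above.

```python
import numpy as np

def rbf_kernel_matrix(X, gamma=1.0):
    """Gram matrix H with H[i, j] = k(x_i, x_j) = exp(-gamma * ||x_i - x_j||^2)."""
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))                   # 5 instances, 3 features
H = rbf_kernel_matrix(X)

print(np.allclose(H, H.T))                    # True: H is symmetric
print(np.linalg.eigvalsh(H).min() >= -1e-10)  # True: positive semi-definite
alpha = rng.normal(size=5)
print((H @ alpha).shape)                      # f(x_i) = sum_j alpha_j k(x_i, x_j) = (H @ alpha)[i]
```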
Eq(1)
|
The expression in Eq. (1) is the Lagrangian. Is the dual problem stated somewhere?
|
We've reorganized Section 2, and now Eq. (1) is in primal form.
|
1910.03771
| false
| null |
Here is a list of architectures for which reference implementations and pretrained weights are currently provided in Transformers. These models fall into two main categories: generative models (GPT, GPT-2, Transformer-XL, XLNet, XLM) and models for language understanding (Bert, DistilBert, RoBERTa, XLM).
BERT (BIBREF13) is a bi-directional Transformer-based encoder pretrained with a linear combination of masked language modeling and next sentence prediction objectives.
RoBERTa (BIBREF5) is a replication study of BERT which showed that carefully tuning hyper-parameters and training data size lead to significantly improved results on language understanding.
DistilBERT (BIBREF32) is a smaller, faster, cheaper and lighter version of BERT pretrained with knowledge distillation.
GPT (BIBREF34) and GPT2 (BIBREF9) are two large auto-regressive language models pretrained with language modeling. GPT2 showcased zero-shot task transfer capabilities on various tasks such as machine translation or reading comprehension.
Transformer-XL (BIBREF35) introduces architectural modifications enabling Transformers to learn dependency beyond a fixed length without disrupting temporal coherence via segment-level recurrence and relative positional encoding schemes.
XLNet (BIBREF4) builds upon Transformer-XL and proposes an auto-regressive pretraining scheme combining BERT's bi-directional context flow with auto-regressive language modeling by maximizing the expected likelihood over permutations of the word sequence.
XLM (BIBREF8) shows the effectiveness of pretrained representations for cross-lingual language modeling (both on monolingual data and parallel data) and cross-lingual language understanding.
We systematically release the model with the corresponding pretraining heads (language modeling, next sentence prediction for BERT) for adaptation using the pretraining objectives. Some models fine-tuned on downstream tasks such as SQuAD1.1 are also available. Overall, more than 30 pretrained weights are provided through the library including more than 10 models pretrained in languages other than English. Some of these non-English pretrained models are multi-lingual models (with two of them being trained on more than 100 languages) .
Here is a list of architectures for which reference implementations and pretrained weights are currently provided in Transformers. These models fall into two main categories: generative models (GPT, GPT-2, Transformer-XL, XLNet, XLM) and models for language understanding (Bert, DistilBert, RoBERTa, XLM).
BERT (BIBREF13) is a bi-directional Transformer-based encoder pretrained with a linear combination of masked language modeling and next sentence prediction objectives.
RoBERTa (BIBREF5) is a replication study of BERT which showed that carefully tuning hyper-parameters and training data size lead to significantly improved results on language understanding.
DistilBERT (BIBREF32) is a smaller, faster, cheaper and lighter version of BERT pretrained with knowledge distillation.
GPT (BIBREF34) and GPT2 (BIBREF9) are two large auto-regressive language models pretrained with language modeling. GPT2 showcased zero-shot task transfer capabilities on various tasks such as machine translation or reading comprehension.
Transformer-XL (BIBREF35) introduces architectural modifications enabling Transformers to learn dependency beyond a fixed length without disrupting temporal coherence via segment-level recurrence and relative positional encoding schemes.
XLNet (BIBREF4) builds upon Transformer-XL and proposes an auto-regressive pretraining scheme combining BERT's bi-directional context flow with auto-regressive language modeling by maximizing the expected likelihood over permutations of the word sequence.
XLM (BIBREF8) shows the effectiveness of pretrained representations for cross-lingual language modeling (both on monolingual data and parallel data) and cross-lingual language understanding.
We systematically release the model with the corresponding pretraining heads (language modeling, next sentence prediction for BERT) for adaptation using the pretraining objectives. Some models fine-tuned on downstream tasks such as SQuAD1.1 are also available. Overall, more than 30 pretrained weights are provided through the library including more than 10 models pretrained in languages other than English. Some of these non-English pretrained models are multi-lingual models (with two of them being trained on more than 100 languages) .
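As a usage illustration, here is a minimal sketch of loading one of these pretrained models through the library's auto classes; the checkpoint name and the output attribute follow recent versions of the library and are given as an example, not as the only entry point.

```python
from transformers import AutoTokenizer, AutoModel

# The same two lines load any of the architectures listed above by name.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("Transformers provides a unified API.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, seq_len, hidden_size)
```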
|
What state-of-the-art general-purpose pretrained models are made available under the unified API?
|
The answers are shown as follows:
* BERT
* RoBERTa
* DistilBERT
* GPT
* GPT2
* Transformer-XL
* XLNet
* XLM
|
null | false
| null |
Joseph Charlton (born April 7, 1997) is an American football punter who is a free agent. He played college football at South Carolina.
College career
Charlton was a member of the South Carolina Gamecocks for five seasons, redshirting as a true freshman. He was named South Carolina's starting punter going into his redshirt sophomore year. As a redshirt junior, Charlton set a new school record by averaging 44.8 yards per punt and was named second-team All-Southeastern Conference (SEC) by the league's coaches. He broke his own record in his senior season by averaging 47.7 yards per punt and was named second-team All-SEC and was an honorable mention All-American by Pro Football Focus. Charlton finished his collegiate career averaging 45.5 yards per punt, a school record.
|
Given a reference text about Joseph Charlton, what college football team did he play for?
|
Joseph Charlton played for the South Carolina Gamecocks.
|
null | false
| null |
Make an argument for making buses free to ride in cities.
|
Making buses free to ride would increase ridership. This benefits cities by reducing cars on the road, reducing traffic congestion and pollution. People who rely on buses often have lower income, and not charging bus fare would be disproportionately beneficial to them. Not charging a fare also avoids the need to maintain payment infrastructure on buses.
|
|
null | false
| null |
Aztec civilization originated from which country?
|
Mexico
|
|
null | false
| 135
|
Machine comprehension datasets consist of three main components: texts, questions and answers. In this section, we describe our data collection for these 3 components. We first describe a series of pilot studies that we conducted in order to collect commonsense inference questions (Section SECREF4 ). In Section SECREF5 , we discuss the resulting data collection of questions, texts and answers via crowdsourcing on Amazon Mechanical Turk (henceforth MTurk). Section SECREF17 gives information about some necessary postprocessing steps and the dataset validation. Lastly, Section SECREF19 gives statistics about the final dataset.
Machine comprehension datasets consist of three main components: texts, questions and answers.
|
What are the major components of machine comprehension datasets?
|
Machine comprehension datasets consist of three main components: texts, questions and answers.
|
null | false
| null |
Steps to create a chatbot
|
Choose the right purpose for Chatbot
Decide the platform where you want it to appear
Design the chatbot conversation in an editor
Test and train your chatbot
Collect feedback on your chatbot
|
|
null | false
| null |
What does the theme songs of Jaws, Star Wars, and Harry Potter have in common?
|
They were all composed by John Williams, who reignited the use of classical symphonic soundtracks in films. He drew inspiration from late Romantic composers such as Richard Wagner; for example, the Star Wars score drew on the leitmotif approach of Wagner's Ring cycle.
|
|
null | false
| null |
Who is Shigeru
|
Shigeru Miyamoto is a famous video game designer. He is from Japan and has designed popular, iconic games such as Donkey Kong, Mario Kart, and The Legend of Zelda, among others.
|
|
null | false
| null |
What is sbt in Scala?
|
sbt is an open-source build tool for Scala and Java projects, similar to Apache's Maven and Gradle.
|
|
null | false
| null |
tell me whether these are synonyms or antonyms of love: dislike, care, like, hate, affection, harsh
|
Synonyms: care, like, affection
Antonyms: dislike, hate, harsh
|
|
1808.03430
| false
| null |
FLOAT SELECTED: Table 1: Comparison of different models.
FLOAT SELECTED: Table 1: Comparison of different models.
|
What are the results achieved from the introduced method?
|
Their model resulted in values of 0.476, 0.672 and 0.893 for recall at positions 1, 2 and 5 respectively among 10 candidates.
|
null | false
| null |
Who is the best soccer player in the world?
|
The best player in the world as of 2023 is Lionel Messi. He is also widely regarded as the greatest player to ever grace the game.
|
|
null | false
| null |
How do I train for a marathon?
|
To train for a marathon, start training 16 to 20 weeks before the day of the race. Your weekly mileage should already be about 30 miles per week, with a strong 10-mile long run each week. Gradually build your mileage each week by 10% to 20%. You can also train based on time on your feet, rather than mileage. Your diet may need to be adjusted depending on the number of days per week you are training and your basal metabolic rate. Avoid injury by stretching daily and cross-training at least once a week. Run at least three 18-20 milers within the last 6 weeks of your training. Two weeks before the race, taper your runs to be shorter, with your last 2 long runs being about 15-17 miles and 10-12 miles.
|
|
null | false
| null |
Give me a list of the venues that were used for the US Open Golf tournament through the years 1986-1990
|
1986 - Shinnecock Hills Golf Club
1987 - Olympic Club (Lake Course)
1988 - The Country Club (Composite Course)
1989 - Oak Hill (East Course)
1990 - Medinah Country Club (Course No. 3)
|
|
null | false
| null |
The Legislative Council consisted of six official members appointed by the High Commissioner and twelve elected members, three of which were Muslims and nine of which were non-Muslims.
The island was divided into three constituencies, each formed of two districts. Each constituency elected one Muslim and three non-Muslims. Muslim voters had one vote, whilst non-Muslims could vote for up to three candidates. Due to the high levels of illiteracy, voting was not secret, with voters required to tell the polling officers their candidates of choice, often in front of agents of the candidates.
Suffrage was limited to men aged over 21 who had been permanent residents during the five years prior to the elections, and who had paid either the verghi tax (which was levied as an annual payment of one four-thousandth of a property's value, 4% of rental income or 3% on profits or salaries) or rent for a dwelling or shop within the last year. However, of the estimated 40,000 taxpayers in the territory at the time of the elections, only those that had been able to pay their taxes on time were able to register to vote. The number of registered voters for the non-Muslim seats dropped from 15,408 in 1886 to 10,030.
|
Given this reference text about the 1891 Cypriot legislative election, what caused the number of registered voters for the non-Muslim seats to drop from 15,408 in 1886 to 10,030?
|
Only those taxpayers who had been able to pay their taxes on time in the previous year could register to vote, which caused the number of registered non-Muslim voters to drop.
|
null | false
| 144
|
Multi-document summarization (MDS), the transformation of a set of documents into a short text containing their most important aspects, is a long-studied problem in NLP. Generated summaries have been shown to support humans dealing with large document collections in information seeking tasks BIBREF0 , BIBREF1 , BIBREF2 . However, when exploring a set of documents manually, humans rarely write a fully-formulated summary for themselves. Instead, user studies BIBREF3 , BIBREF4 show that they note down important keywords and phrases, try to identify relationships between them and organize them accordingly. Therefore, we believe that the study of summarization with similarly structured outputs is an important extension of the traditional task.
A representation that is more in line with observed user behavior is a concept map BIBREF5 , a labeled graph showing concepts as nodes and relationships between them as edges (Figure FIGREF2 ). Introduced in 1972 as a teaching tool BIBREF6 , concept maps have found many applications in education BIBREF7 , BIBREF8 , for writing assistance BIBREF9 or to structure information repositories BIBREF10 , BIBREF11 . For summarization, concept maps make it possible to represent a summary concisely and clearly reveal relationships. Moreover, we see a second interesting use case that goes beyond the capabilities of textual summaries: When concepts and relations are linked to corresponding locations in the documents they have been extracted from, the graph can be used to navigate in a document collection, similar to a table of contents. An implementation of this idea has been recently described by BIBREF12 .
The corresponding task that we propose is concept-map-based MDS, the summarization of a document cluster in the form of a concept map. In order to develop and evaluate methods for the task, gold-standard corpora are necessary, but no suitable corpus is available. The manual creation of such a dataset is very time-consuming, as the annotation includes many subtasks. In particular, an annotator would need to manually identify all concepts in the documents, while only a few of them will eventually end up in the summary.
To overcome these issues, we present a corpus creation method that effectively combines automatic preprocessing, scalable crowdsourcing and high-quality expert annotations. Using it, we can avoid the high effort for single annotators, allowing us to scale to document clusters that are 15 times larger than in traditional summarization corpora. We created a new corpus of 30 topics, each with around 40 source documents on educational topics and a summarizing concept map that is the consensus of many crowdworkers (see Figure FIGREF3 ).
As a crucial step of the corpus creation, we developed a new crowdsourcing scheme called low-context importance annotation. In contrast to traditional approaches, it allows us to determine important elements in a document cluster without requiring annotators to read all documents, making it feasible to crowdsource the task and overcome quality issues observed in previous work BIBREF13 . We show that the approach creates reliable data for our focused summarization scenario and, when tested on traditional summarization corpora, creates annotations that are similar to those obtained by earlier efforts.
To summarize, we make the following contributions: (1) We propose a novel task, concept-map-based MDS (§ SECREF2 ), (2) present a new crowdsourcing scheme to create reference summaries (§ SECREF4 ), (3) publish a new dataset for the proposed task (§ SECREF5 ) and (4) provide an evaluation protocol and baseline (§ SECREF7 ). We make these resources publicly available under a permissive license.
As a crucial step of the corpus creation, we developed a new crowdsourcing scheme called low-context importance annotation.
|
To create the corpus, what crowdsourcing scheme is developed by them?
|
Low-context importance annotation.
|
null | false
| null |
Give me a list of the teams that have won the English Premier League
|
- Manchester United (13 times, most recently 2012-13)
- Blackburn Rovers (Once, 1994-95)
- Arsenal (Three times, most recently 2003-04)
- Chelsea (Five times, most recently 2016-17)
- Manchester City (Six times, most recently 2021-22)
- Leicester City (Once, 2015-16)
- Liverpool (Once, 2019-20)
|
|
null | false
| null |
The Great Ray Charles is the second studio album by American musician Ray Charles, released in 1957 by Atlantic Records. An instrumental jazz album, it features cover art designed by Marvin Israel. Later CD re-issues of The Great Ray Charles often include six out of the eight songs from the 1961 album The Genius After Hours, as bonus tracks.
|
What type of music is featured in the album The Great Ray Charles
|
instrumental jazz
|
null | false
| 335
|
While the demand for physical and manual labor is gradually declining, there is a growing need for a workforce with soft skills. Which soft skill do you think would be the most valuable in your daily life? According to an article in Forbes BIBREF0 , 70% of employed Americans agree that public speaking skills are critical to their success at work. Yet, it is one of the most dreaded acts. Many people rate the fear of public speaking even higher than the fear of death BIBREF1 . To alleviate the situation, several automated systems are now available that can quantify behavioral data for participants to reflect on BIBREF2 . Predicting the viewers' ratings from the speech transcripts would enable these systems to generate feedback on the potential audience behavior.
Predicting human behavior, however, is challenging due to its huge variability and the way the variables interact with each other. Running Randomized Control Trials (RCT) to decouple each variable is not always feasible and also expensive. It is possible to collect a large amount of observational data due to the advent of content sharing platforms such as YouTube, Massive Open Online Courses (MOOC), or ted.com. However, the uncontrolled variables in the observational dataset always keep a possibility of incorporating the effects of the “data bias” into the prediction model. Recently, the problems of using biased datasets are becoming apparent. BIBREF3 showed that the error rates in the commercial face-detectors for the dark-skinned females are 43 times higher than the light-skinned males due to the bias in the training dataset. The unfortunate incident of Google's photo app tagging African-American people as “Gorilla” BIBREF4 also highlights the severity of this issue.
We address the data bias issue as much as possible by carefully analyzing the relationships of different variables in the data generating process. We use a Causal Diagram BIBREF5 , BIBREF6 to analyze and remove the effects of the data bias (e.g., the speakers' reputations, popularity gained by publicity, etc.) in our prediction model. In order to make the prediction model less biased to the speakers' race and gender, we confine our analysis to the transcripts only. Besides, we normalize the ratings to remove the effects of the unwanted variables such as the speakers' reputations, publicity, contemporary hot topics, etc.
For our analysis, we curate an observational dataset of public speech transcripts and other meta-data collected from the ted.com website. This website contains a large collection of high-quality public speeches that are freely available to watch, share, rate, and comment on. Every day, numerous people watch and annotate their perceptions about the talks. Our dataset contains 2231 public speech transcripts and over 5 million ratings from the spontaneous viewers of the talks. The viewers annotate each talk by 14 different labels—Beautiful, Confusing, Courageous, Fascinating, Funny, Informative, Ingenious, Inspiring, Jaw-Dropping, Long-winded, Obnoxious, OK, Persuasive, and Unconvincing.
We use two neural network architectures in the prediction task. In the first architecture, we use LSTM BIBREF7 for a sequential input of the words within the sentences of the transcripts. In the second architecture, we use TreeLSTM BIBREF8 to represent the input sentences in the form of a dependency tree. Our experiments show that the dependency tree-based model can predict the TED talk ratings with slightly higher performance (average F-score 0.77) than the word sequence model (average F-score 0.76). To the best of our knowledge, this is the best performance in the literature on predicting the TED talk ratings. We compare the performances of these two models with a baseline of classical machine learning techniques using hand-engineered features. We find that the neural networks largely outperform the classical methods. We believe this gain in performance is achieved by the networks' ability to capture better the natural relationship of the words (as compared to the hand engineered feature selection approach in the baseline methods) and the correlations among different rating labels.
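As an illustration of the first architecture, the sketch below shows a word-sequence LSTM with a 14-way multi-label sigmoid head in PyTorch; it is an assumption-laden approximation, since the paper does not fully specify dimensions or the output layer.

```python
import torch
import torch.nn as nn

class TalkRatingLSTM(nn.Module):
    """Word-sequence model: LSTM over transcript tokens, one sigmoid per rating label."""
    def __init__(self, vocab_size=20000, embed_dim=300, hidden_dim=128, num_labels=14):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_labels)

    def forward(self, token_ids):
        x = self.embed(token_ids)
        _, (h, _) = self.lstm(x)          # h: (1, batch, hidden_dim)
        return self.head(h.squeeze(0))    # logits, one per rating label

model = TalkRatingLSTM()
logits = model(torch.randint(1, 20000, (2, 50)))
loss = nn.BCEWithLogitsLoss()(logits, torch.rand(2, 14))  # multi-label objective
print(logits.shape, loss.item())
```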
In the first architecture, we use LSTM (Hochreiter and Schmidhuber, 1997) for a sequential input of the words within the sentences of the transcripts. In the second architecture, we use TreeLSTM (Tai et al., 2015) to represent the input sentences in the form of a dependency tree.
|
What did the LSTM use for in the first architecture?
|
It is used for a sequential input of the words within the sentences of the transcripts.
|
null | false
| null |
How do you get dressed in the morning?
|
Typically, you would put your underwear on first. Then most people would put their socks on, followed by a T-shirt or undershirt, and then any type of dress shirt. After that you would put your pants on, and if you wear a belt you would put your belt on. Then you would put your shoes on, and then typically people would do their hair. After that, you are ready to take on the day!
|
|
1703.04617
| false
| null |
Our baseline model is composed of the following typical components: word embedding, input encoder, alignment, aggregation, and prediction. Below we discuss these components in more detail.
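As a rough illustration of how those five components fit together, here is a hedged PyTorch skeleton; the specific layers (bidirectional LSTM encoder, dot-product alignment, LSTM aggregation, final-state prediction) are common choices, not necessarily the paper's.

```python
import torch
import torch.nn as nn

class BaselineReader(nn.Module):
    """Skeleton of the five components: embedding, encoding, alignment,
    aggregation, prediction. Shapes and layer choices are placeholders."""
    def __init__(self, vocab=10000, dim=100):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)                                   # word embedding
        self.encode = nn.LSTM(dim, dim, batch_first=True, bidirectional=True)   # input encoder
        self.aggregate = nn.LSTM(4 * dim, dim, batch_first=True)                # aggregation
        self.predict = nn.Linear(dim, 1)                                        # prediction

    def forward(self, premise_ids, hypothesis_ids):
        p, _ = self.encode(self.embed(premise_ids))      # (B, Tp, 2d)
        h, _ = self.encode(self.embed(hypothesis_ids))   # (B, Th, 2d)
        # alignment: soft attention of each premise token over the hypothesis
        attn = torch.softmax(p @ h.transpose(1, 2), dim=-1)   # (B, Tp, Th)
        aligned = attn @ h                                    # (B, Tp, 2d)
        agg, _ = self.aggregate(torch.cat([p, aligned], dim=-1))
        return self.predict(agg[:, -1])                       # score from final state

model = BaselineReader()
score = model(torch.randint(0, 10000, (2, 7)), torch.randint(0, 10000, (2, 5)))
print(score.shape)  # torch.Size([2, 1])
```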
word embedding, input encoder, alignment, aggregation, and prediction
|
what is the architecture of the baseline model?
|
The answers are shown as follows:
* word embedding, input encoder, alignment, aggregation, and prediction.
|
null | false
| null |
In economics, the Gini coefficient, also known as the Gini index or Gini ratio, is a measure of statistical dispersion intended to represent the income inequality or the wealth inequality or the consumption inequality within a nation or a social group. It was developed by statistician and sociologist Corrado Gini.
The Gini coefficient measures the inequality among values of a frequency distribution, such as levels of income. A Gini coefficient of 0 reflects perfect equality, where all income or wealth values are the same, while a Gini coefficient of 1 (or 100%) reflects maximal inequality among values. For example, if everyone has the same income, the Gini coefficient will be 0. In contrast, a Gini coefficient of 1 indicates that within a group of people, a single individual has all the income or consumption, while all others have none.
The Gini coefficient was proposed by Corrado Gini as a measure of inequality of income or wealth. For OECD countries, in the late 20th century, considering the effect of taxes and transfer payments, the income Gini coefficient ranged between 0.24 and 0.49, with Slovenia being the lowest and Mexico the highest. African countries had the highest pre-tax Gini coefficients in 2008–2009, with South Africa having the world's highest, estimated to be 0.63 to 0.7, although this figure drops to 0.52 after social assistance is taken into account, and drops again to 0.47 after taxation. The global income Gini coefficient in 2005 has been estimated to be between 0.61 and 0.68 by various sources.
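Since the coefficient is a simple statistic of a frequency distribution, it is easy to compute directly; below is a small numpy sketch using the mean-absolute-difference definition, with illustrative income values.

```python
import numpy as np

def gini(incomes):
    """Gini coefficient via the mean absolute difference:
    G = sum_i sum_j |x_i - x_j| / (2 * n^2 * mean(x))."""
    x = np.asarray(incomes, dtype=float)
    diffs = np.abs(x[:, None] - x[None, :]).sum()
    return diffs / (2 * len(x) ** 2 * x.mean())

print(gini([10, 10, 10, 10]))  # 0.0  -> perfect equality
print(gini([0, 0, 0, 100]))    # 0.75 -> one of four people holds everything
```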
|
Who proposed the Gini coefficient as a measure of inequality of income or wealth?
|
The Gini coefficient was proposed by Corrado Gini as a measure of inequality of income or wealth.
|
null | false
| null |
Where is New Hampshire?
|
Location is a relative concept, although for the sake of this response, let's assume that you are looking at a standard map of the world built using the widely accepted Mercator projection. New Hampshire is a state located in the north east corner of the United States of America, which itself is located in the Northern and Western hemispheres. New Hampshire is bordered on 4 sides by Massachusetts to the south, Vermont to the west, Maine to the east, and Canada to the north.
|
|
null | false
| null |
Who first described the Euphalacra lacunata
|
Euphalacra lacunata is a moth in the family Drepanidae. It was described by Jeremy Daniel Holloway in 1998. It is found on Borneo and Peninsular Malaysia.
|
|
null | false
| null |
Please provide a list of best movies from Shahrukh Khan.
|
Not all of these movies made big box office earnings, but they all brought acclaim to Shahrukh Khan.
My Name is Khan
Pathaan
Baazigar
Duplicate
Kal Ho Naa Ho
Kabhi Alvida Naa Kehna
Dil Toh Pagal Hai
Dilwale Dulhania Le Jayenge
Kuch Kuch Hota Hai
Kabhi Khushi Kabhi Gham
Main Hoon Na
Don
Chak De! India
Veer-Zaara
And many others.
|
|
1909.00574
| false
| null |
The MSParS dataset is published by the NLPCC 2019 evaluation task. The whole dataset consists of 81,826 samples annotated by native English speakers. 80% of them are used as the training set, 10% are used as the validation set, and the rest is used as the test set. 3,000 hard samples are selected from the test set. The metric for this dataset is exact match accuracy on both the full test set and the hard test subset. Each sample is composed of the question, the logical form, the parameters (entity/value/type) and the question type, as Table TABREF3 demonstrates.
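For reference, the 80/10/10 split implies roughly the following sample counts; the exact per-split rounding is not stated in the source, so the figures below are an estimate.

```python
total = 81826                 # samples in MSParS
train = round(total * 0.8)    # 65,461
valid = round(total * 0.1)    # 8,183 (approximate)
test = total - train - valid  # remainder, from which the 3,000 hard samples are drawn
print(train, valid, test)     # 65461 8183 8182
```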
The whole dataset consists of 81,826 samples annotated by native English speakers. 80% of them are used as training set. 10% of them are used as validation set while the rest is used as test set. 3000 hard samples are selected from the test set.
|
What is the difference between the full test set and the hard test set?
|
The answers are shown as follows:
* 3000 hard samples are selected from the test set
|
null | false
| null |
Logging was Seattle's first major industry, but by the late 19th century, the city had become a commercial and shipbuilding center as a gateway to Alaska during the Klondike Gold Rush. Growth after World War II was partially due to the local Boeing company, which established Seattle as a center for aircraft manufacturing. The Seattle area developed into a technology center from the 1980s onwards with companies like Microsoft becoming established in the region; Microsoft founder Bill Gates is a Seattleite by birth. Internet retailer Amazon was founded in Seattle in 1994, and major airline Alaska Airlines is based in SeaTac, Washington, serving Seattle's international airport, Seattle–Tacoma International Airport. The stream of new software, biotechnology, and Internet companies led to an economic revival, which increased the city's population by almost 50,000 between 1990 and 2000.
|
What are the major industries in Seattle today?
|
In the late 20th and early 21st century, the largest industries in Seattle were software, aviation, and biotechnology. Significant companies are Boeing, Microsoft, Alaska Airlines, and Amazon.
|