| paper_id (string) | yes_no (bool) | paper_index (int64) | evidence (string) | question (string) | answer (string) |
|---|---|---|---|---|---|
null | false
| null |
Roosevelt attended Groton School, Harvard College, and Columbia Law School, going on to practice law in New York City. He won election to the New York State Senate in 1910 and then served as Assistant Secretary of the Navy under president Woodrow Wilson during World War I. Roosevelt was James M. Cox's running mate on the Democratic Party's ticket in the 1920 United States presidential election, but Cox was defeated by Republican Warren G. Harding. In 1921, Roosevelt contracted a paralytic illness that permanently paralyzed his legs. He returned to public office by winning the 1928 New York gubernatorial election. He served as governor of New York from 1929 to 1933, promoting programs to combat the Great Depression besetting the United States at the time. In the 1932 presidential election, Roosevelt defeated Republican incumbent president Herbert Hoover in a landslide.
During his first 100 days as president, Roosevelt spearheaded unprecedented federal legislation and issued a profusion of executive orders that instituted the New Deal. He created numerous programs to provide relief to the unemployed and farmers while seeking economic recovery with the National Recovery Administration and other programs. He also instituted major regulatory reforms related to finance, communications, and labor, and presided over the end of Prohibition. In 1936, Roosevelt won a landslide reelection with the economy having improved rapidly from 1933, but the economy relapsed into a deep recession in 1937 and 1938. Later, Roosevelt unsuccessfully sought passage of the Judicial Procedures Reform Bill of 1937. The conservative coalition formed in 1937 to block the implementation of further New Deal programs and reforms. He ran successfully for reelection in 1940, becoming the only American president to serve for more than two terms.
With World War II looming after 1938 in addition to the Japanese invasion of China and the aggression of Nazi Germany, Roosevelt gave strong diplomatic and financial support to China as well as the United Kingdom and the Soviet Union while the United States remained officially neutral. Following the Japanese attack on Pearl Harbor on December 7, 1941, he obtained a declaration of war on Japan the next day, and a few days later, on Germany and Italy. He worked closely with other national leaders in leading the Allies against the Axis powers. Roosevelt supervised the mobilization of the American economy to support the war effort and implemented a Europe first strategy. He also initiated the development of the world's first atomic bomb and worked with the other Allied leaders to lay the groundwork for the United Nations and other post-war institutions. He won reelection in 1944 but with his physical health seriously and steadily declining during the war years, he died in 1945. Since his death, several of Roosevelt's actions have come under substantial criticism, such as his ordering of the incarceration of Japanese Americans in concentration camps. Nonetheless, historical rankings consistently rank Roosevelt as one of the greatest presidents in American history.
|
Please give me a quick overview of Franklin Roosevelt's career.
|
Franklin Delano Roosevelt was an American statesman and political leader who served as the 32nd president of the United States from 1933 until his death in 1945. He previously served as the 44th governor of New York from 1929 to 1933, the Assistant Secretary of the Navy from 1913 to 1920, and a member of the New York State Senate from 1911 to 1913.
|
null | false
| 164
|
The output of our measurement procedures (in the social sciences often called the “scores”) must now be assessed in terms of their reliability and validity with regard to the (systemized) concept. Reliability aims to capture repeatability, i.e. the extent to which a given tool provides consistent results.
Validity assesses the extent to which a given measurement tool measures what it is supposed to measure. In NLP and machine learning, most models are primarily evaluated by comparing the machine-generated labels against an annotated sample. This approach presumes that the human output is the “gold standard” against which performance should be tested. In contrast, when the reliability is measured based on the output of different annotators, no coder is taken as the standard and the likelihood of coders reaching agreement by chance (rather than because they are “correct”) is factored into the resulting statistic. Comparing against a “gold standard” suggests that the threshold for human inter- and intra-coder reliability should be particularly high.
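To make the chance-corrected agreement idea concrete, here is a minimal sketch of Cohen's kappa, one canonical statistic of this kind, for two coders (our own illustration, not code from the paper):

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two label sequences."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected agreement by chance, from each coder's marginal label distribution.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    labels = set(coder_a) | set(coder_b)
    expected = sum((freq_a[l] / n) * (freq_b[l] / n) for l in labels)
    return (observed - expected) / (1 - expected)

print(cohens_kappa(["pos", "neg", "pos", "pos"], ["pos", "neg", "neg", "pos"]))  # 0.5
```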
Accuracy, like other measures such as precision, recall, and F-score, is sometimes presented as a measure of validity, but if we do not have a genuinely objective determination of what something is supposed to measure—as is often the case in text analysis—then accuracy is perhaps a better indication of reliability than of validity. In that case, validity needs to be assessed with other techniques, like those we discuss later in this section. It is also worth asking what level of accuracy is sufficient for our analysis and to what extent there may be an upper bound, especially when the labels are native to the data or when the notion of a “gold standard” is not appropriate.
For some in the humanities, validation takes the form of close reading, not designed to confirm whether the model output is correct, but to present what BIBREF48 refers to as a form of “further discovery in two directions”. Model outputs tell us something about the texts, while a close reading of the texts alongside those outputs tells us something about the models that can be used for more effective model building. Applying this circular, iterative process to 450 18th-century novels written in three languages, Piper was able to uncover a new form of “conversional novel” that was not previously captured in “literary history's received critical categories” BIBREF48 .
Along similar lines, we can subject both the machine-generated output and the human annotations to another round of content validation. That is, take a stratified random sample, selecting observations from the full range of scores, and ask: Do these make sense in light of the systematized concept? If not, what seems to be missing? Or is something extraneous being captured? This is primarily a qualitative process that requires returning to theory and interrogating the systematized concept, indicators, and scores together. This type of validation is rarely done in NLP, but it is especially important when it is difficult to assess what drives a given machine learning model. If there is a mismatch between the scores and systematized concept at this stage, the codebook may need to be adjusted, human coders retrained, more training data prepared, algorithms adjusted, or in some instances, even a new analytical method adopted.
Other types of validation are also possible, such as comparing with other approaches that aim to capture the same concept, or comparing the output with external measures (e.g., public opinion polls, the occurrence of future events). We can also go beyond only evaluating the labels (or point estimates). BIBREF16 used human judgments to not only assess the positional estimates from a scaling method of latent political traits but also to assess uncertainty intervals. Using different types of validation can increase our confidence in the approach, especially when there is no clear notion of ground truth.
Besides focusing on rather abstract evaluation measures, we could also assess the models in task-based settings using human experts. Furthermore, for insight-driven analyses, it can be more useful to focus on improving explanatory power than making small improvements in predictive performance.
In NLP and machine learning, most models are primarily evaluated by comparing the machine-generated labels against an annotated sample.
|
How are most models primarily evaluated in NLP and machine learning?
|
In NLP and machine learning, most models are primarily evaluated by comparing the machine-generated labels against an annotated sample.
|
null | false
| null |
"Robert Stinnett's Day of Deceit suggests a memorandum prepared by Commander McCollum was central to U.S. policy in the immediate pre-war period. Stinnett claims the memo suggests only a direct attack on U.S. interests would sway the American public (or Congress) to favor direct involvement in the European war, specifically in support of the British. An attack by Japan would not, could not, aid Britain. Although the memo was passed to Captains Walter Anderson and Dudley Knox, two of Roosevelt's military advisors, on October 7, 1940, there is no evidence to suggest Roosevelt ever saw it, while Stinnett's claims of evidence he did is nonexistent. Moreover, although Anderson and Knox offered eight specific plans to aggrieve the Japanese Empire and added, "If by these means Japan could be led to commit an overt act of war, so much the better," of the eight "plans" (actions to be taken) offered in the memo, many if not all were implemented, but there is considerable doubt the McCollum memo was the inspiration.[citation needed] Nonetheless, in Day of Deceit Stinnett claims all action items were implemented. Yet there were numerous instances of members of the Roosevelt Administration insisting on not provoking Japan. Mark Parillo, in his essay The United States in the Pacific, wrote, "hese theories tend to founder on the logic of the situation. Had Roosevelt and other members of his administration known of the attack in advance, they would have been foolish to sacrifice one of the major instruments needed to win the war just to get the United States into it." Furthermore, on 5 November 1941, in a joint memo, Stark, CNO, and Marshall, Army Chief of Staff, warned, "if Japan be defeated and Germany remain undefeated, decision will still not have been reached.... War between the United States and Japan should be avoided...." Additionally, in a 21 November 1941 memo, Brigadier Leonard T. Gerow, head of Army War Plans, stated, "one of our present major objectives the avoidance of war with Japan...[and to] insure continuance of material assistance to the British." He concluded, "[I]t is of grave importance to our war effort in Europe..." Furthermore, Churchill himself, in a 15 May 1940 telegram, said he hoped a U.S. commitment to aid Britain would "quiet" Japan, following with a 4 October message requesting a USN courtesy visit to Singapore aimed at "preventing the spreading of the war" And Stark's own Plan Dog expressly stated, "Any strength that we might send to the Far East would...reduce the force of our blows against Germany..." Roosevelt could scarcely have been ignorant of Stark's views, and war with Japan was clearly contrary to Roosevelt's express wish to aid Britain."
https://en.wikipedia.org/wiki/Pearl_Harbor_advance-knowledge_conspiracy_theory
|
Briefly summarize this paragraph on a Pearl Harbor conspiracy theory.
|
This paragraph summarizes both Robert Stinnett's argument that the US government intentionally provoked the Japanese attack on Pearl Harbor and the counter-arguments. Stinnett's case rests largely on a memo from Commander McCollum stating that the only way the US population and its elected officials could be convinced to enter WWII would be a direct attack on the US. Stinnett argues that the government then did everything recommended in the memo in order to compel the Japanese to attack. According to this paragraph, arguments against Stinnett's claim include the fact that the military would never endanger the fleet in Pearl Harbor, which was needed to win the war, in a plan to convince the American people to join the war. In addition, there is compelling evidence that Roosevelt was attempting to minimize the conflict with Japan in order to focus on aid to Britain.
|
null | false
| 19
|
Here we study the errors produced by our system on the English test set to better understand the remaining shortcomings of the approach. A small portion of the wrong predictions points to an incorrect interpretation of the morpho-syntactic conditioning of the context, e.g. the system predicted plan instead of plans in the context Our _ include raising private capital. The majority of wrong predictions, however, are nonsensical, like bomb for job, fify for fixing, and gnderrate for understand. This observation suggests that the system generally did not learn to copy the characters of the lemma into the inflected form, which is all it needs to do in a large number of cases. This issue could be alleviated with simple data augmentation techniques that encourage autoencoding BIBREF2 .
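As an illustration of the kind of augmentation meant here, a minimal sketch (our own, not from the paper) that adds identity pairs so the model sees explicit copy examples:

```python
import random

def augment_with_autoencoding(training_pairs, vocab, n_extra=1000, max_len=10):
    """Add (lemma -> lemma) identity pairs, including random character strings,
    to encourage the model to learn a copy operation."""
    augmented = list(training_pairs)
    # Copy each real lemma to itself.
    augmented += [(lemma, lemma) for lemma, _ in training_pairs]
    # Also autoencode random strings over the character vocabulary.
    for _ in range(n_extra):
        s = "".join(random.choices(vocab, k=random.randint(1, max_len)))
        augmented.append((s, s))
    return augmented

pairs = [("plan", "plans"), ("fix", "fixing")]
print(augment_with_autoencoding(pairs, list("abcdefghijklmnopqrstuvwxyz"), n_extra=3))
```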
Here we study the errors produced by our system on the English test set to better understand the remaining shortcomings of the approach.
|
In what way does the team better understand the remaining shortcomings of the approach?
|
They study the errors produced by their system on the English test set to better understand the remaining shortcomings of the approach.
|
null | false
| null |
Give a list of the different kinds of fire trucks that exist.
|
How firefighting trucks and equipment are classified varies from country to country, but in the United States there are a few general categories of fire trucks. Fire Engines are what most people think of when they think of a fire truck. An engine has a combination of hoses, ladders, and other equipment, but it also carries several hundred gallons of water and has a built-in pump to feed the hoses from its internal tank or from a hydrant. There are Ladder Trucks, which typically have a large extending ladder on top. These may or may not carry water but generally have a built-in pump to feed water to the nozzle attached to the ladder. There are Rescue Trucks, which typically have specialized equipment for unique rescue situations such as confined space, vehicle extrication, or collapsed building rescue. These typically don't carry water or pumps, though a few jurisdictions have Rescue Engines that blend the two. There are also Tanker trucks that carry large amounts of water to areas where there are no hydrants; they typically do not have their own pumps or hoses and serve primarily as a water supply to other trucks.
|
|
null | false
| null |
What are some of the best TV shows on HBO?
|
HBO is famous for its award-winning TV shows, with some of the most critically successful and popular being The Sopranos, The Wire, Succession, The White Lotus, and Game of Thrones (along with its prequel, House of the Dragon). Most recently, The Last of Us, a post-apocalyptic series based on the award-winning video game, set viewership records on the HBO GO platform.
|
|
null | false
| null |
What are some common wilderness and survival skills?
|
Some common survival skills are ones that do not rely on carrying specific equipment but do require working knowledge prior to being stranded in the wilderness. Being able to use a map and compass is top of mind: if you are lost, finding your way back to civilization in short order is critical, particularly if you have little to no supplies. Being able to create shelter in adverse weather is also necessary, as humans exposed to harsh elements for extended periods have a smaller chance of survival. Since food and water are also paramount to survival, understanding your best chances of locating water that is safe to drink or that can be purified using natural resources is a skill set most would consider critical; almost equally, knowing what in nature is edible for foraging will help fuel someone. An example is being able to identify which mushrooms or berries are safe versus toxic. Lastly, being able to monitor one's own vital signs to avoid conditions like hypothermia is important, because hypothermia is considered a "silent killer."
|
|
null | false
| null |
Give me a numbered list of all chapters' title from the book, "12 Rules for Life" by Jordan B Peterson.
|
1. Stand up straight with your shoulders back.
2. Treat yourself like you are someone you are responsible for helping.
3. Make friends with people who want the best for you.
4. Compare yourself to who you were yesterday, not to who someone else is today.
5. Do not let your children do anything that makes you dislike them.
6. Set your house in perfect order before you criticize the world.
7. Pursue what is meaningful (not what is expedient).
8. Tell the truth – or, at least, don't lie.
9. Assume that the person you are listening to might know something you don't.
10. Be precise in your speech.
11. Do not bother children while they are skateboarding.
12. Pet a cat when you encounter one in the street.
|
|
null | false
| 302
|
Gated Recurrent Unit (GRU) networks BIBREF0 , BIBREF1 and an attention layer are used in sequential NLP problems, and successful results are reported in different studies. Figure FIGREF11 shows the diagram of our model.
GRU has been widely used in the literature to model sequential problems. An RNN applies the same set of weights recursively as follows: $h_t = f(W x_t + U h_{t-1})$, where $x_t$ is the input at step $t$ and $h_{t-1}$ is the previous hidden state.
GRU is very similar to LSTM, with the following equations: $z_t = \sigma(W_z x_t + U_z h_{t-1})$, $r_t = \sigma(W_r x_t + U_r h_{t-1})$, $\tilde{h}_t = \tanh(W_h x_t + U_h (r_t \odot h_{t-1}))$, $h_t = (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t$.
GRU has two gates, a reset gate $r_t$ and an update gate $z_t$. Intuitively, the reset gate determines how to combine the new input with the previous memory, and the update gate defines how much of the previous memory to keep around. We use the Keras GRU implementation to set up our experiments. We note that GRU units are a concatenation of GRU layers in each task.
Attention layer - GRUs update their hidden state h(t) as they process a sequence and the final hidden state holds the summation of all other history information. Attention layer BIBREF2 modifies this process such that representation of each hidden state is an output in each GRU unit to analyze whether this is an important feature for prediction.
Model Architecture - our model has an embedding layer of 300 dimensions using fastText embeddings, and 1024 dimensions using ELMo BIBREF3 embeddings. The GRU layer has 70 hidden units. We have 3 perceptron layers of size 300. The last layer is a softmax layer to predict emotion tags. Textual information layers (explained in section SECREF8 ) are concatenated with the GRU layer as an auxiliary layer. We utilize a dropout BIBREF4 layer after the first perceptron layer for regularization.
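A minimal Keras sketch of this kind of architecture (our own reconstruction for illustration; the layer names, class count, and the auxiliary lexicon input are assumptions, the attention layer is omitted for brevity, and the embedding weights would come from fastText or ELMo in practice):

```python
from tensorflow.keras import layers, Model

VOCAB, EMB_DIM, N_CLASSES, LEX_DIM = 20000, 300, 7, 8

tokens = layers.Input(shape=(None,), name="tokens")
lexicon = layers.Input(shape=(LEX_DIM,), name="lexicon_features")  # assumed auxiliary input

x = layers.Embedding(VOCAB, EMB_DIM)(tokens)   # pretrained vectors in practice
x = layers.GRU(70)(x)                          # GRU layer, 70 hidden units
x = layers.Concatenate()([x, lexicon])         # auxiliary textual-information features
x = layers.Dense(300, activation="relu")(x)
x = layers.Dropout(0.5)(x)                     # dropout after the first perceptron layer
x = layers.Dense(300, activation="relu")(x)
x = layers.Dense(300, activation="relu")(x)
out = layers.Dense(N_CLASSES, activation="softmax")(x)

model = Model(inputs=[tokens, lexicon], outputs=out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```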
We demonstrate the presence of emotion words as an 8-dimensional feature, representing all 8 emotion categories of the NRC lexicon.
|
What do they demonstrate the presence of emotion words as?
|
An 8-dimensional feature, representing all 8 emotion categories of the NRC lexicon.
|
null | false
| null |
The Russo-Ukrainian War is an ongoing international conflict between Russia, alongside Russian-backed separatists, and Ukraine, which began in February 2014. Following Ukraine's Revolution of Dignity, Russia annexed Crimea from Ukraine and supported pro-Russian separatists fighting the Ukrainian military in the Donbas war. The first eight years of conflict also included naval incidents, cyberwarfare, and heightened political tensions. In February 2022, Russia launched a full-scale invasion of Ukraine.
|
What is Russia-Ukraine Conflict?
|
The decision by Ukraine to move toward stronger connections with the European Union, which was opposed by Russia, set off the Russo-Ukrainian War, which started in 2014. The majority of the fighting has been in Ukraine's east, where pro-Russian separatists have proclaimed independent republics. With the backing of Western nations, the Ukrainian government has fought to retake control of these regions. Thousands of people have died as a result of the fighting, including military troops and civilians, and it has significantly impacted Ukraine's humanitarian situation. Access to essential necessities including food, water, and healthcare is a problem for many individuals. Several nations have condemned the fighting, and the United States and the European Union have imposed economic penalties on Russia.
|
null | false
| null |
The Gulag was the government agency in charge of the Soviet network of forced labour camps which were set up by order of Vladimir Lenin, reaching its peak during Joseph Stalin's rule from the 1930s to the early 1950s. English-language speakers also use the word gulag in reference to each of the forced-labor camps that existed in the Soviet Union, including the camps that existed in the post-Lenin era. The full official name of the agency changed several times.
The Gulag is recognized as a major instrument of political repression in the Soviet Union. The camps housed a wide range of convicts, from petty criminals to political prisoners, a large number of whom were convicted by simplified procedures, such as NKVD troikas or other instruments of extrajudicial punishment. In 1918–1922, the agency was administered by the Cheka, followed by the GPU (1922–1923), the OGPU (1923–1934), later known as the NKVD (1934–1946), and the Ministry of Internal Affairs (MVD) in the final years. The Solovki prison camp, the first correctional labour camp which was constructed after the revolution, was opened in 1918 and legalized by a decree, "On the creation of the forced-labor camps", on April 15, 1919.
|
What were gulags and who lived in them?
|
Gulags were forced labour camps created during the Soviet Union that were used as an instrument of political repression and punishment. Gulags housed a wide range of individuals, from political prisoners to petty criminals.
|
1706.08032
| false
| null |
Stanford - Twitter Sentiment Corpus (STS Corpus): STS Corpus contains 1,600K training tweets collected by a crawler from BIBREF0 . BIBREF0 constructed a test set manually with 177 negative and 182 positive tweets. The Stanford test set is small. However, it has been widely used in different evaluation tasks BIBREF0 BIBREF5 BIBREF13 .
Sanders - Twitter Sentiment Corpus: This dataset consists of hand-classified tweets collected by using search terms: INLINEFORM0 , #google, #microsoft and #twitter. We construct the dataset as BIBREF14 for binary classification.
Health Care Reform (HCR): This dataset was constructed by crawling tweets containing the hashtag #hcr BIBREF15 . The task is to predict positive/negative tweets BIBREF14 .
Stanford - Twitter Sentiment Corpus (STS Corpus): STS Corpus contains 1,600K training tweets collected by a crawler from BIBREF0 . BIBREF0 constructed a test set manually with 177 negative and 182 positive tweets. The Stanford test set is small. However, it has been widely used in different evaluation tasks BIBREF0 BIBREF5 BIBREF13 .
Sanders - Twitter Sentiment Corpus: This dataset consists of hand-classified tweets collected by using search terms: INLINEFORM0 , #google, #microsoft and #twitter. We construct the dataset as BIBREF14 for binary classification.
Health Care Reform (HCR): This dataset was constructed by crawling tweets containing the hashtag #hcr BIBREF15 . The task is to predict positive/negative tweets BIBREF14 .
|
Which datasets did they use?
|
The answers are shown as follows:
* Stanford - Twitter Sentiment Corpus (STS Corpus)
* Sanders - Twitter Sentiment Corpus
* Health Care Reform (HCR)
|
null | false
| null |
Christoph Karl von Piacsek was a Hungarian officer in Habsburg military service during the French Revolutionary Wars. He was born in 1749 in Klobusics (Hungarian: Klobusic; Czech: Klobušice), in the commune of Trencsén, in the Kingdom of Hungary. As a Rittmeister, or captain of cavalry, in the 9th Hussar Regiment, he received the Knight's Cross of the Military Order of Maria Theresa in 1789, after combat in Tabor during Austria's wars with the Ottoman Empire, and, as a colonel, commanded the 9th Hussars, 1793–1797. In 1793, during the War of the First Coalition, the 9th Hussars were part of the Army of the Rhine (Austrian), and participated in the skirmishes at Offenbach, Knittelsbach, and Otterheim, and, later, at the storming of the Wissembourg line at Lauffenberg. Following this action, Piacsek, Major Baron von Szoreyny, and several other officers were honored.
|
When did Christoph Karl von Piacsek serve as a Hungarian officer?
|
Christoph Karl von Piacsek served as a Hungarian officer in Habsburg military service during the French Revolutionary Wars.
|
null | false
| null |
I'm going to try a keto diet, which means I should prioritise fat and avoid carbs. Which of these foods should I prioritise and which should I avoid? rice, mayonnaise, avocado, pastry, olive oil, bread, cheese and crackers.
|
You should prioritise the following foods because they are high in fat: cheese, avocado, mayonnaise, and olive oil. You should avoid the foods that are high in carbs, including rice, bread, pastry, and crackers.
|
|
null | false
| null |
What are your favourite films?
|
Le Bossu, a French-language film starring Daniel Auteuil, is an excellent film that I've watched many times. I also enjoy the same actor's great film Jean de Florette and its sequel Manon des Sources, which together tell an incredible tale of betrayal, love, and revenge. If I had to choose a director, I would recommend any film by Christopher Nolan, including Memento, Inception, and Tenet, all of which can only be fully understood through multiple viewings. My final suggestion is entirely different: Dirty Rotten Scoundrels, starring Michael Caine and Steve Martin, is very funny and can be watched many times, even though no one would describe it as a work of high art.
|
|
null | false
| null |
What is a benefit of salsa dancing?
|
Salsa dancing is a great way to meet new people. Whether it is through dance lessons or going out to a salsa club, the social nature of salsa dancing makes it a good way to expand your social circle and increase your social activities.
|
|
null | false
| null |
What is a Cheque card?
|
This type of card is issued by a bank to a customer. It guarantees that a cheque used with the card will be paid if the person issuing the cheque has kept to all the conditions.
|
|
null | false
| null |
Tell me which of these are sweet or salty: juice, potato chips, cake, cheese
|
The salty items are: potato chips, cheese
The sweet items are: juice, cake
|
|
1909.08824
| false
| null |
We first compare the perplexity of CWVAE with baseline methods. Perplexity measures the probability of the model regenerating the exact targets, which is particularly suitable for evaluating model performance on the one-to-many problem BIBREF20. Further, we employ the BLEU score to evaluate the accuracy of generations BIBREF21, and the number of distinct n-grams to evaluate the diversity of generations BIBREF6. The distinct score is normalized to $[0, 1]$ by dividing by the total number of generated tokens.
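For illustration, a minimal sketch of the normalized distinct-n statistic described here (our own reconstruction, not the authors' code):

```python
def distinct_n(generations, n=2):
    """Number of distinct n-grams divided by the total number of generated tokens."""
    ngrams, total_tokens = set(), 0
    for tokens in generations:
        total_tokens += len(tokens)
        ngrams.update(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    return len(ngrams) / total_tokens if total_tokens else 0.0

outputs = [["he", "goes", "home"], ["he", "goes", "to", "work"]]
print(distinct_n(outputs, n=1), distinct_n(outputs, n=2))  # 5/7 and 4/7
```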
Further, we employ the BLEU score to evaluate the accuracy of generations BIBREF21, and the number of distinct n-grams to evaluate the diversity of generations BIBREF6.
|
How do they measure the diversity of inferences?
|
by number of distinct n-grams
|
null | false
| null |
Which of the following races do participants run in? A marathon, a triathlon, an aqua bike?
|
A marathon and a triathlon
|
|
null | false
| 92
|
The dataset statistics can be found in Table TABREF7. The number of examples we collected increases per round, starting with approximately 19k examples for Round 1, to around 47k examples for Round 2, to over 103k examples for Round 3. We collected more data for later rounds not only because that data is likely to be more interesting, but also simply because the base model is better, so it took longer to collect good, verified correct examples of model vulnerabilities.
For each round, we report the model error rate, both on verified and unverified examples. The unverified model error rate captures the percentage of examples where the model disagreed with the writer's target label, but where we are not (yet) sure if the example is correct. The verified model error rate is the percentage of model errors from example pairs that other annotators were able to confirm the correct label for. Note that this error rate represents a straightforward way to evaluate model quality: the lower the model error rate—assuming constant annotator quality and context-difficulty—the better the model.
We observe that model error rates decrease as we progress through rounds. In Round 3, where we included a more diverse range of contexts from various domains, the overall error rate went slightly up compared to the preceding round, but for Wikipedia contexts the error rate decreased substantially. While for the first round roughly 1 in every 5 examples were verified model errors, this quickly dropped over consecutive rounds, and the overall model error rate is less than 1 in 10. On the one hand, this is impressive, and shows how far we have come with just three rounds. On the other hand, it shows that we still have a long way to go if even untrained annotators can fool ensembles of state-of-the-art models with relative ease.
Table TABREF7 also reports the average number of “tries”, i.e., attempts made for each context until a model error was found (or the number of possible tries is exceeded), and the average time this took (in seconds). Again, these metrics represent a useful way to evaluate model quality. We observe that the average tries and average time per verified error both go up as we progress through the rounds. The numbers clearly demonstrate that the rounds are getting increasingly more difficult.
Table 2 also reports the average number of “tries”, i.e., attempts made for each context until a model error was found (or the number of possible tries is exceeded), and the average time this took (in seconds). Again, these metrics are useful for evaluating model quality: observe that the average number of tries and average time per verified error both go up with later rounds.
|
Is the average number of “tries" useful for evaluating model quality?
|
Yes, it is.
|
null | false
| null |
How can you prepare for a longer airplane flight with children?
|
Taking longer airplane flights with children can be stressful. With some planning you can alleviate some of the stress and keep yourself and your children ready for the journey. In preparation for your trip, make sure that you cover the basics of food, hydration, entertainment and comfort. Food: While most longer flights will offer refreshments, not all flights have full meal options available. Prepare by eating a meal before getting on the plane and by packing a range of snack options for you and your children. Hydration: You will likely be served refreshments and drinks several times during longer flights. However, you may find that you need to bring additional water and drink options to ensure proper hydration, especially on very long flights. Entertainment: You want to make sure that your children can engage in plenty of activities on long flights. Most airlines will provide games, movies and music options via on-board entertainment. However, you should supplement these by bringing additional entertainment options for your children. Comfort: Your plane can be too cold or too warm for your preferences, so be prepared for both scenarios.
|
|
null | false
| 154
|
Four publicly available corpora are selected for our analysis. These corpora are all based on Wikipedia, making them more comparable than others, and they have already been used for the evaluation of several QA systems.
WikiQA BIBREF6 comprises questions selected from the Bing search queries, where user click data give the questions and their corresponding Wikipedia articles. The abstracts of these articles are then extracted to create answer candidates. The assumption is made that if many queries lead to the same article, it must contain the answer context; however, this assumption fails in some cases, which makes this dataset more challenging. Since the existence of answer contexts is not guaranteed in this task, it is called answer triggering instead of answer selection.
SelQA BIBREF7 is a product of five annotation tasks through crowdsourcing. It consists of about 8K questions, where half of the questions are paraphrased from the other half, aiming to reduce contextual similarities between questions and answers. Each question is associated with a section in Wikipedia where the answer context is guaranteed, and also with five sections selected from the entire Wikipedia, where the selection is made by the Lucene search engine. This second dataset does not assume the existence of the answer context, so it can be used for the evaluation of answer triggering.
SQuAD BIBREF12 presents 107K+ crowdsourced questions on 536 Wikipedia articles, where the answer contexts are guaranteed to exist within the provided paragraph. It contains annotation of answer phrases as well as the pointers to the sentences including the answer phrases; thus, it can be used for both answer extraction and selection. This corpus also provides human accuracy on those questions, setting up a reasonable upper bound for machines. To avoid overfitting, the evaluation set is not publicly available although system outputs can be evaluated by their provided script.
InfoboxQA BIBREF13 gives 15K+ questions based on the infoboxes from 150 articles in Wikipedia. Each question is crowdsourced and associated with an infobox, where each line of the infobox is considered an answer candidate. This corpus emphasizes the gravity of infoboxes, which summarize arguably the most commonly asked information about those articles. Although the nature of this corpus is different from the others, it can also be used to evaluate answer selection.
INFOBOXQA (Morales et al., 2016) gives 15K+ questions based on the infoboxes from 150 articles in Wikipedia.
|
How many questions are in the INFOBOXQA based on the infoboxes from 150 articles in Wikipedia?
|
15K+ questions.
|
null | false
| null |
Charles Inglis was the second son of Dr. Alexander Inglis (a general practitioner in Worcester) and his first wife, Florence, the daughter of newspaper proprietor John Frederick Feeney. His elder brother was the historian John Alexander Inglis FRSE. Their father, Alexander Inglis, was born in Scotland to a respectable family – his grandfather, John Inglis, was an Admiral in the Royal Navy and had captained HMS Belliqueux at the Battle of Camperdown in 1797.
Charles Inglis was born on 31 July 1875. He was not expected to survive and was hurriedly baptised in his father's drawing room; his mother died from complications eleven days later. His family moved to Cheltenham and Inglis was schooled at Cheltenham College from 1889 to 1894. In his final year, he was elected head boy and received a scholarship to study the Mathematics Tripos at King's College, Cambridge. Inglis was 22nd wrangler when he received his Bachelor of Arts degree in 1897; he remained for a fourth year, achieving first class honours in Mechanical Sciences. Inglis was a keen sportsman and enjoyed long-distance running, walking, mountaineering and sailing. At Cambridge, he nearly achieved a blue for long-distance running but was forced to withdraw from a significant race because of a pulled muscle. He was also a follower of the Cambridge University Rugby Union team, watching their matches at Grange Road.
After graduation, Inglis began work as an apprentice for the civil engineering firm of John Wolfe-Barry & Partners. He worked as a draughtsman in the drawing office for several months before being placed with Alexander Gibb, who was acting as resident engineer on an extension to the Metropolitan District Railway between Whitechapel and Bow. Inglis was responsible for the design and supervision of all thirteen bridges on the route. It was during this time that he began his lifelong study of vibration and its effects on materials, particularly bridges.
|
Given this paragraph about Charles Inglis, give answers to the following: his father's name, his colleague's name, and his grandfather's occupation.
|
Alexander Inglis, Alexander Gibb, Admiral in the Royal Navy
|
null | false
| null |
Name some famous actors from the 1930's
|
Shirley Temple, Bette Davis, Clark Gable, Cary Grant, Lionel Barrymore, Marlene Dietrich, Errol Flynn, Greta Garbo, Katharine Hepburn, Fred Astaire, Barbara Stanwyck, Joan Crawford, Ginger Rogers, Gary Cooper, Carole Lombard, John Gilbert.
|
|
null | false
| null |
The human brain is the central organ of the human nervous system, and with the spinal cord makes up the central nervous system. The brain consists of the cerebrum, the brainstem and the cerebellum. It controls most of the activities of the body, processing, integrating, and coordinating the information it receives from the sense organs, and making decisions as to the instructions sent to the rest of the body. The brain is contained in, and protected by, the skull bones of the head.
|
Given this paragraph about the human brain, how does my brain know that I'm touching something hot?
|
Humans process information from the various sensory systems via the central nervous system. The nerves in your hand are integral to your sense of touch. When your fingers touch something hot, the nerves in your hand quickly transmit that information through the nervous system to the central nervous system. After processing an electrochemical signal up through the spine, the sensory signal is eventually received in the human brain. The brain takes action and, in this case, initiates an instinctual retraction of the hand.
|
null | false
| null |
Starbucks, an American coffee company and coffeehouse chain, has been the subject of multiple controversies. Public and employee criticism against the company has come from around the world, including a wide range of issues from tax avoidance in Europe, anti-competitive practices in the United States, human rights issues in multiple countries, and labor issues involving union busting, questions about pay equity, and ethics in partnerships in Africa.
|
In which regions has Starbucks come under scrutiny?
|
Criticism of Starbucks has been noted in Europe, the United States, as well as Africa.
|
null | false
| null |
The University of California, Los Angeles (UCLA) is a public land-grant research university in Los Angeles, California. UCLA's academic roots were established in 1881 as a normal school then known as the southern branch of the California State Normal School (now San José State University). This school was absorbed with the official founding of UCLA as the Southern Branch of the University of California in 1919, making it the second-oldest of the 10-campus University of California system (after UC Berkeley).
|
List the two oldest University of California campuses separated by a semicolon.
|
UC Berkeley; UCLA
|
null | false
| 153
|
We experimented with discriminative models for our submission. Object-level image representations gave a huge uplift in the evaluation metrics. Bi-directional GRUs consistently performed better than uni-directional LSTMs, with Memory Networks outperforming Late Fusion encoders for encoding conversational history. We also found that fine-tuning GloVe embeddings performed better than keeping them fixed. Our final submission is an ensemble of three discriminative models and achieves an NDCG of 55.46 on test-std.
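As context for the metric, a minimal sketch of NDCG over graded relevance scores (an illustration we supply, not the challenge's official evaluation code):

```python
import math

def dcg(relevances):
    # Discounted cumulative gain: higher-ranked relevant items count more.
    return sum(rel / math.log2(rank + 2) for rank, rel in enumerate(relevances))

def ndcg(ranked_relevances, k=None):
    """DCG of the system ranking normalized by the ideal (sorted) ranking."""
    ideal = dcg(sorted(ranked_relevances, reverse=True)[:k])
    return dcg(ranked_relevances[:k]) / ideal if ideal > 0 else 0.0

print(ndcg([0.0, 1.0, 0.5], k=3))  # imperfect ranking scores below 1.0
```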
We experimented with discriminative models for our submission. Object-level image representations gave a huge uplift in the evaluation metrics. Bi-directional GRUs consistently performed better than uni-directional LSTMs, with Memory Networks outperforming Late Fusion encoders for encoding conversational history.
|
Who performs better in their experiment? Bi-directional GRUs or uni-directional LSTMs?
|
Bi-directional GRUs.
|
null | false
| null |
Does Las Vegas have a professional hockey team?
|
The Vegas Golden Knights are a professional ice hockey team based in the Las Vegas metropolitan area. They compete in the National Hockey League as a member of the Pacific Division in the Western Conference.
|
|
null | false
| null |
What is tertiary analysis?
|
Tertiary analysis is a type of analysis that looks at a company's financial statements to evaluate the financial performance of the company. It focuses on the company's profitability, cash flow, and debt level.
|
|
null | false
| null |
What are some benefits of public transportation?
|
Public transportation is better for the environment. Buses and trains emit less carbon dioxide per person than private transportation. Public transportation is more affordable than private transportation. By spreading out the cost of transportation ownership and maintenance among the large number of public transportation riders, public transportation is cheaper than private transportation.
|
|
null | false
| null |
The Yugoslav Cup was a tournament for which clubs from all tiers of the football pyramid were eligible to enter. In addition, amateur teams put together by individual Yugoslav People's Army garrisons and various factories and industrial plants were also encouraged to enter, which meant that each cup edition could have several thousands of teams in its preliminary stages. These teams would play through a number of qualifying rounds before reaching the first round proper, in which they would be paired with top-flight teams.
|
Write down some points on Yugoslav Cup using given paragraph as a base.
|
1. Clubs from all levels of the football pyramid were eligible to participate in the Yugoslav Cup.
2. Additionally, amateur teams assembled by numerous enterprises and industrial plants as well as individual Yugoslav People's Army garrisons were encouraged to compete, which meant that each cup edition may include thousands of teams in its preliminary rounds.
3. Prior to the first round proper, where they would be partnered with top-flight teams, these teams would compete in a number of qualification rounds.
|
null | false
| null |
The Ways to Love a Man is a studio album by American country artist, Tammy Wynette. It was released on January 26, 1970 via Epic Records and was the seventh studio album in Wynette's career. The disc consisted of 11 tracks which included both original material and cover recordings. Its title track was the album's only single included. It became a number one song on the North American country charts while also reaching chart positions in other musical genres as well. The album itself also reached chart positions on the country albums chart following its release. Critics and journalists gave the album a positive reception in the years that followed.
|
The album the ways to love a man by Tammy Wynette
|
The Ways to Love a Man is a studio album by American country artist, Tammy Wynette. It was released on January 26, 1970 via Epic Records and was the seventh studio album in Wynette's career. The disc consisted of 11 tracks which included both original material and cover recordings.
Its title track was the album's only single included. It became a number one song on the North American country charts while also reaching chart positions in other musical genres as well. The album itself also reached chart positions on the country albums chart following its release. Critics and journalists gave the album a positive reception in the years that followed.
|
null | false
| 161
|
EmotionLines BIBREF6 is a dialogue dataset composed of two subsets, Friends and EmotionPush, according to the source of the dialogues. The former comes from the scripts of the Friends TV sitcom; the other is made up of Facebook Messenger chats. Each subset includes $1,000$ English dialogues, and each dialogue can be further divided into a few consecutive utterances. All the utterances are annotated by five annotators on a crowd-sourcing platform (Amazon Mechanical Turk), and the labeling work is based only on the textual content. Each annotator votes for one of seven emotions, namely Ekman's six basic emotions BIBREF1 plus neutral. If no emotion gets more than three votes, the utterance is marked as “non-neutral”.
For the datasets, there are properties worth additional mention. Although Friends and EmotionPush share the same data format, they are quite different in nature. Friends is a speech-based dataset consisting of annotated dialogues from the TV sitcom. This means most of the utterances are produced by a few main characters. The personality of a character often affects the way of speaking, and therefore “who is the speaker” might provide extra clues for emotion prediction. In contrast, EmotionPush does not have this trait due to its anonymization mechanism. In addition, features such as typos, hyperlinks, and emoji that only appear in chat-based data need some domain-specific techniques to process.
The objective of the challenge is to predict the emotion of each utterance. According to the EmotionX 2019 specification, only four emotions are selected as our label candidates: Joy, Sadness, Anger, and Neutral. These emotions are considered during performance evaluation. The technical details are introduced and discussed in Section SECREF13 and Section SECREF26.
EmotionLines is a dialogue dataset composed of two subsets, Friends and EmotionPush, according to the source of the dialogues.
|
What are the two subsets of EmotionLines?
|
Friends and EmotionPush.
|
null | false
| 13
|
Our future work will be mainly focused on providing a baseline system for interactive learning which will be evaluated on the dataset. We are also planning improvements for dialog management that is used to gain explanations during the data collection. We believe that with conversation about specific aspects of the discussed question it will be possible to gain even more interesting information from users. The other area of our interest is the possibility of improving question answering accuracy on the test questions of the Simple Questions dataset with the extra information contained in the collected dialogs.
Our future work will be mainly focused on providing a baseline system for interactive learning which will be evaluated on the dataset. We are also planning improvements for dialog management that is used to gain explanations during the data collection.
|
What is the author's plan for his future work?
|
Their future work will be mainly focused on providing a baseline system for interactive learning which will be evaluated on the dataset. They are also planning improvements for dialog management that is used to gain explanations during the data collection.
|
null | false
| null |
Contrast bathing can be used to reduce swelling around injuries or to aid recovery from exercise. It can also significantly improve muscle recovery following exercise by reducing the levels of blood lactate concentration. For any injury presenting with palpable swelling and heat, and visible redness - such as a strain/sprain - contrast baths are contraindicated during the acute inflammation stage. Acute inflammation begins at the time of injury and lasts for approximately 72 hours.
|
What is contrast therapy used for?
|
Contrast bathing can be used to reduce swelling around injuries or to aid recovery from exercise.
|
null | false
| 379
|
Neural machine translation (NMT) BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 is a deep learning-based method for translation that has recently shown promising results as an alternative to statistical approaches. NMT systems directly model the probability of the next word in the target sentence simply by conditioning a recurrent neural network on the source sentence and previously generated target words.
While both simple and surprisingly accurate, NMT systems typically need to have very high capacity in order to perform well: Sutskever2014 used a 4-layer LSTM with 1000 hidden units per layer (herein 4 × 1000) and Zhou2016 obtained state-of-the-art results on English → French with a 16-layer LSTM with 512 units per layer. The sheer size of the models requires cutting-edge hardware for training and makes using the models on standard setups very challenging.
This issue of excessively large networks has been observed in several other domains, with much focus on fully-connected and convolutional networks for multi-class classification. Researchers have particularly noted that large networks seem to be necessary for training, but learn redundant representations in the process BIBREF6 . Therefore compressing deep models into smaller networks has been an active area of research. As deep learning systems obtain better results on NLP tasks, compression also becomes an important practical issue with applications such as running deep learning models for speech and translation locally on cell phones.
Existing compression methods generally fall into two categories: (1) pruning and (2) knowledge distillation. Pruning methods BIBREF7 , BIBREF8 , BIBREF9 zero out weights or entire neurons based on an importance criterion: LeCun1990 use (a diagonal approximation to) the Hessian to identify weights whose removal minimally impacts the objective function, while Han2016 remove weights based on thresholding their absolute values. Knowledge distillation approaches BIBREF0 , BIBREF10 , BIBREF1 learn a smaller student network to mimic the original teacher network by minimizing the loss (typically L2 or cross-entropy) between the student and teacher output.
In this work, we investigate knowledge distillation in the context of neural machine translation. We note that NMT differs from previous work which has mainly explored non-recurrent models in the multi-class prediction setting. For NMT, while the model is trained on multi-class prediction at the word-level, it is tasked with predicting complete sequence outputs conditioned on previous decisions. With this difference in mind, we experiment with standard knowledge distillation for NMT and also propose two new versions of the approach that attempt to approximately match the sequence-level (as opposed to word-level) distribution of the teacher network. This sequence-level approximation leads to a simple training procedure wherein the student network is trained on a newly generated dataset that is the result of running beam search with the teacher network.
We run experiments to compress a large state-of-the-art 4 × 1000 LSTM model, and find that with sequence-level knowledge distillation we are able to learn a 2 × 500 LSTM that roughly matches the performance of the full system. We see similar results compressing a 2 × 500 model down to 2 × 100 on a smaller data set. Furthermore, we observe that our proposed approach has other benefits, such as not requiring any beam search at test-time. As a result we are able to perform greedy decoding on the 2 × 500 model 10 times faster than beam search on the 4 × 1000 model with comparable performance. Our student models can even be run efficiently on a standard smartphone. Finally, we apply weight pruning on top of the student network to obtain a model that has 13× fewer parameters than the original teacher model. We have released all the code for the models described in this paper.
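To make the word-level/sequence-level distinction concrete, here is a minimal sketch of a word-level knowledge distillation loss (a generic PyTorch illustration under our own assumptions, not the authors' released code); sequence-level distillation would instead train the student on target sequences produced by running beam search with the teacher:

```python
import torch
import torch.nn.functional as F

def word_level_kd_loss(student_logits, teacher_logits, gold_ids, alpha=0.5, T=1.0):
    """Interpolate cross-entropy on the gold words with cross-entropy against the
    teacher's (temperature-softened) per-word distributions.
    Shapes: logits are (batch, seq_len, vocab); gold_ids is (batch, seq_len)."""
    vocab = student_logits.size(-1)
    ce = F.cross_entropy(student_logits.view(-1, vocab), gold_ids.view(-1))
    teacher_probs = F.softmax(teacher_logits / T, dim=-1)
    student_logp = F.log_softmax(student_logits / T, dim=-1)
    kd = -(teacher_probs * student_logp).sum(-1).mean()
    return alpha * ce + (1 - alpha) * kd

# Toy usage with random tensors standing in for model outputs:
s, t = torch.randn(2, 5, 100), torch.randn(2, 5, 100)
y = torch.randint(0, 100, (2, 5))
print(word_level_kd_loss(s, t, y).item())
```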
We run experiments to compress a large state-of-the-art 4 × 1000 LSTM model, and find that with sequence-level knowledge distillation we are able to learn a 2 × 500 LSTM that roughly matches the performance of the full system.
|
What model do they compress when running experiments?
|
A large state-of-the-art 4 × 1000 LSTM model.
|
null | false
| 249
|
Web 2.0 has had a substantial impact on communication and relationships in today's society. Children and teenagers go online more frequently, at younger ages, and in more diverse ways (e.g. smartphones, laptops and tablets). Although most of teenagers' Internet use is harmless and the benefits of digital communication are evident, the freedom and anonymity experienced online makes young people vulnerable, with cyberbullying being one of the major threats BIBREF0 , BIBREF1 , BIBREF2 .
Bullying is not a new phenomenon, and cyberbullying manifested itself as soon as digital technologies became primary communication tools. On the positive side, social media like blogs, social networking sites (e.g. Facebook) and instant messaging platforms (e.g. WhatsApp) make it possible to communicate with anyone and at any time. Moreover, they are a place where people engage in social interaction, offering the possibility to establish new relationships and maintain existing friendships BIBREF3 , BIBREF4 . On the negative side however, social media increase the risk of children being confronted with threatening situations including grooming or sexually transgressive behaviour, signals of depression and suicidal thoughts, and cyberbullying. Users are reachable 24/7 and are often able to remain anonymous if desired: this makes social media a convenient way for bullies to target their victims outside the school yard.
With regard to cyberbullying, a number of national and international initiatives have been launched over the past few years to increase children's online safety. Examples include KiVa, a Finnish cyberbullying prevention programme, the `Non au harcèlement' campaign in France, Belgian governmental initiatives and helplines (e.g. clicksafe.be, veiligonline.be, mediawijs.be) that provide information about online safety, and so on.
In spite of these efforts, a lot of undesirable and hurtful content remains online. BIBREF1 analysed a body of quantitative research on cyberbullying and observed cybervictimisation rates among teenagers between 20% and 40%. BIBREF5 focused on 12 to 17 year olds living in the United States and found that no less than 72% of them had encountered cyberbullying at least once within the year preceding the questionnaire. BIBREF6 surveyed 9 to 26 year olds in the United States, Canada, the United Kingdom and Australia, and found that 29% of the respondents had ever been victimised online. A study among 2,000 Flemish secondary school students (age 12 to 18) revealed that 11% of them had been bullied online at least once in the six months preceding the survey BIBREF7 . Finally, the 2014 large-scale EU Kids Online Report BIBREF8 published that 20% of 11 to 16 year olds had been exposed to hate messages online. In addition, youngsters were 12% more likely to be exposed to cyberbullying as compared to 2010, clearly demonstrating that cyberbullying is a growing problem.
The prevalence of cybervictimisation depends on the conceptualisation used in describing cyberbullying, but also on research variables such as location and the number and age span of its participants. Nevertheless, the above-mentioned studies demonstrate that online platforms are increasingly used for bullying, which is a cause for concern given its impact. As shown by BIBREF9 , BIBREF10 , BIBREF11 , cyberbullying can have a negative impact on the victim's self-esteem, academic achievement and emotional well-being. BIBREF12 found that self-reported effects of cyberbullying include negative effects on school grades, feelings like sadness, anger, fear, and depression and in extreme cases, cyberbullying could even lead to self-harm and suicidal thoughts.
The above studies demonstrate that cyberbullying is a serious problem the consequences of which can be dramatic. Successful early detection of cyberbullying attempts is therefore of key importance to youngsters' mental well-being. However, the amount of information on the Web makes it practically unfeasible for moderators to monitor all user-generated content manually. To tackle this problem, intelligent systems are required that process this information in a fast way and automatically signal potential threats. This way, moderators can respond quickly and prevent threatening situations from escalating. According to recent research, teenagers are generally in favour of such automatic monitoring, provided that effective follow-up strategies are formulated, and that privacy and autonomy are guaranteed BIBREF13 .
Parental control tools (e.g. NetNanny) already block unsuited or undesirable content and some social networks make use of keyword-based moderation tools (i.e., using lists of profane and insulting words to flag harmful content). However, such approaches typically fail to detect implicit or subtle forms of cyberbullying in which no explicit vocabulary is used. There is therefore a need for intelligent and self-learning systems that can go beyond keyword spotting and hence improve recall of cyberbullying detection.
The ultimate goal of this sort of research is to develop models which could improve manual monitoring for cyberbullying on social networks. We explore the automatic detection of textual signals of cyberbullying, in which it is approached as a complex phenomenon that can be realised in various ways (see Section SECREF15 for a detailed overview). While a lot of the related research focuses on the detection of cyberbullying `attacks', the present study takes into account a broader range of textual signals of cyberbullying, including posts written by bullies, as well as by victims and bystanders.
We propose a machine learning approach to cyberbullying detection that makes use of a linear SVM classifier BIBREF14 , BIBREF15 exploiting a varied set of features. To the best of our knowledge, this is the first approach to the annotation of fine-grained text categories related to cyberbullying and the detection of signals of cyberbullying events. It is also the first elaborate study on automatic cyberbullying detection in Dutch social media. For the present experiments, we focus on an English and Dutch ASKfm corpus, but the methodology adopted is language and genre independent, provided annotated data is available.
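As a rough illustration of this kind of classifier (not the paper's exact feature set, which is far richer), a minimal linear SVM text-classification pipeline could be sketched with scikit-learn as follows:

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

# Placeholder data: 1 = signal of cyberbullying, 0 = neutral.
posts = ["you are pathetic", "nice photo!", "go hurt yourself", "see you at practice"]
labels = [1, 0, 1, 0]

clf = Pipeline([
    ("feats", TfidfVectorizer(ngram_range=(1, 2))),  # word uni-/bigram features
    ("svm", LinearSVC()),                            # linear SVM classifier
])
clf.fit(posts, labels)
print(clf.predict(["you are so pathetic"]))  # -> [1]
```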
The remainder of this paper is structured as follows: the next section presents a theoretical background and an overview of the state of the art in cyberbullying detection, whereas Section SECREF3 describes the corpus. Next, we present the experimental setup and discuss our experimental results. Finally, Section SECREF6 concludes this paper and provides perspectives for further research.
The above studies demonstrate that cyberbullying is a serious problem the consequences of which can be dramatic. Successful early detection of cyberbullying attempts is therefore of key importance to youngsters’ mental well-being.
|
Why is early detection of cyberbullying attempts important to youngsters' mental well-being?
|
Because the prevalence of cyberbullying is a serious problem and can lead to dramatic consequences.
|
null | false
| null |
What is a snook?
|
A snook is an inshore fish species. They cannot be commercially fished, so if you want to eat a snook, you will need to catch it yourself. Snook catch limits are highly regulated by the Florida Fish and Wildlife Conservation Commission.
|
|
null | false
| 145
|
The recent migration in language modeling from LSTM models to transformer-based architectures is justified, as Transformer-XL obtains strong perplexity results. The BERT model also obtains very low pseudo-perplexity scores, although a direct comparison with the unidirectional models is inequitable. Our major contributions in this project are the use of Transformer-XL architectures for the Finnish language in a sub-word setting, and the formulation of pseudo-perplexity for the BERT model. Further comparisons between the transformer architectures can be made by applying them downstream to an ASR task, which will be explored in future work.
Our major contributions in this project are the use of Transformer-XL architectures for the Finnish language in a sub-word setting, and the formulation of pseudo-perplexity for the BERT model.
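For illustration, pseudo-perplexity for a masked LM can be computed by masking each token in turn and exponentiating the average negative log-likelihood, i.e. $\mathrm{PPPL}(W) = \exp \big( -\frac{1}{|W|} \sum _i \log P(w_i \mid W_{\setminus i}) \big)$. A sketch, assuming PyTorch and the HuggingFace transformers library (this is one common formulation, not necessarily the authors' exact one):

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

def pseudo_perplexity(sentence, model, tokenizer):
    # Mask each token in turn; sum the NLL the masked LM assigns
    # to the true token, then exponentiate the average.
    ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    nll, n = 0.0, 0
    for i in range(1, len(ids) - 1):            # skip [CLS] / [SEP]
        masked = ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        nll -= torch.log_softmax(logits, dim=-1)[ids[i]].item()
        n += 1
    return torch.exp(torch.tensor(nll / n)).item()

# Example with a real multilingual checkpoint (illustration only):
# tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
# mlm = AutoModelForMaskedLM.from_pretrained("bert-base-multilingual-cased")
# print(pseudo_perplexity("Hyvää huomenta", mlm, tok))
```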
|
What are the authors’ major contributions in this project?
|
The use of Transformer-XL architectures for the Finnish language in a sub-word setting, and the formulation of pseudo perplexity for the BERT model.
|
1910.00825
| false
| null |
We show all the models' results in Table TABREF24. We observe that SPNet reaches the highest score in both ROUGE and CIC. Both Pointer-Generator and Transformer achieve high ROUGE scores but relatively low CIC scores, which suggests that the baselines have more room for improvement in preserving critical slot information. All the scaffolds we propose can be applied to different neural network models. In this work we select Pointer-Generator as the base model for SPNet because we observe that Transformer only gives a small improvement over Pointer-Generator while having a higher cost in training time and computing resources. We observe that SPNet outperforms the other methods on all the automatic evaluation metrics by a large margin, as it incorporates all three semantic scaffolds. The semantic slot scaffold contributes the most to SPNet's increased performance, bringing the largest increase on all automatic evaluation metrics.
FLOAT SELECTED: Table 1: Automatic evaluation results on MultiWOZ. We use Pointer-Generator as the base model and gradually add different semantic scaffolds.
We show all the models' results in Table TABREF24
FLOAT SELECTED: Table 1: Automatic evaluation results on MultiWOZ. We use Pointer-Generator as the base model and gradually add different semantic scaffolds.
|
By how much does SPNet outperform state-of-the-art abstractive summarization methods on evaluation metrics?
|
SPNet vs best baseline:
ROUGE-1: 90.97 vs 90.68
CIC: 70.45 vs 70.25
|
null | false
| null |
Give me a few recipes you could make with bread, butter, eggs, and cheese where I have a stovetop, toaster, and a frying pan.
|
Here are a few recipes you could make with the ingredients and tools you mentioned.
1. Grilled cheese and sunny-side up eggs:
Warm up your frying pan on the stove top. Heat a small amount of butter in the frying pan until melted, then place two slices of bread side by side in the pan. Place cheese on one slice of bread and heat until the bread is crispy and the cheese has started to melt. Then place the other slice of bread on top of the cheese-covered slice and remove from the frying pan.
Gently crack an egg and place it into the frying pan. Cook until the egg whites are hardened. You may then cook until the yolks are of your desired consistency.
2. Toast and cheese omelette:
Place bread into toaster and cook to desired level. Spread desired amount of butter on bread.
Warm up your frying pan on the stove top. Heat a small amount of butter on the frying pan until melted. In the meantime, gently crack eggs into a bowl and whisk eggs. Place whisked eggs into heated frying pan until eggs have started to harden. Place cheese onto eggs and fold.
3. Scrambled eggs with toast:
Place bread into toaster and cook to desired level. Spread desired amount of butter on bread.
Warm up your frying pan on the stove top. Heat a small amount of butter on the frying pan until melted. In the meantime, gently crack eggs into a bowl and whisk eggs. Place whisked eggs into heated frying pan and consistently mix. Place cheese into pan and continue mixing until eggs set.
|
|
null | false
| null |
Give me a bulleted list of the top five most followed people on instagram.
|
1. @cristiano
2. @leomessi
3. @selenagomez
4. @kyliejenner
5. @therock
|
|
null | false
| null |
What is an experiment?
|
An experiment is a scientific process that uses discovery and hypothesis testing to investigate a claim. A series of actions is executed and then a person observes the outcomes of those actions. Experiments also give us insight into cause and effect. The best way to develop an experiment is to come up with a set of questions to test a hypothesis about something.
|
|
null | false
| null |
Which is stronger - a lion or a tiger?
|
It really depends on what attributes are being judged. If we look at one of the most telling traits between the two - the family - the lion would be the clear winner. Lions live in prides and hunt as a family unit. They also rely on the pride to protect their young and their territory from invaders. A tiger is a solitary animal, so it is independent and does not have the social structure of a lion.
|
|
null | false
| null |
Who is the greatest tennis player of all time?
|
Serena Williams
|
|
1904.05862
| true
| null |
What is the impact of pre-trained representations with less transcribed data? In order to get a better understanding of this, we train acoustic models with different amounts of labeled training data and measure accuracy with and without pre-trained representations (log-mel filterbanks). The pre-trained representations are trained on the full Librispeech corpus and we measure accuracy in terms of WER when decoding with a 4-gram language model. Figure shows that pre-training reduces WER by 32% on nov93dev when only about eight hours of transcribed data is available. Pre-training only on the audio data of WSJ ( WSJ) performs worse compared to the much larger Librispeech ( Libri). This further confirms that pre-training on more data is crucial to good performance.
What is the impact of pre-trained representations with less transcribed data? In order to get a better understanding of this, we train acoustic models with different amounts of labeled training data and measure accuracy with and without pre-trained representations (log-mel filterbanks). The pre-trained representations are trained on the full Librispeech corpus and we measure accuracy in terms of WER when decoding with a 4-gram language model. Figure shows that pre-training reduces WER by 32% on nov93dev when only about eight hours of transcribed data is available. Pre-training only on the audio data of WSJ ( WSJ) performs worse compared to the much larger Librispeech ( Libri). This further confirms that pre-training on more data is crucial to good performance.
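For reference, WER is the word-level edit distance between hypothesis and reference divided by the number of reference words; a minimal sketch:

```python
def wer(ref: str, hyp: str) -> float:
    # Word error rate: Levenshtein distance over words, normalised by
    # the number of reference words.
    r, h = ref.split(), hyp.split()
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = d[i - 1][j - 1] + (r[i - 1] != h[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(r)][len(h)] / len(r)

print(wer("the cat sat", "the cat sit"))  # 1 substitution / 3 words = 0.33...
```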
|
Do they explore how much training data is needed for which magnitude of improvement for WER?
|
Yes.
|
null | false
| 164
|
Preparing the data can be a complex and time-consuming process, often involving working with partially or wholly unstructured data. The pre-processing steps have a big impact on the operationalizations, subsequent analyses and reproducibility efforts BIBREF30 , and they are usually tightly linked to what we intend to measure. Unfortunately, these steps tend to be underreported, but documenting the pre-processing choices made is essential and is analogous to recording the decisions taken during the production of a scholarly edition or protocols in biomedical research. Data may also vary enormously in quality, depending on how it has been generated. Many historians, for example, work with text produced from an analogue original using Optical Character Recognition (OCR). Often, there will be limited information available regarding the accuracy of the OCR, and the degree of accuracy may even vary within a single corpus (e.g. where digitized text has been produced over a period of years, and the software has gradually improved). The first step, then, is to try to correct for common OCR errors. These will vary depending on the type of text, the date at which the `original' was produced, and the nature of the font and typesetting.
One step that almost everyone takes is to tokenize the original character sequence into the words and word-like units. Tokenization is a more subtle and more powerful process than people expect. It is often done using regular expressions or scripts that have been circulating within the NLP community. Tokenization heuristics, however, can be badly confused by emoticons, creative orthography (e.g., U$A, sh!t), and missing whitespace. Multi-word terms are also challenging. Treating them as a single unit can dramatically alter the patterns in text. Many words that are individually ambiguous have clear, unmistakable meanings as terms, like “black hole" or “European Union". However, deciding what constitutes a multi-word term is a difficult problem. In writing systems like Chinese, tokenization is a research problem in its own right.
Beyond tokenization, common steps include lowercasing, removing punctuation, stemming (removing suffixes), lemmatization (converting inflections to a base lemma), and normalization, which has never been clearly defined, but often includes grouping abbreviations like “U.S.A." and “USA", ordinals like “1st" and “first", and variant spellings like “noooooo". The main goal of these steps is to improve the ratio of tokens (individual occurrences) to types (the distinct things in a corpus). Each step requires making additional assumptions about which distinctions are relevant: is “apple” different from “Apple”? Is “burnt” different from “burned”? Is “cool" different from “coooool"? Sometimes these steps can actively hide useful patterns, like social meaning BIBREF32 . Some of us therefore try do as little modification as possible.
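A minimal sketch of the first few steps (regex tokenisation plus lowercasing, with the token/type ratio as a sanity check; stemming, lemmatisation and stoplist removal would slot in the same way):

```python
import re
from collections import Counter

def preprocess(text: str, lowercase: bool = True) -> list:
    # A deliberately simple regex tokeniser; as noted above, real
    # tokenisation heuristics need far more care (emoticons, "U$A", ...).
    tokens = re.findall(r"[a-zA-Z']+", text)
    return [t.lower() for t in tokens] if lowercase else tokens

text = "Apple bought an apple. The apples were Apple-branded."
tokens = preprocess(text)
types = Counter(tokens)
print(len(tokens), len(types))   # token count vs. distinct type count
print(types.most_common(3))      # lowercasing merged "Apple" and "apple"
```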
From a multilingual perspective, English and Chinese have an unusually simple inflectional system, and so it is statistically reasonable to treat each inflection as a unique word type. Romance languages have considerably more inflections than English; many indigenous North American languages have still more. For these languages, unseen data is far more likely to include previously-unseen inflections, and therefore, dealing with inflections is more important. On the other hand, the resources for handling inflections vary greatly by language, with European languages dominating the attention of the computational linguistics community thus far.
We sometimes also remove words that are not relevant to our goals, for example based on vocabulary frequencies. We construct a “stoplist” of words that we are not interested in. If we are looking for semantic themes we might remove function words like determiners and prepositions. If we are looking for author-specific styles, we might remove all words except function words. Some words are generally meaningful but too frequent to be useful within a specific collection. We sometimes also remove very infrequent words: their occurrences are too low to yield robust patterns, and removing them helps reduce the vocabulary size.
The choice of processing steps can be guided by theory or knowledge about the domain as well as experimental investigation. When we have labels, predictive accuracy of a model is a way to assess the effect of the processing steps. In unsupervised settings, it is more challenging to understand the effects of different steps. Inferences drawn from unsupervised settings can be sensitive to pre-processing choices BIBREF33 . Stemming has been found to provide little measurable benefits for topic modeling and can sometimes even be harmful BIBREF34 . All in all, this again highlights the need to document these steps.
Finally, we can also mark up the data, e.g., by identifying entities (people, places, organizations, etc.) or parts of speech. Although many NLP tools are available for such tasks, they are often challenged by linguistic variation, such as orthographic variation in historical texts BIBREF35 and social media BIBREF32 . Moreover, the performance of NLP tools often drops when applying them outside the training domain, such as applying tools developed on newswire texts to texts written by younger authors BIBREF36 . Problems (e.g., disambiguation in named entity recognition) are sometimes resolved using considerable manual intervention. This combination of the automated and the manual, however, becomes more difficult as the scale of the data increases, and the `certainty' brought by the latter may have to be abandoned.
Beyond tokenization, common steps include lowercasing, removing punctuation, stemming (removing suffixes), lemmatization (converting inflections to a base lemma), and normalization, which has never been clearly defined, but often includes grouping abbreviations like “U.S.A.” and “USA”, ordinals like “1st” and “first”, and variant spellings like “noooooo”.
|
In data pre-processing, what are other common steps apart from tokenization?
|
Lowercasing, removing punctuation, stemming (removing suffixes), lemmatization (converting inflections to a base lemma), and normalization.
|
null | false
| 349
|
Generating descriptions for multimedia contents such as images and videos, so-called Image Captioning, is helpful for e-commerce companies or news agencies. For instance, in the e-commerce field, people will no longer need to put much effort into understanding and describing products' images on their websites because image contents can be recognized and descriptions generated automatically. Inspired by Horus BIBREF0 , an Image Captioning system can also be integrated into a wearable device, which is able to capture surrounding images and generate descriptions as sound in real time to guide people with visual impairments.
Image Captioning has attracted attention from researchers in recent years BIBREF1, BIBREF2, BIBREF3, and there have been promising attempts to deal with the language barrier in this task by extending existing dataset captions into different languages BIBREF3, BIBREF4.
In this study, generating image captions in the Vietnamese language is put into consideration. One straightforward approach to this task is to translate English captions into Vietnamese, either by humans or by using a machine translation tool such as Google Translate. With the method of translating directly from English to Vietnamese, we found that the descriptions are sometimes confusing and unnatural to native speakers. Moreover, image understanding is culturally dependent: people in the West may grasp images differently and make different vocabulary choices when describing contexts. For instance, in Fig. FIGREF2, one MS-COCO English caption describes "a baseball player in motion of pitching", which makes sense and accurately captures the main activity in the image. Though it sounds sensible in English, the sentence becomes less meaningful when we try to translate it into Vietnamese. One attempt at translating the sentence was performed with Google Translate, and the result was not as expected.
Therefore, we take the approach of constructing a Vietnamese Image Captioning dataset with descriptions written manually by humans. Composed by Vietnamese people, the sentences are more natural and friendlier to Vietnamese users. The main resources we used from MS-COCO for our dataset are images. Besides, we chose to have our dataset focus on the sportball category for several reasons:
By concentrating on a specific domain we are more likely to improve performance of the Image Captioning models. We expect our dataset can be used to confirm or reject this hypothesis.
Sportball Image Captioning can be used in certain sports applications, such as supporting journalists in describing large numbers of images for their articles.
The primary contributions of this paper are as follows:
Firstly, we introduce UIT-ViIC, the first Vietnamese dataset extending MS-COCO with manually written captions for Image Captioning. UIT-ViIC is published for research purposes.
Secondly, we introduce our annotation tool for dataset construction, which is also published to help annotators conveniently create captions.
Finally, we conduct experiments to evaluate state-of-the-art models (evaluated on English dataset) on UIT-ViIC dataset, then we analyze the performance results to have insights into our corpus.
The structure of the paper is organized as follows. Related documents and studies are presented in Section SECREF2. UIT-ViIC dataset creation is described in Section SECREF3. Section SECREF4 describes the methods we implement. The experimental results and analysis are presented in Section SECREF5. Conclusion and future work are deduced in Section SECREF6.
The main resources we used from MS-COCO for our dataset are images.
|
What are the main resources used from MS-COCO?
|
Images are used.
|
null | false
| null |
At international level, Hagi played for the Romania national team in three FIFA World Cups, in 1990, 1994 (where he was named in the World Cup All-Star Team after helping his nation to the quarter-finals of the tournament) and 1998; as well as in three UEFA European Championships, in 1984, 1996 and 2000. He won a total of 124 caps for Romania between 1983 and 2000, making him the second-most capped Romanian player of all time, behind only Dorinel Munteanu; he is also the joint all-time leading goalscorer of the Romania national side (alongside Adrian Mutu) with 35 goals.
|
Given this paragraph about Gheorghe Hagi, tell me what his top achievements with the Romanian national football team were.
|
Gheorghe Hagi is arguably the best Romanian football player, sharing the record for most goals scored for the national team (alongside Adrian Mutu). He participated in 3 World Cups and 3 European Championships, and he was awarded a spot in the World Cup All-Star Team in 1994.
|
null | false
| null |
What are 5 different edible uses for Tomatoes?
|
- Marinara Sauce
- Salsa
- Hamburger Topping
- Bruschetta
- BLT
|
|
null | false
| null |
How do I do a headstand?
|
Start in a tabletop position with your knees on the ground and your hands placed underneath your shoulders. Draw your chin into your chest and lower the crown of your head onto the ground. Curl your toes under and lift your knees up off the ground. Place each knee on your tricep. Push into the heel of your palm to lift your knees off your triceps. With your low core deeply engaged, continue to lift your knees higher until your legs are fully extended. Make sure you keep your breath slow while inverted, and then slowly reverse the steps. You have now done your first headstand!
|
|
null | false
| null |
In 1066, several rival claimants to the English throne emerged. Among them were Harold Godwinson (recognised as king by the Witenagemot after the death of Edward the Confessor), Harald Hardrada (King of Norway who claimed to be the rightful heir of Harthacnut) and Duke William II of Normandy (vassal to the King of France, and first cousin once-removed of Edward the Confessor). Harald and William both invaded separately in 1066. Godwinson successfully repelled the invasion by Hardrada, but ultimately lost the throne of England in the Norman conquest of England.
|
Which leaders claimed the English throne in 1066?
|
Harold Godwinson, Harald Hardrada, and Duke William II of Normandy all claimed the English throne in 1066.
|
null | false
| null |
What are the characteristics of chihuahuas?
|
Chihuahuas are among the smallest of the dog breeds. Small in size but confident and big-dog natured. They originated from Mexico with lineage going back to ancient kingdoms of pre-Columbian times. Chihuahuas are extremely loyal, known for having close attachments with their owners. Their distinct features include oversized, pointy ears and expressive eyes. Coats come in many colors ranging from white to brown to black with many different pattern variations, and will be either short or long. Short hair chihuahuas are known for shivering and needing sweaters in cold weather. They are ideal city dogs given their small size.
|
|
null | false
| null |
Richton-Perry County Airport covers an area of 55 acres (22 ha) at an elevation of 167 feet (51 m) above mean sea level. It has one runway designated 18/36 with an asphalt surface measuring 3,000 by 60 feet (914 x 18 m).
For the 12-month period ending November 17, 2011, the airport had 1,400 general aviation aircraft operations, an average of 116 per month. At that time there were two single-engine aircraft based at this airport.
|
How large is the runway of the Richton-Perry County Airport?
|
The single runway at the Richton-Perry County Airport measures 3,000 by 60 feet, which works out to an area of 180,000 square feet (3,000 ft × 60 ft = 180,000 sq ft).
|
null | false
| null |
How do you make an Old Fashioned Cocktail?
|
Making an Old Fashioned consists of five steps. The ingredients for an Old Fashioned cocktail include your choice of bourbon or rye, an orange, aromatic bitters, a sugar cube, a dash of soda water, Luxardo cherry and ice.
First, grab your cocktail glass and add two dashes of aromatic bitters along with a spritz of soda water and the sugar cube.
Second, add an ounce and a half of your choice of bourbon or rye.
Third, add ice and stir for around 30 seconds.
Fourth, peel an orange, express the oils over the glass and drop it in.
Fifth, add the Luxardo cherry and enjoy.
|
|
null | false
| null |
When did Lebron James become the NBA's all time leading scorer?
|
Lebron James passed Kareem Abdul-Jabbar as the NBA's all time leading scorer on February 7, 2023.
|
|
1911.03350
| false
| null |
One of the most popular metrics for QG, BLEU BIBREF21 provides a set of measures to compare automatically generated texts against one or more references. In particular, BLEU-N is based on the count of overlapping n-grams between the candidate and its corresponding reference(s).
Within the field of Computational Creativity, Diversity is considered a desirable property BIBREF31. Indeed, always generating the same question, such as “What is the meaning of the universe?", would be an undesirable behavior, reminiscent of the “collapse mode" observed in Generative Adversarial Networks (GAN) BIBREF32. Therefore, we adopt Self-BLEU, originally proposed by BIBREF33, as a measure of diversity for the generated text sequences. Self-BLEU is computed as follows: for each generated sentence $s_i$, a BLEU score is computed using $s_i$ as the hypothesis while the other generated sentences are used as references. When averaged over all generated sentences, it thus provides a measure of how diverse the sentences are. Lower Self-BLEU scores indicate more diversity. We refer to these metrics as Self-B* throughout this paper.
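A sketch of this computation, assuming NLTK's BLEU implementation, whitespace tokenisation, and at least two generated sentences:

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def self_bleu(sentences, n=4):
    # For each generated sentence, compute BLEU against all the others
    # as references; average over the set. Lower = more diverse.
    smooth = SmoothingFunction().method1
    weights = tuple([1.0 / n] * n)
    scores = []
    for i, hyp in enumerate(sentences):
        refs = [s.split() for j, s in enumerate(sentences) if j != i]
        scores.append(sentence_bleu(refs, hyp.split(),
                                    weights=weights,
                                    smoothing_function=smooth))
    return sum(scores) / len(scores)

print(self_bleu(["what is the meaning of the universe",
                 "what is the meaning of the universe",
                 "who wrote this novel"]))  # near-duplicates inflate the score
```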
Therefore, given a question-context pair as input to a QA model, two types of metrics can be computed:
n-gram based score: measuring the average overlap between the retrieved answer and the ground truth.
probability score: the confidence of the QA model for its retrieved answer; this corresponds to the probability of being the correct answer assigned by the QA model to the retrieved answer.
One of the most popular metrics for QG, BLEU BIBREF21 provides a set of measures to compare automatically generated texts against one or more references. In particular, BLEU-N is based on the count of overlapping n-grams between the candidate and its corresponding reference(s).
Therefore, we adopt Self-BLEU, originally proposed by BIBREF33, as a measure of diversity for the generated text sequences.
Therefore, given a question-context pair as input to a QA model, two types of metrics can be computed:
n-gram based score: measuring the average overlap between the retrieved answer and the ground truth.
probability score: the confidence of the QA model for its retrieved answer; this corresponds to the probability of being the correct answer assigned by the QA model to the retrieved answer.
|
What automated metrics do the authors investigate?
|
The answers are shown as follows:
* BLEU
* Self-BLEU
* n-gram based score
* probability score
|
null | false
| null |
In which years did the New Zealand women's cricket team win the World Cup?
|
2000
|
|
null | false
| null |
Who is the top baseball player, based on offensive categories, who is not in the Major League Baseball Hall of Fame and has no chance of getting in while alive?
|
While there is an exhaustive number of ways one could approach this answer, there is arguably only one player across all of the major offensive categories that is not in the hall of fame. That player is Pete Rose. Players like Barry Bonds and Albert Pujols may have better overall statistics in the power categories of baseball, but they still have an outside chance to get into the hall of fame, whereas Pete Rose has been banned for life by Major League Baseball and does not have any current pathway into the hall of fame. Speculation is that this will change posthumously, but the executives of Major League Baseball will not consider any change to this status while he is alive.
|
|
null | false
| null |
How did Jim Henson die?
|
The famous puppeteer succumbed to toxic shock syndrome, resulting from abscesses in his lungs.
|
|
null | false
| null |
What should I think about when driving a car in a city?
|
Cities can be crowded and there are many things to consider when driving in a city. In cities, the driving pace can vary between slow and fast speeds, and one needs to be extra attentive. Other vehicles may switch lanes hastily without indicating, or brake with no warning. In some cities, one also has trams to be aware of. Pedestrians, bicyclists, and other electric mobility vehicles might appear unexpectedly in traffic. Another thing to take note of is the time of day, and day of the week, as traffic jams can occur when too many people are in traffic at the same time. In some situations, it might be faster to walk, cycle, or take public transportation than drive in cities. Finally, parking and tolls might be expensive when there is a lot of traffic, yet free when cities tend to be empty.
|
|
null | false
| null |
Tell me about Crater Lake National Park
|
Crater Lake National Park, located in Oregon, is an astonishing national park full of beautiful views and pristine natural surroundings. The park itself was officially founded in 1902, but prior to that it was considered a holy place by Native Americans for hundreds of years. The water in Crater Lake is a deep, dark blue and remains relatively cold year round. Crater Lake is actually the remnant of an imploded volcano that filled with water over the centuries and is, in fact, the deepest lake in the United States. The caldera itself is 6 miles by 5 miles and more than a half mile deep. The rim offers some spectacular views of the lake and the surrounding caldera formation.
|
|
null | false
| null |
Give me a list of the stations on the Island Line of Hong Kong's MTR network
|
- Kennedy Town
- HKU
- Sai Ying Pun
- Sheung Wan
- Central
- Admiralty
- Wan Chai
- Causeway Bay
- Tin Hau
- Fortress Hill
- North Point
- Quarry Bay
- Tai Koo
- Sai Wan Ho
- Shau Kei Wan
- Heng Fa Chuen
- Chai Wan
|
|
null | false
| null |
Human presence in Pre-Columbian Mexico goes back to 8,000 BCE and it went to become one of the world's six cradles of civilization. In particular, the Mesoamerican region was home to many intertwined civilizations, including the Olmec, Maya, Zapotec, Teotihuacan, and Purepecha. Last were the Aztecs, who dominated the region in the century before European contact. In 1521, the Spanish Empire and its indigenous allies conquered the Aztec Empire from its capital Tenochtitlan (now Mexico City), establishing the colony of New Spain.
|
How old were the indigenous civilizations in Mexico prior to European contact?
|
There were many civilizations that came prior to European contact, some that date back as early as 8,000 BCE. The last civilization to dominate Mexico a century prior to European contact were the Aztecs, whose capital, Tenochtitlan, became what is now Mexico City.
|
null | false
| 115
|
Some eye-tracking corpora of natural reading (e.g. the Dundee BIBREF2, Provo BIBREF3 and GECO corpus BIBREF4), and a few EEG corpora (for example, the UCL corpus BIBREF5) are available. It has been shown that this type of cognitive processing data is useful for improving and evaluating NLP methods (e.g. barrett2018sequence, hollenstein2019cognival, hale2018finding). However, before the Zurich Cognitive Language Processing Corpus (ZuCo 1.0), there was no available data for simultaneous eye-tracking and EEG recordings of natural reading. dimigen2011coregistration studied the linguistic effects of eye movements and EEG co-registration in natural reading and showed that they accurately represent lexical processing. Moreover, the simultaneous recordings are crucial to extract word-level brain activity signals.
While the above mentioned studies analyze and leverage natural reading, some NLP work has used eye-tracking during annotation (but, as of yet, not EEG data). mishra2016predicting and joshi2014measuring recorded eye-tracking during binary sentiment annotation (positive/negative). This data was used to determine the annotation complexity of the text passages based on eye movement metrics and for sarcasm detection BIBREF6. Moreover, eye-tracking has been used to analyze the word sense annotation process in Hindi BIBREF7, named entity annotation in Japanese BIBREF8, and to leverage annotator gaze behaviour for coreference resolution BIBREF9. Finally, tomanek2010cognitive used eye-tracking data during entity annotation to build a cost model for active learning. However, until now there was no available data or research that analyzes the differences in the human processing of normal reading versus annotation.
However, until now there was no available data or research that analyzes the differences in the human processing of normal reading versus annotation.
|
Although many people have studied eye-tracking and EEG recordings of natural reading, what shortcoming still existed?
|
There was no available data or research that analyzes the differences in the human processing of normal reading versus annotation.
|
null | false
| null |
In the series A Song of Ice and Fire, who is the founder of House Banefort?
|
Neither the name nor the identity of House Banefort's founder is known. The founder is referred to only as the Hooded Man.
|
|
null | false
| null |
Tell me whether these popular sports are more popular in the United States of America or in Australia.
|
Basketball, American Football, Baseball, and Lacrosse are more popular in the United States of America. Soccer, Australian Rules Football, Rugby, and Cricket are more popular in Australia.
|
|
null | false
| 127
|
We use the pre-trained glove.840B.300d embeddings BIBREF20 for all 6 attacking methods. For FGM, FGVM and DeepFool, we tune $\epsilon $, the overshoot hyper-parameter (Section SECREF9) and keep the iterative step $n$ static (5). For TYC, besides $\epsilon $ we also tune the upper limit of flipped words, ranging from 10%–100% of the maximum length. For HotFlip, we tune only the upper limit of flipped words, in the range of $[1, 7]$.
We pre-train the AutoEncoder to reconstruct sentences in the different datasets, as we found that this improves the quality of the generated adversarial examples. During pre-training, we tune batch size, number of layers and number of units, and stop the training after the performance on the development sets stops improving for 20K steps. The model is then initialised with the pre-trained weights and trained based on the objectives defined in Section SECREF11. During the training process we tune $\lambda _{ae}$ and $\lambda _{seq}$ while keeping the batch size (32) and learning rate ($1e^{-4}$) fixed. As part of our preliminary study, we also tested different values for the Gumbel-softmax temperature $\tau $ and found that $\tau =0.1$ performs best. Embeddings are fixed throughout all training processes.
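For illustration, a Gumbel-softmax sample at the temperature found to work best ($\tau =0.1$) can be sketched as follows, assuming PyTorch (a generic sketch, not the exact implementation used here):

```python
import torch

def gumbel_softmax_sample(logits, tau=0.1):
    # Add Gumbel(0, 1) noise and take a temperature-controlled softmax;
    # a low tau pushes the sample toward a (near) one-hot word choice
    # while keeping the operation differentiable.
    u = torch.rand_like(logits)
    gumbel = -torch.log(-torch.log(u + 1e-20) + 1e-20)
    return torch.softmax((logits + gumbel) / tau, dim=-1)

print(gumbel_softmax_sample(torch.tensor([1.0, 2.0, 0.5])))
```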
For target classifiers, we tune batch size, learning rate, number of layers, number of units, attention size (BiLSTM$+$A), filter sizes and dropout probability (CNN). For BERT, we use the default fine-tuning hyper-parameter values except for batch size, where we adjust based on memory consumption. Note that after the target classifiers are trained their weights are not updated when training or testing the attacking methods.
For target classifiers, we tune batch size, learning rate, number of layers, number of units, attention size (for BiLSTM+A), filter sizes and dropout probability (for CNN).
|
What parameters are tuned for target classifiers?
|
Batch size, learning rate, number of layers, number of units, attention size (for BiLSTM+A), filter sizes and dropout probability (for CNN).
|
2001.08051
| false
| null |
FLOAT SELECTED: Table 4: List of the indicators used by human experts to evaluate specific linguistic competences.
The tables report some statistics extracted from both the written and spoken data collected so far in all the campaigns. Each written or spoken item received a total score by human experts, computed by summing up the scores related to 6 indicators in 2017/2018 (from 3 to 6 in the 2016 campaign, according to the proficiency levels and the type of test). Each indicator can assume a value 0, 1, 2, corresponding to bad, medium, good, respectively.
FLOAT SELECTED: Table 4: List of the indicators used by human experts to evaluate specific linguistic competences.
Each written or spoken item received a total score by human experts, computed by summing up the scores related to 6 indicators in 2017/2018 (from 3 to 6 in the 2016 campaign, according to the proficiency levels and the type of test). Each indicator can assume a value 0, 1, 2, corresponding to bad, medium, good, respectively.
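As a toy illustration of the scoring scheme (six indicators, each scored 0 = bad, 1 = medium, 2 = good, summed into a total):

```python
# Indicator names as used by the human experts (see Table 4).
INDICATORS = ["lexical richness", "pronunciation and fluency",
              "syntactical correctness", "fulfillment of delivery",
              "coherence and cohesion",
              "communicative, descriptive, narrative skills"]

scores = {name: 2 for name in INDICATORS}   # example: everything rated "good"
total = sum(scores.values())                # total score in [0, 12]
print(total)                                # -> 12
```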
|
What proficiency indicators are used to score the utterances?
|
6 indicators:
- lexical richness
- pronunciation and fluency
- syntactical correctness
- fulfillment of delivery
- coherence and cohesion
- communicative, descriptive, narrative skills
|
null | false
| null |
There is no universally-agreed rule as to who plays first, but in this article the convention that X plays first is used.
Players soon discover that the best play from both parties leads to a draw. Hence, tic-tac-toe is often played by young children who may not have discovered the optimal strategy.
Because of the simplicity of tic-tac-toe, it is often used as a pedagogical tool for teaching the concepts of good sportsmanship and the branch of artificial intelligence that deals with the searching of game trees. It is straightforward to write a computer program to play tic-tac-toe perfectly or to enumerate the 765 essentially different positions (the state space complexity) or the 26,830 possible games up to rotations and reflections (the game tree complexity) on this space. If played optimally by both players, the game always ends in a draw, making tic-tac-toe a futile game.
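As noted above, a perfect player is straightforward to program; this minimal minimax sketch searches the full game tree and confirms the forced draw:

```python
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for i, j, k in LINES:
        if board[i] != ' ' and board[i] == board[j] == board[k]:
            return board[i]
    return None

def minimax(board, player):
    # Value of the position for X: +1 X wins, -1 O wins, 0 draw.
    w = winner(board)
    if w is not None:
        return 1 if w == 'X' else -1
    if ' ' not in board:
        return 0
    nxt = 'O' if player == 'X' else 'X'
    values = [minimax(board[:i] + player + board[i + 1:], nxt)
              for i, c in enumerate(board) if c == ' ']
    return max(values) if player == 'X' else min(values)

print(minimax(' ' * 9, 'X'))   # 0 -> perfect play by both sides is a draw
```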
|
How to play tic-tac-toe
|
Tic-tac-toe (American English), noughts and crosses (Commonwealth English), or Xs and Os (Canadian or Irish English) is a paper-and-pencil game for two players, who take turns marking the spaces of a three-by-three grid with X or O. The player who succeeds in placing three of their marks in a horizontal, vertical, or diagonal row is the winner. It is a solved game, with a forced draw assuming best play from both players.
|
null | false
| null |
Heat shock proteins (HSP) are a family of proteins produced by cells in response to exposure to stressful conditions. They were first described in relation to heat shock, but are now known to also be expressed during other stresses including exposure to cold, UV light and during wound healing or tissue remodeling. Many members of this group perform chaperone functions by stabilizing new proteins to ensure correct folding or by helping to refold proteins that were damaged by the cell stress. This increase in expression is transcriptionally regulated. The dramatic upregulation of the heat shock proteins is a key part of the heat shock response and is induced primarily by heat shock factor (HSF). HSPs are found in virtually all living organisms, from bacteria to humans.
|
What are Heat Shock proteins?
|
Heat shock proteins (HSP) are a family of proteins produced by cells in response to exposure to stressful conditions.
|
null | false
| null |
What is the origin of orange wine?
|
Orange wine was first introduced in Georgia and dates back thousands of years. It is a variant of white wine in which the grape skins are not removed after crushing the grapes. Instead, the skins are left to ferment with the grape juice, similarly to red wine. The style of skin-contact white wine was adopted by Italian and Slovenian winemakers after visiting Georgia. It then spread to other European countries.
Skin-fermented white wines were common up until the 1960s, when fresh white wines started to dominate the market.
The term orange wine was coined only in 2004 by a British wine importer.
In Georgia skin-contact white wine is historically known as amber wine.
|
|
1912.13109
| false
| null |
We used the HEOT dataset, obtained from a past study by Mathur et al., in which they annotated a set of cleaned tweets collected from Twitter covering conversations in the Indian subcontinent. A labelled dataset of corresponding English tweets was also obtained from a study conducted by Davidson et al. This dataset was important for employing transfer learning in our task, since the amount of labelled data was very small. A basic summary and examples of the data from the dataset are below:
We used the HEOT dataset, obtained from a past study by Mathur et al., in which they annotated a set of cleaned tweets collected from Twitter covering conversations in the Indian subcontinent. A labelled dataset of corresponding English tweets was also obtained from a study conducted by Davidson et al. This dataset was important for employing transfer learning in our task, since the amount of labelled data was very small.
|
What dataset is used?
|
The answers are shown as follows:
* HEOT
|
null | false
| null |
The Vietnam War (also known by other names) was a conflict in Vietnam, Laos, and Cambodia from 1 November 1955 to the fall of Saigon on 30 April 1975. It was the second of the Indochina Wars and was officially fought between North Vietnam and South Vietnam
|
When did the Vietnam War Start?
|
The Vietnam War started on 1 November 1955
|
null | false
| null |
Which is a species of fish? Toadfish or Toad
|
Toadfish
|
|
null | false
| null |
What is "A Song of Ice and Fire"?
|
"A Song of Ice and Fire" is a medieval fantasy book series created by an American author George R.R. Martin who has been coined the American J.R.R. Tolkien.
It consists of five books published so far:
- "A Game Of Thrones"
- "A Clash Of Kings"
- "A Storm of Swords"
- "A Feast for Crows"
- "A Dance with Dragons"
The series remains unfinished with the last two books in the series still unpublished:
- "The Winds Of Winter"
- "A Dream Of Spring"
|
|
null | false
| null |
South Korea, officially the Republic of Korea (ROK), is a country in East Asia. It constitutes the southern part of the Korean Peninsula and borders North Korea. The country's western border is formed by the Yellow Sea, while its eastern border is defined by the Sea of Japan. South Korea claims to be the sole legitimate government of the entire peninsula and adjacent islands. It has a population of 51.75 million, of which roughly half live in the Seoul Capital Area, the fourth most populous metropolitan area in the world. Other major cities include Incheon, Busan, and Daegu.
|
Given this paragraph about South Korea, which body of water forms the country's western border and which body of water borders it's eastern border?
|
The western border of South Korea is formed by the Yellow Sea, while the eastern border is formed by the Sea of Japan.
|
null | false
| 353
|
There are three main considerations at play when designing software surrounding the Phoenix event data pipeline: modularity, composability, and reproducibility. In short, no one part of the pipeline should be hardcoded to operate only within the pipeline, implying that each piece is easily replaced by a new and/or better alternative, and the pieces should operate in such a manner that reproducing the exact steps used to create the final dataset is transparent and understandable to those within the broader event data community. Towards this end, the pieces of software are modular in nature; each piece can stand on its own without relying on any other piece of software in the stack. These modular pieces lead to a system that is composable. As pieces can stand on their own, parts of the system can be replaced without affecting the rest of the system in any major way. Finally, the modular and composable nature of the pipeline leads to a system that is inherently reproducible. In many ways, the code itself serves as documentation for reproduction. If the versions of the various pieces are noted, all that is necessary to reproduce the pipeline is to link the correct versions of each module together. Proper design nearly guarantees reproducibility of the data generating process.
There are three main considerations at play when designing software surrounding the Phoenix event data pipeline: modularity, composability, and reproducibility.
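A minimal sketch of what this modular, composable design looks like in code (the stage names here are hypothetical placeholders, not the actual Phoenix modules):

```python
# Hypothetical stages standing in for the real pipeline modules; each is
# a plain function, so any one can be swapped out independently.
def scrape(urls):
    return [{"url": u, "text": "raw story text"} for u in urls]

def parse(stories):
    return [dict(s, tokens=s["text"].split()) for s in stories]

def code_events(parsed):
    return [{"source": "GOV", "target": "REB", "code": "190"} for _ in parsed]

PIPELINE = [scrape, parse, code_events]

def run(pipeline, data):
    # Composing the stages; pinning each module's version alongside this
    # list is what makes the data-generating process reproducible.
    for stage in pipeline:
        data = stage(data)
    return data

print(run(PIPELINE, ["http://example.com/story1"]))
```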
|
What are the three main considerations when designing software surrounding the Phoenix event data pipeline?
|
Modularity, composability, and reproducibility.
|
null | false
| 68
|
Today, the Internet is one of the most widely available media worldwide. It has essentially become a huge store of data with the potential to serve many information-centric applications in our lives. Recommendation systems are an essential part of many internet services and online applications, including social networking and the recommendation of products (films, music, articles, etc.). Recommendation techniques have been used by well-known companies such as Amazon, Netflix and eBay to recommend related items or products by estimating the probable preferences of customers. These techniques are profitable to both service provider and user. According to previous work, the two popular approaches for building recommendation systems are content-based (CB) filtering and collaborative filtering (CF).
Content-based (CB) recommending is widely adopted in recommendation system models; it takes advantage of the properties of items to create features and characteristics that are matched against user profiles. It relies on the properties of the items that each user likes to discover what else the user may like. One major issue of the CB filtering method is that the recommendation system is required to gain an understanding of user preferences for some sorts of items and deploy these for other sorts of items.
Nevertheless, CF has two widely known problems: sparsity and cold start (CS). In the rating matrix, the percentage of elements that have values is small; even popular items may receive only a few ratings. For instance, in the large Netflix rating dataset provided for the Netflix Prize competition, there are about 100 million ratings for about 18,000 movies given by 480,000 users; the percentage of rating matrix elements that have received ratings is about 1%. With such a sparse rating matrix, it is very challenging to make effective recommendations that depend on estimating the relationships between items and users. The CS problem is another widely known issue for the CF approach, occurring with new users or items: to achieve an effective recommendation, the CF approach requires either ratings on an item or a large number of ratings from a user.
Recently, researchers have proposed various methods based on probabilistic topic modeling BIBREF0. LDA is a generative probabilistic model broadly used in the information retrieval field. Researchers have used topic modeling methods based on LDA for building recommendation systems in various subjects, including app recommendation BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, event recommendation BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11, BIBREF12, BIBREF13, BIBREF14, BIBREF15, hashtag recommendation BIBREF16, BIBREF17, BIBREF18, BIBREF19, BIBREF20, BIBREF21, BIBREF22, BIBREF23, BIBREF24, BIBREF25, BIBREF26, BIBREF27, BIBREF28, BIBREF29, BIBREF30, BIBREF31, BIBREF32, BIBREF33, BIBREF34, BIBREF35, BIBREF36, and social networks and media BIBREF37, BIBREF38, BIBREF6, BIBREF9, BIBREF39, BIBREF40, BIBREF41. In this paper, we present a taxonomy of recommendation system applications based on topic modeling (LDA) from recent research, and evaluate ISWC and WWW publications in computer science between 2013 and 2017 from the DBLP dataset.
Nevertheless, CF has two widely known problems: sparsity and cold start (CS).
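As an illustration of the shared building block behind these systems, a minimal LDA topic model (here via scikit-learn, on toy documents) looks like this; recommendation approaches built on LDA use the inferred topic mixtures as user or item profiles:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "user rated action movie thriller explosion",
    "romantic film love story drama",
    "action thriller car chase movie",
    "love drama romantic comedy film",
]
X = CountVectorizer().fit_transform(docs)          # bag-of-words counts
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
print(lda.transform(X))  # per-document topic mixtures, usable as profiles
```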
|
What is the limitation of the Collaborative Filtering method?
|
CF has two widely known problems: sparsity and cold start (CS).
|
null | false
| null |
What is the empire state building?
|
The Empire State Building is a building in Manhattan, New York. It is 102 stories high and features an art-deco interior and exterior. It is a steel-framed skyscraper built in 1931, and it was the tallest building in the world from 1931 until 1971.
|
|
null | false
| 117
|
We obtained our Quora dataset BIBREF7 through web-based crawls between June 2014 and August 2014. This crawling exercise resulted in the accumulation of a massive Q&A dataset spanning a period of over four years, from January 2010 to May 2014. We initiated crawling with 100 questions randomly selected from different topics so that different genres of questions would be covered. The crawling of the questions follows a BFS pattern through the related question links. We obtained 822,040 unique questions across 80,253 different topics with a total of 1,833,125 answers to these questions. For each question, we separately crawl its revision log, which contains different types of edit information for the question, and the activity log of the question asker.
We obtained our Quora dataset through web-based crawls between June 2014 and August 2014.
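The BFS crawl over related-question links can be sketched as follows (`get_related` is a hypothetical fetcher standing in for the crawler's HTTP layer):

```python
from collections import deque

def bfs_crawl(seed_questions, get_related, limit=822040):
    # Breadth-first traversal through "related question" links,
    # starting from the randomly selected seed questions.
    seen, queue = set(seed_questions), deque(seed_questions)
    crawled = []
    while queue and len(crawled) < limit:
        q = queue.popleft()
        crawled.append(q)
        for rel in get_related(q):   # hypothetical fetcher
            if rel not in seen:
                seen.add(rel)
                queue.append(rel)
    return crawled
```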
|
How is the Quora dataset obtained?
|
They obtained their Quora dataset through web-based crawls.
|
1806.00722
| false
| null |
We use three datasets for our experiments: IWSLT14 German-English, Turkish-English, and WMT14 English-German.
We use three datasets for our experiments: IWSLT14 German-English, Turkish-English, and WMT14 English-German.
|
what language pairs are explored?
|
The answers are shown as follows:
* German-English
* Turkish-English
* English-German
|
null | false
| null |
Write a friendly email to my neighbor asking them to keep the noise down after 10pm on Weekdays.
|
Hi Neighbor!
I hope this email finds you well and getting ready for the sunny spring weather. As the days grow longer, I wanted to ask you for a small favor: can you please try to keep the noise down after 10pm on weekdays? To be clear, I'm not suggesting we all live in libraries--just that outdoor parties, loud music, and fireworks cease after 10pm on weekdays. I'm hopeful that 10pm represents a reasonable compromise between those socializing late and those waking up early. Let me know if you disagree and we can try to work out an alternative compromise. And by all means, please let us know if there are special occasions and/or considerations we should understand. Our goal is to maintain this healthy, thriving neighborhood.
Thank you,
Kevin
|
|
null | false
| null |
What is red tide?
|
Red tide occurs when microscopic forms of algae known as dinoflagellates bloom into dense patches near the surface.
|
|
null | false
| null |
In 1905, T. G. Longstaff, accompanied by two alpine guides and six porters, made an attempt on Gurla Mandhata. They turned back at around 7,000 m (23,000 ft) after being caught in an avalanche and encountering other difficulties. This was a strong achievement for the time, especially for such a small group; at that time no summit of over 7,000 m had yet been climbed and Longstaff's height represented a world altitude record.
In 1935, a Viennese student, Herbert Tichy, disguised as an Indian pilgrim, made a trip to the holy mountain Kailash. On the way, he attempted Gurla Mandhata with one of his porters, Kitar. They reached a height of 23,400 ft. where they were turned back by fresh snow and bad weather.
In 1955 the mountain was the intended destination of the inaugural Welsh Himalayan Expedition, led by Sydney Wignall. The expedition, sponsored by the Liverpool Daily Post, intended to summit Gurla Mandata and plant three flags: the Welsh Dragon, the flag of the recently-overthrown Chinese Republic and the Jolly Roger. Before the climb could begin, the group was captured by the Chinese military who imprisoned and tortured the climbers for two months under the belief they were CIA spies (Wignall had in fact offered to spy for India).
The first ascent of the peak was by a joint Japanese/Chinese team led by Katsutoshi Hirabayashi, via the north side of the peak, in May 1985. Since that time, there have been six additional successful ascents and two failed attempts on the peak.
In 1997, an attempt was made to ascend the peak via the then-unclimbed North Face route by Quinn Simons, Soren Peters, and their guide, Charlie Fowler. The team made a valiant effort, climbing high on the mountain, but after severe storms and other difficulties had to retreat. Their descent ended in a fall of some 450 m (1,500 ft) down the North Face of the peak. Fowler was slightly injured, while Simons and Peters both suffered extreme frostbite on their extremities.
The standard ascent route climbs the western flanks of the mountain ascending the Chaglung'mlungha Glacier to the summit plateau. Most teams choose to approach the mountain overland by jeep from either Lhasa, Tibet, or Kathmandu, Nepal. However, an alternate approach begins in the mountain hamlet of Simikot, Nepal, in the remote Humla district of west Nepal and follows the Karnali River northward, crossing into Tibet (China) in the village of Sher. Jeeps then take climbers north through Taklakot (Burang) to basecamp on the mountain.
|
What is the climbing history summary of Gurla Mandhata?
|
The first climbing attempt on Gurla Mandhata happened in 1905, when T. G. Longstaff, accompanied by two alpine guides and six porters, reached around 23,000 feet but had to turn around due to an avalanche and other difficulties. A second attempt was made in 1935 by a Viennese student, Herbert Tichy, and one of his porters; they reached a height of 23,400 feet but turned around due to fresh snow and bad weather. The first successful ascent came in 1985, by a joint Japanese/Chinese team led by Katsutoshi Hirabayashi on the north side of the peak. Since then, there have been an additional six successful ascents and two failed attempts to reach the peak.
|
null | false
| null |
Nuclear power is the use of nuclear reactions to produce electricity. Nuclear power can be obtained from nuclear fission, nuclear decay and nuclear fusion reactions. Presently, the vast majority of electricity from nuclear power is produced by nuclear fission of uranium and plutonium in nuclear power plants. Nuclear decay processes are used in niche applications such as radioisotope thermoelectric generators in some space probes such as Voyager 2. Generating electricity from fusion power remains the focus of international research.
Most nuclear power plants use thermal reactors with enriched uranium in a once-through fuel cycle. Fuel is removed when the percentage of neutron absorbing atoms becomes so large that a chain reaction can no longer be sustained, typically three years. It is then cooled for several years in on-site spent fuel pools before being transferred to long term storage. The spent fuel, though low in volume, is high-level radioactive waste. While its radioactivity decreases exponentially it must be isolated from the biosphere for hundreds of thousands of years, though newer technologies (like fast reactors) have the potential to reduce this significantly. Because the spent fuel is still mostly fissionable material, some countries (e.g. France and Russia) reprocess their spent fuel by extracting fissile and fertile elements for fabrication in new fuel, although this process is more expensive than producing new fuel from mined uranium. All reactors breed some plutonium-239, which is found in the spent fuel, and because Pu-239 is the preferred material for nuclear weapons, reprocessing is seen as a weapon proliferation risk.
|
List different ways in which Nuclear power can be obtained based on the passage.
|
Nuclear power can be obtained from nuclear fission, nuclear decay and nuclear fusion reactions.
|
null | false
| null |
If you could change one thing about this world, what would it be and why?
|
Assuming I could change just one thing in this world, it would be for all of humankind to develop greater empathy and selflessness. I do believe it could be the start of so much good at all levels - it would help nations, societies and people to focus on their own and others' happiness --- and make decisions that focus on the larger good. Many of the troubles ailing us as nations would disappear if we just saw the other side and put ourselves in their shoes. Societies would heal if artificially created differences were done away with. And families and relationships would benefit from a better understanding of the other side's point of view.
In this utopian world filled with empathy and selflessness, no human would die of hunger, no mother would lose her child due to lack of medical facilities and no human would die of self harm because they were not "normal" or a part of the mainstream.
What saddens me, however, is that this is such an easy thing for everyone to do. If we all just did 1 (or 10) good deed a day, this world would be a better place already. The miracle isn't anywhere else, it is within us.
|
|
null | false
| null |
The Spanish–American War (April 21 – August 13, 1898) began in the aftermath of the internal explosion of USS Maine in Havana Harbor in Cuba, leading to United States intervention in the Cuban War of Independence. The war led to the United States emerging predominant in the Caribbean region, and resulted in U.S. acquisition of Spain's Pacific possessions. It led to United States involvement in the Philippine Revolution and later to the Philippine–American War.
The 19th century represented a clear decline for the Spanish Empire, while the United States went from becoming a newly founded country to being a medium regional power. In the Spanish case, the descent, which already came from previous centuries, accelerated first with the Napoleonic invasion, which in turn would cause the independence of a large part of the American colonies, and later political instability (pronouncements, revolutions, civil wars) bled the country socially and economically. The U.S., on the other hand, expanded economically throughout that century by purchasing territories such as Louisiana and Alaska, militarily by actions such as the Mexican–American War, and by receiving large numbers of immigrants. That process was interrupted only for a few years by the American Civil War and Reconstruction era.
The main issue was Cuban independence. Revolts had been occurring for some years in Cuba against Spanish colonial rule. The United States backed these revolts upon entering the Spanish–American War. There had been war scares before, as in the Virginius Affair in 1873. But in the late 1890s, American public opinion swayed in support of the rebellion because of reports of concentration camps set up to control the populace. Yellow journalism exaggerated the atrocities to further increase public fervor and to sell more newspapers and magazines.
The business community had just recovered from a deep depression and feared that a war would reverse the gains. Accordingly, most business interests lobbied vigorously against going to war. President William McKinley ignored the exaggerated news reporting and sought a peaceful settlement. Though not seeking a war, McKinley made preparations for readiness against one. He unsuccessfully sought accommodation with Spain on the issue of independence for Cuba. However, after the U.S. Navy armored cruiser Maine mysteriously exploded and sank in Havana Harbor on February 15, 1898, political pressures pushed McKinley into a war that he had wished to avoid.
As far as Spain was concerned, nationalist agitation, in which the written press had a key influence, meant that the Spanish government could not give in and abandon Cuba as it had abandoned Spanish Florida when faced with a troublesome colonial situation there, transferring it to the U.S. in 1821 in exchange for payment of Spanish debts. If the Spanish government had transferred Cuba, it would have been seen as a betrayal by part of Spanish society, and there would probably have been a new revolution. The government therefore preferred to wage a war it knew was lost rather than risk a revolution, opting for a "controlled demolition" to preserve the Restoration regime.
On April 20, 1898, McKinley signed a joint Congressional resolution demanding Spanish withdrawal and authorizing the President to use military force to help Cuba gain independence. In response, Spain severed diplomatic relations with the United States on April 21. On the same day, the United States Navy began a blockade of Cuba. Both sides declared war; neither had allies.
The 10-week war was fought in both the Caribbean and the Pacific. As United States agitators for war well knew, United States naval power would prove decisive, allowing expeditionary forces to disembark in Cuba against a Spanish garrison already facing nationwide Cuban insurgent attacks and further devastated by yellow fever. The invaders obtained the surrender of Santiago de Cuba and Manila despite the good performance of some Spanish infantry units, and fierce fighting for positions such as El Caney and San Juan Hill. Madrid sued for peace after two Spanish squadrons were sunk in the battles of Santiago de Cuba and Manila Bay, and a third, more modern fleet was recalled home to protect the Spanish coasts.
The war ended with the 1898 Treaty of Paris, negotiated on terms favorable to the United States. The treaty ceded ownership of Puerto Rico, Guam, and the Philippines from Spain to the United States and granted the United States temporary control of Cuba. The cession of the Philippines involved payment of $20 million ($650 million today) to Spain by the U.S. to cover infrastructure owned by Spain.
The Spanish–American War brought an end to almost four centuries of Spanish presence in the Americas, Asia, and the Pacific. The defeat and loss of the Spanish Empire's last remnants was a profound shock to Spain's national psyche and provoked a thorough philosophical and artistic reevaluation of Spanish society known as the Generation of '98. The United States meanwhile not only became a major power, but also gained several island possessions spanning the globe, which provoked rancorous debate over the wisdom of expansionism.
|
Given the reference text about the Spanish-American war, when and how did the war end?
|
The Spanish–American War ended on August 13, 1898, and was formally concluded by the 1898 Treaty of Paris. The treaty ceded ownership of Puerto Rico, Guam, and the Philippines from Spain to the United States.
|
null | false
| null |
I'm a 12-year-old girl and I want to have my own cellphone because everyone else in my class has one. I just want to be part of the group and join their conversations. My mother did not refuse my request, but she asked me how I would address her concerns about the potential harm to me. What should I do to convince her to give me a cellphone? Here are her concerns:
1. Using a cellphone too much would harm your eyesight. Especially for teenagers, whose eyes are still growing, it's very easy to become near-sighted pretty quickly.
2. Using a cellphone too much would take up too much of your time, and you would spend less time reading books, doing exercise, playing outside, and talking to people, which harms your overall development. Adults like me already read lots of books, did lots of exercise, and talked to a lot of people as teenagers, so we have already developed those skills.
3. Using a cellphone too much would make it hard for you to focus on one thing. The scattered information would overstimulate your attention, get you excited about superficial content, and keep you from thinking deeply.
4. You might accidentally lose your cellphone, and you would feel sad about it. Teenagers are generally more likely to lose things than adults.
Can you respond with concrete plans to prevent these harmful results, so that my mom can be convinced?
|
Hi mom, I think I can handle the problems you mentioned. Here are some plans to make sure the things you worry about won't happen:
1. You can take me for an eye exam every month to make sure my eyes are not getting worse. If my eyesight declines from where it is now, I will return the cellphone to you.
2. You can set the phone to limit my screen time to one hour per day, so that it doesn't take time away from other activities.
3. That same one-hour limit will also keep the scattered content from overstimulating my attention. In addition, I will keep reading books and discussing my thoughts with you, so that I can keep practicing deep thinking.
4. I can start with a cheaper phone and take good care of it. If I don't lose or break it for half a year, you can trust me and buy me a better one.
Does that sound good to you?
|
|
null | false
| null |
Categorize the wines below as red wine or white wine.
Cabernet Sauvignon, Merlot, Shiraz, Pinot Noir, Malbec, Zinfandel, Grenache, Sangiovese, Chardonnay, Sauvignon Blanc, Riesling, Chenin Blanc, Pinot Gris (or Pinot Grigio), Semillon, Viognier, Torrontes, Albariño, Gewurztraminer.
|
White wine - Chardonnay, Sauvignon Blanc, Riesling, Chenin Blanc, Pinot Gris (or Pinot Grigio), Semillon, Viognier, Torrontes, Albariño and Gewurztraminer.
Red wine - Cabernet Sauvignon, Merlot, Shiraz, Pinot Noir, Malbec, Zinfandel, Grenache and Sangiovese.
|
|
null | false
| null |
How did India get independence from colonial rule?
|
Both non-violent and violent movements were prevalent; a huge number of discussions, struggles, and protests paved the way for independence.
|
|
null | false
| null |
"The Fox in the Attic" was originally published in 1961 by Chatto & Windus: London as v. 1 of The Human Predicament trilogy, and then in the United States by Harper & Brothers: New York. This was 23 years after Hughes's previous novel, In Hazard: A Sea Story, and 33 years after A High Wind in Jamaica, which was a best seller in the United Kingdom and America.It was published the following year in Sweden (Stockholm: Norstedt) as Räven på vinden.
The second novel in The Human Predicament trilogy, The Wooden Shepherdess, was published in 1973 by Chatto & Windus: London; it carries on the story to 1934 and the Night of the Long Knives. The third and final novel was left unfinished, but the completed twelve chapters were included in the 2000 New York Review of Books edition of The Wooden Shepherdess.
|
Based on the provided text, place the following books in chronological order:
The Fox in the Attic, In Hazard: A Sea Story, A High Wind in Jamaica, and The Wooden Shepherdess
|
The order of the books is as follows:
1) A High Wind in Jamaica
2) In Hazard: A Sea Story
3) The Fox in the Attic
4) The Wooden Shepherdess
|
null | false
| null |
Give me a list of the last 10 men's college hockey champions.
|
2022 Denver; 2021 Massachusetts; 2019 Minnesota Duluth (the 2020 tournament was canceled due to COVID-19); 2018 Minnesota Duluth; 2017 Denver; 2016 North Dakota; 2015 Providence; 2014 Union; 2013 Yale; 2012 Boston College
|
|
null | false
| null |
The Chernobyl disaster was a nuclear accident that occurred on 26 April 1986 at the No. 4 reactor in the Chernobyl Nuclear Power Plant, near the city of Pripyat in the north of the Ukrainian SSR in the Soviet Union. Called the world's worst-ever civil nuclear incident, it is one of only two nuclear energy accidents rated at seven—the maximum severity—on the International Nuclear Event Scale, the other being the 2011 Fukushima nuclear disaster in Japan. The initial emergency response, together with later decontamination of the environment, involved more than 500,000 personnel and cost an estimated 18 billion roubles—roughly US$68 billion in 2019, adjusted for inflation. Chernobyl's health effects on the general population are uncertain. An excess of 15 childhood thyroid cancer deaths was documented as of 2011. A United Nations committee found that to date fewer than 100 deaths have resulted from the fallout. Determining the total eventual number of exposure-related deaths is uncertain based on the linear no-threshold model, a contested statistical model. Model predictions of the eventual total death toll in the coming decades vary. The most widely cited studies by the World Health Organization predict an eventual 9,000 cancer-related fatalities in Ukraine, Belarus and Russia.
|
Why is the Chernobyl disaster considered to be the world's worst-ever civil nuclear incident when only an excess of 15 childhood thyroid cancer deaths were documented as of 2011?
|
The Chernobyl disaster is called the world's worst-ever civil nuclear incident because it is one of only two nuclear energy accidents rated at seven, the maximum severity, on the International Nuclear Event Scale. The initial emergency response, together with the later decontamination of the environment, involved more than 500,000 personnel and cost an estimated 18 billion roubles, roughly US$68 billion in 2019, adjusted for inflation. Even though Chernobyl's health effects on the general population are uncertain, the World Health Organization predicts an eventual 9,000 cancer-related fatalities in Ukraine, Belarus and Russia.
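The linear no-threshold model mentioned in the passage is what produces such eventual-death projections: it assumes excess cancer risk scales linearly with collective radiation dose, with no threshold below which risk vanishes. A minimal sketch under stated assumptions follows; the risk coefficient of about 0.05 fatal cancers per person-sievert is a commonly cited ICRP-style figure, and the collective-dose input is a purely hypothetical number chosen only to show how a projection like 9,000 deaths can arise.

```python
def lnt_excess_deaths(collective_dose_person_sv, risk_per_sv=0.05):
    """Linear no-threshold (LNT) projection of excess cancer deaths.

    Assumes risk is directly proportional to collective dose, with no
    safe threshold -- the contested assumption the passage refers to.
    """
    return collective_dose_person_sv * risk_per_sv

# Hypothetical collective dose (person-sieverts), for illustration only:
print(lnt_excess_deaths(180_000))  # -> 9000.0
```

Because the model is linear, halving the assumed dose or the risk coefficient halves the projected death toll, which is one reason published projections vary so widely.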
|
null | false
| null |
Which of the following are writing implements: pen, fine line markers, tiles, morning glories, alcohol markers, mechanical pencils, jugs, sand, ocean salt water, towel
|
pen, fine line markers, alcohol markers, mechanical pencils
|
|
null | false
| null |
Business process mapping refers to activities involved in defining what a business entity does, who is responsible, to what standard a business process should be completed, and how the success of a business process can be determined.
The main purpose behind business process mapping is to assist organizations in becoming more effective. A clear and detailed business process map or diagram allows outside firms to come in and look at whether or not improvements can be made to the current process.
Business process mapping takes a specific objective and helps to measure and compare that objective alongside the entire organization's objectives to make sure that all processes are aligned with the company's values and capabilities.
The International Organization for Standardization's ISO 9001:2015 standard encourages a process approach to quality management. It is important to understand how each process relates to other processes within the organization and how those interactions impact quality management.
|
Please describe what Business Process Mapping is and why it is used
|
Business Process Mapping describes the capture of a set of business activities that deliver an outcome. Mapping is usually performed using visual tools that use standardized shapes to indicate entities. Mapping involves capturing the activities, the roles performing them, the systems and tools used to execute them, and the inputs and outputs of each activity (see the sketch after this list). Process mapping is frequently conducted in cross-functional workshops, with a facilitator capturing the activities performed by Subject Matter Experts (SMEs). Common uses of process mapping include:
* automation
* compliance documentation
* business process improvement
* standardization
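As a rough illustration of the fields a single mapped activity captures (the activity, the responsible role, the systems used, and the inputs and outputs), here is a sketch in Python; the class shape and the example process fragment are my own illustration, not part of any mapping standard:

```python
from dataclasses import dataclass, field

@dataclass
class Activity:
    """One step in a business process map."""
    name: str                                      # what is done
    role: str                                      # who is responsible
    systems: list = field(default_factory=list)    # tools used to execute it
    inputs: list = field(default_factory=list)     # what the step consumes
    outputs: list = field(default_factory=list)    # what the step produces

# Hypothetical fragment of an order-to-cash process, as it might be
# captured in a cross-functional workshop:
process_map = [
    Activity("Receive order", "Sales rep", ["CRM"],
             ["customer request"], ["sales order"]),
    Activity("Check credit", "Finance analyst", ["ERP"],
             ["sales order"], ["credit decision"]),
    Activity("Ship goods", "Warehouse operator", ["WMS"],
             ["approved order"], ["shipment"]),
]

for step in process_map:
    print(f"{step.name}: owned by {step.role}, produces {step.outputs}")
```

A structure like this is what makes the uses listed above possible: automation tools, compliance documents, and improvement analyses can all be generated from the same captured fields.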
|
null | false
| 91
|
Neural machine translation (NMT) has achieved state-of-the-art results on a wide range of language pairs with varying structural differences, such as English-French BIBREF0, BIBREF1 and Chinese-English BIBREF2. However, so far not much is known about how and why NMT works, which poses great challenges for debugging NMT models and designing optimal architectures.
The understanding of NMT models has been approached primarily from two complementary perspectives. The first thread of work aims to understand the importance of representations by analyzing the linguistic information embedded in representation vectors BIBREF3, BIBREF4 or hidden units BIBREF5, BIBREF6. Another direction focuses on understanding the importance of input words by interpreting the input-output behavior of NMT models. Previous work BIBREF7 treats NMT models as black-boxes and provides explanations that closely resemble the attention scores in NMT models. However, recent studies reveal that attention does not provide meaningful explanations since the relationship between attention scores and model output is unclear BIBREF8.
In this paper, we focus on the second thread and try to open the black-box by exploiting the gradients in NMT generation, which aims to estimate the word importance better. Specifically, we employ the integrated gradients method BIBREF9 to attribute the output to the input words with the integration of first-order derivatives. We justify the gradient-based approach via quantitative comparison with black-box methods on a couple of perturbation operations, several language pairs, and two representative model architectures, demonstrating its superiority on estimating word importance.
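To make the integrated-gradients idea concrete: the attribution of input feature i is (x_i - x'_i) times the integral of the gradient dF/dx_i along the straight path from a baseline x' to the input x, approximated in practice with a Riemann sum. Below is a minimal NumPy sketch on a toy differentiable scoring function; the model, the zero baseline, and the step count are illustrative assumptions of mine, not the paper's actual setup.

```python
import numpy as np

def model(x, w):
    """Toy differentiable score standing in for an NMT output probability."""
    return np.tanh(x @ w)

def model_grad(x, w):
    """Analytic gradient of the toy score with respect to the input x."""
    return (1.0 - np.tanh(x @ w) ** 2) * w

def integrated_gradients(x, baseline, w, steps=50):
    """IG_i = (x_i - x'_i) * integral over alpha in [0, 1] of dF/dx_i,
    evaluated at baseline + alpha * (x - baseline), via the midpoint rule."""
    alphas = (np.arange(steps) + 0.5) / steps
    grads = np.zeros_like(x)
    for a in alphas:
        grads += model_grad(baseline + a * (x - baseline), w)
    return (x - baseline) * grads / steps

rng = np.random.default_rng(0)
w = rng.normal(size=4)           # toy model weights
x = rng.normal(size=4)           # stand-in for an input word embedding
baseline = np.zeros(4)           # a common choice: the all-zero baseline

attributions = integrated_gradients(x, baseline, w)
# Completeness property: attributions sum to F(x) - F(baseline).
print(attributions, attributions.sum(), model(x, w) - model(baseline, w))
```

The completeness check at the end is the property that makes integrated gradients attractive for word-importance estimation: the per-feature attributions account, up to discretization error, for the full change in model output.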
We analyze the linguistic behaviors of words with the importance and show its potential to improve NMT models. First, we leverage the word importance to identify input words that are under-translated by NMT models. Experimental results show that the gradient-based approach outperforms both the best black-box method and other comparative methods. Second, we analyze the linguistic roles of identified important words, and find that words of certain syntactic categories have higher importance, while the categories vary across languages. For example, nouns are more important for Chinese-to-English translation, while prepositions are more important for English-French and English-Japanese translation. This finding can inspire better design principles for NMT architectures for different language pairs. For instance, a better architecture for a given language pair should take that pair's own language characteristics into account.
However, so far not much is known about how and why NMT works, which poses great challenges for debugging NMT models and designing optimal architectures.
|
What are the challenges now?
|
So far not much is known about how and why NMT works, which poses great challenges for debugging NMT models and designing optimal architectures.
|
null | false
| null |
Advocates of autonomous building describe advantages that include reduced environmental impacts, increased security, and lower costs of ownership. Some cited advantages satisfy tenets of green building, not independence per se (see below). Off-grid buildings often rely very little on civil services and are therefore safer and more comfortable during civil disasters or military attacks. For example, off-grid buildings would not lose power or water if public supplies were compromised.
|
Given this paragraph about autonomous buildings, why would they be safer during a military attack?
|
Autonomous buildings often rely less on civil services, and are therefore safer and more comfortable during civil disasters or military attacks. For example, an off-grid building would not lose power or water if public supplies were compromised.
|