Dataset schema (field: type, value range):
- paper_id: string (length 10)
- yes_no: bool (2 classes)
- paper_index: int64 (0–519)
- evidence: string (length 0–37.7k)
- question: string (length 4–11.7k)
- answer: string (length 1–26k)
null
false
null
What are the most popular smart ring bells?
Ring is a smart doorbell company; its doorbells can record events and ping your phone when someone rings your Ring doorbell. The company has other products, such as motion-activated floodlights.
null
false
214
The growing popularity of online interactions through social media has been shown to have both positive and negative impacts. While social media improves information sharing, it also facilitates the propagation of online harassment, including hate speech. These negative experiences can have a measurable negative impact on users. Recently, the Pew Research Center BIBREF0 reported that “roughly four-in-ten Americans have personally experienced online harassment, and 63% consider it a major problem.” To address the growing problem of online hate, an extensive body of work has focused on developing automatic hate speech detection models and datasets BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8. However, simply detecting and blocking hate speech or suspicious users often does little to prevent these users from turning to other social media platforms to continue to engage in hate speech, as can be seen in the large move of individuals blocked from Twitter to Gab BIBREF9. What's more, such a strategy is often at odds with the concept of free speech. As reported by the Pew Research Center BIBREF0, “Despite this broad concern over online harassment, 45% of Americans say it is more important to let people speak their minds freely online; a slightly larger share (53%) feels that it is more important for people to feel welcome and safe online.” The special rapporteurs representing the Office of the United Nations High Commissioner for Human Rights (OHCHR) have recommended that “The strategic response to hate speech is more speech.” BIBREF10 They encourage changing what people think instead of merely changing what they do, advocating more speech that educates about cultural differences, diversity, and minorities as a better strategy to counter hate speech.
Therefore, in order to encourage strategies of countering online hate speech, we propose a novel task of generative hate speech intervention and introduce two new datasets for this task. Figure FIGREF5 illustrates the task. Our datasets consist of 5K conversations retrieved from Reddit and 12K conversations retrieved from Gab. Distinct from existing hate speech datasets, our datasets retain their conversational context and introduce human-written intervention responses. The conversational context and intervention responses are critical in order to build generative models to automatically mitigate the spread of these types of conversations. To summarize, our contributions are three-fold: We introduce the generative hate speech intervention task and provide two fully-labeled hate speech datasets with human-written intervention responses. Our data is collected in the form of conversations, providing better context. The two data sources, Gab and Reddit, are not well studied for hate speech. Our datasets fill this gap. Due to our data collection strategy, all the posts in our datasets are manually labeled as hate or non-hate speech by Mechanical Turk workers, so they can also be used for the hate speech detection task. The performance of commonly-used classifiers on our datasets is shown in Section SECREF6.
Why the authors' datasets can be used for the hate speech detection task?
Because all the posts in the datasets are manually labeled as hate or nonhate speech by Mechanical Turk workers.
null
false
null
Owners: Barry and Ginger Ackerley, owners of the Seattle SuperSonics (2000–2001); Howard Schultz, owner of the Seattle SuperSonics (2001–2006); Clay Bennett, owner of the Seattle SuperSonics (2007); Force 10 Hoops LLC, composed of Dawn Trudeau, Lisa Brummel, and Ginny Gilder (2008–present).
Extract the most recent owner of the Seattle Storm from this Wikipedia article.
The current owner of the Seattle Storm is Force 10 Hoops LLC, composed of Dawn Trudeau, Lisa Brummel, and Ginny Gilder (2008–present).
null
false
null
Identify which instrument is string or percussion: Ravanne, Sursingar
Sursingar is string, Ravanne is percussion.
null
false
null
What is freedom?
Freedom could mean something different for every person in the world. In this time and place I think of freedom as the ability to make my own decisions.
null
false
null
How can I ask my aging father-in-law to see the cardiologist when he doesn't want to go?
Papa, I was worried yesterday when you started getting shortness of breath and your chest started to hurt. It made me really concerned for you. Papa, I want to tell you something I know your daughter would want to say to you, but it is really hard for her to say. You are a strong man. Your daughter has always seen you as a strong man. Asking you to do something you wouldn't want to do is something she couldn't picture herself asking you to do. Yesterday, she wanted to beg you to go see the cardiologist to check on your heart. She knows you hate to go to the doctor. It isn't that she wants you to see the cardiologist because she thinks you are no longer a strong man. It is because she loves you. She wants you around for many years. She wants you to be part of seeing our kids grow up. She wants you to come to your grandkids' sporting events. As your son-in-law, I am asking you to come with me to the cardiologist. I'll respect your privacy. I can come in with you or I can wait out in the family room. You can take it one step at a time. They will give you a few tests. You'll get the results. The doctor will talk to you about the results and discuss any recommendations they may have. It is in your control to take those recommendations. You will never lose control. What do you think, Papa? Can I take you to the doctor?
null
false
null
Which animals are a dog or cat? a cocker spaniel, labrador, Persian, Maine Coone, and king Charles?
A cocker spaniel, a Labrador, and a King Charles spaniel are all dogs. A Persian and a Maine Coon are cats.
null
false
null
Apache Spark has its architectural foundation in the resilient distributed dataset (RDD), a read-only multiset of data items distributed over a cluster of machines, that is maintained in a fault-tolerant way. The Dataframe API was released as an abstraction on top of the RDD, followed by the Dataset API. In Spark 1.x, the RDD was the primary application programming interface (API), but as of Spark 2.x use of the Dataset API is encouraged even though the RDD API is not deprecated. The RDD technology still underlies the Dataset API.
Extract from the text which API is encouraged for use in Apache Spark.
Dataset API
null
false
null
Give me a list of the best albums of all-time
Loveless - My Bloody Valentine, Kid A - Radiohead, Souvlaki - Slowdive
null
false
null
What are some modes of transportation I can use to travel downtown?
It depends on what city you are in. In general, the following modes of transportation are available in most downtowns: - Taxi/Rideshare - Bus - Bicycle rental - Scooter rental - Moped rental - Pedicab - Walking
2001.09899
false
null
Having this in mind, and if we draw from the premise that when a discussion is highly controversial it is in general due to the presence of two principal communities fighting each other (or, conversely, that when there is no controversy there is just one principal community whose members share a common point of view), we can measure the controversy by detecting whether the discussion has one or two principal jargons in use. Our method is tested on Twitter datasets. This microblogging platform has been widely used to analyze discussions and polarization BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF2. It is a natural choice for this kind of problem, as it represents one of the main fora for public debate in online social media BIBREF15, it is a common destination for affiliative expressions BIBREF16, and it is often used to report and read news about current events BIBREF17. An extra advantage of Twitter for this kind of study is the availability of real-time data generated by millions of users. Other social media platforms offer similar data-sharing services, but few can match the amount of data and the accompanying documentation provided by Twitter. One last asset of Twitter for our work is retweets, which typically indicate endorsement BIBREF18 and hence become a useful concept for modeling discussions, as we can establish “who is with whom". However, our method has a general approach and could a priori be used on any social network. In this work we report excellent results on Twitter, and in future work we will test it on other social networks.
What social media platform is observed?
The answers are shown as follows: * Twitter
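The premise in the evidence, that a controversial discussion exhibits two principal jargons, can be loosely illustrated (this is not the authors' method) by comparing a one-cluster against a two-cluster fit of bag-of-words vectors; the posts, the score definition, and all names here are invented for the sketch.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.cluster import KMeans

# Hypothetical toy discussion: two camps using distinct vocabularies ("jargons").
posts = [
    "ban guns now", "gun control saves lives", "ban assault guns",
    "defend liberty rights", "liberty and rights first", "defend our rights",
]

X = CountVectorizer().fit_transform(posts).toarray().astype(float)

# One-jargon vs two-jargon fit: if two clusters explain much more variance than
# one, the discussion plausibly contains two principal jargons (controversy).
inertia1 = KMeans(n_clusters=1, n_init=10, random_state=0).fit(X).inertia_
inertia2 = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X).inertia_
controversy_score = 1.0 - inertia2 / inertia1  # closer to 1 -> more polarized
print(f"controversy score: {controversy_score:.2f}")
```

A real implementation would work on user-level features from the retweet graph rather than raw posts, but the one-versus-two-communities intuition is the same.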
null
false
null
Write a poem about George Washington.
George Washington was a British man The British had laws he couldn't understand Washington and his friends put together a plan That the people would resist and America could stand
null
false
null
Booker Prize 2021 was given to Damon Galgut for
The Promise
null
false
null
Sly and the Family Stone was an American funk rock band from San Francisco, California. Active from 1966 to 1983, they were pivotal in the development of funk, soul, rock, and psychedelic music. Their core line-up was led by singer-songwriter, record producer, and multi-instrumentalist Sly Stone, and included Stone's brother and singer/guitarist Freddie Stone, sister and singer/keyboardist Rose Stone, trumpeter Cynthia Robinson, drummer Greg Errico, saxophonist Jerry Martini, and bassist Larry Graham. The band was the first major American rock group to have a racially integrated, male and female lineup.
Where did Sly and the Family Stone come from?
Sly and the Family Stone originated from San Francisco, California.
null
false
null
Classify each of these are a television show or a movie: Die Hard, The Wire, Greatest American Hero, Better Call Saul, Raiders of the Lost Ark, E.T., and Family Ties.
The television shows in this list include The Wire, Greatest American Hero, Better Call Saul, and Family Ties. The movies in this list include Die Hard, Raiders of the Lost Ark, and E.T.
1908.06606
false
null
FLOAT SELECTED: Fig. 1. An illustrative example of QA-CTS task. Clinical text structuring (CTS) is a critical task for fetching medical research data from electronic health records (EHRs), where structural patient medical data, such as whether the patient has specific symptoms, diseases, or what the tumor size is, how far from the tumor is cut at during the surgery, or what the specific laboratory test result is, are obtained. It is important to extract structured data from clinical text because bio-medical systems and bio-medical research rely greatly on structured data but cannot obtain it directly. In addition, clinical text often contains abundant healthcare information. CTS is able to provide large-scale extracted structured data for numerous downstream clinical research studies. To reduce the pipeline depth and break the barrier of non-uniform output formats, we present a question answering based clinical text structuring (QA-CTS) task (see Fig. FIGREF1). Unlike the traditional CTS task, our QA-CTS task aims to discover the most related text from original paragraph text. For some cases, this is indeed already the final answer (e.g., extracting a sub-string), while other cases need several steps to obtain the final answer, such as entity-name conversion and negative-word recognition. Our presented QA-CTS task unifies the output format of the traditional CTS task and makes the training data shareable, thus enriching the training data. The main contributions of this work can be summarized as follows.
How is the clinical text structuring task defined?
The answers are shown as follows: * Clinical text structuring (CTS) is a critical task for fetching medical research data from electronic health records (EHRs), where structural patient medical data, such as whether the patient has specific symptoms, diseases, or what the tumor size is, how far from the tumor is cut at during the surgery, or what the specific laboratory test result is, are obtained. * Unlike the traditional CTS task, our QA-CTS task aims to discover the most related text from original paragraph text.
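To make the QA-CTS task definition above concrete, here is a minimal sketch (not the paper's model) of "discovering the most related text": return the sentence with the greatest token overlap with the query. The clinical snippet and the helper name are invented for illustration.

```python
def most_related(paragraph, query):
    # Toy stand-in for QA-CTS: score each sentence by token overlap with the query.
    q = set(query.lower().split())
    sentences = [s.strip() for s in paragraph.split(".") if s.strip()]
    return max(sentences, key=lambda s: len(q & set(s.lower().split())))

text = ("The patient reports chest pain. Tumor size is 3 cm. "
        "Laboratory tests were within normal limits")
print(most_related(text, "what is the tumor size"))  # Tumor size is 3 cm
```

In the paper's setting the extracted span may still need post-processing steps such as entity-name conversion or negative-word recognition before it becomes the final structured answer.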
null
false
null
Write a shopping list for me and group the ingredients by type. I want to make stir-fried teriyaki tofu for dinner and I want enough ingredients to make a sauce, because I don't want to buy a bottle of sauce. This meal will serve four people.
Produce:
- 1 lb broccoli florets
- 1 yellow onion
- 1 red bell pepper
- 1 green bell pepper
- 1 small knob of ginger
- 1 bulb of garlic

Protein:
- 1 lb extra-firm tofu

Misc:
- 1 can corn starch
- 8 oz bottle soy sauce
- 1 bag dark brown sugar
- Red pepper flakes
null
false
219
Human-computer dialog/conversation is one of the most challenging problems in artificial intelligence. Given a user-issued utterance (called a query in this paper), the computer needs to provide a reply to the query. In early years, researchers have developed various domain-oriented dialogue systems, which are typically based on rules or templates BIBREF4 , BIBREF5 , BIBREF6 . Recently, open-domain conversation systems have attracted more and more attention in both academia and industry (e.g., XiaoBing from Microsoft and DuMi from Baidu). Due to high diversity, we can hardly design rules or templates in the open domain. Researchers have proposed information retrieval methods BIBREF7 and modern generative neural networks BIBREF8 , BIBREF9 to either search for a reply from a large conversation corpus or generate a new sentence as the reply. In open-domain conversations, context information (one or a few previous utterances) is particularly important to language understanding BIBREF1 , BIBREF9 , BIBREF10 , BIBREF11 . As dialogue sentences are usually casual and short, a single utterance (e.g., “Thank you.” in Figure FIGREF2 ) does not convey much meaning, but its previous utterance (“...writing an essay”) provides useful background information of the conversation. Using such context will certainly benefit the conversation system. However, tracking all previous utterances as the context is unwise. First, commercial chat-bots usually place high demands on efficiency. In a retrieval-based system, for example, performing a standard process of candidate retrieval and re-ranking for each previous utterance may well exceed the time limit (which is very short, e.g., 500ms). Second, we observe that not all sentences in the current conversation session are equally important. The sentence “Want to take a walk?” is irrelevant to the current context, and should not be considered when the computer synthesizes the reply. 
Therefore, it raises the question of session segmentation in conversation systems. Document segmentation for general-purpose corpora has been widely studied in NLP. For example, Hearst BIBREF12 proposes the TextTiling approach; she measures the similarity of neighboring sentences based on bag-of-words features, and performs segmentation by thresholding. However, such approaches are not tailored to the dialogue genre and may not be suitable for conversation session segmentation. In this paper, we address the problem of session segmentation for open-domain conversations. We leverage the classic TextTiling approach, but enhance it with modern embedding-based similarity measures. Compared with traditional bag-of-words features, embeddings map discrete words to real-valued vectors, capturing underlying meanings in a continuous vector space; hence, it is more robust for noisy conversation corpora. Further, we propose a tailored method for word embedding learning. In traditional word embedding learning, the interaction between two words in a query and a reply is weaker than that within an utterance. We propose to combine a query and its corresponding reply as a “virtual sentence,” so that it provides a better way of modeling utterances between two agents.
What problem does the paper want to address?
The problem of session segmentation in open-domain conversations.
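The embedding-enhanced TextTiling idea in the evidence can be sketched as follows: represent each utterance as the mean of its word vectors and place a session boundary where the similarity between neighbouring utterances drops below a threshold. The toy vectors, vocabulary, and threshold are invented; a real system would use learned embeddings (e.g. from the paper's "virtual sentence" training).

```python
import numpy as np

# Toy 2-d word vectors standing in for learned embeddings (hypothetical values).
EMB = {
    "essay": np.array([1.0, 0.0]), "writing": np.array([0.9, 0.1]),
    "thank": np.array([0.8, 0.2]), "walk": np.array([0.0, 1.0]),
    "park": np.array([0.1, 0.9]),
}

def utterance_vec(utt):
    # An utterance is represented by the mean of its word vectors.
    return np.mean([EMB[w] for w in utt if w in EMB], axis=0)

def segment(utterances, threshold=0.5):
    """Place a session boundary after utterance i when the cosine similarity
    between neighbouring utterance vectors drops below the threshold."""
    boundaries = []
    vs = [utterance_vec(u) for u in utterances]
    for i in range(len(vs) - 1):
        a, b = vs[i], vs[i + 1]
        sim = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
        if sim < threshold:
            boundaries.append(i)
    return boundaries

session = [["writing", "essay"], ["thank"], ["walk", "park"]]
print(segment(session))  # boundary after utterance 1 -> [1]
```

The topic shift from the essay discussion to "walk"/"park" produces the single boundary, mirroring the "Want to take a walk?" example in the evidence.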
null
false
37
Mapudungun (iso 639-3: arn) is an indigenous language of the Americas spoken natively in Chile and Argentina, with an estimated 100 to 200 thousand speakers in Chile and 27 to 60 thousand speakers in Argentina BIBREF0. It is an isolate language and is classified as threatened by Ethnologue, hence the critical importance of all documentary efforts. Although the morphology of nouns is relatively simple, Mapudungun verb morphology is highly agglutinative and complex. Some analyses provide as many as 36 verb suffix slots BIBREF1. A typical complex verb form occurring in our corpus of spoken Mapudungun consists of five or six morphemes. Mapudungun has several interesting grammatical properties. It is a polysynthetic language in the sense of BIBREF2; see BIBREF3 for explicit argumentation. As with other polysynthetic languages, Mapudungun has Noun Incorporation; however, it is unique insofar as the Noun appears to the right of the Verb, instead of to the left, as in most polysynthetic languages BIBREF4. One further distinction of Mapudungun is that, whereas other polysynthetic languages are characterized by a lack of infinitives, Mapudungun has infinitival verb forms; that is, while subordinate clauses in Mapudungun closely resemble possessed nominals and may occur with an analytic marker resembling possessor agreement, there is no agreement inflection on the verb itself. One further remarkable property of Mapudungun is its inverse voice system of agreement, whereby the highest agreement is with the argument highest in an animacy hierarchy regardless of thematic role BIBREF5.
What are the interesting grammatical properties of Mapudungun?
Firstly, it is a unique polysynthetic language insofar as the Noun appears to the right of the Verb, instead of to the left. Secondly, the distinction of Mapudungun is that, whereas other polysynthetic languages are characterized by a lack of infinitives, Mapudungun has infinitival verb forms. One further remarkable property of Mapudungun is its inverse voice system of agreement.
null
false
null
What country has the most active volcanoes in the world?
Indonesia has the most volcanoes in the world. It currently has more than 140, located in an area called the Pacific Ring of Fire, and around 76 of them are active volcanoes. Active volcanoes can be very dangerous because they can erupt at any time, and predicting when they will erupt is an ongoing science that researchers hope to improve in order to save lives.
null
false
null
Who were King Aenys I Targaryen's children?
Aegon, Viserys, Jaehaerys, Rhaena, Alysanne, Vaella
null
false
null
Schindler Holding Ltd. is a Swiss multinational company which manufactures escalators, moving walkways, and elevators worldwide, founded in Switzerland in 1874. Schindler produces, installs, maintains and modernizes lifts and escalators in many types of buildings including residential, commercial and high-rise buildings. The company is present in over 140 countries and employs more than 66,000 people worldwide. The production facilities are located in Brazil, China, Slovakia, Spain, Switzerland, India and the United States. All operations and subsidiaries of Schindler Group are organised into Schindler Holding Ltd. (German: Schindler Holding AG, French: Schindler Holding S.A.), which is publicly traded on SIX Swiss Exchange.
Are Schindler elevators safe?
Schindler Holding Ltd. is a Swiss multinational company which manufactures escalators, moving walkways, and elevators worldwide, founded in Switzerland in 1874. Schindler produces, installs, maintains and modernizes lifts and escalators in many types of buildings including residential, commercial and high-rise buildings. The company is present in over 140 countries and employs more than 66,000 people worldwide. The production facilities are located in Brazil, China, Slovakia, Spain, Switzerland, India and the United States. All operations and subsidiaries of Schindler Group are organised into Schindler Holding Ltd., which is publicly traded on SIX Swiss Exchange.
null
false
null
Following an agreement to merge the NFL with the rival American Football League (AFL), the Super Bowl was first held in 1967 to determine a champion between the best teams from the two leagues and has remained as the final game of each NFL season since the merger was completed in 1970.
Given this paragraph about the NFL, in what year was the Super Bowl first held?
1967
null
false
null
Is the manatee an endangered species?
All three species of manatee (Amazonian manatee, West Indian manatee, and the West African manatee) are listed by the World Conservation Union as vulnerable to extinction. However, the U.S. Fish and Wildlife Service (FWS) no longer considers the West Indian manatee to be "endangered", having downgraded its status to "threatened" as of March 2017. They cite improvements to habitat conditions, population growth and reductions of threats as reasoning for the change.
null
false
488
Reinforcement learning (RL) offers the potential for autonomous agents to learn complex behaviours without the need for human intervention. Despite the notable success of RL in a variety of domains, enabling RL algorithms to learn successfully in numerous real-world tasks remains a challenge. A key obstacle to the success of RL algorithms is the requirement of a rich reward signal that can guide the agent towards an optimal policy. In many settings of interest such as physical tasks and video games, rich informative signals of the agent's performance are not readily available. For example, in the video game Super Mario, the agent must perform sequences of hundreds of actions while receiving no rewards for it to successfully complete its task. In this setting, the sparse reward provides infrequent feedback of the agent's performance. This leads to RL algorithms requiring large numbers of samples (and high expense) for solving problems. Consequently, there is a great need for RL techniques that solve these problems efficiently. Various biological systems have mechanisms that produce rewards for activities that present little or no direct biological utility. Such mechanisms, which have been fashioned over millions of years, are designed to provide intrinsic rewards to promote behaviours that improve chances of outcomes with biological utility. In RL, reward shaping (RS) is a tool to introduce intrinsic rewards, known as shaping rewards, that supplement environment reward signals. These rewards can encourage exploration and insert structural knowledge in the absence of informative environment rewards, which can improve learning outcomes. In general, however, RS algorithms assume hand-crafted and domain-specific shaping functions whose construction is typically highly labour intensive. This runs contrary to the aim of autonomous learning. Moreover, poor choices of shaping rewards can worsen the agent's performance.
To resolve these issues, a useful shaping reward must be obtained autonomously. Inspired by naturally occurring systems, we tackle the problem of sparse and uninformative rewards by developing a framework that autonomously constructs shaping rewards during learning. Our framework, ROSA, works by introducing an additional RL agent, Shaper, that adaptively learns to construct shaping rewards by observing Controller (the agent whose goal is to solve the environment task) while Controller learns to solve its task. This generates tailored shaping rewards without the need for domain knowledge or manual engineering. The shaping rewards supplement the environment reward and promote effective learning; our framework therefore addresses the key challenges in RS. The resulting framework is a two-player nonzero-sum Markov game (MG), an extension of the Markov decision process (MDP) that involves two independent learners with distinct objectives. In our framework, the two agents, which have distinct learning agendas, cooperate to achieve Controller's objective. This MG formulation confers various advantages: 1) The shaping-reward function is constructed fully autonomously. The game also ensures the shaping reward improves Controller's performance, unlike RS methods that can lower performance. 2) By learning the shaping-reward function while Controller learns its optimal policy, Shaper learns to adaptively facilitate Controller's learning and improve Controller's performance. 3) Both learning processes converge, so Controller learns the optimal value function for its task. 4) ROSA learns subgoals to decompose complex tasks and promote complex exploration patterns. An integral component of ROSA is a novel combination of RL and switching controls. This enables Shaper to quickly determine useful states at which to add and calibrate shaping rewards (i.e. the states in which adding shaping rewards improves Controller's performance) and disregard others.
This is in contrast with an RL controller (i.e. Controller), which must learn its best actions at every state. This leads to Shaper quickly finding shaping rewards that guide Controller's learning process toward optimal trajectories (and away from suboptimal trajectories, c.f. Experiment 1). For our two-player framework to succeed we have to overcome several obstacles. Solving MGs involves finding a stable point in which each player responds optimally to the actions of the other. In our MG, this stable point describes a pair of policies for which Shaper introduces a shaping reward that improves performance and, with that, Controller executes an optimal policy for the task. Tractable methods for solving MGs are rare, with convergence of MG methods seldom guaranteed except in a few special cases. Nevertheless, using special features in the design of our game, we prove the existence of a stable-point solution of our MG. We then prove the convergence of our learning method to the solution of the game and show that the solution coincides with the solution of Controller's problem. This ensures Shaper learns a shaping-reward function that improves Controller's performance and that Controller learns the optimal value function for the task. We then compared our method against these baselines on performance benchmarks including Sparse Cartpole, Gravitar, Solaris, and Super Mario. Lastly, we ran a detailed suite of ablation studies (supplementary material). We compared our method with the baselines in four challenging sparse-reward environments: Cartpole, Gravitar, Solaris, and Super Mario. These environments vary in state representation, transition dynamics, and reward sparsity.
In Cartpole, a penalty of −1 is received only when the pole collapses; in Super Mario Brothers the agent can go for 100s of steps without encountering a reward.
What components of the method are necessary or helpful in solving sparse reward problems?
A key aspect of the shaping reward methodology is to densify rewards. Since the shaping reward adds rewards to supplement the environment (extrinsic) reward, the helpfulness of the framework in tackling sparse-reward environments follows from its ability to add useful shaping rewards. We also refer the reviewer to our experiment in Sparse Cartpole in Sec. 6, which by construction is a sparse reward problem. There the agent only receives a penalty of -1 if the pole collapses and receives no other rewards otherwise. In this experiment ROSA outperforms all other baselines except for the version of BiPars which has a manually engineered shaping function already included.
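As a hedged illustration of how a shaping reward densifies a sparse signal (the hand-set progress bonus below stands in for the shaping function that ROSA would learn; the chain environment and all hyperparameters are invented):

```python
import random

random.seed(0)

# A sparse 1-D chain: environment reward 1 only at the far-right state.
N = 6  # states 0..5, start at 0, terminal at 5

def env_reward(s_next):
    return 1.0 if s_next == N - 1 else 0.0

def shaping_reward(s, s_next):
    # Hypothetical stand-in for a learned shaping signal: small bonus for progress.
    return 0.1 if s_next > s else -0.1

def q_learn(shaped, episodes=500, alpha=0.5, gamma=0.95, eps=0.2):
    Q = {(s, a): 0.0 for s in range(N) for a in (-1, 1)}
    for _ in range(episodes):
        s = 0
        for _ in range(50):
            if random.random() < eps:
                a = random.choice((-1, 1))
            else:
                a = max((-1, 1), key=lambda x: Q[(s, x)])
            s_next = min(max(s + a, 0), N - 1)
            # The shaping reward supplements (is added to) the environment reward.
            r = env_reward(s_next) + (shaping_reward(s, s_next) if shaped else 0.0)
            best_next = max(Q[(s_next, -1)], Q[(s_next, 1)])
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            s = s_next
            if s == N - 1:
                break
    return Q

Q = q_learn(shaped=True)
print(max((-1, 1), key=lambda a: Q[(0, a)]))  # greedy action at the start state
```

With the shaping bonus the agent receives feedback on every step rather than only at the goal, which is the densification property the answer refers to.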
1903.00172
false
null
FLOAT SELECTED: Table 3: Precision (P), Recall (R), and Relative Coverage (RC) results on ConciergeQA. FLOAT SELECTED: Table 4: Precision (P), Recall (R), and Relative Coverage (RC) results on AmazonQA dataset. FLOAT SELECTED: Table 1: Various types of training instances.
Where did they get training data?
AmazonQA and ConciergeQA datasets
1904.08386
false
null
While each of the city descriptions is relatively short, Calvino's writing is filled with rare words, complex syntactic structures, and figurative language. Capturing the essential components of each city in a single vector is thus not as simple as it is with more standard forms of text. Nevertheless, we hope that representations from language models trained over billions of words of text can extract some meaningful semantics from these descriptions. We experiment with three different pretrained representations: ELMo BIBREF5 , BERT BIBREF6 , and GloVe BIBREF18 . To produce a single city embedding, we compute the TF-IDF weighted element-wise mean of the token-level representations. For all pretrained methods, we additionally reduce the dimensionality of the city embeddings to 40 using PCA for increased compatibility with our clustering algorithm.
How do they model a city description using embeddings?
The answers are shown as follows: * We experiment with three different pretrained representations: ELMo BIBREF5 , BERT BIBREF6 , and GloVe BIBREF18 . To produce a single city embedding, we compute the TF-IDF weighted element-wise mean of the token-level representations. For all pretrained methods, we additionally reduce the dimensionality of the city embeddings to 40 using PCA for increased compatibility with our clustering algorithm.
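The pipeline in this answer can be sketched in a few lines. The toy 3-d token vectors and the two-document corpus below are assumptions standing in for ELMo/BERT/GloVe token representations, and the final PCA reduction to 40 dimensions is omitted.

```python
# Sketch: a single city embedding as the TF-IDF weighted element-wise
# mean of token-level vectors. Toy vectors stand in for the pretrained
# contextual representations used in the paper.
import math
from collections import Counter

corpus = [["city", "of", "towers"], ["city", "of", "canals"]]

def idf(token, docs):
    df = sum(token in doc for doc in docs)
    return math.log(len(docs) / df) if df else 0.0

def city_embedding(tokens, token_vecs, docs):
    """TF-IDF weighted mean of the token vectors of one description."""
    tf = Counter(tokens)
    weights = [tf[t] * idf(t, docs) for t in tokens]
    total = sum(weights) or 1.0
    dim = len(next(iter(token_vecs.values())))
    emb = [0.0] * dim
    for t, w in zip(tokens, weights):
        for i, v in enumerate(token_vecs[t]):
            emb[i] += w * v / total
    return emb

vecs = {"city": [1.0, 0.0, 0.0], "of": [0.0, 1.0, 0.0], "towers": [0.0, 0.0, 1.0]}
emb = city_embedding(corpus[0], vecs, corpus)
```

Because `city` and `of` occur in every description, their IDF (and hence their weight) is zero, so the rare token `towers` dominates the resulting embedding; this is the point of TF-IDF weighting over a plain mean.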
null
false
5
Presenting pronunciation data in several languages to the network might create problems because different languages have different pronunciation patterns. For example, the string `real' is pronounced differently in English, German, Spanish, and Portuguese. We solve this problem by prepending each grapheme sequence with an artificial token consisting of the language's ISO 639-3 code enclosed in angle brackets. The English word `real', for example, would be presented to the system as <eng> r e a l. The artificial token is treated simply as an element of the grapheme sequence. This is similar to the approach taken by johnson2016google in their zero-shot NMT system. However, their source-side artificial tokens identify the target language, whereas ours identify the source language. An alternative approach, used by ostling2017continuous, would be to concatenate a language embedding to the input at each time step. They do not evaluate their approach on grapheme-to-phoneme conversion.
How are different languages identified?
They prepend each grapheme sequence with an artificial token consisting of the language's ISO 639-3 code enclosed in angle brackets.
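A minimal sketch of this input encoding (the helper name is ours; `spa` is the ISO 639-3 code for Spanish):

```python
# Each grapheme sequence is prefixed with an artificial token holding the
# ISO 639-3 code of the *source* language, then fed to the seq2seq model
# as an ordinary symbol.

def encode_graphemes(word, iso_code):
    """Return the grapheme sequence with the language token prepended."""
    return [f"<{iso_code}>"] + list(word)

# The same spelling gets distinct inputs per language, so the model can
# learn language-specific pronunciation patterns.
english = encode_graphemes("real", "eng")
spanish = encode_graphemes("real", "spa")
```

The two sequences share every grapheme and differ only in the language token, which is exactly what lets a single model disambiguate the pronunciation.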
null
false
null
Tell if these are sports done in water, on land or both: triathlon, biathlon, water polo, sailing, crew, beach volleyball, basketball
Triathlon is done on both land and water, biathlon is done on land, water polo is done in water, sailing is done in water, crew is done in water, beach volleyball is done on land and basketball is done on land
1710.09753
false
null
Slot Filling is an information extraction task which has become popular in the last years BIBREF3 . It is a shared task organized by the Text Analysis Conference (TAC). The task aims at extracting information about persons, organizations or geo-political entities from a large collection of news, web and discussion forum documents. An example is “Steve Jobs” for the slot “X founded Apple”. Thinking of a text passage like “Steve Jobs was an American businessman. In 1976, he co-founded Apple”, it is clear that coreference resolution can play an important role for finding the correct slot filler value.
What is the task of slot filling?
The answers are shown as follows: * The task aims at extracting information about persons, organizations or geo-political entities from a large collection of news, web and discussion forum documents.
null
false
null
Kirchenlamitz is a town in the district of Wunsiedel, in Bavaria, Germany. It is situated in the Fichtelgebirge, 13 km northwest of Wunsiedel and 19 km south of Hof.
In which part of Germany is the town Kirchenlamitz located?
Bavaria
null
false
null
Sweet Tree Holdings is a Vermont, United States, company that produces maple products from a sugarbush and processing facility in Island Pond.
What type of products does this company produce?
maple products
null
false
null
What is a DBU?
dBu and dBV are decibel units specifically for measuring voltage. Unlike the dB, they are actual units because they can be converted to a voltage value: dBu is dB relative to 0.775 volts, such that 0 dBu = 0.775 volts, while dBV is dB relative to 1 volt.
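Since 0 dBu is anchored at 0.775 V (and 0 dBV at 1 V), conversion either way is a one-line logarithm; a sketch (the helper names are ours):

```python
# Voltage <-> dBu/dBV conversions. dBu is referenced to 0.775 V,
# dBV to 1.0 V; both use the 20*log10 voltage-ratio form.
import math

DBU_REF_VOLTS = 0.775

def volts_to_dbu(v):
    return 20.0 * math.log10(v / DBU_REF_VOLTS)

def dbu_to_volts(dbu):
    return DBU_REF_VOLTS * 10.0 ** (dbu / 20.0)

def volts_to_dbv(v):
    return 20.0 * math.log10(v / 1.0)
```

By construction `volts_to_dbu(0.775)` is 0 dBu, and the two dBu functions are inverses of each other.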
1806.04524
false
null
In this paper we have formalized the problem of automatic fill-on-the-blanks quiz generation using two well-defined learning schemes: sequence classification and sequence labeling. We have also proposed concrete architectures based on LSTMs to tackle the problem in both cases.
Which two schemes are used?
The answers are shown as follows: * sequence classification * sequence labeling
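The two schemes differ in how training targets are derived, which can be made concrete with a small sketch. The LSTM architectures themselves are omitted, and the example sentence and helper names are assumptions of ours.

```python
# Sequence labeling: one binary label per token (1 = token belongs to the
# blank). Sequence classification: one input per (sentence, candidate
# span) pair, labeled as a good or bad blank.

def labeling_targets(tokens, blank_start, blank_end):
    """Per-token 0/1 labels marking the span to be blanked out."""
    return [1 if blank_start <= i < blank_end else 0 for i in range(len(tokens))]

def classification_example(tokens, span):
    """Input string for the classification scheme: the span blanked out."""
    start, end = span
    blanked = tokens[:start] + ["___"] + tokens[end:]
    return " ".join(blanked)

tokens = "the mitochondria is the powerhouse of the cell".split()
labels = labeling_targets(tokens, 1, 2)           # blank out "mitochondria"
cloze = classification_example(tokens, (1, 2))
```

The labeling scheme predicts the blank jointly over the whole sequence, while the classification scheme scores one candidate blank at a time.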
null
false
null
Why is the Audi Q5 Sportback better than its rivals?
The Audi Q5 is a great car. Its rivals are the Mercedes GLC and the BMW X3. Compared to these, the Audi Q5 Sportback is better looking. The car also has enough power to beat these two on hill roads thanks to its Quattro all-wheel drive.
null
false
null
Give me a list of the top 10 types of clouds
Cumulus, Stratus, Stratocumulus, Cirrus, Cirrocumulus, Cirrostratus, Cumulonimbus, Altocumulus, Nimbostratus, Altostratus.
null
false
65
We prepared precomputed ELMo contextual embeddings for seven languages: Croatian, Estonian, Finnish, Latvian, Lithuanian, Slovenian, and Swedish. We present the necessary background on embeddings and contextual embeddings, the details of training the embedding models, and their evaluation. We show that the size of used training sets importantly affects the quality of produced embeddings, and therefore the existing publicly available ELMo embeddings for the processed languages are inadequate. We trained new ELMo embeddings on larger training sets and analysed their properties on the analogy task and on the NER task. The results show that the newly produced contextual embeddings produce substantially better results compared to the non-contextual fastText baseline. In future work, we plan to use the produced contextual embeddings on the problems of news media industry. The pretrained ELMo models will be deposited to the CLARIN repository by the time of the final version of this paper. The pretrained EMBEDDIA ELMo models are publicly available on the CLARIN repository.
Are all the trained models publicly available?
Yes.
null
false
null
What kind of species are the Triplophysa longipectoralis
Triplophysa longipectoralis is a cave-living species of stone loach with vestigial eyes. The fish lives in clear water at temperatures below 20 °C. The holotype was caught in Xunle town, Huanjiang Maonan Autonomous County in the Liu River basin, Guangxi, China and was described by Zheng et al. in 2009.
null
false
null
Why does my back hurt?
There are several reasons why your back might hurt. Generally, the problem starts not in your back but in a connected muscle group. To assess the problem, start with your feet and work your way up to your shoulders and neck. If your shoulders and/or neck are tight, that will affect your back muscles. Also, consider your shoes, your weight, the strength of your knees, and your core strength.
null
false
null
Mao Zedong (26 December 1893 – 9 September 1976), also known as Chairman Mao, was a Chinese communist revolutionary who was the founder of the People's Republic of China (PRC), which he led as the chairman of the Chinese Communist Party from the establishment of the PRC in 1949 until his death in 1976. Ideologically a Marxist–Leninist, his theories, military strategies, and political policies are collectively known as Maoism. Mao was the son of a prosperous peasant in Shaoshan, Hunan. He supported Chinese nationalism and had an anti-imperialist outlook early in his life, and was particularly influenced by the events of the Xinhai Revolution of 1911 and May Fourth Movement of 1919. He later adopted Marxism–Leninism while working at Peking University as a librarian and became a founding member of the Chinese Communist Party (CCP), leading the Autumn Harvest Uprising in 1927. During the Chinese Civil War between the Kuomintang (KMT) and the CCP, Mao helped to found the Chinese Workers' and Peasants' Red Army, led the Jiangxi Soviet's radical land reform policies, and ultimately became head of the CCP during the Long March. Although the CCP temporarily allied with the KMT under the Second United Front during the Second Sino-Japanese War (1937–1945), China's civil war resumed after Japan's surrender, and Mao's forces defeated the Nationalist government, which withdrew to Taiwan in 1949. On 1 October 1949, Mao proclaimed the foundation of the PRC, a Marxist–Leninist single-party state controlled by the CCP. In the following years he solidified his control through the Chinese Land Reform against landlords, the Campaign to Suppress Counterrevolutionaries, the "Three-anti and Five-anti Campaigns", and through a truce in the Korean War, which altogether resulted in the deaths of several million Chinese. 
From 1953 to 1958, Mao played an important role in enforcing command economy in China, constructing the first Constitution of the PRC, launching the industrialisation program, and initiating military projects such as the "Two Bombs, One Satellite" project and Project 523. His foreign policies during this time were dominated by the Sino-Soviet split which drove a wedge between China and the Soviet Union. In 1955, Mao launched the Sufan movement, and in 1957 he launched the Anti-Rightist Campaign, in which at least 550,000 people, mostly intellectuals and dissidents, were persecuted. In 1958, he launched the Great Leap Forward that aimed to rapidly transform China's economy from agrarian to industrial, which led to the deadliest famine in history and the deaths of 15–55 million people between 1958 and 1962. In 1963, Mao launched the Socialist Education Movement, and in 1966 he initiated the Cultural Revolution, a program to remove "counter-revolutionary" elements in Chinese society which lasted 10 years and was marked by violent class struggle, widespread destruction of cultural artifacts, and an unprecedented elevation of Mao's cult of personality. Tens of millions of people were persecuted during the Revolution, while the estimated number of deaths ranges from hundreds of thousands to millions. After years of ill health, Mao suffered a series of heart attacks in 1976 and died at the age of 82. During the Mao era, China's population grew from around 550 million to over 900 million while the government did not strictly enforce its family planning policy. Widely regarded as one of the most influential figures of the twentieth century, Mao remains a controversial figure within and outside China. Beyond politics, Mao is also known as a theorist, military strategist, and poet. Mao has been credited with transforming China from a semi-colony to a leading world power, with greatly advanced literacy, women's rights, basic healthcare, primary education and life expectancy. 
However, Mao's government was responsible for vast numbers of deaths, with estimates ranging from 40 to 80 million victims through starvation, persecution, prison labour, and mass executions. During his leadership tenure, China was heavily involved with other Asian communist conflicts such as the Korean War, the Vietnam War, and the Cambodian Civil War.
Given this text of Mao Zedong, tell me how many people were killed in Mao's era through starvation, persecution, prison labour, and mass executions?
Mao's government was responsible for vast numbers of deaths, with estimates ranging from 40 to 80 million victims through starvation, persecution, prison labour, and mass executions.
1909.01247
true
null
The corpus creation process involved a small number of people that have voluntarily joined the initiative, with the authors of this paper directing the work. Initially, we searched for NER resources in Romanian, and found none. Then we looked at English resources and read the in-depth ACE guide, out of which a 16-class draft evolved. We then identified a copyright-free text from which we hand-picked sentences to maximize the amount of entities while maintaining style balance. The annotation process was trial-and-error, with cycles composed of annotation, discussing confusing entities, updating the annotation guide schematic and going through the corpus section again to correct entities following guide changes. The annotation process was done online, in BRAT. The actual annotation involved 4 people, has taken about 6 months (as work was volunteer-based, we could not have reached for 100% time commitment from the people involved), and followed the steps:
Did they experiment with the corpus?
Yes.
null
false
null
what was the early uses of computers?
Early computers were meant to be used only for calculations. Simple manual instruments like the abacus have aided people in doing calculations since ancient times. Early in the Industrial Revolution, some mechanical devices were built to automate long, tedious tasks, such as guiding patterns for looms.
null
false
14
We evaluate our approach by training an autocomplete system on 500K randomly sampled sentences from Yelp reviews BIBREF6 (see Appendix for details). We quantify the efficiency of a communication scheme $(q_{\alpha },p_{\beta })$ by the retention rate of tokens, which is measured as the fraction of tokens that are kept in the keywords. The accuracy of a scheme is measured as the fraction of sentences generated by greedily decoding the model that exactly matches the target sentence.
What dataset is used for training the autocomplete system?
A dataset that contains sampled sentences from Yelp reviews.
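The two evaluation metrics in the evidence above are straightforward to sketch. The toy review sentences below are assumptions; in the paper the targets are Yelp sentences and the decodes come from the trained model.

```python
# Retention rate: fraction of tokens kept in the keywords.
# Accuracy: fraction of greedy decodes that exactly match the target.

def retention_rate(targets, keywords):
    kept = sum(len(k) for k in keywords)
    total = sum(len(t) for t in targets)
    return kept / total

def exact_match_accuracy(decoded, targets):
    return sum(d == t for d, t in zip(decoded, targets)) / len(targets)

targets  = [["great", "food", "and", "service"], ["loved", "it"]]
keywords = [["great", "service"], ["loved"]]
decoded  = [["great", "food", "and", "service"], ["loved", "them"]]
```

Retention measures how much the user must type (lower is more efficient), while accuracy measures whether the compressed keywords are still enough to reconstruct the sentence; the paper's efficiency/accuracy trade-off is between exactly these two numbers.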
null
false
18
Training Data. We choose WIT3's TED corpus BIBREF12 as the basis of our experiments since it might be the only high-quality parallel data for many low-resourced language pairs. TED is also multilingual in the sense that it includes a number of talks which are commonly translated into many languages. In addition, we use a much larger corpus provided freely by the WMT organizers when we evaluate the impact of our approach in a real machine translation campaign. It includes the parallel corpus extracted from the digital corpus of the European Parliament (EPPS), the News Commentary (NC) and the web-crawled parallel data (CommonCrawl). While the number of sentences in popular TED corpora varies from 13 thousand to 17 thousand, the total number of sentences in the larger corpus is approximately 3 million. Neural Machine Translation Setup. All experiments have been conducted using the NMT framework Nematus. Following the work of Sennrich2016a, subword segmentation is handled in the preprocessing phase using Byte-Pair Encoding (BPE). Except where stated clearly in some experiments, we set the number of BPE merge operations at 39500 on the joint of source and target data. When training all NMT systems, we take out the sentence pairs exceeding 50-word length and shuffle them inside every minibatch. Our short-list vocabularies contain the 40,000 most frequent words, while the others are considered rare words and handled by subword translation. We use a 1024-cell GRU layer and 1000-dimensional embeddings, with dropout at every layer with the probability of 0.2 in the embedding and hidden layers and 0.1 in the input and output layers. We trained our systems using gradient descent optimization with Adadelta BIBREF13 on minibatches of size 80, and the gradient is rescaled whenever its norm exceeds 1.0. All trainings last approximately seven days if the early-stopping condition is not reached.
At certain times, an external evaluation script on BLEU BIBREF14 is run on a development set to decide the early-stopping condition. This evaluation script is also used to choose the model achieving the best BLEU on the development set, instead of the maximal log-likelihood between the translations and target sentences, while training. In translation, the framework produces n-best candidates and we then use a beam search with a beam size of 12 to get the best translation.
What tool is used in all of their experiments?
NMT framework Nematus.
null
false
null
Friends is an American television sitcom created by David Crane and Marta Kauffman, which aired on NBC from September 22, 1994, to May 6, 2004, lasting ten seasons. With an ensemble cast starring Jennifer Aniston, Courteney Cox, Lisa Kudrow, Matt LeBlanc, Matthew Perry and David Schwimmer, the show revolves around six friends in their 20s and 30s who live in Manhattan, New York City. The series was produced by Bright/Kauffman/Crane Productions, in association with Warner Bros. Television. The original executive producers were Kevin S. Bright, Kauffman, and Crane. Kauffman and Crane began developing Friends under the working title Insomnia Cafe between November and December 1993. They presented the idea to Bright, and together they pitched a seven-page treatment of the show to NBC. After several script rewrites and changes, including title changes to Six of One and Friends Like Us, the series was finally named Friends. Filming took place at Warner Bros. Studios in Burbank, California. The show ranked within the top ten of the final television season ratings; it ultimately reached the number-one spot in its eighth season. The series finale aired on May 6, 2004, and was watched by around 52.5 million American viewers, making it the fifth-most-watched series finale in television history and the most-watched television episode of the 2000s.
Based on the above information, provide the real names of the cast in the American television sitcom Friends. Separate the names using a comma. Also provide the date when the series finale was aired. Provide the date in MM/DD/YYYY format.
As per the above passage, the cast of Friends includes Jennifer Aniston, Courteney Cox, Lisa Kudrow, Matt LeBlanc, Matthew Perry, and David Schwimmer. The series finale was aired on 05/06/2004.
null
false
null
List the names of five of Adam Sandler's movies
The Waterboy, Happy Gilmore, The Wedding Singer, Anger Management, Hustle
null
false
469
To assess the relationship between the hardness degree of samples and the misclassification rate, we compute the hardness degree of CIFAR10 and CIFAR100 test samples and then partition them into ten groups based on their hardness degree. It is important to note that the number of samples in each group is different. Afterward, we calculate what percentage of samples in each group is classified incorrectly. Figure 6 demonstrates that the misclassification rate is increased by increasing the hardness degree of samples. In other words, there is a strong positive correlation between the hardness degree of samples and the misclassification rate. The figure indicates the percentage of samples in each hardness degree range by a green curve. For example, the hardness degree of 40.65% of CIFAR10 test samples (4065 samples) is in the range [0,9] for MobileNet classifier, from which 99.88% is classified correctly, or the hardness degree of 7.4% of CIFAR10 test samples (740 samples) is in the range [90,99] for MobileNet classifier, from which 55.27% is classified correctly. More than 99% and 95% of samples being learned in the first 30 epochs (hardness degree < 30) are correctly classified in CIFAR10 and CIFAR100 test samples, respectively. On the other side, less than 55% and 36% of samples being learned in the last 10 epochs (hardness degree ≥ 90) are correctly classified in CIFAR10 and CIFAR100 test samples, respectively.
Are there any significant results from Figure 2?
We first want to mention that since Figure 2 is not directly relevant to the main purpose of the paper, we have moved it to Appendix B and changed the figure to avoid misunderstanding. As mentioned in the paper, calculating the hardness degree of samples does not require the true label of samples. Nevertheless, since we had access to the true labels of CIFAR10 and CIFAR100 test samples, we depict Figure 2 (Figure 6) to give more intuition about the hardness degree of samples. To depict this figure, we first calculate the hardness degree of CIFAR10 and CIFAR100 test samples and then partition them into ten groups based on their hardness degree. Notably, the number of samples in each group is different. Afterward, we calculate what percentage of samples in each group is classified incorrectly. The figure demonstrates that the misclassification rate is increased by increasing the hardness degree of samples. In other words, there is a strong positive correlation between the hardness degree of samples and the misclassification rate. Figure 6 indicates the percentage of samples in each hardness degree range by a green curve. We have added an example to Appendix B to clarify Figure 6.
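The bucketing analysis behind Figure 6 can be sketched directly from the description above: partition samples into ten hardness ranges ([0,9], [10,19], ..., [90,99]) and compute each group's misclassification rate. The toy hardness values, correctness flags, and helper name are assumptions for illustration.

```python
# Group samples by hardness degree into ten bins and compute the
# per-bin misclassification rate, as in the described figure.

def misclassification_by_hardness(hardness, correct, n_bins=10, span=100):
    width = span // n_bins
    wrong = [0] * n_bins
    total = [0] * n_bins
    for h, ok in zip(hardness, correct):
        b = min(h // width, n_bins - 1)
        total[b] += 1
        wrong[b] += 0 if ok else 1
    rates = [w / t if t else 0.0 for w, t in zip(wrong, total)]
    return rates, total

hardness = [3, 7, 15, 42, 95, 97, 99]
correct  = [True, True, True, False, False, True, False]
rates, counts = misclassification_by_hardness(hardness, correct)
```

In this toy data the low-hardness bin is classified perfectly while the [90,99] bin is mostly wrong, mirroring the positive correlation between hardness degree and misclassification rate reported for CIFAR10/100.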
null
false
17
Following developing news stories is imperative to making real-time decisions on important political and public safety matters. Given the abundance of media providers and languages, this endeavor is an extremely difficult task. As such, there is a strong demand for automatic clustering of news streams, so that they can be organized into stories or themes for further processing. Performing this task in an online and efficient manner is a challenging problem, not only for newswire, but also for scientific articles, online reviews, forum posts, blogs, and microblogs. A key challenge in handling document streams is that the story clusters must be generated on the fly in an online fashion: this requires handling documents one-by-one as they appear in the document stream. In this paper, we provide a treatment to the problem of online document clustering, i.e. the task of clustering a stream of documents into themes. For example, for news articles, we would want to cluster them into related news stories. To this end, we introduce a system which aggregates news articles into fine-grained story clusters across different languages in a completely online and scalable fashion from a continuous stream. Our clustering approach is part of a larger media monitoring project to solve the problem of monitoring massive text and TV/Radio streams (speech-to-text). In particular, media monitors write intelligence reports about the most relevant events, and being able to search, visualize and explore news clusters assists in gathering more insight about a particular story. Since relevant events may be spawned from any part of the world (and from many multilingual sources), it becomes imperative to cluster news across different languages. In terms of granularity, the type of story clusters we are interested in are the group of articles which, for example : (i) Narrate recent air-strikes in Eastern Ghouta (Syria); (ii) Describe the recent launch of Space X's Falcon Heavy rocket. 
Our main contributions are as follows:
• We develop a system that aggregates news articles into fine-grained story clusters across different languages in a completely online and scalable fashion from a continuous stream. As discussed in the introduction, this is a highly relevant task for the use-case of media monitoring.
• We formulate the problem of online multilingual document clustering and the representation that such clustering takes by interlacing the problem of monolingual clustering with crosslingual clustering. The representation of our clusters is interpretable, and similarly to topic models, consists of a set of keywords and weights associated with the relevant cluster. In our formulation, a monolingual cluster is a group of documents, and a crosslingual cluster is a group of monolingual clusters in different languages.
• We compare our approach to our own implementation of a state-of-the-art streaming method, and show much superior results for a dataset in English, Spanish and German.
What are the author's main contributions?
• They develop a system that aggregates news articles into fine-grained story clusters across different languages in a completely online and scalable fashion from a continuous stream.
• They formulate the problem of online multilingual document clustering and the representation that such clustering takes by interlacing the problem of monolingual clustering with crosslingual clustering.
• They compare their approach to their own implementation of a state-of-the-art streaming method, and show much superior results for a dataset in English, Spanish and German.
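The online clustering loop described above can be sketched minimally: each incoming document either joins the most similar existing story cluster or starts a new one. The bag-of-words cosine, the threshold, and the toy headlines are assumptions; the actual system uses richer multilingual representations and crosslingual cluster linking.

```python
# Minimal online story clustering: assign each document to the most
# similar cluster if similarity clears a threshold, else open a new one.
from collections import Counter
import math

def cosine(a, b):
    num = sum(a[t] * b.get(t, 0) for t in a)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def cluster_stream(docs, threshold=0.5):
    clusters = []  # (centroid keyword weights, member doc indices)
    for i, doc in enumerate(docs):
        vec = Counter(doc.split())
        best, best_sim = None, 0.0
        for c in clusters:
            sim = cosine(vec, c[0])
            if sim > best_sim:
                best, best_sim = c, sim
        if best is not None and best_sim >= threshold:
            best[0].update(vec)   # online centroid (keyword-weight) update
            best[1].append(i)
        else:
            clusters.append((vec, [i]))
    return [members for _, members in clusters]

stream = [
    "airstrikes hit eastern ghouta syria",
    "falcon heavy rocket launch space x",
    "syria ghouta airstrikes continue",
]
groups = cluster_stream(stream)
```

The cluster centroids here are interpretable keyword-weight maps, echoing the keyword-based cluster representation the authors describe, and every document is handled exactly once as it arrives.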
null
false
null
Who were the members of England's men's football World Cup winning team in 1966?
- Gordon Banks (Goalkeeper) - George Cohen (Defender) - Ray Wilson (Defender) - Nobby Stiles (Midfielder) - Jack Charlton (Defender) - Bobby Moore (Defender, Captain) - Alan Ball (Midfielder) - Jimmy Greaves (Forward) - Bobby Charlton (Midfielder) - Geoff Hurst (Forward) - John Connelly (Forward) - Ron Springett (Goalkeeper) - Peter Bonetti (Goalkeeper) - Jimmy Armfield (Defender) - Gerry Byrne (Defender) - Martin Peters (Midfielder) - Ron Flowers (Midfielder) - Norman Hunter (Defender) - Terry Paine (Forward) - Ian Callaghan (Midfielder) - Roger Hunt (Forward) - George Eastham (Midfielder)
null
false
null
What are some examples of frequently purchased essential oils?
Some examples of frequently purchased essential oils include peppermint, spearmint, tea tree, grapefruit, lemon, orange, rosemary, and spruce oils.
null
false
328
We compare performances of our model with the implementation of BIBREF13 and the results are shown in Table TABREF43 . Our model obtains better performances in the Multi-Domain scenario with an average improvement of 4.5%, where datasets are product reviews on different domains with similar sequence lengths and the same class number, thus producing stronger correlations. The Multi-Cardinality scenario also achieves significant improvements of 2.77% on average, where datasets are movie reviews with different cardinalities. However, the Multi-Objective scenario benefits less from multi-task learning due to a lack of salient correlation among sentiment, topic and question type. The QC dataset aims to classify each question into six categories and its performance even gets worse, which may be caused by potential noise introduced by other tasks. In practice, the structure of our model is flexible, as couplings and fusions between some empirically unrelated tasks can be removed to alleviate computation costs. We further explore the influence of the hyperparameter $n$ in TOS on our model, which can be any positive integer. A higher value means larger and more various sample combinations, while requiring higher computation costs. Figure FIGREF45 shows the performances of datasets in the Multi-Domain scenario with different values of $n$. Compared to the smallest setting, our model can achieve considerable improvements as $n$ grows, since more sample combinations are available. However, there are no more salient gains as $n$ gets larger, and potential noise from other tasks may lead to performance degradation. For a trade-off between efficiency and effectiveness, we determine the optimal value of $n$ for our experiments.
In order to measure the correlation strength between two tasks $t_i$ and $t_j$, we learn them jointly with our model and define the Pair-wise Performance Gain (PPG) from the performances of tasks $t_i$ and $t_j$ when learned individually and jointly. We calculate PPGs for every two tasks in Table TABREF35 and illustrate the results in Figure FIGREF47 , where darkness of colors indicates strength of correlation. It is intuitive that datasets of the Multi-Domain scenario obtain relatively higher PPGs with each other as they share similar cardinalities and abundant low-level linguistic characteristics. Sentences of the QC dataset are much shorter and convey unique characteristics from other tasks, thus resulting in quite lower PPGs.
Why is the structure of the authors' model flexible in practice?
Because couplings and fusions between some empirically unrelated tasks can be removed to alleviate computation costs.
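The Pair-wise Performance Gain mentioned in the evidence is defined only abstractly here; one plausible instantiation, the summed gain of both tasks from joint over individual training, can be sketched as follows. Both the formula choice and the toy accuracies are assumptions, not the paper's exact definition.

```python
# Sketch of a Pair-wise Performance Gain (PPG) computation: how much both
# tasks improve when trained jointly rather than individually. The summed
# -gain form is an illustrative assumption.

def ppg(perf_single, perf_joint, task_a, task_b):
    """Summed improvement of both tasks from joint over single training."""
    return sum(perf_joint[(task_a, task_b)][t] - perf_single[t]
               for t in (task_a, task_b))

perf_single = {"books": 0.80, "dvd": 0.78, "qc": 0.90}
perf_joint = {
    ("books", "dvd"): {"books": 0.84, "dvd": 0.82},   # related domains
    ("books", "qc"):  {"books": 0.80, "qc": 0.89},    # unrelated tasks
}
strong = ppg(perf_single, perf_joint, "books", "dvd")
weak   = ppg(perf_single, perf_joint, "books", "qc")
```

A high PPG (here, the two product-review domains) signals tasks worth coupling, while a near-zero or negative PPG (here, books paired with question classification) flags a coupling that could be removed to save computation, which is exactly the flexibility the answer describes.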
null
false
null
Give me a list of the best bikes for riding on roads.
The best bikes for riding on the road are Cervelo, BMC, Pinarello, Specialized and Felt.
null
false
null
If Phil has three siblings, Margaret, Janis, and Louis then what are all are the names of Phil’s mother’s children?
Phil’s mother named her children Phil, Margaret, Janis, and Louis.
null
false
null
What is economics?
Economics is the study of the ALLOCATION of SCARCE resources to meet UNLIMITED human wants.
null
false
null
Space exploration started during the late 1920s, leading to the first crewed rocket cars and rocket planes; this was followed by US and Soviet activities from 1950 onwards.
When did space exploration begin?
The world's first large-scale experimental rocket program was Opel-RAK, under the leadership of Fritz von Opel and Max Valier during the late 1920s, leading to the first crewed rocket cars and rocket planes, which paved the way for the Nazi-era V2 program and US and Soviet activities from 1950 onwards. The Opel-RAK program and its spectacular public demonstrations of ground and air vehicles drew large crowds, caused global public excitement (the so-called "Rocket Rumble"), and had a large, long-lasting impact on later spaceflight pioneers like Wernher von Braun. Common rationales for exploring space include advancing scientific research, national prestige, uniting different nations, ensuring the future survival of humanity, and developing military and strategic advantages against other countries. The early era of space exploration was driven by a "Space Race" between the Soviet Union and the United States. The launch of the first human-made object to orbit Earth, the Soviet Union's Sputnik 1, on 4 October 1957, and the first Moon landing by the American Apollo 11 mission on 20 July 1969 are often taken as landmarks for this initial period. The Soviet space program achieved many of the first milestones, including the first living being in orbit in 1957, the first human spaceflight (Yuri Gagarin aboard Vostok 1) in 1961, the first spacewalk (by Alexei Leonov) on 18 March 1965, the first automatic landing on another celestial body in 1966, and the launch of the first space station (Salyut 1) in 1971. After the first 20 years of exploration, focus shifted from one-off flights to renewable hardware, such as the Space Shuttle program, and from competition to cooperation, as with the International Space Station (ISS). With the substantial completion of the ISS following STS-133 in March 2011, plans for space exploration by the U.S. remain in flux.
Constellation, a Bush Administration program for a return to the Moon by 2020 was judged inadequately funded and unrealistic by an expert review panel reporting in 2009. The Obama Administration proposed a revision of Constellation in 2010 to focus on the development of the capability for crewed missions beyond low Earth orbit (LEO), envisioning extending the operation of the ISS beyond 2020, transferring the development of launch vehicles for human crews from NASA to the private sector, and developing technology to enable missions to beyond LEO, such as Earth–Moon L1, the Moon, Earth–Sun L2, near-Earth asteroids, and Phobos or Mars orbit.
null
false
null
Tell me whether these are food or drink: juice, hamburger, coke, pizza, hotdog, steak, lemonade, apple cider.
Food: hamburger, pizza, hotdog, steak. Drink: juice, coke, lemonade, apple cider.
null
false
118
With social media becoming increasingly popular, on which lots of news and real-time events are reported, developing automated question answering systems is critical to the effectiveness of many applications that rely on real-time knowledge. While previous datasets have concentrated on question answering (QA) for formal text like news and Wikipedia, we present the first large-scale dataset for QA over social media data. To ensure that the tweets we collected are useful, we only gather tweets used by journalists to write news articles. We then ask human annotators to write questions and answers upon these tweets. Unlike other QA datasets like SQuAD in which the answers are extractive, we allow the answers to be abstractive. We show that two recently proposed neural models that perform well on formal texts are limited in their performance when applied to our dataset. In addition, even the fine-tuned BERT model is still lagging behind human performance with a large margin. Our results thus point to the need of improved QA systems targeting social media text. In this paper, we propose the first large-scale dataset for QA over social media data.
Is their data set the first large-scale dataset for QA over social media data?
Yes.
null
false
null
The most common cause of color blindness is an inherited problem or variation in the functionality of one or more of the three classes of cone cells in the retina, which mediate color vision. The most common form is caused by a genetic disorder called congenital red–green color blindness. Males are more likely to be color blind than females, because the genes responsible for the most common forms of color blindness are on the X chromosome. Non-color-blind females can carry genes for color blindness and pass them on to their children. Color blindness can also result from physical or chemical damage to the eye, the optic nerve, or parts of the brain. Screening for color blindness is typically done with the Ishihara color test. There is no cure for color blindness. Diagnosis may allow an individual, or their parents/teachers to actively accommodate the condition. Special lenses such as EnChroma glasses or X-chrom contact lenses may help people with red–green color blindness at some color tasks, but they do not grant the wearer "normal color vision". Mobile apps can help people identify colors.
Please summarize the causes and cure for color blindness given in the passage?
The most common cause of color blindness is genetic disorder. Males are more likely to be color blind than females, because the genes responsible for the same are present on the X chromosome. Females who are non-color blind could still carry and pass on genes for color blindness. Any kind of physical or chemical damage to the eye, optic nerve or brain could also lead to the same. There is no cure for color blindness. Affected people can actively accommodate the condition or use special glasses or lenses that can help people with red-green color blindness to a certain extent. There are mobile apps available as well to help identify colors.
null
false
null
What is the plot of Persuasion by Jane Austen?
Persuasion, published in 1817, is a romantic novel with themes of mature love, second chances, and family loyalty. Persuasion centers around Anne Elliot, who is one of three daughters of Sir Walter Elliot. Sir Walter is a widower who is conceited and selfish, and spends lavishly and beyond his means, which has forced the family to retrench by renting out their landed estate and relocating to lower-cost Bath. Anne is wise and kind but, at 26, is past her prime. At 19, Anne fell in love with, and accepted a marriage proposal from, Frederick Wentworth, a promising naval commander who is of a lower social status than Anne. Anne is persuaded by her family (father and older sister) and close friend (Lady Russell) to break off the proposal. The novel starts with the family planning for their relocation to Bath, and the Elliots have coincidentally rented their home to the sister and brother-in-law of Frederick Wentworth, now a naval captain. This provides Captain Wentworth with the opportunity to re-enter Anne’s life and her social circle. Captain Wentworth has achieved much success and wealth from his naval career. While Anne immediately notices and admires Captain Wentworth, he initially remains cold to her and comments that she has aged significantly since he last saw her. The plot takes various twists and turns, with other potential romantic interests for Captain Wentworth and Anne. As Captain Wentworth sees more of Anne, he learns to appreciate and respect her, and ultimately they fall in love again and agree to renew their engagement.
null
false
381
Named entity typing is the task of detecting the type (e.g., person, location, or organization) of a named entity in natural language text. Entity type information has shown to be useful in natural language tasks such as question answering BIBREF0 , knowledge-base population BIBREF1 , BIBREF2 , and co-reference resolution BIBREF3 . Motivated by its application to downstream tasks, recent work on entity typing has moved beyond standard coarse types towards finer-grained semantic types with richer ontologies BIBREF0 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 . Rather than assuming an entity can be uniquely categorized into a single type, the task has been approached as a multi-label classification problem: e.g., in “... became a top seller ... Monopoly is played in 114 countries. ...” (fig:arch), “Monopoly” is considered both a game as well as a product. The state-of-the-art approach BIBREF8 for fine-grained entity typing employs an attentive neural architecture to learn representations of the entity mention as well as its context. These representations are then combined with hand-crafted features (e.g., lexical and syntactic features), and fed into a linear classifier with a fixed threshold. While this approach outperforms previous approaches which only use sparse binary features BIBREF4 , BIBREF6 or distributed representations BIBREF9 , it has a few drawbacks: (1) the representations of left and right contexts are learnt independently, ignoring their mutual connection; (2) the attention on context is computed solely upon the context, considering no alignment to the entity; (3) document-level contexts which could be useful in classification are not exploited; and (4) hand-crafted features heavily rely on system or human annotations. 
To overcome these drawbacks, we propose a neural architecture (fig:arch) which learns more context-aware representations by using a better attention mechanism and taking advantage of semantic discourse information available in both the document as well as sentence-level contexts. Further, we find that adaptive classification thresholds leads to further improvements. Experiments demonstrate that our approach, without any reliance on hand-crafted features, outperforms prior work on three benchmark datasets. In this paper, we focus on designing different variants of using BERT on the extractive summarization task and showing their results on CNN/Dailymail and NYT datasets.
What dataset does the paper use?
CNN/Dailymail and NYT datasets.
null
false
299
Extracting data elements such as study descriptors from publication full texts is an essential step in a number of tasks including systematic review preparation BIBREF0 , construction of reference databases BIBREF1 , and knowledge discovery BIBREF2 . These tasks typically involve domain experts identifying relevant literature pertaining to a specific research question or a topic being investigated, identifying passages in the retrieved articles that discuss the sought after information, and extracting structured data from these passages. The extracted data is then analyzed, for example to assess adherence to existing guidelines BIBREF1 . Figure FIGREF2 shows an example text excerpt with information relevant to a specific task (assessment of adherence to existing guidelines BIBREF1 ) highlighted. Extracting the data elements needed in these tasks is a time-consuming and at present a largely manual process which requires domain expertise. For example, in systematic review preparation, information extraction generally constitutes the most time consuming task BIBREF4 . This situation is made worse by the rapidly expanding body of potentially relevant literature with more than one million papers added into PubMed each year BIBREF5 . Therefore, data annotation and extraction presents an important challenge for automation. A typical approach to automated identification of relevant information in biomedical texts is to infer a prediction model from labeled training data – such a model can then be used to assign predicted labels to new data instances. However, obtaining training data for creating such prediction models can be very costly as it involves the step which these models are trying to automate – manual data extraction. Furthermore, depending on the task at hand, the types of information being extracted may vary significantly. 
For example, in systematic reviews of randomized controlled trials this information generally includes the patient group, the intervention being tested, the comparison, and the outcomes of the study (PICO elements) BIBREF4 . In toxicology research the extraction may focus on routes of exposure, dose, and necropsy timing BIBREF1 . Previous work has largely focused on identifying specific pieces of information such as biomedical events BIBREF6 or PICO elements BIBREF0 . However, depending on the domain and the end goal of the extraction, these may be insufficient to comprehensively describe a given study. Therefore, in this paper we focus on unsupervised methods for identifying text segments (such as sentences or fixed length sequences of words) relevant to the information being extracted. We develop a model that can be used to identify text segments from text documents without labeled data and that only requires the current document itself, rather than an entire training corpus linked to the target document. More specifically, we utilize representation learning methods BIBREF7 , where words or phrases are embedded into the same vector space. This allows us to compute semantic relatedness among text fragments, in particular sentences or text segments in a given document and a short description of the type of information being extracted from the document, by using similarity measures in the feature space. The model has the potential to speed up identification of relevant segments in text and therefore to expedite annotation of domain specific information without reliance on costly labeled data. We have developed and tested our approach on a reference database of rodent uterotropic bioassays BIBREF1 which are labeled according to their adherence to test guidelines set forth in BIBREF3 . 
Each study in the database is assigned a label determining whether or not it met each of six main criteria defined by the guidelines; however, the database does not contain sentence-level annotations or any information about where the criteria was mentioned in each publication. Due to the lack of fine-grained annotations, supervised learning methods cannot be easily applied to aid annotating new publications or to annotate related but distinct types of studies. This database therefore presents an ideal use-case for unsupervised approaches. While our approach doesn't require any labeled data to work, we use the labels available in the dataset to evaluate the approach. We train a binary classification model for identifying publications which satisfied given criteria and show the model performs better when trained on relevant sentences identified by our method than when trained on sentences randomly picked from the text. Furthermore, for three out of the six criteria, a model trained solely on the relevant sentences outperforms a model which utilizes full text. The results of our evaluation support the intuition that semantic relatedness to criteria descriptions can help in identifying text sequences discussing sought after information. There are two main contributions of this work. We present an unsupervised method that employs representation learning to identify text segments from publication full text which are relevant to/contain specific sought after information (such as number of dose groups). In addition, we explore a new dataset which hasn't been previously used in the field of information extraction. The remainder of this paper is organized as follows. In the following section we provide more details of the task and the dataset used in this study. In Section SECREF3 we describe our approach. In Section SECREF4 we evaluate our model and discuss our results. In Section SECREF5 we compare our work to existing approaches. 
Finally, in Section SECREF6 we provide ideas for further study. Therefore, in this paper we focus on unsupervised methods for identifying text segments (such as sentences or fixed length sequences of words) relevant to the information being extracted. We develop a model that can be used to identify text segments from text documents without labeled data and that only requires the current document itself, rather than an entire training corpus linked to the target document.
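The core idea in this passage, ranking a document's sentences by their semantic relatedness to a short description of the sought-after information, can be sketched with cosine similarity. The toy embeddings and helper names below are illustrative assumptions, not the authors' actual implementation; in practice the vectors would come from a representation-learning model.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy sentence embeddings (hypothetical values for illustration).
sentences = {
    "Animals were dosed once daily for three days.": [0.9, 0.1, 0.2],
    "The weather was sunny during the field trip.":  [0.1, 0.8, 0.1],
}
criterion = [0.8, 0.2, 0.3]  # embedding of a criterion description, e.g. "number of dose groups"

# Rank sentences by relatedness to the criterion description;
# the top-ranked segment is the candidate "relevant" text.
ranked = sorted(sentences, key=lambda s: cosine(sentences[s], criterion), reverse=True)
print(ranked[0])
```

Because the ranking needs only the current document and a criterion description, no labeled training corpus is required, which is the point of the unsupervised approach described above.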
What their model can be used to do?
To identify text segments from text documents without labeled data and that only requires the current document itself, rather than an entire training corpus linked to the target document.
null
false
null
Who is Lilith?
Lilith is a figure who appears in both Mesopotamian and Judaic mythology and is purported to be Adam's first wife. She was banished from the Garden of Eden for refusing to be subservient to Adam and obey him.
null
false
null
Which one of these instruments are brass instruments? trombone, viola, trumpet, tuba, flute, clarinet, saxophone and piano.
The instruments out that list that are brass instruments are: trombone, trumpet and tuba.
null
false
null
Olson served as a law clerk for Judge Barbara Jacobs Rothstein on the United States District Court for the Western District of Washington for two years immediately after finishing law school. She served as a trial attorney and later Deputy Director of the National Church Arson Task Force in the United States Department of Justice Civil Rights Division from 1992 to 1997. From 1994 to 1997, she also worked part-time as an adjunct professor at George Washington University Law School. In 1997, Olson joined the Office of the United States Attorney for the District of Idaho, rising to the rank of Senior Litigation Counsel at the time of her appointment in 2010. After Olson resigned in 2017, she joined the Boise office of Stoel Rives, a law firm that operates in the Pacific Northwest. In 2021, Olson was included on a shortlist of possible nominees to succeed Judge B. Lynn Winmill.
Give me a chronological bulleted list based on a timeline of events
* 1992 to 1997: Olson served as a trial attorney and later Deputy Director of the National Church Arson Task Force in the United States Department of Justice Civil Rights Division * 1994 to 1997: Olson worked part-time as an adjunct professor at George Washington University Law School * 1997: Olson joined the Office of the United States Attorney for the District of Idaho * 2010: Olson rose to the rank of Senior Litigation Counsel * 2017: Olson resigned * 2017: Olson joined the Boise office of Stoel Rives, a law firm that operates in the Pacific Northwest * 2021: Olson was included on a shortlist of possible nominees to succeed Judge B. Lynn Winmill
null
false
null
Germany, officially the Federal Republic of Germany, is a country in Central Europe. It is the second-most populous country in Europe after Russia, and the most populous member state of the European Union. Germany is situated between the Baltic and North seas to the north, and the Alps to the south; it covers an area of 357,022 square kilometres (137,847 sq mi), with a population of over 84 million within its 16 constituent states. Germany borders Denmark to the north, Poland and the Czech Republic to the east, Austria and Switzerland to the south, and France, Luxembourg, Belgium, and the Netherlands to the west. The nation's capital and most populous city is Berlin and its main financial centre is Frankfurt; the largest urban area is the Ruhr.
What is the population density of Germany?
The population density of Germany is over 235 people per square kilometer, or over 600 people per square mile
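The density figures in this answer follow from the passage's numbers (84 million people, 357,022 square kilometres); a quick calculation confirms them:

```python
# Verify the quoted population density using the figures from the passage.
population = 84_000_000
area_km2 = 357_022

density_km2 = population / area_km2   # people per square kilometre
density_mi2 = density_km2 * 2.589988  # 1 square mile = 2.589988 km^2

print(round(density_km2))  # 235 people per km^2
print(round(density_mi2))  # 609 people per square mile
```

Both rounded values match the "over 235 per square kilometer" and "over 600 per square mile" claims.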
null
false
null
Which tastes better, red or green grapes?
Taste. The infamous grocery store dispute often comes down to taste, and the superior grape is a matter of personal preference. Red grapes are often considered sweeter, whereas green grapes are more sour.
null
false
null
What is the difference between a bath and a shower?
While both a bath and a shower are common ways to get clean, they involve different processes. A bath is when someone fills a tub and immerses themselves in it, whereas a shower is when someone stands under a steady stream of water in order to get clean.
null
false
null
Identify which instrument is string or percussion: Sheker, Taishogoto
Taishogoto is string, Sheker is percussion.
null
false
null
What did the note in Hermione's hand say when she was found petrified in Harry Potter and the Chamber of Secrets?
The note simply said the word, "pipes." Harry Potter and his friends, Hermione Granger and Ron Weasley, were attempting to discover the truth of the legendary Chamber of Secrets, and the monster guarding it. Hermione had realized that the monster must be a basilisk, a rare and very dangerous snake-like beast that had been using the pipes to travel about Hogwarts.
null
false
null
What were the origins of basketball?
James Naismith, a gym instructor in New England, invented basketball in 1891 as an indoor winter activity to keep his students active. The original basketball game involved throwing a ball into a peach basket. Since then, the game has progressed to become one of the most popular sports in the world.
null
false
null
How do you play Baseball?
Baseball is a game between two teams involving a baseball, which is a ball that is about 3 inches in diameter, and a bat used to hit the ball. Two teams alternate turns pitching and hitting the ball. The hitting team attempts to score as many runs as possible by advancing through the four bases, prior to their team getting three outs during their turn hitting. The pitching team's objective is to prevent the hitting team from scoring runs. Games traditionally last 9 innings, with each team having the opportunity to both hit and pitch 9 times. The team with the most runs after 9 innings wins the game.
null
false
null
Which film star was the first to appear on a postage stamp
Grace Kelly
null
false
null
Humans first learned to consume the milk of other mammals regularly following the domestication of animals during the Neolithic Revolution or the development of agriculture. This development occurred independently in several global locations from as early as 9000–7000 BC in Mesopotamia to 3500–3000 BC in the Americas. People first domesticated the most important dairy animals – cattle, sheep and goats – in Southwest Asia, although domestic cattle had been independently derived from wild aurochs populations several times since. Initially animals were kept for meat, and archaeologist Andrew Sherratt has suggested that dairying, along with the exploitation of domestic animals for hair and labor, began much later in a separate secondary products revolution in the fourth millennium BC. Sherratt's model is not supported by recent findings, based on the analysis of lipid residue in prehistoric pottery, that shows that dairying was practiced in the early phases of agriculture in Southwest Asia, by at least the seventh millennium BC. From Southwest Asia domestic dairy animals spread to Europe (beginning around 7000 BC but did not reach Britain and Scandinavia until after 4000 BC), and South Asia (7000–5500 BC). The first farmers in central Europe and Britain milked their animals. Pastoral and pastoral nomadic economies, which rely predominantly or exclusively on domestic animals and their products rather than crop farming, were developed as European farmers moved into the Pontic–Caspian steppe in the fourth millennium BC, and subsequently spread across much of the Eurasian steppe. Sheep and goats were introduced to Africa from Southwest Asia, but African cattle may have been independently domesticated around 7000–6000 BC.
Given this paragraph about the domestication of mammals, tell me the earliest known time period in which dairying occurred and also give me the location.
According to the passage, the earliest known dairying was practiced by at least the seventh millennium BC, in Southwest Asia.
1804.08139
false
null
Table TABREF34 shows the performances of the different methods. From the table, we can see that the performances of most tasks can be improved with the help of multi-task learning. FS-MTL shows the minimum performance gain from multi-task learning since it puts all private and shared information into a unified space. SSP-MTL and PSP-MTL achieve similar performance and are outperformed by ASP-MTL which can better separate the task-specific and task-invariant features by using adversarial training. Our proposed models (SA-MTL and DA-MTL) outperform ASP-MTL because we model a richer representation from these 16 tasks. Compared to SA-MTL, DA-MTL achieves a further improvement of INLINEFORM0 accuracy with the help of the dynamic and flexible query vector. It is noteworthy that our models are also space efficient since the task-specific information is extracted by using only a query vector, instead of a BiLSTM layer in the shared-private models. FLOAT SELECTED: Table 2: Performances on 16 tasks. The column of “Single Task” includes bidirectional LSTM (BiLSTM), bidirectional LSTM with attention (att-BiLSTM) and the average accuracy of the two models. The column of “Multiple Tasks” shows several multi-task models. * is from [Liu et al., 2017] . Table TABREF34 shows the performances of the different methods. FLOAT SELECTED: Table 2: Performances on 16 tasks. The column of “Single Task” includes bidirectional LSTM (BiLSTM), bidirectional LSTM with attention (att-BiLSTM) and the average accuracy of the two models. The column of “Multiple Tasks” shows several multi-task models. * is from [Liu et al., 2017] .
What evaluation metrics are used?
Accuracy on each dataset and the average accuracy on all datasets.
null
false
null
North America is a continent in the Northern Hemisphere and almost entirely within the Western Hemisphere.
what is north america
North America is a continent in the Northern Hemisphere and almost entirely within the Western Hemisphere. It is bordered to the north by the Arctic Ocean, to the east by the Atlantic Ocean, to the southeast by South America and the Caribbean Sea, and to the west and south by the Pacific Ocean. Because it is on the North American Tectonic Plate, Greenland is included as a part of North America geographically.
null
false
null
Ralph Agas (or Radulph Agas) (c. 1540 – 26 November 1621) was an English land surveyor and cartographer. He was born at Stoke-by-Nayland, Suffolk, in about 1540, and lived there throughout his life, although he travelled regularly to London. He began to practise as a surveyor in about 1566, and has been described as "one of the leaders of the emerging body of skilled land surveyors". Agas is particularly known for his large-scale town map of Oxford (surveyed 1578, published 1588). Early maps of London and Cambridge were also formerly attributed to him, but these attributions are no longer upheld.
What did Ralph Agas do as a profession?
Ralph Agas was an English land surveyor and cartographer; he was born at Stoke-by-Nayland, Suffolk.
null
false
null
During the height of the American Revolution, in the summer of 1780, British sympathizers (known as Tories) began attacking the outposts of American revolutionaries located along the Susquehanna River in the Wyoming Valley. Because of reports of Tory activity in the region, Captain Daniel Klader and a platoon of 41 men from Northampton County were sent to investigate. They traveled north from the Lehigh Valley along a path known as "Warrior's Trail" (which is present-day Pennsylvania Route 93). This route connects the Lehigh River in Jim Thorpe (formerly known as Mauch Chunk) to the Susquehanna River in Berwick. Captain Klader's men made it as far north as present-day Conyngham, when they were ambushed by Tory militiamen and members of the Seneca tribe. In all, 15 men were killed on September 11, 1780, in what is now known as the Sugarloaf Massacre.
What happened at the height of the American revolution?
In the summer of 1780, Tories, also known as British sympathizers, started to attack American revolutionary outposts along the Susquehanna River in the Wyoming Valley. Captain Daniel Klader and a platoon of 41 men were sent to investigate what was going on, and they ended up getting ambushed by the British sympathizers and members of the Seneca tribe. 15 men of the platoon were killed in 1780, and this is what is known today as the Sugarloaf Massacre.
null
false
null
Endpoints are important aspects of interacting with server-side web APIs, as they specify where resources lie that can be accessed by third party software. Usually the access is via a URI to which HTTP requests are posted, and from which the response is thus expected. Web APIs may be public or private, the latter of which requires an access token. Endpoints need to be static, otherwise the correct functioning of software that interacts with it cannot be guaranteed. If the location of a resource changes (and with it the endpoint) then previously written software will break, as the required resource can no longer be found at the same place. As API providers still want to update their web APIs, many have introduced a versioning system in the URI that points to an endpoint.
What is an API endpoint?
Endpoints specify where resources lie that can be accessed by third party software. Clients use endpoints to interact with server-side web APIs.
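The versioning idea from the passage, embedding a version in the URI so old endpoints stay static while new ones are added, can be sketched as follows. The base URL and resource names here are made up for illustration:

```python
def build_endpoint(base_url, version, resource):
    """Construct a versioned endpoint URI, e.g. https://api.example.com/v2/users."""
    return f"{base_url.rstrip('/')}/v{version}/{resource.lstrip('/')}"

# Because the version is part of the endpoint, clients written against v1
# keep working even after the provider publishes v2 at a new location.
old = build_endpoint("https://api.example.com", 1, "users")
new = build_endpoint("https://api.example.com", 2, "users")
print(old)  # https://api.example.com/v1/users
print(new)  # https://api.example.com/v2/users
```

This is why breaking changes are shipped as a new versioned path rather than by moving an existing endpoint: previously written software never sees its resource disappear.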
null
false
null
What is structural biology?
Structural biology is a field that is many centuries old which, as defined by the Journal of Structural Biology, deals with structural analysis of living material (formed, composed of, and/or maintained and refined by living cells) at every level of organization. Early structural biologists throughout the 19th and early 20th centuries were primarily only able to study structures to the limit of the naked eye's visual acuity and through magnifying glasses and light microscopes. In the 20th century, a variety of experimental techniques were developed to examine the 3D structures of biological molecules. The most prominent techniques are X-ray crystallography, nuclear magnetic resonance, and electron microscopy. Through the discovery of X-rays and its applications to protein crystals, structural biology was revolutionized, as now scientists could obtain the three-dimensional structures of biological molecules in atomic detail. Likewise, NMR spectroscopy allowed information about protein structure and dynamics to be obtained. Finally, in the 21st century, electron microscopy also saw a drastic revolution with the development of more coherent electron sources, aberration correction for electron microscopes, and reconstruction software that enabled the successful implementation of high resolution cryo-electron microscopy, thereby permitting the study of individual proteins and molecular complexes in three-dimensions at angstrom resolution. With the development of these three techniques, the field of structural biology expanded and also became a branch of molecular biology, biochemistry, and biophysics concerned with the molecular structure of biological macromolecules (especially proteins, made up of amino acids, RNA or DNA, made up of nucleotides, and membranes, made up of lipids), how they acquire the structures they have, and how alterations in their structures affect their function. 
This subject is of great interest to biologists because macromolecules carry out most of the functions of cells, and it is only by coiling into specific three-dimensional shapes that they are able to perform these functions. This architecture, the "tertiary structure" of molecules, depends in a complicated way on each molecule's basic composition, or "primary structure." At lower resolutions, tools such as FIB-SEM tomography have allowed for greater understanding of cells and their organelles in 3-dimensions, and how each hierarchical level of various extracellular matrices contributes to function (for example in bone). In the past few years it has also become possible to predict highly accurate physical molecular models to complement the experimental study of biological structures. Computational techniques such as Molecular Dynamics simulations can be used in conjunction with empirical structure determination strategies to extend and study protein structure, conformation and function.
null
false
null
What are dividends?
A dividend is paid as a percentage on the price of an individual stock. It can be paid out quarterly, monthly, or yearly. It's a payment or distribution of earning from a company paid to shareholders and it is approved by the company's board of directors. It can be paid out as a cash payment or in company stock.
null
false
null
Which of these fruits, apple, strawberry, tomato, orange, cherry, grape; would be a poor choice to end a line in a song lyric with?
While all the items listed are indeed fruits, the worst fruit to use to end a line in a song lyric is the "orange" because it has no rhymes, unless you're the exceptional Eminem, who can rhyme any word, including orange.
2004.01694
false
null
We use English translations of the Chinese source texts in the WMT 2017 English–Chinese test set BIBREF18 for all experiments presented in this article: We use English translations of the Chinese source texts in the WMT 2017 English–Chinese test set BIBREF18 for all experiments presented in this article:
What languages do they investigate for machine translation?
The answers are shown as follows: * English * Chinese
null
false
517
To show that STU-KD is applicable to different network architectures, two additional experiments are conducted. In the first experiment, Proxyless-Mobile (Cai et al., 2019) is selected as the compact model, and ResNet-50 (He et al., 2016) is selected as the large source model. The results are shown in Table 6. In the second group, ResNet-34 is selected as the compact model, and the ResNet-101 is selected as the large source model. The results are shown in Table 7.
In knowledge distillation or model compression, analysis on different architectures is required. However, in this submission, the large network is fixed as ResNet-50 and the compact network is fixed as ResNet-18. What's the reason?
We will conduct two additional groups of experiments for the analysis on different network architectures. In the first group, ResNet-101 is selected as the large model, and ResNet-34 is selected as the compact model; in the second group, ResNet-50 is selected as the large model, and Proxyless-Mobile [2] is selected as the compact model.
null
false
null
Is the saying "An Apple a Day Keeps the Doctor Away" factually correct?
The above saying is factually incorrect: an apple by itself, although healthy, is not sufficient to keep the doctor away on its own. That said, it is highly recommended to incorporate at least one cup of fruit into your daily meals.
null
false
null
Identify which car manufacturer is German or American: Opel, GMC
Opel is German, GMC is American
null
false
103
Despite the rapid progress of deep learning techniques on diverse supervised learning tasks, these models remain brittle to subtle shifts in the data distribution. Even when the permissible changes are confined to barely-perceptible perturbations, training robust models remains an open challenge. Following the discovery that imperceptible attacks could cause image recognition models to misclassify examples BIBREF0 , a veritable sub-field has emerged in which authors iteratively propose attacks and countermeasures. For all the interest in adversarial computer vision, these attacks are rarely encountered outside of academic research. However, adversarial misspellings constitute a longstanding real-world problem. Spammers continually bombard email servers, subtly misspelling words in efforts to evade spam detection while preserving the emails' intended meaning BIBREF1 , BIBREF2 . As another example, programmatic censorship on the Internet has spurred communities to adopt similar methods to communicate surreptitiously BIBREF3 . In this paper, we focus on adversarially-chosen spelling mistakes in the context of text classification, addressing the following attack types: dropping, adding, and swapping internal characters within words. These perturbations are inspired by psycholinguistic studies BIBREF4 , BIBREF5 which demonstrated that humans can comprehend text altered by jumbling internal characters, provided that the first and last characters of each word remain unperturbed. First, in experiments addressing both BiLSTM and fine-tuned BERT models, comprising four different input formats: word-only, char-only, word+char, and word-piece BIBREF6 , we demonstrate that an adversary can degrade a classifier's performance to that achieved by random guessing. This requires altering just two characters per sentence. Such modifications might flip words either to a different word in the vocabulary or, more often, to the out-of-vocabulary token UNK. 
Consequently, adversarial edits can degrade a word-level model by transforming the informative words to UNK. Intuitively, one might suspect that word-piece and character-level models would be less susceptible to spelling attacks as they can make use of the residual word context. However, our experiments demonstrate that character and word-piece models are in fact more vulnerable. We show that this is due to the adversary's effective capacity for finer-grained manipulations on these models. While against a word-level model, the adversary is mostly limited to UNK-ing words, against a word-piece or character-level model, each character-level add, drop, or swap produces a distinct input, providing the adversary with a greater set of options. Second, we evaluate first-line techniques including data augmentation and adversarial training, demonstrating that they offer only marginal benefits here, e.g., a BERT model achieving $90.3$ accuracy on a sentiment classification task is degraded to $64.1$ by an adversarially-chosen 1-character swap in the sentence, which can only be restored to $69.2$ by adversarial training. Third (our primary contribution), we propose a task-agnostic defense, attaching a word recognition model that predicts each word in a sentence given a full sequence of (possibly misspelled) inputs. The word recognition model's outputs form the input to a downstream classification model. Our word recognition models build upon the RNN-based semi-character word recognition model due to BIBREF7. While our word recognizers are trained on domain-specific text from the task at hand, they often predict UNK at test time, owing to the small domain-specific vocabulary. To handle unobserved and rare words, we propose several backoff strategies including falling back on a generic word recognizer trained on a larger corpus. 
Incorporating our defenses, BERT models subject to 1-character attacks are restored to $88.3$, $81.1$, $78.0$ accuracy for swap, drop, add attacks respectively, as compared to $69.2$, $63.6$, and $50.0$ for adversarial training. Fourth, we offer a detailed qualitative analysis, demonstrating that a low word error rate alone is insufficient for a word recognizer to confer robustness on the downstream task. Additionally, we find that it is important that the recognition model supply few degrees of freedom to an attacker. We provide a metric to quantify this notion of sensitivity in word recognition models and study its relation to robustness empirically. Models with low sensitivity and word error rate are most robust. Fourth, we offer a detailed qualitative analysis, demonstrating that a low word error rate alone is insufficient for a word recognizer to confer robustness on the downstream task.
Whether low word error rate alone is sufficient for a word recognizer to confer robustness on the downstream task?
No.
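The three attack types described in the evidence above (swapping, dropping, and adding internal characters, keeping the first and last characters of each word intact) can be sketched as simple string operations. This is an illustrative toy version only, not the paper's adversarial search procedure, and the function names are invented for this example.

```python
import random

def swap_attack(word, rng):
    # Swap two adjacent internal characters; needs length >= 4 so the
    # first and last characters stay unperturbed.
    if len(word) < 4:
        return word
    i = rng.randrange(1, len(word) - 2)
    chars = list(word)
    chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

def drop_attack(word, rng):
    # Drop one internal character.
    if len(word) < 3:
        return word
    i = rng.randrange(1, len(word) - 1)
    return word[:i] + word[i + 1:]

def add_attack(word, rng):
    # Insert a random lowercase letter at an internal position.
    if len(word) < 2:
        return word
    i = rng.randrange(1, len(word))
    return word[:i] + rng.choice("abcdefghijklmnopqrstuvwxyz") + word[i:]

rng = random.Random(0)
print(swap_attack("classifier", rng))
```

For a word-level model, any of these perturbations can push an informative word out of the vocabulary, replacing it with UNK at the input.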
null
false
null
Best place to visit for a day from Bangalore
1. Mysore 2. Coorg 3. Ooty
null
false
50
We use stochastic gradient descent with Adagrad BIBREF10 to fit the embedding and context vectors. Following BIBREF1, we reduce the computational complexity by splitting the gradient into two terms. The first term contains the non-zero entries (INLINEFORM0); the second term contains the zero entries (INLINEFORM1). We compute the exact gradient for the non-zero points; we subsample for the zero data points. This is similar to negative sampling BIBREF5, which also down-weights the contributions of the zero points. Unlike BIBREF1, which uses INLINEFORM2 regularization to protect against overfitting when fitting the embedding vectors, we use early stopping based on validation accuracy, for the same effect. We use stochastic gradient descent with Adagrad (Duchi et al., 2011) to fit the embedding and context vectors.
What is used by them to fit the embedding and context vectors?
Stochastic gradient descent with Adagrad (Duchi et al., 2011).
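A minimal sketch of the Adagrad update rule mentioned above: each coordinate gets its own learning rate that shrinks as squared gradients accumulate. This is a generic illustration on a toy objective, not the authors' embedding-fitting code; the learning rate and epsilon are hypothetical choices.

```python
import numpy as np

def adagrad_step(param, grad, accum, lr=0.1, eps=1e-8):
    """One Adagrad update (Duchi et al., 2011): divide each gradient
    coordinate by the root of its accumulated squared gradients."""
    accum += grad ** 2
    param -= lr * grad / (np.sqrt(accum) + eps)
    return param, accum

# Toy example: minimise f(x) = x^2 starting from x = 5.
x = np.array([5.0])
accum = np.zeros_like(x)
for _ in range(500):
    grad = 2 * x  # gradient of x^2
    x, accum = adagrad_step(x, grad, accum)
```

Because the accumulated squared gradients only grow, the effective step size decays over time, which is why early stopping (as in the evidence above) is a natural complement.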
null
false
28
The task of generating natural language descriptions of structured data (such as tables) BIBREF2 , BIBREF3 , BIBREF4 has seen a growth in interest with the rise of sequence to sequence models that provide an easy way of encoding tables and generating text from them BIBREF0 , BIBREF1 , BIBREF5 , BIBREF6 . For text generation tasks, the only gold standard metric is to show the output to humans for judging its quality, but this is too expensive to apply repeatedly anytime small modifications are made to a system. Hence, automatic metrics that compare the generated text to one or more reference texts are routinely used to compare models BIBREF7 . For table-to-text generation, automatic evaluation has largely relied on BLEU BIBREF8 and ROUGE BIBREF9 . The underlying assumption behind these metrics is that the reference text is gold-standard, i.e., it is the ideal target text that a system should generate. In practice, however, when datasets are collected automatically and heuristically, the reference texts are often not ideal. Figure FIGREF2 shows an example from the WikiBio dataset BIBREF0 . Here the reference contains extra information which no system can be expected to produce given only the associated table. We call such reference texts divergent from the table. We show that existing automatic metrics, including BLEU, correlate poorly with human judgments when the evaluation sets contain divergent references (§ SECREF36 ). For many table-to-text generation tasks, the tables themselves are in a pseudo-natural language format (e.g., WikiBio, WebNLG BIBREF6 , and E2E-NLG BIBREF10 ). In such cases we propose to compare the generated text to the underlying table as well to improve evaluation. We develop a new metric, PARENT (Precision And Recall of Entailed N-grams from the Table) (§ SECREF3 ). When computing precision, PARENT effectively uses a union of the reference and the table, to reward correct information missing from the reference. 
When computing recall, it uses an intersection of the reference and the table, to ignore extra incorrect information in the reference. The union and intersection are computed with the help of an entailment model to decide if a text n-gram is entailed by the table. We show that this method is more effective than using the table as an additional reference. Our main contributions are: The task of generating natural language descriptions of structured data (such as tables) (Kukich, 1983; McKeown, 1985; Reiter and Dale, 1997) has seen a growth in interest with the rise of sequence to sequence models that provide an easy way of encoding tables and generating text from them (Lebret et al., 2016; Wiseman et al., 2017; Novikova et al., 2017b; Gardent et al., 2017).
What kind of support does sequence to sequence models provide for the growth of the task of generating natural language descriptions of structured data?
Sequence to sequence models provide an easy way of encoding tables and generating text from them.
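The PARENT precision idea described in the evidence above — crediting n-grams that appear in the reference or are entailed by the table — can be illustrated with a toy sketch. Here entailment is crudely approximated by lexical overlap with table values; that approximation, and the function names, are assumptions of this example, not the paper's entailment model.

```python
def ngrams(tokens, n):
    """All contiguous n-grams of a token list, as tuples."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def parent_style_precision(prediction, reference, table_values, n=1):
    """Toy precision: an n-gram is credited if it occurs in the
    reference OR consists only of tokens found among the table values
    (a crude stand-in for the paper's entailment model)."""
    pred = ngrams(prediction, n)
    if not pred:
        return 0.0
    ref = set(ngrams(reference, n))
    table = set(table_values)
    hits = sum(1 for g in pred
               if g in ref or all(tok in table for tok in g))
    return hits / len(pred)
```

A prediction that correctly states a birth year present in the table but missing from a divergent reference still gets credit here, whereas a reference-only metric like BLEU would penalize it.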
null
false
332
As language processing tools become more prevalent, concern has grown over their susceptibility to social biases and their potential to propagate bias BIBREF1, BIBREF2. Natural language training data inevitably reflects biases present in our society. For example, gender bias manifests itself in training data which features more examples of men than of women. Tools trained on such data will then exhibit or even amplify the biases BIBREF3. Gender bias is a particularly important problem for Neural Machine Translation (NMT) into gender-inflected languages. An over-prevalence of some gendered forms in the training data leads to translations with identifiable errors BIBREF0. Translations are better for sentences involving men and for sentences containing stereotypical gender roles. For example, mentions of male doctors are more reliably translated than those of male nurses BIBREF2, BIBREF4. Recent approaches to the bias problem in NLP have involved training from scratch on artificially gender-balanced versions of the original dataset BIBREF5, BIBREF6 or with de-biased embeddings BIBREF7, BIBREF8. While these approaches may be effective, training from scratch is inefficient and gender-balancing embeddings or large parallel datasets are challenging problems BIBREF9. Instead we propose treating gender debiasing as a domain adaptation problem, since NMT models can very quickly adapt to a new domain BIBREF10. To the best of our knowledge this work is the first to attempt NMT bias reduction by fine-tuning, rather than retraining. We consider three aspects of this adaptation problem: creating less biased adaptation data, parameter adaptation using this data, and inference with the debiased models produced by adaptation. Regarding data, we suggest that a small, trusted gender-balanced set could allow more efficient and effective gender debiasing than a larger, noisier set. To explore this we create a tiny, handcrafted profession-based dataset for transfer learning. 
For contrast, we also consider fine-tuning on a counterfactual subset of the full dataset and propose a straightforward scheme for artificially gender-balancing parallel text for NMT. We find that during domain adaptation improvement on the gender-debiased domain comes at the expense of translation quality due to catastrophic forgetting BIBREF11. We can balance improvement and forgetting with a regularised training procedure, Elastic Weight Consolidation (EWC), or in inference by a two-step lattice rescoring procedure. We experiment with three language pairs, assessing the impact of debiasing on general domain BLEU and on the WinoMT challenge set BIBREF0. We find that continued training on the handcrafted set gives far stronger and more consistent improvements in gender-debiasing with orders of magnitude less training time, although as expected general translation performance as measured by BLEU decreases. We further show that regularised adaptation with EWC can reduce bias while limiting degradation in general translation quality. We also present a lattice rescoring procedure in which initial hypotheses produced by the biased baseline system are transduced to create gender-inflected search spaces which can be rescored by the adapted model. We believe this approach, rescoring with models targeted to remove bias, is novel in NMT. The rescoring procedure improves WinoMT accuracy by up to 30% with no decrease in BLEU on the general test set. Recent recommendations for ethics in Artificial Intelligence have suggested that social biases or imbalances in a dataset be addressed prior to model training BIBREF12. This recommendation presupposes that the source of bias in a dataset is both obvious and easily adjusted. We show that debiasing a full NMT dataset is difficult, and suggest alternative efficient and effective approaches for debiasing a model after it is trained. 
This avoids the need to identify and remove all possible biases prior to training, and has the added benefit of preserving privacy, since no access to the original data or knowledge of its contents is required. As evidence, in section SECREF43, we show this scheme can be applied to remove gender bias in the output of ‘black box’ online commercial MT systems. We experiment with three language pairs, assessing the impact of debiasing on general domain BLEU and on the WinoMT challenge set.
What is assessed by the authors?
The authors experiment with three language pairs, assessing the impact of debiasing on general domain BLEU and on the WinoMT challenge set.
null
false
null
Google Sheets is a spreadsheet program included as part of the free, web-based Google Docs Editors suite offered by Google. Google Sheets is available as a web application, mobile app for: Android, iOS, Microsoft Windows, BlackBerry OS and as a desktop application on Google's ChromeOS. The app is compatible with Microsoft Excel file formats. The app allows users to create and edit files online while collaborating with other users in real-time. Edits are tracked by a user with a revision history presenting changes. An editor's position is highlighted with an editor-specific color and cursor and a permissions system regulates what users can do. Updates have introduced features using machine learning, including "Explore", offering answers based on natural language questions in a spreadsheet. This is one of the services provided by Google that also includes Google Docs, Google Slides, Google Drawings, Google Forms, Google Sites and Google Keep.
What is Google Sheets and how does it compatible with Microsoft Excel?
Google Sheets is a spreadsheet program included as part of the free, web-based Google Docs Editors suite offered by Google and is compatible with Microsoft Excel file formats (e.g., .XLS, .XLSX, .CSV).
null
false
153
Visual dialog BIBREF0 is an interesting new task combining the research efforts from Computer Vision, Natural Language Processing and Information Retrieval. While BIBREF1 presents some tips and tricks for the VQA 2.0 Challenge, we follow their guidelines for the Visual Dialog challenge 2018. Our models use attention similar to BIBREF2 to get object-level image representations from a Faster R-CNN model BIBREF3. We experiment with different encoder mechanisms to get representations of conversational history. Common to all the models, we initialize our embedding matrix with pre-trained Glove word vectors of 300 dimensions using 6B tokens.
How do the authors initialize their embedding matrix?
Initialize embedding matrix with pre-trained Glove word vectors of 300 dimensions using 6B tokens.
null
false
null
What are Flos
Flos is a genus of butterflies in the family Lycaenidae. The species of this genus are commonly known as plushblues and are found in the Indomalayan realm.
null
false
null
How can we cool the planet?
One way to cool the planet is to reduce the amount of greenhouse gases in the atmosphere. This can be done by reducing the use of fossil fuels, increasing the use of renewable energy sources, and reducing waste. Additionally, we can also reduce our reliance on air conditioning and other energy-intensive activities.
null
false
170
It is well-established that gender bias exists in language – for example, we see evidence of this given the prevalence of sexism in abusive language datasets BIBREF0, BIBREF1. However, these are extreme cases of gender norms in language, and only encompass a small proportion of speakers or texts. Less studied in NLP is how gender norms manifest in everyday language – do people talk about women and men in different ways? These types of differences are far subtler than abusive language, but they can provide valuable insight into the roots of more extreme acts of discrimination. Subtle differences are difficult to observe because each case on its own could be attributed to circumstance, a passing comment or an accidental word. However, at the level of hundreds of thousands of data points, these patterns, if they do exist, become undeniable. Thus, in this work, we introduce new datasets and methods so that we can study subtle gender associations in language at the large-scale. Our contributions include: Two datasets for studying language and gender, each consisting of over 300K sentences. Methods to infer gender-associated words and labeled clusters in any domain. Novel findings that demonstrate in both domains that people do talk about women and men in different ways. Each contribution brings us closer to modeling how gender associations appear in everyday language. In the remainder of the paper, we present related work, our data collection, methods and findings, and human evaluations of our system. Thus, in this work, we introduce new datasets and methods so that we can study subtle gender associations in language at the large-scale.
What do the new datasets introduced by authors use for?
The new datasets are used for studying subtle gender associations in language at a large scale.
null
false
null
Give me some ideas on how to grow tomatoes in my back yard.
1. Plant tomatoes directly in the ground with a support system like a trellis 2. Plant tomatoes in a raised bed with support cages around each plant 3. Place tomatoes in individual pots, use the string method to support tomatoes
null
false
null
What are the primary characteristics of pluralism (also known as interest group liberalism or polyarchy)? What are the main criticisms of this theory?
Robert Dahl argues that interest group liberalism describes how power operates in the United States. This is a system of non-cumulative inequalities, manifested by multiple groups that each have strengths and weaknesses and fight for political power. Groups form to represent constituencies; leaders are elected and engage in compromise decision-making within their group and with leaders of other groups. Groups with opposing interests prevent the accumulation of power by any one group. Critics of this theory argue, first, that Dahl only talked about policy decisions that were openly debated: there is a private or hidden power struggle where it is decided whether or not to consider policy at all. Second, there are cumulative inequalities, where those with a low level of resources are not able to mobilize and represent their own interests. Lastly, a group's leadership does not always represent the rank-and-file membership of the group as a whole.
null
false
null
What are 5 easy dinner ideas that kids will like?
Macaroni and cheese, chicken nuggets, ramen noodles, chicken and potato soup, grilled cheese sandwiches
null
false
18
Inspired by multi-source NMT, in which additional parallel data in several languages are expected to benefit single translations, we aim to develop an NMT-based approach toward a universal framework for multilingual translation. Our solution features two treatments: 1) coding the words in different languages as different words in language-mixed vocabularies and 2) forcing the NMT system to translate a representation of the source sentences into sentences in a desired target language. Language-specific Coding. When the encoder of an NMT system considers words across languages as different words, with a well-chosen architecture, it is expected to learn a good representation of the source words in an embedding space in which words carrying similar meaning have a closer distance to each other than those that are semantically different. This should hold true when the words have the same or similar surface form, such as (@de@Obama; @en@Obama) or (@de@Projektion; @en@projection). This should also hold true when the words have the same or similar meaning across languages, such as (@en@car; @en@automobile) or (@de@Flussufer; @en@bank). Our encoder then acts similarly to that of the multi-source approach BIBREF10, collecting additional information from other sources for better translations, but with a much simpler embedding function. Unlike them, we need only one encoder, so we can reduce the number of parameters to learn. Furthermore, we neither need to change the network architecture nor depend on which recurrent unit (GRU, LSTM or simple RNN) is currently used in the encoder. We can apply the same trick to the target sentences and thus enable the many-to-many translation capability of our NMT system. Similar to multi-target translation BIBREF8, we further exploit the correlation in semantics of the target sentences across different languages. 
The main difference between our approach and the work of BIBREF8 is that we need only one decoder for all target languages. Given one encoder for multiple source languages and one decoder for multiple target languages, it is trivial to incorporate the attention mechanism as in a regular NMT system for single-language translation. In training, the attention layers are directed to learn relevant alignments between words in a specific language pair and forward the produced context vector to the decoder. We now rely entirely on the network to learn good alignments between the source and target sides. In fact, given more information, our system is able to form good alignments. In comparison to other research that performs complete multi-task learning, e.g. the work of BIBREF9 or the approach proposed by BIBREF11, our method is able to accommodate the attention layers seamlessly and easily. It also draws a clear distinction from those works in terms of the complexity of the whole network: considerably fewer parameters to learn, thus reducing overfitting, with a conventional attention mechanism and a standard training procedure. Target Forcing. While language-specific coding allows us to implement multilingual attention-based NMT, there are two issues we have to consider before training the network. The first is that the number of rare words increases in proportion to the number of languages involved. This might be solved by applying a rare-word treatment method with appropriate awareness of the vocabularies' size. The second is more problematic: the level of ambiguity in the translation process increases due to the additional introduction of words having the same or similar meaning across languages on both the source and target sides. 
We deal with the problem by explicitly forcing the attention and translation in the direction that we prefer, expecting this information to limit the ambiguity to the scope of one language instead of all target languages. We realize this idea by adding, at the beginning and at the end of every source sentence, a special symbol indicating the language it should be translated into. For example, in a multilingual NMT system, when a source sentence is German and the target language is English, the original sentence (already language-specific coded) is: @de@darum @de@geht @de@es @de@in @de@meinem @de@Vortrag Now when we force it to be translated into English, the target-forced sentence becomes: <E> @de@darum @de@geht @de@es @de@in @de@meinem @de@Vortrag <E> Due to the nature of the recurrent units used in the encoder and decoder, in training, those starting symbols encourage the network to learn the translation of the following target words in a particular language pair. At test time, the target-language information we provide helps to limit the translation candidates, hence forming the translation in the desired language. Figure FIGREF8 illustrates the essence of our approach. With two steps in the preprocessing phase, namely language-specific coding and target forcing, we are able to employ multilingual attention-based NMT without any special treatment in training such a standard architecture. Our encoder and attention-enabled decoder can be seen as a shared encoder and decoder across languages, or a universal encoder and decoder. The flexibility of our approach allows us to integrate any language into the source or target side. As we will see in Section SECREF4, it has proven to be extremely helpful not only in low-resource scenarios but also in the translation of well-resourced language pairs, as it provides a novel way to make use of large monolingual corpora in NMT. Figure 1 illustrates the essence of our approach. 
With two steps in the preprocessing phase, namely language-specific coding and target forcing, we are able to employ multilingual attention-based NMT without any special treatment in training such a standard architecture. Our encoder and attention-enabled decoder can be seen as a shared encoder and decoder across languages, or a universal encoder and decoder. The flexibility of our approach allows us to integrate any language into the source or target side.
What is the essence of their approach?
Employ multilingual attention-based NMT, using language-specific coding and target forcing, without any special treatment in training such a standard architecture. Because of this flexibility, it can integrate any language into the source or target side.
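The two preprocessing steps described in the evidence above can be sketched as a small string transformation. The helper below is illustrative; the rule mapping a target language code to its forcing symbol (e.g. "en" to <E>) is an assumption for this example, matching the paper's single worked example rather than a stated convention.

```python
def code_and_force(tokens, src_lang, tgt_lang):
    """Language-specific coding plus target forcing: prefix every
    source token with its language tag, then wrap the sentence in
    the target-language symbol at both ends."""
    coded = [f"@{src_lang}@{tok}" for tok in tokens]
    force = f"<{tgt_lang[0].upper()}>"  # assumed rule: "en" -> <E>
    return [force] + coded + [force]

sent = "darum geht es in meinem Vortrag".split()
print(" ".join(code_and_force(sent, "de", "en")))
# prints: <E> @de@darum @de@geht @de@es @de@in @de@meinem @de@Vortrag <E>
```

Because the transformation only touches the text, the NMT architecture itself needs no modification, which is the point made in the evidence.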
null
false
252
In recent years, the proliferation of fake news with various content, high-speed spreading, and extensive influence has become an increasingly alarming issue. A concrete instance was cited by Time Magazine in 2013 when a false announcement of Barack Obama's injury in a White House explosion “wiped off 130 Billion US Dollars in stock value in a matter of seconds". Other examples, an analysis of the US Presidential Election in 2016 BIBREF0 revealed that fake news was widely shared during the three months prior to the election with 30 million total Facebook shares of 115 known pro-Trump fake stories and 7.6 million of 41 known pro-Clinton fake stories. Therefore, automatically detecting fake news has attracted significant research attention in both industries and academia. Most existing methods devise deep neural networks to capture credibility features for fake news detection. Some methods provide in-depth analysis of text features, e.g., linguistic BIBREF1, semantic BIBREF2, emotional BIBREF3, stylistic BIBREF4, etc. On this basis, some work additionally extracts social context features (a.k.a. meta-data features) as credibility features, including source-based BIBREF5, user-centered BIBREF6, post-based BIBREF7 and network-based BIBREF8, etc. These methods have attained a certain level of success. Additionally, recent researches BIBREF9, BIBREF10 find that doubtful and opposing voices against fake news are always triggered along with its propagation. Fake news tends to provoke controversies compared to real news BIBREF11, BIBREF12. Therefore, stance analysis of these controversies can serve as valuable credibility features for fake news detection. There is an effective and novel way to improve the performance of fake news detection combined with stance analysis, which is to build multi-task learning models to jointly train both tasks BIBREF13, BIBREF14, BIBREF15. 
These approaches model information sharing and representation reinforcement between the two tasks, which expands valuable features for their respective tasks. However, a prominent drawback of these methods, and even of typical multi-task learning methods like the shared-private model, is that the shared features in the shared layer are sent equally to their respective tasks without filtering, which causes some useless and even adverse features to be mixed into different tasks, as shown in Figure FIGREF2(a). As a result, the network can be confused by these features, which interferes with effective sharing and can even mislead the predictions. To address the above problems, we design a sifted multi-task learning model with a filtering mechanism (Figure FIGREF2(b)) to detect fake news by jointly learning the stance detection task. Specifically, we introduce a selected sharing layer into each task after the shared layer of the model for filtering shared features. The selected sharing layer consists of two cells: a gated sharing cell for discarding useless features and an attention sharing cell for focusing on features that are conducive to their respective tasks. Besides, to better capture long-range dependencies and improve the parallelism of the model, we apply a transformer encoder module BIBREF16 to our model for encoding input representations of both tasks. Experimental results reveal that the proposed model outperforms the compared methods and sets new benchmarks. In summary, the contributions of this paper are as follows: We explore a selected sharing layer relying on gate and attention mechanisms, which can selectively capture valuable shared features between the fake news detection and stance detection tasks for their respective tasks. The transformer encoder is introduced into our model for encoding inputs of both tasks, which enhances the performance of our method by taking advantage of its long-range dependencies and parallelism. 
Experiments on two public, widely used fake news datasets demonstrate that our method significantly outperforms previous state-of-the-art methods. The selected sharing layer consists of two cells: a gated sharing cell for discarding useless features and an attention sharing cell for focusing on features that are conducive to their respective tasks.
What is the gated sharing cell used for?
Discarding useless features.
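A minimal sketch of what a gated sharing cell of the kind described above might look like: a learned sigmoid gate applied element-wise to the shared-layer features, so features whose gate values are near zero are effectively discarded. The parameter shapes and the numpy formulation are assumptions of this illustration, not the authors' implementation.

```python
import numpy as np

def gated_sharing_cell(shared, W, b):
    """Filter shared features with an element-wise sigmoid gate.

    shared: shared-layer feature vector, shape (d,)
    W, b:   hypothetical task-specific gate parameters,
            shapes (d, d) and (d,)
    """
    gate = 1.0 / (1.0 + np.exp(-(shared @ W + b)))  # values in (0, 1)
    return gate * shared  # near-zero gates discard features

# A gate saturated at 0 discards everything; saturated at 1 passes all:
shared = np.ones(4)
W = np.zeros((4, 4))
print(gated_sharing_cell(shared, W, np.full(4, -100.0)))  # ~ all zeros
print(gated_sharing_cell(shared, W, np.full(4, 100.0)))   # ~ shared unchanged
```

In the model described above, each task would learn its own gate parameters, so the same shared features can be kept for one task and discarded for the other.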