paper_id: stringlengths (10 to 10)
yes_no: bool (2 classes)
paper_index: int64 (0 to 519)
evidence: stringlengths (0 to 37.7k)
question: stringlengths (4 to 11.7k)
answer: stringlengths (1 to 26k)
null
false
null
What is Nasdaq?
The Nasdaq Stock Market (National Association of Securities Dealers Automated Quotations Stock Market) is an American stock exchange based in New York City. It is the most active stock trading venue in the US by volume, and ranked second on the list of stock exchanges by market capitalization of shares traded, behind the New York Stock Exchange. The exchange platform is owned by Nasdaq, Inc., which also owns the Nasdaq Nordic stock market network and several U.S.-based stock and options exchanges. According to a Gallup poll conducted in 2022, approximately 58% of American adults reported having money invested in the stock market, either through individual stocks, mutual funds, or retirement accounts.
null
false
453
We compare the time and memory complexity of SpaLoc against NLM on the Has-Sister and Grandparent tasks. In Fig. 5, we plot the memory consumption and inference time of each sample as a function of the number of objects in the evaluation domains. The experimental results show that our method reduces the space complexity from NLM's original O(n^3) to the O(n^2) required to solve the Has-Sister and Grandparent tasks. In terms of inference time, there is a constant-factor improvement. On small graphs, adding sparsity does not necessarily improve the running time on GPUs because of the implementation overhead of handling sparse tensors. A more efficient implementation of sparse operations is interesting future work.
However, the authors are missing key details: for example, when the baseline methods hit OOM, what memory budget did the authors set?
Thank you for your suggestion. We have included details about our memory settings in the paper. Specifically, for all methods, we allocated 12GB of GPU memory. We also added a scalability analysis in the appendix. In a nutshell, our method focuses on optimizing the space complexity of inference. The experimental results show that our method can reduce the space complexity from NLM's original O(n^3) to the O(n^2) required to solve the Has-Sister and Grandparent tasks. In terms of inference time, there is a constant-factor improvement.
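The asymptotic claim above can be illustrated with a small back-of-the-envelope sketch. All function names here are hypothetical, and this only counts dense tensor entries over n objects, not the authors' actual memory accounting:

```python
# Illustrative only: compare how many entries a dense arity-3 tensor
# (as in an O(n^3) representation) needs versus an arity-2 one
# (the reported O(n^2) reduction), for n objects.
def dense_entries(n, arity):
    """Number of entries in a dense tensor over n objects with given arity."""
    return n ** arity

def growth_factor(n, arity):
    """How much the storage grows when the number of objects doubles."""
    return dense_entries(2 * n, arity) / dense_entries(n, arity)

if __name__ == "__main__":
    for n in (10, 100, 1000):
        print(n, dense_entries(n, 3), dense_entries(n, 2))
    # Doubling n multiplies O(n^3) storage by 8 but O(n^2) storage by only 4.
    print(growth_factor(50, 3), growth_factor(50, 2))
```

This is why the gap between the two methods widens as the evaluation domains grow.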
null
false
null
Give me 5 great landmarks to visit in the western United States
- The Grand Canyon
- Mount Rainier
- The Golden Gate Bridge
- Yellowstone National Park
- Disneyland
null
false
null
Why use an ereader to read a book?
Using an electronic reader over a printed book has many advantages. You have immediate access to large libraries of books, and you can purchase and start reading a new book in minutes. You can also take your entire library with you when you travel without lugging around many, or large, books. E-readers also give you the ability to read in the dark without using other lighting that may disturb those around you. While tablets have many of these same benefits, the experience on e-readers is better for reading books. The screens cause much less eye strain due to the use of electronic ink (e-ink) on many of the devices. They are also free from the notifications and other distractions found on tablets and smartphones.
null
false
null
Clarence E. Dietz (April 17, 1919 – June 23, 1993) was a Republican member of the Pennsylvania House of Representatives. He was born in 1919 to Clarence and Mary Elizabeth Dietz. He died in 1993 at his Bedford County home after a short illness.
When was Clarence Dietz born?
Clarence Dietz was born on April 17, 1919.
null
false
null
Several books on Phish have been published, including two official publications: The Phish Book, a 1998 coffee table book credited to the band members and journalist Richard Gehr which focused on the band's activities during 1996 and 1997, and Phish: The Biography, a semi-official biographical book written by music journalist and Phish fan Parke Puterbaugh, was published in 2009 and was based on interviews with the four band members, their friends and crew. An installment of the 33⅓ book series on A Live One, written by Walter Holland, was published in 2015. The 2013 book You Don't Know Me but You Don't Like Me: Phish, Insane Clown Posse, and My Misadventures with Two of Music's Most Maligned Tribes, written by music critic Nathan Rabin, compares and contrasts the fanbases of Phish and Insane Clown Posse.
Based on this paragraph, extract the names of the books about Phish and list them using bullets.
- The Phish Book
- Phish: The Biography
- A Live One
- You Don't Know Me but You Don't Like Me: Phish, Insane Clown Posse, and My Misadventures with Two of Music's Most Maligned Tribes
1910.11493
false
null
BIBREF17: The Lemming model is a log-linear model that performs joint morphological tagging and lemmatization. The model is globally normalized with the use of a second-order linear-chain CRF. To efficiently calculate the partition function, the choice of lemmata is pruned with the use of pre-extracted edit trees.
What were the non-neural baselines used for the task?
The Lemming model in BIBREF17
null
false
null
William Henry Harrison (February 9, 1773 – April 4, 1841) was an American military officer and politician who served as the ninth president of the United States. Harrison died just 31 days after his inauguration in 1841, and had the shortest presidency in United States history. He was also the first United States president to die in office, and a brief constitutional crisis resulted as presidential succession was not then fully defined in the United States Constitution. Harrison was the last president born as a British subject in the Thirteen Colonies and was the paternal grandfather of Benjamin Harrison, the 23rd president of the United States.
Given a reference text about a president, tell me when he died.
April 4, 1841
null
false
null
The type of bicycle you buy will depend on a few different variables. Are you looking for a city, hybrid, folding or off-road bike? How much you are looking to spend on the bicycle is also important when making these decisions, as there are hundreds of options out there to choose from. There are also options to customize bicycles.
I would like to spend no more than $1000 for my hybrid bicycle purchase.
null
false
null
Reliable counts of speakers of Louisiana French are difficult to obtain as distinct from other varieties of French. However, the vast majority of native residents of Louisiana and east and southeast Texas who speak French are likely speakers of Louisiana French. In Louisiana, as of 2010, the population of French speakers was approximately 115,183. These populations were concentrated most heavily in the southern, coastal parishes. In Texas, as of 2010, the French-speaking population was 55,773, though many of these were likely immigrants from France and other locations, living in the urban areas. Nevertheless, in the rural eastern/southeastern Texas counties of Orange, Jefferson, Chambers, Newton, Jasper, Tyler, Liberty, and Hardin alone—areas where it can be reasonably presumed that almost all French speakers are Louisiana French speakers—the total French-speaking population was composed of 3,400 individuals. It is likely a substantial portion of the 14,493 speakers in Houston's Harris county are also Louisiana French speakers. With this in mind, a marked decline in the number of French speakers in Texas has been noticed in the last half of the twentieth century. For example, at one point the French-speaking population of Jefferson County was 24,049 as compared to the mere 1,922 today. Likewise, in Harris County the French-speaking population has shifted from 26,796 to 14,493 individuals. Louisiana French-speaking populations can also be found in southern Mississippi and Alabama, as well as pockets in other parts of the United States.
Given this text about speakers of varieties of Louisiana French, has the number of French speakers in Texas increased or decreased since the middle of the twentieth century?
It has generally decreased. For example, in the second half of the twentieth century, the French-speaking population of Jefferson County was 24,049 as compared to 1,922 today. Similarly, in Harris County the French-speaking population has shifted from 26,796 to 14,493.
null
false
null
What is the popular TV show called where they send a number of people to a deserted island and they have to vote people off of the tribe?
The show is called Survivor, a reality-competition television franchise. Contestants are left somewhere in the world and they have to practice survival skills while also competing in challenges to make it to the final three to win a grand prize.
null
false
null
The following article lists the most valuable corporate brands in the world according to different estimates by Kantar Group, Interbrand, Brand Finance and Forbes. Factors that influence brand value are sales, market share, market capitalization, awareness of a brand, products, popularity, image, etc. Readers should note that lists like this, while informative, are somewhat subjective, as no single metric exists for determining valuation for brands. The top 10 most valuable companies according to Kantar's 2022 list are Apple Inc., Google Inc., Amazon.com Inc., Microsoft Corp., Tencent, McDonald's, Visa Inc., Facebook, Alibaba, and Louis Vuitton.
Which are the most valuable corporate brands in the world?
Kantar Group, Interbrand, Brand Finance and Forbes each release an annual list of the most valuable corporate brands in the world. The top 10 most valuable companies according to Kantar's 2022 list are Apple Inc. (US), Google Inc. (US), Amazon.com Inc. (US), Microsoft Corp. (US), Tencent (China), McDonald's (US), Visa Inc. (US), Facebook (US), Alibaba Group (China) and Louis Vuitton (France). Factors that influence brand value are sales, market share, market capitalization, awareness of a brand, products, popularity, image, etc. Readers should note that lists like this, while informative, are somewhat subjective, as no single metric exists for determining valuation for brands.
1906.06442
false
null
To support this hypothesis, we first demonstrate that the permutation and word-dropping noise used by BIBREF19 do not improve or significantly degrade NMT accuracy, corroborating that noise might act as an indicator that the source is back-translated, without much loss in mutual information between the source and target. We then train models on WMT English-German (EnDe) without BT noise, and instead explicitly tag the synthetic data with a reserved token. We call this technique “Tagged Back-Translation" (TaggedBT). These models achieve equal to slightly higher performance than the noised variants. We repeat these experiments with WMT English-Romanian (EnRo), where NoisedBT underperforms standard BT and TaggedBT improves over both techniques. We demonstrate that TaggedBT also allows for effective iterative back-translation with EnRo, a technique which saw quality losses when applied with standard back-translation. We perform our experiments on WMT18 EnDe bitext, WMT16 EnRo bitext, and WMT15 EnFr bitext respectively. We use WMT Newscrawl for monolingual data (2007-2017 for De, 2016 for Ro, 2007-2013 for En, and 2007-2014 for Fr). For bitext, we filter out empty sentences and sentences longer than 250 subwords. We remove pairs whose whitespace-tokenized length ratio is greater than 2. This results in about 5.0M pairs for EnDe, and 0.6M pairs for EnRo. We do not filter the EnFr bitext, resulting in 41M sentence pairs.
What datasets was the method evaluated on?
The answers are shown as follows:
- WMT18 EnDe bitext
- WMT16 EnRo bitext
- WMT15 EnFr bitext
WMT Newscrawl is used for monolingual data (2007-2017 for De, 2016 for Ro, 2007-2013 for En, and 2007-2014 for Fr).
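The bitext filtering steps described in the evidence (drop empty sentences, drop sentences longer than 250 subwords, drop pairs whose whitespace-tokenized length ratio exceeds 2) can be sketched as follows. `keep_pair` is a hypothetical helper, and it approximates the subword-count limit with whitespace tokens:

```python
def keep_pair(src, tgt, max_len=250, max_ratio=2.0):
    """Return True if a sentence pair passes the filters described in the
    text: no empty sentence, neither side longer than max_len tokens (the
    paper counts subwords; this sketch uses whitespace tokens), and a
    whitespace-tokenized length ratio of at most max_ratio."""
    s, t = src.split(), tgt.split()
    if not s or not t:
        return False
    if len(s) > max_len or len(t) > max_len:
        return False
    return max(len(s), len(t)) / min(len(s), len(t)) <= max_ratio

# Toy corpus: the second pair is empty on one side, the third has ratio 6.
pairs = [("hello world", "hallo welt"),
         ("", "hallo"),
         ("a b c d e f", "x")]
filtered = [p for p in pairs if keep_pair(*p)]
```

Applied to the toy corpus above, only the first pair survives, mirroring how such filters shrink raw bitext before training.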
null
false
247
We conduct experiments on a simplified Chinese emotion cause corpus BIBREF31, the only publicly available dataset on this task to the best of our knowledge. The corpus contains 2,105 documents from SINA city news. Each document has only one emotion word and one or more emotion causes. The documents are segmented into clauses manually. The main task is to identify which clause contains the emotion cause. Details of the corpus are shown in Table 1. The metrics we used in evaluation follow lee2010text; they are commonly accepted, so we can compare our results with others. If a proposed emotion cause clause covers the annotated answer, the word sequence is considered correct. The precision, recall, and F-measure are defined as P = correct proposed causes / all proposed causes, R = correct proposed causes / all annotated causes, and F = 2PR / (P + R). In the experiments, we randomly select 90% of the dataset as training data and 10% as testing data. In order to obtain statistically credible results, we evaluate our method and baseline methods 25 times with different train/test splits.
What does the simplified Chinese emotion cause corpus contain?
The corpus contains 2,105 documents from SINA city news.
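A minimal sketch of the conventional precision/recall/F-measure computation used in this kind of evaluation (the helper name and clause counts are illustrative, not the paper's):

```python
def precision_recall_f1(num_correct, num_proposed, num_annotated):
    """Conventional definitions: P = correct / proposed,
    R = correct / annotated, F = 2PR / (P + R), with zero-division guards."""
    p = num_correct / num_proposed if num_proposed else 0.0
    r = num_correct / num_annotated if num_annotated else 0.0
    f = 2 * p * r / (p + r) if (p + r) else 0.0
    return p, r, f

# E.g. 80 correctly identified cause clauses out of 100 proposed,
# against 120 annotated causes.
p, r, f = precision_recall_f1(80, 100, 120)
```

Averaging these over the 25 random train/test splits would then give the statistically credible numbers the passage describes.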
null
false
null
Which of the following are cities from South India: Bangalore, Hyderabad, Mumbai, Delhi, Chennai, Srinagar, Jaipur, Kolkata, Pune, Kochi?
Bangalore, Hyderabad, Chennai, Kochi are cities in South India.
null
false
null
What is a quantum computer?
A quantum computer is a computer that exploits quantum mechanical phenomena. At small scales, physical matter exhibits properties of both particles and waves, and quantum computing leverages this behavior using specialized hardware. Classical physics cannot explain the operation of these quantum devices, and a scalable quantum computer could perform some calculations exponentially faster than any modern "classical" computer. In particular, a large-scale quantum computer could break widely used encryption schemes and aid physicists in performing physical simulations; however, the current state of the art is still largely experimental and impractical. The basic unit of information in quantum computing is the qubit, similar to the bit in traditional digital electronics. Unlike a classical bit, a qubit can exist in a superposition of its two "basis" states, which loosely means that it is in both states simultaneously. When measuring a qubit, the result is a probabilistic output of a classical bit. If a quantum computer manipulates the qubit in a particular way, wave interference effects can amplify the desired measurement results. The design of quantum algorithms involves creating procedures that allow a quantum computer to perform calculations efficiently. Physically engineering high-quality qubits has proven challenging. If a physical qubit is not sufficiently isolated from its environment, it suffers from quantum decoherence, introducing noise into calculations. National governments have invested heavily in experimental research that aims to develop scalable qubits with longer coherence times and lower error rates. Two of the most promising technologies are superconductors (which isolate an electrical current by eliminating electrical resistance) and ion traps (which confine a single atomic particle using electromagnetic fields). Any computational problem that can be solved by a classical computer can also be solved by a quantum computer. 
Conversely, any problem that can be solved by a quantum computer can also be solved by a classical computer, at least in principle given enough time. In other words, quantum computers obey the Church–Turing thesis. This means that while quantum computers provide no additional advantages over classical computers in terms of computability, quantum algorithms for certain problems have significantly lower time complexities than corresponding known classical algorithms. Notably, quantum computers are believed to be able to solve certain problems quickly that no classical computer could solve in any feasible amount of time—a feat known as "quantum supremacy." The study of the computational complexity of problems with respect to quantum computers is known as quantum complexity theory.
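The measurement behavior described above (a qubit in superposition yields a probabilistic classical bit, with probabilities given by the squared amplitudes) can be sketched with a toy stdlib simulation. `measure_counts` is a hypothetical helper, not a real quantum computation:

```python
import math
import random

def measure_counts(alpha, beta, shots=10000, seed=0):
    """Simulate measuring a single qubit in state alpha|0> + beta|1>.
    Born rule: outcome 0 with probability |alpha|^2, outcome 1 with |beta|^2."""
    p0 = abs(alpha) ** 2
    assert abs(p0 + abs(beta) ** 2 - 1.0) < 1e-9, "state must be normalized"
    rng = random.Random(seed)
    ones = sum(rng.random() >= p0 for _ in range(shots))
    return shots - ones, ones

# Equal superposition (|0> + |1>)/sqrt(2): each outcome has probability 1/2.
zeros, ones = measure_counts(1 / math.sqrt(2), 1 / math.sqrt(2))
```

Repeated measurements converge on the Born-rule probabilities, which is the sense in which "the result is a probabilistic output of a classical bit".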
null
false
null
How to understand or classify Indian film industry?
India has a big and vibrant movie industry, producing between 1,500 and 2,000 movies per year. But not all of them are Bollywood movies - a term given to movies made in Hindi out of Bombay (Mumbai). Only 300-500 of the movies are Bollywood movies; the rest are made in over 20 different languages in different states of India. Apart from Bollywood (movies made in Bombay), there is Tollywood (Telugu and Tamil films made in Telangana and Tamil Nadu), Kollywood (movies made in Kolkata in Bengali), Pollywood (movies made in Punjabi in the state of Punjab), Mollywood (movies made in Malayalam in the state of Kerala), etc. India is a diverse country, with each state having its own distinct culture, language and food, and that is now reflected in its movies as well. More and more Indians are enjoying different films from across their country, thanks not only to cinema but also to newer OTT platforms like Netflix India, Amazon Prime India, Hotstar, etc. RRR is a prime example of a Tollywood regional film winning an Oscar this year - and it is important to note that it is NOT a Bollywood or a Hindi film. So this is the usual way to classify the film industry in India - apart from other classifications like art films or mainstream films.
null
false
null
First, it's important to know what your goal is with a bicycle. If you ride with the family, you can choose a regular bicycle, but if you plan to use it for competition, it is better to choose a more sophisticated carbon-fiber bicycle to gain more performance in races.
It is correct, but I would also recommend checking the purpose of buying a bicycle and evaluating the cost-benefit.
null
false
null
What is ASEAN?
ASEAN stands for the Association of Southeast Asian Nations. It was formed as an economic and political union on the 8th of August 1967 through the Bangkok Declaration. It has a secondary objective of promoting peace. It currently consists of ten member countries in Southeast Asia, namely Brunei, Cambodia, Indonesia, Laos, Malaysia, Myanmar, the Philippines, Singapore, Thailand, and Vietnam.
null
false
null
Classify the below car manufacturing companies based on their countries of origin (Germany or India). Audi, BMW, Tata Motors, Mahindra and Mahindra
Germany - Audi, BMW India - Tata Motors, Mahindra and Mahindra
1910.02754
false
null
In this section, we analyze the visual and text based attention mechanisms. We find that the visual attention is very sparse, in that just one source encoding is attended to (the maximum visual attention over source encodings, across the test set, has mean 0.99 and standard deviation 0.015), thereby limiting the use of modulation. Thus, in practice, we find that a small weight ($\gamma =0.1$) is necessary to prevent degradation due to this sparse visual attention component. Figures FIGREF18 & FIGREF19 show the comparison of visual and text based attention for two sentences, one long source sentence of length 21 and one short source sentence of length 7. In both cases, we find that the visual component of the attention hasn't learnt any variation over the source encodings, again suggesting that the visual embeddings do not lend themselves to enhancing token-level discriminativeness during prediction. We find this to be consistent across sentences of different lengths.
What is result of their attention distribution analysis?
The answers are shown as follows: * visual attention is very sparse * visual component of the attention hasn't learnt any variation over the source encodings
null
false
281
Adversarial examples are generally minimal perturbations applied to the input data in an effort to expose the regions of the input space where a trained model performs poorly. Prior works BIBREF0, BIBREF1 have demonstrated the ability of an adversary to evade state-of-the-art classifiers by carefully crafting attack examples which can be even imperceptible to humans. Following such approaches, there have been a number of techniques aimed at generating adversarial examples BIBREF2, BIBREF3. Depending on the degree of access to the target model, an adversary may operate in one of two different settings: (a) the black-box setting, where an adversary doesn't have access to the target model's internal architecture or its parameters, and (b) the white-box setting, where an adversary has access to the target model, its parameters, and input feature representations. In both these settings, the adversary cannot alter the training data or the target model itself. Depending on the purpose of the adversary, adversarial attacks can be categorized as (a) targeted attacks and (b) non-targeted attacks. In a targeted attack, the output category of a generated example is intentionally controlled to a specific target category with limited change in semantic information, while a non-targeted attack doesn't care about the category of the misclassified result. Most of the prior work has focused on image classification models where adversarial examples are obtained by introducing imperceptible changes to pixel values through optimization techniques BIBREF4, BIBREF5. However, generating natural language adversarial examples can be challenging mainly due to the discrete nature of text samples. Continuous data like image or speech is much more tolerant to perturbations compared to text BIBREF6. In the textual domain, even a small perturbation is clearly perceptible and can completely change the semantics of the text.
Another challenge for generating adversarial examples relates to identifying salient areas of the text where a perturbation can be applied successfully to fool the target classifier. In addition to fooling the target classifier, the adversary is designed with different constraints depending on the task and its motivations BIBREF7. In our work, we focus on constraining our adversary to craft examples with semantic preservation and minimum perturbations to the input text. Given different settings of the adversary, there are other works that have designed attacks in “gray-box” settings BIBREF8, BIBREF9, BIBREF10. However, the definitions of “gray-box” attacks are quite different in each of these approaches. In this paper, we focus on “black-box” setting where we assume that the adversary possesses a limited set of labeled data, which is different from the target's training data, and also has an oracle access to the system, i.e., one can query the target classifier with any input and get its corresponding predictions. We propose an effective technique to generate adversarial examples in a black-box setting. We develop an Adversarial Example Generator (AEG) model that uses a reinforcement learning framing to generate adversarial examples. We evaluate our models using a word-based BIBREF11 and character-based BIBREF12 text classification model on benchmark classification tasks: sentiment classification and news categorization. The adversarial sequences generated are able to effectively fool the classifiers without changing the semantics of the text. Our contributions are as follows: We propose a black-box non-targeted attack strategy by combining ideas of substitute network and adversarial example generation. We formulate it as a reinforcement learning task. We introduce an encoder-decoder that operates over words and characters of an input text and empowers the model to introduce word and character-level perturbations. 
We adopt a self-critical sequence training technique to train our model to generate examples that can fool or increase the probability of misclassification in text classifiers. We evaluate our models on two different datasets associated with two different tasks: IMDB sentiment classification and AG's news categorization task. We run ablation studies on various components of the model and provide insights into decisions of our model.
How is black-box non-targeted attack strategy proposed?
By combining ideas of substitute network and adversarial example generation.
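To make the black-box setting concrete, here is a deliberately naive sketch of oracle access: query the target classifier with perturbed inputs until its prediction flips. This is NOT the paper's RL-based AEG model; the helper names and the toy keyword "classifier" are illustrative only:

```python
def blackbox_attack(text, oracle, max_queries=50):
    """Toy black-box loop: the adversary sees only the oracle's predictions.
    Apply one character-level perturbation at a time and query the target
    until its prediction changes or the query budget runs out."""
    original = oracle(text)
    chars = list(text)
    queries = 0
    for i in range(len(chars)):
        if queries >= max_queries:
            break
        flipped = chars[:]
        flipped[i] = "*"          # crude character substitution
        candidate = "".join(flipped)
        queries += 1
        if oracle(candidate) != original:
            return candidate, queries
    return text, queries

# A trivial keyword "classifier" standing in for the target model.
toy_oracle = lambda s: "positive" if "good" in s else "negative"
adv, used = blackbox_attack("a good movie", toy_oracle)
```

Real attacks like the one described replace this brute-force loop with a learned policy that also preserves semantics and minimizes perturbations, but the oracle-only interface is the same.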
null
false
143
Named Entity Recognition (NER) in the biomedical domain usually includes recognition of entities such as proteins, genes, diseases, treatments, drugs, etc. Fact extraction involves extraction of Named Entities from a corpus, usually given a certain ontology. When compared to NER in the domain of general text, the biomedical domain has some characteristic challenges. Some of the earliest systems were heavily dependent on hand-crafted features. The method proposed in BIBREF4 for recognition of protein names in text does not require any prepared dictionary. The work gives examples of diversity in protein names and lists multiple rules depending on simple word features as well as POS tags. BIBREF5 adopt a machine learning approach for NER. Their NER system extracts medical problems, tests and treatments from discharge summaries and progress notes. They use a semi-Conditional Random Field (semi-CRF) BIBREF6 to output labels over all tokens in the sentence. They use a variety of token, context and sentence level features. They also use some concept mapping features using existing annotation tools, as well as Brown clustering to form 128 clusters over the unlabelled data. The dataset used is the i2b2 2010 challenge dataset. Their system achieves an F-Score of 0.85. BIBREF7 is an incremental paper on NER taggers. It uses 3 types of word-representation techniques (Brown clustering, distributional clustering, word vectors) to improve performance of the NER Conditional Random Field tagger, and achieves marginal F-Score improvements. BIBREF8 propose a bootstrapping mechanism to bootstrap biomedical ontologies using NELL BIBREF9, which uses a coupled semi-supervised bootstrapping approach to extract facts from text, given an ontology and a small number of “seed” examples for each category. This interesting approach (called BioNELL) uses an ontology of over 100 categories. In contrast to NELL, BioNELL does not contain any relations in the ontology.
BioNELL is motivated by the fact that a lot of scientific literature available online is highly reliable due to peer-review. The authors note that the algorithm used by NELL to bootstrap fails in BioNELL due to ambiguities in biomedical literature, and heavy semantic drift. One of the causes for this is that often common words such as “white”, “dad”, “arm” are used as names of genes - this can easily result in semantic drift in one iteration of the bootstrapping. In order to mitigate this, they use Pointwise Mutual Information scores for corpus-level statistics, which attribute a small score to common words. In addition, in contrast to NELL, BioNELL only uses high-ranking instances as seeds in the next iteration, but adds low-ranking instances to the knowledge base. Since evaluation is not possible using Mechanical Turk or a small number of experts (due to the complexity of the task), they use Freebase BIBREF10, a knowledge base that has some biomedical concepts as well. The lexicon learned using BioNELL is used to train an NER system. The system shows a very high precision, thereby showing that BioNELL learns very few ambiguous terms. More recently, deep learning techniques have been developed to further enhance the performance of NER systems. BIBREF11 explore recurrent neural networks for the problem of NER in biomedical text. When compared to NER in the domain of general text, the biomedical domain has some characteristic challenges: 1. Synonymy: the same biomedical entity is often known by different names. E.g. “cyclin-dependent kinase inhibitor p27” and “p27kip1” are the same protein, and “heart attack” and “myocardial infarction” refer to the same medical problem. 2. Abbreviations: the literature is rich with ambiguous abbreviations: “RA” can refer to “right atrium”, “rheumatoid arthritis”, “renal artery” or several other concepts (Pakhomov, 2002). 3. Entity names are subject to many variants, and also change over time.
When compared to NER in the domain of general text, what are the characteristic challenges of the biomedical domain?
There are synonymy, abbreviations in the biomedical domain and entity names are subject to many variants, and also change over time.
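The PMI-based down-weighting of common words that the evidence attributes to BioNELL can be sketched as follows. All counts below are invented for illustration:

```python
import math

def pmi(count_xy, count_x, count_y, total):
    """Pointwise mutual information: log( p(x,y) / (p(x) * p(y)) ).
    Common words have large p(x), which drives the score down -- the
    effect used to keep ambiguous gene names like "arm" out of the seeds."""
    p_xy = count_xy / total
    p_x = count_x / total
    p_y = count_y / total
    return math.log(p_xy / (p_x * p_y))

# A rare but strongly associated pair scores high; a common word
# co-occurring at roughly chance level scores low (even negative).
rare = pmi(count_xy=10, count_x=20, count_y=20, total=10000)
common = pmi(count_xy=100, count_x=5000, count_y=2000, total=10000)
```

Ranking candidate instances by such scores is one simple way a bootstrapper can prefer distinctive terms over everyday words.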
null
false
null
PPG Industries, Inc. is an American Fortune 500 company and global supplier of paints, coatings, and specialty materials. With headquarters in Pittsburgh, Pennsylvania, PPG operates in more than 70 countries around the globe. By revenue it is the largest coatings company in the world followed by AkzoNobel. It is headquartered in PPG Place, an office and retail complex in downtown Pittsburgh, and is known for its glass facade designed by Postmodern architect Philip Johnson.
Who is the largest coatings company in the world by revenue?
PPG Industries, Inc.
null
false
null
What happened to Ned Stark's ancestral family greatsword, Ice?
After he was beheaded, Ned Stark's family sword was taken by Tywin Lannister and reforged into two longswords: Widow's Wail, which was gifted to his eldest grandson, Joffrey, and Oathkeeper, which was gifted to his son, Jaime.
null
false
null
In the Asia Cup 2022 (cricket), the Player of the Match was
Bhanuka Rajapaksa
null
false
124
Speculation is a naturally occurring phenomenon in textual data, forming an integral component of many systems, especially in the biomedical information retrieval domain. Previous work addressing cue detection and scope resolution (the two subtasks of speculation detection) has ranged from rule-based systems to deep learning-based approaches. In this paper, we apply three popular transformer-based architectures, BERT, XLNet and RoBERTa, to this task, on two publicly available datasets, the BioScope Corpus and the SFU Review Corpus, reporting substantial improvements over previously reported results (by at least 0.29 F1 points on cue detection and 4.27 F1 points on scope resolution). We also experiment with joint training of the model on multiple datasets, which outperforms the single dataset training approach by a good margin. We observe that XLNet consistently outperforms BERT and RoBERTa, contrary to results on other benchmark datasets. To confirm this observation, we apply XLNet and RoBERTa to negation detection and scope resolution, reporting state-of-the-art results on negation scope resolution for the BioScope Corpus (increase of 3.16 F1 points on the BioScope Full Papers, 0.06 F1 points on the BioScope Abstracts) and the SFU Review Corpus (increase of 0.3 F1 points).
Does XLNet outperform RoBERTa?
Yes.
null
false
null
Identify which animal species is alive or extinct: Pelycosaurus, Hippopotamus
Hippopotamus is alive, Pelycosaurus is extinct.
1911.03681
false
null
We ensemble BERT and E-BERT by (a) mean-pooling their outputs (AVG) or (b) concatenating the entity and its name with a slash symbol (CONCAT), e.g.: Jean_Marais / Jean Mara ##is.
What are the two ways of ensembling BERT and E-BERT?
The answers are shown as follows: * mean-pooling their outputs (AVG) * concatenating the entity and its name with a slash symbol (CONCAT)
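The two ensembling variants can be sketched as follows; the vectors and the slash-separated input are illustrative toy values, not the models' actual outputs.

```python
# (a) AVG: mean-pool the two models' output vectors element-wise.
bert_vec = [0.2, 0.4, 0.6]   # hypothetical BERT output
ebert_vec = [0.6, 0.2, 0.0]  # hypothetical E-BERT output
avg_vec = [(b + e) / 2 for b, e in zip(bert_vec, ebert_vec)]

# (b) CONCAT: feed the entity and its wordpiece-tokenized name together,
# separated by a slash symbol.
concat_input = ["Jean_Marais", "/", "Jean", "Mara", "##is"]
print(" ".join(concat_input))  # Jean_Marais / Jean Mara ##is
```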
null
false
276
Among neural networks, LSTMs BIBREF0 are commonly used for language modeling. Although new architectures BIBREF1, BIBREF2 challenge this standard, LSTMs remain competitive for language modeling BIBREF3. However, despite the success of LM LSTMs, it is not clear what makes them so effective. In particular, are representations derived through language modeling able to effectively encode syntactic structures and relations? Do they encode them in a reliable and systematic way? The typical metric used to compare LMs, perplexity, is not adapted to address these questions. Perplexity measures the probability assigned to held-out data from the corpus the LM is trained on. Because the held-out and training data are typically randomly extracted from an initial corpus, they have similar statistics, which is good from a machine learning viewpoint, but bad from the viewpoint of linguistic analysis: perplexity is mostly sensitive to the most common sentence types in the initial corpus and therefore will not reflect well the behavior of the LM in the tail of the distribution. In addition, the sentences extracted from a natural corpus confound several factors: syntax, semantics, pragmatics, etc. further complicating the interpretation of a good perplexity score. To circumvent this limitation, recent work has focused on using probing techniques inspired by linguistic and psycholinguistics (for instance, grammaticality or acceptability judgments, or forced choice). In addition, instead of using sentences from the training corpus, studies rely more and more on automatically constructed test sentences, which enable for a removal of the bias in the original corpus and focus on particular linguistic phenomena. Here, we will use acceptability judgments operationalized by the log probability of sentences according to the LM and sets of synthetic sentences generated from template sentences to probe for a challenging linguistic structure: verb argument structure. 
Verb argument structure provides languages with a way to link syntactic position in a sentence (subject, direct object, etc) with semantic roles (agent, patient, etc), in other words, to determine who is doing what. It is currently unknown whether neural LMs purely trained from surface statistics are able to capture this kind of structure, or whether additional information from another modality would be needed to provide some semantic grounding. Verb argument structure is typically correlated to sentence position in many languages like English. But in other languages with relatively free word order, it is indicated by morphological markers. Here, we study German, where the arguments of a verb can occur in any position (when occurring within a relative clause), and is indicated by the case of the noun phrase (nominative, accusative, etc). We set up a test of argument structure representation by presenting a trained LM with carefully constructed sets of sentences that either have the right set of arguments, or abnormal sentences where one case is missing or duplicated. We use word order permutations to control for unigram and positional statistics. If the LM is able to track argument structure irrespective of word order, it should assign lower grammaticality scores (log probabilities) to the incorrect sentences as compared to the correct ones. Since at the level of the sentence, we study a global rather than local syntactic phenomenon, we depart from earlier work BIBREF4, BIBREF5, BIBREF6, BIBREF7 and do not compare pairs of sentences. Rather, we compare a set of valid grammatical variations of the template to a corresponding set of grammatical violations of the template. Thus, for each template, we measure the model's ability to discriminate grammatical sentences from ungrammatical ones using receiver operating characteristic curves, or ROC curves. We also compute the area under the ROC curve, or AUC.
In our results, we often report the average AUC over templates as our metric. We evaluate three LMs on our dataset, the two-layer LSTM of BIBREF8 trained on German Wikipedia text, as well as n-gram baselines using the same corpus. We ask proficient German speakers to annotate our sentences for grammaticality, providing a human comparison. Since some of these sentences are rather implausible because of the permutations, we also collect human meaningfulness scores. We find that our dataset is challenging for both LMs and humans and that LMs lag behind human performance.
Do the LMs lag behind human performance?
Yes, they do.
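The per-template discrimination metric described in the evidence (AUC over grammatical vs. ungrammatical log probabilities) can be sketched with the rank-statistic form of AUC; the scores below are made-up stand-ins for LM log probabilities.

```python
# Hypothetical LM log probabilities for one template: grammatical
# variants should score higher than grammatical violations.
grammatical = [-10.2, -11.0, -9.8]
ungrammatical = [-12.5, -10.5, -13.1]

def auc(pos, neg):
    """AUC as the probability that a random positive outranks a random
    negative (ties count half)."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

score = auc(grammatical, ungrammatical)
print(round(score, 3))  # 0.889
```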
null
false
null
What is the PEDALS method
PEDALS is a mnemonic to help in system design interviews, which stands for Process, Estimate, Design, Articulate Data Model, List Architecture and Scale
null
false
null
What is the best way to get a good night's rest?
Do not look at any screens with blue light for 1 hour before bed. Drink tea. Meditate. Ensure there is no light in the room and it is a quiet space. The optimal temperature is around 65 degrees Fahrenheit.
null
false
null
How do you get good at a hobby?
You can get good at a hobby with a lot of practice. Some hobbies that require a lot of practice to get good at are gymnastics, swimming, sports, and running. The practice required to become skilled at these hobbies is done through drills and repetition. This creates the muscle memory needed to perform at the highest levels. The practice also changes the body and increases your cardiovascular system's ability to handle high stress loads on the body with ease in the presence of less oxygen and high physical intensity.
null
false
null
Who are the Seven Dwarfs?
The seven dwarfs are commonly known as the seven fictional characters in the fairy tale Snow White. According to the fairy tale, the seven dwarfs live in a cottage in the woods. Snow White's stepmother, the Queen of the kingdom, wants Snow White dead and sends a huntsman to kill her. Snow White escapes from the Queen and hides in the dwarfs' cottage. When the Queen learns Snow White is still alive, she goes to the dwarfs' cottage in disguise and offers Snow White a magic poisoned apple that causes her to faint. When the dwarfs return from their work in the mines that day, they find Snow White unconscious and believe she is dead. The dwarfs place Snow White in a glass coffin. After some time, a prince finds Snow White in the coffin and kisses her. Snow White wakes up, marries the prince, and they live happily ever after.
1907.02030
false
null
The graphs in figure 1 show the distances between duplicate and non-duplicate questions using different embedding systems. The X axis shows the euclidean distance between vectors and the Y axis frequency. A perfect result would be a blue peak to the left and an entirely disconnected orange spike to the right, showing that all non-duplicate questions have a greater euclidean distance than the least similar duplicate pair of questions. As can be clearly seen in the figure above, Elmo BIBREF23 and Infersent BIBREF13 show almost no separation and therefore cannot be considered good models for this problem. A much greater disparity is shown by the Google USE models BIBREF14 , and even more for the Google USE Large model. In fact the Google USE Large achieved a F1 score of 0.71 for this task without any specific training, simply by choosing a threshold below which all sentence pairs are considered duplicates. In order to test whether these results generalised to our domain, we devised a test that would make use of what little data we had to evaluate. We had no original data on whether sentences were semantically similar, but we did have a corpus of articles clustered into stories. Working on the assumption that similar claims would be more likely to be in the same story, we developed an equation to judge how well our corpus of sentences was clustered, rewarding clustering which matches the article clustering and the total number of claims clustered. The precise formula is given below, where INLINEFORM0 is the proportion of claims in clusters from one story cluster, INLINEFORM1 is the proportion of claims in the correct claim cluster, where they are from the most common story cluster, and INLINEFORM2 is the number of claims placed in clusters. A,B and C are parameters to tune. INLINEFORM3
How is the accuracy of the system measured?
The answers are shown as follows: * F1 score of 0.71 for this task without any specific training, simply by choosing a threshold below which all sentence pairs are considered duplicates * distances between duplicate and non-duplicate questions using different embedding systems
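The threshold-based duplicate detection the evidence describes (pairs closer than a euclidean cutoff are called duplicates) can be sketched as follows; the 2-d embeddings and the 0.5 threshold are illustrative, not the actual USE vectors or tuned cutoff.

```python
import math

def euclidean(u, v):
    """Euclidean distance between two equal-length vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

# Toy sentence-pair embeddings (real USE vectors are 512-dimensional).
pairs = [
    ([0.1, 0.2], [0.1, 0.3]),  # intended duplicate: close together
    ([0.9, 0.1], [0.0, 0.8]),  # intended non-duplicate: far apart
]

threshold = 0.5  # pairs below this distance are predicted duplicates
predictions = [euclidean(u, v) < threshold for u, v in pairs]
print(predictions)  # [True, False]
```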
null
false
null
Neumann University is a member of the Atlantic East Conference as well as the United Collegiate Hockey Conference (UCHC). It competes in the National Collegiate Athletic Association (NCAA), Division III. In 2009 Neumann University's men's ice hockey team won the NCAA Division III National Championship. In the summer of 2012, Neumann University constructed the Mirenda Center, a 72,000 square foot athletic complex which houses an indoor track, NCAA Division III regulation basketball court, dance studios, student lounges, a classroom, weight and cardio rooms as well as the Center for Sport Spirituality and Character Development. Neumann sponsors 12 women's athletic teams and 10 men's teams. In addition to the 22 varsity teams, Neumann University also has a very active club sports roster which includes baseball, basketball, cheer leading, a dance team, ice hockey, roller hockey as well as men's and women's rugby. Most recently, Neumann University won back to back national championships (2015 and 2016) as part of the National Collegiate Roller Hockey Association (NCRHA).
List the club sports teams that Neumann University has
Based on the shared information, Neumann University has the following club sports teams: * baseball * basketball * cheer leading * a dance team * ice hockey * roller hockey * men's rugby * women's rugby Note that this may not be a fully exhaustive list.
null
false
null
How do I put a bouquet of fresh flowers in a vase?
In order to put fresh flowers in a vase, follow these steps: 1. Find a vase that is the right size for the flowers. You can use a water glass here in a pinch. 2. Fill the vase to about two-thirds full with water. You can add a dash of bleach to the water as well, but it is not necessary. 3. Now, trim your flowers to the size of the vase, cutting the end of each stem at a diagonal. 4. Remove the leaves from the flowers. This helps the flower put its resources toward the bloom rather than the leaves. Be sure to remove any leaves below the water line, since they can cause infections. 5. Arrange the flowers in the vase. You can adjust the positions until you are happy with the arrangement. 6. Place your vase in a shady spot and you're done!
1911.03310
false
null
Following BIBREF3, we hypothesize that a sentence representation in mBERT is composed of a language-specific component, which identifies the language of the sentence, and a language-neutral component, which captures the meaning of the sentence in a language-independent way. We assume that the language-specific component is similar across all sentences in the language. We thus try to remove the language-specific information from the representations by centering the representations of sentences in each language so that their average lies at the origin of the vector space. We do this by estimating the language centroid as the mean of the mBERT representations for a set of sentences in that language and subtracting the language centroid from the contextual embeddings. We then analyze the semantic properties of both the original and the centered representations using a range of probing tasks. For all tasks, we test all layers of the model. For tasks utilizing a single-vector sentence representation, we test both the vector corresponding to the [cls] token and mean-pooled states.
How they show that mBERT representations can be split into a language-specific component and a language-neutral component?
The answers are shown as follows: * We thus try to remove the language-specific information from the representations by centering the representations of sentences in each language so that their average lies at the origin of the vector space.
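The centering step quoted above (subtracting each language's centroid so the mean lies at the origin) can be sketched with toy vectors; real mBERT states are 768-dimensional.

```python
# Toy sentence representations for one language.
lang_reprs = [[1.0, 2.0], [3.0, 2.0], [2.0, 5.0]]

def centroid(vectors):
    """Mean vector of a set of representations."""
    n = len(vectors)
    return [sum(v[d] for v in vectors) / n for d in range(len(vectors[0]))]

def center(vectors):
    """Subtract the language centroid so the average lies at the origin."""
    c = centroid(vectors)
    return [[x - m for x, m in zip(v, c)] for v in vectors]

centered = center(lang_reprs)
print(centroid(centered))  # [0.0, 0.0]
```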
null
false
null
The British Bull Dog was a popular type of solid-frame pocket revolver introduced by Philip Webley & Son of Birmingham, England, in 1872, and subsequently copied by gunmakers in continental Europe and the United States. It featured a 2.5-inch (64 mm) barrel and was chambered for .442 Webley or .450 Adams cartridges, with a five-round cylinder. Webley produced smaller scaled .320 Revolver and .380 calibre versions later, but did not mark them with the British Bull Dog name.
Measured in feet, what was the length of the gun barrel produced by Philip Webley & Son of Birmingham discussed in the above passage?
0.208 feet
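The conversion behind the answer is simple arithmetic: the passage gives a 2.5-inch barrel, and there are 12 inches in a foot.

```python
barrel_in = 2.5             # barrel length from the passage, in inches
barrel_ft = barrel_in / 12  # 12 inches per foot
print(round(barrel_ft, 3))  # 0.208
```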
null
false
null
What is the terminal outbound station of the D line in the Boston MBTA?
Riverside Station is the last station headed outbound on the D line of Boston's MBTA.
null
false
168
Knowledge representation and reasoning (KRR) is the process of representing the domain knowledge in formal languages (e.g., SPARQL, Prolog) such that it can be used by expert systems to execute querying and reasoning services. KRR has been applied in many fields including financial regulations, medical diagnosis, laws, and so on. One major obstacle in KRR is the creation of large-scale knowledge bases with high quality. For one thing, this requires the knowledge engineers (KEs) not only to have the background knowledge in a certain domain but also enough skills in knowledge representation. Unfortunately, qualified KEs are also in short supply. Therefore, it would be useful to build a tool that allows the domain experts without any background in logic to construct and query the knowledge base simply from text. Controlled natural languages (CNLs) BIBREF0 were developed as a technology that achieves this goal. CNLs are designed based on natural languages (NLs) but with restricted syntax and interpretation rules that determine the unique meaning of the sentence. Representative CNLs include Attempto Controlled English BIBREF1 and PENG BIBREF2 . Each CNL is developed with a language parser which translates the English sentences into an intermediate structure, discourse representation structure (DRS) BIBREF3 . Based on the DRS structure, the language parsers further translate the DRS into the corresponding logical representations, e.g., Answer Set Programming (ASP) BIBREF4 programs. One main issue with the aforementioned CNLs is that the systems do not provide enough background knowledge to preserve semantic equivalences of sentences that represent the same meaning but are expressed via different linguistic structures. For instance, the sentences Mary buys a car and Mary makes a purchase of a car are translated into different logical representations by the current CNL parsers.
As a result, if the user asks a question who is a buyer of a car, these systems will fail to find the answer. In this thesis proposal, I will present KALM BIBREF5 , BIBREF6 , a system for knowledge authoring and question answering. KALM is superior to the current CNL systems in that KALM has a complex frame-semantic parser which can standardize the semantics of the sentences that express the same meaning via different linguistic structures. The frame-semantic parser is built based on FrameNet BIBREF7 and BabelNet BIBREF8 where FrameNet is used to capture the meaning of the sentence and BabelNet BIBREF8 is used to disambiguate the meaning of the extracted entities from the sentence. Experiment results show that KALM achieves superior accuracy in knowledge authoring and question answering as compared to the state-of-the-art systems. The rest of the paper is organized as follows: Section SECREF2 discusses the related works, Section SECREF3 presents the KALM architecture, Section SECREF4 presents KALM-QA, the question answering part of KALM, Section SECREF5 shows the evaluation results, Section SECREF6 shows the future work beyond the thesis, and Section SECREF7 concludes the paper.
Why it would be useful to build a tool that allows the domain experts without any background in logic to construct and query the knowledge base simply from the text?
Because it requires the knowledge engineers (KEs) not only to have the background knowledge in a certain domain but also to have enough skills in knowledge representation. Unfortunately, qualified KEs are also in short supply.
1909.09491
false
null
The log-likelihood of a Plackett-Luce model is not a strict upper bound of the BLEU score, however, it correlates with BLEU well in the case of rich features. The concept of “rich” is actually qualitative, and obscure to define in different applications. We empirically provide a formula to measure the richness in the scenario of machine translation. The greater, the richer. In practice, we find a rough threshold of r is 5.
How they measure robustness in experiments?
The answers are shown as follows: * We empirically provide a formula to measure the richness in the scenario of machine translation.
null
false
null
What are the best travel destinations in 2023.
With the world opening up after the long impact of Covid, a lot of people are traveling again and have started planning their next trips! Globally, popular travel destinations often have less restrictive entry requirements for visitors, a safe social and health environment, and of course, beautiful scenery and rich cultural backgrounds. Here are some recommendations: Singapore, Seoul, Hawaii, Amsterdam and Toronto.
null
false
null
Can you share a response politely informing the customer that we don't support a feature?
We're sorry, but we currently don't support that feature and apologize for the inconvenience. We will take your feedback into consideration and look into alternative options.
null
false
null
Could a nuclear explosion change the Earth's rotation?
Nuclear explosions release more energy than anything else that humans do. Luckily, the energy released by a nuclear blast is only about one trillionth of the power of the Earth spinning. Scientists compare it to trying to slow the speed of a truck by crashing it into a mosquito.
null
false
null
Mao Zedong (26 December 1893 – 9 September 1976), also known as Chairman Mao, was a Chinese communist revolutionary who was the founder of the People's Republic of China (PRC), which he led as the chairman of the Chinese Communist Party from the establishment of the PRC in 1949 until his death in 1976. Ideologically a Marxist–Leninist, his theories, military strategies, and political policies are collectively known as Maoism.
Who was the first chairman of China?
Mao Zedong was the first chairman of the People's Republic of China. Also known as Chairman Mao, he was a Chinese communist revolutionary and the founder of the PRC. He served as the chairman of the Chinese Communist Party from the establishment of the PRC in 1949 until his death in 1976.
1910.07181
false
null
Results on WNLaMPro rare and medium are shown in Table TABREF34, where the mean reciprocal rank (MRR) is reported for BERT, Attentive Mimicking and Bertram. As can be seen, supplementing BERT with any of the proposed relearning methods results in noticeable improvements for the rare subset, with add clearly outperforming replace. Moreover, the add and add-gated variants of Bertram perform surprisingly well for more frequent words, improving the score for WNLaMPro-medium by 50% compared to BERT$_\text{base}$ and 31% compared to Attentive Mimicking. This makes sense considering that compared to Attentive Mimicking, the key enhancement of Bertram lies in improving context representations and interconnection of form and context; naturally, the more contexts are given, the more this comes into play. Noticeably, despite being both based on and integrated into a BERT$_\text{base}$ model, our architecture even outperforms a standalone BERT$_\text{large}$ model by a large margin.
How much is representation improved for rare/medium frequency words compared to standalone BERT and previous work?
The answers are shown as follows: * improving the score for WNLaMPro-medium by 50% compared to BERT$_\text{base}$ and 31% compared to Attentive Mimicking
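The reported metric, mean reciprocal rank (MRR), averages 1/rank of the correct word over queries; the ranks below are hypothetical.

```python
# 1-based ranks of the correct word across four hypothetical queries.
ranks = [1, 3, 2, 10]
mrr = sum(1.0 / r for r in ranks) / len(ranks)
print(round(mrr, 4))  # 0.4833
```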
null
false
null
Classify each country as being in the Caribbean or Mediterranean: Haiti, Cuba, Jamaica, Greece, Italy, France
Haiti: Caribbean Cuba: Caribbean Jamaica: Caribbean Greece: Mediterranean Italy: Mediterranean France: Mediterranean
null
false
null
The type of soil would make a difference to the design of the motte, as clay soils could support a steeper motte, whilst sandier soils meant that a motte would need a more gentle incline. Where available, layers of different sorts of earth, such as clay, gravel and chalk, would be used alternatively to build in strength to the design. Layers of turf could also be added to stabilise the motte as it was built up, or a core of stones placed as the heart of the structure to provide strength. Similar issues applied to the defensive ditches, where designers found that the wider the ditch was dug, the deeper and steeper the sides of the scarp could be, making it more defensive. Although militarily a motte was, as Norman Pounds describes it, "almost indestructible", they required frequent maintenance. Soil wash was a problem, particularly with steeper mounds, and mottes could be clad with wood or stone slabs to protect them. Over time, some mottes suffered from subsidence or damage from flooding, requiring repairs and stabilisation work.
Give a single-sentence explanation of this paragraph without quoting from the text.
Soil type and composition play a significant role in the kind of mound that can be built to support a motte and the maintenance required to upkeep it.
1708.01776
true
null
FLOAT SELECTED: Figure 1. The User-Agent Interaction In Example UID13 , illustrating a successful interaction, the Agent asks for the value of $V0 and the User responds with the answer (Silvia) as well as an explanation indicating that it was correct (helpful) and why. Specifically, in this instance it was helpful because it enabled an inference which reduced the possible answer set (and reduced the set of relevant variables). On the other hand, in Example UID30 , we see an example of a bad query and corresponding critical explanation.
Does the Agent ask for a value of a variable using natural language generated text?
Yes.
1905.10044
true
null
Our Recurrent +ELMo model uses the language model from BIBREF9 to provide contextualized embeddings to the baseline model outlined above, as recommended by the authors. Our OpenAI GPT model fine-tunes the 12 layer 768 dimensional uni-directional transformer from BIBREF27 , which has been pre-trained as a language model on the Books corpus BIBREF36 .
did they use other pretrained language models besides bert?
Yes.
null
false
null
What is Categorization and Sentiment Analysis
Categorization and Sentiment Analysis is typically referred to in the Customer Experience space, where customer interaction data is analyzed to get insights into categories of topics and the sentiment associated with those interactions. Some of the use cases that benefit from these insights include Customer Representative/Agent Performance and Training, Workload Management, and Product Performance, among others.
null
false
106
As a main field of interest in the current study, we chose controversies in education. One distinguishing feature of educational topics is their breadth of sub-topics and points of view, as they attract researchers, practitioners, parents, students, or policy-makers. We assume that this diversity leads to the linguistic variability of the education topics and thus represents a challenge for NLP. In a cooperation with researchers from the German Institute for International Educational Research we identified the following current controversial topics in education in English-speaking countries: (1) homeschooling, (2) public versus private schools, (3) redshirting — intentionally delaying the entry of an age-eligible child into kindergarten, allowing their child more time to mature emotionally and physically BIBREF51 , (4) prayer in schools — whether prayer in schools should be allowed and taken as a part of education or banned completely, (5) single-sex education — single-sex classes (males and females separate) versus mixed-sex classes (“co-ed”), and (6) mainstreaming — including children with special needs into regular classes. Since we were also interested in whether argumentation differs across registers, we included four different registers — namely (1) user comments to newswire articles or to blog posts, (2) posts in discussion forums (forum posts), (3) blog posts, and (4) newswire articles. Throughout this work, we will refer to each article, blog post, comment, or forum posts as a document. This variety of sources covers mainly user-generated content except newswire articles which are written by professionals and undergo an editing procedure by the publisher. Since many publishers also host blog-like sections on their portals, we consider as blog posts all content that is hosted on personal blogs or clearly belong to a blog category within a newswire portal. 
In cooperation with researchers from the German Institute for International Educational Research, we identified the following current controversial topics in education in English-speaking countries: (1) homeschooling; (2) public versus private schools; (3) redshirting—intentionally delaying the entry of an age-eligible child into kindergarten, allowing their child more time to mature emotionally and physically (Huang and Invernizzi 2013); (4) prayer in schools—whether prayer in schools should be allowed and taken as a part of education or banned completely; (5) single-sex education—single-sex classes (boys and girls separate) versus mixed-sex classes (“co-ed”); and (6) mainstreaming—including children with special needs into regular classes.
What controversial topics did the authors choose?
(1) homeschooling; (2) public versus private schools; (3) redshirting—intentionally delaying the entry of an age-eligible child into kindergarten, allowing their child more time to mature emotionally and physically (Huang and Invernizzi 2013); (4) prayer in schools—whether prayer in schools should be allowed and taken as a part of education or banned completely; (5) single-sex education—single-sex classes (boys and girls separate) versus mixed-sex classes (“co-ed”); and (6) mainstreaming—including children with special needs into regular classes.
null
false
null
Project Management Professional (PMP) is an internationally recognized professional designation offered by the Project Management Institute (PMI). As of 31 July 2020, there are 1,036,368 active PMP-certified individuals and 314 chartered chapters across 214 countries and territories worldwide. The exam is one of eight credentials offered by PMI and is based on the PMP Examination Content Outline (also known as the E.C.O.). Most of the questions reference this Exam Content Outline.
Given the above paragraph, what is the PMP and who established the exam?
PMP stands for Project Management Professional, and it was established by the Project Management Institute (PMI). The PMP is a professional certification for project managers that is recognized internationally. The PMP is based on the ECO PMP Examination Content Outline.
null
false
156
After we obtain the tokenizations of speech by either UTD or AUD, each spoken document is represented by a vector of unigram occurrence counts over discovered terms, or a vector of n-gram counts over acoustic units, respectively. Each feature vector can be further scaled by inverse document frequency (IDF), producing a TF-IDF feature.
What vector do they use to represent each spoken document after obtaining the tokenizations of speech by either UTD or AUD?
A vector of unigram occurrence counts over discovered terms, or a vector of n-gram counts over acoustic units, respectively.
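The TF-IDF feature construction described in this record (term counts scaled by inverse document frequency) can be sketched in plain Python. This is an illustrative sketch, not the paper's implementation; the function name and the exact IDF variant (natural log, no smoothing) are assumptions.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build TF-IDF vectors from tokenized documents.

    docs: list of token lists (e.g., discovered terms or acoustic-unit n-grams).
    Returns one dict per document mapping token -> TF-IDF weight.
    """
    n_docs = len(docs)
    # Document frequency: number of documents containing each token.
    df = Counter()
    for doc in docs:
        df.update(set(doc))
    # Term frequency scaled by inverse document frequency.
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append({t: c * math.log(n_docs / df[t]) for t, c in tf.items()})
    return vectors
```

Note that a token occurring in every document gets IDF log(1) = 0, so it contributes nothing, which is the intended down-weighting of uninformative units.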
null
false
null
Which of the following movies is not from the Harry Potter Franchise? - Order of the Phoenix. - Goblet of Fire. - Sorcerer's Stone. - Chamber of Secrets. - The Fellowship of the rings.
"The Fellowship of the rings" is not part of the Harry Potter franchise; it is in fact part of the Lord of the Rings franchise. The canonical Harry Potter movies in chronological order are: - Harry Potter and the Philosopher's Stone. - Harry Potter and the Chamber of Secrets. - Harry Potter and the Prisoner of Azkaban. - Harry Potter and the Goblet of Fire. - Harry Potter and the Order of the Phoenix. - Harry Potter and the Half-Blood Prince. - Harry Potter and the Deathly Hallows – Part 1. - Harry Potter and the Deathly Hallows – Part 2.
1911.00133
false
null
We first experiment with a suite of non-neural models, including Support Vector Machines (SVMs), logistic regression, Naïve Bayes, Perceptron, and decision trees. We tune the parameters for these models using grid search and 10-fold cross-validation, and obtain results for different combinations of input and features. We finally experiment with neural models, although our dataset is relatively small. We train both a two-layer bidirectional Gated Recurrent Neural Network (GRNN) BIBREF20 and Convolutional Neural Network (CNN) (as designed in BIBREF21) with parallel filters of size 2 and 3, as these have been shown to be effective in the literature on emotion detection in text (e.g., BIBREF22, BIBREF23). Because neural models require large amounts of data, we do not cull the data by annotator agreement for these experiments and use all the labeled data we have. We experiment with training embeddings with random initialization as well as initializing with our domain-specific Word2Vec embeddings, and we also concatenate the best feature set from our non-neural experiments onto the representations after the recurrent and convolutional/pooling layers respectively.
What supervised methods are used?
The answers are shown as follows: * Support Vector Machines (SVMs), logistic regression, Naïve Bayes, Perceptron, and decision trees * a two-layer bidirectional Gated Recurrent Neural Network (GRNN) BIBREF20 and Convolutional Neural Network (CNN) (as designed in BIBREF21)
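The tuning procedure mentioned in the evidence (grid search over parameter combinations with 10-fold cross-validation) can be sketched generically. This is a hypothetical sketch under assumed names: `fit_score` stands in for any train-and-evaluate routine, and the fold split is simple and contiguous rather than stratified.

```python
import itertools

def kfold_indices(n, k):
    """Split range(n) into k contiguous folds (e.g., for 10-fold CV)."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def grid_search_cv(fit_score, data, param_grid, k=10):
    """Exhaustive grid search: score every parameter combination by
    k-fold cross-validation; return the best (params, mean_score).

    fit_score(train, test, params) -> score is user-supplied.
    """
    folds = kfold_indices(len(data), k)
    best = (None, float("-inf"))
    keys = sorted(param_grid)
    for values in itertools.product(*(param_grid[key] for key in keys)):
        params = dict(zip(keys, values))
        scores = []
        for fold in folds:
            held_out = set(fold)
            test = [data[i] for i in fold]
            train = [data[i] for i in range(len(data)) if i not in held_out]
            scores.append(fit_score(train, test, params))
        mean = sum(scores) / len(scores)
        if mean > best[1]:
            best = (params, mean)
    return best
```

In practice one would use a library implementation (e.g., scikit-learn's `GridSearchCV`); the sketch just makes the cost explicit: one model fit per fold per grid point.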
1909.01638
false
null
The landscape of CLWE methods has recently been dominated by the so-called projection-based methods BIBREF15 , BIBREF16 , BIBREF17 . They align two monolingual embedding spaces by learning a projection/mapping based on a training dictionary of translation pairs. Besides their simple conceptual design and competitive performance, their popularity originates from the fact that they rely on rather weak cross-lingual supervision. Originally, the seed dictionaries typically spanned several thousand word pairs BIBREF15 , BIBREF18 , BIBREF19 , but more recent work has shown that CLWEs can be induced with even weaker supervision from small dictionaries spanning several hundred pairs BIBREF20 , identical strings BIBREF21 , or even only shared numerals BIBREF22 .
How are seed dictionaries obtained by fully unsupervised methods?
The answers are shown as follows: * the latest CLWE developments almost exclusively focus on fully unsupervised approaches BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 , BIBREF27 , BIBREF28 , BIBREF29 , BIBREF30 : they fully abandon any source of (even weak) supervision and extract the initial seed dictionary by exploiting topological similarities between pre-trained monolingual embedding spaces
null
false
null
The Milky Way is the galaxy that includes the Solar System, with the name describing the galaxy's appearance from Earth: a hazy band of light seen in the night sky formed from stars that cannot be individually distinguished by the naked eye. The term Milky Way is a translation of the Latin via lactea, from the Greek γαλακτικὸς κύκλος (galaktikòs kýklos), meaning "milky circle". From Earth, the Milky Way appears as a band because its disk-shaped structure is viewed from within. Galileo Galilei first resolved the band of light into individual stars with his telescope in 1610. Until the early 1920s, most astronomers thought that the Milky Way contained all the stars in the Universe. Following the 1920 Great Debate between the astronomers Harlow Shapley and Heber Doust Curtis, observations by Edwin Hubble showed that the Milky Way is just one of many galaxies.
What is Milky Way?
The Milky Way is the galaxy that includes the Solar System, in which our planet Earth is located.
null
false
null
Of the following measurements, which units relate to mass? Kilogram, microgram, pound, litre, year, kilometre and ounce.
Kilogram (kg), microgram (μg), pound (lb), and ounce (oz) are all units of mass. Kilogram and microgram are units from the metric (SI) system. Pound and ounce are units from the imperial system. Litre is a unit of volume, year is a unit of time and kilometre is a unit of distance.
null
false
148
With the availability of rich data on users' locations, profiles and search history, personalization has become the leading trend in large-scale information retrieval. However, efficiency through personalization is not yet the most suitable model when tackling domain-specific searches. This is due to several factors, such as the lexical and semantic challenges of domain-specific data that often include advanced argumentation and complex contextual information, the higher sparseness of relevant information sources, and the more pronounced lack of similarities between users' searches. A recent study on expert search strategies among healthcare information professionals BIBREF0 showed that, for a given search task, they spend an average of 60 minutes per collection or database, 3 minutes to examine the relevance of each document, and 4 hours of total search time. When written in steps, their search strategy spans over 15 lines and can reach up to 105 lines. With the abundance of information sources in the medical domain, consumers are more and more faced with a similar challenge, one that needs dedicated solutions that can adapt to the heterogeneity and specifics of health-related information. Dedicated Question Answering (QA) systems are one of the viable solutions to this problem as they are designed to understand natural language questions without relying on external information on the users. In the context of QA, the goal of Recognizing Question Entailment (RQE) is to retrieve answers to a premise question ( INLINEFORM0 ) by retrieving inferred or entailed questions, called hypothesis questions ( INLINEFORM1 ) that already have associated answers. Therefore, we define the entailment relation between two questions as: a question INLINEFORM2 entails a question INLINEFORM3 if every answer to INLINEFORM4 is also a correct answer to INLINEFORM5 BIBREF1 . 
RQE is particularly relevant due to the increasing numbers of similar questions posted online BIBREF2 and its ability to solve differently the challenging issues of question understanding and answer extraction. In addition to being used to find relevant answers, these resources can also be used in training models able to recognize inference relations and similarity between questions. Question similarity has recently attracted international challenges BIBREF3 , BIBREF4 and several research efforts proposing a wide range of approaches, including Logistic Regression, Recurrent Neural Networks (RNNs), Long Short Term Memory cells (LSTMs), and Convolutional Neural Networks (CNNs) BIBREF5 , BIBREF6 , BIBREF1 , BIBREF7 . In this paper, we study question entailment in the medical domain and the effectiveness of the end-to-end RQE-based QA approach by evaluating the relevance of the retrieved answers. Although entailment was attempted in QA before BIBREF8 , BIBREF9 , BIBREF10 , as far as we know, we are the first to introduce and evaluate a full medical question answering approach based on question entailment for free-text questions. Our contributions are: The next section is dedicated to related work on question answering, question similarity and entailment. In Section SECREF3 , we present two machine learning (ML) and deep learning (DL) methods for RQE and compare their performance using open-domain and clinical datasets. Section SECREF4 describes the new collection of medical question-answer pairs. In Section SECREF5 , we describe our RQE-based approach for QA. Section SECREF6 presents our evaluation of the retrieved answers and the results obtained on TREC 2017 LiveQA medical questions.
What is the baseline of this paper?
The baseline is Recognizing Question Entailment (RQE).
1808.05077
false
null
In the literature, deep learning based automated feature extraction has been shown to outperform state-of-the-art manual feature engineering based classifiers such as Support Vector Machine (SVM), Naive Bayes (NB) or Multilayer Perceptron (MLP) etc. One of the important techniques in deep learning is the autoencoder that generally involves reducing the number of feature dimensions under consideration. The aim of dimensionality reduction is to obtain a set of principal variables to improve the performance of the approach. Similarly, CNNs have been proven to be very effective in sentiment analysis. However, little work has been carried out to exploit deep learning based feature representation for Persian sentiment analysis BIBREF8 BIBREF9 . In this paper, we present two deep learning models (deep autoencoders and CNNs) for Persian sentiment analysis. The obtained deep learning results are compared with MLP. To evaluate the performance of the proposed approach, precision (1), recall (2), f-measure (3), and prediction accuracy (4) have been used as performance metrics. The experimental results are shown in Table 1, where it can be seen that autoencoders outperformed MLP and CNN outperformed autoencoders with the highest achieved accuracy of 82.6%. DISPLAYFORM0 DISPLAYFORM1
Which deep learning model performed better?
The answers are shown as follows: * CNN
null
false
null
Should I buy a tesla model 3 or tesla model y?
It comes down to what your family needs are at the end of the day! The Model Y has more leg space, more height, and better luggage space compared to the Model 3. Riding comfort is also relatively better in the Model Y, considering it is an SUV. Ultimately it is up to your personal choice to go with the 3 or the Y.
null
false
null
The United States of America has separate federal, state, and local governments with taxes imposed at each of these levels. Taxes are levied on income, payroll, property, sales, capital gains, dividends, imports, estates and gifts, as well as various fees. In 2020, taxes collected by federal, state, and local governments amounted to 25.5% of GDP, below the OECD average of 33.5% of GDP. The United States had the seventh-lowest tax revenue-to-GDP ratio among OECD countries in 2020, with a higher ratio than Mexico, Colombia, Chile, Ireland, Costa Rica, and Turkey. Taxes fall much more heavily on labor income than on capital income. Divergent taxes and subsidies for different forms of income and spending can also constitute a form of indirect taxation of some activities over others. For example, individual spending on higher education can be said to be "taxed" at a high rate, compared to other forms of personal expenditure which are formally recognized as investments. Taxes are imposed on net income of individuals and corporations by the federal, most state, and some local governments. Citizens and residents are taxed on worldwide income and allowed a credit for foreign taxes. Income subject to tax is determined under tax accounting rules, not financial accounting principles, and includes almost all income from whatever source. Most business expenses reduce taxable income, though limits apply to a few expenses. Individuals are permitted to reduce taxable income by personal allowances and certain non-business expenses, including home mortgage interest, state and local taxes, charitable contributions, and medical and certain other expenses incurred above certain percentages of income. State rules for determining taxable income often differ from federal rules. Federal marginal tax rates vary from 10% to 37% of taxable income. State and local tax rates vary widely by jurisdiction, from 0% to 13.30% of income, and many are graduated. 
State taxes are generally treated as a deductible expense for federal tax computation, although the 2017 tax law imposed a $10,000 limit on the state and local tax ("SALT") deduction, which raised the effective tax rate on medium and high earners in high tax states. Prior to the SALT deduction limit, the average deduction exceeded $10,000 in most of the Midwest, and exceeded $11,000 in most of the Northeastern United States, as well as California and Oregon. The states impacted the most by the limit were the tri-state area (NY, NJ, and CT) and California; the average SALT deduction in those states was greater than $17,000 in 2014. The United States is one of two countries in the world that taxes its non-resident citizens on worldwide income, in the same manner and rates as residents; the other is Eritrea. The U.S. Supreme Court upheld the constitutionality of imposition of such a tax in the case of Cook v. Tait. Nonetheless, the foreign earned income exclusion eliminates U.S. taxes on the first $108,700 of annual foreign income earned by U.S. citizens living and working abroad. Payroll taxes are imposed by the federal and all state governments. These include Social Security and Medicare taxes imposed on both employers and employees, at a combined rate of 15.3% (13.3% for 2011 and 2012). Social Security tax applies only to the first $132,900 of wages in 2019. There is an additional Medicare tax of 0.9% on wages above $200,000. Employers must withhold income taxes on wages. An unemployment tax and certain other levies apply to employers. Payroll taxes have dramatically increased as a share of federal revenue since the 1950s, while corporate income taxes have fallen as a share of revenue. (Corporate profits have not fallen as a share of GDP). Property taxes are imposed by most local governments and many special purpose authorities based on the fair market value of property. School and other authorities are often separately governed, and impose separate taxes. 
Property tax is generally imposed only on realty, though some jurisdictions tax some forms of business property. Property tax rules and rates vary widely with annual median rates ranging from 0.2% to 1.9% of a property's value depending on the state. Sales taxes are imposed by most states and some localities on the price at retail sale of many goods and some services. Sales tax rates vary widely among jurisdictions, from 0% to 16%, and may vary within a jurisdiction based on the particular goods or services taxed. Sales tax is collected by the seller at the time of sale, or remitted as use tax by buyers of taxable items who did not pay sales tax. The United States imposes tariffs or customs duties on the import of many types of goods from many jurisdictions. These tariffs or duties must be paid before the goods can be legally imported. Rates of duty vary from 0% to more than 20%, based on the particular goods and country of origin. Estate and gift taxes are imposed by the federal and some state governments on the transfer of property inheritance, by will, or by lifetime donation. Similar to federal income taxes, federal estate and gift taxes are imposed on worldwide property of citizens and residents and allow a credit for foreign taxes.
Provide a bulleted summary of the different types of taxes in the USA.
The following are the different types of taxes in the USA: - Income Tax: Imposed on income from any source - Payroll Tax: Social Security and Medicare taxes imposed on both employers and employees - Property Tax: Imposed on the fair market value of property - Sales Tax: Imposed on the price at retail sale of goods and services - Tariffs/Customs Duties: Imposed on the import of many types of goods - Estate and Gift Taxes: Imposed on the transfer of property by inheritance, will, or lifetime donation
null
false
null
Which is a species of fish? Rohu or Naked Mole
Rohu
null
false
null
Classify these as things you would want to wear for a rainy day or a very hot day: umbrella, ball cap, rain jacket, t-shirt, sunglasses, rain boots, flip-flops
umbrella: rain, ball cap: hot day, rain jacket: rainy day, t-shirt: hot day, sunglasses: hot day, rain boots: rainy day, flip-flops: hot day
null
false
451
We evaluate Kernel SGD on three types of data: tabular, image, and text. Table shows the details of these data sets. The tabular data sets were downloaded from the LIBSVM website. The dimensions of mnist and usps are 780 and 256, respectively. We used cifar10 and SARS-CoV-2 (S-CoV) as the image data. S-CoV contains RGB CT scans and is used to identify whether the patient is infected by the virus. Each image in the two data sets was first reshaped into a 32 × 32 image and then normalized according to the sampled mean and variance, respectively. We used IMDb and COVID-19-tweets (COV-tw) as the text data. COV-tw collects English tweets about COVID-19 labeled as informative or not. We took 20% of the training data to serve as the validation set except for COV-tw, where 1000 validation instances are provided. On tabular data, our method was evaluated using a simple two-linear-layer neural network which has 100 neurons in each hidden layer. For image classification, our method was tested on the ResNet-18 network. To solve text classification problems, we used an LSTM network with pre-trained word vectors. Baselines: Our method was compared with the widely used L-BFGS and Equilibrated SGD (ESGD) optimization methods, which both use second-order information in training. ESGD adopts an equilibration preconditioner to reduce the condition number and escape saddle points. We used the implementation of ESGD published by. The experiments were conducted on a machine with an Intel(R) Xeon(R) Silver 4210 CPU, 126 GB of memory, and two GeForce RTX 3090 GPUs running a Linux OS. All the methods, including the baselines and our method, were implemented using PyTorch. The size of the history used in L-BFGS was set to 3 for limited memory. The mini-batch size was set to 64. L2 regularization was used with its coefficient set to 10^−4, following the setting in the studies (He et al., 2016; Huang et al., 2017). The same regularization coefficient for all the methods leads to a fair comparison.
In Kernel SGD, to compute the kernel matrix, we used the radial basis function kernel (i.e., k(x_i, x_j) = exp(−γ‖x_i − x_j‖²)). Through the whole experiment, the learning rate and the hyper-parameter γ for the kernel were selected from {0.1, 0.01, 0.001, 0.0001}. We terminated the training when the convergence conditions were satisfied, i.e., the change of loss is less than 1 × 10^−4 in 3 consecutive epochs or the training reaches 500 epochs. We repeated each experiment five times to acquire average performance.
Is the regularization coefficient set to 10^{-4} for all the methods? If so, do authors have an argument why this parameter shouldn't be tuned for every method separately?
The regularization coefficient was set to 10^{-4} for all the methods following the setting in the popular studies [1,2]. The same coefficient for all the methods leads to a fair comparison, because different regularization coefficients lead to different objectives, which makes different methods incomparable. We have added this explanation in the last paragraph of Section 4.1.
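The radial basis function kernel used in the evidence, k(x_i, x_j) = exp(−γ‖x_i − x_j‖²), can be sketched directly. This is an illustrative implementation, not the paper's code; the function name is an assumption, and real implementations vectorize this with a numerics library rather than looping.

```python
import math

def rbf_kernel_matrix(X, gamma):
    """Compute the kernel matrix K with K[i][j] = exp(-gamma * ||x_i - x_j||^2).

    X: list of points, each a list of floats; gamma: positive bandwidth parameter.
    """
    def sq_dist(a, b):
        # Squared Euclidean distance between two points.
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return [[math.exp(-gamma * sq_dist(xi, xj)) for xj in X] for xi in X]
```

By construction the matrix is symmetric with ones on the diagonal, and larger γ makes the kernel more local (off-diagonal entries decay faster with distance), which is why γ is tuned over a grid such as {0.1, 0.01, 0.001, 0.0001}.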
null
false
null
During the Revolutionary War, Danbury was an important military supply depot for the Continental Army. Sybil Ludington, 16-year-old daughter of American Colonel Henry Ludington, is said to have made a 40-mile ride in the early hours of the night on April 26, 1777, to warn the people of Danbury and her father's forces in Putnam County, New York, of the approach of British regulars, helping them muster in defense; these accounts, originating from the Ludington family, are questioned by modern scholars. During the following day on April 26, 1777, the British, under Major General William Tryon, burned and sacked Danbury, but fatalities were limited due to Ludington's warning. The central motto on the seal of the City of Danbury is Restituimus, (Latin for "We have restored"), a reference to the destruction caused by the Loyalist army troops. The American General David Wooster was mortally wounded at the Battle of Ridgefield by the British forces which had raided Danbury, but at the beginning of the battle, the Americans succeeded in driving the British forces down to Long Island Sound. Wooster is buried in Danbury's Wooster Cemetery; the private Wooster School in Danbury also was named in his honor.
What role did Danbury, Connecticut play in the American Revolution?
Danbury, Connecticut served as an important military supply depot for the Continental Army during the Revolutionary War. On April 26, 1777, the British Army attacked, burned, and sacked the city, but the American troops were able to drive the British forces back to the Long Island Sound.
null
false
null
What are some lazy meals I can cook at home?
If you don't want to spend much energy cooking, consider one of the options in this list: - Tacos - Mac & Cheese - Fried Rice - Meatball Sandwiches - Pasta Carbonara - Instant Ramen Bowl - Shakshuka - Chicken Wraps - Hamburgers - Baked Potatoes - Salad - Bacon and Eggs
null
false
null
What are a couple of good ideas to celebrate a spring birthday for an adult?
There are many ways to celebrate a birthday for an adult in the spring. One could host a BBQ, either at home or at a park with BBQ pits. Another idea would be to have a picnic in a local park. If weather doesn't permit an outside activity, meeting with friends for brunch or dinner is another nice way to celebrate a birthday. For an activity that doesn't involve other people, treating oneself to a massage or spa treatment is another way to celebrate a birthday.
null
false
null
In football, if touchdowns is the main way to get points, why can't the quarterback always throw the ball to the end zone every time?
To win at the game of American football, points are scored primarily by touchdowns and kicking field goals. At a simplistic level, it might appear that throwing the ball to the end zone at every opportunity is the straightforward strategy. However, if one did that, then the defense would adjust and defend accordingly. Also, the team typically has to move the ball 75 to 80 yards to the end zone. Only a few strong-armed quarterbacks can throw that far. Even fewer can do it under pressure. The other main issue is that there is a very low chance of completing such a pass. So, smart teams and coaches use a variety of strategies: throwing the ball different distances, running the ball, and, based on distance, kicking field goals. One of the key strategies is to mix it up, so that there is an element of surprise and the defense is caught off-guard.
null
false
null
Why some believe that continuous optimisation problems might be easier to solve than discrete optimisation problems?
Some believe that continuous optimisation problems might be easier to solve than their discrete counterparts, owing to the smoothness of the objective functions, which allows the use of calculus techniques. In general, calculus techniques mean to use objective and constraint information at a particular point x, to deduce information about the behaviour of the objective function at all surrounding points close to x. The deduced information is then used to guide the search direction. In contrast, combinatorial optimisation problems are to some extent harder to solve than continuous ones. This is because the behaviour of the objective and constraints may change significantly as we move from one feasible point to another, even if the two points are close according to some measure. Thus it is usually not possible to deduce information about the neighbouring points from the current one.
null
false
null
Who were the Grateful Dead?
The Grateful Dead were a band founded in 1965 in the Bay Area. Their original members include Jerry Garcia, Bob Weir, Phil Lesh, Ron (Pigpen) McKernan and Bill Kreutzmann.
null
false
168
Knowledge representation and reasoning (KRR) is the process of representing the domain knowledge in formal languages (e.g., SPARQL, Prolog) such that it can be used by expert systems to execute querying and reasoning services. KRR have been applied in many fields including financial regulations, medical diagnosis, laws, and so on. One major obstacle in KRR is the creation of large-scale knowledge bases with high quality. For one thing, this requires the knowledge engineers (KEs) not only to have the background knowledge in a certain domain but have enough skills in knowledge representation as well. Unfortunately, qualified KEs are also in short supply. Therefore, it would be useful to build a tool that allows the domain experts without any background in logic to construct and query the knowledge base simply from text. Controlled natural languages (CNLs) BIBREF0 were developed as a technology that achieves this goal. CNLs are designed based on natural languages (NLs) but with restricted syntax and interpretation rules that determine the unique meaning of the sentence. Representative CNLs include Attempto Controlled English BIBREF1 and PENG BIBREF2 . Each CNL is developed with a language parser which translates the English sentences into an intermediate structure, discourse representation structure (DRS) BIBREF3 . Based on the DRS structure, the language parsers further translate the DRS into the corresponding logical representations, e.g., Answer Set Programming (ASP) BIBREF4 programs. One main issue with the aforementioned CNLs is that the systems do not provide enough background knowledge to preserve semantic equivalences of sentences that represent the same meaning but are expressed via different linguistic structures. For instance, the sentences Mary buys a car and Mary makes a purchase of a car are translated into different logical representations by the current CNL parsers. 
As a result, if the user asks a question who is a buyer of a car, these systems will fail to find the answer. In this thesis proposal, I will present KALM BIBREF5 , BIBREF6 , a system for knowledge authoring and question answering. KALM is superior to the current CNL systems in that KALM has a complex frame-semantic parser which can standardize the semantics of sentences that express the same meaning via different linguistic structures. The frame-semantic parser is built based on FrameNet BIBREF7 and BabelNet BIBREF8 , where FrameNet is used to capture the meaning of the sentence and BabelNet BIBREF8 is used to disambiguate the meaning of the entities extracted from the sentence. Experiment results show that KALM achieves superior accuracy in knowledge authoring and question answering as compared to the state-of-the-art systems. The rest of the paper is organized as follows: Section SECREF2 discusses related work, Section SECREF3 presents the KALM architecture, Section SECREF4 presents KALM-QA, the question answering part of KALM, Section SECREF5 shows the evaluation results, Section SECREF6 shows the future work beyond the thesis, and Section SECREF7 concludes the paper.
What is FrameNet used for?
The FrameNet is used to capture the meaning of the sentence.
null
false
null
What are some adventurous activities we could do during a bachelors party?
Adventurous activities to do on a bachelors party include mountain biking, sky diving, canyoning, rafting, bungee jumping, laser tag, archery, and paintball.
null
false
null
What is a question mark?
A question mark is a form of punctuation used at the end of a sentence to indicate a question. A question mark looks like this: ?
1909.04556
true
null
CodeInternational: A tool which can translate code between human languages, powered by Google Translate.
Is this auto translation tool based on neural networks?
Yes.
null
false
28
We use bootstrap sampling (500 iterations) over the 1100 tables for which we collected human annotations to get an idea of how the correlation of each metric varies with the underlying data. In each iteration, we sample with replacement, tables along with their references and all the generated texts for that table. Then we compute aggregated human evaluation and metric scores for each of the models and compute the correlation between the two. We report the average correlation across all bootstrap samples for each metric in Table TABREF37 . The distribution of correlations for the best performing metrics are shown in Figure FIGREF38 . Table TABREF37 also indicates whether PARENT is significantly better than a baseline metric. BIBREF21 suggest using the William's test for this purpose, but since we are computing correlations between only 4/13 systems at a time, this test has very weak power in our case. Hence, we use the bootstrap samples to obtain a INLINEFORM0 confidence interval of the difference in correlation between PARENT and any other metric and check whether this is above 0 BIBREF27 . Correlations are higher for the systems category than the hyperparams category. The latter is a more difficult setting since very similar models are compared, and hence the variance of the correlations is also high. Commonly used metrics which only rely on the reference (BLEU, ROUGE, METEOR, CIDEr) have only weak correlations with human judgments. In the hyperparams category, these are often negative, implying that tuning models based on these may lead to selecting worse models. BLEU performs the best among these, and adding n-grams from the table as references improves this further (BLEU-T). Among the extractive evaluation metrics, CS, which also only relies on the reference, has poor correlation in the hyperparams category. RG-F, and both variants of the PARENT metric achieve the highest correlation for both settings. 
There is no significant difference among these for the hyperparams category, but for systems, PARENT-W is significantly better than the other two. While RG-F needs a full information extraction pipeline in its implementation, PARENT-C only relies on co-occurrence counts, and PARENT-W can be used out-of-the-box for any dataset. To our knowledge, this is the first rigorous evaluation of using information extraction for generation evaluation. On this dataset, the word-overlap model showed higher correlation than the co-occurrence model for entailment. In § SECREF47 we will show that for the WebNLG dataset, where more paraphrasing is involved between the table and text, the opposite is true. Lastly, we note that the heuristic for selecting INLINEFORM0 is sufficient to produce high correlations for PARENT, however, if human annotations are available, this can be tuned to produce significantly higher correlations (PARENT*-W/C).
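The bootstrap procedure described in this evidence (resample tables with replacement, aggregate per model, correlate human vs. metric scores) can be sketched as follows. The `(n_tables, n_models)` score layout and the mean aggregation are assumptions for illustration, not the authors' exact pipeline.

```python
import numpy as np

def bootstrap_correlation(human_scores, metric_scores, n_iter=500, seed=0):
    """Bootstrap the correlation between aggregated human and metric scores.

    human_scores, metric_scores: arrays of shape (n_tables, n_models),
    one score per table per model (hypothetical layout for illustration).
    Each iteration resamples tables with replacement, aggregates per model,
    and correlates the two aggregated vectors across models.
    """
    human = np.asarray(human_scores, dtype=float)
    metric = np.asarray(metric_scores, dtype=float)
    rng = np.random.default_rng(seed)
    n_tables = human.shape[0]
    corrs = []
    for _ in range(n_iter):
        idx = rng.integers(0, n_tables, size=n_tables)  # sample with replacement
        h = human[idx].mean(axis=0)   # aggregate human scores per model
        m = metric[idx].mean(axis=0)  # aggregate metric scores per model
        corrs.append(np.corrcoef(h, m)[0, 1])
    corrs = np.array(corrs)
    # 95% bootstrap confidence interval, as used for the significance check
    lo, hi = np.percentile(corrs, [2.5, 97.5])
    return corrs.mean(), (lo, hi)
```

Checking whether the confidence interval of the *difference* between two metrics' correlations lies above 0 follows the same pattern, with `corrs` replaced by per-iteration differences.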
Is the hyperparams category a more difficult setting?
Yes.
null
false
295
To evaluate our work we measured how the discovered units compared to the forced-aligned phones in terms of segmentation and information. The accuracy of the segmentation was measured in terms of Precision, Recall and F-score. If a unit boundary occurs at the same time (+/- 10ms) as an actual phone boundary it is considered a true positive; otherwise it is considered a false positive. If no match is found with a true phone boundary, this is considered a false negative. The consistency of the units was evaluated in terms of normalized mutual information (NMI - see BIBREF1, BIBREF3, BIBREF5 for details), which measures the statistical dependency between the units and the forced-aligned phones. A NMI of 0% means that the units are completely independent of the phones, whereas a NMI of 100% indicates that the actual phones could be retrieved without error given the sequence of discovered units.
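The boundary-matching scheme described above can be sketched as below. The greedy one-to-one matching is an assumption for illustration, not necessarily the exact scoring script used in the paper.

```python
def boundary_scores(pred, true, tol=0.010):
    """Precision/recall/F-score for boundary detection with a +/-10 ms tolerance.

    pred, true: lists of boundary times in seconds. A predicted boundary
    matching a not-yet-matched true boundary within `tol` is a true positive;
    unmatched predictions are false positives and unmatched true boundaries
    are false negatives. (Simple greedy matching sketch.)
    """
    matched = [False] * len(true)
    tp = 0
    for p in pred:
        for i, t in enumerate(true):
            if not matched[i] and abs(p - t) <= tol:
                matched[i] = True
                tp += 1
                break
    fp = len(pred) - tp
    fn = len(true) - tp
    precision = tp / (tp + fp) if pred else 0.0
    recall = tp / (tp + fn) if true else 0.0
    f = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f
```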
What do the authors measure to evaluate the work?
The authors measured how the discovered units compared to the forced aligned phones in term of segmentation and information.
null
false
null
Barton switched parties three times in his political life. He was first elected as a Democratic-Republican; he then switched to an Adams Democrat for his re-election in 1825. When he ran for the Senate again in 1830 he switched to being an Anti-Jacksonian candidate. In the Senate, he served as chairman of the committee on public lands. He was a member of the Missouri State Senate in 1834–1835.
How many times did Barton switch parties?
Three times
1811.00854
false
null
In order to evaluate the precision of the retrieved documents in each experiment, we used "TREC_Eval" tool [3]. TREC_Eval is a standard tool for evaluation of IR tasks and its name is a short form of Text REtrieval Conference (TREC) Evaluation tool. The Mean Average Precision (MAP) reported by TREC_Eval was 27.99% without query expansion and 37.10% with query expansion which shows more than 9 percent improvement.
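The MAP measure reported by TREC_Eval can be sketched as below. The binary-relevance input format is an illustrative simplification; note that trec_eval normalizes each query's average precision by the total number of relevant documents, which the optional `n_relevant` argument models.

```python
def average_precision(relevances, n_relevant=None):
    """Average precision for one ranked list. relevances is a binary list
    ordered by rank (1 = relevant document at that rank). If n_relevant is
    given (total relevant documents for the query, as trec_eval uses), it is
    the denominator; otherwise only retrieved relevant documents count."""
    hits, total = 0, 0.0
    for rank, rel in enumerate(relevances, start=1):
        if rel:
            hits += 1
            total += hits / rank  # precision at this relevant rank
    denom = n_relevant if n_relevant else hits
    return total / denom if denom else 0.0

def mean_average_precision(runs):
    """MAP: the mean of per-query average precision."""
    return sum(average_precision(r) for r in runs) / len(runs)
```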
Which evaluation metric has been measured?
The answers are shown as follows: * Mean Average Precision
1612.05270
false
null
In this context, we propose a robust multilingual sentiment analysis method, tested in eight different languages: Spanish, English, Italian, Arabic, German, Portuguese, Russian and Swedish. We compare our approach ranking in three international contests: TASS'15, SemEval'15-16 and SENTIPOLC'14, for Spanish, English and Italian respectively; the remaining languages are compared directly with the results reported in the literature. The experimental results locate our approach in good positions for all considered competitions; and excellent results in the other five languages tested. Finally, even when our method is almost cross-language, it can be extended to take advantage of language dependencies; we also provide experimental evidence of the advantages of using these language-dependent techniques.
What eight language are reported on?
The answers are shown as follows: * Spanish, English, Italian, Arabic, German, Portuguese, Russian and Swedish
null
false
null
What seven houses ruled the Seven Kingdoms before Aegon's Conquest?
Stark, Tully, Arryn, Lannister, Tyrell, Baratheon, and Martell
null
false
null
When Guettel took up music composition in his mid-teens, he was encouraged by his family. His mother said that she offered him advice for around a year, "After that, he was so far beyond anything I could ever have dreamed of, I just backed off." Richard Rodgers, who died when Guettel was 15, overheard an early composition, said he liked it and asked him to play it louder. Guettel has qualified the compliment, noting that "He was literally on his deathbed on the other side of the living-room wall." In his high school and collegiate years and into his early twenties, Guettel worked as a rock and jazz musician, singing and playing bass, before realizing "that writing for character and telling stories through music was something that I really loved to do, and that allowed me to express love."
List all the names of people mentioned in this passage. Use a numbered list.
1. Guettel
2. Richard Rodgers
null
false
null
What do the suits on a deck of cards represent?
Each of the suits on a deck of cards represents the four major pillars of the economy in the middle ages: heart represented the Church, spades represented the military, clubs represented agriculture, and diamonds represented the merchant class.
null
false
null
He was ordained to the priesthood on 14 June 1862, and then taught at the Salesian House of Studies in Turin until 1875. Cagliero led the first ten Salesians to America and established five houses in Uruguay and Argentina between 1875 and 1877, after which he became spiritual director of his society and the first General Director of Daughters of Mary Help of Christians in Turin. He was made Pro-Apostolic Vicar of new vicariate in Northern Patagonia, in Argentina, on 20 November 1883.
Extract all of the dates mentioned in this paragraph and list them using bullets in the format {Date} - {Description}
• 14 June 1862 - When Giovanni Cagliero was ordained to the priesthood
• 1875 - When Giovanni Cagliero finished teaching at the Salesian House of Studies in Turin
• 1875 - When Giovanni Cagliero led the first ten Salesians to America and established five houses in Uruguay and Argentina
• 1877 - When Giovanni Cagliero finished the establishment of five houses in Uruguay and Argentina
• 20 November 1883 - When Giovanni Cagliero was made Pro-Apostolic Vicar of new vicariate in Northern Patagonia, in Argentina
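Extracting dates of this shape automatically can be sketched with regular expressions. The patterns below are illustrative assumptions, not a production date parser: they match full dates like "14 June 1862" plus bare four-digit years that are not part of a full date.

```python
import re

MONTHS = ("January February March April May June July August "
          "September October November December").split()
# Full dates such as "14 June 1862" or "20 November 1883"
FULL_DATE = re.compile(r"\b(\d{1,2} (?:%s) \d{4})\b" % "|".join(MONTHS))
# Bare years between 1000 and 2099 (fallback; illustrative range only)
YEAR = re.compile(r"\b(1[0-9]{3}|20[0-9]{2})\b")

def extract_dates(text):
    full = FULL_DATE.findall(text)
    # Drop bare years that already occur inside an extracted full date
    covered = {d.split()[-1] for d in full}
    years = [y for y in YEAR.findall(text) if y not in covered]
    return full, years
```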
null
false
432
Task setup. We cast the NN performance prediction as a classification task. Specifically, the generated time-evolving graphs are labeled as high and low accuracy based on the performance of their corresponding NNs; Table lists the threshold value chosen for low and high accuracy labels based on the final accuracy range of trained NNs, as well as the early stopping epochs for each architecture. Five-fold cross validation is used to predict the label of the test graphs in a binary classification task using SVM and MLP, where the input is the set of temporal signatures $\{s^t_{tr_1}, s^t_{tr_2}, \ldots, s^t_{tr_n}\}$. We report the classification accuracy. Since the sample of NNs (App. A.1) is randomly selected with balanced high/low accuracy instances, the accuracy of a random classifier as the baseline is 50% (omitted from the charts to avoid clutter). Additionally, to show that our proposed graph representation and signatures are general and can be useful across different NN architectures, we consider the following setup: we train the classifier on a small set of NN models (i.e., different architectures-such as LeNet, AlexNet, and VGG-and hyperparameters), and predict the performance on unseen architectures (e.g., ResNet). We describe these experiments in App. A.4. Results. Figure 4 illustrates the performance of SVM and MLP classifiers operating on weighted degree-based signatures of the rolled and unrolled graph representations of the LeNet and AlexNet architectures, trained on the CIFAR-10 dataset. We omit the results on VGG and ResNet, as well as image classification on ImageNet, because the unrolled graph generation process is prohibitively expensive, both in terms of time and space.
Overall, the rolled and unrolled graph representations show similar trends in classifying NNs by effectively capturing their early training dynamics: the structural changes in the training NN architectures during the first 6-15 iterations are sufficient to classify the performance of NN instances with over 90% accuracy. However, as we discuss in § 4.3, our proposed rolled representation is significantly more efficient than the unrolled representation, and can generalize to deeper and larger NNs. We provide more details for these experiments in App. A.2. OBSERVATION 1 Both the rolled and unrolled time-evolving graph representations of NNs are effective in capturing the changes in the NN dynamics during the training phase, and can be used to predict the accuracy of an NN instance after observing only a few training epochs. Our proposed rolled representation is also space- and time-efficient, unlike the unrolled representation. In the remainder of this analysis, we focus on the rolled representation, which is more efficient for larger NN models and datasets. We present the classification results for both types of signatures for the temporal graphs corresponding to the training dynamics of VGG and ResNet-44 (CIFAR-10 dataset) in Fig. (e)-(h) and LeNet, AlexNet and ResNet-50 (ImageNet dataset) in Fig. In addition to the results discussed above for LeNet and AlexNet, we provide the NN classification accuracy for the eigenvector-based signatures in Fig. in the appendix. In all the cases, classification accuracy of 80-95% is achieved in less than 10 training epochs. For degree-based signatures, SVM tends to outperform MLP, while the trend is reversed for eigenvector-based signatures. For example, for both VGG and AlexNet for the CIFAR-10 image classification task, MLP can predict the performance with accuracy ∼95% using the eigenvector-based signatures from the first 6 training epochs; the same trend is observed on ImageNet for the LeNet and AlexNet architectures.
For ResNet on the ImageNet classification task, SVM tends to perform well for both degree and eigenvector centrality signature vectors; while MLP tends to perform poorly in this case, the MLP variant operating on the exponential average of signature vectors outperforms the original MLP and all SVM variants. In all the cases, both classifiers reach performance over 80%-90% significantly before the early stopping point for all the architectures. In general, for ImageNet, we observe that the signatures based on eigenvector centrality are more effective in the NN performance prediction task compared to the weighted degree-based signatures, irrespective of NN architecture. OBSERVATION 2 For the CIFAR-10 image classification task, the rolled graph representation for all NN architectures and both signature types achieve accuracy of >90%. For ImageNet, the eigenvector centrality-based signatures tend to yield higher performance compared to the weighted degree-based signatures, though both achieve accuracy of 80-90%.
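The classification setup described above (five-fold cross validation with SVM or MLP over temporal signature vectors) can be sketched with scikit-learn. The flattened feature layout and the hyperparameters are assumptions for illustration, not the paper's exact configuration.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

def early_dynamics_accuracy(signatures, labels, clf="svm", folds=5):
    """5-fold CV accuracy for predicting high/low final accuracy from
    temporal graph signatures of the first few training epochs.

    signatures: array (n_models, n_epochs * signature_dim), one row per
    trained NN instance (hypothetical layout).
    labels: binary high/low final-accuracy labels.
    """
    model = (SVC(kernel="rbf", gamma="scale") if clf == "svm"
             else MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                                random_state=0))
    return cross_val_score(model, signatures, labels, cv=folds).mean()
```

With a balanced label distribution, anything meaningfully above 0.5 beats the random-classifier baseline mentioned in the setup.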
Is it true that baselines are omitted from the experiment?
To the best of our knowledge, there is no other method that aims to solve the problem that we introduced in this paper, i.e., the prediction of the accuracy of NNs. Nevertheless, we reported results on a previously proposed graph representation for convolutional layers (i.e., the unrolled graph representation introduced by Rieck et al. 2019) in order to compare the efficiency and accuracy of our proposed rolled representation. We reported results on this “baseline” representation in Fig 4 (c) and (d). Additionally, we considered different options for other steps of our proposed methodology beyond the graph construction (e.g., different signature constructions, classification vs. regression) and discussed our observations. Finally, we compared the efficiency of our approach to the “early stopping method”.
null
false
190
Relation extraction (RE) is one of the most fundamental tasks in natural language processing, and its goal is to identify the relationship between a given pair of entities in a sentence. Typically, a large-scale training dataset with clean labels is required to train a reliable relation extraction model. However, it is time-consuming and labor-intensive to annotate such data by crowdsourcing. To overcome the lack of labeled training data, BIBREF0 mintz2009distant presents a distant supervision approach that automatically generates a large-scale, labeled training set by aligning entities in a knowledge graph (e.g. Freebase BIBREF1) to corresponding entity mentions in natural language sentences. This approach is based on a strong assumption that any sentence containing two entities should be labeled according to the relationship of the two entities on the given knowledge graph. However, this assumption does not always hold. Sometimes the same two entities in different sentences with various contexts cannot express a consistent relationship as described in the knowledge graph, which inevitably results in wrongly labeled instances. To alleviate the aforementioned problem, BIBREF2 riedel2010modeling proposes a multi-instance learning framework, which relaxes the strong assumption to the expressed-at-least-once assumption. In plainer terms, this means any possible relation between two entities holds true in at least one distantly-labeled sentence, rather than in all of the sentences that contain those two entities. In particular, instead of generating a sentence-level label, this framework assigns a label to a bag of sentences containing a common entity pair, and the label is a relationship of the entity pair on the knowledge graph.
Recently, based on the labeled data at bag level, a line of works BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7 under the selective attention framework BIBREF5 let the model implicitly focus on the correctly labeled sentence(s) by an attention mechanism and thus learn a stable and robust model from the noisy data. However, such a selective attention framework is vulnerable to situations where a bag comprises merely a single labeled sentence; worse, that single sentence may express relation information inconsistent with the bag-level label. This scenario is not uncommon. For a popular distantly supervised relation extraction benchmark, e.g., the NYT dataset BIBREF2, up to $80\%$ of its training examples (i.e., bags) are one-sentence bags. From our data inspection, we randomly sample 100 one-sentence bags and find $35\%$ of them are incorrectly labeled. Two examples of one-sentence bags are shown in Table TABREF1. These results indicate that, in the training phase, the selective attention module is enforced to output a single-valued scalar for $80\%$ of examples, leading to an ill-trained attention module and thus hurting the performance. Motivated by the aforementioned observations, in this paper, we propose a novel Selective Gate (SeG) framework for distantly supervised relation extraction.
In the proposed framework, 1) we employ both the entity embeddings and relative position embeddings BIBREF8 for relation extraction, and an entity-aware embedding approach is proposed to dynamically integrate entity information into each word embedding, yielding more expressively-powerful representations for downstream modules; 2) to strengthen the capability of widely-used piecewise CNN (PCNN) BIBREF3 on capturing long-term dependency BIBREF9, we develop a light-weight self-attention BIBREF10, BIBREF11 mechanism to capture rich dependency information and consequently enhance the capability of neural network via producing complementary representation for PCNN; and 3) based on preceding versatile features, we design a selective gate to aggregate sentence-level representations into a bag-level one and alleviate intrinsic issues appearing in selective attention. Compared to the baseline framework (i.e., selective attention for multi-instance learning), SeG is able to produce entity-aware embeddings and rich-contextual representations to facilitate downstream aggregation modules that stably learn from noisy training data. Moreover, SeG uses gate mechanism with pooling to overcome problem occurring in selective attention, which is caused by one-sentence bags. In addition, it still keeps a light-weight structure to ensure the scalability of this model. The experiments and extensive ablation studies on New York Times dataset BIBREF2 show that our proposed framework achieves a new state-of-the-art performance regarding both AUC and top-n precision metrics for distantly supervised relation extraction task, and also verify the significance of each proposed module. Particularly, the proposed framework can achieve AUC of 0.51, which outperforms selective attention baseline by 0.14 and improves previous state-of-the-art approach by 0.09.
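The one-sentence-bag failure mode and the gate-based fix can be illustrated with a toy numpy sketch. The gate parameterization below is purely illustrative, not the paper's exact SeG module: it only shows why a softmax over one sentence is degenerate while an independent sigmoid gate is not.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def selective_attention(sent_reprs, scores):
    """Bag representation as a softmax-weighted sum of sentence vectors.
    With a one-sentence bag the softmax weight is always 1.0, regardless
    of the score, so ~80% of NYT bags give the module no training signal."""
    w = softmax(np.asarray(scores, dtype=float))
    return w @ sent_reprs

def selective_gate(sent_reprs, gate_logits):
    """Gate-based aggregation sketch: each sentence gets an independent
    sigmoid gate in (0, 1), and gated vectors are mean-pooled. Unlike the
    softmax, the gate can down-weight the single sentence of a noisy
    one-sentence bag."""
    g = 1.0 / (1.0 + np.exp(-np.asarray(gate_logits, dtype=float)))
    return (g[:, None] * sent_reprs).mean(axis=0)
```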
What kind of framework do they propose?
A novel Selective Gate (SeG) framework for distantly supervised relation extraction.
1909.06434
false
null
We have presented adaptive schedules for multilingual machine translation, where task weights are controlled by validation BLEU scores. The schedules may either be explicit, directly changing how tasks are sampled, or implicit, adjusting the optimization process. Compared to single-task baselines, performance improved on the low-resource En-De task and was comparable on the high-resource En-Fr task.
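One way an explicit schedule of this kind could look is sketched below: tasks further below their target validation BLEU are sampled more often. The gap-to-target softmax weighting rule here is an assumption for illustration, not the paper's formula.

```python
import numpy as np

def task_sampling_weights(val_bleu, target_bleu, temperature=1.0):
    """Explicit adaptive schedule sketch for multilingual MT.

    val_bleu, target_bleu: dicts task -> current / target validation BLEU.
    Returns a sampling distribution over tasks that favors tasks with a
    larger gap to their target (illustrative rule only).
    """
    tasks = sorted(val_bleu)
    gaps = np.array([max(target_bleu[t] - val_bleu[t], 0.0) for t in tasks])
    logits = gaps / temperature
    probs = np.exp(logits - logits.max())  # stable softmax over gaps
    probs /= probs.sum()
    return dict(zip(tasks, probs))
```

A trainer would redraw the next batch's task from this distribution after each validation step, so a lagging low-resource task (e.g., En-De) automatically receives more updates.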
How big are negative effects of proposed techniques on high-resource tasks?
The negative effects were insignificant.
null
false
null
What are crawfish and how do you eat them?
Crawfish are considered a staple of cajun cuisine. Many Louisiana natives and surrounding southern states eat these crustaceans. To eat crawfish, sometimes called crawdads or mudbugs, you first need to purchase them. Crawfish are usually sold by the sack with typical weights around 30-40 pounds. You must then clean the crawfish as their natural habitats are within mud, swamps, and muck. Rinsing them thoroughly in water is required. Many will do this a sack at a time and continue to rinse until the water is crystal clear noting that they're clean. Others go a step further through a process of purging. This is using salt to have the crawfish clean out its insides. Many think purging is a myth and simply skip this step. Once clean, you'll want to bring a large pot (typically 80+ quarts in size) of water to a rolling boil. Add in spices to flavor the water. There are also pre-made bags of spice known as 'boil.' Boil the crawfish until they float to the top and turn bright red. Then allow the crawfish to soak to absorb the added spice. This process can last as long as an hour or more, though 30 minutes is fairly typical. Many people will add potatoes, corn, onions, and other sides to their boil. Learning to peel a crawfish is an art, but can be quickly done by twisting off the tail. Peel the rest of the shell off and eat the tail. Many will also suck the head to get a burst of flavor.
1905.11037
false
null
Although not done in this work, an alternative (but also natural) way to address the task is as a special case of language modelling, where the output vocabulary is restricted to the size of the `action' vocabulary. Also, note that the performance for this task is not expected to achieve a perfect accuracy, as there may be situations where more than one action is reasonable, and also because writers tell a story playing with elements such as surprise or uncertainty.
Why do they think this task is hard? What is the baseline performance?
1. there may be situations where more than one action is reasonable, and also because writers tell a story playing with elements such as surprise or uncertainty. 2. Macro F1 = 14.6 (MLR, length 96 snippet) Weighted F1 = 31.1 (LSTM, length 128 snippet)
null
false
null
Valve announced the first edition of The International on August 1, 2011. 16 teams were invited to compete in the tournament, which would also serve as the first public viewing of Dota 2. The tournament was funded by Valve, including the US$1 million grand prize, with Nvidia supplying the hardware. It took place at Gamescom in Cologne from August 17–21 the same year. The tournament started with a group stage in which the winners of each of the four groups were entered into a winner's bracket, and the other teams entered the loser's bracket. The rest of the tournament was then played as a double-elimination tournament. The final of this inaugural tournament was between Ukrainian-based Natus Vincere and Chinese-based EHOME, with Natus Vincere winning the series 3–1. EHOME won US$250,000, with the rest of the 14 teams splitting the remaining $350,000.
Can you summarize the main points about the first edition of the Dota 2 tournament, The International?
The first edition of The International was held at Gamescom in Cologne, Germany, August 17-21, 2011. Sixteen teams were directly invited by Valve to compete for the $1 million grand prize. The tournament was played as a round-robin group stage followed by a double-elimination playoff. The Ukrainian-based team Natus Vincere defeated the Chinese-based EHOME 3-1 in the grand finals.
null
false
null
Whale skin is how many cm deep?
15 cm
null
false
null
Tell me which of these teams are in the National Football League (NFL): Seattle Seahawks, San Francisco 49ers, Seattle Mariners, Seattle Super Sonics, New England Patriots.
The Seattle Seahawks, San Francisco 49ers, and New England Patriots are part of the National Football League.
null
false
473
Adversarial Robustness. The vulnerability of DNN models to adversarial examples has been verified by many existing successful attack methods. To improve model robustness against adversarial attacks, various defense methods have been proposed. Among them, adversarial training has been proven to be one of the most effective defense methods. Adversarial training can be formulated as solving a min-max optimization problem where the outer minimization process enforces the model to be robust to adversarial examples, generated by the inner maximization process via some existing attacking methods like PGD. Based on adversarial training, several variants, such as TRADES, MART, have been presented to improve the model's performance further. More details about adversarial robustness can be found in recent surveys. Since almost all studies of adversarial training are focused on balanced datasets, it's worthwhile to investigate the performance of adversarial training methods on imbalanced training datasets. Imbalanced Learning. Most existing works of imbalanced training can be roughly classified into two categories, i.e., re-sampling and reweighting. Re-sampling methods aim to reduce imbalance level through either over-sampling examples from under-represented classes or under-sampling examples from well-represented classes. Reweighting methods allocate different weights for different classes or even different examples. For example, Focal loss enlarges the weights of wrongly-classified examples while reducing the weights of well-classified examples in the standard cross entropy loss; and LDAM loss regularizes the underrepresented classes more strongly than the well-represented classes to attain good generalization on under-represented classes. More details about imbalanced learning can be found in recent surveys. The majority of existing methods focused on the natural training scenario and their trained models will break down when facing adversarial attacks.
Hence, in this paper, we develop a novel method that can defend against adversarial attacks and achieve well-pleasing performance under imbalance settings.
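The Focal loss mentioned above can be sketched in its binary form (Lin et al., 2017), without the optional alpha class-balancing factor: the (1 - p_t)^gamma factor shrinks the contribution of well-classified examples, so training focuses on hard, often under-represented ones.

```python
import numpy as np

def focal_loss(probs, labels, gamma=2.0):
    """Binary focal loss, alpha-free form.

    probs: predicted probability of the positive class, per example.
    labels: 0/1 ground-truth labels.
    With gamma = 0 this reduces to standard binary cross entropy.
    """
    p = np.asarray(probs, dtype=float)
    y = np.asarray(labels, dtype=float)
    p_t = y * p + (1 - y) * (1 - p)              # probability of the true class
    return -(((1 - p_t) ** gamma) * np.log(p_t)).mean()
```

For a well-classified example with p_t = 0.9 and gamma = 2, the modulating factor is (0.1)^2 = 0.01, so its loss is cut by two orders of magnitude relative to plain cross entropy.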
Why studying reweighting strategy?
As we mentioned in Section 6, reweighting can be regarded as one of two main categories for existing imbalanced learning methods, and several recently proposed reweighting based methods have achieved well-pleasing performance of the imbalanced learning task, such as Focal loss [1] and LDAM loss [2]. These methods can be easily implemented and worked well with deep neural networks. In addition, re-sampling based methods may bring extra challenges for the learning process, like insufficient training data and additional noises. Hence, as the initial effort to investigate the imbalanced adversarial training problem, we start our study from the reweighting strategy and leave other methods as future work. [1] Lin, Tsung-Yi, et al. "Focal loss for dense object detection." Proceedings of the IEEE international conference on computer vision. 2017. [2] Cao, Kaidi, et al. "Learning imbalanced datasets with label-distribution-aware margin loss." arXiv preprint arXiv:1906.07413 (2019).
null
false
null
Write a short message to send to your doctor to ask if you need to make a separate appointment for each of your vaccines for an upcoming vacation to South Africa. The doctor is named Dr. Gupta
Hi Dr. Gupta, I'm reaching out because I will be traveling to South Africa soon, and I want to make sure I am up to date on all my vaccines before I go abroad. I know I need to make a travel appointment, but I wanted to check if I should make separate appointments for each vaccine or if I will be able to have all the vaccines given at one travel clinic appointment. Please let me know what scheduling makes sense, thanks so much!
null
false
null
When was the first episode of the TV show, The X-Files?
The first episode of The X-Files, titled "Pilot," aired on September 10, 1993.
null
false
null
Process engineering involves the utilization of multiple tools and methods. Depending on the exact nature of the system, processes need to be simulated and modeled using mathematics and computer science. Processes where phase change and phase equilibria are relevant require analysis using the principles and laws of thermodynamics to quantify changes in energy and efficiency. In contrast, processes that focus on the flow of material and energy as they approach equilibria are best analyzed using the disciplines of fluid mechanics and transport phenomena. Disciplines within the field of mechanics need to be applied in the presence of fluids or porous and dispersed media. Materials engineering principles also need to be applied, when relevant. Manufacturing in the field of process engineering involves an implementation of process synthesis steps. Regardless of the exact tools required, process engineering is then formatted through the use of a process flow diagram (PFD) where material flow paths, storage equipment (such as tanks and silos), transformations (such as distillation columns, receiver/head tanks, mixing, separations, pumping, etc.) and flowrates are specified, as well as a list of all pipes and conveyors and their contents, material properties such as density, viscosity, particle-size distribution, flowrates, pressures, temperatures, and materials of construction for the piping and unit operations.
What is a PFD?
PFD is an abbreviation for process flow diagram. Process flow diagrams specify material flow paths, storage equipment (such as tanks and silos), transformations (such as distillation columns, receiver/head tanks, mixing, separations, pumping, etc.), and flowrates. They also list all pipes and conveyors and their contents, material properties such as density, viscosity, particle-size distribution, flowrates, pressures, temperatures, and materials of construction for the piping and unit operations.
null
false
null
What are the functions of the 3 branches of the United States government.
The legislative branch creates laws, the executive branch enforces and administrates laws, and the judicial branch interprets laws.
null
false
284
With the massive increase in social interactions on online social networks, there has also been an increase of hateful activities that exploit such infrastructure. On Twitter, hateful tweets are those that contain abusive speech targeting individuals (cyber-bullying, a politician, a celebrity, a product) or particular groups (a country, LGBT, a religion, gender, an organization, etc.). Detecting such hateful speech is important for analyzing public sentiment of a group of users towards another group, and for discouraging associated wrongful activities. It is also useful to filter tweets before content recommendation, or learning AI chatterbots from tweets. The manual way of filtering out hateful tweets is not scalable, motivating researchers to identify automated ways. In this work, we focus on the problem of classifying a tweet as racist, sexist or neither. The task is quite challenging due to the inherent complexity of the natural language constructs – different forms of hatred, different kinds of targets, different ways of representing the same meaning. Most of the earlier work revolves either around manual feature extraction BIBREF0 or use representation learning methods followed by a linear classifier BIBREF1 , BIBREF2 . However, recently deep learning methods have shown accuracy improvements across a large number of complex problems in speech, vision and text applications. To the best of our knowledge, we are the first to experiment with deep learning architectures for the hate speech detection task. In this paper, we experiment with multiple classifiers such as Logistic Regression, Random Forest, SVMs, Gradient Boosted Decision Trees (GBDTs) and Deep Neural Networks(DNNs). The feature spaces for these classifiers are in turn defined by task-specific embeddings learned using three deep learning architectures: FastText, Convolutional Neural Networks (CNNs), Long Short-Term Memory Networks (LSTMs). 
As baselines, we compare with feature spaces comprising char n-grams BIBREF0, TF-IDF vectors, and Bag of Words vectors (BoWV). Main contributions of our paper are as follows: (1) We investigate the application of deep learning methods for the task of hate speech detection. (2) We explore various tweet semantic embeddings like char n-grams, word Term Frequency-Inverse Document Frequency (TF-IDF) values, Bag of Words Vectors (BoWV) over Global Vectors for Word Representation (GloVe), and task-specific embeddings learned using FastText, CNNs and LSTMs. (3) Our methods beat state-of-the-art methods by a large margin ( INLINEFORM0 18 F1 points better).
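One of the baseline feature spaces named above (character n-gram TF-IDF with a linear classifier) can be sketched in scikit-learn. The hyperparameters are illustrative, not the paper's exact setup.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def char_ngram_baseline():
    """Char n-gram TF-IDF features fed to logistic regression, as a
    sketch of the n-gram baseline feature space (parameters assumed)."""
    return make_pipeline(
        TfidfVectorizer(analyzer="char", ngram_range=(1, 4)),
        LogisticRegression(max_iter=1000),
    )
```

Usage is the standard scikit-learn pattern: `clf = char_ngram_baseline(); clf.fit(tweets, labels); clf.predict(new_tweets)` with labels in {racist, sexist, neither}.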
What kind of problem do they focus on in their work?
The problem of classifying a tweet as racist, sexist or neither.