| paper_id | yes_no | paper_index | evidence | question | answer |
|---|---|---|---|---|---|
null | false
| 224
|
Recently, there has been a surge of excitement about adding numerous new domains to conversational agents such as Alexa, Google Assistant, Cortana and Siri to support a myriad of use cases. However, building a slot tagger, a key component of natural language understanding (NLU) BIBREF0, for a new domain requires massive amounts of labeled data, hindering rapid development of new skills. To address this data-intensiveness problem, domain adaptation approaches have been successfully applied. Previous approaches fall roughly into two groups: data-driven approaches BIBREF1, BIBREF2 and model-driven approaches BIBREF3, BIBREF4.
In the data-driven approach, new target models are trained by combining target-domain data with relevant data from a repository of arbitrary labeled datasets, using domain adaptation techniques such as feature augmentation BIBREF1. A disadvantage of this approach is that training time grows with the amount of reusable data: the repository might contain hundreds of thousands of samples, making iterative refinement prohibitive. In contrast, the model-driven approach utilizes “expert" models that summarize the data for reusable slots BIBREF3, BIBREF4. The outputs of the expert models are used directly when training new domains, allowing for faster training. A drawback of this approach is that it requires explicit concept alignment, which is itself a non-trivial task, potentially missing many reusable concepts. Additionally, such models do not generalize easily to new, unseen slots.
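The feature augmentation technique mentioned above can be illustrated with a short sketch. The function below is a hypothetical minimal version of the idea (each feature vector is copied into general, source-specific, and target-specific blocks so one linear model can learn shared and domain-specific weights), not the implementation used in the cited work:

```python
def augment(features, domain):
    """Feature-augmentation sketch: map a feature vector to
    <general, source-only, target-only> copies. A single linear model
    trained on the augmented vectors can then share weights across
    domains in the general block while keeping domain-specific weights
    in the other blocks."""
    zeros = [0.0] * len(features)
    if domain == "source":
        return features + features + zeros
    if domain == "target":
        return features + zeros + features
    raise ValueError("domain must be 'source' or 'target'")
```

A source-domain example `[1.0, 2.0]` becomes `[1.0, 2.0, 1.0, 2.0, 0.0, 0.0]`, while the same target-domain example fills the last block instead.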
In this paper, we present a new domain adaptation technique for slot tagging inspired by recent advances in zero-shot learning. Traditionally, slot tagging is formulated as a sequence labeling task using the BIO representation (Figure 1). Our approach instead formulates the problem as detecting spans that contain values for each slot, as shown in Figure 1. For implicit transfer of reusable concepts across domains, we represent slots in a shared latent semantic space by embedding the slot description. With this shared latent space, domain adaptation can be done simply by fine-tuning a base model, trained on massive data, with a handful of target-domain examples, without any explicit concept alignment. A similar idea of utilizing zero-shot learning for slot tagging has been shown to work in semi-supervised settings BIBREF5. Our zero-shot model architecture differs by adding: 1) an attention layer to produce slot-aware representations of input words, 2) a CRF layer to better satisfy global consistency constraints, and 3) character-level embeddings to incorporate morphological information. Despite its simplicity, we show that our model outperforms all existing methods, including the previous zero-shot learning approach, in domain adaptation settings.
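The slot-aware attention idea can be illustrated with a toy sketch in pure Python. All names here are ours, and the real model operates on learned embeddings inside a neural network; this only shows the mechanics of attending from a slot-description embedding to each input word:

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def slot_aware_reprs(word_vecs, slot_vec):
    """Toy slot-aware attention: score each word vector against the
    slot-description embedding (dot product), normalize with softmax,
    then augment each word representation with its attention-weighted
    copy of the slot vector."""
    weights = softmax([sum(w * s for w, s in zip(wv, slot_vec))
                       for wv in word_vecs])
    return [wv + [a * s for s in slot_vec]
            for wv, a in zip(word_vecs, weights)]
```

Words whose vectors align with the slot description receive larger attention weights, so their augmented representations carry more slot context.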
We first describe our approach called Zero-Shot Adaptive Transfer model (ZAT) in detail. We then describe the dataset we used for our experiments. Using this data, we conduct experiments comparing our ZAT model with a set of state-of-the-art models: Bag-of-Expert (BoE) models and their non-expert counterparts BIBREF4 , and the Concept Tagger model BIBREF5 , showing that our model can lead to significant F1-score improvements. This is followed by an in-depth analysis of the results. We then provide a survey of related work and concluding remarks.
Despite its simplicity, we show that our model outperforms all existing methods including the previous zero-shot learning approach in domain adaptation settings.
|
Does the ZAT model outperform all existing methods?
|
Yes.
|
null | false
| null |
Bike or Car?
|
Car is always better, but bike fans will deny it.
Cars have more power than bikes.
Cars are safer.
Cars have more features compared to bikes.
|
|
null | false
| null |
How are weather forecasts created?
|
Weather forecasts are created by combining data from many sources, such as satellites, weather balloons, and ground stations. Those data points are then fed to large computer systems with sophisticated models to predict how the weather will change over time.
Weather sites and even local weather forecasters use this information to provide regional and short-term forecasts.
|
|
null | false
| null |
Education in the United States is provided in public and private schools and by individuals through homeschooling. State governments set overall educational standards, often mandate standardized tests for K–12 public school systems and supervise, usually through a board of regents, state colleges, and universities. The bulk of the $1.3 trillion in funding comes from state and local governments, with federal funding accounting for about $260 billion in 2021 compared to around $200 billion in past years.
Private schools are free to determine their own curriculum and staffing policies, with voluntary accreditation available through independent regional accreditation authorities, although some state regulation can apply.
In 2013, about 87% of school-age children (those below higher education) attended state-funded public schools, about 10% attended tuition and foundation-funded private schools, and roughly 3% were home-schooled.
|
Extract the amounts of federal funding provided to state and local governments for education in the past year
|
$260 Billion
|
null | false
| null |
Provide a possible jurisdiction section for a federal civil rights complaint leaving blanks for the location.
|
This is a civil rights complaint arising out of the county of _______ within the state of _______. This complaint sets forth causes of action under Title 42 §§ 1981, 1982, 1983, 1985, 1986. The Plaintiff will show that her ability to engage in a contractual agreement was infringed upon by a private corporation conspiring with state actors acting under color of state law in violation of the aforementioned federal statutes. These facts give this honorable court jurisdiction over the matter.
|
|
null | false
| 308
|
Given the large collections of unstructured and semi-structured data available on the web, there is a crucial need to enable quick and efficient access to the knowledge content within them. Traditionally, the field of information extraction has focused on extracting such knowledge from unstructured text documents, such as job postings, scientific papers, news articles, and emails. However, the content on the web increasingly contains more varied types of data, including semi-structured web pages, tables that do not adhere to any schema, photographs, videos, and audio. Given a query by a user, the appropriate information may appear in any of these different modes, and thus there's a crucial need for methods to construct knowledge bases from different types of data, and more importantly, combine the evidence in order to extract the correct answer.
Motivated by this goal, we introduce the task of multimodal attribute extraction. Provided contextual information about an entity, in the form of any of the modes described above, along with an attribute query, the goal is to extract the corresponding value for that attribute. While attribute extraction on the domain of text has been well-studied BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , to our knowledge this is the first time attribute extraction using a combination of multiple modes of data has been considered. This introduces additional challenges to the problem, since a multimodal attribute extractor needs to be able to return values provided any kind of evidence, whereas modern attribute extractors treat attribute extraction as a tagging problem and thus only work when attributes occur as a substring of text.
In order to support research on this task, we release the Multimodal Attribute Extraction (MAE) dataset, a large dataset containing mixed-media data for over 2.2 million commercial product items, collected from a large number of e-commerce sites using the Diffbot Product API. The collection of items is diverse and includes categories such as electronic products, jewelry, clothing, vehicles, and real estate. For each item, we provide a textual product description, collection of images, and open-schema table of attribute-value pairs (see Figure 1 for an example). The provided attribute-value pairs only provide a very weak source of supervision; where the value might appear in the context is not known, and further, it is not even guaranteed that the value can be extracted from the provided evidence. In all, there are over 4 million images and 7.6 million attribute-value pairs. By releasing such a large dataset, we hope to drive progress on this task similar to how the Penn Treebank BIBREF5 , SQuAD BIBREF6 , and Imagenet BIBREF7 have driven progress on syntactic parsing, question answering, and object recognition, respectively.
To assess the difficulty of the task and the dataset, we first conduct a human evaluation study using Mechanical Turk that demonstrates that all available modes of information are useful for detecting values. We also train and provide results for a variety of machine learning models on the dataset. We observe that a simple most-common value classifier, which always predicts the most-common value for a given attribute, provides a very difficult baseline for more complicated models to beat (33% accuracy). In our current experiments, we are unable to train an image-only classifier that can outperform this simple model, despite using modern neural architectures such as VGG-16 BIBREF8 and Google's Inception-v3 BIBREF9. However, we are able to obtain significantly better performance using a text-only classifier (59% accuracy). We hope to improve and obtain more accurate models in further research.
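The most-common value baseline described above can be sketched in a few lines (a minimal version with hypothetical function names, not the authors' exact code):

```python
from collections import Counter, defaultdict

def train_most_common(pairs):
    """pairs: iterable of (attribute, value) training examples.
    Returns a predictor that always outputs the most frequent value
    observed for a given attribute, ignoring all evidence (text,
    images, tables) entirely."""
    counts = defaultdict(Counter)
    for attr, val in pairs:
        counts[attr][val] += 1
    table = {a: c.most_common(1)[0][0] for a, c in counts.items()}
    return lambda attr: table.get(attr)
```

Despite using no evidence at all, such a predictor is hard to beat because attribute-value distributions in product data are heavily skewed.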
Motivated by this goal, we introduce the task of multimodal attribute extraction. Provided contextual information about an entity, in the form of any of the modes described above, along with an attribute query, the goal is to extract the corresponding value for that attribute. While attribute extraction on the domain of text has been well-studied, to our knowledge this is the first time attribute extraction using a combination of multiple modes of data has been considered.
|
Is it the first time attribute extraction using a combination of multiple modes of data has been considered?
|
Yes.
|
1909.07512
| false
| null |
This method is unique since it avoids needing a prior dataset by using the information found within the weights of a general language model. Word embedding models, and BERT in particular, contain vast amounts of information collected through the course of their training. BERT Base, for instance, has 110 million parameters and was trained on both the Wikipedia Corpus and BooksCorpus BIBREF0, a combined collection of over 3 billion words. The full potential of such vastly trained general language models is still unfolding. This paper demonstrates that by carefully prompting and analysing these models, it is possible to extract new information from them, and extend short-text analysis beyond the limitations posed by word count.
Word embedding models, and BERT in particular, contain vast amounts of information collected through the course of their training. BERT Base, for instance, has 110 million parameters and was trained on both the Wikipedia Corpus and BooksCorpus BIBREF0, a combined collection of over 3 billion words.
|
What is the language model pre-trained on?
|
The answers are shown as follows:
* Wikipedia Corpus and BooksCorpus
|
null | false
| null |
What is the legal drinking age in the USA?
|
The legal drinking age in the USA is 21.
|
|
null | false
| null |
Baseball is a bat-and-ball sport played between two teams of nine players each, taking turns batting and fielding. The game occurs over the course of several plays, with each play generally beginning when a player on the fielding team, called the pitcher, throws a ball that a player on the batting team, called the batter, tries to hit with a bat. The objective of the offensive team (batting team) is to hit the ball into the field of play, away from the other team's players, allowing its players to run the bases, having them advance counter-clockwise around four bases to score what are called "runs". The objective of the defensive team (referred to as the fielding team) is to prevent batters from becoming runners, and to prevent runners' advance around the bases. A run is scored when a runner legally advances around the bases in order and touches home plate (the place where the player started as a batter).
|
How do you play baseball?
|
Baseball is played with two teams of nine players each that take turns batting and fielding. The objective of the offensive team is to hit the ball into the field of play away from the other team's players, allowing its players to run bases and having them advance around the bases and score runs. The objective of the defensive team is to prevent batters from becoming runners, and to prevent runners' advance around the bases.
|
null | false
| null |
Give me a list of some of the best films where English is not the primary language spoken. Please indicate which language is the primary language spoken.
|
You might enjoy the following films:
- Spirited Away (Japanese)
- In the Mood for Love (Cantonese)
- City of God (Portuguese)
- Life is Beautiful (Italian)
- Cinema Paradiso (Italian)
- Amélie (French)
- La Haine (French)
|
|
null | false
| null |
In 1872 the Boundary Commission travelled through this area while surveying the border between Canada and the United States. During the 1874 March West trek of the North-West Mounted Police, the force established its first camp (called Short Creek Camp) at this location. The community began being served by the Soo Line in 1893, permitting the coal mines in the region to become operational. Roche Percée incorporated as a village on January 12, 1909.
|
Based on this passage, what was the first designated space to be established in Roche Percee called? Who established it and in what year?
|
The first established space was Short Creek Camp, which was founded in 1874 by the North-West Mounted Police.
|
null | false
| null |
Give me the list of fortified wines.
|
Fortified wines:
- Port
- Madeira
- Marsala
- Sherry
- Vermouth
- Vinsanto
|
|
null | false
| null |
What was Britain's first colony (annexed in 1583)?
|
Newfoundland
|
|
null | false
| null |
Who are the considered to be the best soccer players of all time, and what countries are they from?
|
Pelé (Brazil), Lionel Messi (Argentina), Cristiano Ronaldo (Portugal), Ronaldo (Brazil), Diego Maradona (Argentina), Franz Beckenbauer (Germany), George Best (Northern Ireland), Michel Platini (France), Johan Cruyff (The Netherlands), and David Beckham (England).
|
|
null | false
| null |
You were abducted by aliens and experimented on, you were sent back to earth with the knowledge and ability to solve any problem on earth
|
Now that I have returned to earth with extreme knowledge and power, I have the ability to solve any problem in the world. I will start with antigravity enabling space travel for all. I will then discover the secret to longevity. I will answer the uncertainties of the world such as how the Pyramids came to be, lost civilizations, nuclear fusion and global warming. Finally, to make my millions, I will reverse male balding.
|
|
null | false
| null |
i want to eat healthier. what kind of food in my diet should i cut out? Everyday I have a yogurt, burger, pizza, salad, and coffee.
|
It depends on what your goals are for dieting. However, burgers and pizza are generally not healthy due to their trans fats and unhealthy carbs. Burgers and pizza are also often fast food, which means they are made with processed ingredients. Yogurt, salad, and coffee do not need to be cut out, but in excess they can be unhealthy too. So, always eat and drink in moderation.
|
|
null | false
| null |
Classify each of these as a toy for a dog or a cat: ball, laser, bone, mouse, frisbee, catnip
|
ball: dog, laser: cat, bone: dog, mouse: cat, frisbee: dog, catnip: cat
|
|
null | false
| null |
Why do we have work meetings?
|
We have work meetings to give status updates on current projects. These meetings can be virtual or in person. In-person work meetings are great for building rapport, meeting new people, and strengthening relationships with people you already know. Staying up to date on what everyone is working on and aligning with the company vision is how companies stay successful and maintain a competitive advantage over their competitors. Regular meetings are the best way to achieve this.
|
|
null | false
| null |
Why is football called soccer in the US?
|
It is originally a British name: "association football" was informally called "assoccer", which was later shortened to "soccer". American football, meanwhile, derives from rugby football. Britain later reverted to calling the sport "football", partly due to anti-Americanism, but in America the name "football" had already become attached to the more popular sport of American football. So now the US cannot easily change the name to match what the rest of the world calls it.
|
|
1912.00819
| false
| null |
First preference is given to labels that match perfectly across all the neural annotators. In Table TABREF11, we can see that both datasets have about 40% of labels matching exactly over all models (AM). Next, priority is given to the context-based models: we check whether the label from all context models matches perfectly. If two out of three context models agree, we check whether that label is also produced by at least one of the non-context models; if so, we rely on those two context models. As a result, about 47% of the labels are taken based on the context models (CM). When none of the context models produce the same label, we rank the labels by the confidence values produced as a probability distribution using the $softmax$ function. The labels are sorted in descending order of confidence. If the first three labels match (the case where one context model and both non-context models produce the same label), or at least the first two match, we pick that label. These account for about 3% in IEMOCAP and 5% in MELD (BM).
Finally, when none of the above conditions is fulfilled, we leave the label in an unknown category. This unknown dialogue-act category is labeled `xx' in the final annotations, and it accounts for about 7% in IEMOCAP and 11% in MELD (NM). The statistics of the EDAs are reported in Table TABREF13 for both datasets. Total utterances in MELD include the training, validation and test datasets.
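The cascade just described (AM, then CM, then BM, then NM) might be sketched as follows. This is a simplified reading of the procedure with hypothetical function and argument names; in particular, the BM step is reduced to "the top-ranked label must be produced by at least two annotators":

```python
from collections import Counter

def ensemble_label(context, non_context, confidence):
    """context: labels from the 3 context models; non_context: labels
    from the 2 non-context models; confidence: label -> confidence.
    Returns the ensemble label, or 'xx' when no rule applies."""
    labels = context + non_context
    # AM: every annotator produced the same label
    if len(set(labels)) == 1:
        return labels[0]
    # CM: all context models agree, or two of three agree and at
    # least one non-context model confirms that label
    if len(set(context)) == 1:
        return context[0]
    top, n = Counter(context).most_common(1)[0]
    if n == 2 and top in non_context:
        return top
    # BM: fall back to confidence ranking; accept the top-ranked
    # label if at least two annotators produced it
    ranked = sorted(labels, key=lambda l: confidence.get(l, 0.0), reverse=True)
    if Counter(labels)[ranked[0]] >= 2:
        return ranked[0]
    # NM: no agreement at all -> unknown category
    return "xx"
```

For example, three context models voting `q`, `q`, `s` with a non-context `q` resolve to `q` at the CM step, while five mutually distinct labels fall through to `xx`.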
First preference is given to labels that match perfectly across all the neural annotators. In Table TABREF11, we can see that both datasets have about 40% of labels matching exactly over all models (AM). Next, priority is given to the context-based models: we check whether the label from all context models matches perfectly. If two out of three context models agree, we check whether that label is also produced by at least one of the non-context models; if so, we rely on those two context models. As a result, about 47% of the labels are taken based on the context models (CM). When none of the context models produce the same label, we rank the labels by the confidence values produced as a probability distribution using the $softmax$ function. The labels are sorted in descending order of confidence. If the first three labels match (the case where one context model and both non-context models produce the same label), or at least the first two match, we pick that label. These account for about 3% in IEMOCAP and 5% in MELD (BM).
Finally, when none of the above conditions is fulfilled, we leave the label in an unknown category. This unknown dialogue-act category is labeled `xx' in the final annotations, and it accounts for about 7% in IEMOCAP and 11% in MELD (NM).
|
How does the ensemble annotator extract the final label?
|
The answers are shown as follows:
* First preference is given to labels that match perfectly across all the neural annotators.
* If two out of three context models agree, it is checked whether that label is also produced by at least one of the non-context models.
* When none of the context models produce the same label, we rank the labels by the confidence values produced as a probability distribution using the $softmax$ function. The labels are sorted in descending order of confidence. If the first three labels match (the case where one context model and both non-context models produce the same label), or at least the first two match, we pick that label.
* Finally, when none of the above conditions is fulfilled, we leave the label in an unknown category.
|
null | false
| null |
Classify the following books as fiction or non-fiction: Harry Potter and the Sorcerer's Stone, Economics in One Lesson, The Cat in the Hat, A Brief History of Time, Democracy In America, and Goodnight Moon.
|
Harry Potter and the Sorcerer's Stone, The Cat in the Hat, and Goodnight Moon are all fiction books. Economics in One Lesson, A Brief History of Time, and Democracy In America are all non-fiction.
|
|
null | false
| 144
|
Multi-document summarization (MDS), the transformation of a set of documents into a short text containing their most important aspects, is a long-studied problem in NLP. Generated summaries have been shown to support humans dealing with large document collections in information seeking tasks BIBREF0 , BIBREF1 , BIBREF2 . However, when exploring a set of documents manually, humans rarely write a fully-formulated summary for themselves. Instead, user studies BIBREF3 , BIBREF4 show that they note down important keywords and phrases, try to identify relationships between them and organize them accordingly. Therefore, we believe that the study of summarization with similarly structured outputs is an important extension of the traditional task.
A representation that is more in line with observed user behavior is a concept map BIBREF5 , a labeled graph showing concepts as nodes and relationships between them as edges (Figure FIGREF2 ). Introduced in 1972 as a teaching tool BIBREF6 , concept maps have found many applications in education BIBREF7 , BIBREF8 , for writing assistance BIBREF9 or to structure information repositories BIBREF10 , BIBREF11 . For summarization, concept maps make it possible to represent a summary concisely and clearly reveal relationships. Moreover, we see a second interesting use case that goes beyond the capabilities of textual summaries: When concepts and relations are linked to corresponding locations in the documents they have been extracted from, the graph can be used to navigate in a document collection, similar to a table of contents. An implementation of this idea has been recently described by BIBREF12 .
The corresponding task that we propose is concept-map-based MDS, the summarization of a document cluster in the form of a concept map. In order to develop and evaluate methods for the task, gold-standard corpora are necessary, but no suitable corpus is available. The manual creation of such a dataset is very time-consuming, as the annotation includes many subtasks. In particular, an annotator would need to manually identify all concepts in the documents, while only a few of them will eventually end up in the summary.
To overcome these issues, we present a corpus creation method that effectively combines automatic preprocessing, scalable crowdsourcing and high-quality expert annotations. Using it, we can avoid the high effort for single annotators, allowing us to scale to document clusters that are 15 times larger than in traditional summarization corpora. We created a new corpus of 30 topics, each with around 40 source documents on educational topics and a summarizing concept map that is the consensus of many crowdworkers (see Figure FIGREF3 ).
As a crucial step of the corpus creation, we developed a new crowdsourcing scheme called low-context importance annotation. In contrast to traditional approaches, it allows us to determine important elements in a document cluster without requiring annotators to read all documents, making it feasible to crowdsource the task and overcome quality issues observed in previous work BIBREF13 . We show that the approach creates reliable data for our focused summarization scenario and, when tested on traditional summarization corpora, creates annotations that are similar to those obtained by earlier efforts.
To summarize, we make the following contributions: (1) We propose a novel task, concept-map-based MDS (§ SECREF2 ), (2) present a new crowdsourcing scheme to create reference summaries (§ SECREF4 ), (3) publish a new dataset for the proposed task (§ SECREF5 ) and (4) provide an evaluation protocol and baseline (§ SECREF7 ). We make these resources publicly available under a permissive license.
To overcome these issues, we present a corpus creation method that effectively combines automatic preprocessing, scalable crowdsourcing and high-quality expert annotations. Using it, we can avoid the high effort for single annotators, allowing us to scale to document clusters that are 15 times larger than in traditional summarization corpora.
|
What steps does their method contain?
|
Automatic preprocessing, scalable crowdsourcing and high-quality expert annotations.
|
null | false
| 428
|
In Table 5, we compare the computational complexity of action selection for CASEC and DCG, which is the bottleneck of both algorithms. CASEC is slightly faster than DCG by virtue of graph sparsity.

Table 5: Average time (milliseconds) for 1000 action-selection phases of CASEC/DCG. CASEC uses a graph with sparseness 0.2 while DCG uses the full graph. To ensure a fair comparison, both Max-Sum/Max-Plus algorithms pass messages for 8 iterations. The batch size is set to 10.
|
The paper misses relevant work: Deep Implicit Coordination Graphs for Multi-agent Reinforcement Learning. Could the authors provide a legitimate reason for not including this relevant work in the submission?
|
The main reason is that DICG [1] does not incorporate pairwise payoff functions. Graphs in DICG are used to mix observation of agents which serves as the input of fully-decomposed actors or centralized critics. Consequently, DICG still suffers from the relative generalization problem. In our ablation study, Attn. (described in Appendix B.2), we use the DICG-style attention mechanism to assign weights to payoff functions. We empirically compare against this ablation in Table 5 (page 20).
As suggested by the reviewer, in the revised version of our paper, we discuss and compare against DICG.
|
null | false
| 66
|
Recent work on convolutional neural network architectures showed that they are competitive with recurrent architectures even on tasks where modeling long-range dependencies is critical, such as language modeling BIBREF0 , machine translation BIBREF1 , BIBREF2 and speech synthesis BIBREF3 . In end-to-end speech recognition however, recurrent architectures are still prevalent for acoustic and/or language modeling BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 .
There is a history of using convolutional networks in speech recognition, but only as part of an otherwise more traditional pipeline. They have been first introduced as TDNNs to predict phoneme classes BIBREF9 , and later to generate HMM posteriorgrams BIBREF10 . They have more recently been used in end-to-end frameworks, but only in combination with recurrent layers BIBREF6 , or n-gram language models BIBREF11 , or for phone recognition BIBREF12 , BIBREF13 . Nonetheless, convolutional architectures are prevalent when learning from the raw waveform BIBREF14 , BIBREF15 , BIBREF16 , BIBREF13 , BIBREF17 , because they naturally model the computation of standard features such as mel-filterbanks. Given the evidence that they are also suitable on long-range dependency tasks, we expect convolutional neural networks to be competitive at all levels of the speech recognition pipeline.
In this paper, we present a fully convolutional approach to end-to-end speech recognition. Building on recent advances in convolutional learnable front-ends for speech BIBREF13 , BIBREF17 , convolutional acoustic models BIBREF11 , and convolutional language models BIBREF0 , the paper has four main contributions:
More precisely, we perform experiments on the large vocabulary task of the Wall Street Journal dataset (WSJ) and on the 1000h Librispeech. Our overall pipeline improves the state-of-the-art of end-to-end systems on both datasets. In particular, we decrease the Word Error Rate on the noisy test set of Librispeech by 2% (absolute) compared to DeepSpeech 2 [7] and the best sequence-to-sequence model [9]. On clean speech, the improvement is about 0.5% on Librispeech compared to the best end-to-end systems; on WSJ, our results are competitive with the current state-of-the-art, a DNN-HMM system [19].
|
What dataset is used for the experiments?
|
They performed experiments on the large vocabulary task of the Wall Street Journal dataset (WSJ) and on the 1000h Librispeech.
|
null | false
| null |
List of the biggest oceans in the world
|
Pacific Ocean
Atlantic Ocean
Indian Ocean
Southern Ocean
Arctic Ocean
Coral Sea
Arabian Sea
|
|
null | false
| 25
|
State-of-the-art automatic speech recognition (ASR) systems BIBREF0 have large model capacities and require significant quantities of training data to generalize. Labeling thousands of hours of audio, however, is expensive and time-consuming. A natural question to ask is how to achieve better generalization with fewer training examples. Active learning studies this problem by identifying and labeling only the most informative data, potentially reducing sample complexity. How much active learning can help in large-scale, end-to-end ASR systems, however, is still an open question.
The speech recognition community has generally identified the informativeness of samples by calculating confidence scores. In particular, an utterance is considered informative if the most likely prediction has small probability BIBREF1, or if the predictions are distributed very uniformly over the labels BIBREF2. Though confidence-based measures work well in practice, less attention has been focused on gradient-based methods like Expected Gradient Length (EGL) BIBREF3, where informativeness is measured by the norm of the gradient incurred by the instance. EGL has previously been justified as intuitively measuring the expected change in a model's parameters BIBREF3. We formalize this intuition from the perspective of asymptotic variance reduction, and experimentally, we show EGL to be superior to confidence-based methods on speech recognition tasks. Additionally, we observe that the ranking of samples scored by EGL is not correlated with that of confidence scoring, suggesting EGL identifies aspects of an instance that confidence scores cannot capture.
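The EGL idea can be made concrete with a toy binary logistic model rather than the authors' ASR setup. For logistic regression, the gradient of the log-likelihood with respect to the weights is $(y - p)x$, so the expected gradient length $\sum_y p(y|x)\,\|\nabla_w \log p(y|x)\|$ has a closed form; all names below are ours:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def egl_score(w, x):
    """Expected Gradient Length for a toy binary logistic model:
    EGL(x) = sum_y p(y|x) * ||grad_w log p(y|x)||.
    With grad_w log p(y|x) = (y - p) * x, the per-label norms are
    (1 - p)*||x|| for y=1 and p*||x|| for y=0, giving 2*p*(1-p)*||x||."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
    norm_x = math.sqrt(sum(xi * xi for xi in x))
    return p * (1 - p) * norm_x + (1 - p) * p * norm_x
```

Note that the score peaks where the model is most uncertain (p near 0.5) and also scales with the input's magnitude, which is one way EGL can diverge from pure confidence-based rankings.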
In BIBREF3 , EGL was applied to active learning on sequence labeling tasks, but our work is the first we know of to apply EGL to speech recognition in particular. Gradient-based methods have also found applications outside active learning. For example, BIBREF4 suggests that in stochastic gradient descent, sampling training instances with probabilities proportional to their gradient lengths can speed up convergence. From the perspective of variance reduction, this importance sampling problem shares many similarities to problems found in active learning.
EGL has previously been justified as intuitively measuring the expected change in a model’s parameters [4].
|
Can the EGL measure the expected change in a model's parameters intuitively?
|
Yes, it can.
|
null | false
| null |
The Purdue University football team traces its origin back to October 29, 1887, when its team fell to Butler College by a score of 48–6 in Indianapolis, Indiana. A group of students at Purdue University formed the school's first football team in 1887. Albert Berg was hired as the coach. Despite being deaf, Berg was reportedly "the only man in the territory with any knowledge of the game." Berg was 23 years old when he became Purdue's football "coacher." He was paid $1 for each lesson he gave to the newly organized football team and had only one week to prepare the team for its first game. The 1887 Purdue team played its only game on October 29, 1887, against the Butler College team at Athletic Park in Indianapolis. Butler soundly defeated Berg's squad by a score of 48–6. After the loss to Butler, Purdue did not field a football team again until 1889.
|
Tell me a bulleted list of events that happened in this paragraph grouped by year
|
Sure, here is a bulleted list of events that happened in the mentioned paragraph grouped by year:
1887
* A group of students at Purdue University formed the school's first football team
* Albert Berg was hired as the coach
* On October 29, 1887, Purdue University football team fell to Butler College by a score of 48–6 in Indianapolis, Indiana
1889
* First time that Purdue fielded a football team since their loss to Butler College on October 29, 1887
|
1801.09030
| true
| null |
In the TCM prescription generation task, the textual symptom descriptions can be seen as the question and the aim of the task is to produce a set of TCM herbs that form a prescription as the answer to the question. However, the set of herbs is different from the textual answers to a question in the QA task. The most evident difference is that there will not be any duplication of herbs in the prescription. However, the basic seq2seq model sometimes produces the same herb tokens repeatedly when applied to the TCM prescription generation task. This phenomenon can hurt the recall rate even after applying a post-process to eliminate repetitions, because within the limited length of a prescription the model produces the same token over and over again rather than real, novel ones. Furthermore, the basic seq2seq model assumes a strict order between generated tokens, but in reality, we should not severely punish the model when it predicts the correct tokens in the wrong order. In this paper, we explore automatically generating TCM prescriptions based on textual symptoms. We propose a soft seq2seq model with a coverage mechanism and a novel soft loss function. The coverage mechanism is designed to make the model aware of the herbs that have already been generated, while the soft loss function relieves the side effect of the strict order assumption. In the experimental results, our proposed model beats all the baselines in professional evaluations, and we observe a large increase in both the recall rate and the F1 score compared with the basic seq2seq model.
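The repetition problem the coverage mechanism targets can be sketched with a toy greedy decoder that simply masks already-emitted herbs (a hard simplification of the paper's soft coverage; the vocabulary indices and function name are made up):

```python
def decode_no_repeat(step_probs, length):
    """Greedy decoding with a hard coverage mask (a sketch).

    step_probs: per-step probability lists over a small herb vocabulary.
    Herbs that were already emitted are masked out, so the decoder cannot
    produce the same herb twice -- the failure mode of the basic seq2seq
    model described above.
    """
    emitted = []
    for probs in step_probs[:length]:
        masked = [(-1.0 if i in emitted else p) for i, p in enumerate(probs)]
        emitted.append(max(range(len(masked)), key=masked.__getitem__))
    return emitted
```

Without the mask, a decoder whose distribution peaks on the same herb at every step would fill the whole prescription with one token; with it, each step is forced onto a new herb.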
Furthermore, the basic seq2seq assumes a strict order between generated tokens, but in reality, we should not severely punish the model when it predicts the correct tokens in the wrong order.
|
Do they impose any grammatical constraints over the generated output?
|
No.
|
null | false
| null |
Who was the Apostle Paul?
|
The Apostle Paul, also known as Saint Paul, was a Christian apostle who spread the teachings of Jesus Christ. According to the book of Acts in the Bible, Paul was a Pharisee who led the persecution of the early disciples of Jesus. He set out on the road to Damascus to find Christians there and bring them bound to Jerusalem. But on his way, at midday, a light brighter than the sun shone around him and those with him. He fell to the ground and heard a voice, "Saul, Saul, why are you persecuting me?" He said, "Who are you?" The Lord replied, "I am Jesus, whom you are persecuting. It is hard for you to kick against the goads." Paul asked, "What do you want me to do?" and the Lord said, "Arise and go into the city, and you will be told what you must do." Having been made blind, he was led into the city, and days later his sight was restored by Ananias of Damascus. Paul was then baptized and immediately began to proclaim the Christian message. He preached mainly to the Gentiles and is also known as the Apostle to the Gentiles. He planted several churches in Galatia, Philippi, Thessalonica, Corinth, and Ephesus. Thirteen or fourteen of the twenty-seven books in the New Testament were written by Paul through the guidance of the Holy Spirit.
|
|
null | false
| null |
StarCraft is a real-time strategy video game developed and published by Blizzard Entertainment for Microsoft Windows. The first installment of the video game series of the same name, it was released in 1998. A Classic Mac OS version was released in 1999, and a Nintendo 64 port co-developed with Mass Media and published by Nintendo was released in 2000.
Blizzard started work on the game shortly after Warcraft II, another real-time strategy game, was released in 1995. The first incarnation debuted at the 1996 Electronic Entertainment Expo, where it was unfavorably compared to Warcraft II. As a result, the project was entirely overhauled before being showcased to the public in early 1997, at which time it received a far more positive response. The game's multiplayer is particularly popular in South Korea, where players and teams participate in professional competitions, earn sponsorships, and compete in televised tournaments.
Set in a fictitious future timeline during the 25th century AD in a distant part of the Milky Way galaxy known as the Koprulu Sector, the game revolves around three intelligent species fighting for dominance: the Terrans are humans exiled from Earth who are now skilled at adapting to any situation; the Zerg are a race of insectoid aliens in pursuit of genetic perfection and obsessed with assimilating other races; the Protoss are a humanoid species with advanced technology and psionic abilities who are attempting to preserve their civilization and strict philosophy about their way of life from the Zerg.
Many journalists of the video game industry have praised StarCraft as one of the most important, and one of the greatest video games of all time. The game is also said to have raised the bar for developing real-time strategy (RTS) games. With more than 11 million copies sold worldwide by February 2009, StarCraft became one of the best-selling games for the personal computer. It has been praised for pioneering the use of unique factions in RTS gameplay, and for having a compelling story.
StarCraft has had its storyline adapted and expanded through a series of novels published between 2000 and 2016, the expansion pack StarCraft: Brood War, and two officially authorized add-ons, Insurrection and Retribution. A sequel, StarCraft II: Wings of Liberty, was released in July 2010, along with two expansion packs and a campaign pack between 2013 and 2016, while a remastered edition of the original and its expansion pack was released in August 2017. The original game, along with the expansion, was released for free in April 2017.
|
Tell me about StarCraft.
|
StarCraft is a Real Time Strategy game released in 1998 by Blizzard Entertainment. It is widely considered one of the greatest video games of all time. The story is set in the 25th century and features three races struggling for power: Terran, Zerg, and Protoss. There have been one sequel and multiple expansions. The multiplayer mode was very popular, spawning professional competitions.
|
null | false
| null |
Tell me which of the following guitars are hollow bodies or not: Gibson ES-175, Gibson ES-335, Gibson SG, Gibson Les Paul, Fender Stratocaster, Fender Telecaster.
|
Gibson ES-175 is one of the most famous examples of a well-crafted hollow-body guitar, beloved by numerous jazz guitarists over many decades. Most variants of Gibson's SG, Les Paul, and Fender's Stratocaster and Telecaster are considered solid bodies. However, there were numerous variant models which featured "hollowed" or "chambered" bodies. One prime example is Fender's Telecaster Thinline model from the 1970's, which features a semi-hollow design with an open f-hole on only one side of the body. Speaking of "semi-hollows," Gibson's ES-335 is a semi-hollow body guitar where a solid "center block" of wood passes through the middle of the body, providing additional structural stability over full hollow body guitars.
|
|
1909.13695
| false
| null |
The Business Language Testing Service (BULATS) test of Cambridge Assessment English BIBREF27 is a multi-level computer-based English test. It consists of read speech and free-speaking components, with the candidate responding to prompts. The BULATS spoken test has five sections, all with materials appropriate to business scenarios. The first section (A) contains eight questions about the candidate and their work. The second section (B) is a read-aloud section in which the candidates are asked to read eight sentences. The last three sections (C, D and E) have longer utterances of spontaneous speech elicited by prompts. In section C the candidates are asked to talk for one minute about a prompted business related topic. In section D, the candidate has one minute to describe a business situation illustrated in graphs or charts, such as pie or bar charts. The prompt for section E asks the candidate to imagine they are in a specific conversation and to respond to questions they may be asked in that situation (e.g. advice about planning a conference). This section is made up of five 20-second responses.
In this work, non-native speech from the BULATS test is used as both training and test data for the speaker verification systems. To investigate how the systems generalise, data for testing is also taken from the Cambridge Assessment English Linguaskill online test. Like BULATS, this is also a multi-level test and has a similar format composed of the same five sections as described before but assesses general English ability.
The Business Language Testing Service (BULATS) test of Cambridge Assessment English BIBREF27 is a multi-level computer-based English test. It consists of read speech and free-speaking components, with the candidate responding to prompts. The BULATS spoken test has five sections, all with materials appropriate to business scenarios.
In this work, non-native speech from the BULATS test is used as both training and test data for the speaker verification systems. To investigate how the systems generalise, data for testing is also taken from the Cambridge Assessment English Linguaskill online test. Like BULATS, this is also a multi-level test and has a similar format composed of the same five sections as described before but assesses general English ability.
|
What standard large speaker verification corpora is used for evaluation?
|
The answers are shown as follows:
* non-native speech from the BULATS test
|
null | false
| null |
Battle of Pävankhind was a rearguard last stand that took place on 13 July 1660, at a mountain pass in the vicinity of fort Vishalgad, near the city of Kolhapur with the Maratha warrior Baji Prabhu Deshpande and Sambhu Singh Jadhav against Siddi Masud of the Bijapur Sultanate. The engagement ended with the destruction of the Maratha forces, and a tactical victory for the Bijapur Sultanate that failed to achieve a strategic victory.
|
Summarize the battle of Pävankhind.
|
In 1660, Shivaji Maharaj was trapped in the fort of Panhala, under siege and vastly outnumbered by an Adilshah army led by an Abyssinian General called Siddi Masud. Shivaji Maharaj planned to escape to the Maratha fort of Vishalgad. Two Maratha sardars under the Adilshahi General Siddi Jouhar, namely Suryarao Surve and Jaswantrao Dalvi had also encircled the fort of Vishalgad simultaneously. Shivaji Maharaj waited for months, planning and depleting the Adilshah's food source.
Shivaji Maharaj, Baji Prabhu, and around 600 of their best troops, would dash through the Adilshahi force at night. A man named Shiva Kashid, who resembled Shivaji Maharaj in appearance, had volunteered to dress like the king and get captured. This bought some additional time due to the confusion over identity, before Siddi Masud realised the error and gave chase.
Shivaji Maharaj made his escape on the dark night of 13 July with his troops, and the Adilshahi army, 10,000 strong, was in pursuit. It was clear that there was no way to shake off the enemy, and that the Marathas could not simultaneously prevail over both the garrison at Vishalgad and the pursuing Adilshahi army.
The only option was for a section of the Marathas to stay back and fight the larger Adilshahi forces in a rearguard action, while the rest of the Marathas would carry on to their destination. Shivaji Maharaj decided to split his forces. Baji Prabhu agreed to face the troops of Bijapur with 300 soldiers. Shivaji Maharaj told Baji Prabhu that he would hear cannons being fired 5 times from Vishalgad, signaling Shivaji Maharaj’s safety. The strategic position of Ghod Khind (Horse Pass) was chosen for the defence, as it was very narrow and only a few soldiers could pass at any one time.
|
null | false
| 335
|
While the demand for physical and manual labor is gradually declining, there is a growing need for a workforce with soft skills. Which soft skill do you think would be the most valuable in your daily life? According to an article in Forbes BIBREF0 , 70% of employed Americans agree that public speaking skills are critical to their success at work. Yet, it is one of the most dreaded acts. Many people rate the fear of public speaking even higher than the fear of death BIBREF1 . To alleviate the situation, several automated systems are now available that can quantify behavioral data for participants to reflect on BIBREF2 . Predicting the viewers' ratings from the speech transcripts would enable these systems to generate feedback on the potential audience behavior.
Predicting human behavior, however, is challenging due to its huge variability and the way the variables interact with each other. Running Randomized Control Trials (RCT) to decouple each variable is not always feasible and also expensive. It is possible to collect a large amount of observational data due to the advent of content sharing platforms such as YouTube, Massive Open Online Courses (MOOC), or ted.com. However, the uncontrolled variables in the observational dataset always keep a possibility of incorporating the effects of the “data bias” into the prediction model. Recently, the problems of using biased datasets are becoming apparent. BIBREF3 showed that the error rates in the commercial face-detectors for the dark-skinned females are 43 times higher than the light-skinned males due to the bias in the training dataset. The unfortunate incident of Google's photo app tagging African-American people as “Gorilla” BIBREF4 also highlights the severity of this issue.
We address the data bias issue as much as possible by carefully analyzing the relationships of different variables in the data generating process. We use a Causal Diagram BIBREF5 , BIBREF6 to analyze and remove the effects of the data bias (e.g., the speakers' reputations, popularity gained by publicity, etc.) in our prediction model. In order to make the prediction model less biased to the speakers' race and gender, we confine our analysis to the transcripts only. Besides, we normalize the ratings to remove the effects of the unwanted variables such as the speakers' reputations, publicity, contemporary hot topics, etc.
For our analysis, we curate an observational dataset of public speech transcripts and other meta-data collected from the ted.com website. This website contains a large collection of high-quality public speeches that are freely available to watch, share, rate, and comment on. Every day, numerous people watch and annotate their perceptions about the talks. Our dataset contains 2231 public speech transcripts and over 5 million ratings from the spontaneous viewers of the talks. The viewers annotate each talk by 14 different labels—Beautiful, Confusing, Courageous, Fascinating, Funny, Informative, Ingenious, Inspiring, Jaw-Dropping, Long-winded, Obnoxious, OK, Persuasive, and Unconvincing.
We use two neural network architectures in the prediction task. In the first architecture, we use LSTM BIBREF7 for a sequential input of the words within the sentences of the transcripts. In the second architecture, we use TreeLSTM BIBREF8 to represent the input sentences in the form of a dependency tree. Our experiments show that the dependency tree-based model can predict the TED talk ratings with slightly higher performance (average F-score 0.77) than the word sequence model (average F-score 0.76). To the best of our knowledge, this is the best performance in the literature on predicting the TED talk ratings. We compare the performances of these two models with a baseline of classical machine learning techniques using hand-engineered features. We find that the neural networks largely outperform the classical methods. We believe this gain in performance is achieved by the networks' ability to better capture the natural relationship of the words (as compared to the hand engineered feature selection approach in the baseline methods) and the correlations among different rating labels.
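The tree-based composition used by the second architecture can be illustrated with a minimal bottom-up reduction over a dependency tree (a toy stand-in for TreeLSTM, which replaces the composition function with gated units; the embedding table and tree are invented for the example):

```python
def tree_reduce(node, embed, compose):
    """Compose a sentence vector bottom-up over a dependency tree (a sketch).

    node: (word, [child nodes]). Each node's vector combines its own word
    embedding with the already-composed vectors of its children -- the core
    recursion that TreeLSTM implements with learned gates.
    """
    word, children = node
    child_vecs = [tree_reduce(child, embed, compose) for child in children]
    return compose(embed(word), child_vecs)


def sum_compose(vec, child_vecs):
    # Toy composition: elementwise sum of the head word and its children.
    out = list(vec)
    for cv in child_vecs:
        out = [a + b for a, b in zip(out, cv)]
    return out
```

For the sentence "dogs bark loudly" with head "bark" and dependents "dogs" and "loudly", the reduction visits the leaves first and sums upward to a single sentence vector.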
We believe this gain in performance is achieved by the networks’ ability to capture better the natural relationship of the words (as compared to the hand engineered feature selection approach in the baseline methods) and the correlations among different rating labels.
|
How to achieve the gain in performance?
|
this gain in performance is achieved by the ability to capture better the natural relationship of the words (as compared to the hand engineered feature selection approach in the baseline methods) and the correlations among different rating labels.
|
null | false
| null |
What are the primary causes of inflation?
|
The five primary causes of inflation are an increase in wages, an increase in the price of raw materials, an increase in taxes, a decline in productivity, and an increase in the money supply.
|
|
null | false
| null |
What activities an admin or an administrator of any data tools & platform or data tools can do?
|
An administrator, in general, is someone who can perform more elevated operations; they have full access. As the famous quote goes, "With great power comes great responsibility." With custom encryption keys and the ability to encrypt data, it is now possible to encrypt data being processed in the platform so that it is not accessible or visible to the administrator.
|
|
null | false
| 130
|
Finally we test whether the target really needs to be represented by natural language (e.g. “Do you support X?”). Namely, an alternative is to represent the target with a trainable embedding instead of a question.
In order to fit target embeddings smoothly into our architecture, we represent each target type with a different reserved symbol from the M-BERT vocabulary. Segment A is then set to this symbol instead of a natural language question.
The results for this experiment are listed in the bottom row of Table TABREF39. An M-BERT model that learns target embeddings instead of encoding a question performs clearly worse in the supervised and cross-lingual settings. From this we conclude that spelled-out natural language questions provide important linguistic detail that can help in stance detection.
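The two ways of filling segment A compared in this experiment can be sketched as follows (the reserved-symbol names mirror BERT's `[unusedN]` placeholder tokens; the target list and question template are illustrative, not the paper's exact wording):

```python
# Map each stance target to a reserved placeholder symbol that can stand in
# for a natural-language question (the targets here are examples only).
TARGETS = ["atheism", "feminism", "climate change"]
RESERVED = {t: f"[unused{i}]" for i, t in enumerate(TARGETS)}


def segment_a(target, as_question=True):
    """Build segment A of the M-BERT-style input pair (a sketch).

    as_question=True spells the target out as a question, which the
    experiment finds works better; False uses a single reserved symbol
    whose embedding is learned, i.e. a trainable target embedding.
    """
    if as_question:
        return f"Do you support {target}?".split()
    return [RESERVED[target]]
```

The question variant hands the encoder real words it already has representations for, whereas the reserved symbol starts from an uninformative embedding, which is one way to read the performance gap reported above.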
In order to fit target embeddings smoothly into our architecture, we represent each target type with a different reserved symbol from the M-BERT vocabulary.
|
What is used to represent each target type in order to fit target embeddings smoothly into the architecture?
|
A different reserved symbol from the M-BERT vocabulary.
|
null | false
| null |
What can I make with leftover stale bread?
|
You can pulse it in the food processor to make bread crumbs. You can bake it by itself to make croutons. You can bake it in the oven with other ingredients to make dishes such as bread pudding or French onion soup. You can cook it on a stovetop to make French toast. You can also use it to make stuffing for a turkey.
|
|
null | false
| null |
When was Amsterdam founded?
|
Amsterdam was founded in the late 13th century.
|
|
null | false
| null |
Operation Aurora was a series of cyber attacks conducted by advanced persistent threats such as the Elderwood Group based in Beijing, China, with ties to the People's Liberation Army. First publicly disclosed by Google on January 12, 2010, in a blog post, the attacks began in mid-2009 and continued through December 2009.
The attack was aimed at dozens of other organizations, of which Adobe Systems, Akamai Technologies, Juniper Networks, and Rackspace have publicly confirmed that they were targeted. According to media reports, Yahoo, Symantec, Northrop Grumman, Morgan Stanley, and Dow Chemical were also among the targets.
As a result of the attack, Google stated in its blog that it plans to operate a completely uncensored version of its search engine in China "within the law, if at all," and acknowledged that if this is not possible, it may leave China and close its Chinese offices. Official Chinese sources claimed this was part of a strategy developed by the U.S. government.
The attack was named "Operation Aurora" by Dmitri Alperovitch, Vice President of Threat Research at cybersecurity company McAfee. Research by McAfee Labs discovered that "Aurora" was part of the file path on the attacker's machine that was included in two of the malware binaries McAfee said were associated with the attack. "We believe the name was the internal name the attacker(s) gave to this operation," McAfee Chief Technology Officer George Kurtz said in a blog post.
According to McAfee, the primary goal of the attack was to gain access to and potentially modify source code repositories at these high-tech, security, and defense contractor companies. "[The SCMs] were wide open," says Alperovitch. "No one ever thought about securing them, yet these were the crown jewels of most of these companies in many ways—much more valuable than any financial or personally identifiable data that they may have and spend so much time and effort protecting."
History
On January 12, 2010, Google revealed on its blog that it had been the victim of a cyber attack. The company said the attack occurred in mid-December and originated from China. Google stated that over 20 other companies had been attacked; other sources have since cited that more than 34 organizations were targeted. As a result of the attack, Google said it was reviewing its business in China. On the same day, United States Secretary of State Hillary Clinton issued a brief statement condemning the attacks and requesting a response from China.
On January 13, 2010, the news agency All Headline News reported that the United States Congress plans to investigate Google's allegations that the Chinese government used the company's service to spy on human rights activists.
In Beijing, visitors left flowers outside of Google's office. However, these were later removed, with a Chinese security guard stating that this was an "illegal flower tribute". The Chinese government has yet to issue a formal response, although an anonymous official stated that China was seeking more information on Google's intentions.
Attackers involved
Technical evidence including IP addresses, domain names, malware signatures, and other factors, show Elderwood was behind the Operation Aurora attack. The "Elderwood" group was named by Symantec (after a source-code variable used by the attackers), and is referred to as the "Beijing Group" by Dell Secureworks. The group obtained some of Google's source code, as well as access to information about Chinese activists. Elderwood also targeted numerous other companies in the shipping, aeronautics, arms, energy, manufacturing, engineering, electronics, financial, and software sectors.
The "APT" designation for the Chinese threat actors responsible for attacking Google is APT17.
Elderwood specializes in attacking and infiltrating second-tier defense industry suppliers that make electronic or mechanical components for top defense companies. Those firms then become a cyber "stepping stone" to gain access to top-tier defense contractors. One attack procedure used by Elderwood is to infect legitimate websites frequented by employees of the target company – a so-called "water hole" attack, just as lions stake out a watering hole for their prey. Elderwood infects these less-secure sites with malware that downloads to a computer that clicks on the site. After that, the group searches inside the network to which the infected computer is connected, finding and then downloading executives' e-mails and critical documents on company plans, decisions, acquisitions, and product designs.
Attack analysis
In its blog posting, Google stated that some of its intellectual property had been stolen. It suggested that the attackers were interested in accessing Gmail accounts of Chinese dissidents. According to the Financial Times, two accounts used by Ai Weiwei had been attacked, their contents read and copied; his bank accounts were investigated by state security agents who claimed he was under investigation for "unspecified suspected crimes". However, the attackers were only able to view details on two accounts and those details were limited to things such as the subject line and the accounts' creation date.
Security experts immediately noted the sophistication of the attack. Two days after the attack became public, McAfee reported that the attackers had exploited purported zero-day vulnerabilities (unfixed and previously unknown to the target system developers) in Internet Explorer and dubbed the attack "Operation Aurora". A week after the report by McAfee, Microsoft issued a fix for the issue, and admitted that they had known about the security hole used since September. Additional vulnerabilities were found in Perforce, the source code revision software used by Google to manage their source code.
VeriSign's iDefense Labs claimed that the attacks were perpetrated by "agents of the Chinese state or proxies thereof".
According to a diplomatic cable from the U.S. Embassy in Beijing, a Chinese source reported that the Chinese Politburo directed the intrusion into Google's computer systems. The cable suggested that the attack was part of a coordinated campaign executed by "government operatives, public security experts and Internet outlaws recruited by the Chinese government." The report suggested that it was part of an ongoing campaign in which attackers have "broken into American government computers and those of Western allies, the Dalai Lama and American businesses since 2002." According to The Guardian's reporting on the leak, the attacks were "orchestrated by a senior member of the Politburo who typed his own name into the global version of the search engine and found articles criticising him personally."
Once a victim's system was compromised, a backdoor connection that masqueraded as an SSL connection made connections to command and control servers running in Illinois, Texas, and Taiwan, including machines that were running under stolen Rackspace customer accounts. The victim's machine then began exploring the protected corporate intranet that it was a part of, searching for other vulnerable systems as well as sources of intellectual property, specifically the contents of source code repositories.
The attacks were thought to have definitively ended on Jan 4 when the command and control servers were taken down, although it is not known at this point whether or not the attackers intentionally shut them down. However, the attacks were still occurring as of February 2010.
Response and aftermath
The German, Australian, and French governments publicly issued warnings to users of Internet Explorer after the attack, advising them to use alternative browsers at least until a fix for the security hole was made. The German, Australian, and French governments considered all versions of Internet Explorer vulnerable or potentially vulnerable.
In an advisory on January 14, 2010, Microsoft said that attackers targeting Google and other U.S. companies used software that exploits a hole in Internet Explorer. The vulnerability affects Internet Explorer versions 6, 7, and 8 on Windows 7, Vista, Windows XP, Server 2003, Server 2008 R2, as well as IE 6 Service Pack 1 on Windows 2000 Service Pack 4.
The Internet Explorer exploit code used in the attack has been released into the public domain, and has been incorporated into the Metasploit Framework penetration testing tool. A copy of the exploit was uploaded to Wepawet, a service for detecting and analyzing web-based malware operated by the computer security group at the University of California, Santa Barbara. "The public release of the exploit code increases the possibility of widespread attacks using the Internet Explorer vulnerability," said George Kurtz, CTO of McAfee, of the attack. "The now public computer code may help cybercriminals craft attacks that use the vulnerability to compromise Windows systems."
Security company Websense said it identified "limited public use" of the unpatched IE vulnerability in drive-by attacks against users who strayed onto malicious Web sites. According to Websense, the attack code it spotted is the same as the exploit that went public last week. "Internet Explorer users currently face a real and present danger due to the public disclosure of the vulnerability and release of attack code, increasing the possibility of widespread attacks," said George Kurtz, chief technology officer of McAfee, in a blog update. Confirming this speculation, Websense Security Labs identified additional sites using the exploit on January 19. According to reports from Ahnlab, the second URL was spread through the Instant Messenger network Misslee Messenger, a popular IM client in South Korea.
Researchers have created attack code that exploits the vulnerability in Internet Explorer 7 (IE7) and IE8—even when Microsoft's recommended defensive measure (Data Execution Prevention (DEP)) is turned on. According to Dino Dai Zovi, a security vulnerability researcher, "even the newest IE8 isn't safe from attack if it's running on Windows XP Service Pack 2 (SP2) or earlier, or on Windows Vista RTM (release to manufacturing), the version Microsoft shipped in January 2007."
Microsoft admitted that the security hole used had been known to them since September. Work on an update was prioritized and on Thursday, January 21, 2010, Microsoft released a security patch aiming to counter this weakness, the published exploits based on it and a number of other privately reported vulnerabilities. They did not state if any of the latter had been used or published by exploiters or whether these had any particular relation to the Aurora operation, but the entire cumulative update was termed critical for most versions of Windows, including Windows 7.
Security researchers continued to investigate the attacks. HBGary, a security firm, released a report in which they claimed to have found some significant markers that might help identify the code developer. The firm also said that the code was Chinese language based but could not be specifically tied to any government entity.
On February 19, 2010, a security expert investigating the cyber-attack on Google claimed that the people behind the attack were also responsible for the cyber-attacks made on several Fortune 100 companies in the previous one and a half years. They also tracked the attack back to its point of origin, which seems to be two Chinese schools, Shanghai Jiao Tong University and Lanxiang Vocational School. As highlighted by The New York Times, both of these schools have ties with the Chinese search engine Baidu, a rival of Google China. Both Lanxiang Vocational and Jiaotong University have denied the allegation.
In March 2010, Symantec, which was helping investigate the attack for Google, identified Shaoxing as the source of 21.3% of the roughly 12 billion malicious emails sent throughout the world.
Google retrospective
On October 3, 2022, Google released a six-episode series on YouTube covering the events that occurred during Operation Aurora, with commentary from insiders who dealt with the attack, though the series' primary focus was to reassure the Google-using public that measures are in place to counter hacking attempts.
|
Given this article about Operation Aurora, How was the operation named?
|
The attack was named "Operation Aurora" by Dmitri Alperovitch, Vice President of Threat Research at cybersecurity company McAfee. Research by McAfee Labs discovered that "Aurora" was part of the file path on the attacker's machine that was included in two of the malware binaries McAfee said were associated with the attack. "We believe the name was the internal name the attacker(s) gave to this operation," McAfee Chief Technology Officer George Kurtz said in a blog post.
|
null | false
| null |
What is the best indoor plant?
|
Money plant.
|
|
null | false
| null |
Swift has won 12 Grammy Awards (including three for Album of the Year—tying for the most by an artist), an Emmy Award, 40 American Music Awards (the most won by an artist), 29 Billboard Music Awards (the most won by a woman), 92 Guinness World Records, 14 MTV Video Music Awards (including three Video of the Year wins—the most by an act), 12 Country Music Association Awards (including the Pinnacle Award), eight Academy of Country Music Awards, and two Brit Awards. As a songwriter, she has been honored by the Nashville Songwriters Association, the Songwriters Hall of Fame, and the National Music Publishers' Association and was the youngest person on Rolling Stone's list of the 100 Greatest Songwriters of All Time in 2015. At the 64th BMI Awards in 2016, Swift was the first woman to be honored with an award named after its recipient. In 2023, Swift became the first act to be named the global best-selling artist of the year by the International Federation of the Phonographic Industry (IFPI) thrice.
From available data, Swift has amassed over 50 million album sales, 150 million single sales, and 114 million units globally, including 78 billion streams. The International Federation of the Phonographic Industry (IFPI) ranked her as the Global Recording Artist of the Year for a record three times (2014, 2019 and 2022). Swift has the most number-one albums in the United Kingdom and Ireland for a female artist this millennium and earned the highest income for an artist on Chinese digital music platforms—RMB 159,000,000. Swift is the most streamed female act on Spotify, and the only artist to have received more than 200 million streams in one day (228 million streams on October 21, 2022). The most entries and the most simultaneous entries for an artist on the Billboard Global 200, with 94 and 31 songs, respectively, are among her feats. Her Reputation Stadium Tour (2018) is the highest-grossing North American tour ever, and she was the world's highest-grossing female touring act of the 2010s. Beginning with Fearless, all of her studio albums opened with over a million global units.
|
Given the passage below, tell me how many Grammy Awards, Emmy Awards, and Billboard Music Awards Taylor Swift has won and how many simultaneous entries she had on the Billboard Global 200 when she had the most simultaneous entries for an artist.
|
Taylor Swift won 12 Grammy Awards, one Emmy Award, and 29 Billboard Music Awards, and had 31 songs on the Billboard Global 200 simultaneously, breaking the record for the most simultaneous entries for an artist on the Billboard Global 200.
|
null | false
| null |
Give me list of top 10 countries by GDP
|
Below is the list of top 10 countries by GDP in 2023:
1. United States (approx. 26.2 trillion USD)
2. China (approx. 19.2 trillion USD)
3. Japan (approx. 4.4 trillion USD)
4. Germany (approx. 4.1 trillion USD)
5. India (approx. 3.8 trillion USD)
6. United Kingdom (approx. 3.5 trillion USD)
7. France (approx. 2.8 trillion USD)
8. Canada (approx. 2.3 trillion USD)
9. Russia (approx. 2.1 trillion USD)
10. Brazil (approx. 2 trillion USD)
|
|
null | false
| null |
what are the different activities you can engage 5 year old kid during spring break
|
Kids love the outdoors, so it's a great idea to have them do some activities outside, like cycling.
5-year-olds also love crafts, so you can teach them how to build a paper plane or a party popper.
Painting is another great idea; of course, make sure you are not letting them do it on the carpet or wall!
Some kids love baking cookies or making pancakes with a parent, and at the end of the day, a nice story time reading a book together is always fun.
Spring breaks are not easy, but with a little planning you and your kid can have lots of fun together.
|
|
null | false
| null |
What is a mainframe?
|
A mainframe is the central data repository, or hub, in a corporation's data processing center, linked to users through less powerful devices such as workstations or terminals.
|
|
null | false
| null |
What are the ways you can use Zoom?
|
Most people primarily use Zoom for work purposes.
But “Zoom” cemented itself as a verb in 2020 when it felt like the whole world was Zooming into virtual happy hours, birthday parties, bridal showers, memorials, yoga classes, and doctor's appointments. At first, it was a novelty — people took screenshots of their crowded Zoom calls, smiling faces raising a glass to show that even though they couldn't be physically together, they were still in it together.
Zoom is often still used for social gatherings, but less so after many users complained of "Zoom fatigue" in 2020 and the years that followed.
|
|
1702.01517
| false
| null |
Our data are collected from the yelp academic dataset, provided by Yelp.com, a popular restaurant review website. The data set contains three types of objects: business, user, and review, where business objects contain basic information about local businesses (i.e. restaurants), review objects contain review texts and star rating, and user objects contain aggregate information about a single user across all of Yelp. Table TABREF31 illustrates the general statistics of the dataset.
|
Do they focus on any specific product/service domain?
|
The answers are shown as follows:
* local businesses (i.e. restaurants)
|
null | false
| null |
Give me a list of things to ask while triaging a support ticket
|
1) Is this issue being observed in your production or non-production environment?
2) If it is in production, how long has it been occurring there?
3) Are you aware of any recent changes made by you or your team that could have contributed to this failure, such as upgrades of software and libraries, changes in infrastructure, or changes in user interaction?
4) If the response to question #3 is no, please ask them to verify.
5) Understand whether this is impacting a single user, multiple users, or all users. Along the same lines, understand whether this is impacting just one use case, multiple use cases, or all use cases.
6) Understand the precise symptom of the issue.
7) Understand the business impact, i.e. the financial or delivery (SLA) impact of this issue.
8) Understand whether they are interested in workarounds to get unblocked, or whether they seek a full RCA with a proper fix.
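A checklist like the one above can be captured as a small intake object that generates the remaining follow-up questions automatically. The sketch below is purely illustrative; the class and field names (`TriageIntake`, `open_questions`, etc.) are hypothetical and not tied to any real ticketing system.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class TriageIntake:
    """Hypothetical intake form mirroring the triage checklist above."""
    environment: str                           # "production" or "non-production"
    in_production_since: Optional[str] = None  # e.g. "2024-01-15"
    recent_changes: Optional[bool] = None      # upgrades, infra, usage changes
    changes_verified: bool = False             # follow-up when recent_changes is False
    users_affected: str = "unknown"            # "single", "multiple", "all"
    symptom: str = ""
    business_impact: str = ""                  # financial / SLA impact
    wants_workaround: Optional[bool] = None    # workaround vs. full RCA

    def open_questions(self) -> List[str]:
        """Return the checklist items still unanswered for this ticket."""
        qs = []
        if self.environment == "production" and self.in_production_since is None:
            qs.append("How long has the issue been in production?")
        if self.recent_changes is None:
            qs.append("Any recent changes (software, infrastructure, usage)?")
        if self.recent_changes is False and not self.changes_verified:
            qs.append("Please verify that no changes were made.")
        if self.users_affected == "unknown":
            qs.append("Is this impacting a single user, multiple users, or all users?")
        if not self.symptom:
            qs.append("What is the precise symptom?")
        if not self.business_impact:
            qs.append("What is the financial or SLA impact?")
        if self.wants_workaround is None:
            qs.append("Is a workaround acceptable, or is a full RCA required?")
        return qs

# Example: a partially filled ticket prints only what still needs asking.
ticket = TriageIntake(environment="production", symptom="Checkout API returns 500s")
for q in ticket.open_questions():
    print("-", q)
```

Structuring the intake this way keeps the triage conversation focused: the agent asks only the questions that remain open instead of re-running the full checklist on every exchange.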
|
|
null | false
| 172
|
We describe a dataset developed for Named Entity Recognition in German federal court decisions. It consists of approx. 67,000 sentences with over 2 million tokens. The resource contains 54,000 manually annotated entities, mapped to 19 fine-grained semantic classes: person, judge, lawyer, country, city, street, landscape, organization, company, institution, court, brand, law, ordinance, European legal norm, regulation, contract, court decision, and legal literature. The legal documents were, furthermore, automatically annotated with more than 35,000 TimeML-based time expressions. The dataset, which is available under a CC-BY 4.0 license in the CoNLL-2002 format, was developed for training an NER service for German legal documents in the EU project Lynx.
The dataset, which is available under a CC-BY 4.0 license in the CoNLL-2002 format, was developed for training an NER service for German legal documents in the EU project Lynx.
|
What was the dataset developed for?
|
The dataset was developed for training an NER service for German legal documents in the EU project Lynx.
|
null | false
| null |
Hanlon's razor is an adage or rule of thumb that states, "Never attribute to malice that which is adequately explained by stupidity." Known in several other forms, it is a philosophical razor that suggests a way of eliminating unlikely explanations for human behavior. It is probably named after Robert J. Hanlon, who submitted the statement to Murphy's Law Book Two (1980). Similar statements have been recorded since at least the 18th century.
Origin
A similar quotation appears in Robert A. Heinlein's novella Logic of Empire (1941). The character "Doc" in Heinlein's story described the "devil theory" fallacy, explaining, "You have attributed conditions to villainy that simply result from stupidity."
The quotation as such was a submission credited in print to Robert J. Hanlon of Scranton, Pennsylvania, for a compilation of various jokes related to Murphy's law that were published in Arthur Bloch's Murphy's Law Book Two: More Reasons Why Things Go Wrong! (1980). It is unknown whether Hanlon knew of Heinlein's story or whether he independently constructed the phrase.
Hanlon's razor became well known after its inclusion in the Jargon File, a glossary of computer programmer slang, in 1990. Later that same year, the Jargon File editors noted lack of knowledge about the term's derivation and the existence of a similar epigram by William James, though this was possibly intended as a reference to William James Laidlay. In 1996, the Jargon File entry on Hanlon's Razor noted the existence of the phrase in Heinlein's novella, with speculation that Hanlon's Razor might be a corruption of "Heinlein's Razor". The link to Murphy's law was described in a pair of 2001 blog entries by Quentin Stafford-Fraser, citing emails from Joseph E. Bigler. Subsequently, in 2002, the Jargon File entry noted the same. The current Jargon File refers to it as a "Murphyism".
The name was inspired by Occam's razor.
|
Given a reference text about Hanlon's razor, when did the adage become well-known?
|
Hanlon's razor became well known after its inclusion in the Jargon File, a glossary of computer programmer slang, in 1990.
|
null | false
| null |
In the video game industry, a console war describes the competition between two or more video game console manufacturers in trying to achieve better consumer sales through more advanced console technology, an improved selection of video games, and general marketing around their consoles. While console manufacturers are generally always trying to out-perform other manufacturers in sales, these console wars engage in more direct tactics to compare their offerings directly against their competitors or to disparage the competition in contrast to their own, and thus the marketing efforts have tended to escalate in back-and-forth pushes.
While there have been many console wars to date, the term became popular between Sega and Nintendo during the late 1980s and early 1990s as Sega attempted to break into the United States video game market with its Sega Genesis console. Through a novel marketing approach and improved hardware, Sega had been able to gain a majority of the video game console market by 1991, three years after the Genesis’ launch. This caused back and forth competition between the two companies throughout the early 1990s. However, Nintendo eventually regained its market share and Sega stopped making home console hardware by 2001.
|
What is a console war in video games?
|
The video game console market started in 1972 with the release of the first home console, the Magnavox Odyssey. As more manufacturers entered the market and technology improved, the market began to coalesce around the concept of console generations, groupings of consoles with similar technical specifications that vied in the consumer marketplace. Since 1972, there have been nine such console generations, with two to three dominant manufacturers controlling the marketplace as an oligopoly.
As with most industries without a single dominant leader, console manufacturers have marketed their products in a manner to highlight them in a more favorable manner compared to their competitors', or to focus on features that their competitors may lack, often in aggressive manners. For example, console manufacturers in the 1980s and 1990s heavily relied on the word size of the central processor unit, emphasizing that games had better capabilities with 16-bit processors over 8-bit ones. This type of aggressive marketing led video game journalists to call the competitive marketing a "war" or "battle" as early as August 1988. As each new console generation emerged with new marketing approaches, journalists and consumers continued to use variations of the "war" language, including "system wars" and "console wars". By the early 2000s, the term "console war" was most commonly used to describe heated competition between console manufacturers within any generation.
While not the only console war, the rivalry between Sega and Nintendo for dominance of the North American video game market in the late 1980s and early 1990s is generally the most visible example of a console war. It established the use of aggressive marketing and advertising tactics by each company to try to gain control of the marketplace, and ended around 1995 when a new player, Sony, entered and disrupted the console space.
The United States video game industry suffered a severe market crash in 1983 from numerous factors which led to a larger market recession and increasing popularity of personal computers as a video game platform. A key contributing factor to the crash was the loss of publishing control for console games. Early success by some of the first third-party developers like Activision for the Atari VCS console led to venture capitalists bringing in teams of inexperienced programmers to try to capture the same success, but only managed to flood the market with poor quality games, which made it difficult for good quality games to sell. The video game crash impacted other factors in the industry that were already in decline, such as video game arcades.
In Japan, Nintendo had released its Famicom (Family Computer) console in 1983, one of the first consoles of the third generation or the "8-bit" generation. Japan did not have a similar third-party development system in place, and Nintendo maintained control on the manufacturing of game cartridges for the Famicom using a licensing model to limit which third-party games were published on it. The Famicom did launch with a technical fault that Nintendo scrambled to fix, but by the end of 1984, the Famicom was the best-selling console in Japan. Nintendo looked to release the unit in the United States, but recognized that the market was still struggling from the 1983 crash. Nintendo took several steps to redesign the Famicom prior to a United States launch. It was made to look like a VCR unit rather than a console, and was given the name the "Nintendo Entertainment System" to distance it from being a video game console. Further, Nintendo added a special 10NES lockout system that worked as a lock-and-key system with game cartridges to further prevent unauthorized games from being published for the system and avoid the loss of publishing control that had caused the 1983 crash. The NES revitalized the U.S. video game industry and established Nintendo as the dominant name in video game consoles over Atari. In lifetime sales, the NES had sold nearly 62 million units worldwide, with 34 million in North America.
At the same time, Sega was looking to get into the video game console industry as well, having been a successful arcade game manufacturer, but due to the downturn in arcade game business, looked to use that expertise for the home market. They released the SG-1000 console in Japan the same day as the Famicom in 1983, but sold only 160,000 units of the SG-1000 in its first year.
Sega redesigned the SG-1000 twice to try to build a system to challenge Nintendo's dominance; the SG-1000 Mark II remained compatible with the SG-1000 but failed to gain any further sales. The next iteration, the Sega Mark III, was released in 1985, using Sega's arcade hardware for its internals to provide more refined graphics. The console was slightly more powerful than the Famicom, and Sega's marketing attempted to push on the more advanced graphics their system offered over the Famicom. However, Sega found that Nintendo had also contracted other developers to only develop their games exclusive to the Famicom, and Sega was forced to develop their own titles or to port the games from the other developers themselves, limiting the size of the Mark III's library in Japan. Sega attempted to follow Nintendo with a worldwide release of the Mark III, rebranded as the Master System. The Master System was released in the United States in 1986, but Nintendo had similar licensing practices in the U.S. to keep developers exclusive to the NES, limiting the library of games that Sega could offer and to also ensure that another gaming crash didn't begin. Further, Sega's third-party distributor, the toy company Tonka, opted against localizing several of the Japanese games Sega had created, further capping the game library Sega could offer in the U.S. Only a total estimated two million systems were sold.
The Sega and Nintendo console war primarily centered on the launch of the Sega Genesis to try to outsell the Nintendo Entertainment System in the United States.
The fourth generation of video game consoles was started by the launch of NEC's PC Engine in 1987 in Japan. While the PC Engine used an 8-bit CPU, it included 16-bit graphic rendering components, and NEC marketed this heavily as a 16-bit game console to distinguish it from the Famicom and Mark III; when NEC brought the PC Engine worldwide, it was rebranded as the "TurboGrafx-16" to emphasize this. After the release of the TurboGrafx-16, use of the bit designation caught on, leading manufacturers to focus their advertising heavily on the number of bits in a console system for the next two console generations.
NEC was another competitor to Sega and Nintendo. Following a similar path to the one it had taken with the Mark III, Sega used its arcade game technology, now using 16-bit processor boards, and adapted it into a home console, released in Japan in October 1988 as the Mega Drive. Compared to its prior consoles, the Mega Drive was designed to be more mature-looking and less like a toy than the Famicom to appeal to an older demographic of gamers, and "16-bit" was emblazoned on the console's case to emphasize this feature. While the system was positively received by gaming magazines like Famitsu, it was overshadowed by the release a week prior of Super Mario Bros. 3 for the Famicom.
As with the Master System, Sega also planned for a major push of the Mega Drive into the United States to challenge Nintendo's dominance among other markets, with the unit rebranded as the Sega Genesis. Sega was dissatisfied with Tonka's handling of the Master System and so sought a new partner through the Atari Corporation led by Jack Tramiel. Tramiel balked at the cost of the Genesis and turned down the offer, instead focusing more on the company's computer offerings. Sega instead used its dormant Sega of America branch to run a limited launch of the console in August 1989 in test markets of New York City and Los Angeles, with its launch system being bundled with the port of the arcade game Altered Beast.
In October 1989, the company named former Atari Entertainment Electronics Division president Michael Katz as CEO of Sega of America to implement a marketing strategy for a nation-wide push of the Genesis with a target of one million consoles. Katz used a two-prong strategy to challenge Nintendo. The first was to stress the arcade-like capabilities of the Genesis with the capabilities of games like Altered Beast compared to the simpler 8-bit graphics of the NES, and to devise slogans such as "Genesis does what Nintendon't." Katz also observed that Nintendo still held most of the rights to arcade game ports for the NES, so the second part of his strategy was to work with the Japanese headquarters of Sega to pay celebrities for their naming rights for games like Pat Riley Basketball, Arnold Palmer Golf, Joe Montana Football, and Michael Jackson's Moonwalker.
Most of these games were developed by Sega's Japanese programmers, though notably, Joe Montana Football had originally been developed by Mediagenic, the new name for Activision after it had become more involved in publishing and business application development alongside games. Mediagenic had started a football game which Katz wanted to brand under Joe Montana's name, but unknown to Katz at the time, the game was only partially finished due to internal strife at Mediagenic. After the deal had been completed and Katz learned of this, he took the game to Electronic Arts. Electronic Arts had already made itself a significant force in the industry as they had been able to reverse engineer the cartridge format for both the NES and the Genesis, though Electronic Arts' CEO Trip Hawkins felt it was better for the company to develop for the Genesis. Electronic Arts used their reverse engineering knowledge as part of their negotiations with Sega to secure a freer licensing contract to develop openly on the Genesis, which proved beneficial for both companies. At the time Katz had secured Mediagenic's Joe Montana football, Electronic Arts was working on its John Madden Football series for personal computers. Electronic Arts was able to help bring Joe Montana Football, more as an arcade title compared to the strategic John Madden Football, to reality, as well as bringing John Madden Football over as a Genesis title.
The second push in 1991
The Genesis still struggled in the United States against Nintendo, and only sold about 500,000 units by mid-1990. Nintendo had released Super Mario Bros. 3 in February 1990, which further drove sales away from Sega's system. Nintendo themselves did not seem to be affected by either Sega's or NEC's entry into the console market. Sega's president Hayao Nakayama wanted the company to develop an iconic mascot character and build a game around it as one means to challenge Nintendo's own Mario mascot. Company artist Naoto Ohshima came up with the concept of Sonic the Hedgehog, a fast anthropomorphic character with an "attitude" that would appeal to teenagers and incorporating the blue color of Sega's logo, and Yuji Naka helped to develop the game Sonic the Hedgehog to showcase the character as well as the graphics and processing speed of the Genesis. The game was ready by early 1991 and launched in North America in June 1991.
Separately, Sega fired Katz and replaced him with Tom Kalinske as Sega of America's new CEO in mid-1990. Kalinske had been president of Mattel and did not have much experience in video games but recognized the razor and blades model, and developed a new strategy for Sega's push to challenge Nintendo's dominance in America with four key decisions, which included cutting the price of the Genesis from $189 to $149 and continuing the same aggressive marketing campaigns to make the Genesis look "cool" over the NES and Nintendo's upcoming Super Nintendo Entertainment System (SNES). Further, Kalinske pushed hard for American developers like Electronic Arts to create games on the Genesis that would better fit American preferences, particularly sports simulation games, which the console had gained a reputation for. Finally, Kalinske insisted on making Sonic the Hedgehog the bundled game on the system following its release in June 1991, replacing Altered Beast and even offering those that had purchased a Genesis with Altered Beast a trade-in replacement for Sonic.
Under Kalinske, Sega also revamped their advertising approach, aiming for more of a young adult audience, as Nintendo still was positioning the SNES as a child-friendly console. Advertising focused on Sonic, the edgier games in the Genesis library, and its larger library of sports games which appealed to this group. Television ads for the Genesis and its games ended with the "Sega Scream" – a character shouting the name "Sega" to the camera in the final shot – which also caught on quickly.
These changes, all predating the SNES's planned North American release in September 1991, gave Sega its first gain on Nintendo in the U.S. market. Further, the price cut to $149 made the Genesis a cheaper option than the planned $199 price of the SNES, leading many families to purchase the Genesis instead of waiting for the SNES. The Genesis had a larger library of games for the U.S., with over 150 titles by the time the SNES launched alongside eight games, and Sega continued to push out titles that drew continuous press throughout the year, whereas the SNES's game library was generally held up by flagship Mario and Zelda games that only came out once a year, which further made the Genesis a more desirable option.
Up until 1991, Nintendo had been passive towards Sega's approach in North America, but as the SNES launch approached, the company recognized that it was losing ground. The company shifted its advertising in North America to focus on the more advanced features of the SNES that were not present in the Genesis, such as its Mode 7 to create simulated 3D perspective effects. The initial shipment of one million SNES units sold out quickly and a total of 3.4 million SNES units were sold by the end of 1991, a record for a new console launch, but the Genesis maintained strong sales against the SNES. The Genesis's resilience against the SNES led several of Nintendo's third-party developers to break their exclusive development agreements with Nintendo and seek out licenses to also develop for the Genesis, including Acclaim, Konami, Tecmo, Taito, and Capcom, the last of which arranged to have a special licensing mechanism with Sega, allowing it to publish select titles exclusively for the Genesis.
During this period, the push for marketing by both Nintendo and Sega led to the growth of video game magazines. Nintendo had already established Nintendo Power in 1988 in part to serve as a help guide for players on its popular titles, and was able to use this further to advertise the SNES and upcoming games. Numerous other titles grew in the late 1980s and early 1990s, giving Sega the opportunity to market its games heavily in these publications.
The war escalates in 1992 and 1993
Nintendo publicly acknowledged that it knew it was no longer in the dominant position in the console market by 1992. A year into the SNES's release, the SNES's price was lowered to $149 to match the Genesis, to which Sega responded by reducing the Genesis to $129 shortly after. The SNES was helped by Capcom's decision to maintain exclusivity of the home port of its popular brawler arcade game Street Fighter II: The World Warrior for the SNES when it was released in June 1992. While the SNES outsold the Genesis in the U.S. in 1992, the Genesis still had a larger install base. By the start of 1993, the Genesis had captured about 55% of the market, a stark contrast to 1990, when Nintendo had controlled 90% of the market.
The success of Street Fighter II both as an arcade game and as a home console title led to the growth of the fighting game genre, and numerous variations from other developers followed. Of significant interest was Midway's Mortal Kombat, released to arcades in 1992. Compared to most other fighting games at the time, Mortal Kombat was much more violent. The game showed combatants’ blood splatter during combat and allowed players to end matches in graphically intense "fatalities.” Because of its controversial style and gameplay, the game proved extremely popular in arcades.
By 1993, both Nintendo and Sega recognized the need to have Mortal Kombat on their consoles. However, Nintendo, fearing issues with the game's violence, licensed a "clean" version of the game from Acclaim for the SNES, which included replacing the blood splatter with sweat and removing the aforementioned fatalities. Sega also licensed a censored version of the game for the Genesis; however, players could enter a cheat code that reverted the game back to its original arcade version. Both home versions were released in September, and approximately 6.5 million units were sold over the game's lifetime, but the Genesis version was more popular, with three to five times more sales than its SNES counterpart.
(External video: C-SPAN footage of the first congressional hearing on December 9, 1993.)
The popularity of the home console version of Mortal Kombat, coupled with other moral panics in the early 1990s, led to concerns from parents, activists and lawmakers in the United States, leading up to the 1993 congressional hearings on video games first held in December. Led by Senators Joe Lieberman and Herb Kohl, the Senate Committees on Governmental Affairs and the Judiciary brought several of the video game industry leaders, including Howard Lincoln, vice president of Nintendo of America, and Bill White, vice president of Sega of America, to discuss the way they marketed games like Mortal Kombat and Night Trap on consoles to children. Lincoln and White accused each other's companies of creating the issue at hand. Lincoln stated that Nintendo had taken a curated approach to selecting games for their consoles, and that violent games had no place in the market. White responded that Sega purposely was targeting an older audience than Nintendo, and had created a ratings system for its games that it had been trying to encourage the rest of the industry to use; further, despite Nintendo's oversight, White pointed out that there were still many Nintendo titles that incorporated violence. With neither Lincoln nor White giving ground, Lieberman concluded the first hearing with a warning that the industry needed to come together on some means to regulate video games, or else Congress would pass laws to do it for them.
By the time of the second hearing in March 1994, the industry had come together to form the Interactive Digital Software Association (today the Entertainment Software Association) and was working to establish the Entertainment Software Rating Board (ESRB), a ratings panel, which ultimately was introduced by September 1994. Despite Sega offering its ratings system as a starting point, Nintendo refused to work with it as they still saw Sega as their rival, requiring a wholly new system to be created. The ESRB eventually established a form modelled on the Motion Picture Association of America (MPAA)'s rating system for film, and the committee was satisfied with the proposed system and allowed the video game industry to continue without further regulations.
The arrival of Sony and the end of the war
The Sega Saturn (top) and Nintendo 64 (middle) struggled against Sony's newcomer, the PlayStation, which ultimately ended Sega and Nintendo's console war.
In 1994 and 1995, there was a contraction in the video game industry, with NPD Group reporting 17% and 19% year-to-year drops in revenue. While Sega had been outperforming Nintendo in 1993, it still carried corporate debt, while Nintendo remained debt-free from having a more dominant position in the worldwide market, even beating Sega in the North American and US markets and winning the 16-bit console war. To continue to fight Nintendo, Sega's next console was the Sega Saturn, first released in November 1994 in Japan. It brought in technology used in Sega's arcade games with 3D polygonal graphics, and its launch titles featured home versions of these arcade games, including Virtua Fighter. While Virtua Fighter was not a pack-in game, sales of the title were nearly 1:1 with the console in Japan. Sega, recognizing that it had numerous consoles with disparate games it was now trying to support, decided to put most of its attention onto the Saturn line going forward, dropping support for the Genesis despite its sales still being strong in the United States at the time.
At the same time, a new competitor in the console marketplace emerged, Sony Computer Entertainment, with the introduction of the PlayStation in December 1994. Sega, aware of Sony's potential competition in Japan, made sure to have enough Saturns ready for sale on the day the PlayStation first shipped so as to overwhelm Sony's offering.
Both Sega and Sony turned to moving these units to the North American market. With the formation of the IDSA, a new North American trade show, the Electronic Entertainment Expo (E3), was created in 1995 to focus on video games and distinguish itself from the Consumer Electronics Show (CES), which covered all home electronics. Nintendo, Sega and Sony gave their full support to E3 in 1995. Sega believed it had the stronger position going into E3 over Sony, as gaming publications, comparing the Saturn to the PlayStation, rated the Saturn as the better system. At the first E3 in May 1995, Sega's Kalinske premiered the North American version of the Saturn, announced its various features and its selling price of $399, and said that while it would officially launch that same day, the company had already sent a number of systems to selected vendors for sale. Sony's Olaf Olafsson of Sony Electronic Publishing began to cover the PlayStation's features, then invited Steve Race, president of Sony Computer Entertainment America, to the stage. Race stated the launch price of the PlayStation, "$299", and then left to "thunderous applause". The surprise price cut caught Sega off-guard, and, in addition to several stores pulling Sega from their lineups after being shunned from early Saturn sales, the higher price point made it more difficult for Sega to sell the system. When the PlayStation officially launched in the United States in September 1995, its sales over the first two days exceeded what the Saturn had sold over the prior five months. Because Sega had invested heavily in the Saturn for the future, Sony's competition drastically hurt the company's finances.
Nintendo, for its part, bypassed 32-bit CPUs; its next offering was the Nintendo 64, a 64-bit console first released in June 1996. While this gave it powerful capabilities such as 3D graphics to keep up with and surpass those of the Saturn and PlayStation, it was still a cartridge-based system, limiting how much information could be stored for each game. This decision ultimately cost Nintendo Square Soft, which moved its popular Final Fantasy series over to the PlayStation line to take advantage of the larger space on optical media. The first PlayStation game in the series, Final Fantasy VII, drove sales of the PlayStation, further weakening Nintendo's position and driving Sega further out of the market.
By this point, the console war between Nintendo and Sega had evaporated, with both companies now facing Sony as their rival. Sega made one more console, the Dreamcast, which had a number of innovative features including a built-in modem for online connectivity, but the console's lifespan was short-lived in part due to the success of Sony's next product, the PlayStation 2, one of the best-selling home consoles of all time. Sega left the home console hardware business in 2001 to focus on software development and licensing. Nintendo remains a key player in the home console business, but more recently has taken a "blue ocean strategy" approach to avoid competing directly with Sony or Microsoft on a feature-for-feature basis with consoles like the Wii, Nintendo DS, and Nintendo Switch.
The Sega/Nintendo console war is the subject of the non-fiction book Console Wars by Blake Harris in 2014, as well as a film adaptation/documentary of the book in 2020.
Sega and Nintendo have since collaborated on various software titles. Sega has developed a biennial Mario & Sonic at the Olympics series of sports games based on the Summer and Winter Olympics since 2008 featuring characters from both the Super Mario and Sonic series, while Nintendo has developed the Super Smash Bros. crossover fighter series for numerous Nintendo properties that has included Sonic as a playable character along with other Sonic characters in supporting roles since Super Smash Bros. Brawl.
1990s handheld consoles.
A number of major handheld consoles were released on the market within about a year of each other: Nintendo's Game Boy, Sega's Game Gear, and the Atari Lynx. While the Game Boy used a monochromatic display, both the Game Gear and Lynx had colour displays. As these handheld releases came alongside the Sega v. Nintendo console war, they were also subject to heavy marketing and advertising to try to draw consumers. However, the Game Boy ultimately won out in this battle, selling over 118 million units over its lifetime (including its future revisions) compared to 10 million for the Game Gear and 3 million for the Lynx. The Game Boy initially sold for $50 or more below its competitors' prices, and had a larger library of games, including what is considered the handheld's killer app, Tetris, which drew non-gamers to purchase the handheld to play it.
The Hyperdimension Neptunia series of video games started as a parody of the console wars, incorporating personified consoles, developers, consumers, and other such figures within the gaming industry.
|
1602.00812
| false
| null |
This chapter describes the underlying formalism of the theorem provers, as it is visible during an interactive proof trace, and presents the general strategy followed by the theorem provers. The presentation in this chapter is somewhat informal, referring the reader elsewhere for full proofs.
The rest of this chapter is structured as follows. Section "Type-logical grammars" presents a general introduction to type-logical grammars and illustrates its basic concepts using the Lambek calculus, ending the section with some problems at the syntax-semantics interface for the Lambek calculus. Section "Modern type-logical grammars" looks at recent developments in type-logical grammars and how they solve some of the problems at the syntax-semantics interface. Section "Theorem proving" looks at two general frameworks for automated theorem proving for type-logical grammars, describing the internal representation of partial proofs and giving a high-level overview of the proof search mechanism.
Type-logical grammars are a family of grammar formalisms built on a foundation of logic and type theory. Type-logical grammars originated when BIBREF4 introduced his Syntactic Calculus (called the Lambek calculus, L, by later authors). Though Lambek built on the work of BIBREF5 , BIBREF6 and others, Lambek's main innovation was to cast the calculus as a logic, giving a sequent calculus and showing decidability by means of cut elimination. This combination of linguistic and computational applications has proved very influential.
This chapter describes the underlying formalism of the theorem provers, as it is visible during an interactive proof trace, and presents the general strategy followed by the theorem provers.
The rest of this chapter is structured as follows. Section "Type-logical grammars" presents a general introduction to type-logical grammars and illustrates its basic concepts using the Lambek calculus, ending the section with some problems at the syntax-semantics interface for the Lambek calculus.
Type-logical grammars are a family of grammar formalisms built on a foundation of logic and type theory. Type-logical grammars originated when BIBREF4 introduced his Syntactic Calculus (called the Lambek calculus, L, by later authors). Though Lambek built on the work of BIBREF5 , BIBREF6 and others, Lambek's main innovation was to cast the calculus as a logic, giving a sequent calculus and showing decidability by means of cut elimination.
|
What formalism does Grail use?
|
The answers are shown as follows:
* a family of grammar formalisms built on a foundation of logic and type theory. Type-logical grammars originated when BIBREF4 introduced his Syntactic Calculus (called the Lambek calculus, L, by later authors).
|
null | false
| null |
Andrew James Weatherall (6 April 1963 – 17 February 2020) was an English musician, DJ, songwriter, producer and remixer. His career took him from being one of the key DJs in the acid house movement of the late 1980s to being a remixer of tracks by the likes of Happy Mondays, New Order, Björk, the Orb, the Future Sound of London and My Bloody Valentine.
His production work on Primal Scream's album Screamadelica, adding samples, loops and creating an influential mix of hard rock, house and rave, helped the record win the first ever Mercury Music Prize in 1992 and become one of the most celebrated albums of the 1990s.
|
Who was Andrew Weatherall
|
Andrew James Weatherall (6 April 1963 – 17 February 2020) was an English musician, DJ, songwriter, producer and remixer. His career took him from being one of the key DJs in the acid house movement of the late 1980s to being a remixer of tracks by the likes of Happy Mondays, New Order, Björk, the Orb, the Future Sound of London and My Bloody Valentine.
His production work on Primal Scream's album Screamadelica, adding samples, loops and creating an influential mix of hard rock, house and rave, helped the record win the first ever Mercury Music Prize in 1992 and become one of the most celebrated albums of the 1990s.
|
null | false
| null |
What are some typical attributes of a tortoiseshell cat?
|
Common attributes of a tortoiseshell (also referred to as "tortie") cat include golden eyes, a mottled coat that is mostly black and orange, and pink and/or black leathers. There is also an assumption that tortie cats are sassy, which is referenced as "tortietude."
|
|
null | false
| null |
What is the first evidence of a boomerang?
|
Earliest evidence of the boomerang in Australia was about 20,000 years ago.
|
|
null | false
| null |
What is Mexico?
|
Mexico is a country on the North American continent.
|
|
null | false
| 12
|
Table TABREF22 shows statistics of our datasets. It can be observed that average length of Fisher is much higher than 20 newsgroups and CSAT. Cumulative distribution of document lengths for each dataset is shown in Fig. FIGREF21. It can be observed that almost all of the documents in Fisher dataset have length more than 1000 words. For CSAT, more than 50% of the documents have length greater than 500 and for 20newsgroups only 10% of the documents have length greater than 500. Note that, for CSAT and 20newsgroups, there are few documents with length more than 5000.
Note that, for CSAT and 20newsgroups, there are few documents with length more than 5000.
|
Are there any documents with more than 5,000 words in the dataset collection?
|
Yes.
|
null | false
| null |
Classify each of the following as being opaque or transparent: Window, wall, lens, glass, wood, cement
|
Window: Transparent
Wall: Opaque
Lens: Transparent
Glass: Transparent
Wood: Opaque
Cement: Opaque
|
|
1611.01400
| false
| null |
We compare our system to a variety of baselines. (1) Rank by the number of times a citation is mentioned in the document. (2) Rank by the number of times the citation is cited in the literature (citation impact). (3) Rank using Google Scholar Related Articles. (4) Rank by the TF*IDF weighted cosine similarity. (5) Rank using a learning-to-rank model trained on text similarity rankings. The first two baseline systems are models where the values are ordered from highest to lowest to generate the ranking. The idea behind them is that the number of times a citation is mentioned in an article, or the citation impact may already be good indicators of their closeness. The text similarity model is trained using the same features and methods used by the annotation model, but trained using text similarity rankings instead of the author's judgments.
We compare our system to a variety of baselines. (1) Rank by the number of times a citation is mentioned in the document. (2) Rank by the number of times the citation is cited in the literature (citation impact). (3) Rank using Google Scholar Related Articles. (4) Rank by the TF*IDF weighted cosine similarity. (5) Rank using a learning-to-rank model trained on text similarity rankings. The first two baseline systems are models where the values are ordered from highest to lowest to generate the ranking. The idea behind them is that the number of times a citation is mentioned in an article, or the citation impact may already be good indicators of their closeness. The text similarity model is trained using the same features and methods used by the annotation model, but trained using text similarity rankings instead of the author's judgments.
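As a rough illustration of baseline (4), TF*IDF weighted cosine similarity ranking can be sketched in pure Python. This is a minimal sketch under assumed conventions (pre-tokenized documents, raw term-frequency times log inverse-document-frequency weights); the paper does not specify the baseline's exact tokenization or weighting scheme.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Compute TF*IDF weight vectors for a small corpus of token lists."""
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))
    idf = {t: math.log(n / df[t]) for t in df}
    vecs = []
    for doc in docs:
        tf = Counter(doc)
        vecs.append({t: tf[t] * idf[t] for t in tf})
    return vecs

def cosine(u, v):
    """Cosine similarity between two sparse vectors (term -> weight dicts)."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def rank_citations(citing_doc, citation_docs):
    """Return citation indices ordered by TF*IDF cosine similarity
    to the citing document, most similar first."""
    vecs = tfidf_vectors([citing_doc] + citation_docs)
    query, cites = vecs[0], vecs[1:]
    return sorted(range(len(cites)),
                  key=lambda i: cosine(query, cites[i]), reverse=True)
```

A citation sharing vocabulary with the citing document is ranked above one that shares none, which is the intuition this baseline encodes.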
|
what were the baselines?
|
The answers are shown as follows:
* Rank by the number of times a citation is mentioned in the document
* Rank by the number of times the citation is cited in the literature (citation impact).
* Rank using Google Scholar Related Articles.
* Rank by the TF*IDF weighted cosine similarity.
* Rank using a learning-to-rank model trained on text similarity rankings
|
1908.04917
| false
| null |
WAS: The architecture used in BIBREF3 without the audio input. The decoder outputs a Chinese character at each timestep. Other components are kept unchanged from the original implementation.
LipCH-Net-seq: For a fair comparison, we use a sequence-to-sequence with attention framework to replace the Connectionist temporal classification (CTC) loss BIBREF14 used in LipCH-Net BIBREF5 when converting pictures to pinyin.
CSSMCM-w/o video: To evaluate the necessity of video information when predicting tone, the video stream is removed when predicting tones and Chinese characters. In other words, video is only used when predicting the pinyin sequence. The tone is predicted from the pinyin sequence. Tone information and pinyin information work together to predict Chinese characters.
WAS: The architecture used in BIBREF3 without the audio input. The decoder outputs a Chinese character at each timestep. Other components are kept unchanged from the original implementation.
LipCH-Net-seq: For a fair comparison, we use a sequence-to-sequence with attention framework to replace the Connectionist temporal classification (CTC) loss BIBREF14 used in LipCH-Net BIBREF5 when converting pictures to pinyin.
CSSMCM-w/o video: To evaluate the necessity of video information when predicting tone, the video stream is removed when predicting tones and Chinese characters. In other words, video is only used when predicting the pinyin sequence. The tone is predicted from the pinyin sequence. Tone information and pinyin information work together to predict Chinese characters.
|
What was the previous state of the art model for this task?
|
The answers are shown as follows:
* WAS
* LipCH-Net-seq
* CSSMCM-w/o video
|
1701.06538
| false
| null |
Tables TABREF42 , TABREF43 , and TABREF44 show the results of our largest models, compared with published results. Our approach achieved BLEU scores of 40.56 and 26.03 on the WMT'14 En INLINEFORM0 Fr and En INLINEFORM1 De benchmarks. As our models did not use RL refinement, these results constitute significant gains of 1.34 and 1.12 BLEU score on top of the strong baselines in BIBREF3 . The perplexity scores are also better. On the Google Production dataset, our model achieved 1.01 higher test BLEU score even after training for only one sixth of the time.
As our models did not use RL refinement, these results constitute significant gains of 1.34 and 1.12 BLEU score on top of the strong baselines in BIBREF3 . The perplexity scores are also better. On the Google Production dataset, our model achieved 1.01 higher test BLEU score even after training for only one sixth of the time.
|
What improvement does the MOE model make over the SOTA on machine translation?
|
The answers are shown as follows:
* 1.34 and 1.12 BLEU score on top of the strong baselines in BIBREF3
* perplexity scores are also better
* On the Google Production dataset, our model achieved 1.01 higher test BLEU score
|
null | false
| null |
Classify the following as either a solid, liquid, or gas: carbon dioxide, water, oil, granite, aluminum, gold, oxygen
|
Carbon Dioxide: Gas
Water: Liquid
Oil: Liquid
Granite: Solid
Aluminum: Solid
Gold: Solid
Oxygen: Gas
|
|
null | false
| null |
Which characters belong to DC or Marvel Universe? Tigra, Cheetah
|
Tigra is Marvel, Cheetah is DC
|
|
null | false
| null |
What does AWS stand for?
|
AWS is an abbreviation for Amazon Web Services. AWS is a subsidiary of Amazon and delivers cloud computing services.
|
|
null | false
| null |
What are some arguments for decriminalizing people selling sex while criminalizing the purchasing of sex?
|
Sometimes minors are forced into prostitution, and should not be held criminally liable
Many people in the sex trade want to leave that life and a criminal record impedes their progress
Turning to prostitution is often the last resort of a person who is impoverished, and poverty should not be against the law
If law enforcement depresses the demand for the sale of sex (by going after those who would buy sex), then prostitution rates may drop
|
|
null | false
| 362
|
The proliferation of online social media has lately resulted in the democratization of online content sharing. Among other media, Twitter is very popular for research and application purposes due to its scale, representativeness and ease of public access to its content. However, tweets, which are short messages of up to 140 characters, pose several challenges to traditional Natural Language Processing (NLP) systems due to the creative use of characters and punctuation symbols, abbreviations and slang.
Named Entity Recognition (NER) is a fundamental step for most information extraction pipelines. Importantly, the terse and difficult text style of tweets presents serious challenges to NER systems, which are usually trained using more formal text sources such as newswire articles or Wikipedia entries that follow particular morpho-syntactic rules. As a result, off-the-shelf tools trained on such data perform poorly BIBREF0 . The problem becomes more intense as the number of entities to be identified increases, moving from the traditional setting of very few entity types (persons, organizations, times, locations) to problems with many more. Furthermore, most of the resources (e.g., software tools) and benchmarks for NER are for text written in English, while multilingual content online keeps increasing and English may no longer be the lingua franca of the Web. Therefore, having resources and benchmarks in other languages is crucial for enabling information access worldwide.
In this paper, we propose a new benchmark for the problem of NER for tweets written in French. The tweets were collected using the publicly available Twitter API and annotated with 13 types of entities. The annotators were native speakers of French and had previous experience in the task of NER. Overall, the generated dataset consists of INLINEFORM0 tweets, split into training and test parts.
The paper is organized in two parts. In the first, we discuss the data preparation steps (collection, annotation) and describe the proposed dataset. The dataset was first released in the framework of the CAp 2017 challenge, where 8 systems participated. The second part of the paper presents an overview of baseline systems and the approaches employed by the participating systems. We conclude with a discussion of the performance of Twitter NER systems and remarks for future work.
In this paper, we propose a new benchmark for the problem of NER for tweets written in French.
|
What problem does the new benchmark solve?
|
It solves the problem of Named Entity Recognition (NER) for tweets written in French.
|
null | false
| null |
What makes an organization standout?
|
When an organization puts its people ahead of the profits and cares about the society it creates a unique value and culture that produces long term results. There are many companies that are examples which include the TATA group, Reliance etc.
|
|
null | false
| null |
What are all the way one can travel from Boston to Philadelphia
|
There are 3 mutually exclusive means of transport.
1. Air
2. Land
3. Sea
Within each option one can list a range of options. For example:
1. Air: take a commercial flight; take a private jet; fly yourself; take a hot air balloon, etc.
2. Land: walk, run, drive, ride a bus, ride a bike, ride a motorcycle, hitch hike, use a roller skate, train, etc.
3. Sea: swim, ride a personal boat, take a commercial boat ride, etc.
|
|
null | false
| 22
|
Chatbots such as dialog and question-answering systems have a long history in AI and natural language processing. Early such systems were mostly built using markup languages such as AIML, handcrafted conversation generation rules, and/or information retrieval techniques BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 . Recent neural conversation models BIBREF4 , BIBREF5 , BIBREF6 are even able to perform open-ended conversations. However, since they do not use explicit knowledge bases and do not perform inference, they often suffer from generic and dull responses BIBREF5 , BIBREF7 . More recently, BIBREF8 and BIBREF9 proposed to use knowledge bases (KBs) to help generate responses for knowledge-grounded conversation. However, one major weakness of all existing chat systems is that they do not explicitly or implicitly learn new knowledge in the conversation process. This seriously limits the scope of their applications. In contrast, we humans constantly learn new knowledge in our conversations. Even if some existing systems can use very large knowledge bases either harvested from a large data source such as the Web or built manually, these KBs still miss a large number of facts (knowledge) BIBREF10 . It is thus important for a chatbot to continuously learn new knowledge in the conversation process to expand its KB and to improve its conversation ability.
In recent years, researchers have studied the problem of KB completion, i.e., inferring new facts (knowledge) automatically from existing facts in a KB. KB completion (KBC) is defined as a binary classification problem: Given a query triple, ( INLINEFORM0 , INLINEFORM1 , INLINEFORM2 ), we want to predict whether the source entity INLINEFORM3 and target entity INLINEFORM4 can be linked by the relation INLINEFORM5 . However, existing approaches BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 solve this problem under the closed-world assumption, i.e., INLINEFORM6 , INLINEFORM7 and INLINEFORM8 are all known to exist in the KB. This is a major weakness because it means that no new knowledge or facts may contain unknown entities or relations. Due to this limitation, KBC is clearly not sufficient for knowledge learning in conversations because in a conversation, the user can say anything, which may contain entities and relations that are not already in the KB.
In this paper, we remove this assumption of KBC, and allow all INLINEFORM0 , INLINEFORM1 and INLINEFORM2 to be unknown. We call the new problem open-world knowledge base completion (OKBC). OKBC generalizes KBC. Below, we show that solving OKBC naturally provides the ground for knowledge learning and inference in conversations. In essence, we formulate an abstract problem of knowledge learning and inference in conversations as a well-defined OKBC problem in the interactive setting.
From the perspective of knowledge learning in conversations, essentially we can extract two key types of information, true facts and queries, from the user utterances. Queries are facts whose truth values need to be determined. Note that we do not study fact or relation extraction in this paper as there is an extensive work on the topic. (1) For a true fact, we will incorporate it into the KB. Here we need to make sure that it is not already in the KB, which involves relation resolution and entity linking. After a fact is added to the KB, we may predict that some related facts involving some existing relations in the KB may also be true (not logical implications as they can be automatically inferred). For example, if the user says “Obama was born in USA,” the system may guess that (Obama, CitizenOf, USA) (meaning that Obama is a citizen of USA) could also be true based on the current KB. To verify this fact, it needs to solve a KBC problem by treating (Obama, CitizenOf, USA) as a query. This is a KBC problem because the fact (Obama, BornIn, USA) extracted from the original sentence has been added to the KB. Then Obama and USA are in the KB. If the KBC problem is solved, it learns a new fact (Obama, CitizenOf, USA) in addition to the extracted fact (Obama, BornIn, USA). (2) For a query fact, e.g., (Obama, BornIn, USA) extracted from the user question “Was Obama born in USA?” we need to solve the OKBC problem if any of “Obama, “BornIn”, or “USA" is not already in the KB.
We can see that OKBC is the core of a knowledge learning engine for conversation. Thus, in this paper, we focus on solving it. We assume that other tasks such as fact/relation extraction and resolution and guessing of related facts of an extracted fact are solved by other sub-systems.
We solve the OKBC problem by mimicking how humans acquire knowledge and perform reasoning in an interactive conversation. Whenever we encounter an unknown concept or relation while answering a query, we perform inference using our existing knowledge. If our knowledge does not allow us to draw a conclusion, we typically ask questions to others to acquire related knowledge and use it in inference. The process typically involves an inference strategy (a sequence of actions), which interleaves a sequence of processing and interactive actions. A processing action can be the selection of related facts, deriving inference chain, etc., that advances the inference process. An interactive action can be deciding what to ask, formulating a suitable question, etc., that enable us to interact. The process helps grow the knowledge over time and the gained knowledge enables us to communicate better in the future. We call this lifelong interactive learning and inference (LiLi). Lifelong learning is reflected by the facts that the newly acquired facts are retained in the KB and used in inference for future queries, and that the accumulated knowledge in addition to the updated KB including past inference performances are leveraged to guide future interaction and learning. LiLi should have the following capabilities:
This setting is ideal for many NLP applications like dialog and question-answering systems that naturally provide the scope for human interaction and demand real-time inference.
LiLi starts with the closed-world KBC approach path-ranking (PR) BIBREF11 , BIBREF17 and extends KBC in a major way to open-world knowledge base completion (OKBC). For a relation INLINEFORM0 , PR works by enumerating paths (except single-link path INLINEFORM1 ) between entity-pairs linked by INLINEFORM2 in the KB and use them as features to train a binary classifier to predict whether a query INLINEFORM3 should be in the KB. Here, a path between two entities is a sequence of relations linking them. In our work, we adopt the latest PR method, C-PR BIBREF16 and extend it to make it work in the open-world setting. C-PR enumerates paths by performing bidirectional random walks over the KB graph while leveraging the context of the source-target entity-pair. We also adopt and extend the compositional vector space model BIBREF20 , BIBREF21 with continual learning capability for prediction.
Given an OKBC query ( INLINEFORM0 , INLINEFORM1 , INLINEFORM2 ) (e.g., “(Obama, CitizenOf, USA), which means whether Obama a citizen of USA), LiLi interacts with the user (if needed) by dynamically formulating questions (see the interaction example in Figure 1, which will be further explained in §3) and leverages the interactively acquired knowledge (supporting facts (SFs) in the figure) for continued inference. To do so, LiLi formulates a query-specific inference strategy and executes it. We design LiLi in a Reinforcement Learning (RL) setting that performs sub-tasks like formulating and executing strategy, training a prediction model for inference, and knowledge retention for future use. To the best of our knowledge, our work is the first to address the OKBC problem and to propose an interactive learning mechanism to solve it in a continuous or lifelong manner. We empirically verify the effectiveness of LiLi on two standard real-world KBs: Freebase and WordNet. Experimental results show that LiLi is highly effective in terms of its predictive performance and strategy formulation ability.
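The path enumeration at the heart of PR-style methods can be sketched roughly as follows. This is a simplified, hypothetical sketch: it samples random walks over a KB graph augmented with inverse edges and keeps the relation sequences of walks that reach the target, excluding single-link paths as PR does. C-PR's actual bidirectional, context-leveraging walks are more involved.

```python
import random
from collections import defaultdict

def enumerate_paths(kb_edges, source, target, max_len=3, n_walks=200, seed=0):
    """Sample random walks from `source` over the KB graph (with inverse
    edges added for bidirectional traversal) and collect the relation
    sequences of walks reaching `target`; single-link paths are excluded,
    as in path ranking."""
    rng = random.Random(seed)
    out = defaultdict(list)  # entity -> [(relation, neighbor)]
    for h, r, t in kb_edges:
        out[h].append((r, t))
        out[t].append((r + "^-1", h))  # inverse edge
    paths = set()
    for _ in range(n_walks):
        node, rels = source, []
        for _ in range(max_len):
            if not out[node]:
                break
            r, node = rng.choice(out[node])
            rels.append(r)
            if node == target and len(rels) > 1:
                paths.add(tuple(rels))
                break
    return paths
```

Each distinct relation sequence found this way would serve as one binary feature for the relation classifier in a PR-style model.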
In this paper, we remove this assumption of KBC, and allow all INLINEFORM0 , INLINEFORM1 and INLINEFORM2 to be unknown. We call the new problem open-world knowledge base completion (OKBC).
|
What new problem does the paper propose?
|
A new problem of open-world knowledge base completion (OKBC).
|
null | false
| 130
|
We downloaded questions and answers via the Smartvote API. The downloaded data cover 175 communal, cantonal and national elections between 2011 and 2020.
All candidates in an election who participate in Smartvote are asked the same set of questions, but depending on the locale they see translated versions of the questions. They can answer each question with either `yes', `rather yes', `rather no', or `no'. They can supplement each answer with a comment of at most 500 characters.
The questions asked on Smartvote have been edited by a team of political scientists. They are intended to cover a broad range of political issues relevant at the time of the election. A detailed documentation of the design of Smartvote and the editing process of the questions is provided by BIBREF12.
We merged the two labels on each pole into a single label: `yes' and `rather yes' were combined into `favor'; `rather no', or `no' into `against`. This improves the consistency of the data and the comparability to previous stance detection datasets.
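The label merging described above amounts to a simple mapping over the four raw answer options (a minimal sketch; the function and variable names are illustrative, not the dataset's actual field names):

```python
# Collapse the two labels on each pole into a single binary stance label.
LABEL_MAP = {
    "yes": "favor",
    "rather yes": "favor",
    "rather no": "against",
    "no": "against",
}

def merge_labels(answers):
    """Map each raw Smartvote answer to its merged stance label."""
    return [LABEL_MAP[a] for a in answers]

print(merge_labels(["yes", "rather no", "no", "rather yes"]))
# ['favor', 'against', 'against', 'favor']
```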
We did not further preprocess the text of the comments.
As the API does not provide the language of comments, we employed a language identifier to automatically annotate this information. We used the langdetect library BIBREF13. For each responder we classified all the comments jointly, assuming that responders did not switch code during the answering of the questionnaire.
We applied the identifier in a two-step approach. In the first run we allowed the identifier to output all 55 languages that it supports out of the box, plus Romansh, the fourth official language in Switzerland. We found that no Romansh comments were detected and that all unexpected outputs were misclassifications of German, French or Italian comments. We further concluded that little or no Swiss German comments are in the dataset: If they were, some of them would have manifested themselves in the form of misclassifications (e.g. as Dutch).
In the second run, drawing from these conclusions, we restricted the identifier's output to English, French, German and Italian.
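The two-step restriction can be sketched in Python. Here `detect_langs` is a hypothetical stand-in for a probabilistic language identifier (such as langdetect's `detect_langs`), returning fixed (language, probability) pairs; the helper simply re-ranks its output within an allowed set.

```python
# Sketch of the two-step language identification described above.
# detect_langs is a hypothetical stand-in for a probabilistic identifier
# (e.g. langdetect's detect_langs); here it returns fixed scores.

ALLOWED = {"en", "fr", "de", "it"}

def detect_langs(text):
    # Placeholder scores; a real identifier would compute these from text.
    return [("de", 0.6), ("nl", 0.3), ("fr", 0.1)]

def identify(text, allowed=None):
    """Return the most probable language, optionally restricted to a set."""
    candidates = detect_langs(text)
    if allowed is not None:
        candidates = [(lang, p) for lang, p in candidates if lang in allowed]
    if not candidates:
        return None
    return max(candidates, key=lambda c: c[1])[0]

# First run: unrestricted, to survey which languages actually occur.
first_pass = identify("Ein Beispielkommentar")
# Second run: restricted to the languages found during the survey.
second_pass = identify("Ein Beispielkommentar", allowed=ALLOWED)
```

In the first pass, unexpected outputs (e.g. Dutch) would surface as misclassifications and inform which languages to allow in the second pass.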
We pre-filtered the questions and answers to improve the quality of the dataset. To keep the domain of the data surveyable, we set a focus on national-level questions. Therefore, all questions and corresponding answers pertaining to national elections were included.
In the context of communal and cantonal elections, candidates have answered both local questions and a subset of the national questions. Of those elections, we only considered answers to the questions that also had been asked in a national election. Furthermore, they were only used to augment the training set while the validation and test sets were restricted to answers from national elections.
We discarded the less than 20 comments classified as English. Furthermore, instances that met any of the following conditions were filtered from the dataset:
Question is not a closed question or does not address a clearly defined political issue.
No comment was submitted by the candidate or the comment is shorter than 50 characters.
Comment starts with “but” or a similar indicator that the comment is not a self-contained statement.
Comment contains a URL.
In total, a fifth of the original comments were filtered out.
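A minimal sketch of the comment-level filter conditions listed above (the question-level condition is omitted); the URL pattern and the list of non-self-contained indicators are assumptions, not the authors' implementation.

```python
import re

# Hypothetical re-implementation of the comment-level filters described
# above; the URL regex and "but"-style indicators are assumptions.
NON_SELF_CONTAINED = {"but", "however"}
URL_PATTERN = re.compile(r"https?://|www\.")

def keep_comment(comment):
    """Return True if the comment passes all filter conditions."""
    if comment is None or len(comment) < 50:
        return False
    words = comment.strip().split()
    if words and words[0].lower().strip(",.") in NON_SELF_CONTAINED:
        return False
    if URL_PATTERN.search(comment):
        return False
    return True
```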
The questions have been organized by the Smartvote editors into categories (such as “Economy”). We further consolidated the pre-defined categories into 12 broad topics (Table TABREF7).
The dataset is shared under a CC BY-NC 4.0 license. Copyright remains with www.smartvote.ch.
Given the sensitive nature of the data, we increase the anonymity of the data by hashing the respondents' IDs. No personal attributes of the respondents, such as their party affiliation, are included in the dataset. We provide a data statement BIBREF15 in Appendix SECREF8.
In the first run we allowed the identifier to output all 55 languages that it supports out of the box, plus Romansh, the fourth official language in Switzerland. We found that no Romansh comments were detected and that all unexpected outputs were misclassifications of German, French or Italian comments. We further concluded that little or no Swiss German comments are in the dataset; otherwise, some of them would have manifested themselves via misclassifications (e.g. as Dutch). In the second run, drawing from these conclusions, we restricted the identifier’s set of choices to English, French, German and Italian.
|
What is the two-step approach used by the authors to apply the identifier?
|
First, the identifier was allowed to output all supported languages in order to draw conclusions about which languages actually occur; second, drawing from these conclusions, the identifier's set of choices was restricted to English, French, German and Italian.
|
null | false
| null |
Multipurpose trees or multifunctional trees are trees that are deliberately grown and managed for more than one output. They may supply food in the form of fruit, nuts, or leaves that can be used as a vegetable; while at the same time supplying firewood, adding nitrogen to the soil, or supplying some other combination of multiple outputs. "Multipurpose tree" is a term common to agroforestry, particularly when speaking of tropical agroforestry where the tree owner is a subsistence farmer.
While all trees can be said to serve several purposes, such as providing habitat, shade, or soil improvement, multipurpose trees have a greater impact on a farmer's well-being because they fulfill more than one basic human need. In most cases multipurpose trees have a primary role, such as being part of a living fence, or a windbreak, or used in an alley cropping system. In addition to this they will have one or more secondary roles, most often supplying a family with food or firewood, or both.
When a multipurpose tree is planted, a number of needs and functions can be fulfilled at once. They may be used as a windbreak, while also supplying a staple food for the owner. They may be used as fencepost in a living fence, while also being the main source of firewood for the owner. They may be intercropped into existing fields, to supply nitrogen to the soil, and at the same time serve as a source of both food and firewood.
Common multipurpose trees of the tropics include:
Gliricidia (Gliricidia sepium) – the most common tree used for living fences in Central America, firewood, fodder, fixing nitrogen into the soil.
Moringa (Moringa oleifera) – edible leaves, pods and beans, commonly used for animal forage and shade (it does not fix nitrogen as is commonly believed)
Coconut palm (Cocos nucifera) – used for food, purified water (juice from inside the coconut), roof thatching, firewood, shade.
Neem (Azadirachta indica) – limited use as insect repellent, antibiotic, adding nitrogen to the soil, windbreaks, biomass production for use as mulch, firewood.
Ideally most trees found on tropical farms should be multipurpose, and provide more to the farmer than simply shade and firewood. In most cases they should be nitrogen-fixing legumes, or trees that greatly increase the farmer's food security.
|
Which of the tree species mentioned in the text are used as a source of food for humans?
|
Moringa (Moringa oleifera) and Coconut palm (Cocos nucifera) are used as a source of food for humans.
|
null | false
| null |
Why does Israel offer huge discounts to Chicago Booth students' spring break trips?
|
Israel wants to sell their country to potential future leaders of the U.S. They know that we write them hefty checks every year and they don't want that to stop. Israel knows that taking Chicago Booth students around their country and providing a narrative from their own perspective will likely get those students to see their point of view and be on their side in the future. It's actually quite a smart initiative.
|
|
null | false
| null |
Is the Earth round?
|
The planet Earth is generally considered to be round, although it is not a perfect sphere. It is slightly flattened at the poles, making the Earth slightly wider around the equator (an oblate spheroid).
|
|
null | false
| null |
Bon Iver (/ˌboʊn iːˈvɛər/ BOHN ee-VAIR) is an American indie folk band founded in 2006 by singer-songwriter Justin Vernon.
|
Who is the singer in Bon Iver?
|
Justin Vernon is the singer-songwriter in Bon Iver.
|
null | false
| 39
|
Typical speech-to-text translation systems pipeline automatic speech recognition (ASR) and machine translation (MT) BIBREF0 . But high-quality ASR requires hundreds of hours of transcribed audio, while high-quality MT requires millions of words of parallel text—resources available for only a tiny fraction of the world's estimated 7,000 languages BIBREF1 . Nevertheless, there are important low-resource settings in which even limited speech translation would be of immense value: documentation of endangered languages, which often have no writing system BIBREF2 , BIBREF3 ; and crisis response, for which text applications have proven useful BIBREF4 , but only help literate populations. In these settings, target translations may be available. For example, ad hoc translations may be collected in support of relief operations. Can we do anything at all with this data?
In this exploratory study, we present a speech-to-text translation system that learns directly from source audio and target text pairs, and does not require intermediate ASR or MT. Our work complements several lines of related recent work. For example, duong2015attentional and antonios+chiang+duongEMNLP2016 presented models that align audio to translated text, but neither used these models to try to translate new utterances (in fact, the latter model cannot make such predictions). berard+etalnipsworkshop16 did develop a direct speech to translation system, but presented results only on a corpus of synthetic audio with a small number of speakers. Finally, Adams et al. adams+etalinterspeech16,adams+etalemnlp16 targeted the same low-resource speech-to-translation task, but instead of working with audio, they started from word or phoneme lattices. In principle these could be produced in an unsupervised or minimally-supervised way, but in practice they used supervised ASR/phone recognition. Additionally, their evaluation focused on phone error rate rather than translation. In contrast to these approaches, our method can make translation predictions for audio input not seen during training, and we evaluate it on real multi-speaker speech data.
Our simple system (§ SECREF2 ) builds on unsupervised speech processing BIBREF5 , BIBREF6 , BIBREF7 , and in particular on unsupervised term discovery (UTD), which creates hard clusters of repeated word-like units in raw speech BIBREF8 , BIBREF9 . The clusters do not account for all of the audio, but we can use them to simulate a partial, noisy transcription, or pseudotext, which we pair with translations to learn a bag-of-words translation model. We test our system on the CALLHOME Spanish-English speech translation corpus BIBREF10 , a noisy multi-speaker corpus of telephone calls in a variety of Spanish dialects (§ SECREF3 ). Using the Spanish speech as the source and English text translations as the target, we identify several challenges in the use of UTD, including low coverage of audio and difficulty in cross-speaker clustering (§ SECREF4 ). Despite these difficulties, we demonstrate that the system learns to translate some content words (§ SECREF5 ).
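The bag-of-words translation model learned from (pseudotext, translation) pairs can be illustrated with a simple co-occurrence table between UTD cluster IDs and target words. This is a hedged stand-in with invented data, not the authors' exact model.

```python
from collections import Counter, defaultdict

# Illustrative sketch: score target words for each pseudotext unit
# (UTD cluster ID) by normalized co-occurrence. All data is invented.
pairs = [
    (["c17", "c42"], ["do", "you", "want", "water"]),
    (["c17"], ["you", "want", "coffee"]),
    (["c42", "c9"], ["the", "water", "is", "cold"]),
]

cooc = defaultdict(Counter)   # cluster -> target-word counts
unit_count = Counter()        # how often each cluster occurs

for pseudotext, translation in pairs:
    for unit in set(pseudotext):
        unit_count[unit] += 1
        cooc[unit].update(set(translation))

def translate(unit, k=1):
    """Top-k target words for a pseudotext unit, by P(word | unit)."""
    scores = {w: c / unit_count[unit] for w, c in cooc[unit].items()}
    return sorted(scores, key=lambda w: (-scores[w], w))[:k]
```

Content words that co-occur consistently with a cluster (here, "water" with "c42") receive the highest scores, mirroring how the system learns to translate some content words.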
Our work complements several lines of related recent work.
|
What's the relationship between the authors' work and related recent work?
|
Their work complements several lines of related recent work.
|
null | false
| null |
In 2015, the Ryman underwent another US$14,000,000 (equivalent to $16,004,773 in 2021) renovation and expansion. Much of the 1994 expansion was gutted and remodeled. The original building received only minor touch-ups and remained in use throughout the construction. The renovation and expansion includes more lobby space, plus expanded restrooms, concessions, and a gift shop. A new quick-service restaurant was added, called "Cafe Lula" and named in memory of Lula C. Naff. (The cafe closed in 2020 due to the COVID-19 pandemic, and was not reopened.) New renovations in 2023 will include a new gift shop in the previous Café Lula location.
Also added in the 2015 renovations was a 100-seat theater. Here a short holographic film is shown that serves as the first exhibit on the building's daily self-guided tours. Entitled The Soul Of Nashville, the film features an actress portraying Naff in presenting the history of the Ryman. It also features an original song performed by Darius Rucker, Sheryl Crow, Vince Gill, and the Fisk Jubilee Singers.
|
From the passage provided, extract the cost of the Ryman Auditorium's most recent renovation and expansion.
|
In 2015, the Ryman underwent another US$14,000,000 (equivalent to $16,004,773 in 2021) renovation and expansion.
|
null | false
| null |
Chile, officially the Republic of Chile, is a country located in western South America. It is the southernmost country in the world and closest to Antarctica, stretching along a narrow strip of land between the Andes Mountains and the Pacific Ocean. With an area of 756,096 square kilometers (291,930 sq mi) and a population of 17.5 million as of 2017, Chile shares borders with Peru to the north, Bolivia to the northeast, Argentina to the east, and the Drake Passage to the south. The country also controls several Pacific islands, including Juan Fernández, Isla Salas y Gómez, Desventuradas, and Easter Island, and claims about 1,250,000 square kilometers (480,000 sq mi) of Antarctica as the Chilean Antarctic Territory.[nb 2] The capital and largest city of Chile is Santiago, and the national language is Spanish.
|
Is the capital city and the largest city of Chile same or different?
|
They are the same: Santiago is both the capital and the largest city of Chile.
|
null | false
| null |
Name the person from Bangladesh who won Pulitzer Prize 2022
|
Fahmida Azim (illustrator and story writer), who won the 2022 Pulitzer Prize for Illustrated Reporting and Commentary.
|
|
null | false
| null |
What are some high-potential cryptos with low marketcaps?
|
1) Morpheus Network - $MNW
2) Greenheart - $CBD
3) Realio - $RIO
4) Blocksquare - $BST
5) Theopetra - $THEO
6) Constellation - $DAG
7) Render - $RNDR
8) Alkimi - $ADS
9) Verasity - $VRA
10) Chainge Finance - $CHNG
|
|
null | false
| null |
In 1974, Judith Stern joined the faculty of the University of California, Davis as an assistant professor in the Department of Nutrition. There her early research centered on differences in the adipose tissue of obese individuals – using Zucker rats as a research model. Eventually her research program grew to include human studies, focused on various obesity treatment regimens.
In 1995, Stern was elected to the National Academy of Medicine. At various points she was elected president of the NAASO (1992–1993, now The Obesity Society) and the American Society for Clinical Nutrition. Stern co-founded the American Obesity Association, an advocacy group that lobbied in favor of obesity research and treatment.
|
Please list a few of the professional organizations that Judith Stern participated in during her career.
|
Some of the professional organizations that Judith Stern created or participated in are the National Academy of Medicine, the Obesity Society, and the American Society for Clinical Nutrition.
|
null | false
| null |
Time travel is the concept of movement between certain points in time, analogous to movement between different points in space by an object or a person, typically with the use of a hypothetical device known as a time machine. Time travel is a widely recognized concept in philosophy and fiction, particularly science fiction. The idea of a time machine was popularized by H. G. Wells' 1895 novel The Time Machine.
|
Given this paragraph about time travel, how practical is it
|
In a limited sense, time travel is real, but it is not quite what you have probably seen in the movies. Under certain conditions, such as very high speeds or strong gravity, it is possible to experience time passing at a different rate than one second per second. There are important practical reasons to understand this real-world form of time travel, though a machine that carries people to the distant past or future remains firmly in the realm of fiction.
|
null | false
| 322
|
OWL BIBREF15 is the de-facto standard for machine processable and interoperable ontologies on the SW. In its second version, OWL is equivalent to the description logic $\mathcal {SROIQ}(D)$. Such expressiveness has a higher computational cost but allows the development of interesting applications such as automated reasoning BIBREF16. OWL 2 ontologies consist of the following three different syntactic categories:
Entities, such as classes, properties, and individuals, are identified by IRIs. They form the primitive terms and constitute the basic elements of an ontology. Classes denote sets of individuals and properties link two individuals or an individual and a data value along a property. For example, a class :Animal can be used to represent the set of all animals. Similarly, the object property :childOf can be used to represent the parent-child relationship and the data property :birthDate assigns a particular birth date to an individual. Finally, the individual :Alice can be used to represent a particular person called "Alice".
Expressions represent complex notions in the domain being described. For example, a class expression describes a set of individuals in terms of the restrictions on the individuals' characteristics. OWL offers existential (SOME) and universal (ONLY) quantifiers and a variety of typical logical constructs, such as negation (NOT), other Boolean operators (OR, AND), and more constructs such as cardinality restrictions (MIN, MAX, EXACTLY) and value restrictions (VALUE), to create class expressions. Such constructs can be combined in arbitrarily complex class expressions CE according to the following grammar
where A is an atomic class, C and D are class expressions, R is an object property, a as well as a$_1$ to a$_m$ with $\texttt {m} \ge 1$ are individuals, and $\texttt {n} \ge 0$ is an integer.
Axioms are statements that are asserted to be true in the domain being described. Usually, one distinguishes between (1) terminological and (2) assertional axioms. (1) Terminological axioms are used to describe the structure of the domain, i.e., the relationships between classes and class expressions. For example, using a subclass axiom (SubClassOf:), one can state that the class :Koala is a subclass of the class :Animal. Classes can be subclasses of other classes, thus creating a taxonomy. In addition, axioms can arrange properties in hierarchies (SubPropertyOf:) and can assign various characteristics (Characteristics:) such as transitivity or reflexivity to them. (2) Assertional axioms formulate facts about individuals, especially the classes they belong to and their mutual relationships. OWL can be expressed in various syntaxes, the most common computer-readable syntax being RDF/XML. A more human-readable format is the Manchester OWL Syntax (MOS) BIBREF17. For example, the class expression that models people who work at a university that is located in Spain could be as follows in MOS:
Likewise, expressing that every professor works at a university would read as
OWL 2 ontologies consist of Entities, Expressions and Axioms as introduced in subsec:owl. While both expressions and axioms can be mapped to RDF, i.e. into a set of RDF triples, using this mapping and applying the triple-based verbalization on it would lead to a non-human understandable text in many cases. For example, the intersection of two classes :A and :B can be represented in RDF by the six triples
The verbalization of these triples would result in Something that is a class and the intersection of something whose first is A and whose rest is something whose first is B and whose rest is nil., which is obviously far away from how a human would express it in NL. Therefore, generating NL from OWL requires a different procedure based on its syntactic categories, OWL expressions and OWL axioms. We show the general rules for each of them in the following.
In theory, class expressions can be arbitrarily complex, but as it turned out in some previous analysis BIBREF22, in practice they seldom arise and can be seen as some corner cases. For example, an ontology could contain the following class expression about people and their birth place:
Class expressions do have a tree-like structure and can simply be parsed into a tree by means of the binary OWL class expressions constructors contained in it. For our example, this would result in the following tree:
AND
├── Person
└── SOME birthPlace
    └── AND
        ├── City
        └── VALUE locatedIn France
Such a tree can be traversed in post-order, i.e. sub-trees are processed before their parent nodes recursively. For the sake of simplicity, we only process sub-trees that represent proper class expression in our example, i.e. we omit birthPlace, locatedIn, and France. Moreover and again for simplicity, we'll explain the transformation process by starting from the right-hand side of the tree. Thus, in our example we begin with the class expression City which is transformed to everything that is a city and locatedIn VALUE France resulting in everything that is located in France by application of a rule. Both class expressions are used in the conjunction City AND locatedIn VALUE France. Thus, the next step would be to merge both phrases. An easy way is to use the coordinating conjunction and, i.e. everything that is a city and everything that is located in France. Although the output of this transformation is correct, it still contains unnecessarily redundant information. Therefore, we apply the aggregation procedure described in subsec:grouping, i.e. we get everything that is a city and located in France. Yet, the aggregation can still be improved: if there is any atomic class in the conjunction, we know that this is more specific than the placeholder everything. Thus, we can replace it by the plural form of the class, finally resulting in cities that are located in France. The same procedure is applied for its parent class expression being the existential restriction
This will be transformed to everything whose birth place is a city that is located in France. Note that we used the singular form here, assuming that the property birthPlace is supposed to be functional in the ontology. In the last step, we process the class expression Person, which gives us everything that is a person. Again, due to the conjunction we merge this result with the previous one, such that in the end we get people whose birth place is a city that is located in France.
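The post-order transformation just described can be sketched as a recursive function over the expression tree. Only AND / SOME / VALUE and atomic classes are covered here, and all surface forms (plurals, property phrasings) are hand-coded assumptions rather than the paper's actual realization rules.

```python
# Minimal sketch of the post-order class-expression verbalizer described
# above. Surface forms are hand-coded assumptions for this one example.
SINGULAR = {"Person": "person", "City": "city"}
PLURAL = {"Person": "people", "City": "cities"}
SOME_PHRASE = {"birthPlace": "whose birth place is a"}
VALUE_PHRASE = {"locatedIn": "located in"}

def verbalize(node, plural=True):
    op = node[0]
    if op == "ATOM":
        return (PLURAL if plural else SINGULAR)[node[1]]
    if op == "VALUE":  # e.g. locatedIn VALUE France
        copula = "that are" if plural else "that is"
        return f"{copula} {VALUE_PHRASE[node[1]]} {node[2]}"
    if op == "SOME":   # existential restriction; filler goes singular
        return f"{SOME_PHRASE[node[1]]} {verbalize(node[2], plural=False)}"
    if op == "AND":    # sub-trees first (post-order), then merged
        return " ".join(verbalize(child, plural) for child in node[1:])
    raise ValueError(f"unsupported constructor: {op}")

expr = ("AND", ("ATOM", "Person"),
        ("SOME", "birthPlace",
         ("AND", ("ATOM", "City"),
                 ("VALUE", "locatedIn", "France"))))
sentence = verbalize(expr)
```

Running the sketch on the example tree yields the target phrase from the walkthrough.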
As we described in sec:owl, OWL axioms can roughly be categorized into terminological and assertional axioms. Therefore, we have different procedures for processing each category:
Assertional Axioms (ABox Axioms) - Most assertional axioms assert individuals to atomic classes or relate individuals to another individual resp. literal value. For example, axioms about the type as well as birth place and birth date of Albert Einstein can be expressed by
Those axioms can simply be rewritten as triples, thus, we can use the same procedure as we do for triples (sec:singletriple). Converting them into NL gives us Albert Einstein is a person whose birth place is Ulm and whose birth date is 14 March 1879. OWL also allows for assigning an individual to a complex class expression. In that case we'll use our conversion of OWL class expressions as described in subsec:owlce.
Terminological Axioms (TBox Axioms) - According to power2010, most of the terminological axioms used in ontologies are subclass axioms. By definition, subclass and superclass can be arbitrarily complex class expressions $\texttt{CE}_1$ and $\texttt{CE}_2$, i.e. $\texttt{CE}_1$ SubClassOf: $\texttt{CE}_2$, but in practice it is quite often used with only atomic classes as subclasses, or, even simpler, with the superclass also being an atomic class. Nevertheless, we support any kind of subclass axiom and all other logical OWL axioms in LD2NL. For simplicity, we outline here how we verbalize subclass axioms in LD2NL. The semantics of a subclass axiom denotes that every individual of the subclass also belongs to the superclass. Thus, the verbalization seems to be relatively straightforward, i.e. we verbalize both class expressions and follow the template: every $\rho(\texttt{CE}_1)$ is a $\rho(\texttt{CE}_2)$. Obviously, this works pretty well for subclass axioms with atomic classes only. For example, the axiom
is verbalized as every scientist is a person.
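The subclass-axiom template can be sketched directly; the mapping from class names to singular labels is a hand-coded assumption for this example.

```python
# Sketch of the subclass-axiom template described above. The mapping
# from class names to singular labels is a hand-coded assumption.
LABEL = {":Scientist": "scientist", ":Person": "person"}

def verbalize_subclass(sub, sup):
    """Render 'sub SubClassOf: sup' as 'every <sub> is a <sup>'."""
    return f"every {LABEL[sub]} is a {LABEL[sup]}"
```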
This work was supported by the German Federal Ministry of Transport and Digital Infrastructure (BMVI) through the projects LIMBO (no. 19F2029I) and OPAL (no. 19F2028A). This work was supported by the German Federal Ministry of Economics and Technology (BMWI) in the projects RAKI (no. 01MD19012D) as well as by the BMBF project SOLIDE (no. 13N14456).
Entities, such as classes, properties, and individuals, are identified by IRIs. They form the primitive terms and constitute the basic elements of an ontology.
|
What do the Entities mean in OWL?
|
Entities, such as classes, properties, and individuals, are identified by IRIs. They form the primitive terms and constitute the basic elements of an ontology.
|
null | false
| 107
|
We re-implemented five keyphrase extraction models: the first two are commonly used as baselines, the third is a resource-lean unsupervised graph-based ranking approach, and the last two were among the top performing systems in the SemEval-2010 keyphrase extraction task BIBREF0 . We note that two of the systems are supervised and rely on the training set to build their classification models. Document frequency counts are also computed on the training set. Stemming is applied to allow more robust matching. The different keyphrase extraction models are briefly described below:
Each model uses a distinct keyphrase candidate selection method that provides a trade-off between the highest attainable recall and the size of set of candidates. Table summarizes these numbers for each model. Syntax-based selection heuristics, as used by TopicRank and WINGNUS, are better suited to prune candidates that are unlikely to be keyphrases. As for KP-miner, removing infrequent candidates may seem rather blunt, but it turns out to be a simple yet effective pruning method when dealing with long documents. For details on how candidate selection methods affect keyphrase extraction, please refer to BIBREF16 .
Apart from TopicRank that groups similar candidates into topics, the other models do not have any redundancy control mechanism. Yet, recent work has shown that up to 12% of the overall error made by state-of-the-art keyphrase extraction systems were due to redundancy BIBREF6 , BIBREF17 . Therefore as a post-ranking step, we remove redundant keyphrases from the ranked lists generated by all models. A keyphrase is considered redundant if it is included in another keyphrase that is ranked higher in the list.
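The post-ranking redundancy filter described above can be sketched as follows. Containment is checked at the token level; the stemming applied in the actual systems is omitted here for simplicity.

```python
def contains(outer, inner):
    """True if token list `inner` occurs contiguously in `outer`."""
    n, m = len(outer), len(inner)
    return any(outer[i:i + m] == inner for i in range(n - m + 1))

def remove_redundant(ranked):
    """Drop a keyphrase if it is included in a higher-ranked one."""
    kept = []
    for phrase in ranked:  # ranked best-first
        words = phrase.split()
        if not any(contains(k.split(), words) for k in kept):
            kept.append(phrase)
    return kept
```

For example, given a ranked list where "machine translation" appears below "neural machine translation", the shorter phrase is removed as redundant.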
We re-implemented five keyphrase extraction models: the first two are commonly used as baselines, the third is a resource-lean unsupervised graph-based ranking approach, and the last two were among the top performing systems in the SemEval-2010 keyphrase extraction task (Kim et al., 2010).
|
How many models are tested?
|
Five.
|
null | false
| null |
you are riding your bicycle to the store and your bicycle has a basket. which of the following items can you take back home? a toothbrush, a TV, a bar of soap, a pair of pants, a pair of skis, a loaf of bread, a tiger, a phone charger
|
You can take back the toothbrush, the bar of soap, the pair of pants, the loaf of bread, and the phone charger.
|
|
1902.06843
| false
| null |
The recent advancements in deep neural networks, specifically for image analysis task, can lead to determining demographic features such as age and gender BIBREF13 . We show that by determining and integrating heterogeneous set of features from different modalities – aesthetic features from posted images (colorfulness, hue variance, sharpness, brightness, blurriness, naturalness), choice of profile picture (for gender, age, and facial expression), the screen name, the language features from both textual content and profile's description (n-gram, emotion, sentiment), and finally sociability from ego-network, and user engagement – we can reliably detect likely depressed individuals in a data set of 8,770 human-annotated Twitter users.
We show that by determining and integrating heterogeneous set of features from different modalities – aesthetic features from posted images (colorfulness, hue variance, sharpness, brightness, blurriness, naturalness), choice of profile picture (for gender, age, and facial expression), the screen name, the language features from both textual content and profile's description (n-gram, emotion, sentiment), and finally sociability from ego-network, and user engagement – we can reliably detect likely depressed individuals in a data set of 8,770 human-annotated Twitter users.
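One of the aesthetic image features listed above, colorfulness, is commonly computed with the Hasler–Süsstrunk metric; the sketch below applies it, together with a simple mean-luma brightness measure, to a list of RGB tuples standing in for decoded image pixels. This illustrates the kind of feature computed, not the authors' exact implementation.

```python
import math

def colorfulness(pixels):
    """Hasler-Suesstrunk colorfulness metric over (R, G, B) tuples."""
    rg = [r - g for r, g, b in pixels]               # red-green opponent
    yb = [0.5 * (r + g) - b for r, g, b in pixels]   # yellow-blue opponent
    def mean(xs): return sum(xs) / len(xs)
    def std(xs):
        m = mean(xs)
        return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))
    return math.sqrt(std(rg) ** 2 + std(yb) ** 2) \
        + 0.3 * math.sqrt(mean(rg) ** 2 + mean(yb) ** 2)

def brightness(pixels):
    """Mean luma (Rec. 601 weights), one simple brightness measure."""
    return sum(0.299 * r + 0.587 * g + 0.114 * b
               for r, g, b in pixels) / len(pixels)
```

A uniformly gray image scores zero colorfulness, while saturated, varied colors score high.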
|
What is the source of the visual data?
|
Profile pictures from the Twitter users' profiles.
|
null | false
| 32
|
Propaganda has been tackled mostly at the article level. BIBREF3 created a corpus of news articles labelled as propaganda, trusted, hoax, or satire. BIBREF4 experimented with a binarized version of that corpus: propaganda vs. the other three categories. BIBREF5 annotated a large binary corpus of propagandist vs. non-propagandist articles and proposed a feature-based system for discriminating between them. In all these cases, the labels were obtained using distant supervision, assuming that all articles from a given news outlet share the label of that outlet, which inevitably introduces noise BIBREF6.
A related field is that of computational argumentation which, among others, deals with some logical fallacies related to propaganda. BIBREF7 presented a corpus of Web forum discussions with instances of ad hominem fallacy. BIBREF8, BIBREF9 introduced Argotario, a game to educate people to recognize and create fallacies, a by-product of which is a corpus with $1.3k$ arguments annotated with five fallacies such as ad hominem, red herring and irrelevant authority, which directly relate to propaganda.
Unlike BIBREF8, BIBREF9, BIBREF7, our corpus uses 18 techniques annotated on the same set of news articles. Moreover, our annotations aim at identifying the minimal fragments related to a technique instead of flagging entire arguments.
The most relevant related work is our own, which is published in parallel to this paper at EMNLP-IJCNLP 2019 BIBREF10 and describes a corpus that is a subset of the one used for this shared task.
The most relevant related work is our own, which is published in parallel to this paper at EMNLP-IJCNLP 2019 (Da San Martino et al., 2019) and describes a corpus that is a subset of the one used for this shared task.
|
Who is the author of the most relevant related work?
|
The authors themselves.
|
1908.10322
| false
| null |
Instead of reading in the tokenized input text, our model reads raw utf-8 bytes. For English text in the ASCII range, this is equivalent to processing characters as individual tokens. Non-ASCII characters (e.g. accented characters, or non-Latin scripts) are typically two or three utf-8 bytes. We use a standard “transformer decoder” (a stack of transformer layers with a causal attention mask) to process the sequence $x_{0:i-1}$ and predict the following byte $x_i$. The model's prediction is an estimate of the probability distribution over all possible 256 byte values. Our input byte embedding matrix has dimensionality 256. Our byte-level transformer model has 40 standard transformer layers with hidden size 1024, filter size 8192, and 16 heads. The model has around 836M parameters, of which only 66K are byte embeddings.
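The byte-level input described above is just utf-8 encoding: each byte value (0 to 255) serves as a token ID, so ASCII characters map to one token and most accented or non-Latin characters to two or three.

```python
def to_byte_tokens(text):
    """Map raw text to the 0-255 byte-token IDs the model consumes."""
    return list(text.encode("utf-8"))

ascii_tokens = to_byte_tokens("cat")   # one token per ASCII character
accented_tokens = to_byte_tokens("é")  # two utf-8 bytes -> two tokens
```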
The model has around 836M parameters, of which only 66K are byte embeddings.
|
How many parameters does the model have?
|
The model has around 836M parameters.
|
null | false
| null |
Sunset, also known as sundown, is the daily disappearance of the Sun below the horizon due to Earth's rotation. As viewed from everywhere on Earth (except the North and South poles), the equinox Sun sets due west at the moment of both the spring and autumn equinoxes. As viewed from the Northern Hemisphere, the Sun sets to the northwest (or not at all) in the spring and summer, and to the southwest in the autumn and winter; these seasons are reversed for the Southern Hemisphere.
The time of sunset is defined in astronomy as the moment when the upper limb of the Sun disappears below the horizon. Near the horizon, atmospheric refraction causes sunlight rays to be distorted to such an extent that geometrically the solar disk is already about one diameter below the horizon when a sunset is observed.
Sunset is distinct from twilight, which is divided into three stages. The first one is civil twilight, which begins once the Sun has disappeared below the horizon, and continues until it descends to 6 degrees below the horizon. The second phase is nautical twilight, between 6 and 12 degrees below the horizon. The third phase is astronomical twilight, which is the period when the Sun is between 12 and 18 degrees below the horizon. Dusk is at the very end of astronomical twilight, and is the darkest moment of twilight just before night. Finally, night occurs when the Sun reaches 18 degrees below the horizon and no longer illuminates the sky.
Locations further north than the Arctic Circle and further south than the Antarctic Circle experience no full sunset or sunrise on at least one day of the year, when the polar day or the polar night persists continuously for 24 hours.
|
From the passage provided, summarise the key phases of sunset.
|
After sunset, twilight unfolds in three stages. The first is civil twilight, which begins once the Sun has disappeared below the horizon and continues until it descends to 6 degrees below the horizon. The second is nautical twilight, between 6 and 12 degrees below the horizon. The third is astronomical twilight, when the Sun is between 12 and 18 degrees below the horizon; dusk falls at its very end, and night begins once the Sun reaches 18 degrees below the horizon.
|
null | false
| null |
Sir Charles Lyell, 1st Baronet, FRS (14 November 1797 – 22 February 1875) was a Scottish geologist who demonstrated the power of known natural causes in explaining the earth's history. He is best known as the author of Principles of Geology (1830–33), which presented to a wide public audience the idea that the earth was shaped by the same natural processes still in operation today, operating at similar intensities. The philosopher William Whewell termed this gradualistic view "uniformitarianism" and contrasted it with catastrophism, which had been championed by Georges Cuvier and was better accepted in Europe. The combination of evidence and eloquence in Principles convinced a wide range of readers of the significance of "deep time" for understanding the earth and environment.
|
What is Sir Charles Lyell best known for?
|
Sir Charles Lyell is best known as the author of Principles of Geology (1830-1833)
|
null | false
| null |
Tell me few brainstorming techniques
|
Mind-Mapping - you write down a central idea or problem and then create branches of related ideas around it. You can add sub-branches and ideas to each branch, creating a visual map of ideas and connections.
Free-Writing - you set a timer for a specific amount of time and then write down every idea that comes to mind, without stopping to edit or censor. The goal is to generate as many ideas as possible in the shortest amount of time.
Role-Playing - you assign different roles or personas to participants in the session. For example, one person can play the role of a customer, another of a competitor, and so on. This helps participants see things from different perspectives and generate new ideas.
|
|
null | false
| 29
|
MLP contains three types of layers - an input layer, an output layer and one or more hidden layers. The input layer receives the signal, the output layer gives a decision or prediction about the input, and the computation of the MLP is conducted in the hidden layers. In our system, we use 100 layers. For weight optimization, we use the Limited-memory Broyden–Fletcher–Goldfarb–Shanno (LBFGS) optimization algorithm.
SVM gives an optimal hyper-plane that maximizes the margin between classes. We use the Radial Basis Function (RBF) kernel in our system to make the decision boundary curve-shaped. For the decision function shape, we use the original one-vs-one (ovo) decision function.
NBC is based on Bayes' Theorem, which gives the probability of an event occurring based on some conditions related to that event. We use the Multinomial Naive Bayes Classifier with the smoothing parameter equal to 0.1, since a zero probability would cancel the effects of all the other probabilities.
Stochastic gradient descent optimizes an objective function with suitable smoothness properties BIBREF27. It selects a few examples randomly instead of the whole dataset for each iteration. We use 'L2' regularization to reduce overfitting.
Gradient Boosting Classifier produces a prediction model as an ensemble of weak prediction models, typically decision trees. We use 100 boosting stages in this work.
K-NN is a supervised classification and regression algorithm. It uses the neighbours of a given sample to identify its class; K determines the number of neighbours to consider. We set K equal to 13 in this work.
RF is an ensemble learning technique. It constructs a large number of decision trees during training and then predicts the majority class among them. We use 500 decision trees in the forest and the "entropy" function to measure the quality of a split.
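The configurations above can be sketched with scikit-learn; this is an assumption on our part, since the text does not name its toolkit, and the "100 layers" for the MLP is interpreted here as a single hidden layer of 100 units, which is the common scikit-learn reading:

```python
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import SGDClassifier
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier

# Hyperparameters taken from the descriptions above; everything else is
# left at library defaults (an assumption, not stated in the text).
classifiers = {
    "MLP": MLPClassifier(hidden_layer_sizes=(100,), solver="lbfgs"),
    "SVM": SVC(kernel="rbf", decision_function_shape="ovo"),
    "NBC": MultinomialNB(alpha=0.1),          # smoothing parameter 0.1
    "SGD": SGDClassifier(penalty="l2"),       # L2 regularization
    "GBC": GradientBoostingClassifier(n_estimators=100),  # 100 boosting stages
    "KNN": KNeighborsClassifier(n_neighbors=13),          # K = 13
    "RF":  RandomForestClassifier(n_estimators=500, criterion="entropy"),
}
```

Each classifier can then be trained on the same feature matrix via `fit(X, y)` and compared on held-out data.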
In our system, we use 100 layers.
|
How many layers do the authors use?
|
100 layers.
|
2003.07723
| false
| null |
We find that Cohen $\kappa $ agreement ranges from .84 for Uneasiness in the English data, .81 for Humor and Nostalgia, down to German Suspense (.65), Awe/Sublime (.61) and Vitality for both languages (.50 English, .63 German). Both annotators have a similar emotion frequency profile, where the ranking is almost identical, especially for German. However, for English, Annotator 2 annotates more Vitality than Uneasiness. Figure FIGREF18 shows the confusion matrices of labels between annotators as heatmaps. Notably, Beauty/Joy and Sadness are confused across annotators more often than other labels. This is topical for poetry, and therefore not surprising: One might argue that the beauty of beings and situations is only beautiful because it is not enduring and therefore not to divorce from the sadness of the vanishing of beauty BIBREF48. We also find considerable confusion of Sadness with Awe/Sublime and Vitality, while the latter is also regularly confused with Beauty/Joy.
Figure FIGREF18 shows the confusion matrices of labels between annotators as heatmaps.
|
How is the annotation experiment evaluated?
|
The answers are shown as follows:
* confusion matrices of labels between annotators
|
null | false
| null |
The Dallas Cowboys are a professional American football team based in the Dallas–Fort Worth metroplex. The Cowboys compete in the National Football League (NFL) as a member club of the league's National Football Conference (NFC) East division. The team is headquartered in Frisco, Texas, and has played its home games at AT&T Stadium in Arlington, Texas, since its opening in 2009. The stadium took its current name prior to the 2013 season. In January 2020, Mike McCarthy was hired as head coach of the Cowboys. He is the ninth in the team's history. McCarthy follows Jason Garrett, who coached the team from 2010–2019.
|
Help summarize Dallas Cowboys
|
The Cowboys joined the NFL as an expansion team in 1960. The team's national following might best be represented by its NFL record of consecutive sell-outs. The Cowboys' streak of 190 consecutive sold-out regular and post-season games (home and away) began in 2002. The franchise has made it to the Super Bowl eight times, tying it with the Pittsburgh Steelers and Denver Broncos for second-most Super Bowl appearances in history behind the New England Patriots' record 11 appearances. The Cowboys have won eight NFC championships, the most in the conference's history. The Cowboys are the only NFL team to record 20 straight winning seasons (from 1966 to 1985) during which they missed the playoffs only twice (1974 and 1984).
In 2015, the Dallas Cowboys became the first sports team to be valued at $4 billion, making it the most valuable sports team in the world, according to Forbes. The Cowboys also generated $620 million in revenue in 2014, a record for a U.S. sports team. In 2018, they also became the first NFL franchise to be valued at $5 billion and making Forbes' list as the most valued NFL team for the 12th straight year.
|
null | false
| null |
What is the Shy Plant?
|
The scientific name of the shy plant, also known as the shame plant, is Mimosa pudica, which means shy, bashful, and sensitive. The plant's structure is interesting: it is a dicot with an herbaceous stem, and it has beautiful flowers and green leaves that fold up when touched. The shy plant needs direct sunlight, and 70 to 80 degrees indoors is a good environment for it. It should be watered once the soil is about 50% dry. The shy plant also has great medicinal value: it is used for hemorrhoids, infections, sores, and sinus problems.
|
|
null | false
| 143
|
Named Entity Recognition (NER) in the Biomedical domain usually includes recognition of entities such as proteins, genes, diseases, treatments, drugs, etc. Fact extraction involves extraction of Named Entities from a corpus, usually given a certain ontology. When compared to NER in the domain of general text, the biomedical domain has some characteristic challenges:
Some of the earliest systems were heavily dependent on hand-crafted features. The method proposed in BIBREF4 for recognition of protein names in text does not require any prepared dictionary. The work gives examples of diversity in protein names and lists multiple rules depending on simple word features as well as POS tags.
BIBREF5 adopt a machine learning approach for NER. Their NER system extracts medical problems, tests and treatments from discharge summaries and progress notes. They use a semi-Conditional Random Field (semi-CRF) BIBREF6 to output labels over all tokens in the sentence. They use a variety of token, context and sentence level features. They also use some concept mapping features using existing annotation tools, as well as Brown clustering to form 128 clusters over the unlabelled data. The dataset used is the i2b2 2010 challenge dataset. Their system achieves an F-Score of 0.85. BIBREF7 is an incremental paper on NER taggers. It uses 3 types of word-representation techniques (Brown clustering, distributional clustering, word vectors) to improve performance of the NER Conditional Random Field tagger, and achieves marginal F-Score improvements.
BIBREF8 propose a bootstrapping mechanism to bootstrap biomedical ontologies using NELL BIBREF9 , which uses a coupled semi-supervised bootstrapping approach to extract facts from text, given an ontology and a small number of “seed” examples for each category. This interesting approach (called BioNELL) uses an ontology of over 100 categories. In contrast to NELL, BioNELL does not contain any relations in the ontology. BioNELL is motivated by the fact that a lot of scientific literature available online is highly reliable due to peer-review. The authors note that the algorithm used by NELL to bootstrap fails in BioNELL due to ambiguities in biomedical literature, and heavy semantic drift. One of the causes for this is that often common words such as “white”, “dad”, “arm” are used as names of genes; this can easily result in semantic drift in one iteration of the bootstrapping. In order to mitigate this, they use Pointwise Mutual Information scores for corpus-level statistics, which attribute a small score to common words. In addition, in contrast to NELL, BioNELL only uses high-ranking instances as seeds in the next iteration, but adds low-ranking instances to the knowledge base. Since evaluation is not possible using Mechanical Turk or a small number of experts (due to the complexity of the task), they use Freebase BIBREF10 , a knowledge base that has some biomedical concepts as well. The lexicon learned using BioNELL is used to train an NER system. The system shows a very high precision, thereby showing that BioNELL learns very few ambiguous terms.
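The PMI-based filtering of common words can be illustrated with a generic PMI function (a sketch, not BioNELL's exact scoring): a very frequent word has a large marginal probability, which shrinks the PMI it can achieve with any single category, exactly the property used to down-weight ambiguous seeds like "white" or "arm".

```python
import math

def pmi(n_xy, n_x, n_y, n_total):
    """Pointwise mutual information from co-occurrence counts:
    PMI(x, y) = log2( P(x, y) / (P(x) * P(y)) ).
    n_xy: co-occurrences of word x with category y;
    n_x, n_y: individual counts; n_total: corpus size."""
    p_xy = n_xy / n_total
    p_x = n_x / n_total
    p_y = n_y / n_total
    return math.log2(p_xy / (p_x * p_y))
```

A rare word that always co-occurs with a category scores high (e.g. `pmi(10, 10, 10, 100)` is `log2(10)`), while an independent common word scores near zero.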
More recently, deep learning techniques have been developed to further enhance the performance of NER systems. BIBREF11 explore recurrent neural networks for the problem of NER in biomedical text.
In contrast to NELL, BioNELL does not contain any relations in the ontology.
|
Does the BioNELL contain any relations in the ontology?
|
No, it doesn't.
|
null | false
| null |
Secolo d'Italia ([ˈsɛːkolo diˈtaːlja]; "Century of Italy") is a daily, conservative, online newspaper in Italy, published since 1952, formerly supporting neo-fascism. In 2012, it ceased its print edition and continued as an online-only publication.
|
What kind of newspaper is the Secolo d'Italia
|
The Secolo d'Italia, Century of Italy, is a daily, conservative, online newspaper in Italy, published since 1952, formerly supporting neo-fascism. In 2012, it ceased its print edition and continued as an online-only publication.
|
null | false
| null |
The Bosnian War began in 1992, as a result of the Breakup of Yugoslavia. The deteriorating situation led to United Nations Security Council Resolution 816 on 9 October 1992, ordering a no-fly zone over central Bosnia and Herzegovina, which NATO began enforcing on 12 April 1993 with Operation Deny Flight. From June 1993 until October 1996, Operation Sharp Guard added maritime enforcement of the arms embargo and economic sanctions against the Federal Republic of Yugoslavia. On 28 February 1994, NATO took its first wartime action by shooting down four Bosnian Serb aircraft violating the no-fly zone.
On 10 and 11 April 1994, the United Nations Protection Force called in air strikes to protect the Goražde safe area, resulting in the bombing of a Bosnian Serb military command outpost near Goražde by two US F-16 jets acting under NATO direction. In retaliation, Serbs took 150 U.N. personnel hostage on 14 April. On 16 April a British Sea Harrier was shot down over Goražde by Serb forces.
In August 1995, a two-week NATO bombing campaign, Operation Deliberate Force, began against the Army of the Republika Srpska, after the Srebrenica genocide. Further NATO air strikes helped bring the Yugoslav Wars to an end, resulting in the Dayton Agreement in November 1995. As part of this agreement, NATO deployed a UN-mandated peacekeeping force, under Operation Joint Endeavor, named IFOR. Almost 60,000 NATO troops were joined by forces from non-NATO countries in this peacekeeping mission. This transitioned into the smaller SFOR, which started with 32,000 troops initially and ran from December 1996 until December 2004, when operations were then passed onto the European Union Force Althea. Following the lead of its member states, NATO began to award a service medal, the NATO Medal, for these operations.
|
Please create a chronological timeline of the Bosnia and Herzegovina intervention by NATO, based on the passage provided.
|
-- 1992: The Bosnian War begins.
-- October 9th, 1992: A no-fly zone over central Bosnia and Herzegovina is declared by the United Nations.
-- April 12th, 1993: Operation Deny Flight begins, and is enforced by NATO to execute the no-fly zone over central Bosnia and Herzegovina.
-- June 1993: Operation Sharp Guard begins, and is enforced by NATO to execute maritime enforcement of embargo and sanctions against Yugoslavia.
-- February 28th, 1994: NATO takes its first wartime action by shooting down four Bosnian Serb aircraft violating the no-fly zone.
-- April 10th & 11th, 1994: NATO directs the bombing of a Bosnian Serb military command outpost near Goražde.
-- April 14th, 1994: Serbs take 150 U.N. personnel hostage, in retaliation for the outpost bombing from three days prior.
-- April 16th, 1994: Serb forces shoot down a British Sea Harrier over Goražde.
-- August 1995: Operation Deliberate Force begins, and is enforced by NATO to execute a two-week bombing campaign against the Army of Republika Srpska.
-- November 1995: The Dayton Agreement is executed, bringing the Yugoslav Wars to an end. This results in Operation Joint Endeavor, and is enforced by NATO to deploy a peacekeeping force named IFOR.
-- December 1996: IFOR transitions into a smaller force named SFOR, which is initially staffed with 32,000 troops.
-- December 2004: Peacekeeping operations are passed to the European Union Force Althea.
|
null | false
| null |
How to reverse Type 2 diabetes naturally?
|
Intermittent fasting has recently been found to be very effective in reversing Type 2 diabetes.
In particular, healthy eating habits during the eating window and a healthy lifestyle accelerate the reversal of Type 2 diabetes, i.e., the return of blood sugar levels to the normal range.
There is a lot of research going on in this space, and lots of people are seeing results. Doctors and dieticians are finding this approach revolutionary and outside the standard medical procedure of treating Type 2 diabetes with drugs like Metformin.
A low-carb, high-protein and high-fat diet, the keto diet, and the right exercise routine have been found to improve the results.
The effect of this approach depends on various factors such as age, current food habits and lifestyle, severity of the condition, discipline, willpower and more.
|
|
null | false
| null |
How do you make soap?
|
Soap is made with a mixture of natural or organic oils, sodium hydroxide and distilled water. When making soap you need to mix a combination of light and heavy oils, like olive and coconut oil, and then combine them with sodium hydroxide and distilled water. After this mixture has been blended well, you pour it into soap molds and let it harden over the course of many weeks. During this curing period, a chemical reaction called saponification completes: the sodium hydroxide reacts with the oils, which causes the soap to harden in the molds. You can also add essential oils, like lavender and grapefruit, to give the soap a nice scent; these have to be added before pouring the mixture into the molds.
|
|
null | false
| null |
EIC voyage #1 (1713–1716): Captain Thomas Lewis sailed from The Downs on 2 October 1713, bound for Banjarmasin and Bencoolen. Borneo was at the Cape of Good Hope on 30 January 1714 and reached Batavia on 7 May. She was at Banjarmasin on 1 July and Bencoolen on 26 December. She was then at Bantal on 23 January 1715 before returning to Bencoolen on 9 February. She returned to Bantal on 4 March and Bencoolen on 29 March. She then was at Batavia on 13 June and Bencoolen yet again on 24 August. She stopped at Madras on 21 February 1716, and visited Bencoolen for the last time on 30 May. Her voyage ended on 1 August 1717.
|
What are all the dates that the Borneo visited Bencoolen written in the format Month Day Year?
|
December 26th 1714, February 9th 1715, March 29th 1715, August 24th 1715, May 30th 1716.
|