paper_id
stringlengths 10
10
⌀ | yes_no
bool 2
classes | paper_index
int64 0
519
⌀ | evidence
stringlengths 0
37.7k
| question
stringlengths 4
11.7k
| answer
stringlengths 1
26k
|
|---|---|---|---|---|---|
1606.04631
| false
| null |
We also evaluate our Joint-BiLSTM structure by comparing with several other state-of-the-art baseline approaches, which exploit either local or global temporal structure. As shown in Table TABREF20 , our Joint-BiLSTM reinforced model outperforms all of the baseline methods. The result of “LSTM” in first row refer from BIBREF15 and the last row but one denotes the best model combining local temporal structure using C3D with global temporal structure utilizing temporal attention in BIBREF17 . From the first two rows, our unidirectional joint LSTM shows rapid improvement, and comparing with S2VT-VGG model in line 3, it also demonstrates some superiority. Even LSTM-E jointly models video and descriptions representation by minimizing the distance between video and corresponding sentence, our Joint-BiLSTM reinforced obtains better performance from bidirectional encoding and separated visual and language models.
FLOAT SELECTED: Table 2: Comparing with several state-of-the-art models (reported in percentage, higher is better).
We also evaluate our Joint-BiLSTM structure by comparing with several other state-of-the-art baseline approaches, which exploit either local or global temporal structure. As shown in Table TABREF20 , our Joint-BiLSTM reinforced model outperforms all of the baseline methods.
FLOAT SELECTED: Table 2: Comparing with several state-of-the-art models (reported in percentage, higher is better).
|
what are the state of the art methods?
|
S2VT, RGB (VGG), RGB (VGG)+Flow (AlexNet), LSTM-E (VGG), LSTM-E (C3D) and Yao et al.
|
null | false
| 300
|
The success of word representations (embeddings) learned from text has motivated analogous methods to learn representations of longer sequences of text such as sentences, a fundamental step on any task requiring some level of text understanding BIBREF0 . Sentence representation is a challenging task that has to consider aspects such as compositionality, phrase similarity, negation, etc. In order to evaluate sentence representations, intermediate tasks such as Semantic Textual Similarity (STS) BIBREF1 or Natural Language Inference (NLI) BIBREF2 have been proposed, with STS being popular among unsupervised approaches. Through a set of campaigns, STS has produced several manually annotated datasets, where annotators measure the similarity among sentences, with higher scores for more similar sentences, ranging between 0 (no similarity) to 5 (semantic equivalence). Human annotators exhibit high inter-tagger correlation in this task.
In another strand of related work, tasks that combine representations of multiple modalities have gained increasing attention, including image-caption retrieval, video and text alignment, caption generation, and visual question answering. A common approach is to learn image and text embeddings that share the same space so that sentence vectors are close to the representation of the images they describe BIBREF3 , BIBREF4 . BIBREF5 provides an approach that learns to align images with descriptions. Joint spaces are typically learned combining various types of deep learning networks such us recurrent networks or convolutional networks, with some attention mechanism BIBREF6 , BIBREF7 , BIBREF8 .
The complementarity of visual and text representations for improved language understanding have been shown also on word representations, where embeddings have been combined with visual or perceptual input to produce grounded representations of words BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 . These improved representation models have outperformed traditional text-only distributional models on a series of word similarity tasks, showing that visual information coming from images is complementary to textual information.
In this paper we present Visual Semantic Textual Similarity (vSTS), a dataset which allows to study whether better sentence representations can be built when having access to corresponding images, e.g. a caption and its image, in contrast with having access to the text alone. This dataset is based on a subset of the STS benchmark BIBREF1 , more specifically, the so called STS-images subset, which contains pairs of captions. Note that the annotations are based on the textual information alone. vSTS extends the existing subset with images, and aims at being a standard dataset to test the contribution of visual information when evaluating sentence representations.
In addition we show that the dataset allows to explore two hypothesis: H1) whether the image representations alone are able to predict caption similarity; H2) whether a combination of image and text representations allow to improve the text-only results on this similarity task.
This dataset is based on a subset of the STS benchmark, more specifically, the so called STS-images subset, which contains pairs of captions.
|
Where is their dataset derived from?
|
STS-images subset.
|
1907.02636
| false
| null |
IOCs in cybersecurity articles are often described in a predictable way: being connected to a set of contextual keywords BIBREF16 , BIBREF1 . For example, a human user can infer that the word “ntdll.exe” is the name of a malicious file on the basis of the words “download” and “compromised” from the text shown in Fig. FIGREF1 . By analyzing the whole corpus, it is interesting that malicious file names tends to co-occur with words such as "download", "malware", "malicious", etc. In this work, we consider words that can indicate the characteristics of the neighbor words as contextual keywords and develop an approach to generate features from the automatically extracted contextual keywords.
In this work, we consider words that can indicate the characteristics of the neighbor words as contextual keywords and develop an approach to generate features from the automatically extracted contextual keywords.
|
What contextual features are used?
|
The words that can indicate the characteristics of the neighbor words as contextual keywords and generate it from the automatically extracted contextual keywords.
|
null | false
| null |
Coldplay are a British rock band formed in London in 1997. They consist of vocalist and pianist Chris Martin, guitarist Jonny Buckland, bassist Guy Berryman, drummer Will Champion and creative director Phil Harvey. They met at University College London and began playing music together from 1997 to 1998, initially calling themselves Starfish.
|
Who are Coldplay
|
Coldplay are a British rock band formed in London in 1997. They consist of vocalist and pianist Chris Martin, guitarist Jonny Buckland, bassist Guy Berryman, drummer Will Champion and creative director Phil Harvey.
They met at University College London and began playing music together from 1997 to 1998, initially calling themselves Starfish.
|
null | false
| null |
What recording artist has won the most Grammys?
|
Beyonce, a singer-songwriter who has been active since the 1990s, holds the record for most Grammys won as of 2023 with 32 awards.
|
|
null | false
| null |
Why donuts have holes?
|
Donuts have holes to help cook/bake the entire food item evenly
|
|
1807.03367
| false
| null |
Talk The Walk is the first task to bring all three aspects together: perception for the tourist observing the world, action for the tourist to navigate through the environment, and interactive dialogue for the tourist and guide to work towards their common goal. To collect grounded dialogues, we constructed a virtual 2D grid environment by manually capturing 360-views of several neighborhoods in New York City (NYC). As the main focus of our task is on interactive dialogue, we limit the difficulty of the control problem by having the tourist navigating a 2D grid via discrete actions (turning left, turning right and moving forward). Our street view environment was integrated into ParlAI BIBREF6 and used to collect a large-scale dataset on Mechanical Turk involving human perception, action and communication.
Our street view environment was integrated into ParlAI BIBREF6 and used to collect a large-scale dataset on Mechanical Turk involving human perception, action and communication.
|
What data did they use?
|
The answers are shown as follows:
* dataset on Mechanical Turk involving human perception, action and communication
|
null | false
| null |
The University of Sfax (Arabic: جامعة صفاقس French: Université de Sfax) is a university located in Sfax, Tunisia. It was founded in 1986 under the name University of the South with the purpose of covering all academic institutions in Southern Tunisia. It is divided into three universities, including the current University of Sfax, with the creation of the University of Gabes in 2003 and the University of Gafsa in 2004.
|
write some critical points on University of Sfax using below text as a reference
|
1. Sfax is home to the University of Sfax.
2. To include all academic institutions in Southern Tunisia, the University of the South was founded in 1986.
3. It is divided into three institutions, including the existing University of Sfax, with the formation of the University of Gabes in 2003 and the University of Gafsa in 2004.
|
null | false
| null |
Make a packing list for a picnic at the beach.
|
You should bring the following items for a picnic at the beach: Sunscreen, sunglasses, beach towels, umbrella, folding beach chairs, ice chest, drinks, ice, sandwiches, chips and salsa.
|
|
null | false
| 122
|
The Propaganda Techniques Corpus (PTC) dataset for the 2019 Shared Task on Fine-Grained Propaganda consists of a training set of 350 news articles, consisting of just over 16,965 total sentences, in which specifically propagandistic fragments have been manually spotted and labelled by experts. This is accompanied by a development set (or dev set) of 61 articles with 2,235 total sentences, whose labels are maintained by the shared task organisers; and two months after the release of this data, the organisers released a test set of 86 articles and 3,526 total sentences. In the training set, 4,720 ($\sim 28\%$) of the sentences have been assessed as containing propaganda, with 12,245 sentences ($\sim 72 \%$) as non-propaganda, demonstrating a clear class imbalance.
In the binary sentence-level classification (SLC) task, a model is trained to detect whether each and every sentence is either 'propaganda' or 'non-propaganda'; in the more challenging field-level classification (FLC) task, a model is trained to detect one of 18 possible propaganda technique types in spans of characters within sentences. These propaganda types are listed in BIBREF4 and range from those which might be recognisable at the lexical level (e.g. Name_Calling, Repetition), and those which would likely need to incorporate semantic understanding (Red_Herring, Straw_Man).
For several example sentences from a sample document annotated with fragment-level classifications (FLC) (Figure FIGREF13). The corresponding sentence-level classification (SLC) labels would indicate that sentences 3, 4, and 7 are 'propaganda' while the the other sentences are `non-propaganda'.
The Propaganda Techniques Corpus (PTC) dataset for the 2019 Shared Task on Fine-Grained Propaganda consists of a training set of 350 news articles, consisting of just over 16,965 total sentences, in which specifically propagandistic fragments have been manually spotted and labelled by experts. This is accompanied by a development set (or dev set) of 61 articles with 2,235 total sen_x001f_tences, whose labels are maintained by the shared task organisers; and two months after the release of this data, the organisers released a test set of 86 articles and 3,526 total sentences.
|
What is the size of the PTC corpus?
|
The Propaganda Techniques Corpus (PTC) dataset for the 2019 Shared Task consists a training set, a development set and a test set. In total, there are 497 articles with 22726 sentences.
|
null | false
| 322
|
Commonly, the selection of subsets of RDF is performed using the SPARQL query language. SPARQL can be used to express queries across diverse data sources. Query forms contain variables that appear in a solution result. They can be used to select all or a subset of the variables bound in a pattern match. They exist in four different instantiations, i.e., SELECT, CONSTRUCT, ASK and DESCRIBE. The SELECT query form is the most commonly used and is used to return rows of variable bindings. Therefore, we use this type of query in our explanation. CONSTRUCT allows to create a new RDF graph or modify the existing one through substituting variables in a graph templates for each solution. ASK returns a Boolean value indicating whether the graph contains a match or not. Finally, DESCRIBE is used to return all triples about the resources matching the query. For example, represents the following query “Return all scientists who were born in Ulm”.
A SPARQL SELECT query can be regarded as consisting of three parts: (1) a body section B, which describes all data that has to be retrieved, (2) an optional section O, which describes the data items that can be retrieved by the query if they exist, and (3) a modifier section M, which describes all solution sequences, modifiers and aggregates that are to be applied to the result of the previous two sections of the query. Let Var be the set of all variables that can be used in a SPARQL query. In addition, let $R$ be the set of all resources, $P$ the set of all properties and $L$ the set of all literals contained in the target knowledge base of the SPARQL queries at hand. We call $x \in \text{\em Var}\, \cup R \cup P \cup L$ an atom. The basic components of the body of a SPARQL query are triple patterns $(\texttt {s}, \texttt {p}, \texttt {o}) \in (\text{\em Var}\, \cup R) \times (\text{\em Var}\, \cup P) \times (\text{\em Var}\, \cup R \cup L)$. Let $W$ be the set of all words in the dictionary of our target language. We define the realization function $\rho : \text{\em Var}\, \cup R \cup P \cup L \rightarrow W^*$ as the function which maps each atom to a word or sequence of words from the dictionary. The extension of $\rho $ to all SPARQL constructs maps all atoms $x$ to their realization $\rho (x)$ and defines how these atomic realizations are to be combined. We denote the extension of $\rho $ by the same label $\rho $ for the sake of simplicity. We adopt a rule-based approach to achieve this goal, where the rules extending $\rho $ to all valid SPARQL constructs are expressed in a conjunctive manner. This means that for premises $P_1,\ldots ,P_n$ and consequences $K_1,\ldots ,K_m$ we write $P_1 \wedge \ldots \wedge P_n \Rightarrow K_1 \wedge \ldots \wedge K_m$. The premises and consequences are explicated by using an extension of the Stanford dependencies.
For example, a possessive dependency between two phrase elements $e_1$ and $e_2$ is represented as $\texttt {poss}(e_1,e_2)$. For the sake of simplicity, we slightly deviate from the Stanford vocabulary by not treating the copula to be as an auxiliary, but denoting it as BE. Moreover, we extend the vocabulary by the constructs conj and disj which denote the conjunction resp. disjunction of two phrase elements. In addition, we sometimes reduce the construct $\texttt {subj(y,x)}\wedge \texttt {dobj(y,z)}$ to the triple (x,y,z) $\in W^3$.
Commonly, the selection of subsets of RDF is performed using the SPARQL query language.
|
How to perform the selection of subsets of RDF?
|
Commonly, the selection of subsets of RDF is performed using the SPARQL query language.
|
null | false
| 60
|
There are two different testing sets from VLSP 2018 and VLSP 2019. In general, the data of this year is more complex than the last year one, so there is a big gap in results between two of them. The experiments are conducted using the Kaldi speech recognition toolkitBIBREF2.
The experiments are conducted using the Kaldi speech recognition toolkit [3].
|
What toolkit is used in their experiments?
|
Kaldi speech recognition toolkit.
|
null | false
| null |
Classify each of the following as either a real or a fictional characters: professor Dumbledore, Mahatma Gandhi, King Arthur, captain Kirk, Luke skywalker, Julius Caesar
|
Dumbledore - fictional ,Mahatma Gandhi - real, King Arthur - fictional, captain Kirk - fictional, Luke skywalker - fictional, Julius Caesar - real
|
|
null | false
| null |
Classify the below as food or drinks.
Biriyani, Pasta, Steak, cocktail, mocktail, wine
|
Food - Biriyani, Pasta, Steak
Drinks - cocktail, mocktail, wine
|
|
1908.02322
| false
| null |
This dataset is aimed to contribute to developing a partisan news detector. There are several ways that the dataset can be used to devise the system. For example, it is possible to train the detector using publisher-level labels and test with article-level labels. It is also possible to use semi-supervised learning and treat the publisher-level part as unsupervised, or use only the article-level part. We also released the raw survey data so that new mechanisms to decide the article-level labels can be devised.
This dataset is aimed to contribute to developing a partisan news detector. There are several ways that the dataset can be used to devise the system. For example, it is possible to train the detector using publisher-level labels and test with article-level labels. It is also possible to use semi-supervised learning and treat the publisher-level part as unsupervised, or use only the article-level part. We also released the raw survey data so that new mechanisms to decide the article-level labels can be devised.
|
What examples of applications are mentioned?
|
The answers are shown as follows:
* partisan news detector
|
null | false
| null |
Which of the following animals live in the snow: Dog, Bonobos, Cat, Polar bear, Whale shark, Arctic fox, Orangutans, Snow leopard, Lion, Ocean sunfish, Koala, Cheetah, Beluga sturgeon, Orca, Gorillas, Camel, Hamsters, Penguins, Blue whale, Chimpanzee
|
There are some animals in that list that typically live in the snow! Polar bear, Arctic fox, Snow leopard and Penguins.
|
|
null | false
| null |
Which country is the best at hockey?
|
Canada is widely considered to be the strongest country in ice hockey. The National Hockey League (NHL) is the highest level of competition in the world. About half of NHL players are Canadian.
|
|
null | false
| null |
Write a thoughtful letter to my mother-in-law asking her to move out of our house and in to a retirement community. Make sure to thank her for her help and be kind. She has lived with us for 5 years and helped us when our kids were little. We want her to move out in the next 6 months, and we will pay for her retirement home and care.
|
Dear [insert name],
It has been great living with you for the last 5 years. When our kids were small, you helped care for them and made a big impact on their development. We could not have raised them without your help. Now that the kids are older, we want to make sure we are not slowing you down or keeping you from your retirement goals. In fact, we would love to return the favor you did for us, by helping you find a retirement community that you can enjoy and prosper in. We are happy to pay for all of the accommodations and ensure moving is as easy as possible. We think it is best to make this change in the next 6 months, and we will do everything we can to help you navigate this important transition, for you and for us.
Love Always,
[insert name]
|
|
null | false
| null |
Classify the following list into mammals and non-mammals: dog, human, ant, bird, cat, dolphin, pig, cow, shrimp, coral, shark, honeybee, capybara
|
mammals: dog, human, cat, dolphin, pig, cow, capybara
non-mammals: ant, bird, shrimp, coral, shark, honeybee
|
|
null | false
| null |
In the walk of fame, which star is on the wall?
|
Mohammad Ali
|
|
null | false
| null |
What was the duration of the Lewis and Clark Expedition?
|
1228 Days. The Corps of Discovery departed from Camp Dubois at 4pm on May 14, 1804. The expedition returned to St. Louis on September 23, 1806.
|
|
null | false
| null |
How does water feel?
|
Water is typically a fluid at room temperature, as a fluid it feels wet. As a solid, its ice, which feels hard. As a gas, its vapor, which feels like a cool mist.
|
|
null | false
| null |
What are some great progressive rock bands?
|
RUSH, Yes, Genesis, the Alan Parsons Project, Pink Floyd, King Crimson, Emerson Lake and Palmer, Jethro Tull
|
|
null | false
| null |
In 1997, two expansion packs for Red Alert were released for the PC, Command & Conquer: Red Alert: Counterstrike and Command & Conquer: Red Alert: The Aftermath. The expansion packs were designed by Westwood Studios with the "apprenticeship" of Intelligent Games, a London-based game developer. Much of the development on multiplayer maps was undertaken by players from the Compuserve Red Alert ladder. New units, missions, maps, and music were included in the expansions.
By October 1997, Counterstrike had sold 650,000 copies worldwide after its launch in April of that year. According to Westwood, this made it the all-time fastest-selling expansion pack for a computer game by that point. The Counterstrike add-on included the secret Ant Missions titled "It Came from Red Alert", where the player battles against an army of giant, mutant ants.
|
Based on the below text, which expansion pack for Red Alert featured the player fighting an army of mutant ants?
|
The Counterstrike add-on to Red Alert featured missions against giant mutant ants.
|
null | false
| null |
What is a Semantic Layer?
|
A Semantic Layer provides a friendly abstraction to data such that users can get trustworthy answers from data without an understanding of its technical representation.
|
|
null | false
| null |
What is a mitochondrion?
|
A mitochondrion is an organelle found in the cells of most eukaryotes, such as animals, plants and fungi. Mitochondria have a double membrane structure and use aerobic respiration to generate adenosine triphosphate (ATP), which is used throughout the cell as a source of chemical energy. They were discovered by Albert von Kölliker in 1857 in the voluntary muscles of insects. The term mitochondrion was coined by Carl Benda in 1898. The mitochondrion is popularly nicknamed the "powerhouse of the cell", a phrase coined by Philip Siekevitz in a 1957 article of the same name.
Some cells in some multicellular organisms lack mitochondria (for example, mature mammalian red blood cells). A large number of unicellular organisms, such as microsporidia, parabasalids and diplomonads, have reduced or transformed their mitochondria into other structures. One eukaryote, Monocercomonoides, is known to have completely lost its mitochondria, and one multicellular organism, Henneguya salminicola, is known to have retained mitochondrion-related organelles in association with a complete loss of their mitochondrial genome.
Mitochondria are commonly between 0.75 and 3 μm2 in cross section, but vary considerably in size and structure. Unless specifically stained, they are not visible. In addition to supplying cellular energy, mitochondria are involved in other tasks, such as signaling, cellular differentiation, and cell death, as well as maintaining control of the cell cycle and cell growth. Mitochondrial biogenesis is in turn temporally coordinated with these cellular processes. Mitochondria have been implicated in several human disorders and conditions, such as mitochondrial diseases, cardiac dysfunction, heart failure and autism.
The number of mitochondria in a cell can vary widely by organism, tissue, and cell type. A mature red blood cell has no mitochondria, whereas a liver cell can have more than 2000. The mitochondrion is composed of compartments that carry out specialized functions. These compartments or regions include the outer membrane, intermembrane space, inner membrane, cristae, and matrix.
Although most of a eukaryotic cell's DNA is contained in the cell nucleus, the mitochondrion has its own genome ("mitogenome") that is substantially similar to bacterial genomes. This finding has led to general acceptance of the endosymbiotic hypothesis - that free-living prokaryotic ancestors of modern mitochondria permanently fused with eukaryotic cells in the distant past, evolving such that modern animals, plants, fungi, and other eukaryotes are able to respire to generate cellular energy.
|
|
null | false
| 74
|
Event detection on microblogging platforms such as Twitter aims to detect events preemptively. A main task in event detection is detecting events of predetermined types BIBREF0, such as concerts or controversial events based on microposts matching specific event descriptions. This task has extensive applications ranging from cyber security BIBREF1, BIBREF2 to political elections BIBREF3 or public health BIBREF4, BIBREF5. Due to the high ambiguity and inconsistency of the terms used in microposts, event detection is generally performed though statistical machine learning models, which require a labeled dataset for model training. Data labeling is, however, a long, laborious, and usually costly process. For the case of micropost classification, though positive labels can be collected (e.g., using specific hashtags, or event-related date-time information), there is no straightforward way to generate negative labels useful for model training. To tackle this lack of negative labels and the significant manual efforts in data labeling, BIBREF1 (BIBREF1, BIBREF3) introduced a weak supervision based learning approach, which uses only positively labeled data, accompanied by unlabeled examples by filtering microposts that contain a certain keyword indicative of the event type under consideration (e.g., `hack' for cyber security). Another key technique in this context is expectation regularization BIBREF6, BIBREF7, BIBREF1. Here, the estimated proportion of relevant microposts in an unlabeled dataset containing a keyword is given as a keyword-specific expectation. This expectation is used in the regularization term of the model's objective function to constrain the posterior distribution of the model predictions. By doing so, the model is trained with an expectation on its prediction for microposts that contain the keyword. Such a method, however, suffers from two key problems:
Due to the unpredictability of event occurrences and the constantly changing dynamics of users' posting frequency BIBREF8, estimating the expectation associated with a keyword is a challenging task, even for domain experts;
The performance of the event detection model is constrained by the informativeness of the keyword used for model training. As of now, we lack a principled method for discovering new keywords and improve the model performance.
To address the above issues, we advocate a human-AI loop approach for discovering informative keywords and estimating their expectations reliably. Our approach iteratively leverages 1) crowd workers for estimating keyword-specific expectations, and 2) the disagreement between the model and the crowd for discovering new informative keywords. More specifically, at each iteration after we obtain a keyword-specific expectation from the crowd, we train the model using expectation regularization and select those keyword-related microposts for which the model's prediction disagrees the most with the crowd's expectation; such microposts are then presented to the crowd to identify new keywords that best explain the disagreement. By doing so, our approach identifies new keywords which convey more relevant information with respect to existing ones, thus effectively boosting model performance. By exploiting the disagreement between the model and the crowd, our approach can make efficient use of the crowd, which is of critical importance in a human-in-the-loop context BIBREF9, BIBREF10. An additional advantage of our approach is that by obtaining new keywords that improve model performance over time, we are able to gain insight into how the model learns for specific event detection tasks. Such an advantage is particularly useful for event detection using complex models, e.g., deep neural networks, which are intrinsically hard to understand BIBREF11, BIBREF12. An additional challenge in involving crowd workers is that their contributions are not fully reliable BIBREF13. In the crowdsourcing literature, this problem is usually tackled with probabilistic latent variable models BIBREF14, BIBREF15, BIBREF16, which are used to perform truth inference by aggregating a redundant set of crowd contributions. Our human-AI loop approach improves the inference of keyword expectation by aggregating contributions not only from the crowd but also from the model. This, however, comes with its own challenge as the model's predictions are further dependent on the results of expectation inference, which is used for model training. To address this problem, we introduce a unified probabilistic model that seamlessly integrates expectation inference and model training, thereby allowing the former to benefit from the latter while resolving the inter-dependency between the two.
To the best of our knowledge, we are the first to propose a human-AI loop approach that iteratively improves machine learning models for event detection. In summary, our work makes the following key contributions:
A novel human-AI loop approach for micropost event detection that jointly discovers informative keywords and estimates their expectation;
A unified probabilistic model that infers keyword expectation and simultaneously performs model training;
An extensive empirical evaluation of our approach on multiple real-world datasets demonstrating that our approach significantly improves the state of the art by an average of 24.3% AUC.
The rest of this paper is organized as follows. First, we present our human-AI loop approach in Section SECREF2. Subsequently, we introduce our proposed probabilistic model in Section SECREF3. The experimental setup and results are presented in Section SECREF4. Finally, we briefly cover related work in Section SECREF5 before concluding our work in Section SECREF6.
To address the above issues, we advocate a human-AI loop approach for discovering informative keywords and estimating their expectations reliably.
|
What approach does the paper propose?
|
A human-AI loop approach for discovering informative keywords and estimating their expectations reliably.
|
null | false
| null |
why am I required to do onboarding training?
|
Company onboarding training is important because it lets you get familiar with company rules and company culture. Onboarding training teaches rules to prevent harassment at work. It teaches you what the company cultural values are. It teaches you how to interact with your co-workers and what kind of behavior is acceptable and not acceptable. It also teaches you the value of diversity and inclusion and why its so important.
|
|
null | false
| null |
Who is Rentala Madhubala
|
Rentala Madhubala is an Indian scientist who studied molecular parasitology and functional genomics. She is the director of the Academic Staff College in Jawaharlal Nehru University. She was the Dean at the School of Life Sciences and the director of the Advanced Instrumentation Research Facility there.
|
|
null | false
| null |
Long before any knowledge of electricity existed, people were aware of shocks from electric fish. Ancient Egyptian texts dating from 2750 BCE referred to these fish as the "Thunderer of the Nile", and described them as the "protectors" of all other fish. Electric fish were again reported millennia later by ancient Greek, Roman and Arabic naturalists and physicians. Several ancient writers, such as Pliny the Elder and Scribonius Largus, attested to the numbing effect of electric shocks delivered by electric catfish and electric rays, and knew that such shocks could travel along conducting objects. Patients with ailments such as gout or headache were directed to touch electric fish in the hope that the powerful jolt might cure them.
Ancient cultures around the Mediterranean knew that certain objects, such as rods of amber, could be rubbed with cat's fur to attract light objects like feathers. Thales of Miletus made a series of observations on static electricity around 600 BCE, from which he believed that friction rendered amber magnetic, in contrast to minerals such as magnetite, which needed no rubbing.
Thales was incorrect in believing the attraction was due to a magnetic effect, but later science would prove a link between magnetism and electricity. According to a controversial theory, the Parthians may have had knowledge of electroplating, based on the 1936 discovery of the Baghdad Battery, which resembles a galvanic cell, though it is uncertain whether the artifact was electrical in nature.
Electricity would remain little more than an intellectual curiosity for millennia until 1600, when the English scientist William Gilbert wrote De Magnete, in which he made a careful study of electricity and magnetism, distinguishing the lodestone effect from static electricity produced by rubbing amber. He coined the New Latin word electricus ("of amber" or "like amber",, elektron, the Greek word for "amber") to refer to the property of attracting small objects after being rubbed. This association gave rise to the English words "electric" and "electricity", which made their first appearance in print in Thomas Browne's Pseudodoxia Epidemica of 1646.
Further work was conducted in the 17th and early 18th centuries by Otto von Guericke, Robert Boyle, Stephen Gray and C. F. du Fay. Later in the 18th century, Benjamin Franklin conducted extensive research in electricity, selling his possessions to fund his work. In June 1752 he is reputed to have attached a metal key to the bottom of a dampened kite string and flown the kite in a storm-threatened sky. A succession of sparks jumping from the key to the back of his hand showed that lightning was indeed electrical in nature. He also explained the apparently paradoxical behavior of the Leyden jar as a device for storing large amounts of electrical charge in terms of electricity consisting of both positive and negative charges
In 1775, Hugh Williamson reported a series of experiments to the Royal Society on the shocks delivered by the electric eel; that same year the surgeon and anatomist John Hunter described the structure of the fish's electric organs. In 1791, Luigi Galvani published his discovery of bioelectromagnetics, demonstrating that electricity was the medium by which neurons passed signals to the muscles. Alessandro Volta's battery, or voltaic pile, of 1800, made from alternating layers of zinc and copper, provided scientists with a more reliable source of electrical energy than the electrostatic machines previously used. The recognition of electromagnetism, the unity of electric and magnetic phenomena, is due to Hans Christian Ørsted and André-Marie Ampère in 1819–1820. Michael Faraday invented the electric motor in 1821, and Georg Ohm mathematically analysed the electrical circuit in 1827. Electricity and magnetism (and light) were definitively linked by James Clerk Maxwell, in particular in his "On Physical Lines of Force" in 1861 and 1862.
While the early 19th century had seen rapid progress in electrical science, the late 19th century would see the greatest progress in electrical engineering. Through such people as Alexander Graham Bell, Ottó Bláthy, Thomas Edison, Galileo Ferraris, Oliver Heaviside, Ányos Jedlik, William Thomson, 1st Baron Kelvin, Charles Algernon Parsons, Werner von Siemens, Joseph Swan, Reginald Fessenden, Nikola Tesla and George Westinghouse, electricity turned from a scientific curiosity into an essential tool for modern life.
In 1887, Heinrich Hertz discovered that electrodes illuminated with ultraviolet light create electric sparks more easily. In 1905, Albert Einstein published a paper that explained experimental data from the photoelectric effect as being the result of light energy being carried in discrete quantized packets, energising electrons. This discovery led to the quantum revolution. Einstein was awarded the Nobel Prize in Physics in 1921 for "his discovery of the law of the photoelectric effect". The photoelectric effect is also employed in photocells such as can be found in solar panels.
The first solid-state device was the "cat's-whisker detector" first used in the 1900s in radio receivers. A whisker-like wire is placed lightly in contact with a solid crystal (such as a germanium crystal) to detect a radio signal by the contact junction effect. In a solid-state component, the current is confined to solid elements and compounds engineered specifically to switch and amplify it. Current flow can be understood in two forms: as negatively charged electrons, and as positively charged electron deficiencies called holes. These charges and holes are understood in terms of quantum physics. The building material is most often a crystalline semiconductor.
Solid-state electronics came into its own with the emergence of transistor technology. The first working transistor, a germanium-based point-contact transistor, was invented by John Bardeen and Walter Houser Brattain at Bell Labs in 1947, followed by the bipolar junction transistor in 1948.
|
Based on the following passage describe the first solid-state device and its use.
|
The first solid-state device was the "cat's-whisker detector" first used in the 1900s in radio receivers. A whisker-like wire is placed lightly in contact with a solid crystal (such as a germanium crystal) to detect a radio signal by the contact junction effect.
|
null | false
| null |
is a career in public accounting a good way to go
|
yes
|
|
null | false
| null |
Give me a list of the last five "Best New Artist" Grammy winners.
|
1. Samara Joy, 2023
2. Olivia Rodrigo, 2022
3. Megan Thee Stallion, 2021
4. Billie Eilish, 2020
5. Dua Lipa, 2019
|
|
null | false
| null |
Classify each of the following as either an insect or a worm: tequila worm, silk worm, glow worm, inchworms, earthworms, tapeworm, hookworm, liver fluke, threadworm
|
Both worms and insects are invertebrates in the animal kingdom, meaning they do not have backbones. Tapeworms, hookworms, liver flukes, and threadworms are examples of intestinal worms. Earthworms are also worms. Tequila worms, silkworms, glow worms, and inchworms are not worms. They are insects. Silkworms, for example, are the larva of silk moths.
|
|
null | false
| null |
Tennis balls were originally made of cloth strips stitched together with thread and stuffed with feathers. Modern tennis balls are made of hollow vulcanized rubber with a felt coating. Traditionally white, the predominant colour was gradually changed to optic yellow in the latter part of the 20th century to allow for improved visibility. Tennis balls must conform to certain criteria for size, weight, deformation, and bounce to be approved for regulation play. The International Tennis Federation (ITF) defines the official diameter as 65.41–68.58 mm (2.575–2.700 in). Balls must weigh between 56.0 and 59.4 g (1.98 and 2.10 oz).
|
What is the size a tennis ball?
|
Tennis balls are 65.41–68.58 millimeters or 2.575–2.700 inches in diameter.
|
null | false
| null |
Difference between EST and EDT?
|
EDT is the time zone used in summer and spring, while EST is the time zone used in winter and autumn.
|
|
null | false
| null |
Who is the greatest basketball player of all time?
|
This is a highly subjective topic and hotly debated. Most basketball experts would agree that a player who has acquired the most NBA championships should be on the list. Bill Russell, with 12 championships, has to be in the conversation. LeBron James is the all time scoring leader and has had one of the longest NBA careers to go along with 4 titles. Kobe Bryant at 5 titles is also on the list. But the most dominant player of his era in the 1990's, winning 6 titles in 6 attempts, was Michael Jordan. Michael Jordan was considered one of the best two way players, excelling at both defense and offense. So while some might not agree, Michael Jordan is always in the conversation for best NBA player of all time.
|
|
null | false
| null |
What was the first credit card
|
Diners Club
|
|
null | false
| null |
What are the words of House Footly?
|
"Tread Lightly Here"
|
|
null | false
| null |
Identify which instrument is string or woodwind: Hornpipe, Gusli
|
Gusli is string, Hornpipe is woodwind.
|
|
null | false
| null |
Who invented basketball?
|
James Naismith designed the game of basketball in 1891 while he was teaching at the International YMCA Training School. On 15 January 1892, James Naismith published his rules for the game of "Basket Ball" that he invented. The original game played under these rules was quite different from the one played today as there was no dribbling, dunking, three-pointers, or shot clock, and goal tending was legal.
|
|
null | false
| null |
Why is breaking up so hard to do?
|
Being in a relationship always has ups and downs, but ending a relationship is one of the hardest things humans do because we often feel the loss of hope and uncertainty about the future. We become familiar with having a person in our lives and they often feel like a safety net, so losing that sense of safety is scary. Ultimately, we have to do what is best for our own well being and move on. One might find comfort on spending time with friends and family and a strong support network during this difficult time.
|
|
null | false
| null |
Which of these books won a Pulitzer Prize for fiction? The Road, Lonesome Dove, Ender's Game, Avator the Lost Adventures, Prodigal Summer, The Joy Luck Club, Cloudsplitter, Blink, The Dancer Upstairs, The Known World, Millhauser, All the Light We Cannot See, and Age of Iron,
|
The books on this list that won the Pulitzer Prize for fiction are The Road, Lonesome Dove, The Known World, Millhauser, and All the Light We Cannot See. The other books did not win this award.
|
|
null | false
| null |
What is the largest city in the South Hemisphere?
|
The city of São Paulo, capital of the São Paulo state in Brazil, is the largest city by population (12.33 million people according to the 2020 census).
|
|
null | false
| null |
In 2017, TwoSet Violin made a comedic reference to Ling Ling, a fictional violinist who "practices 40 hours a day". In an interview with Yle Uutiset, they described Ling Ling as the final boss of a video game: the Chuck Norris of violinists. Chen said they improvised the character from their comedy sketch video concerning a teenage violin student's tiger mom comparing the student to her friend's child. In 2018, they released a series of videos called the Ling Ling Workout. In these challenges, the duo play classical pieces (or contemporary music) with handicaps such as playing at double speed, with scordatura, while dancing or hula hooping, with hand positions reversed, or while upside down. Prominent violinists such as Ray Chen, Ziyu He, and Hilary Hahn have also attempted the challenge on their channel.
|
How much time violinist Ling Ling spent practicing each day?
|
Ling Ling is the Chuck Norris of violinists. Her violin performance has far surpassed all living violin virtuosos today.
|
1808.08780
| false
| null |
As a complement of this analysis we show some qualitative results which give us further insights on the transformations of the vector space after our average approximation. In particular, we analyze the reasons behind the higher quality displayed by our bilingual embeddings in monolingual settings. While VecMap and MUSE do not transform the initial monolingual spaces, our model transforms both spaces simultaneously. In this analysis we focus on the source language of our experiments (i.e., English). We found interesting patterns which are learned by our model and help understand these monolingual gains. For example, a recurring pattern is that words in English which are translated to the same word, or to semantically close words, in the target language end up closer together after our transformation. For example, in the case of English-Spanish the following pairs were among the pairs whose similarity increased the most by applying our transformation: cellphone-telephone, movie-film, book-manuscript or rhythm-cadence, which are either translated to the same word in Spanish (i.e., teléfono and película in the first two cases) or are already very close in the Spanish space. More generally, we found that word pairs which move together the most tend to be semantically very similar and belong to the same domain, e.g., car-bicycle, opera-cinema, or snow-ice.
We found interesting patterns which are learned by our model and help understand these monolingual gains. For example, a recurring pattern is that words in English which are translated to the same word, or to semantically close words, in the target language end up closer together after our transformation. For example, in the case of English-Spanish the following pairs were among the pairs whose similarity increased the most by applying our transformation: cellphone-telephone, movie-film, book-manuscript or rhythm-cadence, which are either translated to the same word in Spanish (i.e., teléfono and película in the first two cases) or are already very close in the Spanish space.
More generally, we found that word pairs which move together the most tend to be semantically very similar and belong to the same domain, e.g., car-bicycle, opera-cinema, or snow-ice.
|
Why does the model improve in monolingual spaces as well?
|
because word pair similarity increases if the two words translate to similar parts of the cross-lingual embedding space
|
null | false
| null |
List some TV shows that Canadian actor William B. Davis has been in.
|
The X-Files, Robson Arms, Supernatural, Stargate SG-1, Continuum, Upload
|
|
null | false
| null |
Miramar is a corregimiento in Chiriquí Grande District, Bocas del Toro Province, Panama. It has a land area of 26.9 square kilometres (10.4 sq mi) and had a population of 1,232 as of 2010, giving it a population density of 45.8 inhabitants per square kilometre (119/sq mi).
|
What is the population of Miramar?
|
As of 2010, the population of Miramar is 1232.
|
null | false
| null |
Below is a list of cryptography libraries that support SHA-2:
Botan
Bouncy Castle
Cryptlib
Crypto++
Libgcrypt
Mbed TLS
libsodium
Nettle
LibreSSL
OpenSSL
GnuTLS
wolfSSL
Hardware acceleration is provided by the following processor extensions:
Intel SHA extensions: Available on some Intel and AMD x86 processors.
VIA PadLock
ARMv8 Cryptography Extensions
IBM z/Architecture: Available since 2005 as part of the Message-Security-Assist Extensions 1 (SHA-256) and 2 (SHA-512)
IBM Power ISA since Since v 2.07
|
Which cryptography libraries support SHA-2?
|
Libraries that support SHA-2:
Botan
Bouncy Castle
Cryptlib
Crypto++
Libgcrypt
Mbed TLS
libsodium
Nettle
LibreSSL
OpenSSL
GnuTLS
wolfSSL
|
null | false
| null |
What video beat "Thriller" for Video of the Year at the first-ever MTV Video Music Awards?
|
The Cars, "You Might Think"
|
|
null | false
| null |
What is the name of the New England professional football team?
|
New England Patriots
|
|
null | false
| null |
Identify which instrument is string or percussion: Cymbal, Kontigi
|
Cymbal is percussion, Kontigi is string.
|
|
null | false
| null |
What begins with the letter "e" and only contains one letter?
|
An envelope begins with letter "e" and contains only one letter.
|
|
1910.08502
| false
| null |
In this context, we decided to study the three main types of architectures which have demonstrated promising results over traditional systems: 1) Connectionist Temporal Classification (CTC) BIBREF5, BIBREF6 which uses Markov assumptions (i.e. conditional independence between predictions at each time step) to efficiently solve sequential problems by dynamic programming, 2) Attention-based methods BIBREF7, BIBREF8 which rely on an attention mechanism to perform non-monotonic alignment between acoustic frames and recognized acoustic units and 3) RNN-tranducer BIBREF0, BIBREF9, BIBREF10 which extends CTC by additionally modeling the dependencies between outputs at different steps using a prediction network analogous to a language model. We extend our experiments by adding two hybrid end-to-end methods: a multi-task method called joint CTC-attention BIBREF11, BIBREF12 and a RNN-transducer extended with attention mechanisms BIBREF13. To complete our review, we build a state-of-art phone-based system based on lattice-free MMI criterion BIBREF14 and its end-to-end counterpart with both phonetic and orthographic units BIBREF15.
In this context, we decided to study the three main types of architectures which have demonstrated promising results over traditional systems: 1) Connectionist Temporal Classification (CTC) BIBREF5, BIBREF6 which uses Markov assumptions (i.e. conditional independence between predictions at each time step) to efficiently solve sequential problems by dynamic programming, 2) Attention-based methods BIBREF7, BIBREF8 which rely on an attention mechanism to perform non-monotonic alignment between acoustic frames and recognized acoustic units and 3) RNN-tranducer BIBREF0, BIBREF9, BIBREF10 which extends CTC by additionally modeling the dependencies between outputs at different steps using a prediction network analogous to a language model.
|
What are the existing end-to-end ASR approaches for the French language?
|
The answers are shown as follows:
* 1) Connectionist Temporal Classification (CTC)
* 2) Attention-based methods
* 3) RNN-tranducer
|
null | false
| null |
What are some strategies to survive in nature?
|
Here are some strategies to survive in nature:
1. Build a shelter using trees, branches, leaves, and bushes
2. Start a fire using wood and other materials
3. Scavenge for berries and other plant based foods
4. Look for a clean source of water
5. Create makeshift clothing
6. Set up traps to catch land animals and fish
7. Create spears and other weapons to hunt animals
|
|
null | false
| null |
Describe a Pink Flyod's concert
|
Attending a Pink Floyd concert is an unforgettable experience. The band's legendary shows were known for their mesmerizing light shows, innovative visuals, and immersive soundscapes that took audiences on a journey through time and space.
As the lights dimmed and the iconic Pink Floyd prism logo appeared on the stage, the crowd erupted in cheers and applause. The opening notes of "Shine On You Crazy Diamond" filled the arena, and the band appeared on stage, their silhouettes shrouded in smoke and fog.
The band members seamlessly transitioned from one song to the next, with each track taking the audience on a journey through Pink Floyd's extensive discography. The pulsating rhythms of "Another Brick in the Wall" and "Money" had the crowd dancing and singing along, while the haunting melodies of "Comfortably Numb" and "Wish You Were Here" elicited tears from some.
But it wasn't just the music that made the Pink Floyd concert so mesmerizing. The light show and visuals were equally stunning. Giant inflatables of Pink Floyd's iconic characters, including the famous pig from the "Animals" album cover, floated above the stage, while lasers and projections created a psychedelic atmosphere that enhanced the music.
As the band reached the epic finale of "Echoes," the audience was transported to another world, with the intricate soundscapes and otherworldly visuals creating an otherworldly experience. The band then took their final bow, and the crowd erupted in a standing ovation, still buzzing from the incredible show they had just witnessed.
A Pink Floyd concert was more than just a musical performance, it was a theatrical experience that transcended the boundaries of conventional concerts, creating an unforgettable journey through music and visuals that left audiences in awe.
|
|
null | false
| 249
|
The goal of the current research was to investigate the automatic detection of cyberbullying-related posts on social media. Given the information overload on the web, manual monitoring for cyberbullying has become unfeasible. Automatic detection of signals of cyberbullying would enhance moderation and allow to respond quickly when necessary.
Cyberbullying research has often focused on detecting cyberbullying `attacks', hence overlooking posts written by victims and bystanders. However, these posts could just as well indicate that cyberbullying is going on. The main contribution of this paper is that it presents a system for detecting signals of cyberbullying on social media, including posts from bullies, victims and bystanders. A manually annotated cyberbullying dataset was created for two languages, which will be made available for public scientific use. Moreover, while a fair amount of research has been done on cyberbullying detection for English, we believe this is one of the first papers that focus on Dutch as well.
A set of binary classification experiments were conducted to explore the feasibility of automatic cyberbullying detection on social media. In addition, we sought to determine which information sources contribute to this task. Two classifiers were trained on English and Dutch ASKfm data and evaluated on a holdout test of the same genre. Our experiments reveal that the current approach is a promising strategy for detecting signals of cyberbullying in social media data automatically. After feature selection and hyperparameter optimisation, the classifiers achieved an F INLINEFORM0 -score of 64.32% and 58.72% for English and Dutch, respectively. The systems hereby significantly outperformed a keyword and an (unoptimised) INLINEFORM1 -gram baseline. Analysis of the results revealed that false positives often include implicit cyberbullying or offenses through irony, the challenge of which will constitute an important area for future work.
Another interesting direction for future work would be the detection of fine-grained cyberbullying-related categories such as threats, curses and expressions of racism and hate. When applied in a cascaded model, the system could find severe cases of cyberbullying with high precision. This would be particularly interesting for monitoring purposes, since it would allow to prioritise signals of bullying that are in urgent need for manual inspection and follow-up.
Finally, future work will focus on the detection of participants (or roles) typically involved in cyberbullying. This would allow to analyse the context of a cyberbullying incident and hence evaluate its severity. When applied as moderation support on online platforms, such a system would allow to provide feedback in function of the recipient (i.e., a bully, victim, or bystander).
When applied in a cascaded model, the system could find severe cases of cyberbullying with high precision.
|
What can be interesting for monitoring purposes?
|
The system could find severe cases of cyberbullying with high precision.
|
null | false
| null |
Explain telephone numbering system of Russia ?
|
The telephone numbering plan of the USSR was a set of telephone area codes, numbers and dialling rules which operated in the Soviet Union until the 1990s. After the collapse of the USSR, many newly independent republics implemented their own numbering plans. However, many of the principles of the Soviet numbering plan still remain. Russia and Kazakhstan still retain the former Soviet international code +7.
Key principles
The Soviet Union used a four-level open numbering plan. The long-distance prefix was 8.
a).One could call a local number without the code. Local numbers usually consisted of 5-7 digits, with seven-digit numbers only occurring in Moscow, Leningrad and Kyiv. If the internal number of the regional centre had less than seven digits, then its intercity code was supplemented with numbers (usually 2 for the administrative center, 6 for the second largest city).
b).Within the same numbering area (most often within the state or region) the pattern was: 8 2X YYYYYY, where 2 replaced the three-digit area code.
c).For calls to other areas, one had to first dial long-distance prefix 8, then, after the tone, the full code of the numbering area, which consisted of a three-digit code and zone additional digit(X), and then the local phone number.
d).For international calls, one should dial 8 10 [country code] [phone number]
For example: 8 10 1 212 XXXXXXX for a call to New York City.
Also: 8 10 359 2 XXXXXX for a call to the city of Sofia.
|
|
2003.00639
| false
| null |
The adaptive multi-curricula learning framework is established upon the reinforcement learning (RL) paradigm. Figure FIGREF18 illustrates the overall learning process. The multi-curricula learning scheme is scheduled according to the model's performance on the validation set, where the scheduling mechanism acts as the policy $\pi $ interacting with the dialogue model to acquire the learning status $s$. The reward of the multi-curricula learning mechanism $m_t$ indicates how well the current dialogue model performs. A positive reward is expected if a multi-curricula scheduling action $a_t$ brings improvements on the model's performance, and the current mini-batch of training samples is drawn consulting with the scheduling action $a_t$. The neural dialogue generation model learns from those mini-batches, resulting with a new learning status $s_{t+1}$. The adaptive multi-curricula learning framework is optimized to maximize the reward. Such learning process loops continuously until the performance of the neural dialogue generation model converges.
The adaptive multi-curricula learning framework is established upon the reinforcement learning (RL) paradigm. Figure FIGREF18 illustrates the overall learning process. The multi-curricula learning scheme is scheduled according to the model's performance on the validation set, where the scheduling mechanism acts as the policy $\pi $ interacting with the dialogue model to acquire the learning status $s$. The reward of the multi-curricula learning mechanism $m_t$ indicates how well the current dialogue model performs. A positive reward is expected if a multi-curricula scheduling action $a_t$ brings improvements on the model's performance, and the current mini-batch of training samples is drawn consulting with the scheduling action $a_t$.
|
How does framework automatically chooses different curricula at the evolving learning process according to the learning status of the neural dialogue generation model?
|
The answers are shown as follows:
* The multi-curricula learning scheme is scheduled according to the model's performance on the validation set, where the scheduling mechanism acts as the policy $\pi $ interacting with the dialogue model to acquire the learning status $s$. The reward of the multi-curricula learning mechanism $m_t$ indicates how well the current dialogue model performs.
|
null | false
| null |
Komorida was born in Kumamoto Prefecture on July 10, 1981. After graduating from high school, he joined the J1 League club Avispa Fukuoka in 2000. Although he debuted as a midfielder in 2001, he did not play much and the club was relegated to the J2 League at the end of the 2001 season. In 2002, he moved to the J2 club Oita Trinita. He became a regular player as a defensive midfielder and the club won the championship in 2002 and was promoted in 2003. He played many matches until 2005. In September 2005, he moved to the J2 club Montedio Yamagata. In 2006, he moved to the J2 club Vissel Kobe. Although he became a regular player as a defensive midfielder, his gradually was played less during the summer. In 2007, he moved to the Japan Football League club Rosso Kumamoto (later Roasso Kumamoto) based in his local region. He played as a regular player and the club was promoted to J2 in 2008. Although he did not play as much, he still played in many matches. In 2010, he moved to Indonesia and joined Persela Lamongan. In July 2010, he returned to Japan and joined the J2 club Giravanz Kitakyushu. He played often as a defensive midfielder and center back until 2012 when he retired.
|
When was Tomoaki Komorida born?
|
Tomoaki Komorida was born on July 10,1981.
|
null | false
| null |
How Is Linux Licensed?
|
Linus has placed the Linux kernel under the GNU General Public License, which basically means that you may freely copy, change, and distribute it, but you may not impose any restrictions on further distribution, and you must make the source code available.
This is not the same as Public Domain. See the Copyright FAQ, ftp://rtfm.mit.edu/pub/usenet/news.answers/law/copyright, for details.
Full details are in the file COPYING in the Linux kernel sources (probably in /usr/src/linux on your system). There is a FAQ for the GPL at: http://www.gnu.org/copyleft/gpl-faq.html.
The licenses of the utilities and programs which come with the installations vary. Much of the code is from the GNU Project at the Free Software Foundation, and is also under the GPL. Some other major programs often included in Linux distributions are under a BSD license and other similar licenses.
|
|
null | false
| null |
Which is a species of fish? Finger or Fingerfish
|
Fingerfish
|
|
null | false
| null |
In Berkeley, how do parents select their public middle school?
|
The Berkeley Unified Public Schools assign children to their middle school based on the location of their home.
|
|
null | false
| null |
Categorize the below types of animals based on whether they are domestic or non-domestic
dog, cat, badgers, cougars
|
domestic : dog, cat
non-domestic : badgers, cougars
|
|
null | false
| 251
|
It is common knowledge in modern NLP that using large amounts of high-quality training data is a key aspect in building successful machine-learning based systems. For this reason, a major challenge when building such systems is obtaining data in the domain of interest. But what defines a domain? Natural language varies greatly across topics, styles, levels of formality, genres and many other linguistic nuances BIBREF0, BIBREF1, BIBREF2. This overwhelming diversity of language makes it hard to find the right data for the task, as it is nearly impossible to well-define the exact requirements from such data with respect to all the aforementioned aspects. On top of that, domain labels are usually unavailable – e.g. in large-scale web-crawled data like Common Crawl which was recently used to train state-of-the-art pretrained language models for various tasks BIBREF3.
Domain data selection is the task of selecting the most appropriate data for a domain from a large corpus given a smaller set of in-domain data BIBREF4, BIBREF5, BIBREF6, BIBREF7. In this work, we propose to use the recent, highly successful self-supervised pre-trained language models, e.g. devlin-etal-2019-bert,liu2019roberta for domain data selection. As pretrained LMs demonstrate state-of-the-art performance across many NLP tasks after being trained on massive amounts of data, we hypothesize that the robust representations they learn can be useful for mapping sentences to domains in an unsupervised, data-driven approach. We show that these models indeed learn to cluster sentence representations to domains without further supervision (e.g. Figure FIGREF2), and quantify this phenomenon by fitting Gaussian Mixture Models (GMMs) to the learned representations and measuring the purity of the resulting unsupervised clustering. We then propose methods to leverage these emergent domain clusters for domain data selection in two ways:
Via distance-based retrieval in the sentence embedding space induced by the pretrained language model.
By fine-tuning the pretrained language model for binary classification, where positive examples are from the domain of interest.
Our methods enable to select relevant data for the task while requiring only a small set of monolingual in-domain data. As they are based solely on the representations learned by self-supervised LMs, they do not require additional domain labels which are usually vague and over-simplify the notion of domain in textual data. We evaluate our method on data selection for neural machine translation (NMT) using the multi-domain German-English parallel corpus composed by BIBREF8. Our data selection methods enable to train NMT models that outperform those trained using the well-established cross-entropy difference method of BIBREF4 across five diverse domains, achieving a recall of more than 95% in all cases with respect to an oracle that selects the “true” in-domain data.
Our contributions in this work are as follows. First, we show that pre-trained language models are highly capable of clustering textual data to domains with high accuracy in a purely unsupervised manner. Second, we propose methods to select in-domain data based on this property using vector-space retrieval and positive-unlabeled fine-tuning of pretrained language models for binary classification. Third, we show the applicability of our proposed data selection methods on a popular benchmark for domain adaptation in machine translation. An additional contribution is a new, improved data split we create for this benchmark, as we point on issues with previous splits used in the literature. The code and data for this work is publicly available. We hope this work will encourage more research on understanding the data landscape in NLP, enabling to “find the right data for the task” in the age of massive models and diverse data sources.
This overwhelming diversity of language makes it hard to find the right data for the task, as it is nearly impos_x0002_sible to well-define the exact requirements from such data with respect to all the aforementioned aspects.
|
What makes it hard to hard task for the task?
|
The diversity of language makes it hard.
|
null | false
| 115
|
The aim of the technical validation of the data is to guarantee good recording quality and to replicate findings of previous studies investigating co-registration of EEG and eye movement data during natural reading tasks (e.g. dimigen2011coregistration). We also compare the results to ZuCo 1.0 BIBREF1, which allows a more direct comparison due to the analogous recording procedure.
We validated the recorded eye-tracking data by analyzing the fixations made by all subjects through their reading speed and omission rate on sentence level.
|
What quantitative value do they use to validate the recorded eye-tracking data?
|
They fixations made by all subjects through their reading speed and omission rate on sentence level.
|
null | false
| null |
The Malibamat'so River is a river in northern Lesotho. Its origin is near to the South African border, where it drains the eastern slopes of the Maloti Range. It flows southwards past the village of Lejone, and eventually joins the Senqu River 5 kilometres (3.1 mi) northeast of Mohlanapeng.
The Malibamat'so forms the northern arm of the Katse Dam reservoir, a part of the Lesotho Highlands Water Project. Katse is Africa's highest elevation dam at 1,993 metres (6,539 ft) above sea level. Here the river is joined by the Bokong/Bokung River. Downstream the Malibamat'so's left bank tributaries are the Matsoku and Semenanyane Rivers, before it forms a right bank tributary of the Senqu/Orange River.
|
Malibamat'so River
|
Malibamat'so River is a river in northern Lesotho. Its origin is near to the South African border, where it drains the eastern slopes of the Maloti Range. It flows southwards past the village of Lejone, and eventually joins the Senqu River 5 kilometres (3.1 mi) northeast of Mohlanapeng.
The Malibamat'so forms the northern arm of the Katse Dam reservoir, a part of the Lesotho Highlands Water Project. Katse is Africa's highest elevation dam at 1,993 metres (6,539 ft) above sea level. Here the river is joined by the Bokong/Bokung River.
|
null | false
| null |
Choose the word which is different from the rest: chicken, snake, swan, crocodile, frog
|
All except chicken can live in water
|
|
null | false
| null |
What is Rhaeboceras
|
Rhaeboceras is an extinct genus of ammonite belonging to the subfamily Scaphitinae. Species belonging to the genus lived during the Cretaceous and have been found in the Pierre Shale of North America.
|
|
null | false
| null |
Sonova Holding AG (Phonak Holding AG before 1 August 2007) is an internationally active Swiss group of companies headquartered in Stäfa that specializes in hearing care (hearing instruments, cochlear implants, wireless communication). The Sonova group operates through its core business brands Phonak, Unitron, Hansaton, Advanced Bionics, AudioNova and Sennheiser. It is one of the largest providers in the sector worldwide. The group and its brands hold 24% of the global hearing aid market in sales. As of 11 September 2022, Sonova is a component of the Swiss Market Index.
|
Who is the company Sonova
|
Sonova Holding AG (Phonak Holding AG before 1 August 2007) is an internationally active Swiss group of companies headquartered in Staefa that specializes in hearing care (hearing instruments, cochlear implants, wireless communication).
The Sonova group operates through its core business brands Phonak, Unitron, Hansaton, Advanced Bionics, AudioNova and Sennheiser. It is one of the largest providers in the sector worldwide. The group and its brands hold 24% of the global hearing aid market in sales. As of 11 September 2022, Sonova is a component of the Swiss Market Index.
|
null | false
| null |
When a pawn advances to its eighth rank, as part of the move, it is promoted and must be exchanged for the player's choice of queen, rook, bishop, or knight of the same color. Usually, the pawn is chosen to be promoted to a queen, but in some cases, another piece is chosen; this is called underpromotion. In the animated diagram, the pawn on c7 can be advanced to the eighth rank and be promoted. There is no restriction on the piece promoted to, so it is possible to have more pieces of the same type than at the start of the game (e.g., two or more queens). If the required piece is not available (e.g. a second queen) an inverted rook is sometimes used as a substitute, but this is not recognized in FIDE-sanctioned games.
|
Can any piece promote to a queen?
|
No.
In chess, only the pawn is eligible to promote once it reaches the eighth rank on the board.
|
null | false
| 146
|
We pose the prediction task as a binary classification problem. Specifically, given an image and associated question, a system outputs a binary label indicating whether a crowd will agree on the same answer. Our goal is to design a system that can detect which visual questions to assign a disagreement label, regardless of the disagreement cause (e.g., subjectivity, ambiguity, difficulty). We implement both random forest and deep learning classifiers.
A visual question is assigned either an answer agreement or disagreement label. To assign labels, we employ 10 crowdsourced answers for each visual question. A visual question is assigned an answer agreement label when there is an exact string match for 9 of the 10 crowdsourced answers (after answer pre-preprocessing, as discussed in the previous section) and an answer disagreement label otherwise. Our rationale is to permit the possibility of up to one “careless/spam" answer per visual question. The outcome of our labeling scheme is that a disagreement label is agnostic to the specific cause of disagreement and rather represents the many causes (described above).
For our first system, we use domain knowledge to guide the learning process. We compile a set of features that we hypothesize inform whether a crowd will arrive at an undisputed, single answer. Then we apply a machine learning tool to reveal the significance of each feature. We propose features based on the observation that answer agreement often arises when 1) a lay person's attention can be easily concentrated to a single, undisputed region in an image and 2) a lay person would find the requested task easy to address.
We employ five image-based features coming from the salient object subitizing BIBREF22 (SOS) method, which produces five probabilities that indicate whether an image contains 0, 1, 2, 3, or 4+ salient objects. Intuitively, the number of salient objects shows how many regions in an image are competing for an observer's attention, and so may correlate with the ease in identifying a region of interest. Moreover, we hypothesize this feature will capture our observation from the previous study that counting problems typically leads to disagreement for images showing many objects, and agreement otherwise.
We employ a 2,492-dimensional feature vector to represent the question-based features. One feature is the number of words in the question. Intuitively, a longer question offers more information and we hypothesize additional information makes a question more precise. The remaining features come from two one-hot vectors describing each of the first two words in the question. Each one-hot vector is created using the learned vocabularies that define all possible words at the first and second word location of a question respectively (using training data, as described in the next section). Intuitively, early words in a question inform the type of answers that might be possible and, in turn, possible reasons/frequency for answer disagreement. For example, we expect “why is" to regularly elicit many opinions and so disagreement. This intuition about the beginning words of a question is also supported by our analysis in the previous section which shows that different answer types yield different biases of eliciting answer agreement versus disagreement.
We leverage a random forest classification model BIBREF23 to predict an answer (dis)agreement label for a given visual question. This model consists of an ensemble of decision tree classifiers. We train the system to learn the unique weighted combinations of the aforementioned 2,497 features that each decision tree applies to make a prediction. At test time, given a novel visual question, the trained system converts a 2,497 feature descriptor of the visual question into a final prediction that reflects the majority vote prediction from the ensemble of decision trees. The system returns the final prediction along with a probability indicating the system's confidence in that prediction. We employ the Matlab implementation of random forests, using 25 trees and the default parameters.
We next adapt a VQA deep learning architecture BIBREF24 to learn the predictive combination of visual and textual features. The question is encoded with a 1024-dimensional LSTM model that takes in a one-hot descriptor of each word in the question. The image is described with the 4096-dimensional output from the last fully connected layer of the Convolutional Neural Network (CNN), VGG16 BIBREF25 . The system performs an element-wise multiplication of the image and question features, after linearly transforming the image descriptor to 1024 dimensions. The final layer of the architecture is a softmax layer.
We train the system to predict (dis)agreement labels with training examples, where each example includes an image and question. At test time, given a novel visual question, the system outputs an unnormalized log probability indicating its belief in both the agreement and disagreement label. For our system's prediction, we convert the belief in the disagreement label into a normalized probability. Consequently, predicted values range from 0 to 1 with lower values reflecting greater likelihood for crowd agreement.
Answer (Dis)Agreement Labels: A visual question is assigned either an answer agreement or disagreement label. To assign labels, we employ 10 crowdsourced answers for each visual question. A visual question is assigned an answer agreement label when there is an exact string match for 9 of the 10 crowdsourced answers (after answer pre-preprocessing, as discussed in the previous section) and an answer disagreement label otherwise.
|
How do they assign answer (dis)agreement labels?
|
When there is an exact string match for 9 of the 10 crowdsourced answers, the visual question is assigned an answer agreement label, or it is assigned an answer disagreement label.
|
null | false
| null |
A rainbow is an optical phenomenon that can occur under certain meteorological conditions. It is caused by reflection, refraction and dispersion of light in water droplets resulting in an effect similar to a spectrum of light appearing in the sky. It takes the form of a multicoloured circular arc. Rainbows caused by sunlight always appear in the section of sky directly opposite the Sun.
Rainbows can be full circles. However, the observer normally sees only an arc formed by illuminated droplets above the ground, and centered on a line from the Sun to the observer's eye.
In a primary rainbow, the arc shows red on the outer part and violet on the inner side. This rainbow is caused by light being refracted when entering a droplet of water, then reflected inside on the back of the droplet and refracted again when leaving it.
In a double rainbow, a second arc is seen outside the primary arc, and has the order of its colours reversed, with red on the inner side of the arc. This is caused by the light being reflected twice on the inside of the droplet before leaving it.
|
Without quoting directly from the text, tell me about rainbows.
|
Rainbows may appear when light passes through water. When a rainbow appears when it's raining, it's always in the opposite direction from the Sun. Although usually in the form of an arc, rainbows can also be full circle. The colors on the primary and double rainbows are inverted with red appearing on the outer part of the primary rainbow and the inner part of a double rainbow.
|
null | false
| 9
|
In recent years, there has been an increasing interest in Machine reading comprehension (MRC), which plays a vital role in the assessment of how well a machine could understand natural language. Several datasets BIBREF0 , BIBREF1 , BIBREF2 for machine reading comprehension have been released in recent years and have driven the evolution of powerful neural models. However, much of the research up to now has been dominated by answering questions that can be well solved solved using superficial information, yet struggles to do accurate natural language understanding and reasoning. For example, BIBREF3 jia2017Adversarial show that existing machine learning systems for MRC perform poorly under adversarial evaluation. Recent developments in MRC datasets BIBREF4 , BIBREF5 , BIBREF6 have heightened the need for deep understanding.
Knowledge has a pivotal role in accurately understanding and reasoning natural language in MRC. Previous research BIBREF7 , BIBREF8 has established that human reading comprehension requires both words and world knowledge. In this paper, we consider words and world knowledge in the format of triplets (subject, predicate, object). Specifically, we believe the advantages of using knowledge in MRC are three-fold. First, utilizing knowledge in MRC supports reasoning over multiple triplets because a single triplet may not cover the entire question. Multi-hop reasoning is also a long-standing goal in question answering. Second, building a question answering system based on triplet-style knowledge facilitates the interpretability of the decision making process. Triplets organize the document together with KBs as a graph, where a well-designed model such as PCNet, which we will describe in a later section, expressly reveal rationales for their predictions. Third, representing the documents as knowledge allows for ease of accessing and leveraging the knowledge from external/background knowledge because the knowledge representation of a document is easily consistent with both manually curated and automatically extracted KBs.
In this paper, we present knowledge based machine reading comprehension, which requires reasoning over triplet-style knowledge involved in a document. However, we find published dataset do not sufficiently support this task. We conduct preliminary exploration on SQuAD BIBREF0 . We use a strong open IE algorithm BIBREF9 to extract triplets from the documents and observe that only 15% of instances have an answer that is exactly the same as the corresponding subject/object in the extracted triplets. To do knowledge-based MRC, We build a new dataset consisting of 40,047 examples for the knowledge based MRC task. The annotation of this dataset is designed so that successfully answering the questions requires understanding and the knowledge involved in a document. Each instance is composed of a question, a set of triplets derived from a document, and the answer.
We implement a framework consisting of both a question answering model and a question generation model, both of which take the knowledge extracted from the document as well as relevant facts from an external knowledge base such as Freebase/ProBase/Reverb/NELL. The question answering model gives each candidate answer a score by measuring the semantic relevance between representation and the candidate answer representation in vector space. The question generation model provides each candidate answer with a score by measuring semantic relevance between the question and the generated question based on the semantics of the candidate answer. We implement an MRC model BiDAF BIBREF10 as a baseline for the proposed dataset. To test the scalability of our approach in leveraging external KBs, we use both manually created and automatically extracted KBs, including Freebase BIBREF11 , ProBase BIBREF12 , NELL BIBREF13 and Reverb BIBREF14 . Experiments show that incorporating evidence from external KBs improves both the matching-based and question generation-based approaches. Qualitative analysis shows the advantages and limitations of our approaches, as well as the remaining challenges.
In this paper, we present knowledge based machine reading comprehension, which requires reasoning over tripletstyle knowledge involved in a document. However, we find published dataset do not sufficiently support this task.**** To do knowledge-based MRC, We build a new dataset consisting of 40,047 examples for the knowledge based MRC task.
|
What is the dataset built for?
|
There is no dataset for knowledge based machine reading comprehension present.
|
null | false
| null |
The 2015 UEFA Super Cup was a football match between Spanish teams Barcelona and Sevilla on 11 August 2015 at the Boris Paichadze Dinamo Arena in Tbilisi, Georgia. It was the 40th UEFA Super Cup, an annual tournament contested by the winners of the UEFA Champions League and UEFA Europa League. Barcelona were appearing in the Super Cup for the ninth time, they had previously won and lost the competition four times each. Sevilla were appearing in the competition for the fourth time, they had won once and lost twice. The two sides had met before in the competition in 2006, when Sevilla won 3–0.
The teams had qualified for the competition by winning the two seasonal European competitions. Barcelona won the 2014–15 UEFA Champions League, defeating Italian Juventus 3–1 in the final. Sevilla qualified as winners of the 2014–15 UEFA Europa League. They beat Ukrainian team Dnipro 3–2 in the final.
Barcelona won the game 5–4 after extra time to claim their fifth Super Cup title, tying Milan's record. Their ninth Super Cup appearance was also a record, two ahead of Milan. Dani Alves's fourth Super Cup win and fifth appearance matched Paolo Maldini's records, while Luis Enrique became the fourth man to lift the UEFA Super Cup as coach and player after Pep Guardiola, Carlo Ancelotti and Diego Simeone. The nine goals scored was also the most in any Super Cup fixture, and the attendance of 51,940 was a record for a one-off, neutral venue UEFA Super Cup.
|
Given a reference text about the 2015 UEFA Super Cup, tell me who played in the game, who won, and the score.
|
Barcelona played Sevilla in the 2015 UEFA Super Cup. Barcelona won 5-4 after extra time.
|
1912.10806
| false
| null |
FLOAT SELECTED: Table 1: Predicted Mean MPA results.
FLOAT SELECTED: Table 2: S&P 500 predicted results.
FLOAT SELECTED: Table 1: Predicted Mean MPA results.
FLOAT SELECTED: Table 2: S&P 500 predicted results.
|
What is the prediction accuracy of the model?
|
mean prediction accuracy 0.99582651
S&P 500 Accuracy 0.99582651
|
null | false
| null |
Discuss at least two risks associated with scraping data from websites. Describe steps an analyst could take to mitigate those risks.
|
No clear Terms-of-service for scrapping the B2B partnership data
The companies like Hubspot (https://legal.hubspot.com/website-terms-of-use) and Salesforce do not have a clear Terms of Service (TOS) when using partnership data. The TOS is subject to interpretation.
Automation is highly fragile
Web-scrapping the B2B partnership data with an automated scrapping tool or a programming language like Python- BeautifulSoup is a highly fragile process. The website layout, which can be dynamically changing, can break the automated scraping. Even with automatic scrapping, there will be many manual activities to normalize the data.
An analyst could list the top companies for B2B partnerships and request the companies for the B2B data. The analyst can start approaching the smaller companies and build the business model to illustrate the value of sharing B2B data, creating a flywheel where more companies will be willing to share the data.
|
|
null | false
| null |
List some of the best surf breaks in California
|
Trestles "Uppers" Lowers" "Middles", Cottons, Blacks, Rincon, Malibu, Mavericks, Huntington Pier, Lunada Bay, San Onofre, T-Street, Hollister Ranch, Santa Cruz, Point Dume, Swamis, Windandsea, Salt Creek, Killer Dana (before Dana Point Harbor was built)
|
|
null | false
| null |
The bulk of dried spaghetti is produced in factories using auger extruders. While essentially simple, the process requires attention to detail to ensure that the mixing and kneading of the ingredients produces a homogeneous mix, without air bubbles. The forming dies have to be water cooled to prevent spoiling of the pasta by overheating. Drying of the newly formed spaghetti has to be carefully controlled to prevent strands sticking together, and to leave it with sufficient moisture so that it is not too brittle. Packaging for protection and display has developed from paper wrapping to plastic bags and boxes.
|
Given a reference text about dried spaghetti production, what is required to produce spaghetti without errors?
|
Spaghetti production is straight-forward, but requires the maker to ensure the mixture is homogenous and without air bubbles before starting the drying process. The drying process itself needs to be monitored to ensure that the individual spaghetti strands do not stick together but also do not dry out so much that they become brittle
|
null | false
| null |
In the series A Song of Ice and Fire, who is the founder of House Corbray?
|
Corwyn Corbray
|
|
null | false
| null |
Who wrote Beloved?
|
Beloved is a book written by Toni Morrison
|
|
null | false
| null |
how do I learn flute?
|
Flute is a musical instrument. It requires a lot of practices to play it well. Here are a few ways:
find a good teacher that has rich experience about how to gradually play with it and follow instructions
find materials online about different people's learning schedule and experiences
practice a lot. practice makes perfect. and continuous practice is more important
Also another important thing is to love it and enjoy the learning experience. It creates the intrinsic motivation which is more important.
|
|
null | false
| 251
|
The proliferation of massive pretrained neural language models such as ELMo BIBREF9, BERT BIBREF10 or RoBERTa BIBREF11 has enabled great progress on many NLP benchmarks BIBREF12, BIBREF13. Larger and larger models trained on billions of tokens of raw text are released in an ever-increasing pace BIBREF3, enabling the NLP community to fine-tune them for the task of interest. While many works tried to “probe” those models for the morphological, syntactic and semantic information they capture BIBREF14, BIBREF15, BIBREF16, an important aspect of language remained overlooked in this context – the domain the data comes from, often referred to as the “data distribution”.
The definition of domain is many times vague and over-simplistic (e.g. “medical text” may be used for biomedical research papers and for clinical conversations between doctors and patients, although the two vary greatly in topic, formality etc.). A common definition treats a domain as a data source: “a domain is defined by a corpus from a specific source, and may differ from other domains in topic, genre, style, level of formality, etc.” BIBREF8. We claim that a more data-driven definition should take place, as different data sources may have sentences with similar traits and vice versa - a single massive web-crawled corpus contains texts in numerous styles, topics and registers. Our analysis in Section SECREF2 shows examples for such cases, e.g. a sentence discussing “Viruses and virus-like organisms” in a legal corpus.
We hypothesize that massive pretrained LMs can learn representations that cluster to domains, as texts from similar domains will appear in similar contexts. We test this hypothesis across several large, publicly-available pretrained LMs; we explore both masked-language-models (MLMs) and auto-regressive LMs.
We hypothesize that massive pretrained LMs can learn representations that cluster to domains, as texts from similar domains will appear in similar contexts.
|
What do they hypothesize LMs can do?
|
It can learn representations that cluster to domains.
|
null | false
| null |
Where is the Walmart corporate home office located?
|
While Walmart has many corporate locations located across the world, its home office is located in Bentonville, Arkansas.
|
|
null | false
| null |
Identify which animal is domesticated or wild: Alpaca, Java Sparrow
|
Alpaca is domesticated, Java Sparrow is wild.
|
|
null | false
| 94
|
The development of online peer-to-peer markets in the 1990s, galvanized by the launch of sites like eBay, fundamentally shifted the way buyers and sellers could connect [4]. These new markets not only leveraged technology to allow for faster transaction speeds, but in the process also exposed a variety of unprecedented market-designs [4].
Today, many of the most well-known peer-to-peer markets like Uber and Instacart use a centralized system that matches workers with assigned tasks via a series of complex algorithms [4]. Still, a number of other websites like Airbnb and eBay rely on sellers and buyers to organically find one-another in a decentralized fashion. In the case of these decentralized systems, sellers are asked to price and market their products in order to attract potential buyers. Without a large marketing team at their disposal, however, sellers most often rely on their intuitions for how to present their articles or listings in the most appealing manner. Naturally, this leads to market inefficiencies, where willing sellers and buyers often fail to connect due to an inadequate presentation of the product or service offered.
The development of online peer-to-peer markets in the 1990s, galvanized by the launch of sites like eBay, fundamentally shifted the way buyers and sellers could connect [4]. These new markets not only leveraged technology to allow for faster transaction speeds, but in the process also exposed a variety of unprecedented market-designs [4]. Today, many of the most well-known peer-to-peer markets like Uber and Instacart use a centralized system that matches workers with assigned tasks via a series of complex algorithms [4]. Still, a number of other websites like Airbnb and eBay rely on sellers and buyers to organically find one-another in a decentralized fashion. In the case of these decentralized systems, sellers are asked to price and market their products in order to attract potential buyers. Without a large marketing team at their disposal, however, sellers most often rely on their intuitions for how to present their articles or listings in the most appealing manner. Naturally, this leads to market inefficiencies, where willing sellers and buyers often fail to connect due to an inadequate presentation of the product or service offered.
|
What limitations do P2P websites have?
|
The way that sellers and buyers find each other is decentralized and this brings market inefficiencies.
|
null | false
| 73
|
With the steady growth in the commercial websites and social media venues, the access to users' reviews have become easier. As the amount of data that can be mined for opinion increased, commercial companies' interests for sentiment analysis increased as well. Sentiment analysis is an important part of understanding user behavior and opinions on products, places, or services.
Sentiment analysis has long been studied by the research community, leading to several sentiment-related resources such as sentiment dictionaries that can be used as features for machine learning models BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 . These resources help increase sentiment analysis accuracies; however, they are highly dependent on language and require researchers to build such resources for every language to process.
Feature engineering is a large part of the model building phase for most sentiment analysis and emotion detection models BIBREF4 . Determining the correct set of features is a task that requires thorough investigation. Furthermore, these features are mostly language and dataset dependent making it even further challenging to build models for different languages. For example, the sentiment and emotion lexicons, as well as pre-trained word embeddings are not completely transferable to other languages which replicates the efforts for every language that users would like to build sentiment classification models on. For languages and tasks where the data is limited, extracting these features, building language models, training word embeddings, and creating lexicons are big challenges. In addition to the feature engineering effort, the machine learning models' parameters also need to be tuned separately for each language to get the optimal results.
In this paper, we take a different approach. We build a reusable sentiment analysis model that does not utilize any lexicons. Our goal is to evaluate how well a generic model can be used to mine opinion in different languages where data is more limited than the language where the generic model is trained on. To that end, we build a training set that contains reviews from different domains in English (e.g., movie reviews, product reviews) and train a recurrent neural network (RNN) model to predict polarity of those reviews. Then focusing on a domain, we make the model specialized in that domain by using the trained weights from the larger data and further training with data on a specific domain. To evaluate the reusability of the sentiment analysis model, we test with non-English datasets. We first translate the test set to English and use the pre-trained model to score polarity in the translated text. In this way, our proposed approach eliminates the need to train language-dependent models, use of sentiment lexicons and word embeddings for each language. Our experiments show that a generalizable sentiment analysis model can be utilized successfully to perform opinion mining for languages that do not have enough resources to train specific models.
The contributions of this study are; 1) a robust approach that utilizes machine translation to reuse a model trained on one language in other languages, 2) an RNN-based approach to eliminate feature extraction as well as resource requirements for sentiment analysis, and 3) a technique that statistically significantly outperforms baselines for multilingual sentiment analysis task when data is limited. To the best of our knowledge, this study is the first to apply a deep learning model to the multilingual sentiment analysis task.
Our goal is to evaluate how well a generic model can be used to mine opinion in different languages where data is more limited than the language where the generic model is trained on.
|
What is the goal of this article?
|
Their goal is to evaluate how well a generic model can be used to mine opinion in different languages where data is more limited than the language where the generic model is trained on.
|
null | false
| null |
Which are the 2 prominent parties in Indian politics
|
Congress and BJP are the 2 prominent political parties in Indian Politics. In the recent time, these parties have made alliances and have formed groups called UPA and NDA led by Congress and BJP respectively
|
|
null | false
| null |
What causes revenue to decline?
|
In general, lower volume of sales or lower average selling price causes revenue to decline.
|
|
null | false
| null |
The E.S. Hoyt House is a historic house in Red Wing, Minnesota, United States, designed by the firm of Purcell & Elmslie and built in 1913. The house is listed on the National Register of Historic Places. It is also a contributing property to the Red Wing Residential Historic District.
Description and history
The house is one of their more elaborate designs because its owner had a budget that allowed for lavish decoration. The result is comparable to the William Gray Purcell House and the Edward L. Powers House, both in Minneapolis. The horizontal lines of the Prairie School are more pronounced in this house; they are expressed in the bands of art glass windows on both levels and the cantilevered second story. Its owner, E.S. Hoyt, was the president of the Red Wing Stoneware Company.
The exterior is clad with brick on the first floor and red stucco on the second floor. The house has ten rooms and about 3,600 square feet (330 m2) of floor space. It has 99 diamond-paned art glass windows and a considerable amount of quarter sawn white oak wood ornamentation. The second story of the house is cantilevered over the first story. The entrance opens into a hallway that leads to a library on the left and a living room on the right. The living room, dining room, and an enclosed porch are all stretched across the front of the house and provide plenty of light through the art glass windows. Several of these windows provide panoramic views of the Mississippi River. The kitchen is located behind the dining room, although it has been remodeled since its original design. The second floor of the house has four bedrooms and two bathrooms, several of which have corner windows that expand the sense of space. The maid's room, over the kitchen, had a separate staircase.[
|
Given a reference text about the E.S. Hoyt House, tell me how big the inside of the house is and how many rooms it has.
|
The E.S. Hoyt House is about 3,600 square feet and has ten rooms.
|
1709.05404
| false
| null |
Generic Sarcasm. We first examine the different patterns learned on the Gen dataset. Table TABREF29 show examples of extracted patterns for each class. We observe that the not-sarcastic patterns appear to capture technical and scientific language, while the sarcastic patterns tend to capture subjective language that is not topic-specific. We observe an abundance of adjective and adverb patterns for the sarcastic class, although we do not use adjective and adverb patterns in our regex retrieval method. Instead, such cues co-occur with the cues we search for, expanding our pattern inventory as we show in Table TABREF31 .
Rhetorical Questions. We notice that while the not-sarcastic patterns generated for RQs are similar to the topic-specific not-sarcastic patterns we find in the general dataset, there are some interesting features of the sarcastic patterns that are more unique to the RQs.
Many of our sarcastic questions focus specifically on attacks on the mental abilities of the addressee. This generalization is made clear when we extract and analyze the verb, subject, and object arguments using the Stanford dependency parser BIBREF32 for the questions in the RQ dataset. Table TABREF32 shows a few examples of the relations we extract.
Hyperbole. One common pattern for hyperbole involves adverbs and adjectives, as noted above. We did not use this pattern to retrieve hyperbole, but because each hyperbolic sarcastic utterance contains multiple cues, we learn an expanded class of patterns for hyperbole. Table TABREF33 illustrates some of the new adverb adjective patterns that are frequent, high-precision indicators of sarcasm.
We learn a number of verbal patterns that we had not previously associated with hyperbole, as shown in Table TABREF34 . Interestingly, many of these instantiate the observations of CanoMora2009 on hyperbole and its related semantic fields: creating contrast by exclusion, e.g. no limit and no way, or by expanding a predicated class, e.g. everyone knows. Many of them are also contrastive. Table TABREF33 shows just a few examples, such as though it in no way and so much knowledge.
Generic Sarcasm. We first examine the different patterns learned on the Gen dataset.
We observe that the not-sarcastic patterns appear to capture technical and scientific language, while the sarcastic patterns tend to capture subjective language that is not topic-specific. We observe an abundance of adjective and adverb patterns for the sarcastic class, although we do not use adjective and adverb patterns in our regex retrieval method.
Rhetorical Questions. We notice that while the not-sarcastic patterns generated for RQs are similar to the topic-specific not-sarcastic patterns we find in the general dataset, there are some interesting features of the sarcastic patterns that are more unique to the RQs.
Many of our sarcastic questions focus specifically on attacks on the mental abilities of the addressee. This generalization is made clear when we extract and analyze the verb, subject, and object arguments using the Stanford dependency parser BIBREF32 for the questions in the RQ dataset.
Hyperbole. One common pattern for hyperbole involves adverbs and adjectives, as noted above. We did not use this pattern to retrieve hyperbole, but because each hyperbolic sarcastic utterance contains multiple cues, we learn an expanded class of patterns for hyperbole.
We learn a number of verbal patterns that we had not previously associated with hyperbole, as shown in Table TABREF34 . Interestingly, many of these instantiate the observations of CanoMora2009 on hyperbole and its related semantic fields: creating contrast by exclusion, e.g. no limit and no way, or by expanding a predicated class, e.g. everyone knows. Many of them are also contrastive.
|
What are the linguistic differences between each class?
|
Each class has different patterns in adjectives, adverbs and verbs for sarcastic and non-sarcastic classes
|
null | false
| 65
|
Typical word embeddings models or representations, such as word2vec BIBREF0, GloVe BIBREF1, or FastText BIBREF2, are fast to train and have been pre-trained for a number of different languages. They do not capture the context, though, so each word is always given the same vector, regardless of its context or meaning. This is especially problematic for polysemous words. ELMo (Embeddings from Language Models) embedding BIBREF3 is one of the state-of-the-art pretrained transfer learning models, that remedies the problem and introduces a contextual component.
ELMo model`s architecture consists of three neural network layers. The output of the model after each layer gives one set of embeddings, altogether three sets. The first layer is a CNN layer, which operates on a character level. It is context independent, so each word always gets the same embedding, regardless of its context. It is followed by two biLM layers. A biLM layer consists of two concatenated LSTMs. In the first LSTM, we try to predict the following word, based on the given past words, where each word is represented by the embeddings from the CNN layer. In the second LSTM, we try to predict the preceding word, based on the given following words. It is equivalent to the first LSTM, just reading the text in reverse.
In NLP tasks, any set of these embeddings may be used; however, a weighted average is usually used. The weights of the average are learned during the training of the model for the specific task. Additionally, an entire ELMo model can be fine-tuned on a specific end task.
Although ELMo is trained on character level and is able to handle out-of-vocabulary words, a vocabulary file containing most common tokens is used for efficiency during training and embedding generation. The original ELMo model was trained on a one billion word large English corpus, with a given vocabulary file of about 800,000 words. Later, ELMo models for other languages were trained as well, but limited to larger languages with many resources, like German and Japanese.
ELMo model‘s architecture consists of three neural network layers. The output of the model after each layer gives one set of embeddings, altogether three sets. The first layer is a CNN layer, which operates on a character level. It is context independent, so each word always gets the same embedding, regardless of its context. It is followed by two biLM layers. A biLM layer consists of two concatenated LSTMs. In the first LSTM, we try to predict the following word, based on the given past words, where each word is represented by the embeddings from the CNN layer. In the second LSTM, we try to predict the preceding word, based on the given following words. The second LSTM is equivalent to the first LSTM, just reading the text in reverse.
|
What is the structure of ELMo?
|
ELMo model‘s architecture consists of three neural network layers. The output of the model after each layer gives one set of embeddings, altogether three sets. The first layer is a CNN layer, which operates on a character level. It is context independent, so each word always gets the same embedding, regardless of its context. It is followed by two biLM layers. A biLM layer consists of two concatenated LSTMs. In the first LSTM, they try to predict the following word, based on the given past words, where each word is represented by the embeddings from the CNN layer. In the second LSTM, they try to predict the preceding word, based on the given following words. The second LSTM is equivalent to the first LSTM, just reading the text in reverse.
|
1902.06843
| false
| null |
We employ BIBREF73 's weighted lexicon of terms that uses the dataset of 75,394 Facebook users who shared their status, age and gender. The predictive power of this lexica was evaluated on Twitter, blog, and Facebook, showing promising results BIBREF73 . Utilizing these two weighted lexicon of terms, we are predicting the demographic information (age or gender) of INLINEFORM0 (denoted by INLINEFORM1 ) using following equation: INLINEFORM2
where INLINEFORM0 is the lexicon weight of the term, and INLINEFORM1 represents the frequency of the term in the user generated INLINEFORM2 , and INLINEFORM3 measures total word count in INLINEFORM4 . As our data is biased toward young people, we report age prediction performance for each age group separately (Table TABREF42 ). Moreover, to measure the average accuracy of this model, we build a balanced dataset (keeping all the users above 23 -416 users), and then randomly sampling the same number of users from the age ranges (11,19] and (19,23]. The average accuracy of this model is 0.63 for depressed users and 0.64 for control class. Table TABREF44 illustrates the performance of gender prediction for each class. The average accuracy is 0.82 on INLINEFORM5 ground-truth dataset.
We employ BIBREF73 's weighted lexicon of terms that uses the dataset of 75,394 Facebook users who shared their status, age and gender.
Utilizing these two weighted lexicon of terms, we are predicting the demographic information (age or gender) of INLINEFORM0 (denoted by INLINEFORM1 ) using following equation: INLINEFORM2
where INLINEFORM0 is the lexicon weight of the term, and INLINEFORM1 represents the frequency of the term in the user generated INLINEFORM2 , and INLINEFORM3 measures total word count in INLINEFORM4 .
|
How do this framework facilitate demographic inference from social media?
|
Demographic information is predicted using weighted lexicon of terms.
|
null | false
| null |
What is a credit card?
|
A credit card is small card made out of plastic with numbers, codes and customer data embossed on it issued by banks and credit card companies. They enable people who have them to purchase goods and services using money that has been lended as credit to the person who has the credit card. After purchases are made the credit card holder is responsible for a minimum monthly payment for the lended money that was spend to pay for goods.
|
|
null | false
| null |
Wall was born on the Navajo Nation in Ganado, Arizona, to James and Joan (née Hamilton) Wall. His parents converted to Catholicism after coming into contact with two Franciscan friars. One of six children, he has three sisters and two brothers. Wall graduated from Chandler High School in Chandler, Arizona, in 1983 and attended Arizona State University, obtaining a bachelor's degree in history in 1993.
|
How many children did James S. Wall have?
|
James S. Wall had six children.
|
1904.01608
| false
| null |
FLOAT SELECTED: Table 2: Characteristics of SciCite compared with ACL-ARC dataset by Jurgens et al. (2018)
FLOAT SELECTED: Table 2: Characteristics of SciCite compared with ACL-ARC dataset by Jurgens et al. (2018)
|
What are the citation intent labels in the datasets?
|
Background, extends, uses, motivation, compare/contrast, and future work for the ACL-ARC dataset. Background, method, result comparison for the SciCite dataset.
|
1910.00194
| false
| null |
While WSD is predominantly evaluated on English, we are also interested in evaluating our approach on Chinese, to evaluate the effectiveness of our approach in a different language. We use OntoNotes Release 5.0, which contains a number of annotations including word senses for Chinese. We follow the data setup of BIBREF26 and conduct an evaluation on four genres, i.e., broadcast conversation (BC), broadcast news (BN), magazine (MZ), and newswire (NW), as well as the concatenation of all genres. While the training and development datasets are divided into genres, we train on the concatenation of all genres and test on each individual genre.
While WSD is predominantly evaluated on English, we are also interested in evaluating our approach on Chinese, to evaluate the effectiveness of our approach in a different language.
|
Which language(s) are found in the WSD datasets?
|
The answers are shown as follows:
* WSD is predominantly evaluated on English, we are also interested in evaluating our approach on Chinese
|
null | false
| null |
Kelvyn Park High School is a public 4–year high school located in the Hermosa neighborhood on north-west side of Chicago, Illinois, United States. Opened in 1933, Kelvyn Park is operated by the Chicago Public Schools (CPS) district. In addition to being a neighborhood high school, Kelvyn Park also serves as a middle school with seventh and eighth grades. Kelvyn Park attendance boundaries extends to the Logan Square, Cragin and Belmont Gardens neighborhoods.
|
Write some information in points on Kelvyn Park High School from the given passage.
|
1. The Hermosa neighbourhood on the north-west side of Chicago, Illinois, in the United States, is home to the public 4-year Kelvyn Park High School.
2. Kelvyn Park, which was established in 1933, is run by the Chicago Public Schools (CPS) organisation.
3. Kelvyn Park not only functions as a neighbourhood high school, but also as a middle school housing the seventh and eighth classes.
4. The neighbourhoods of Logan Square, Cragin, and Belmont Gardens are included in the Kelvyn Park attendance boundaries.
|
2002.06053
| false
| null |
Machine translation finds use in cheminformatics in “translation" from one language (e.g. reactants) to another (e.g. products). Machine translation is a challenging task because the syntactic and semantic dependencies of each language differ from one another and this may give rise to ambiguities. Neural Machine Translation (NMT) models benefit from the potential of deep learning architectures to build a statistical model that aims to find the most probable target sequence for an input sequence by learning from a corpus of examples BIBREF110, BIBREF111. The main advantage of NMT models is that they provide an end-to-end system that utilizes a single neural network to convert the source sequence into the target sequence. BIBREF110 refer to their model as a sequence-to-sequence (seq2seq) system that addresses a major limitation of DNNs that can only work with fixed-dimensionality information as input and output. However, in the machine translation task, the length of the input sequences is not fixed, and the length of the output sequences is not known in advance.
The variational Auto-encoder (VAE) is another widely adopted text generation architecture BIBREF101. BIBREF34 adopted this architecture for molecule generation. A traditional auto-encoder encodes the input into the latent space, which is then decoded to reconstruct the input. VAE differs from AE by explicitly defining a probability distribution on the latent space to generate new samples. BIBREF34 hypothesized that the variational part of the system integrates noise to the encoder, so that the decoder can be more robust to the large diversity of molecules. However, the authors also reported that the non-context free property of SMILES caused by matching ring numbers and parentheses might often lead the decoder to generate invalid SMILES strings. A grammar variational auto-encoder (GVAE), where the grammar for SMILES is explicitly defined instead of the auto-encoder learning the grammar itself, was proposed to address this issue BIBREF102. This way, the generation is based on the pre-defined grammar rules and the decoding process generates grammar production rules that should also be grammatically valid. Although syntactic validity would be ensured, the molecules may not have semantic validity (chemical validity). BIBREF103 built upon the VAE BIBREF34 and GVAE BIBREF102 architectures and introduced a syntax-directed variational autoencoder (SD-VAE) model for the molecular generation task. The syntax-direct generative mechanism in the decoder contributed to creating both syntactically and semantically valid SMILES sequences. BIBREF103 compared the latent representations of molecules generated by VAE, GVAE, and SD-VAE, and showed that SD-VAE provided better discriminative features for druglikeness. BIBREF104 proposed an adversarial AE for the same task. Conditional VAEs BIBREF105, BIBREF106 were trained to generate molecules conditioned on a desired property. The challenges that SMILES syntax presents inspired the introduction of new syntax such as DeepSMILES BIBREF29 and SELFIES BIBREF32 (details in Section SECREF3).
Generative Adversarial Network (GAN) models generate novel molecules by using two components: the generator network generates novel molecules, and the discriminator network aims to distinguish between the generated molecules and real molecules BIBREF107. In text generation models, the novel molecules are drawn from a distribution, which are then fine-tuned to obtain specific features, whereas adversarial learning utilizes generator and discriminator networks to produce novel molecules BIBREF107, BIBREF108. ORGAN BIBREF108, a molecular generation methodology, was built upon a sequence generative adversarial network (SeqGAN) from NLP BIBREF109. ORGAN integrated RL in order to generate molecules with desirable properties such as solubility, druglikeness, and synthetizability through using domain-specific rewards BIBREF108.
Machine translation finds use in cheminformatics in “translation" from one language (e.g. reactants) to another (e.g. products).
The variational Auto-encoder (VAE) is another widely adopted text generation architecture BIBREF101.
Generative Adversarial Network (GAN) models generate novel molecules by using two components: the generator network generates novel molecules, and the discriminator network aims to distinguish between the generated molecules and real molecules BIBREF107.
|
Are this models usually semi/supervised or unsupervised?
|
Both supervised and unsupervised, depending on the task that needs to be solved.
|
null | false
| null |
Why sport cricket is so famous?
|
Cricket is a very interesting and engaging sport which is celebrated vastly in many nations like India, Australia, England, South Africa etc. Because of the popularity of cricket, it attracts big sponsors and investments globally
|
|
null | false
| null |
What do you think are the most relevant things in life?
|
That's a very complex question and depends on the individual and their circumstances. That said, it would be fair to assume that family, friends, and health will likely rank high for many people.
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.