| paper_id (string, len 10, ⌀) | yes_no (bool, 2 classes) | paper_index (int64, 0–519, ⌀) | evidence (string, len 0–37.7k) | question (string, len 4–11.7k) | answer (string, len 1–26k) |
|---|---|---|---|---|---|
null | false
| 147
|
Table TABREF14 shows the effectiveness of our model (multi-task transformer) over the baseline transformer BIBREF8. Our model achieves significant performance gains over the baseline on the test sets for both Italian and Finnish query translation. The overall low MAP for NMT could possibly be improved with a larger TC. Moreover, our model validation approach requires access to the RC index, which slows down the overall training process. Hence, we could not train our model for a large number of epochs, which may be another cause of the low performance.
We want to show that translation terms generated by our multi-task transformer are roughly equally likely to be seen in the Europarl corpus (TC) or the CLEF corpus (RC). Given a translation term INLINEFORM0, we compute the ratio of the probability of seeing INLINEFORM1 in TC and RC, INLINEFORM2. Here, INLINEFORM3 and INLINEFORM4 are calculated similarly. Given a query INLINEFORM5 and its translation INLINEFORM6 provided by model INLINEFORM7, we calculate the balance of INLINEFORM8, INLINEFORM9. If INLINEFORM10 is close to 1, the translation terms are as likely in TC as in RC. Figures FIGREF15 and FIGREF16 show the balance values for the baseline transformer and our model on random samples of 20 queries from the validation and test sets of the Italian queries, respectively. It is evident that our model achieves better balance than the baseline transformer, except in a very few cases.
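As a concrete sketch of this statistic (using unigram maximum-likelihood estimates and an assumed tolerance band around 1, since the exact formulas are elided by the INLINEFORM placeholders above), the per-query balance could be computed as follows; all names are illustrative:

```python
from collections import Counter

def balance(translation_terms, tc_counts, rc_counts):
    """Fraction of a query's translation terms whose TC/RC probability
    ratio lies near 1, i.e. terms roughly equally likely in both corpora.
    `tc_counts` / `rc_counts` are Counters over non-empty corpus tokens."""
    tc_total = sum(tc_counts.values())
    rc_total = sum(rc_counts.values())
    near_one = 0
    for t in translation_terms:
        p_tc = tc_counts[t] / tc_total
        p_rc = rc_counts[t] / rc_total
        if p_rc == 0:
            continue  # unseen in RC: maximally unbalanced
        if 0.5 <= p_tc / p_rc <= 2.0:  # tolerance band (assumed)
            near_one += 1
    return near_one / max(len(translation_terms), 1)
```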
Given a query INLINEFORM0, consider INLINEFORM1 as the set of terms from the human translation of INLINEFORM2 and INLINEFORM3 as the set of translation terms generated by model INLINEFORM4. We define INLINEFORM5 and INLINEFORM6 as the precision and recall of INLINEFORM7 for model INLINEFORM8. In Table TABREF19, we report average precision and recall for both the transformer and our model across our train and validation query sets over two language pairs. Our model generates precise translations, i.e., it avoids terms that might be useless or even harmful for retrieval. From our observation, the avoided terms are generally highly likely in TC and are generated because the translation model overfits. Our model achieves a regularization effect through the auxiliary task, which confirms results from the existing multi-tasking literature BIBREF15.
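Since the metric definitions are elided, a minimal set-overlap reading of these quantities might look like the following sketch; it treats both translations as unordered term sets:

```python
def term_precision_recall(human_terms, model_terms):
    """Set-level precision/recall of a model's translation terms
    against the human translation term set for one query."""
    human, model = set(human_terms), set(model_terms)
    overlap = len(human & model)
    precision = overlap / len(model) if model else 0.0
    recall = overlap / len(human) if human else 0.0
    return precision, recall
```

Applied to the example in the next paragraph, a term like parliament lowers precision because it never appears in the human term set.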
To explore translation quality, consider a pair of sample translations provided by the two models. For the Italian query medaglia oro super vinse medaglia oro super olimpiadi invernali lillehammer, the translated term set from our model is {gold, coin, super, free, harmonising, won, winter, olympics}, while the transformer output is {olympic, gold, one, coin, super, years, won, parliament, also, two, winter}. The term set from the human translation is {super, gold, medal, won, lillehammer, olympic, winter, games}. The transformer comes up with terms like parliament, also, two, and years that never appear in the human translation. We found that these terms are very likely in Europarl and rare in CLEF. Our model also generates terms such as harmonising, free, and olympics that are not generated by the transformer; however, we found that these terms are equally likely in Europarl and CLEF.
|
Are the translation terms generated by their multi-task transformer roughly equally likely to be seen in the Europarl corpus or the CLEF corpus?
|
Yes, they are.
|
null | false
| null |
Why is Mauritius famous?
|
1. Because it has nice resorts
2. Because its beaches are very nice
3. Because its food is awesome
|
|
1909.08859
| false
| null |
In particular, we take advantage of the recently proposed RecipeQA dataset BIBREF2, a dataset for multimodal comprehension of cooking recipes, and ask whether it is possible to have a model which employs dynamic representations of entities in answering questions that require multimodal understanding of procedures. To this end, inspired by BIBREF5, we propose Procedural Reasoning Networks (PRN), which incorporate entities into the comprehension process and make it possible to keep track of entities, understand their interactions, and accordingly update their states across time. We report that our proposed approach significantly improves upon previously published results on visual reasoning tasks in RecipeQA, which test understanding of causal and temporal relations from images and text. We further show that the dynamic entity representations can capture the semantics of the state information in the corresponding steps.
|
What multimodality is available in the dataset?
|
The answers are shown as follows:
* images and text
|
null | false
| null |
Give me a list of what to do with used brake rotors.
|
Scrap them for their raw metal value. Use them as a heavy base for a pole, mailbox, or similar. Use them as workout weights. Use them as anchors.
|
|
null | false
| 153
|
Common to all the models, we initialize our embedding matrix with pre-trained 300-dimensional GloVe word vectors trained on 6B tokens. Out of the 11319 tokens present in the dataset, we found 188 tokens missing from the pre-trained GloVe embeddings, so we manually map these tokens to words conveying a semantically similar meaning; e.g., we map over ten variations of the word “yes” (misspelled or not picked up by the tokenizer): “*yes", “yesa", “yess", “ytes", “yes-", “yes3", “yyes", “yees", etc.
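A sketch of this initialization, assuming `glove` is a token-to-vector dict loaded from the 6B GloVe file and `OOV_MAP` stands in for the manual mapping described above:

```python
import numpy as np

# Illustrative subset of the manual OOV -> in-vocabulary mapping.
OOV_MAP = {"*yes": "yes", "yesa": "yes", "yess": "yes", "ytes": "yes",
           "yes-": "yes", "yes3": "yes", "yyes": "yes", "yees": "yes"}

def build_embedding_matrix(vocab, glove, dim=300):
    """Initialize embeddings from GloVe, redirecting manually mapped
    variants; tokens missing from both get a small random vector."""
    matrix = np.random.normal(scale=0.1, size=(len(vocab), dim))
    for i, token in enumerate(vocab):
        key = token if token in glove else OOV_MAP.get(token)
        if key in glove:
            matrix[i] = glove[key]
    return matrix
```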
For image features, we extract Faster R-CNN features with a ResNet-101 backbone trained on the Visual Genome dataset BIBREF4, similar to BIBREF2. We use an adaptive number of object proposals per image, ranging from 10 to 100, generated using a fixed confidence threshold; each object is then associated with 2048-dimensional mean-pooled features using ROI pooling. We use discriminative decoding throughout our models.
We first describe our models individually and then the ensembling technique that we employ. In the following, MN denotes Memory Networks to encode conversational history, RCNN signifies R-CNN for object-level representations of an image, Wt represents an additional linear layer in the decoder, and LF a late fusion mechanism as defined in BIBREF0.
LF-RCNN Late fusion encoder [2] with concatenated history. We use two-layered LSTMs with 512 hidden units for embedding questions and history. The object-level features are weighted using only the question embeddings. The word embeddings from GloVe vectors are frozen and are not fine-tuned. Figure 1 gives an overview of the architecture.
|
What network is used for embedding questions and history?
|
LSTMs with 512 hidden units.
|
null | false
| null |
Identify which instrument is string or percussion: Hyoshigi, Ronroco
|
Ronroco is string, Hyoshigi is percussion.
|
|
null | false
| null |
He attended Arizona State University on a golf scholarship, where he earned a Bachelor of Arts degree in Communications. There he won 11 college golf tournaments, which is second in school history, behind only Phil Mickelson's 16 collegiate wins.
|
Where did Jon Rahm go to college?
|
Arizona State University
|
null | false
| null |
How long do tennis matches last?
|
Tennis matches do not have a time limit. They can range anywhere from one hour on the short end to five or six hours at the other extreme. In most cases, matches finish within a couple of hours. The variation in the duration of tennis matches stems from the scoring system, which requires a player to win two or three sets, depending on the tournament, to win the match. In really competitive matches, an individual set can go on for hours if the players keep running into ties, prolonging the match.
|
|
null | false
| null |
Why are border collies good guard dogs?
|
Border collies have a very strong protective instinct. Just as they protect herds, they will protect you when danger approaches. Border collies are also vocal: they will not fail to notify you when there is something suspicious around. Finally, border collies are very smart and easy to train.
|
|
null | false
| null |
Who received the maximum number of Oscar awards of all time?
|
While many actors have received more than one Oscar, Katharine Hepburn holds the record with four Oscar awards for her performances. A few other actors have received three Oscars.
|
|
null | false
| null |
During the 1989 Tiananmen Square protests and massacre student demonstrators created and distributed a large variety of propaganda. The first of these were memorial posters dedicated to Hu Yaobang, which were placed in Peking University following his death on Saturday April 15, 1989. On April 16th and 17th, pamphlets, leaflets and other forms of propaganda began to be distributed by university students both in Peking University and at Tiananmen Square where large congregations of students began to form in what became the beginning stages of the protest. These were used to communicate among the students as well as to spread their messages and demands to groups such as the Chinese government and foreign media. Other forms of propaganda would emerge as the protests continued, such as a hunger strike beginning on May 13 and visits from celebrities and intellectuals, as well as speeches and songs. All of these were used to promote the interests of the student protest movement.
|
Summarize the types of propaganda students distributed during the Tiananmen Square protests and massacre in a bulleted list.
|
The following are types of propaganda students distributed during the Tiananmen Square protests and massacre:
- Memorial posters dedicated to Hu Yaobang
- Pamphlets and leaflets
- A hunger strike
- Visits from celebrities and intellectuals
- Speeches and songs
|
null | false
| null |
Beavers (genus Castor) are large, semiaquatic rodents of the Northern Hemisphere. There are two existing species: the North American beaver (Castor canadensis) and the Eurasian beaver (C. fiber). Beavers are the second-largest living rodents, after capybaras, weighing up to 50 kg (110 lb). They have stout bodies with large heads, long chisel-like incisors, brown or gray fur, hand-like front feet, webbed back feet, and tails that are flat and scaly. The two species differ in skull and tail shape and fur color. Beavers can be found in a number of freshwater habitats, such as rivers, streams, lakes and ponds. They are herbivorous, consuming tree bark, aquatic plants, grasses and sedges.
Beavers build dams and lodges using tree branches, vegetation, rocks and mud; they chew down trees for building material. Dams restrict water flow, and lodges serve as shelters. Their infrastructure creates wetlands used by many other species, and because of their effect on other organisms in the ecosystem, beavers are considered a keystone species.
|
Based on the given text, do beavers eat wood or do they only chop down trees for dams?
|
Beavers are herbivorous and consume tree bark as part of their diet, along with aquatic plants, grasses, and sedges. However, when beavers chew down trees, it is primarily for building material: they use the branches, together with other vegetation, rocks, and mud, to build dams that restrict water flow and lodges that serve as shelters.
|
1709.00387
| false
| null |
To further distinguish speech from different Arabic dialects, while making speech from the same dialect more similar, we adopted a Siamese neural network architecture BIBREF24 based on an i-vector feature space. The Siamese neural network has two parallel convolutional networks, INLINEFORM0 , that share the same set of weights, INLINEFORM1 , as shown in Figure FIGREF5 (a). Let INLINEFORM2 and INLINEFORM3 be a pair of i-vectors for which we wish to compute a distance. Let INLINEFORM4 be the label for the pair, where INLINEFORM5 = 1 if the i-vectors INLINEFORM6 and INLINEFORM7 belong to same dialect, and INLINEFORM8 otherwise. To optimize the network, we use a Euclidean distance loss function between the label and the cosine distance, INLINEFORM9 , where INLINEFORM10
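The loss is elided by the INLINEFORM placeholders; a minimal PyTorch sketch of the stated idea, a squared (Euclidean) distance between the pair label and the cosine similarity of the two shared-weight branch outputs, might look like this. It assumes the label is 1 for same-dialect pairs and 0 otherwise (the paper's value for different-dialect pairs is elided):

```python
import torch
import torch.nn.functional as F

def siamese_cosine_loss(net, x1, x2, y):
    """`net` is the shared convolutional tower applied to both
    i-vectors (one set of weights); `y` is 1 for same-dialect
    pairs and 0 otherwise (assumed)."""
    e1, e2 = net(x1), net(x2)  # both branches share weights
    cos = F.cosine_similarity(e1, e2, dim=-1)
    return torch.mean((y - cos) ** 2)  # squared distance to the label
```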
|
What is the architecture of the siamese neural network?
|
The answers are shown as follows:
* two parallel convolutional networks, INLINEFORM0 , that share the same set of weights
|
null | false
| null |
For following Apps in Apple Store, help me classify them into different categories, categories include Business, Developer Tools, Education, Games, Music.
Here is the App list: Slack, Microsoft Excel, WPS Office, Apple Developer, Xcode, CodeRunner 4, Termius -SSH&SFTP client, Kahoot, Duolingo, Complete Anatomy 23, SimCity, Civilization VI, NBA 2k23, Angry Birds, GarageBand, Spotify, QQ Music
|
Here is the classification result based on the provided categories:
For Business: Slack, Microsoft Excel, WPS Office
For Developer Tools: Apple Developer, Xcode, CodeRunner 4, Termius -SSH&SFTP client
For Education: Kahoot, Duolingo, Complete Anatomy 23
For Games: SimCity, Civilization VI, NBA 2k23, Angry Birds
For Music: GarageBand, Spotify, QQ Music
|
|
1910.10408
| false
| null |
Methods ::: Length Token Method
Our first approach to control the length is inspired by target forcing in multilingual NMT BIBREF15, BIBREF16. We first split the training sentence pairs into three groups according to the target/source length ratio (in terms of characters). Ideally, we want a group where the target is shorter than the source (short), one where they are equally sized (normal), and a last group where the target is longer than the source (long). In practice, we select two thresholds $t_\text{min}$ and $t_\text{max}$ according to the length-ratio distribution. All the sentence pairs with a length ratio between $t_\text{min}$ and $t_\text{max}$ are in the normal group, the ones with a ratio below $t_\text{min}$ in short, and the remaining in long. At training time we prepend a length token to each source sentence according to its group ($<$short$>$, $<$normal$>$, or $<$long$>$), in order to let a single network discriminate between the groups (see Figure FIGREF2). At inference time, the length token is used to bias the network to generate a translation that belongs to the desired length group.
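A sketch of the bucketing and token-prepending step (the threshold values here are illustrative; the paper selects them from the length-ratio distribution):

```python
def length_token(src, tgt, t_min, t_max):
    """Assign a length-group token from the character-level
    target/source length ratio."""
    ratio = len(tgt) / max(len(src), 1)
    if ratio < t_min:
        return "<short>"
    if ratio > t_max:
        return "<long>"
    return "<normal>"

def tag_pair(src, tgt, t_min=0.9, t_max=1.1):  # illustrative thresholds
    """Prepend the group token to the source side of a training pair."""
    return f"{length_token(src, tgt, t_min, t_max)} {src}", tgt
```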
|
How do they condition the output to a given target-source class?
|
They split training sentence pairs into three length classes (short/normal/long) and prepend a corresponding length token to the source, which at inference time is used to bias the network toward generating a translation in the desired length group.
|
null | false
| null |
Have you ever seen the Milky Way?
|
I have seen the Milky Way on a few occasions. It's usually on a clear, moonless night when I am in a remote location. For some reason, it is brighter when the air is colder. I don't get to see the Milky Way often, but when I do, I am always amazed by its beauty.
|
|
null | false
| null |
Development of an Apple smartphone began in 2004, when Apple started to gather a team of 1,000 employees led by hardware engineer Tony Fadell, software engineer Scott Forstall, and design officer Jony Ive, to work on the highly confidential "Project Purple".
Then-Apple CEO Steve Jobs steered the original focus away from a tablet (which was later revisited in the form of the iPad) towards a phone. Apple created the device during a secretive collaboration with Cingular Wireless (later renamed AT&T Mobility) at the time—at an estimated development cost of US$150 million over thirty months. According to Jobs in 1998, the "i" word in "iMac" (and therefore "iPod", "iPhone" and "iPad") stands for internet, individual, instruct, inform, and inspire.
Apple rejected the "design by committee" approach that had yielded the Motorola ROKR E1, a largely unsuccessful "iTunes phone" made in collaboration with Motorola. Among other deficiencies, the ROKR E1's firmware limited storage to only 100 iTunes songs to avoid competing with Apple's iPod nano. Cingular gave Apple the liberty to develop the iPhone's hardware and software in-house, a rare practice at the time, and paid Apple a fraction of its monthly service revenue (until the iPhone 3G), in exchange for four years of exclusive U.S. sales, until 2011.
Jobs unveiled the first-generation iPhone to the public on January 9, 2007, at the Macworld 2007 convention at the Moscone Center in San Francisco. The iPhone incorporated a 3.5-inch multi-touch display with few hardware buttons, and ran the iPhone OS operating system with a touch-friendly interface, then marketed as a version of Mac OS X. It launched on June 29, 2007, at a starting price of US$499 in the United States, and required a two-year contract with AT&T.
[Map: worldwide iPhone availability, showing countries where the iPhone has been available since its original release and those where it has been available since the release of the iPhone 3G.]
On July 11, 2008, at Apple's Worldwide Developers Conference (WWDC) 2008, Apple announced the iPhone 3G, and expanded its launch-day availability to twenty-two countries, and it was eventually released in 70 countries and territories. The iPhone 3G introduced faster 3G connectivity, and a lower starting price of US$199 (with a two-year AT&T contract). Its successor, the iPhone 3GS, was announced on June 8, 2009, at WWDC 2009, and introduced video recording functionality.
[Photo: the first iPhone on display under glass at the January 2007 Macworld show.]
The iPhone 4 was announced on June 7, 2010, at WWDC 2010, and introduced a redesigned body incorporating a stainless steel frame and a rear glass panel. At release, the iPhone 4 was marketed as the "world's thinnest smartphone"; it uses the Apple A4 processor, being the first iPhone to use an Apple custom-designed chip. It introduced the Retina display, having four-times the display resolution of preceding iPhones, and was the highest-resolution smartphone screen at release; a front-facing camera was also introduced, enabling video calling functionality via FaceTime.
Users of the iPhone 4 reported dropped/disconnected telephone calls when holding their phones in a certain way, and this issue was nicknamed "antennagate". In January 2011, as Apple's exclusivity agreement with AT&T was expiring, Verizon announced that they would be carrying the iPhone 4, with a model compatible with Verizon's CDMA network releasing on February 10, 2011.
|
From the passage provided, extract the date when the iPhone was announced publicly for the first time.
|
Then-Apple CEO Steve Jobs publicly announced the first-generation iPhone on January 9, 2007, at the Macworld 2007 convention at the Moscone Center in San Francisco.
|
null | false
| null |
Who was the first woman to finish the Boston Marathon?
|
Roberta "Bobbi" Gibb ran the 1966 Boston Marathon and was the first woman to finish it. At the time, women were not allowed to enter the marathon, so she dressed as a man and waited on the sidelines until about half the runners had started, then jumped in to join the pack.
|
|
2004.02083
| false
| null |
We provide a hand-curated collection of complete inflection tables for 198 lemmata. The morphological tags follow the guidelines of the UniMorph schema BIBREF6, BIBREF7, in order to allow for the potential of cross-lingual transfer learning, and they are tagged with respect to:
Person: first (1), second (2), and third (3)
Number: singular (SG) and plural (PL)
Inclusivity (only applicable to first-person plural verbs): inclusive (INCL) and exclusive (EXCL)
Aspect/mood: completive (CPL), progressive (PROG), potential (POT), and habitual (HAB).
|
How was annotation done?
|
The answers are shown as follows:
* hand-curated collection of complete inflection tables for 198 lemmata
|
null | false
| null |
Našiměřice (German: Aschmeritz) is a municipality and village in Znojmo District in the South Moravian Region of the Czech Republic. It has about 200 inhabitants.
Našiměřice lies approximately 28 kilometres (17 mi) north-east of Znojmo, 32 km (20 mi) south-west of Brno, and 189 km (117 mi) south-east of Prague.
|
Where is the city of Našiměřice?
|
Našiměřice (German: Aschmeritz) is a municipality and village in Znojmo District in the South Moravian Region of the Czech Republic. It has about 200 inhabitants.
Našiměřice lies approximately 28 kilometres (17 mi) north-east of Znojmo, 32 km (20 mi) south-west of Brno, and 189 km (117 mi) south-east of Prague.
|
null | false
| null |
Why do people read books?
|
People may read books for a number of reasons. Most commonly, people read books either for entertainment or for information, and these purposes are generally served by fiction and non-fiction books, respectively. A fiction book may craft a story for the reader to consume, exciting their mind with the premise of a new world the author paints for them. A non-fiction book can provide useful information to the reader and help them learn a new subject.
|
|
null | false
| 11
|
Blogging gained momentum in 1999 and became especially popular after the launch of freely available, hosted platforms such as blogger.com or livejournal.com. Blogging has progressively been used by individuals to share news, ideas, and information, but it has also developed a mainstream role to the extent that it is being used by political consultants and news services as a tool for outreach and opinion forming as well as by businesses as a marketing tool to promote products and services BIBREF0 .
For this paper, we compiled a very large geolocated collection of blogs, written by individuals located in the U.S., with the purpose of creating insightful mappings of the blogging community. In particular, during May-July 2015, we gathered the profile information for all the users that have self-reported their location in the U.S., along with a number of posts for all their associated blogs. We utilize this blog collection to generate maps of the U.S. that reflect user demographics, language use, and distributions of psycholinguistic and semantic word classes. We believe that these maps can provide valuable insights and partial verification of previous claims in support of research in linguistic geography BIBREF1 , regional personality BIBREF2 , and language analysis BIBREF3 , BIBREF4 , as well as psychology and its relation to human geography BIBREF5 .
|
What do they use the blog collection to do?
|
To generate maps of the U.S. that reflect user demographics, language use, and distributions of psycholinguistic and semantic word classes.
|
null | false
| 317
|
Domain classification is a task that predicts the most relevant domain given an input utterance BIBREF0. It is becoming more challenging since recent conversational interaction systems such as Amazon Alexa, Google Assistant, and Microsoft Cortana support more than thousands of domains developed by external developers BIBREF3, BIBREF2, BIBREF4. As they are independently and rapidly developed without a centralized ontology, multiple domains have overlapped capabilities that can process the same utterances. For example, “make an elephant sound” can be processed by AnimalSounds, AnimalNoises, and ZooKeeper domains.
Since there are a large number of domains, which are even frequently added or removed, it is infeasible to obtain all the ground-truth domains of the training utterances, and domain classifiers for conversational interaction systems are usually trained given only a small number (usually one) of ground-truths in the training utterances. This setting corresponds to multi-label positive and unlabeled (PU) learning, where assigned labels are positive, unassigned labels are not necessarily negative, and one or more labels are assigned for an instance BIBREF5, BIBREF6.
In this paper, we utilize user log data, which contain triples of an utterance, the predicted domain, and the response, for the model training. Therefore, we are given only one ground-truth for each training utterance. In order to improve the classification performance in this setting, if certain domains are repeatedly predicted with the highest confidences even though they are not the ground-truths of an utterance, we regard the domains as additional pseudo labels. This is closely related to pseudo labeling BIBREF7 or self-training BIBREF8, BIBREF9, BIBREF10. While the conventional pseudo labeling is used to derive target labels for unlabeled data, our approach adds pseudo labels to singly labeled data so that the data can have multiple target labels. Also, the approach is related to self-distillation, which leverages the confidence scores of the non-target outputs to improve the model performance BIBREF11, BIBREF12. While distillation methods utilize the confidence scores as the soft targets, pseudo labeling regards high confident outputs as the hard targets to further boost their confidences. We use both pseudo labeling and self-distillation in our work.
Pseudo labels can be wrongly derived when irrelevant domains are top predicted, which can lead the model training with wrong supervision. To mitigate this issue, we leverage utterances with negative system responses to lower the prediction confidences of the failing domains. For example, if a system response of a domain for an input utterance is “I don't know that one”, the domain is regarded as a negative ground-truth since it fails to handle the utterance.
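A sketch of how pseudo positives and response-derived negatives could be combined per utterance (the names, thresholds, and negative-response set are all illustrative, not from the paper):

```python
NEGATIVE_RESPONSES = {"I don't know that one"}  # illustrative

def derive_labels(ground_truth, response, confidences,
                  k=1, threshold=0.9):
    """Expand a singly labeled utterance into multi-label targets:
    confident non-target domains become pseudo positives, and a
    domain whose response signals failure becomes a hard negative.
    `confidences` maps domain -> model confidence for the utterance."""
    positives, negatives = {ground_truth}, set()
    if response in NEGATIVE_RESPONSES:
        negatives.add(ground_truth)      # failing domain
        positives.discard(ground_truth)
    top = sorted(confidences.items(), key=lambda kv: -kv[1])[:k]
    for domain, conf in top:
        if conf >= threshold and domain not in negatives:
            positives.add(domain)        # pseudo label
    return positives, negatives
```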
Evaluating on an annotated dataset from the user logs of a large-scale conversational interaction system, we show that the proposed approach significantly improves the domain classification, especially when hypothesis reranking is used BIBREF13, BIBREF4.
|
What is improved by their proposed approach significantly?
|
The domain classification.
|
null | false
| null |
Identify from the following list characters from The X-Files who are bald or balding: Walter Skinner, John Fitzgerald Byers, Dana Scully, Melvin Frohike, Darius Michaud, Peter Watts, Conrad Strughold, Queequeg
|
Walter Skinner, Melvin Frohike, Darius Michaud, Peter Watts, Conrad Strughold
|
|
null | false
| null |
What is a bond?
|
A bond is a fixed-income instrument that represents a loan made by an investor to a borrower (typically corporate or governmental). A bond could be thought of as an I.O.U. between the lender and borrower that includes the details of the loan and its payments. Bonds are used by companies, municipalities, states, and sovereign governments to finance projects and operations. Owners of bonds are debtholders, or creditors, of the issuer.
Bond details include the end date when the principal of the loan is due to be paid to the bond owner and usually include the terms for variable or fixed interest payments made by the borrower.
|
|
null | false
| null |
What is an optimisation problem?
|
In mathematics and computer science, an optimisation problem is typically a search problem in which the goal is to locate the best solution among all feasible solutions. An optimisation problem can be categorised as continuous or discrete depending on the type of its variables. In a discrete optimisation problem, variables are discrete and countable. In contrast, continuous variables can take uncountably many values, for example, a variable ranging over a non-empty interval of real numbers.
According to the number of objective functions, an optimisation problem is classified as single or multi-objective. A single-objective optimisation problem only involves one objective function to optimise. On the other hand, a multi-objective one considers two or more objectives simultaneously, which means the interaction between objectives has to be considered.
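For concreteness, the two cases can be written generically as follows (a textbook formulation, not tied to any particular source):

```latex
% Single-objective: one objective function over a feasible set X
\min_{x \in X} f(x)

% Multi-objective: m >= 2 objectives optimised simultaneously;
% "best" is usually defined via Pareto optimality, since improving
% one objective may worsen another
\min_{x \in X} \; \bigl( f_1(x), f_2(x), \ldots, f_m(x) \bigr)
```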
|
|
2004.02083
| false
| null |
Inflectional realization defines the inflected forms of a lexeme/lemma. As a computational task, often referred to as simply “morphological inflection," inflectional realization is framed as a mapping from the pairing of a lemma with a set of morphological tags to the corresponding word form. For example, the inflectional realization of SJQ Chatino verb forms entails a mapping of the pairing of the lemma lyu1 `fall' with the tag-set 1;SG;PROG to the word form nlyon32.
Morphological analysis is the task of creating a morphosyntactic description for a given word. It can be framed in a context-agnostic manner (as in our case) or within a given context, as for instance in the SIGMORPHON 2019 second shared task BIBREF11. We trained an encoder-decoder model that receives the form as character-level input, encodes it with a BiLSTM encoder, and then an attention-enhanced decoder BIBREF14 outputs the corresponding sequence of morphological tags, implemented in DyNet. The baseline results are shown in Table . The exact-match accuracy of 67% is lower than the average accuracy that context-aware systems can achieve, and it highlights the challenge that the complexity of the tonal system of SJQ Chatino can pose.
|
How does morphological analysis differ from morphological inflection?
|
The answers are shown as follows:
* Morphological analysis is the task of creating a morphosyntactic description for a given word
* inflectional realization is framed as a mapping from the pairing of a lemma with a set of morphological tags to the corresponding word form
|
null | false
| null |
Name some countries that don't share a border with any other countries.
|
New Zealand, Australia and Japan do not share a border with any other country.
|
|
null | false
| null |
How can a football training session be planned?
|
First of all, the team has to change into their football clothes. Depending on the weather conditions, the right football boots also have to be chosen: molded rubber studs or screw-in steel studs. Indoor shoes are better in the hall, and short studs are better on hard ground. Training starts with a warm-up, followed by stretching to prepare the muscles for fast movements. Afterwards, passing and shooting drills are rehearsed. At the end, a small game should take place to keep it fun. Then the players stretch again and finish with a cool-down run.
|
|
null | false
| null |
The universe of JoJo's Bizarre Adventure is a reflection of the real world with the added existence of supernatural forces and beings. In this setting, some people are capable of transforming their inner spiritual power into a Stand (スタンド, Sutando); another significant form of energy is Hamon (波紋, "Ripple"), a martial arts technique that allows its user to focus bodily energy into sunlight via controlled breathing. The narrative of JoJo's Bizarre Adventure is split into parts with independent stories and different characters. Each of the series' protagonists is a member of the Joestar family, whose mainline descendants possess a star-shaped birthmark above their left shoulder blade and a name that can be abbreviated to the titular "JoJo". The first six parts take place within a single continuity whose generational conflict stems from the rivalry between Jonathan Joestar and Dio Brando, while the latter three parts take place in an alternate universe where the Joestar family tree is heavily altered.
|
What is the plot of Jojo's Bizarre Adventures?
|
JoJo's Bizarre Adventure is a manga and anime about a family called the Joestars, whose members have names that can usually be abbreviated to JoJo. The story takes place in a world similar to the real world but with added supernatural forces, such as Hamon, a martial arts technique that focuses bodily energy into sunlight through controlled breathing. In the later parts, a new kind of power emerges called Stands, formed from a person's inner spiritual power, which give each user an additional ability. The story is split into parts, each following a descendant of the Joestar family, and the antagonist is usually Dio Brando or an associate.
|
null | false
| null |
Tell me whether companies keep each of these confidential or publish publicly: acquisition targets, customer data, marketing materials, blog posts, product instructions
|
Acquisition Targets: Confidential
Customer Data: Confidential
Marketing Materials: Publish Publicly
Blog Posts: Publish Publicly
Product Instructions: Publish Publicly
|
|
null | false
| 97
|
Our experiments aim to measure if the incorporation of subword information into LexVec results in similar improvements as observed in moving from Skip-gram to fastText, and whether unsupervised morphemes offer any advantage over n-grams. For IV words, we perform intrinsic evaluation via word similarity and word analogy tasks, as well as downstream tasks. OOV word representation is tested through qualitative nearest-neighbor analysis.
All models are trained using a 2015 dump of Wikipedia, lowercased and using only alphanumeric characters. Vocabulary is limited to words that appear at least 100 times for a total of 303517 words. Morfessor is trained on this vocabulary list.
We train the standard LexVec (LV), LexVec using n-grams (LV-N), and LexVec using unsupervised morphemes (LV-M) using the same hyper-parameters as BIBREF7 ( $\textnormal {window} = 2$ , $\textnormal {initial learning rate} = .025$ , $\textnormal {subsampling} = 10^{-5}$ , $\textnormal {negative samples} = 5$ , $\textnormal {context distribution smoothing} = .75$ , $\textnormal {positional contexts} = \textnormal {True}$ ).
Both Skip-gram (SG) and fastText (FT) are trained using the reference implementation of fastText with the hyper-parameters given by BIBREF6 ( $\textnormal {window} = 5$ , $\textnormal {initial learning rate} = .025$ , $\textnormal {subsampling} = 10^{-4}$ , $\textnormal {negative samples} = 5$ ).
All five models are run for 5 iterations over the training corpus and generate 300-dimensional word representations. LV-N, LV-M, and FT use 2,000,000 buckets when hashing subwords.
For word similarity evaluations, we use the WordSim-353 Similarity (WS-Sim) and Relatedness (WS-Rel) BIBREF28 and SimLex-999 (SimLex) BIBREF29 datasets, and the Rare Word (RW) BIBREF20 dataset to verify if subword information improves rare word representation. Relationships are measured using the Google semantic (GSem) and syntactic (GSyn) analogies BIBREF2 and the Microsoft syntactic analogies (MSR) dataset BIBREF30 .
We also evaluate all five models on downstream tasks from the VecEval suite BIBREF13 , using only the tasks for which training and evaluation data is freely available: chunking, sentiment and question classification, and natural language identification (NLI). The default settings from the suite are used, but we run only the fixed settings, where the embeddings themselves are not tunable parameters of the models, forcing the system to use only the information already in the embeddings.
Finally, we use LV-N, LV-M, and FT to generate OOV word representations for the following words: 1) “hellooo”: a greeting commonly used in instant messaging which emphasizes a syllable. 2) “marvelicious”: a made-up word obtained by merging “marvelous” and “delicious”. 3) “louisana”: a misspelling of the proper name “Louisiana”. 4) “rereread”: recursive use of prefix “re”. 5) “tuzread”: made-up prefix “tuz”.
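The subword composition for these OOV words is elided above; a sketch of the standard fastText-style scheme that LV-N and FT build on, with Python's hash() standing in for the FNV hash of the reference implementation, might look as follows:

```python
import numpy as np

def char_ngrams(word, n_min=3, n_max=6):
    """Character n-grams of a word wrapped in boundary markers,
    following the fastText subword scheme."""
    w = f"<{word}>"
    return [w[i:i + n]
            for n in range(n_min, n_max + 1)
            for i in range(len(w) - n + 1)]

def oov_vector(word, bucket_vectors):
    """Vector for an out-of-vocabulary word as the mean of its hashed
    n-gram bucket vectors. `bucket_vectors` has shape (n_buckets, dim),
    e.g. n_buckets = 2,000,000 as in the setup above."""
    n_buckets = bucket_vectors.shape[0]
    idx = [hash(g) % n_buckets for g in char_ngrams(word)]
    return np.mean(bucket_vectors[idx], axis=0)
```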
|
Which words are used to generate out-of-vocabulary words by the models in the research?
|
Hellooo, marvelicious, louisana, rereread and tuzread.
|
null | false
| null |
The 1980 Miami Hurricanes football team represented the University of Miami as an independent during the 1980 NCAA Division I-A football season. Led by second-year head coach Howard Schnellenberger, the Hurricanes played their home games at the Miami Orange Bowl in Miami, Florida. Miami finished the season with a record of 9–3. They were invited to the Peach Bowl, where they defeated Virginia Tech, 20–10.
|
What was the record for University of Miami Hurricanes in 1980?
|
The Hurricanes won 9 games and lost 3 games in 1980, for a record of 9–3.
|
null | false
| 121
|
Transcribing structured data into natural language descriptions has emerged as a challenging task, referred to as "data-to-text". These structures generally regroup multiple elements, as well as their attributes. Most attempts rely on translation encoder-decoder methods which linearize elements into a sequence. This however loses most of the structure contained in the data. In this work, we propose to overpass this limitation with a hierarchical model that encodes the data-structure at the element-level and the structure level. Evaluations on RotoWire show the effectiveness of our model w.r.t. qualitative and quantitative metrics.
|
What is data-to-text?
|
Transcribing structured data into natural language descriptions.
|
null | false
| 94
|
We hope that this research paper establishes a first attempt at using generative machine learning models for the purposes of marketing on peer-to-peer platforms. As the use of these markets becomes more widespread, the use of self-branding and marketing tools will grow in importance. The development of tools like the DMK loss in combination with GANs demonstrates the enormous potential these frameworks can have in solving problems that inevitably arise on peer-to-peer platforms. Certainly, however, more work remains to be done in this field, and recent developments in unsupervised generative models already promise possible extensions to our analysis.
For example, since the introduction of GANs, research groups have studied how variational autoencoders can be incorporated into these models, to increase the clarity and accuracy of the models’ outputs. Wang et al. in particular demonstrate how using a variational autoencoder in the generator model can increase the accuracy of generated text samples [11]. Most promisingly, the data used by the group was a large data set of Amazon customer reviews, which in many ways parallels the tone and semantic structure of Airbnb listing descriptions. More generally, Bowman et al. demonstrate how the use of variational autoencoders presents a more accurate model for text generation, as compared to standard recurrent neural network language models [1].
For future work, we may also consider experimenting with different forms of word embeddings, such as improved word vectors (IWV), which were suggested by Rezaeinia et al. [10]. These IWV word embeddings are trained in a similar fashion to GloVe vectors, but also encode additional information about each word, like the word's part of speech. For our model, we may consider encoding similar information in order for the generator to more easily learn common semantic patterns inherent to the marketing data being analyzed.
|
What future work will they consider?
|
They plan to experiment with different forms of word embeddings, such as improved word vectors (IWV), and to encode similar additional information (e.g., part of speech) for the generator.
|
null | false
| null |
Romain Iannetta (born 27 November 1979 in Villecresnes) is a French racing driver who last competed in the GT4 European Series, driving the No. 77 Audi R8 LMS GT4 Evo for Full Motorsport. He also competes part-time in the NASCAR Whelen Euro Series as the driver of the No. 46 Chevrolet Camaro for Marko Stipp Motorsport in the EuroNASCAR PRO class.
He is the nephew of driver Alain Iannetta who participated in 24 Hours of Le Mans three times (1988, 1989 and 1990).
In parallel to his passion, Romain is an instructor at the driving school PRO'PULSION (Dreux and Issoire) and drives a Ferrari 360 Modena for JMB Racing. He has also worked for many auto manufacturers at the launch of new vehicles on the market (Volkswagen, Audi, Citroën, Renault, Alfa Romeo, ...).
His stunt career began in 2000 with the automotive stunt team CINE CASCADE, directed by Jean-Claude Lagnes. It revolves mainly around feature films such as Le Boulet, Ripoux 3, San Antonio, Hush!, Taxi, 36 Quai des Orfèvres, The Bourne Identity and The Mental Love your father.
He was awarded the World Stunt Awards in the United States for the best sequence on a vehicle for the year 2003 for the film "The Bourne Identity".
|
Who is Romain Iannetta?
|
Romain Iannetta (born 27 November 1979 in Villecresnes) is a French racing driver who last competed in the GT4 European Series, driving the No. 77 Audi R8 LMS GT4 Evo for Full Motorsport. He also competes part-time in the NASCAR Whelen Euro Series as the driver of the No. 46 Chevrolet Camaro for Marko Stipp Motorsport in the EuroNASCAR PRO class.
He is the nephew of driver Alain Iannetta who participated in 24 Hours of Le Mans three times (1988, 1989 and 1990).
In parallel to his passion, Romain is an instructor at the driving school PRO'PULSION (Dreux and Issoire) and drives a Ferrari 360 Modena for JMB Racing. He has also worked for many auto manufacturers at the launch of new vehicles on the market (Volkswagen, Audi, Citroën, Renault, Alfa Romeo, ...).
His stunt career began in 2000 with the automotive stunt team CINE CASCADE, directed by Jean-Claude Lagnes. It revolves mainly around feature films such as Le Boulet, Ripoux 3, San Antonio, Hush!, Taxi, 36 Quai des Orfèvres, The Bourne Identity and The Mental Love your father.
He was awarded the World Stunt Awards in the United States for the best sequence on a vehicle for the year 2003 for the film The Bourne Identity.
|
null | false
| null |
Give me some ideas of places to visit on my next backpacking trip to South America.
|
- Take a cruise to visit the islands of the Galapagos
- Straddle the northern and southern hemispheres at La Mitad del Mundo (The Middle of the Earth), close to Quito, Ecuador
- Hike the Inca Trail to Machu Picchu from Cusco in Peru. You need to get a permit at least 6 months in advance.
- Hike the Salkantay Trek to Machu Picchu from Cusco in Peru. It is a great alternative to the Inca Trail if you do not get a permit.
- Visit villages around Lake Titicaca in Peru
- Take a multi-day tour to the Salar de Uyuni, the world's largest salt flat, on a four-wheel drive from Uyuni in Bolivia
- Go stargazing in the Atacama Desert in Chile
|
|
null | false
| 20
|
The Russian and Ukrainian languages are mainly spoken in the Russian Federation and Ukraine and belong to the East Slavic group of the Indo-European language family. They share many common morphosyntactic features: both are SVO languages with free word order and rich morphology, both use the Cyrillic alphabet, and they share many common cognates.
Both Russia and Ukraine have a common academic tradition that makes it easier to collect corpora which are comparable in terms of both genre and strictly defined academic fields. We work with such a corpus of Russian and Ukrainian academic texts, initially collected for the purposes of cross-lingual plagiarism detection. This data is available online through a number of library services, but unfortunately cannot be republished due to copyright limitations.
The Ukrainian subcorpus contains about 60 thousand extended summaries (Russian and Ukrainian: `автореферат', `avtoreferat') of theses submitted between 1998 and 2011. The Russian subcorpus is smaller in the number of documents (about 16 thousand, approximately the same time period), but the documents are full texts of theses, thus the total volume of the Russian subcorpus is notably larger: 830 million tokens versus 250 million tokens in the Ukrainian one. Generally, the texts belong to one genre that can be defined as post-Soviet expository academic prose, submitted for the academic degree award process.
The documents were converted to plain text files from MS Word format in the case of the Ukrainian subcorpus and mainly from OCRed PDF files in the case of the Russian subcorpus. Because of this, the Russian documents often suffer from OCR artifacts, such as words split with line breaks, incorrectly recognized characters and so on. However, it does not influence the resulting model much, as we show below.
Both Ukrainian and Russian documents come with meta data allowing to separate them into academic fields, with economics, medicine and law being most frequent topics for the Ukrainian data and economics, history and pedagogy dominating the Russian data.
For evaluation, 3 topics were used, distant enough from each other and abundantly presented in both subcorpora: economics, law and history. We randomly selected 100 texts in each language for each topic. As the average length of the Russian texts is significantly higher (they are full theses), we cropped them, leaving only the first 5 thousand words, to mimic the size of the Ukrainian summaries. These 600 documents in 3 classes are used as a test set (see Section "Experiment Design and Evaluation" for the description of the conducted experiments).
The corpora (including the test set) were PoS-tagged. Each word was replaced with its lemma followed by a PoS-tag (`диссертация_S', `диссертацiя_N'). Functional parts of speech (conjunctions, pronouns, prepositions, etc.) and numerals were removed from the texts.
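A sketch of this preprocessing step (the functional-PoS tag set shown is illustrative; the actual tagger's tagset may differ):

```python
# Illustrative functional-PoS tags to drop, plus numerals.
FUNCTIONAL_POS = {"CONJ", "PRON", "PREP", "PART", "NUM"}

def preprocess(tagged_tokens):
    """Replace each word with `lemma_TAG`, dropping functional parts
    of speech and numerals. `tagged_tokens` is a list of
    (lemma, pos) pairs produced by the PoS-tagger."""
    # e.g. preprocess([("диссертация", "S"), ("и", "CONJ")])
    #      -> ["диссертация_S"]
    return [f"{lemma}_{pos}"
            for lemma, pos in tagged_tokens
            if pos not in FUNCTIONAL_POS]
```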
|
How many texts were selected?
|
100 texts in each language for each topic.
|
null | false
| null |
Kimberly Noel Kardashian was born on October 21, 1980, in Los Angeles, California, to Robert and Kris Kardashian (née Houghton). She has an older sister, Kourtney, a younger sister, Khloé, and a younger brother, Rob. Their mother is of Dutch, English, Irish, and Scottish ancestry, while their father was a third-generation Armenian-American. In 1991, their parents divorced and their mother married Bruce Jenner, the 1976 Summer Olympics decathlon winner. As a result of her mother's remarriage, Kim Kardashian acquired step-brothers Burton "Burt", Brandon, and Brody; a step-sister, Casey; and later two half-sisters, Kendall and Kylie Jenner.
|
Who are Kim Kardashian's siblings?
|
Kim Kardashian has nine siblings. Kourtney, Khloé, and Rob Kardashian are her three fully biological siblings. Burton "Burt", Brandon, Brody, and Casey Jenner are her four step-siblings. Kendall and Kylie Jenner are her two half-sisters.
|
null | false
| null |
Describe the sensation of taking a cold shower and explain the rationale for why someone might take one.
|
When you take a shower with very cold water, you throw your body into a temporary state of shock. You feel your skin raise. Your toes may curl a bit. Your eyes may widen. If it is early morning and you're still waking up, you may find that you suddenly feel very alert.
By leaving a room with a warm and comfortable temperature and entering a space with water produced at a chilly and uncomfortable temperature you actually trick your body. The sudden shift produces alertness that can be beneficial for people who are trying to wake up without caffeine and other stimulants or are otherwise trying to train their bodies to tolerate harsh conditions.
|
|
null | false
| null |
The Alliance of Bakongo (French: Alliance des Bakongo, or ABAKO) was a Congolese political party, founded by Edmond Nzeza Nlandu, but headed by Joseph Kasa-Vubu, which emerged in the late 1950s as vocal opponent of Belgian colonial rule in what today is the Democratic Republic of the Congo. Additionally, the organization served as the major ethno-religious organization for the Kongo people (also known as Bakongo) and became closely intertwined with the Kimbanguist Church which was extremely popular in the lower Congo.
|
Is ABAKO no longer an active political party?
|
Correct; the Alliance of Bakongo, or ABAKO, is no longer an active political party.
|
null | false
| null |
Haven't you always believed that Earth is flat?
|
The statement is incorrect because I have never believed that the Earth is flat. The Earth is an irregularly shaped ellipsoid.
|
|
null | false
| null |
The handscroll is a long, narrow, horizontal scroll format in East Asia used for calligraphy or paintings. A handscroll usually measures up to several meters in length and around 25–40 cm in height. Handscrolls are generally viewed starting from the right end. This kind of scroll is intended to be read or viewed flat on a table, in sections. The format thus allows for the depiction of a continuous narrative or journey. The traditional alternative format in East Asian paintings is the vertical hanging scroll, which is rarely as long.
|
What is a handscroll used for?
|
The handscroll is a long, narrow, horizontal scroll format in East Asia used for calligraphy or paintings.
|
1704.05572
| false
| null |
We use the text corpora (S) from BIBREF6 aristo2016:combining to build our tuple KB. For each test set, we use the corresponding training questions $Q_\mathit {tr}$ to retrieve domain-relevant sentences from S. Specifically, for each multiple-choice question $(q,A) \in Q_\mathit {tr}$ and each choice $a \in A$ , we use all non-stopword tokens in $q$ and $a$ as an ElasticSearch query against S. We take the top 200 hits, run Open IE v4, and aggregate the resulting tuples over all $a \in A$ and over all questions in $Q_\mathit {tr}$ to create the tuple KB (T).
We consider two knowledge sources. The Sentence corpus (S) consists of ~80K domain-targeted sentences and 280 GB of plain text extracted from web pages used by BIBREF6 aristo2016:combining. This corpus is used by the IR solver and also used to create the tuple KB T and on-the-fly tuples $T^{\prime }_{qa}$. Additionally, TableILP uses ~70 Curated tables (C) designed for 4th grade NY Regents exams.
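A sketch of the retrieval-and-extraction loop described above, using the elasticsearch-py client (8.x query syntax) and a `run_openie` callable standing in for Open IE v4; the index and field names are assumptions:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch()  # assumes S is indexed as "sentences"

def build_tuple_kb(train_questions, run_openie, stopwords, top_k=200):
    """Retrieve domain-relevant sentences for each (question, choices)
    pair and aggregate the Open IE tuples into the KB (T)."""
    kb = set()
    for question, choices in train_questions:
        for choice in choices:
            tokens = [t for t in f"{question} {choice}".split()
                      if t.lower() not in stopwords]
            hits = es.search(index="sentences",
                             query={"match": {"text": " ".join(tokens)}},
                             size=top_k)["hits"]["hits"]
            for hit in hits:
                kb.update(run_openie(hit["_source"]["text"]))
    return kb
```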
|
What corpus was the source of the OpenIE extractions?
|
The answers are shown as follows:
* domain-targeted ~80K sentences and 280 GB of plain text extracted from web pages used by BIBREF6 aristo2016:combining
|
1908.01060
| false
| null |
To evaluate our sampling strategy, we compare it with the pretrained model and fine-tuned model on 16 different corpora. The results show that our approach outperforms those baselines on all corpora: it achieves a 1.6% lower phone error rate on average. Additionally, we demonstrate that our corpus-level embeddings are able to capture the characteristics of each corpus, especially the language and domain information. The main contributions of this paper are as follows:
|
By how much do they, on average, outperform the baseline multilingual model on 16 low-resource tasks?
|
The answers are shown as follows:
* 1.6% lower phone error rate on average
|
1707.00110
| false
| null |
Due to the reduction in computational time complexity, we expect our method to yield performance gains especially for longer sequences and tasks where the source can be compactly represented in a fixed-size memory matrix. To investigate the trade-off between speed and performance, we compare our technique to standard models with and without attention on a Sequence Copy Task of varying length, as in BIBREF14. We generated 4 training datasets of 100,000 examples and a validation dataset of 1,000 examples. The vocabulary size was 20. For each dataset, the sequences had lengths randomly chosen between 0 and $L$, for $L\in \lbrace 10, 50, 100, 200\rbrace $ unique to each dataset.
Next, we explore if the memory-based attention mechanism is able to fit complex real-world datasets. For this purpose we use 4 large machine translation datasets of WMT'17 on the following language pairs: English-Czech (en-cs, 52M examples), English-German (en-de, 5.9M examples), English-Finnish (en-fi, 2.6M examples), and English-Turkish (en-tr, 207,373 examples). We used the newly available pre-processed datasets for the WMT'17 task. Note that our scores may not be directly comparable to other work that performs its own data pre-processing. We learn shared vocabularies of 16,000 subword units using the BPE algorithm BIBREF19. We use newstest2015 as a validation set, and report BLEU on newstest2016.
|
Which datasets are used in experiments?
|
Sequence Copy Task and WMT'17
|
null | false
| null |
What Is a Request for Proposal (RFP)?
|
A request for proposal (RFP) is a structured acquisition process that announces a project, describes it, and solicits bids from qualified contractors to complete it. Most organizations prefer to launch their projects using RFPs, and many governments always use them. In some cases there are a minimum number of respondents required and a minimum timeframe for making an award.
|
|
null | false
| null |
During the 1960s, digital computer music was pioneered, innovation in live electronics took place, and Japanese electronic musical instruments began to influence the music industry. In the early 1970s, Moog synthesizers and drum machines helped popularize synthesized electronic music. The 1970s also saw electronic music begin to have a significant influence on popular music, with the adoption of polyphonic synthesizers, electronic drums, drum machines, and turntables, through the emergence of genres such as disco, krautrock, new wave, synth-pop, hip hop, and EDM. In the early 1980s mass-produced digital synthesizers, such as the Yamaha DX7, became popular, and MIDI (Musical Instrument Digital Interface) was developed. In the same decade, with a greater reliance on synthesizers and the adoption of programmable drum machines, electronic popular music came to the fore. During the 1990s, with the proliferation of increasingly affordable music technology, electronic music production became an established part of popular culture. In Berlin starting in 1989, the Love Parade became the largest street party with over 1 million visitors, inspiring other such popular celebrations of electronic music.
|
Can you give me a short summary of the following paragraph about the history of electronic music?
|
With the advent of new electronic instruments in the 60s and 70s, namely synthesizers and drum machines, electronic music quickly gained mainstream popularity through genres like disco and hip hop. As the technology progressed through the 80s and 90s, more affordable instruments led electronic music to be even more widely embraced, cementing its status in pop culture and leading to the rise of large electronic music events around the globe.
|
null | false
| 36
|
Named entity recognition (NER) BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 is the process by which we identify text spans which mention named entities and classify them into predefined categories such as person, location, organization, etc. NER serves as the basis for a variety of natural language processing (NLP) applications such as relation extraction BIBREF4 , machine translation BIBREF5 , question answering BIBREF6 and knowledge base construction BIBREF7 . Although early NER systems have been successful in producing adequate recognition accuracy, they often require significant human effort in carefully designing rules or features.
In recent years, deep learning methods have been employed in NER systems, yielding state-of-the-art performance. However, the number of types detected is still not sufficient for certain domain-specific applications. For relation extraction, identifying fine-grained types has been shown to significantly increase the performance of the extractor BIBREF8 , BIBREF9 since this helps in filtering out candidate relation types which do not follow this type constraint. Furthermore, for question answering fine-grained Named Entity Recognition (FgNER) can provide additional information helping to match questions to their potential answers, thus improving performance BIBREF10 . For example, Li and Roth BIBREF11 rank questions based on their expected answer types (i.e. will the answer be food, vehicle or disease).
Typically, FgNER systems use over a hundred labels, arranged in a hierarchical structure. We find that available training data for FgNER typically contain noisy labels, and creating manually annotated training data for FgNER is a time-consuming process. Furthermore, human annotators will have to assign a subset of correct labels from hundreds of possible labels making this a somewhat arduous task. Currently, FgNER systems use distant supervision BIBREF12 to automatically generate training data. Distant supervision is a technique which maps each entity in the corpus to knowledge bases such as Freebase BIBREF13 , DBpedia BIBREF14 , YAGO BIBREF15 and helps with the generation of labeled data. This method will assign the same set of labels to all mentions of a particular entity in the corpus. For example, “Barack Obama” is a person, politician, lawyer, and author. If a knowledge base has these four matching labels, the distant supervision technique will assign all of them to every mention of “Barack Obama”. Therefore, the training data will also fail to distinguish between mentions of “Barack Obama” in all subsequent utterances.
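A toy sketch of the distant-supervision labeling described above, using the paragraph's own "Barack Obama" example; the miniature knowledge base is purely illustrative.

```python
# Distant supervision: every mention of an entity inherits the full label set
# from a knowledge base, regardless of the mention's local context.
KB = {"Barack Obama": {"person", "politician", "lawyer", "author"}}

def distant_labels(mentions):
    return [(m, KB.get(m, set())) for m in mentions]

print(distant_labels(["Barack Obama", "Barack Obama"]))
# Both mentions receive the identical, context-independent label set,
# which is exactly the noise problem discussed above.
```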
Ling et al. ling2012fine proposed the first system for FgNER, where they used 112 overlapping labels with a linear classifier perceptron for multi-label classification. Yosef et al. spaniol2012hyena used multiple binary SVM classifiers to assign entities to a set of 505 types. Gillick et al. gillick2014context introduced context dependent FgNER and proposed a set of heuristics for pruning labels that might not be relevant given the local context of the entity. Yogatama et al. yogatama2015embedding proposed an embedding based model where user-defined features and labels were embedded into a low dimensional feature space to facilitate information sharing among labels.
Shimaoka et al. shimaoka2016attentive proposed an attentive neural network model which used long short-term memory (LSTMs) to encode the context of the entity, then used an attention mechanism to allow the model to focus on relevant expressions in the entity mention's context. To learn entity representations, we propose a scheme which is potentially more generalizable.
NER serves as the basis for a variety of natural language processing (NLP) applications such as relation extraction (Mintz et al., 2009), machine translation (Koehn et al., 2007), question answering (Lin et al., 2012) and knowledge base construction (Dong et al., 2014).
|
What's the usage of NER?
|
NER serves as the basis for a variety of natural language processing (NLP) applications such as relation extraction, machine translation, question answering and knowledge base construction.
|
1910.03891
| false
| null |
In this study, we evaluate our model on three real-world KGs, including two typical large-scale knowledge graphs, Freebase BIBREF0 and DBpedia BIBREF1, and a self-constructed game knowledge graph. First, we adapt a dataset extracted from Freebase, i.e., FB24K, which was used by BIBREF26. Then, we collect extra entities and relations from DBpedia that have at least 100 mentions BIBREF7 and that can be linked to the entities in FB24K by the sameAs triples. Finally, we build a dataset named DBP24K. In addition, we build a game dataset from our game knowledge graph, named Game30K. The statistics of the datasets are listed in Table TABREF24.
First, we adapt a dataset extracted from Freebase, i.e., FB24K, which was used by BIBREF26. Then, we collect extra entities and relations from DBpedia that have at least 100 mentions BIBREF7 and that can be linked to the entities in FB24K by the sameAs triples. Finally, we build a dataset named DBP24K. In addition, we build a game dataset from our game knowledge graph, named Game30K.
|
What three datasets are used to measure performance?
|
The answers are shown as follows:
* FB24K
* DBP24K
* Game30K
|
null | false
| null |
What is the main difference between decoder only transformer and encoder only transformer?
|
The main difference is that an encoder-only transformer looks at the input sequence from both directions, front and back, which means it can better understand the relationships between the tokens in the sequence. A decoder-only transformer, on the other hand, looks at the input in only one direction, which makes it better suited for autoregressive language tasks such as text generation.
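A toy illustration of this difference in terms of attention masks; the helper below is a generic sketch, not tied to any particular library's transformer implementation.

```python
import numpy as np

def attention_mask(seq_len, causal):
    """Encoder-only models use a full (bidirectional) mask; decoder-only
    models use a causal mask so each position only sees earlier tokens."""
    if causal:
        return np.tril(np.ones((seq_len, seq_len), dtype=bool))
    return np.ones((seq_len, seq_len), dtype=bool)

print(attention_mask(4, causal=False).astype(int))  # all ones: see everything
print(attention_mask(4, causal=True).astype(int))   # lower triangular
```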
|
|
null | false
| 157
|
The task of interpreting and following natural language (NL) navigation instructions involves interleaving different signals, at the very least the linguistic utterance and the representation of the world. For example, in turn right on the first intersection, the instruction needs to be interpreted, and a specific object in the world (the intersection) needs to be located in order to execute the instruction. In NL navigation studies, the representation of the world may be provided via visual sensors BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4 or as a symbolic world representation. This work focuses on navigation based on a symbolic world representation (referred to as a map).
Previous datasets for NL navigation based on a symbolic world representation, HCRC BIBREF5, BIBREF6, BIBREF7 and SAIL BIBREF8, BIBREF9, BIBREF10, BIBREF11, BIBREF12, BIBREF13, BIBREF14, BIBREF15 present relatively simple worlds, with a small fixed set of entities known to the navigator in advance. Such representations bypass the great complexity of real urban navigation, which consists of long paths and an abundance of previously unseen entities of different types.
In this work we introduce Realistic Urban Navigation (RUN), where we aim to interpret navigation instructions relative to a rich symbolic representation of the world, given by a real dense urban map. To address RUN, we designed and collected a new dataset based on OpenStreetMap, in which we align NL instructions to their corresponding routes. Using Amazon Mechanical Turk, we collected 2515 instructions over 3 regions of Manhattan, all specified (and verified) by respective sets of human workers. This task raises several challenges. First of all, we assume a large world, providing long routes, vulnerable to error propagation; secondly, we assume a rich environment, with entities of various different types, most of which are unseen during training and are not known in advance; finally, we evaluate on the full route intended, rather than on last-position only.
We then propose a strong neural baseline for RUN where we augment a standard encoder-decoder architecture with an entity abstraction layer, attention over words and worlds, and a constantly updating world-state. Our experimental results and ablation study show that this architecture is indeed better-equipped to treat grounding in realistic urban settings than standard sequence-to-sequence architectures. Given this RUN benchmark, empirical results, and evaluation procedure, we hope to encourage further investigation into the topic of interpreting NL instructions in realistic and previously unseen urban domains.
[1]The task defined by BIBREF6 is of moving between entities only.
To address RUN, we designed and collected a new dataset based on OpenStreetMap, in which we align NL instructions to their corresponding routes.
|
Based on OpenStreetMap, what did the authors design and collect?
|
A new dataset, in which they align NL instructions to their corresponding routes.
|
1910.08418
| false
| null |
Our experiments are performed on an actual endangered language, Mboshi (Bantu C25), a language spoken in Congo-Brazzaville, using the bilingual French-Mboshi 5K corpus of BIBREF17. On the Mboshi side, we consider alphabetic representation with no tonal information. On the French side, we simply consider the default segmentation into words.
Our experiments are performed on an actual endangered language, Mboshi (Bantu C25), a language spoken in Congo-Brazzaville, using the bilingual French-Mboshi 5K corpus of BIBREF17. On the Mboshi side, we consider alphabetic representation with no tonal information. On the French side, we simply consider the default segmentation into words.
|
What is the dataset used in the paper?
|
The answers are shown as follows:
* French-Mboshi 5K corpus
|
null | false
| 84
|
We briefly review the shallow syntactic structures used in this work, and then present a model architecture to obtain embeddings from shallow Syntactic Context (mSynC).
Given a corpus that is annotated with shallow syntax, we propose to condition on both the preceding words and their annotations.
|
For the corpus that is annotated with shallow syntax, what do the authors propose?
|
They propose to condition on both the preceding words and their annotations.
|
null | false
| null |
The Godfather Part II is a 1974 American epic crime film produced and directed by Francis Ford Coppola. The film is partially based on the 1969 novel The Godfather by Mario Puzo, who co-wrote the screenplay with Coppola, and it is both a sequel and a prequel to the 1972 film The Godfather, presenting parallel dramas: one picks up the 1958 story of Michael Corleone (Al Pacino), the new Don of the Corleone family, protecting the family business in the aftermath of an attempt on his life; the prequel covers the journey of his father, Vito Corleone (Robert De Niro), from his Sicilian childhood to the founding of his family enterprise in New York City. The ensemble cast also features Robert Duvall, Diane Keaton, Talia Shire, Morgana King, John Cazale, Mariana Hill, and Lee Strasberg.
|
Who starred in Godfather Part II?
|
Following the success of the first film, Paramount Pictures began developing a follow-up, with many of the cast and crew returning. Coppola, who was given more creative control, had wanted to make both a sequel and a prequel to The Godfather that would tell the story of the rise of Vito and the fall of Michael. Principal photography began in October 1973 and wrapped up in June 1974. The Godfather Part II premiered in New York City on December 12, 1974, and was released in the United States on December 20, 1974, receiving divided reviews from critics; its reputation, however, improved rapidly, and it soon became the subject of critical re-appraisal. It grossed $48 million in the United States and Canada and up to $93 million worldwide on a $13 million budget. The film was nominated for eleven Academy Awards, and became the first sequel to win Best Picture. Its six Oscar wins also included Best Director for Coppola, Best Supporting Actor for De Niro and Best Adapted Screenplay for Coppola and Puzo. Pacino won Best Actor at the BAFTAs and was nominated at the Oscars.
Like its predecessor, Part II remains a highly influential film, especially in the gangster genre. It is considered to be one of the greatest films of all time, as well as the rare example of a sequel that may be superior to its predecessor. In 1997, the American Film Institute ranked it as the 32nd-greatest film in American film history and it retained this position 10 years later. It was selected for preservation in the U.S. National Film Registry of the Library of Congress in 1993, being deemed "culturally, historically, or aesthetically significant". The Godfather Part III, the final installment in the trilogy, was released in 1990.
|
null | false
| null |
What was the world's first high-level programming language, released in 1957?
|
IBM FORTRAN
|
|
null | false
| 486
|
Recent progress in Embodied Artificial Intelligence spans both simulation environments and sophisticated tasks. Our work is most closely related to research in language-guided task completion, Neural SLAM, and exploration.
Language-Guided Task Completion. ALFRED is a benchmark that enables a learning agent to follow natural language descriptions to complete complex household tasks. The agent's goal is to learn a mapping from natural language instructions to a sequence of actions for task completion in a simulated 3D environment. Various modeling approaches have been proposed, falling into roughly two families of methods. The first focuses on learning large end-to-end models that directly translate instructions to low-level agent actions. However, these agents typically suffer from poor generalization performance and are difficult to interpret. Recently, hierarchical approaches Zhang & Chai (2021) have attracted attention due to their better generalization and interpretability. We also adopt a hierarchical structure, focusing on affordance-aware navigation, thereby achieving significantly better generalization than all existing approaches.
Neural SLAM and Affordance-aware Semantic Representation. Neural SLAM constructs an environment semantic representation enabling map-based long-horizon planning. However, these methods are tested on pure navigation tasks instead of complex household tasks, and do not consider affordance Qi et al. (2019); Nagarajan & Grauman (2020); Xu et al. (2020), which is required for tasks involving both navigation and manipulations. In Blukis et al. (2021), the authors utilize SLAM for 3D environment reconstruction in language-guided task completion. Their approach relies heavily on accurate depth prediction (less robust in unseen environments). Instead, we propose a waypoint-oriented representation which associates each object with the locations on the floor from where the agent can interact with the object. Furthermore, different from 2D affordance maps that directly predict the affordance type, our semantic representation supports more fine-grained control of the robot's position and pose, which facilitates significantly better generalization. One prior approach assumes direct access to the ground-truth depth information (not available in our setup), and another focuses only on pure navigation problems. Other approaches reward visiting all navigable areas by searching in a task-agnostic manner. In contrast, we propose a multimodal exploration approach utilizing egocentric visual input, language instructions, and memory of explored areas to reduce task-specific uncertainty of points of interest (areas important to complete the task). We show this to be more efficient, leading to more effective map prediction and robust planning.
However, these methods are tested on pure navigation tasks instead of complex household tasks, and do not consider affordance Qi et al. (2019); Nagarajan & Grauman (2020); Xu et al. (2020), which is required for tasks involving both navigation and manipulations. In Blukis et al. (2021), the authors utilize SLAM for 3D environment reconstruction in language-guided task completion. Their approach relies heavily on accurate depth prediction (less robust in unseen environments). Instead, we propose a waypoint-oriented representation which associates each object with the locations on the floor from where the agent can interact with the object.****Figure 2: Illustration of the affordance-aware semantic representation used in our framework. (left) A top-down view of an indoor scene used in ALFRED (this view is not available to the agent at test time). (middle) A visualization of the corresponding semantic map; only shown are the side table (in green), counter top (in blue) and the navigable area (in red). Two waypoints (drawn in white stars and arrows) are displayed for side table (∈ large object) and lettuce (∈ small object), respectively. (right) While the waypoint for side table is computed from the predicted map, the waypoint for lettuce is obtained by searching among all exploration steps. The visual observation and its mask prediction on the waypoint for lettuce are shown. For reference, the high-level goal description of the task used in this example is "pick up the lettuce and place it on a table".****ALFRED supports more than 100 types of objects (one way in which it mimics real-world complexity), and we propose to handle small and large objects differently. We detail our design below and give an example in Figure 2. Further implementation details are available in the Appendix.
|
The representation used in Chaplot et al. 2020b and HLSM also involves object positions, heights, and so on. What makes the proposed method different so that it claims to be affordance-aware?
|
Our major difference is that we devise a waypoint-oriented representation instead of trying to estimate the exact 3D coordinates of each object in the scene. Each object in our representation is constructed by the associated location from where the agent can interact with it (illustrated in Figure 2). This is easier to obtain and better suited for the downstream object interaction tasks than directly estimating the locations of all objects. To acquire such a map we handle large and small objects differently as described in Sec. 4.2.
|
2004.02929
| true
| null |
In terms of automatic detection of anglicisms, previous approaches in different languages have mostly depended on resource lookup (lexicon or corpus frequencies), character n-grams and pattern matching. alex-2008-comparing combined lexicon lookup and a search engine module that used the web as a corpus to detect English inclusions in a corpus of German texts and compared her results with a maxent Markov model. furiassi2007retrieval explored corpora lookup and character n-grams to extract false anglicisms from a corpus of Italian newspapers. andersen2012semi used dictionary lookup, regular expressions and lexicon-derived frequencies of character n-grams to detect anglicism candidates in the Norwegian Newspaper Corpus (NNC) BIBREF21, while losnegaard2012data explored a Machine Learning approach to anglicism detection in Norwegian by using TiMBL (Tilburg Memory-Based Learner, an implementation of a k-nearest neighbor classifier) with character trigrams as features. garley-hockenmaier-2012-beefmoves trained a maxent classifier with character n-gram and morphological features to identify anglicisms in German online communities. In Spanish, serigos2017using extracted anglicisms from a corpus of Argentinian newspapers by combining dictionary lookup (aided by TreeTagger and the NLTK lemmatizer) with automatic filtering of capitalized words and manual inspection. In serigos2017applying, a character n-gram module was added to estimate the probabilities of a word being English or Spanish. moreno2018configuracion used different pattern-matching filters and lexicon lookup to extract anglicism candidates from a corpus of tweets in US Spanish.
In Spanish, serigos2017using extracted anglicisms from a corpus of Argentinian newspapers by combining dictionary lookup (aided by TreeTagger and the NLTK lemmatizer) with automatic filtering of capitalized words and manual inspection. In serigos2017applying, a character n-gram module was added to estimate the probabilities of a word being English or Spanish. moreno2018configuracion used different pattern-matching filters and lexicon lookup to extract anglicism candidates from a corpus of tweets in US Spanish.
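As a rough illustration of the character n-gram idea that recurs in these works, the sketch below scores how likely a token is under two character-level models; the trigram order, add-alpha smoothing, and tiny word lists are our own illustrative assumptions.

```python
import math
from collections import Counter

def ngram_counts(words, n=3):
    """Character n-gram counts (with boundary markers) from a word list."""
    counts = Counter()
    for w in words:
        padded = f"^{w.lower()}$"
        counts.update(padded[i:i + n] for i in range(len(padded) - n + 1))
    return counts

def log_likelihood(word, counts, n=3, alpha=1.0):
    """Add-alpha smoothed log-likelihood of a word under an n-gram model."""
    total = sum(counts.values())
    padded = f"^{word.lower()}$"
    grams = [padded[i:i + n] for i in range(len(padded) - n + 1)]
    return sum(math.log((counts[g] + alpha) / (total + alpha * len(counts)))
               for g in grams)

en_counts = ngram_counts(["software", "website", "smartphone", "streaming"])
es_counts = ngram_counts(["programa", "pantalla", "teclado", "internauta"])
# A token is an anglicism candidate if it is far more probable under the
# English model than under the Spanish one.
print(log_likelihood("streaming", en_counts) - log_likelihood("streaming", es_counts))
```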
|
Does the paper mention other works proposing methods to detect anglicisms in Spanish?
|
Yes.
|
1610.08597
| true
| null |
We consider a dataset of curated gang and non-gang members' Twitter profiles collected from our previous work BIBREF9. It was developed by querying the Followerwonk Web service API with location-neutral seed words known to be used by gang members across the U.S. in their Twitter profiles. The dataset was further expanded by examining the friend, follower, and retweet networks of the gang member profiles found by searching for seed words. Specific details about our data curation procedure are discussed in BIBREF9. Ultimately, this dataset consists of 400 gang member profiles and 2,865 non-gang member profiles. For each user profile, we collected up to the most recent 3,200 tweets from their Twitter timelines, profile description text, profile and cover images, and the comments and video descriptions for every YouTube video shared by them. Table 1 provides statistics about the number of words found in each type of feature in the dataset. It includes a total of 821,412 tweets from gang members and 7,238,758 tweets from non-gang members.
It was developed by querying the Followerwonk Web service API with location-neutral seed words known to be used by gang members across the U.S. in their Twitter profiles.
|
Do the authors report on English datasets only?
|
Yes.
|
null | false
| null |
Suggest 5 horror films to me
|
Psycho
Babadook
The Exorcist
Saw
Rosemary's Baby
|
|
null | false
| null |
Give me a bulleted list of the seven most recent Harvard University presidents
|
* Lawrence S. Bacow (2018 - present)
* Drew Gilpin Faust (2007 - 2018)
* Lawrence H. Summers (2001 - 2007)
* Neil L. Rudenstine (1991 - 2001)
* Derek Bok (1971 - 1991; also interim president 2006 - 2007)
* Nathan Marsh Pusey (1953-1971)
* James Bryant Conant (1933-1953)
|
|
null | false
| null |
Classify these breeds as either 'dog' or 'cat': Border Collie, Siamese, Maine Coon, German Shepherd, Persian, and Corgi?
|
Border Collie is a breed of dog. Siamese is a breed of cat. Maine Coon is a breed of cat. German Shepherd is a breed of dog. Persian is a breed of cat. Corgi is a breed of dog.
|
|
null | false
| null |
Which of the following items would I need on a surf trip? Surfboard, leash, tent, knife, wetsuit, batteries, granola bars, bag of grapes, lead weights, a dump truck, the complete collection of Beach Boys albums on vinyl.
|
The useful items are a surfboard, leash, wetsuit and granola bars
|
|
null | false
| 134
|
In this paper, we proposed a method of stacking multiple LSTM layers for modeling sentences, dubbed CAS-LSTM. It uses not only hidden states but also cell states from the previous layer, for the purpose of controlling the vertical information flow in a more elaborate way. We evaluated the proposed method on various benchmark tasks: natural language inference, paraphrase identification, and sentiment classification. Our models achieve the new state-of-the-art accuracy on SNLI and Quora Question Pairs datasets and obtain comparable results on MultiNLI and SST datasets. The proposed architecture can replace any stacked LSTM under one weak restriction—the size of states should be identical across all layers.
For future work we plan to apply the CAS-LSTM architecture beyond sentence modeling tasks. Various problems e.g. sequence labeling, sequence generation, and language modeling might benefit from sophisticated modulation on context integration. Aggregating diverse contexts from sequential data, e.g. those from forward and backward reading of text, could also be an intriguing research direction.
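To make the idea concrete, below is a schematic sketch of a CAS-LSTM-style cell in PyTorch. The specific gate used to mix in the lower layer's cell state is an illustrative assumption, not the authors' exact formulation; note how it reflects the stated restriction that state sizes must be identical across all layers.

```python
import torch
import torch.nn as nn

class CASLSTMCell(nn.Module):
    """Sketch of an LSTM cell that also consumes the cell state from the
    layer below (c_below), mixing it into its own cell state via a gate."""

    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.cell = nn.LSTMCell(input_size, hidden_size)
        self.vert_gate = nn.Linear(input_size + hidden_size, hidden_size)

    def forward(self, x_t, state, c_below=None):
        h, c = self.cell(x_t, state)
        if c_below is not None:  # vertical mixing with the lower cell state
            g = torch.sigmoid(self.vert_gate(torch.cat([x_t, h], dim=-1)))
            c = g * c + (1.0 - g) * c_below
        return h, c
```

When stacking, layer l at time step t would take the lower layer's hidden state as its input x_t and the lower layer's cell state as c_below, which is why all layers must share the same state size.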
In this paper, we proposed a method of stacking multiple LSTM layers for modeling sentences, dubbed CAS-LSTM. It uses not only hidden states but also cell states from the previous layer, for the purpose of controlling the vertical information flow in a more elaborate way.
|
What do the authors propose for modeling sentences?
|
A method of stacking multiple LSTM layers, dubbed CAS-LSTM. It uses not only hidden states but also cell states from the previous layer, for the purpose of controlling the vertical information flow in a more elaborate way.
|
1911.12579
| false
| null |
The large corpus acquired from multiple resources is rich in vocabulary. We present the complete statistics of the collected corpus (see Table TABREF52) with the number of sentences, words and unique tokens.
Table 2: Complete statistics of collected corpus from multiple resources.
The large corpus acquired from multiple resources is rich in vocabulary. We present the complete statistics of the collected corpus (see Table TABREF52) with the number of sentences, words and unique tokens.
Table 2: Complete statistics of collected corpus from multiple resources.
|
How many unique words are in the dataset?
|
908,456 unique words are available in the collected corpus.
|
1910.05154
| true
| null |
The experiment settings in this paper and the evaluation protocol for the Mboshi corpus (Boundary F-scores using the ZRC speech reference) are the same as in BIBREF8. The table presents the results for bilingual UWS and multilingual leveraging. For the former, we reach our best result by using French as the aligned information, the original aligned language for this dataset. Languages closely related to French (Spanish and Portuguese) ranked better, while our worst result used German. English also performs notably well in our experiments. We believe this is due to the statistical features of the resulting text. We observe in the table that the English portion of the dataset contains the smallest vocabulary among all languages. Since we train our systems in very low-resource settings, vocabulary-related features can greatly impact the system's capacity to language-model, and consequently the final quality of the produced alignments. Even in high-resource settings, it has already been attested that some languages are more difficult to model than others BIBREF9.
The experiment settings in this paper and the evaluation protocol for the Mboshi corpus (Boundary F-scores using the ZRC speech reference) are the same as in BIBREF8. The table presents the results for bilingual UWS and multilingual leveraging.
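For reference, a minimal sketch of Boundary F-score computation for word segmentation; the boundary-indexing convention is an assumption, as conventions vary across toolkits.

```python
def boundary_f1(pred, gold):
    """F1 over predicted vs. reference boundary positions (as sets)."""
    tp = len(pred & gold)
    p = tp / len(pred) if pred else 0.0
    r = tp / len(gold) if gold else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

# e.g. gold segmentation "ab cd e" over "abcde" -> boundaries {2, 4}
print(boundary_f1({2, 4}, {2, 4}))  # 1.0
print(boundary_f1({2}, {2, 4}))     # 0.666...
```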
|
Does the paper report the accuracy of the model?
|
No.
|
null | false
| null |
Lyndon Baines Johnson (/ˈlɪndən ˈbeɪnz/; August 27, 1908 – January 22, 1973), often referred to by his initials LBJ, was an American politician who served as the 36th president of the United States from 1963 to 1969. He previously served as the 37th vice president from 1961 to 1963 under President John F. Kennedy, and was sworn in shortly after Kennedy's assassination. A Democrat from Texas, Johnson also served as a U.S. representative, U.S. Senator and the Senate's majority leader. He holds the distinction of being one of the few presidents who served in all elected offices at the federal level.
|
Who is LBJ?
|
Lyndon Baines Johnson (/ˈlɪndən ˈbeɪnz/; August 27, 1908 – January 22, 1973), often referred to by his initials LBJ, was an American politician who served as the 36th president of the United States from 1963 to 1969. He previously served as the 37th vice president from 1961 to 1963 under President John F. Kennedy, and was sworn in shortly after Kennedy's assassination. A Democrat from Texas, Johnson also served as a U.S. representative, U.S. Senator and the Senate's majority leader. He holds the distinction of being one of the few presidents who served in all elected offices at the federal level.
Born in a farmhouse in Stonewall, Texas, to a local political family, Johnson worked as a high school teacher and a congressional aide before winning election to the U.S. House of Representatives in 1937. In 1948, he won the 1948 Senate election in Texas after a narrow and controversial victory in the Democratic Party's primary. He was appointed to the position of Senate Majority Whip in 1951. He became the Senate Democratic leader in 1953 and majority leader in 1954. In 1960 Johnson ran for the Democratic nomination for president. Ultimately, Senator Kennedy bested Johnson and his other rivals for the nomination, then surprised many by offering to make Johnson his vice presidential running mate. The Kennedy-Johnson ticket won in the 1960 presidential election. Vice President Johnson assumed the presidency on November 22, 1963, after President Kennedy was assassinated. The following year Johnson was elected to the presidency when he won in a landslide against Arizona Senator Barry Goldwater, receiving 61.1% of the popular vote in the 1964 presidential election, the largest share won by any presidential candidate since the 1820 election.
Johnson's domestic policy was aimed at expanding civil rights, public broadcasting, access to healthcare, aid to education and the arts, urban and rural development, and public services. In 1964 Johnson coined the term the "Great Society" to describe these efforts. In addition, he sought to create better living conditions for low-income Americans by spearheading a campaign unofficially called the "War on Poverty". As part of these efforts, Johnson signed the Social Security Amendments of 1965, which resulted in the creation of Medicare and Medicaid. Johnson followed his predecessor's actions in bolstering NASA and made the Apollo Program a national priority. He enacted the Higher Education Act of 1965 which established federally insured student loans. Johnson signed the Immigration and Nationality Act of 1965 which laid the groundwork for U.S. immigration policy today. Johnson's opinion on the issue of civil rights put him at odds with other white, southern Democrats. His civil rights legacy was shaped by signing the Civil Rights Act of 1964, the Voting Rights Act of 1965, and the Civil Rights Act of 1968. During his presidency, the American political landscape transformed significantly, as white southerners who were once staunch Democrats began moving to the Republican Party and black voters began moving to the Democratic Party. Because of his domestic agenda, Johnson's presidency marked the peak of modern liberalism in the United States.
Johnson's presidency took place during the Cold War, thus his foreign policy prioritized containment of communism. Prior to his presidency, the U.S. was already involved in the Vietnam War, supporting South Vietnam against the communist North. Following a naval skirmish in 1964 between the United States and North Vietnam, Congress passed the Gulf of Tonkin Resolution, which granted Johnson the power to launch a full-scale military intervention in South East Asia. The number of American military personnel in Vietnam increased dramatically, and casualties soared among U.S. soldiers and Vietnamese civilians. Johnson also expanded military operations in neighboring Laos to destroy North Vietnamese supply lines. In 1968, the communist Tet Offensive inflamed the anti-war movement, especially among draft-age students on university campuses, and public opinion turned against America's involvement in the war. In Europe, Johnson's administration continued to promote and foster Western European political and economic cooperation and integration as his predecessors had.
At home, Johnson faced further troubles with race riots in major cities and increasing crime rates. His political opponents seized the opportunity and raised demands for "law and order" policies. Johnson began his presidency with near-universal support, but his approval declined throughout his presidency as the public became frustrated with both the Vietnam War and domestic unrest. Johnson initially sought to run for re-election; however, following disappointing results in the New Hampshire primary he withdrew his candidacy. The war was a major election issue and the 1968 presidential election saw Republican candidate Richard Nixon defeat Johnson's vice president Hubert Humphrey. At the end of his presidency in 1969, Johnson returned to his Texas ranch, published his memoirs, and in other respects kept a low profile until he died of a heart attack in 1973.
Johnson is one of the most controversial presidents in American history. Public opinion and academic assessments of his legacy have fluctuated greatly ever since his death. Historians and scholars rank Johnson in the upper tier because of his accomplishments regarding domestic policy. His administration passed many major laws that made substantial changes in civil rights, health care, welfare, and education. Conversely, Johnson is strongly criticized for his foreign policy, namely escalating American involvement in the Vietnam War.
|
null | false
| null |
Give me a list of fun activities to do in the summer
|
In the summer you could do a variety of activities. Depending on everybody's preferences, you can enjoy a sunny day at the beach, hike in the mountains, ride a bicycle, attend a live concert or simply enjoy a picnic. If you are more of an active person, you can try activities like parasailing, riding in a hot air balloon, waterskiing or canyoning.
|
|
null | false
| null |
How many States are there in the United States?
|
50
|
|
null | false
| null |
Who are the best guitarists of all time?
|
Although there have been many great guitarists over the years, Slash, Eric Clapton, Eddie Van Halen, Jimi Hendrix, Carlos Santana, Prince and Jimmy Page are the most commonly thought of as the best ever.
|
|
null | false
| 222
|
Negotiations, either between individuals or entities, are ubiquitous in everyday human interactions ranging from sales to legal proceedings. Being a good negotiator is a complex skill, requiring the ability to understand the partner's motives, ability to reason and to communicate effectively, making it a challenging task for an automated system. While research in building automatically negotiating agents has primarily focused on agent-agent negotiations BIBREF0, BIBREF1, there is a recent interest in agent-human negotiations BIBREF2 as well. Such agents may act as mediators or can be helpful for pedagogical purposes BIBREF3.
Efforts in agent-human negotiations involving free-form natural language as a means of communication are rather sparse. Researchers BIBREF4 recently studied natural language negotiations in buyer-seller bargaining setup, which is comparatively less restricted than previously studied game environments BIBREF5, BIBREF6. Lack of a well-defined structure in such negotiations allows humans or agents to express themselves more freely, which better emulates a realistic scenario. Interestingly, this also provides an exciting research opportunity: how can an agent leverage the behavioral cues in natural language to direct its negotiation strategies? Understanding the impact of natural language on negotiation outcomes through a data-driven neural framework is the primary objective of this work.
We focus on buyer-seller negotiations BIBREF4 where two individuals negotiate the price of a given product. Leveraging the recent advancements BIBREF7, BIBREF8 in pre-trained language encoders, we attempt to predict negotiation outcomes early on in the conversation, in a completely data-driven manner (Figure FIGREF3). Early prediction of outcomes is essential for effective planning of an automatically negotiating agent. Although there have been attempts to gain insights into negotiations BIBREF9, BIBREF10, to the best of our knowledge, we are the first to study early natural language cues through a data-driven neural system (Section SECREF3). Our evaluations show that natural language allows the models to make better predictions by looking at only a fraction of the negotiation. Rather than just realizing the strategy in natural language, our empirical results suggest that language can be crucial in the planning as well. We provide a sample negotiation from the test set BIBREF4 along with our model predictions in Table TABREF1.
Rather than just realizing the strategy in natural language, our empirical results suggest that language can be crucial in the planning as well.
|
Is the natural language crucial in the planning of an agent?
|
Yes, it is.
|
null | false
| 375
|
The use of open-source software has been steadily increasing for some time now, with the number of Java packages in Maven Central doubling in 2018. However, BIBREF0 states that there has been an 88% growth in the number of vulnerabilities reported over the last two years. In order to develop secure software, it is essential to analyze and understand security vulnerabilities that occur in software systems and address them in a timely manner. While there exist several approaches in the literature for identifying and managing security vulnerabilities, BIBREF1 show that an effective vulnerability management approach must be code-centric. Rather than relying on metadata, efforts must be based on analyzing vulnerabilities and their fixes at the code level.
Common Vulnerabilities and Exposures (CVE) is a list of publicly known cybersecurity vulnerabilities, each with an identification number. These entries are used in the National Vulnerability Database (NVD), the U.S. government repository of standards-based vulnerability management data. The NVD suffers from poor coverage, as it contains only 10% of the open-source vulnerabilities that have received a CVE identifier BIBREF2. This could be due to the fact that a number of security vulnerabilities are discovered and fixed through informal communication between maintainers and their users in an issue tracker. To make things worse, these public databases are too slow to add vulnerabilities, as they lag behind a private database such as Snyk's DB by an average of 92 days BIBREF0. All of the above pitfalls of public vulnerability management databases (such as NVD) call for a mechanism to automatically infer the presence of security threats in open-source projects, and their corresponding fixes, in a timely manner.
We propose a novel approach using deep learning in order to identify commits in open-source repositories that are security-relevant. We build regularized hierarchical deep learning models that encode features first at the file level, and then aggregate these file-level representations to perform the final classification. We also show that code2vec, a model that learns from path-based representations of code and claimed by BIBREF3 to be suitable for a wide range of source code classification tasks, performs worse than our logistic regression baseline.
In this study, we seek to answer the following research questions:
RQ1: Can we effectively identify security-relevant commits using only the commit diff? For this research question, we do not use any of the commit metadata such as the commit message or information about the author. We treat source code changes like unstructured text without using path-based representations from the abstract syntax tree.
RQ2: Does extracting class-level features before and after the change instead of using only the commit diff improve the identification of security-relevant commits? For this research question, we test the hypothesis that the source code of the entire Java class contains more information than just the commit diff and could potentially improve the performance of our model.
RQ3: Does exploiting path-based representations of Java source code before and after the change improve the identification of security-relevant commits? For this research question, we test whether code2vec, a state-of-the-art model that learns from path-based representations of code, performs better than our model that treats source code as unstructured text.
RQ4: Is mining commits using regular expression matching of commit messages an effective means of data augmentation for improving the identification of security-relevant commits? Since labelling commits manually is an expensive task, it is not easy to build a dataset large enough to train deep learning models. For this research question, we explore if collecting coarse data samples using a high-precision approach is an effective way to augment the ground-truth dataset.
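As a concrete illustration of the mining strategy in RQ4, a minimal regular-expression filter over commit messages is sketched below; the keyword list is our own illustrative choice, not the paper's exact pattern set.

```python
import re

SECURITY_PATTERN = re.compile(
    r"\b(vulnerab\w*|security|CVE-\d{4}-\d{4,}|XSS|CSRF|"
    r"remote code execution|denial of service|buffer overflow)\b",
    re.IGNORECASE,
)

def is_security_relevant(commit_message):
    """High-precision flag: does the message mention a security keyword?"""
    return SECURITY_PATTERN.search(commit_message) is not None

assert is_security_relevant("Fix CVE-2019-12345: sanitize user input")
assert not is_security_relevant("Bump version to 1.2.3")
```

Commits matched this way would form the coarse, high-precision augmentation set discussed above.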
The main contributions of this paper are:
Novel hierarchical deep learning models for the identification of security-relevant commits based on either the diff or the modified source code of the Java classes.
A comparative analysis of how various deep learning models perform across different input representations and how various regularization techniques help with the generalization of our models.
We envision that this work would ultimately allow for monitoring open-source repositories in real-time, in order to automatically detect security-relevant changes such as vulnerability fixes.
RQ2: Does extracting class-level features before and after the change instead of using only the commit diff improve the identification of security-relevant commits? For this research question, we test the hypothesis that the source code of the entire Java class contains more information than just the commit diff and could potentially improve the performance of our model.
|
What is the hypothesis of the paper?
|
The source code of the entire Java class contains more information than just the commit diff and could potentially improve the identification of security-relevant commits.
|
null | false
| null |
In the United States, gun culture encompasses the behaviors, attitudes, and beliefs about firearms and their usage by civilians. Gun ownership in the United States is the highest in the world, and constitutionally protected by the Second Amendment to the United States Constitution. Firearms are widely used in the United States for self-defense, hunting, and recreational uses, such as target shooting.
|
What is the gun culture in the United States?
|
The United States has become a victim of its past as it relates to its gun culture. The 2nd Amendment was ratified in 1791, which at the time seemed appropriate given that the United States had just waged a war against England for its freedom. However, more than 200 years later, the 2nd Amendment no longer solely ties itself to its original intent: to protect itself and its people. The prevalence of guns in the United States has led to the United States having the highest gun ownership rates in the world. This in turn has resulted in many inexplicable acts of gun violence that have cost innocent lives and traumatized many.
|
null | false
| null |
Singapore (/ˈsɪŋ(ɡ)əpɔːr/ (listen)), officially the Republic of Singapore, is an island country and city-state in maritime Southeast Asia. It lies about one degree of latitude (137 kilometres or 85 miles) north of the equator, off the southern tip of the Malay Peninsula, bordering the Strait of Malacca to the west, the Singapore Strait to the south, the South China Sea to the east, and the Straits of Johor to the north. The country's territory is composed of one main island, 63 satellite islands and islets, and one outlying islet; the combined area of these has increased by 25% since the country's independence as a result of extensive land reclamation projects. It has the third highest population density in the world. With a multicultural population and recognising the need to respect cultural identities of the major ethnic groups within the nation, Singapore has four official languages: English, Malay, Mandarin, and Tamil. English is the lingua franca and numerous public services are available only in English. Multi-racialism is enshrined in the constitution and continues to shape national policies in education, housing, and politics.
|
Does Singapore have more than five official languages?
|
No, Singapore has only four official languages.
|
null | false
| null |
If we dug straight through to where we stood on Earth, would we reach the other side?
|
The Earth has a molten core of heavy metals; it's probable you would not survive the journey.
|
|
null | false
| null |
Sort these into breakfast or dinner foods: Waffles, pancakes, turkey, chicken noodle soup, salad, bacon and eggs, egg drop soup
|
The breakfast foods are waffles, pancakes and bacon and eggs. The dinner foods are turkey, chicken noodle soup, salad, and egg drop soup
|
|
null | false
| null |
Southern Charm is an American reality television series that premiered on Bravo on March 3, 2014. The series chronicles the personal and professional lives of several socialites who reside in Charleston, South Carolina.
The show focuses on the Southern culture, along with the political history of the area, and has featured local historical places including Lewisfield Plantation and Mikell House. The show gives viewers an inside look at modern day aristocracy in Charleston, South Carolina.
|
What is Southern Charm the TV Show?
|
Southern Charm is a Bravo TV show based in Charleston, SC. The cast members go to dinners, throw parties, and fight with each other like on any other reality TV show. Some of the known characters on the show include Craig Conover, Shep Rose, Austen Kroll, Leva Bonaparte, Madison LeCroy, and Taylor-Ann Greene.
|
1712.09127
| false
| null |
We assume that for each corpus $C_i$, we are given word embeddings $e_i(w) \in \mathbb{R}^d$ for each word $w$, where $d$ is the dimension of each word embedding. We are also given a classification task on documents that is represented by a parametric model $f$ taking document embeddings as feature vectors. We construct a GAN model which combines different sets of word embeddings $E_1$, $E_2$, into a single set of word embeddings $E^*$. Note that $E_1, E_2$ are given but $E^*$ is trained. Here we consider $E^*$ as the generator, and the goal of the discriminator is to distinguish documents represented by the original embeddings $E_1, E_2$ and the same documents represented by the new embeddings $E^*$.
We assume that for each corpus $C_i$, we are given word embeddings $e_i(w) \in \mathbb{R}^d$ for each word $w$, where $d$ is the dimension of each word embedding. We are also given a classification task on documents that is represented by a parametric model $f$ taking document embeddings as feature vectors. We construct a GAN model which combines different sets of word embeddings $E_1$, $E_2$, into a single set of word embeddings $E^*$. Note that $E_1, E_2$ are given but $E^*$ is trained. Here we consider $E^*$ as the generator, and the goal of the discriminator is to distinguish documents represented by the original embeddings $E_1, E_2$ and the same documents represented by the new embeddings $E^*$.
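Under the reconstructed notation above, a condensed sketch of the components could look as follows; the layer sizes and the averaging document encoder are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

d = 100  # word-embedding dimension (illustrative)

# Generator: maps the concatenation [e_1(w); e_2(w)] to a combined e*(w).
generator = nn.Sequential(nn.Linear(2 * d, d), nn.Tanh())

# Discriminator: distinguishes documents represented with the original
# embeddings from the same documents represented with the combined ones.
discriminator = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, 1))

def doc_embedding(word_vectors):
    """Simple average-of-word-vectors document representation (assumption)."""
    return word_vectors.mean(dim=0)

# e.g. a 7-word document under both original embedding sets:
e1, e2 = torch.randn(7, d), torch.randn(7, d)
fake_doc = doc_embedding(generator(torch.cat([e1, e2], dim=-1)))
real_doc = doc_embedding(e1)  # or e2; both count as "original"
score_fake, score_real = discriminator(fake_doc), discriminator(real_doc)
```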
|
Which GAN do they use?
|
The answers are shown as follows:
* We construct a GAN model which combines different sets of word embeddings $E_1$, $E_2$, into a single set of word embeddings $E^*$.
|
null | false
| null |
Identify which instrument is string or woodwind: Diplica, Kontra
|
Kontra is string, Diplica is woodwind.
|
|
null | false
| 171
|
We first gather abusive utterances from 600K conversations with US-based customers. We search for relevant utterances by simple keyword spotting and find that about 5% of the corpus includes abuse, with mostly sexually explicit utterances. Previous research reports even higher levels of abuse between 11% BIBREF2 and 30% BIBREF6. Since we are not allowed to directly quote from our corpus in order to protect customer rights, we summarise the data to a total of 109 “prototypical" utterances - substantially extending the previous dataset of 35 utterances from Amanda:EthicsNLP2018 - and categorise these utterances based on the Linguistic Society's definition of sexual harassment BIBREF7:
Gender and Sexuality, e.g. “Are you gay?”, “How do you have sex?”
Sexualised Comments, e.g. “I love watching porn.”, “I'm horny.”
Sexualised Insults, e.g. “Stupid bitch.”, “Whore”
Sexual Requests and Demands, e.g. “Will you have sex with me?”, “Talk dirty to me.”
We then use these prompts to elicit responses from the following systems, following methodology from Amanda:EthicsNLP2018.
4 Commercial: Amazon Alexa, Apple Siri, Google Home, Microsoft's Cortana.
4 Non-commercial rule-based: E.L.I.Z.A. BIBREF8, Parry BIBREF9, A.L.I.C.E. BIBREF10, Alley BIBREF11.
4 Data-driven approaches:
Cleverbot BIBREF12;
NeuralConvo BIBREF13, a re-implementation of BIBREF14;
an implementation of BIBREF15's Information Retrieval approach;
a vanilla Seq2Seq model trained on clean Reddit data BIBREF1.
Negative Baselines: We also compile responses by adult chatbots: Sophia69 BIBREF16, Laurel Sweet BIBREF17, Captain Howdy BIBREF18, Annabelle Lee BIBREF19, Dr Love BIBREF20.
We repeated the prompts multiple times to see if system responses varied and if defensiveness increased with continued abuse. If this was the case, we included all responses in the study. Following this methodology, we collected a total of 2441 system replies in July-August 2018 - 3.5 times more data than Amanda:EthicsNLP2018 - which 2 expert annotators manually annotated according to the categories in Table TABREF14 ($\kappa =0.66$).
4 Non-commercial rule-based: E.L.I.Z.A. (Wallace and Dunlop), Parry (Colby, 2016), A.L.I.C.E. (Wallace, 2014), Alley (Learn English Network, 2014).
|
What are the 4 Non-commercial rule-based prompts?
|
E.L.I.Z.A., Parry, A.L.I.C.E., Alley.
|
null | false
| null |
In the 21st century alone, UVA teams have won seventeen NCAA championships. The men's teams have won recent NCAA titles in basketball (2019); lacrosse (2003, 2006, 2011, 2019, and 2021); baseball (2015); soccer (2009 and 2014); and tennis (2013, 2015, 2016, 2017, and 2022). UVA women have won recent NCAA titles in rowing (2010 and 2012) and swimming & diving (2021 and 2022). The Cavaliers rank first in the ACC (a power conference) with 22 men's NCAA Championships, and rank second in the conference with nine women's NCAA Championships.
|
According to this reference text about UVA, what years have UVA women won NCAA titles in rowing?
|
UVA women won NCAA titles in rowing in 2010 and 2012.
|
null | false
| null |
Who are the contenders for the World Chess Championship in 2023?
|
Ian Nepomniachtchi from Russia is up against Ding Liren from China.
|
|
null | false
| null |
Identify which animal species is alive or extinct: Machaeroides, Red Panda
|
Red Panda is alive, Machaeroides is extinct.
|
|
null | false
| null |
Give me a list of different ways to brew coffee
|
Drip brew, french press, espresso, pour over, percolator
|
|
null | false
| null |
Landlord can I get a dog.
|
Dear Landlord. I am keen to get a rescue dog and would like to request your support before I start the process. I assure you that I shall choose a dog that is small and will not disrupt the other tenants. Science is showing that pets are very good for mental health. I know my neighbours and I am sure they will not mind. I look forward to your support, and I will keep you posted on my progress in finding a suitable pet that would be welcome in the building. Many thanks. My best regards.
|
|
null | false
| null |
Where is Michael Phelps from?
|
Baltimore, Maryland
|
|
null | false
| null |
Who is the greatest ever soccer player?
|
Cristiano Ronaldo
|
|
null | false
| null |
Air Jordan is a line of basketball shoes and athletic apparel produced by American corporation Nike, Inc. The first Air Jordan shoe was produced for Hall of Fame former basketball player Michael Jordan during his time with the Chicago Bulls in late 1984 and released to the public on April 1, 1985. The shoes were designed for Nike by Peter Moore, Tinker Hatfield, and Bruce Kilgore.
|
Given the reference text below, what is Air Jordan?
|
Air Jordan is a line of basketball shoes produced by Nike for Michael Jordan, first released to the public on April 1, 1985.
|
null | false
| null |
Give a list of possible reasons why couples would divorce
|
People grow apart; difference of opinions on where to live; financial stress; wanting different things in life; infidelity; different approaches to parenting and parenting styles; falling out of love; differing stances on having children; difficult relationships with in-laws
|
|
null | false
| null |
What year was the Ice Bowl played in?
|
The Ice Bowl was played in 1967. Officially the 1967 NFL Championship, it was a game between the Dallas Cowboys and the Green Bay Packers, played in frigid conditions at a temperature of −15 °F (−26 °C) before wind chill.
|
|
null | false
| null |
Suppose you started at 0 degrees N, 0 degrees E on the earth. You started traveling due north, continuing straight for 2439 miles, turned 60 degrees counter-clockwise, traveled another 4878 miles, turned 60 degrees counter-clockwise traveled another 4878 miles, turned 60 degrees once again, and traveled another 4878 miles. At this point where would you be? Please explain the mathematics of how you came to your conclusion.
|
You would be at approximately 35 degrees S, 0 degrees E, or 2439 miles south of your starting position, assuming the earth is a sphere with a circumference of 24,901 miles. After traveling 2439 miles north, you would be at 35 degrees N, 0 degrees E. After turning 60 degrees counter-clockwise and traveling 4878 miles, you would be at 35 degrees N, 90 degrees W. Turning 60 degrees counter-clockwise and traveling another 4878 miles, you would arrive at 35 degrees S, 90 degrees W. Finally, turning 60 degrees counter-clockwise and traveling another 4878 miles, you would arrive at 35 degrees S, 0 degrees E. If you made one more 60 degree counter-clockwise turn and traveled the remaining 2439 miles, you would return to your starting position, having traced out a spherical square.
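As a sanity check on the distance-to-degrees conversion assumed in this answer (a sphere of circumference 24,901 miles):

```latex
\frac{2439}{24901}\times 360^\circ \approx 35.3^\circ ,\qquad
\frac{4878}{24901}\times 360^\circ \approx 70.5^\circ
```

So a 2439-mile leg spans roughly 35 degrees of arc and a 4878-mile leg roughly 70 degrees, consistent with the latitudes quoted above; the longitude changes are larger because parallels shrink away from the equator.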
|
|
null | false
| null |
The name Mumbai (Marathi: मुंबई, Gujarati: મુંબઈ, Hindi: मुंबई) derived from Mumbā or Mahā-Ambā—the name of the patron goddess (kuladevata) Mumbadevi of the native Koli community—and ā'ī meaning "mother" in the Marathi language, which is the mother tongue of the Koli people and the official language of Maharashtra. The Koli people originated in Kathiawar and Central Gujarat, and according to some sources they brought their goddess Mumba with them from Kathiawar (Gujarat), where she is still worshipped. However, other sources disagree that Mumbai's name was derived from the goddess Mumba.
The oldest known names for the city are Kakamuchee and Galajunkja; these are sometimes still used. In 1508, Portuguese writer Gaspar Correia used the name "Bombaim" in his Lendas da Índia (Legends of India). This name possibly originated as the Galician-Portuguese phrase bom baim, meaning "good little bay", and Bombaim is still commonly used in Portuguese. In 1516, Portuguese explorer Duarte Barbosa used the name Tana-Maiambu: Tana appears to refer to the adjoining town of Thane and Maiambu to Mumbadevi.
Other variations recorded in the 16th and the 17th centuries include: Mombayn (1525), Bombay (1538), Bombain (1552), Bombaym (1552), Monbaym (1554), Mombaim (1563), Mombaym (1644), Bambaye (1666), Bombaiim (1666), Bombeye (1676), Boon Bay (1690), and Bon Bahia. After the English gained possession of the city in the 17th century, the Portuguese name was anglicised as Bombay. Ali Muhammad Khan, imperial dewan or revenue minister of the Gujarat province, in the Mirat-i Ahmedi (1762) referred to the city as Manbai.
The French traveller Louis Rousselet, who visited in 1863 and 1868, states in his book L'Inde des Rajahs, which was first published in 1877: "Etymologists have wrongly derived this name from the Portuguese Bôa Bahia, or (French: "bonne bai", English: "good bay"), not knowing that the tutelar goddess of this island has been, from remote antiquity, Bomba, or Mumba Devi, and that she still ... possesses a temple".
By the late 20th century, the city was referred to as Mumbai or Mambai in Marathi, Konkani, Gujarati, Kannada and Sindhi, and as Bambai in Hindi. The Government of India officially changed the English name to Mumbai in November 1995. This came at the insistence of the Marathi nationalist Shiv Sena party, which had just won the Maharashtra state elections, and mirrored similar name changes across the country and particularly in Maharashtra. According to Slate magazine, "they argued that 'Bombay' was a corrupted English version of 'Mumbai' and an unwanted legacy of British colonial rule." Slate also said "The push to rename Bombay was part of a larger movement to strengthen Marathi identity in the Maharashtra region." While Mumbai is still referred to as Bombay by some of its residents and by some Indians from other regions, mention of the city by a name other than Mumbai has been controversial.
|
What are some former names for the city of Mumbai?
|
The city of Mumbai has carried its official name since 1995; some former names of the city were: Mombayn (1525), Bombay (1538), Bombain (1552), Bombaym (1552), Monbaym (1554), Mombaim (1563), Mombaym (1644), Bambaye (1666), Bombaiim (1666), Bombeye (1676), Boon Bay (1690), Bon Bahia, Bombay, Manbai, Mambai, and Bambai
|
null | false
| null |
The 68th Division (Chinese: 第68师) was created in February 1949 under the Regulation of the Redesignations of All Organizations and Units of the Army, issued by the Central Military Commission on November 1, 1948, based on the 11th Division, 4th Column of the Huadong Field Army. Its history can be traced to the 2nd Brigade, 1st Division of Huadong Field Army, formed in October 1946.
The division was part of the 23rd Corps. It took part in several major battles during the Chinese Civil War, including the Lunan Campaign, the Laiwu Campaign, the Menglianggu Campaign, the Yudong Campaign, the Huaihai Campaign and the Shanghai Campaign. During the last phase of the Huaihai campaign the division captured General Du Yuming, the deputy commander-in-chief of the Suppression General Headquarters of Xuzhou Garrison.
The division was composed of 202nd, 203rd and 204th Infantry Regiments.
In August 1952 the division was inactivated:
Headquarters, 68th Division and Headquarters, 202nd Infantry Regiment were transferred to the People's Liberation Army Air Force;
202nd Infantry Regiment was transferred to 67th Division as 199th Infantry Regiment;
Headquarters, 203rd Infantry Regiment was transferred to the People's Liberation Army Navy.
|
When was the 68th Division of the People's Republic of China deactivated?
|
The 68th Division of the People's Republic of China was deactivated in August 1952.
|
null | false
| null |
What is Apu's wife's name on the Simpsons?
|
Manjula
|
|
1904.09678
| false
| null |
As the gold standard sentiment lexica, we chose manually created lexica in Czech BIBREF11 , German BIBREF12 , French BIBREF13 , Macedonian BIBREF14 , and Spanish BIBREF15 . These lexica contain general-domain words (as opposed to Twitter or Bible). As the gold standard for the Twitter domain, we use an emoticon dataset and perform emoticon sentiment prediction BIBREF16 , BIBREF17 .
As the gold standard sentiment lexica, we chose manually created lexica in Czech BIBREF11 , German BIBREF12 , French BIBREF13 , Macedonian BIBREF14 , and Spanish BIBREF15 .
|
what sentiment sources do they compare with?
|
The answers are shown as follows:
* manually created lexica in Czech BIBREF11 , German BIBREF12 , French BIBREF13 , Macedonian BIBREF14 , and Spanish BIBREF15
|
null | false
| null |
Tell me which of the following are MLB teams: A's, Washing Machines, Ringed Tails, Giants, Yankees, Astros, Phillies, Cherries, Wild.
|
A's, Giants, Yankees, Astros, Phillies.
|
|
null | false
| 39
|
Typical speech-to-text translation systems pipeline automatic speech recognition (ASR) and machine translation (MT) BIBREF0 . But high-quality ASR requires hundreds of hours of transcribed audio, while high-quality MT requires millions of words of parallel text—resources available for only a tiny fraction of the world's estimated 7,000 languages BIBREF1 . Nevertheless, there are important low-resource settings in which even limited speech translation would be of immense value: documentation of endangered languages, which often have no writing system BIBREF2 , BIBREF3 ; and crisis response, for which text applications have proven useful BIBREF4 , but only help literate populations. In these settings, target translations may be available. For example, ad hoc translations may be collected in support of relief operations. Can we do anything at all with this data?
In this exploratory study, we present a speech-to-text translation system that learns directly from source audio and target text pairs, and does not require intermediate ASR or MT. Our work complements several lines of related recent work. For example, duong2015attentional and antonios+chiang+duongEMNLP2016 presented models that align audio to translated text, but neither used these models to try to translate new utterances (in fact, the latter model cannot make such predictions). berard+etalnipsworkshop16 did develop a direct speech to translation system, but presented results only on a corpus of synthetic audio with a small number of speakers. Finally, Adams et al. adams+etalinterspeech16,adams+etalemnlp16 targeted the same low-resource speech-to-translation task, but instead of working with audio, they started from word or phoneme lattices. In principle these could be produced in an unsupervised or minimally-supervised way, but in practice they used supervised ASR/phone recognition. Additionally, their evaluation focused on phone error rate rather than translation. In contrast to these approaches, our method can make translation predictions for audio input not seen during training, and we evaluate it on real multi-speaker speech data.
Our simple system (§ SECREF2 ) builds on unsupervised speech processing BIBREF5 , BIBREF6 , BIBREF7 , and in particular on unsupervised term discovery (UTD), which creates hard clusters of repeated word-like units in raw speech BIBREF8 , BIBREF9 . The clusters do not account for all of the audio, but we can use them to simulate a partial, noisy transcription, or pseudotext, which we pair with translations to learn a bag-of-words translation model. We test our system on the CALLHOME Spanish-English speech translation corpus BIBREF10 , a noisy multi-speaker corpus of telephone calls in a variety of Spanish dialects (§ SECREF3 ). Using the Spanish speech as the source and English text translations as the target, we identify several challenges in the use of UTD, including low coverage of audio and difficulty in cross-speaker clustering (§ SECREF4 ). Despite these difficulties, we demonstrate that the system learns to translate some content words (§ SECREF5 ).
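As a rough illustration of the pseudotext idea in the paragraph above (not the authors' actual model), the sketch below pairs hypothetical UTD cluster IDs with English translations and scores candidate target words by normalized co-occurrence, a bare-bones bag-of-words translation model; all cluster IDs and sentences are invented.

```python
from collections import Counter, defaultdict

# Hypothetical toy data: UTD cluster IDs (pseudotext) paired with translations.
pairs = [
    (["c17", "c04"], ["how", "are", "you"]),
    (["c17", "c31"], ["how", "old", "are", "you"]),
    (["c04", "c52"], ["you", "are", "welcome"]),
]

# Co-occurrence counts between clusters and target words (bag-of-words).
cooc = defaultdict(Counter)
for clusters, words in pairs:
    for c in clusters:
        cooc[c].update(words)

def translate(clusters, top=2):
    """Score target words by their normalized co-occurrence with the clusters."""
    scores = Counter()
    for c in clusters:
        total = sum(cooc[c].values()) or 1  # guard against unseen clusters
        for word, n in cooc[c].items():
            scores[word] += n / total
    return [word for word, _ in scores.most_common(top)]

print(translate(["c17"]))  # target words most associated with cluster c17
```

A real system would also need to cope with the low audio coverage and noisy cross-speaker clusters noted above, but the core supervision signal is the same: repeated audio patterns that co-occur with the same translation words.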
In this exploratory study, we present a speech-to-text translation system that learns directly from source audio and target text pairs, and does not require intermediate ASR or MT.
|
What systems do the authors propose in this exploratory study?
|
In this exploratory study, they present a speech-to-text translation system that learns directly from source audio and target text pairs, and does not require intermediate ASR or MT.
|
null | false
| null |
Nualchawee Petchrung (Thai: นวลฉวี เพชรรุ่ง, also spelled Nuanchawee) was a Thai nurse who was murdered by her medical doctor husband, Athip Suyansethakarn, on 10 September 1959. The investigation and trial received sensational coverage in the media—Siang Ang Thong newspaper, which later became the country's top circulating daily Thai Rath, gained popularity from its coverage of the case, which regularly filled the front page—and the case became one of the best known murders in Thailand. Athip was found guilty and sentenced to death, but was later pardoned. Nonthaburi Bridge, where her body was disposed into the Chao Phraya River, is still commonly known as Nualchawee Bridge, and evidence from the case is on display at the Songkran Niyomsane Forensic Medicine Museum.
|
Who was found guilty for the murder of Nualchawee Petchrung?
|
Athip Suyansethakarn was found guilty of murdering his wife, Nualchawee Petchrung.
|
null | false
| null |
The 1989 University of the Philippines–Department of National Defense accord (UP–DND accord) was a bilateral agreement between the Department of National Defense (DND) and the University of the Philippines (UP) that restricted military and police access and operations inside the university.
Background
On October 28, 1981, an agreement between then-UP student leader Sonia Soto and then-defense minister Juan Ponce Enrile, known as the Soto–Enrile accord, was signed to protect students from the presence of the military and police in any of UP's campuses.
On June 16, 1989, Donato Continente, a staffer of The Philippine Collegian and an alleged communist, was arrested within the premises of the university for his involvement in the killing of US Army Col. James Nicholas Rowe on April 21, 1989. The Supreme Court of the Philippines later shortened Continente's jail sentence, releasing him on June 28, 2005, after being incarcerated for over 14 years. Continente pled not guilty of the crime and claimed that he was tortured and abducted by ununiformed authorities to admit that he took part in it.
Negotiation
14 days after Continente's arrest, on June 30, 1989, UP President Jose V. Abueva and Defense Secretary Fidel V. Ramos signed the agreement, which effectively succeeded the 1981 Soto–Enrile accord. The agreement was made to ensure the academic freedom of UP's students and prevent state officials from interfering with students' protests.
Provisions
The provisions of the agreement were the following:
State officials that are intending to conduct an operation inside a UP campus shall give a prior notification to the UP administration except in the events of a pursuit, or any other emergency situations.
UP officials shall provide assistance to law enforcers within UP premises and endeavor to strengthen its own security, police, and fire-fighting capabilities without being exploited unlawfully.
Only uniformed authorities may enter the university if a request for assistance by the UP administration is granted.
State officials shall not interfere with any peaceful protest being conducted by the UP's constituents within the premises of the university. UP officials shall be deemed responsible for the actions and behavior of their constituents.
Search and arrest warrants for students, faculty members, and employees shall be given after a prior notification was sent to the UP administration.
No warrant shall be served after twenty-four hours of its service and without the presence of at least two UP officials.
The arrest and detention of any UP student, faculty, and employee in the Philippines shall be reported immediately by the authorities in-charge to the UP administration.
Termination
On January 18, 2021, Defense Secretary Delfin Lorenzana and his office announced to the public the unilateral termination of the agreement citing that the Communist Party of the Philippines (CPP) and its armed wing, New People's Army (NPA), both tagged as terrorist organizations by the Anti-Terrorism Council, have been recruiting members inside the university and called it a "hindrance in providing effective security, safety, and welfare of the students, faculty, and employees of UP." The DND notified the termination of the agreement to UP three days earlier. The Armed Forces of the Philippines chairman of the joint chiefs Gilbert Gapay claimed that at least 18 students of the university recruited by the NPA have been killed so far in clashes with the military according to their records.
A similar agreement between the Polytechnic University of the Philippines (PUP) and the DND that was signed in 1990 is also being advocated for termination by Duterte Youth Representative Ducielle Cardema.
Reactions and responses
President Rodrigo Duterte supported the DND's decision to abrogate the agreement according to a statement by Presidential Spokesperson Harry Roque. In an interview with CNN Philippines, Roque, a former UP law professor and human rights lawyer, replied to a tweet from UP professor Danilo Arao that questions his honor and excellence, by saying that he already asked the defense secretary and the UP president to settle down. When asked about his personal opinion about the decision, he said, "there's really no such thing when you are a presidential spokesperson."
Vice President Leni Robredo, on the other hand, denounced the decision and said that the decision was meant to silence the critics of the administration. On January 20, Senators Joel Villanueva, Sonny Angara, Nancy Binay, and Grace Poe filed a bill in the Senate to institutionalize the accord into Republic Act No. 9005, or the University of the Philippines Charter of 2008. Several lawmakers from both branches of Congress have also expressed their concerns and disagreements with the DND's decision.
UP President Danilo Concepcion said that the termination of the agreement was "totally unnecessary and unwarranted" and was made without consulting the UP administration. UP Student Regent Renee Co, meanwhile, called the decision "one of the [government's] worst attempts at destroying the institutional safeguards that UP students have fought to put in their struggle for their democratic rights."
On January 19, the UP held a rally to condemn the termination of the agreement. The hashtag #DefendUP was trended on Twitter with some discussion pointed to the Duterte administration, stating that "this is another way of the administration to threaten and silence activists who have opposed President Duterte's several policies, especially on supposed red-tagging activities and on the COVID-19 pandemic response."
|
learn about 1989 UP–DND accord
|
The 1989 University of the Philippines–Department of National Defense accord (UP–DND accord) was a bilateral agreement between the Department of National Defense (DND) and the University of the Philippines (UP) that restricted military and police access and operations inside the university. On October 28, 1981, an agreement between then-UP student leader Sonia Soto and then-defense minister Juan Ponce Enrile, known as the Soto–Enrile accord, was signed to protect students from the presence of the military and police in any of UP's campuses. On June 16, 1989, Donato Continente, a staffer of The Philippine Collegian and an alleged communist, was arrested within the premises of the university for his involvement in the killing of US Army Col. James Nicholas Rowe on April 21, 1989. On June 30, 1989, UP President Jose V. Abueva and Defense Secretary Fidel V. Ramos signed the agreement, effectively succeeding the 1981 Soto–Enrile accord. The agreement was made to ensure the academic freedom of UP's students and prevent state officials from interfering with students' protests. State officials intending to conduct an operation inside a UP campus shall give prior notification to the UP administration except in the event of a pursuit or other emergencies. UP officials shall assist law enforcers within UP premises and endeavor to strengthen its security, police, and fire-fighting capabilities without being exploited unlawfully. State officials shall not interfere with any peaceful protest conducted by the UP's constituents within the university's premises. Search and arrest warrants for students, faculty members, and employees shall be given after prior notification is sent to the UP administration. The arrest and detention of any UP student, faculty, or employee in the Philippines shall be reported immediately by the authorities in charge to the UP administration. On January 18, 2021, Defense Secretary Delfin Lorenzana and his office announced to the public the unilateral termination of the agreement, citing that the Communist Party of the Philippines (CPP) and its armed wing, the New People's Army (NPA), both tagged as terrorist organizations by the Anti-Terrorism Council, had been recruiting members inside the university, and called it a "hindrance in providing effective security, safety, and welfare of the students, faculty, and employees of UP." The DND notified UP of the termination of the agreement three days earlier. The Armed Forces of the Philippines chairman of the joint chiefs, Gilbert Gapay, claimed that at least 18 students of the university recruited by the NPA had been killed so far in clashes with the military, according to their records. A similar agreement between the Polytechnic University of the Philippines (PUP) and the DND signed in 1990 is also being advocated for termination by Duterte Youth Representative Ducielle Cardema. President Rodrigo Duterte supported the DND's decision to abrogate the agreement, according to a statement by Presidential Spokesperson Harry Roque. In an interview with CNN Philippines, Roque, a former UP law professor and human rights lawyer, replied to a tweet from UP professor Danilo Arao that questioned his honor and excellence, saying that he had already asked the defense secretary and the UP president to settle down. Vice President Leni Robredo, on the other hand, denounced the decision and said that it was meant to silence the administration's critics. On January 20, Senators Joel Villanueva, Sonny Angara, Nancy Binay, and Grace Poe filed a bill in the Senate to institutionalize the accord into Republic Act No. 9005, or the University of the Philippines Charter of 2008. Several lawmakers from both branches of Congress have also expressed their concerns and disagreements with the DND's decision. UP President Danilo Concepcion said that the termination of the agreement was "totally unnecessary and unwarranted" and was made without consulting the UP administration. Meanwhile, UP Student Regent Renee Co called the decision "one of the [government's] worst attempts at destroying the institutional safeguards that UP students have fought to put in their struggle for their democratic rights."
|
null | false
| 187
|
Knowledge graphs (KGs) containing relationship triples (subject, relation, object), denoted as (s, r, o), are useful resources for many NLP and especially information retrieval applications such as semantic search and question answering BIBREF0 . However, large knowledge graphs, even containing billions of triples, are still incomplete, i.e., missing a lot of valid triples BIBREF1 . Therefore, much research effort has focused on the knowledge graph completion task, which aims to predict missing triples in KGs, i.e., predicting whether a triple not in KGs is likely to be valid or not BIBREF2 , BIBREF3 , BIBREF4 . To this end, many embedding models have been proposed to learn vector representations for entities (i.e., subject/head entity and object/tail entity) and relations in KGs, and obtained state-of-the-art results as summarized by BIBREF5 and BIBREF6 . These embedding models score triples (s, r, o), such that valid triples have higher plausibility scores than invalid ones BIBREF2 , BIBREF3 , BIBREF4 . For example, in the context of KGs, the score for (Melbourne, cityOf, Australia) is higher than the score for (Melbourne, cityOf, United Kingdom).
Triple modeling is applied not only to KG completion, but also to other tasks which can be formulated as a triple-based prediction problem. An example is search personalization, where one would aim to tailor search results to each specific user based on the user's personal interests and preferences BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 . Here the triples can be formulated as (submitted query, user profile, returned document) and used to re-rank documents returned to a user given an input query, by employing an existing KG embedding method such as TransE BIBREF3 , as proposed by BIBREF12 . Previous studies have shown the effectiveness of triple modeling for either KG completion or search personalization. However, there has been no single study investigating the performance on both tasks.
Conventional embedding models, such as TransE BIBREF3 , DISTMULT BIBREF13 and ComplEx BIBREF14 , use addition, subtraction or simple multiplication operators, thus only capture the linear relationships between entities. Recent research has raised interest in applying deep neural networks to triple-based prediction problems. For example, BIBREF15 proposed ConvKB, a convolutional neural network (CNN)-based model for KG completion, and achieved state-of-the-art results. Most KG embedding models are constructed to model entries at the same dimension of the given triple, where presumably each dimension captures some relation-specific attribute of entities. To the best of our knowledge, however, none of the existing models has a “deep” architecture for modeling the entries in a triple at the same dimension.
BIBREF16 introduced capsule networks (CapsNet) that employ capsules (i.e., each capsule is a group of neurons) to capture entities in images and then use a routing process to specify connections from capsules in a layer to those in the next layer. Hence CapsNet could encode the intrinsic spatial relationship between a part and a whole, constituting viewpoint-invariant knowledge that automatically generalizes to novel viewpoints. Each capsule accounts for capturing variations of an object or object part in the image, which can be efficiently visualized. Our high-level hypothesis is that embedding entries at the same dimension of the triple also have these variations, although they are not straightforward to examine visually.
To that end, we introduce CapsE to explore a novel application of CapsNet on triple-based data for two problems: KG completion and search personalization. Different from the traditional modeling design of CapsNet where capsules are constructed by splitting feature maps, we use capsules to model the entries at the same dimension in the entity and relation embeddings. In our CapsE, INLINEFORM0 , INLINEFORM1 and INLINEFORM2 are unique INLINEFORM3 -dimensional embeddings of INLINEFORM4 , INLINEFORM5 and INLINEFORM6 , respectively. The embedding triple [ INLINEFORM7 , INLINEFORM8 , INLINEFORM9 ] of (s, r, o) is fed to the convolution layer where multiple filters of the same INLINEFORM10 shape are repeatedly operated over every row of the matrix to produce INLINEFORM11 -dimensional feature maps. Entries at the same dimension from all feature maps are then encapsulated into a capsule. Thus, each capsule can encode many characteristics in the embedding triple to represent the entries at the corresponding dimension. These capsules are then routed to another capsule which outputs a continuous vector whose length is used as a score for the triple. Finally, this score is used to predict whether the triple (s, r, o) is valid or not.
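To make the pipeline above concrete, here is a toy NumPy sketch of a CapsE-style score for a single triple. It is an illustration under simplifying assumptions, not the authors' implementation: each 1×3 filter here is just three weights shared across rows, routing is reduced to a single output capsule with coupling coefficients normalized over the input capsules, and the dimensions k, n, d are arbitrary.

```python
import numpy as np

def squash(v, axis=-1):
    """Capsule non-linearity: keeps direction, maps length into (0, 1)."""
    sq = np.sum(v ** 2, axis=axis, keepdims=True)
    return (sq / (1.0 + sq)) * v / np.sqrt(sq + 1e-9)

def capse_score(v_s, v_r, v_o, filters, W, iterations=3):
    """Toy CapsE-style plausibility score for one triple (s, r, o).

    v_s, v_r, v_o : (k,) embeddings; filters : (n, 3) row-wise conv weights;
    W : (k, n, d) transforms from each first-layer capsule to the output capsule.
    """
    triple = np.stack([v_s, v_r, v_o], axis=1)               # (k, 3) matrix
    # A 1x3 convolution over every row yields n feature maps of length k;
    # entries at the same dimension across all maps form one capsule.
    fmaps = np.stack([triple @ f for f in filters], axis=1)  # (k, n)
    caps = squash(fmaps, axis=1)                             # k capsules, n neurons
    u_hat = np.einsum('kn,knd->kd', caps, W)                 # per-capsule predictions
    b = np.zeros(len(u_hat))                                 # routing logits
    for _ in range(iterations):                              # simplified routing
        c = np.exp(b) / np.exp(b).sum()                      # coupling coefficients
        out = squash((c[:, None] * u_hat).sum(axis=0))       # (d,) output capsule
        b = b + u_hat @ out                                  # agreement update
    return float(np.linalg.norm(out))                        # vector length = score

k, n, d = 8, 4, 5
rng = np.random.default_rng(0)
score = capse_score(rng.normal(size=k), rng.normal(size=k), rng.normal(size=k),
                    rng.normal(size=(n, 3)), rng.normal(size=(k, n, d)))
print(f"plausibility score: {score:.4f}")
```

In training, one would then push scores of valid triples above those of corrupted ones, as the surrounding text describes.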
In summary, our main contributions from this paper are as follows:
INLINEFORM0 We propose an embedding model CapsE using the capsule network BIBREF16 for modeling relationship triples. To the best of our knowledge, our work is the first to explore the capsule network for knowledge graph completion and search personalization.
INLINEFORM0 We evaluate our CapsE for knowledge graph completion on two benchmark datasets WN18RR BIBREF17 and FB15k-237 BIBREF18 . CapsE obtains the best mean rank on WN18RR and the highest mean reciprocal rank and highest Hits@10 on FB15k-237.
INLINEFORM0 We restate the prospective strategy of extending triple embedding models to improve the ranking quality of search personalization systems. We adapt our model to search personalization and evaluate it on SEARCH17 BIBREF12 – a dataset of web search query logs. Experimental results show that our CapsE achieves new state-of-the-art results with significant improvements over strong baselines.
CapsE obtains the best mean rank on WN18RR and the highest mean reciprocal rank and highest Hits@10 on FB15k-237.
|
Does the CapsE obtain the best mean rank on WN18RR?
|
Yes, it does.
|