| paper_id (string) | yes_no (bool) | paper_index (int64) | evidence (string) | question (string) | answer (string) |
|---|---|---|---|---|---|
null | false
| null |
What is sphingosine?
|
Sphingosine (2-amino-4-trans-octadecene-1,3-diol) is an 18-carbon amino alcohol with an unsaturated hydrocarbon chain, which forms a primary part of sphingolipids, a class of cell membrane lipids that include sphingomyelin, an important phospholipid.
|
|
null | false
| null |
Which of the following US States border Canada and which do not: Montana, Wyoming, Michigan, New York, Maine, Idaho, and Minnesota?
|
They all border Canada with the exception of Wyoming.
|
|
null | false
| null |
What is the best way to store passwords?
|
Password managers are generally regarded as the gold standard for safely storing your passwords. A password manager can keep track of many unique and complicated passwords, so a user can set a strong and secure password without compromising accessibility when that password is difficult to remember or overly complex.
|
|
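As a small illustration of the "strong and secure password" point in the row above, here is a minimal sketch of generating a random, hard-to-guess password with Python's standard `secrets` module; the length and alphabet are illustrative choices, not a recommendation from the answer itself.

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Generate a random password from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    # secrets draws from the OS CSPRNG, unlike the `random` module
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```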
null | false
| 295
|
First, we evaluated the standard HMM model with an uninformative prior (this will be our baseline) for the two different input features: MFCC (and derivatives) and MBN. Results are shown in Table TABREF20. Surprisingly, the MBN features perform relatively poorly compared to the standard MFCC. These results contradict those reported in BIBREF3. Two factors may explain this discrepancy. First, since the Mboshi5k data differs from the training data of the MBN neural network, the network may not generalize well. Second, the initialization scheme of the model may not be suitable for this type of features: the Variational Bayesian Inference algorithm converges only to a local optimum of the objective function and is therefore dependent on the initialization. We believe the second explanation is the more likely since, as we shall see shortly, the best results in terms of word segmentation and NMI are eventually obtained with the MBN features when the inference is done with the informative prior. Next, we compared the HMM and the SVAE models when trained with an uninformative prior (lines with "Inf. Prior" set to "no" in Table TABREF23). The SVAE significantly improves the NMI and the precision, showing that it extracts more consistent units than the HMM model. However, it also degrades the segmentation in terms of recall. We further investigated this behavior by looking at the duration of the units found by both models compared to the true phones (Table TABREF22). We observe that the SVAE model favors longer units than the HMM model, leading to fewer boundaries and consequently lower recall.
We then evaluated the effect of the informative prior on the acoustic unit discovery (Table TABREF23). On all 4 combinations (2 feature sets × 2 models) we observe an improvement in terms of precision and NMI but a degradation of the recall. This result is encouraging since the informative prior was trained on English data (TIMIT), which is very different from Mboshi. Indeed, this suggests that even speech from an unrelated language can be of some help in the design of an ASR for a very low resource language. Finally, similarly to the SVAE/HMM case described above, we found that the degradation of the recall is due to longer units discovered by models with an informative prior (numbers omitted due to lack of space).
Word discovery results are given in Table TABREF21 for the Boundary metric BIBREF20, BIBREF21. We observe that: i) the best word boundary detection (F-score) is obtained with MBN features, an informative prior, and the SVAE model, which confirms the results of Table TABREF23 and shows that better AUD leads to better word segmentation; ii) word segmentation from AUD lattices is slightly better than from flat sequences of AUD symbols (1-best); iii) our results outperform a pure speech-based baseline based on segmental DTW BIBREF22 (F-score of 19.3% on the exact same corpus).
Indeed, this suggests that even speech from an unrelated language can be of some help in the design of an ASR for a very low resource language.
|
Can the speech from an unrelated language be of some help in the design of an ASR for a very low resource language?
|
Yes, it can.
|
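The row above scores acoustic unit discovery with NMI between discovered units and reference phones. A rough sketch of that style of evaluation using scikit-learn, assuming frame-level unit and phone labels are already time-aligned (the label sequences here are made up for illustration):

```python
from sklearn.metrics import normalized_mutual_info_score

# Hypothetical frame-level labels: discovered acoustic units vs. reference phones
discovered_units = [0, 0, 1, 1, 1, 2, 2, 0, 0, 3]
reference_phones = ["a", "a", "b", "b", "b", "k", "k", "a", "a", "s"]

nmi = normalized_mutual_info_score(reference_phones, discovered_units)
print(f"NMI: {nmi:.3f}")  # 1.0 means units map one-to-one onto phones
```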
null | false
| null |
Categorize the days of the week based on their starting alphabet.
Sunday, Monday, Tuesday, Wednesday, Thursday, Friday, Saturday
|
S - Sunday, Saturday
M - Monday
W - Wednesday
T - Tuesday, Thursday
F - Friday
|
|
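The grouping in the row above can be produced programmatically by bucketing the day names on their first letter; a quick sketch:

```python
from collections import defaultdict

days = ["Sunday", "Monday", "Tuesday", "Wednesday",
        "Thursday", "Friday", "Saturday"]

groups = defaultdict(list)
for day in days:
    groups[day[0]].append(day)  # key on the starting letter

for letter, names in groups.items():
    print(f"{letter} - {', '.join(names)}")
```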
null | false
| null |
Think of 3 shades of dark red that I could use to describe a rug
|
Rugs could be described as maroon, mahogany or scarlet
|
|
null | false
| null |
What is GitHub Actions?
|
GitHub Actions is a continuous integration and continuous delivery (CI/CD) platform that allows you to automate your build, test, and deployment pipeline. You can create workflows that build and test every pull request to your repository, or deploy merged pull requests to production.
|
|
null | false
| null |
An airfield traffic pattern is a standard path followed by aircraft when taking off or landing while maintaining visual contact with the airfield.
At an airport, the pattern (or circuit) is a standard path for coordinating air traffic. It differs from "straight-in approaches" and "direct climb-outs" in that aircraft using a traffic pattern remain close to the airport. Patterns are usually employed at small general aviation (GA) airfields and military airbases. Many large controlled airports avoid the system unless there is GA activity as well as commercial flights. However, some kind of a pattern may be used at airports in some cases such as when an aircraft is required to go around, but this kind of pattern at controlled airports may be very different in form, shape, and purpose to the standard traffic pattern as used at GA airports.
The use of a pattern at airfields is for aviation safety. By using a consistent flight pattern, pilots will know from where to expect other air traffic and be able to see and avoid it. Pilots flying under visual flight rules (VFR) may not be separated by air traffic control, so this consistent predictable pattern is a vital way to keep things orderly. At tower-controlled airports, air traffic control (ATC) may provide traffic advisories for VFR flights on a work-load permitting basis.
|
How important is pilot communication at a non-towered airport?
|
Flying into a non-towered airport can, in my experience, be more challenging than flying into a towered airport. What makes it challenging is that you are not speaking to anyone in charge of the airspace. You are broadcasting to an open communications channel and hoping all the other pilots “play” by the rules. The first best practice is to have a basic understanding of the airport: how many runways there are, what direction they run, the direction of the traffic pattern, the traffic pattern altitude, etc. Practice consistency, as this will aid your decision-making process. As you approach the airport you must also check the weather via AWOS or ASOS to determine the wind direction, speed, and altimeter setting. The next step is to tune into the airport UNICOM to listen to other traffic in the area; note the departures and arrivals and also note other planes in the traffic pattern. The next step is to identify yourself via the airport UNICOM channel, providing your distance and direction from the airport. Your approach direction will most likely determine how and where you will join the traffic pattern. Most importantly, keep listening to the airport UNICOM and actively look for any other planes either in the traffic pattern or in the vicinity. There are a minimum of 4 traffic pattern legs - crosswind, downwind, base and final. Your job as a pilot is to communicate as you enter each pattern leg. It is also beneficial to communicate your intentions, whether you are executing a full-stop landing or a touch-and-go. If you are doing a full-stop landing, be sure to communicate when you are clear of the given runway.
|
null | false
| 222
|
We study human-human negotiations in the buyer-seller bargaining scenario, which has been a key research area in the literature BIBREF0. In this section, we first describe our problem setup and key terminologies by discussing the dataset used. Later, we formalize our problem definition.
Dataset: For our explorations, we use the Craigslist Bargaining dataset (CB) introduced by BIBREF4. Instead of focusing on the previously studied game environments BIBREF5, BIBREF6, the dataset considers a more realistic setup: negotiating the price of products listed on Craigslist. The dataset consists of 6682 dialogues between a buyer and a seller who converse in natural language to negotiate the price of a given product (sample in Table TABREF1). In total, 1402 product ad postings were scraped from Craigslist, belonging to six categories: phones, bikes, housing, furniture, car and electronics. Each ad posting contains details such as Product Title, Category Type and a Listing Price. Moreover, a secret target price is also pre-decided for the buyer. The final price after the agreement is called the Agreed Price, which we aim to predict.
Defining the problem: Say we are provided with a product scenario $S$, a tuple: (Category, Title, Listing Price, Target Price). Define the interactions between a buyer and a seller using a sequence of $n$ events $E_n = \langle e_{1}, e_{2}, \ldots , e_{n} \rangle$, where $e_{i}$ occurs before $e_{j}$ iff $i<j$. Event $e_{i}$ is also a tuple: (Initiator, Type, Data). Initiator is either the Buyer or the Seller, Type can be one of (message, offer, accept, reject or quit), and Data consists of the corresponding natural language dialogue, an offer price, or can be empty. Nearly $80\%$ of events in the CB dataset are of type `message', each with a textual message as Data. An offer is usually made and accepted at the end of each negotiation. Since the offers directly contain the agreed price (which we want to predict), we only consider `message' events in our models. Given the scenario $S$ and the first $n$ events $E_n$, our problem is then to learn the function $f_{n}$: $A = f_{n}(S, E_n)$, where $A$ refers to the final agreed price between the two negotiating parties.
Since the offers directly contain the agreed price (which we want to predict), we only consider ‘message’ events in our models.
|
Why do the authors only consider ‘message’ events in the models?
|
Because the offers directly contain the agreed price.
|
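The problem definition in the row above maps naturally onto a small data model. A sketch, with hypothetical class and field names (the dataset itself does not prescribe these), and a trivial baseline standing in for the learned $f_n$:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Scenario:          # S = (Category, Title, Listing Price, Target Price)
    category: str
    title: str
    listing_price: float
    target_price: float  # buyer's secret target

@dataclass
class Event:             # e_i = (Initiator, Type, Data)
    initiator: str       # "buyer" or "seller"
    type: str            # message / offer / accept / reject / quit
    data: Optional[str]  # dialogue text, offer price, or None

def predict_agreed_price(s: Scenario, events: List[Event]) -> float:
    """Placeholder for f_n(S, E_n) -> agreed price A."""
    # Only 'message' events are used by the models described above
    messages = [e for e in events if e.type == "message"]
    return s.listing_price  # trivial baseline: predict the listing price
```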
null | false
| null |
How to have an evergreen garden?
|
In order to have an evergreen garden it's crucial to buy the right plants and plant them in the correct way in your garden. In order to choose your plants you will need to evaluate the soil and climate conditions in your garden. Please always consult planting guidance. I can strongly recommend Aucuba, Bergenia, Carex, Choisya, Cotoneaster, Erica, Escallonia, Hebe, Heuchera and Viburnum as perfect evergreen plants to put in your garden. This way you will always be surrounded by stunning green colours throughout the year with spikes of beautiful flowering in the summer.
|
|
null | false
| null |
What was the result of the Emancipation Proclamation?
|
The Emancipation Proclamation changed the legal status of slaves in the USA from enslaved to free.
|
|
null | false
| 41
|
We considered a dataset of 100 employees, where for each employee multiple peer comments were recorded. Also, for each employee, a manual summary was generated by HR personnel. The summaries generated by our ILP-based approach were compared with the corresponding manual summaries using the ROUGE BIBREF22 unigram score. For comparing the performance of our ILP-based summarization algorithm, we explored a few summarization algorithms provided by the Sumy package. A common parameter required by all these algorithms is the number of sentences to keep in the final summary. ILP-based summarization requires a similar parameter K, which is automatically decided based on the total number of candidate phrases. Assuming a sentence is equivalent to roughly 3 phrases, for the Sumy algorithms we set the number-of-sentences parameter to the ceiling of K/3. Table TABREF51 shows the average and standard deviation of ROUGE unigram F1 scores for each algorithm over the 100 summaries. The performance of ILP-based summarization is comparable with the other algorithms, as a two-sample t-test does not show a statistically significant difference. Also, human evaluators preferred the phrase-based summary generated by our approach to the other sentence-based summaries.
For comparing the performance of our ILP-based summarization algorithm, we explored a few summarization algorithms provided by the Sumy package. A common parameter required by all these algorithms is the number of sentences to keep in the final summary.
|
For comparing the performance of their ILP-based summarization algorithm, they explored a few summarization algorithms provided by the Sumy package; what is the common parameter required by all these algorithms?
|
The number of sentences to keep in the final summary.
|
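A sketch of how the Sumy comparison described above could be wired up, with the number-of-sentences parameter set to ceil(K/3); the input text and K are placeholders, and this assumes Sumy's standard LexRank interface:

```python
import math
from sumy.parsers.plaintext import PlaintextParser
from sumy.nlp.tokenizers import Tokenizer
from sumy.summarizers.lex_rank import LexRankSummarizer

K = 12  # hypothetical number of phrases chosen by the ILP
sentence_count = math.ceil(K / 3)  # ~3 phrases per sentence

text = "Peer feedback text for one employee goes here. ..."
parser = PlaintextParser.from_string(text, Tokenizer("english"))
summarizer = LexRankSummarizer()  # requires `pip install sumy` + NLTK data

for sentence in summarizer(parser.document, sentence_count):
    print(sentence)
```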
null | false
| null |
Who was Arseny Bibikov
|
Arseny Nikolayevich Bibikov (1873–1927) was a writer, poet, and film and stage actor. Bibikov appeared in almost 50 films.
|
|
2003.12738
| false
| null |
We evaluate the proposed models on three conversational datasets: MojiTalk BIBREF16, PersonaChat BIBREF11, and Empathetic-Dialogues BIBREF26.
We evaluate the proposed models on three conversational datasets: MojiTalk BIBREF16, PersonaChat BIBREF11, and Empathetic-Dialogues BIBREF26.
|
What three conversational datasets are used for evaluation?
|
The answers are shown as follows:
* MojiTalk
* PersonaChat
* Empathetic-Dialogues
|
1912.09713
| false
| null |
For the goal of measuring compositional generalization as accurately as possible, it is particularly interesting to construct maximum compound divergence (MCD) splits, which aim for a maximum compound divergence at a low atom divergence (we use $\mathcal {D}_A \le 0.02$). Table TABREF18 compares the compound divergence $\mathcal {D}_C$ and atom divergence $\mathcal {D}_A$ of three MCD splits to a random split baseline as well as to several previously suggested compositionality experiments for both CFQ and the existing scan dataset (cf. Section SECREF30). The split methods (beyond random split) are the following:
Output length: Variation of the setup described by BIBREF2, where the train set consists of examples with output (SPARQL query or action sequence) length $\le N$, while the test set consists of examples with output length $> N$. For CFQ, we use $N = 7$ constraints. For SCAN, we use $N = 22$ actions.
Input length: Variation of the above setup, in which the train set consists of examples with input (question or command) length $\le N$, while test set consists of examples with input length $> N$. For CFQ, we use $N=19$ grammar leaves. For SCAN, we use $N=8$ tokens.
Output pattern: Variation of setup described by BIBREF8, in which the split is based on randomly assigning clusters of examples sharing the same output (query or action sequence) pattern. Query patterns are determined by anonymizing entities and properties; action sequence patterns collapse primitive actions and directions.
Input pattern: Variation of the previous setup in which the split is based on randomly assigning clusters of examples sharing the same input (question or command) pattern. Question patterns are determined by anonymizing entity and property names; command patterns collapse verbs and the interchangeable pairs left/right, around/opposite, twice/thrice.
The split methods (beyond random split) are the following:
Output length: Variation of the setup described by BIBREF2, where the train set consists of examples with output (SPARQL query or action sequence) length $\le N$, while the test set consists of examples with output length $> N$. For CFQ, we use $N = 7$ constraints. For SCAN, we use $N = 22$ actions.
Input length: Variation of the above setup, in which the train set consists of examples with input (question or command) length $\le N$, while test set consists of examples with input length $> N$. For CFQ, we use $N=19$ grammar leaves. For SCAN, we use $N=8$ tokens.
Output pattern: Variation of setup described by BIBREF8, in which the split is based on randomly assigning clusters of examples sharing the same output (query or action sequence) pattern. Query patterns are determined by anonymizing entities and properties; action sequence patterns collapse primitive actions and directions.
Input pattern: Variation of the previous setup in which the split is based on randomly assigning clusters of examples sharing the same input (question or command) pattern. Question patterns are determined by anonymizing entity and property names; command patterns collapse verbs and the interchangeable pairs left/right, around/opposite, twice/thrice.
|
What are other approaches to creating compositional generalization benchmarks?
|
The answers are shown as follows:
* random
* Output length
* Input length
* Output pattern
* Input pattern
|
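A minimal sketch of the output-length split described in the row above: examples whose output length is at most N go to train, the rest to test. The field names and whitespace tokenization are assumptions for illustration (CFQ counts SPARQL constraints and SCAN counts actions).

```python
def output_length_split(examples, n):
    """Split on output length: len <= n -> train, len > n -> test."""
    train, test = [], []
    for ex in examples:
        # Stand-in length measure; CFQ/SCAN use constraints/actions instead
        length = len(ex["output"].split())
        (train if length <= n else test).append(ex)
    return train, test

examples = [{"input": "jump twice", "output": "JUMP JUMP"},
            {"input": "jump around left twice",
             "output": "LTURN JUMP LTURN JUMP LTURN JUMP LTURN JUMP"}]
train, test = output_length_split(examples, n=4)
print(len(train), len(test))  # 1 1
```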
null | false
| null |
Tell me which is a language and which is a type of people: Malayali & Malayalam
|
Malayali is a people. Malayalam is the language that the Malayali people speak.
|
|
null | false
| null |
Write a story about a cat named Rory and the adventures he gets up to while his owner, Maria, is gone at work. Write the story in the form of an hourly journal from 7am to 7 pm written by Rory, and convey that Rory loves to sleep while Maria is gone.
|
7 am: There is still no food in my bowl. I have tried to point Maria in the direction of the bowl at least 8 times, and she is not noticing. I even attempted to mimic the cry of a newborn human to attract her attention, and yet she is only focused on her coffee machine. I fear she is addicted.
8 am: Belly full from breakfast. I dislike my new food, the cat on the bag scares me. I will take a nap in the sun.
9 am: Continued to nap in the sun. Had to move multiple times because the sun patch moved, very annoying. I am starting to feel higher energy as my stomach is no longer stuffed. Time to eat the remaining food in my bowl I saved for a mid morning snack
10 am: There was a bird on the porch that I scared away. It was a close call, but I showed him who is boss here
11 am: Visited Maria's roommate at their desk. I helped them with their work, but my paws are not particularly well suited for their keyboard. Was then unceremoniously kicked out of the room
12 pm: Tried to convince the roommate to give me lunch. Instead she just sat there and ate her fish and didn't offer me any.
1 pm: Time for my afternoon nap in the hammock
2 pm: Woke up discovering I had overheated in the hammock. Hunted the laser pointer and then I noticed the bird had returned. Put him in his place again.
3 pm: Took a tour of the kitchen counters. Noticed that Maria had left the cat food partially open. Ate some to prevent it from going bad and wasting perfectly good food.
4 pm: Took a nap under Maria's bed while waiting for her to get home
5 pm: Reunited! All is well again, except for Maria still doesn't notice my food bowl
6 pm: Maria brought me a mouse to play with. I indulge her by chasing it.
7 pm: Maria finally notices the food bowl. She pulls out a new bag, without a scary cat on it. All is well in the world.
|
|
null | false
| null |
Name some rap albums that were released in 2022.
|
Some rap albums that were released in 2022 include:
- The Forever Story by J.I.D.
- Melt My Eyez See Your Future by Denzel Curry
- King's Disease III by Nas
- Her Loss by Drake
- Mr. Morale & The Big Steppers by Kendrick Lamar
|
|
1903.07398
| true
| null |
Traditional text-to-speech (TTS) systems are composed of complex pipelines BIBREF0; these often include acoustic frontends, duration models, acoustic prediction models, and vocoder models. The complexity of the TTS problem, coupled with the requirement for deep domain expertise, means these systems are often brittle in design and produce unnatural synthesized speech.
The architecture of our model utilizes an RNN-based Seq2Seq model for generating mel spectrograms from text. The architecture is similar to that of Tacotron 2 BIBREF4. The generated mel spectrogram can either be inverted via iterative algorithms such as Griffin-Lim, or through more complicated neural vocoder networks such as a mel-spectrogram-conditioned WaveNet BIBREF11.
In a direct comparison of model parameters between our model and the open-source Tacotron 2, ours contains 4.5 million parameters, whereas Tacotron 2 contains around 13 million parameters with the default setting. By helping our model learn attention alignment faster, we can afford to use a smaller overall model while achieving similar speech quality.
Traditional text-to-speech (TTS) systems are composed of complex pipelines BIBREF0; these often include acoustic frontends, duration models, acoustic prediction models, and vocoder models.
The architecture of our model utilizes an RNN-based Seq2Seq model for generating mel spectrograms from text. The architecture is similar to that of Tacotron 2 BIBREF4.
In a direct comparison of model parameters between our model and the open-source Tacotron 2, ours contains 4.5 million parameters, whereas Tacotron 2 contains around 13 million parameters with the default setting. By helping our model learn attention alignment faster, we can afford to use a smaller overall model while achieving similar speech quality.
|
Do they reduce the number of parameters in their architecture compared to other direct text-to-speech models?
|
Yes.
|
null | false
| null |
Paul McCartney credits which artist with teaching him everything he knows?
|
Little Richard.
|
|
null | false
| 218
|
The standard approach to multilingual NLP is to design a single architecture, but tune and train a separate model for each language. While this method allows for customizing the model to the particulars of each language and the available data, it also presents a problem when little data is available: extensive language-specific annotation is required. The reality is that most languages have very little annotated data for most NLP tasks.
Ammar et al. (2016) found that using training data from multiple languages annotated with Universal Dependencies BIBREF1, and represented using multilingual word vectors, outperformed monolingual training. Inspired by this, we apply the idea of training one model on multiple languages—which we call polyglot training—to PropBank-style semantic role labeling (SRL). We train several parsers for each language in the CoNLL 2009 dataset BIBREF0: a traditional monolingual version, and variants which additionally incorporate supervision from the English portion of the dataset. To our knowledge, this is the first multilingual SRL approach to combine supervision from several languages.
The CoNLL 2009 dataset includes seven different languages, allowing study of trends across them. Unlike the Universal Dependencies dataset, however, the semantic label spaces are entirely language-specific, making our task more challenging. Nonetheless, the success of polyglot training in this setting demonstrates that sharing of statistical strength across languages does not depend on explicit alignment in annotation conventions, and can be done simply through parameter sharing. We show that polyglot training can result in better labeling accuracy than a monolingual parser, especially for low-resource languages. We find that even a simple combination of data is as effective as more complex kinds of polyglot training. We include a breakdown into label categories of the differences between the monolingual and polyglot models. Our findings indicate that polyglot training consistently improves label accuracy for common labels.
We train several parsers for each language in the CoNLL 2009 dataset: a traditional monolingual version, and variants which additionally incorporate supervision from the English portion of the dataset. To our knowledge, this is the first multilingual SRL approach to combine supervision from several languages.
|
Is their approach the first multilingual SRL approach to combine supervision from several languages?
|
Yes.
|
null | false
| null |
In cryptography, Galois/Counter Mode (GCM) is a mode of operation for symmetric-key cryptographic block ciphers which is widely adopted for its performance. GCM throughput rates for state-of-the-art, high-speed communication channels can be achieved with inexpensive hardware resources.
The GCM algorithm provides both data authenticity (integrity) and confidentiality and belongs to the class of authenticated encryption with associated data (AEAD) methods. This means that as input it takes a key K, some plaintext P, and some associated data AD; it then encrypts the plaintext using the key to produce ciphertext C, and computes an authentication tag T from the ciphertext and the associated data (which remains unencrypted). A recipient with knowledge of K, upon reception of AD, C and T, can decrypt the ciphertext to recover the plaintext P and can check the tag T to ensure that neither ciphertext nor associated data were tampered with.
|
Please give me a summary of what AES-GCM is and what its inputs and outputs are.
|
AES-GCM is a mode of operation for symmetric-key cryptography. It stands for Galois/Counter Mode and is an authenticated encryption mode that provides integrity and authenticity on top of confidentiality. It accepts as input a key K, a plaintext P, and associated data AD, and it produces as output the ciphertext C and an authentication tag T.
|
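The description above maps directly onto real AEAD APIs. A minimal sketch using Python's `cryptography` package; the key, nonce, message, and associated data here are illustrative:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # K
aesgcm = AESGCM(key)
nonce = os.urandom(12)                     # 96-bit nonce, unique per message

plaintext = b"attack at dawn"              # P
associated_data = b"header-v1"             # AD, authenticated but not encrypted

# encrypt() returns the ciphertext C with the 16-byte tag T appended
ct_and_tag = aesgcm.encrypt(nonce, plaintext, associated_data)

# decrypt() verifies T over both ciphertext and AD before returning P
recovered = aesgcm.decrypt(nonce, ct_and_tag, associated_data)
assert recovered == plaintext
```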
null | false
| 82
|
Text simplification aims to reduce the lexical and structural complexity of a text, while still retaining the semantic meaning, which can help children, non-native speakers, and people with cognitive disabilities to understand text better. Methods of automatic text simplification can be generally divided into three categories: lexical simplification (LS) BIBREF0, BIBREF1, rule-based BIBREF2, and machine translation (MT) BIBREF3, BIBREF4. LS is mainly used to simplify text by substituting infrequent and difficult words with frequent and easier words. However, there are several challenges for the LS approach: first, a great number of transformation rules are required for reasonable coverage; second, the rules should be applied based on the specific context; third, the syntax and semantic meaning of the sentence are hard to retain. Rule-based approaches use hand-crafted rules for lexical and syntactic simplification, for example, substituting difficult words in a predefined vocabulary. However, such approaches need a lot of human involvement to manually define these rules, and it is impossible to give all possible simplification rules. The MT-based approach has attracted great attention in the last several years; it addresses text simplification as a monolingual machine translation problem, translating from 'ordinary' to 'simplified' sentences.
In recent years, neural machine translation (NMT) has emerged as a deep learning approach that achieves very impressive results BIBREF5, BIBREF6, BIBREF7. Unlike a traditional phrase-based machine translation system, which operates on small components separately, an NMT system is trained end-to-end, without the need for external decoders, language models, or phrase tables. Therefore, the existing architectures in NMT have been used for text simplification BIBREF8, BIBREF4. However, most recent work using NMT is limited by training data that are scarce and expensive to build. Language models trained on simplified corpora have played a central role in statistical text simplification BIBREF9, BIBREF10. One main reason is that the amount of available simplified corpora typically far exceeds the amount of parallel data. The performance of models can typically be improved when trained on more data. Therefore, we expect simplified corpora to be especially helpful for NMT models.
In contrast to previous work, which uses the existing NMT models, we explore a strategy to include simplified training corpora in the training process without changing the neural network architecture. We first propose to pair simplified training sentences with synthetic ordinary sentences during training, and treat this synthetic data as additional training data. We obtain synthetic ordinary sentences through back-translation, i.e., an automatic translation of the simplified sentence into the ordinary sentence BIBREF11. Then, we mix the synthetic data into the original (simplified-ordinary) data to train the NMT model. Experimental results on two publicly available datasets show that we can improve the text simplification quality of NMT models by mixing simplified sentences into the training set, over an NMT model using only the original training data.
Unlike a traditional phrase-based machine translation system, which operates on small components separately, an NMT system is trained end-to-end, without the need for external decoders, language models, or phrase tables.
|
In what aspect is a neural machine translation (NMT) system superior to a traditional phrase-based machine translation system?
|
An NMT system is trained end-to-end, without the need for external decoders, language models, or phrase tables.
|
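A sketch of the data-mixing step described in the row above, with `back_translate` standing in for the reverse (simplified-to-ordinary) model; all function and variable names are hypothetical:

```python
import random

def back_translate(simplified: str) -> str:
    """Stand-in for a trained simplified->ordinary NMT model."""
    return simplified  # placeholder: a real model would rewrite the sentence

# Original parallel data: (ordinary, simplified) pairs
parallel = [("The feline vocalized loudly.", "The cat meowed.")]

# Monolingual simplified corpus, typically much larger than the parallel data
mono_simplified = ["The dog barked.", "The sun rose."]

# Pair each simplified sentence with a synthetic ordinary sentence
synthetic = [(back_translate(s), s) for s in mono_simplified]

# Mix synthetic pairs into the original training data and shuffle
train_data = parallel + synthetic
random.shuffle(train_data)
print(len(train_data))  # 3
```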
null | false
| null |
Which of the following are ex-Indian prime ministers?
Rajiv Gandhi
Rahul Gandhi
Manmohan Singh
Sonia Gandhi
Atal Bihari Vajpayee
Kapil Sibal
A. K. Antony
Jawaharlal Nehru
|
Ex-Indian prime ministers:
Rajiv Gandhi
Manmohan Singh
Atal Bihari Vajpayee
Jawaharlal Nehru
|
|
null | false
| null |
Die Schutzbefohlenen (translated as "Charges (The Supplicants)"[note 1]), is a play by Elfriede Jelinek written in 2013. She termed it a Sprachkunstwerk, a language artwork. It deals critically with the politics at the time concerning refugees. The play was first read in Hamburg on 21 September 2013. The first scenic production was in Mannheim on 23 May 2014. The first production in Austria was staged at the Burgtheater in Vienna on 28 March 2015, which was recognised internationally. Later that year, Jelinek expanded the text to reflect the changed political situation.
|
Tell me about the play Die Schutzbefohlenen
|
Die Schutzbefohlenen, translated as the supplicants is a play by Elfriede Jelinek written in 2013. She termed it a Sprachkunstwerk, a language artwork. It deals critically with the politics at the time concerning refugees. The play was first read in Hamburg on 21 September 2013. The first scenic production was in Mannheim on 23 May 2014.
The first production in Austria was staged at the Burgtheater in Vienna on 28 March 2015, which was recognised internationally. Later that year, Jelinek expanded the text to reflect the changed political situation.
|
null | false
| 18
|
Neural Machine Translation (NMT) has shown its effectiveness in translation tasks, with NMT systems performing best in recent machine translation campaigns BIBREF0, BIBREF1. Compared to phrase-based Statistical Machine Translation (SMT), which is basically an ensemble of different features trained and tuned separately, NMT directly models the translation relationship between source and target sentences. Unlike SMT, NMT does not require much linguistic information or large monolingual data to achieve good performance.
An NMT system consists of an encoder, which recursively reads and represents the whole source sentence in a context vector, and a recurrent decoder, which takes the context vector and its previous state to predict the next target word. It is then trained in an end-to-end fashion to learn parameters that maximize the likelihood between the outputs and the references. Recently, attention-based NMT has been featured in most state-of-the-art systems. First introduced by BIBREF2, the attention mechanism is integrated on the decoder side as feedforward layers. It allows the NMT system to decide which source words should take part in predicting the next target word. It helps to improve NMT systems significantly. Nevertheless, since the attention mechanism is specific to a particular source sentence and the target word under consideration, it is also specific to particular language pairs.
Some recent work has focused on extending the NMT framework to multilingual scenarios. By training such a network using parallel corpora in a number of different languages, NMT can benefit from additional information embedded in a common semantic space across languages. Basically, the proposed NMT systems are required to employ multiple encoders or multiple decoders to deal with multilinguality. Furthermore, in order to avoid the tight dependency of the attention mechanism on specific language pairs, they also need to modify their architecture to combine either the encoders or the attention layers. These modifications are specific to the purpose of the tasks as well. Thus, those multilingual NMT systems are more complicated, have many more free parameters to learn, and are more difficult to train in a standard way compared to the original NMT.
In this paper, we introduce a unified approach to seamlessly extend the original NMT to multilingual settings. Our approach allows us to integrate any language on either side of the encoder-decoder architecture with only one encoder and one decoder for all the languages involved. Moreover, it is not necessary to do any network modification to enable the attention mechanism in our NMT systems. We then apply our proposed framework in two demanding scenarios: under-resourced translation and zero-resourced translation. The results show that bringing multilinguality to NMT helps to improve individual translations. With some insightful analyses of the results, we set our goal toward a fully multilingual NMT framework.
The paper starts with a detailed introduction to attention-based NMT. In Section SECREF3, related work on multi-task NMT is reviewed. Section SECREF5 describes our proposed approach and thorough comparisons to the related work. It is followed by a section evaluating our systems in the two aforementioned scenarios, in which different strategies are employed under a unified approach (Section SECREF4). Finally, the paper ends with conclusions and future work.
Neural Machine Translation (NMT) has shown its effectiveness in translation tasks, with NMT systems performing best in recent machine translation campaigns.
|
In recent machine translation campaigns, which approach to machine translation performs best?
|
Neural Machine Translation (NMT).
|
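A worked sketch of the attention step described in the row above, in the additive style introduced for NMT: scores between the decoder state and each encoder state are normalized with a softmax and used to form the context vector. Dimensions and weights here are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 5, 8                    # source length, hidden size
H = rng.normal(size=(T, d))    # encoder hidden states h_1..h_T
s = rng.normal(size=d)         # current decoder state

# Additive attention: score_t = v^T tanh(W_h h_t + W_s s)
W_h = rng.normal(size=(d, d))
W_s = rng.normal(size=(d, d))
v = rng.normal(size=d)

scores = np.tanh(H @ W_h.T + s @ W_s.T) @ v
alpha = np.exp(scores - scores.max())
alpha /= alpha.sum()           # attention weights over source words

context = alpha @ H            # context vector fed to the decoder
print(alpha.round(3), context.shape)  # weights sum to 1, shape (8,)
```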
null | false
| 330
|
The encoder-decoder based framework BIBREF0, BIBREF1, BIBREF2 is the dominant approach for neural machine translation (NMT) BIBREF3, BIBREF4. Although the encoder and decoder usually adopt the same model structure (RNN BIBREF5, CNN BIBREF6 or self-attention BIBREF3, BIBREF7) and the same number of layers, they perform different functionalities: the encoder extracts the hidden representations of the source sentence, and the decoder generates target tokens conditioned on the source hidden representations as well as the previous generated tokens.
While most existing works focus on the design and improvement of encoder-decoder framework for NMT BIBREF8, BIBREF6, BIBREF3, BIBREF9 as well as its detailed analyses BIBREF10, BIBREF11, BIBREF12, BIBREF13, few works concentrate on the characteristics and functionalities of the encoder and the decoder, which are valuable to understand this popular framework and improve its performance in NMT. Therefore, in this paper, we conduct a study and aim to understand the characteristics of the encoder and the decoder in NMT. We observe some interesting phenomena:
The decoder handles an easier task than the encoder. 1) We find that adding more layers to the encoder achieves larger improvements than adding more layers to the decoder. 2) We also compare the training time of the encoder and decoder by fixing the parameters of a well-trained decoder (encoder) and updating only the parameters of the encoder (decoder). We find that the decoder converges faster than the encoder. These two results suggest that the decoder handles an easier task than the encoder in NMT.
The decoder is more sensitive to input noise than the encoder. We randomly add different levels of noise to the input of the encoder and decoder respectively during inference, and find that adding noise to the input of the decoder leads to a larger accuracy drop than adding it to the input of the encoder.
We further analyze why the decoder is more sensitive by masking the previous tokens, and by comparing autoregressive NMT with its non-autoregressive counterpart. We find that the preceding tokens in the decoder provide strong conditional information, which partially explains the previous two observations on the decoder.
We believe our studies on the different characteristics of the encoder and decoder will inspire the following research on the encoder-decoder framework as well as improve the performance on NMT and other encoder-decoder based tasks.
We believe our studies on the different characteristics of the encoder and decoder will inspire the following research on the encoder-decoder framework as well as improve the performance on NMT and other encoder-decoder based tasks.
|
What do the authors believe about their studies on the different characteristics of the encoder and decoder?
|
The authors believe that their studies on the different characteristics of the encoder and decoder will inspire the following research on the encoder-decoder framework as well as improve the performance on NMT and other encoder-decoder based tasks.
|
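A sketch of the noise-injection probe described in the row above: randomly replace a fraction of token ids in a sequence, as one might do separately for encoder and decoder inputs at inference time. The function name and noise model are assumptions, not the paper's exact procedure.

```python
import random

def add_token_noise(tokens, noise_level, vocab_size, seed=0):
    """Replace a fraction `noise_level` of token ids with random ids."""
    rng = random.Random(seed)
    noisy = list(tokens)
    for i in range(len(noisy)):
        if rng.random() < noise_level:
            noisy[i] = rng.randrange(vocab_size)
    return noisy

src = [12, 7, 431, 9, 88]
print(add_token_noise(src, noise_level=0.4, vocab_size=1000))
```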
null | false
| null |
Recommend me a SCI FI fiction book
|
The Three-Body Problem by Liu Cixin could be a good place to start.
|
|
null | false
| 334
|
Automatically deriving discourse relations from free text is a challenging but important problem. Shallow discourse parsing is very useful in text summarization BIBREF0, opinion analysis BIBREF1, and natural language generation. A shallow discourse parser is a system for parsing raw text into a set of discourse relations between two adjacent or non-adjacent text spans. A discourse relation is composed of a discourse connective, the two arguments of the discourse connective, and the sense of the discourse connective. The discourse connective signals an explicit discourse relation; in a non-explicit discourse relation, the discourse connective is omitted. The two arguments of the discourse connective, Arg1 and Arg2, are the two adjacent or non-adjacent text spans connected in the discourse relation. The sense of the discourse connective characterizes the nature of the discourse relation. The following discourse relation annotation is taken from a document in the PDTB. Arg1 is shown italicized, Arg2 is shown in bold, and the discourse connective is underlined.
The connective identifier finds the connective word, “unless”. The arguments identifier locates the two arguments of “unless”. The sense classifier labels the discourse relation. The non-explicit identifier checks all pairs of adjacent sentences; if it identifies a pair of sentences as a non-explicit relation, it labels the pair with the relation sense. Though much research work BIBREF2, BIBREF3, BIBREF4 has been committed to the shallow discourse parsing field, all of it focuses on a subtask of parsing only, rather than the whole parsing process. Given all that, a full shallow discourse parser framework is proposed in our paper to turn free text into a set of discourse relations. The parser includes a connective identifier, an arguments identifier, a sense classifier, and a non-explicit identifier, which are connected with each other in a pipeline. In order to enhance the performance of the parser, a feature-based maximum entropy model approach is adopted in the experiments. The maximum entropy model offers a clean way to combine diverse pieces of contextual evidence in order to estimate the probability of a certain linguistic class occurring with a certain linguistic context in a simple and accessible manner. The three main contributions of the paper are:
The rest of this paper is organized as follows. Section 2 reviews related work in discourse parsing. Section 3 describes the experimental corpus, the PDTB. Section 4 describes the framework and the components of the parser. Section 5 presents experiments and evaluations. Conclusions are presented in Section 6.
Given all that, a full shallow discourse parser framework is proposed in our paper to turn free text into a set of discourse relations.
|
What kind of framework do they propose?
|
A shallow discourse parser framework.
|
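The four-stage pipeline described in the row above can be sketched as functions connected in sequence; each function here is a hypothetical stand-in for the corresponding maximum-entropy classifier, with a toy connective list and sense map.

```python
def identify_connectives(text):
    """Stand-in: find explicit discourse connectives (e.g. 'unless')."""
    return [w for w in text.split() if w.lower() in {"unless", "because", "but"}]

def identify_arguments(text, connective):
    """Stand-in: locate Arg1 and Arg2 around a connective."""
    left, _, right = text.partition(connective)
    return left.strip(), right.strip()

def classify_sense(connective):
    """Stand-in: map a connective to a relation sense."""
    return {"unless": "Contingency", "because": "Contingency",
            "but": "Comparison"}.get(connective, "Expansion")

def parse(text):
    relations = []
    for conn in identify_connectives(text):
        arg1, arg2 = identify_arguments(text, conn)
        relations.append((conn, arg1, arg2, classify_sense(conn)))
    return relations

print(parse("We will go hiking unless it rains"))
```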
null | false
| null |
Give me an example of numbers in these categories: Natural Numbers, Prime Number, Composite Number, Whole Numbers, Integers, Rational Numbers, Irrational Numbers, Real Numbers, Complex Number.
|
These are a few examples for each type of number:
Natural Numbers: 1, 2, 3, 4
Prime Number: 2, 3, 5, 7
Composite Number: 4, 6, 8, 9
Whole Numbers: 0, 1, 2, 3
Integers: -1, 0, 1, 2
Rational Numbers: −1/3, 0.99999…, 7/2, 13/9
Irrational Numbers: √5, √11, √21, π
Real Numbers: -2, 0, 1/2, √3
Complex Number: 0.39 + 1.9 i, √5 + √9i, −5 + πi, 2 + i
|
|
null | false
| null |
Home-field advantage
World Series
The World Series used several different formats in its early years. Initially, it generally followed an alternating home-and-away pattern, except that if a seventh game was possible, its site was determined by coin toss prior to the sixth game. In 1924 the Series began using a 2-3-2 format, presumably to save on travel costs, a pattern that has continued to this day with the exception of a couple of the World War II years when wartime travel restrictions compelled a 3-4 format (used in 1943 and 1945, but not in the 1944 series, which was contested between crosstown rivals the St. Louis Browns and St. Louis Cardinals; all games were held in the same stadium in St. Louis). From the start of the 2-3-2 format through the 2002 season, home-field advantage generally alternated between leagues each year. Prior to the 1994 strike, the National League champion received home-field advantage in even-numbered years and the American League champion in odd-numbered years; these were reversed for 1995–2002 (because 1994 would have been the NL's turn to have home-field, but the World Series was canceled by the aforementioned strike). That changed starting in 2003.
The 2002 All-Star Game ended in a tie, much to the displeasure of both fans and sportswriters, who complained about a lack of intensity and competitiveness on the part of the players. This hit especially close to home for Commissioner Bud Selig, as the game had been played in his home city of Milwaukee, Wisconsin. In response, to make the game's outcome more meaningful, in 2003, MLB began assigning home-field advantage in the World Series to the winner of that year's All-Star Game, which is typically held in mid-July.
Following the acceptance of a new collective bargaining agreement after the 2016 season (which went into effect in 2017) home-field advantage in the World Series is no longer tied to the outcome of the All-Star Game, but instead is granted to the team with the better regular-season record. One exception was 2020, when all World Series games were played at a neutral site. The home-field advantage designation in the World Series was determined based on whichever pennant winner held the higher seed in its league, not necessarily the team with the better regular season record. However, in the event both pennant winners had held the same seeding number, only then regular season records would have determined the home team in Games 1, 2, 6, and 7.
|
Provide a bulleted summary of the history of how home-field advantage is determined for the World Series.
|
Home field advantage for the World Series has been determined in the following ways throughout history:
- 1924 - 2002: Home field advantage simply alternated between the American and National Leagues each year.
- 2003 - 2016: Home field advantage was determined by the winner of the All-Star Game.
- 2017 - Present: Home field advantage goes to the pennant winner with the better regular-season record.
|
null | false
| 377
|
The Recognizing Textual Entailment (RTE) challenges first appeared in 2004 as a means to test textual entailment, i.e. relations between a premise text and a hypothesis text ("Recognizing Textual Entailment"):
An entailment example from RTE1.
Budapest again became the focus of national political drama in the late 1980s, when Hungary led the reform movement in eastern Europe that broke the communist monopoly on political power and ushered in the possibility of multiparty politics.
In the late 1980s Budapest became the center of the reform movement.
Entailment [RTE702]
In contrast to the FraCaS test suite, the RTE challenges use naturally occurring data as premises. The hypothesis text is then constructed based on this premise text. There is either a binary or a tripartite classification of entailment — depending on the version of RTE. The first two RTE challenges follow the former scheme and make a binary classification of entailment (entailed or not entailed). Tripartite classification (entailment, negation of the hypothesis entailment or no entailment) is added in the later datasets, retaining two way classification versions as well. Seven RTE challenges have been created altogether.
The main advantages of the RTE challenges are their use of examples from natural text and the inclusion of cases that require presupposed information, mostly world knowledge. Indeed, the very definition of inference assumed in a number of the examples is problematic. As BIBREF1 have pointed out, RTE platforms suffer from cases of inference that should not be categorized as such. For these cases, a vast amount of world knowledge needs to be taken into consideration (which, most importantly, not every linguistic agent has). In this paper, taking the RTE challenges as our starting point, we claim that RTE is insufficiently precise for logical reasoning or other precise reasoning tasks, and we take up the task of validating our working hypothesis and proposing a method for properly collecting precise entailment pairs in the style of RTE. Of course, the creators of RTE had in mind a looser definition of inference where both a precise and an imprecise definition of entailment would be at play. BIBREF2 mention that “our applied notion of textual entailment is also related, of course, to classical semantic entailment in the linguistics literature... a common definition of entailment specifies that a text t entails another text h (hypothesis, in our terminology) if h is true in every circumstance (possible world) in which t is true." This is close to what we want to capture in this paper. But, at the same time, BIBREF2 also mention that “however, our applied definition allows for cases in which the truth of the hypothesis is highly plausible, for most practical purposes, rather than certain". It is these cases we want to make more precise, in the sense of making explicit the hidden supporting inferences that are at play in many of the RTE examples. In a way, what we are aiming at is a methodology for constructing entailment datasets in the style of RTE that involves a more precise definition of entailment and further records any missing/hidden premises used in justifying (or rejecting) an entailment pattern.
In contrast to the FraCaS test suite, the RTE challenges use naturally occurring data as premises.
|
What is the difference between the FraCaS test suite and the RTE challenges?
|
In contrast to the FraCaS test suite, the RTE challenges use naturally occurring data as premises.
|
null | false
| null |
Q: What is an emergency use authorization and how is it being used to respond to COVID-19?
|
A: An emergency use authorization (EUA) is a mechanism to facilitate the availability and use of medical countermeasures, including vaccines, tests, and medicines, during public health emergencies, such as the current COVID-19 pandemic. Under an EUA, the FDA may allow the use of unapproved medical products, or unapproved uses of approved medical products in an emergency to diagnose, treat, or prevent serious or life-threatening diseases or conditions when certain statutory criteria have been met, including that there are no adequate, approved, and available alternatives. Taking into consideration input from the FDA, manufacturers decide whether and when to submit an EUA request to the FDA.
|
|
null | false
| null |
What is an option in finance?
|
In finance, an option is a contract which conveys to its owner, the holder, the right, but not the obligation, to buy or sell a specific quantity of an underlying asset or instrument at a specified strike price on or before a specified date, depending on the style of the option. Options are typically acquired by purchase, as a form of compensation, or as part of a complex financial transaction. Thus, they are also a form of asset and have a valuation that may depend on a complex relationship between underlying asset price, time until expiration, market volatility, the risk-free rate of interest, and the strike price of the option.
|
|
null | false
| 21
|
Many reinforcement learning algorithms are designed for relatively small discrete or continuous action spaces and so have trouble scaling. Text-adventure games—or interactive fictions—are simulations in which both an agent's state and action spaces are in textual natural language. An example of a one-turn agent interaction in the popular text-game Zork1 can be seen in Fig. FIGREF1. Text-adventure games provide us with multiple challenges in the form of partial observability, commonsense reasoning, and a combinatorially-sized state-action space. Text-adventure games are structured as long puzzles or quests, interspersed with bottlenecks. The quests can usually be completed through multiple branching paths. However, games can also feature one or more bottlenecks. Bottlenecks are areas that an agent must pass through in order to progress to the next section of the game regardless of what path the agent has taken to complete that section of the quest BIBREF0. In this work, we focus on more effectively exploring this space and surpassing these bottlenecks—building on prior work that focuses on tackling the other problems.
Formally, we use the definition of text-adventure games as seen in BIBREF1 and BIBREF2. These games are partially observable Markov decision processes (POMDPs), represented as a 7-tuple $\langle S, T, A, \Omega, O, R, \gamma \rangle$ representing the set of environment states, the mostly deterministic conditional transition probabilities between states, the vocabulary or words used to compose text commands, the observations returned by the game, the observation conditional probabilities, the reward function, and the discount factor, respectively. For our purposes, understanding the exact state and action spaces we use in this work is critical, and so we define each of these in relative depth.
Action-Space. Solving Zork1, the canonical text-adventure game, requires the generation of actions consisting of up to five words from a relatively modest vocabulary of 697 words recognized by the game's parser. This results in $\mathcal {O}(697^5)={1.64e14}$ possible actions at every step. To facilitate text-adventure game playing, BIBREF2 introduce Jericho, a framework for interacting with text-games. They propose a template-based action space in which the agent first selects a template, consisting of an action verb and preposition, and then fills it in with relevant entities (e.g. [get] ___ [from] ___). Zork1 has 237 templates, each with up to two blanks, yielding a template-action space of size $\mathcal {O}(237 \times 697^2)={1.15e8}$. This space is still far larger than most used by previous approaches applying reinforcement learning to text-based games.
State-Representation. Prior work has shown that knowledge graphs are effective in terms of dealing with the challenges of partial observability (BIBREF3; BIBREF4). A knowledge graph is a set of 3-tuples of the form $\langle subject, relation, object \rangle$. These triples are extracted from the observations using Stanford's Open Information Extraction (OpenIE) BIBREF5. Human-made text-adventure games often contain relatively complex semi-structured information that OpenIE is not designed to parse, and so additional rules are added to ensure that the correct information is parsed. The graph itself is more or less a map of the world, with information about objects' affordances and attributes linked to the rooms in which they are placed. The graph also makes a distinction with respect to items that are in the agent's possession or in their immediate surrounding environment. An example of what the knowledge graph looks like and specific implementation details can be found in Appendix SECREF14.
BIBREF6 introduce the KG-A2C, which uses a knowledge-graph-based state representation to aid in the selection of actions in a combinatorially-sized action space—specifically, they use the knowledge graph to constrain the kinds of entities that can fill the blanks in the template action space. They test their approach on Zork1, showing that the combination of the knowledge graph and template action selection resulted in improvements over existing methods. They note that their approach reaches a score of 40, which corresponds to a bottleneck in Zork1 where the player is eaten by a “grue” (resulting in negative reward) if the player has not first lit a lamp. The lamp must be lit many steps after first being encountered, in a different section of the game; this action is necessary to continue exploring but doesn't immediately produce any positive reward. That is, there is a long-term dependency between actions that is not immediately rewarded, as seen in Figure FIGREF1. Others using artificially constrained action spaces also report an inability to pass through this bottleneck BIBREF7, BIBREF8. Such bottlenecks pose a significant challenge for these methods because the agent does not see the correct action sequence to pass the bottleneck enough times. This is in part due to the fact that for that sequence to be reinforced, the agent needs to reach the next possible reward beyond the bottleneck.
More efficient exploration strategies are required to pass bottlenecks. Our contributions are two-fold. We first introduce a method that detects bottlenecks in text-games using the overall reward gained and the knowledge graph state. This method freezes the policy used to reach the bottleneck and restarts training from there, additionally conducting a backtracking search to ensure that a sub-optimal policy has not been frozen. The second contribution explores how to leverage knowledge graphs to improve existing exploration algorithms for dealing with combinatorial action spaces, such as Go-Explore BIBREF9. We additionally present a comparative ablation study analyzing the performance of these methods on the popular text-game Zork1.
To facilitate text-adventure game playing, Hausknecht et al. (2019) introduce Jericho, a framework for interacting with text-games.
|
What framework do Hausknecht's team introduce to facilitate text-adventure game playing?
|
Jericho.
|
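The action-space sizes quoted in the row above can be checked with a couple of lines of arithmetic:

```python
vocab = 697          # words recognized by Zork1's parser
templates = 237      # action templates, each with up to two blanks

word_actions = vocab ** 5             # five-word commands
template_actions = templates * vocab ** 2

print(f"{word_actions:.2e}")      # 1.64e+14
print(f"{template_actions:.2e}")  # 1.15e+08
```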
null | false
| 209
|
The key aspect of the low dimensionality representation (LDR) is the use of weights to represent the probability of each term to belong to each one of the different language varieties. We assume that the distribution of weights for a given document should be closer to the weights of its corresponding language variety. Formally, the LDR is estimated as follows:
The key aspect of the low dimensionality representation (LDR) is the use of weights to represent the probability of each term to belong to each one of the different language varieties.
|
What is the key aspect of the low dimensionality representation (LDR)?
|
The key aspect of the low dimensionality representation (LDR) is the use of weights to represent the probability of each term to belong to each one of the different language varieties.
|
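The LDR formula itself did not survive extraction in the row above, but the weighting idea can be sketched: estimate, for each term, a probability of belonging to each language variety from training counts. This is a simplified illustration of that idea, not the paper's exact estimator, and the toy corpora are invented.

```python
from collections import Counter

# Hypothetical training corpora, one per language variety
corpora = {
    "es-AR": "che boludo que lindo dia che",
    "es-ES": "vale tio que lindo dia vale",
}

counts = {v: Counter(text.split()) for v, text in corpora.items()}

def term_weights(term):
    """P(variety | term), estimated from relative frequencies."""
    freqs = {v: c[term] / sum(c.values()) for v, c in counts.items()}
    total = sum(freqs.values())
    return {v: f / total for v, f in freqs.items()} if total else {}

print(term_weights("che"))    # skewed toward es-AR
print(term_weights("lindo"))  # roughly uniform across varieties
```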
null | false
| null |
Why is there an Irish pub everywhere you go?
|
There are many historical reasons for Irish pubs being situated in obscure towns, places, and holiday destinations. One reason is the number of Irish people who emigrated from the 1500s right up to the late 1980s for various reasons, be it famine, civil war, or opportunity. This resulted in Irish people settling and developing communities that included Irish pubs, which in turn attracted customers and were successful. In more recent times, a holding company has placed Irish pubs in obscure areas, e.g. on some of the highest mountains in the world or in small villages with small populations, to build on the global phenomenon of 'there is always an Irish pub everywhere you go'.
|
|
null | false
| 170
|
It is well-established that gender bias exists in language – for example, we see evidence of this given the prevalence of sexism in abusive language datasets BIBREF0, BIBREF1. However, these are extreme cases of gender norms in language, and only encompass a small proportion of speakers or texts.
Less studied in NLP is how gender norms manifest in everyday language – do people talk about women and men in different ways? These types of differences are far subtler than abusive language, but they can provide valuable insight into the roots of more extreme acts of discrimination. Subtle differences are difficult to observe because each case on its own could be attributed to circumstance, a passing comment or an accidental word. However, at the level of hundreds of thousands of data points, these patterns, if they do exist, become undeniable. Thus, in this work, we introduce new datasets and methods so that we can study subtle gender associations in language at the large-scale.
Our contributions include:
• Two datasets for studying language and gender, each consisting of over 300K sentences.
• Methods to infer gender-associated words and labeled clusters in any domain.
• Novel findings that demonstrate in both domains that people do talk about women and men in different ways.
Each contribution brings us closer to modeling how gender associations appear in everyday language. In the remainder of the paper, we present related work, our data collection, methods and findings, and human evaluations of our system.
Our contributions include: • Two datasets for studying language and gender, each consisting of over 300K sentences. • Methods to infer gender-associated words and labeled clusters in any domain. • Novel findings that demonstrate in both domains that people do talk about women and men in different ways.
|
What are the contributions of this paper?
|
• Two datasets for studying language and gender, each consisting of over 300K sentences. • Methods to infer gender-associated words and labeled clusters in any domain. • Novel findings that demonstrate in both domains that people do talk about women and men in different ways.
|
null | false
| null |
5 origami styles
|
Book fold
Kite fold
Cupboard fold
Shawl fold
Cushion fold
|
|
1910.03042
| true
| null |
Figure FIGREF3 provides an overview of Gunrock's architecture. We extend the Amazon Conversational Bot Toolkit (CoBot) BIBREF6 which is a flexible event-driven framework. CoBot provides ASR results and natural language processing pipelines through the Alexa Skills Kit (ASK) BIBREF7. Gunrock corrects ASR according to the context (asr) and creates a natural language understanding (NLU) (nlu) module where multiple components analyze the user utterances. A dialog manager (DM) (dm) uses features from NLU to select topic dialog modules and defines an individual dialog flow. Each dialog module leverages several knowledge bases (knowledge). Then a natural language generation (NLG) (nlg) module generates a corresponding response. Finally, we mark up the synthesized responses and return them to the users through text to speech (TTS) (tts). While we provide an overview of the system in the following sections, for detailed system implementation details, please see the technical report BIBREF1.
We extend the Amazon Conversational Bot Toolkit (CoBot) BIBREF6 which is a flexible event-driven framework. CoBot provides ASR results and natural language processing pipelines through the Alexa Skills Kit (ASK) BIBREF7. Gunrock corrects ASR according to the context (asr) and creates a natural language understanding (NLU) (nlu) module where multiple components analyze the user utterances. A dialog manager (DM) (dm) uses features from NLU to select topic dialog modules and defines an individual dialog flow. Each dialog module leverages several knowledge bases (knowledge). Then a natural language generation (NLG) (nlg) module generates a corresponding response.
While we provide an overview of the system in the following sections, for detailed system implementation details, please see the technical report BIBREF1.
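A toy, strictly linear rendering of this ASR -> NLU -> DM -> NLG -> TTS flow is sketched below. Every stage is a trivial stand-in with hypothetical names; the real system is event-driven and built on CoBot/ASK rather than a simple call chain.

```python
# Illustrative linear version of the ASR -> NLU -> DM -> NLG -> TTS flow.

def correct_asr(text, context):           # context-aware ASR correction
    return text.replace("gun rock", "Gunrock")

def analyze(text):                        # NLU: intent/entity analysis
    return {"text": text, "intent": "chat"}

def select_module(nlu):                   # DM: pick a topic dialog module
    return lambda n: f"I heard: {n['text']}"

def generate(content):                    # NLG: realize the response
    return content

def synthesize(response):                 # TTS markup (speech synthesis omitted)
    return f"<speak>{response}</speak>"

def handle_turn(asr_hypothesis, context=None):
    nlu = analyze(correct_asr(asr_hypothesis, context))
    return synthesize(generate(select_module(nlu)(nlu)))

print(handle_turn("hello gun rock"))      # <speak>I heard: hello Gunrock</speak>
```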
|
Do they specify the model they use for Gunrock?
|
No.
|
null | false
| 349
|
Image Captioning, the task of automatic generation of image captions, has attracted attention from researchers in many fields of computer science (computer vision, natural language processing and machine learning) in recent years. This paper contributes to research on the Image Captioning task by extending the dataset to a different language: Vietnamese. So far, no Image Captioning dataset exists for the Vietnamese language, so this is the foremost fundamental step for developing Vietnamese Image Captioning. In this scope, we first build a dataset which contains manually written captions for images from the Microsoft COCO dataset relating to sports played with balls; we call this dataset UIT-ViIC. UIT-ViIC consists of 19,250 Vietnamese captions for 3,850 images. Following that, we evaluate our dataset on deep neural network models and make comparisons with an English dataset and two Vietnamese datasets built by different methods. UIT-ViIC is published on our lab website for research purposes.
In this scope, we first build a dataset which contains manually written captions for images from the Microsoft COCO dataset relating to sports played with balls; we call this dataset UIT-ViIC.
|
What does the dataset UIT-ViIC contain?
|
It is a dataset which contains manually written captions for images from the Microsoft COCO dataset relating to sports played with balls.
|
null | false
| null |
Java is a high-level, class-based, object-oriented programming language that is designed to have as few implementation dependencies as possible. It is a general-purpose programming language intended to let programmers write once, run anywhere (WORA), meaning that compiled Java code can run on all platforms that support Java without the need to recompile. Java applications are typically compiled to bytecode that can run on any Java virtual machine (JVM) regardless of the underlying computer architecture. The syntax of Java is similar to C and C++, but has fewer low-level facilities than either of them. The Java runtime provides dynamic capabilities (such as reflection and runtime code modification) that are typically not available in traditional compiled languages. As of 2019, Java was one of the most popular programming languages in use according to GitHub,[citation not found] particularly for client–server web applications, with a reported 9 million developers.
Java was originally developed by James Gosling at Sun Microsystems. It was released in May 1995 as a core component of Sun Microsystems' Java platform. The original and reference implementation Java compilers, virtual machines, and class libraries were originally released by Sun under proprietary licenses. As of May 2007, in compliance with the specifications of the Java Community Process, Sun had relicensed most of its Java technologies under the GPL-2.0-only license. Oracle offers its own HotSpot Java Virtual Machine, however the official reference implementation is the OpenJDK JVM which is free open-source software and used by most developers and is the default JVM for almost all Linux distributions.
As of March 2023, Java 20 is the latest version, while Java 17, 11 and 8 are the current long-term support (LTS) versions.
|
From the passage provided, extract where Java was developed
|
Java was originally developed by James Gosling at Sun Microsystems.
|
null | false
| null |
What are good native plants for a garden in Ohio?
|
Native plants that will grow well in Ohio include Purple Coneflower, Thin Leaf Mountain Mint, Sawtooth Sunflower, and Little Bluestem Grass. There are quite a few plants that grow well in Ohio and are native to the area. The Missouri Botanical Garden has great resources on those kinds of plants, as does the Lady Bird Johnson Wildflower Center.
|
|
null | false
| 130
|
The input provided by x-stance is two-fold: (A) a natural language question concerning a political issue; (B) a natural language commentary on a specific stance towards the question.
The label to be predicted is either `favor' or `against'. This corresponds to a standard established by BIBREF0. However, x-stance differs from that dataset in that it lacks a `neither' class; all comments refer to either a `favor' or an `against' position. The task posed by x-stance is thus a binary classification task.
As an evaluation metric we report the macro-average of the F1-score for `favor' and the F1-score for `against', similar to BIBREF9. We use this metric mainly to strengthen comparability with the previous benchmarks.
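This metric is straightforward to reproduce; a small sketch using scikit-learn (with made-up toy labels) follows.

```python
from sklearn.metrics import f1_score

y_true = ["favor", "against", "favor", "against", "favor"]
y_pred = ["favor", "favor",   "favor", "against", "against"]

# Macro-average: the unweighted mean of the per-class F1 scores
# for 'favor' and 'against'.
macro_f1 = f1_score(y_true, y_pred, labels=["favor", "against"], average="macro")
print(round(macro_f1, 3))  # 0.583
```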
The input provided by X-stance is two-fold:(A) a natural language question concerning a political issue; (B) a natural language commentary on a specific stance towards the question.
|
What's the two-fold input provided by X-stance?
|
(A) a natural language question concerning a political issue; (B) a natural language commentary on a specific stance towards the question.
|
null | false
| null |
Do the following activities typically happen at night or during the day? A hike in a park, stargazing, going to a beach tidepool, dinner at a restaurant, looking for shooting stars, working a desk job
|
A hike in a park: daytime, stargazing: nighttime, going to a beach tidepool: daytime, dinner at a restaurant: nighttime, looking for shooting stars: nighttime, working a desk job: daytime
|
|
1808.09920
| true
| null |
To each node $v_i$, we associate a continuous annotation $\mathbf{x}_i \in \mathbb{R}^D$ which represents an entity in the context where it was mentioned (details in Section "Node annotations"). We then proceed to connect these mentions i) if they co-occur within the same document (we will refer to this as DOC-BASED edges), ii) if the pair of named entity mentions is identical (MATCH edges—these may connect nodes across and within documents), or iii) if they are in the same coreference chain, as predicted by the external coreference system (COREF edges). Note that MATCH edges when connecting mentions in the same document are mostly included in the set of edges predicted by the coreference system. Having the two types of edges lets us distinguish between less reliable edges provided by the coreference system and more reliable (but also more sparse) edges given by the exact-match heuristic. We treat these three types of connections as three different types of relations. See Figure 2 for an illustration. In addition to that, and to prevent having disconnected graphs, we add a fourth type of relation (COMPLEMENT edge) between any two nodes that are not connected with any of the other relations. We can think of these edges as those in the complement set of the entity graph with respect to a fully connected graph.
We then proceed to connect these mentions i) if they co-occur within the same document (we will refer to this as DOC-BASED edges), ii) if the pair of named entity mentions is identical (MATCH edges—these may connect nodes across and within documents), or iii) if they are in the same coreference chain, as predicted by the external coreference system (COREF edges).
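The sketch below illustrates how the four edge types could be assembled from a list of mentions. The data structures and the exact-match test are simplifying assumptions for illustration, not the authors' implementation.

```python
from itertools import combinations

def build_entity_graph(mentions, coref_chains):
    """mentions: list of dicts with 'id', 'doc', 'surface'.
    coref_chains: list of sets of mention ids predicted by an external
    coreference system. Returns {relation: set of (i, j) edges}."""
    edges = {"DOC-BASED": set(), "MATCH": set(), "COREF": set(), "COMPLEMENT": set()}
    chain_of = {m_id: k for k, chain in enumerate(coref_chains) for m_id in chain}

    for a, b in combinations(mentions, 2):
        pair = (a["id"], b["id"])
        if a["doc"] == b["doc"]:
            edges["DOC-BASED"].add(pair)      # co-occur in the same document
        if a["surface"].lower() == b["surface"].lower():
            edges["MATCH"].add(pair)          # identical mention strings
        if chain_of.get(a["id"]) is not None and chain_of.get(a["id"]) == chain_of.get(b["id"]):
            edges["COREF"].add(pair)          # same predicted coreference chain
        if not any(pair in edges[r] for r in ("DOC-BASED", "MATCH", "COREF")):
            edges["COMPLEMENT"].add(pair)     # keeps the graph connected
    return edges

mentions = [{"id": 0, "doc": 0, "surface": "Denali"},
            {"id": 1, "doc": 0, "surface": "Mount McKinley"},
            {"id": 2, "doc": 1, "surface": "Denali"}]
print(build_entity_graph(mentions, coref_chains=[{0, 1}]))
```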
|
Did they use a relation extraction method to construct the edges in the graph?
|
No.
|
null | false
| null |
What are the benefits of minimally-invasive orthopedic surgery?
|
There are several reasons why an orthopedic surgeon would want to choose a minimally-invasive surgical option as opposed to an open procedure. On the one hand, minimally invasive operations tend to have faster recovery times, less blood loss, and smaller incisions (which is good from an aesthetic perspective as it means smaller scars). On the other hand, minimally invasive options might not provide as much visibility into the surgical site as an open incision would, and depending on the procedure might not promote the type of fusion required. Of course, any surgical decision should be made by a trained and qualified physician and will depend entirely on the specific circumstances presented by the patient.
|
|
null | false
| null |
Intermittent fasting is any of various meal timing schedules that cycle between voluntary fasting (or reduced calorie intake) and non-fasting over a given period. Methods of intermittent fasting include alternate-day fasting, periodic fasting, and daily time-restricted feeding.
|
What is Intermittent Fasting?
|
Intermittent fasting is a time-restricted form of meal scheduling to reduce caloric intake by voluntarily abstaining from eating during defined periods. Various fasting methods include daily feeding during restricted time windows, alternate-day fasting, and periodic fasting.
|
null | false
| null |
"Big Bang": what does it mean?
|
The Big Bang theory, which is how astronomers explain the origin of the universe, holds that the universe as we know it began billions of years ago in a huge explosion. At the time of this explosion, all the matter that exists in the universe today was concentrated at one point.
From the moment of the explosion, matter began to expand and is still expanding today. That is why the Big Bang theory includes the theory of the expanding universe. In addition to expanding, matter began to form more complex structures, from atoms and molecules to the living organisms we know today.
The name "Big Bang" was first used by physicist Fred Hoyle in 1949, who disagreed with this theory, preferring the steady-state model of the universe. The idea of a big bang seemed absurd to Hoyle.
|
|
null | false
| null |
What are Riot Games' most popular games?
|
League of Legends, Valorant, Teamfight Tactics
|
|
null | false
| null |
The potential environmental effects of hydraulic fracturing include air emissions and climate change, high water consumption, groundwater contamination, land use, risk of earthquakes, noise pollution, and various health effects on humans. Air emissions are primarily methane that escapes from wells, along with industrial emissions from equipment used in the extraction process. Modern UK and EU regulation requires zero emissions of methane, a potent greenhouse gas. Escape of methane is a bigger problem in older wells than in ones built under more recent EU legislation.
|
Based on the following passage, what are the environmental consequences of fracking?
|
The environmental effects of fracking include air emissions (primarily methane), high water consumption, groundwater contamination, land use, noise pollution, risk of earthquakes, and health effects on humans.
|
1704.05572
| true
| null |
It's worth mentioning that TupleInf only combines parallel evidence, i.e. each tuple must connect words in the question to the answer choice. For reliable multi-hop reasoning using OpenIE tuples, we can add inter-tuple connections to the support graph search, controlled by a small number of rules over the OpenIE predicates. Learning such rules for the Science domain is an open problem and potential avenue of future work.
For reliable multi-hop reasoning using OpenIE tuples, we can add inter-tuple connections to the support graph search, controlled by a small number of rules over the OpenIE predicates. Learning such rules for the Science domain is an open problem and potential avenue of future work.
|
Can the method answer multi-hop questions?
|
Yes.
|
null | false
| null |
Why is pickleball growing more popular than tennis?
|
Pickleball is easy to learn. Compared to tennis, it requires less strength and skill to play. It also takes up less space to set up a pickleball court. Unlike tennis, it does not cause forearm tightness. It is a good sport for all ages.
|
|
null | false
| null |
Segregate the below numbers as palindrome or not.
1001, 202, 11, 65, 76, 36, 88, 999, 12345
|
Palindrome - 1001, 202, 11, 88, 999
Not a palindrome - 65, 76, 36, 12345
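The split can be reproduced with a short palindrome check, for example:

```python
def is_palindrome(n):
    s = str(n)
    return s == s[::-1]   # a number is a palindrome if it reads the same reversed

numbers = [1001, 202, 11, 65, 76, 36, 88, 999, 12345]
print([n for n in numbers if is_palindrome(n)])      # [1001, 202, 11, 88, 999]
print([n for n in numbers if not is_palindrome(n)])  # [65, 76, 36, 12345]
```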
|
|
null | false
| null |
Which US state produces the most pineapples?
|
Hawaii
|
|
null | false
| 380
|
Since the advent of large structured knowledge bases (KBs) like Freebase BIBREF0 , YAGO BIBREF1 and DBpedia BIBREF2 , answering natural language questions using those structured KBs, also known as KB-based question answering (or KB-QA), is attracting increasing research efforts from both natural language processing and information retrieval communities.
The state-of-the-art methods for this task can be roughly categorized into two streams. The first is based on semantic parsing BIBREF3, BIBREF4, which typically learns a grammar that can parse natural language to a sophisticated meaning representation language. But such sophistication requires a lot of annotated training examples that contain compositional structures, a practically impossible solution for large KBs such as Freebase. Furthermore, mismatches between grammar-predicted structures and the KB structure are also a common problem BIBREF4, BIBREF5, BIBREF6.
On the other hand, instead of building a formal meaning representation, information extraction methods retrieve a set of candidate answers from the KB using relation extraction BIBREF7, BIBREF8, BIBREF9, BIBREF10 or distributed representations BIBREF11, BIBREF12. Designing large training datasets for these methods is relatively easy BIBREF7, BIBREF13, BIBREF14. These methods are often good at producing an answer irrespective of its correctness. However, handling compositional questions that involve multiple entities and relations still remains a challenge. Consider the question what mountain is the highest in north america. Relation extraction methods typically answer with all the mountains in North America because they lack a sophisticated representation for the mathematical function highest. To select the correct answer, one has to retrieve all the heights of the mountains, sort them in descending order, and then pick the first entry. We propose a method based on textual evidence which can answer such questions without explicitly solving the mathematical functions.
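The sort-and-pick step described above is trivial in isolation; the point is that a plain relation extractor never performs it. A toy version, with heights in metres:

```python
# Toy version of the reasoning a relation extractor skips: retrieve the
# heights of the candidate mountains, sort descending, take the top entry.
candidates = {"Denali": 6190, "Mount Logan": 5959, "Pico de Orizaba": 5636}
answer = max(candidates, key=candidates.get)   # equivalent to sort-desc + first
print(answer)                                   # Denali
```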
Knowledge bases like Freebase capture real world facts, and Web resources like Wikipedia provide a large repository of sentences that validate or support these facts. For example, a sentence in Wikipedia says, Denali (also known as Mount McKinley, its former official name) is the highest mountain peak in North America, with a summit elevation of 20,310 feet (6,190 m) above sea level. To answer our example question against a KB using a relation extractor, we can use this sentence as external evidence, filter out wrong answers and pick the correct one.
Using textual evidence not only mitigates representational issues in relation extraction, but also alleviates the data scarcity problem to some extent. Consider the question, who was queen isabella's mother. Answering this question involves predicting two constraints hidden in the word mother. One constraint is that the answer should be the parent of Isabella, and the other is that the answer's gender is female. Such words with multiple latent constraints have been a pain-in-the-neck for both semantic parsing and relation extraction, and require larger training data (this phenomenon is coined as sub-lexical compositionality by wang2015). Most systems are good at triggering the parent constraint, but fail on the other, i.e., that the answer entity should be female. Whereas the textual evidence from Wikipedia, ...her mother was Isabella of Barcelos ..., can act as a further constraint to answer the question correctly.
We present a novel method for question answering which infers on both structured and unstructured resources. Our method consists of two main steps as outlined in sec:overview. In the first step we extract answers for a given question using a structured KB (here Freebase) by jointly performing entity linking and relation extraction (sec:kb-qa). In the next step we validate these answers using an unstructured resource (here Wikipedia) to prune out the wrong answers and select the correct ones (sec:refine). Our evaluation results on a benchmark dataset WebQuestions show that our method outperforms existing state-of-the-art models. Details of our experimental setup and results are presented in sec:experiments. Our code, data and results can be downloaded from https://github.com/syxu828/QuestionAnsweringOverFB.
Our evaluation results on a benchmark dataset WebQuestions show that our method outperforms existing state-of-the-art models.
|
Does their method outperform existing state-of-the-art models?
|
Yes.
|
null | false
| null |
The Organization of the Petroleum Exporting Countries (OPEC, /ˈoʊpɛk/ OH-pek) is an organization enabling the co-operation of leading oil-producing countries, in order to collectively influence the global market and maximise profit. Founded on 14 September 1960 in Baghdad by the first five members (Iran, Iraq, Kuwait, Saudi Arabia, and Venezuela), it has, since 1965, been headquartered in Vienna, Austria, although Austria is not an OPEC member state. As of September 2018, the 13 member countries accounted for an estimated 44 percent of global oil production and 81.5 percent of the world's proven oil reserves, giving OPEC a major influence on global oil prices that were previously determined by the so-called "Seven Sisters" grouping of multinational oil companies.
The formation of OPEC marked a turning point toward national sovereignty over natural resources, and OPEC decisions have come to play a prominent role in the global oil market and international relations. The effect can be particularly strong when wars or civil disorders lead to extended interruptions in supply. In the 1970s, restrictions in oil production led to a dramatic rise in oil prices and in the revenue and wealth of OPEC, with long-lasting and far-reaching consequences for the global economy. In the 1980s, OPEC began setting production targets for its member nations; generally, when the targets are reduced, oil prices increase. This has occurred most recently from the organization's 2008 and 2016 decisions to trim oversupply.
|
From the passage below, extract the first five member countries of OPEC
|
The first five members of the Organization of the Petroleum Exporting Countries (OPEC) were Iran, Iraq, Kuwait, Saudi Arabia and Venezuela. As of September 2018, OPEC officially has 13 member countries.
|
null | false
| null |
Zigalga National Park (Russian: Национальный парк «Зигальга») is located on the high Zigalga Ridge of the Southern Ural Mountains in Russia, on the transition between Europe and Siberia. Much of the territory is untouched by human activity and so supports Ice Age relict floral communities through all altitude zones - pine and birch forest, dark coniferous taiga, alpine meadows and mountain tundra. The park was officially created in 2019. The park is located in the Katav-Ivanovsky District of Chelyabinsk Oblast.
Topography
Zigalga National Park connects two large protected areas on the main ridge of the Southern Urals, with Zyuratkul National Park to the northeast and the South Ural Nature Reserve to the southwest. The park includes the headwaters of the Yuryuzan River and the Kutkurka River. The highest point is Mount Poperechnaya (1,387 metres (4,551 ft)).
|
What are common florals found in Zigalga National Park?
|
Zigalga National Park has the majority of its territory untouched by human activity and includes pine and birch forest, dark coniferous taiga, alpine meadows and mountain tundra.
|
null | false
| 185
|
In this work we focus on transferring supervision signals of natural language generation (NLG) tasks between multiple languages. We propose to pretrain the encoder and the decoder of a sequence-to-sequence model under both monolingual and cross-lingual settings. The pre-training objective encourages the model to represent different languages in the shared space, so that we can conduct zero-shot cross-lingual transfer. After the pre-training procedure, we use monolingual data to fine-tune the pre-trained model on downstream NLG tasks. Then the sequence-to-sequence model trained in a single language can be directly evaluated beyond that language (i.e., accepting multi-lingual input and producing multi-lingual output). Experimental results on question generation and abstractive summarization show that our model outperforms the machine-translation-based pipeline methods for zero-shot cross-lingual generation. Moreover, cross-lingual transfer improves NLG performance of low-resource languages by leveraging rich-resource language data. Our implementation and data are available at https://github.com/CZWin32768/xnlg.
In this work we focus on transferring supervision signals of natural language generation (NLG) tasks between multiple languages. We propose to pretrain the encoder and the decoder of a sequence-to-sequence model under both monolingual and cross-lingual settings.
|
Under what settings do they propose to pre-train the encoder and the decoder of a sequence-to-sequence model?
|
Monolingual and cross-lingual settings.
|
null | false
| null |
John Clark Dore (March 22, 1822 – December 14, 1900) was an American educator and politician from New Hampshire. A graduate of Dartmouth College, Dore briefly taught in Boston, Massachusetts before he was recruited to come to Chicago, Illinois to serve as its first superintendent of school. Dore was also a successful merchant, serving as the vice president of the Chicago Board of Trade. He was elected to the Illinois Senate in 1868, where he helped to establish the Illinois Humane Society.
He assumed the presidency of the Commercial Insurance Company, then was named president of the Chicago Board of Underwriters in 1869. Following the Great Chicago Fire of 1871, he was named the president of the State Savings Institution, resigning in 1873. Dore also presided over the Newsboys' and Bootblacks' Home and donated a building to them in 1884.
|
Was John Clark Dore president of State Savings Institution in 1872?
|
Yes, he was named president of the State Savings Institution in 1871 and did not resign until 1873.
|
null | false
| null |
Are turtles amphibians or reptiles?
|
Turtles are reptiles. Some may assume turtles are amphibians because they live in water, but that is not the deciding factor; many other animals live in water, such as whales or crocodiles. To be an amphibian, an animal has to have a life stage in which it exclusively lives and breathes in water, as frogs do.
|
|
null | false
| null |
What are some sports that are played with a round ball?
|
Cricket, Baseball, Basketball, Volleyball, Tennis, Pickleball, Table Tennis, Soccer, Racquetball, Hockey, Squash, Handball
|
|
null | false
| 162
|
The concept of message passing over graphs has been around for many years BIBREF0, BIBREF1, as well as that of graph neural networks (GNNs) BIBREF2, BIBREF3. However, GNNs have only recently started to be closely investigated, following the advent of deep learning. Some notable examples include BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11, BIBREF12. These approaches are known as spectral. Their similarity with message passing (MP) was observed by BIBREF9 and formalized by BIBREF13 and BIBREF14.
The MP framework is based on the core idea of recursive neighborhood aggregation. That is, at every iteration, the representation of each vertex is updated based on messages received from its neighbors. All spectral GNNs can be described in terms of the MP framework.
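A bare-bones version of one such aggregation step is sketched below, using mean aggregation and a tanh update for concreteness; these are illustrative choices, not the attention-based layers MPAD itself uses.

```python
import numpy as np

def message_passing_step(H, adjacency, W_self, W_neigh):
    """One round of neighborhood aggregation.
    H: (n_nodes, d) vertex representations; adjacency: list of neighbor
    index lists; W_self, W_neigh: (d, d) parameter matrices."""
    H_new = np.empty_like(H)
    for v, neighbors in enumerate(adjacency):
        # Message: mean of the neighbors' current representations.
        msg = H[neighbors].mean(axis=0) if neighbors else np.zeros(H.shape[1])
        # Update: combine the vertex's own state with the incoming message.
        H_new[v] = np.tanh(H[v] @ W_self + msg @ W_neigh)
    return H_new

# Toy graph: 3 nodes in a path 0-1-2, 4-dimensional representations.
rng = np.random.default_rng(0)
H = rng.normal(size=(3, 4))
adjacency = [[1], [0, 2], [1]]
H = message_passing_step(H, adjacency, rng.normal(size=(4, 4)), rng.normal(size=(4, 4)))
```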
GNNs have been applied with great success to bioinformatics and social network data, for node classification, link prediction, and graph classification. However, only a few studies have focused on the application of the MP framework to representation learning on text. This paper proposes one such application. More precisely, we represent documents as word co-occurrence networks, and develop an expressive MP GNN tailored to document understanding, the Message Passing Attention network for Document understanding (MPAD). We also propose several hierarchical variants of MPAD. Evaluation on 10 document classification datasets shows that our architectures learn representations that are competitive with the state-of-the-art. Furthermore, ablation experiments shed light on the impact of various architectural choices.
In what follows, we first provide some background about the MP framework (in sec. SECREF2), thoroughly describe and explain MPAD (sec. SECREF3), present our experimental framework (sec. SECREF4), report and interpret our results (sec. SECREF5), and provide a review of the relevant literature (sec. SECREF6).
More precisely, we represent documents as word co-occurrence networks, and develop an expressive MP GNN tailored to document understanding, the Message Passing Attention network for Document understanding (MPAD).
|
What does MPAD stand for?
|
It refers to the Message Passing Attention network for Document understanding.
|
null | false
| null |
Who is the best actor in the world
|
Shah Rukh Khan
Tom Cruise
Keanu Reeves
|
|
null | false
| null |
In Casablanca what is the name of the nightclub
|
Rick's
|
|
1710.09589
| true
| null |
The data stems from a joint ADAPT-Microsoft project. An overview of the provided dataset is given in Table TABREF16. Notice that the available amount of data differs per language.
An overview of the provided dataset is given in Table TABREF16. Notice that the available amount of data differs per language.
|
is the dataset balanced across the four languages?
|
No.
|
1904.05527
| false
| null |
The main set of experiments uses a Linear Support Vector Machine (Joachims, 1998) to classify dialects using CxG features. Parameters are tuned using the development data. Given the generally robust performance of SVMs in the literature relative to other similar classifiers on variation tasks (cf. Dunn et al., 2016), we forego a systematic evaluation of classifiers.
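A sketch of this kind of setup with scikit-learn is shown below. The CxG feature extraction is specific to the paper, so a generic bag-of-ngrams vectorizer stands in for it here.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Stand-in for the CxG feature extractor: a plain bag-of-ngrams vectorizer.
clf = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LinearSVC(C=1.0))

texts = ["the lift is broken", "the elevator is broken"]
dialects = ["en-GB", "en-US"]
clf.fit(texts, dialects)
print(clf.predict(["take the lift"]))   # likely ['en-GB'] on this toy data
```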
This paper has used data-driven language mapping to select national dialects of English to be included in a global dialect identification model. The main experiments have focused on a dynamic syntactic feature set, showing that it is possible to predict dialect membership within-domain with only a small loss of performance against lexical models. This work raises two remaining problems:
The main set of experiments uses a Linear Support Vector Machine (Joachims, 1998) to classify dialects using CxG features.
This paper has used data-driven language mapping to select national dialects of English to be included in a global dialect identification model.
|
What do the models that they compare predict?
|
The answers are shown as follows:
* national dialects of English
|
null | false
| null |
What are the most common languages in the world?
|
English, Spanish, Chinese, Arabic, Hindi
|
|
null | false
| null |
How many triangles can be formed with 6 matchsticks of equal size without breaking or overlapping them?
|
Four triangles, if you arrange them in three dimensions like a pyramid (a tetrahedron).
|
|
null | false
| null |
What are some titles from George Orwell?
|
Nineteen Eighty-Four, Down and Out in Paris and London, Animal Farm, Burmese Days
|
|
1612.05270
| false
| null |
FLOAT SELECTED: Table 3: Datasets details from each competition tested in this work
FLOAT SELECTED: Table 3: Datasets details from each competition tested in this work
|
How many texts/datapoints are in the SemEval, TASS and SENTIPOLC datasets?
|
Total number of annotated data:
Semeval'15: 10712
Semeval'16: 28632
Tass'15: 69000
Sentipol'14: 6428
|
null | false
| null |
What is heavier? A pound of feathers or a pound of iron?
|
A pound of feathers and a pound of iron both weigh the same.
|
|
1710.07395
| false
| null |
Table TABREF24 shows performance of ensemble models by combining prediction results of the best context-aware logistic regression model and the best context-aware neural network model. We used two strategies in combining prediction results of two types of models. Specifically, the Max Score Ensemble model made the final decisions based on the maximum of two scores assigned by the two separate models; instead, the Average Score Ensemble model used the average score to make final decisions.
We used two strategies in combining prediction results of two types of models. Specifically, the Max Score Ensemble model made the final decisions based on the maximum of two scores assigned by the two separate models; instead, the Average Score Ensemble model used the average score to make final decisions.
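Both combination rules reduce to one line each. A sketch, assuming each model emits a probability-like score in [0, 1] and a 0.5 decision threshold:

```python
import numpy as np

lr_scores = np.array([0.9, 0.2, 0.6])   # context-aware logistic regression
nn_scores = np.array([0.7, 0.4, 0.3])   # context-aware neural network
threshold = 0.5

# Max Score Ensemble: decide on the larger of the two model scores.
max_ensemble = (np.maximum(lr_scores, nn_scores) >= threshold).astype(int)
# Average Score Ensemble: decide on the mean of the two model scores.
avg_ensemble = ((lr_scores + nn_scores) / 2 >= threshold).astype(int)
print(max_ensemble, avg_ensemble)        # [1 0 1] [1 0 0]
```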
|
How do they combine the models?
|
The answers are shown as follows:
* maximum of two scores assigned by the two separate models
* average score
|
1906.10551
| true
| null |
A serious challenge in the field of AV is the lack of publicly available (and suitable) corpora, which are required to train and evaluate AV methods. Among the few publicly available corpora are those that were released by the organizers of the well-known PAN-AV competitions BIBREF11, BIBREF5, BIBREF12. In regard to our experiments, however, we cannot use these corpora, due to the absence of relevant meta-data such as the precise time spans in which the documents were written as well as the topic category of the texts. Therefore, we decided to compile our own corpora based on English documents, which we crawled from different publicly accessible sources. In what follows, we describe our three constructed corpora, which are listed together with their statistics in Table TABREF23. Note that all corpora are balanced such that verification cases with matching (Y) and non-matching (N) authorships are evenly distributed.
Therefore, we decided to compile our own corpora based on English documents, which we crawled from different publicly accessible sources.
|
Do they report results only on English data?
|
Yes.
|
null | false
| null |
Germany is the seventh-largest country in Europe; bordering Denmark to the north, Poland and the Czech Republic to the east, Austria to the southeast, and Switzerland to the south-southwest. France, Luxembourg and Belgium are situated to the west, with the Netherlands to the northwest. Germany is also bordered by the North Sea and, at the north-northeast, by the Baltic Sea. German territory covers 357,022 km2 (137,847 sq mi), consisting of 348,672 km2 (134,623 sq mi) of land and 8,350 km2 (3,224 sq mi) of water.
Elevation ranges from the mountains of the Alps (highest point: the Zugspitze at 2,963 metres or 9,721 feet) in the south to the shores of the North Sea (Nordsee) in the northwest and the Baltic Sea (Ostsee) in the northeast. The forested uplands of central Germany and the lowlands of northern Germany (lowest point: in the municipality Neuendorf-Sachsenbande, Wilstermarsch at 3.54 metres or 11.6 feet below sea level) are traversed by such major rivers as the Rhine, Danube and Elbe. Significant natural resources include iron ore, coal, potash, timber, lignite, uranium, copper, natural gas, salt, and nickel.
|
Extract a list of all countries sharing a border with Germany from this passage.
|
Following countries share a border with Germany:
- Denmark
- Poland
- Czech Republic
- Austria
- Switzerland
- France
- Luxembourg
- Belgium
- Netherlands
|
2001.06888
| false
| null |
FLOAT SELECTED: Table 3: Evaluation results of different approaches compared to ours
FLOAT SELECTED: Table 3: Evaluation results of different approaches compared to ours
|
What are the baseline state of the art models?
|
Stanford NER, BiLSTM+CRF, LSTM+CNN+CRF, T-NER and BiLSTM+CNN+Co-Attention
|
null | false
| null |
What are some good items to keep in your pantry if you want to cook Asian food?
|
Pantry items that are commonly used in eastern Asian cuisine are Soy Sauce, Fish Sauce, Oyster Sauce, rice vinegar, sesame oil, rice cooking wine (mirin), rice, and MSG.
|
|
null | false
| 225
|
Building a machine translation (MT) system requires lots of bilingual data. Neural MT models BIBREF0, which have become the current standard, are even more difficult to train without huge bilingual supervision BIBREF1. However, bilingual resources are still limited to some of the selected language pairs—mostly from or to English.
A workaround for zero-resource language pairs is translating via an intermediate (pivot) language. To do so, we need to collect parallel data and train MT models for source-to-pivot and pivot-to-target individually; this takes double the effort and makes decoding twice as slow.
Unsupervised learning is another alternative, where we can train an MT system with only monolingual corpora. Decipherment methods BIBREF2 , BIBREF3 are the first work in this direction, but they often suffer from a huge latent hypothesis space BIBREF4 .
Recent work by unmt-artetxe and unmt-facebook trains sequence-to-sequence MT models for both translation directions together in an unsupervised way. They perform back-translation BIBREF5 back and forth for every iteration or batch, which takes an immensely long time and requires careful tuning of hyperparameters for massive monolingual data.
Here we suggest rather simple methods to build an unsupervised MT system quickly, based on word translation using cross-lingual word embeddings. The contributions of this paper are:
• The proposed models can be efficiently trained with off-the-shelf software with little or no changes in the implementation, using only monolingual data.
• The provided analyses help for better learning of cross-lingual word embeddings for translation purposes.
• Altogether, our unsupervised MT system outperforms the sequence-to-sequence neural models even without training signals from the opposite translation direction, i.e. via back-translation.
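The core operation behind this approach, word translation by nearest-neighbor search in a shared embedding space, is sketched below with toy vectors; real systems first map pretrained monolingual embeddings into a common cross-lingual space.

```python
import numpy as np

def translate(word, src_emb, tgt_emb):
    """Nearest-neighbor word translation by cosine similarity, assuming
    src_emb/tgt_emb map words to vectors in a shared cross-lingual space."""
    v = src_emb[word]
    v = v / np.linalg.norm(v)
    best, best_sim = None, -1.0
    for tgt_word, u in tgt_emb.items():
        sim = float(v @ (u / np.linalg.norm(u)))
        if sim > best_sim:
            best, best_sim = tgt_word, sim
    return best

# Toy shared space: 'cat' and 'Katze' were mapped close together.
src = {"cat": np.array([0.9, 0.1])}
tgt = {"Katze": np.array([0.85, 0.15]), "Hund": np.array([0.1, 0.9])}
print(translate("cat", src, tgt))   # Katze
```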
The proposed models can be efficiently trained with off-the-shelf software with little or no changes in the implementation, using only monolingual data. The provided analyses help for better learning of cross-lingual word embeddings for translation purposes.
|
Do the provided analyses help for better learning of cross-lingual word embeddings for translation purposes?
|
Yes.
|
1911.09845
| false
| null |
Three annotators from a commercial annotation company are recruited to conduct our human evaluation. Responses from different models are shuffled for labeling. 300 test queries are randomly selected out, and annotators are asked to independently score the results of these queries with different points in terms of their quality: (1) Good (3 points): The response is grammatical, semantically relevant to the query, and more importantly informative and interesting; (2) Acceptable (2 points): The response is grammatical, semantically relevant to the query, but too trivial or generic (e.g.,“我不知道(I don't know)", “我也是(Me too)”, “我喜欢(I like it)" etc.); (3) Failed (1 point): The response has grammar mistakes or irrelevant to the query.
300 test queries are randomly selected out, and annotators are asked to independently score the results of these queries with different points in terms of their quality: (1) Good (3 points): The response is grammatical, semantically relevant to the query, and more importantly informative and interesting; (2) Acceptable (2 points): The response is grammatical, semantically relevant to the query, but too trivial or generic (e.g.,“我不知道(I don't know)", “我也是(Me too)”, “我喜欢(I like it)" etc.); (3) Failed (1 point): The response has grammar mistakes or irrelevant to the query.
|
How is human evaluation performed, what were the criteria?
|
The answers are shown as follows:
* (1) Good (3 points): The response is grammatical, semantically relevant to the query, and more importantly informative and interesting
* (2) Acceptable (2 points): The response is grammatical, semantically relevant to the query, but too trivial or generic
* (3) Failed (1 point): The response has grammar mistakes or irrelevant to the query
|
null | false
| null |
What is cross-country skiing?
|
Cross-country (or "nordic") skiing, as opposed to downhill skiing, refer to the winter snow sport where the participant travels across relatively flat ground without the assistance of a chairlift or other machine aid in getting uphill. There are three main categories of cross-country ski available: classic, skate and touring, all of which come with specialized types of skies. Classic cross-country skiing involves a striding technique in which the skier keeps their skis parallel to each other and is best done in a narrow, groomed track. Skate skiing is considered more challenging to learn and involves a skiing technique that resembles the motion of ice skating and roller blading. Skate skiers also typically use a groomed trail but it is wider and smoother than the narrow grooves for classic skiing. Touring skis allows skiers to most easily ski off the trails on top of snow that has not been groomed.
|
|
null | false
| null |
The liberum veto (Latin for "free veto") was a parliamentary device in the Polish–Lithuanian Commonwealth. It was a form of unanimity voting rule that allowed any member of the Sejm (legislature) to force an immediate end to the current session and to nullify any legislation that had already been passed at the session by shouting either Sisto activitatem! (Latin: "I stop the activity!") or Nie pozwalam! (Polish: "I do not allow!"). The rule was in place from the mid-17th century to the late 18th century in the Sejm's parliamentary deliberations. It was based on the premise that since all Polish noblemen were equal, every measure that came before the Sejm had to be passed unanimously. The liberum veto was a key part of the political system of the Commonwealth, strengthening democratic elements and checking royal power and went against the European-wide trend of having a strong executive (absolute monarchy).
Many historians hold that the liberum veto was a major cause of the deterioration of the Commonwealth political system, particularly in the 18th century, when foreign powers bribed Sejm members to paralyze its proceedings, and the Commonwealth's eventual destruction in the partitions of Poland and foreign occupation, dominance and manipulation of Poland. Piotr Stefan Wandycz wrote that the "liberum veto had become the sinister symbol of old Polish anarchy". In the period of 1573–1763, about 150 sejms were held, about a third failing to pass any legislation, mostly because of the liberum veto. The expression Polish parliament in many European languages originated from the apparent paralysis.
|
How was Liberum Veto mis-used?
|
With the liberum veto, legislation could only be passed with unanimous consent, so a single member of parliament could stop any legislation. Adversaries would often influence or bribe just one member of Parliament to derail a piece of legislation.
|
null | false
| null |
Uście Gorlickie [ˈuɕt͡ɕɛ ɡɔrˈlʲit͡skʲɛ] (Ukrainian: Устя Руське, Ustia Rus’ke) is a village in Gorlice County, Lesser Poland Voivodeship, in southern Poland, close to the border with Slovakia. It is the seat of the gmina (administrative district) called Gmina Uście Gorlickie. It lies approximately 15 kilometres (9 mi) south of Gorlice and 106 km (66 mi) south-east of the regional capital Kraków.
The village has a population of 1,100.
|
What is Uście Gorlickie?
|
Uście Gorlickie is a small village of 1,100 people in southern Poland, close to the border with Slovakia. It is the seat of the administrative district (gmina) called Gmina Uście Gorlickie. The village lies 66 miles southeast of the regional capital Kraków.
|
1909.00754
| false
| null |
We first test our model on the single domain dataset, WoZ2.0 BIBREF19. It consists of 1,200 dialogues from the restaurant reservation domain with three pre-defined slots: food, price range, and area. Since the name slot rarely occurs in the dataset, it is not included in our experiments, following previous literature BIBREF3, BIBREF20. Our model is also tested on the multi-domain dataset, MultiWoZ BIBREF9. It has a more complex ontology with 7 domains and 25 predefined slots. Since the combined slot-value pairs representation of the belief states has to be applied for the model with $O(n)$ ITC, the total number of slots is 35. The statistics of these two datasets are shown in Table 2.
We first test our model on the single domain dataset, WoZ2.0 BIBREF19. It consists of 1,200 dialogues from the restaurant reservation domain with three pre-defined slots: food, price range, and area. Since the name slot rarely occurs in the dataset, it is not included in our experiments, following previous literature BIBREF3, BIBREF20. Our model is also tested on the multi-domain dataset, MultiWoZ BIBREF9. It has a more complex ontology with 7 domains and 25 predefined slots. Since the combined slot-value pairs representation of the belief states has to be applied for the model with $O(n)$ ITC, the total number of slots is 35.
|
Which datasets are used to evaluate performance?
|
The answers are shown as follows:
* the single domain dataset, WoZ2.0
* the multi-domain dataset, MultiWoZ
|
null | false
| null |
Who is the author of the Sound and the Fury
|
The Sound and the Fury is written by William Faulkner
|
|
null | false
| 295
|
Out of nearly 7000 languages spoken worldwide, current speech (ASR, TTS, voice search, etc.) technologies barely address 200 of them. Broadening ASR technologies to ideally all possible languages is a challenge with very high stakes in many areas and is at the heart of several fundamental research problems ranging from psycholinguistics (how humans learn to recognize speech) to pure machine learning (how to extract knowledge from unlabeled data). The present work focuses on the narrow but important problem of unsupervised Acoustic Unit Discovery (AUD). It takes place as the continuation of an ongoing effort to develop a Bayesian model suitable for this task, which stems from the seminal work of BIBREF0, later refined and made scalable in BIBREF1. This model, while rather crude, has shown that it can provide a clustering accurate enough to be used in topic identification of spoken documents in unknown languages BIBREF2. It was also shown that this model can be further improved by incorporating a Bayesian "phonotactic" language model learned jointly with the acoustic units BIBREF3. Finally, following the work in BIBREF4, it has been combined successfully with variational auto-encoders, leading to a model combining the potential of both deep neural networks and Bayesian models BIBREF5. The contribution of this work is threefold:
In this work, we have used two variants of this original model.
|
How many variants do the authors use?
|
Two.
|
null | false
| null |
Primarily a character actor, Harvey began his career on stage and in silent films. He appeared in the Broadway and original film versions of The Awful Truth, then had supporting roles in many Hollywood films, often portraying dignified executives or pompous authority figures.
He was a vacationing businessman whose car is commandeered by fugitive killer Humphrey Bogart in the 1936 crime drama The Petrified Forest and the minister who marries Spencer Tracy's daughter Elizabeth Taylor in the 1950 comedy Father of the Bride and baptizes her baby in its sequel. In the thriller Side Street, Harvey played a married man forced to pay $30,000 in blackmail money after having an affair.
Besides his numerous films, Harvey appeared in 1950s television series such as I Love Lucy, December Bride, My Little Margie, Father Knows Best and The George Burns and Gracie Allen Show before his death from a coronary thrombosis in 1955.
|
Extract from the text the films that the actor Paul Harvey participated in. Separate them with a comma.
|
The Awful Truth, The Petrified Forest, Father of the Bride, Side Street.
|
null | false
| null |
Pilates is a type of mind-body exercise developed in the early 20th century by German physical trainer Joseph Pilates, after whom it was named. Pilates called his method "Contrology". It is practiced worldwide, especially in countries such as Australia, Canada, South Korea, the United States and the United Kingdom. As of 2005, approximately 11 million people were practicing the discipline regularly, and there were 14,000 instructors in the US. Pilates was developed by Joseph Pilates from Mönchengladbach, Germany. His father was a gymnast and his mother a naturopath.
|
Who developed Pilates?
|
Pilates was developed by Joseph Pilates in the early 20th century; he was from Mönchengladbach, Germany.
|
null | false
| null |
Identify the bird from the list: Black mamba, Black grouse, Black magic
|
Black grouse
|
|
null | false
| null |
What's the different between a plantain and a banana?
|
Plantains are a type of banana; bananas are a family of edible fruits from the genus Musa. In the West, "bananas" refers to the sweet, yellow type. Plantains are a starchy variety that is typically cooked.
|
|
null | false
| null |
Magnesium is a chemical element with the symbol Mg and atomic number 12. It is a shiny gray metal having a low density, low melting point and high chemical reactivity. Like the other alkaline earth metals (group 2 of the periodic table) it occurs naturally only in combination with other elements and it almost always has an oxidation state of +2. It reacts readily with air to form a thin passivation coating of magnesium oxide that inhibits further corrosion of the metal. The free metal burns with a brilliant-white light. The metal is obtained mainly by electrolysis of magnesium salts obtained from brine. It is less dense than aluminium and is used primarily as a component in strong and lightweight alloys that contain aluminium.
|
Given this paragraph about magnesium, what is the element symbol of magnesium?
|
Mg is the element symbol of magnesium.
|
null | false
| null |
The 2021 Abu Dhabi Grand Prix (officially known as the Formula 1 Etihad Airways Abu Dhabi Grand Prix 2021) was a Formula One motor race held on 12 December 2021 at the Yas Marina Circuit in Abu Dhabi, United Arab Emirates. Contested over a distance of 58 laps, the race was the twenty-second and final round of the 2021 Formula One World Championship. The race decided both the Drivers' and Constructors' championships; Max Verstappen and Lewis Hamilton both had 369.5 points coming into the race. Hamilton led most of the race and appeared on course to win what would have been a record eighth World Drivers' Championship, having taken the lead at the start of the race at turn 1 on the first lap. However Verstappen was the eventual winner, overtaking Hamilton on the final lap after a controversial safety car restart in the last moments of the race. The FIA thus conducted an inquiry into the race which led to a restructuring of race control including the replacement of Michael Masi as race director and amendments to the safety car rules. That inquiry subsequently concluded that the race officials had misapplied the then-regulations due to human error, but confirmed the results.
Verstappen's win secured his first Formula One World Drivers' Championship of his career by eight points from Hamilton, and Red Bull Racing's first since 2013. Mercedes won their eighth consecutive Formula One World Constructors' Championship, setting a new record. The Grand Prix was also the final race for 2007 World Champion Kimi Räikkönen; the 42-year-old Finn retired from Formula One after a two-decade career spanning 349 Grands Prix starts.
Race
Start and opening laps
The race started at 17:00 local time on Sunday 12 December. Lewis Hamilton immediately took the lead from Max Verstappen on the race start, prompting Verstappen to attempt to regain his position at the turn 6 chicane. Verstappen's trajectory forced Hamilton off the track, and the Briton re-entered slightly further ahead of the Dutch driver than he was before the corner. Arguing that he should have ceded the position to Verstappen, Red Bull Racing protested the racing line taken by Hamilton, and were told by radio that Hamilton had ultimately given back any advantage gained. The incident was referred to the stewards, which concluded that no further investigation was necessary. Hamilton then utilised the durability of his medium-compound tyres to extend his lead over Verstappen, whose soft-compound tyres were suffering from greater degradation.
Pit-stops and virtual safety car
Verstappen made his stop at the end of lap 13 with Hamilton following suit one lap later, both opting for a set of the hardest tyres. The lead drivers pitting elevated Sergio Pérez to first, with the Mexican driver being informed his strategy was to hold up Hamilton to allow his teammate Verstappen to catch up. Hamilton caught up with Pérez on lap 20; a spirited defence from the Red Bull second driver allowed Verstappen to significantly close the gap from about 11 seconds after the pits to 1.3 second. Verstappen was unable to take advantage of it, with Hamilton's superior pace extending the gap to four seconds by the midpoint of the race.
Final laps
On lap 53, a crash at turn 14 for Nicholas Latifi, who was fighting for position with Haas' Mick Schumacher and had dirty tyres after going off circuit at turn 9, brought out the safety car. Hamilton again stayed out without pitting because he would have lost track position had the safety car period not ended, while Verstappen pitted behind him for a fresh set of the softest available tyres. Pérez retired under the safety car due to oil pressure. After Verstappen's pit stop, he retained second, but with five lapped cars (those of Lando Norris, Fernando Alonso, Esteban Ocon, Charles Leclerc, and Sebastian Vettel) between himself and Hamilton (in first). As the debris from Latifi's crash was being cleared by the race marshals, the lapped drivers were initially informed that they would not be permitted to overtake. On lap 57 Masi gave the direction that only the five cars between Hamilton and Verstappen were to unlap themselves.
Immediately after Vettel passed the safety car to join the lead lap, race control announced the safety car would enter the pits at the end of the lap to allow for a final lap of green-flag racing, leading to angry remonstrations from Mercedes team principal Toto Wolff. On the final lap, Verstappen used his fresh soft tyres to pass Hamilton into turn 5 to take the lead of the race. He held off counter-attacks from Hamilton to win the race and his first World Drivers' Championship, with Hamilton in second and Ferrari driver Carlos Sainz Jr. in third. AlphaTauri driver Yuki Tsunoda finished in a career-best fourth place, after overtaking Bottas. Mercedes gained enough points to win their eighth consecutive Constructors' title, extending their own record. Over the radio, Wolff appealed to Masi to reinstate the order of the penultimate lap, to which Masi gave the reply: "Toto, it's called a motor race, ok? We went car racing."
Safety car controversy
Michael Masi was involved in a safety car procedure controversy during the last laps of the race. On lap 56 of the race, during which Hamilton was leading, Masi allowed only the five cars directly in between championship contenders Hamilton and Verstappen to unlap themselves before the restart. The race restarted, and on the final lap of the race, Verstappen overtook Hamilton and won the 2021 World Drivers' Championship.
Mercedes' post-race protests
Mercedes protested the race result, alleging that Verstappen had overtaken Hamilton during the safety car and that Masi violated safety car procedure by allowing the cars directly in front of Verstappen to unlap themselves, and no others, and that according to regulations the restart should have occurred the following lap, which would have resulted in Hamilton's victory. While the first issue was dismissed because Verstappen was not ahead at the end of the safety car period, the second issue was more contentious, with Mercedes retaining legal counsel for the protest. Mercedes argued that if a message for lapped cars to overtake is issued under Article 48.12, then all lapped cars are required to unlap, and that the safety car was required to wait until the end of the following lap to return to the pit lane; if this process was complied with, Mercedes submitted that Hamilton would have won the race and therefore the championship, and requested that the classification be amended as such. Red Bull argued that a) the regulation stated "any cars", not "all cars", were required to overtake under Articles 48.12; b) 48.13, which governs the withdrawal of the safety car, overrides Article 48.12; c) Article 15.3 gives the race director overriding authority over the use of the safety car; and d) the race result would not have changed if all eight lapped cars were permitted to unlap. Masi argued that the principle behind Article 48.12 was to remove cars which "interfered" with drivers racing on the lead lap and that all teams had agreed in principle that all races should end under racing conditions.
The clerk of the course shall work in permanent consultation with the Race Director. The Race Director shall have overriding authority in the following matters and the clerk of the course may give orders in respect of them only with his express agreement: ... e) The use of the safety car.
Mercedes lodged their intention to appeal the ruling. Following the FIA's announcement that it would conduct a "detailed analysis and clarification exercise" of the incident and its admission that the controversy was "tarnishing the image" of the sport, Mercedes decided not to pursue their appeal, and announced the discontinuation of their protest on 16 December, several hours before the submission deadline. Among the reasons for discontinuing, Wolff said that neither he nor Hamilton wished to be awarded the drivers' championship in court, and that they would instead focus their efforts on ensuring an equitable result from the FIA's investigation into the sporting code. Wolff remained heavily critical of Masi's direction of the race; in particular, he criticised Masi's decision in Abu Dhabi as being inconsistent with a decision at the 2020 Eifel Grand Prix, where Masi cited the need to let all cars unlap in order to extend a safety car period, and both he and Hamilton boycotted the FIA Prize Giving Ceremony that evening. Hamilton was later fined for not attending; he asked the FIA to donate the fine as a contribution towards the work the body carries out with underprivileged children. The statement also fuelled speculation that Hamilton could announce a snap retirement after being disillusioned by the race's events, although Wolff indicated both he and his driver would continue in the sport after a period of reflection in the off-season.
Race reactions and commentary
In the immediate post-race interview, Hamilton and his father Anthony congratulated Verstappen and his family on his first World Drivers' Championship.
Several drivers criticised Masi's decision, which was seen as a divergence from standard safety car procedure. In a last-lap radio message to his race engineer Peter Bonnington that was not played on the television feed, Hamilton said the race result had been "manipulated". Russell, who was signed to be Hamilton's teammate at Mercedes in 2022, called race director Michael Masi's decision on the finish "unacceptable". Norris, who was situated at the front of the five cars permitted to unlap, said the decision to go racing again on the last lap was made "for TV", and Alonso, Ocon, Leclerc, and Vettel, the drivers of the other four cars in the pack, also expressed confusion at the sudden instruction to unlap. Daniel Ricciardo, who was situated immediately behind Verstappen during the safety car period and was not permitted to unlap himself, said he was "speechless" at the instruction, especially as it did not allow him to also race the five-car pack on newer soft-compound tyres, and Sainz Jr., who was positioned behind Ricciardo's and Lance Stroll's cars upon the restart and was under pressure from Yuki Tsunoda, Pierre Gasly, and Valtteri Bottas, opined that the decision to resume racing under the circumstances "nearly cost [him his] podium". Speaking at Aston Martin's 2022 car launch in February 2022, Stroll described the Abu Dhabi situation as "ridiculous" and said that rules should be set in stone before a season starts.
|
Based on the information provided, what was the controversy surrounding the 2021 Abu Dhabi Grand Prix?
|
While the circumstances were complex, the prevailing controversy was that the lapped cars were only partially allowed to clear the track: only the five cars between Max Verstappen and Lewis Hamilton were waved through, which cleared Max to attack Lewis on fresher tyres for the last part of the final lap. Lewis and Max each needed to finish ahead of the other in the final race to win the championship, which would have been Max's first-ever title or Lewis' record-breaking 8th. By allowing only the cars between Max and Lewis to unlap, the race director only partially adhered to the rules, and this virtually handed the win to Max and Red Bull. Mercedes immediately protested the decision and the final result, but Max was awarded the championship.
|
null | false
| null |
The National Commission for Aerospace Research and Development (CONIDA) (Spanish: Comisión Nacional de Investigación y Desarrollo Aeroespacial) is a national space agency tasked with government space activities in Peru.
History
CONIDA was founded on 11 June 1974 following Decree Law 20643, designated as an entity subordinate to the then Ministry of Aeronautics, which was later merged into the Ministry of Defense.
Peru joined the Asia-Pacific Space Cooperation Organization on 28 October 2005.
|
What is the history of the National Commission for Aerospace Research and Development?
|
The National Commission for Aerospace Research and Development (CONIDA) is a national space agency tasked with government space activities in Peru.
CONIDA was founded on 11 June 1974 following Decree Law 20643, designated as an entity subordinate to the then Ministry of Aeronautics, which was later merged into the Ministry of Defense.
Peru joined the Asia-Pacific Space Cooperation Organization on 28 October 2005.
|
1809.08510
| false
| null |
To test this hypothesis we first trained UG-WGAN on English, Chinese, and German following the procedure described in Section "UG-WGAN". The embedding size of the table was 300 and the internal LSTM hidden size was 512. A dropout rate of $0.1$ was used, and the model was trained with the ADAM optimization method BIBREF23. Since we are interested in the zero-shot capabilities of our representation, we trained our sentiment analysis model only on the English IMDB Large Movie Review dataset and tested it on the Chinese ChnSentiCorp dataset and the German SB-10K BIBREF24, BIBREF25. We binarize the labels for all the datasets.
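As a concrete illustration of the configuration described above, here is a minimal sketch of such a sentiment classifier, assuming PyTorch; the vocabulary size, class names, and optimizer call are our own assumptions, not taken from the paper's code.

```python
import torch
import torch.nn as nn

# Minimal sketch of the described setup: 300-dim embedding table,
# 512-unit LSTM, dropout 0.1, ADAM optimizer, binarized labels.
# vocab_size and all identifiers are illustrative assumptions.
class SentimentClassifier(nn.Module):
    def __init__(self, vocab_size=50_000, emb_dim=300, hidden=512, p=0.1):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.drop = nn.Dropout(p)
        self.out = nn.Linear(hidden, 2)  # binarized sentiment labels

    def forward(self, token_ids):
        x = self.drop(self.embed(token_ids))
        _, (h, _) = self.lstm(x)           # h: (1, batch, hidden)
        return self.out(self.drop(h[-1]))  # logits over {negative, positive}

model = SentimentClassifier()
optimizer = torch.optim.Adam(model.parameters())  # the ADAM method (BIBREF23)
```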
A natural language inference task consists of two sentences, a premise and a hypothesis, which are either contradictions, entailments, or neutral. Learning an NLI task requires a certain nuanced understanding of language, so it is of interest whether or not UG-WGAN captures the necessary linguistic features. For this task we use the Stanford NLI (sNLI) dataset as our training data in English BIBREF29. To test the zero-shot learning capabilities, we created a Russian sNLI test set by randomly sampling 400 sNLI test samples and having a native Russian speaker translate both premise and hypothesis into Russian. The labels were kept the same.
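To make the task format concrete, an sNLI-style record pairs a premise with a hypothesis under one of three labels; the sentences below are invented for illustration and are not drawn from the dataset.

```python
# Hypothetical sNLI-style example; sentences are invented, not from sNLI.
example = {
    "premise": "A man is playing a guitar on stage.",
    "hypothesis": "A musician is performing.",
    "label": "entailment",  # one of: entailment, contradiction, neutral
}
```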
|
Did they experiment with tasks other than word problems in math?
|
They experimented with sentiment analysis and a natural language inference task.
|
null | false
| null |
What is guitar amp modeling?
|
Guitar amp modeling is a digital technology for reproducing the sound and feel of playing guitar through physical amplifiers. This technology may be embodied in a piece of outboard gear, or as a software plugin for a digital audio workstation. In recent years, guitar amp modeling has advanced to the point of being difficult to discern by ear from the sound of the amplifiers being modeled. Some brands associated with advanced guitar amp modeling include Line 6, Fractal Audio, and Neural DSP.
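As a toy illustration of one building block of amp modeling, the sketch below applies a static tanh waveshaper, a simple stand-in for tube-style soft clipping; real modelers from the brands above layer filtering, dynamics, and often neural networks on top, and the drive value here is an arbitrary assumption.

```python
import numpy as np

def soft_clip(samples: np.ndarray, drive: float = 5.0) -> np.ndarray:
    """Soft-clip a signal normalized to [-1, 1], a crude stand-in for
    the saturation stage of a modeled tube amplifier."""
    return np.tanh(drive * samples) / np.tanh(drive)

# Example: drive a 440 Hz sine wave sampled at 44.1 kHz into saturation.
t = np.linspace(0.0, 1.0, 44_100, endpoint=False)
clean = 0.8 * np.sin(2 * np.pi * 440.0 * t)
distorted = soft_clip(clean)
```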
|
|
1905.12260
| false
| null |
FLOAT SELECTED: Table 1: Crosslingual semantic similarity scores (Spearman’s ρ) across six subtasks for ImageVec (our method) and previous work. Coverage is in brackets. The last column indicates the combined score across all subtasks. Best scores on each subtask are bolded.
|
How important is visual grounding in the learning of the multilingual representations?
|
Performance is significantly degraded without pixel data.
|
null | false
| null |
What is the Mile of Music?
|
The Mile of Music is an annual 4-day music festival held in Appleton, WI. The festival started in 2013, and each year increments an additional 'mile', so 2023 will be celebrating Mile 10. The artists that perform are typically up-and-coming artists who play only original music at the festival. Sets tend to be about an hour long, and they are hosted throughout the approximately 1-mile stretch of downtown Appleton on College Avenue at various coffee shops, bars, restaurants, and parks. A wide range of musical genres have been represented, including rock, country, jazz, folk, hip hop, and circus punk. Some performers come from the local area, such as Kyle Megna and The Monsoons. Others are more regional performers, like Bascom Hill. But many performers come from across the country and have performed on multiple 'Miles', such as Chestnut Grove, The Quiet Hollers, and Jamie Kent. Repeat bands often feel famous during the four days of the festival, leading Jamie Kent to write his song 'Famous (in Appleton)'. In conjunction with the music festival, an annual supported bike ride called Bike to the Beat was launched. Riders can stop at several locations along routes of various distances to get refreshments and hear performers from the festival.
|
|
null | false
| null |
In finance, an exchange rate is the rate at which one currency will be exchanged for another currency. Currencies are most commonly national currencies, but may be sub-national as in the case of Hong Kong or supra-national as in the case of the euro.
|
What is exchange rate?
|
An exchange rate is the rate at which one currency will be exchanged for another. Most countries have their own national currency; however, some currencies are sub-national, such as the Hong Kong dollar, and some are supra-national, such as the euro.
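As a minimal worked example of applying an exchange rate, the sketch below converts euros to Hong Kong dollars; the quoted rate is a made-up figure for illustration, not a live market rate.

```python
def convert(amount: float, rate: float) -> float:
    """Multiply an amount of the base currency by the quoted rate."""
    return amount * rate

eur_to_hkd = 7.85  # hypothetical quote: HKD received per 1 EUR
print(f"100.00 EUR = {convert(100.0, eur_to_hkd):.2f} HKD")  # 785.00 HKD
```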
|
null | false
| null |
Identify which animal is domesticated or wild: Silkmoth, Roman snail
|
The silkmoth is domesticated; the Roman snail is wild.
|
|
null | false
| null |
Why is selling stocks short a high-risk investment technique?
|
Short selling stocks is a high-risk technique because you are betting that a stock will go down in value, and the downside risk is essentially unlimited. When you buy a stock "long", the stock can "only" go to zero, so your maximum loss is what you invested; when you sell a stock "short", there is no limit to how far it can go up, and therefore no limit to how much money you can lose.
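A small worked example of the asymmetry described above: a long position's loss is capped at the purchase price, while a short position's loss grows without bound as the price rises. The prices below are hypothetical.

```python
def long_pnl(entry_price: float, exit_price: float) -> float:
    # Worst case for a long: exit_price = 0, so the loss is -entry_price.
    return exit_price - entry_price

def short_pnl(entry_price: float, exit_price: float) -> float:
    # Loss grows without bound as exit_price rises above entry_price.
    return entry_price - exit_price

print(long_pnl(100.0, 0.0))     # -100.0: maximum possible loss when long
print(short_pnl(100.0, 300.0))  # -200.0: already exceeds it, with no floor
```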
|
|
null | false
| null |
The Hobbit, or There and Back Again is a children's fantasy novel by English author J. R. R. Tolkien. It was published in 1937 to wide critical acclaim, being nominated for the Carnegie Medal and awarded a prize from the New York Herald Tribune for best juvenile fiction. The book is recognized as a classic in children's literature, and is one of the best-selling books of all time with over 100 million copies sold.
|
What is The Hobbit?
|
The Hobbit, or There and Back Again is a children's fantasy novel by English author J. R. R. Tolkien. It was published in 1937 to wide critical acclaim, being nominated for the Carnegie Medal and awarded a prize from the New York Herald Tribune for best juvenile fiction. The book is recognized as a classic in children's literature, and is one of the best-selling books of all time with over 100 million copies sold.
The Hobbit is set in Middle-earth and follows home-loving Bilbo Baggins, the titular hobbit, who joins the wizard Gandalf and thirteen dwarves that make up Thorin Oakenshield's Company, on a quest to reclaim the dwarves' home and treasure from the dragon Smaug. Bilbo's journey takes him from his peaceful rural surroundings into more sinister territory.
The story is told in the form of an episodic quest, and most chapters introduce a specific creature or type of creature of Tolkien's geography. Bilbo gains a new level of maturity, competence, and wisdom by accepting the disreputable, romantic, fey, and adventurous sides of his nature and applying his wits and common sense. The story reaches its climax in the Battle of Five Armies, where many of the characters and creatures from earlier chapters re-emerge to engage in conflict.
Personal growth and forms of heroism are central themes of the story, along with motifs of warfare. These themes have led critics to view Tolkien's own experiences during World War I as instrumental in shaping the story. The author's scholarly knowledge of Germanic philology and interest in mythology and fairy tales are often noted as influences.
The publisher was encouraged by the book's critical and financial success and, therefore, requested a sequel. As Tolkien's work progressed on its successor, The Lord of the Rings, he made retrospective accommodations for it in The Hobbit. These few but significant changes were integrated into the second edition. Further editions followed with minor emendations, including those reflecting Tolkien's changing concept of the world into which Bilbo stumbled.
The work has never been out of print. Its ongoing legacy encompasses many adaptations for stage, screen, radio, board games, and video games. Several of these adaptations have received critical recognition on their own merits.
|