| paper_id (string, nullable) | yes_no (bool) | paper_index (int64, nullable) | evidence (string) | question (string) | answer (string) |
|---|---|---|---|---|---|
null | false
| null |
When I use Delta Lake, will I be able to port my code to other Spark platforms easily?
|
Yes. When you use Delta Lake, you are using open Apache Spark APIs, so you can easily port your code to other Spark platforms. To port your code, replace the delta format with the parquet format.
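For illustration, here is a minimal PySpark sketch of that swap; the paths and the toy DataFrame are hypothetical:

```python
# Minimal sketch of porting Delta Lake code to plain Parquet.
# The paths and table contents here are hypothetical examples.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-to-parquet").getOrCreate()
df = spark.range(100).withColumnRenamed("id", "event_id")

# Delta Lake version (requires the delta-spark package):
df.write.format("delta").mode("overwrite").save("/tmp/events_delta")
events = spark.read.format("delta").load("/tmp/events_delta")

# Portable version: the same open Spark APIs, with "delta" swapped for "parquet".
df.write.format("parquet").mode("overwrite").save("/tmp/events_parquet")
events = spark.read.format("parquet").load("/tmp/events_parquet")
```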
|
|
null | false
| null |
Which is a species of fish? Oscar or Papa
|
Oscar
|
|
1910.11471
| false
| null |
The training parallel corpus had 18,805 lines of annotated code in it. The training model is executed several times with different training parameters. During the final training process, 500 validation examples are used to generate the recurrent neural model, which is 3% of the training data. We run the training for 10 epochs with a batch size of 64. After finishing the training, the accuracy of the generated model using validation data from the source corpus was 74.40% (Fig. FIGREF17).
During the final training process, 500 validation examples are used to generate the recurrent neural model, which is 3% of the training data.
After finishing the training, the accuracy of the generated model using validation data from the source corpus was 74.40%
|
What dataset is used to measure accuracy?
|
The answers are shown as follows:
* validation data
|
null | false
| null |
Antarctica (/ænˈtɑːrktɪkə/) is Earth's southernmost and least-populated continent. Situated almost entirely south of the Antarctic Circle and surrounded by the Southern Ocean (also known as the Antarctic Ocean), it contains the geographic South Pole. Antarctica is the fifth-largest continent, being about 40% larger than Europe, and has an area of 14,200,000 km2 (5,500,000 sq mi). Most of Antarctica is covered by the Antarctic ice sheet, with an average thickness of 1.9 km (1.2 mi).
Antarctica is, on average, the coldest, driest, and windiest of the continents, and it has the highest average elevation. It is mainly a polar desert, with annual precipitation of over 200 mm (8 in) along the coast and far less inland. About 70% of the world's freshwater reserves are frozen in Antarctica, which, if melted, would raise global sea levels by almost 60 metres (200 ft). Antarctica holds the record for the lowest measured temperature on Earth, −89.2 °C (−128.6 °F). The coastal regions can reach temperatures over 10 °C (50 °F) in summer. Native species of animals include mites, nematodes, penguins, seals and tardigrades. Where vegetation occurs, it is mostly in the form of lichen or moss.
The ice shelves of Antarctica were probably first seen in 1820, during a Russian expedition led by Fabian Gottlieb von Bellingshausen and Mikhail Lazarev. The decades that followed saw further exploration in French, American, and British expeditions. The first confirmed landing was by a Norwegian team in 1895. In the early 20th century, there were a few expeditions into the interior of the continent. British explorers were the first to reach the magnetic South Pole in 1909, and the geographic South Pole was first reached in 1911 by Norwegian explorers.
|
What are the native animals of Antarctica?
|
Native species of animals in Antarctica include mites, nematodes, penguins, seals and tardigrades.
|
null | false
| 30
|
We used the attention-based NMT BIBREF2 as a baseline-NMT, the encoder-decoder-reconstructor BIBREF3, and the encoder-decoder-reconstructor that jointly trained forward translation and back-translation without pre-training. The RNN used in the experiments had 512 hidden units, 512 embedding units, a vocabulary size of 30,000, and a batch size of 64. We used Adagrad (initial learning rate 0.01) for optimizing model parameters. We trained our model on a GeForce GTX TITAN X GPU. Note that we set the hyper-parameter INLINEFORM0 on the encoder-decoder-reconstructor the same as in Tu et al. (2016).
We used the attention-based NMT (Bahdanau et al., 2015) as a baseline-NMT, the encoderdecoder-reconstructor (Tu et al., 2017) and the encoder-decoder-reconstructor that jointly trained forward translation and back-translation without pre-training.
|
What did the author use as a baseline?
|
They used the attention-based NMT as a baseline-NMT.
|
null | false
| null |
What are some of the challenges and solutions involved in growing tomatoes in the Pacific Northwest?
|
Growing tomatoes in the Pacific Northwest is very possible, but can present a number of challenges. The three biggest concerns revolve around:
1. A short growing season
2. Cooler summers
3. Wetter than average summers
Here are some solutions you can use to mitigate these challenges.
To deal with the short growing season, you need to get a head start on growing by using starters. Instead of planting a seed in the ground in springtime, you start a seed indoors in late winter, grow it into a small to medium plant, then transplant that "starter" into the ground when the temperature is warm enough. You can buy starters from nurseries, or start your own at home. Nursery starters are easier, but by growing at home you can experiment with rarer varieties and gain experience cultivating a plant from the seedling stage. If you grow your own, it's important to slowly acclimatize the plant to outdoor weather leading up to the transplant date by moving it outside, starting with a couple of hours a day and increasing an hour or two per day over 2-3 weeks. By using starters you essentially extend the growing season by giving your plant a multi-month head start.
Tomatoes mature best in warm to hot climates. The Pacific Northwest has cooler summers than many of the growing regions in the world. There is not a lot we can do to control Mother Nature, but there are one or two methods to help yourself here. Certain varieties of tomatoes do better in cooler summers - try to select these. If you're using starts from a local nursery they can guide you - starts from big national box stores might not be the best for your local Northwest climate. Another technique, if you are experiencing a cooler than normal spring and start of summer, is to build a "hoop house" using a PVC pipe frame and a visqueen tarp. This will let light in and act as a mini-greenhouse, trapping heat around the plant. Keep in mind tomatoes need pollinators like bees to produce fruit, so once the plant begins to flower you'll want to ensure pollination can occur.
Moisture creates issues with tomato plants, as they are very susceptible to various types of fungus that grow on the leaves; blight in particular is common among tomatoes. The wet summers we sometimes have create conditions on the leaves conducive to fungal growth, and rain can also splash fungus onto the leaves from the soil. The easiest way to deal with these two issues is proper pruning. As early as you can, prune any lower sets of leaves that could possibly come into contact with the ground. Don't do this too soon, as the plant needs some leaves to grow. There are many ways to prune a tomato that you can read about, but one common one in wet areas is called single stemming - in this technique you prune "suckers", which are the small offshoots that grow between the main stem and the leaf stems. By pruning the suckers you prevent too much bushy growth, which helps the leaves dry out with airflow after a rainstorm. The last issue related to water is that heavy late-season rains will cause the fruits to swell with water and crack. There's not an easy solution here. If it's towards the end of the season and a rainstorm is coming in, this can be a good time to pick all your fruit; your green tomatoes can be used for green salsas. Some will resort to building a tarp structure above the plants to stop the rain, though this requires a bit more work and ingenuity, and you'll need another system to keep the plants watered.
Overall, growing tomatoes in the Pacific Northwest is totally possible and very rewarding. Hopefully these tips help you to have a fruitful growing season!
|
|
null | false
| null |
"More" is a song written and performed by American contemporary Christian musician Matthew West, from his first album, Happy. The album was released in late 2003, and it was the first and most successful radio single of West's career up to that time. The song stayed at No. 1 on Christian radio for nine consecutive weeks during 2004, breaking the previous record of eight weeks. In 2004, it was the most-played contemporary Christian song of the year.
By request of astronaut Rex J. Walheim, "More" was played during the Day 5 wake-up call aboard STS-135, NASA's last Space Shuttle fligh
|
Given a reference text about "More", when was it the top song on Christian radio and how long did it maintain its spot?
|
The song "More" was the top song on Christian radio for nine weeks in 2004.
|
null | false
| null |
What is Canada known for?
|
Canada is known for being a large country with cold winters and a small population given its large size. Canada has a diverse population and high rates of immigration, making it a very multi-cultural country. Canadians are well known for being polite, kind people. Canada excels in winter sports including ice hockey and figure skating. Poutine is a French Canadian dish that is very popular across the entire country.
|
|
null | false
| null |
What is meditation?
|
Meditation is the practice of focusing the mind on a particular object, such as the breath, to calm the mind and bring clarity. Through various meditation practices, individuals bring attention and awareness to mental activity. It is through this awareness that mental clarity is achieved.
Meditation has been known to reduce stress and anxiety as well as bring a greater sense of ease within individuals.
|
|
null | false
| null |
As of 2017, Mexico was the 6th most visited country in the world and had the 15th highest income from tourism in the world which is also the highest in Latin America. The vast majority of tourists come to Mexico from the United States and Canada followed by Europe and Asia. A smaller number also come from other Latin American countries. In the 2017 Travel and Tourism Competitiveness Report, Mexico was ranked 22nd in the world, which was 3rd in the Americas.
|
Given this question about tourism, what country is the 6th most visited country?
|
Mexico
|
null | false
| null |
What is the best location for a family vacation in western Europe?
|
The best location depends on individual tastes and preferences.
For a vacation in a city, you can’t beat Paris. There is a large range of available cuisine, from inexpensive bistros and cafes, to very expensive, Michelin starred restaurants. It is hard to get a bad meal in Paris. The sightseeing also is impressive, with modern architecture combined with classic structures, and many museums to fill your days. Getting around Paris is very easy, as it is a very walkable city and also has an efficient and relatively inexpensive public transportation system.
For a lake vacation, Lake Como is wonderful, and in particular Bellagio, which is widely regarded as the prettiest village in Lake Como. Getting to Bellagio is fairly easy, as it’s close to Milan. The food is reasonably priced, although it’s generally northern Italian food (with limited options for other ethnic food). Transportation around the lake is convenient with an efficient ferry service on Lake Como, although getting into the individual villages can be a bit challenging given the hills (so expect to walk!). Many villages near Bellagio are also very quaint, and within walking distance – you’ll definitely get good exercise going from village to village.
For a seaside vacation, Dubrovnik is a great choice. It's not so convenient to get there from the United States, especially for those in the western United States since there are no direct flights. However, it's well worth the trip. This walled city is jaw-droppingly beautiful. The people are among the friendliest in Europe. The food is fairly inexpensive and delicious, with a lot of seafood options. And some restaurants are literally perched off of a cliff, so the views are incredible. Croatian dried lemons and oranges are fantastic, and a specialty of the region. There also is beautiful pottery.
|
|
null | false
| null |
Imagine a world where life expectancy is fixed and the same for everyone at birth, yet people can engage in life expectancy "donation" – meaning that someone can extend your life three years longer at the expense of theirs. Talk about a likely scenario in this world that involves manipulated, coerced donation.
|
Those with power and money could lure the underprivileged groups into "donating" their life expectancy in exchange for a better future. Assuming that illnesses are still not eradicated, another scenario involving coerced donation could be those in control of the medications force the patients to cut their life short in order to get treated.
|
|
null | false
| null |
Which of these countries are in South America: Mexico, Peru, Brazil, Chile, Ghana, India.
|
Peru, Brazil, and Chile are in South America. Mexico is in North America, while Ghana is in Africa and India is in Asia.
|
|
null | false
| 505
|
Our implementation is in PyTorch and depends on the VISSL, MMClassification (Contributors, 2020), and Weights & Biases libraries. The code is attached and will be released for publication.
Stage 0 (source) We train residual networks with various depths (including 18, 50, 101), and initializations (ImageNet pretraining or Kaiming init when training from scratch). We optimize cross-entropy loss by SGD with an initial learning rate 0.1, momentum 0.9, weight decay 0.0001, batch size 256. We do not apply label smoothing except for SHOT, as it specifically includes it. We adopt the standard data augmentation pipeline from ImageNet training, such as random cropping, random flipping, and color jitter. We choose ImageNet statistics as the default input mean and variance for all models.
Stage 1 (teacher) We experiment with three types of teacher models: 1) source-only, 2) TENT-IM, 3) SHOT. To optimize this altered loss, we choose SGD with learning rate 0.0001, momentum 0.9, and weight decay 0.0001. In addition to batch normalization, we also update convolutional layers except the final classification layer. As for SHOT, we execute the authors' open-sourced codebase with the same hyper-parameters for various architectures, initializations (from scratch and ImageNet pretrain), and domain shifts (train to val/test splits on VisDA-C).
Stage 2 (student) We experiment with two designs as students: 1) source-only, 2) contrastive learning. Specifically, we leverage some off-the-shelf contrastive learning methods to initialize the target-domain representation, such as MoCo v2, SimSiam, SwAV, and Barlow Twins. Compared to their training recipes on ImageNet, we use more epochs on VisDA-C val/test with the same batch size, learning rate, data augmentation, and model architecture, to make the training procedure longer given the smaller amount of images.
Stage 3 (teacher-student) By default, the whole knowledge distillation consists of three phases, where each phase has 10 epochs to train the student model with the hard pseudo label. The student would be reset to the contrastive model to avoid error accumulation at the beginning of every phase. The teacher would be replaced with the latest student before starting the next phase, so that the quality of pseudo-labeling could be improved gradually. We utilize SGD with an initial learning rate 0.01, momentum 0.9, weight decay 0.0001, batch size 256, and cosine annealing scheduler.
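For concreteness, the phased distillation above can be sketched in PyTorch as follows; the model handles, the dataloader, and the reset logic are illustrative assumptions rather than the released code:

```python
# Hedged sketch of the phased teacher-student distillation described above.
# Models, dataloader, and counts are stand-ins for illustration only.
import copy
import torch
import torch.nn.functional as F

def distill(teacher, contrastive_init, loader, phases=3, epochs_per_phase=10):
    for phase in range(phases):
        # Reset the student to the contrastive model to avoid error accumulation.
        student = copy.deepcopy(contrastive_init)
        student.train()
        teacher.eval()
        optimizer = torch.optim.SGD(student.parameters(), lr=0.01,
                                    momentum=0.9, weight_decay=1e-4)
        scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
            optimizer, T_max=epochs_per_phase)
        for epoch in range(epochs_per_phase):
            for images in loader:
                with torch.no_grad():
                    # Hard pseudo-labels from the current teacher.
                    pseudo = teacher(images).argmax(dim=1)
                loss = F.cross_entropy(student(images), pseudo)
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
            scheduler.step()
        # The latest student becomes the teacher for the next phase.
        teacher = student
    return teacher
```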
Table 2: Classification accuracy of on-target adaptation on VisDA-C (validation) across all categories and averaged over classes (avg.) and images (acc.). R18/50/101S denotes ResNet-18/50/101 randomly initialized from scratch and R18/50/101P denotes ResNet-18/50/101 pretrained on ImageNet.
|
Is there a lack of uniformity in the evaluation?
|
Please see "Evaluation" under our general response. Our experiments report both controlled comparisons and exploratory results where we use our new capacity to change the target architecture. The reported combinations actually favor the compared methods, as we clarify in using the preferred configuration of SHOT for instance. We have also included R50S in Table 2 as requested.
|
null | false
| 121
|
As outlined in Section SECREF2, most previous work BIBREF28, BIBREF12, BIBREF13, BIBREF10, BIBREF29 makes use of flat encoders that do not exploit the data structure. To keep the semantics of each element from the data-structure, we propose a hierarchical encoder which relies on two modules. The first one (module A in Figure FIGREF11) is called the low-level encoder and encodes entities on the basis of their records; the second one (module B), called the high-level encoder, encodes the data-structure on the basis of its underlying entities. In the low-level encoder, the traditional embedding layer is replaced by a record embedding layer as in BIBREF11, BIBREF12, BIBREF10. We present in what follows the record embedding layer and introduce our two hierarchical modules.
The first layer of the network consists in learning two embedding matrices to embed the record keys and values. Keys $k_{i,j}$ are embedded to $\mathbf {k}_{i,j} \in \mathbb {R}^{d}$ and values $v_{i,j}$ to $\mathbf {v}_{i,j} \in \mathbb {R}^{d}$, with $d$ the size of the embedding. As in previous work BIBREF11, BIBREF12, BIBREF10, each record embedding $\mathbf {r}_{i,j}$ is computed by a linear projection on the concatenation $[\mathbf {k}_{i,j}; \mathbf {v}_{i,j}]$ followed by a non-linearity $\phi$:

$$\mathbf {r}_{i,j} = \phi \left( [\mathbf {k}_{i,j}; \mathbf {v}_{i,j}] \, \mathbf {W}_r + \mathbf {b}_r \right)$$

where $\mathbf {W}_r \in \mathbb {R}^{2d \times d}$ and $\mathbf {b}_r \in \mathbb {R}^{d}$ are learnt parameters.
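A minimal PyTorch sketch of this record embedding layer, assuming ReLU for the unspecified non-linearity and hypothetical vocabulary sizes:

```python
# Sketch of the record embedding layer: key and value embeddings are
# concatenated and linearly projected through a non-linearity (ReLU assumed).
import torch
import torch.nn as nn

class RecordEmbedding(nn.Module):
    def __init__(self, n_keys, n_values, d=128):
        super().__init__()
        self.key_emb = nn.Embedding(n_keys, d)
        self.val_emb = nn.Embedding(n_values, d)
        self.proj = nn.Linear(2 * d, d)  # W_r and b_r

    def forward(self, keys, values):
        # keys, values: (batch, n_records) integer ids
        k = self.key_emb(keys)
        v = self.val_emb(values)
        return torch.relu(self.proj(torch.cat([k, v], dim=-1)))

emb = RecordEmbedding(n_keys=50, n_values=1000)
r = emb(torch.randint(50, (2, 7)), torch.randint(1000, (2, 7)))  # (2, 7, 128)
```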
The low-level encoder aims at encoding a collection of records belonging to the same entity while the high-level encoder encodes the whole set of entities. Both the low-level and high-level encoders consider their input elements as unordered. We use the Transformer architecture from BIBREF21. For each encoder, we have the following peculiarities:
the Low-level encoder encodes each entity $e_i$ on the basis of its record embeddings $\mathbf {r}_{i,j}$. Each record embedding $\mathbf {r}_{i,j}$ is compared to the other record embeddings to learn its final hidden representation $\mathbf {h}_{i,j}$. Furthermore, we add a special record [ENT] for each entity, illustrated in Figure FIGREF11 as the last record. Since entities might have a variable number of records, this token makes it possible to aggregate the final hidden record representations $\lbrace \mathbf {h}_{i,j}\rbrace _{j=1}^{J_i}$ into a fixed-size representation vector $\mathbf {h}_{i}$.
the High-level encoder encodes the data-structure on the basis of its entity representations $\mathbf {h}_{i}$. Similarly to the Low-level encoder, the final hidden state $\mathbf {e_i}$ of an entity is computed by comparing each entity representation $\mathbf {h}_{i}$ with the others. The data-structure representation $\mathbf {z}$ is computed as the mean of these entity representations and is used for the decoder initialization.
To keep the semantics of each element from the data-structure, we propose a hierarchical encoder which relies on two modules.
|
Why is the hierarchical encoder proposed rely on two modules?
|
To keep the semantics of each element from the data-structure.
|
null | false
| null |
The title "Pharaoh" is used for those rulers of Ancient Egypt who ruled after the unification of Upper and Lower Egypt by Narmer during the Early Dynastic Period, approximately 3100 BC. However, the specific title was not used to address the kings of Egypt by their contemporaries until the New Kingdom's 18th Dynasty, c. 1400 BC. Along with the title Pharaoh for later rulers, there was an Ancient Egyptian royal titulary used by Egyptian kings which remained relatively constant during the course of Ancient Egyptian history, initially featuring a Horus name, a Sedge and Bee (nswt-bjtj) name and a Two Ladies (nbtj) name, with the additional Golden Horus, nomen and prenomen titles being added successively during later dynasties.
|
Given this paragraph about Pharaohs, when the title used to address the kings of egypt
|
1400 BC
|
null | false
| null |
Telangana (/ˌtɛlənˈɡɑːnə/ (listen); Telugu: [ˈtelaŋɡaːɳa], Urdu: [ˈtɪləŋɡɑːna] (listen)) is a state in India situated on the south-central stretch of the Indian peninsula on the high Deccan Plateau. It is the eleventh-largest state and the twelfth-most populated state in India with a geographical area of 112,077 km2 (43,273 sq mi) and 35,193,978 residents as per 2011 census. On 2 June 2014, the area was separated from the northwestern part of Andhra Pradesh as the newly formed state of Telangana, with Hyderabad as its capital. Its other major cities include Warangal, Nizamabad, Khammam, Karimnagar and Ramagundam. Telangana is bordered by the states of Maharashtra to the north, Chhattisgarh to the northeast, Karnataka to the west, and Andhra Pradesh to the east and south. The terrain of Telangana consists mostly of the Deccan Plateau with dense forests covering an area of 27,292 km2 (10,538 sq mi). As of 2019, the state of Telangana is divided into 33 districts.
|
From the passage provided, extract the major cities in Telangana. Separate them with a comma.
|
Hyderabad, Warangal, Nizamabad, Khammam, Karimnagar, Ramagundam
|
null | false
| null |
Write a paragraph on why visit and stay at Marysville in Victoria Australia.
|
Marysville is located 120 km from Melbourne. Nestled just past the Victorian Yarra Valley, Marysville offers a range of activities for the outdoor enthusiast: mountain biking, bush walking, and, during winter, skiing at Marysville's Lake Mountain, the closest snow resort to the Melbourne CBD. Marysville has some fantastic accommodation options, from caravan parks all the way to high-end houses for multiple families to enjoy time together. It is a magnificent location to get away and feel connected with nature. Marysville also has some amazing places to enjoy a meal. From the Duck Inn Pub to the various other eatery options, Marysville caters for all tastes and budgets.
|
|
null | false
| 138
|
We present our results for the binary classification tasks in Tables TABREF23 – TABREF24 in terms of average precision, average recall and macro average F1 score. The results of the regression tasks are reported in Tables TABREF25 and TABREF29 in terms of the mean absolute error between the predicted and actual scores, as well as the Spearman INLINEFORM0 correlation between the rankings induced by both sets of scores. It can be clearly seen from the results that our proposed method (EGEL-All) can effectively integrate Flickr tags with the available structured information. It outperforms the baselines for all the considered tasks. Furthermore, note that the PPMI-based weighting in EGEL-Tags consistently outperforms GloVe and that both the addition of negative examples and term selection lead to further improvements. The use of term selection leads to particularly substantial improvements for the regression problems.
While our experimental results confirm the usefulness of embeddings for predicting environmental features, this is only consistently the case for the variants that use both the tags and the structured datasets. In particular, comparing BOW-Tags with EGEL-Tags, we sometimes see that the former achieves the best results. While this might seem surprising, it is in accordance with the findings in BIBREF54 , BIBREF38 , among others, where it was also found that bag-of-words representations can sometimes lead to surprisingly effective baselines. Interestingly, we note that in all cases where EGEL-KL(Tags+NS) performs worse than BOW-Tags, we also find that BOW-KL(Tags) performs worse than BOW-Tags. This suggests that for these tasks there is a very large variation in the kind of tags that can inform the prediction model, possibly including e.g. user-specific tags. Some of the information captured by such highly specific but rare tags is likely to be lost in the embedding.
To further analyze the difference in performance between BoW representations and embeddings, Figure TABREF29 compares the performance of the GloVe model with the bag-of-words model for predicting place scenicness, as a function of the number of tag occurrences at the considered locations. What is clearly noticeable in Figure TABREF29 is that GloVe performs better than the bag-of-words model for large corpora and worse for smaller corpora. This issue has been alleviated in our embedding method by the addition of negative examples.
While our experimental results confirm the usefulness of embeddings for predicting environmental features, this is only consistently the case for the variants that use both the tags and the structured datasets. In particular, comparing BOW-Tags with EGEL-Tags, we sometimes see that the former achieves the best results. While this might seem surprising, it is in accordance with the findings in [24,54], among others, where it was also found that bag-of-words representations can sometimes lead to surprisingly effective baselines.
|
Are embeddings useful for predicting environmental features according to their experiment?
|
Yes, but only consistently the case for the variants that use both the tags and the structured datasets.
|
null | false
| null |
What are the strongest mobs in the online game called Minecraft?
|
The strongest Minecraft mobs are Wither, Ender Dragon, and Warden.
|
|
1710.06700
| false
| null |
To make the annotated data publicly available, we selected 70 news articles from Arabic WikiNews site https://ar.wikinews.org/wiki. These articles cover recent news from year 2013 to year 2015 in multiple genres (politics, economics, health, science and technology, sports, arts, and culture.) Articles contain 18,300 words, and they are evenly distributed among these 7 genres with 10 articles per each.
Words are white-space and punctuation separated, and some spelling errors are corrected (1.33% of the total words) to have very clean test cases. Lemmatization is done by an expert Arabic linguist where spelling corrections are marked, and lemmas are provided with full diacritization as shown in Figure FIGREF2.
To make the annotated data publicly available, we selected 70 news articles from Arabic WikiNews site https://ar.wikinews.org/wiki. These articles cover recent news from year 2013 to year 2015 in multiple genres (politics, economics, health, science and technology, sports, arts, and culture.) Articles contain 18,300 words, and they are evenly distributed among these 7 genres with 10 articles per each.
Words are white-space and punctuation separated, and some spelling errors are corrected (1.33% of the total words) to have very clean test cases. Lemmatization is done by an expert Arabic linguist where spelling corrections are marked, and lemmas are provided with full diacritization as shown in Figure FIGREF2.
|
How was the dataset annotated?
|
The answers are shown as follows:
* Lemmatization is done by an expert Arabic linguist where spelling corrections are marked, and lemmas are provided with full diacritization
|
null | false
| null |
What are three popular women's fashion rental services?
|
Rent the Runway, Nuuly, and FashionPass
|
|
null | false
| null |
The European Union (EU) is a political and economic union of 27 member states that are signatories to the founding treaties of the union and thereby share in the privileges and obligations of membership. They have agreed by the treaties to share their own sovereignty through the institutions of the European Union in some, but not all, aspects of government. State governments must agree unanimously in the Council for the union to adopt some policies; for others, collective decisions are made by qualified majority voting. These obligations and sharing of sovereignty within the EU (sometimes referred to as supranational) make it unique among international organisations, as it has established its own legal order which by the provisions of the founding treaties is both legally binding and supreme on all the member states (after a landmark ruling of the ECJ in 1964). A founding principle of the union is the principle of subsidiarity, meaning that decisions are taken collectively if and only if they cannot realistically be taken individually.
Another notable and unique feature of membership are the commissioners of the European Commission, who are appointed by each of the governments of the member states but do not represent their member state but instead work collectively in the interests of all the member states.
In the 1950s, six core states founded the EU's predecessor European Communities (Belgium, France, Italy, Luxembourg, the Netherlands, and West Germany). The remaining states have acceded in subsequent enlargements. To accede, a state must fulfil the economic and political requirements known as the Copenhagen criteria, which require a candidate to have a democratic government and free-market economy together with the corresponding freedoms and institutions, and respect for the rule of law. Enlargement of the Union is also contingent upon the consent of all existing members and the candidate's adoption of the existing body of EU law, known as the acquis communautaire.
The United Kingdom, which had acceded to the EU's predecessor in 1973, ceased to be an EU member state on 31 January 2020. No other member state has withdrawn from the EU and none has been suspended, although some dependent territories or semi-autonomous areas have left.
|
Extract the list of European Union member countries
|
Formed in 1950s with six core states (Belgium, France, Italy, Luxembourg, the Netherlands, and West Germany) the European Union now has 27 members which are:
Austria, Belgium, Bulgaria, Croatia, Republic of Cyprus, Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Ireland, Italy, Latvia, Lithuania, Luxembourg, Malta, Netherlands, Poland, Portugal, Romania, Slovakia, Slovenia, Spain and Sweden
|
null | false
| 163
|
Since sarcasm in text can be expected to induce cognitive load, it is reasonable to believe that it would require more processing time BIBREF14 . Hence, fixation duration normalized over total word count should usually be higher for a sarcastic text than for a non-sarcastic one. We observe this for all participants in our dataset, with the average fixation duration per word for sarcastic texts being at least 1.5 times more than that of non-sarcastic texts. To test the statistical significance, we conduct a two-tailed t-test (assuming unequal variance) to compare the average fixation duration per word for sarcastic and non-sarcastic texts. The hypothesized mean difference is set to 0 and the error tolerance limit ( INLINEFORM0 ) is set to 0.05. The t-test analysis, presented in Table TABREF11 , shows that for all participants, a statistically significant difference exists between the average fixation duration per word for sarcasm (higher average fixation duration) and non-sarcasm (lower average fixation duration). This affirms that the presence of sarcasm affects the duration of fixation on words.
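A brief SciPy sketch of this comparison; the per-word fixation durations are invented numbers, and `equal_var=False` selects the unequal-variance (Welch) two-tailed t-test:

```python
# Sketch of the two-tailed t-test (unequal variance) described above.
# The per-word fixation durations are made-up numbers for illustration.
from scipy import stats

sarcastic = [310.0, 295.5, 342.1, 288.7, 301.9]      # ms per word
non_sarcastic = [190.2, 205.8, 178.4, 214.6, 182.0]  # ms per word

t_stat, p_value = stats.ttest_ind(sarcastic, non_sarcastic, equal_var=False)
if p_value < 0.05:  # error tolerance limit
    print(f"significant difference: t={t_stat:.2f}, p={p_value:.4f}")
```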
It is important to note that longer fixations may also be caused by other linguistic subtleties (such as difficult words, ambiguity, and syntactically complex structures) causing delay in comprehension, or by oculomotor control problems forcing readers to spend time adjusting eye-muscles. So, an elevated average fixation duration per word may not sufficiently indicate the presence of sarcasm. But we would also like to share that, for our dataset, when we considered readability (Flesch readability ease-score BIBREF15), the number of words in a sentence, and the average characters per word along with the sarcasm label as predictors of average fixation duration following a linear mixed effect model BIBREF16, the sarcasm label turned out to be the most significant predictor with a maximum slope. This indicates that average fixation duration per word has a strong connection with the text being sarcastic, at least in our dataset.
We now analyze scanpaths to gain more insights into the sarcasm comprehension process.
Since sarcasm in text can be expected to induce cognitive load, it is reasonable to believe that it would require more processing time (Ivanko and Pexman, 2003). Hence, fixation duration normalized over total word count should usually be higher for a sarcastic text than for a non-sarcastic one.
|
What hypothesis supports the fact that fixation duration normalized over total word count should usually be higher for a sarcastic text than for a non-sarcastic one?
|
Since sarcasm in text can be expected to induce cognitive load, it is reasonable to believe that it would require more processing time.
|
null | false
| 219
|
Human-computer dialog/conversation is one of the most challenging problems in artificial intelligence. Given a user-issued utterance (called a query in this paper), the computer needs to provide a reply to the query. In early years, researchers have developed various domain-oriented dialogue systems, which are typically based on rules or templates BIBREF4 , BIBREF5 , BIBREF6 . Recently, open-domain conversation systems have attracted more and more attention in both academia and industry (e.g., XiaoBing from Microsoft and DuMi from Baidu). Due to high diversity, we can hardly design rules or templates in the open domain. Researchers have proposed information retrieval methods BIBREF7 and modern generative neural networks BIBREF8 , BIBREF9 to either search for a reply from a large conversation corpus or generate a new sentence as the reply.
In open-domain conversations, context information (one or a few previous utterances) is particularly important to language understanding BIBREF1 , BIBREF9 , BIBREF10 , BIBREF11 . As dialogue sentences are usually casual and short, a single utterance (e.g., “Thank you.” in Figure FIGREF2 ) does not convey much meaning, but its previous utterance (“...writing an essay”) provides useful background information of the conversation. Using such context will certainly benefit the conversation system.
However, tracking all previous utterances as the context is unwise. First, commercial chat-bots usually place high demands on efficiency. In a retrieval-based system, for example, performing a standard process of candidate retrieval and re-ranking for each previous utterance may well exceed the time limit (which is very short, e.g., 500ms). Second, we observe that not all sentences in the current conversation session are equally important. The sentence “Want to take a walk?” is irrelevant to the current context, and should not be considered when the computer synthesizes the reply. Therefore, it raises the question of session segmentation in conversation systems.
Document segmentation for general-purpose corpora has been widely studied in NLP. For example, Hearst BIBREF12 proposes the TextTiling approach; she measures the similarity of neighboring sentences based on bag-of-words features, and performs segmentation by thresholding. However, such approaches are not tailored to the dialogue genre and may not be suitable for conversation session segmentation.
In this paper, we address the problem of session segmentation for open-domain conversations. We leverage the classic TextTiling approach, but enhance it with modern embedding-based similarity measures. Compared with traditional bag-of-words features, embeddings map discrete words to real-valued vectors, capturing underlying meanings in a continuous vector space; hence, it is more robust for noisy conversation corpora. Further, we propose a tailored method for word embedding learning. In traditional word embedding learning, the interaction between two words in a query and a reply is weaker than that within an utterance. We propose to combine a query and its corresponding reply as a “virtual sentence,” so that it provides a better way of modeling utterances between two agents.
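A minimal sketch of this embedding-enhanced TextTiling idea, where `embed` is a hypothetical stand-in for the utterance representation learned from "virtual sentences", and the threshold and toy data are assumptions:

```python
# Hedged sketch: segment a session by thresholding the cosine similarity of
# neighboring utterance embeddings, TextTiling-style.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def segment_session(utterances, embed, threshold=0.3):
    """Return indices i where a new segment starts before utterance i."""
    return [i for i in range(1, len(utterances))
            if cosine(embed(utterances[i - 1]), embed(utterances[i])) < threshold]

# Toy usage with random vectors standing in for trained embeddings:
rng = np.random.default_rng(0)
fake_embed = lambda utt: rng.standard_normal(50)
print(segment_session(["hi", "writing an essay", "thank you"], fake_embed))
```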
We propose to combine a query and its corresponding reply as a “virtual sentence,” so that it provides a better way of modeling utterances between two agents.
|
Why do the authors propose to combine a query and its corresponding reply as a "virtual sentence"?
|
To provide a better way of modeling utterances between two agents.
|
null | false
| null |
Classify each of these objects as having color that is red or blue: tomato, sky, ice, fire, cherries.
|
Tomatoes and cherries are red. The sky is blue. The color of ice is much closer to blue than red. Fire can be red or blue but under most circumstances would be closer to red.
|
|
null | false
| null |
Niagara was a federal electoral district in the Canadian province of Ontario, which was represented in the House of Commons of Canada from 1867 to 1883. It is sometimes also considered one of Ontario's historic counties, as it was listed in some post-Confederation census records as a county of residence. Niagara consisted of the Lincoln County townships of Niagara and Grantham, including the towns of Niagara-on-the-Lake and St. Catharines. The electoral district was abolished in 1882 when it was merged into Lincoln and Niagara riding.
|
List all the places and countries mentioned in the paragraph in the format [Place] separated by comma.
|
[Niagara],[Ontario],[Canada],[Lincoln County],[Grantham],[Niagara-on-the-Lake],[St. Catharines]
|
null | false
| null |
Give me a list of popular tourist attractions in Rome
|
Some popular tourist attractions are:
- Colosseum
- Pantheon
- Trevi Fountain
- St. Peter's Basilica
- Roman Forum
- Spanish Steps
|
|
null | false
| null |
Imagine you are someone's shadow for a day. Write about your experiences in the present tense.
|
I'm feeling surprised this morning. I feel a little flatter than usual, and a little lighter. To be honest with you, I can't see myself at all.
WOW! Someone turned the lights on. I can't see! Wait... I'm on the floor. Why am I on the floor? I'm right behind this man, he's just got out of bed. I don't recognise this room.
Ah. I seem to be a shadow. Not what I was hoping for this morning, if I'm being honest. I have a big day at work, and this is going to make things rather more challenging. I'll have to get on with it though, stiff upper lip and all that. I can only hope that this shadow's owner has an interesting plan for the day.
It's looking promising so far. They are putting on hiking boots and other outdoor wear. Certainly better than staying inside and playing video games all day. I'd hardly have anything to do at all. Now we need to hope for some good weather. If we're in Norway, there isn't much light at this time of the year. I'd barely even exist.
Finally, he's opened the curtains. I've struck gold. We're in the Sierra Nevada mountains in Spain, and it's a clear day.
We're outside and we're climbing. This is fun! But scary! I never climbed before in my real body. I wish he would stop sitting down to rest. I don't like being sat on.
The view from the summit, it's finally here. It's amazing. Or at least that is what the man says. As a 2 dimensional shadow, I can't really see much. I'm not tall enough.
|
|
null | false
| null |
Explain Cricket game
|
Cricket is mainly played between 2 teams, each with 11 players, and the distance between the wickets is about 22 yards. Cricket has 3 different formats: One Day International, T20, and Test matches, the longest format, lasting around 5 days. T20 is the shortest format, with 20 overs per side; an over means 6 deliveries bowled by a bowler, which a batter from the other team tries to defend. The team batting second must chase whatever the team batting first scored in order to win the game; otherwise the team bowling second wins. If the batter hits the ball over the boundary line on the full, it is counted as six runs; if it crosses the boundary line along the ground, it is counted as four. Batters are also allowed to run between the wickets to score singles and doubles, and all 11 batters can bat until the side is all out. A bowler can get batters out in different ways: if the ball misses the bat and hits the stumps, it is called bowled; if a fielder takes a catch within the boundary line, it is called a catch out; and if the batters try to take runs between the wickets and the fielding team throws the ball at the wickets before a batter reaches the popping crease, it is called a run out, somewhat similar to baseball. Cricket is the second most popular sport in the world after soccer. India, England, Australia, South Africa, and New Zealand are popular teams, and Virat Kohli is the biggest cricketer of the current generation. The One Day International World Cup happens every 4 years and has most often been won by Australia; the T20 World Cup happens every 2 years.
|
|
null | false
| null |
Give me a bulleted list of things to consider when deciding where to apply to college.
|
The following should be considered when deciding where to apply to college:
1. Academic programs the college offers
2. Cost of attendance
3. Location of the campus
4. Admission requirements and expectations
5. Size of the student population
6. Extracurricular activities the college offers
|
|
null | false
| null |
Cambridge Zero is Cambridge University's response to climate change. Led by Dr Emily Shuckburgh OBE, a climate scientist, mathematician and science communicator, it is an interdisciplinary and collaborative initiative created "to harness the full range of the University’s research and policy expertise, developing solutions that work for our lives, our society and our economy".
Shuckburgh stresses that Cambridge Zero will not work in isolation and that the climate crisis demands a holistic response: "There is a really urgent need for, frankly, an immediate response on a scale that is utterly unprecedented and does require the whole of our economy and the whole of society. It's about all of us contributing in the way we live our lives".
Cambridge Zero was first announced by Cambridge Vice-Chancellor Stephen Toope in his annual address in October 2019.
Electric flight is one of several engineering areas covered but Shuckburgh emphasizes the need for the broadest range of skills to support Cambridge Zero's output. She says, “I started being involved in climate-related research right at the start of it becoming an international political topic, and it’s been a convergence of different interests.”
Cambridge Zero represents the University of Cambridge in the COP26 Universities Network and is a facilitator of COP26's "Visions for a Net Zero Future" initiative.
|
Who leads the Cambridge Zero initiative?
|
The Cambridge Zero initiative is led by Dr Emily Shuckburgh, a climate scientist, mathematician, and science communicator.
|
null | false
| 7
|
Generally speaking, people tend to use more concrete or specific language when they have to describe a person that does not meet their expectations. Beukeboom (2014) lists several linguistic 'tools' that people use to mark individuals who deviate from the norm. I will mention two of them.
Adjectives One well-studied example BIBREF4, BIBREF5 is sexist language, where the sex of a person tends to be mentioned more frequently if their role or occupation is inconsistent with 'traditional' gender roles (e.g. female surgeon, male nurse). Beukeboom also notes that adjectives are used to create “more narrow labels [or subtypes] for individuals who do not fit with general social category expectations” (p. 3). E.g. tough woman makes an exception to the 'rule' that women aren't considered to be tough.
Negation can be used when prior beliefs about a particular social category are violated, e.g. The garbage man was not stupid. See also BIBREF6.
These examples are similar in that the speaker has to put in additional effort to mark the subject for being unusual. But they differ in what we can conclude about the speaker, especially in the context of the Flickr30K data. Negations are much more overtly displaying the annotator's prior beliefs. When one annotator writes that A little boy is eating pie without utensils (image 2659046789), this immediately reveals the annotator's normative beliefs about the world: pie should be eaten with utensils. But when another annotator talks about a girls basketball game (image 8245366095), this cannot be taken as an indication that the annotator is biased about the gender of basketball players; they might just be helpful by providing a detailed description. In section 3 I will discuss how to establish whether or not there is any bias in the data regarding the use of adjectives.
Generally speaking, people tend to use more concrete or specific language when they have to describe a person that does not meet their expectations. Beukeboom (2014) lists several linguistic ‘tools’ that people use to mark individuals who deviate from the norm. I will mention two of them.2 Adjectives One well-studied example (Stahlberg et al., 2007; Romaine, 2001) is sexist language, where the sex of a person tends to be mentioned more frequently if their role or occupation is inconsistent with ‘traditional’ gender roles.**** Negation can be used when prior beliefs about a particular social category are violated,
|
Which two of Beukeboom's list of language "tools" does the author mention?
|
Adjectives and Negation.
|
null | false
| null |
Who is the greatest male tennis player of all time?
|
Roger Federer, Rafael Nadal, and Novak Djokovic (aka the Big 3 of tennis) are all regarded as the greatest male tennis players of all time. If the criterion for greatest of all time is the number of grand slams won, then that title is shared by Nadal and Djokovic at 22 majors. However, many fans, pundits, and sports analysts consider Federer to be the greatest of all time due to his longevity, success, ability to attract fans to the game, and humanitarian impact off the court. Ultimately this is a very subjective question and is difficult to answer without first defining the criteria for greatness in tennis.
|
|
null | false
| null |
Bermuda (/bərˈmjuːdə/; historically known as The Bermudas or Somers Isles) is a British Overseas Territory in the North Atlantic Ocean. The Bermuda archipelago consists of 181 islands with a total land area of 54 km2 (21 sq mi). The closest land outside the territory is in the US state of North Carolina, approximately 1,035 km (643 mi) to the northwest.
Bermuda is named after Spanish explorer Juan de Bermúdez, who discovered the archipelago in 1505. The islands have been permanently inhabited since 1612, and, forming part of British America, became a crown colony in 1684. The first African slaves arrived in 1616, but as the slave trade ceased by the end of the 17th century, the colony developed into a base for merchants, privateers, and the Royal Navy. More recently, tourism has been a significant contributor to Bermuda's economy. After World War II the territory became an offshore financial centre and tax haven.
As of 2019, Bermuda had a population of around 64,000 people, making it the second-most populous of the British overseas territories, after the Cayman Islands. Black Bermudians, primarily descended from African slaves, make up almost 60% of the population, while White Bermudians, primarily of British, Irish, and Portuguese descent, account for more than 30%.
|
Based on the above passage classify Bermudians based on their descent along with respective percentages in a bracket. Display the results in a comma separated format.
|
Black Bermudians (60%), White Bermudians (30%)
|
null | false
| null |
What is the city of Portsmouth in England famous for?
|
Portsmouth is a coastal town in the south of England, connected to the mainland only by three road bridges built over the sea. Portsmouth has a long and well-documented history that is closely associated with the Royal Navy, who are headquartered there, and the seafaring exploits of the British have shaped the modern-day culture we see there today. Shipbuilding and engineering industries are most prevalent in Portsmouth, with the local "super" hospital and university also being amongst the largest employers. It is also worth noting that the local football club is well supported by the community and is famous throughout the English football leagues for the passion and commitment shown by their fans - particularly when they travel for away fixtures.
|
|
1909.03023
| false
| null |
We carried out a reliability study for the proposed scheme using two pairs of expert annotators, P1 and P2. The annotators were trained by coding one transcript at a time and discussing disagreements. Five text-based discussions were used for testing reliability after training: pair P1 annotated discussions of The Bluest Eye, Death of a Salesman, and Macbeth, while pair P2 annotated two separate discussions of Ain't I a Woman. 250 argument moves (discussed by over 40 students and consisting of over 8200 words) were annotated. Inter-rater reliability was assessed using Cohen's kappa: unweighted for argumentation and knowledge domain, but quadratic-weighted for specificity given its ordered labels.
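A short scikit-learn sketch of the two kappa variants; the annotator label sequences below are hypothetical:

```python
# Sketch of the agreement measures above: unweighted kappa for the nominal
# dimensions, quadratic-weighted kappa for the ordered specificity labels.
from sklearn.metrics import cohen_kappa_score

arg_a = ["claim", "evidence", "claim", "warrant", "claim"]    # annotator 1
arg_b = ["claim", "evidence", "warrant", "warrant", "claim"]  # annotator 2
print(cohen_kappa_score(arg_a, arg_b))  # unweighted

spec_a = [1, 2, 3, 2, 1]  # ordered specificity labels
spec_b = [1, 3, 3, 2, 2]
print(cohen_kappa_score(spec_a, spec_b, weights="quadratic"))
```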
We carried out a reliability study for the proposed scheme using two pairs of expert annotators, P1 and P2.
Inter-rater reliability was assessed using Cohen's kappa: unweighted for argumentation and knowledge domain, but quadratic-weighted for specificity given its ordered labels.
|
what experiments are conducted?
|
The answers are shown as follows:
* a reliability study for the proposed scheme
|
null | false
| null |
Write a compelling short story
|
Baby shoes. For sale. Never Worn
|
|
null | false
| null |
What are the three highest peaks in the UK?
|
Ben Nevis, Scafell Pike, and Snowdon are the highest mountains in Scotland, England, and Wales, respectively.
|
|
null | false
| null |
What is GIS?
|
GIS stands for Geographic Information System. It is a system that creates, manages, analyzes, and displays all different types of geospatial data. GIS can show different types of data - such as streets, structures, tree cover, vegetation, and various types of imagery - on a single map. This enables people to quickly see, find, analyze, and understand different patterns and relationships. This provides a foundation for mapping and analysis which is used in almost every industry. It results in improved communication and efficiency as well as better management and decision-making.
|
|
null | false
| null |
What is a lab environment?
|
A lab environment is a place that has computers or servers used for testing different use cases or projects in development, test, and production-type scenarios. A lab environment may also have more relaxed security rules if new technology is being tested for production use in a company, to get a go/no-go answer faster. Lab environments save money because solutions and proofs of concept can usually be built out a lot faster, reducing the time to value of a technology or project decision.
|
|
null | false
| null |
Identify which instrument is string or percussion: Sand blocks, Gadulka
|
Gadulka is string, Sand blocks is percussion.
|
|
null | false
| 104
|
We propose a Bayesian model of semantic role induction (SRI) that uses crosslingual latent variables to capture role alignments in parallel corpora. The crosslingual latent variables capture correlations between roles in different languages and regularize the parameter estimates of the monolingual models. Because this is a joint Bayesian model of multilingual SRI, we can apply the same model to a variety of training scenarios just by changing the inference procedure appropriately. We evaluate monolingual SRI with a large unlabeled dataset, bilingual SRI with a parallel corpus, bilingual SRI with annotations available for the source language, and monolingual SRI with a small labeled dataset. Increasing the amount of monolingual unlabeled data significantly improves SRI in German but not in English. Adding word alignments in parallel sentences results in small, non-significant improvements, even if there is some labeled data available in the source language. This difficulty in showing the usefulness of parallel corpora for SRI may be due to the current assumptions about role alignments, which mean that only a small percentage of roles are aligned. Further analysis reveals that annotating small amounts of data can easily outperform the performance gains obtained by adding a large unlabeled dataset as well as adding parallel corpora.
Future work includes training on different language pairs, on more than two languages, and with more inclusive models of role alignment.
Further analysis reveals that annotating small amounts of data can easily outperform the performance gains obtained by adding a large unlabeled dataset as well as adding parallel corpora. Future work includes training on different language pairs, on more than two languages, and with more inclusive models of role alignment.
|
What's their future work?
|
Training on different language pairs, on more than two languages, and with more inclusive models of role alignment.
|
null | false
| null |
Marvel counts among its characters such well-known superheroes as Spider-Man, Iron Man, Captain America, Thor, Doctor Strange, Hulk, Daredevil, Wolverine, Black Panther and Captain Marvel, as well as popular superhero teams such as the Avengers, X-Men, Fantastic Four, and Guardians of the Galaxy. Its stable of well-known supervillains includes the likes of Doctor Doom, Magneto, Ultron, Thanos, Kang the Conqueror, Green Goblin, Galactus, Loki, and Kingpin. Most of Marvel's fictional characters operate in a single reality known as the Marvel Universe, with most locations mirroring real-life places; many major characters are based in New York City, New York, United States. Additionally, Marvel has published several licensed properties from other companies. This includes Star Wars comics twice from 1977 to 1986 and again since 2015.
|
Give me some of the most famous characters Marvel Comics has produced from this list.
|
Some of the most famous characters Marvel has produced include Spider-Man, Iron Man, Captain America, Thor, Doctor Strange, Hulk, and Wolverine. These characters are often affiliated with teams of superheroes, such as the Avengers, X-Men, Fantastic Four, and Guardians of the Galaxy.
|
null | false
| null |
What are some of most popular science fiction and fantasy books with TV or movie adaptations in the past twenty years?
|
There have been many extremely popular science fiction and fantasy books in the past twenty years, but a small selection with successful TV/movie adaptations might include: "The Lord of the Rings" by J.R.R. Tolkien, which was published in 1955 but remains extremely popular and whose movie adaptations won a record number of Academy Awards; the series "A Song of Ice and Fire" by George R.R. Martin, the first book of which is "A Game of Thrones", which is very popular and was adapted into a critically successful TV show on HBO; "The Hunger Games" by Suzanne Collins, also with a series of successful movie adaptations; and "Dune" by Frank Herbert, which has had a number of movie adaptations, with the most recent releasing in 2021.
|
|
null | false
| null |
When should I take a medication?
|
Knowing when to take a medication requires understanding what kind of medication it is, who is taking it, and what kinds of pre-existing conditions they have that may interact with the medication. Therefore, in this case it is best to consult with your doctor.
|
|
null | false
| null |
Identify which animal is domesticated or wild: Ferret, Northern bobwhite
|
Ferret is domesticated, Northern bobwhite is wild.
|
|
null | false
| 459
|
As we illustrated in Section 3.1, higher complexity indicates distractors that are more similar to targets, and thus requires messages conveying more fine-grained features of inputs. This leads to the following prediction:
1. higher complexity in games with the same contextual unpredictability can facilitate emergent languages having higher expressivity;
Meanwhile, following the conclusion from Winters et al. (2018) that higher contextual unpredictability leads to higher signal autonomy, i.e. more comprehensive information about the input space is encoded in the emergent languages, we make the following prediction:
2. higher unpredictability in games having the same contextual complexity can facilitate emergent languages having higher expressivity;
Following the above two predictions, both higher complexity and higher unpredictability can unilaterally lead to better expressivity. However, with an input space whose size is finite, changing the context size always affects the two factors in an inverse way, which leads to our Hypothesis 1 and gives the following prediction:
3. if the size of X is fixed across games, a language which emerges in a game with moderate complexity and unpredictability should have the highest expressivity.
As we illustrated in Section 3.1, higher complexity indicates more similar distractors to targets, thus requires messages conveying more fine-grained features of inputs. This leads to the following prediction: 1. higher complexity in games with same contextual unpredictability can facilitate emergent languages having higher expressivity; Meanwhile, following the conclusion from Winters et al. (2018) that higher contextual unpredictability leads to higher signal autonomy, i.e. more comprehensive information about input space is encoded in the emergent languages, we then make the following prediction: 2. higher unpredictability in games having same contextual complexity can facilitate emergent languages having higher expressivity; Following the above two predictions, both higher complexity and higher unpredictability can unilaterally lead to better expressivity. However, with an input space whose size is finite, changing context size always affects the two factors in an inverse way, which leads to our Hypothesis 1 and gives the following prediction: 3. if the size of X is fixed across games, a language which emerges in a game with moderate complexity and unpredictability should have the highest expressivity.
|
Predictions in Section 4.1 depend on the expressions being accurate in Section 3.1. How do the definitions of unpredictability and complexity lead to predictions over expressivity?
|
We updated Section 4.1 to give better derivation of the three predictions. More specifically, we show that the first two predictions directly follow the existing findings from other domains, and there’s naturally a tradeoff since we cannot increase both complexity and unpredictability at the same time.
|
null | false
| null |
Is it safe to invest in cryptocurrency?
|
Compared to other asset classes such as stocks and government bonds, investing in cryptocurrency can be considered very risky.
|
|
null | false
| null |
What was invented by Dr Albert Southwick in 1881
|
Electric chair
|
|
null | false
| null |
Bob's mother had 3 children. Two of them are Mary and Hugh. What is the name of the last child?
|
The last child is Bob since Bob's mother had 3 children.
|
|
null | false
| null |
Does Arizona use Daylight Savings?
|
No, Arizona does not observe Daylight Saving Time, with the exception of the Navajo Nation in the northeastern part of the state.
|
|
null | false
| 141
|
Automatic summarization, machine translation, question answering, and semantic parsing operations are useful for processing, analyzing, and extracting meaningful information from text. However, when applied to long texts, these tasks usually require some minimal syntactic structure to be identified, such as sentences BIBREF0 , BIBREF1 , BIBREF2 , which always end with a period (“.”) in English BIBREF3 .
However, written Thai does not use an explicit end-of-sentence marker to identify sentence boundaries BIBREF4 . Prior works have adapted traditional machine learning models to predict the beginning position of a sentence. The authors of BIBREF5 , BIBREF6 , BIBREF7 proposed traditional models to determine whether a considered space is a sentence boundary based on the words and their part of speech (POS) near the space. Meanwhile, Zhou N. et al. BIBREF8 considered Thai sentence segmentation as a sequence tagging problem and proposed a CRF-based model with n-gram embedding to predict which word is the sentence boundary. This method achieves the state-of-the-art result for Thai sentence segmentation and achieves greater accuracy than other models by approximately 10% on an Orchid dataset BIBREF9 .
Several deep learning approaches have been applied in various tasks of natural language processing (NLP), including the long short-term memory BIBREF10 , self-attention BIBREF11 , and other models. Huang Z. et al. BIBREF12 proposed a deep learning sequence tagging model called Bi-LSTM-CRF, which integrates a conditional random field (CRF) module to gain the benefit of both deep learning and traditional machine learning approaches. In their experiments, the Bi-LSTM-CRF model achieved an improved level of accuracy in many NLP sequence tagging tasks, such as named entity recognition, POS tagging and chunking.
The CRF module achieved the best result on the Thai sentence segmentation task BIBREF8 ; therefore, we adopt the Bi-LSTM-CRF model as our baseline. This paper makes the following three contributions to improve Bi-LSTM-CRF for sentence segmentation.
First, we propose adding n-gram embedding to Bi-LSTM-CRF due to its success in BIBREF8 and BIBREF12 . By including n-gram embedding, the model can capitalize on both approaches. First, the model gains the ability to extract past and future input features and sentence level tag information from Bi-LSTM-CRF; moreover, with the n-gram addition, it can also extract a local representation from n-gram embedding, which helps in capturing word groups that exist near sentence boundary. Although Jacovi A. et al. BIBREF13 reported that a convolutional neural network (CNN) can be used as an n-gram detector to capture local features, we chose n-gram embedding over a CNN due to its better accuracy, as will be shown in Section SECREF8 .
Second, we propose adding incorporative distant representation into the model via a self-attention mechanism, which can focus on the keywords of dependent clauses that are far from the considered word. Self-attention has been used in many recent state-of-the-art models, most notably the transformer BIBREF11 and BERT BIBREF14 . BERT has outperformed Bi-LSTM on numerous tasks, including question answering and language inference. Therefore, we choose to use self-attention modules to extract distant representations along with local representations to improve model accuracy.
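As a rough illustration of how these pieces fit together, the following is a minimal PyTorch sketch combining n-gram embeddings (local representation), a Bi-LSTM (past and future context) and self-attention (distant representation) before a per-token tagging layer. All layer sizes are assumptions, and the CRF output layer and CVT training of the full model are omitted for brevity.

```python
import torch
import torch.nn as nn

class LocalDistantTagger(nn.Module):
    """Sketch: word + n-gram embeddings feed a Bi-LSTM; a self-attention
    branch adds distant context; both are concatenated for tagging.
    The CRF layer of the full model is replaced by plain logits here."""
    def __init__(self, vocab=10000, ngram_vocab=50000, dim=128, tags=2):
        super().__init__()
        self.word_emb = nn.Embedding(vocab, dim)
        self.ngram_emb = nn.Embedding(ngram_vocab, dim)   # local features
        self.bilstm = nn.LSTM(2 * dim, dim, bidirectional=True,
                              batch_first=True)
        self.attn = nn.MultiheadAttention(2 * dim, num_heads=4,
                                          batch_first=True)
        self.out = nn.Linear(4 * dim, tags)   # e.g. boundary / non-boundary

    def forward(self, words, ngrams):
        x = torch.cat([self.word_emb(words), self.ngram_emb(ngrams)], -1)
        h, _ = self.bilstm(x)                  # past and future context
        a, _ = self.attn(h, h, h)              # distant keywords
        return self.out(torch.cat([h, a], -1))

model = LocalDistantTagger()
logits = model(torch.randint(0, 10000, (2, 30)),
               torch.randint(0, 50000, (2, 30)))   # shape (2, 30, 2)
```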
Third, we also apply semi-supervised learning BIBREF15 , allowing us to employ unlimited amounts of unlabeled data, which is particularly important for low-resource languages such as Thai, for which annotation is costly and time-consuming. Many semi-supervised learning approaches have been proposed in the computer vision BIBREF16 , BIBREF17 and natural language processing BIBREF18 , BIBREF19 , BIBREF20 fields. Our choice for semi-supervised learning to enhance model representation is Cross-View Training (CVT) BIBREF20 . Clark K. et al. BIBREF20 claims that CVT can improve the representation layers of the model, which is our goal. However, CVT was not designed to be integrated with self-attention and CRF modules; consequently, we provide a modified version of CVT in this work.
Based on the above three contributions, we pursue two main experiments. The first experiment was conducted on two Thai datasets, Orchid and UGWC BIBREF21 , to evaluate our Thai sentence segmentation model. In this case, our model achieves F1 scores of 92.5% and 88.9% on Orchid and UGWC, respectively, and it outperforms all the baseline models. The second experiment was executed on the IWSLT dataset BIBREF22 and involves an English-language punctuation restoration task. This experiment demonstrates that our model is generalizable to different languages. Our model, which does not require pretrained word vectors, improved the overall F1 score by 0.9% compared to the baselines, including a model that uses pretrained word vectors.
There are five sections in the remainder of this paper. Section SECREF2 reviews the related works on Thai sentence segmentation, English punctuation restoration and introduces the original CVT. Section SECREF3 describes the proposed model architecture and the integration of cross-view training. The datasets, implementation process and evaluation metrics are explained in Section SECREF4 . The results of the experiments are discussed in Section SECREF5 . Finally, Section SECREF6 concludes the paper.
Second, we propose adding incorporative distant representation into the model via a self-attention mechanism, which can focus on the keywords of dependent clauses that are far from the considered word.
|
What's the usage of this representation addition?
|
It can focus on the keywords of dependent clauses that are far from the considered word.
|
null | false
| null |
What is enriched air and why would divers dive with enriched air?
|
Enriched air, also known as nitrox, is breathing gas with an oxygen fraction above the 21% found in normal air. The primary benefit of using enriched air is to extend the diver's no-decompression bottom time, reducing the need to push against the no-decompression limit. This advantage is especially noticeable on deeper dives.
For example, at 80 feet of depth, the no-decompression limit on normal air is around 30 minutes, while enriched air with 32% oxygen allows around 45 minutes and 36% around 55 minutes.
While enriched air is very beneficial in extending bottom time, it is important that divers get trained in its use before diving with enriched air.
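As a purely illustrative aside, the standard equivalent air depth (EAD) formula shows why a nitrox mix behaves, nitrogen-wise, like air at a shallower depth. The helper below is a sketch for intuition only, not dive planning; proper training, certified tables and a dive computer are required.

```python
def equivalent_air_depth_ft(depth_ft: float, o2_fraction: float) -> float:
    """Standard imperial EAD formula: the depth at which normal air (21%
    oxygen, 79% nitrogen) gives the same nitrogen loading as the mix."""
    n2_fraction = 1.0 - o2_fraction
    return (depth_ft + 33.0) * (n2_fraction / 0.79) - 33.0

# At 80 ft, EAN32 loads nitrogen like air at a shallower depth:
print(round(equivalent_air_depth_ft(80, 0.32), 1))  # ~64.3 ft
print(round(equivalent_air_depth_ft(80, 0.36), 1))  # ~58.5 ft
```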
|
|
null | false
| null |
Wagyu
Cattle were brought to Japan from China at the same time as the cultivation of rice, in about the second century AD, in the Yayoi period. Until about the time of the Meiji Restoration in 1868, they were used only as draught animals, in agriculture, forestry, mining and for transport, and as a source of fertilizer. Milk consumption was unknown, and – for cultural and religious reasons – meat was not eaten. Cattle were highly prized and valuable, too expensive for a poor farmer to buy.
Japan was effectively isolated from the rest of the world from 1635 until 1854; there was no possibility of the intromission of foreign genes to the cattle population during this time. Between 1868, the year of the Meiji Restoration, and 1887, some 2600 foreign cattle were imported. At first, there was little interest in cross-breeding these with the native stock, but from about 1900, it became widespread. It ceased abruptly in 1910 when it was realised that, while the cross-breeds might be larger and have better dairy qualities, their working capacity and meat quality was lower. From 1919, the various heterogeneous regional populations that resulted from this brief period of cross-breeding were registered and selected as "Improved Japanese Cattle". Four separate strains were characterised, based mainly on which type of foreign cattle had most influenced the hybrids, and were recognised as individual breeds in 1944. They are collectively known as wagyū, and are:
The Japanese Black (黒毛和種, kuroge washu), which constitutes over 90% of beef cattle in Japan; regional strains within the breed include the Tottori, Tajima, Shimane and Okayama.
The Japanese Brown or Japanese Red (褐毛和種, akage washu or akaushi), the other main breed, representing about 5% of all beef cattle; reared in southern Japan, in Kōchi Prefecture on Shikoku island, and in Kumamoto Prefecture on Kyushu island.
The Japanese Polled (無角和種, mukaku washu), found principally in Yamaguchi Prefecture.
The Japanese Shorthorn (日本短角和種, nihon tankaku washu), reared in northern Japan, mainly in Iwate Prefecture; it constitutes less than one percent of all wagyu cattle.
|
Given a reference text about Wagyu, tell me the different types of cattle that are together known as wagyu.
|
The different types of cattle that are known as Wagyu are:
- The Japanese Black
- The Japanese Brown
- The Japanese Polled
- The Japanese Shorthorn
|
null | false
| 393
|
One of the significant challenges in contemporary information processing is the sheer volume of available data. BIBREF0 , for example, claim that the amount of digital data in the world doubles every two years. This trend underpins efforts to develop algorithms that can efficiently search for relevant information in huge datasets. One class of such algorithms, represented by, e.g., Locality Sensitive Hashing BIBREF1 , relies on hashing data into short, locality-preserving binary codes BIBREF2 . The codes can then be used to group the data into buckets, thereby enabling sublinear search for relevant information, or for fast comparison of data items. Most of the algorithms from this family are data-oblivious, i.e. can generate hashes for any type of data. Nevertheless, some methods target specific kind of input data, like text or image.
In this work we focus on learning binary codes for text documents. An important work in this direction has been presented by BIBREF3 . Their semantic hashing leverages autoencoders with sigmoid bottleneck layer to learn binary codes from a word-count bag-of-words (BOW) representation. Salakhutdinov & Hinton report that binary codes allow for up to 20-fold improvement in document ranking speed, compared to real-valued representation of the same dimensionality. Moreover, they demonstrate that semantic hashing codes used as an initial document filter can improve precision of TF-IDF-based retrieval. Learning binary representation from BOW, however, has its disadvantages. First, word-count representation, and in turn the learned codes, are not in themselves stronger than TF-IDF. Second, BOW is an inefficient representation: even for moderate-size vocabularies BOW vectors can have thousands of dimensions. Learning fully-connected autoencoders for such high-dimensional vectors is impractical. Salakhutdinov & Hinton restricted the BOW vocabulary in their experiments to 2000 most frequent words.
Binary codes have also been applied to cross-modal retrieval where text is one of the modalities. Specifically, BIBREF4 incorporated tag information that often accompany text documents, while BIBREF5 employed siamese neural networks to learn single binary representation for text and image data.
Recently several works explored simple neural models for unsupervised learning of distributed representations of words, sentences and documents. BIBREF6 proposed log-linear models that learn distributed representations of words by predicting a central word from its context (CBOW model) or by predicting context words given the central word (Skip-gram model). The CBOW model was then extended by BIBREF7 to learn distributed representations of documents. Specifically, they proposed Paragraph Vector Distributed Memory (PV-DM) model, in which the central word is predicted given the context words and the document vector. During training, PV-DM learns the word embeddings and the parameters of the softmax that models the conditional probability distribution for the central words. During inference, word embeddings and softmax weights are fixed, but the gradients are backpropagated to the inferred document vector. In addition to PV-DM, Le & Mikolov studied also a simpler model, namely Paragraph Vector Distributed Bag of Words (PV-DBOW). This model predicts words in the document given only the document vector. It therefore disregards context surrounding the predicted word and does not learn word embeddings. Le & Mikolov demonstrated that paragraph vectors outperform BOW and bag-of-bigrams in information retrieval task, while using only few hundreds of dimensions. These models are also amenable to learning and inference over large vocabularies. The original CBOW network used hierarchical softmax to model the probability distribution for the central word. One can also use noise-contrastive estimation BIBREF8 or importance sampling BIBREF9 to approximate the gradients with respect to the softmax logits.
An alternative approach to learning representation of pieces of text has been recently described by BIBREF10 . Networks proposed therein, inspired by the Skip-gram model, learn to predict surrounding sentences given the center sentence. To this end, the center sentence is encoded by an encoder network and the surrounding sentences are predicted by a decoder network conditioned on the center sentence code. Once trained, these models can encode sentences without resorting to backpropagation inference. However, they learn representations at the sentence level but not at the document level.
In this work we present Binary Paragraph Vector models, an extensions to PV-DBOW and PV-DM that learn short binary codes for text documents. One inspiration for binary paragraph vectors comes from a recent work by BIBREF11 on learning binary codes for images. Specifically, we introduce a sigmoid layer to the paragraph vector models, and train it in a way that encourages binary activations. We demonstrate that the resultant binary paragraph vectors significantly outperform semantic hashing codes. We also evaluate binary paragraph vectors in transfer learning settings, where training and inference are carried out on unrelated text corpora. Finally, we study models that simultaneously learn short binary codes for document filtering and longer, real-valued representations for ranking. While BIBREF11 employed a supervised criterion to learn image codes, binary paragraph vectors remain unsupervised models: they learn to predict words in documents.
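A rough PyTorch sketch of the core idea, a sigmoid bottleneck on the document embedding of a PV-DBOW-style model that is thresholded to bits at inference, is shown below. The layer sizes, loss and binarization details are assumptions, not the paper's exact training recipe.

```python
import torch
import torch.nn as nn

class BinaryPVDBOW(nn.Module):
    """Sketch: document codes pass through a sigmoid so activations live
    in (0, 1); words of the document are predicted from the code."""
    def __init__(self, n_docs=1000, vocab=20000, code_bits=32):
        super().__init__()
        self.doc_emb = nn.Embedding(n_docs, code_bits)
        self.decoder = nn.Linear(code_bits, vocab)

    def forward(self, doc_ids):
        code = torch.sigmoid(self.doc_emb(doc_ids))   # near-binary values
        return self.decoder(code)                     # logits over words

    def binary_code(self, doc_ids):
        with torch.no_grad():
            return (torch.sigmoid(self.doc_emb(doc_ids)) > 0.5).int()

model = BinaryPVDBOW()
loss = nn.functional.cross_entropy(
    model(torch.tensor([3])), torch.tensor([42]))  # word 42 in document 3
```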
In this work we focus on learning binary codes for text documents.
|
What do the authors focus on in this paper?
|
The authors focus on learning binary codes for text documents.
|
1912.00423
| false
| null |
One of the several formats into which FHIR can be serialized is RDF. However, because RDF was designed as an abstract information model and FHIR was designed for operational use in a healthcare setting, there is the potential for a slight mismatch between the models. This comes up in two ways: One, RDF makes statements of fact, whereas FHIR makes records of events. The example given in the FHIR documentation is the difference between "patient x has viral pneumonia" (statement of fact) and "Dr. Jones diagnosed patient x with viral pneumonia" (record of event). Two, RDF is intended to have the property of monotonicity, meaning that previous facts cannot be invalidated by new facts. The example given for this mismatch is "a modifier extension indicates that the surrounding element's meaning will likely be misunderstood if the modifier extension is not understood." The potential for serious error resulting from this mismatch is small, but it is worth bearing in mind when designing information systems.
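To make the fact-versus-event distinction concrete, here is a small rdflib sketch; the IRIs and property names are invented for illustration and are not the official FHIR RDF vocabulary.

```python
from rdflib import Graph, Namespace, Literal, RDF

EX = Namespace("http://example.org/")
g = Graph()

# Statement of fact (typical RDF): patient x has viral pneumonia.
g.add((EX.patientX, EX.hasCondition, EX.ViralPneumonia))

# Record of event (FHIR-style): Dr. Jones diagnosed patient x on a date.
g.add((EX.diagnosis1, RDF.type, EX.DiagnosisEvent))
g.add((EX.diagnosis1, EX.subject, EX.patientX))
g.add((EX.diagnosis1, EX.performer, EX.drJones))
g.add((EX.diagnosis1, EX.code, EX.ViralPneumonia))
g.add((EX.diagnosis1, EX.recordedOn, Literal("2019-11-01")))

print(g.serialize(format="turtle"))
```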
One of the several formats into which FHIR can be serialized is RDF. However, because RDF was designed as an abstract information model and FHIR was designed for operational use in a healthcare setting, there is the potential for a slight mismatch between the models. This comes up in two ways: One, RDF makes statements of fact, whereas FHIR makes records of events.
Two, RDF is intended to have the property of monotonicity, meaning that previous facts cannot be invalidated by new facts.
|
How are FHIR and RDF combined?
|
The answers are shown as follows:
* RDF was designed as an abstract information model and FHIR was designed for operational use in a healthcare setting
* RDF makes statements of fact, whereas FHIR makes records of events
* RDF is intended to have the property of monotonicity, meaning that previous facts cannot be invalidated by new facts
|
null | false
| null |
The Australian Chess Championship is a tournament organised by the Australian Chess Federation and held every two years. The tournament is largely restricted to Australian chess players, although it is common to allow a small number of strong overseas players to compete. The highest-scoring eligible player (frequently the tournament winner) holds the title of Australian Chess Champion until the next tournament is held. The tournament format is normally a restricted Swiss system, and in case of a tie for first place, a playoff match or tournament is conducted.
|
How often is The Australian Chess Championship held?
|
Every two years
|
null | false
| null |
What is Autism Spectrum Disorder?
|
Autism Spectrum Disorder (ASD) is a developmental disability caused by differences in the brain, characterized by difficulty with social communication and restricted, repetitive patterns of behaviour, interests, or activities.
Many children with high-functioning autism (also known as Asperger's syndrome) have average or above-average IQ. ASD is diagnosed almost 4 times more often in males than in females, and is known to be more prevalent among people in technical professions such as engineering and physics.
|
|
null | false
| null |
What are the books in Harry Potter series in order?
|
Harry Potter is a book series written by British author J.K. Rowling. The series was published from 1997 to 2007, with 7 books:
Book 1: Harry Potter and the Philosopher’s Stone (1997)
Book 2: Harry Potter and the Chamber of Secrets (1998)
Book 3: Harry Potter and the Prisoner of Azkaban (1999)
Book 4: Harry Potter and the Goblet of Fire (2000)
Book 5: Harry Potter and the Order of the Phoenix (2003)
Book 6: Harry Potter and the Half-Blood Prince (2005)
Book 7: Harry Potter and the Deathly Hallows (2007)
|
|
null | false
| null |
Name five teams in Liga Portugal.
|
Five teams in Liga Portugal are Benfica, Porto, Sporting, Braga, and Arouca.
|
|
null | false
| 192
|
In this paper we presented the approach for introducing thesaurus information into topic models. The main idea of the approach is based on the assumption that if related words or phrases co-occur in the same text, their frequencies should be enhanced and this action leads to their mutual larger contribution into topics found in this text.
In the experiments on four English collections, it was shown that the direct implementation of this idea using WordNet synonyms and/or direct relations leads to great degradation of the unigram model. But the correction of initial assumptions and excluding hyponyms from frequencies adding improve the model and makes it much better than the initial model in several measures. Adding ngrams in a similar manner further improves the model.
Introducing information from domain-specific thesaurus EuroVoc led to improving the initial model without the additional assumption, which can be explained by the absence of general abstract words in such information-retrieval thesauri.
We also considered thematic analysis of an Islam Internet site and evaluated the combined topic models manually. We found that the best, understandable topics are obtained by adding domain-specific thesaurus knowledge (domain terms, synonyms, and relations).
In this paper we presented the approach for introducing thesaurus information into topic models. The main idea of the approach is based on the assumption that if related words or phrases co-occur in the same text, their frequencies should be enhanced and this action leads to their mutual larger contribution into topics found in this text.
|
What assumption is their method based on?
|
If related words or phrases co-occur in the same text, their frequencies should be enhanced and this action leads to their mutual larger contribution into topics found in this text.
|
null | false
| 0
|
Although Neural Machine Translation (NMT) has dominated recent research on translation tasks BIBREF0, BIBREF1, BIBREF2, NMT heavily relies on large-scale parallel data, resulting in poor performance on low-resource or zero-resource language pairs BIBREF3. Translation between these low-resource languages (e.g., Arabic$\rightarrow $Spanish) is usually accomplished with pivoting through a rich-resource language (such as English), i.e., Arabic (source) sentence is translated to English (pivot) first which is later translated to Spanish (target) BIBREF4, BIBREF5. However, the pivot-based method requires doubled decoding time and suffers from the propagation of translation errors.
One common alternative to avoid pivoting in NMT is transfer learning BIBREF6, BIBREF7, BIBREF8, BIBREF9 which leverages a high-resource pivot$\rightarrow $target model (parent) to initialize a low-resource source$\rightarrow $target model (child) that is further optimized with a small amount of available parallel data. Although this approach has achieved success in some low-resource language pairs, it still performs very poorly in extremely low-resource or zero-resource translation scenario. Specifically, BIBREF8 reports that without any child model training data, the performance of the parent model on the child test set is miserable.
In this work, we argue that the language space mismatch problem, also named domain shift problem BIBREF10, brings about the zero-shot translation failure in transfer learning. It is because transfer learning has no explicit training process to guarantee that the source and pivot languages share the same feature distributions, causing that the child model inherited from the parent model fails in such a situation. For instance, as illustrated in the left of Figure FIGREF1, the points of the sentence pair with the same semantics are not overlapping in source space, resulting in that the shared decoder will generate different translations denoted by different points in target space. Actually, transfer learning for NMT can be viewed as a multi-domain problem where each source language forms a new domain. Minimizing the discrepancy between the feature distributions of different source languages, i.e., different domains, will ensure the smooth transition between the parent and child models, as shown in the right of Figure FIGREF1. One way to achieve this goal is the fine-tuning technique, which forces the model to forget the specific knowledge from parent data and learn new features from child data. However, the domain shift problem still exists, and the demand of parallel child data for fine-tuning heavily hinders transfer learning for NMT towards the zero-resource setting.
In this paper, we explore the transfer learning in a common zero-shot scenario where there are a lot of source$\leftrightarrow $pivot and pivot$\leftrightarrow $target parallel data but no source$\leftrightarrow $target parallel data. In this scenario, we propose a simple but effective transfer approach, the key idea of which is to relieve the burden of the domain shift problem by means of cross-lingual pre-training. To this end, we firstly investigate the performance of two existing cross-lingual pre-training methods proposed by BIBREF11 in zero-shot translation scenario. Besides, a novel pre-training method called BRidge Language Modeling (BRLM) is designed to make full use of the source$\leftrightarrow $pivot bilingual data to obtain a universal encoder for different languages. Once the universal encoder is constructed, we only need to train the pivot$\rightarrow $target model and then test this model in source$\rightarrow $target direction directly. The main contributions of this paper are as follows:
We propose a new transfer learning approach for NMT which uses the cross-lingual language model pre-training to enable a high performance on zero-shot translation.
We propose a novel pre-training method called BRLM, which can effectively alleviates the distance between different source language spaces.
Our proposed approach significantly improves zero-shot translation performance, consistently surpassing pivoting and multilingual approaches. Meanwhile, the performance on supervised translation direction remains the same level or even better when using our method.
Besides, a novel pre-training method called BRidge Language Modeling (BRLM) is designed to make full use of the source$\leftrightarrow $pivot bilingual data to obtain a universal encoder for different languages.
|
What kind of method is designed?
|
A novel pre-training method called BRidge Language Modeling (BRLM).
|
null | false
| null |
Which of the following numbers is an irrational number: -200, -1.5, -1, 0, 1, 1.5, 200, pi, e, sqrt(-1)?
|
Pi and e are irrational numbers. The square root of -1 is an imaginary number, not an irrational one, and the remaining numbers are all rational.
|
|
null | false
| null |
Power resource theory is a political theory proposing that variations among welfare states are largely attributable to differing distributions of power between economic classes. It argues that "working class power achieved through organisation by labor unions or left parties, produces more egalitarian distributional outcomes".
Pioneered in the 1970s and 1980s by a school of Scandinavian researchers closely associated with Walter Korpi, Gøsta Esping-Andersen, and John Stephens, power resource theory is an empirical approach to examining the development, characteristics and effects of social policies in advanced industrialized nations. It attempts to account for the various approaches to social policy adopted by different nations, focusing primarily on the role and strength of labor mobilization. Its major conclusion is that social welfare provisions are larger and income inequality lower in countries where working-class people are more politically organized.
While power resource theory became the dominant paradigm among scholars in the 1990s for explaining variations between welfare states in advanced democracies, there are criticisms that point out the lack of accountability for factors such as variations in “coverage, extension, and generosity among welfare states” in addition to not accounting for the importance of political mobilization based on social class. Competing theories have also challenged power resource theory with alternative explanations for the varying levels of welfare development such as the importance of employers and cross-class alliances that exist in coordinated market societies.
|
What is power resource theory?
|
Power resource theory is a political theory proposing that variations among welfare states are largely attributable to differing distributions of power between economic classes. Power resource theory was pioneered in the 1970s by a school of Scandinavian researchers, and it attempts to account for the various approaches to social policy adopted by different nations.
|
null | false
| null |
Which countries are the world's largest or biggest manufacturer in terms of $ value?
|
According to data from the World Bank, in 2021 the world's largest manufacturers in terms of manufacturing output (in millions of US dollars) were:
1. China, $4,865,827.26
2. United States, $2,497,131.58
3. Japan, $995,308.97
4. Germany, $803,213.14
5. South Korea, $461,104.31
6. India, $443,911.66
7. Italy, $314,095.61
8. United Kingdom, $274,872.28
9. France, $262,642.86
10. Russian Federation, $256,958.27
|
|
1804.08050
| true
| null |
On the other hand, in the case of MHD, instead of the integration at the attention level, we assign a separate decoder to each head and then integrate their outputs to get a final output. Since each attention decoder captures different modalities, it is expected to improve the recognition performance with an ensemble effect. The calculation of the attention weight at the head INLINEFORM0 in Eq. ( EQREF21 ) is replaced with the following equation: DISPLAYFORM0
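A schematic PyTorch sketch of the per-head-decoder idea follows; the dimensions, the use of simple linear decoders, and the integration rule (a plain average) are illustrative stand-ins for the actual decoders and integration used in the paper.

```python
import torch
import torch.nn as nn

class MultiHeadDecoderSketch(nn.Module):
    """Each attention head feeds its own decoder; the per-head output
    distributions are averaged into the final (ensembled) prediction."""
    def __init__(self, dim=256, heads=4, vocab=1000):
        super().__init__()
        head_dim = dim // heads
        self.decoders = nn.ModuleList(
            nn.Linear(head_dim, vocab) for _ in range(heads))

    def forward(self, head_contexts):           # (batch, heads, head_dim)
        probs = [torch.softmax(dec(head_contexts[:, i]), dim=-1)
                 for i, dec in enumerate(self.decoders)]
        return torch.stack(probs).mean(0)        # ensemble over heads

out = MultiHeadDecoderSketch()(torch.randn(2, 4, 64))  # shape (2, 1000)
```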
On the other hand, in the case of MHD, instead of the integration at the attention level, we assign a separate decoder to each head and then integrate their outputs to get a final output.
|
Does each attention head in the decoder calculate the same output?
|
No.
|
null | false
| null |
Sucrose is rapidly lost during the roasting process, and may disappear entirely in darker roasts. During roasting, aromatic oils and acids weaken, changing the flavor; at 205 °C (401 °F), other oils start to develop. One of these oils, caffeol, is created at about 200 °C (392 °F), which is largely responsible for coffee's aroma and flavor. The difference of caffeine content between a light roast and a dark roast is only about 0.1%.
|
From the passage provided, extract the difference in caffeine between a light roast and a dark roast.
|
The difference of caffeine content between a light roast and a dark roast is about 0.1%.
|
null | false
| 163
|
Sarcasm is an intensive, indirect and complex construct that is often intended to express contempt or ridicule. Sarcasm, in speech, is multi-modal, involving tone, body language and gestures along with linguistic artifacts used in speech. Sarcasm in text, on the other hand, is more restrictive when it comes to such non-linguistic modalities. This makes recognizing textual sarcasm more challenging for both humans and machines.
Sarcasm detection plays an indispensable role in applications like online review summarizers, dialog systems, recommendation systems and sentiment analyzers. This makes automatic detection of sarcasm an important problem. However, it has been quite difficult to solve such a problem with traditional NLP tools and techniques. This is apparent from the results reported by the survey from DBLP:journals/corr/JoshiBC16. The following discussion brings more insights into this.
Consider a scenario where an online reviewer gives a negative opinion about a movie through sarcasm: “This is the kind of movie you see because the theater has air conditioning”. It is difficult for an automatic sentiment analyzer to assign a rating to the movie and, in the absence of any other information, such a system may not be able to comprehend that prioritizing the air-conditioning facilities of the theater over the movie experience indicates a negative sentiment towards the movie. This gives an intuition to why, for sarcasm detection, it is necessary to go beyond textual analysis.
We aim to address this problem by exploiting the psycholinguistic side of sarcasm detection, using cognitive features extracted with the help of eye-tracking. A motivation to consider cognitive features comes from analyzing human eye-movement trajectories that supports the conjecture: Reading sarcastic texts induces distinctive eye movement patterns, compared to literal texts. The cognitive features, derived from human eye movement patterns observed during reading, include two primary feature types:
The cognitive features, along with textual features used in the best available sarcasm detectors, are used to train binary classifiers against given sarcasm labels. Our experiments show significant improvement in classification accuracy over the state of the art, by performing such augmentation.
However, it has been quite difficult to solve such a problem with traditional NLP tools and techniques. This is apparent from the results reported by the survey from Joshi et al. (2016).
|
Has Joshi's team reported that traditional NLP tools and techniques are not effective in the automatic detection of sarcasm?
|
Yes.
|
null | false
| 322
|
OWL BIBREF15 is the de-facto standard for machine processable and interoperable ontologies on the SW. In its second version, OWL is equivalent to the description logic $\mathcal {SROIQ}(D)$. Such expressiveness has a higher computational cost but allows the development of interesting applications such as automated reasoning BIBREF16. OWL 2 ontologies consist of the following three different syntactic categories:
Entities, such as classes, properties, and individuals, are identified by IRIs. They form the primitive terms and constitute the basic elements of an ontology. Classes denote sets of individuals and properties link two individuals or an individual and a data value along a property. For example, a class :Animal can be used to represent the set of all animals. Similarly, the object property :childOf can be used to represent the parent-child relationship and the data property :birthDate assigns a particular birth date to an individual. Finally, the individual :Alice can be used to represent a particular person called "Alice".
Expressions represent complex notions in the domain being described. For example, a class expression describes a set of individuals in terms of the restrictions on the individuals' characteristics. OWL offers existential (SOME) or universal (ONLY) qualifiers and a variety of typical logical constructs, such as negation (NOT), other Boolean operators (OR, AND), and more constructs such as cardinality restriction (MIN, MAX, EXACTLY) and value restriction (VALUE), to create class expressions. Such constructs can be combined in arbitrarily complex class expressions CE according to the following grammar
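(The grammar itself is reconstructed here from the constructs listed above and the symbols defined below; the exact presentation in the source may differ.)

```
CE ::= A | NOT C | C AND D | C OR D | R SOME C | R ONLY C
     | R MIN n C | R MAX n C | R EXACTLY n C | R VALUE a | ONEOF a1 ... am
```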
where A is an atomic class, C and D are class expressions, R is an object property, a as well as a$_1$ to a$_m$ with $\texttt {m} \ge 1$ are individuals, and $\texttt {n} \ge 0$ is an integer.
Axioms are statements that are asserted to be true in the domain being described. Usually, one distinguishes between (1) terminological and (2) assertional axioms. (1) Terminological axioms are used to describe the structure of the domain, i.e., the relationships between classes resp. class expressions. For example, using a subclass axiom (SubClassOf:), one can state that the class :Koala is a subclass of the class :Animal. Classes can be subclasses of other classes, thus creating a taxonomy. In addition, axioms can arrange properties in hierarchies (SubPropertyOf:) and can assign various characteristics (Characteristics:) such as transitivity or reflexivity to them. (2) Assertional axioms formulate facts about individuals, especially the classes they belong to and their mutual relationships. OWL can be expressed in various syntaxes, with the most common computer-readable syntax being RDF/XML. A more human-readable format is the MOS BIBREF17. For example, the class expression that models people who work at a university that is located in Spain could be as follows in MOS:
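(The MOS example is reconstructed; the exact class and property names are assumptions.)

```
Person and worksAt some (University and locatedIn value Spain)
```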
Likewise, expressing that every professor works at a university would read as
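(Again reconstructed, with an assumed property name:)

```
Professor SubClassOf: worksAt some University
```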
OWL 2 ontologies consist of Entities, Expressions and Axioms as introduced in subsec:owl. While both expressions and axioms can be mapped to RDF, i.e. into a set of RDF triples, using this mapping and applying the triple-based verbalization on it would lead to a non-human understandable text in many cases. For example, the intersection of two classes :A and :B can be represented in RDF by the six triples
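(The six triples referred to are reconstructed from the verbalization below; the prefixes are assumed.)

```
:X rdf:type owl:Class .
:X owl:intersectionOf _:l1 .
_:l1 rdf:first :A .
_:l1 rdf:rest _:l2 .
_:l2 rdf:first :B .
_:l2 rdf:rest rdf:nil .
```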
The verbalization of these triples would result in Something that is a class and the intersection of something whose first is A and whose rest is something whose first is B and whose rest is nil., which is obviously far away from how a human would express it in NL. Therefore, generating NL from OWL requires a different procedure based on its syntactic categories, OWL expressions and OWL axioms. We show the general rules for each of them in the following.
In theory, class expressions can be arbitrarily complex, but as it turned out in some previous analysis BIBREF22, in practice they seldom arise and can be seen as some corner cases. For example, an ontology could contain the following class expression about people and their birth place:
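(Reconstructed from the parse tree shown below:)

```
Person AND birthPlace SOME (City AND locatedIn VALUE France)
```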
Class expressions do have a tree-like structure and can simply be parsed into a tree by means of the binary OWL class expressions constructors contained in it. For our example, this would result in the following tree:
AND
├── Person
└── SOME birthPlace
    └── AND
        ├── City
        └── VALUE locatedIn France
Such a tree can be traversed in post-order, i.e. sub-trees are processed before their parent nodes recursively. For the sake of simplicity, we only process sub-trees that represent proper class expressions in our example, i.e. we omit birthPlace, locatedIn, and France. Moreover, and again for simplicity, we'll explain the transformation process by starting from the right-hand side of the tree. Thus, in our example we begin with the class expression City, which is transformed to everything that is a city, and locatedIn VALUE France, which results in everything that is located in France by application of a rule. Both class expressions are used in the conjunction City AND locatedIn VALUE France. Thus, the next step would be to merge both phrases. An easy way is to use the coordinating conjunction and, i.e. everything that is a city and everything that is located in France. Although the output of this transformation is correct, it still contains unnecessarily redundant information. Therefore, we apply the aggregation procedure described in subsec:grouping, i.e. we get everything that is a city and located in France. Yet, the aggregation can still be improved: if there is any atomic class in the conjunction, we know that this is more specific than the placeholder everything. Thus, we can replace it by the plural form of the class, finally resulting in cities that are located in France. The same procedure is applied for its parent class expression, the existential restriction birthPlace SOME (City AND locatedIn VALUE France).
This will be transformed to everything whose birth place is a city that is located in France. Note that we used the singular form here, assuming that the property birthPlace is supposed to be functional in the ontology. In the last step, we process the class expression Person, which gives us everything that is a person. Again, due to the conjunction we merge this result with the previous one, such that in the end we get people whose birth place is a city that is located in France.
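A compact Python sketch of this post-order verbalization walk is given below. The templates, pluralization and aggregation are heavily simplified relative to the actual LD2NL rules, so the output only approximates the intermediate phrasings discussed above.

```python
def verbalize(node):
    """Toy post-order verbalizer for class-expression trees.
    Nodes: ('ATOM', name) | ('AND', l, r) | ('SOME', prop, filler)
         | ('VALUE', prop, individual)."""
    kind = node[0]
    if kind == "ATOM":
        return f"everything that is a {node[1]}"
    if kind == "VALUE":
        return f"everything that is {node[1]} {node[2]}"
    if kind == "SOME":
        return f"everything whose {node[1]} is {embed(verbalize(node[2]))}"
    if kind == "AND":   # naive merge with 'and'; no real aggregation here
        return f"{verbalize(node[1])} and {embed(verbalize(node[2]))}"
    raise ValueError(kind)

def embed(phrase):
    # drop the leading placeholder when a phrase is nested in another one
    return phrase.replace("everything that is ", "", 1) \
                 .replace("everything ", "something ", 1)

tree = ("AND", ("ATOM", "person"),
        ("SOME", "birth place",
         ("AND", ("ATOM", "city"),
          ("VALUE", "located in", "France"))))
print(verbalize(tree))
# everything that is a person and something whose birth place is
# a city and located in France
```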
As we described in sec:owl, OWL axioms can roughly be categorized into terminological and assertional axioms. Therefore, we have different procedures for processing each category:
Assertional Axioms (ABox Axioms) - Most assertional axioms assert individuals to atomic classes or relate individuals to another individual resp. literal value. For example, axioms about the type as well as birth place and birth date of Albert Einstein can be expressed by
Those axioms can simply be rewritten as triples, thus, we can use the same procedure as we do for triples (sec:singletriple). Converting them into NL gives us Albert Einstein is a person whose birth place is Ulm and whose birth date is 14 March 1879. OWL also allows for assigning an individual to a complex class expression. In that case we'll use our conversion of OWL class expressions as described in subsec:owlce.
Terminological Axioms (TBox Axioms) - According to power2010, most of the terminological axioms used in ontologies are subclass axioms. By definition, subclass and superclass can be arbitrarily complex class expressions $\texttt {CE}_1$ and $\texttt {CE}_2$, i.e. CE$_1$ SubClassOf: CE$_2$, but in practice it is quite often only used with atomic classes as subclass or, even more simply, with the superclass also being an atomic class. Nevertheless, we support any kind of subclass axiom and all other logical OWL axioms in LD2NL. For simplicity, we outline here how we verbalize subclass axioms in LD2NL. The semantics of a subclass axiom denotes that every individual of the subclass also belongs to the superclass. Thus, the verbalization seems to be relatively straightforward, i.e. we verbalize both class expressions and follow the template: every $\rho (\texttt {CE}_1)$ is a $\rho (\texttt {CE}_2)$. Obviously, this works pretty well for subclass axioms with atomic classes only. For example, the axiom Scientist SubClassOf: Person
is verbalized as every scientist is a person.
This work was supported by the German Federal Ministry of Transport and Digital Infrastructure (BMVI) through the projects LIMBO (no. 19F2029I) and OPAL (no. 19F2028A). This work was supported by the German Federal Ministry of Economics and Technology (BMWI) in the projects RAKI (no. 01MD19012D) as well as by the BMBF project SOLIDE (no. 13N14456).
OWL2 (OWL Working Group, 2009) is the de-facto standard for machine processable and interoperable ontologies on the SW.
|
What is the OWL?
|
The de-facto standard for machine processable and interoperable ontologies on the SW.
|
null | false
| 71
|
In the era of social media and networking platforms, Twitter has been plagued by abuse and harassment toward users, specifically women. In fact, online harassment has become very common on Twitter, and there have been many critics arguing that Twitter has become a platform where racists, misogynists and hate groups can express themselves openly. Online harassment usually takes verbal or graphical forms and is considered harassment because it is neither invited nor has the consent of the recipient. Monitoring content containing sexism and sexual harassment in traditional media is easier than monitoring it on online social media platforms like Twitter. The main reason is the large amount of user-generated content on these media. So, research on the automated detection of content containing sexual harassment is an important issue and could be the basis for removing that content or flagging it for human evaluation. The basic goal of this automatic classification is that it will significantly improve the process of detecting these types of hate speech on social media by reducing the time and effort required by human beings.
Previous studies have focused on collecting data about sexism and racism in very broad terms or have proposed two categories of sexism as benevolent or hostile sexism BIBREF0, which undermines other types of online harassment. However, there are few studies focusing on specific types of online harassment using natural language processing techniques.
In this paper we present our work, which is a part of the SociaL Media And Harassment Competition of the ECML PKDD 2019 Conference. The topic of the competition is the classification of different types of harassment and it is divided in two tasks. The first one is the classification of the tweets in harassment and non-harassment categories, while the second one is the classification in specific harassment categories like indirect harassment, physical and sexual harassment as well. We are using the dataset of the competition, which includes text from tweets having the aforementioned categories. Our approach is based on the Recurrent Neural Networks and particularly we are using a deep, classification specific attention mechanism. Moreover, we present a comparison between different variations of this attention-based approach like multi-attention and single attention models. The next Section includes a short description of the related work, while the third Section includes a description of the dataset. After that, we describe our methodology. Finally, we describe the experiments and we present the results and our conclusion.
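To make the attention-based approach concrete, here is a minimal PyTorch sketch of a single-attention RNN classifier of the kind described; the encoder type, layer sizes and pooling details are illustrative assumptions rather than the exact competition model.

```python
import torch
import torch.nn as nn

class AttentionRNNClassifier(nn.Module):
    """Bi-GRU encoder with learned attention pooling over time steps,
    followed by a harassment / non-harassment classification head."""
    def __init__(self, vocab=20000, dim=128, classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.rnn = nn.GRU(dim, dim, bidirectional=True, batch_first=True)
        self.attn = nn.Linear(2 * dim, 1)
        self.head = nn.Linear(2 * dim, classes)

    def forward(self, tokens):                  # (batch, time)
        h, _ = self.rnn(self.emb(tokens))       # (batch, time, 2*dim)
        w = torch.softmax(self.attn(h).squeeze(-1), dim=-1)
        pooled = (w.unsqueeze(-1) * h).sum(1)   # attention-weighted sum
        return self.head(pooled)

logits = AttentionRNNClassifier()(torch.randint(0, 20000, (4, 25)))
```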
In this paper we present our work, which is a part of the SociaL Media And Harassment Competition of the ECML PKDD 2019 Conference. The topic of the competition is the classification of different types of harassment and it is divided in two tasks. The first one is the classification of the tweets in harassment and non-harassment categories, while the second one is the classification in specific harassment categories like indirect harassment, physical and sexual harassment as well. We are using the dataset of the competition, which includes text from tweets having the aforementioned categories.
|
What dataset did the authors use in this paper?
|
The authors used the dataset of the SociaL Media And Harassment Competition of the ECML PKDD 2019 Conference, which includes text from tweets having the aforementioned categories.
|
null | false
| null |
Stargate Command (abbreviated to SGC) is a top-secret military organization founded and led by the United States Air Force in conjunction with the International Oversight Advisory, tasked with operating the Stargate device and all matters pertaining to things offworld, such as threats to Earth or missions to procure new technology from extraterrestrial civilizations. It is based inside the Cheyenne Mountain Complex in Colorado, code named "Area 52". (SG1: "Politics")
|
Where is Stargate Command located in the Stargate universe.
|
The Stargate Command facility is located on planet Earth, in Colorado, United States, in the Cheyenne Mountain complex
|
null | false
| null |
What are some things to consider when going to a theme park?
|
It is important to think about what the weather may be like before going to a theme park. If it is raining, you might want to plan ahead by wearing a rain jacket, and also consider that some of the roller coaster rides may not be operational during this kind of weather. You also need to consider things like sunscreen and hydration if the weather is going to be hot and sunny. Bug spray for mosquitos is another important item if the weather is humid. Prepare a list of questions or things to bring before you travel to a theme park and you will be better prepared when you get there.
|
|
null | false
| null |
When is the best time of year to visit the Smoky Mountains?
|
The Smoky Mountains are open and accessible year-round. One can visit the Smokies during each of the 4 seasons and have a completely different experience in each. Colder months, such as December through early March, are great if you want to explore when it's less crowded, but be prepared for certain higher-elevation trails and roadways to be closed due to ice and snow. Summer months are great for morning hikes and hikes at higher elevations, as the temperatures are moderately warm and comfortable, but be prepared for larger crowds, as this is the busiest time of year around the park. In October, the Smokies are full of red, orange, and yellow leaves changing color before dropping for the winter. This is one of the more beautiful times of year in the Smokies, but also a very busy time, as visitors flock to the park to see all the colors. Spring break is also a great time of year to visit the Smoky Mountains, as the weather is cool for long hikes, the bears are just starting to come out with their cubs, and the crowds are smaller than what you'd see during the summer.
|
|
null | false
| 499
|
We jointly optimize for the parameters of the neural implicit representations $\theta_{bg}$ and $\theta_{obj}$ and estimate the physical parameters $\theta_{ode}$, $z_0$ and $\theta_{+}$ of the dynamics and the transformation. To this end, we use a simple photometric loss defined over all the pixel values, which reads $\mathcal{L} = \sum_{t \in \mathcal{T}} \sum_{\mathbf{x} \in \mathcal{I}} d\big(\hat{I}(\mathbf{x}, t), I(\mathbf{x}, t)\big)$, where $d$ computes the discrepancy between its two inputs, $\mathcal{T}$ is the set of all given time steps, $\mathcal{I}$ is the set of all pixel coordinates at the current resolution (see next section), $\hat{I}$ are the rendered frames and $I(\mathbf{x}, t)$ are the given images. To capture information on multiple scales we employ an image pyramid scheme. More details can be found in App. C.
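A schematic sketch of such a multi-scale photometric loss is shown below; treating d as a squared error and using average pooling to build the pyramid are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def photometric_loss(pred_frames, given_frames, levels=3):
    """Accumulate per-pixel discrepancies between rendered and given
    frames over an image pyramid; d is squared error here."""
    loss = 0.0
    p, g = pred_frames, given_frames             # (T, C, H, W)
    for _ in range(levels):
        loss = loss + F.mse_loss(p, g)
        p, g = F.avg_pool2d(p, 2), F.avg_pool2d(g, 2)
    return loss

loss = photometric_loss(torch.rand(10, 3, 64, 64),
                        torch.rand(10, 3, 64, 64))
```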
In this work we presented a solution for learning a physical model from an image sequence that depicts some physical phenomenon. To this end, we proposed to combine neural implicit representations and neural ordinary differential equations in an analysis-by-synthesis fashion. Unlike existing learning-based approaches that require large training corpora, a single short video clip is sufficient for our approach. In contrast to prior works that use encoder-decoder architectures specifically tailored to 2D images, we built upon neural implicit representations that have been shown to give impressive results for 3D scene reconstruction. Therefore, the extension of the proposed method to 3D is a promising direction for future work.
We present diverse experiments in which the ODE parametrizes a rigid-body transformation between the background and the foreground objects, such as the pendulum motion. We emphasize that conceptually our model is not limited to rigid-body motions, and that it can directly be extended to other cases, for example to nonlinear transformations for modelling soft-body dynamics. The focus of this work is on learning a physical model of a phenomenon from a short video. Yet, the high fidelity of our model's renderings, together with the easy modifiability of the physical parameters, enables various computer graphics applications such as the artistic re-rendering of scenes, which we briefly demonstrate in the supplementary video. Overall, our per-scene model combines a unique set of favorable properties, including the interpretability of physical parameters, the ability to perform long-term predictions, and the synthesis of high-resolution images. We believe that our work may serve as inspiration for follow-up works on physics-based machine learning using neural implicit representations.
Two masses spring system in which MNIST digits are connected by an (invisible) spring.
We emphasize that conceptually our model is not limited to rigid-body motions, and that it can directly be extended to other cases, for example to nonlinear transformations for modelling soft-body dynamics. The focus of this work is on learning a physical model of a phenomenon from a short video. Yet, the high fidelity of our model's renderings, together with the easy modifiability of the physical parameters, enables various computer graphics applications such as the artistic re-rendering of scenes, which we briefly demonstrate in the supplementary video. Overall, our per-scene model combines a unique set of favorable properties, including the interpretability of physical parameters, the ability to perform long-term predictions, and the synthesis of high-resolution images.
|
Can the proposed approach work on other physical models?
|
We show that our proposed method works for other physical systems, in this specific case two masses that are connected by a spring.
|
null | false
| null |
How can I go from my couch to running a 5K in 3 months?
|
Basically, you are asking how to go from not running at all to completing your first 5K. It is best to consult your doctor before starting any exercise program. Make sure you invest in an excellent pair of running shoes; try a local running shoe store for advice. Start off by measuring the time you are running. Run on alternate days and gradually increase the duration, being careful not to increase by more than 10% per run. Once you get up to 30 minutes of running every other day, start increasing the length of time running every fourth day by 10% again until you hit 45 minutes. It is ok to walk if you feel you need to. Then you will be ready for a 5K race.
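As a small illustration of the 10% rule, the sketch below prints a progression of run durations; the starting duration and run frequency are assumptions, and any plan should be adjusted to how your body responds.

```python
def running_plan(start_min=10.0, target_min=30.0, per_run_increase=0.10):
    """Print run durations that grow by at most 10% per run."""
    minutes, run = start_min, 1
    while minutes < target_min:
        print(f"run {run}: {minutes:.1f} min")
        minutes = min(minutes * (1 + per_run_increase), target_min)
        run += 1
    print(f"run {run}: {minutes:.1f} min  <- now extend toward 45")

running_plan()
```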
|
|
null | false
| 89
|
As shown in Figure FIGREF10, our framework consists of four components (1) Sentence Encoder and Relation Encoder, (2) Decoder, (3) Gated Attention Mechanism and (4) Dual Copy Mechanism. The sentence encoder and relation encoder encode the unstructured sentence and the structured answer-relevant relation, respectively. To select and combine the source information from the two encoders, a gated attention mechanism is employed to jointly attend both contextualized information sources, and a dual copy mechanism copies words from either the sentence or the relation.
We employ two encoders to integrate information from the unstructured sentence $S$ and the answer-relevant relation $M$ separately. Sentence encoder takes in feature-enriched embeddings including word embeddings $\mathbf {w}$, linguistic embeddings $\mathbf {l}$ and answer position embeddings $\mathbf {a}$. We follow BIBREF9 to transform POS and NER tags into continuous representation ($\mathbf {l}^p$ and $\mathbf {l}^n$) and adopt a BIO labelling scheme to derive the answer position embedding (B: the first token of the answer, I: tokens within the answer fragment except the first one, O: tokens outside of the answer fragment). For each word $w_i$ in the sentence $S$, we simply concatenate all features as input: $\mathbf {x}_i^s= [\mathbf {w}_i; \mathbf {l}^p_i; \mathbf {l}^n_i; \mathbf {a}_i]$. Here $[\mathbf {a};\mathbf {b}]$ denotes the concatenation of vectors $\mathbf {a}$ and $\mathbf {b}$.
We use bidirectional LSTMs to encode the sentence $(\mathbf {x}_1^s, \mathbf {x}_2^s, ..., \mathbf {x}_n^s)$ to get a contextualized representation for each token: $\overrightarrow{\mathbf {h}}^{s}_i = \overrightarrow{\textrm {LSTM}}(\mathbf {x}^s_i, \overrightarrow{\mathbf {h}}^{s}_{i-1}), \quad \overleftarrow{\mathbf {h}}^{s}_i = \overleftarrow{\textrm {LSTM}}(\mathbf {x}^s_i, \overleftarrow{\mathbf {h}}^{s}_{i+1}),$
where $\overrightarrow{\mathbf {h}}^{s}_i$ and $\overleftarrow{\mathbf {h}}^{s}_i$ are the hidden states at the $i$-th time step of the forward and the backward LSTMs. The output state of the sentence encoder is the concatenation of forward and backward hidden states: $\mathbf {h}^{s}_i=[\overrightarrow{\mathbf {h}}^{s}_i;\overleftarrow{\mathbf {h}}^{s}_i]$. The contextualized representation of the sentence is $(\mathbf {h}^{s}_1, \mathbf {h}^{s}_2, ..., \mathbf {h}^{s}_n)$.
For the relation encoder, we firstly join all items in the n-ary relation $M$ into a sequence. Then we only take answer position embedding as an extra feature for the sequence: $\mathbf {x}_i^m= [\mathbf {w}_i; \mathbf {a}_i]$. Similarly, we take another bidirectional LSTMs to encode the relation sequence and derive the corresponding contextualized representation $(\mathbf {h}^{m}_1, \mathbf {h}^{m}_2, ..., \mathbf {h}^{m}_n)$.
We use an LSTM as the decoder to generate the question. The decoder predicts the word probability distribution at each decoding timestep. At the $t$-th timestep, it reads the word embedding $\mathbf {w}_{t}$ and the hidden state $\mathbf {u}_{t-1}$ of the previous timestep to generate the current hidden state: $\mathbf {u}_t = \textrm {LSTM}(\mathbf {w}_{t}, \mathbf {u}_{t-1})$.
We design a gated attention mechanism to jointly attend the sentence representation and the relation representation. For sentence representation $(\mathbf {h}^{s}_1, \mathbf {h}^{s}_2, ..., \mathbf {h}^{s}_n)$, we employ the Luong2015EffectiveAT's attention mechanism to obtain the sentence context vector $\mathbf {c}^s_t$,
where $\mathbf {W}_a$ is a trainable weight. Similarly, we obtain the vector $\mathbf {c}^m_t$ from the relation representation $(\mathbf {h}^{m}_1, \mathbf {h}^{m}_2, ..., \mathbf {h}^{m}_n)$. To jointly model the sentence and the relation, a gating mechanism is designed to control the information flow from two sources:
where $\odot $ represents element-wise product and $\mathbf {W}_g, \mathbf {W}_h$ are trainable weights. Finally, the predicted probability distribution over the vocabulary $V$ is computed as $P_V = \mathrm {softmax}(\mathbf {W}_V \tilde{\mathbf {h}}_t + \mathbf {b}_V)$,
where $\mathbf {W}_V$ and $\mathbf {b}_V$ are parameters.
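As an illustration of the gated attention, the sketch below assumes a sigmoid gate computed from the two context vectors and an element-wise mix $g \odot \mathbf {c}^s_t + (1-g) \odot \mathbf {c}^m_t$; the authors' exact fusion may differ:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedDualAttention(nn.Module):
    def __init__(self, dec_dim, enc_dim):
        super().__init__()
        self.W_a_s = nn.Linear(enc_dim, dec_dim, bias=False)  # Luong "general" score (sentence)
        self.W_a_m = nn.Linear(enc_dim, dec_dim, bias=False)  # Luong "general" score (relation)
        self.W_g = nn.Linear(2 * enc_dim, enc_dim)            # gate over the two contexts
        self.W_h = nn.Linear(dec_dim + enc_dim, dec_dim)      # fused attentional state

    def attend(self, u_t, H, W):
        # u_t: (batch, dec_dim), H: (batch, seq_len, enc_dim)
        scores = torch.bmm(W(H), u_t.unsqueeze(-1)).squeeze(-1)  # (batch, seq_len)
        alpha = F.softmax(scores, dim=-1)
        c_t = torch.bmm(alpha.unsqueeze(1), H).squeeze(1)        # (batch, enc_dim)
        return c_t, alpha

    def forward(self, u_t, H_s, H_m):
        c_s, alpha_s = self.attend(u_t, H_s, self.W_a_s)
        c_m, alpha_m = self.attend(u_t, H_m, self.W_a_m)
        g = torch.sigmoid(self.W_g(torch.cat([c_s, c_m], dim=-1)))
        c_t = g * c_s + (1 - g) * c_m                            # gated information flow
        h_tilde = torch.tanh(self.W_h(torch.cat([u_t, c_t], dim=-1)))
        return h_tilde, alpha_s, alpha_m
```

The vocabulary distribution then follows as `F.softmax(W_V(h_tilde), dim=-1)`, matching the $P_V$ formula above.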
To deal with rare and unknown words, the decoder applies the pointing method BIBREF10, BIBREF11, BIBREF12 to allow copying a token from the input sentence at the $t$-th decoding step. We reuse the attention scores $\mathbf {\alpha }_{t}^s$ and $\mathbf {\alpha }_{t}^m$ to derive the copy probabilities over the two source inputs: $P_S(w) = \sum _{i:\, w_i = w} \alpha ^s_{t,i}$ and $P_M(w) = \sum _{j:\, w_j = w} \alpha ^m_{t,j}$.
Different from the standard pointing method, we design a dual copy mechanism to copy from two sources with two gates. The first gate is designed for determining copy tokens from two sources of inputs or generate next word from $P_V$, which is computed as $g^v_t = \text{sigmoid}(\mathbf {w}^v_g \tilde{\mathbf {h}}_t + b^v_g)$. The second gate takes charge of selecting the source (sentence or relation) to copy from, which is computed as $g^c_t = \text{sigmoid}(\mathbf {w}^c_g [\mathbf {c}_t^s;\mathbf {c}_t^m] + b^c_g)$. Finally, we combine all probabilities $P_V$, $P_S$ and $P_M$ through two soft gates $g^v_t$ and $g^c_t$. The probability of predicting $w$ as the $t$-th token of the question is:
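The display equation for the final distribution is missing from this extract; under the natural reading of the two gates, a plausible form is $P(w) = (1-g^v_t)\, P_V(w) + g^v_t \left[\, g^c_t\, P_S(w) + (1-g^c_t)\, P_M(w) \,\right]$. The sketch below implements this combination; the gate orientation and the scatter-based aggregation are assumptions, not the authors' exact formulation:

```python
import torch

def dual_copy_distribution(p_vocab, alpha_s, alpha_m, src_ids_s, src_ids_m, g_v, g_c):
    """Combine the generation distribution with two copy distributions.

    p_vocab:    (batch, V) generation distribution P_V
    alpha_s/m:  (batch, src_len) attention over sentence / relation tokens
    src_ids_*:  (batch, src_len) vocabulary ids (int64) of the source tokens
    g_v, g_c:   (batch, 1) gates: generate-vs-copy, sentence-vs-relation
    """
    # P_S(w): total attention mass on positions whose sentence token equals w
    p_copy_s = torch.zeros_like(p_vocab).scatter_add_(1, src_ids_s, alpha_s)
    p_copy_m = torch.zeros_like(p_vocab).scatter_add_(1, src_ids_m, alpha_m)
    return (1 - g_v) * p_vocab + g_v * (g_c * p_copy_s + (1 - g_c) * p_copy_m)
```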
Given the answer $A$, sentence $S$ and relation $M$, the training objective is to minimize the negative log-likelihood with regard to all parameters: $\mathcal {L} = -\sum _{Q \in \lbrace Q\rbrace } \text{log}\, P(Q|A,S,M;\theta )$,
where $\lbrace Q\rbrace $ is the set of all training instances, $\theta $ denotes model parameters and $\text{log}\, P(Q|A,S,M;\theta )$ is the conditional log-likelihood of $Q$.
In testing, our model aims to generate a question $Q$ by maximizing the conditional probability: $\hat{Q} = \arg \max _{Q^{\prime }} P(Q^{\prime }|A,S,M;\theta )$.
As shown in Figure 3, our framework consists of four components (1) Sentence Encoder and Relation Encoder, (2) Decoder, (3) Gated Attention Mechanism and (4) Dual Copy Mechanism.
|
What are the four components in the framework developed by the team?
|
(1) Sentence Encoder and Relation Encoder, (2) Decoder, (3) Gated Attention Mechanism and (4) Dual Copy Mechanism.
|
1704.00253
| false
| null |
By choosing English (En) as the pivot language, we perform pivot alignments for identical English segments on Europarl Fr-En and En-De parallel corpora BIBREF18 , constructing a multi-parallel corpus of Fr-En-De. Then each of the Fr*-De and Fr-De* pseudo parallel corpora is established from the multi-parallel data by applying the pivot language-based translation described in the previous section. For automatic translation, we utilize a pre-trained and publicly released NMT model for En $\rightarrow $ De and train another NMT model for En $\rightarrow $ Fr using the WMT'15 En-Fr parallel corpus BIBREF19 . A beam of size 5 is used to generate synthetic sentences. Lastly, to match the size of the training data, PSEUDOmix is established by randomly sampling half of each Fr*-De and Fr-De* corpus and mixing them together.
By choosing English (En) as the pivot language, we perform pivot alignments for identical English segments on Europarl Fr-En and En-De parallel corpora BIBREF18 , constructing a multi-parallel corpus of Fr-En-De. Then each of the Fr*-De and Fr-De* pseudo parallel corpora is established from the multi-parallel data by applying the pivot language-based translation described in the previous section.
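A minimal sketch of the pivot alignment and the final mixing step (the corpora are represented as lists of segment pairs; the NMT translation step producing the starred sides is omitted):

```python
import random

def build_multi_parallel(fr_en_pairs, en_de_pairs):
    """Join Fr-En and En-De corpora on identical English pivot segments -> Fr-En-De."""
    en_to_de = {}
    for en, de in en_de_pairs:
        en_to_de.setdefault(en, de)
    return [(fr, en, en_to_de[en]) for fr, en in fr_en_pairs if en in en_to_de]

def pseudo_mix(fr_star_de, fr_de_star, seed=0):
    """PSEUDOmix: randomly sample half of each pseudo-parallel corpus and mix them."""
    rng = random.Random(seed)
    mixed = rng.sample(fr_star_de, len(fr_star_de) // 2) \
          + rng.sample(fr_de_star, len(fr_de_star) // 2)
    rng.shuffle(mixed)
    return mixed
```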
|
How do they align the synthetic data?
|
The answers are shown as follows:
* By choosing English (En) as the pivot language, we perform pivot alignments for identical English segments on Europarl Fr-En and En-De parallel corpora BIBREF18 , constructing a multi-parallel corpus of Fr-En-De. Then each of the Fr*-De and Fr-De* pseudo parallel corpora is established from the multi-parallel data by applying the pivot language-based translation described in the previous section.
|
1704.05572
| false
| null |
We consider two knowledge sources. The Sentence corpus (S) consists of domain-targeted $\sim $80K sentences and 280 GB of plain text extracted from web pages used by BIBREF6 aristo2016:combining. This corpus is used by the IR solver and also used to create the tuple KB T and on-the-fly tuples $T^{\prime }_{qa}$ . Additionally, TableILP uses $\sim $70 Curated tables (C) designed for 4th grade NY Regents exams.
We use the text corpora (S) from BIBREF6 aristo2016:combining to build our tuple KB. For each test set, we use the corresponding training questions $Q_\mathit {tr}$ to retrieve domain-relevant sentences from S. Specifically, for each multiple-choice question $(q,A) \in Q_\mathit {tr}$ and each choice $a \in A$ , we use all non-stopword tokens in $q$ and $a$ as an ElasticSearch query against S. We take the top 200 hits, run Open IE v4, and aggregate the resulting tuples over all $a \in A$ and over all questions in $Q_\mathit {tr}$ to create the tuple KB (T).
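A sketch of that retrieval-and-extraction loop is below; the Elasticsearch query shape follows the 8.x Python client, while the index name, the stopword list and the `run_openie` wrapper around Open IE v4 (a Java tool) are illustrative assumptions:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch()
STOPWORDS = {"the", "a", "an", "of", "in", "is", "what", "which"}  # stand-in list

def run_openie(sentence):
    """Placeholder wrapper around Open IE v4; should return (subj, rel, obj) tuples."""
    raise NotImplementedError

def build_tuple_kb(training_questions, index="sentence_corpus", top_k=200):
    kb = set()
    for q, choices in training_questions:          # (question text, answer choices)
        for a in choices:
            terms = [t for t in (q + " " + a).lower().split() if t not in STOPWORDS]
            res = es.search(index=index,
                            query={"match": {"text": " ".join(terms)}},
                            size=top_k)            # top 200 hits per query
            for hit in res["hits"]["hits"]:
                kb.update(run_openie(hit["_source"]["text"]))
    return kb
```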
The Sentence corpus (S) consists of domain-targeted $\sim $80K sentences and 280 GB of plain text extracted from web pages used by BIBREF6 aristo2016:combining.
We use the text corpora (S) from BIBREF6 aristo2016:combining to build our tuple KB.
We take the top 200 hits, run Open IE v4, and aggregate the resulting tuples over all $a \in A$ and over all questions in $Q_\mathit {tr}$ to create the tuple KB (T).
|
What was the textual source to which OpenIE was applied?
|
The answers are shown as follows:
* domain-targeted $~$ 80K sentences and 280 GB of plain text extracted from web pages used by BIBREF6 aristo2016:combining
|
null | false
| null |
Which is the Silicon Valley of India?
|
Bangalore, in the state of Karnataka, is widely known as the information technology capital or Silicon Valley of India due to the presence of most of the tech giants and a thriving start-up ecosystem. The region's large talent pool also helps fuel this booming ecosystem. In recent years, cities like Chennai, Pune, Hyderabad, and Noida have also attracted many firms to establish their centers.
|
|
null | false
| null |
Gary C. Butler was the CEO and president of Automatic Data Processing. He held this position from August 2006 through November 2011, and had been with the company for 37 years. From 1990 to 1995, he served as Group President for ADP's Dealer Services Group; and from 1995 to 1998 he served in the same capacity for ADP's Employer Services Group. Prior to assuming his role as CEO in 2006, Butler was President and Chief Operating Officer of the company.
|
What did Gary C. Butler do?
|
Gary C. Butler was the CEO and president of Automatic Data Processing.
|
null | false
| null |
Mariano Sánchez Martínez (born 28 January 1978) is a Spanish former professional footballer who played as a defensive midfielder.
He appeared in 108 Segunda División games over three seasons, scoring two goals for Cartagena.
Club career
Born in San Pedro del Pinatar, Region of Murcia, Sánchez did not reach the Segunda División B until he was 26, in 2004, arriving at CD Alcoyano from amateurs AD Mar Menor-San Javier. In the following year he moved to another club at that level, FC Cartagena, helping it win promotion to Segunda División in his fourth season.
Sánchez made his debut in the competition on 29 August 2009 at the age of 31 years and seven months, playing the full 90 minutes in a 1–0 away win against Girona FC. He scored his first league goal on 22 May 2010 in the 3–5 home loss to Levante UD, and never appeared in fewer than 34 league matches during his three seasons in that tier, suffering relegation in his last and renewing his contract for a further two years in June 2012.
On 14 May 2014, the 36-year-old Sánchez announced he would retire at the end of the campaign while hoping to help his team win promotion, which eventually did not happen.
Personal life
Sánchez rejected an offer to play youth football for Real Murcia when he was 18, after deciding to move to Madrid to study architecture. Not being able to enter Real Madrid's youth system, he chose to retire from football.
After his playing days, Sánchez continued to work as an architect. Still as an active player, he was the figurehead behind the creation of the sports complex Pinatar Arena, in his hometown.
|
Is he still an active football player?
|
No, he retired at the age of 36, announcing it on May 14, 2014. However, he still works as an architect, the profession he studied for in Madrid.
|
null | false
| null |
For the list given, classify which items are considered vegan, vegetarian and non-vegetarian: milk, tofu, curd, chicken, mutton, bacon, clam, tomato, potato, onion, egg, beef, pork, mango, apple, salmon, soy milk, paneer
|
Vegan: tofu, soy milk, tomato, potato, onion, mango, apple
Vegetarian: milk, curd, paneer, tofu, soy milk, tomato, potato, onion, mango, apple
Non-vegetarian: chicken, mutton, bacon, clam, egg, beef, pork, salmon
|
|
1611.09441
| true
| null |
We segment a hashtag into meaningful English phrases. The '#' character is removed from the tweet text. For example, #killthebill is transformed into kill the bill.
In order to achieve this, we use a dictionary of English words. We recursively break the hashtagged phrase into segments and match the segments in the dictionary until we get a complete set of meaningful words. This is important since many users tend to post tweets where the actual message of the tweet is expressed in form of terse hashtagged phrases.
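A minimal recursive implementation of this dictionary-based segmentation, with memoization added for efficiency (the word set here is a stand-in for a full English dictionary):

```python
from functools import lru_cache

DICTIONARY = {"kill", "the", "bill", "not", "my", "president"}  # stand-in word list

@lru_cache(maxsize=None)
def segment_hashtag(tag):
    """Return a list of dictionary words covering `tag`, or None if impossible."""
    tag = tag.lstrip("#").lower()
    if not tag:
        return []
    for i in range(len(tag), 0, -1):   # prefer longer prefixes first
        if tag[:i] in DICTIONARY:
            rest = segment_hashtag(tag[i:])
            if rest is not None:
                return [tag[:i]] + rest
    return None

print(segment_hashtag("#killthebill"))  # ['kill', 'the', 'bill']
```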
The sentiment of url: Since almost all the articles are written in well-formatted English, we analyze the sentiment of the first paragraph of the article using the Stanford Sentiment Analysis tool BIBREF4 . It predicts sentiment for each sentence within the article. We calculate the fraction of sentences that are negative, positive, and neutral and use these three values as features.
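The three url-sentiment features then reduce to simple fractions; a small sketch, assuming a hypothetical `sentence_sentiment` helper that wraps the Stanford tool and returns 'positive', 'negative' or 'neutral' per sentence:

```python
def url_sentiment_features(first_paragraph_sentences, sentence_sentiment):
    """Fractions of negative / positive / neutral sentences in an article's first paragraph."""
    labels = [sentence_sentiment(s) for s in first_paragraph_sentences]
    n = max(len(labels), 1)   # guard against empty paragraphs
    return (labels.count("negative") / n,
            labels.count("positive") / n,
            labels.count("neutral") / n)
```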
We segment a hashtag into meaningful English phrases.
In order to achieve this, we use a dictionary of English words.
The sentiment of url: Since almost all the articles are written in well-formatted English, we analyze the sentiment of the first paragraph of the article using the Stanford Sentiment Analysis tool BIBREF4 .
|
Do the authors report only on English language data?
|
Yes.
|
null | false
| null |
Which items from this list are types of poems and which are types of insurance: sonnet, disability, haiku, limerick, auto, umbrella, whole life, and homeowners.
|
The types of poems from this list include: sonnet, haiku, and limerick. The types of insurance include: disability, auto, umbrella, whole life, and homeowners.
|
|
null | false
| null |
Vanessa and Lauren are both 32-year-old women with different levels of exercise experience. Vanessa, who weighs 124 lbs, has lifted weights before but Lauren, who weighs 166 lbs, has never participated in resistance training. Both women participate in initial fitness assessments and want your advice about starting a muscular fitness program at your exercise facility. Choose a different test to measure each woman's upper body muscular strength. Briefly explain the procedures for each assessment. What common errors should you watch for?
|
For Vanessa, we’ll choose a one-rep max (1-RM) assessment, specifically with the bench press, because she has experience with lifting weights. To start, she should warm up by completing several submaximal repetitions. We’re going to select an initial weight of 50-70% of her capacity and progressively increase resistance by 2.5 to 20 kg until she cannot complete the repetitions; the final weight lifted successfully is her absolute 1-RM. Some common errors we want to watch for: make sure the repetitions are performed at the same speed of movement and range of motion to ensure consistency. Additionally, during setup, we should check the surroundings for hazards, set the bar to the right height for Vanessa based on her arm length, remove safety bars from both sides of the rack, secure weights correctly with safety clips, and have a spotter present. We should also make sure there is a proper progression of weight lifted, that she’s not bouncing the bar off her chest, that she’s maintaining a neutral spine, and that she is breathing during the lift.

For Lauren, because she has never participated in resistance training, we will select the flexed arm support test, as she may not be able to perform the dynamic push-up assessment properly. To start, she should have her hands outside her shoulders and keep her back and legs straight, supporting her body in a push-up position from the knees. She should lower her body until the upper arm is parallel to the floor or the elbow is flexed at 90 degrees. Some common mistakes include not breathing during the assessment, placing the hands too wide or too narrow, not maintaining a straight back and legs, and not lowering the body enough for the assessment.
|
|
1804.11297
| false
| null |
To test if the learned representations can separate phonetic categories, we use a minimal pair ABX discrimination task BIBREF19 , BIBREF20 . It only requires defining a dissimilarity function $d$ between speech tokens; no external training algorithm is needed. We define the ABX-discriminability of category $x$ from category $y$ as the probability that $A$ and $X$ are further apart than $B$ and $X$ when $A$ and $X$ are from category $x$ and $B$ is from category $y$, according to the dissimilarity function $d$. Here, we focus on phone triplet minimal pairs: sequences of 3 phonemes that differ only in the central one (“beg”-“bag”, “api”-“ati”, etc.). For the within-speaker task, all the phone triplets belong to the same speaker. Finally, the scores for every pair of central phones are averaged and subtracted from 1 to yield the reported within-talker ABX error rate. For the across-speaker task, $A$ and $B$ belong to the same speaker, and $X$ to a different one. The scores for a given minimal pair are first averaged across all of the pairs of speakers for which this contrast can be made. As above, the resulting scores are averaged over all contexts over all pairs of central phones and converted to an error rate.
To test if the learned representations can separate phonetic categories, we use a minimal pair ABX discrimination task BIBREF19 , BIBREF20 .
For the within-speaker task, all the phone triplets belong to the same speaker. Finally, the scores for every pair of central phones are averaged and subtracted from 1 to yield the reported within-talker ABX error rate. For the across-speaker task, $A$ and $B$ belong to the same speaker, and $X$ to a different one. The scores for a given minimal pair are first averaged across all of the pairs of speakers for which this contrast can be made. As above, the resulting scores are averaged over all contexts over all pairs of central phones and converted to an error rate.
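A sketch of the core ABX score for one pair of categories, assuming token representations and a dissimilarity function `d` (e.g. DTW over frame-level distances); counting ties as half is a common convention not stated in the excerpt:

```python
from itertools import permutations

def abx_score(tokens_x, tokens_y, d):
    """P(d(A, X) > d(B, X)) with A, X distinct tokens of category x and B of category y."""
    wins, total = 0.0, 0
    for a, x in permutations(tokens_x, 2):   # A and X distinct tokens of x
        for b in tokens_y:
            total += 1
            dax, dbx = d(a, x), d(b, x)
            if dax > dbx:
                wins += 1.0
            elif dax == dbx:
                wins += 0.5
    return wins / total if total else 0.0

# The reported ABX error rate is 1 minus the score, averaged over all
# contexts and all pairs of central phones, as described above.
```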
|
What is the metric that is measures in this paper?
|
error rate in a minimal pair ABX discrimination task
|
null | false
| null |
I want to write some new articles for my data blog. Can you give me some snappy ideas?
|
Lakehouse vs Warehouse - Hype vs Reality
ETL in the 2020s - What's in and what's out
Is the Data Warehouse dead - modern analytics for the modern era
Open data sharing - The whys, hows and gotchas
|
|
null | false
| null |
Electricity generation is the process of generating electric power from sources of primary energy. For utilities in the electric power industry, it is the stage prior to its delivery (transmission, distribution, etc.) to end users or its storage (using, for example, the pumped-storage method).
The first power plants used water power or coal. Today, various energy sources are used, such as coal, nuclear, natural gas, hydroelectric, wind, and oil, as well as solar energy, tidal power, and geothermal sources.
|
How is electricity generated?
|
Electricity generation is the process of generating electric power from sources of primary energy. For utilities in the electric power industry, it is the stage prior to its delivery (transmission, distribution, etc.) to end users or its storage (using, for example, the pumped-storage method).
Electricity is not freely available in nature, so it must be "produced" (transforming other forms of energy into electricity). Production is carried out in power stations (also called "power plants"). Electricity is most often generated at a power plant by electromechanical generators, primarily driven by heat engines fueled by combustion or nuclear fission but also by other means such as the kinetic energy of flowing water and wind. Other energy sources include solar photovoltaics and geothermal power. There are also exotic and speculative methods to recover energy, such as proposed fusion reactor designs which aim to directly extract energy from intense magnetic fields generated by fast-moving charged particles generated by the fusion reaction.
|
1811.01299
| true
| null |
Once the S-V-O is generated, Text2Visual provides users with visual components that convey the S-V-O text meanings.
Once the S-V-O is generated, Text2Visual provides users with visual components that convey the S-V-O text meanings.
|
Does their solution involve connecting images and text?
|
Yes.
|
null | false
| null |
Identify which instrument is string or percussion: Janggu, Chapman Stick
|
Chapman Stick is string, Janggu is percussion.
|
|
1611.02550
| false
| null |
FLOAT SELECTED: Table 1: Final test set results in terms of average precision (AP). Dimensionalities marked with * refer to dimensionality per frame for DTW-based approaches. For CNN and LSTM models, results are given as means over several training runs (5 and 10, respectively) along with their standard deviations.
FLOAT SELECTED: Table 1: Final test set results in terms of average precision (AP). Dimensionalities marked with * refer to dimensionality per frame for DTW-based approaches. For CNN and LSTM models, results are given as means over several training runs (5 and 10, respectively) along with their standard deviations.
|
By how much do they outperform previous results on the word discrimination task?
|
Their best average precision tops the previous best result by 0.202.
|
null | false
| null |
Identify which instrument is string or percussion: Wind machine, Langeleik
|
Langeleik is string, Wind machine is percussion.
|
|
null | false
| null |
Name a Jane Austen book.
|
Pride and Prejudice.
|
|
null | false
| null |
Hawking was born on 8 January 1942 in Oxford to Frank and Isobel Eileen Hawking (née Walker). Hawking's mother was born into a family of doctors in Glasgow, Scotland. His wealthy paternal great-grandfather, from Yorkshire, over-extended himself buying farm land and then went bankrupt in the great agricultural depression during the early 20th century. His paternal great-grandmother saved the family from financial ruin by opening a school in their home. Despite their families' financial constraints, both parents attended the University of Oxford, where Frank read medicine and Isobel read Philosophy, Politics and Economics. Isobel worked as a secretary for a medical research institute, and Frank was a medical researcher. Hawking had two younger sisters, Philippa and Mary, and an adopted brother, Edward Frank David (1955–2003).
https://en.wikipedia.org/wiki/Stephen_Hawking
|
From this passage, extract the names of Hawking's siblings, including both by birth and through adoption, and separate by bullets.
|
The names of Hawking's siblings are:
-Philippa
-Mary
-Edward Frank David
|
null | false
| null |
Tell me about Indiana
|
Indiana is a state in the Midwest region of the United States and has the motto "Crossroads of America" due to the large number of interstates that pass through the state. The capital is Indianapolis, which is home to over 800,000 people. Indiana is known for its agriculture and the popularity of basketball, particularly the Indiana University Hoosiers basketball team.
|
|
null | false
| null |
Barack Hussein Obama II (/bəˈrɑːk huːˈseɪn oʊˈbɑːmə/ bə-RAHK hoo-SAYN oh-BAH-mə; born August 4, 1961) is an American former politician who served as the 44th president of the United States from 2009 to 2017. A member of the Democratic Party, he was the first African-American president of the United States. Obama previously served as a U.S. senator representing Illinois from 2005 to 2008 and as an Illinois state senator from 1997 to 2004, and worked as a civil rights lawyer before holding public office.
Obama was born in Honolulu, Hawaii. After graduating from Columbia University in 1983, he worked as a community organizer in Chicago. In 1988, he enrolled in Harvard Law School, where he was the first black president of the Harvard Law Review. After graduating, he became a civil rights attorney and an academic, teaching constitutional law at the University of Chicago Law School from 1992 to 2004. Turning to elective politics, he represented the 13th district in the Illinois Senate from 1997 until 2004, when he ran for the U.S. Senate. Obama received national attention in 2004 with his March Senate primary win, his well-received keynote address at the July Democratic National Convention, and his landslide November election to the Senate. In 2008, after a close primary campaign against Hillary Clinton, he was nominated by the Democratic Party for president and chose Joe Biden as his running mate. Obama was elected over Republican nominee John McCain in the presidential election and was inaugurated on January 20, 2009. Nine months later, he was named the 2009 Nobel Peace Prize laureate, a decision that drew a mixture of praise and criticism.
Obama's first-term actions addressed the global financial crisis and included a major stimulus package, a partial extension of George W. Bush's tax cuts, legislation to reform health care, a major financial regulation reform bill, and the end of a major US military presence in Iraq. Obama also appointed Supreme Court justices Sonia Sotomayor and Elena Kagan, the former being the first Hispanic American on the Supreme Court. He ordered the counterterrorism raid which killed Osama bin Laden and downplayed Bush's counterinsurgency model, expanding air strikes and making extensive use of special forces while encouraging greater reliance on host-government militaries.
|
Can you list a few cities that Obama lived in, based on the text below?
|
Honolulu and Chicago.
|
null | false
| 200
|
Datasets and Models We evaluate our adversarial attacks on different text classification datasets from tasks such as sentiment classification, subjectivity detection and question type classification. Amazon, Yelp, IMDB are sentence-level sentiment classification datasets which have been used in recent work BIBREF15 while MR BIBREF16 contains movie reviews based on sentiment polarity. MPQA BIBREF17 is a dataset for opinion polarity detection, Subj BIBREF18 for classifying a sentence as subjective or objective and TREC BIBREF19 is a dataset for question type classification.
We use 3 popular text classification models: word-LSTM BIBREF20, word-CNN BIBREF21 and a fine-tuned BERT BIBREF12 base-uncased classifier. For each dataset we train the model on the training data and perform the adversarial attack on the test data. For complete model details refer to Appendix.
As a baseline, we consider TextFooler BIBREF11 which performs synonym replacement using a fixed word embedding space BIBREF22. We only consider the top $K{=}50$ synonyms from the MLM predictions and set a threshold of 0.8 for the cosine similarity between USE based embeddings of the adversarial and input text.
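A sketch of this candidate filtering, using the HuggingFace fill-mask pipeline for the MLM predictions; the `use_embed` placeholder standing in for Universal Sentence Encoder embeddings is an assumption:

```python
import numpy as np
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

def use_embed(text):
    """Placeholder for Universal Sentence Encoder embeddings (e.g. via tensorflow_hub)."""
    raise NotImplementedError

def candidate_replacements(masked_text, original_text, k=50, sim_threshold=0.8):
    """Keep top-k MLM predictions whose USE cosine similarity to the input is >= 0.8."""
    orig = use_embed(original_text)
    kept = []
    for pred in fill_mask(masked_text, top_k=k):
        cand = use_embed(pred["sequence"])
        sim = float(np.dot(orig, cand)
                    / (np.linalg.norm(orig) * np.linalg.norm(cand) + 1e-9))
        if sim >= sim_threshold:
            kept.append((pred["token_str"], sim))
    return kept
```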
Results We perform the 4 modes of our attack and summarize the results in Table . Across datasets and models, our BAE attacks are almost always more effective than the baseline attack, achieving significant drops of 40-80% in test accuracies, with higher average semantic similarities as shown in parentheses. BAE-R+I is the strongest attack since it allows both replacement and insertion at the same token position, with just one exception. We observe a general trend that the BAE-R and BAE-I attacks often perform comparably, while the BAE-R/I and BAE-R+I attacks are much stronger. We observe that the BERT-based classifier is more robust to the BAE and TextFooler attacks than the word-LSTM and word-CNN models which can be attributed to its large size and pre-training on a large corpus.
The baseline attack is often stronger than the BAE-R and BAE-I attacks for the BERT based classifier. We attribute this to the shared parameter space between the BERT-MLM and the BERT classifier before fine-tuning. The predicted tokens from BERT-MLM may not drastically change the internal representations learned by the BERT classifier, hindering their ability to adversarially affect the classifier prediction.
Effectiveness We study the effectiveness of BAE on limiting the number of R/I operations permitted on the original text. We plot the attack performance as a function of maximum $\%$ perturbation (ratio of number of word replacements and insertions to the length of the original text) for the TREC dataset. From Figure , we clearly observe that the BAE attacks are consistently stronger than TextFooler. The classifier models are relatively robust to perturbations up to 20$\%$, while the effectiveness saturates at 40-50$\%$. Surprisingly, a 50$\%$ perturbation for the TREC dataset translates to replacing or inserting just 3-4 words, due to the short text lengths.
Qualitative Examples We present adversarial examples generated by the attacks on a sentence from the IMDB and Yelp datasets in Table . BAE produces more natural looking examples than TextFooler as tokens predicted by the BERT-MLM fit well in the sentence context. TextFooler tends to replace words with complex synonyms, which can be easily detected. Moreover, BAE's additional degree of freedom to insert tokens allows for a successful attack with fewer perturbations.
Human Evaluation We consider successful adversarial examples generated from the Amazon and IMDB datasets and verify their sentiment and grammatical correctness. Human evaluators annotated the sentiment and the grammar (Likert scale of 1-5) of randomly shuffled adversarial examples and original texts. From Table , BAE and TextFooler have inferior accuracies compared to the Original, showing they are not always perfect. However, BAE has much better grammar scores, suggesting more natural looking adversarial examples.
Ablation Study We analyze the benefits of R/I operations in BAE in Table . From the table, the splits $\mathbb {A}$ and $\mathbb {B}$ are the $\%$ of test points which compulsorily need I and R operations respectively for a successful attack. We can observe that the split $\mathbb {A}$ is larger than $\mathbb {B}$ thereby indicating the importance of the I operation over R. Test points in split require both R and I operations for a successful attack. Interestingly, split is largest for Subj, which is the most robust to attack (Table ) and hence needs both R/I operations. Thus, this study gives positive insights towards the importance of having the flexibility to both replace and insert words.
Refer to the Appendix for additional results, effectiveness graphs and details of human evaluation.
We calculate the sentiment accuracy by asking 3 annotators to predict the sentiment for each sentence in this shuffled set.
|
How to calculate the sentiment accuracy?
|
The authors calculate the sentiment accuracy by asking 3 annotators to predict the sentiment for each sentence in this shuffled set.
|
null | false
| null |
Which female Canadian director has directed episodes of Better Call Saul, The X-Files, and Game of Thrones?
|
Michelle MacLaren
|
|
null | false
| null |
What are financial instruments?
|
Financial instruments are monetary contracts between parties. They can be created, traded, modified and settled. They can be cash (currency), evidence of an ownership interest in an entity or a contractual right to receive or deliver in the form of currency (forex); debt (bonds, loans); equity (shares); or derivatives (options, futures, forwards).
International Accounting Standards IAS 32 and 39 define a financial instrument as "any contract that gives rise to a financial asset of one entity and a financial liability or equity instrument of another entity".
Financial instruments may be categorized by "asset class" depending on whether they are equity-based (reflecting ownership of the issuing entity) or debt-based (reflecting a loan the investor has made to the issuing entity). If the instrument is debt it can be further categorized into short-term (less than one year) or long-term. Foreign exchange instruments and transactions are neither debt- nor equity-based and belong in their own category.
|