| paper_id (string, nullable) | yes_no (bool) | paper_index (int64, nullable) | evidence (string) | question (string) | answer (string) |
|---|---|---|---|---|---|
null | false
| null |
Which characters belong to DC or Marvel Universe? Joseph Martin, Ghost Rider
|
Ghost Rider is Marvel, Joseph Martin is DC
|
|
null | false
| null |
Which F1 drivers race for Red Bull Racing in the 2023 season?
|
Max Verstappen and Sergio Perez
|
|
null | false
| null |
When was the last time the Cubs won the World Series?
|
The Chicago Cubs last won the World Series in 2016, defeating the Cleveland Indians (now the Cleveland Guardians) in seven games. It was a memorable World Series for Chicago, considering the Cubs had not played in one for 71 years. Game 7 was very close: the Cubs blew a three-run lead in the 8th inning, sending the game to extra innings. They ultimately won the series in 10 innings, with the final out a ground ball fielded by a smiling Kris Bryant, who threw to Anthony Rizzo.
|
|
null | false
| null |
Office of War Information?
|
Franklin D. Roosevelt created the Office of War Information in 1942. It helped centralize and coordinate government messaging and information related to the war effort.
|
|
null | false
| null |
Brazilian jiu-jitsu was initially developed in 1925 by Brazilian brothers Carlos, Oswaldo, Gastão Jr., George, and Hélio Gracie, after Carlos was taught jiu-jitsu by a travelling Japanese judoka, Mitsuyo Maeda, who himself mastered his ground fighting while interacting with Taro Miyake (Tanabe student), Sadakazu Uyenishi (Handa, Tanabe) and Yukio Tani (Tenjin Shinyo-ryu) and catch wrestlers in Europe. Later on, the Gracie family developed their own self-defense system, and published Gracie Jiu-Jitsu.
|
Which martial art or arts did Brazilian jiu-jitsu derive from?
|
Brazilian jiu-jitsu was derived from judo and catch wrestling.
|
null | false
| null |
Why is it important to be a listener?
|
I genuinely think we need to practice being listeners more, putting ourselves in other people's shoes more, looking beyond our own problems more. The only way to foster truly deep connections is to listen to others as much as we want to be listened to. I cannot stress enough the importance of cultivating a natural curiosity for other people's stories and feeling for them as much as we feel for ourselves and our own issues. In the end, it's all about loving as much as we want to be loved.
|
|
null | false
| null |
Where is the best place to get married?
|
In nature
|
|
null | false
| null |
After the widening project in 1992, Interstate 275 was increased to eight lanes on the bridge itself. However, this did not increase capacity on either end of the bridge. Backups were still seen on the Howard Frankland heading into Tampa, primarily due to a bottleneck at the SR 60/Veterans Expressway exit. On the St. Petersburg side, after a comprehensive reconstruction project that took over ten years, lane counts were increased from four lanes prior to the bridge to six lanes through downtown St. Petersburg, and eight lanes from Gandy Boulevard to the bridge.
A reconstruction project was planned to begin in 2017 for the new Gateway Expressway project, a plan to build a new toll road connecting different parts of Pinellas County. However, FDOT planned to reconstruct the interstate in smaller phases rather than the original larger two-phase project, and the start of construction was delayed to 2020.
Once the reconstruction project is finished in 2025, major traffic congestion on the Howard Frankland Bridge is expected to be significantly reduced by the addition of new lanes. On January 7, 2021, FDOT postponed the start date for removing an overpass and the corresponding exit ramp over I-275 by a week, to January 16, 2021. The removal had been scheduled to run from 8 p.m. EST on January 9 through January 10. This work was part of the US$600 million toll road project. From 8 p.m. on January 16 to 12 p.m. on January 17, 2021, workers removed the 4th Street North interchange bridge (overpass). Exit 32 was closed until late 2021 while the new overpass was constructed.
Overall construction on this large project began in fall 2020 and completion is expected in late 2025. A completely new bridge will carry southbound traffic. Once that bridge is completed, the existing southbound bridge will be converted to carry northbound traffic. When all traffic is moved to the final alignment, the existing northbound bridge will be removed. The newly built bridge for southbound traffic will include a separate pathway for pedestrian and bicycle traffic, and additional vehicle traffic lanes.
|
Based on the reference text, what were the start date and the expected completion date of the reconstruction project on the Howard Frankland Bridge?
|
The reconstruction project on the Howard Frankland Bridge began in fall 2020 and is expected to be completed in late 2025.
|
null | false
| null |
Who wrote The Godfather?
|
The Godfather was written by Mario Puzo, originally published in 1969.
|
|
null | false
| null |
How much caffeine is in a cup of coffee on average? Does the size of the cup matter?
|
The short answer to this is that it depends on several factors, including the bean, where it was grown, how it was roasted, and how the coffee is brewed. There are also decaf versions, which have close to no caffeine. Normal brew, espresso, instant, French press, drip coffee, pour over, and cold brew all tend to vary slightly in the caffeine content that ultimately makes it into your cup. In general you can expect on average 95 mg of caffeine from a normal-sized drip coffee.
Another example from a reputable coffee chain (Dunkin' Donuts) claims the following amounts per size (a quick per-ounce normalization follows the list):
Small (10 oz): 215 mg
Medium (16 oz): 302 mg
Large (20 oz): 431 mg
Extra large (24 oz): 517 mg
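A quick way to sanity-check those numbers, and to see that cup size scales caffeine roughly linearly, is to normalize them per ounce. A small illustrative Python snippet using only the figures quoted above:

```python
# Per-ounce normalization of the Dunkin' Donuts figures listed above.
sizes = {"Small": (10, 215), "Medium": (16, 302),
         "Large": (20, 431), "Extra large": (24, 517)}
for name, (ounces, caffeine_mg) in sizes.items():
    print(f"{name}: {caffeine_mg / ounces:.1f} mg per oz")
# Every size lands near ~20 mg per ounce, so yes: cup size matters,
# and caffeine grows roughly in proportion to it.
```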
Too much caffeine can have health risks that one should be aware of.
|
|
null | false
| null |
He was the Democratic leader of the state senate from 1996 to 2008, but he stepped down to explore a run for Governor of Indiana. From 2008 to 2014 Young served as the Minority Leader Pro Tempore. Young is also a co-founder and member of the Rural Caucus. Young served as the Ranking Democrat on the Agriculture and Natural Resources and Local Government committees.
|
From the given text, please jot down some points about Richard D. Young.
|
1. From 1996 to 2008, Richard D. Young served as the Democratic leader of the state senate, but he resigned to consider running for governor of Indiana.
2. Young served as the Minority Leader Pro Tempore from 2008 until 2014.
3. Young joined the Rural Caucus as a member and co-founder.
4. Young served as the top Democrat on the committees for local government and agriculture and natural resources.
|
null | false
| null |
Why do humans need sunshine?
|
It makes people feel warm. Sunshine also triggers the body's production of vitamin D, which helps people feel more energetic. Without enough sunshine, people can become depressed.
|
|
null | false
| null |
Explain different pizza styles
|
New York style pizza is composed of a thin crust that is cooked to a crispy consistency. The sauce is typically a basic tomato puree with some seasoning. Cheese and sauce are spread uniformly to within a half inch of the edge of the circular dough. Slices of a New York pizza are portable and are often eaten without utensils. New York pizza has a traditional set of optional toppings, including pepperoni, mushrooms, sausage and onions.
Chicago style pizza is often cooked in a deep, round pan. The crust is composed of a flaky dough that is similar to a pastry. Sauces are often rich and have a chunky consistency. Slices of this pizza are usually eaten with the assistance of a knife and fork.
Detroit style pizza is often cooked in a deep, rectangular pan. The bottom of the pan is often oiled or seasoned so that the bottom of the crust is browned and crispy. Cheese is applied to the edge of the pan, and this provides a caramelized cheese edge. The dough is thick and airy, and slices are sturdy enough to be eaten by hand.
Neapolitan style pizza is similar to New York but more rustic in nature. The dough is often thicker than New York pizza's, and cheese slices are spread sparsely along with some basil leaves.
California style pizza is similar to New York style pizza but has non-standard toppings and sauces. For instance, a California pizza may use a pesto base as the sauce and goat cheese with walnuts as toppings.
|
|
null | false
| 129
|
Relation classification is the task of assigning sentences with two marked entities to a predefined set of relations. The sentence “We poured the <e1>milk</e1> into the <e2>pumpkin mixture</e2>.”, for example, expresses the relation Entity-Destination(e1,e2). While early research mostly focused on support vector machines or maximum entropy classifiers BIBREF0 , BIBREF1 , recent research showed performance improvements by applying neural networks (NNs) BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 on the benchmark data from SemEval 2010 shared task 8 BIBREF8 .
This study investigates two different types of NNs: recurrent neural networks (RNNs) and convolutional neural networks (CNNs) as well as their combination. We make the following contributions:
(1) We propose extended middle context, a new context representation for CNNs for relation classification. The extended middle context uses all parts of the sentence (the relation arguments, left of the relation arguments, between the arguments, right of the arguments) and pays special attention to the middle part.
(2) We present connectionist bi-directional RNN models which are especially suited for sentence classification tasks since they combine all intermediate hidden layers for their final decision. Furthermore, the ranking loss function is introduced for the RNN model optimization which has not been investigated in the literature for relation classification before.
(3) Finally, we combine CNNs and RNNs using a simple voting scheme and achieve new state-of-the-art results on the SemEval 2010 benchmark dataset.
We present connectionist bi-directional RNN models which are especially suited for sentence classification tasks since they combine all intermediate hidden layers for their final decision.
|
Why connectionist bi-directional RNN models are especially suited for sentence classification tasks?
|
Since they combine all intermediate hidden layers for their final decision.
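To make this concrete, here is a minimal sketch of our reading of that idea: a bi-directional RNN classifier whose decision combines every intermediate hidden state rather than only the final one. This is an illustrative PyTorch sketch with assumed names and sizes, not the paper's code:

```python
import torch
import torch.nn as nn

class ConnectionistBiRNN(nn.Module):
    """Sentence classifier that pools ALL per-timestep hidden states."""
    def __init__(self, vocab_size, hidden_dim, num_classes):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_dim)
        self.rnn = nn.GRU(hidden_dim, hidden_dim,
                          bidirectional=True, batch_first=True)
        self.classify = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, token_ids):                     # (batch, seq_len)
        states, _ = self.rnn(self.embed(token_ids))   # (batch, seq_len, 2*hidden)
        combined = states.sum(dim=1)                  # combine all intermediate states
        return self.classify(combined)                # class logits
```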
|
null | false
| null |
What are some good ski resorts for beginners?
|
Most ski resorts have excellent beginner areas (often called bunny hills), which are labeled as "green runs". These areas are also where beginner ski lessons are taught. In some cases, though, beginners prefer to go to smaller or less expensive ski resorts, as they are not able to take advantage of the more difficult terrain on a larger mountain. In that case, a smaller, cheaper ski resort might be better for a beginner; these tend to be located on smaller mountains and can often be more convenient to nearby cities and towns, although the snow quality and the number of ski runs are often significantly worse.
|
|
2003.01472
| true
| null |
Setting up a modern fully-fledged web service is an arduous task, usually requiring a seasoned system administrator and sometimes having very precise system requirements. Luckily, the Docker virtualisation platform ensures that anyone with a recent-enough install of that software can set up Seshat in about one command (while still allowing some flexibility via a configuration file). For those willing to have a more tightly-controlled installation of Seshat on their system, we also fully specify the manual installation steps in our online documentation.
For those willing to have a more tightly-controlled installation of Seshat on their system, we also fully specify the manual installation steps in our online documentation.
|
Is this software available to the public?
|
Yes.
|
null | false
| 483
|
In order to capture and filter the relevant information from the comments, we propose a transformer-based Context Adapter Module (CAM) which operates in a residual fashion, additively adapting either the visual or text branch of CLIP with contextual information obtained from the comments (see Fig. 2). Formally, we are now adding another modality, the comments, to the input, which extends it to $x_i = (v_i, t_i, c_{i,1}, \ldots, c_{i,M})$ with $c_{i,k} \in T$. To reduce clutter in the notation, we have defined a fixed number of comments $M$ for each sample. Since both title and comments share the same underlying modality, namely text, we can leverage the same encoder to transform comments to embeddings $f_t(c_{i,k}) = \phi_{c,ik}$.
As we expect the comments to be sometimes unrelated, our Context Adapter Module needs a mechanism to discount off-topic comments and update the primary modality $\phi_v(v_i)$ or $\phi_t(t_i)$, steering it in the most informative direction.
We introduce this mechanism as a function of both the primary modality and the comment embeddings $\phi_{c,ik}$, as we want to compare the informativeness of all these inputs at a high level. To this end, we design adapter modules $g_v$ and $g_t$ that extract information from the comments in the form of a residual:

$\hat{\phi}_{v_i} = \phi_{v_i} + g_v(\phi_{v_i}, \phi_{c,i1}, \ldots, \phi_{c,iM}), \qquad \hat{\phi}_{t_i} = \phi_{t_i} + g_t(\phi_{t_i}, \phi_{c,i1}, \ldots, \phi_{c,iM})$
With the adapted embeddings $\hat{\phi}_{v_i}$ and $\hat{\phi}_{t_i}$ we recompute the affinity matrix (now $\hat{A}$) (Eq. 1) and use it for the loss $L(\hat{A})$. This design has several advantages. On one hand, extracting "only" a residual from the auxiliary inputs $c_{ik}$ means that the model is easily able to ignore them by predicting $g(\cdot) = 0$. On the other hand, this effectively allows us to skip the adapter module when we benchmark the model on a dataset that does not have comments, while still learning the joint embedding from richer data.
In practice, we implement $g$ as a small transformer architecture. Rather than operating on tokenised words, this transformer operates on the embeddings ($\phi_{v_i}$ and $\phi_{t_i}$) themselves, taking as input the encoded feature from the branch to be adapted, along with the comment features $\phi_{c,ik}$. By treating embeddings as tokens in their own right, we allow the embeddings to attend to each other and learn what combinations of the inputs should be used to update the original feature through the residual connection.
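As a concrete illustration, here is a minimal sketch of such an adapter, treating the primary embedding and the comment embeddings as a token sequence for a small transformer and reading the residual off the primary token's position. The layer sizes, and the choice of reading the residual from the first position, are our assumptions rather than the paper's implementation:

```python
import torch
import torch.nn as nn

class ContextAdapterModule(nn.Module):
    """Residually adapts a primary embedding using comment embeddings."""
    def __init__(self, dim=512, heads=8, layers=2):
        super().__init__()
        block = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(block, num_layers=layers)

    def forward(self, primary, comments):
        # primary: (batch, dim); comments: (batch, M, dim)
        tokens = torch.cat([primary.unsqueeze(1), comments], dim=1)
        residual = self.encoder(tokens)[:, 0]  # value at the primary token
        # Predicting a zero residual lets the model ignore the comments,
        # matching the g(.) = 0 behaviour described in the text.
        return primary + residual
```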
Additionally, to avoid bleeding information between the two modalities through the Context Adapter Module, during training we only adapt either the video embedding with $g_v$ or the text embedding with $g_t$. If we used both adapters simultaneously, there would be a trivial solution that minimizes the loss $L$: when the adapters learn to remove the original embedding through the residual, both adapted embeddings become the same, $\hat{\phi}_{v_i} = \hat{\phi}_{t_i}$, which trivially maximizes their similarity and prevents the model from learning a meaningful modality alignment. To prevent the model from learning such a degenerate transformation of the embedding space through the residual, we train only one adapter at a time. We also consider the case where $g_v = g_t$ and choose a random modality to adapt for each minibatch, leading to an adapter that is agnostic to the source of the primary embedding.
We evaluate our Context Adapter Module on the datasets with comments described above in Table. We find that on both datasets adding comments boosts the retrieval performance significantly, confirming the value of the modality. Further, moving from images to videos also improves the performance. Since KineticsComments (human actions) is quite different from RedditVC (broad range of videos, games, etc.), combining both datasets bridges the domain gap and improves the evaluation performance on KineticsComments.
Baseline Comparisons. Since the main purpose of the Context Adapter Module is to extract meaningful information from sometimes unrelated auxiliary data, we perform most of the experiments in this section with an image/text backbone using a single video frame instead of a video/text backbone to reduce the computational burden of the evaluation.
In Table we evaluate the efficacy of the proposed Context Adapter Module against various baselines in various settings on our RedditVC dataset. The most basic baseline is achieved by not using any form of context adaption, which degenerates the model to the one proposed by.
Across all experimental settings, we find that finetuning the backbone architecture helps to improve the performance, while using the proposed CAM with frozen backbones (rows h,k) even outperforms the naive baseline of row (a). We also compare to two baselines. The first baseline randomly replaces the title features with comments during training (Table (c-d)); however, this does not yield any improvement over no adaptation. In the second baseline, we average the title features $\phi_t$ and the comment features. Surprisingly, we find that training with comments performs best when they are added to the image branch (rows j-l), as opposed to the text branch (rows g-i). We hypothesize that this is because the comments can better adapt the semantically richer feature of the visual modality as opposed to the text, which can often be as short as a single word.
Adapting Different Modalities. Since both the visual embedding $\phi_v$ and the text embedding $\phi_t$ come from the same embedding space and should be similar for the same sample, we perform an experiment where we swap the embeddings at evaluation time. In Table we find that there are still differences between $\phi_v$ and $\phi_t$, and swapping results in decreased performance. When we train both adapters we can achieve good performance in both cases. Auxiliary Information. Additionally, Table shows how varying the source of the text at train and test time impacts the retrieval results. While the performance of the model trained with image labels degrades when given comments at evaluation time, a model trained with comments still benefits from image labels at test time. As expected, the performance of both models is negatively impacted when tested with uncorrelated random words as additional text. However, the model trained with image labels does noticeably worse in this setting. This suggests that it has become overly reliant on image labels and is easily fooled by irrelevant information, which overrides the title feature.
Varying the number of comments. In Fig. we vary the number of comments at training and evaluation time. Training on more comments results in better retrieval results when evaluated on the same number of comments. However, while there is little difference between models trained with only a few comments, we found that increasing the number of comments at train time leads to models that are more robust to different numbers of comments at test time, indicating that training with more comments learns an overall better Context Adapter Module that is then able to work efficiently in various settings.
Figure 2: Method Overview. We introduce a context adapter module that uses inputs of the auxiliary modality to adapt the embedding of another branch. With this module the model is able to accept or discount information.
|
Can you illustrate the structure of the network?
|
The main architecture of our context adapter model is shown in Fig. 2. The backend is CLIP extended with the TimeSformer principle [1], for which we will add a diagram to the appendix.
[1] Gedas Bertasius, Heng Wang, and Lorenzo Torresani. Is space-time attention all you need for video understanding? arXiv preprint arXiv:2102.05095, 2021.
|
null | false
| 426
|
EHR data comprise complex time-series data, being high-dimensional, multi-modal and heterogeneous, and thus presenting challenges when used in machine learning models. An important goal in a medical setting is to identify phenotypically separable clusters with distinct phenotypic profiles (which we denote as phenotypic clustering hereafter). For the purpose of this work, cluster phenotypes result from the combination of two distinct components: a) the evolution profile of patient trajectories within the cluster, and b) the characterisation of the cluster with regards to clinical variables of interest. The latter may include features not used for clustering and may provide information about the underlying or future health status.
Traditional clustering models such as K-Means or hierarchical clustering have been shown to fail to capture existing time-dependent feature relationships. As such, variants have been proposed to mitigate this problem. A temporal version of the K-Means algorithm, Time-Series K-Means (TSKM), models the distance between time-series of different datapoints using the Euclidean distance (which is equivalent to considering each temporal observation as an independent feature value for the corresponding patient admission), or time-series alignment strategies such as Dynamic Time Warping and soft-DTW, to infer temporal evolution within the latent space. In SOM-VAE, clustering is performed in the low-dimensional latent space through the use of self-organising maps to obtain a discrete, topologically interpretable latent representation of the learnt clusters. In a supervised setting, AC-TPC serves as the current state of the art for identifying phenotypically separable clusters in patient trajectories in EHR data. AC-TPC maps EHR data into a latent space via an encoder, and uses an actor-critic network which leverages clinical outcomes to aid in cluster formation and obtaining cluster phenotypes. Neither SOM-VAE nor AC-TPC provides clinically meaningful interpretation of feature-time importance or the outcome of interest.
Attention mechanisms have recently been proposed to provide greater interpretability to Recurrent Neural Networks (RNNs) and to aid in dealing with long-term dependencies, and have also been used in modelling EHR time-series. RETAIN proposes a two-level reverse attention mechanism to mimic a physician's decision process and predict a future diagnosis. In other recent works, attention mechanisms based on bi-directional RNNs and CNNs outperformed standard classification models in predicting high-risk vascular diseases with the addition of medication information as input data. A drawback of such attention mechanisms is their focus on temporal interpretability only and their inability to look at individual features, which is key in a medical setting. After processing, the input data contained over 100,000 unique patient trajectories corresponding to 4,266 unique patient admissions (only patients' last admissions were considered in our analysis).
Original trajectories for the patient cohort are shown in the Appendix for different variables/features. A lack of clear outcome group separability can be observed across temporal and static variables. Furthermore, we note the high degree of imbalance in the data: admissions with no event account for over 86.8% of the total number of admissions, while the event classes correspond to 10.3% Death, 1.8% ICU and 1.1% Cardiac.
For the purpose of this work, cluster phenotypes result from the combination of two distinct components: a) the evolution profile of patient trajectories within the cluster, and b) the characterisation of the cluster with regards to clinical variables of interest. The latter may include features not used for clustering and may provide information about the underlying or future health status.
|
However, given the capacity of deep neural networks and the ease of learning convex clusters, why would the deep learning method be at a disadvantage?
|
While the TSKM Baseline performs better when considering solely clustering metrics, we argue that the clusters learnt by TSKM are less relevant to our overall prediction task as well as performing worse at identifying separable cluster phenotypes.
We clarify the definition of “phenotype” in Section 2 (highlighted in yellow) – a cluster phenotype is a combination of a) trajectory evolution profile, and b) characterisation with regards to an outcome of interest. In our setting, four patient outcomes were considered, based on admission events (or discharge). Outcomes are unknown for new admissions, yet they are ultimately what hospital clinicians are interested in identifying.
TSKM clusters are very hard to distinguish with regards to clinical outcome prediction. This can be seen in Table 4, where TSKM performs much worse than CAMELOT (0.55 AUROC for TSKM vs 0.73 AUROC). Furthermore, cluster trajectories learnt by TSKM are less separable with regards to trajectory evolution. We have added Figures A10 and A11 to the Appendix to illustrate this phenomenon, and to show how pure standard clustering metrics, while useful, might fail to identify relevant feature trends in the input trajectories.
|
null | false
| null |
If I wanted to travel the world, would it be faster to travel by air or sea?
|
Generally, you can travel faster by air than sea.
|
|
null | false
| null |
Optimus, also known as Tesla Bot, is a conceptual general-purpose robotic humanoid under development by Tesla, Inc. It was announced at the company's Artificial Intelligence (AI) Day event on August 19, 2021. CEO Elon Musk claimed during the event that Tesla would likely build a prototype by 2022. Musk is on record having said that he thinks Optimus "has the potential to be more significant than the vehicle business over time."
Tesla Bot is planned to measure 5 ft 8 in (173 cm) tall and weigh 125 lb (57 kg). According to the presentation made during the first AI Day event, a Tesla Bot will be "controlled by the same AI system Tesla is developing for the advanced driver-assistance system used in its cars" and have a carrying capacity of 45 lb (20 kg). Proposed tasks for the product are ones that are "dangerous, repetitive and boring", such as providing manufacturing assistance.
https://en.wikipedia.org/wiki/Optimus_(robot)
|
Given these paragraphs about the Tesla bot, where and when was it announced?
|
It was announced at the company's Artificial Intelligence (AI) Day event on August 19, 2021.
|
null | false
| null |
Suggest some fantasy books I could read?
|
The Fellowship of the Ring, A Game of Thrones, The Colour of Magic, and The Lion, the Witch and the Wardrobe are all classic fantasy books.
|
|
null | false
| 364
|
Automatic dialog/conversation systems have served humans for a long time in various fields, ranging from train routing [train] to museum guiding [museum]. In the above scenarios, the dialogs are domain-specific, and a typical approach to such in-domain systems is by human engineering, for example, using manually constructed ontologies [youngsigdial], natural language templates [template], and even predefined dialog states [statetracking].
Recently, researchers have paid increasing attention to open-domain, chatbot-style human-computer conversation, because of its important commercial applications, and because it tackles the real challenges of natural language understanding and generation [retrieval1, acl, aaai]. For open-domain dialogs, rules and templates would probably fail, as we can hardly handle the great diversity of dialog topics and natural language sentences. With the increasing number of human-human conversation utterances available on the Internet, previous studies have developed data-oriented approaches in the open domain, which can be roughly categorized into two groups: retrieval systems and generative systems.
When a user issues an utterance (called a query), retrieval systems search for a most similar query in a massive database (which consists of large numbers of query-reply pairs), and respond to the user with the corresponding reply [retrieval1, retrieval2]. Through information retrieval, however, we cannot obtain new utterances; that is, all replies have to appear in the database. Also, the ranking of candidate replies is usually judged by surface forms (e.g., word overlaps, tf$\cdot$idf features) and hardly addresses the real semantics of natural languages.
Generative dialog systems, on the other hand, can synthesize a new sentence as the reply by language models [BoWdialog, acl, aaai]. Typically, a recurrent neural network (RNN) captures the query's semantics with one or a few distributed, real-valued vectors (also known as embeddings); another RNN decodes the query embeddings to a reply. Deep neural networks allow complicated interaction by multiple non-linear transformations; RNNs are further suitable for modeling time-series data (e.g., a sequence of words), especially when enhanced with long short-term memory (LSTM) or gated recurrent units (GRUs). Despite this, RNNs also have their own weakness when applied to dialog systems: the generated sentence tends to be short, universal, and meaningless, for example, "I don't know" [naacl] or "something" [aaai]. This is probably because chatbot-like dialogs are highly diversified and a query may not convey sufficient information for the reply. Even though such universal utterances may be suitable in certain dialog contexts, they bore users and make them lose interest, and thus are not desirable in real applications.
In this paper, we are curious whether we can combine the above two streams of approaches for open-domain conversation. To this end, we propose an ensemble of retrieval and generative dialog systems. Given a user-issued query, we first obtain a candidate reply by information retrieval from a large database. The query, along with the candidate reply, is then fed to an utterance generator based on the "bi-sequence to sequence" (biseq2seq) model [multiseq2seq]. Such a sequence generator takes into consideration the information contained in not only the query but also the retrieved reply; hence, it alleviates the low-substance problem and can synthesize replies that are more meaningful. After that, we use the scorer in the retrieval system again for post-reranking. This step can filter out less relevant retrieved replies or meaningless generated ones. The higher-ranked candidate (either retrieved or generated) is returned to the user as the reply.
From the above process, we see that the retrieval and generative systems are integrated by two mechanisms: (1) The retrieved candidate is fed to the sequence generator to mitigate the “low-substance” problem; (2) The post-reranker can make better use of both the retrieved candidate and the generated utterance. In this sense, we call our overall approach an ensemble in this paper. To the best of our knowledge, we are the first to combine retrieval and generative models for open-domain conversation.
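The resulting inference loop can be sketched in a few lines; the function names and the scorer interface below are hypothetical placeholders for the components described above, not the authors' implementation:

```python
def ensemble_reply(query, database, scorer, generator):
    """database: iterable of (stored_query, stored_reply) pairs.
    scorer(a, b): similarity of two utterances; generator(q, r): biseq2seq."""
    # 1) Retrieval: the most similar stored query wins; take its reply.
    _, retrieved = max(database, key=lambda pair: scorer(query, pair[0]))
    # 2) Generation: biseq2seq conditioned on the query AND the retrieved reply.
    generated = generator(query, retrieved)
    # 3) Post-reranking: reuse the retrieval scorer on both candidates.
    return max([retrieved, generated], key=lambda reply: scorer(query, reply))
```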
Experimental results show that our ensemble model consistently outperforms each single component in terms of several subjective and objective metrics, and that both retrieval and generative methods contribute an important portion to the overall approach. This also verifies the rationale for building model ensembles for dialog systems.
Figure 1: The overall architecture of our model ensemble. We combine retrieval and generative dialog systems by 1 enhancing the generator with the retrieved candidate and by 2 post-reranking of both retrieved and generated candidates.
|
What do they combine retrieval and generative dialog systems by?
|
By enhancing the generator with the retrieved candidate and post-reranking of both retrieved and generated candidates.
|
null | false
| null |
Who is the greatest soccer player of all time?
|
While the answer to this question is quite subjective, common answers include Lionel Messi, Pele, Cristiano Ronaldo, and Maradona. Messi's World Cup title in 2022 put the finishing touches on an already remarkable resume for club and country, while Pele has long been considered the greatest international player of all time. In general it is challenging to compare players in any sport across history, and while it is an entertaining debate my recommendation is to simply enjoy the incredible accomplishments and abilities of athletes.
|
|
null | false
| null |
The title of their top-rated show changed to The George Burns and Gracie Allen Show on September 26, 1936. One successful episode, "Grandpa's 92nd Birthday," aired July 8, 1940. In 1941 they moved from comedy patter into a successful sitcom format, continuing with shows on NBC and CBS until May 17, 1950. As in the early days of radio, the sponsor's name became the show title, such as Maxwell House Coffee Time (1945–49).
|
If the sponsor of the Burns and Allen show was Pee Wee Herman's Big Adventure, the show would be called what?
|
The show title would be Adventure Time with Pee-Wee Herman.
|
null | false
| null |
What are some of the pros of subleasing?
|
Subleasing can provide flexibility to a tenant, allowing them to move earlier than anticipated without having to continue to pay double rent or a lease-breaking fee (often 1.5 to 2 times monthly rent, if allowed at all!). For the subletter, it can provide a way to get a lease with a non-standard length (like 4 or 8 months, which is not typically offered) at a great rate. For landlords, it keeps their space occupied and provides a potential next tenant. Needless to say, there are wins all around!
|
|
1909.00542
| true
| null |
We compare classification and regression approaches and show that classification produces better results than regression but the quality of the results depends on the approach followed to annotate the data labels.
We compare classification and regression approaches and show that classification produces better results than regression but the quality of the results depends on the approach followed to annotate the data labels.
|
Did classification models perform better than previous regression one?
|
Yes.
|
null | false
| null |
Who are the last 7 men's ICC World Cup winners in ODIs?
|
- England (2019)
- Australia (2015)
- India (2011)
- Australia (2007)
- Australia (2003)
- Australia (1999)
- Sri Lanka (1996)
|
|
1711.02013
| false
| null |
From a character-level view, natural language is a discrete sequence of data, where discrete symbols form a distinct and shallow tree structure: the sentence is the root, words are children of the root, and characters are leaves. However, compared to word-level language modeling, character-level language modeling requires the model to handle longer-term dependencies. We evaluate a character-level variant of our proposed language model over a preprocessed version of the Penn Treebank (PTB) and Text8 datasets.
The unsupervised constituency parsing task compares the tree structure inferred by the model with those annotated by human experts. The experiment is performed on the WSJ10 dataset. WSJ10 consists of the 7,422 sentences in the Penn Treebank Wall Street Journal section which contained 10 words or fewer after the removal of punctuation and null elements. Evaluation was done by seeing whether proposed constituent spans are also in the Treebank parse, measuring unlabeled $F_1$ of unlabeled constituent precision and recall. Constituents which could not be gotten wrong (those of span one and those spanning entire sentences) were discarded. Given the mechanism discussed in Section SECREF14, our model generates a binary tree, although standard constituency parse trees are not limited to binary trees. Previous unsupervised constituency parsing models also generate binary trees BIBREF11, BIBREF13. Our model is compared with several baseline methods, which are explained in the Appendix.
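For concreteness, the span-level unlabeled $F_1$ described here can be computed as below; this is a straightforward sketch of the stated protocol, assuming trivial spans (length one or whole sentence) have already been discarded:

```python
def unlabeled_f1(predicted_spans, gold_spans):
    """Spans are (start, end) pairs; returns unlabeled constituent F1."""
    predicted, gold = set(predicted_spans), set(gold_spans)
    matched = len(predicted & gold)
    precision = matched / len(predicted) if predicted else 0.0
    recall = matched / len(gold) if gold else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```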
We evaluate a character-level variant of our proposed language model over a preprocessed version of the Penn Treebank (PTB) and Text8 datasets.
The unsupervised constituency parsing task compares the tree structure inferred by the model with those annotated by human experts. The experiment is performed on the WSJ10 dataset.
|
Which dataset do they experiment with?
|
The answers are shown as follows:
* Penn Treebank
* Text8
* WSJ10
|
null | false
| 138
|
Users of photo-sharing websites such as Flickr often provide short textual descriptions in the form of tags to help others find the images. With the availability of GPS systems in current electronic devices such as smartphones, latitude and longitude coordinates are nowadays commonly made available as well. The tags associated with such georeferenced photos often describe the location where these photos were taken, and Flickr can thus be regarded as a source of environmental information. The use of Flickr for modelling urban environments has already received considerable attention. For instance, various approaches have been proposed for modelling urban regions BIBREF0 , and for identifying points-of-interest BIBREF1 and itineraries BIBREF2 , BIBREF3 . However, the usefulness of Flickr for characterizing the natural environment, which is the focus of this paper, is less well-understood.
Many recent studies have highlighted that Flickr tags capture valuable ecological information, which can be used as a complementary source to more traditional sources. To date, however, ecologists have mostly used social media to conduct manual evaluations of image content with little automated exploitation of the associated tags BIBREF4 , BIBREF5 , BIBREF6 . One recent exception is BIBREF7 , where bag-of-words representations derived from Flickr tags were found to give promising results for predicting a range of different environmental phenomena.
Our main hypothesis in this paper is that by using vector space embeddings instead of bag-of-words representations, the ecological information which is implicitly captured by Flickr tags can be utilized in a more effective way. Vector space embeddings are representations in which the objects from a given domain are encoded using relatively low-dimensional vectors. They have proven useful in natural language processing, especially for encoding word meaning BIBREF8 , BIBREF9 , and in machine learning more generally. In this paper, we are interested in the use of such representations for modelling geographic locations. Our main motivation for using vector space embeddings is that they allow us to integrate the textual information we get from Flickr with available structured information in a very natural way. To this end, we rely on an adaptation of the GloVe word embedding model BIBREF9 , but rather than learning word vectors, we learn vectors representing locations. Similar to how the representation of a word in GloVe is determined by the context words surrounding it, the representation of a location in our model is determined by the tags of the photos that have been taken near that location. To incorporate numerical features from structured environmental datasets (e.g. average temperature), we associate with each such feature a linear mapping that can be used to predict that feature from a given location vector. This is inspired by the fact that salient properties of a given domain can often be modelled as directions in vector space embeddings BIBREF10 , BIBREF11 , BIBREF12 . Finally, evidence from categorical datasets (e.g. land cover types) is taken into account by requiring that locations belonging to the same category are represented using similar vectors, similar to how semantic types are sometimes modelled in the context of knowledge graph embedding BIBREF13 .
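To illustrate how a numerical feature enters the objective, here is our reading of the linear-mapping idea in a few lines of Python; the variable names and the squared-error form are assumptions, not the paper's exact formulation:

```python
import numpy as np

def feature_error(location_vec, weights, bias, target):
    """Squared error of predicting one numerical feature (e.g. average
    temperature) from a location embedding via a learned linear map
    f(v) = weights . v + bias; one (weights, bias) pair per feature."""
    prediction = np.dot(weights, location_vec) + bias
    return (prediction - target) ** 2
```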
While our point-of-departure is a standard word embedding model, we found that the off-the-shelf GloVe model performed surprisingly poorly, meaning that a number of modifications are needed to achieve good results. Our main findings are as follows. First, given that the number of tags associated with a given location can be quite small, it is important to apply some kind of spatial smoothing, i.e. the importance of a given tag for a given location should not only depend on the occurrences of the tag at that location, but also on its occurrences at nearby locations. To this end, we use a formulation based on a spatially smoothed version of pointwise mutual information. Second, given the wide diversity in the kind of information that is covered by Flickr tags, we find that term selection is in some cases critical to obtain vector spaces that capture the relevant aspects of geographic locations. For instance, many tags on Flickr refer to photography-related terms, which we would normally not want to affect the vector representation of a given location. Finally, even with these modifications, vector space embeddings learned from Flickr tags alone are sometimes outperformed by bag-of-words representations. However, our vector space embeddings lead to substantially better predictions in cases where structured (scientific) information is also taken into account. In this sense, the main value of using vector space embeddings in this context is not so much about abstracting away from specific tag usages, but rather about the fact that such representations allow us to integrate numerical and categorical features in a much more natural way than is possible with bag-of-words representations.
The remainder of this paper is organized as follows. In the next section, we provide a discussion of existing work. Section SECREF3 then presents our model for embedding geographic locations from Flickr tags and structured data. Next, in Section SECREF4 we provide a detailed discussion about the experimental results. Finally, Section SECREF5 summarizes our conclusions.
Finally, even with these modifications, vector space embeddings learned from Flickr tags alone are sometimes outperformed by bag-of-words representations.
|
How about the performance of vector space embeddings learned from Flickr tags alone?
|
Sometimes outperformed by bag-of-words representations.
|
null | false
| null |
What are some good costumes I can wear for Halloween? Give me at least five in a bullet list
|
- A pumpkin
- A witch
- A lifeguard
- A fairy
- Harry Potter
|
|
null | false
| 126
|
Automatic summarization has enjoyed wide popularity in natural language processing due to its potential for various information access applications. Examples include tools which aid users navigate and digest web content (e.g., news, social media, product reviews), question answering, and personalized recommendation engines. Single document summarization — the task of producing a shorter version of a document while preserving its information content — is perhaps the most basic of summarization tasks that have been identified over the years (see BIBREF0 , BIBREF0 for a comprehensive overview).
Modern approaches to single document summarization are data-driven, taking advantage of the success of neural network architectures and their ability to learn continuous features without recourse to preprocessing tools or linguistic annotations. Abstractive summarization involves various text rewriting operations (e.g., substitution, deletion, reordering) and has been recently framed as a sequence-to-sequence problem BIBREF1 . Central in most approaches BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 is an encoder-decoder architecture modeled by recurrent neural networks. The encoder reads the source sequence into a list of continuous-space representations from which the decoder generates the target sequence. An attention mechanism BIBREF8 is often used to locate the region of focus during decoding.
Extractive systems create a summary by identifying (and subsequently concatenating) the most important sentences in a document. A few recent approaches BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 conceptualize extractive summarization as a sequence labeling task in which each label specifies whether each document sentence should be included in the summary. Existing models rely on recurrent neural networks to derive a meaning representation of the document which is then used to label each sentence, taking the previously labeled sentences into account. These models are typically trained using cross-entropy loss in order to maximize the likelihood of the ground-truth labels and do not necessarily learn to rank sentences based on their importance due to the absence of a ranking-based objective. Another discrepancy comes from the mismatch between the learning objective and the evaluation criterion, namely ROUGE BIBREF13 , which takes the entire summary into account.
In this paper we argue that cross-entropy training is not optimal for extractive summarization. Models trained this way are prone to generating verbose summaries with unnecessarily long sentences and redundant information. We propose to overcome these difficulties by globally optimizing the ROUGE evaluation metric and learning to rank sentences for summary generation through a reinforcement learning objective. Similar to previous work BIBREF9 , BIBREF11 , BIBREF10 , our neural summarization model consists of a hierarchical document encoder and a hierarchical sentence extractor. During training, it combines the maximum-likelihood cross-entropy loss with rewards from policy gradient reinforcement learning to directly optimize the evaluation metric relevant for the summarization task. We show that this global optimization framework renders extractive models better at discriminating among sentences for the final summary; a sentence is ranked high for selection if it often occurs in high scoring summaries.
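Schematically, such a mixed objective combines the two loss terms as below; this is a sketch of the general recipe in our own notation (the weighting knob alpha is our addition), not the paper's exact loss:

```python
import torch
import torch.nn.functional as F

def mixed_objective(logits, gold_labels, sampled_logprob, rouge_reward, alpha=0.5):
    """Cross-entropy on ground-truth sentence labels plus a REINFORCE term
    that raises the probability of selections whose summary scores high ROUGE."""
    ce = F.binary_cross_entropy_with_logits(logits, gold_labels)
    pg = -rouge_reward * sampled_logprob  # policy-gradient surrogate loss
    return alpha * ce + (1.0 - alpha) * pg
```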
We report results on the CNN and DailyMail news highlights datasets BIBREF14 which have been recently used as testbeds for the evaluation of neural summarization systems. Experimental results show that when evaluated automatically (in terms of ROUGE), our model outperforms state-of-the-art extractive and abstractive systems. We also conduct two human evaluations in order to assess (a) which type of summary participants prefer (we compare extractive and abstractive systems) and (b) how much key information from the document is preserved in the summary (we ask participants to answer questions pertaining to the content in the document by reading system summaries). Both evaluations overwhelmingly show that human subjects find our summaries more informative and complete.
Our contributions in this work are three-fold: a novel application of reinforcement learning to sentence ranking for extractive summarization; corroborated by analysis and empirical results showing that cross-entropy training is not well-suited to the summarization task; and large scale user studies following two evaluation paradigms which demonstrate that state-of-the-art abstractive systems lag behind extractive ones when the latter are globally trained.
The novel application of reinforcement learning is to sentence ranking for extractive summarization.
|
What are the functions of the novel application proposed by the author?
|
Their contributions in this work are three-fold: a novel application of reinforcement learning to sentence ranking for extractive summarization;
|
null | false
| null |
The girl was wearing a red dress whilst her friend was wearing a white dress. What colour dress was the girl wearing?
|
The girl was wearing a red dress.
|
|
null | false
| null |
Provide a list of superpower you wish you had
|
1. Fly
2. Hold breath forever
3. Read minds
4. Infinite strength
5. Time Travel
6. Invisibility
7. Future Telling
|
|
null | false
| 252
|
We perform experiments on the RumourEval and PHEME datasets to evaluate the performance of our method and the baselines. The experimental results are shown in Table TABREF27. We make the following observations:
On the whole, most well-designed deep learning methods, such as ours, Bayesian-DL, and TRNN, outperform feature-engineering-based methods like SVM. This illustrates that deep learning methods can better represent the intrinsic semantics of claims and replies.
In terms of recall (R), our method and MTL-LSTM, both based on multi-task learning, achieve more competitive performance than the other baselines, which indicates that the tasks share sufficient features with each other. Furthermore, our method shows a more noticeable performance boost than MTL-LSTM on both datasets, which suggests that our method learns more valuable shared features.
Although our method shows relatively low performance in terms of precision (P) and recall (R) compared with some specific models, our method achieves the state-of-the-art performance in terms of accuracy (A) and F1-score (F1) on both datasets. Taking into account the tradeoff among different performance measures, this reveals the effectiveness of our method in the task of fake news detection.
Although our method shows relatively low performance in terms of precision (P) and recall (R) compared with some specific models, our method achieves the state-of-the-art performance in terms of accuracy (A) and F1-score (F1) on both datasets.
|
In terms of which metric their method achieves the state-of-the-art performance?
|
Accuracy (A) and F1-score (F1).
|
null | false
| null |
The winners were announced during the awards ceremony on February 26, 2017. Moonlight became the first film with an all-black cast and the first LGBT-themed film to win Best Picture. In an event unprecedented in the history of the Oscars, La La Land was incorrectly announced as the Best Picture, and, a few minutes later, the error was corrected and Moonlight was declared the winner. O.J.: Made in America, at 467 minutes, became the longest film to win an Academy Award, surpassing the 431-minute long War and Peace, which won the Academy Award for Best Foreign Language Film in 1969. Following the five-part documentary's win, new academy rules barred any "multi-part or limited series" from being eligible for documentary categories. With Casey Affleck winning the Oscar for Best Actor, he and his older brother, Ben Affleck, became the 16th pair of siblings to win Academy Awards. Mahershala Ali became the first Muslim actor to win an Oscar. Viola Davis became the first black person to achieve the Triple Crown of Acting with her Oscar, Emmy, and Tony wins.
|
Which movie was originally declared the winner for Best Picture?
|
La La Land was originally declared the winner for Best Picture
|
null | false
| null |
What were the best TV shows of 2022?
|
The best television shows from 2022 (with the network in parentheses) were:
1. The Bear (FX)
2. Reservation Dogs (Hulu)
3. Barry (HBO)
4. Better Call Saul (AMC/Netflix)
5. The White Lotus (HBO)
6. Severance (Apple TV+)
7. Atlanta (FX)
8. Andor (Disney+)
9. For All Mankind (Apple TV+)
10. Euphoria (HBO)
|
|
null | false
| 167
|
Statistic topic models such as Latent Dirichlet Allocation (LDA) and its variants BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 have been proven to be effective in modeling textual documents. In these models, a word token in a document is assumed to be generated by a hidden mixture model, where the hidden variables are the topic indexes for each word and the topic assignments for words are related to document level topic weights. Due to the effectiveness and efficiency in modeling the document generation process, topic models are widely adopted in quite a lot of real world tasks such as sentiment classification BIBREF5 , social network analysis BIBREF6 , BIBREF5 , and recommendation systems BIBREF7 .
Most topic models take the bag-of-words assumption, in which every document is treated as an unordered set of words and the word tokens in such a document are sampled independently of each other. The bag-of-words assumption brings computational convenience; however, it sacrifices the characterization of sequential properties of words in a document and the topic coherence between words belonging to the same language segment (e.g., sentence). As a result, people have observed many negative examples. To list just one for illustration BIBREF8 : the department chair couches offers and the chair department offers couches have very different topics, although they have exactly the same bag of words.
There have been some works trying to solve the aforementioned problems, although still insufficiently. For example, several sentence level topic models BIBREF9 , BIBREF10 , BIBREF11 tackle the topic coherence problem by assuming all the words in a sentence to share the same topic (i.e., every sentence has only one topic). In addition, they model the sequential information by assuming the transition between sentence topics to be Markovian. However, words within the same sentence are still exchangeable in these models, and thus the bag-of-words assumption still holds within a sentence. For another example, in BIBREF12 , the embedding based neural language model BIBREF13 , BIBREF14 , BIBREF15 and topic model are integrated. They assume the generation of a given word in a sentence to depend on its local context (including its preceding words within a fixed window) as well as the topics of the sentence and document it lies in. However, using a fixed window of preceding words, instead of the whole word stream within a sentence, could only introduce limited sequential dependency. Furthermore, there is no explicit coherence constraints on the word topics and sentence topics, since every word can have its own topics in their model.
We propose the Sentence Level Recurrent Topic Model (SLRTM) to tackle the limitations of the aforementioned works. In the new model, we assume the words in the same sentence share the same topic in order to guarantee topic coherence, and we assume the generation of a word relies on the whole history in the same sentence in order to fully characterize the sequential dependency. Specifically, for a particular word INLINEFORM0 within a sentence INLINEFORM1, we assume its generation depends on two factors: the first is the whole set of its historical words in the sentence, and the second is the sentence topic, which we regard as a pseudo word with its own distributed representation. We use a recurrent neural network (RNN) BIBREF16, such as a Long Short-Term Memory (LSTM) BIBREF17 or Gated Recurrent Unit (GRU) network BIBREF18, to model such long-term dependency.
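A compact sketch of how such a generator could be wired up, feeding the topic embedding as a "pseudo word" at every step of the sentence; the dimensions and the concatenation scheme are our assumptions, not the paper's specification:

```python
import torch
import torch.nn as nn

class SLRTMGenerator(nn.Module):
    """LSTM over the sentence history, conditioned on a sentence topic."""
    def __init__(self, vocab_size, num_topics, dim=128):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, dim)
        self.topic_emb = nn.Embedding(num_topics, dim)
        self.lstm = nn.LSTM(2 * dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab_size)

    def forward(self, history, topic):
        # history: (batch, T) word ids; topic: (batch,) topic index
        t = self.topic_emb(topic).unsqueeze(1).expand(-1, history.size(1), -1)
        h, _ = self.lstm(torch.cat([self.word_emb(history), t], dim=-1))
        return self.out(h)  # next-word logits at every position
```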
With the proposed SLRTM, we can not only model the document generation process more accurately, but also construct new natural sentences that are coherent with a given topic (we call it topic2sentence, similar to image2sentence BIBREF19 ). Topic2sentence has huge potential for many real-world tasks. For example, it can serve as the basis of a personalized short-text conversation system BIBREF20 , BIBREF21 , in which once we detect that the user is interested in certain topics, we can let these topics speak for themselves using SLRTM to improve user satisfaction.
We have conducted experiments to compare SLRTM with several strong topic model baselines on two tasks: generative model evaluation (i.e. test set perplexity) and document classification. The results on several benchmark datasets quantitatively demonstrate SLRTM's advantages in modeling documents. We further provide some qualitative results on topic2sentence, the generated sentences for different topics clearly demonstrate the power of SLRTM in topic-sensitive short text conversations.
We propose Sentence Level Recurrent Topic Model (SLRTM) to tackle the limitations of the aforementioned works. In the new model, we assume the words in the same sentence to share the same topic in order to guarantee topic coherence, and we assume the generation of a word to rely on the whole history in the same sentence in order to fully characterize the sequential dependency.
|
What model do they propose in the paper?
|
Sentence Level Recurrent Topic Model (SLRTM).
|
null | false
| null |
What is rain good for?
|
Rain comes from the sky when ocean and land water evaporates into the clouds. When these nimbus clouds move over land and the rain in them gets too heavy, the rain falls to the ground. Rain is needed by humans, animals, and plants. Purified rain water helps humans stay hydrated, helps animals grow, and helps plants grow to make food for humans and animals. Rain also helps clean pollution out of the air, because pollution particles stick to the rain and fall to the ground.
|
|
null | false
| 42
|
We evaluated our attention transformations on three language pairs. We focused on small datasets, as they are the most affected by coverage mistakes. We use the IWSLT 2014 corpus for De-En, the KFTT corpus for Ja-En BIBREF19 , and the WMT 2016 dataset for Ro-En. The training sets have 153,326, 329,882, and 560,767 parallel sentences, respectively. Our reason to prefer smaller datasets is that this regime is what brings more adequacy issues and demands more structural biases, hence it is a good test bed for our methods. We tokenized the data using the Moses scripts and preprocessed it with subword units BIBREF20 with a joint vocabulary and 32k merge operations. Our implementation was done on a fork of the OpenNMT-py toolkit BIBREF21 with the default parameters. We used a validation set to tune hyperparameters introduced by our model. Even though our attention implementations are CPU-based using NumPy (unlike the rest of the computation, which is done on the GPU), we did not observe any noticeable slowdown using multiple devices.
As baselines, we use softmax attention, as well as two recently proposed coverage models:
We also experimented with combining the strategies above with the sparsemax transformation.
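For reference, below is a minimal NumPy sketch of the sparsemax transformation (the standard closed-form Euclidean projection onto the simplex from Martins and Astudillo, 2016); it is a textbook formulation, not the authors' implementation.

```python
import numpy as np

def sparsemax(z):
    """Sparsemax of a 1-D score vector: like softmax, but it can assign
    exactly zero attention weight to some entries."""
    z_sorted = np.sort(z)[::-1]
    cumsum = np.cumsum(z_sorted)
    k = np.arange(1, len(z) + 1)
    support = 1 + k * z_sorted > cumsum       # which entries stay in the support
    k_z = k[support][-1]                      # size of the support
    tau = (cumsum[support][-1] - 1) / k_z     # threshold subtracted from scores
    return np.maximum(z - tau, 0.0)

# Example: sparsemax(np.array([2.0, 1.0, -1.0])) -> array([1., 0., 0.])
```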
As evaluation metrics, we report tokenized BLEU and METEOR BIBREF22, as well as two new metrics, described next, that account for over- and under-translation.
DROP-score: a new metric that accounts for possibly dropped words.
|
What is DROP-score?
|
It is a new metric that accounts for possibly dropped words.
|
null | false
| null |
In your mind, what is most concerning about the state of American politics today?
|
I think the most distressing aspect of American politics today is the lack of civil discourse on any issue. Pick an issue, and Americans try to distill often complex topics into a single tweet of 140 characters. The discourse lacks depth and educated discussion. Furthermore, politicians try to rally their base with increasingly polarized takes, and each party shifts further and further towards its respective extreme. Abandoned in the middle are the nation's centrists, who care more about results and general prosperity than token, empty political victories.
|
|
null | false
| null |
What happens when someone throws a phone onto a mattress?
|
The phone hits the mattress and causes a momentary indent on the mattress. Depending on which part of the phone hits the mattress and the angle as well as speed of the throw, the phone may jump to another spot on the mattress, or it may stay where it first made contact. Regardless, it will likely bounce a few times and then eventually stop moving.
|
|
null | false
| null |
Can you classify what each NFL team is named after?
|
Arizona Cardinals - Animal
Atlanta Falcons - Animal
Baltimore Ravens - Animal
Buffalo Bills - Named after Buffalo Bill Cody
Carolina Panthers - Animal
Chicago Bears - Animal
Cincinnati Bengals - Animal
Cleveland Browns - Named after Head Coach Paul Brown
Dallas Cowboys - Named after men who tend to cattle
Denver Broncos - Animal
Detroit Lions - Animal
Green Bay Packers - Named after an early sponsor of the team, Indian Packing Company
Houston Texans - Named after people who live in Texas
Indianapolis Colts - Animal
Jacksonville Jaguars - Animal
Kansas City Chiefs - Named after a former mayor of Kansas City
Las Vegas Raiders - Named after a person who attacks their enemy
Los Angeles Chargers - Named after a person that moves forward with vigor and force
Los Angeles Rams - Animal
Miami Dolphins - Animal
Minnesota Vikings - Named after a group of people originating in Scandinavia
New England Patriots - Named after a person who supports their country
New Orleans Saints - Named after a person that religions recognize as holy and virtuous
New York Giants - Named after a person of exceptional size
New York Jets - Named after a jet airplane
Philadelphia Eagles - Animal
Pittsburgh Steelers - Named after a person who works in a steel factory
San Francisco 49ers - Named after a person who participated in the Gold Rush
Seattle Seahawks - Animal
Tampa Bay Buccaneers - Named after a pirate
Tennessee Titans - Named after a person who is very strong
Washington Commanders - Named after a person in the military
|
|
null | false
| null |
What should I do this spring in San Francisco?
|
During the springtime, people in San Francisco enjoy visiting the parks for the cherry blossoms, checking out the open markets when the weather is nice, and exploring Golden Gate Park. If you want to venture farther, you can enjoy the wildflower festival in Half Moon Bay, take the ferry to Sausalito, or even spot humpback whales in the Bay Area!
|
|
1910.04006
| false
| null |
These unstructured features include: 1) the relative number of sentences in the admission notes that involve each risk factor domain (out of the total number of sentences within the admission) and 2) clinical sentiment scores for each of these risk factor domains, i.e., sentiment scores that evaluate the patient's psychosocial functioning level (positive, negative, or neutral) with respect to each of these risk factor domains.
These sentiment scores were automatically obtained through the topic extraction and sentiment analysis pipeline introduced in our prior work BIBREF15 and pretrained on in-house psychiatric EHR text. In our paper we also showed that this automatic pipeline achieves reasonably strong F-scores, with an overall performance of 0.828 F1 for the topic extraction component and 0.5 F1 on the clinical sentiment component.
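Below is a minimal sketch of how such unstructured features could be assembled per admission. The domain names and the `domain_of`/`sentiment_of` callables are hypothetical stand-ins for the paper's topic extraction and clinical sentiment pipeline, not its actual interface.

```python
# Hypothetical risk factor domains; the paper's own inventory may differ.
RISK_DOMAINS = ["appearance", "mood", "interpersonal", "occupation",
                "thought_content", "thought_process", "substance"]

def unstructured_features(sentences, domain_of, sentiment_of):
    """sentences: admission-note sentences; domain_of(s) -> domain name or None;
    sentiment_of(s) -> clinical sentiment score in [-1, 1]."""
    total = max(len(sentences), 1)
    feats = {}
    for d in RISK_DOMAINS:
        hits = [s for s in sentences if domain_of(s) == d]
        feats[f"{d}_rel_freq"] = len(hits) / total     # relative sentence count
        feats[f"{d}_sentiment"] = (sum(map(sentiment_of, hits)) / len(hits)
                                   if hits else 0.0)   # mean clinical sentiment
    return feats
```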
45 clinically interpretable features per admission were extracted as inputs to the readmission risk classifier. These features can be grouped into three categories (See Table TABREF5 for complete list of features):
These unstructured features include: 1) the relative number of sentences in the admission notes that involve each risk factor domain (out of the total number of sentences within the admission) and 2) clinical sentiment scores for each of these risk factor domains, i.e., sentiment scores that evaluate the patient's psychosocial functioning level (positive, negative, or neutral) with respect to each of these risk factor domains.
These sentiment scores were automatically obtained through the topic extraction and sentiment analysis pipeline introduced in our prior work BIBREF15 and pretrained on in-house psychiatric EHR text.
45 clinically interpretable features per admission were extracted as inputs to the readmission risk classifier.
|
How do they incorporate sentiment analysis?
|
The answers are shown as follows:
* clinical sentiment scores for each risk factor domain, obtained from a pretrained topic extraction and sentiment analysis pipeline, are among the 45 features per admission extracted as inputs to the readmission risk classifier
|
null | false
| null |
How to sell?
|
1. Create a need
2. Offer a product or service that fills the need
3. Explain the rewards of buying and the consequences of not buying
4. Create urgency and deadlines to act
|
|
null | false
| null |
Which of the following grow on trees?: apples, lemons, potatoes, carrots
|
Apples and lemons grow on trees
|
|
null | false
| null |
Name some popular breakfast foods in the United States
|
In the United States of America popular breakfasts include bacon and eggs, waffles, French toast, cereal and pancakes
|
|
null | false
| 349
|
After constructing the UIT-ViIC dataset, we present a statistical analysis of our corpus in this section. UIT-ViIC covers 3,850 images described by 19,250 Vietnamese captions. Sticking strictly to our annotation guidelines, the majority of our captions are 10-15 tokens long. We use the term “tokens” here because a Vietnamese word can consist of one, two or even three tokens. Therefore, to apply Vietnamese properly to Image Captioning, we present a tokenization tool, PyVI BIBREF17, which is specialized for Vietnamese language tokenization at the word level. The sentence lengths using the token-level tokenizer and the word-level tokenizer are compared and illustrated in Fig. FIGREF23, where clear differences can be seen. This suggests that the tokenizer performs well enough, and we can expect our Image Captioning models to perform better with tokenized Vietnamese sentences, as most models perform more efficiently with captions having fewer words.
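For illustration, a minimal usage example of word-level tokenization with PyVI is shown below; the caption string is made up, and the exact segmentation may differ.

```python
from pyvi import ViTokenizer

caption = "Cầu thủ đang đá bóng trên sân"  # "The athlete is kicking a ball on the pitch"
tokenized = ViTokenizer.tokenize(caption)
# Multi-syllable words come back joined with underscores (e.g. "Cầu_thủ"),
# so splitting on whitespace now counts Vietnamese words rather than raw tokens.
print(tokenized.split())
```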
Table TABREF24 summarizes the top three most occurring words for each part of speech. Our dataset vocabulary size is 1,472 word classes, including 723 nouns, 567 verbs, and 182 adjectives. It is no surprise that, as our dataset is about sports with balls, the noun “bóng” (meaning “ball”) occurs most often, followed by “sân” and “cầu thủ” (“pitch” and “athlete”, respectively). We also found that the frequency of the word “tennis” stands out among other adjectives, which indicates that the set covers mostly the sport of tennis, followed by “bóng chày” (meaning “baseball”). Therefore, we expect our model to generate the best results for tennis images.
Therefore, to apply Vietnamese properly to Image Captioning, we present a tokenization tool, PyVI, which is specialized for Vietnamese language tokenization at the word level.
|
What is used to apply Vietnamese properly to Image Captioning?
|
They present a tokenization tool, PyVI.
|
null | false
| 293
|
Knowledge-based question answering (KBQA) aims to answer natural language questions over knowledge bases (KBs) such as DBpedia and Freebase. Formal query generation is an important component in many KBQA systems BIBREF0, BIBREF1, BIBREF2, especially for answering complex questions. Given entity and relation linking results, formal query generation aims to generate correct executable queries, e.g., SPARQL queries, for the input natural language questions. An example question and its formal query are shown in Figure FIGREF1. Generally speaking, formal query generation is expected to have capabilities including, but not limited to, (i) recognizing and paraphrasing different kinds of constraints, including triple-level constraints (e.g., “movies" corresponds to a typing constraint for the target variable) and higher-level constraints (e.g., subgraphs); for instance, “the same ... as" represents a complex structure shown in the middle of Figure FIGREF1; (ii) recognizing and paraphrasing aggregations (e.g., “how many" corresponds to Count); and (iii) organizing all of the above to generate an executable query BIBREF3, BIBREF4.
There are mainly two kinds of query generation approaches for complex questions. (i) Template-based approaches choose a pre-collected template for query generation BIBREF1, BIBREF5. Such approaches rely heavily on the coverage of templates, and perform unstably when some complex templates have very few natural language questions as training data. (ii) Approaches based on semantic parsing and neural networks learn entire representations for questions with different query structures, using a neural network following the encode-and-compare framework BIBREF2, BIBREF4. They may suffer from a lack of training data, especially for long-tail questions with rarely appearing structures. Furthermore, neither of the above approaches can handle questions with unseen query structures, since they cannot generate new query structures.
To cope with the above limitations, we propose a new query generation approach based on the following observation: the query structure for a complex question may rarely appear, but it usually contains some substructures that frequently appear in other questions. For example, the query structure for the question in Figure FIGREF1 appears rarely; however, both “how many movies" and “the same ... as" are common expressions, which correspond to the two query substructures in dashed boxes. To collect such frequently appearing substructures, we automatically decompose query structures in the training data. Instead of directly modeling the query structure for the given question as a whole, we employ multiple neural networks to predict the query substructures contained in the question, each of which delivers a part of the query intention. Then, we select an existing query structure for the input question using a combinational ranking function. Also, in some cases, no existing query structure is appropriate for the input question. To cope with this issue, we merge query substructures to build new query structures. The contributions of this paper are summarized below:
To cope with the above limitations, we propose a new query generation approach based on the following observation: the query structure for a complex question may rarely appear, but it usually contains some substructures that frequently appear in other questions.
|
What is the new query generation approach based on?
|
It is based on the following observation: the query structure for a complex question may rarely appear, but it usually contains some substructures that frequently appear in other questions.
|
null | false
| 211
|
Events are an important kind of objective information about the world. Structuring and representing such information as machine-readable knowledge is crucial to artificial intelligence BIBREF0, BIBREF1. The main idea is to learn distributed representations for structured events (i.e., event embeddings) from text, and use them as the basis to induce textual features for downstream applications, such as script event prediction and stock market prediction.
Parameterized additive models are among the most widely used for learning distributed event representations in prior work BIBREF2, BIBREF3; they pass the concatenation or addition of event arguments' word embeddings to a parameterized function. The function maps the summed vectors into an event embedding space. Furthermore, BIBREF4 ding2015deep and BIBREF5 weber2018event propose using neural tensor networks to perform semantic composition of event arguments, which can better capture the interactions between event arguments.
This line of work only captures shallow event semantics and is not capable of distinguishing events with subtle differences. On the one hand, the obtained event embeddings cannot capture the relationship between events that are syntactically or semantically similar if those events do not share similar word vectors; for example, as shown in Figure FIGREF2 (a), “PersonX threw bomb” and “PersonZ attacked embassy”. On the other hand, two events with similar word embeddings may end up with similar embeddings even though they are quite unrelated; for example, as shown in Figure FIGREF2 (b), “PersonX broke record” and “PersonY broke vase”. Note that in this paper, similar events generally refer to events with strong semantic relationships rather than just the same events.
One important reason for the problem is the lack of external commonsense knowledge about the mental state of event participants when learning objective event representations. In Figure FIGREF2 (a), the two event participants “PersonX” and “PersonZ” may both carry out a terrorist attack, and hence they have the same intent, “to bloodshed”, which can help the representation learning model map the two events into neighboring vector space. In Figure FIGREF2 (b), a change to a single argument leads to a large semantic shift in the event representations, as the change of an argument can result in different emotions of event participants. Someone who “broke the record” is likely to be happy, while someone who “broke a vase” may be sad. Hence, intent and sentiment can be used to learn more fine-grained semantic features for event embeddings.
Such commonsense knowledge is not explicitly expressed but can be found in knowledge bases such as Event2Mind BIBREF6 and ATOMIC BIBREF7. Thus, we aim to incorporate external commonsense knowledge, i.e., intent and sentiment, into the learning process to generate better event representations. Specifically, we propose a simple and effective model to jointly embed events, intents and emotions into the same vector space. A neural tensor network is used to learn baseline event embeddings, and we define a corresponding loss function to incorporate intent and sentiment information.
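A minimal PyTorch sketch of this kind of neural tensor network composition with a joint intent/sentiment objective is given below; the dimensions, the cosine-based intent term, and the binary sentiment head are illustrative assumptions, not the paper's exact loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EventNTN(nn.Module):
    """Sketch: compose two argument embeddings with a neural tensor network."""
    def __init__(self, d=100, k=4):
        super().__init__()
        self.W = nn.Parameter(torch.randn(k, d, d) * 0.01)  # bilinear tensor slices
        self.V = nn.Linear(2 * d, k)                        # standard feed-forward part
        self.out = nn.Linear(k, d)

    def compose(self, a, b):
        # a, b: (batch, d) argument embeddings -> (batch, d) event embedding
        bilinear = torch.einsum('bi,kij,bj->bk', a, self.W, b)
        hidden = torch.tanh(bilinear + self.V(torch.cat([a, b], dim=-1)))
        return torch.tanh(self.out(hidden))

def joint_loss(event, intent, sent_logit, sent_label):
    """Pull event embeddings toward their intent embeddings and predict
    sentiment polarity; a margin term over corrupted events is omitted here."""
    intent_loss = (1 - F.cosine_similarity(event, intent)).mean()
    senti_loss = F.binary_cross_entropy_with_logits(sent_logit, sent_label)
    return intent_loss + senti_loss
```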
Extensive experiments show that incorporating external commonsense knowledge brings promising improvements to event embeddings, achieving 78% and 200% improvements on the hard similarity small and big datasets, respectively. With better embeddings, we can achieve superior performance on script event prediction and stock market prediction compared to state-of-the-art baseline methods.
Extensive experiments show that incorporating external commonsense knowledge brings promising improvements to event embeddings, achieving 78% and 200% improvements on the hard similarity small and big datasets, respectively.
|
How much improvement do they achieve on the hard similarity small and big datasets by incorporating external commonsense knowledge into event embeddings?
|
78% and 200%.
|
null | false
| 41
|
Performance appraisal (PA) is an important HR process, particularly for modern organizations that crucially depend on the skills and expertise of their workforce. The PA process enables an organization to periodically measure and evaluate every employee's performance. It also provides a mechanism to link the goals established by the organization to its each employee's day-to-day activities and performance. Design and analysis of PA processes is a lively area of research within the HR community BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 .
The PA process in any modern organization is nowadays implemented and tracked through an IT system (the PA system) that records the interactions that happen in its various steps. The availability of this data in a computer-readable database opens up opportunities to analyze it using automated statistical, data-mining and text-mining techniques, to generate novel and actionable insights/patterns and to help improve the quality and effectiveness of the PA process BIBREF4, BIBREF5, BIBREF6. Automated analysis of large-scale PA data is now facilitated by technological and algorithmic advances, and is becoming essential for large organizations containing thousands of geographically distributed employees handling a wide variety of roles and tasks.
A typical PA process involves purposeful multi-step multi-modal communication between employees, their supervisors and their peers. In most PA processes, the communication includes the following steps: (i) in self-appraisal, an employee records his/her achievements, activities, tasks handled, etc.; (ii) in supervisor assessment, the supervisor provides criticism, evaluation and suggestions for improvement of performance, etc.; and (iii) in peer feedback (aka 360° view), the peers of the employee provide their feedback. There are several business questions that managers are interested in. Examples:
In this paper, we develop text mining techniques that can automatically produce answers to these questions. Since the intended users are HR executives, ideally, the techniques should work with minimum training data and experimentation with parameter setting. These techniques have been implemented and are being used in a PA system in a large multi-national IT company.
The rest of the paper is organized as follows. Section SECREF2 summarizes related work. Section SECREF3 summarizes the PA dataset used in this paper. Section SECREF4 applies sentence classification algorithms to automatically discover three important classes of sentences in the PA corpus, viz., sentences that discuss strengths or weaknesses of employees and sentences that contain suggestions for improving their performance. Section SECREF5 considers the problem of mapping the actual targets mentioned in strengths, weaknesses and suggestions to a fixed set of attributes. In Section SECREF6, we discuss how the feedback from peers for a particular employee can be summarized. In Section SECREF7 we draw conclusions and identify some further work.
In this paper, we develop text mining techniques that can automatically produce answers to these questions. Since the intended users are HR executives, ideally, the techniques should work with minimum training data and experimentation with parameter setting. These techniques have been implemented and are being used in a PA system in a large multi-national IT company.
|
Have the text mining techniques developed by them been put into practical application?
|
Yes. They have been implemented and are being used in a PA system in a large multi-national IT company.
|
null | false
| null |
Determine which companies are primarily cloud-software companies and which are semiconductor companies: Intel, Google, Workday, Zscaler, AMD, NVIDIA, and Texas Instruments.
|
Intel, AMD, NVIDIA, and Texas Instruments are primarily semiconductor companies. Workday and Zscaler are fully cloud-software companies. Google is primarily a software company, however, they also develop custom-developed application-specific integrated circuits which are semiconductors.
|
|
null | false
| 116
|
The new generation of Neural Machine Translation (NMT) systems is known to be extremely data hungry BIBREF0. Yet, most existing NMT training pipelines fail to take full advantage of the very large volume of monolingual source and/or parallel data that is often available. Making better use of data is particularly critical in domain adaptation scenarios, where parallel adaptation data is usually assumed to be small in comparison to out-of-domain parallel data, or to in-domain monolingual texts. This situation sharply contrasts with the previous generation of statistical MT engines BIBREF1, which could seamlessly integrate very large amounts of non-parallel documents, usually with a large positive effect on translation quality.
Such observations have been made repeatedly and have led to many innovative techniques to integrate monolingual data in NMT, which we review shortly. The most successful approach to date is the proposal of BIBREF2, who use monolingual target texts to generate artificial parallel data via backward translation (BT). This technique has since proven effective in many subsequent studies. It is however very computationally costly, typically requiring the translation of large sets of data. Determining the “right” amount (and quality) of BT data is another open issue, but we observe that experiments reported in the literature only use a subset of the available monolingual resources. This suggests that standard recipes for BT might be sub-optimal.
This paper aims to better understand the strengths and weaknesses of BT and to design more principled techniques to improve its effects. More specifically, we seek to answer the following questions: since there are many ways to generate pseudo-parallel corpora, how important is the quality of this data for MT performance? Which properties of back-translated sentences actually matter for MT quality? Does BT act as some kind of regularizer BIBREF3? Can BT be efficiently simulated? Does BT data play the same role as target-side language modeling, or are they complementary? BT is often used for domain adaptation: can the effect of having more in-domain data be sorted out from the mere increase of training material BIBREF2? For studies related to the impact of varying the size of BT data, we refer the readers to the recent work of BIBREF4.
To answer these questions, we have reimplemented several strategies to use monolingual data in NMT and have run experiments on two language pairs in a very controlled setting (see § SECREF2 ). Our main results (see § SECREF4 and § SECREF5 ) suggest promising directions for efficient domain adaptation with cheaper techniques than conventional BT.
To our knowledge, this is the first work systematically exploring the cross-lingual transferring ability of multi-BERT on RC tasks.
|
Are they the first to systematically explore the cross-lingual transferring ability of multi-BERT on RC tasks?
|
Yes.
|
null | false
| null |
Who is the successor of Queen Elizabeth II ?
|
Charles III
|
|
1810.03459
| false
| null |
Table TABREF16 shows the recognition performance of the naive multilingual approach using the BLSTMP and VGG models against a monolingual model trained with BLSTMP. The results clearly indicate that having a better architecture such as VGG-BLSTM helps in improving multilingual performance. Except for Pashto, Georgian and Tokpisin, the multilingual VGG-BLSTM model gave an 8.8% absolute gain on average over the monolingual model. In the case of multilingual BLSTMP, except for Pashto and Georgian, an absolute gain of 5.0% on average is observed over the monolingual model. Even though VGG-BLSTM gave improvements, we were not able to perform stage-1 and stage-2 retraining with it due to time constraints. Thus, we proceed with the multilingual BLSTMP model for the retraining experiments tabulated below.
We used a character-level RNNLM, which was trained with a 2-layer LSTM on character sequences. We use all available paired text in the corresponding target language to train the LM for that language. No external text data were used. All language models are trained separately from the seq2seq models. When building the dictionary, we combined all the characters over all 15 languages mentioned in Table TABREF14 to make them work with transferred models. Regardless of the amount of data used for transfer learning, the RNNLM provides consistent gains across all languages over different data sizes.
Table TABREF16 shows the recognition performance of the naive multilingual approach using the BLSTMP and VGG models against a monolingual model trained with BLSTMP. The results clearly indicate that having a better architecture such as VGG-BLSTM helps in improving multilingual performance.
We used a character-level RNNLM, which was trained with a 2-layer LSTM on character sequences.
|
What architectures are explored to improve the seq2seq model?
|
The answers are shown as follows:
* VGG-BLSTM
* character-level RNNLM
|
null | false
| null |
What are 5 potential uses for an empty La Croix can?
|
1. Make your bike sound like it has an engine by wrapping the can around the back tire.
2. A percussion instrument to play along with friends in a jam session. You could just crinkle it, hit it with a pen, or fill it with sand or debris and shake it.
3. A baking tool. You can use the bottom of the can to stamp out perfectly circular, uniform cookies from a rolled-out sheet of cookie dough.
4. A field razor. Tear the can carefully to create an effective cutting device for emergency use.
5. A makeshift ball to throw around with friends at the park. An empty can could be a fun projectile to use in a friendly game of 3 Flies Up.
|
|
null | false
| null |
How long is a professional football field?
|
A professional football field is 100 yards long between the goal lines, and 120 yards long including both end zones.
|
|
null | false
| null |
Five people have received two Nobel Prizes. Marie Curie received the Physics Prize in 1903 for her work on radioactivity and the Chemistry Prize in 1911 for the isolation of pure radium, making her the only person to be awarded a Nobel Prize in two different sciences. Linus Pauling was awarded the 1954 Chemistry Prize for his research into the chemical bond and its application to the structure of complex substances. Pauling was also awarded the Peace Prize in 1962 for his activism against nuclear weapons, making him the only laureate of two unshared prizes. John Bardeen received the Physics Prize twice: in 1956 for the invention of the transistor and in 1972 for the theory of superconductivity. Frederick Sanger received the prize twice in Chemistry: in 1958 for determining the structure of the insulin molecule and in 1980 for inventing a method of determining base sequences in DNA. Karl Barry Sharpless was awarded the 2001 Chemistry Prize for his research into chirally catalysed oxidation reactions, and the 2022 Chemistry Prize for click chemistry.
|
Extract the names of the noble award winners, respective category and year in which they got from given text and list them using bullets in the format {Name} - {Category} - {Year}
|
• Marie Curie - Physics - 1903
• Marie Curie - Chemistry - 1911
• Linus Pauling - Chemistry - 1954
• Linus Pauling - Peace - 1962
• John Bardeen - Physics - 1956
• John Bardeen - Physics - 1972
• Frederick Sanger - Chemistry - 1958
• Frederick Sanger - Chemistry - 1980
• Karl Barry Sharpless - Chemistry - 2001
• Karl Barry Sharpless - Chemistry - 2022
|
null | false
| null |
What does CEO stand for
|
Chief Executive Officer
|
|
null | false
| null |
Which of these foods are from North and South of India: Chicken Tikka, Chole Samosa, Chole Batura, Idli Sambar, Dosa, Upama, Tamarind Rice
|
Chicken Tikka, Chole Samosa, Chole Batura are North Indian Foods. Idli Sambar, Dosa, Upama, Tamarind Rice are from South of India.
|
|
null | false
| 133
|
The task of document quality assessment is to automatically assess a document according to some predefined inventory of quality labels. This can take many forms, including essay scoring (quality = language quality, coherence, and relevance to a topic), job application filtering (quality = suitability for role + visual/presentational quality of the application), or answer selection in community question answering (quality = actionability + relevance of the answer to the question). In the case of this paper, we focus on document quality assessment in two contexts: Wikipedia document quality classification, and whether a paper submitted to a conference was accepted or not.
Automatic quality assessment has obvious benefits in terms of time savings and tractability in contexts where the volume of documents is large. In the case of dynamic documents (possibly with multiple authors), such as in the case of Wikipedia, it is particularly pertinent, as any edit potentially has implications for the quality label of that document (and around 10 English Wikipedia documents are edited per second). Furthermore, when the quality assessment task is decentralized (as in the case of Wikipedia and academic paper assessment), quality criteria are often applied inconsistently by different people, where an automatic document quality assessment system could potentially reduce inconsistencies and enable immediate author feedback.
Current studies on document quality assessment mainly focus on textual features. For example, BIBREF0 examine features such as the article length and the number of headings to predict the quality class of a Wikipedia article. In contrast to these studies, in this paper, we propose to combine text features with visual features, based on a visual rendering of the document. Figure 1 illustrates our intuition, relative to Wikipedia articles. Without being able to read the text, we can tell that the first article in Figure 1 has higher quality than the second, as it has a detailed infobox, extensive references, and a variety of images. Based on this intuition, we aim to answer the following question: can we achieve better accuracy on document quality assessment by complementing textual features with visual features?
Our visual model is based on fine-tuning an Inception V3 model BIBREF1 over visual renderings of documents, while our textual model is based on a hierarchical biLSTM. We further combine the two into a joint model. We perform experiments on two datasets: a Wikipedia dataset novel to this paper, and an arXiv dataset provided by BIBREF2 split into three sub-parts based on subject category. Experimental results on the visual renderings of documents show that implicit quality indicators, such as images and visual layout, can be captured by an image classifier, at a level comparable to a text classifier. When we combine the two models, we achieve state-of-the-art results over 3/4 of our datasets.
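A minimal PyTorch sketch of the joint idea follows: a fine-tuned Inception V3 over the rendered page, fused with a flat biLSTM standing in for the paper's hierarchical text encoder. All names and hyperparameters are illustrative.

```python
import torch
import torch.nn as nn
from torchvision import models

class JointQualitySketch(nn.Module):
    def __init__(self, vocab_size, num_classes, emb=100, hid=128):
        super().__init__()
        self.cnn = models.inception_v3(weights="DEFAULT")
        self.cnn.fc = nn.Identity()           # reuse the 2048-d visual features
        self.emb = nn.Embedding(vocab_size, emb)
        self.bilstm = nn.LSTM(emb, hid, batch_first=True, bidirectional=True)
        self.clf = nn.Linear(2048 + 2 * hid, num_classes)

    def forward(self, image, tokens):
        # image: (B, 3, 299, 299) page rendering; tokens: (B, T) word ids
        v = self.cnn(image)
        if isinstance(v, tuple):              # train mode also returns aux logits
            v = v[0]
        _, (h, _) = self.bilstm(self.emb(tokens))
        t = torch.cat([h[-2], h[-1]], dim=-1) # final forward/backward states
        return self.clf(torch.cat([v, t], dim=-1))
```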
This paper makes the following contributions:
All code and data associated with this research will be released on publication.
We perform experiments on two datasets: a Wikipedia dataset novel to this paper, and an arXiv dataset provided by Kang et al. (2018) split into three sub-parts based on subject category.
|
Which two datasets did the authors perform experiments on?
|
A Wikipedia dataset novel to this paper, and an arXiv dataset.
|
null | false
| null |
Classify each item as being related to "golf" or "football": Iron, driver, quarterback, lineman, putter, tee, linebacker
|
Iron: Golf
Driver: Golf
Quarterback: Football
Lineman: Football
Putter: Golf
Tee: Golf
Linebacker: Football
|
|
null | false
| null |
Jing is a 46-year-old male. He is an insurance executive who is married with two children. Jing is active in his church and plays golf on the weekends. He went to see his cardiologist because he experienced recent fatigue with chest pain on exertion. He has never smoked but he consumes 1 to 2 alcoholic drinks per day. His medical history reveals a blood cholesterol level of 260 mg%, a triglyceride level of 198 mg% and an HDL-C value of 43 mg%. Considering his sex, age, symptoms, and risk factors, can you reason about the likelihood of Jing having coronary heart disease?
|
Considering his sex, Jing is male, and men are generally at greater risk of coronary artery disease. Age is also a risk factor, as getting older increases his risk of damaged arteries. Additionally, he has experienced angina, which can be a common symptom of coronary artery disease. He also has high cholesterol and high triglyceride levels, and he consumes alcohol at high levels. Taking all of this into consideration, Jing has a good likelihood of having some degree of coronary heart disease.
|
|
null | false
| null |
In 93rd Academy Awards(Oscar Awards)The Best Picture award was given to whom?
|
Nomadland
|
|
null | false
| null |
The Wimbledon Championships, commonly known simply as Wimbledon, is the oldest tennis tournament in the world and is widely regarded as the most prestigious. It has been held at the All England Club in Wimbledon, London, since 1877 and is played on outdoor grass courts, with retractable roofs over the two main courts since 2019.
|
What are the Wimbledon Championships?
|
The Wimbledon Championships, commonly known simply as Wimbledon, is the oldest tennis tournament in the world and is widely regarded as the most prestigious. It has been held at the All England Club in Wimbledon, London, since 1877 and is played on outdoor grass courts, with retractable roofs over the two main courts since 2019.
Wimbledon is one of the four Grand Slam tennis tournaments, the others being the Australian Open, the French Open, and the US Open. Wimbledon is the only major still played on grass, the traditional tennis playing surface. Also, it is the only Grand Slam that retains a night-time curfew, though matches can now continue until 11.00 pm under the lights.
The tournament traditionally takes place over two weeks in late June and early July, starting on the last Monday in June and culminating with the Ladies' and Gentlemen's Singles Finals, scheduled for the Saturday and Sunday at the end of the second week. Five major events are held each year, with additional junior and invitational competitions also taking place. In 2009, Wimbledon's Centre Court was fitted with a retractable roof to lessen the loss of playing time due to rain. A roof was operational over No. 1 Court from 2019, when a number of other improvements were made, including adding cushioned seating, a table and 10 independently operable cameras per court to capture the games.
Wimbledon traditions include a strict all-white dress code for competitors, and royal patronage. Strawberries and cream are traditionally consumed at the tournament. Unlike other tournaments, advertising is minimal and low key from official suppliers such as Slazenger and Rolex. The relationship with Slazenger is the world's longest-running sporting sponsorship, providing balls for the tournament since 1902.
Due to the COVID-19 pandemic, 2020 Wimbledon was cancelled, the first cancellation of the tournament since World War II. The rescheduled 134th edition was staged from 28 June 2021 to 11 July 2021, following from the 2020 cancellation. The 135th edition was played between 27 June 2022 and 10 July 2022, and regularly scheduled play occurred on the middle Sunday for the first time. It marks the centenary of the inaugural championships staged at the Centre Court. The ATP, ITF, and WTA did not award ranking points for the 2022 tournament, due to controversy over the tournament excluding players representing Russia and Belarus.
The 2023 Wimbledon Championships will be the 136th staging and will run from 3 July 2023 to 16 July 2023 and it will be the first event of King Charles III since the death of the former patron, Queen Elizabeth II on 8 September 2022.
|
null | false
| 309
|
In this paper, we adopt a data-driven approach, which includes data collection, data cleaning, data normalization, descriptive analysis and predictive analysis, to evaluate quality on the Zhihu Live platform. To the best of our knowledge, we are the first to research quality evaluation of voice-answering products. We release a dataset named ZhihuLive-DB, which contains 7,242 records and 286,938 comment texts, for researchers to evaluate Zhihu Lives' quality. We also carry out a detailed analysis to reveal insights about Zhihu Live. In addition, we propose MTNet to accurately predict Zhihu Lives' quality. Our proposed method achieves the best performance compared with the baselines.
As knowledge sharing and Q&A platforms continue to gain popularity, the released dataset ZhihuLive-DB could greatly help researchers in related fields. However, the current data and attributes in ZhihuLive-DB are relatively limited. Malicious comments and assessments on SNS platforms are also very important issues to be taken into consideration. In our future work, we will gather a richer dataset, and integrate a malicious comment detector into our data-driven approach.
In our future work, we will gather a richer dataset, and integrate a malicious comment detector into our data-driven approach.
|
What will they do in the future?
|
They will gather a richer dataset, and integrate a malicious comment detector into their data-driven approach.
|
null | false
| 251
|
Experimental Setup: We follow BIBREF28 and train domain-specific models for all domains. We then evaluate each model across the different domain test sets, enabling us to understand the effect of different domains on downstream MT performance and to set up strong baselines for data selection experiments. We also train a general-domain model using the available data from all domains, as this is also a common approach in multi-domain scenarios BIBREF29. In all experiments we use a similar Transformer BIBREF36 model, and only control for the training data. More details on the exact training and hyperparameter settings for the NMT models are available in the supplementary material.
Results: The results for the cross-domain evaluation are available in Table TABREF28. In most cases, the best results for each domain are obtained by training on the in-domain data. Training on all the available data helped mostly for the Koran test set. This is expected, as the training data for this domain is considerably smaller than the training data for the rest of the domains (Table TABREF24). We can also see that more data is not necessarily better BIBREF37: while the subtitles corpus is the largest of all 5 and includes 500,000 sentence pairs, it is second to last in performance as measured by the average BLEU across all test sets.
Cross-Domain BLEU vs. Cluster Proximity: An interesting observation can be made with respect to the visual analysis of the domain clusters as depicted in Figure FIGREF15: as the Medical cluster (in yellow), Law cluster (in purple) and IT cluster (in red) are close to each other in the embedding space, their cross-domain BLEU scores are also higher. For example, note how in the results for the Medical domain-specific model (first row in Table TABREF28), the BLEU scores on the Law and IT test sets are much higher in comparison to those on the Koran and Subtitles test sets, whose clusters are farther away in the visualized embedding space. Similarly, as the Subtitles cluster (blue) is closer to the Koran cluster (green), the highest cross-domain BLEU score on the Koran test set is from the Subtitles model. To further quantify this phenomenon, we plot and measure Pearson's correlation between the cosine similarity of the centroids of the English BERT-based dev sentence representations for each domain pair, and the cross-domain BLEU score for that domain pair. This is shown in Figure FIGREF29. We can see the general trend whereby the closer the domain centroids are (with a similarity of 1 for training and evaluating on the same domain), the higher the cross-domain BLEU is between those domains, resulting in a Pearson's correlation of 0.81 (strong correlation). This suggests that such preliminary visual analysis can be a useful tool for understanding the relationship between diverse datasets, and motivates the use of pre-trained language model representations for domain data selection in MT.
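A minimal sketch of this correlation analysis is shown below, assuming precomputed per-domain BERT dev-sentence embeddings and a table of cross-domain BLEU scores; variable names are illustrative.

```python
import numpy as np
from scipy.stats import pearsonr

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def similarity_bleu_correlation(dev_embs, bleu):
    """dev_embs: {domain: (n_sentences, dim) array of BERT representations};
    bleu: {(train_domain, test_domain): BLEU score}."""
    centroids = {d: e.mean(axis=0) for d, e in dev_embs.items()}
    sims, scores = [], []
    for (train, test), score in bleu.items():
        sims.append(cosine(centroids[train], centroids[test]))
        scores.append(score)
    r, _ = pearsonr(sims, scores)  # the paper reports r = 0.81 for this trend
    return r
```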
In most cases, the best results for each domain are obtained by training on the in-domain data.
|
How are the best results for each domain obtained in most cases?
|
They are obtained by training on the in-domain data.
|
null | false
| null |
Since 2008, Marvel Studios has released 31 films within the MCU, from Iron Man (2008) to Ant-Man and the Wasp: Quantumania (2023), eight television series since 2021, from WandaVision (2021) to She-Hulk: Attorney at Law (2022), and two television specials, Werewolf by Night (2022) and The Guardians of the Galaxy Holiday Special (2022). The television series What If...? (2021) is the studio's first animated property, created by its "mini-studio" Marvel Studios Animation. These films, television series, and television specials all share continuity with each other, along with the One-Shots short films produced by the studio. The television series produced by Marvel Television also acknowledge the continuity. The Avengers (2012), Iron Man 3 (2013), Avengers: Age of Ultron (2015), Captain America: Civil War (2016), Black Panther (2018), Avengers: Infinity War (2018), Captain Marvel (2019), Avengers: Endgame (2019), Spider-Man: Far From Home (2019) and Spider-Man: No Way Home (2021) are all among the 50 highest-grossing films of all time, with Avengers: Endgame becoming the highest-grossing film of all time from July 2019 until March 2021. In addition to the MCU, Marvel Studios was also involved with the production of other Marvel-character film franchises that have exceeded $1 billion in North American box office revenue, including the X-Men and Spider-Man multi-film franchises.
|
Name some of the highest-grossing films of all time based on the passage.
|
The Avengers (2012), Iron Man 3 (2013), Avengers: Age of Ultron (2015), Captain America: Civil War (2016), Black Panther (2018), Avengers: Infinity War (2018), Captain Marvel (2019), Avengers: Endgame (2019), Spider-Man: Far From Home (2019) and Spider-Man: No Way Home (2021).
|
null | false
| null |
How many wheels does a bicycle have?
|
A bicycle has two wheels
|
|
null | false
| 236
|
Speech input is now a common feature of smart devices. In many cases, the user's query involves entities such as a name from a contact list, a location, or a music title. Recognizing entities is particularly challenging for speech recognition because many entities are infrequent or outside the main vocabulary of the system. One way to improve performance in such cases is through the use of a personal language model (LM) which contains the expected user-specific entities. Because each user can have their own personalized LM, it is vital that the speech decoder be able to efficiently load the model on the fly, so it can be used in decoding without any noticeable increase in latency.
Many state-of-the-art speech recognition decoders are based on the weighted finite state transducer (WFST) paradigm BIBREF0, BIBREF1. A conventional WFST decoder searches a statically composed $HCLG$ graph, where $H$ is the graph that translates HMM states to CD phones, $C$ translates CD phones to graphemes, $L$ translates graphemes to words, and $G$ is the graph that represents the language model. Using a statically composed graph has two limitations. First, it is both compute and memory intensive when the vocabulary and LM are large. Second, the static graph approach makes it hard to handle personalized language models BIBREF2. Many common tasks a user may want to perform with a voice assistant, such as making phone calls, messaging a specific contact or playing favorite music, require a personalized language model. A dynamic WFST decoder is better suited to such cases. In a dynamic WFST decoder, the search graph is $HCL \circ G$: $HCL$ is composed and optimized offline, while $G$ is composed on the fly with lazy (on-demand) composition, denoted $\circ$.
To handle dynamic entities, a class LM $G_c$ is normally used as the background $G$, and a personalized LM $G_p$ replaces the class tags on the fly, before applying lazy composition.
Since the non-terminal states are composed on the fly, the states of the recognition FST will also contain personalized information that cannot be used by other users or service threads.
In previous work, a method was proposed to do pre-initialized composition for a non-class LM BIBREF3; however, the dynamic part is still expanded on the fly. In this work, we propose two improvements in order to best leverage class language models. First, we use simpler methods for pre-initialization which do not need to pre-generate decoder state statistics. Second, we propose a two-layer pre-initialization mechanism that also avoids performing dynamic expansion on a per-user basis. In the two-layer pre-initialization method, we make use of a class LM with class tags. We build a personalized FST that contains the members of the class for each user. Using the FST replacement algorithm, we obtain a personalized language transducer BIBREF4. We perform a pre-composition for all FST states whose transitions do not contain class tags. By doing so, the actual on-demand composition is only required for the states in the personalized FST. For a multi-threaded service, the pre-composed FST can be shared by all threads, since it does not contain personalized FST states (non-terminals). The personalized part is shared across all utterances from the same user, which makes full use of memory.
Unlike the previous pre-initialization approach that is based on calculating state statistics BIBREF3, our simplified pre-initialization methods do not rely on pre-calculated state frequencies. Instead, we directly expand the graph with breadth-first search or through a data-driven approach where a small number of utterances are processed by the decoder offline. We found that both methods are effective, but the data-driven approach outperforms the breadth-first search algorithm, and both methods can be combined to achieve the best performance. Through a series of experiments on a speech recognition task for the calling domain, we found that pre-initialization of the public graph speeds up decoding by a factor of three. Furthermore, sharing the private graph further reduces decoding time and results in a factor-of-five improvement in efficiency.
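A pure-Python sketch of the breadth-first public-graph pre-initialization idea follows: expand composed states that do not cross a class tag into a cache that can be shared across users and threads. The `compose_arc` and `is_class_tag` callables are placeholders for the real WFST machinery, not an actual decoder API.

```python
from collections import deque

def precompose_public_states(start, compose_arc, is_class_tag, max_depth=6):
    """compose_arc(state) yields (label, next_state) pairs of the lazy
    composition; states reachable only through class tags are left for
    on-demand, per-user expansion at decode time."""
    cache = {start: []}
    frontier = deque([(start, 0)])
    while frontier:
        state, depth = frontier.popleft()
        if depth >= max_depth:
            continue
        for label, nxt in compose_arc(state):
            if is_class_tag(label):      # personalized boundary: stop here
                continue
            cache[state].append((label, nxt))
            if nxt not in cache:
                cache[nxt] = []
                frontier.append((nxt, depth + 1))
    return cache                          # shared, read-only at decode time
```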
In the two-layer pre-initialization method, we make use of a class LM with class tags.
|
What model do they use in their method?
|
A class language model (LM) with class tags.
|
null | false
| null |
In chess, the threefold repetition rule states that a player may claim a draw if the same position occurs three times during the game. The rule is also known as repetition of position and, in the USCF rules, as triple occurrence of position. Two positions are by definition "the same" if the same types of pieces occupy the same squares, the same player has the move, the remaining castling rights are the same and the possibility to capture en passant is the same. The repeated positions need not occur in succession. The reasoning behind the rule is that if the position occurs three times, no real progress is being made and the game could hypothetically continue indefinitely.
The game is not automatically drawn if a position occurs for the third time – one of the players, on their turn, must claim the draw with the arbiter. The claim must be made either before making the move which will produce the third repetition, or after the opponent has made a move producing a third repetition. By contrast, the fivefold repetition rule requires the arbiter to intervene and declare the game drawn if the same position occurs five times, needing no claim by the players.
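As an illustration, the python-chess library implements exactly this claim logic; a minimal sketch (the knight shuffle is illustrative):

```python
import chess

# Four knight moves out and back, twice: the starting position then
# occurs for the third time with the same side to move and same rights.
board = chess.Board()
for uci in ["g1f3", "g8f6", "f3g1", "f6g8",
            "g1f3", "g8f6", "f3g1", "f6g8"]:
    board.push_uci(uci)

print(board.can_claim_threefold_repetition())  # True: a claim, not automatic
print(board.is_fivefold_repetition())          # False: the arbiter-enforced rule
```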
|
Why do we have a threefold repetition rule in chess?
|
The rule exists because if the same position occurs three times, no real progress is being made; it ensures the game can end rather than continue indefinitely.
|
null | false
| null |
Who was Clovis?
|
Clovis I, in Latin Chlodovechus, born around 466 and died in Paris on 27 November 511, was king of the Salian Franks, and then king of all the Franks from 481 to 511.
A member of the Merovingian dynasty, he was the son of Childeric I, king of the Salian Franks of Tournai (in present-day Belgium), and of Queen Basina of Thuringia. A military leader, he considerably enlarged the territory of the small kingdom of the Salian Franks that he inherited on his father's death, eventually unifying a large part of the Frankish kingdoms, pushing back the Alamanni and the Burgundians, and annexing the territories of the Visigoths in southern Gaul.
Clovis's reign is known above all through the account given by Gregory of Tours, a Gallo-Roman bishop whose History of the Franks is rich in information, but whose essentially edifying purpose comes with a lack of precision and historical coherence. The details of Clovis's life are not known with certainty, and the way they have been dressed up is most often suspect. Nevertheless, Clovis is considered in historiography to be one of the most important figures in the history of France.
|
|
null | false
| 253
|
As an essential part of a task-oriented dialogue system BIBREF0, the task of natural language generation (NLG) is to produce a natural language utterance containing the desired information, given a semantic representation consisting of dialogue act types with a set of slot-value pairs. Conventional methods using hand-crafted rules often generate monotonic utterances, and they require a substantial amount of human engineering work. Recently, various neural approaches BIBREF1, BIBREF2, BIBREF3 have been proposed to generate accurate, natural and diverse utterances. However, these methods are typically developed for particular domains. Moreover, they are often data-intensive to train. The high annotation cost prevents developers from building their own NLG component from scratch. Therefore, it is extremely useful to train an NLG model that can be generalized to other NLG domains or tasks with a reasonable amount of annotated data. This is referred to as the low-resource NLG task in this paper.
Recently, some methods have been proposed for low-resource NLG tasks. Apart from the simple data augmentation trick BIBREF4, specialized model architectures, including conditional variational auto-encoders (CVAEs, BIBREF3, BIBREF5, BIBREF6) and adversarial domain adaptation critics BIBREF5, have been proposed to learn domain-invariant representations. Although promising results were reported, we found that the datasets used by these methods are simple, tending to enumerate many slots and values in an utterance without much linguistic variation. As a consequence, over-fitting the slots and values in the low-resource target domain could even outperform versions trained with rich source domain examples BIBREF6. Fortunately, there is a new large-scale dialog dataset (MultiWoz, BIBREF7) that contains a great variety of domains and linguistic patterns, which allows us to conduct extensive and meaningful experimental analysis for low-resource NLG tasks.
In this paper, instead of casting the problem as a model-based approach, we propose a generalized optimization-based meta-learning approach to directly enhance the optimization procedure for the low-resource NLG task. We start by arguing that the recently proposed model-agnostic meta-learning algorithm (MAML, BIBREF8) is a nice fit for the low-resource NLG task. Then, we propose a generalized NLG algorithm called Meta-NLG based on MAML, by viewing languages in different domains or dialog act types as separate Meta NLG tasks. Following the essence of MAML, the goal of Meta-NLG is to learn a better initialization of model parameters that facilitates fast adaptation to new low-resource NLG scenarios. As Meta-NLG is model-agnostic as long as the model can be optimized by gradient descent, we can apply it to any existing NLG model to optimize it in a way that adapts better and faster to new low-resource tasks.
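A minimal sketch of the MAML-style optimization underlying Meta-NLG is given below, using the first-order approximation for brevity; `loss_fn` and the task structure are illustrative assumptions, not the paper's exact training loop.

```python
import copy
import torch

def meta_nlg_step(model, meta_tasks, loss_fn, alpha=1e-2, beta=1e-3):
    """One meta-step: adapt a copy of the model on each Meta NLG task's
    support set, then update the shared initialization from the adapted
    copies' query-set gradients (first-order MAML)."""
    meta_grads = [torch.zeros_like(p) for p in model.parameters()]
    for support, query in meta_tasks:      # a task = one domain / dialog act type
        learner = copy.deepcopy(model)
        inner_opt = torch.optim.SGD(learner.parameters(), lr=alpha)
        inner_opt.zero_grad()
        loss_fn(learner, support).backward()   # inner adaptation step
        inner_opt.step()
        learner.zero_grad()
        loss_fn(learner, query).backward()     # evaluate the adapted parameters
        for g, p in zip(meta_grads, learner.parameters()):
            if p.grad is not None:
                g += p.grad
    with torch.no_grad():                      # meta-update of the initialization
        for p, g in zip(model.parameters(), meta_grads):
            p -= beta * g / len(meta_tasks)
```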
The main contribution of this paper is two-fold:
Then, we propose a generalized NLG algorithm called Meta-NLG based on MAML, by viewing languages in different domains or dialog act types as separate Meta NLG tasks.
|
What is Meta-NLG based on?
|
It is based on MAML.
|
null | false
| null |
Most classifications of Magic the Gathering decks begin from one of four major strategies: aggro, control, combo and midrange.
Aggro
Aggro (short for "aggressive") decks attempt to reduce their opponents from 20 life to 0 life as quickly as possible, rather than emphasize a long-term game plan. Aggro decks focus on converting their cards into damage; they prefer to engage in a race for tempo rather than a card advantage-based attrition war. Aggro generally relies upon creatures as its accumulative source of damage. Aggro decks can quickly overwhelm unprepared opponents and proceed to eke out the last bit of damage they need to end the game. Aggro decks also generally have access to disruptive elements, which can inhibit the opponent's attempts to respond.
Example cards: Savannah Lions, Bitterblossom, Lightning Bolt, Rogue Elephant, Incinerate
Example decks:
White Weenie, which uses small, efficient creatures such as Savannah Lions, Icatian Javelineers, and Mother of Runes
Affinity, which uses the affinity mechanic and large numbers of artifacts to quickly play spells such as Thoughtcast and Frogmite, while efficiently dealing damage using Disciple of the Vault and Arcbound Ravager.
Zoo, which uses low-cost, high power creatures such as Tarmogoyf and Wild Nacatl to kill the opponent quickly.
Sligh, which utilizes its mana as efficiently as possible to kill the opponent quickly, using low-cost cards such as Jackal Pup and Lightning Bolt.
Suicide Black, which uses efficient but dangerous cards that cost life such as Thoughtseize, Dark Confidant, Grim Tutor, and Bitterblossom. Suicide Black epitomizes Black's philosophy—win at all costs—and treats even its life total as an expendable resource.
Control
Control decks avoid racing. They attempt to slow the game down by executing an attrition plan. As the game progresses, control decks are able to take advantage of their slower, more powerful, cards. The primary strength of control decks is their ability to devalue the opponent’s cards. They do this in four ways:
Answering threats at a reduced cost. Given the opportunity, Control decks can gain card advantage by answering multiple threats with one spell ("clearing"/"wiping" the board), stopping expensive threats with cheaper spells, and drawing multiple cards or forcing the opponent to discard multiple cards with one spell.
Not playing threats to be answered. By playing few proactive spells of their own, control decks gain virtual card advantage by reducing the usefulness of opposing removal cards.
Disrupting synergies. Even if control decks do not deal with every threat directly, they can leave out whichever ones stand poorly on their own; e.g., an enchantment which gives a bonus to creatures will never need attention if all enemy creatures are quickly neutralized.
Dragging the game out past opposing preparations. An opponent's faster, efficient cards will become less effective over time.
Example cards: Force of Will, Duress, Wrath of God, Pernicious Deed, Void
Example decks:
Tezzeret Control, which controls the game using counterspells such as Mana Drain, builds card advantage with cards such as Dark Confidant, and ends the game using Tezzeret the Seeker to find Time Vault and activate it for infinite turns.
Mono Blue Control, which uses a heavy suite of counterspells alongside card-drawing such as Thirst for Knowledge, removal such as Echoing Truth, and a win condition such as Tezzeret the Seeker. This class of deck is nicknamed "Draw-Go," because most of its spells are instants designed to be played during the opponent's turn.
Blue-White Control, which is similar to Mono-Blue Control, but features more board-control cards such as Wrath of God, and Pacifism.
Psychatog, supplemented by card-drawing like Fact or Fiction and a number of disruptive spells.
Astral Slide, which uses large numbers of cards with cycling, including those with added benefits such as Eternal Dragon and Slice and Dice, to power Astral Slide and Lightning Rift.
Mono-Black Control, which uses removal spells such as Innocent Blood and Barter in Blood to control the board, and Cabal Coffers to kill the opponent with spells such as Consume Spirit. It can also use cards like Underworld Dreams to put the opponent on a timer.
The Deck, which uses card drawing such as Fact or Fiction and deck searching cards such as Demonic Tutor to find powerful cards that are highly effective against particular strategies (such as The Abyss, Diabolic Edict, and Balance), alongside a Blue base of counterspells to control the game and obtain an insurmountable lead.
Combo
Combo decks use the interaction of two or more cards (a "combination") to create a powerful effect that either wins the game immediately or creates a situation that subsequently leads to a win. Combo decks value consistency, speed, and resilience: the deck should be reliable enough to produce the combo on a regular basis, the deck should be able to use the combo fast enough to win before the opponent, and the deck should be able to withstand disruption and still win.
Many decks have smaller, combo-like interactions between their cards, which is better described as synergy.
Example cards: Flash, Tendrils of Agony, Empty the Warrens, Aluren, Painter's Servant.
Example decks:
The Perfect Storm, which utilizes Dark Ritual and artifact mana to draw cards and fuel a lethal Tendrils of Agony, all the while disrupting the opponent with Duress and Force of Will.
Painter Combo, which uses Painter's Servant and chooses Blue to permit Red Elemental Blast to destroy any permanent or counter any spell, while also allowing Grindstone to put the opponent's entire library into their graveyard.
Worldgorger Dragon Combo, which revolves around the infinite loop triggered when Worldgorger Dragon is animated from the graveyard using an enchantment such as Animate Dead. The loop generates mana and card drawing which is then used to end the game.
Belcher Combo, which uses free and efficient mana acceleration to play and activate Goblin Charbelcher, preferably on the first turn. Because the deck has two or fewer lands, one activation of Goblin Charbelcher will almost always kill the opponent.
Hulk-Flash, which is dedicated to casting Flash and putting a Protean Hulk into play and then into the graveyard, allowing the player to find a combination of creatures which will kill the opponent instantly. Summoner's Pact and Merchant Scroll are used to find the combo pieces, while Force of Will and Pact of Negation protect the combo.
Steel City Vault, which uses "Draw 7" spells such as Timetwister to rapidly assemble the Time Vault-Voltaic Key combo for infinite turns. The deck also uses several cards such as Force of Will and Ancient Grudge to efficiently deal with Null Rod, the most effective answer to the Vault-Key combo.
Hexmage Depths, which uses Vampire Hexmage to inexpensively remove the counters from Dark Depths and put a flying, indestructible 20/20 creature token into play as early as the first turn.
Midrange
A typical midrange deck has an early game plan of mana ramp and control, but begins to play threats once it reaches four to six mana. A midrange deck will often seek to play a reactive, attrition-based game against aggro decks and a more proactive, tempo-based game against control decks. Colloquially, this is referred to as "going bigger" than aggro and "getting in under" control.
Example cards: Huntmaster of the Fells, Thragtusk, Sakura-Tribe Elder
Example decks:
Jund Midrange (BRG), a powerful and flexible deck with virtually zero bad matchups thanks to its access to the most powerful cards each of its colors can offer: black gives Liliana of the Veil and powerful discard spells like Thoughtseize and Inquisition of Kozilek, red gives reliable removal in Lightning Bolt, and green gives some of the best creatures like Tarmogoyf and Scavenging Ooze.
Abzan Midrange (WBG), which is similar to Jund, but trades speed for powerful white-based cards like Path to Exile, Lingering Souls and Siege Rhino.
Jeskai Midrange (UWR), which has more control elements, using Sphinx's Revelation to draw into a very large number of cards and Assemble the Legion to end up swarming the board with tokens.
Temur Emerge (GUR), which leverages the Emerge mechanic introduced in Eldritch Moon to efficiently use ramp creatures such as Pilgrim's Eye and Primal Druid for their land-smoothing abilities while sacrificing them to Emerge board-stalling threats such as Elder Deep-Fiend as early as turn 4, also filling the graveyard to make Emrakul, the Promised End playable on turn 7.
Tron, which tries to build the triplet of lands known as "Urzatron" (Urza's Tower, Urza's Mine and Urza's Power Plant) as quickly as possible and then uses the massive amount of mana to play powerful cards like Karn Liberated and Wurmcoil Engine as early as turn 3.
|
Given a reference text about Magic the Gathering decks, please suggest cards I should consider in a Midrange deck.
|
You should consider Huntmaster of the Fells, Thragtusk, and Sakura-Tribe Elder.
|
null | false
| null |
Why is lacrosse considered the fastest game on two feet?
|
During both men's and women's lacrosse games, the ball moves very quickly over a large field, both through passes between players and through players running the ball themselves from the defensive to the offensive end of the field.
|
|
null | false
| null |
Horizon Zero Dawn is a 2017 action role-playing game developed by Guerrilla Games and published by Sony Interactive Entertainment. It is the first game of the Horizon video game series. The plot follows Aloy, a young hunter in a world overrun by machines, who sets out to uncover her past. The player uses ranged weapons, a spear, and stealth to combat mechanical creatures and other enemy forces. A skill tree provides the player with new abilities and bonuses. The player can explore the open world to discover locations and take on side quests. It is the first game in the Horizon series and was released for the PlayStation 4 in 2017 and Windows in 2020.
|
Extract the title of the game, the name of its developer and the name of the main character and separate them by a comma.
|
Horizon Zero Dawn, Guerrilla Games, Aloy
|
1703.06492
| false
| null |
The Co-Attention mechanism inspires us to build part of our VQABQ model, illustrated by Figure 2 . In the VQABQ model, there are two main modules, the basic question generation module (Module 1) and co-attention visual question answering module (Module 2). We take the query question, called the main question (MQ), encoded by Skip-Thought Vectors BIBREF9 , as the input of Module 1. In the Module 1, we encode all of the questions, also by Skip-Thought Vectors, from the training and validation sets of VQA BIBREF0 dataset as a 4800 by 215623 dimension basic question (BQ) matrix, and then solve the LASSO optimization problem, with MQ, to find the 3 BQ of MQ. These BQ are the output of Module 1. Moreover, we take the MQ, BQ and the given image as the input of Module 2, the VQA module with co-attention mechanism, and then it can output the final answer of MQ. We claim that the BQ can help Module 2 get the correct answer to increase the VQA accuracy. In this work, our main contributions are summarized below:
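The LASSO step above can be made concrete with a short sketch. This is an illustrative reconstruction, not the paper's code: it assumes `mq` is the 4800-dimensional Skip-Thought embedding of the main question and `bq_matrix` is the 4800 x 215623 matrix whose columns are the embedded candidate basic questions; the function name and the regularization strength are hypothetical.
```python
# Illustrative sketch of Module 1's BQ selection (not the authors' code).
# Assumes: mq has shape (4800,); bq_matrix has shape (4800, n_questions).
import numpy as np
from sklearn.linear_model import Lasso

def top_basic_questions(mq, bq_matrix, k=3, alpha=0.01):
    # Solve min_x ||mq - bq_matrix @ x||_2^2 + alpha * ||x||_1, so the
    # main question is approximated by a sparse mix of basic questions.
    lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
    lasso.fit(bq_matrix, mq)
    # The k largest coefficients (by magnitude) index the k chosen BQs.
    return np.argsort(-np.abs(lasso.coef_))[:k]
```
The three columns selected this way would be the BQ handed to Module 2 alongside the MQ and the image.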
In the VQABQ model, there are two main modules, the basic question generation module (Module 1) and co-attention visual question answering module (Module 2).
|
What two main modules their approach consists of?
|
The answers are shown as follows:
* the basic question generation module (Module 1) and co-attention visual question answering module (Module 2)
|
null | false
| null |
What are the steps to perform breathing meditation?
|
The simplest, most elegant and accessible breathing meditation technique is called "Sohum Sadhana". The way to perform it is the following:
1. Sit in a comfortable posture. Any posture which you can maintain without undue stress anywhere on the body is fine.
2. Close your eyes.
3. Breathe in and out as you normally would.
4. While inhaling, mentally recite the syllable "So", with a long "o", which stretches as long as inhalation action continues.
5. While exhaling, mentally recite the syllable "hum", with the "um" pronounced as in "rum". Let the ending "m" sound continue until the end of the exhalation action.
6. Keep repeating steps 4 and 5 for as long as you want.
|
|
null | false
| null |
The Battle of Kuningtou or Battle of Guningtou, also known as the Battle of Kinmen, was a battle fought over Kinmen in the Taiwan Strait during the Chinese Civil War in 1949. The failure of the Communists to take the island left it in the hands of the Kuomintang (Nationalists) and crushed their chances of taking Taiwan to destroy the Nationalists completely in the war.
|
Who were fighting each other in this battle and what's the consequence of it
|
The Chinese Nationalists and the Chinese Communists fought this battle during the Chinese Civil War in 1949. The Nationalists won, which deterred the Communists from taking over Taiwan, the last stronghold of the Nationalist government after it fled the mainland in 1949, and it has remained so to this day.
|
null | false
| null |
Please tell me whether these are fish or fowl: salmon, chicken, tuna, swordfish, turkey, and halibut.
|
The fish are: salmon, tuna, swordfish, and halibut. The fowl (i.e., birds) are chicken and turkey.
|
|
null | false
| null |
NGC 7314 is a spiral galaxy located in the southern constellation of Piscis Austrinus. It was discovered by English astronomer John Herschel on July 29, 1834. This is a nearby Seyfert (active) galaxy, located at a distance of approximately 54.6 megalight-years from the Milky Way. Since it appears to have detached spiral arm segments (either from dust lanes or bright star clusters), it was listed in Halton Arp's Atlas of Peculiar Galaxies.
Walter Scott Houston describes its appearance in small telescopes:
Do not let its photographic magnitude of 11.6 scare you off, for it can be seen in a 6-inch telescope as a curiously fuzzy object. But it is small, appearing only 4' by 2'.
The morphological classification of this galaxy is SAB(rs)bc, indicating a spiral galaxy with a weak central bar (SAB), an incomplete ring structure around the bar (rs), and moderately wound arms (bc). The plane of the galactic disk is inclined by 64° to the line of sight from the Earth, with the major axis aligned along a position angle of 178°. Within the galaxy's core is an active galactic nucleus tentatively classified as a type I Seyfert. The central supermassive black hole has a relatively low mass, estimated as (0.87±0.45)×10⁶ M☉. The core is a source for X-ray emission that is seen to vary dramatically on time scales as low as hours.
|
Tell me about NGC 7314
|
NGC 7314 is a spiral galaxy located in the southern constellation of Piscis Austrinus. It was discovered by English astronomer John Herschel on July 29, 1834.
|
null | false
| null |
What is a good kids snack for school?
|
You have many different options when considering a good snack for kids. Typically, if the snack is for school, you will want to choose a healthy snack to keep your child alert and paying attention during class. Healthy snack options are fruits and vegetables, cheese and crackers, or Goldfish crackers. Other, unhealthier snack options would be sugary snacks like gummy bears and chocolate.
|
|
null | false
| 88
|
The objective of knowledge graph embedding is to encode both entities and relations of knowledge graphs into continuous low-dimensional vector spaces. Previously, most works focused on symbolic representation of knowledge graph with structure information, which can not handle new entities or entities with few facts well. In this paper, we propose a novel deep architecture to utilize both structural and textual information of entities. Specifically, we introduce three neural models to encode the valuable information from text description of entity, among which an attentive model can select related information as needed. Then, a gating mechanism is applied to integrate representations of structure and text into a unified architecture. Experiments show that our models outperform the baseline by a margin on link prediction and triplet classification tasks. The source code of this paper will be available on GitHub.
Previously, most works focused on symbolic representation of knowledge graph with structure information, which can not handle new entities or entities with few facts well.
|
What are the disadvantages of the state-of-the-art?
|
Previously, most works focused on symbolic representation of knowledge graph with structure information, which can not handle new entities or entities with few facts well.
|
null | false
| null |
In the series A Song of Ice and Fire, who is the founder of House Frey?
|
Emmon Frey
|
|
null | false
| null |
What are some interesting places to visit in Canada as a tourist?
|
Canada is a large country with many unique destinations. Here are a few suggestions on where to go for a vacation:
1. Toronto - visit the CN Tower and walk down Bay Street to see the Canadian Wall Street.
2. Montreal - has a mix of both old and new sights, with some buildings dating back to the founding of the city.
3. Calgary - visit the Calgary Stampede to experience one of the world's largest rodeos.
There are many other cities in Canada to visit, but these are just a few to get started!
|
|
null | false
| 170
|
To test our clusters, we employed the Word Intrusion task BIBREF35. We present the annotator with five words – four drawn from one cluster and one drawn randomly from the domain vocabulary – and we ask them to pick out the intruder. The intuition is that if the cluster is coherent, then an observer should be able to identify the out-of-cluster word as the intruder. For both domains, we report results on all clusters and on the top 8, ranked by ascending normalized sum of squared errors, which can be seen as a prediction of coherence. In the celebrity domain, annotators identified the out-of-cluster word 73% of the time in the top-8 and 53% overall. In the professor domain, annotators identified it 60% of the time in the top-8 and 49% overall. As expected, top-8 performance in both domains does considerably better than overall, but at all levels the precision is significantly above the random baseline of 20%.
To test cluster labels, we present the annotator with a label and a word, and we ask them whether the word falls under the concept. The concept is a potential cluster label and the word is either a word from that cluster or drawn randomly from the domain vocabulary. For a good label, the rate at which in-cluster words fall under the label should be much higher than the rate at which out-of-cluster words fall under. In our experiments, we tested the top 4 predicted labels and the centroid of the cluster as a strong baseline label. The centroid achieved an in-cluster rate of .60 and out-of-cluster rate of .18 (difference of .42). Our best performing predicted label achieved an in-cluster rate of .65 and an out-of-cluster rate of .04 (difference of .61), thus outperforming the centroid on both rates and increasing the gap between rates by nearly 20 points. In the Appendix, we include more detailed results on both tasks.
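As a minimal sketch of the arithmetic behind these rates (the function name and data layout are assumptions, not from the paper), each annotation can be recorded as a pair of booleans:
```python
# Hypothetical layout: each judgment is (word_in_cluster, annotator_said_yes).
def label_rates(judgments):
    in_yes = [yes for inside, yes in judgments if inside]
    out_yes = [yes for inside, yes in judgments if not inside]
    in_rate = sum(in_yes) / max(len(in_yes), 1)
    out_rate = sum(out_yes) / max(len(out_yes), 1)
    # A good label has a high in-cluster rate and a large gap between rates.
    return in_rate, out_rate, in_rate - out_rate
```
Plugging in the numbers above, the best predicted label scores (.65, .04, .61) against the centroid baseline's (.60, .18, .42).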
To test cluster labels, we present the annotator with a label and a word, and we ask them whether the word falls under the concept.
|
What do the authors present to test cluster labels?
|
To test cluster labels, the authors present the annotator with a label and a word, and they ask them whether the word falls under the concept.
|
null | false
| null |
Research from security company Trend Micro lists premium service abuse as the most common type of Android malware, where text messages are sent from infected phones to premium-rate telephone numbers without the consent or even knowledge of the user. Other malware displays unwanted and intrusive advertisements on the device, or sends personal information to unauthorised third parties. Security threats on Android are reportedly growing exponentially; however, Google engineers have argued that the malware and virus threat on Android is being exaggerated by security companies for commercial reasons, and have accused the security industry of playing on fears to sell virus protection software to users. Google maintains that dangerous malware is actually extremely rare, and a survey conducted by F-Secure showed that only 0.5% of Android malware reported had come from the Google Play store.
In 2021, journalists and researchers reported the discovery of spyware, called Pegasus, developed and distributed by a private company which can and has been used to infect both iOS and Android smartphones often – partly via use of 0-day exploits – without the need for any user-interaction or significant clues to the user and then be used to exfiltrate data, track user locations, capture film through its camera, and activate the microphone at any time. Analysis of data traffic by popular smartphones running variants of Android found substantial by-default data collection and sharing with no opt-out by this pre-installed software. Both of these issues are not addressed or cannot be addressed by security patches.
|
Extract from this article the most common Android security threats and simple descriptions of each threat, in a bullet pointed list.
|
The most common types of android security threats are:
- Premium service abuse: where text messages are sent from infected phones to premium-rate telephone numbers without the consent or even knowledge of the user
- Malware: that displays unwanted and intrusive advertisements on the device, or sends personal information to unauthorized third parties
- Spyware: such as Pegasus, which can and has been used to infect both iOS and Android smartphones and can then be used to exfiltrate data, track user locations, capture film through its camera, and activate the microphone at any time.
|
null | false
| null |
What is National Financial Switch or NFS?
|
National Financial Switch (NFS) is the largest network of shared automated teller machines (ATMs) in India. It was designed, developed and deployed by the Institute for Development and Research in Banking Technology (IDRBT) in 2004, with the goal of inter-connecting the ATMs in the country and facilitating convenience banking. It is run by the National Payments Corporation of India (NPCI). As of 31 January 2022, there were 1,203 members, comprising 111 direct members, 1,045 sub-members, 43 RRBs and 4 WLAOs, using the NFS network, which connected more than 2.55 lakh ATMs (including cash deposit machines/recyclers).
The National Financial Switch was launched by the IDRBT on 27 August 2004, connecting the ATMs of three banks: Corporation Bank, Bank of Baroda and ICICI Bank. The IDRBT then worked towards bringing all major banks in India on board, and by December 2009 the network had grown to connect 49,880 ATMs of 37 banks, thereby emerging as the largest network of shared ATMs in the country.
|
|
null | false
| 6
|
Our motivation for using templates for data synthesis is that seq2seq synthesis models (as discussed in related work) tend to generate irrelevant and repeated words BIBREF17, while templates can produce more coherent and concise output. Also, extracting templates can be done either manually or automatically typically by training a few parameters or even doing no training, then external information in the form of keywords or snippets can be populated into the templates with the help of more sophisticated models. Accordingly, using templates can be very tempting for domains with limited resources such as ours.
Model Structure. The model consists of 4 modules:
1. Template extraction: To convert human summaries into templates, we remove keywords in the summary to leave only non-keywords. We use Rapid Automatic Keyword Extraction (RAKE) BIBREF18 to identify keywords.
2. Template clustering: Upon converting human summaries into templates, we cluster them into $N$ clusters with the goal of using any template from the same cluster interchangeably. A template is first converted into embeddings using a pretrained BERT model BIBREF19, where template embedding is constructed by average pooling word embeddings. Templates are then clustered using k-medoid.
3. Summary rewriting: An encoder-attention-decoder with pointer network is trained to perform the rewriting task. The model is trained to inject keywords into a template and perform rewriting into a coherent paragraph. The produced rewrites are considered as candidate summaries.
4. Summary selection: After producing candidate summaries, we need to pick the best ones. We argue that the best candidates are those that are coherent and also convey the same meaning as the original human summary. We thus use a hybrid metric to score candidates, where the metric is a weighted sum of two scores and is calculated using Equations 1, 2, and 3. Eq.1 measures coherency using a language model (LM), Eq.2 measures how close a candidate is to a human summary using ROUGE scores, while Eq.3 picks the highest scored $N$ candidates as the final synthetic set. A code sketch of this scoring step is given below, after the symbol definitions.
CS and HS are a candidate and human summary. $P(w)$ is the probability of word $w$ using a language model. $\alpha, \beta$ are weighting parameters. In this work we use $\alpha = \beta = 1$ for all experiments. $R_{i}(CS,HS)$ is the ROUGE-i score between CS and HS for $i = 1, 2$, and $l$.
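The following is a minimal sketch of that hybrid scoring, with the equations approximated in code. The tiny `rouge_n` here is an unsmoothed stand-in for a full ROUGE implementation, and `lm_logprob` is a caller-supplied language-model scorer; both names are assumptions, not the paper's implementation.
```python
# Sketch of the Eq.1-3 candidate scoring (approximate, not the paper's code).
def rouge_n(n, cand, ref):
    # Unsmoothed ROUGE-n recall over word n-grams (stand-in for real ROUGE).
    grams = lambda toks: [tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)]
    c, r = grams(cand.split()), grams(ref.split())
    return sum(g in c for g in r) / max(len(r), 1)

def hybrid_score(cs, hs, lm_logprob, alpha=1.0, beta=1.0):
    # Eq.1-style coherency: mean per-word LM log-probability of the candidate.
    coherency = lm_logprob(cs) / max(len(cs.split()), 1)
    # Eq.2-style similarity: averaged ROUGE against the human summary
    # (the paper uses ROUGE-1, ROUGE-2 and ROUGE-l).
    similarity = (rouge_n(1, cs, hs) + rouge_n(2, cs, hs)) / 2
    return alpha * coherency + beta * similarity

def select_top_n(candidates, hs, lm_logprob, n):
    # Eq.3-style selection: keep the n highest-scored candidates.
    return sorted(candidates, key=lambda c: hybrid_score(c, hs, lm_logprob),
                  reverse=True)[:n]
```
With $\alpha = \beta = 1$, as in the paper's experiments, the two terms are weighted equally.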
Model Training. Before using the synthesis model, some of the constituent modules (the rewriting module and the scoring LM) need training. To train the rewriting model, we use another dataset consisting of a set of samples, where each sample can be a text snippet (sentence, paragraph, etc.). For each sample, keywords are extracted using RAKE, then removed. The keywords plus the sample with no keywords are then passed to the rewriting model. The training objective of this model is to reconstruct the original sample, which can be seen as trying to inject extracted keywords back into a template.
Model Usage. To use the synthesis model to generate new samples, the set of human summaries is fed to the model, passing through the sub-modules in the following order:
1. Human summaries first pass through the template extraction module, converting each summary $s_i$ into template $t_i$ and the corresponding keywords $kw_i$.
2. Templates are then passed to the clustering module, producing a set of clusters. Each cluster $C$ contains a number of similar templates.
3. For each template $t_i$ and corresponding keywords $kw_i$ from step 1, find the cluster $C_i$ that contains the template $t_i$, then pass the set of templates within that cluster, $\lbrace t_j : t_j \in C_i \rbrace$, alongside the keywords $kw_i$ to the summary rewriting module. This will produce a set of candidate summaries.
4. The summary selection module scores and selects the highest $N$ candidates as the synthetic summaries.
The model consists of 4 modules: 1. Template extraction: To convert human summaries into templates, we remove keywords in the summary to leave only non-keywords. We use Rapid Automatic Keyword Extraction (RAKE) (Rose et al. 2010) to identify keywords. 2. Template clustering: Upon converting human summaries into templates, we cluster them into N clusters with the goal of using any template from the same cluster interchangeably. A template is first converted into embeddings using a pretrained BERT model5 (Devlin et al. 2018), where template embedding is constructed by average pooling word embeddings. Templates are then clustered using k-medoid. 3. Summary rewriting: An encoder-attention-decoder with pointer network is trained to perform the rewriting task. The model is trained to inject keywords into a template and perform rewriting into a coherent paragraph. The produced rewrites are considered as candidate summaries. 4. Summary selection: After producing candidate summaries, we need to pick the best ones.
|
What does the model consist of?
|
The model consists of 4 modules: 1. Template extraction. 2. Template clustering. 3. Summary rewriting. 4. Summary selection.
|
null | false
| null |
The Samsung E1200 (also known as Samsung Pusha and Samsung Keystone 2) is a mobile phone made by Samsung. This phone was released in 2012 through Tesco in the UK. It was discontinued in 2015 when the Samsung Guru FM E1202 was released as its successor.
|
If the year is 2020, is the Samsung E1200 still in production?
|
No, the phone was discontinued in 2015.
|
null | false
| null |
I am a reader in the US and would like to get started in collecting special edition books. Where should I start?
|
Special edition books are printed in limited quantities, are sometimes signed by the author, are numbered, have unique cover designs compared to the trade editions, and some have sprayed or stenciled edges. In the US, a popular publisher is Subterranean Press, which offers many titles, typically in the science fiction, fantasy, and horror genres. The UK has several options such as Illumicrate and Fairy Loot. These companies usually offer special editions with author signatures, unique covers, and decorative edges, primarily in the young adult fantasy genre. For the widest selection, Goldsboro Books is the best option. Its special editions are hand signed, numbered, and have sprayed or stenciled edges. While Goldsboro Books offers many science fiction and fantasy options, it also offers many other genres including literary fiction, historical fiction, mysteries, and thrillers. Please keep in mind that it will cost US readers more when ordering from UK companies due to the pound-dollar conversion and costlier international shipping fees.
|
|
null | false
| null |
What are some qualities that make a good computer programmer?
|
Being a computer programmer is hard work, so a programmer should be resilient, hard-working, and never afraid to fail. Being a programmer also requires constant change, so a good one will be open to learning new things, have natural curiosity, and be able to adapt to changing requirements quickly. Lastly, a good programmer should be a good team player and able to communicate effectively with other team members.
|
|
null | false
| null |
What's it like to live in the belly of a whale?
|
The giant whale that can swallow a human (the blue whale) doesn't have teeth, so the experience would be "painless." However, you would never understand how dark the darkness can be until you've been in the belly of a whale. Also, talk about being seasick: everything inside sloshes around and smells terrible as it's slowly being digested, which only adds insult to injury when it comes to feeling nauseous.
|
|
null | false
| null |
A battery storage power station is a type of energy storage power station that uses a group of batteries to store electrical energy. Battery storage is the fastest responding dispatchable source of power on electric grids, and it is used to stabilise those grids, as battery storage can transition from standby to full power in under a second to deal with grid contingencies.
At full rated power, battery storage power stations are generally designed to output for up to a few hours. Battery storage can be used for short-term peak power and ancillary services, such as providing operating reserve and frequency control to minimize the chance of power outages. They are often installed at, or close to, other active or disused power stations and may share the same grid connection to reduce costs. Since battery storage plants require no deliveries of fuel, are compact compared to generating stations and have no chimneys or large cooling systems, they can be rapidly installed and placed if necessary within urban areas, close to customer load.
|
How long does it typically take a battery storage power station to transition to full power from standby?
|
Under a second
|
null | false
| null |
A city can be defined by its administrative boundaries, otherwise known as city proper. UNICEF defines city proper as, "the population living within the administrative boundaries of a city or controlled directly from the city by a single authority." A city proper is a locality defined according to legal or political boundaries and an administratively recognised urban status that is usually characterised by some form of local government. Cities proper and their boundaries and population data may not include suburbs.
The use of city proper as defined by administrative boundaries may not include suburban areas where an important proportion of the population working or studying in the city lives. Because of this definition, the city proper population figure may differ greatly with the urban area population figure, as many cities are amalgamations of smaller municipalities (Australia), and conversely, many Chinese cities govern territories that extend well beyond the traditional "city proper" into suburban and rural areas. The Chinese municipality of Chongqing, which claims the largest city proper in the world by population, comprises a huge administrative area of 82,403 km2, around the size of Austria. However, more than 70% of its 30-million population are actually agricultural workers living in a rural setting.
|
What is the largest city in the world?
|
Chongqing, China is the largest city proper in the world by population. It has a huge administrative area of 82,403 km2, and it governs territories that cover both suburban and rural areas, extending well beyond the traditional city proper.
|
1909.11189
| false
| null |
The Digital Library in the TextGrid Repository represents an extensive collection of German texts in digital form BIBREF3. It was mined from http://zeno.org and covers a time period from the mid 16th century up to the first decades of the 20th century. It contains many important texts that can be considered as part of the literary canon, even though it is far from complete (e.g. it contains only half of Rilke’s work). We find that around 51k texts are annotated with the label ’verse’ (TGRID-V), not distinguishing between ’lyric verse’ and ’epic verse’. However, the average length of these texts is around 150 tokens, dismissing most epic verse tales. Also, the poems are distributed over 229 authors, where the average author contributed 240 poems (median 131 poems). A drawback of TGRID-V is the circumstance that it contains a noticeable amount of French, Dutch and Latin (over 400 texts). To constrain our dataset to German, we filter foreign language material with a stopword list, as training a dedicated language identification classifier is far beyond the scope of this work.
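A stopword-list filter of the kind described can be sketched in a few lines. The stopword sets below are tiny illustrative samples and the threshold is a guess, not the lists or settings actually used for the corpus:
```python
# Illustrative stopword sets; real lists would be much larger.
GERMAN_STOPS = {"der", "die", "das", "und", "nicht", "ist", "ein", "ich"}
FOREIGN_STOPS = {"le", "les", "une", "est",        # French
                 "het", "een", "niet", "zijn",     # Dutch
                 "atque", "cum", "quod", "sunt"}   # Latin

def looks_german(text, ratio=2.0):
    tokens = [w.lower().strip(".,;:!?") for w in text.split()]
    german = sum(t in GERMAN_STOPS for t in tokens)
    foreign = sum(t in FOREIGN_STOPS for t in tokens)
    # Keep a text only if German stopwords clearly dominate.
    return german > ratio * foreign
```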
The Digital Library in the TextGrid Repository represents an extensive collection of German texts in digital form BIBREF3. It was mined from http://zeno.org and covers a time period from the mid 16th century up to the first decades of the 20th century. It contains many important texts that can be considered as part of the literary canon, even though it is far from complete (e.g. it contains only half of Rilke’s work).
|
What is the corpus used in the study?
|
The answers are shown as follows:
* The Digital Library in the TextGrid Repository
|
null | false
| null |
Suwon (Korean: 수원; Hanja: 水原; lit. Waterfield, Korean pronunciation: [sʰu.wʌn]) is the capital and largest city of Gyeonggi-do, South Korea's most populous province which surrounds Seoul, the national capital. Suwon lies about 30 km (19 mi) south of Seoul. It is traditionally known as "The City of Filial Piety". With a population close to 1.3 million, it is larger than Ulsan, although it is not governed as a metropolitan city.
|
What is the population in Suwon?
|
The population in Suwon is close to 1.3 million.
|
null | false
| null |
What is the distance from the Earth to its moon?
|
The average distance to the moon is 382,500 kilometers. The distance varies because the moon travels around Earth in an elliptical orbit. At perigee, the point at which the moon is closest to Earth, the distance is approximately 360,000 kilometers.
|
|
null | false
| null |
Which of these titles are by Walter Dean Myers and which are by Rick Riordan Jr.? Hoops, The Legend of Tarik, Ambush in the Amazon, Crystal, The Life of a Harlem Man, The Lost Hero, The Titan's Curse, The Lightning Thief, The Red Pyramid, The Demigod Diaries
|
Walter Dean Myers titles: Hoops, The Legend of Tarik, Ambush in the Amazon, Crystal, The Life of a Harlem Man
Rick Riordan Jr. titles: The Lost Hero, The Titan's Curse, The Lightning Thief, The Red Pyramid, The Demigod Diaries
|
|
null | false
| 219
|
Human-computer dialog/conversation is one of the most challenging problems in artificial intelligence. Given a user-issued utterance (called a query in this paper), the computer needs to provide a reply to the query. In early years, researchers have developed various domain-oriented dialogue systems, which are typically based on rules or templates BIBREF4 , BIBREF5 , BIBREF6 . Recently, open-domain conversation systems have attracted more and more attention in both academia and industry (e.g., XiaoBing from Microsoft and DuMi from Baidu). Due to high diversity, we can hardly design rules or templates in the open domain. Researchers have proposed information retrieval methods BIBREF7 and modern generative neural networks BIBREF8 , BIBREF9 to either search for a reply from a large conversation corpus or generate a new sentence as the reply.
In open-domain conversations, context information (one or a few previous utterances) is particularly important to language understanding BIBREF1 , BIBREF9 , BIBREF10 , BIBREF11 . As dialogue sentences are usually casual and short, a single utterance (e.g., “Thank you.” in Figure FIGREF2 ) does not convey much meaning, but its previous utterance (“...writing an essay”) provides useful background information of the conversation. Using such context will certainly benefit the conversation system.
However, tracking all previous utterances as the context is unwise. First, commercial chat-bots usually place high demands on efficiency. In a retrieval-based system, for example, performing a standard process of candidate retrieval and re-ranking for each previous utterance may well exceed the time limit (which is very short, e.g., 500ms). Second, we observe that not all sentences in the current conversation session are equally important. The sentence “Want to take a walk?” is irrelevant to the current context, and should not be considered when the computer synthesizes the reply. Therefore, it raises the question of session segmentation in conversation systems.
Document segmentation for general-purpose corpora has been widely studied in NLP. For example, Hearst BIBREF12 proposes the TextTiling approach; she measures the similarity of neighboring sentences based on bag-of-words features, and performs segmentation by thresholding. However, such approaches are not tailored to the dialogue genre and may not be suitable for conversation session segmentation.
In this paper, we address the problem of session segmentation for open-domain conversations. We leverage the classic TextTiling approach, but enhance it with modern embedding-based similarity measures. Compared with traditional bag-of-words features, embeddings map discrete words to real-valued vectors, capturing underlying meanings in a continuous vector space; hence, it is more robust for noisy conversation corpora. Further, we propose a tailored method for word embedding learning. In traditional word embedding learning, the interaction between two words in a query and a reply is weaker than that within an utterance. We propose to combine a query and its corresponding reply as a “virtual sentence,” so that it provides a better way of modeling utterances between two agents.
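As a minimal sketch of the embedding-enhanced TextTiling idea (not the paper's implementation), assume `word_vecs` maps words to numpy vectors, e.g. trained on the "virtual sentences" that concatenate each query with its reply; the function names and threshold are illustrative:
```python
import numpy as np

def utterance_vec(utterance, word_vecs, dim=100):
    # Represent an utterance as the mean of its word embeddings.
    vecs = [word_vecs[w] for w in utterance.split() if w in word_vecs]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def segment_session(utterances, word_vecs, threshold=0.5):
    vs = [utterance_vec(u, word_vecs) for u in utterances]
    boundaries = []
    for i in range(1, len(vs)):
        denom = np.linalg.norm(vs[i - 1]) * np.linalg.norm(vs[i])
        sim = float(vs[i - 1] @ vs[i]) / denom if denom else 0.0
        if sim < threshold:  # low neighbour similarity => likely topic shift
            boundaries.append(i)
    return boundaries
```
As in TextTiling, segmentation reduces to thresholding a similarity curve over neighbouring sentences; the embedding average simply replaces the bag-of-words features.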
In early years, researchers have developed various domain-oriented dialogue systems, which are typically based on rules or templates [5, 6, 7].
|
What are dialogue systems based on in the early years?
|
Rules or templates.
|