input: string (length 14–315)
answer: string (length 9–2.16k)
gold_ctxs: list (length 1–15)
ctxs: list (length 11–186)
How could the algorithm infer amplitude and phase when shown only half of the input range?
As stated in the paper, the authors claim that the meta-learned model f can learn the periodicity of a sine wave without observing the entire input range, which allows it to infer the amplitude and phase of the unseen half [27].
[ 27 ]
[ { "id": "1703.03400_all_0", "text": " Learning quickly is a hallmark of human intelligence, whether it involves recognizing objects from a few examples or quickly learning new skills after just minutes of experience. Our artificial agents should be able to do the same, learning and adapting quickly from onl...
What is the maximum memory capacity of an FPGA?
FPGAs typically have under 10 MB of on-chip memory and no off-chip memory or storage; for example, the Xilinx Virtex-7 FPGA has a maximum of 8.5 MB of on-chip memory [0].
[ 0 ]
[ { "id": "1602.07360_all_0", "text": " Much of the recent research on deep convolutional neural networks (CNNs) has focused on increasing accuracy on computer vision datasets. For a given accuracy level, there typically exist multiple CNN architectures that achieve that accuracy level. Given equivalent accur...
What problem can occur when you train a generator only with an adversarial loss and a domain classification loss?
When the loss function does not contain a reconstruction loss, minimizing the adversarial and domain classification losses does not guarantee that translated images preserve the content of their input images while changing only the domain-related part of the inputs [14].
[ 14 ]
[ { "id": "1711.09020_all_0", "text": " The task of image-to-image translation is to change a particular aspect of a given image to another, e.g., changing the facial expression of a person from smiling to frowning (see Fig. StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Tra...
What makes GMPool and NGMPool novel compared to existing graph pooling methods?
GMPool and NGMPool overcome the limitation of existing pooling frameworks, which require a universal number of clusters as a user parameter, by first building a grouping matrix and then decomposing it into its square-root form [10].
[ 10 ]
[ { "id": "2209.02939_all_0", "text": " Graph Neural Networks (GNNs) learn representations of individual nodes based on the connectivity structure of an input graph. For graph-level prediction tasks, the standard procedure globally pools all the node features into a single graph representation without weight ...
Similar to some of the related work, do the authors also use CTC-training for their proposed model?
No. The authors use an attention-based recurrent sequence generator (ARSG), which is different from CTC [23]. While CTC-based models are close to the proposed ARSG model, they differ in characteristics such as how the alignment is treated and whether the model can produce non-monotonic alignments [38].
[ 23, 38 ]
[ { "id": "1506.07503_all_0", "text": " Recently, attention-based recurrent networks have been successfully applied to a wide variety of tasks, such as handwriting synthesis , machine translation , image caption generation  and visual object classification .111An early version of this work was presented at th...
Is it inspired by transformer network?
The authors extend the spatial self-attention in the T2I model from one image to multiple images to maintain content consistency across frames [2]. This is useful in spatiotemporal domains such as video generation [36].
[ 2, 36 ]
[ { "id": "2212.11565_all_0", "text": " The large-scale multimodal dataset , consisting of billions of text-image pairs crawled from the Internet, has enabled a breakthrough in Text-to-Image (T2I) generation (30, 35, 6, 42, 40). To replicate this success in Text-to-Video (T2V) generation, recent works (42, 15...
What are some common metrics used to evaluate the performance of face recognition systems?
Common metrics used to evaluate the accuracy of FR models are: ROC and accuracy (Acc) for face verification [6]; rank-N and the CMC curve for closed-set face identification; and the DET curve for open-set face identification [61]. Metrics for the complexity and size of FR models are also important [62]. Lastly, metrics that measure the age/gender/racial bias of FR models are becoming necessary [63].
[ 6, 61, 62, 63 ]
[ { "id": "1804.06655_all_0", "text": " Face recognition (FR) has been the prominent biometric technique for identity authentication and has been widely used in many areas, such as military, finance, public security and daily life. FR has been a long-standing research topic in the CVPR community. In the early...
How does the author show the mitigation of interference?
The authors show the mitigation of interference by using an interference ratio [31].
[ 31 ]
[ { "id": "2206.03715_all_0", "text": " The ability to understand natural language through commonsense reasoning is one of the core focuses in the field of natural language processing. To measure and study the different aspects of commonsense reasoning, several datasets are developed, such as SocialIQA (Sap e...
What is the definition of 'episodes'?
Matching networks utilize sampled mini-batches called episodes during training, where each episode is designed to mimic the few-shot task by subsampling classes as well as data points [1]. The use of episodes makes the training problem more faithful to the test environment and thereby improves generalization [5].
[ 1, 5 ]
[ { "id": "1703.05175_all_0", "text": " Few-shot classification (20, 16, 13) is a task in which a classifier must be adapted to accommodate new classes not seen in training, given only a few examples of each of these classes. A naive approach, such as re-training the model on the new data, would severely over...
How is normal cell different from reduction cell for NASNets?
Two separate architectures are learned for the Normal and Reduction cells [10]. During prediction, the first 5B predictions are for the Normal cell and the second 5B predictions are for the Reduction cell [11]. For the Reduction cell, the authors give the initial operation applied to the cell's inputs a stride of two to reduce the height and width, which is not done for the Normal cell [12].
[ 10, 11, 12 ]
[ { "id": "1707.07012_all_0", "text": " Developing neural network image classification models often requires significant architecture engineering. Starting from the seminal work of  on using convolutional architectures (17, 34) for ImageNet  classification, successive advancements through architecture enginee...
How did they edit the prompt based on previous examples? Was it by editing the original text of the prompt or simply by concatenating the examples?
They seem to include memory in prompts by adding the natural language feedback (fb) that users provide on a prompt (x) and its response (u), including the tuple (x, u, fb) in a structured format [12]. They appear to merely concatenate multiple such tuples and add them to the prompt, but the exact format of the prompt itself is not fully explained in the paper [18].
[ 12, 18 ]
[ { "id": "2201.06009_all_0", "text": " Language models are now better than ever before at generating realistic content, but still lack commonsense Bender and Koller (2020); Marcus (2021). One failure mode due to a lack of commonsense is in misunderstanding a user’s intent. The typical remedy of retraining wi...
What are the weaknesses of conventional phrase-based translation systems compared to neural machine translation?
The weaknesses of conventional phrase-based translation systems compared to Neural Machine Translation are their brittle design choices, especially in large-scale, production-quality settings, and their inability to learn directly in an end-to-end fashion from data [0].
[ 0 ]
[ { "id": "1609.08144_all_0", "text": " Neural Machine Translation (NMT) (41, 2) has recently been introduced as a promising approach with the potential of addressing many shortcomings of traditional machine translation systems. The strength of NMT lies in its ability to learn directly, in an end-to-end fashi...
Does it really assist bridge long-term temporal dependencies early in learning, as the authors say, to bias the gates in LSTM networks initially?
Contrary to the authors' expectations, most of the biases were reported to decrease during training [31].
[ 31 ]
[ { "id": "1507.06228_all_0", "text": " Many recent empirical breakthroughs in supervised machine learning have been achieved through large and deep neural networks. Network depth (the number of successive computational layers) has played perhaps the most important role in these successes. For instance, withi...
Why did the authors focus on the verb? Is there any reason?
The T2V generator is expected to capture necessary motion knowledge from the input video and synthesize novel videos guided by edited prompts [1]. They use the pre-trained T2I model which is able to generate images that align well with the text, including the verb terms [11].
[ 1, 11 ]
[ { "id": "2212.11565_all_0", "text": " The large-scale multimodal dataset , consisting of billions of text-image pairs crawled from the Internet, has enabled a breakthrough in Text-to-Image (T2I) generation (30, 35, 6, 42, 40). To replicate this success in Text-to-Video (T2V) generation, recent works (42, 15...
What was the goal behind training a multi-scale model?
A single-scale model can only be used for one scale, so a new model must be trained for each new scale [15]. This would take too long, so a multi-scale model that uses the same parameters regardless of scale reduces the required model capacity and makes training more efficient [16].
[ 15, 16 ]
[ { "id": "1511.04587_all_0", "text": " We address the problem of generating a high-resolution (HR) image given a low-resolution (LR) image, commonly referred as single image super-resolution (SISR) , , . SISR is widely used in computer vision applications ranging from security and surveillance imaging to med...
What does "meta" mean in the term graph meta network (GMN)?
The Graph Meta Network (GMN) refines knowledge in a meta-graph [33]. A meta-graph is a graph constructed by building multi-hop paths between the entities in a query and a passage using knowledge from a global graph [7]. The meaning of "meta" in both graph meta network (GMN) and meta-graph is not explicitly defined in this paper [24].
[ 33, 7, 24 ]
[ { "id": "2204.11673_all_0", "text": " Passage Re-ranking is a crucial stage in modern information retrieval systems, which aims to reorder a small set of candidate passages to be presented to users. To put the most relevant passages on top of a ranking list, a re-ranker is usually designed with powerful cap...
Does complex bypass connections add extra parameters to the model?
Yes, complex bypass connections add extra parameters to the model as we add 1x1 convolution layer with the number of filters set equal to the number of output channels [38].
[ 38 ]
[ { "id": "1602.07360_all_0", "text": " Much of the recent research on deep convolutional neural networks (CNNs) has focused on increasing accuracy on computer vision datasets. For a given accuracy level, there typically exist multiple CNN architectures that achieve that accuracy level. Given equivalent accur...
How are pointwise group convolutions different from 1x1 convolutions?
The group convolution divides the channels into groups and applies the convolution only within the groups, thus reducing the computational complexity of 1x1 convolutions [1]. However, when several group convolutions are stacked together it may block the information flow and weaken the representation [21].
[ 1, 21 ]
[ { "id": "1707.01083_all_0", "text": " Building deeper and larger convolutional neural networks (CNNs) is a primary trend for solving major visual recognition tasks (21, 9, 33, 5, 28, 24). The most accurate CNNs usually have hundreds of layers and thousands of channels (9, 34, 32, 40), thus requiring computa...
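The cost saving described above can be sketched with a minimal parameter count (the channel and group counts below are illustrative, not values from the paper):

```python
# Parameter count of a 1x1 (pointwise) convolution: with g groups, each output
# channel connects to only in_ch / g input channels instead of all of them.

def pointwise_conv_params(in_ch, out_ch, groups=1):
    assert in_ch % groups == 0 and out_ch % groups == 0
    return (in_ch // groups) * out_ch  # kernel is 1x1, so no spatial factor

standard = pointwise_conv_params(256, 256, groups=1)  # dense 1x1 convolution
grouped = pointwise_conv_params(256, 256, groups=8)   # pointwise group convolution
print(standard, grouped)  # 65536 8192 -- cost drops by the group count g
```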
What is the role of meta-data in the proposed method?
In the proposed method, meta-data serves as a signal to guide the update of the model's parameters in a way that improves the primary task [7].
[ 7 ]
[ { "id": "2007.08294_all_0", "text": " Graph neural networks  have been proven effective to learn representations for various tasks such as node classification , link prediction , and graph classification . The powerful representation yields state-of-the-art performance in a variety of applications including...
What is the definition of 'task'?
A task is what the agent wants to learn [14]. It can be a supervised learning or reinforcement learning problem, and is represented by an initial state distribution, a loss function, a transition distribution, and an episode length H [6].
[ 14, 6 ]
[ { "id": "1703.03400_all_0", "text": " Learning quickly is a hallmark of human intelligence, whether it involves recognizing objects from a few examples or quickly learning new skills after just minutes of experience. Our artificial agents should be able to do the same, learning and adapting quickly from onl...
What does "2.5D data" mean?
2.5D data is 2D information (the image plane) plus information about the relative depths of points, i.e., whether the points are behind or in front of the visible surface [9].
[ 9 ]
[ { "id": "1604.03265_all_0", "text": " Understanding 3D environments is a vital element of modern computer vision research due to paramount relevance in many vision systems, spanning a wide field of application scenarios from self-driving cars to autonomous robots. Recent advancements in real-time SLAM techn...
How does a role-shift captioning model contribute to generating captions?
By using an RNN-based role-shift captioning model consisting of two LSTM layers [24]. The model generates each word y_t by taking two inputs: (1) a semantic structure sequence and (2) the corresponding proposal feature sequence [4].
[ 24, 4 ]
[ { "id": "2103.12204_all_0", "text": " Image captioning, \\ie, generating fluent and meaningful descriptions to summarize the salient contents of an image, is a classic proxy task for comprehensive scene understanding . With the release of several large scale datasets and advanced encoder-decoder frameworks,...
Why is KG Modularization needed?
KG modularization is crucial for maintaining the intrinsic knowledge of each individual KG [11].
[ 11 ]
[ { "id": "2206.03715_all_0", "text": " The ability to understand natural language through commonsense reasoning is one of the core focuses in the field of natural language processing. To measure and study the different aspects of commonsense reasoning, several datasets are developed, such as SocialIQA (Sap e...
Alignment functions refer to 4 calculation methods (dot, general, concat, location) for obtaining an alignment vector. Is that true?
Yes, alignment functions refer to 4 distinct functions which are "Location, dot, general and concat" [38].
[ 38 ]
[ { "id": "1508.04025_all_0", "text": " Neural Machine Translation (NMT) achieved state-of-the-art performances in large-scale translation tasks such as from English to French (Luong et al., 2015) and English to German (Jean et al., 2015). NMT is appealing since it requires minimal domain knowledge and is con...
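For reference, the four score functions can be sketched as follows (a hedged numpy sketch: all weight matrices are random illustrative parameters, and shapes follow Luong-style attention rather than the paper's exact notation):

```python
import numpy as np

rng = np.random.default_rng(0)
d, S = 4, 3                      # hidden size, number of source positions
h_t = rng.normal(size=d)         # current target (decoder) hidden state
h_s = rng.normal(size=d)         # one source (encoder) hidden state
Wa = rng.normal(size=(d, d))     # "general" weight matrix
Wc = rng.normal(size=(d, 2 * d)) # "concat" weight matrix
v = rng.normal(size=d)           # "concat" projection vector
Wl = rng.normal(size=(S, d))     # "location": one score per source position

score_dot = h_t @ h_s                                     # dot
score_general = h_t @ Wa @ h_s                            # general
score_concat = v @ np.tanh(Wc @ np.concatenate([h_t, h_s]))  # concat
scores_location = Wl @ h_t       # location-based: depends on h_t alone
```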
What is the instance of 'paired examples'?
Paired training data consists of training examples {x_i, y_i}_{i=1}^N, where the correspondence between x_i and y_i exists [4]. An instance of 'paired examples' is labels↔photos from the CMP Facade Database [42].
[ 4, 42 ]
[ { "id": "1703.10593_all_0", "text": " What did Claude Monet see as he placed his easel by the bank of the Seine near Argenteuil on a lovely spring day in 1873 (Figure 1, top-left)? A color photograph, had it been invented, may have documented a crisp blue sky and a glassy river reflecting it. Monet conveyed...
What does an "affine classifier" mean?
The affine classifier is a classifier in the form of an affine function [10]. The general form used in the paper is the function f: R^n -> R^m, where f(x) = W^T * x + B, for a given matrix W and vector B [5].
[ 10, 5 ]
[ { "id": "1511.04599_all_0", "text": " Deep neural networks are powerful learning models that achieve state-of-the-art pattern recognition performance in many research areas such as bioinformatics (1, 16), speech (12, 6), and computer vision (10, 8). Though deep networks have exhibited very good performance ...
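A minimal sketch of such an affine classifier (the weights W, b and input x below are made-up illustrative values, not from the paper):

```python
import numpy as np

def affine_classifier(x, W, b):
    """f(x) = W^T x + b; the predicted class is the argmax of the scores."""
    return W.T @ x + b

W = np.array([[1.0, -1.0], [0.0, 2.0]])  # n=2 inputs, m=2 classes
b = np.array([0.5, -0.5])
x = np.array([1.0, 1.0])
scores = affine_classifier(x, W, b)
print(scores)          # [1.5 0.5]
print(scores.argmax()) # predicted class: 0
```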
What are the pretraining datasets used in analyses?
English Wikipedia, BookCorpus, Stories, OpenWebText, CC-NEWS, and C4En datasets were used in pretraining [6].
[ 6 ]
[ { "id": "2204.08110_all_0", "text": " Pretrained language models have become an integral part of NLP systems. They come in two flavors: monolingual, where the model is trained on text from a single language, and multilingual, where the model is jointly trained on data from many different languages. Monoling...
What are the metrics used to evaluate the trade-off between the quality and diversity of generated captions?
The authors used BLEU, METEOR, ROUGE, CIDEr, and SPICE to evaluate the quality of generated captions, and used accuracy-based and diversity-based metrics to evaluate the diversity of generated captions [39].
[ 39 ]
[ { "id": "2103.12204_all_0", "text": " Image captioning, \\ie, generating fluent and meaningful descriptions to summarize the salient contents of an image, is a classic proxy task for comprehensive scene understanding . With the release of several large scale datasets and advanced encoder-decoder frameworks,...
Are English pretrained language models good at transferring to other languages?
No, they are not, relative to models trained on corpora with non-English text [1].
[ 1 ]
[ { "id": "2204.08110_all_0", "text": " Pretrained language models have become an integral part of NLP systems. They come in two flavors: monolingual, where the model is trained on text from a single language, and multilingual, where the model is jointly trained on data from many different languages. Monoling...
Define how a linear convolution layer functions.
A linear convolution layer projects the filtered high-dimensional representation to a low-dimensional subspace [2].
[ 2 ]
[ { "id": "1801.04381_all_0", "text": " Neural networks have revolutionized many areas of machine intelligence, enabling superhuman accuracy for challenging image recognition tasks. However, the drive to improve accuracy often comes at a cost: modern state of the art networks require high computational resour...
What exactly is "smoothing" and how does it help count-based LMs account for unseen sequences?
The paper does not discuss what smoothing is or how it helps some LMs account for unseen sequences [10].
[ 10 ]
[ { "id": "1602.02410_all_0", "text": " Language Modeling (LM) is a task central to Natural Language Processing (NLP) and Language Understanding. Models which can accurately place distributions over sentences not only encode complexities of language such as grammatical structure, but also distill a fair amoun...
The authors explain the challenges AI models face when dealing with longer time sequences. Would human performance on this task also decline when analyzing longer sequences?
While the authors explain the issue of dealing with longer time sequences, this question cannot be answered in this paper since there is no evidential information about human performance [4].
[ 4 ]
[ { "id": "1506.07503_all_0", "text": " Recently, attention-based recurrent networks have been successfully applied to a wide variety of tasks, such as handwriting synthesis , machine translation , image caption generation  and visual object classification .111An early version of this work was presented at th...
How do the authors recognize that reducing the classifier-free guidance parameter improves reconstruction but constrains the ability to perform significant manipulation?
The authors base this observation on the referenced work [18], presented at the workshop "Generative models and downstream applications, 2021" [35].
[ 35 ]
[ { "id": "2208.01626_all_0", "text": " Recently, large-scale language-image (LLI) models, such as Imagen , DALL·E 2  and Parti , have shown phenomenal generative semantic and compositional power, and gained unprecedented attention from the research community and the public eye. These LLI models are trained o...
YoloV3 is most suited for small, medium or large size objects?
YOLOv3 performs comparatively better on small objects, but now struggles more with medium and larger size objects, i.e., performs worse on them than before [21].
[ 21 ]
[ { "id": "1804.02767_all_0", "text": " Sometimes you just kinda phone it in for a year, you know? I didn’t do a whole lot of research this year. Spent a lot of time on Twitter. Played around with GANs a little. I had a little momentum left over from last year ; I managed to make some improvements to YOLO. B...
What are the results we can get after going through the proposed Transformer-based model?
SAT predicts class of nodes and graphs better than other SOTA models [36]. Also, SAT is more explainable compared to other transformer-based models [43].
[ 36, 43 ]
[ { "id": "2202.03036_all_0", "text": " Graph neural networks (GNNs) have been established as powerful and flexible tools for graph representation learning, with successful applications in drug discovery (Gaudelet et al., 2021), protein design (Ingraham et al., 2019), social network analysis (Fan et al., 2019...
The brain tumour segmentation data consists of 274 cases in total, is this dataset large enough to not consider adding regularisation technics ?
The authors mention that they reduced the amount of regularization techniques as they consider the BRATS database to be large [56].
[ 56 ]
[ { "id": "1603.05959_all_0", "text": " Segmentation and the subsequent quantitative assessment of lesions in medical images provide valuable information for the analysis of neuropathologies and are important for planning of treatment strategies, monitoring of disease progression and prediction of patient out...
How can the authors claim that language models perform better if they use prior knowledge as well as context?
People share a lot of prior knowledge when they talk to each other [1].
[ 1 ]
[ { "id": "1904.09223_all_0", "text": " Language representation pre-training Mikolov et al. (2013); Devlin et al. (2018) has been shown effective for improving many natural language processing tasks such as named entity recognition, sentiment analysis, and question answering. In order to get reliable word rep...
The authors claims that the probing mechanism combined with multi-orientation pooling can capture any 3D structure, is this true ?
It is true that the probing mechanism has a relationship with the Radon transform, which is an integral transform whose inverse is used to reconstruct images from medical CT scans and other complex 3D structures such as a map of a planet's polar regions (https://mathworld.wolfram.com/RadonTransform.html) [20]. Additionally, orientation pooling aggregates information from different orientations, thus carrying only partial information about the object, which makes it robust to different objects and avoids overfitting to the objects in the training dataset [28].
[ 20, 28 ]
[ { "id": "1604.03265_all_0", "text": " Understanding 3D environments is a vital element of modern computer vision research due to paramount relevance in many vision systems, spanning a wide field of application scenarios from self-driving cars to autonomous robots. Recent advancements in real-time SLAM techn...
Who is responsible for designating the control signal?
The control signal is designated by each author in their own work [0]. The authors of this paper propose a "verb-specific semantic role" (VSR) as the control signal for customized captions [13], while a recent surge of efforts by other works introduced extra control signals as constraints on the generated captions [16, 10, 19, 78, 48, 77, 27, 20] [47].
[ 0, 13, 47 ]
[ { "id": "2103.12204_all_0", "text": " Image captioning, \\ie, generating fluent and meaningful descriptions to summarize the salient contents of an image, is a classic proxy task for comprehensive scene understanding . With the release of several large scale datasets and advanced encoder-decoder frameworks,...
How can we say the Wikia dataset is a zero-shot dataset?
In the Wikia dataset, entities in the validation and test sets come from different domains than the train set [28]. This allows model evaluation in a zero-shot setting: the sets of entities in training and test are disjoint, so the model has never seen the entities it must link at test time [8].
[ 28, 8 ]
[ { "id": "1911.03814_all_0", "text": " Scale is a key challenge for entity linking; there are millions of possible entities to consider for each mention. To efficiently filter or rank the candidates, existing methods use different sources of external information, including manually curated mention tables Gan...
How does the knowledge distillation work if the meta-graph can't be constructed (i.e., there are no corresponding entities in the knowledge graph for the query/passage)?
Entities that exactly match entities in E are selected from q and s* to construct the meta-graph [26].
[ 26 ]
[ { "id": "2204.11673_all_0", "text": " Passage Re-ranking is a crucial stage in modern information retrieval systems, which aims to reorder a small set of candidate passages to be presented to users. To put the most relevant passages on top of a ranking list, a re-ranker is usually designed with powerful cap...
What is the Difference Between Additive and Multiplicative Attention?
Dot-product attention is calculated using highly optimized matrix multiplication operations, whereas additive attention computes the compatibility function using a feed-forward network with a single hidden layer [16]. Multiplicative attention is much faster and more space-efficient than additive attention [17].
[ 16, 17 ]
[ { "id": "1706.03762_all_0", "text": " Recurrent neural networks, long short-term memory and gated recurrent neural networks in particular, have been firmly established as state of the art approaches in sequence modeling and transduction problems such as language modeling and machine translation (35, 2, 5)...
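The contrast above can be sketched in numpy: multiplicative scores against all keys come from one matrix product, while additive attention applies a single-hidden-layer feed-forward network per query-key pair (W1, W2, v are illustrative random parameters, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
d, n_keys = 4, 5
q = rng.normal(size=d)               # one query
K = rng.normal(size=(n_keys, d))     # all keys stacked as a matrix
W1, W2 = rng.normal(size=(d, d)), rng.normal(size=(d, d))
v = rng.normal(size=d)

mult_scores = K @ q                  # all scores in a single matmul
add_scores = np.array([v @ np.tanh(W1 @ q + W2 @ k) for k in K])

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

print(softmax(mult_scores).sum())    # attention weights sum to 1
```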
Is the computational complexity per data point of wake sleep algorithm same as AEVB or is it different?
Wake-Sleep has the same computational complexity as AEVB per datapoint [20].
[ 20 ]
[ { "id": "1312.6114_all_0", "text": " How can we perform efficient approximate inference and learning with directed probabilistic models whose continuous latent variables and/or parameters have intractable posterior distributions? The variational Bayesian (VB) approach involves the optimization of an approxi...
The results section concludes that momentum affected neither the training compute cost nor the performance - why was this a surprising or unexpected result?
It is not clear why the result was unexpected, since there is no evidential information about what the authors expected when choosing the hyperparameters whose importance they assessed [54].
[ 54 ]
[ { "id": "1503.04069_all_0", "text": " Recurrent neural networks with Long Short-Term Memory (which we will concisely refer to as LSTMs) have emerged as an effective and scalable model for several learning problems related to sequential data. Earlier methods for attacking these problems have either been tail...
What are the advantages of using a flat architecture in SegNet?
The flat architecture avoids parameter explosion, unlike an expanding deep encoder network with full feature connectivity (the same holds for the decoder), and the training time remains almost the same for each additional/deeper encoder-decoder pair [7].
[ 7 ]
[ { "id": "1505.07293_all_0", "text": " Semantic segmentation is an important step towards understanding and inferring different objects and their arrangements observed in a scene. This has wide array of applications ranging from estimating scene geometry, inferring support-relationships among objects to auto...
BLINK have two different versions, bi-encoding version and cross-encoding version. Is this true?
The BLINK model is a two-stage method using two encoders: a bi-encoder and a cross-encoder [37]. In the qualitative analysis, the authors compared BLINK with its bi-encoding version, which uses a bi-encoder for candidate ranking instead of a cross-encoder, and showed that the cross-encoding version utilizes context information better than the bi-encoding version [47]. Therefore we can say that BLINK has two different versions [9].
[ 37, 47, 9 ]
[ { "id": "1911.03814_all_0", "text": " Scale is a key challenge for entity linking; there are millions of possible entities to consider for each mention. To efficiently filter or rank the candidates, existing methods use different sources of external information, including manually curated mention tables Gan...
Is there any different way to construct RE model instead of using PLM strategy previously?
Yes, there are several prior works that construct RE models without the PLM strategy [1].
[ 1 ]
[ { "id": "2102.01373_all_0", "text": " As one of the fundamental information extraction (IE) tasks, relation extraction (RE) aims at identifying the relationship(s) between two entities in a given piece of text from a pre-defined set of relationships of interest. For example, given the sentence “Bill Gates f...
Why did the author choose to use the standard COCO metrics for the comparison of Mask R-CNN to the state of the art on the COCO dataset ?
The standard COCO metrics were used for the comparison of Mask R-CNN to the state of the art on the COCO dataset, but the reason is not explicitly discussed in the paper [34].
[ 34 ]
[ { "id": "1703.06870_all_0", "text": " The vision community has rapidly improved object detection and semantic segmentation results over a short period of time. In large part, these advances have been driven by powerful baseline systems, such as the Fast/Faster R-CNN (12, 36) and Fully Convolutional Network ...
How many images does the ILSVRC dataset have?
The ILSVRC dataset has 1.2 million training images, 50 thousand validation images, and 100 thousand test images [45].
[ 45 ]
[ { "id": "1409.0575_all_0", "text": " The ImageNet Large Scale Visual Recognition Challenge (ILSVRC) has been running annually for five years (since 2010) and has become the standard benchmark for large-scale object recognition.111In this paper, we will be using the term object recognition broadly to encompa...
What's the difference between pix2pix and CycleGAN?
CycleGAN builds on the "pix2pix" framework, which uses a conditional generative adversarial network to learn a mapping from input to output images [38]. However, unlike pix2pix, CycleGAN learns the mapping without paired training examples [9].
[ 38, 9 ]
[ { "id": "1703.10593_all_0", "text": " What did Claude Monet see as he placed his easel by the bank of the Seine near Argenteuil on a lovely spring day in 1873 (Figure 1, top-left)? A color photograph, had it been invented, may have documented a crisp blue sky and a glassy river reflecting it. Monet conveyed...
What is a major drawback of deep learning approaches adapting networks designed for object categorization to pixel wise labeling?
Due to the use of non-overlapping max-pooling and subsampling layers, the resulting feature map is reduced in resolution compared to the input dimensions [0].
[ 0 ]
[ { "id": "1505.07293_all_0", "text": " Semantic segmentation is an important step towards understanding and inferring different objects and their arrangements observed in a scene. This has wide array of applications ranging from estimating scene geometry, inferring support-relationships among objects to auto...
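The resolution loss from non-overlapping pooling can be seen in a minimal numpy sketch (the 4x4 input is illustrative, not from the paper):

```python
import numpy as np

def max_pool_2x2(x):
    """Non-overlapping 2x2 max pooling: halves each spatial dimension."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

x = np.arange(16.0).reshape(4, 4)
y = max_pool_2x2(x)
print(y.shape)  # (2, 2): resolution reduced relative to the 4x4 input
```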
What do the equations for Q-value and value represent?
Q and V are mathematically expressed as:

V_{t}^{\pi}=\mathbb{E}_{a_{t}\sim\pi}\Big[\sum_{\tau=t}^{T}\gamma^{\tau-t}R(\bm{s}_{\tau},a_{\tau},\bm{y})\Big], \quad Q_{t}^{\pi}(\bm{s}_{t},a_{t})=R(\bm{s}_{t},a_{t},\bm{y})+\gamma\,\mathbb{E}_{s_{t+1}\sim P}\big[V_{t+1}^{\pi}(\bm{s}_{t+1})\big]

where R is the reward function, s denotes states, and a denotes actions [9].
[ 9 ]
[ { "id": "2210.01241_all_0", "text": " The ultimate aim of language technology is to interact with humans. However, most language models are trained without direct signals of human preference, with supervised target strings serving as (a sometimes crude) proxy. One option to incorporate user feedback is via ...
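The backward recursion these two definitions encode can be sketched directly. This is a toy example with a deterministic policy and dynamics (so the expectations collapse to single terms); the rewards and γ are illustrative, not from the paper:

```python
# Toy backward recursion: Q_t = R_t + gamma * V_{t+1}; with a deterministic
# policy, V_t = Q_t, so V_0 equals the discounted return from t = 0.
gamma = 0.9
rewards = [1.0, 0.0, 2.0]  # illustrative R(s_t, a_t, y) for t = 0, 1, 2
T = len(rewards)

V = [0.0] * (T + 1)  # value beyond the horizon is 0
Q = [0.0] * T
for t in reversed(range(T)):
    Q[t] = rewards[t] + gamma * V[t + 1]
    V[t] = Q[t]  # deterministic policy: expectation over actions is one term

# V[0] == 1.0 + 0.9 * 0.0 + 0.9**2 * 2.0 == 2.62
```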
What weaknesses would a dataset that without entity replacement or anonymization have when training a reading comprehension model? Why is this a necessary step in the process?
Since the authors are attempting to build a reading comprehension model, not anonymizing the entities before using the dataset might lead to a situation where models use external information, or statistics on the distribution/frequency of words themselves to guess answers [8]. These steps are needed to ensure that models use the context to answer the questions [9].
[ 8, 9 ]
[ { "id": "1506.03340_all_0", "text": " Progress on the path from shallow bag-of-words information retrieval algorithms to machines capable of reading and understanding documents has been slow. Traditional approaches to machine reading and comprehension have been based on either hand engineered grammars , or ...
What is the difference between the BERT paper's and the RoBERTa paper's points of view? Answer from the perspective of the NSP loss and performance.
In the BERT paper, the authors state that removing NSP can hurt the model's performance [34]. However, in the RoBERTa paper, the authors state that removing NSP improves downstream task performance [37].
[ 34, 37 ]
[ { "id": "1907.11692_all_0", "text": " Self-training methods such as ELMo Peters et al. (2018), GPT Radford et al. (2018), BERT Devlin et al. (2019), XLM Lample and Conneau (2019), and XLNet Yang et al. (2019) have brought significant performance gains, but it can be challenging to determine which aspects of...
What does active learning mean?
Active learning is a training strategy that, like semi-supervised learning, uses both labeled and unlabeled data during training [4].
[ 4 ]
[ { "id": "1711.04043_all_0", "text": " Supervised end-to-end learning has been extremely successful in computer vision, speech, or machine translation tasks, thanks to improvements in optimization technology, larger datasets and streamlined designs of deep convolutional or recurrent architectures. Despite th...
Which framework achieved state-of-the-art COCO object detection results with NASNets?
The Faster-RCNN framework, combined with the image features learned by NASNets on ImageNet, achieved state-of-the-art COCO object detection results [4].
[ 4 ]
[ { "id": "1707.07012_all_0", "text": " Developing neural network image classification models often requires significant architecture engineering. Starting from the seminal work of  on using convolutional architectures (17, 34) for ImageNet  classification, successive advancements through architecture enginee...
How do the authors verify that searching in a small supernet and then scaling is a good tactic for searching a big network?
Searching in a small supernet and then scaling was better than directly searching in a big supernet; the accuracy was similar, but the direct search required a much higher search cost [75].
[ 75 ]
[ { "id": "2009.02009_all_0", "text": " As there are growing needs of deep learning applications based on convolutional neural network(CNN) in embedded systems, improving the accuracy of CNNs under a given set of constraints on latency and energy consumption has brought keen interest to researchers as a chall...
Does the DarkNet-53 backbone of YOLOv3 use any skip connections?
Yes. The "residuals" in DarkNet-53 are skip connections, which means that DarkNet-53 uses skip connections [14].
[ 14 ]
[ { "id": "1804.02767_all_0", "text": " Sometimes you just kinda phone it in for a year, you know? I didn’t do a whole lot of research this year. Spent a lot of time on Twitter. Played around with GANs a little. I had a little momentum left over from last year ; I managed to make some improvements to YOLO. B...
What are the consequences of stacking a really big number of attention modules on the performance of attention modules?
With naive stacking, too many attention modules cause a drastic performance drop as the mask values converge to 0 [21]. However, the model in the paper uses its own stacking method, which avoids the downfall of naive stacking [3]. The only consequence of the paper's stacking method is that the model requires more parameters and FLOPs [32].
[ 21, 3, 32 ]
[ { "id": "1704.06904_all_0", "text": " Not only a friendly face but also red color will draw our attention. The mixed nature of attention has been studied extensively in the previous literatures (34, 16, 23, 40). Attention not only serves to select a focused location but also enhances different representatio...
Is Faster R-CNN +++ another different architecture or just some refinement for the known models?
Faster R-CNN+++ is a refined model of Faster R-CNN [34].
[ 34 ]
[ { "id": "1605.06409_all_0", "text": " A prevalent family (8, 6, 18) of deep networks for object detection can be divided into two subnetworks by the Region-of-Interest (RoI) pooling layer : (i) a shared, “fully convolutional” subnetwork independent of RoIs, and (ii) an RoI-wise subnetwork that does not shar...
All the models proposed in this paper are sequence-to-sequence models. True or False?
The authors seem to be using LSTM models for performing their analysis and experiments [17]. However, the term “sequence-to-sequence” models is not defined in this paper, so answering True or False for this question is not possible based on the contents of this paper alone [31].
[ 17, 31 ]
[ { "id": "1506.03340_all_0", "text": " Progress on the path from shallow bag-of-words information retrieval algorithms to machines capable of reading and understanding documents has been slow. Traditional approaches to machine reading and comprehension have been based on either hand engineered grammars , or ...
Can’t it be generated by video interpolation? I thought we could do this by giving two images and running interpolation.
Figure 4 (c) compares against the task of interpolation between two images [11]. A frame interpolation network generates high-frame-rate video, which can be interpreted as interpolating between two images [32].
[ 11, 32 ]
[ { "id": "2209.14792_all_0", "text": " The Internet has fueled collecting billions of (alt-text, image) pairs from HTML pages (Schuhmann et al., 2022), enabling the recent breakthroughs in Text-to-Image (T2I) modeling. However, replicating this success for videos is limited since a similarly sized (text, vid...
What does "synset" mean?
Synsets are nodes of WordNet's structured directed graph that represent related concepts; for example, "canine" and "domestic animal" can both represent a dog [52]. In the WordNet graph, many synsets have only one path through the graph [53].
[ 52, 53 ]
[ { "id": "1612.08242_all_0", "text": " General purpose object detection should be fast, accurate, and able to recognize a wide variety of objects. Since the introduction of neural networks, detection frameworks have become increasingly fast and accurate. However, most detection methods are still constrained ...
What is the definition of 'translate'?
To "translate" an image means to convert an image from one representation of a given scene to another, e.g., grayscale to color, or image to semantic labels [4].
[ 4 ]
[ { "id": "1703.10593_all_0", "text": " What did Claude Monet see as he placed his easel by the bank of the Seine near Argenteuil on a lovely spring day in 1873 (Figure 1, top-left)? A color photograph, had it been invented, may have documented a crisp blue sky and a glassy river reflecting it. Monet conveyed...
How many benchmark dataset are used to evaluate the proposed model?
Three benchmark datasets are used [18].
[ 18 ]
[ { "id": "1710.10903_all_0", "text": " Convolutional Neural Networks (CNNs) have been successfully applied to tackle problems such as image classification (He et al., 2016), semantic segmentation (Jégou et al., 2017) or machine translation (Gehring et al., 2016), where the underlying data representation has ...
What does the confusion matrix Q in the authors' noisy-label robustness experiment refer to?
The confusion matrix Q shows how many images were correctly labeled and how many images were purposely labeled incorrectly for the noise experiment [35].
[ 35 ]
[ { "id": "1704.06904_all_0", "text": " Not only a friendly face but also red color will draw our attention. The mixed nature of attention has been studied extensively in the previous literatures (34, 16, 23, 40). Attention not only serves to select a focused location but also enhances different representatio...
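A minimal sketch of such a confusion matrix (hypothetical labels, not the paper's data): rows index the true class, columns the possibly corrupted training label, so the diagonal counts correctly labeled images and the off-diagonal entries count the purposely mislabeled ones.

```python
import numpy as np

# Hypothetical labels for illustration; two of the nine are flipped on purpose.
true_labels = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2])
noisy_labels = np.array([0, 0, 1, 1, 1, 1, 2, 0, 2])

n_classes = 3
Q = np.zeros((n_classes, n_classes), dtype=int)
for t, n in zip(true_labels, noisy_labels):
    Q[t, n] += 1  # row = true class, column = training label

clean_count = int(np.trace(Q))            # correctly labeled images
noisy_count = int(Q.sum() - np.trace(Q))  # purposely mislabeled images
```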
Compare the detection feature maps of the two single-shot detectors (SSD and YOLO).
SSD uses multi-scale feature maps, while YOLO operates on a single-scale feature map [5].
[ 5 ]
[ { "id": "1512.02325_all_0", "text": " Current state-of-the-art object detection systems are variants of the following approach: hypothesize bounding boxes, resample pixels or features for each box, and apply a high-quality classifier. This pipeline has prevailed on detection benchmarks since the Selective S...
How is the conditioning vector obtained?
The conditioning embedding vector is obtained from the text prompt [15]. The text prompt is first tokenized into a fixed-length token-identifier vector, and the language model then maps this vector to an embedding that serves as the conditioning vector [1].
[ 15, 1 ]
[ { "id": "2208.12242_all_0", "text": " Can you imagine your own dog traveling around the world, or your favorite bag displayed in the most exclusive showroom in Paris? What about your parrot being the main character of an illustrated storybook? Rendering such imaginary scenes is a challenging task that requi...
How does using the fixed attention approach affect performance differently when compared to the masked attention approach?
There is no discussion on how the performance difference is brought about [20].
[ 20 ]
[ { "id": "2112.05364_all_0", "text": " With transformer-based models (Vaswani et al., 2017) dominating the leaderboard for many key NLP tasks such as summarization (Liu and Lapata, 2019), topic segmentation (Lukasik et al., 2020), and sentiment analysis Adhikari et al. (2019), their core multi-head self-att...
What are localization errors?
Localization errors are inaccuracies in the predicted bounding-box positions. Fast R-CNN and other state-of-the-art methods predict bounding boxes more accurately, so they suffer less from localization errors, whereas the YOLO model has localization problems, which are addressed in this paper [7].
[ 7 ]
[ { "id": "1612.08242_all_0", "text": " General purpose object detection should be fast, accurate, and able to recognize a wide variety of objects. Since the introduction of neural networks, detection frameworks have become increasingly fast and accurate. However, most detection methods are still constrained ...
Why wasn't the normalization done taking all anchors into account?
The normalization is not done over all anchors because the vast majority of anchors are easy negatives and receive negligible loss values under the focal loss [33].
[ 33 ]
[ { "id": "1708.02002_all_0", "text": " Current state-of-the-art object detectors are based on a two-stage, proposal-driven mechanism. As popularized in the R-CNN framework , the first stage generates a sparse set of candidate object locations and the second stage classifies each candidate location as one of ...
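This is the key property of the focal loss, FL(p_t) = -(1 - p_t)^γ log(p_t): well-classified (easy) examples are down-weighted by the (1 - p_t)^γ factor. A quick numerical check, with γ = 2 and probabilities chosen for illustration:

```python
import math

def focal_loss(p_t, gamma=2.0):
    # FL(p_t) = -(1 - p_t)^gamma * log(p_t); gamma = 0 recovers cross-entropy
    return -((1.0 - p_t) ** gamma) * math.log(p_t)

easy_negative = focal_loss(0.99)  # confidently correct -> near-zero loss
hard_example = focal_loss(0.10)   # badly misclassified -> large loss
```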
The authors use a different number of layers and rectified units for experiments on MNIST and speech recognition. What factors might the authors have considered while deciding on these numbers?
For speech recognition, the architecture was based on the acoustic model used by Android voice search [12]. For MNIST, the architecture was strongly regularized using dropout and weight constraints as described in prior work [18].
[ 12, 18 ]
[ { "id": "1503.02531_all_0", "text": " Many insects have a larval form that is optimized for extracting energy and nutrients from the environment and a completely different adult form that is optimized for the very different requirements of traveling and reproduction. In large-scale machine learning, we typi...
What is meant by "row-major" order?
"Row-major" and "column-major" are both methods of storing elements in memory; in row-major order, the consecutive small grayscale images of each row reside next to each other, unlike in column-major order [14].
[ 14 ]
[ { "id": "1506.06579_all_0", "text": " The last several years have produced tremendous progress in training powerful, deep neural network models that are approaching and even surpassing human abilities on a variety of challenging machine learning tasks (Taigman et al., 2014; Schroff et al., 2015; Hannun et a...
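The two orders can be seen by flattening a small array in memory order (a generic NumPy illustration, not the paper's code):

```python
import numpy as np

a = np.array([[1, 2, 3],
              [4, 5, 6]])

# Row-major ("C" order): elements of each row are contiguous in memory.
row_major = a.ravel(order="C").tolist()  # [1, 2, 3, 4, 5, 6]

# Column-major ("F" order): elements of each column are contiguous instead.
col_major = a.ravel(order="F").tolist()  # [1, 4, 2, 5, 3, 6]
```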
What was the highest performing estimation method for the authors' experiments?
There is no "highest" performer by any single measure [45].
[ 45 ]
[ { "id": "2112.05364_all_0", "text": " With transformer-based models (Vaswani et al., 2017) dominating the leaderboard for many key NLP tasks such as summarization (Liu and Lapata, 2019), topic segmentation (Lukasik et al., 2020), and sentiment analysis Adhikari et al. (2019), their core multi-head self-att...
What is the difference between foreground and background voxels?
At the output of the V-Net, foreground voxels represent the score for the anatomy being present at a region, and background voxels represent the score for the anatomy not being present [10].
[ 10 ]
[ { "id": "1606.04797_all_0", "text": " Recent research in computer vision and pattern recognition has highlighted the capabilities of Convolutional Neural Networks (CNNs) to solve challenging tasks such as classification, segmentation and object detection, achieving state-of-the-art performances. This succes...
What kind of text prompt does it contain? What were the criteria to set these prompts?
For a more thorough evaluation than the existing T2V literature, the authors collect an evaluation set from Amazon Mechanical Turk (AMT) consisting of 300 prompts; they filtered out prompts that were incomplete, too abstract, or offensive, then identified five categories (animals, fantasy, people, nature and scenes, food and beverage) and selected prompts for these categories [27]. The set is used for zero-shot T2V human evaluation [29].
[ 27, 29 ]
[ { "id": "2209.14792_all_0", "text": " The Internet has fueled collecting billions of (alt-text, image) pairs from HTML pages (Schuhmann et al., 2022), enabling the recent breakthroughs in Text-to-Image (T2I) modeling. However, replicating this success for videos is limited since a similarly sized (text, vid...
What are some examples of "explicit planning"?
An example of "explicit planning" would be the plan or strategy of abruptly increasing dynamics for performing a climax within the music to highlight a certain emotion such as anger [2].
[ 2 ]
[ { "id": "2208.14867_all_0", "text": " Computational modeling of expressive music performance focuses on mimicking human behaviors that convey the music (1, 2). For piano performance, one common task is to render an expressive performance from a quantized musical score. It aims to reproduce the loudness and ...
Does the paper's DNN use a larger-width kernel or multiple smaller-width kernels?
The paper's DNN uses multiple smaller Gaussian kernels iteratively as a way of regularization during the optimization process, as seen in Equation 2 [26].
[ 26 ]
[ { "id": "1506.06579_all_0", "text": " The last several years have produced tremendous progress in training powerful, deep neural network models that are approaching and even surpassing human abilities on a variety of challenging machine learning tasks (Taigman et al., 2014; Schroff et al., 2015; Hannun et a...
How is the Uniform Reader model different from the base LSTM model?
Beyond noting that the Uniform Reader performs poorly, the paper does not explicitly define what it is [32].
[ 32 ]
[ { "id": "1506.03340_all_0", "text": " Progress on the path from shallow bag-of-words information retrieval algorithms to machines capable of reading and understanding documents has been slow. Traditional approaches to machine reading and comprehension have been based on either hand engineered grammars , or ...
What types of non-linearities are used for both layers of the depthwise separable convolution?
MobileNet layers use batchnorm and ReLU nonlinearities [14].
[ 14 ]
[ { "id": "1704.04861_all_0", "text": " Convolutional neural networks have become ubiquitous in computer vision ever since AlexNet popularized deep convolutional neural networks by winning the ImageNet Challenge: ILSVRC 2012 . The general trend has been to make deeper and more complicated networks in order t...
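For context, the parameter savings from factoring a standard convolution into a depthwise layer plus a pointwise (1x1) layer, each followed by batchnorm and ReLU, can be checked with simple arithmetic (the channel sizes here are illustrative, not MobileNet's exact configuration):

```python
def standard_conv_params(k, c_in, c_out):
    # one k x k x c_in filter per output channel
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    depthwise = k * k * c_in  # one k x k filter per input channel
    pointwise = c_in * c_out  # 1x1 convolution that mixes channels
    return depthwise + pointwise

std = standard_conv_params(3, 64, 128)   # 73728 parameters
sep = separable_conv_params(3, 64, 128)  # 8768 parameters
ratio = std / sep                        # roughly 8.4x fewer parameters
```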
What are the two roles of autoencoder in proposed model?
The two roles of the autoencoder in the proposed model are to help learn the co-occurrence relationships among applications and to help the transformer encoder learn effective user-retention representations [26]. This is because the autoencoder helps learn high-quality app embeddings from the co-occurrence relationships [27].
[ 26, 27 ]
[ { "id": "2005.13303_all_0", "text": " Personalized mobile business, e.g., recommendations, and advertising, often require effective user representations. For better performance, user modeling in industrial applications often considers as much information as possible, including but not limited to gender, loc...
How can this prevent flickering artifacts? Any backup publications for further explanation?
To prevent flickering artifacts, they keep the hallucinated information consistent across frames [15]. They use the same noise initialization for each frame to encourage consistent detail hallucination [35].
[ 15, 35 ]
[ { "id": "2209.14792_all_0", "text": " The Internet has fueled collecting billions of (alt-text, image) pairs from HTML pages (Schuhmann et al., 2022), enabling the recent breakthroughs in Text-to-Image (T2I) modeling. However, replicating this success for videos is limited since a similarly sized (text, vid...
What optimizer did the authors use for the distilled models?
The model was trained with a distributed stochastic gradient descent approach [17].
[ 17 ]
[ { "id": "1503.02531_all_0", "text": " Many insects have a larval form that is optimized for extracting energy and nutrients from the environment and a completely different adult form that is optimized for the very different requirements of traveling and reproduction. In large-scale machine learning, we typi...
What are the obstacles of RE? Does this paper solve them?
There are two obstacles to RE [2].
[ 2 ]
[ { "id": "2102.01373_all_0", "text": " As one of the fundamental information extraction (IE) tasks, relation extraction (RE) aims at identifying the relationship(s) between two entities in a given piece of text from a pre-defined set of relationships of interest. For example, given the sentence “Bill Gates f...
When defining the reading comprehension task, the authors explain that they wish to estimate p(a|c, q). What would a model trained on this task do if the context "c" itself had factually incorrect information?
The authors are training a reading comprehension model [2]. Therefore, if the context "c" has incorrect information, the model is likely to answer based on the factually incorrect information itself [4]. The authors clearly explain that the task their model is built for and evaluated on is identifying answers from a given text [7].
[ 2, 4, 7 ]
[ { "id": "1506.03340_all_0", "text": " Progress on the path from shallow bag-of-words information retrieval algorithms to machines capable of reading and understanding documents has been slow. Traditional approaches to machine reading and comprehension have been based on either hand engineered grammars , or ...
Did the authors show that distillation of the knowledge graph can be useful for the re-ranking task?
This work proposes using knowledge graph distillation as it can help retain only informative knowledge needed for passage re-ranking [20]. By investigating the effect of global and local distillation separately, this work found that the MRR@10 score and efficiency decreased slightly without global distillation, and that time efficiency decreased the most without local distillation [54]. Therefore, this work demonstrates that both global and local distillation of knowledge graphs is useful for re-ranking tasks in terms of performance and efficiency [58].
[ 20, 54, 58 ]
[ { "id": "2204.11673_all_0", "text": " Passage Re-ranking is a crucial stage in modern information retrieval systems, which aims to reorder a small set of candidate passages to be presented to users. To put the most relevant passages on top of a ranking list, a re-ranker is usually designed with powerful cap...
How can this value be calculated? Why do the authors set the value to 0.5?
NSFW images may contain toxic words in the paired text or a watermark [27]. Therefore, the authors filter out sample pairs with probability larger than 0.5 [36].
[ 27, 36 ]
[ { "id": "2209.14792_all_0", "text": " The Internet has fueled collecting billions of (alt-text, image) pairs from HTML pages (Schuhmann et al., 2022), enabling the recent breakthroughs in Text-to-Image (T2I) modeling. However, replicating this success for videos is limited since a similarly sized (text, vid...
Can we use image classification models for semantic segmentation?
Yes; since a patch is fed into a classifier to predict the class probabilities of the center pixel, it is evident that image classification models can be used for semantic segmentation [3].
[ 3 ]
[ { "id": "1505.07293_all_0", "text": " Semantic segmentation is an important step towards understanding and inferring different objects and their arrangements observed in a scene. This has wide array of applications ranging from estimating scene geometry, inferring support-relationships among objects to auto...
They perform only a qualitative analysis of the proposed model. Is it true?
False [30]. They provided not only qualitative analysis but also quantitative analysis for their model [32].
[ 30, 32 ]
[ { "id": "1711.09020_all_0", "text": " The task of image-to-image translation is to change a particular aspect of a given image to another, e.g., changing the facial expression of a person from smiling to frowning (see Fig. StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Tra...
What is the reason that the space-time attention does not work well to generate consistent content?
Factorized space-time attention in VDM baselines is insufficient to generate consistent content in the task of One-Shot Video Generation [36]. The self-attention layers in T2I models are only driven by spatial similarities rather than pixel positions [2]. Using full attention in space-time leads to quadratic growth in computation [18]. It is thus infeasible for generating long-form videos with increasing frames [20].
[ 36, 2, 18, 20 ]
[ { "id": "2212.11565_all_0", "text": " The large-scale multimodal dataset , consisting of billions of text-image pairs crawled from the Internet, has enabled a breakthrough in Text-to-Image (T2I) generation (30, 35, 6, 42, 40). To replicate this success in Text-to-Video (T2V) generation, recent works (42, 15...
Are newly trained concepts weaker than prior ones?
If this question about whether newly trained concepts are weaker refers to existing work, the answer depends on which specific method is used [2]. For example, the authors mention that finetuning-based approaches suffer from catastrophic forgetting [68].
[ 2, 68 ]
[ { "id": "2208.01618_all_0", "text": " In a famous scene from the motion picture “Titanic”, Rose makes a request of Jack: “…draw me like one of your French girls”. Albeit simple, this request contains a wealth of information. It indicates that Jack should produce a drawing; It suggests that its style and com...
If LMs have been found to be biased and limited in creativity, how would LMs be able to create instructions that cover a greater diversity of tasks compared to humans who might be able to imagine possible tasks?
LMs might be able to create instructions covering a greater diversity of tasks since they are trained on a large corpus of material that encompasses the work of many humans [4]. Additionally, the authors' proposed approach, Self-Instruct, uses a bootstrapping phase where humans provide the first set of instructions, and a LM uses those as examples to generate more instructions [7]. Approaches such as these, combining the efforts of a human and a language model, might be one way to ensure LMs create a wider array of tasks [11]. However, the authors do acknowledge that LMs are prone to be biased towards commonly occurring sequences, at the cost of rarer sequences, meaning that this is an open research question [27].
[ 4, 7, 11, 27 ]
[ { "id": "2212.10560_all_0", "text": " The recent NLP literature has witnessed a tremendous amount of activity in building models that can follow natural language instructions (Mishra et al., 2022; Wei et al., 2022; Sanh et al., 2022; Wang et al., 2022; Ouyang et al., 2022; Chung et al., 2022, i.a.). These d...
What metric is used to compare VLAD methods with their Max pooling counterparts?
They compare using recall@1 on the Tokyo 24/7 dataset [43].
[ 43 ]
[ { "id": "1511.07247_all_0", "text": " Visual place recognition has received a significant amount of attention in the past years both in computer vision (66, 35, 10, 64, 65, 24, 9, 81, 4, 80, 63) and robotics communities (15, 16, 46, 44, 75) motivated by, e.g., applications in autonomous driving , augmented ...
How does the form of the encoding distribution affect the ability to uncover true generative factors in VAEs?
The exact form of the encoding distribution affects the ability to uncover true generative factors in VAEs [46].
[ 46 ]
[ { "id": "1812.02833_all_0", "text": " An oft-stated motivation for learning disentangled representations of data with deep generative models is a desire to achieve interpretability (5, 10)—particularly the decomposability (see §3.2.1 in 33) of latent representations to admit intuitive explanations. Most wor...
How many images and classes does the ImageNet dataset have?
ImageNet has more than 1.2 million images and about 1000 classes [0].
[ 0 ]
[ { "id": "1602.03409_all_0", "text": " Tremendous progress has been made in image recognition, primarily due to the availability of large-scale annotated datasets (i.e. ImageNet (1, 2)) and the recent revival of deep convolutional neural networks (CNN) (3, 4). For data-driven learning, large-scale well-annot...
To investigate the effect of the squeeze ratio on model size and accuracy, were the models fine-tuned or trained from scratch?
To investigate the effect of the squeeze ratio on model size, models were trained from scratch so that one can make comparisons for these separate models [31].
[ 31 ]
[ { "id": "1602.07360_all_0", "text": " Much of the recent research on deep convolutional neural networks (CNNs) has focused on increasing accuracy on computer vision datasets. For a given accuracy level, there typically exist multiple CNN architectures that achieve that accuracy level. Given equivalent accur...
How does the trade-off between fidelity and diversity vary with the Gradient Scale?
When using a scale of 1, we observed that the classifier assigned reasonable probabilities (around 50%) to the desired classes for the final samples, but these samples did not match the intended classes upon visual inspection [4]. Scaling up the classifier gradients remedied this problem, and the class probabilities from the classifier increased to nearly 100% [41]. Using a larger gradient scale focuses more on the modes of the classifier, which is potentially desirable for producing higher fidelity (but less diverse) samples [42].
[ 4, 41, 42 ]
[ { "id": "2105.05233_all_0", "text": " Over the past few years, generative models have gained the ability to generate human-like natural language Brown et al. (2020), infinite high-quality synthetic images Brock et al. (2018); Karras et al. (2019b); Razavi et al. (2019) and highly diverse human speech and mu...
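The mode-seeking effect of scaling the classifier gradient by s corresponds to sampling from a distribution proportional to p(y|x)^s. Renormalizing a sharpened distribution shows how probability mass concentrates on the mode as s grows (the probabilities below are illustrative, not from the paper):

```python
import numpy as np

def sharpen(p, s):
    # distribution proportional to p**s, renormalized
    q = p ** s
    return q / q.sum()

p = np.array([0.5, 0.3, 0.2])  # classifier probabilities at scale s = 1
p10 = sharpen(p, 10.0)         # at scale 10, nearly all mass sits on the mode
```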
What is the difference ratio of non-English text in pretraining data between T5 and RoBERTa?
T5's data contains 0.26% non-English text, and RoBERTa's data contains 0.78% [20].
[ 20 ]
[ { "id": "2204.08110_all_0", "text": " Pretrained language models have become an integral part of NLP systems. They come in two flavors: monolingual, where the model is trained on text from a single language, and multilingual, where the model is jointly trained on data from many different languages. Monoling...
What is the role of adding a 1x1 convolution before the 3x3 and 1x7 convolutions, and how does it help?
A 1x1 convolution block is added before the 3x3 and 1x7 convolutions to scale up the dimensionality of the filter bank before the addition, so as to match the depth of the input [10].
[ 10 ]
[ { "id": "1602.07261_all_0", "text": " Since the 2012 ImageNet competition  winning entry by Krizhevsky et al , their network “AlexNet” has been successfully applied to a larger variety of computer vision tasks, for example to object-detection , segmentation , human pose estimation , video classification , o...
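A 1x1 convolution is a per-pixel linear map over channels, so it can change the filter-bank depth without touching the spatial dimensions; a minimal NumPy sketch (shapes are illustrative):

```python
import numpy as np

h, w, c_in, c_out = 4, 4, 32, 96
x = np.random.randn(h, w, c_in)
kernel = np.random.randn(c_in, c_out)  # the 1x1 convolution's weights

# Applied independently at every spatial position: the channel depth
# changes from c_in to c_out, while h and w stay the same.
y = x @ kernel
```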
What is the difference between deep spatial features of the noisy image \phi(z_t) and noisy image z_t?
A noisy image z_t is the output of a diffusion step; what its deep spatial features \phi(z_t) represent cannot be answered using this paper alone [18].
[ 18 ]
[ { "id": "2208.01626_all_0", "text": " Recently, large-scale language-image (LLI) models, such as Imagen , DALL·E 2  and Parti , have shown phenomenal generative semantic and compositional power, and gained unprecedented attention from the research community and the public eye. These LLI models are trained o...