Columns: input (string, length 14–315), answer (string, length 9–2.16k), gold_ctxs (list, length 1–15), ctxs (list, length 11–186).
Why did the authors use hidden states only at the top LSTM layers in both the encoder and decoder?
To derive a context vector that captures relevant source-side information that helps predict the current target word [10].
[ 10 ]
[ { "id": "1508.04025_all_0", "text": " Neural Machine Translation (NMT) achieved state-of-the-art performances in large-scale translation tasks such as from English to French (Luong et al., 2015) and English to German (Jean et al., 2015). NMT is appealing since it requires minimal domain knowledge and is con...
The paper test classifying nodes on evolving graphs with 2 datasets(Citation and Reddit). What is the difference between two experiments with each dataset?
The Citation and Reddit data differ in the semantics of their edges and word vectors [29]. An edge in the citation data indicates that one paper cites another, whereas an edge in Reddit indicates that two posts were written by the same user [30].
[ 29, 30 ]
[ { "id": "1706.02216_all_0", "text": " Low-dimensional vector embeddings of nodes in large graphs111While it is common to refer to these data structures as social or biological networks, we use the term graph to avoid ambiguity with neural network terminology. have proved extremely useful as feature inputs f...
What are the metrics used to compare authors' approach with image retrieval benchmarks?
The authors report using mAP to compare their approach with image retrieval benchmarks [47].
[ 47 ]
[ { "id": "1511.07247_all_0", "text": " Visual place recognition has received a significant amount of attention in the past years both in computer vision (66, 35, 10, 64, 65, 24, 9, 81, 4, 80, 63) and robotics communities (15, 16, 46, 44, 75) motivated by, e.g., applications in autonomous driving , augmented ...
What term do they add in loss function to guarantee that generated images are not differentiable from real images?
Adversarial loss [12]. The paper notes that the adversarial loss was adopted to make generated images indistinguishable from real images [7].
[ 12, 7 ]
[ { "id": "1711.09020_all_0", "text": " The task of image-to-image translation is to change a particular aspect of a given image to another, e.g., changing the facial expression of a person from smiling to frowning (see Fig. StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Tra...
Does this also guarantee a generalizable performance over several domains? Did the authors evaluate the performance by specific domain?
They collect 300 text prompts for human evaluation, and the prompts span 5 categories [28]. For the quantitative results, Make-A-Video outperforms CogVideo in both the Chinese and English settings, from which one can infer that Make-A-Video has significantly better generalization capability than prior work [29]. Moreover, Table 2 demonstrates that Make-A-Video's zero-shot performance is already competitive with other approaches that are trained on UCF-101, and is much better than CogVideo [30]. This indicates that Make-A-Video can generalize well even to such a specific domain [31].
[ 28, 29, 30, 31 ]
[ { "id": "2209.14792_all_0", "text": " The Internet has fueled collecting billions of (alt-text, image) pairs from HTML pages (Schuhmann et al., 2022), enabling the recent breakthroughs in Text-to-Image (T2I) modeling. However, replicating this success for videos is limited since a similarly sized (text, vid...
What are some examples of roles of node?
The roles of nodes can be their protein functions or categories [32]. This is because the paper classifies protein functions in the PPI network, while it classifies node categories in the Reddit and citation networks [6].
[ 32, 6 ]
[ { "id": "1706.02216_all_0", "text": " Low-dimensional vector embeddings of nodes in large graphs111While it is common to refer to these data structures as social or biological networks, we use the term graph to avoid ambiguity with neural network terminology. have proved extremely useful as feature inputs f...
Does Conditional Computation have things in common with chain rule between statistical independent variables?
Yes. Just as the chain rule involves a sequence of computations, the conditional computation method is also sequential [1].
[ 1 ]
[ { "id": "1706.03762_all_0", "text": " Recurrent neural networks, long short-term memory and gated recurrent neural networks in particular, have been firmly established as state of the art approaches in sequence modeling and transduction problems such as language modeling and machine translation (35, 2, 5)...
What are the reasons for such results? Why does it excel in such cases?
A model that has only seen text describing images is surprisingly effective at generating short videos, as demonstrated by the temporal diffusion-based method [0]. The authors extend a diffusion-based T2I model to T2V through a spatiotemporally factorized diffusion model [2]. They leverage joint text-image priors to bypass the need for paired text-video data, which in turn allows them to potentially scale to larger quantities of video data [34].
[ 0, 2, 34 ]
[ { "id": "2209.14792_all_0", "text": " The Internet has fueled collecting billions of (alt-text, image) pairs from HTML pages (Schuhmann et al., 2022), enabling the recent breakthroughs in Text-to-Image (T2I) modeling. However, replicating this success for videos is limited since a similarly sized (text, vid...
What is the difference between (x,y) possible pairs and number of pixels n in each bin?
The number of possible pairs (x,y) is n, as (x,y) iterates through each pixel in the bin [9].
[ 9 ]
[ { "id": "1605.06409_all_0", "text": " A prevalent family (8, 6, 18) of deep networks for object detection can be divided into two subnetworks by the Region-of-Interest (RoI) pooling layer : (i) a shared, “fully convolutional” subnetwork independent of RoIs, and (ii) an RoI-wise subnetwork that does not shar...
The paper mentions that SqueezeNet achieves AlexNet-level accuracy on ImageNet. Was the accuracy exactly the same as AlexNet or roughly the same?
It is roughly the same as AlexNet, and SqueezeNet may even exceed it in some experimental cases [23].
[ 23 ]
[ { "id": "1602.07360_all_0", "text": " Much of the recent research on deep convolutional neural networks (CNNs) has focused on increasing accuracy on computer vision datasets. For a given accuracy level, there typically exist multiple CNN architectures that achieve that accuracy level. Given equivalent accur...
Among the 5 GNNs used for evaluation, is there a GNN for heterogeneous graphs?
Yes, GTN (Graph Transformer Networks) among the five GNNs used for evaluation is designed for heterogeneous graphs [21].
[ 21 ]
[ { "id": "2007.08294_all_0", "text": " Graph neural networks  have been proven effective to learn representations for various tasks such as node classification , link prediction , and graph classification . The powerful representation yields state-of-the-art performance in a variety of applications including...
Based on the statement that the authors used a beam size of 20 during inference, how many total sentences would be generated till timestep t=10?
Since beam search keeps only the best k (= beam size) candidates at every time step, 20 sentences would be generated by timestep t=10 [19].
[ 19 ]
[ { "id": "1411.4555_all_0", "text": " Being able to automatically describe the content of an image using properly formed English sentences is a very challenging task, but it could have great impact, for instance by helping visually impaired people better understand the content of images on the web. This task...
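The keep-only-the-best-k behavior described in the answer above can be sketched as follows. This is a minimal illustration, not the paper's implementation; `next_scores` is a hypothetical callback returning (token, log-probability) pairs for a partial sequence:

```python
import heapq

def beam_search(next_scores, beam_size, steps):
    """Generic beam search: expand every kept candidate, then prune
    back down to the best `beam_size` sequences at each time step."""
    beams = [((), 0.0)]  # (sequence, cumulative log-prob)
    for _ in range(steps):
        candidates = []
        for seq, score in beams:
            for tok, logp in next_scores(seq):
                candidates.append((seq + (tok,), score + logp))
        # keep only the best k candidates, as in the answer above
        beams = heapq.nlargest(beam_size, candidates, key=lambda c: c[1])
    return beams
```

With `beam_size=20`, the pruning step guarantees that at most 20 candidate sentences survive after every time step, including t=10.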
What are the different approaches that have been used in face recognition technology over the years?
From the 1990s to the 2000s, holistic approaches were the most prominent direction in face recognition; later on, local-feature-based face recognition was introduced [0]. In the 2010s, shallow-learning-based local descriptors were used [1]. In 2014, DeepFace, a deep-learning-based model, was introduced [3]. Ever since, the state-of-the-art techniques have come from deep-learning-based approaches [39].
[ 0, 1, 3, 39 ]
[ { "id": "1804.06655_all_0", "text": " Face recognition (FR) has been the prominent biometric technique for identity authentication and has been widely used in many areas, such as military, finance, public security and daily life. FR has been a long-standing research topic in the CVPR community. In the early...
How can auxiliary tasks help the volumetric CNN avoid overfitting and improve performances ?
The auxiliary tasks are closely related to the main task but are difficult to overfit, which keeps learning from converging early even when the main task has overfitted [17]. The auxiliary tasks are meant to be challenging, using only partial subvolumes for their predictions [21]. They better exploit the discriminative power of local regions because they do not use additional knowledge about the semantics of the object [22].
[ 17, 21, 22 ]
[ { "id": "1604.03265_all_0", "text": " Understanding 3D environments is a vital element of modern computer vision research due to paramount relevance in many vision systems, spanning a wide field of application scenarios from self-driving cars to autonomous robots. Recent advancements in real-time SLAM techn...
The paper mentions how training two models, one forward model and one backward model (to achieve bidirectionality) results in a performance gain. Would it be possible to achieve bidirectionality with just one model via some form of masked language modelling in this specific approach?
In this work, the authors train two models: a forward and a backward LM [28].
[ 28 ]
[ { "id": "1801.06146_all_0", "text": " Inductive transfer learning has had a large impact on computer vision (CV). Applied CV models (including object detection, classification, and segmentation) are rarely trained from scratch, but instead are fine-tuned from models that have been pretrained on ImageNet, MS...
What is electron microscopic recordings used for ?
Electron microscopic recordings can be used to highlight neuronal structures [16].
[ 16 ]
[ { "id": "1505.04597_all_0", "text": " In the last two years, deep convolutional networks have outperformed the state of the art in many visual recognition tasks, e.g. (7, 3). While convolutional networks have already existed for a long time , their success was limited due to the size of the available traini...
The Daily Mail part of the dataset is approximately 2x larger than the CNN section of the dataset. True or false?
True [6].
[ 6 ]
[ { "id": "1506.03340_all_0", "text": " Progress on the path from shallow bag-of-words information retrieval algorithms to machines capable of reading and understanding documents has been slow. Traditional approaches to machine reading and comprehension have been based on either hand engineered grammars , or ...
Why is the action space of language modeling particularly large? Is it because of the vocab size? But then, moving in the real world also has a huge action space (degrees of movement).
The action space for language modeling is equal to the vocabulary of the language model [11]. Since vocabularies are very large (i.e., tens of thousands of possible tokens) [4], the action space is also very large [7].
[ 11, 4, 7 ]
[ { "id": "2210.01241_all_0", "text": " The ultimate aim of language technology is to interact with humans. However, most language models are trained without direct signals of human preference, with supervised target strings serving as (a sometimes crude) proxy. One option to incorporate user feedback is via ...
The authors mention that their framework, MXNet, uses "lazy evaluation". Define lazy evaluation.
Lazy evaluation means that operations on data such as NDArray are not executed immediately; the actual data pushes and pulls are scheduled by the backend engine so that data dependencies can be correctly resolved [11].
[ 11 ]
[ { "id": "1512.01274_all_0", "text": " The scale and complexity of machine learning (ML) algorithms are becoming increasingly large. Almost all recent ImageNet challenge  winners employ neural networks with very deep layers, requiring billions of floating-point operations to process one single sample. The ri...
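The idea of deferring execution until a value is needed can be sketched with a toy class. This is an illustration of the general lazy-evaluation pattern only, not MXNet's actual dependency engine; `LazyArray` and `evaluate` are hypothetical names:

```python
class LazyArray:
    """Toy lazy evaluation: operators build a dependency graph;
    nothing is computed until the value is actually requested."""
    def __init__(self, value=None, op=None, deps=()):
        self._value, self._op, self._deps = value, op, deps

    def __add__(self, other):
        # schedule the op; do not compute yet
        return LazyArray(op=lambda a, b: a + b, deps=(self, other))

    def __mul__(self, other):
        return LazyArray(op=lambda a, b: a * b, deps=(self, other))

    def evaluate(self):
        # a real engine resolves dependencies asynchronously;
        # here we simply walk the graph recursively and cache results
        if self._value is None:
            self._value = self._op(*(d.evaluate() for d in self._deps))
        return self._value
```

For example, `c = (a + b) * d` builds a graph and performs no arithmetic; the additions and multiplications only run once `c.evaluate()` is called, in dependency order.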
What is the difference between classification and object detection?
For the image classification task, every image was annotated with one object class label, corresponding to one object that is present in the image [49].
[ 49 ]
[ { "id": "1409.0575_all_0", "text": " The ImageNet Large Scale Visual Recognition Challenge (ILSVRC) has been running annually for five years (since 2010) and has become the standard benchmark for large-scale object recognition.111In this paper, we will be using the term object recognition broadly to encompa...
What is the range of the number of non-English tokens found in English corpus?
Non-English tokens make up 300k to 406M in the datasets investigated [8].
[ 8 ]
[ { "id": "2204.08110_all_0", "text": " Pretrained language models have become an integral part of NLP systems. They come in two flavors: monolingual, where the model is trained on text from a single language, and multilingual, where the model is jointly trained on data from many different languages. Monoling...
Will the pseudo-query generator generalize well when the target corpus is significantly different from the source domain on which the generator was trained?
The pseudo-query generator generalizes well even when the target corpus is significantly different from the source domain on which the generator was trained [52].
[ 52 ]
[ { "id": "2004.14503_all_0", "text": " Recent advances in neural retrieval have led to advancements on several document, passage and knowledge-base benchmarks Guo et al. (2016); Pang et al. (2016); Hui et al. (2017); Dai et al. (2018); Gillick et al. (2018); Nogueira and Cho (2019a); MacAvaney et al. (2019);...
Is it true that this paper's learning process can be viewed as maximizing the sensitivity of the loss functions of new tasks with respect to the parameters?
It is true [10]. As several passages mention, the learning process can be seen as maximizing the sensitivity of the loss function [2].
[ 10, 2 ]
[ { "id": "1703.03400_all_0", "text": " Learning quickly is a hallmark of human intelligence, whether it involves recognizing objects from a few examples or quickly learning new skills after just minutes of experience. Our artificial agents should be able to do the same, learning and adapting quickly from onl...
Does this method likely to show similar tendency of performance improvement when other backbone model (like BERT_large) is used?
Through the experiments, this work demonstrated that the KERM model was able to significantly improve on the performance of its backbone model, ERNIE [43]. The authors posit that this is due to how KERM explicitly introduces external knowledge which can improve semantic matching performance [49].
[ 43, 49 ]
[ { "id": "2204.11673_all_0", "text": " Passage Re-ranking is a crucial stage in modern information retrieval systems, which aims to reorder a small set of candidate passages to be presented to users. To put the most relevant passages on top of a ranking list, a re-ranker is usually designed with powerful cap...
How did the authors judge that the generated instructions were "meaningful"?
The authors judged a generated instruction as meaningful by seeing if it described a valid task [17].
[ 17 ]
[ { "id": "2212.10560_all_0", "text": " The recent NLP literature has witnessed a tremendous amount of activity in building models that can follow natural language instructions (Mishra et al., 2022; Wei et al., 2022; Sanh et al., 2022; Wang et al., 2022; Ouyang et al., 2022; Chung et al., 2022, i.a.). These d...
I understand that the quality of xb depends on xa. So the quality could get worse if we generate more frames?
No, it doesn't [11]. The model enables generating longer videos by applying it autoregressively with a new method for conditional generation [26]. Also, P4 explains that the conditioning method helps the model outperform the existing method [27]. Samples from the reconstruction guidance method are temporally coherent over the course of the entire autoregressive generation process, from which we can infer that quality is not affected by the number of generated frames [30].
[ 11, 26, 27, 30 ]
[ { "id": "2204.03458_all_0", "text": " Diffusion models have recently been producing high quality results in image generation and audio generation (e.g. 28, 39, 40, 16, 23, 36, 48, 60, 42, 10, 29), and there is significant interest in validating diffusion models in new data modalities. In this work, we prese...
How could reducing the number of operations into constant result in decreasing resolution ?
The effective resolution is reduced because attention averages over weighted positions, an effect that is counteracted with Multi-Head Attention [18].
[ 18 ]
[ { "id": "1706.03762_all_0", "text": " Recurrent neural networks, long short-term memory and gated recurrent neural networks in particular, have been firmly established as state of the art approaches in sequence modeling and transduction problems such as language modeling and machine translation (35, 2, 5)...
What features of the SSD algorithm contributed to major improvements in detection speed ?
According to authors, the improvement in speed of SSD algorithm comes from eliminating bounding box proposals and the subsequent pixel or feature resampling stage [1].
[ 1 ]
[ { "id": "1512.02325_all_0", "text": " Current state-of-the-art object detection systems are variants of the following approach: hypothesize bounding boxes, resample pixels or features for each box, and apply a high-quality classifier. This pipeline has prevailed on detection benchmarks since the Selective S...
Explain the meaning of N-Way and M-shot.
N-way and M-shot mean that we sample N random classes from the dataset, and M random samples from each class [43].
[ 43 ]
[ { "id": "1711.04043_all_0", "text": " Supervised end-to-end learning has been extremely successful in computer vision, speech, or machine translation tasks, thanks to improvements in optimization technology, larger datasets and streamlined designs of deep convolutional or recurrent architectures. Despite th...
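The N-way, M-shot sampling described in the answer above can be sketched directly. This is a generic illustration, not the paper's code; `sample_episode` and the dict-of-lists dataset layout are assumptions:

```python
import random

def sample_episode(dataset, n_way, m_shot, rng=None):
    """Sample an N-way, M-shot episode: N random classes from the
    dataset, then M random examples from each sampled class.
    `dataset` maps a class label to its list of examples."""
    rng = rng or random.Random()
    classes = rng.sample(sorted(dataset), n_way)
    return {c: rng.sample(dataset[c], m_shot) for c in classes}
```

For example, a 5-way 1-shot episode draws 5 classes and a single example per class; the model must then classify queries among those 5 classes.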
Why does the performance increase of TWIST deteriorates when the epoch further increases over 400?
The direct optimization constraint used in TWIST can lead to sub-optimal solution [8].
[ 8 ]
[ { "id": "2211.02284_all_0", "text": " There has been a growing interest in using a large-scale dataset to build powerful machine learning models . Self-supervised learning (SSL), which aims to learn a useful representation without labels, is suitable for this trend; it is actively studied in the fields of n...
How can we check if the model suffers from mode collapse?
If all input images map to the same output image and the optimization fails to make progress, then the model is suffering from "mode collapse" [40]. For example, the paper evaluates its method with the cycle loss in only one direction: GAN + forward cycle loss, or GAN + backward cycle loss (in Equation 2), and finds that this often incurs training instability and causes mode collapse, especially for the direction of the mapping that was removed [5].
[ 40, 5 ]
[ { "id": "1703.10593_all_0", "text": " What did Claude Monet see as he placed his easel by the bank of the Seine near Argenteuil on a lovely spring day in 1873 (Figure 1, top-left)? A color photograph, had it been invented, may have documented a crisp blue sky and a glassy river reflecting it. Monet conveyed...
What is the difference between calculating the Taylor expansion and the Hessian?
Hessian is the second-order partial derivative matrix itself, and Taylor expansion is the method used to approximate it [27].
[ 27 ]
[ { "id": "2112.05364_all_0", "text": " With transformer-based models (Vaswani et al., 2017) dominating the leaderboard for many key NLP tasks such as summarization (Liu and Lapata, 2019), topic segmentation (Lukasik et al., 2020), and sentiment analysis Adhikari et al. (2019), their core multi-head self-att...
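The distinction in the answer above can be made explicit with the standard second-order Taylor expansion of a loss f around parameters \theta, in which the Hessian H appears as the coefficient of the quadratic term:

```latex
f(\theta + \Delta\theta) \approx f(\theta)
  + \nabla f(\theta)^{\top} \Delta\theta
  + \tfrac{1}{2}\, \Delta\theta^{\top} H \, \Delta\theta,
\qquad H = \nabla^{2} f(\theta)
```

That is, the Hessian is the matrix of second-order partial derivatives itself, while the Taylor expansion is the approximation in which it is used.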
What is the difference between C2Q and Q2C?
C2Q determines which query words are most relevant to each context word [12]. Q2C, on the other hand, determines which context words have the closest similarity to one of the query words [13].
[ 12, 13 ]
[ { "id": "1611.01603_all_0", "text": " The tasks of machine comprehension (MC) and question answering (QA) have gained significant popularity over the past few years within the natural language processing and computer vision communities. Systems trained end-to-end now achieve promising results on a variety o...
Is it true that GAN training is sensitive to every aspect of its setup (from optimization parameters to model architecture)?
Yes, it is true that GAN training is sensitive to every aspect of its setup (from optimization parameters to model architecture) [0].
[ 0 ]
[ { "id": "1809.11096_all_0", "text": " The state of generative image modeling has advanced dramatically in recent years, with Generative Adversarial Networks (GANs, Goodfellow et al. (2014)) at the forefront of efforts to generate high-fidelity, diverse images with models learned directly from data. GAN trai...
As a result, a model that has only seen text describing images is surprisingly effective at generating short videos, as demonstrated by our temporal diffusion-based method. Make-A-Video sets the new state-of-the-art in T2V generation.
Text describing images does not capture the entirety of phenomena observed in videos [1]. That said, one can often infer actions and events from static images [2], as done in image-based action recognition systems (Girish et al., 2020) [32]. Moreover, even without text descriptions, unsupervised videos are sufficient to learn how different entities in the world move and interact [4].
[ 1, 2, 32, 4 ]
[ { "id": "2209.14792_all_0", "text": " The Internet has fueled collecting billions of (alt-text, image) pairs from HTML pages (Schuhmann et al., 2022), enabling the recent breakthroughs in Text-to-Image (T2I) modeling. However, replicating this success for videos is limited since a similarly sized (text, vid...
Why -while having data of next positions in training dataset- is it important to modify the self-attention sub-layer in the decoder stack to ensure that the predictions for position i can depend only on the known outputs at positions less than ?
The self-attention layer in the decoder stack is modified to attend only to past predictions, in order to preserve the auto-regressive property of language models [11].
[ 11 ]
[ { "id": "1706.03762_all_0", "text": " Recurrent neural networks, long short-term memory and gated recurrent neural networks in particular, have been firmly established as state of the art approaches in sequence modeling and transduction problems such as language modeling and machine translation (35, 2, 5)...
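The masking described in the answer above is usually implemented by setting disallowed attention scores to negative infinity before the softmax, so those positions receive zero weight. A minimal dependency-free sketch (the function names are illustrative, not from the paper):

```python
def causal_mask(t):
    """Lower-triangular mask: position i may attend only to
    positions j <= i, preserving the auto-regressive property."""
    return [[1 if j <= i else 0 for j in range(t)] for i in range(t)]

def masked_scores(scores, mask, neg_inf=float("-inf")):
    """Replace disallowed attention scores with -inf so that the
    subsequent softmax assigns them zero probability."""
    return [[s if m else neg_inf for s, m in zip(row, mrow)]
            for row, mrow in zip(scores, mask)]
```

Even though the training data contains the next positions, this mask is what makes the decoder's training-time behavior match inference, where future tokens genuinely do not exist yet.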
What would be the possible genres or composers to use in the experiments for further investigation?
The possible genres or composers to use in the experiments for further investigation would be more contemporary genres, such as jazz or blues, since the trained dataset is completely Classical while the test dataset is more contemporary [21].
[ 21 ]
[ { "id": "2208.14867_all_0", "text": " Computational modeling of expressive music performance focuses on mimicking human behaviors that convey the music (1, 2). For piano performance, one common task is to render an expressive performance from a quantized musical score. It aims to reproduce the loudness and ...
Does each channel maps the a response from a different position of the image ?
The k^2 channels for each category C of the final convolutional layer each map to cells within a spatial grid that correspond to a position relative to an object [18].
[ 18 ]
[ { "id": "1605.06409_all_0", "text": " A prevalent family (8, 6, 18) of deep networks for object detection can be divided into two subnetworks by the Region-of-Interest (RoI) pooling layer : (i) a shared, “fully convolutional” subnetwork independent of RoIs, and (ii) an RoI-wise subnetwork that does not shar...
How is using an encoder-decoder structure as a mask different than local convolutions soft masks, a part from the test error ?
The local convolutions' soft mask only consists of three Residual units, which remain the same size [34].
[ 34 ]
[ { "id": "1704.06904_all_0", "text": " Not only a friendly face but also red color will draw our attention. The mixed nature of attention has been studied extensively in the previous literatures (34, 16, 23, 40). Attention not only serves to select a focused location but also enhances different representatio...
What is the goal behind constructing individual single scale SR system?
Since SRCNN is only trained for a single scale, we need to train individual single scale SRCNNs to deal with multiple scales [15].
[ 15 ]
[ { "id": "1511.04587_all_0", "text": " We address the problem of generating a high-resolution (HR) image given a low-resolution (LR) image, commonly referred as single image super-resolution (SISR) , , . SISR is widely used in computer vision applications ranging from security and surveillance imaging to med...
What is IS as a measure of fidelity?
IS measures fidelity, but it has a drawback: it does not reward covering the whole distribution or capturing diversity within a class, and models that memorize a small subset of the full dataset will still have a high IS [16].
[ 16 ]
[ { "id": "2105.05233_all_0", "text": " Over the past few years, generative models have gained the ability to generate human-like natural language Brown et al. (2020), infinite high-quality synthetic images Brock et al. (2018); Karras et al. (2019b); Razavi et al. (2019) and highly diverse human speech and mu...
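For reference, the Inception Score discussed in the answer above is conventionally defined via the KL divergence between the classifier's conditional label distribution and its marginal over generated samples (standard definition, not specific to this paper):

```latex
\mathrm{IS}(p_g) = \exp\!\Big(
  \mathbb{E}_{x \sim p_g}\,
  D_{\mathrm{KL}}\big( p(y \mid x) \,\|\, p(y) \big)
\Big)
```

A model that emits only a few very recognizable samples can still make each p(y|x) sharp, which is why a high IS does not imply coverage of the whole distribution.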
What are benefits of using learnable parameters for capturing positional information rather than using sines and cosines to capture these positions?
There are two choices of positional encoding: learned and fixed [28]. In the experiments, the two versions produced nearly identical results [32].
[ 28, 32 ]
[ { "id": "1706.03762_all_0", "text": " Recurrent neural networks, long short-term memory and gated recurrent neural networks in particular, have been firmly established as state of the art approaches in sequence modeling and transduction problems such as language modeling and machine translation (35, 2, 5)...
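The fixed alternative mentioned in the answer above is the sinusoidal encoding from the Transformer paper; the learned variant simply replaces these precomputed vectors with trainable parameters. A small sketch of the fixed version (function name is illustrative):

```python
import math

def sinusoidal_pe(position, d_model):
    """Fixed sinusoidal positional encoding:
    PE(pos, 2i)   = sin(pos / 10000^(2i / d_model))
    PE(pos, 2i+1) = cos(pos / 10000^(2i / d_model))"""
    pe = []
    for i in range(d_model):
        # paired sin/cos dimensions share the same wavelength
        angle = position / (10000 ** ((i // 2 * 2) / d_model))
        pe.append(math.sin(angle) if i % 2 == 0 else math.cos(angle))
    return pe
```

One argued benefit of the fixed version is extrapolation to sequence lengths longer than those seen in training, since the encoding is defined for any position.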
How the proposed LSTM future predictor model is different from the Ranzato model.
The Ranzato model predicts only the next frame, whereas the LSTM future predictor model predicts a long sequence into the future [15].
[ 15 ]
[ { "id": "1502.04681_all_0", "text": " Understanding temporal sequences is important for solving many problems in the AI-set. Recently, recurrent neural networks using the Long Short Term Memory (LSTM) architecture (Hochreiter & Schmidhuber, 1997) have been used successfully to perform various supervised seq...
Is there any limitation of this technique?
When training with open-domain data, it is hard to capture the necessary motion knowledge from the input video and synthesize novel videos guided by edited prompts [1].
[ 1 ]
[ { "id": "2212.11565_all_0", "text": " The large-scale multimodal dataset , consisting of billions of text-image pairs crawled from the Internet, has enabled a breakthrough in Text-to-Image (T2I) generation (30, 35, 6, 42, 40). To replicate this success in Text-to-Video (T2V) generation, recent works (42, 15...
What do authors mean by saying that RoI layer was added "unnaturally" in the ResNet ?
Old object detection networks used two subnetworks, one being a convolutional subnetwork with a pooling layer, and another being fully connected layers [0]. The pooling layer served as a RoI pooling layer [1].
[ 0, 1 ]
[ { "id": "1605.06409_all_0", "text": " A prevalent family (8, 6, 18) of deep networks for object detection can be divided into two subnetworks by the Region-of-Interest (RoI) pooling layer : (i) a shared, “fully convolutional” subnetwork independent of RoIs, and (ii) an RoI-wise subnetwork that does not shar...
For the paper's pretrained DNN, if the input does not contain a training set class, why does the probability vector show sensitivity towards the noise in input?
The reason is that convolutional layers learn parameters that extract useful information and relations from the feature map, which later help the network judge and give suitable responses about what the category is [16].
[ 16 ]
[ { "id": "1506.06579_all_0", "text": " The last several years have produced tremendous progress in training powerful, deep neural network models that are approaching and even surpassing human abilities on a variety of challenging machine learning tasks (Taigman et al., 2014; Schroff et al., 2015; Hannun et a...
How does the proposed method address the issue of cluster collapse?
The mutual information regularizer disfavors collapsed representations [19].
[ 19 ]
[ { "id": "2211.02284_all_0", "text": " There has been a growing interest in using a large-scale dataset to build powerful machine learning models . Self-supervised learning (SSL), which aims to learn a useful representation without labels, is suitable for this trend; it is actively studied in the fields of n...
What is meant by "hacks"?
'Hacks' means images that are not likely to exist naturally (non-natural-looking images) [8].
[ 8 ]
[ { "id": "1506.06579_all_0", "text": " The last several years have produced tremendous progress in training powerful, deep neural network models that are approaching and even surpassing human abilities on a variety of challenging machine learning tasks (Taigman et al., 2014; Schroff et al., 2015; Hannun et a...
What is the difference between five-fold cross validation and leave-one-patient out?
LOO performs better than five-fold cross validation [51].
[ 51 ]
[ { "id": "1602.03409_all_0", "text": " Tremendous progress has been made in image recognition, primarily due to the availability of large-scale annotated datasets (i.e. ImageNet (1, 2)) and the recent revival of deep convolutional neural networks (CNN) (3, 4). For data-driven learning, large-scale well-annot...
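The structural difference behind the comparison above is how the data is partitioned: five-fold CV uses five test folds, while leave-one-out uses one fold per held-out unit (here, per patient). A minimal index-splitting sketch, ignoring the grouping of samples by patient, which is an assumption of this illustration:

```python
def k_fold_indices(n, k):
    """Split indices 0..n-1 into k roughly equal, disjoint test folds."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def leave_one_out(n):
    """Leave-one-out is the limiting case: k equals n, one item per fold."""
    return k_fold_indices(n, n)
```

With five-fold CV each model trains on 80% of the data; leave-one-patient-out trains on all patients but one, so each model sees more data per fit at the cost of many more fits.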
What are the reasons behind the guidelines of choosing values of {-1,-2 and -3} for the initial bias for convolutional highway network of depth {10, 20 and 30} ?
Although not explicitly stated, given that the authors selected the best hyperparameters out of 100 experiments, it is highly likely that the initial bias was also selected by these experiments [24].
[ 24 ]
[ { "id": "1507.06228_all_0", "text": " Many recent empirical breakthroughs in supervised machine learning have been achieved through large and deep neural networks. Network depth (the number of successive computational layers) has played perhaps the most important role in these successes. For instance, withi...
What does the authors means by reframing object detection as a "single regression problem" ?
Reframing object detection as a simple regression problem means predicting bounding boxes and class probabilities directly from image pixels avoiding complex pipelines and steps which most of the existing (classifier-based) methods do [1]. YOLO can be trained end-to-end and can predict bounding boxes and respective class probabilities directly from an entire image [10]. Also, its loss function directly corresponds to detection performance, which makes optimizing it more intuitive and easier [3].
[ 1, 10, 3 ]
[ { "id": "1506.02640_all_0", "text": " Humans glance at an image and instantly know what objects are in the image, where they are, and how they interact. The human visual system is fast and accurate, allowing us to perform complex tasks like driving with little conscious thought. Fast, accurate algorithms fo...
Why does the author use a rectangular shaped RoI?
In order to produce a fixed-size feature map [10].
[ 10 ]
[ { "id": "1504.08083_all_0", "text": " Recently, deep ConvNets (14, 16) have significantly improved image classification and object detection (9, 19) accuracy. Compared to image classification, object detection is a more challenging task that requires more complex methods to solve. Due to this complexity, c...
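The reason a rectangular RoI yields a fixed-size feature map is that RoI pooling divides the rectangle into a fixed output grid and max-pools each cell, regardless of the RoI's size. A simplified integer-coordinate sketch (not the paper's implementation; it assumes a 2-D single-channel feature map and RoI edges that divide evenly into the grid):

```python
def roi_max_pool(feature_map, roi, out_h, out_w):
    """Max-pool a rectangular RoI (x0, y0, x1, y1) into a fixed
    out_h x out_w grid, so every RoI yields the same output shape."""
    x0, y0, x1, y1 = roi
    h, w = y1 - y0, x1 - x0
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            # integer sub-window of the RoI assigned to this grid cell
            ys = range(y0 + i * h // out_h, y0 + (i + 1) * h // out_h)
            xs = range(x0 + j * w // out_w, x0 + (j + 1) * w // out_w)
            row.append(max(feature_map[y][x] for y in ys for x in xs))
        out.append(row)
    return out
```

Because the output grid is fixed, RoIs of any rectangular size can feed the same fully connected layers downstream.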
Does a zero-shot scenario in this context refer to cases where relevance annotations are not available? Or are you referring to the case where the query set is also unavailable?
Creating a large training corpus is often time-consuming and expensive, hence many retrieval systems are applied in a zero-shot setup, with no available training data to train the system [2].
[ 2 ]
[ { "id": "2104.08663_all_0", "text": " Major natural language processing (NLP) problems rely on a practical and efficient retrieval component as a first step to find relevant information. Challenging problems include open-domain question-answering , claim-verification , duplicate question detection , and man...
What are some examples of the SR model that uses deep neural network to encode user behavior sequences?
BERT4Rec and S3-Rec are two examples [35].
[ 35 ]
[ { "id": "2202.02519_all_0", "text": " Recommender systems have been widely used in many scenarios to provide personalized items to users over massive vocabularies of items. The core of an effective recommender system is to accurately predict users’ interests toward items based on their historical interactio...
How does this paper experimentally show that auxiliary tasks are not beneficial?
This paper experimentally shows that auxiliary tasks are not always beneficial by comparing four different learning strategies [21]. The first strategy, "Vanilla," involves standard training of base models only with the primary task samples [7].
[ 21, 7 ]
[ { "id": "2007.08294_all_0", "text": " Graph neural networks  have been proven effective to learn representations for various tasks such as node classification , link prediction , and graph classification . The powerful representation yields state-of-the-art performance in a variety of applications including...
Normally, GAN training is unstable. Does this framework help to make the model stable?
The paper applies two techniques to stabilize its model training procedure [23]. First, for \mathcal{L}_{\text{GAN}} (Equation 1), the paper replaces the negative log likelihood objective with a least-squares loss [24]. Second, to reduce model oscillation, the paper follows Shrivastava et al.'s strategy and updates the discriminators using a history of generated images rather than the ones produced by the latest generators [40].
[ 23, 24, 40 ]
[ { "id": "1703.10593_all_0", "text": " What did Claude Monet see as he placed his easel by the bank of the Seine near Argenteuil on a lovely spring day in 1873 (Figure 1, top-left)? A color photograph, had it been invented, may have documented a crisp blue sky and a glassy river reflecting it. Monet conveyed...
What is the aim of using VAEs ?
The aim of using VAEs is to explore their decomposition capability to fulfill latent encoding of overlapping data and aggregate encoding of this data conforming to a desired structure [10]. Approaches that explore disentanglement in the context of VAEs aim to achieve independence between the dimensions of the aggregate encoding [16].
[ 10, 16 ]
[ { "id": "1812.02833_all_0", "text": " An oft-stated motivation for learning disentangled representations of data with deep generative models is a desire to achieve interpretability (5, 10)—particularly the decomposability (see §3.2.1 in 33) of latent representations to admit intuitive explanations. Most wor...
How do the authors deal with the numerical instability that may occur due to incorporating SVD into the proposed method?
GMPool decomposes the grouping matrix using a method that approximates gradients in SVD to stabilize gradient computations [20].
[ 20 ]
[ { "id": "2209.02939_all_0", "text": " Graph Neural Networks (GNNs) learn representations of individual nodes based on the connectivity structure of an input graph. For graph-level prediction tasks, the standard procedure globally pools all the node features into a single graph representation without weight ...
How does pre-training a latent space in Optimus lead to higher performance for dialog generation? Is it because the whole dialog can be encoded in the latent space?
The paper gives no information about dialog generation specifically, so it cannot be determined whether OPTIMUS's outperformance is due to its ability to encode the entire dialog in the latent space [34].
[ 34 ]
[ { "id": "2004.04092_all_0", "text": " Pre-trained language models (PLMs) have substantially advanced the state-of-the-art across a variety of natural language processing (NLP) tasks Peters et al. (2018); Devlin et al. (2019); Yang et al. (2019); Radford et al. (2019); Liu et al. (2019); Keskar et al. (2019)...
How did the authors improve the accuracy by making the model deeper while simultaneously decreasing the training time?
They changed the model architecture to multi-scale to reduce model capacity and used residual learning with high learning rates to increase training speed [41].
[ 41 ]
[ { "id": "1511.04587_all_0", "text": " We address the problem of generating a high-resolution (HR) image given a low-resolution (LR) image, commonly referred as single image super-resolution (SISR) , , . SISR is widely used in computer vision applications ranging from security and surveillance imaging to med...
How scales of the default bounding boxes for a particular feature map is computed?
The scale of the default boxes for each feature map is computed as s_{k}=s_{\text{min}}+\frac{s_{\text{max}}-s_{\text{min}}}{m-1}(k-1),\quad k\in[1,m] (Equation 4), where s_{\text{min}} is 0.2 and s_{\text{max}} is 0.9, meaning the lowest layer has a scale of 0.2, the highest layer has a scale of 0.9, and all layers in between are regularly spaced [12].
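As a sanity check, the scale schedule above can be computed with a few lines of Python (a sketch; m = 6 and the s_min/s_max defaults follow the paper):

```python
def default_box_scales(m, s_min=0.2, s_max=0.9):
    """Scale s_k for each of the m feature maps, linearly spaced
    between s_min (lowest layer) and s_max (highest layer)."""
    return [s_min + (s_max - s_min) / (m - 1) * (k - 1) for k in range(1, m + 1)]

scales = default_box_scales(6)  # 0.2, 0.34, 0.48, 0.62, 0.76, 0.9 (up to float error)
```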
[ 12 ]
[ { "id": "1512.02325_all_0", "text": " Current state-of-the-art object detection systems are variants of the following approach: hypothesize bounding boxes, resample pixels or features for each box, and apply a high-quality classifier. This pipeline has prevailed on detection benchmarks since the Selective S...
Did the method proposed in this paper perform on par with or better than the state-of-the-art methods that require users to provide spatial masks for editing?
Yes, their method did perform better than mask-based editing methods. The authors demonstrate through examples that their method is more intuitive for users, since it works from the prompt alone and does not require explicitly masking parts of the image; masking removes important structural information and makes it hard to modify complex structures [1]. Their approach also enables both local and global modifications and, in addition, does not require training a network [2].
[ 1, 2 ]
[ { "id": "2208.01626_all_0", "text": " Recently, large-scale language-image (LLI) models, such as Imagen , DALL·E 2  and Parti , have shown phenomenal generative semantic and compositional power, and gained unprecedented attention from the research community and the public eye. These LLI models are trained o...
What are the issues that come with very deep networks?
Training is difficult for very deep networks because they have many parameters and take a long time to converge [21]. Also, since convolutional layers shrink feature maps, a network that is too deep can end up with feature maps that are too small [31].
[ 21, 31 ]
[ { "id": "1511.04587_all_0", "text": " We address the problem of generating a high-resolution (HR) image given a low-resolution (LR) image, commonly referred as single image super-resolution (SISR) , , . SISR is widely used in computer vision applications ranging from security and surveillance imaging to med...
The authors restrict batch size to 1 during training. Why did the authors do this, and what problems have might been encountered with higher batch size?
Since the paper provides no information about how the authors decided the value of the batch size and lacks an ablation study on it, this question cannot be answered [29].
[ 29 ]
[ { "id": "1506.07503_all_0", "text": " Recently, attention-based recurrent networks have been successfully applied to a wide variety of tasks, such as handwriting synthesis , machine translation , image caption generation  and visual object classification .111An early version of this work was presented at th...
Which metric is used to compare different unsupervised models?
Error in predicting the future and the performance on supervised tasks are the metrics used to compare different unsupervised models [39].
[ 39 ]
[ { "id": "1502.04681_all_0", "text": " Understanding temporal sequences is important for solving many problems in the AI-set. Recently, recurrent neural networks using the Long Short Term Memory (LSTM) architecture (Hochreiter & Schmidhuber, 1997) have been used successfully to perform various supervised seq...
Is the trained wordpiece model the same as the Google speech recognition system developed to solve a Japanese/Korean segmentation problem the authors mentioned above?
Yes, the word piece model was initially developed to solve a Japanese/Korean segmentation problem [27].
[ 27 ]
[ { "id": "1609.08144_all_0", "text": " Neural Machine Translation (NMT) (41, 2) has recently been introduced as a promising approach with the potential of addressing many shortcomings of traditional machine translation systems. The strength of NMT lies in its ability to learn directly, in an end-to-end fashi...
What are the consequences of using class labels and box layouts ?
Class labels and box offsets are inevitably collapsed into short output vectors by fully connected layers [17].
[ 17 ]
[ { "id": "1703.06870_all_0", "text": " The vision community has rapidly improved object detection and semantic segmentation results over a short period of time. In large part, these advances have been driven by powerful baseline systems, such as the Fast/Faster R-CNN (12, 36) and Fully Convolutional Network ...
According to the paper, was BERT improved by several other papers?
Yes, several other papers improved BERT [19]; for example, Baevski et al. [42].
[ 19, 42 ]
[ { "id": "1907.11692_all_0", "text": " Self-training methods such as ELMo Peters et al. (2018), GPT Radford et al. (2018), BERT Devlin et al. (2019), XLM Lample and Conneau (2019), and XLNet Yang et al. (2019) have brought significant performance gains, but it can be challenging to determine which aspects of...
How did the authors speed up the training?
The authors used residual learning, extremely high learning rates, and adjustable gradient clipping [32].
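The adjustable gradient clipping can be sketched as follows (a minimal version of the idea described in VDSR: clip each gradient entry to [-theta/lr, theta/lr] so the effective step stays bounded even at very high learning rates; the values below are illustrative):

```python
import numpy as np

def clip_gradient(grad, theta, lr):
    """Adjustable gradient clipping: restrict each gradient entry to
    [-theta/lr, theta/lr], so the update lr * grad stays within
    [-theta, theta] regardless of how high the learning rate is."""
    bound = theta / lr
    return np.clip(grad, -bound, bound)

g = np.array([-5.0, 0.1, 5.0])
clipped = clip_gradient(g, theta=0.01, lr=0.1)  # bound = 0.1
```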
[ 32 ]
[ { "id": "1511.04587_all_0", "text": " We address the problem of generating a high-resolution (HR) image given a low-resolution (LR) image, commonly referred as single image super-resolution (SISR) , , . SISR is widely used in computer vision applications ranging from security and surveillance imaging to med...
What are the networks that were constructed from the best three searches?
The networks constructed from the best three searches are NASNet-A, NASNet-B and NASNet-C [20].
[ 20 ]
[ { "id": "1707.07012_all_0", "text": " Developing neural network image classification models often requires significant architecture engineering. Starting from the seminal work of  on using convolutional architectures (17, 34) for ImageNet  classification, successive advancements through architecture enginee...
Can collective communication primitives such as all_reduce or all_gather be implemented using MXNet?
Since MXNet provides distributed key-value store mechanism and user-defined updater logics, it is likely to be able to implement collective communication primitives using MXNet [13].
[ 13 ]
[ { "id": "1512.01274_all_0", "text": " The scale and complexity of machine learning (ML) algorithms are becoming increasingly large. Almost all recent ImageNet challenge  winners employ neural networks with very deep layers, requiring billions of floating-point operations to process one single sample. The ri...
Why did the authors choose the four particular regularizations instead of others?
The authors mainly introduce four different, newly used regularizations that help researchers visualize responses from different layers [24]. These regularizations are designed to overcome different pathologies commonly encountered by unregularized gradient descent: (1) L2 decay, to penalize large pixel values that do not naturally occur; (2) Gaussian blur, applied at each optimization step to penalize the high-frequency information that gradient ascent tends to introduce; (3) clipping pixels with small norm; and (4) clipping pixels with small contribution [26].
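A minimal numpy sketch of the first three operators applied to a synthesized image x (the 3x3 box blur stands in for a proper Gaussian, the decay and percentile values are illustrative, and the fourth regularizer is omitted because it also needs the gradient):

```python
import numpy as np

def l2_decay(x, decay=0.01):
    # Shrink all pixel values toward zero each step.
    return (1.0 - decay) * x

def box_blur(x):
    # Cheap 3x3 box blur (a stand-in for the paper's Gaussian blur):
    # average each pixel with its 8 neighbors, edges padded.
    p = np.pad(x, 1, mode='edge')
    return sum(p[i:i + x.shape[0], j:j + x.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

def clip_small_norm(x, percentile=30):
    # Zero out pixels whose absolute value is below a percentile threshold.
    thresh = np.percentile(np.abs(x), percentile)
    return np.where(np.abs(x) < thresh, 0.0, x)

x = np.random.RandomState(0).randn(8, 8)
x = clip_small_norm(box_blur(l2_decay(x)))
```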
[ 24, 26 ]
[ { "id": "1506.06579_all_0", "text": " The last several years have produced tremendous progress in training powerful, deep neural network models that are approaching and even surpassing human abilities on a variety of challenging machine learning tasks (Taigman et al., 2014; Schroff et al., 2015; Hannun et a...
Give examples of learning methods that are used in mapping from LR to HR?
Neighbor embedding, sparse coding, random forests and CNN have been used to map from LR to HR [2].
[ 2 ]
[ { "id": "1511.04587_all_0", "text": " We address the problem of generating a high-resolution (HR) image given a low-resolution (LR) image, commonly referred as single image super-resolution (SISR) , , . SISR is widely used in computer vision applications ranging from security and surveillance imaging to med...
What did the author mean by “Hard Negative Mining”?
When the number of available default boxes is high, the majority of the default boxes after the matching step are negatives. Instead of using all of them, the authors sort the negative examples by confidence loss and keep only the highest-loss ones, so that the ratio between negatives and positives is at most 3:1; this selection of the hardest negatives is what "hard negative mining" refers to [14].
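The selection step can be sketched as follows (a simplified version of SSD-style hard negative mining; the example losses are made up):

```python
import numpy as np

def hard_negative_mining(conf_loss, is_positive, neg_pos_ratio=3):
    """Boolean mask of boxes kept for the classification loss: all positives,
    plus the highest-loss negatives, up to neg_pos_ratio * #positives."""
    conf_loss = np.asarray(conf_loss, dtype=float)
    is_positive = np.asarray(is_positive, dtype=bool)
    num_neg = neg_pos_ratio * int(is_positive.sum())
    neg_loss = np.where(is_positive, -np.inf, conf_loss)  # exclude positives
    keep = np.zeros_like(is_positive)
    top_neg = np.argsort(neg_loss)[::-1][:num_neg]        # hardest negatives
    keep[top_neg] = True
    keep &= ~is_positive
    return keep | is_positive

mask = hard_negative_mining([0.1, 0.9, 0.5, 0.2, 0.8],
                            [True, False, False, False, False])
```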
[ 14 ]
[ { "id": "1512.02325_all_0", "text": " Current state-of-the-art object detection systems are variants of the following approach: hypothesize bounding boxes, resample pixels or features for each box, and apply a high-quality classifier. This pipeline has prevailed on detection benchmarks since the Selective S...
How did the knowledge masking strategies of the proposed model differ from the basic masking strategy?
Unlike the basic strategy, which masks individual (sub)word units at random, ERNIE's knowledge masking strategies mask whole phrases and named entities, so the model implicitly learns knowledge about them [2].
[ 2 ]
[ { "id": "1904.09223_all_0", "text": " Language representation pre-training Mikolov et al. (2013); Devlin et al. (2018) has been shown effective for improving many natural language processing tasks such as named entity recognition, sentiment analysis, and question answering. In order to get reliable word rep...
Why is it helpful to mask out less relevant tokens if these are less likely to be sampled anyways?
The authors hypothesize that their dynamic masking function helps because it adds a new constraint that the RL algorithm has to abide by [12]. Additionally, since the masking function is dynamic and updated often (every mu steps), it likely ensures that the selected top-p tokens are more relevant to the current state the RL algorithm needs to analyse and decide on [22].
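A top-p selection step like the one described can be sketched as follows (an illustrative nucleus-style mask, not the authors' exact implementation):

```python
import numpy as np

def top_p_mask(probs, p=0.75):
    """Boolean mask keeping the smallest set of tokens whose cumulative
    probability reaches p; everything else would be masked out."""
    order = np.argsort(probs)[::-1]            # most probable first
    cum = np.cumsum(probs[order])
    cutoff = int(np.searchsorted(cum, p)) + 1  # tokens needed to reach p
    mask = np.zeros(len(probs), dtype=bool)
    mask[order[:cutoff]] = True
    return mask

mask = top_p_mask(np.array([0.5, 0.3, 0.1, 0.1]), p=0.75)
```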
[ 12, 22 ]
[ { "id": "2210.01241_all_0", "text": " The ultimate aim of language technology is to interact with humans. However, most language models are trained without direct signals of human preference, with supervised target strings serving as (a sometimes crude) proxy. One option to incorporate user feedback is via ...
What are the other loss functions experimented with by the authors?
The main loss function used by the authors is the focal loss [12]. Besides this, the other loss functions experimented with are: (1) the hinge loss, (2) the dynamically scaled cross-entropy loss, (3) the \alpha-balanced CE loss, (4) the \alpha-balanced variant of the focal loss, (5) the Huber loss, and (6) the CE loss [14].
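The focal loss itself fits in a few lines, following FL(p_t) = -alpha_t (1 - p_t)^gamma log(p_t); with gamma = 0 and alpha = 1 it reduces to the standard CE loss:

```python
import numpy as np

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Binary focal loss. p: predicted probability of the positive class,
    y: label in {0, 1}. The (1 - p_t)^gamma factor down-weights easy examples."""
    p_t = np.where(y == 1, p, 1.0 - p)
    alpha_t = np.where(y == 1, alpha, 1.0 - alpha)
    return -alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)

# A well-classified example (p_t = 0.9) is heavily down-weighted
# relative to plain cross-entropy (alpha = 1, gamma = 0).
easy = focal_loss(np.array([0.9]), np.array([1]))
ce = focal_loss(np.array([0.9]), np.array([1]), alpha=1.0, gamma=0.0)
```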
[ 12, 14 ]
[ { "id": "1708.02002_all_0", "text": " Current state-of-the-art object detectors are based on a two-stage, proposal-driven mechanism. As popularized in the R-CNN framework , the first stage generates a sparse set of candidate object locations and the second stage classifies each candidate location as one of ...
How does the author convert the triplet in KG into synthetic QA specifically?
Given a triple (e^{head},r,e^{tail}) in a KG, where e^{head}, e^{tail} and r denote the head/tail entities and the relation respectively, the authors transform e^{head} and r into a natural language question Q_{i} using templates [10].
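A minimal sketch of this transformation (the templates and the example triple below are made up for illustration; the paper's actual templates differ):

```python
# Hypothetical relation-to-question templates, keyed by relation name.
TEMPLATES = {
    "capital_of": "What is the capital of {head}?",
    "born_in": "Where was {head} born?",
}

def triple_to_qa(head, relation, tail):
    """Turn a KG triple (head, relation, tail) into a (question, answer) pair:
    the head entity and relation fill a template, the tail entity is the answer."""
    question = TEMPLATES[relation].format(head=head)
    return question, tail

q, a = triple_to_qa("France", "capital_of", "Paris")
# q == "What is the capital of France?", a == "Paris"
```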
[ 10 ]
[ { "id": "2206.03715_all_0", "text": " The ability to understand natural language through commonsense reasoning is one of the core focuses in the field of natural language processing. To measure and study the different aspects of commonsense reasoning, several datasets are developed, such as SocialIQA (Sap e...
Why 1x1 convolution is used after the original convolutions in Inception-Resnet architectures?
The 1x1 convolution is used after the original convolutions in Inception-ResNet architectures to scale up the dimensionality of the filter bank before the addition, to match the depth of the input [10].
[ 10 ]
[ { "id": "1602.07261_all_0", "text": " Since the 2012 ImageNet competition  winning entry by Krizhevsky et al , their network “AlexNet” has been successfully applied to a larger variety of computer vision tasks, for example to object-detection , segmentation , human pose estimation , video classification , o...
What existing baselines are there? Thought this was the first work.
To benchmark their single-word embedding approach, the authors create a bunch of reference baselines to gauge the relative improvement their method offers [35]. One reference baseline they create merely spews out images from the train set itself, while ignoring the new prompt [58]. The second reference baseline that they create is a model which uses the text prompt only, while ignoring the personalization aspect of their task [61]. In addition, they also compare the ability of their model to generate variations of an existing image to two existing approaches: namely, DALLE-2 and LDM [62].
[ 35, 58, 61, 62 ]
[ { "id": "2208.01618_all_0", "text": " In a famous scene from the motion picture “Titanic”, Rose makes a request of Jack: “…draw me like one of your French girls”. Albeit simple, this request contains a wealth of information. It indicates that Jack should produce a drawing; It suggests that its style and com...
What input size images were tested for SSD experiments?
SSD experiments were tested for images with input size 300x300 [20].
[ 20 ]
[ { "id": "1512.02325_all_0", "text": " Current state-of-the-art object detection systems are variants of the following approach: hypothesize bounding boxes, resample pixels or features for each box, and apply a high-quality classifier. This pipeline has prevailed on detection benchmarks since the Selective S...
Why do we use a class descriptor while fine tuning the model?
Using a class descriptor lets the model draw on its visual prior for that specific class, so it can generate new poses and articulations of the subject in different contexts [17].
[ 17 ]
[ { "id": "2208.12242_all_0", "text": " Can you imagine your own dog traveling around the world, or your favorite bag displayed in the most exclusive showroom in Paris? What about your parrot being the main character of an illustrated storybook? Rendering such imaginary scenes is a challenging task that requi...
What are pros and cons of these models illustrated in Figure 2, and what are distinctions of the proposed model?
As illustrated in Figure 2, these increasingly expressive architectures are in tension [5]. While interaction-based models (i.e., Figure 2 (b) and (c)) tend to be superior for IR tasks (Guo et al., 2019; Mitra et al., 2018), a representation-focused model, by isolating the computations among q and d, makes it possible to pre-compute document representations offline (Zamani et al., 2018), greatly reducing the computational load per query [7].
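The benefit of isolating the two sides can be sketched numerically with a ColBERT-style late-interaction (MaxSim) scorer: the document token embeddings D can be pre-computed offline, so per-query work is a single cheap matrix product (shapes below are illustrative):

```python
import numpy as np

def maxsim_score(Q, D):
    """Late interaction: for each query token embedding, take the maximum
    similarity over all document token embeddings, then sum over query tokens.
    Q: (num_q_tokens, dim); D: (num_d_tokens, dim), pre-computable offline."""
    sims = Q @ D.T                 # (num_q_tokens, num_d_tokens)
    return sims.max(axis=1).sum()

rng = np.random.RandomState(0)
Q = rng.randn(4, 8)    # query token embeddings
D = rng.randn(20, 8)   # document token embeddings (indexed offline)
score = maxsim_score(Q, D)
```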
[ 5, 7 ]
[ { "id": "2004.12832_all_0", "text": " Over the past few years, the Information Retrieval (IR) community has witnessed the introduction of a host of neural ranking models, including DRMM (Guo et al., 2016), KNRM (Xiong et al., 2017; Dai et al., 2018), and Duet (Mitra et al., 2017; Mitra and Craswell, 2019). ...
All the relevant learning-based approaches fall into one or both of the following two categories: (i) learning for an auxiliary task , and (ii) learning on top of shallow hand-engineered descriptors that cannot be fine-tuned for the target task. How does the authors' approach differs from these two categories?
Unlike both categories, the authors' approach learns the representation end-to-end, directly for the target place recognition task [8].
[ 8 ]
[ { "id": "1511.07247_all_0", "text": " Visual place recognition has received a significant amount of attention in the past years both in computer vision (66, 35, 10, 64, 65, 24, 9, 81, 4, 80, 63) and robotics communities (15, 16, 46, 44, 75) motivated by, e.g., applications in autonomous driving , augmented ...
How can BLINK achieve zero-shot linking?
The BLINK model uses a two-stage approach for entity linking based on fine-tuned BERT architectures: it first encodes the mention context and entity text with a bi-encoder for candidate retrieval, and then uses a cross-encoder to score and rank the candidates [0]. These pre-trained architectures are simple yet scalable and effective for entity linking tasks without the help of task-specific heuristics or external knowledge [1]. The authors showed that BLINK can achieve state-of-the-art performance for large-scale entity linking in a zero-shot setup [48] (WikilinksNED Unseen-Mentions and TACKBP-201) [5].
[ 0, 1, 48, 5 ]
[ { "id": "1911.03814_all_0", "text": " Scale is a key challenge for entity linking; there are millions of possible entities to consider for each mention. To efficiently filter or rank the candidates, existing methods use different sources of external information, including manually curated mention tables Gan...
How short is it? What if the video that I want to generate is longer than its limitation? It would not be very pragmatic if it has too many restrictions in its length.
Leveraging frame rate conditioning, the authors enable an additional augmentation method to tackle the limited volume of available videos at training time, and it provides additional control over the generated video at inference time via a varying number of frames per second [1].
[ 1 ]
[ { "id": "2209.14792_all_0", "text": " The Internet has fueled collecting billions of (alt-text, image) pairs from HTML pages (Schuhmann et al., 2022), enabling the recent breakthroughs in Text-to-Image (T2I) modeling. However, replicating this success for videos is limited since a similarly sized (text, vid...
Wouldn't training on sub-volumes of a 3D object that isn't much representative of the global object affect the learning of the model negatively ?
The purpose of the auxiliary tasks is twofold: 1 [22].
[ 22 ]
[ { "id": "1604.03265_all_0", "text": " Understanding 3D environments is a vital element of modern computer vision research due to paramount relevance in many vision systems, spanning a wide field of application scenarios from self-driving cars to autonomous robots. Recent advancements in real-time SLAM techn...
How is deep learning used in the process of feature extraction in face recognition systems?
The deep learning model does feature extraction by processing the image through many layers and giving an encoding of the face that can be used to solve different FR tasks [18]. The early layers of a deep learning model tend to represent simple textures that continuously evolve into facial structures in the later layers [2].
[ 18, 2 ]
[ { "id": "1804.06655_all_0", "text": " Face recognition (FR) has been the prominent biometric technique for identity authentication and has been widely used in many areas, such as military, finance, public security and daily life. FR has been a long-standing research topic in the CVPR community. In the early...
Just out of curiosity, how are humans so good at this? Can humans' technique be used for machines as well, or do we need a totally different approach?
One-Shot Tuning acquires temporal knowledge from one training video, enabled by Sparse-Causal Attention (SCAttn) and temporal self-attention (Temp-Attn) [0]. It captures spatial information and yields semantics similar to the training video to perform semantic mixing [1].
[ 0, 1 ]
[ { "id": "2212.11565_all_0", "text": " The large-scale multimodal dataset , consisting of billions of text-image pairs crawled from the Internet, has enabled a breakthrough in Text-to-Image (T2I) generation (30, 35, 6, 42, 40). To replicate this success in Text-to-Video (T2V) generation, recent works (42, 15...
Do conv Nets succeed in sequence modelling in general?
Conv nets can compute the hidden states of sequence data in parallel for all input and output positions [37]. However, conv nets are still more expensive than recurrent networks [4].
[ 37, 4 ]
[ { "id": "1706.03762_all_0", "text": " Recurrent neural networks, long short-term memory and gated recurrent neural networks in particular, have been firmly established as state of the art approaches in sequence modeling and transduction problems such as language modeling and machine translation (35, 2, 5)...
What are the main loss functions that have been explored for improving deep FR methods and how have they evolved over time?
There are 3 categories of loss functions for FR: Euclidean-distance-based loss, angular/cosine-margin-based loss, and softmax loss variations [20]. Initially, cross-entropy softmax loss was used, then some models tried using Euclidean-distance-based loss functions which started from contrastive loss and triplet loss [4]. However, due to their instability, the center loss and its variants (range loss, center-invariant loss) were introduced [82]. With a better understanding of loss functions for FR angular/cosine-margin-based loss functions were used [21]. It began with a reformulation of a softmax loss called L-Softmax, later A-Softmax appeared which adopted the L-Softmax idea but tried normalizing the weights [22]. Afterward, there were several improvements such as ArcFace, CosFace, and AMS which facilitated the convergence, while Fairloss and AdaptiveFace dealt with unbalanced data [23]. Lastly, there are different variations of softmax that try to normalize the L2-norms (L2-softmax, Ring loss), the weights, the features, or both weights and features (CoCo loss and vMF mixture loss) [24].
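The angular/cosine-margin family can be illustrated with an ArcFace-style modification of the softmax logits (a sketch; s and m are made-up hyperparameters, and feature/weight normalization is assumed to have already produced the cosine similarities):

```python
import numpy as np

def arcface_logits(cos_theta, labels, s=64.0, m=0.5):
    """cos_theta: (batch, num_classes) cosines between normalized features
    and normalized class weights. For each sample's true class, replace
    cos(theta) by cos(theta + m), then scale all logits by s. The additive
    angular margin m makes the true class harder to satisfy, enlarging
    inter-class angular gaps."""
    theta = np.arccos(np.clip(cos_theta, -1.0 + 1e-7, 1.0 - 1e-7))
    out = cos_theta.copy()
    rows = np.arange(len(labels))
    out[rows, labels] = np.cos(theta[rows, labels] + m)
    return s * out

logits = arcface_logits(np.array([[0.8, 0.3]]), np.array([0]))
```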
[ 20, 4, 82, 21, 22, 23, 24 ]
[ { "id": "1804.06655_all_0", "text": " Face recognition (FR) has been the prominent biometric technique for identity authentication and has been widely used in many areas, such as military, finance, public security and daily life. FR has been a long-standing research topic in the CVPR community. In the early...
What is the main motivation of this work?
This work has been motivated from the fact that real-world QA systems require simultaneously considering different types of reasoning abilities [1].
[ 1 ]
[ { "id": "2206.03715_all_0", "text": " The ability to understand natural language through commonsense reasoning is one of the core focuses in the field of natural language processing. To measure and study the different aspects of commonsense reasoning, several datasets are developed, such as SocialIQA (Sap e...
Is Google Street View Time Machine used for the first time to create a dataset by the authors, or has it been previously used in other research?
As the authors note, Google Street View Time Machine was a novel source (at the time) for learning an image representation for place recognition [26].
[ 26 ]
[ { "id": "1511.07247_all_0", "text": " Visual place recognition has received a significant amount of attention in the past years both in computer vision (66, 35, 10, 64, 65, 24, 9, 81, 4, 80, 63) and robotics communities (15, 16, 46, 44, 75) motivated by, e.g., applications in autonomous driving , augmented ...
Is the segmented training data 2D or 3D?
V-Net is trained on the 3D MRI prostate volumes [1].
[ 1 ]
[ { "id": "1606.04797_all_0", "text": " Recent research in computer vision and pattern recognition has highlighted the capabilities of Convolutional Neural Networks (CNNs) to solve challenging tasks such as classification, segmentation and object detection, achieving state-of-the-art performances. This succes...
Targeting memory-efficient indexing, can we also prune out redundant tokens in documents while preserving a sufficient level of fine granularity?
Targeting memory-efficient indexing, the paper does not prune tokens from documents; all document token embeddings are kept in the index [24].
[ 24 ]
[ { "id": "2004.12832_all_0", "text": " Over the past few years, the Information Retrieval (IR) community has witnessed the introduction of a host of neural ranking models, including DRMM (Guo et al., 2016), KNRM (Xiong et al., 2017; Dai et al., 2018), and Duet (Mitra et al., 2017; Mitra and Craswell, 2019). ...
What does it mean to perform better in Text-to-Video generation? Does it mean that generated videos are aligned well with the text description?
Higher performance in Text-to-Video generation requires not only excellent fidelity of the video samples but also good handling of social bias in the text description given as a condition [25].
[ 25 ]
[ { "id": "2204.03458_all_0", "text": " Diffusion models have recently been producing high quality results in image generation and audio generation (e.g. 28, 39, 40, 16, 23, 36, 48, 60, 42, 10, 29), and there is significant interest in validating diffusion models in new data modalities. In this work, we prese...
How "superkernel" is different from supernet?
A superkernel is a component for searching expansion ratio and kernel sizes [36]. Supernet defines the largest network we can search [60].
[ 36, 60 ]
[ { "id": "2009.02009_all_0", "text": " As there are growing needs of deep learning applications based on convolutional neural network(CNN) in embedded systems, improving the accuracy of CNNs under a given set of constraints on latency and energy consumption has brought keen interest to researchers as a chall...
Define local contrast normalization (LCN)?
Local Contrast Normalization (LCN) is a pre-processing step that normalizes the input to correct for non-uniform scene illumination, highlights edges, and decorrelates the input dimensions [8].
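A simplified numpy sketch of LCN (box windows stand in for the Gaussian windows typically used; the window size and epsilon are illustrative): subtract the local mean (subtractive normalization), then divide by the local standard deviation (divisive normalization):

```python
import numpy as np

def box_mean(img, k):
    """Mean over a (2k+1)x(2k+1) window around each pixel, via a summed-area
    table; edges are padded with their nearest values."""
    n = 2 * k + 1
    p = np.pad(img, k, mode='edge').cumsum(0).cumsum(1)
    p = np.pad(p, ((1, 0), (1, 0)))
    H, W = img.shape
    s = p[n:n + H, n:n + W] - p[:H, n:n + W] - p[n:n + H, :W] + p[:H, :W]
    return s / (n * n)

def lcn(img, k=2, eps=1e-8):
    """Subtract the local mean, then divide by the local standard deviation."""
    v = img - box_mean(img, k)
    sigma = np.sqrt(np.maximum(box_mean(v * v, k), 0.0))
    return v / np.maximum(sigma, eps)

out = lcn(np.random.RandomState(0).rand(16, 16))
```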
[ 8 ]
[ { "id": "1505.07293_all_0", "text": " Semantic segmentation is an important step towards understanding and inferring different objects and their arrangements observed in a scene. This has wide array of applications ranging from estimating scene geometry, inferring support-relationships among objects to auto...
Is taking a closer look at the current state of the field of categorical object recognition the only goal behind this challenge?
This paper has three primary goals: (1) to address the difficulty of producing this large-scale object identification benchmark dataset, (2) to highlight the improvements in object categorization and detection that have emerged from this work, and (3) to take a deeper look at the present status of the field of categorical object identification [7].
[ 7 ]
[ { "id": "1409.0575_all_0", "text": " The ImageNet Large Scale Visual Recognition Challenge (ILSVRC) has been running annually for five years (since 2010) and has become the standard benchmark for large-scale object recognition.111In this paper, we will be using the term object recognition broadly to encompa...
How can the CADe problem be improved?
It can be improved by either training CNNs from scratch, fine-tuning pre-trained CNNs, or using hand-crafted features [60].
[ 60 ]
[ { "id": "1602.03409_all_0", "text": " Tremendous progress has been made in image recognition, primarily due to the availability of large-scale annotated datasets (i.e. ImageNet (1, 2)) and the recent revival of deep convolutional neural networks (CNN) (3, 4). For data-driven learning, large-scale well-annot...