input (string, length 14–315) | answer (string, length 9–2.16k) | gold_ctxs (list, length 1–15) | ctxs (list, length 11–186) |
|---|---|---|---|
The authors claim that performance increases with the number of attention modules; is that true, knowing that they tried only m = {1, 2, 3, 4}? | It seems true, as they also tried m = 5 and 6 and performance still improved, as seen in Table 6 [6]. | [
6
] | [
{
"id": "1704.06904_all_0",
"text": " Not only a friendly face but also red color will draw our attention. The mixed nature of attention has been studied extensively in the previous literatures (34, 16, 23, 40). Attention not only serves to select a focused location but also enhances different representatio... |
What are examples of suitable prompts for inversion? | Examples of inversion with prompts can be found in Figure 12, where they used mask-based editing to limit inversion distortion [27]. | [
27
] | [
{
"id": "2208.01626_all_0",
"text": " Recently, large-scale language-image (LLI) models, such as Imagen , DALL·E 2 and Parti , have shown phenomenal generative semantic and compositional power, and gained unprecedented attention from the research community and the public eye. These LLI models are trained o... |
What are the problems associated with class imbalance for single-stage detectors? | The problems associated with class imbalance for single-stage detectors are: 1) training is inefficient, as most locations are easy negatives that contribute no useful learning signal [11]. | [
11
] | [
{
"id": "1708.02002_all_0",
"text": " Current state-of-the-art object detectors are based on a two-stage, proposal-driven mechanism. As popularized in the R-CNN framework , the first stage generates a sparse set of candidate object locations and the second stage classifies each candidate location as one of ... |
What are two kinds of pretrained language models? | They are monolingual and multilingual [0]. | [
0
] | [
{
"id": "2204.08110_all_0",
"text": " Pretrained language models have become an integral part of NLP systems. They come in two flavors: monolingual, where the model is trained on text from a single language, and multilingual, where the model is jointly trained on data from many different languages. Monoling... |
Why is meta-learning better than transfer learning? | While transfer learning requires learned parameters, meta-learning does not [35]. | [
35
] | [
{
"id": "1703.03400_all_0",
"text": " Learning quickly is a hallmark of human intelligence, whether it involves recognizing objects from a few examples or quickly learning new skills after just minutes of experience. Our artificial agents should be able to do the same, learning and adapting quickly from onl... |
Is CodeBERT trained on natural language? | No, CodeBERT is trained on code from six programming languages [31]. | [
31
] | [
{
"id": "2210.12302_all_0",
"text": " Pretrained Language Models (LMs) have shown singular succcess on a range of natural language understandings tasks, to the extent that they have become foundational for contemporary NLP systems. Several works have investigated why pretraining works so well Warstadt et al... |
How different would a BC version of chain of thought be than the Lambada model? | Lambada is an algorithm for text-based deductive logical reasoning that combines the ability of LMs to handle realistic text input with the backward chaining (BC) technique for high-level reasoning [58]. | [
58
] | [
{
"id": "2212.13894_all_0",
"text": " Automated reasoning, the ability to draw valid conclusions from explicitly provided knowledge, has been a fundamental goal for AI since its early days McCarthy (1959); Hewitt (1969). Furthermore, logical reasoning, especially reasoning with unstructured, natural text is... |
How does TransE learn entity and relation embeddings in an unsupervised way? | TransE is an unsupervised learning method that learns latent representations for a knowledge triplet [21]. | [
21
] | [
{
"id": "2204.11673_all_0",
"text": " Passage Re-ranking is a crucial stage in modern information retrieval systems, which aims to reorder a small set of candidate passages to be presented to users. To put the most relevant passages on top of a ranking list, a re-ranker is usually designed with powerful cap... |
How has the quality and diversity of generated 3D face images improved over time, and what advances have contributed to these improvements? | The paper only discusses the line of work on 3D face reconstruction, in other words, the methods and approaches for reconstructing 3D face images [42]. | [
42
] | [
{
"id": "1804.06655_all_0",
"text": " Face recognition (FR) has been the prominent biometric technique for identity authentication and has been widely used in many areas, such as military, finance, public security and daily life. FR has been a long-standing research topic in the CVPR community. In the early... |
What are the computer-aided detection problems studied in this paper? | The paper studies thoraco-abdominal lymph node detection and interstitial lung disease classification [5]. | [
5
] | [
{
"id": "1602.03409_all_0",
"text": " Tremendous progress has been made in image recognition, primarily due to the availability of large-scale annotated datasets (i.e. ImageNet (1, 2)) and the recent revival of deep convolutional neural networks (CNN) (3, 4). For data-driven learning, large-scale well-annot... |
What is instance segmentation? | Instance segmentation is a computer vision task that aims to solve the problem of how to represent all objects in an image [1]. | [
1
] | [
{
"id": "1703.06870_all_0",
"text": " The vision community has rapidly improved object detection and semantic segmentation results over a short period of time. In large part, these advances have been driven by powerful baseline systems, such as the Fast/Faster R-CNN (12, 36) and Fully Convolutional Network ... |
What is distillation and why is it used? | Distillation is a knowledge transfer technique for deep networks, used for compute-efficient model design [47]. | [
47
] | [
{
"id": "1704.04861_all_0",
"text": " Convolutional neural networks have become ubiquitous in computer vision ever since AlexNet popularized deep convolutional neural networks by winning the ImageNet Challenge: ILSVRC 2012 . The general trend has been to make deeper and more complicated networks in order t... |
Is data augmentation always sufficient to support performance in the segmentation task? | Performance on the microscopy image segmentation task can be improved by using elastic deformation-based augmentation [15]. | [
15
] | [
{
"id": "1505.04597_all_0",
"text": " In the last two years, deep convolutional networks have outperformed the state of the art in many visual recognition tasks, e.g. (7, 3). While convolutional networks have already existed for a long time , their success was limited due to the size of the available traini... |
What are the hyper-parameters used to design the neural architecture search network? | The number of cell repeats and the number of filters in the initial convolutional cell are the hyper-parameters used to design the Neural Architecture Search network [21]. | [
21
] | [
{
"id": "1707.07012_all_0",
"text": " Developing neural network image classification models often requires significant architecture engineering. Starting from the seminal work of on using convolutional architectures (17, 34) for ImageNet classification, successive advancements through architecture enginee... |
The authors proposed approach only works for classification models, and not for models that have other types of outputs. True or False? | In this work, the approach assumes that there are classes that the models should be able to predict [23]. The work focuses on classification models [27]. Thus, whether the approach can work on models with other types of outputs cannot be answered from this paper [28]. | [
23,
27,
28
] | [
{
"id": "1503.02531_all_0",
"text": " Many insects have a larval form that is optimized for extracting energy and nutrients from the environment and a completely different adult form that is optimized for the very different requirements of traveling and reproduction. In large-scale machine learning, we typi... |
What was used as the backbone network for RetinaNet? | For RetinaNet, Feature Pyramid Network (FPN) was used as a backbone [25]. | [
25
] | [
{
"id": "1708.02002_all_0",
"text": " Current state-of-the-art object detectors are based on a two-stage, proposal-driven mechanism. As popularized in the R-CNN framework , the first stage generates a sparse set of candidate object locations and the second stage classifies each candidate location as one of ... |
What is the advantage of stacking encoders and decoders for semantic segmentation? | A stacked encoder-decoder architecture produces smooth segmentation labels [0]. | [
0
] | [
{
"id": "1505.07293_all_0",
"text": " Semantic segmentation is an important step towards understanding and inferring different objects and their arrangements observed in a scene. This has wide array of applications ranging from estimating scene geometry, inferring support-relationships among objects to auto... |
Why is it adequate to say this problem is strictly convex? | We prove that the Hessian is a positive definite matrix [22]. | [
22
] | [
{
"id": "2211.02284_all_0",
"text": " There has been a growing interest in using a large-scale dataset to build powerful machine learning models . Self-supervised learning (SSL), which aims to learn a useful representation without labels, is suitable for this trend; it is actively studied in the fields of n... |
I believe T2I models can do this using latent exploration. What is the difference between them? What is novel about T2V’s interpolation? | A frame interpolation network for high frame rate generation can make a semantically similar video by taking the average CLIP embedding of all frames from a video as the condition [10]. | [
10
] | [
{
"id": "2209.14792_all_0",
"text": " The Internet has fueled collecting billions of (alt-text, image) pairs from HTML pages (Schuhmann et al., 2022), enabling the recent breakthroughs in Text-to-Image (T2I) modeling. However, replicating this success for videos is limited since a similarly sized (text, vid... |
Why does the approach need a gating mechanism when a good retrieval should be able to correctly filter out irrelevant feedback from the memory? | A gating function is needed precisely because the retrieval function might not be able to filter out irrelevant feedback from memory [16]. This is challenging to implement, since syntactically or lexically similar things might or might not refer to similar concepts [36]. Another challenge is adversarial feedback, created by users intending to disrupt the system [62]. | [
16,
36,
62
] | [
{
"id": "2201.06009_all_0",
"text": " Language models are now better than ever before at generating realistic content, but still lack commonsense Bender and Koller (2020); Marcus (2021). One failure mode due to a lack of commonsense is in misunderstanding a user’s intent. The typical remedy of retraining wi... |
Which specific metrics are improved when increasing attention modules ? | The Top-1 and Top-5 error metrics are improved when increasing attention modules [27]. | [
27
] | [
{
"id": "1704.06904_all_0",
"text": " Not only a friendly face but also red color will draw our attention. The mixed nature of attention has been studied extensively in the previous literatures (34, 16, 23, 40). Attention not only serves to select a focused location but also enhances different representatio... |
Do deconvolution and unpooling serve the same goal in the network? | Yes, the purpose of the deconvolution layer is to increase the size, similar to the unpooling operation [10]. | [
10
] | [
{
"id": "1606.04797_all_0",
"text": " Recent research in computer vision and pattern recognition has highlighted the capabilities of Convolutional Neural Networks (CNNs) to solve challenging tasks such as classification, segmentation and object detection, achieving state-of-the-art performances. This succes... |
Why did the authors choose to do experiments on different basic units to prove the generalization of the residual attention network? | Proving generalization shows that the proposed method can be applied to multiple structures without a significant loss in performance [43]. | [
43
] | [
{
"id": "1704.06904_all_0",
"text": " Not only a friendly face but also red color will draw our attention. The mixed nature of attention has been studied extensively in the previous literatures (34, 16, 23, 40). Attention not only serves to select a focused location but also enhances different representatio... |
How is the "average attention sparsity" measured in the experiments? | The average attention sparsity is measured by the densities of masks sampled in SBM-Transformer averaged across all attention heads [32]. | [
32
] | [
{
"id": "2210.15541_all_0",
"text": " The Transformer architecture has been the go-to method for encoding sequential data, due to its superior performance in various tasks such as machine translation , image classification , and protein language modeling . Its key strength stems from the multi-head attenti... |
Is choosing NAL as a baseline a good choice, knowing that it always results in a performance drop? | As there was no other available comparison, NAL seems to be the only choice for the baseline [31]. | [
31
] | [
{
"id": "1704.06904_all_0",
"text": " Not only a friendly face but also red color will draw our attention. The mixed nature of attention has been studied extensively in the previous literatures (34, 16, 23, 40). Attention not only serves to select a focused location but also enhances different representatio... |
What are the two place recognition benchmarks used by the authors? | The Pittsburgh (Pitts250k) and Tokyo 24/7 benchmarks [39]. | [
39
] | [
{
"id": "1511.07247_all_0",
"text": " Visual place recognition has received a significant amount of attention in the past years both in computer vision (66, 35, 10, 64, 65, 24, 9, 81, 4, 80, 63) and robotics communities (15, 16, 46, 44, 75) motivated by, e.g., applications in autonomous driving , augmented ... |
Increasing the input size improved the detection of small objects. Is this true? | According to the above evidential sentence, the answer is true [19]. | [
19
] | [
{
"id": "1512.02325_all_0",
"text": " Current state-of-the-art object detection systems are variants of the following approach: hypothesize bounding boxes, resample pixels or features for each box, and apply a high-quality classifier. This pipeline has prevailed on detection benchmarks since the Selective S... |
What is a calibrated stereo twin? | A calibrated stereo twin is the supervision method used by Garg et al. [5]. | [
5
] | [
{
"id": "1704.07813_all_0",
"text": " Humans are remarkably capable of inferring ego-motion and the 3D structure of a scene even over short timescales. For instance, in navigating along a street, we can easily locate obstacles and react quickly to avoid them. Years of research in geometric computer vision h... |
Could the authors have used a BiLSTM instead of an LSTM to further improve the performance of their proposed model? | While the paper shows that the LSTM has performed well on some sequence tasks, there is no evidential information about BiLSTMs in this paper, so this question cannot be answered from it [13]. To answer the question, external knowledge about BiLSTMs is required to compare how one would work relative to the existing LSTM model [17]. | [
13,
17
] | [
{
"id": "1411.4555_all_0",
"text": " Being able to automatically describe the content of an image using properly formed English sentences is a very challenging task, but it could have great impact, for instance by helping visually impaired people better understand the content of images on the web. This task... |
Are the three stages conducted sequentially in the model? | No [18]. It is hard to see that the three stages are conducted sequentially [19]. | [
18,
19
] | [
{
"id": "1904.09223_all_0",
"text": " Language representation pre-training Mikolov et al. (2013); Devlin et al. (2018) has been shown effective for improving many natural language processing tasks such as named entity recognition, sentiment analysis, and question answering. In order to get reliable word rep... |
Is it true that large text-to-image models cannot mimic and create novel renditions of images in a reference set? | It is true that large text-to-image models cannot mimic and create novel renditions of images in a reference set [1]. | [
1
] | [
{
"id": "2208.12242_all_0",
"text": " Can you imagine your own dog traveling around the world, or your favorite bag displayed in the most exclusive showroom in Paris? What about your parrot being the main character of an illustrated storybook? Rendering such imaginary scenes is a challenging task that requi... |
How do the specific contributions of this work make the construction of deep generative models, like VAEs, for language modeling more practical? | The main thesis of this work is that large VAE models for language tasks can work effectively, and the authors provide initial evidence for this by implementing a large model which they named OPTIMUS [2]. The first major contribution the authors make is showing how the KL vanishing issue is addressed in the pretraining phase [4]. Next, the authors explain how conditioning vectors can be injected into GPT without the need for retraining, which brings down the cost and barrier to entry for developing such models [44]. Finally, the authors also discuss how to combine multiple pretrained language models (PLMs) such as BERT and GPT, which have very different input formats (i.e. [7]. | [
2,
4,
44,
7
] | [
{
"id": "2004.04092_all_0",
"text": " Pre-trained language models (PLMs) have substantially advanced the state-of-the-art across a variety of natural language processing (NLP) tasks Peters et al. (2018); Devlin et al. (2019); Yang et al. (2019); Radford et al. (2019); Liu et al. (2019); Keskar et al. (2019)... |
Difference between data preparation for the proposed model and the SRCNN? | The proposed model's input size is the same as the receptive field size, and images were divided with no overlap [38]. | [
38
] | [
{
"id": "1511.04587_all_0",
"text": " We address the problem of generating a high-resolution (HR) image given a low-resolution (LR) image, commonly referred as single image super-resolution (SISR) , , . SISR is widely used in computer vision applications ranging from security and surveillance imaging to med... |
What are the CNN architectures that were explored in this paper? | The paper uses AlexNet, CifarNet, and GoogLeNet with various numbers of parameters [19]. | [
19
] | [
{
"id": "1602.03409_all_0",
"text": " Tremendous progress has been made in image recognition, primarily due to the availability of large-scale annotated datasets (i.e. ImageNet (1, 2)) and the recent revival of deep convolutional neural networks (CNN) (3, 4). For data-driven learning, large-scale well-annot... |
How long has this challenge been running? | The challenge has been running for the past five years [175]. | [
175
] | [
{
"id": "1409.0575_all_0",
"text": " The ImageNet Large Scale Visual Recognition Challenge (ILSVRC) has been running annually for five years (since 2010) and has become the standard benchmark for large-scale object recognition.111In this paper, we will be using the term object recognition broadly to encompa... |
Is it true then that YOLOv2's classification network is first trained with 416 x 416 images, then fine-tuned with 448 x 448 images? | Yes, YOLOv2 uses a reduced resolution of 416\times416 [11], and during fine-tuning on ImageNet it uses a 448\times448 resolution for 10 epochs [31]. | [
11,
31
] | [
{
"id": "1612.08242_all_0",
"text": " General purpose object detection should be fast, accurate, and able to recognize a wide variety of objects. Since the introduction of neural networks, detection frameworks have become increasingly fast and accurate. However, most detection methods are still constrained ... |
How many different types of experiments are performed to test the proposed models? | 5 different types of experiments are performed to test the proposed models [30]. They are Generalization over time scales, Experiments on MNIST, Experiments on Natural Image Patches, Out-of-domain Inputs, and Visualizing Features [27]. | [
30,
27
] | [
{
"id": "1502.04681_all_0",
"text": " Understanding temporal sequences is important for solving many problems in the AI-set. Recently, recurrent neural networks using the Long Short Term Memory (LSTM) architecture (Hochreiter & Schmidhuber, 1997) have been used successfully to perform various supervised seq... |
In language models, which method would be better for preventing overfitting from batch normalization and dropout? | According to this work, without dropout, a vanilla LM can run the risk of overfitting, which decreases performance [44]. | [
44
] | [
{
"id": "1801.06146_all_0",
"text": " Inductive transfer learning has had a large impact on computer vision (CV). Applied CV models (including object detection, classification, and segmentation) are rarely trained from scratch, but instead are fine-tuned from models that have been pretrained on ImageNet, MS... |
How is the inversion of text-guided diffusion models different from the inversion of GAN? | Inversion of GANs requires finding the initial noise vector that produces the edit we want [33]. | [
33
] | [
{
"id": "2208.01626_all_0",
"text": " Recently, large-scale language-image (LLI) models, such as Imagen , DALL·E 2 and Parti , have shown phenomenal generative semantic and compositional power, and gained unprecedented attention from the research community and the public eye. These LLI models are trained o... |
Increasing the number of default box shapes increases model performance. How many default boxes are used in the SSD framework? | In the SSD framework, generally 6 default boxes per location are used [22]. | [
22
] | [
{
"id": "1512.02325_all_0",
"text": " Current state-of-the-art object detection systems are variants of the following approach: hypothesize bounding boxes, resample pixels or features for each box, and apply a high-quality classifier. This pipeline has prevailed on detection benchmarks since the Selective S... |
What is meant by "differential interference contrast"? | Differential interference contrast (DIC) is a microscopy technique which can be used to record HeLa cells [21]. | [
21
] | [
{
"id": "1505.04597_all_0",
"text": " In the last two years, deep convolutional networks have outperformed the state of the art in many visual recognition tasks, e.g. (7, 3). While convolutional networks have already existed for a long time , their success was limited due to the size of the available traini... |
What are the metrics they used for measuring efficiency and effectiveness? | They used MRR@10 for measuring efficiency and effectiveness [56]. | [
56
] | [
{
"id": "2004.12832_all_0",
"text": " Over the past few years, the Information Retrieval (IR) community has witnessed the introduction of a host of neural ranking models, including DRMM (Guo et al., 2016), KNRM (Xiong et al., 2017; Dai et al., 2018), and Duet (Mitra et al., 2017; Mitra and Craswell, 2019). ... |
Is it right that the reason the diffusion step can be applied to both z_{t-1} and z^*_t in parallel is that their one-timestep difference matches? | The reason is that in the diffusion process, a noisy image output z_{t-1} at a single timestep t can be computed as DM(z_t, P, t, s) [18]. | [
18
] | [
{
"id": "2208.01626_all_0",
"text": " Recently, large-scale language-image (LLI) models, such as Imagen , DALL·E 2 and Parti , have shown phenomenal generative semantic and compositional power, and gained unprecedented attention from the research community and the public eye. These LLI models are trained o... |
Which gives better performance: using more than one image in the batch, or larger input tiles with only one image in the batch? | According to the experiments in the paper, large input tiles are favored over a large batch size, which reduces the overhead and maximizes the use of GPU memory [10]. | [
10
] | [
{
"id": "1505.04597_all_0",
"text": " In the last two years, deep convolutional networks have outperformed the state of the art in many visual recognition tasks, e.g. (7, 3). While convolutional networks have already existed for a long time , their success was limited due to the size of the available traini... |
What is the issue with intractable posterior distribution? | The variational Bayesian (VB) approach involves the optimization of an approximation to the intractable posterior [4]. | [
4
] | [
{
"id": "1312.6114_all_0",
"text": " How can we perform efficient approximate inference and learning with directed probabilistic models whose continuous latent variables and/or parameters have intractable posterior distributions? The variational Bayesian (VB) approach involves the optimization of an approxi... |
How are objective control signals more advantageous than subjective control signals when controlling the caption generation process? | With subjective control signals, it is harder to control the generation process effectively and precisely [1]. | [
1
] | [
{
"id": "2103.12204_all_0",
"text": " Image captioning, \\ie, generating fluent and meaningful descriptions to summarize the salient contents of an image, is a classic proxy task for comprehensive scene understanding . With the release of several large scale datasets and advanced encoder-decoder frameworks,... |
What metrics should be used for comparison of Mask R-CNN to the state of the art on the COCO dataset? | Metrics used for comparison are AP, multi-scale train/test, horizontal flip test, and online hard example mining (OHEM) [34]. | [
34
] | [
{
"id": "1703.06870_all_0",
"text": " The vision community has rapidly improved object detection and semantic segmentation results over a short period of time. In large part, these advances have been driven by powerful baseline systems, such as the Fast/Faster R-CNN (12, 36) and Fully Convolutional Network ... |
How do the authors verify that the two characteristics mentioned in the sentence are indispensable for the ideal control signal? | The authors verify their work using the conventional evaluation metrics of prior CIC works [38]. As the quantitative results in Table 1 report, the authors' framework achieves the best performance on almost all metrics and benchmarks [40]. As for the visual evaluation, Figure 5 shows that the framework always learns a human-like semantic structure based on the VSR and grounded visual regions [41]. | [
38,
40,
41
] | [
{
"id": "2103.12204_all_0",
"text": " Image captioning, \\ie, generating fluent and meaningful descriptions to summarize the salient contents of an image, is a classic proxy task for comprehensive scene understanding . With the release of several large scale datasets and advanced encoder-decoder frameworks,... |
What is the resolution of the CUB image? | The CUB dataset contains 11,788 images of 200 bird species [22]. | [
22
] | [
{
"id": "1703.05175_all_0",
"text": " Few-shot classification (20, 16, 13) is a task in which a classifier must be adapted to accommodate new classes not seen in training, given only a few examples of each of these classes. A naive approach, such as re-training the model on the new data, would severely over... |
What is an example of a DSE approach? | Examples of DSE approaches include Bayesian optimization, simulated annealing, randomized search, and genetic algorithms; all aim to develop automated approaches to find NN architectures exhibiting higher accuracy [9]. | [
9
] | [
{
"id": "1602.07360_all_0",
"text": " Much of the recent research on deep convolutional neural networks (CNNs) has focused on increasing accuracy on computer vision datasets. For a given accuracy level, there typically exist multiple CNN architectures that achieve that accuracy level. Given equivalent accur... |
What are the limitations of GNNs? | The two most widely discussed limitations of GNNs are over-smoothing and over-squashing [1]. Over-smoothing is a phenomenon in which GNN representations become similar to one another as the number of layers increases [6]. | [
1,
6
] | [
{
"id": "2202.03036_all_0",
"text": " Graph neural networks (GNNs) have been established as powerful and flexible tools for graph representation learning, with successful applications in drug discovery (Gaudelet et al., 2021), protein design (Ingraham et al., 2019), social network analysis (Fan et al., 2019... |
Is it true, as the authors suggest, that a neural network's depth is essential to its success? | As mentioned in many paragraphs, network depth is essential for expressing more complex functions, which is also essential for success [0]. | [
0
] | [
{
"id": "1507.06228_all_0",
"text": " Many recent empirical breakthroughs in supervised machine learning have been achieved through large and deep neural networks. Network depth (the number of successive computational layers) has played perhaps the most important role in these successes. For instance, withi... |
What is the key difference in model structure between MobileNet-style models and ShuffleNet? | ShuffleNet introduces group convolutions and channel shuffling, which existing MobileNet-style models do not have [46]. | [
46
] | [
{
"id": "1801.04381_all_0",
"text": " Neural networks have revolutionized many areas of machine intelligence, enabling superhuman accuracy for challenging image recognition tasks. However, the drive to improve accuracy often comes at a cost: modern state of the art networks require high computational resour... |
What does it mean to be multimodal in the context of "multimodal inputs"? | It means that the system is able to handle multiple modalities of input data, such as audio and video, text and image data, and even RGB-D data; challenging tasks which require multiple modalities of information to perform well [114]. | [
114
] | [
{
"id": "1301.3592_all_0",
"text": " Robotic grasping is a challenging problem involving perception, planning, and control. Some recent works (54, 56, 28, 67) address the perception aspect of this problem by converting it into a detection problem in which, given a noisy, partial view of the object from a ca... |
What happens if the authors remove the linear supernet design and opt to use the conventional supernet design? | Using the conventional, constant-depth method would drop the accuracy [65]. | [
65
] | [
{
"id": "2009.02009_all_0",
"text": " As there are growing needs of deep learning applications based on convolutional neural network(CNN) in embedded systems, improving the accuracy of CNNs under a given set of constraints on latency and energy consumption has brought keen interest to researchers as a chall... |
For matching default boxes with ground-truth ones, what metric was used? | The best Jaccard overlap was used to match default boxes with ground-truth ones [9]. | [
9
] | [
{
"id": "1512.02325_all_0",
"text": " Current state-of-the-art object detection systems are variants of the following approach: hypothesize bounding boxes, resample pixels or features for each box, and apply a high-quality classifier. This pipeline has prevailed on detection benchmarks since the Selective S... |
Is the last 1*1 convolutional layer used because the task requires outputting a segmentation map? | Yes, the last 1*1 convolution layer helps to get the desired number of classes for the segmentation map [8]. | [
8
] | [
{
"id": "1505.04597_all_0",
"text": " In the last two years, deep convolutional networks have outperformed the state of the art in many visual recognition tasks, e.g. (7, 3). While convolutional networks have already existed for a long time , their success was limited due to the size of the available traini... |
How can the DeepFool algorithm be adapted to find minimal adversarial perturbations for any ℓp norm? | To adapt the algorithm to an arbitrary ℓp norm, only two lines of the algorithm (lines 10 and 11) need to be substituted with \hat{l}\leftarrow\operatorname*{arg\,min}_{k\neq\hat{k}(\bm{x}_{0})}\frac{|f^{\prime}_{k}|}{\|\bm{w}^{\prime}_{k}\|_{q}} and \bm{r}_{i}\leftarrow\frac{|f^{\prime}_{\hat{l}}|}{\|\bm{w}^{\prime}_{\hat{l}}\|_{q}^{q}}\,|\bm{w}^{\prime}_{\hat{l}}|^{q-1}\odot\operatorname{sign}(\bm{w}^{\prime}_{\hat{l}}), where q = p/(p-1) [13]. | [
13
] | [
{
"id": "1511.04599_all_0",
"text": " Deep neural networks are powerful learning models that achieve state-of-the-art pattern recognition performance in many research areas such as bioinformatics (1, 16), speech (12, 6), and computer vision (10, 8). Though deep networks have exhibited very good performance ... |
What’s the effect of the gradient of the lower bound w.r.t. φ on the naïve Monte Carlo estimator? | The gradient of the lower bound w.r.t. \boldsymbol{\phi} is a bit problematic [10][13]. The usual (naïve) Monte Carlo gradient estimator for this type of problem is impractical for our purposes [2], because that gradient estimator exhibits very high variance [21]. Optimization of this objective is equivalent to approximate MAP estimation, where the likelihood gradient is approximated by the gradient of the lower bound [27]. | [
10,
13,
2,
21,
27
] | [
{
"id": "1312.6114_all_0",
"text": " How can we perform efficient approximate inference and learning with directed probabilistic models whose continuous latent variables and/or parameters have intractable posterior distributions? The variational Bayesian (VB) approach involves the optimization of an approxi... |
Would it be better to use 1 prototype per class rather than multiple prototypes? | If the number of prototypes per class is fixed and greater than 1, then this would require a partitioning scheme to further cluster the support points within a class [11]. | [
11
] | [
{
"id": "1703.05175_all_0",
"text": " Few-shot classification (20, 16, 13) is a task in which a classifier must be adapted to accommodate new classes not seen in training, given only a few examples of each of these classes. A naive approach, such as re-training the model on the new data, would severely over... |
How does this paper define a prototype? | The paper learns a non-linear mapping of the input into an embedding space using a neural network and takes a class’s prototype to be the mean of its support set in the embedding space [2]. It learns the embedding of the meta-data into a shared space to serve as the prototype for each class [5]. | [
2,
5
] | [
{
"id": "1703.05175_all_0",
"text": " Few-shot classification (20, 16, 13) is a task in which a classifier must be adapted to accommodate new classes not seen in training, given only a few examples of each of these classes. A naive approach, such as re-training the model on the new data, would severely over... |
What metrics are used to compare the performance of ULMFiT against existing approaches? | For consistency, the authors reported all results as error rates where lower is better [40]. | [
40
] | [
{
"id": "1801.06146_all_0",
"text": " Inductive transfer learning has had a large impact on computer vision (CV). Applied CV models (including object detection, classification, and segmentation) are rarely trained from scratch, but instead are fine-tuned from models that have been pretrained on ImageNet, MS... |
What are the applications of DreamBooth? | Applications of text-based image generation include recontextualization and manipulation of subjects, original art renditions, novel view synthesis, and much more [5]. | [
5
] | [
{
"id": "2208.12242_all_0",
"text": " Can you imagine your own dog traveling around the world, or your favorite bag displayed in the most exclusive showroom in Paris? What about your parrot being the main character of an illustrated storybook? Rendering such imaginary scenes is a challenging task that requi... |
Can't we parallelize the RNN-layer approach in some way? | Because the hidden state at each input position depends on the previous hidden state, an RNN cannot be parallelized [1]. The Transformer, by contrast, is highly parallelizable thanks to its attention layers [36]. | [
1,
36
] | [
{
"id": "1706.03762_all_0",
"text": " Recurrent neural networks, long short-term memory and gated recurrent neural networks in particular, have been firmly established as state of the art approaches in sequence modeling and transduction problems such as language modeling and machine translation (35, 2, 5)... |
What can be the future work related to this paper? | In the future, this work can be extended to various additional KGs, with studies of the appropriate scale for KG modularization [36]. | [
36
] | [
{
"id": "2206.03715_all_0",
"text": " The ability to understand natural language through commonsense reasoning is one of the core focuses in the field of natural language processing. To measure and study the different aspects of commonsense reasoning, several datasets are developed, such as SocialIQA (Sap e... |
How did adding the segment branch affect the results? | Adding a segment branch improves the AP; additionally, the mask branch adds only a small computational overhead, enabling a fast system and rapid experimentation [2]. | [
2
] | [
{
"id": "1703.06870_all_0",
"text": " The vision community has rapidly improved object detection and semantic segmentation results over a short period of time. In large part, these advances have been driven by powerful baseline systems, such as the Fast/Faster R-CNN (12, 36) and Fully Convolutional Network ... |
What are some common methods used in facial recognition, and how do they compare in terms of effectiveness and challenges? | There are broadly 4 methods used in FR [0]. Holistic methods were the first attempt to solve the FR problem [1], but they were too primitive and could not account for uncontrolled facial changes that did not fit their assumptions [3]. Local feature-based methods try to extract invariant properties with local filtering [67]; although better than holistic methods, they also lack the complexity and capacity to address the vastness of facial appearances [85]. The first learning-based methods likewise lacked the robustness to address the non-linearity and complexity of FR [86]. | [
0,
1,
3,
67,
85,
86
] | [
{
"id": "1804.06655_all_0",
"text": " Face recognition (FR) has been the prominent biometric technique for identity authentication and has been widely used in many areas, such as military, finance, public security and daily life. FR has been a long-standing research topic in the CVPR community. In the early... |
What is an example of model compression approaches? | One example is applying SVD to a pretrained CNN model, retaining only the parameters or features corresponding to the largest singular values of the factorization [3]. | [
3
] | [
{
"id": "1602.07360_all_0",
"text": " Much of the recent research on deep convolutional neural networks (CNNs) has focused on increasing accuracy on computer vision datasets. For a given accuracy level, there typically exist multiple CNN architectures that achieve that accuracy level. Given equivalent accur... |
Is it crucial to use 6 layers in the encoder? If it is free to change, does increasing the number of layers require more data to avoid overfitting, or would it just take longer to converge? | For translation tasks, the results show that 6 layers is the optimal number of layers [10]. | [
10
] | [
{
"id": "1706.03762_all_0",
"text": " Recurrent neural networks, long short-term memory and gated recurrent neural networks in particular, have been firmly established as state of the art approaches in sequence modeling and transduction problems such as language modeling and machine translation (35, 2, 5)... |
How were the hyperparameters chosen for the baseline methods, and what were the chosen values for the experiments presented? | For baseline pooling methods, the authors perform grid search following previous work and present the best results [25]. | [
25
] | [
{
"id": "2209.02939_all_0",
"text": " Graph Neural Networks (GNNs) learn representations of individual nodes based on the connectivity structure of an input graph. For graph-level prediction tasks, the standard procedure globally pools all the node features into a single graph representation without weight ... |
What type of scenes were used for training? | The proposed model is trained on the Cityscapes dataset and then fine-tuned on KITTI scenes [24]. The training split used is from [7] [28]. | [
24,
28
] | [
{
"id": "1704.07813_all_0",
"text": " Humans are remarkably capable of inferring ego-motion and the 3D structure of a scene even over short timescales. For instance, in navigating along a street, we can easily locate obstacles and react quickly to avoid them. Years of research in geometric computer vision h... |
Why does negative transfer occur when learning with auxiliary tasks? | Negative transfer happens when the learning of an auxiliary task negatively impacts the performance of the primary task [1]. In the case of graph-based tasks, it can happen because the graph structure, such as the number of nodes, edges, and diameter, can be vastly different between domains [15]. | [
1,
15
] | [
{
"id": "2007.08294_all_0",
"text": " Graph neural networks have been proven effective to learn representations for various tasks such as node classification , link prediction , and graph classification . The powerful representation yields state-of-the-art performance in a variety of applications including... |
What type of parameter would be considered a 'good' initial parameter? | A good initial parameter is a parameter that gives good performance in many tasks even with a little fine-tuning of the parameter [2]. This means that the loss function defined in many tasks is sensitive, and this sensitive loss leads to good updates [23]. | [
2,
23
] | [
{
"id": "1703.03400_all_0",
"text": " Learning quickly is a hallmark of human intelligence, whether it involves recognizing objects from a few examples or quickly learning new skills after just minutes of experience. Our artificial agents should be able to do the same, learning and adapting quickly from onl... |
How well RoBERTa language modeling on Wiki-40B? | RoBERTa performs at about 26 BPC on the MLM task with the Wiki-40B dataset [16]. RoBERTa performs better than BERT [17]. | [
16,
17
] | [
{
"id": "2204.08110_all_0",
"text": " Pretrained language models have become an integral part of NLP systems. They come in two flavors: monolingual, where the model is trained on text from a single language, and multilingual, where the model is jointly trained on data from many different languages. Monoling... |
Why did the authors use multi-scale feature maps for detection? | Authors used multi-scale feature maps for detection because they allow predictions of detections at multiple scales [5]. | [
5
] | [
{
"id": "1512.02325_all_0",
"text": " Current state-of-the-art object detection systems are variants of the following approach: hypothesize bounding boxes, resample pixels or features for each box, and apply a high-quality classifier. This pipeline has prevailed on detection benchmarks since the Selective S... |
Which pretrained large text to image models have authors used? | Authors used pre-trained Imagen text-to-image diffusion model [4]. | [
4
] | [
{
"id": "2208.12242_all_0",
"text": " Can you imagine your own dog traveling around the world, or your favorite bag displayed in the most exclusive showroom in Paris? What about your parrot being the main character of an illustrated storybook? Rendering such imaginary scenes is a challenging task that requi... |
What does using CLIP-based codes mean? And why is this a limitation? Why is not applicable to other methods? What do they mean with other methods here? | The definition of CLIP-based codes or its limitations cannot be found in this paper [15]. | [
15
] | [
{
"id": "2208.01618_all_0",
"text": " In a famous scene from the motion picture “Titanic”, Rose makes a request of Jack: “…draw me like one of your French girls”. Albeit simple, this request contains a wealth of information. It indicates that Jack should produce a drawing; It suggests that its style and com... |
Is the increase in receptive field of the features being computed in subsequent network layers due to the downsampling mentioned by the authors, or is it the result of subsequent convolutions as the network goes deeper? | The increase in the receptive field of the features computed in subsequent network layers is the result of convolution layers as the network goes deeper [0]. | [
0
] | [
{
"id": "1606.04797_all_0",
"text": " Recent research in computer vision and pattern recognition has highlighted the capabilities of Convolutional Neural Networks (CNNs) to solve challenging tasks such as classification, segmentation and object detection, achieving state-of-the-art performances. This succes... |
Why didn't the authors use other metrics to evaluate/compare the performance of the architectures? | The authors compare with previous works w.r.t. classification accuracy, in particular the average instance accuracy and average class accuracy [12][39]. | [
12,
39
] | [
{
"id": "1604.03265_all_0",
"text": " Understanding 3D environments is a vital element of modern computer vision research due to paramount relevance in many vision systems, spanning a wide field of application scenarios from self-driving cars to autonomous robots. Recent advancements in real-time SLAM techn... |
What does SNLI mean? Is it a model? | SNLI is a benchmark dataset published in 2015, not a model [26]. | [
26
] | [
{
"id": "2210.12302_all_0",
"text": " Pretrained Language Models (LMs) have shown singular succcess on a range of natural language understandings tasks, to the extent that they have become foundational for contemporary NLP systems. Several works have investigated why pretraining works so well Warstadt et al... |
In the reference-counting approach for managed allocated memory, is it possible that an unused variable is not cleaned up because of circular dependencies? | Although the paper mentions that a reference counter is used when traversing the computation graph, it does not detail the algorithm or its failure cases [21]. | [
21
] | [
{
"id": "1512.01274_all_0",
"text": " The scale and complexity of machine learning (ML) algorithms are becoming increasingly large. Almost all recent ImageNet challenge winners employ neural networks with very deep layers, requiring billions of floating-point operations to process one single sample. The ri... |
Does the challenge also include a workshop to discuss the ideas? | Yes, the challenge includes discussion of the challenges of creating this large-scale object recognition benchmark dataset [7]. | [
7
] | [
{
"id": "1409.0575_all_0",
"text": " The ImageNet Large Scale Visual Recognition Challenge (ILSVRC) has been running annually for five years (since 2010) and has become the standard benchmark for large-scale object recognition.111In this paper, we will be using the term object recognition broadly to encompa... |
How are “character”-delimited models different from “word”-delimited models? | Character-delimited models take characters as input and output characters; words are split into their constituent characters, typically resulting in a few hundred basic characters, including special characters that appear in the data [34]. In word-delimited models, OOV words are collapsed into a single UNK symbol [81]. | [
34,
81
] | [
{
"id": "1609.08144_all_0",
"text": " Neural Machine Translation (NMT) (41, 2) has recently been introduced as a promising approach with the potential of addressing many shortcomings of traditional machine translation systems. The strength of NMT lies in its ability to learn directly, in an end-to-end fashi... |
What are the criteria for training multiple substitute DNNs? | The criteria for training multiple substitute DNNs are to achieve good accuracy, but the main goal is to create a substitute capable of mimicking the oracle's decision boundaries [24]. | [
24
] | [
{
"id": "1602.02697_all_0",
"text": " A classifier is a ML model that learns a mapping between inputs and a set of classes. For instance, a malware detector is a classifier taking executables as inputs and assigning them to the benign or malware class. Efforts in the security (5, 2, 9, 18) and machine learn... |
Is it better to use class-specific or class-agnostic masks in general? | Class-specific and class-agnostic masks are nearly equally effective [40]. | [
40
] | [
{
"id": "1703.06870_all_0",
"text": " The vision community has rapidly improved object detection and semantic segmentation results over a short period of time. In large part, these advances have been driven by powerful baseline systems, such as the Fast/Faster R-CNN (12, 36) and Fully Convolutional Network ... |
What if we introduce zero-shot generation for the synthetic query generation by using large-scale generative language models such as GPT-3, to get rid of the assumption that training datasets exist even for the general domain? Would this too generate quality queries? | Whether zero-shot generation of synthetic queries using large-scale generative language models such as GPT-3, without assuming that training datasets exist even for the general domain, would still generate quality queries is left as an open question [9]. | [
9
] | [
{
"id": "2004.14503_all_0",
"text": " Recent advances in neural retrieval have led to advancements on several document, passage and knowledge-base benchmarks Guo et al. (2016); Pang et al. (2016); Hui et al. (2017); Dai et al. (2018); Gillick et al. (2018); Nogueira and Cho (2019a); MacAvaney et al. (2019);... |
Why is MNIST so popular? | Its popularity is related to its size, which allows researchers to quickly check and prototype their models [1]. | [
1
] | [
{
"id": "1708.07747_all_0",
"text": " The MNIST dataset comprising of 10-class handwritten digits, was first introduced by LeCun et al. (1998) in 1998. At that time one could not have foreseen the stellar rise of deep learning techniques and their performance. Despite the fact that today deep learning can d... |
Who were the annotators of the new real-world scanning dataset used for real-world reconstruction? | The reconstructions are not created by manual annotation [40]. Instead, the authors use the publicly available VoxelHashing framework [25] to obtain dense 3D reconstructions [5]. | [
40,
5
] | [
{
"id": "1604.03265_all_0",
"text": " Understanding 3D environments is a vital element of modern computer vision research due to paramount relevance in many vision systems, spanning a wide field of application scenarios from self-driving cars to autonomous robots. Recent advancements in real-time SLAM techn... |
Is permutation invariance the reason for using max pooling in the paper above? | Note: the question is not phrased correctly [21]. To make a model invariant to input permutation, one strategy is to use a simple symmetric function to aggregate the information from each point [3]. | [
21,
3
] | [
{
"id": "1612.00593_all_0",
"text": " In this paper we explore deep learning architectures capable of reasoning about 3D geometric data such as point clouds or meshes. Typical convolutional architectures require highly regular input data formats, like those of image grids or 3D voxels, in order to perform w... |
What is Majority in the baselines? | Majority is the result of always selecting the most frequent label as the answer [22]. | [
22
] | [
{
"id": "2206.03715_all_0",
"text": " The ability to understand natural language through commonsense reasoning is one of the core focuses in the field of natural language processing. To measure and study the different aspects of commonsense reasoning, several datasets are developed, such as SocialIQA (Sap e... |
Aren't YOLO9000 and YOLOv2 essentially the same thing? Why make the distinction? | YOLOv2 is an improvement over the base YOLO detection system [5]. YOLO9000 further improves YOLOv2 by using a WordTree to combine data from various sources and a joint optimization technique to train simultaneously on ImageNet and COCO [70]. | [
5,
70
] | [
{
"id": "1612.08242_all_0",
"text": " General purpose object detection should be fast, accurate, and able to recognize a wide variety of objects. Since the introduction of neural networks, detection frameworks have become increasingly fast and accurate. However, most detection methods are still constrained ... |
Why is BLINK valuable? | The BLINK model is valuable because it is simple yet scalable and effective compared to existing works [0]. The proposed BERT-based model can perform entity linking in large-scale and zero-shot setups, which is crucial in real-world use cases that often contain many unseen entities [2]. BLINK also achieved a new state-of-the-art result on two zero-shot benchmarks using only the provided text description without external knowledge, which shows the effectiveness of the proposed model [48]. | [
0,
2,
48
] | [
{
"id": "1911.03814_all_0",
"text": " Scale is a key challenge for entity linking; there are millions of possible entities to consider for each mention. To efficiently filter or rank the candidates, existing methods use different sources of external information, including manually curated mention tables Gan... |
How can you come to the intuition that shorter rules have smaller sub-goals? | If smaller LMs are utilised, then one may need to split the issue into sub-problems even more (e.g., further decomposing the one-to-many comparisons in the selection module) [55]. | [
55
] | [
{
"id": "2212.13894_all_0",
"text": " Automated reasoning, the ability to draw valid conclusions from explicitly provided knowledge, has been a fundamental goal for AI since its early days McCarthy (1959); Hewitt (1969). Furthermore, logical reasoning, especially reasoning with unstructured, natural text is... |
Based on the results of the baseline and other models, would you rule out overfitting on the data? How? | We can observe that there is little overfitting of the data [40]. | [
40
] | [
{
"id": "2212.13894_all_0",
"text": " Automated reasoning, the ability to draw valid conclusions from explicitly provided knowledge, has been a fundamental goal for AI since its early days McCarthy (1959); Hewitt (1969). Furthermore, logical reasoning, especially reasoning with unstructured, natural text is... |
What is the distribution of images in the training and testing set of the Fashion-MNIST dataset? | The training set has 6,000 examples from each class [8]. | [
8
] | [
{
"id": "1708.07747_all_0",
"text": " The MNIST dataset comprising of 10-class handwritten digits, was first introduced by LeCun et al. (1998) in 1998. At that time one could not have foreseen the stellar rise of deep learning techniques and their performance. Despite the fact that today deep learning can d... |
How does the size of a large neural network for NMT affect memory? | A large neural network for NMT can generalize well to very long word sequences, so it does not have to store gigantic phrase tables and language models, which results in a small memory footprint [0]. | [
0
] | [
{
"id": "1508.04025_all_0",
"text": " Neural Machine Translation (NMT) achieved state-of-the-art performances in large-scale translation tasks such as from English to French (Luong et al., 2015) and English to German (Jean et al., 2015). NMT is appealing since it requires minimal domain knowledge and is con... |
What is prior preservation loss? | The prior preservation loss supervises the model with its own samples to preserve the prior during few-shot fine-tuning. The loss equation is presented in the corresponding paragraph [21]. | [
21
] | [
{
"id": "2208.12242_all_0",
"text": " Can you imagine your own dog traveling around the world, or your favorite bag displayed in the most exclusive showroom in Paris? What about your parrot being the main character of an illustrated storybook? Rendering such imaginary scenes is a challenging task that requi... |
The paper wished to show only the main object, letting other regions be exactly zero if they are not needed. How did the authors achieve this? | The paper reaches this goal by computing each pixel's norm over the 3 colour channels and zeroing out small-norm pixels according to a threshold (a percentile of all pixel norms in x) [27]. | [
27
] | [
{
"id": "1506.06579_all_0",
"text": " The last several years have produced tremendous progress in training powerful, deep neural network models that are approaching and even surpassing human abilities on a variety of challenging machine learning tasks (Taigman et al., 2014; Schroff et al., 2015; Hannun et a... |
What are ARSGs? | An ARSG is a sequence generator based on an RNN that utilizes the attention mechanism [7]. | [
7
] | [
{
"id": "1506.07503_all_0",
"text": " Recently, attention-based recurrent networks have been successfully applied to a wide variety of tasks, such as handwriting synthesis , machine translation , image caption generation and visual object classification .111An early version of this work was presented at th... |
What is the reason for doing the joint training? Is it related to model performance? | Due to the memory limit, the authors consider a new joint training method utilizing both image and video data [22]. As more independent image frames are added, the variance of the gradient is reduced at the expense of some bias for video modeling [23]. Table 4 shows that additional frames per video help to improve video and image sample-quality metrics [30]. | [
22,
23,
30
] | [
{
"id": "2204.03458_all_0",
"text": " Diffusion models have recently been producing high quality results in image generation and audio generation (e.g. 28, 39, 40, 16, 23, 36, 48, 60, 42, 10, 29), and there is significant interest in validating diffusion models in new data modalities. In this work, we prese... |