input stringlengths 14 315 | answer stringlengths 9 2.16k | gold_ctxs listlengths 1 15 | ctxs listlengths 11 186 |
|---|---|---|---|
How was Godard et al.'s architecture design better than Deep3D's? | Deep3D predicts a second stereo viewpoint from an input image, using stereoscopic film footage as training data [5]. | [
5
] | [
{
"id": "1704.07813_all_0",
"text": " Humans are remarkably capable of inferring ego-motion and the 3D structure of a scene even over short timescales. For instance, in navigating along a street, we can easily locate obstacles and react quickly to avoid them. Years of research in geometric computer vision h... |
What are the benefits of normalization with zero-mean techniques compared to other normalization techniques? Have they been tested? | The paper cites Jarrett et al. [50]. | [
50
] | [
{
"id": "1603.05959_all_0",
"text": " Segmentation and the subsequent quantitative assessment of lesions in medical images provide valuable information for the analysis of neuropathologies and are important for planning of treatment strategies, monitoring of disease progression and prediction of patient out... |
What is the reason for adopting this? | To imitate humans' ability to adapt to new visual concepts, the One-Shot Video Generation task is proposed [0]. Tune-A-Video generates videos with novel visual concepts (e.g., subjects, backgrounds, attributes, styles, etc.) [1], guided by the text prompt [2]. It is expensive to finetune T2I models on large-scale text-video datasets and not affordable to everyone [24]. | [
0,
1,
2,
24
] | [
{
"id": "2212.11565_all_0",
"text": " The large-scale multimodal dataset , consisting of billions of text-image pairs crawled from the Internet, has enabled a breakthrough in Text-to-Image (T2I) generation (30, 35, 6, 42, 40). To replicate this success in Text-to-Video (T2V) generation, recent works (42, 15... |
How could we check whether D is overfitting the training set? | By observing that D’s loss approaches zero during training, but undergoes a sharp upward jump at the collapse [28]. | [
28
] | [
{
"id": "1809.11096_all_0",
"text": " The state of generative image modeling has advanced dramatically in recent years, with Generative Adversarial Networks (GANs, Goodfellow et al. (2014)) at the forefront of efforts to generate high-fidelity, diverse images with models learned directly from data. GAN trai... |
During architecture search, did the models inherently learn skip connections? | Yes, the models inherently learn the skip connections [26]. | [
26
] | [
{
"id": "1707.07012_all_0",
"text": " Developing neural network image classification models often requires significant architecture engineering. Starting from the seminal work of on using convolutional architectures (17, 34) for ImageNet classification, successive advancements through architecture enginee... |
What are the roles of attention connections from the decoder network to the encoder? | Attention connections improve parallelism, decreasing training time, and allow the decoder to focus on different regions of the source sentence [2]. | [
2
] | [
{
"id": "1609.08144_all_0",
"text": " Neural Machine Translation (NMT) (41, 2) has recently been introduced as a promising approach with the potential of addressing many shortcomings of traditional machine translation systems. The strength of NMT lies in its ability to learn directly, in an end-to-end fashi... |
Searching for the best cell structure is less computationally expensive than searching for an entire network. If so, how does the architecture search learn to connect the network? | The search learns the best cell structure rather than an entire convolutional architecture; the learned cells are then stacked to form the full network [1]. | [
1
] | [
{
"id": "1707.07012_all_0",
"text": " Developing neural network image classification models often requires significant architecture engineering. Starting from the seminal work of on using convolutional architectures (17, 34) for ImageNet classification, successive advancements through architecture enginee... |
What are the three main modules of a face recognition system? | The 3 main modules are: face detection, facial landmark detector, and FR module [10]. | [
10
] | [
{
"id": "1804.06655_all_0",
"text": " Face recognition (FR) has been the prominent biometric technique for identity authentication and has been widely used in many areas, such as military, finance, public security and daily life. FR has been a long-standing research topic in the CVPR community. In the early... |
Do the authors use different ratios of test-train-validation split for each dataset? | The authors use different ratios of train-validation-test split for each dataset [23]. Specifically, they did not use a single predefined ratio when splitting the three datasets (TIMIT Speech corpus, IAM Online Handwriting Database, and JSB Chorales) used in the experiments [22]. Instead, they used the predefined data splits for the IAM Online Handwriting Database and the JSB Chorales dataset (5355:3859:2956 and 229:77:76) [25]. They also followed Halberstadt [37] in splitting the TIMIT dataset (3696:400:192) [29]. | [
23,
22,
25,
29
] | [
{
"id": "1503.04069_all_0",
"text": " Recurrent neural networks with Long Short-Term Memory (which we will concisely refer to as LSTMs) have emerged as an effective and scalable model for several learning problems related to sequential data. Earlier methods for attacking these problems have either been tail... |
Why did the authors use the randomly sampled z(str) to measure explicit planning, while using z(pln) from zero explicit planning to measure structural attributes? | The authors use the randomly sampled z(str) to measure explicit planning as they aim to disentangle explicit planning from any structural attribute [27]. | [
27
] | [
{
"id": "2208.14867_all_0",
"text": " Computational modeling of expressive music performance focuses on mimicking human behaviors that convey the music (1, 2). For piano performance, one common task is to render an expressive performance from a quantized musical score. It aims to reproduce the loudness and ... |
What would happen to the model performance if we just use connections from earlier layers of contracting path while going deeper without upsampling to perform localization? | The model needs to match the size of the expansive path with the contracting path at each stage [3]. Otherwise the localization performance would suffer [4]. | [
3,
4
] | [
{
"id": "1505.04597_all_0",
"text": " In the last two years, deep convolutional networks have outperformed the state of the art in many visual recognition tasks, e.g. (7, 3). While convolutional networks have already existed for a long time , their success was limited due to the size of the available traini... |
Which dataset is used for KG-Classifier adapter training? | For KG-Classifier adapter training, a KG classification dataset has been used [3]. This dataset is generated by transforming a QA sample into a KG classification sample, using the concatenation of the question and answer of a synthetic QA pair as the question and the KG source as the answer [16]. | [
3,
16
] | [
{
"id": "2206.03715_all_0",
"text": " The ability to understand natural language through commonsense reasoning is one of the core focuses in the field of natural language processing. To measure and study the different aspects of commonsense reasoning, several datasets are developed, such as SocialIQA (Sap e... |
Is this true? Despite using three different pretraining data (text domain), the model shows similar accuracy in big sample case. | It is false [27]. Models pretrained on the three different datasets outperform all non-pretrained models [32]. | [
27,
32
] | [
{
"id": "2210.12302_all_0",
"text": " Pretrained Language Models (LMs) have shown singular succcess on a range of natural language understandings tasks, to the extent that they have become foundational for contemporary NLP systems. Several works have investigated why pretraining works so well Warstadt et al... |
How many tokens are changed to [MASK] in BERT training? Give a ratio. | Of the 15% of tokens selected for masking, 80% are replaced with [MASK] during training [9]. | [
9
] | [
{
"id": "1907.11692_all_0",
"text": " Self-training methods such as ELMo Peters et al. (2018), GPT Radford et al. (2018), BERT Devlin et al. (2019), XLM Lample and Conneau (2019), and XLNet Yang et al. (2019) have brought significant performance gains, but it can be challenging to determine which aspects of... |
Was the GPT3 model finetuned on Self-Instruct also finetuned only on 50k instances? | Based on the introduction, it appears as though the authors may have finetuned their GPT3-Self Instruct model with 82k samples [2]. | [
2
] | [
{
"id": "2212.10560_all_0",
"text": " The recent NLP literature has witnessed a tremendous amount of activity in building models that can follow natural language instructions (Mishra et al., 2022; Wei et al., 2022; Sanh et al., 2022; Wang et al., 2022; Ouyang et al., 2022; Chung et al., 2022, i.a.). These d... |
What is the correlation between the number of KGs and the performance when using zero-shot fusion? | Zero-shot fusion obtains relative performance improvements across most benchmarks when more KGs are utilized for training [34]. | [
34
] | [
{
"id": "2206.03715_all_0",
"text": " The ability to understand natural language through commonsense reasoning is one of the core focuses in the field of natural language processing. To measure and study the different aspects of commonsense reasoning, several datasets are developed, such as SocialIQA (Sap e... |
Given an input tensor of size (224, 224, 16), a convolution layer transforms the input to an output tensor of size (224, 224, 8), what would the computational cost of this operation be according to this paper? | With h_i = 224, w_i = 224, d_i = 16 and d_j = 8, the computational cost of this operation would be 224 × 224 × 16 × 8 × k × k = 6,422,528 × k^2 [7]. | [
7
] | [
{
"id": "1801.04381_all_0",
"text": " Neural networks have revolutionized many areas of machine intelligence, enabling superhuman accuracy for challenging image recognition tasks. However, the drive to improve accuracy often comes at a cost: modern state of the art networks require high computational resour... |
How does KG-Classifier work in framework? | The KG-Classifier works by using the hidden representation h^{l}_{KGC} of a KG-Classifier adapter, parameterized by Φ_KGC, as a query [18]. | [
18
] | [
{
"id": "2206.03715_all_0",
"text": " The ability to understand natural language through commonsense reasoning is one of the core focuses in the field of natural language processing. To measure and study the different aspects of commonsense reasoning, several datasets are developed, such as SocialIQA (Sap e... |
Are (1) slanted triangular learning rate and (2) linear warmup followed by linear decay the same thing? | The slanted triangular learning rate (STLR) involves first linearly increasing the learning rate and then linearly decaying it according to a given update schedule [20]. It modifies triangular learning rates by using a short increase and a long decay period [21]. | [
20,
21
] | [
{
"id": "1801.06146_all_0",
"text": " Inductive transfer learning has had a large impact on computer vision (CV). Applied CV models (including object detection, classification, and segmentation) are rarely trained from scratch, but instead are fine-tuned from models that have been pretrained on ImageNet, MS... |
What is the demerit of using GNN? | The demerit of GNNs is their high computational complexity [10]. | [
10
] | [
{
"id": "1711.04043_all_0",
"text": " Supervised end-to-end learning has been extremely successful in computer vision, speech, or machine translation tasks, thanks to improvements in optimization technology, larger datasets and streamlined designs of deep convolutional or recurrent architectures. Despite th... |
In related work, what is the most relevant method to this paper? | [The paper’s method is most similar to the non-linear extension of NCA because the paper uses a neural network to perform the embedding and optimizes a softmax based on Euclidean distances in the transformed space, as opposed to a margin loss] [25]. | [
25
] | [
{
"id": "1703.05175_all_0",
"text": " Few-shot classification (20, 16, 13) is a task in which a classifier must be adapted to accommodate new classes not seen in training, given only a few examples of each of these classes. A naive approach, such as re-training the model on the new data, would severely over... |
How flexible can the residual function be in residual learning? | The residual function in residual learning is flexible enough to be used for single/multiple and fully-connected/convolutional layers [19]. | [
19
] | [
{
"id": "1512.03385_all_0",
"text": " Deep convolutional neural networks (22, 21) have led to a series of breakthroughs for image classification (21, 50, 40). Deep networks naturally integrate low/mid/high-level features and classifiers in an end-to-end multi-layer fashion, and the “levels” of features can... |
How do angular/cosine-margin-based loss functions improve the separability of learned features in deep face recognition? | Angular/cosine-margin-based loss allows the separation of learned features with larger angular/cosine distance [24]. | [
24
] | [
{
"id": "1804.06655_all_0",
"text": " Face recognition (FR) has been the prominent biometric technique for identity authentication and has been widely used in many areas, such as military, finance, public security and daily life. FR has been a long-standing research topic in the CVPR community. In the early... |
What is a 128-dim correction? | It is a means of improving over words that have different meanings but are spelled similarly [30]. | [
30
] | [
{
"id": "1602.02410_all_0",
"text": " Language Modeling (LM) is a task central to Natural Language Processing (NLP) and Language Understanding. Models which can accurately place distributions over sentences not only encode complexities of language such as grammatical structure, but also distill a fair amoun... |
Explain how phoneme error rate (PER) is calculated? | The phoneme error rate (PER) is used as the evaluation metric in this paper, but since there is no information on how this metric is calculated, the question cannot be answered without external knowledge [29]. | [
29
] | [
{
"id": "1506.07503_all_0",
"text": " Recently, attention-based recurrent networks have been successfully applied to a wide variety of tasks, such as handwriting synthesis , machine translation , image caption generation and visual object classification .111An early version of this work was presented at th... |
Why are both map point matches and visual odometry matches required? | Map point matches and visual odometry matches are required for Localization Mode, which can be useful for lightweight long-term localization in well-mapped areas, as long as there are no significant changes in the environment [30]. | [
30
] | [
{
"id": "1610.06475_all_0",
"text": " Simultaneous Localization and Mapping (SLAM) has been a hot research topic in the last two decades in the Computer Vision and Robotics communities, and has recently attracted the attention of high-technological companies. SLAM techniques build a map of an unknown enviro... |
Does prediction of Unknown values have an influence on proved and disproved? | The prediction of Unknown values does not have an influence on proved and disproved [40]. | [
40
] | [
{
"id": "2212.13894_all_0",
"text": " Automated reasoning, the ability to draw valid conclusions from explicitly provided knowledge, has been a fundamental goal for AI since its early days McCarthy (1959); Hewitt (1969). Furthermore, logical reasoning, especially reasoning with unstructured, natural text is... |
Why are the patterns only defined between pairs of tokens instead of other possible options (e.g., trios, sequence, sets)? | The structural definition of a pattern in this paper follows only naturally from the design of the attention mechanism [16]. | [
16
] | [
{
"id": "2112.05364_all_0",
"text": " With transformer-based models (Vaswani et al., 2017) dominating the leaderboard for many key NLP tasks such as summarization (Liu and Lapata, 2019), topic segmentation (Lukasik et al., 2020), and sentiment analysis Adhikari et al. (2019), their core multi-head self-att... |
How does a "network-centric" approach differ from a "dataset-centric approach"? | "Dataset-centric approach" requires the trained network together with some dataset to run through the network showing high or low responses of different units while interacting with most significant images of such dataset [6]. This approach can also use deconvolution layers and upsampling to map and highlight the regions of an image that were responsible of the firing of the different units [7]. | [
6,
7
] | [
{
"id": "1506.06579_all_0",
"text": " The last several years have produced tremendous progress in training powerful, deep neural network models that are approaching and even surpassing human abilities on a variety of challenging machine learning tasks (Taigman et al., 2014; Schroff et al., 2015; Hannun et a... |
Are the comparisons provided for the task enough to replace the consultant or doctor in such a critical field? | Keeping in view the cost and expert requirements, doctors could only be replaced if the V-Net can be trained on a sufficiently large dataset annotated by experts [16]. | [
16
] | [
{
"id": "1606.04797_all_0",
"text": " Recent research in computer vision and pattern recognition has highlighted the capabilities of Convolutional Neural Networks (CNNs) to solve challenging tasks such as classification, segmentation and object detection, achieving state-of-the-art performances. This succes... |
How can the author claim that only using absolute positional encoding with Transformer can show the relaxed structural inductive bias? | The author claims that a Transformer using only absolute positional encoding often generates dissimilar representations for nodes with similar local structures [26]. | [
26
] | [
{
"id": "2202.03036_all_0",
"text": " Graph neural networks (GNNs) have been established as powerful and flexible tools for graph representation learning, with successful applications in drug discovery (Gaudelet et al., 2021), protein design (Ingraham et al., 2019), social network analysis (Fan et al., 2019... |
What is AdaGN? | AdaGN incorporates the timestep and class embedding into each residual block after a group normalization operation [69], similar to adaptive instance norm [27] and FiLM [24]. | [
24
] | [
{
"id": "2105.05233_all_0",
"text": " Over the past few years, generative models have gained the ability to generate human-like natural language Brown et al. (2020), infinite high-quality synthetic images Brock et al. (2018); Karras et al. (2019b); Razavi et al. (2019) and highly diverse human speech and mu... |
What is the definition of 'cycle consistency loss'? | [Cyclic consistency implies that for each image x from domain X, the image translation cycle should be able to bring x back to the original image [12]. The data can be of any nature other than an image as well] [16]. | [
12,
16
] | [
{
"id": "1703.10593_all_0",
"text": " What did Claude Monet see as he placed his easel by the bank of the Seine near Argenteuil on a lovely spring day in 1873 (Figure 1, top-left)? A color photograph, had it been invented, may have documented a crisp blue sky and a glassy river reflecting it. Monet conveyed... |
What is the cost function of the depthwise convolution? | D_{K}\cdot D_{K}\cdot M\cdot D_{F}\cdot D_{F}+M\cdot N\cdot D_{F}\cdot D_{F} is the cost function for depthwise separable convolution [22]. With two hyperparameter settings, the function looks like this - D_{K}\cdot D_{K}\cdot\alpha M\cdot\rho D_{F}\cdot\rho D_{F}+\alpha M\cdot\alpha N\cdot\rho D_{F}\cdot\rho D_{F} [31]. | [
22,
31
] | [
{
"id": "1704.04861_all_0",
"text": " Convolutional neural networks have become ubiquitous in computer vision ever since AlexNet popularized deep convolutional neural networks by winning the ImageNet Challenge: ILSVRC 2012 . The general trend has been to make deeper and more complicated networks in order t... |
Why do the authors build their model jointly? | They build their model jointly so that the model can use the correct candidates from other mentions in the cluster [16]. | [
16
] | [
{
"id": "2108.13530_all_0",
"text": " In this paper we explore a principled approach to solve entity linking (EL) jointly with coreference resolution (coref). Concretely, we formulate coref+EL as a single structured task over directed trees that conceives EL and coref as two complementary components: a core... |
Does it have to be integrated into the network in an end-to-end manner? I guess it could make the network heavier. | They extend the spatial layers at the model initialization stage to include temporal information, and the extended spatial-temporal network significantly accelerates T2V training by instantaneously transferring the knowledge from a previously trained T2I network to a new T2V one [14]. Because using 3D convolutional layers is computationally heavy, they followed the work of Ho et al. (2022), extending the dimension-decomposition strategy to attention layers [20]. In contrast to VDM, they apply an additional 3x1x1 convolution projection (after each 1x3x3) so that temporal information is also passed through each convolution layer [23]. | [
14,
20,
23
] | [
{
"id": "2209.14792_all_0",
"text": " The Internet has fueled collecting billions of (alt-text, image) pairs from HTML pages (Schuhmann et al., 2022), enabling the recent breakthroughs in Text-to-Image (T2I) modeling. However, replicating this success for videos is limited since a similarly sized (text, vid... |
What are the existing public datasets that contain instructions for tuning large language models? | PromptSource and Super-NaturalInstructions (also called "super-NI" in short) are two existing public datasets that contain instructions for tuning large language models [0]. | [
0
] | [
{
"id": "2212.10560_all_0",
"text": " The recent NLP literature has witnessed a tremendous amount of activity in building models that can follow natural language instructions (Mishra et al., 2022; Wei et al., 2022; Sanh et al., 2022; Wang et al., 2022; Ouyang et al., 2022; Chung et al., 2022, i.a.). These d... |
How are T0 and Tk-INSTRUCT different? | T0 and Tk-Instruct are two related models, proposed in papers published in 2022 [0]. T0 was proposed by Bach et al [21]. | [
0,
21
] | [
{
"id": "2212.10560_all_0",
"text": " The recent NLP literature has witnessed a tremendous amount of activity in building models that can follow natural language instructions (Mishra et al., 2022; Wei et al., 2022; Sanh et al., 2022; Wang et al., 2022; Ouyang et al., 2022; Chung et al., 2022, i.a.). These d... |
What is the goal behind using a single model SR approach? | Existing methods are trained only for a single scale, so adapting them to other scales requires retraining [15]. This would be impractical, so a single model that accepts multiple scales fixes the problem [16]. | [
15,
16
] | [
{
"id": "1511.04587_all_0",
"text": " We address the problem of generating a high-resolution (HR) image given a low-resolution (LR) image, commonly referred as single image super-resolution (SISR) , , . SISR is widely used in computer vision applications ranging from security and surveillance imaging to med... |
Is this true?: Calculating length of a string is string reasoning task. | Calculating length of a string is not a string reasoning task because it does not require character composition within or with another string [11]. | [
11
] | [
{
"id": "2210.12302_all_0",
"text": " Pretrained Language Models (LMs) have shown singular succcess on a range of natural language understandings tasks, to the extent that they have become foundational for contemporary NLP systems. Several works have investigated why pretraining works so well Warstadt et al... |
What does "Active Units" mean and how is it measured? | “Active units” is a measurement metric used by the authors to measure the learning capacity of their model [29]. | [
29
] | [
{
"id": "2004.04092_all_0",
"text": " Pre-trained language models (PLMs) have substantially advanced the state-of-the-art across a variety of natural language processing (NLP) tasks Peters et al. (2018); Devlin et al. (2019); Yang et al. (2019); Radford et al. (2019); Liu et al. (2019); Keskar et al. (2019)... |
What are the advantages of KG modularization using adapters? | Adapters enable storing the corresponding knowledge separately without any interference [20]. | [
20
] | [
{
"id": "2206.03715_all_0",
"text": " The ability to understand natural language through commonsense reasoning is one of the core focuses in the field of natural language processing. To measure and study the different aspects of commonsense reasoning, several datasets are developed, such as SocialIQA (Sap e... |
If dense/sparse retrievers are pre-trained on target corpus to enable the retrievers to be corpus-aware, can the fine-tuned retrievers outperform lexical models? | If dense/sparse retrievers are pre-trained on target corpus to enable the retrievers to be corpus-aware, the fine-tuned retrievers underperform lexical models [4]. | [
4
] | [
{
"id": "2104.08663_all_0",
"text": " Major natural language processing (NLP) problems rely on a practical and efficient retrieval component as a first step to find relevant information. Challenging problems include open-domain question-answering , claim-verification , duplicate question detection , and man... |
How has the evolution of network architectures in deep face recognition systems, such as the transition from AlexNet to ResNet and SENet, impacted the performance of these systems? | As deep FR models followed the footsteps of deep object-classification network architectures, performance improved, training became more controllable, and models got deeper [27]. It started with DeepFace, based on AlexNet, which achieved 97.35% on the LFW benchmark [29]. Then came FaceNet, based on GoogLeNet, which achieved 99.63% [3]. | [
27,
29,
3
] | [
{
"id": "1804.06655_all_0",
"text": " Face recognition (FR) has been the prominent biometric technique for identity authentication and has been widely used in many areas, such as military, finance, public security and daily life. FR has been a long-standing research topic in the CVPR community. In the early... |
What added benefits do the ILSVRC provide over the existing PASCAL-VOC challenge? | Images from the ILSVRC2012 single-object localization validation set are compared to images from the PASCAL VOC benchmark for object recognition [102]. The authors also analyzed the level of difficulty of object localization in these images compared to that of objects from the PASCAL VOC benchmark [13]. The level of difficulty of object localization is analyzed as well [62]. | [
102,
13,
62
] | [
{
"id": "1409.0575_all_0",
"text": " The ImageNet Large Scale Visual Recognition Challenge (ILSVRC) has been running annually for five years (since 2010) and has become the standard benchmark for large-scale object recognition.111In this paper, we will be using the term object recognition broadly to encompa... |
What is catastrophic forgetting? | Catastrophic forgetting involves increasing error as a model starts to overfit, whereby the knowledge captured through pretraining is lost [24]. | [
24
] | [
{
"id": "1801.06146_all_0",
"text": " Inductive transfer learning has had a large impact on computer vision (CV). Applied CV models (including object detection, classification, and segmentation) are rarely trained from scratch, but instead are fine-tuned from models that have been pretrained on ImageNet, MS... |
What are the fields that needs to be carried out in a timely fashion and on a computationally limited platform? | For robotics, self-driving cars, and augmented reality, the recognition task needs to be carried out in a timely fashion and with low computational cost [0]. | [
0
] | [
{
"id": "1704.04861_all_0",
"text": " Convolutional neural networks have become ubiquitous in computer vision ever since AlexNet popularized deep convolutional neural networks by winning the ImageNet Challenge: ILSVRC 2012 . The general trend has been to make deeper and more complicated networks in order t... |
What is an example of a "dataset-centric" approach? | An example of a "dataset-centric" approach is the deconvolution method, which highlights the regions of an image that have the highest effect on the response of different units [6]. | [
6
] | [
{
"id": "1506.06579_all_0",
"text": " The last several years have produced tremendous progress in training powerful, deep neural network models that are approaching and even surpassing human abilities on a variety of challenging machine learning tasks (Taigman et al., 2014; Schroff et al., 2015; Hannun et a... |
Can the methods of "one-to-many augmentation" like data augmentation and 3D face reconstruction effectively improve the performance of deep FR algorithms in terms of accuracy and diversity of training data? | In terms of accuracy, the paper mentions a set of work done on assembled multi-input networks that used "one-to-many augmentation" methods to expand their dataset and achieve better results compared to individual networks [40]. In terms of diversity, all data augmentation, 3D face reconstruction, autoencoders, and especially GANs were found to be effective in generating faces in certain poses, angles, with different expressions, etc [41]. | [
40,
41
] | [
{
"id": "1804.06655_all_0",
"text": " Face recognition (FR) has been the prominent biometric technique for identity authentication and has been widely used in many areas, such as military, finance, public security and daily life. FR has been a long-standing research topic in the CVPR community. In the early... |
How the features are converted to pixel labels in SegNet? | SegNet performs feed-forward computation to obtain pixel-wise labelling [13]. | [
13
] | [
{
"id": "1505.07293_all_0",
"text": " Semantic segmentation is an important step towards understanding and inferring different objects and their arrangements observed in a scene. This has wide array of applications ranging from estimating scene geometry, inferring support-relationships among objects to auto... |
What are the benefits of using the predictor to calculate user-item interaction score instead of directly encoding into their inner product? | Using a predictor makes it possible to optimize the representation without any negative samples [21]. | [
21
] | [
{
"id": "2105.06323_all_0",
"text": " Over the past decade, one-class collaborative filtering (OCCF) problems (Pan et al., 2008; Hu et al., 2008) have been extensively researched to accurately infer a user’s preferred items, particularly for the recommender systems where only the users’ implicit feedback on... |
Given input size (3x224x224) and bottleneck channels being 64, compare the computational complexity between ResNet, ResNext and ShuffleNet. | ResNet: hw(2cm+9m^{2}) = 244^2*2 * 3 * 64 + 244^2*9 * 64^2 = 2 217 596 928
ResNeXt: hw(2cm+9m^{2}/g) = 244^2*2 * 3 * 64 + 244^2*9 * 64^2/g = 22 861 824 + 2 194 735 104 / g
ShuffleNet: hw(2cm/g + 9m) = 244^2*2 * 3 * 64 / g + 244^2*9 * 64 = 22 861 824 / g + 34 292 736
Even with the group size of 1 (g=1), ShuffleNet have much less complexity compared to ResNet and ResNeXt [13]. | [
13
] | [
{
"id": "1707.01083_all_0",
"text": " Building deeper and larger convolutional neural networks (CNNs) is a primary trend for solving major visual recognition tasks (21, 9, 33, 5, 28, 24). The most accurate CNNs usually have hundreds of layers and thousands of channels (9, 34, 32, 40), thus requiring computa... |
Why was the training unstable without these activations scaled before addition? | The training was stabilised by scaling down the residuals before adding them to the previous layer's activations [14]. Even when this scaling was not strictly necessary, it helped stabilise training without affecting accuracy [15]. However, this paper does not explain why the training was unstable without the scaled activations [16]. | [
14,
15,
16
] | [
{
"id": "1602.07261_all_0",
"text": " Since the 2012 ImageNet competition winning entry by Krizhevsky et al , their network “AlexNet” has been successfully applied to a larger variety of computer vision tasks, for example to object-detection , segmentation , human pose estimation , video classification , o... |
What is the purpose of bounding box regression? | Bounding box regression is needed to find the coordinates of the bounding boxes of the objects within the RoI [11]. | [
11
] | [
{
"id": "1605.06409_all_0",
"text": " A prevalent family (8, 6, 18) of deep networks for object detection can be divided into two subnetworks by the Region-of-Interest (RoI) pooling layer : (i) a shared, “fully convolutional” subnetwork independent of RoIs, and (ii) an RoI-wise subnetwork that does not shar... |
What was the size of the model obtained by applying Deep Compression to SqueezeNet, using 33% sparsity and 8-bit quantization? | The size after taking these considerations would be a 0.66 MB model, 363× smaller than 32-bit AlexNet with equivalent accuracy to AlexNet [24]. | [
24
] | [
{
"id": "1602.07360_all_0",
"text": " Much of the recent research on deep convolutional neural networks (CNNs) has focused on increasing accuracy on computer vision datasets. For a given accuracy level, there typically exist multiple CNN architectures that achieve that accuracy level. Given equivalent accur... |
Is there a reason for not applying pre-processing techniques to the real data to remove noise before training? | The reason is that, in the real data, it will not always be possible to do the pre-processing steps, especially if they require tedious manual noise removal which cannot be done completely automatically [34]. Thus, by using the noisy dataset, the authors demonstrate that their model is robust to real-world noise and occlusions [53]. | [
34,
53
] | [
{
"id": "1604.03265_all_0",
"text": " Understanding 3D environments is a vital element of modern computer vision research due to paramount relevance in many vision systems, spanning a wide field of application scenarios from self-driving cars to autonomous robots. Recent advancements in real-time SLAM techn... |
Assuming the authors performed a brute force hyperparameter search on all permutations of the five hyperparameters - hidden layer sizes, depths, LR, batch size and dropout - how many total experiments would they have had to perform? | For Deep LSTM readers, 3 values of hidden layer sizes, 3 values of depths, 3 starting LRs, 2 batch sizes and 3 dropout fractions are considered [27]. A full grid over these would therefore require 3 × 3 × 3 × 2 × 3 = 162 experiments. | [
27
] | [
{
"id": "1506.03340_all_0",
"text": " Progress on the path from shallow bag-of-words information retrieval algorithms to machines capable of reading and understanding documents has been slow. Traditional approaches to machine reading and comprehension have been based on either hand engineered grammars , or ... |
What are the examples of the high level features that separate the anatomical structures for lesions regions identification ? | Figure 14 shows that the network learns to identify the ventricles, CSF, white and gray matter, with each filter identifying different tissue types, indicating that learning the differences in the features of different tissue types is helpful for lesion segmentation [72]. | [
72
] | [
{
"id": "1603.05959_all_0",
"text": " Segmentation and the subsequent quantitative assessment of lesions in medical images provide valuable information for the analysis of neuropathologies and are important for planning of treatment strategies, monitoring of disease progression and prediction of patient out... |
What is a "Hamiltonian path"? | A Hamiltonian path is a path that visits all nodes in a graph [35]. | [
35
] | [
{
"id": "2210.15541_all_0",
"text": " The Transformer architecture has been the go-to method for encoding sequential data, due to its superior performance in various tasks such as machine translation , image classification , and protein language modeling . Its key strength stems from the multi-head attenti... |
How many experimental setting factors have been considered in experiments? | To the best of my knowledge, seven factors are considered in the experiments [25]. The authors state rectified linear units, K, two sample sizes, an identical implementation of the minibatch iterators, the loss function, and the neighborhood sampler [27]. | [
25,
27
] | [
{
"id": "1706.02216_all_0",
"text": " Low-dimensional vector embeddings of nodes in large graphs111While it is common to refer to these data structures as social or biological networks, we use the term graph to avoid ambiguity with neural network terminology. have proved extremely useful as feature inputs f... |
What is a mixed-membership Stochastic Block Model? | The mixed-membership Stochastic Block Model (SBM) is a generative model that encodes the latent structure of graphs by assigning each node into multiple clusters [8]. | [
8
] | [
{
"id": "2210.15541_all_0",
"text": " The Transformer architecture has been the go-to method for encoding sequential data, due to its superior performance in various tasks such as machine translation , image classification , and protein language modeling . Its key strength stems from the multi-head attenti... |
What are the approaches that led to improved accuracy with lesser parameters for NASNets compared to Inception, ResNet and PolyNet? | Ensembling multiple inferences across multiple model instances and image crops led to improved accuracy with lesser parameters for NASNets compared to Inception, ResNet and PolyNet [31]. | [
31
] | [
{
"id": "1707.07012_all_0",
"text": " Developing neural network image classification models often requires significant architecture engineering. Starting from the seminal work of on using convolutional architectures (17, 34) for ImageNet classification, successive advancements through architecture enginee... |
Which problems in the computer vision domain does the ImageNet challenge benchmark? | It emphasizes the importance of examining the bias inherent in any standardized dataset [16]. | [
16
] | [
{
"id": "1409.0575_all_0",
"text": " The ImageNet Large Scale Visual Recognition Challenge (ILSVRC) has been running annually for five years (since 2010) and has become the standard benchmark for large-scale object recognition.111In this paper, we will be using the term object recognition broadly to encompa... |
Does the inception model run different convolutions in parallel on cropped portions of the original images or on the same image? | The Inception model runs convolutions in parallel, but it is not clear from the paper whether it runs on patches or on the complete image [8]. | [
8
] | [
{
"id": "1503.03832_all_0",
"text": " In this paper we present a unified system for face verification (is this the same person), recognition (who is this person) and clustering (find common people among these faces). Our method is based on learning a Euclidean embedding per image using a deep convolutional ... |
How do we obtain the noise (epsilon) in a diffusion model? | Diffusion models sample from a distribution by reversing a gradual noising process [7]. In particular, sampling starts with noise x_T and produces gradually less-noisy samples x_{T-1}, x_{T-2}, until reaching a final sample x_0 [8]. | [
7,
8
] | [
{
"id": "2105.05233_all_0",
"text": " Over the past few years, generative models have gained the ability to generate human-like natural language Brown et al. (2020), infinite high-quality synthetic images Brock et al. (2018); Karras et al. (2019b); Razavi et al. (2019) and highly diverse human speech and mu... |
What are the term-based techniques they used in their experiments? | Traditional term-based methods like BM25 Robertson et al [23]. | [
23
] | [
{
"id": "2004.14503_all_0",
"text": " Recent advances in neural retrieval have led to advancements on several document, passage and knowledge-base benchmarks Guo et al. (2016); Pang et al. (2016); Hui et al. (2017); Dai et al. (2018); Gillick et al. (2018); Nogueira and Cho (2019a); MacAvaney et al. (2019);... |
Among MNIST and FashionMNIST, which dataset poses more challenging classification task? | FashionMNIST poses the more challenging classification task [10]. | [
10
] | [
{
"id": "1708.07747_all_0",
"text": " The MNIST dataset comprising of 10-class handwritten digits, was first introduced by LeCun et al. (1998) in 1998. At that time one could not have foreseen the stellar rise of deep learning techniques and their performance. Despite the fact that today deep learning can d... |
How do they train a single-generator to learn a variety in translating images? | By randomly generating the target domain label during training, the single generator learns to translate images across multiple domains [11]. | [
11
] | [
{
"id": "1711.09020_all_0",
"text": " The task of image-to-image translation is to change a particular aspect of a given image to another, e.g., changing the facial expression of a person from smiling to frowning (see Fig. StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Tra... |
Do both of them use user-provided masks for guidance, with DiffusionCLIP (Kim et al.) performing global changes while Blended Diffusion (Avrahami et al.) performs local manipulations? | DiffusionCLIP (Kim et al.) does not use user-provided masks and exploits recent diffusion models to perform global changes, as most editing works are limited to global editing when no masks are provided [6], while Blended Diffusion (Avrahami et al.) uses user-provided masks to perform local manipulations [7]. | [
6,
7
] | [
{
"id": "2208.01626_all_0",
"text": " Recently, large-scale language-image (LLI) models, such as Imagen , DALL·E 2 and Parti , have shown phenomenal generative semantic and compositional power, and gained unprecedented attention from the research community and the public eye. These LLI models are trained o... |
What if a query term can be matched to multiple document terms? Does MaxSim suffice for capturing query-document relevance, for this case too? | If a query term can be matched to multiple document terms, MaxSim still suffices for capturing query-document relevance [17]. ColBERT computes the relevance score between q and d via late interaction, which we define as a summation of maximum similarity (MaxSim) operators [26]. | [
17,
26
] | [
{
"id": "2004.12832_all_0",
"text": " Over the past few years, the Information Retrieval (IR) community has witnessed the introduction of a host of neural ranking models, including DRMM (Guo et al., 2016), KNRM (Xiong et al., 2017; Dai et al., 2018), and Duet (Mitra et al., 2017; Mitra and Craswell, 2019). ... |
As the data in Frey Face dataset is continuous, how did the authors process it? | They consider some dataset \mathbf{X}=\{\mathbf{x}^{(i)}\}_{i=1}^{N} consisting of N iid [2]. samples of some continuous or discrete variable \mathbf{x} [26]. They assume that the data are generated by some random process, involving an unobserved continuous random variable \mathbf{z} [3]. | [
2,
26,
3
] | [
{
"id": "1312.6114_all_0",
"text": " How can we perform efficient approximate inference and learning with directed probabilistic models whose continuous latent variables and/or parameters have intractable posterior distributions? The variational Bayesian (VB) approach involves the optimization of an approxi... |
In one of the examples in the paper, the longer rule gave a validated fact check over the short rule. Does that hinder your intuition? | The longer rule gave a validated fact check over the short rule in some examples, but it does not hinder our intuition [9]. | [
9
] | [
{
"id": "2212.13894_all_0",
"text": " Automated reasoning, the ability to draw valid conclusions from explicitly provided knowledge, has been a fundamental goal for AI since its early days McCarthy (1959); Hewitt (1969). Furthermore, logical reasoning, especially reasoning with unstructured, natural text is... |
When discussing the JFT specialist training, the authors refer to a "dustbin" class. Give an example of a sample that might be assigned to this class. | The dustbin class was the combination of all of the classes that a specialist does not care about [25]. Examples of samples that might be assigned to the dustbin class cannot be provided from this paper [36]. | [
25,
36
] | [
{
"id": "1503.02531_all_0",
"text": " Many insects have a larval form that is optimized for extracting energy and nutrients from the environment and a completely different adult form that is optimized for the very different requirements of traveling and reproduction. In large-scale machine learning, we typi... |
Why should the proposed method have smoothly varying weights for transformations? | The proposed method uses smoothly varying weights for transformations to generate realistic and locally transformed samples [13]. | [
13
] | [
{
"id": "2110.05379_all_0",
"text": " Modern deep learning techniques, which established their popularity on structured data, began showing success on point clouds. Unlike images with clear lattice structures, each point cloud is an unordered set of points with no inherent structures that globally represent... |
What scales and aspect ratios are used for designing anchor boxes? | The authors used anchor boxes spanning 4 sub-octave scales (2^{k/4}, for k\leq 3) and 3 aspect ratios [0.5, 1, 2] [49]. | [
49
] | [
{
"id": "1708.02002_all_0",
"text": " Current state-of-the-art object detectors are based on a two-stage, proposal-driven mechanism. As popularized in the R-CNN framework , the first stage generates a sparse set of candidate object locations and the second stage classifies each candidate location as one of ... |