Columns: input (string, lengths 14–315) · answer (string, lengths 9–2.16k) · gold_ctxs (list, lengths 1–15) · ctxs (list, lengths 11–186)
How accurate or correct was their few-shot approach to making GPT-3 verbalize its understanding?
The authors mention in multiple places how their iterative correction/feedback process depends on GPT verbalizing its thinking process or its understanding of the user's inputs or needs [20]. They explain how they encourage this behaviour through modifying the prompt, but this paper does not seem to quantifiably measure how "accurate" this verbalization is [3].
[ 20, 3 ]
[ { "id": "2201.06009_all_0", "text": " Language models are now better than ever before at generating realistic content, but still lack commonsense Bender and Koller (2020); Marcus (2021). One failure mode due to a lack of commonsense is in misunderstanding a user’s intent. The typical remedy of retraining wi...
Which datasets are used by the paper for training and testing of unsupervised learning?
The UCF-101, HMDB-51, and YouTube videos datasets are used for supervised learning [23].
[ 23 ]
[ { "id": "1502.04681_all_0", "text": " Understanding temporal sequences is important for solving many problems in the AI-set. Recently, recurrent neural networks using the Long Short Term Memory (LSTM) architecture (Hochreiter & Schmidhuber, 1997) have been used successfully to perform various supervised seq...
What is the strategy used in [54] for directly warping between different views?
Appearance Flows [54] is an end-to-end learning method for reconstructing novel views [15]. In this method, warping coordinates for pixel warping are obtained through projective geometry, which enables the factorization of depth and camera pose [4].
[ 15, 4 ]
[ { "id": "1704.07813_all_0", "text": " Humans are remarkably capable of inferring ego-motion and the 3D structure of a scene even over short timescales. For instance, in navigating along a street, we can easily locate obstacles and react quickly to avoid them. Years of research in geometric computer vision h...
How important is data augmentation for final model accuracy using SSD?
Data augmentation improves the performance of SSD, especially on small datasets like PASCAL VOC [29].
[ 29 ]
[ { "id": "1512.02325_all_0", "text": " Current state-of-the-art object detection systems are variants of the following approach: hypothesize bounding boxes, resample pixels or features for each box, and apply a high-quality classifier. This pipeline has prevailed on detection benchmarks since the Selective S...
Shouldn't the inputs in the decoder be updated during training after initially being set to -inf?
The Transformer decoder generates an output sequence (y_{1},\ldots,y_{m}) of symbols one element at a time, using the encoder information [8].
[ 8 ]
[ { "id": "1706.03762_all_0", "text": " Recurrent neural networks, long short-term memory and gated recurrent neural networks in particular, have been firmly established as state of the art approaches in sequence modeling and transduction problems such as language modeling and machine translation (35, 2, 5)...
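A point the answer above leaves implicit: in the Transformer decoder, the -inf values are entries of a fixed causal mask added to the attention scores before the softmax; they are a constant, not trainable inputs, so there is nothing to update during training. A minimal numpy sketch (the 4x4 size and uniform scores are illustrative assumptions):

```python
import numpy as np

def causal_softmax(scores):
    """Softmax over attention scores with a fixed causal mask.

    The -inf entries are a constant mask (not learned parameters):
    after the softmax they zero out attention to future positions.
    """
    m = scores.shape[-1]
    mask = np.triu(np.full((m, m), -np.inf), k=1)  # -inf strictly above the diagonal
    masked = scores + mask
    e = np.exp(masked - masked.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

weights = causal_softmax(np.ones((4, 4)))
# each row attends uniformly over positions <= its own index
```

Because the mask is re-applied at every step, the same -inf pattern is used throughout training; only the score-producing weights are learned.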
Does Conditional Computation mentioned by the authors mean to perform operations depending on the need to perform them?
The paper only mentions the advantages of conditional computation, namely improved computational efficiency and model performance [1].
[ 1 ]
[ { "id": "1706.03762_all_0", "text": " Recurrent neural networks, long short-term memory and gated recurrent neural networks in particular, have been firmly established as state of the art approaches in sequence modeling and transduction problems such as language modeling and machine translation (35, 2, 5)...
Why are the parameters important? What can we do with them?
The authors introduce a practical estimator of the lower bound and of its derivatives w.r.t. the parameters [0, 27]. They introduce a recognition model q_{\boldsymbol{\phi}}(\mathbf{z}|\mathbf{x}): an approximation to the intractable true posterior p_{\boldsymbol{\theta}}(\mathbf{z}|\mathbf{x}) [3]. This makes it possible to perform efficient approximate inference and learning with directed probabilistic models whose continuous latent variables and/or parameters have intractable posterior distributions [4].
[ 0, 27, 3, 4 ]
[ { "id": "1312.6114_all_0", "text": " How can we perform efficient approximate inference and learning with directed probabilistic models whose continuous latent variables and/or parameters have intractable posterior distributions? The variational Bayesian (VB) approach involves the optimization of an approxi...
Is RACE a binary classification task?
RACE is the task of selecting the one correct answer from 4 options, so it is not binary [27].
[ 27 ]
[ { "id": "1907.11692_all_0", "text": " Self-training methods such as ELMo Peters et al. (2018), GPT Radford et al. (2018), BERT Devlin et al. (2019), XLM Lample and Conneau (2019), and XLNet Yang et al. (2019) have brought significant performance gains, but it can be challenging to determine which aspects of...
Why did the author add one more direction in attention flow?
In order to obtain a query-aware context representation, the authors use bi-directional attention flow [1].
[ 1 ]
[ { "id": "1611.01603_all_0", "text": " The tasks of machine comprehension (MC) and question answering (QA) have gained significant popularity over the past few years within the natural language processing and computer vision communities. Systems trained end-to-end now achieve promising results on a variety o...
How does NGMPool work exactly? How is it different from GMPool?
NGMPool is a single-pooling variant of GMPool that does not perform SVD on the grouping matrix, but rather uses the grouping matrix as is [4].
[ 4 ]
[ { "id": "2209.02939_all_0", "text": " Graph Neural Networks (GNNs) learn representations of individual nodes based on the connectivity structure of an input graph. For graph-level prediction tasks, the standard procedure globally pools all the node features into a single graph representation without weight ...
What is the anchor point in this paper?
In this paper, the anchor points are a subset of a set of points (denoted as P) that are selected using the Farthest Point Sampling (FPS) algorithm [11].
[ 11 ]
[ { "id": "2110.05379_all_0", "text": " Modern deep learning techniques, which established their popularity on structured data, began showing success on point clouds. Unlike images with clear lattice structures, each point cloud is an unordered set of points with no inherent structures that globally represent...
Why does the depth model suffer from close objects?
The proposed model sometimes fails for objects close to the front of the camera [10].
[ 10 ]
[ { "id": "1704.07813_all_0", "text": " Humans are remarkably capable of inferring ego-motion and the 3D structure of a scene even over short timescales. For instance, in navigating along a street, we can easily locate obstacles and react quickly to avoid them. Years of research in geometric computer vision h...
How could we define whether two domains are distributed identically?
Given one set of images in domain X and a different set in domain Y, it is possible to train a mapping G:X→ Y such that the output \hat{y}=G(x), x\in X, is indistinguishable from images y\in Y by an adversary trained to classify \hat{y} apart from y [5].
[ 5 ]
[ { "id": "1703.10593_all_0", "text": " What did Claude Monet see as he placed his easel by the bank of the Seine near Argenteuil on a lovely spring day in 1873 (Figure 1, top-left)? A color photograph, had it been invented, may have documented a crisp blue sky and a glassy river reflecting it. Monet conveyed...
Why is the masking policy only updated every certain number of steps?
The authors mention that they update the masking function every mu steps, but the main text of the paper does not appear to state the exact value of mu. It is possible that the model could work with mu = 1 (i.e., updating every step) [11] instead of updating every several steps (i.e., mu > 1) [12], though the authors do not say in this paper whether this was tried [22].
[ 11, 12, 22 ]
[ { "id": "2210.01241_all_0", "text": " The ultimate aim of language technology is to interact with humans. However, most language models are trained without direct signals of human preference, with supervised target strings serving as (a sometimes crude) proxy. One option to incorporate user feedback is via ...
How do the authors extract the subgraph of each node?
The authors extract the entire k-hop subgraph for each node [18].
[ 18 ]
[ { "id": "2202.03036_all_0", "text": " Graph neural networks (GNNs) have been established as powerful and flexible tools for graph representation learning, with successful applications in drug discovery (Gaudelet et al., 2021), protein design (Ingraham et al., 2019), social network analysis (Fan et al., 2019...
How many layers does MobileNet have?
MobileNet has 28 layers [27].
[ 27 ]
[ { "id": "1704.04861_all_0", "text": " Convolutional neural networks have become ubiquitous in computer vision ever since AlexNet popularized deep convolutional neural networks by winning the ImageNet Challenge: ILSVRC 2012 . The general trend has been to make deeper and more complicated networks in order t...
The paper mentions how training two models, one forward model and one backward model (to achieve bidirectionality) results in a performance gain. Is the performance gain proportional to the 2x increase in training cost?
Training a second model to ensemble a bidirectional model brings a performance boost of around 0.5 to 0.7 and, on IMDb, it lowers test error from 5.30 to 4.58 [50].
[ 50 ]
[ { "id": "1801.06146_all_0", "text": " Inductive transfer learning has had a large impact on computer vision (CV). Applied CV models (including object detection, classification, and segmentation) are rarely trained from scratch, but instead are fine-tuned from models that have been pretrained on ImageNet, MS...
What was the problem of previous work, R-CNN in terms of the efficiency?
R-CNN requires a multi-stage training pipeline and is time-consuming at evaluation [4].
[ 4 ]
[ { "id": "1504.08083_all_0", "text": " Recently, deep ConvNets (14, 16) have significantly improved image classification and object detection (9, 19) accuracy. Compared to image classification, object detection is a more challenging task that requires more complex methods to solve. Due to this complexity, c...
What are the likely problems authors would have encountered if they did not use batch normalization and dropout during training?
Since there is no evidential information about the effect of batch normalization and dropout, this question cannot be answered from the paper and requires external knowledge [22].
[ 22 ]
[ { "id": "1801.04381_all_0", "text": " Neural networks have revolutionized many areas of machine intelligence, enabling superhuman accuracy for challenging image recognition tasks. However, the drive to improve accuracy often comes at a cost: modern state of the art networks require high computational resour...
What differences exist in the labeling procedure for classification datasets/detection datasets for there to be such a large difference in scale?
The difference in scale between classification and detection datasets is due to the fact that labelling images for detection is far more expensive than labelling for classification or tagging [1]. For example, common object detection datasets contain only tens to hundreds of thousands of images with dozens to hundreds of tags, whereas image classification datasets have millions of images with thousands of classes [2]. Object detection methods like YOLO can utilize this large amount of classification data to help the detection task [3].
[ 1, 2, 3 ]
[ { "id": "1612.08242_all_0", "text": " General purpose object detection should be fast, accurate, and able to recognize a wide variety of objects. Since the introduction of neural networks, detection frameworks have become increasingly fast and accurate. However, most detection methods are still constrained ...
Can we enclose all appearances of prostate MRI volumes?
Due to training, testing, and augmentation on a diverse set of prostate scans, all appearances of the prostate can be encoded with V-Net [15].
[ 15 ]
[ { "id": "1606.04797_all_0", "text": " Recent research in computer vision and pattern recognition has highlighted the capabilities of Convolutional Neural Networks (CNNs) to solve challenging tasks such as classification, segmentation and object detection, achieving state-of-the-art performances. This succes...
Is this true? NILM has only classification tasks.
It is true: NILM comprises three kinds of tasks, and all of them are classification tasks [8].
[ 8 ]
[ { "id": "2210.12302_all_0", "text": " Pretrained Language Models (LMs) have shown singular succcess on a range of natural language understandings tasks, to the extent that they have become foundational for contemporary NLP systems. Several works have investigated why pretraining works so well Warstadt et al...
What is used as a recognition model in variational auto encoder?
The AEVB algorithm exposes a connection between directed probabilistic models (trained with a variational objective) and auto-encoders [1]. The authors introduce a recognition model q_{\boldsymbol{\phi}}(\mathbf{z}|\mathbf{x}) [22]. All parameters, both variational and generative, were initialized by random sampling from \mathcal{N}(0, 0.01) and were jointly stochastically optimized using the MAP criterion [26].
[ 1, 22, 26 ]
[ { "id": "1312.6114_all_0", "text": " How can we perform efficient approximate inference and learning with directed probabilistic models whose continuous latent variables and/or parameters have intractable posterior distributions? The variational Bayesian (VB) approach involves the optimization of an approxi...
What component of the model eliminates the effect of uncertain negative interactions after the positive interaction augmentation?
Online encoders prevent models from collapsing into trivial solutions without explicitly using negative interactions for optimization [5].
[ 5 ]
[ { "id": "2105.06323_all_0", "text": " Over the past decade, one-class collaborative filtering (OCCF) problems (Pan et al., 2008; Hu et al., 2008) have been extensively researched to accurately infer a user’s preferred items, particularly for the recommender systems where only the users’ implicit feedback on...
What does “receptive fields” mean?
A receptive field is the region of the input (or of a lower layer) to which a unit in a higher layer is connected [14].
[ 14 ]
[ { "id": "1411.4038_all_0", "text": " Convolutional networks are driving advances in recognition. Convnets are not only improving for whole-image classification (19, 31, 32), but also making progress on local tasks with structured output. These include advances in bounding box object detection (29, 12, 17), ...
Is this work focused only on solving cases where GPT-3 misunderstands the users' intents?
Yes, this work is primarily focused on solving cases when GPT-3 misunderstands user input [11]. The authors do discuss one specialized use case, on how memory-assisted models such as these could be used to personalize models, but even this use case could be seen as a subset of the broader use case of users correcting a model's misunderstanding [2].
[ 11, 2 ]
[ { "id": "2201.06009_all_0", "text": " Language models are now better than ever before at generating realistic content, but still lack commonsense Bender and Koller (2020); Marcus (2021). One failure mode due to a lack of commonsense is in misunderstanding a user’s intent. The typical remedy of retraining wi...
What is the difficulty of augmentation on point clouds compared to augmentation on traditional 2d images?
The difficulty of augmentation on point clouds compared to traditional 2D images is primarily due to the unordered and unstructured nature of point clouds [0].
[ 0 ]
[ { "id": "2110.05379_all_0", "text": " Modern deep learning techniques, which established their popularity on structured data, began showing success on point clouds. Unlike images with clear lattice structures, each point cloud is an unordered set of points with no inherent structures that globally represent...
How can the difference between the black and orange lines, which represent two samples from different z(str), be specifically interpreted from a musical perspective?
The difference between the black and orange lines can be interpreted as a granular variety in the performing strategies with respect to the given musical structure by different performers [3]. Those different strategies can represent the common technique that the performers may choose to represent the musical structure, but they may vary since they are induced from two human behaviors that cannot be identical to each other [33].
[ 3, 33 ]
[ { "id": "2208.14867_all_0", "text": " Computational modeling of expressive music performance focuses on mimicking human behaviors that convey the music (1, 2). For piano performance, one common task is to render an expressive performance from a quantized musical score. It aims to reproduce the loudness and ...
In what ways can it be said that the concatenation acts as a skip connection?
A skip connection considers information from different search depths or layers simultaneously [12]. GraphSAGE uses a set of weight matrices and concatenation to consider information from diverse search depths [5]. This can be interpreted as a skip connection [28].
[ 12, 5, 28 ]
[ { "id": "1706.02216_all_0", "text": " Low-dimensional vector embeddings of nodes in large graphs111While it is common to refer to these data structures as social or biological networks, we use the term graph to avoid ambiguity with neural network terminology. have proved extremely useful as feature inputs f...
What's the value of the λ scale?
For all the experiments, the paper sets lambda = 10 [25]. For flower photo enhancement and Monet's paintings→photos, an identity mapping loss of weight 0.5*lambda was used, while lambda = 10 was kept throughout [66].
[ 25, 66 ]
[ { "id": "1703.10593_all_0", "text": " What did Claude Monet see as he placed his easel by the bank of the Seine near Argenteuil on a lovely spring day in 1873 (Figure 1, top-left)? A color photograph, had it been invented, may have documented a crisp blue sky and a glassy river reflecting it. Monet conveyed...
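As a sanity check on these weights, the full CycleGAN objective can be sketched as scalar bookkeeping: the two adversarial terms, a cycle-consistency term scaled by lambda = 10, and (where used) an identity term scaled by 0.5 * lambda. The function name and example loss values below are illustrative assumptions, not the paper's code:

```python
def cyclegan_objective(loss_gan_g, loss_gan_f, loss_cyc, loss_idt=0.0, lam=10.0):
    """Total loss = adversarial terms + lam * cycle loss + 0.5 * lam * identity loss."""
    return loss_gan_g + loss_gan_f + lam * loss_cyc + 0.5 * lam * loss_idt

# toy scalar losses, just to show the weighting
total = cyclegan_objective(1.0, 1.0, loss_cyc=2.0, loss_idt=1.0)
```

The cycle term dominates the sum, which reflects how strongly the method relies on cycle consistency relative to the adversarial losses.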
How many candidate object locations are sampled across an image for RetinaNet?
Approximately 100k candidate object locations are sampled across an image for RetinaNet [4].
[ 4 ]
[ { "id": "1708.02002_all_0", "text": " Current state-of-the-art object detectors are based on a two-stage, proposal-driven mechanism. As popularized in the R-CNN framework , the first stage generates a sparse set of candidate object locations and the second stage classifies each candidate location as one of ...
What are some examples of unseen nodes in the real world?
New posts on Reddit, and new users and videos on YouTube, are examples of unseen data [1].
[ 1 ]
[ { "id": "1706.02216_all_0", "text": " Low-dimensional vector embeddings of nodes in large graphs111While it is common to refer to these data structures as social or biological networks, we use the term graph to avoid ambiguity with neural network terminology. have proved extremely useful as feature inputs f...
What values were used for lambda.s and lambda.e?
For all the experiments, the paper uses lambda_s = 0.5 and lambda_e = 0.2 [23].
[ 23 ]
[ { "id": "1704.07813_all_0", "text": " Humans are remarkably capable of inferring ego-motion and the 3D structure of a scene even over short timescales. For instance, in navigating along a street, we can easily locate obstacles and react quickly to avoid them. Years of research in geometric computer vision h...
What does "variable-length alignment" mean?
A variable-length alignment is a vector derived by comparing the current target hidden state with each source hidden state; its size equals the number of time steps on the source side, as explained in Figure 2 [14].
[ 14 ]
[ { "id": "1508.04025_all_0", "text": " Neural Machine Translation (NMT) achieved state-of-the-art performances in large-scale translation tasks such as from English to French (Luong et al., 2015) and English to German (Jean et al., 2015). NMT is appealing since it requires minimal domain knowledge and is con...
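The "variable-length" part can be seen directly in code: the alignment vector has one entry per source time step, so its length changes with the source sentence. A minimal numpy sketch using dot-product scoring (one of the scoring functions discussed in the paper; the toy hidden states are assumptions):

```python
import numpy as np

def alignment_vector(target_h, source_hs):
    """Score one target hidden state against every source hidden state,
    then softmax: the result has one weight per source time step."""
    scores = source_hs @ target_h            # dot-product score per source step
    e = np.exp(scores - scores.max())
    return e / e.sum()                       # variable-length alignment vector

src = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # 3 source time steps
a = alignment_vector(np.array([1.0, 0.0]), src)
```

With a 5-step source sentence the same function would return a length-5 vector, which is exactly why the alignment is called variable-length.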
What are the layers of the Model SRCNN?
SRCNN has three layers in the following order: patch extraction/representation, non-linear mapping, and reconstruction [11].
[ 11 ]
[ { "id": "1511.04587_all_0", "text": " We address the problem of generating a high-resolution (HR) image given a low-resolution (LR) image, commonly referred as single image super-resolution (SISR) , , . SISR is widely used in computer vision applications ranging from security and surveillance imaging to med...
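For concreteness, assuming the commonly cited 9-1-5 SRCNN setting (9x9x1x64 patch extraction, 1x1x64x32 non-linear mapping, 5x5x32x1 reconstruction; this configuration detail comes from the SRCNN paper, not from the answer above), the weight counts of the three layers can be tallied:

```python
def conv_weights(k, c_in, c_out):
    """Number of weights in a k x k convolution layer (biases excluded)."""
    return k * k * c_in * c_out

# patch extraction, non-linear mapping, reconstruction
layers = [conv_weights(9, 1, 64), conv_weights(1, 64, 32), conv_weights(5, 32, 1)]
total = sum(layers)
```

The tally matches the roughly 8k weights usually quoted for the basic SRCNN configuration.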
How do we get the silhoutte code for the class labels?
The silhouette code is manually labeled by the in-house fashion experts and reviewed by a separate team at Zalando [7].
[ 7 ]
[ { "id": "1708.07747_all_0", "text": " The MNIST dataset comprising of 10-class handwritten digits, was first introduced by LeCun et al. (1998) in 1998. At that time one could not have foreseen the stellar rise of deep learning techniques and their performance. Despite the fact that today deep learning can d...
For models with inputs of the form ([CLS], x1, ..., xN, [SEP], y1, ..., yM, [EOS]), calculate the maximum value of N + M in the RoBERTa case.
It is not true [5]. BERT takes two concatenated sequences as input, [\mathit{CLS}],x_{1},\ldots,x_{N},[\mathit{SEP}],y_{1},\ldots,y_{M}, and N+M is constrained to be less than the maximum sequence length (512 tokens in RoBERTa) [67].
[ 5, 67 ]
[ { "id": "1907.11692_all_0", "text": " Self-training methods such as ELMo Peters et al. (2018), GPT Radford et al. (2018), BERT Devlin et al. (2019), XLM Lample and Conneau (2019), and XLNet Yang et al. (2019) have brought significant performance gains, but it can be challenging to determine which aspects of...
In what areas has face recognition technology been commonly used?
Face recognition is widely used in the military, finance, public security, and daily life [0].
[ 0 ]
[ { "id": "1804.06655_all_0", "text": " Face recognition (FR) has been the prominent biometric technique for identity authentication and has been widely used in many areas, such as military, finance, public security and daily life. FR has been a long-standing research topic in the CVPR community. In the early...
What are the layers of depthwise separable convolution and discuss the function of each of them.
Depthwise separable convolutions have two layers: depthwise and pointwise convolutions [14]. Depthwise convolutions apply a single filter per input channel (input depth), and the 1×1 pointwise convolutions then linearly combine the depthwise outputs to create new features [6].
[ 14, 6 ]
[ { "id": "1704.04861_all_0", "text": " Convolutional neural networks have become ubiquitous in computer vision ever since AlexNet popularized deep convolutional neural networks by winning the ImageNet Challenge: ILSVRC 2012 . The general trend has been to make deeper and more complicated networks in order t...
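The computational saving behind this split, as derived in the MobileNet paper, reduces to a ratio of 1/N + 1/Dk^2 between a separable convolution's mult-adds and a standard convolution's. A quick sketch (the example layer sizes are arbitrary assumptions):

```python
def conv_cost(dk, m, n, df):
    """Mult-adds of a standard Dk x Dk conv: M input channels,
    N output channels, Df x Df output feature map."""
    return dk * dk * m * n * df * df

def separable_cost(dk, m, n, df):
    """Depthwise (one Dk x Dk filter per input channel) plus
    pointwise (1x1, M -> N channels) mult-adds."""
    return dk * dk * m * df * df + m * n * df * df

ratio = separable_cost(3, 64, 128, 56) / conv_cost(3, 64, 128, 56)
# algebraically, ratio = 1/N + 1/Dk^2
```

For 3x3 kernels this works out to roughly an 8-9x reduction in computation, which is the figure MobileNet's efficiency rests on.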
What information from the input images do ORB features extract?
ORB features are extracted at salient keypoints in both images of the stereo pair [15]. For every ORB feature in the left image, a matching feature can be found in the right image [17]. ORB extracts features that are robust to rotation and scale and present good invariance to camera auto-gain, auto-exposure, and illumination changes [18].
[ 15, 17, 18 ]
[ { "id": "1610.06475_all_0", "text": " Simultaneous Localization and Mapping (SLAM) has been a hot research topic in the last two decades in the Computer Vision and Robotics communities, and has recently attracted the attention of high-technological companies. SLAM techniques build a map of an unknown enviro...
How does the YOLOv3 algorithm calculate the coordinates of the predicted box from the anchor box and output coordinates?
The question is partially answered by the passage "If the cell is offset from the top left corner of the image by (c_{x},c_{y}) and the bounding box prior has width and height p_{w}, p_{h}, then the predictions correspond to:", and is answered completely by the expression that follows in the paper [4].
[ 4 ]
[ { "id": "1804.02767_all_0", "text": " Sometimes you just kinda phone it in for a year, you know? I didn’t do a whole lot of research this year. Spent a lot of time on Twitter. Played around with GANs a little. I had a little momentum left over from last year ; I managed to make some improvements to YOLO. B...
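The expression referred to above is the YOLOv3 decoding rule b_x = sigma(t_x) + c_x, b_y = sigma(t_y) + c_y, b_w = p_w * e^{t_w}, b_h = p_h * e^{t_h}. A direct transcription (the example cell offsets and prior sizes are assumptions for illustration):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def decode_box(tx, ty, tw, th, cx, cy, pw, ph):
    """YOLOv3 box decoding: sigmoid keeps the center offset inside its grid
    cell; the prior (anchor) dimensions are scaled exponentially."""
    bx = cx + sigmoid(tx)
    by = cy + sigmoid(ty)
    bw = pw * math.exp(tw)
    bh = ph * math.exp(th)
    return bx, by, bw, bh

# zero network outputs land the center mid-cell and keep the prior's size
bx, by, bw, bh = decode_box(0.0, 0.0, 0.0, 0.0, cx=3, cy=2, pw=10.0, ph=20.0)
```

The sigmoid on (t_x, t_y) is what ties each prediction to its own grid cell, while the exponential lets the width and height grow or shrink relative to the anchor prior.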
What does "real information seeking queries" mean in NQ dataset, compared to other datasets?
Natural Questions contains general domain queries, which aligns well with the question-answer pairs for training the QA model [46].
[ 46 ]
[ { "id": "2004.14503_all_0", "text": " Recent advances in neural retrieval have led to advancements on several document, passage and knowledge-base benchmarks Guo et al. (2016); Pang et al. (2016); Hui et al. (2017); Dai et al. (2018); Gillick et al. (2018); Nogueira and Cho (2019a); MacAvaney et al. (2019);...
Does NASNets perform better than MobileNet, ShuffleNet under resource-constraint setting?
From the cited sentence, it is clear that NASNets, at 74.0% accuracy, perform better than MobileNet and ShuffleNet, at 70.6% and 70.9% accuracy respectively [28].
[ 28 ]
[ { "id": "1707.07012_all_0", "text": " Developing neural network image classification models often requires significant architecture engineering. Starting from the seminal work of  on using convolutional architectures (17, 34) for ImageNet  classification, successive advancements through architecture enginee...
why would we need to increase learning rate for the first few training steps while we initially use Adam?
With the Adam optimizer, the learning rate is increased linearly at the start of training for the purpose of warmup [42].
[ 42 ]
[ { "id": "1706.03762_all_0", "text": " Recurrent neural networks, long short-term memory and gated recurrent neural networks in particular, have been firmly established as state of the art approaches in sequence modeling and transduction problems such as language modeling and machine translation (35, 2, 5)...
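The schedule in question, from the Transformer paper, increases the learning rate linearly for the first warmup_steps steps and then decays it proportionally to the inverse square root of the step number:

```python
def transformer_lr(step, d_model=512, warmup_steps=4000):
    """lrate = d_model^-0.5 * min(step^-0.5, step * warmup_steps^-1.5).

    Requires step >= 1. During warmup the second argument of min() is
    smaller, giving linear growth; afterwards the first dominates,
    giving inverse-square-root decay.
    """
    return d_model ** -0.5 * min(step ** -0.5, step * warmup_steps ** -1.5)
```

The rationale for the warmup is that Adam's moment estimates are unreliable in the first steps; starting small and ramping up avoids destabilizing updates before the statistics settle.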
What do the authors mean by attention-aware features in the context of images ?
The authors are talking about features that were learned using the attention mechanism [0]. The model focuses on such features, which can include color, scale, or spatial information, when it processes an image for classification [15]. For example, the attention mechanism can learn that blue sky pixels in the background of an image are not important for classification, and the model will consequently reduce the contribution of those pixels to the final classification result [20].
[ 0, 15, 20 ]
[ { "id": "1704.06904_all_0", "text": " Not only a friendly face but also red color will draw our attention. The mixed nature of attention has been studied extensively in the previous literatures (34, 16, 23, 40). Attention not only serves to select a focused location but also enhances different representatio...
What is gradient clipping?
Gradient clipping is a technique often used in training RNNs that restricts the gradients to a predefined range so they cannot explode [33].
[ 33 ]
[ { "id": "1511.04587_all_0", "text": " We address the problem of generating a high-resolution (HR) image given a low-resolution (LR) image, commonly referred as single image super-resolution (SISR) , , . SISR is widely used in computer vision applications ranging from security and surveillance imaging to med...
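A minimal numpy sketch of element-wise gradient clipping (the limit of 1.0 is an arbitrary choice for illustration; libraries also offer clipping by the gradient's global norm instead):

```python
import numpy as np

def clip_gradients(grads, limit):
    """Element-wise clip gradients to [-limit, limit] to prevent explosion."""
    return np.clip(grads, -limit, limit)

g = clip_gradients(np.array([-5.0, 0.3, 12.0]), 1.0)
```

Values already inside the range pass through unchanged; only the outliers are capped, which is what keeps an occasional huge gradient from derailing training.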
What does "non-maximum suppression" mean?
The authors do not explain exactly what non-maximum suppression is [16].
[ 16 ]
[ { "id": "1605.06409_all_0", "text": " A prevalent family (8, 6, 18) of deep networks for object detection can be divided into two subnetworks by the Region-of-Interest (RoI) pooling layer : (i) a shared, “fully convolutional” subnetwork independent of RoIs, and (ii) an RoI-wise subnetwork that does not shar...
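Since the paper leaves it undefined: non-maximum suppression greedily keeps the highest-scoring box and discards any remaining box that overlaps it beyond an IoU threshold, then repeats. A standard sketch (not the paper's implementation; box format and threshold are assumptions):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def nms(boxes, scores, thresh=0.5):
    """Greedy NMS: keep the best box, drop boxes overlapping it
    by more than `thresh` IoU, repeat on what remains."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) <= thresh]
    return keep

# two heavily overlapping boxes plus one distant box
boxes = [(0, 0, 10, 10), (1, 1, 10, 10), (20, 20, 30, 30)]
kept = nms(boxes, [0.9, 0.8, 0.7])
```

The second box is suppressed because it overlaps the top-scoring box almost entirely, while the distant third box survives.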
Why does this method use patchGAN? Are there any benefits using this model?
The paper uses 70 × 70 PatchGANs for the discriminator networks, which aim to classify whether 70 × 70 overlapping image patches are real or fake [22].
[ 22 ]
[ { "id": "1703.10593_all_0", "text": " What did Claude Monet see as he placed his easel by the bank of the Seine near Argenteuil on a lovely spring day in 1873 (Figure 1, top-left)? A color photograph, had it been invented, may have documented a crisp blue sky and a glassy river reflecting it. Monet conveyed...
What if one of the generated questions was poor in quality? Wouldn't this result in errors being propagated through the dataset to generate a high number of low quality instructions?
Yes, if one of the generated examples is of poor quality, it is possible that this error could propagate through the dataset, leading to a high number of erroneous or low-quality instructions [7]. However, the authors do evaluate and compare the quality of generated samples and find that most samples are of high quality [17]. Even among the samples that are not fully correct, most are at least partly correct [44].
[ 7, 17, 44 ]
[ { "id": "2212.10560_all_0", "text": " The recent NLP literature has witnessed a tremendous amount of activity in building models that can follow natural language instructions (Mishra et al., 2022; Wei et al., 2022; Sanh et al., 2022; Wang et al., 2022; Ouyang et al., 2022; Chung et al., 2022, i.a.). These d...
Compared with phrase-level masking, what are the advantages of entity-level phrase?
Compared to entity-level masking, phrase-level masking gives models better generalization and adaptability [2].
[ 2 ]
[ { "id": "1904.09223_all_0", "text": " Language representation pre-training Mikolov et al. (2013); Devlin et al. (2018) has been shown effective for improving many natural language processing tasks such as named entity recognition, sentiment analysis, and question answering. In order to get reliable word rep...
What was the goal behind reducing the filter size and stride of AlexNet and GoogLeNet?
The authors reduced the filter size and stride of the two models because the input size used was smaller than what the original models were trained on [20].
[ 20 ]
[ { "id": "1602.03409_all_0", "text": " Tremendous progress has been made in image recognition, primarily due to the availability of large-scale annotated datasets (i.e. ImageNet (1, 2)) and the recent revival of deep convolutional neural networks (CNN) (3, 4). For data-driven learning, large-scale well-annot...
What is the combinatorial irregularities and complexities of meshes?
Point clouds are simple and unified structures that avoid the combinatorial irregularities and complexities of meshes, and thus are easier to learn from [1].
[ 1 ]
[ { "id": "1612.00593_all_0", "text": " In this paper we explore deep learning architectures capable of reasoning about 3D geometric data such as point clouds or meshes. Typical convolutional architectures require highly regular input data formats, like those of image grids or 3D voxels, in order to perform w...
How did the authors measure "interpretability"?
The authors' definition of interpretability is measured in terms of higher importance scores in the attention heads [3].
[ 3 ]
[ { "id": "2112.05364_all_0", "text": " With transformer-based models (Vaswani et al., 2017) dominating the leaderboard for many key NLP tasks such as summarization (Liu and Lapata, 2019), topic segmentation (Lukasik et al., 2020), and sentiment analysis Adhikari et al. (2019), their core multi-head self-att...
Is RPN used as a secondary classifier for proposing bounding boxes in the F-RCNN framework?
Yes, the RPN is used as a secondary classifier that proposes bounding boxes in the F-RCNN framework [8].
[ 8 ]
[ { "id": "1708.02002_all_0", "text": " Current state-of-the-art object detectors are based on a two-stage, proposal-driven mechanism. As popularized in the R-CNN framework , the first stage generates a sparse set of candidate object locations and the second stage classifies each candidate location as one of ...
What is the difference between the local model and the global model?
The local model optimizes the marginalized probability of the correct antecedents for each given span, whereas the global model overcomes this inherent limitation by using bidirectional connections between mentions [1].
[ 1 ]
[ { "id": "2108.13530_all_0", "text": " In this paper we explore a principled approach to solve entity linking (EL) jointly with coreference resolution (coref). Concretely, we formulate coref+EL as a single structured task over directed trees that conceives EL and coref as two complementary components: a core...
RoBERTa uses large batch size. How many times larger than BERT-large one?
RoBERTa uses a 32 times larger batch size than BERT, since the batch sizes of BERT and RoBERTa are 256 and 8K (8,192), respectively [40].
[ 40 ]
[ { "id": "1907.11692_all_0", "text": " Self-training methods such as ELMo Peters et al. (2018), GPT Radford et al. (2018), BERT Devlin et al. (2019), XLM Lample and Conneau (2019), and XLNet Yang et al. (2019) have brought significant performance gains, but it can be challenging to determine which aspects of...
What metric is used for measuring Computational demand of a network?
To measure the computational demand of the network, top-1 accuracy metric was used [3].
[ 3 ]
[ { "id": "1707.07012_all_0", "text": " Developing neural network image classification models often requires significant architecture engineering. Starting from the seminal work of  on using convolutional architectures (17, 34) for ImageNet  classification, successive advancements through architecture enginee...
Why the authors prefer to learn video representations through unsupervised models?
Labelling videos is a tedious job, and that makes supervised training very expensive [2].
[ 2 ]
[ { "id": "1502.04681_all_0", "text": " Understanding temporal sequences is important for solving many problems in the AI-set. Recently, recurrent neural networks using the Long Short Term Memory (LSTM) architecture (Hochreiter & Schmidhuber, 1997) have been used successfully to perform various supervised seq...
What are MobileNets are primarily built on and what is it main goal?
MobileNets are built primarily on depthwise separable convolutions, a specialized method which reduces the computational cost [1]. The main goal of MobileNets' efficient architecture design is to reduce latency while maintaining state-of-the-art accuracy [2].
[ 1, 2 ]
[ { "id": "1704.04861_all_0", "text": " Convolutional neural networks have become ubiquitous in computer vision ever since AlexNet popularized deep convolutional neural networks by winning the ImageNet Challenge: ILSVRC 2012 . The general trend has been to make deeper and more complicated networks in order t...
How to obtain SGVB estimator from variational lower bound?
The authors apply Monte Carlo estimates of expectations of some function f(\mathbf{z}) w.r.t. q_{\boldsymbol{\phi}}(\mathbf{z}|\mathbf{x}) [0] to the variational lower bound (eq. (2)) [10], yielding the generic Stochastic Gradient Variational Bayes (SGVB) estimator [2]. The KL-divergence in eq. (3) [21] can be integrated analytically, such that only the reconstruction error requires estimation by sampling [27]. The KL-divergence term regularizes \phi, encouraging the approximate posterior to be close to the prior p_\theta(z), yielding a second SGVB estimator [32].
[ 0, 10, 2, 21, 27, 32 ]
[ { "id": "1312.6114_all_0", "text": " How can we perform efficient approximate inference and learning with directed probabilistic models whose continuous latent variables and/or parameters have intractable posterior distributions? The variational Bayesian (VB) approach involves the optimization of an approxi...
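The derivation above can be made concrete. The following is a sketch based on the standard VAE formulation (the symbols follow the paper's notation; the single-datapoint form with L reparameterized samples is assumed):

```latex
% Variational lower bound: KL term (integrable analytically for, e.g.,
% Gaussian prior and posterior) plus expected reconstruction term.
\mathcal{L}(\boldsymbol{\theta},\boldsymbol{\phi};\mathbf{x}^{(i)})
  = -D_{KL}\!\left(q_{\boldsymbol{\phi}}(\mathbf{z}|\mathbf{x}^{(i)})\,\|\,p_{\boldsymbol{\theta}}(\mathbf{z})\right)
  + \mathbb{E}_{q_{\boldsymbol{\phi}}(\mathbf{z}|\mathbf{x}^{(i)})}\!\left[\log p_{\boldsymbol{\theta}}(\mathbf{x}^{(i)}|\mathbf{z})\right]

% SGVB estimator: reparameterize
% \mathbf{z}^{(i,l)} = g_{\boldsymbol{\phi}}(\boldsymbol{\epsilon}^{(l)},\mathbf{x}^{(i)}),
% \boldsymbol{\epsilon}^{(l)} \sim p(\boldsymbol{\epsilon}),
% so only the reconstruction term is estimated by sampling.
\widetilde{\mathcal{L}}(\boldsymbol{\theta},\boldsymbol{\phi};\mathbf{x}^{(i)})
  = -D_{KL}\!\left(q_{\boldsymbol{\phi}}(\mathbf{z}|\mathbf{x}^{(i)})\,\|\,p_{\boldsymbol{\theta}}(\mathbf{z})\right)
  + \frac{1}{L}\sum_{l=1}^{L} \log p_{\boldsymbol{\theta}}(\mathbf{x}^{(i)}|\mathbf{z}^{(i,l)})
```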
How the loss was calculated in the proposed model ?
The loss is calculated as the Euclidean distance between the reconstructed image and the ground truth image [28].
[ 28 ]
[ { "id": "1511.04587_all_0", "text": " We address the problem of generating a high-resolution (HR) image given a low-resolution (LR) image, commonly referred as single image super-resolution (SISR) , , . SISR is widely used in computer vision applications ranging from security and surveillance imaging to med...
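As a minimal sketch, the Euclidean loss described above can be written as follows (the function name and NumPy usage are illustrative, not from the paper):

```python
import numpy as np

def euclidean_loss(reconstructed, ground_truth):
    # Euclidean (L2) distance between the reconstructed image and
    # the ground-truth image, treated as flat vectors.
    return np.linalg.norm(reconstructed.ravel() - ground_truth.ravel())
```

On identical images the loss is zero, and it grows with per-pixel deviation.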
What is the role of epoch in self-supervised learning?
The proposed method shows better results in only half of the training epochs [5].
[ 5 ]
[ { "id": "2211.02284_all_0", "text": " There has been a growing interest in using a large-scale dataset to build powerful machine learning models . Self-supervised learning (SSL), which aims to learn a useful representation without labels, is suitable for this trend; it is actively studied in the fields of n...
How is Nesterov-style momentum different from other momentum based optimizers such as Adam/AdamW?
Since the paper provides no details about how Nesterov-style momentum and other optimizers work, this question cannot be answered from the paper and requires external knowledge [36].
[ 36 ]
[ { "id": "1503.04069_all_0", "text": " Recurrent neural networks with Long Short-Term Memory (which we will concisely refer to as LSTMs) have emerged as an effective and scalable model for several learning problems related to sequential data. Earlier methods for attacking these problems have either been tail...
What are different types of categories in the FashionMNIST dataset?
The categories are men, women, kids, and neutral [5].
[ 5 ]
[ { "id": "1708.07747_all_0", "text": " The MNIST dataset comprising of 10-class handwritten digits, was first introduced by LeCun et al. (1998) in 1998. At that time one could not have foreseen the stellar rise of deep learning techniques and their performance. Despite the fact that today deep learning can d...
What are the two factors to show potential reason for cross-lingual generalization
They are the quantity of target-language data in the pre-training corpora and language similarity [22].
[ 22 ]
[ { "id": "2204.08110_all_0", "text": " Pretrained language models have become an integral part of NLP systems. They come in two flavors: monolingual, where the model is trained on text from a single language, and multilingual, where the model is jointly trained on data from many different languages. Monoling...
Can the architecture the authors' proposed be replaced with newer model architectures such as attention-based models or transformers, or is their task incompatible with these newer architectures?
Since the proposed method uses an RNN architecture for sequence modeling without relying on RNN-specific structures, newer models such as attention-based models or Transformers could also be used in place of the RNN [8].
[ 8 ]
[ { "id": "1411.4555_all_0", "text": " Being able to automatically describe the content of an image using properly formed English sentences is a very challenging task, but it could have great impact, for instance by helping visually impaired people better understand the content of images on the web. This task...
In what ways has the use of residual representations and shortcut connections improved the accuracy of deep neural networks in image recognition tasks?
The residual representation presents a powerful shallow representation for image retrieval tasks [21].
[ 21 ]
[ { "id": "1512.03385_all_0", "text": " Deep convolutional neural networks (22, 21) have led to a series of breakthroughs for image classification (21, 50, 40). Deep networks naturally integrate low/mid/high-level features and classifiers in an end-to-end multi-layer fashion, and the “levels” of features can...
How did the authors tackle the zero-shot passage retrieval?
The authors tackle zero-shot passage retrieval by generating synthetic questions [11].
[ 11 ]
[ { "id": "2004.14503_all_0", "text": " Recent advances in neural retrieval have led to advancements on several document, passage and knowledge-base benchmarks Guo et al. (2016); Pang et al. (2016); Hui et al. (2017); Dai et al. (2018); Gillick et al. (2018); Nogueira and Cho (2019a); MacAvaney et al. (2019);...
In terms of the effectivenesses of coverage penalty and length normalization, how does having RL-based model refinement differ from not having RL-based model refinement?
It was found that models with RL refinement are less affected by the length normalization "α" and coverage penalty "β"; the authors attribute this to the fact that during RL refinement, the models already learn to pay attention to the full source sentence so as not to under-translate or over-translate [62]. The authors also found an overlap between the wins from RL refinement and decoder fine-tuning, and that the win from RL on a less fine-tuned decoder would have been bigger [63]. The impact of length normalization "α" and coverage penalty "β" on RL-based and non-RL-based models can be found in Tables 2 and 3 [89].
[ 62, 63, 89 ]
[ { "id": "1609.08144_all_0", "text": " Neural Machine Translation (NMT) (41, 2) has recently been introduced as a promising approach with the potential of addressing many shortcomings of traditional machine translation systems. The strength of NMT lies in its ability to learn directly, in an end-to-end fashi...
What is "“Vector of Locally Aggregated Descriptors” image representation ?
The “Vector of Locally Aggregated Descriptors” image representation is a compact representation of an image created by the VLAD technique, a popular descriptor pooling method that extracts statistical information from the local descriptors aggregated over the image [17]. It calculates the differences between the feature vectors of an image and a set of learned reference vectors, then sums these differences to create the image representation vector [18].
[ 17, 18 ]
[ { "id": "1511.07247_all_0", "text": " Visual place recognition has received a significant amount of attention in the past years both in computer vision (66, 35, 10, 64, 65, 24, 9, 81, 4, 80, 63) and robotics communities (15, 16, 46, 44, 75) motivated by, e.g., applications in autonomous driving , augmented ...
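The aggregation described above can be sketched in a few lines. This is a minimal NumPy illustration of the classic (non-learned) VLAD pooling, not the paper's NetVLAD layer; the function name and normalization choice are assumptions:

```python
import numpy as np

def vlad(descriptors, centers):
    """Aggregate local descriptors into a VLAD vector.

    descriptors: (N, D) local feature vectors from one image
    centers:     (K, D) learned reference vectors (e.g. k-means centroids)
    returns:     (K * D,) L2-normalized VLAD representation
    """
    # Assign each descriptor to its nearest reference vector.
    dists = np.linalg.norm(descriptors[:, None, :] - centers[None, :, :], axis=2)
    assignment = np.argmin(dists, axis=1)

    # Sum the residuals (descriptor minus center) per cluster.
    K, D = centers.shape
    v = np.zeros((K, D))
    for k in range(K):
        members = descriptors[assignment == k]
        if len(members) > 0:
            v[k] = (members - centers[k]).sum(axis=0)

    # Flatten and L2-normalize to obtain the compact image representation.
    v = v.reshape(-1)
    return v / (np.linalg.norm(v) + 1e-12)
```

The output dimension is K * D regardless of how many local descriptors the image has, which is what makes the representation compact and comparable across images.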
Why is the paper's proposed multimodal feature learning algorithm better than other methods that "ignore modality information at the first layer" or "train separate first-layer features for each modality"?
The proposed approach incorporates a structured penalty term into the optimization problem to be solved during learning [3]. This technique allows the model to learn correlated features between multiple input modalities, but regularizes the number of modalities used per feature (hidden unit), discouraging the model from learning weak correlations between modalities. With this regularization term, the algorithm can specify how mode-sparse or mode-dense the features should be, representing a continuum between the two extremes outlined above [54].
[ 3, 54 ]
[ { "id": "1301.3592_all_0", "text": " Robotic grasping is a challenging problem involving perception, planning, and control. Some recent works (54, 56, 28, 67) address the perception aspect of this problem by converting it into a detection problem in which, given a noisy, partial view of the object from a ca...
Why are the constraint value of δ and γ separated?
Yes, they are separated: [−δ, δ] is the clipping range for the input y_t, while [−γ, γ] is the clipping range for the raw logits [49].
[ 49 ]
[ { "id": "1609.08144_all_0", "text": " Neural Machine Translation (NMT) (41, 2) has recently been introduced as a promising approach with the potential of addressing many shortcomings of traditional machine translation systems. The strength of NMT lies in its ability to learn directly, in an end-to-end fashi...
What does s in "ShuffleNet s x" mean?
"s" means the scale factor by which the number of channels is multiplied to adapt the ShuffleNet to the given computational complexity [17].
[ 17 ]
[ { "id": "1707.01083_all_0", "text": " Building deeper and larger convolutional neural networks (CNNs) is a primary trend for solving major visual recognition tasks (21, 9, 33, 5, 28, 24). The most accurate CNNs usually have hundreds of layers and thousands of channels (9, 34, 32, 40), thus requiring computa...
Why does annealing the value of beta and, as a consequence, decrease the KL regularization during training cause the decoder to make greater use of z?
While the authors explain in detail how they cyclically annealed the value of beta while training their VAE, and note that KL regularization impacts features in the previous layer, the paper does not delve into why annealing beta causes the decoder to make greater use of z [24]. This is possibly because annealing beta is a widely used practice when training such models, so the authors may have chosen not to explain it, assuming it is broadly known [37].
[ 24, 37 ]
[ { "id": "2004.04092_all_0", "text": " Pre-trained language models (PLMs) have substantially advanced the state-of-the-art across a variety of natural language processing (NLP) tasks Peters et al. (2018); Devlin et al. (2019); Yang et al. (2019); Radford et al. (2019); Liu et al. (2019); Keskar et al. (2019)...
What do we mean by i.i.d. datasets?
They show that for i.i.d. [3] datasets with continuous latent variables per datapoint, posterior inference can be made especially efficient by fitting an approximate inference model (also called a recognition model) to the intractable posterior using the proposed lower bound estimator [32].
[ 3, 32 ]
[ { "id": "1312.6114_all_0", "text": " How can we perform efficient approximate inference and learning with directed probabilistic models whose continuous latent variables and/or parameters have intractable posterior distributions? The variational Bayesian (VB) approach involves the optimization of an approxi...
Will normal cells and reduction cells that come out as search results be different for each dataset ?
The number of normal and reduction cells that come out as search results depend on at least one factor, the input image size in the dataset [11].
[ 11 ]
[ { "id": "1707.07012_all_0", "text": " Developing neural network image classification models often requires significant architecture engineering. Starting from the seminal work of  on using convolutional architectures (17, 34) for ImageNet  classification, successive advancements through architecture enginee...
How many future frames can be predicted by the proposed LSTM Future Predictor Model
The paper directly states that 10 future frames can be predicted by the proposed LSTM Future Predictor Model [30].
[ 30 ]
[ { "id": "1502.04681_all_0", "text": " Understanding temporal sequences is important for solving many problems in the AI-set. Recently, recurrent neural networks using the Long Short Term Memory (LSTM) architecture (Hochreiter & Schmidhuber, 1997) have been used successfully to perform various supervised seq...
Does this attention mechanism resemble the human attention attitude in the intuition or idea?
Yes, the attention mechanism resembles human attention in that it can yield more interpretable models which extract syntactic and semantic structure from sentences [38].
[ 38 ]
[ { "id": "1706.03762_all_0", "text": " Recurrent neural networks, long short-term memory and gated recurrent neural networks in particular, have been firmly established as state of the art approaches in sequence modeling and transduction problems such as language modeling and machine translation (35, 2, 5)...
Why did the GoogLENet-RI-H performs poorly in the Thoracoabdominal Lymph Node Detection task?
The model suffers from over-fitting: it is a very complex model, but there is not enough data to train it [35].
[ 35 ]
[ { "id": "1602.03409_all_0", "text": " Tremendous progress has been made in image recognition, primarily due to the availability of large-scale annotated datasets (i.e. ImageNet (1, 2)) and the recent revival of deep convolutional neural networks (CNN) (3, 4). For data-driven learning, large-scale well-annot...
What is RoIPool used for in relation to Mask R-CNN ??
RoIPool performs coarse spatial quantization for feature extraction [20]. Quantization introduces misalignments between the RoI and the extracted features [3].
[ 20, 3 ]
[ { "id": "1703.06870_all_0", "text": " The vision community has rapidly improved object detection and semantic segmentation results over a short period of time. In large part, these advances have been driven by powerful baseline systems, such as the Fast/Faster R-CNN (12, 36) and Fully Convolutional Network ...
QNLI and WNLI is a part of GLUE. Is this true?
Yes, both QNLI and WNLI are part of GLUE [57].
[ 57 ]
[ { "id": "1907.11692_all_0", "text": " Self-training methods such as ELMo Peters et al. (2018), GPT Radford et al. (2018), BERT Devlin et al. (2019), XLM Lample and Conneau (2019), and XLNet Yang et al. (2019) have brought significant performance gains, but it can be challenging to determine which aspects of...
What is language drift and how to deal with it?
Language drift occurs when a language model pre-trained on a large text corpus and fine-tuned for a specific task loses syntactic and semantic understanding as it specializes in the target task only [4]. The authors suggest that their novel autogenous class-specific prior-preservation loss solves this issue [19].
[ 4, 19 ]
[ { "id": "2208.12242_all_0", "text": " Can you imagine your own dog traveling around the world, or your favorite bag displayed in the most exclusive showroom in Paris? What about your parrot being the main character of an illustrated storybook? Rendering such imaginary scenes is a challenging task that requi...
How a target sequence is produced from a input frame sequence using LSTM?
From P0 and P1, it is directly answered that the target sequence is produced from the input frame sequence through an encoder-decoder LSTM [12].
[ 12 ]
[ { "id": "1502.04681_all_0", "text": " Understanding temporal sequences is important for solving many problems in the AI-set. Recently, recurrent neural networks using the Long Short Term Memory (LSTM) architecture (Hochreiter & Schmidhuber, 1997) have been used successfully to perform various supervised seq...
What are the factors that should be considered for memory footprint for indexing?
During indexing, we use another server with the same CPU and system memory specifications but which has four Titan V GPUs attached, each with 12 GiBs of memory [51].
[ 51 ]
[ { "id": "2004.12832_all_0", "text": " Over the past few years, the Information Retrieval (IR) community has witnessed the introduction of a host of neural ranking models, including DRMM (Guo et al., 2016), KNRM (Xiong et al., 2017; Dai et al., 2018), and Duet (Mitra et al., 2017; Mitra and Craswell, 2019). ...
How does the negative pairs prevent the problem of collapsed solution during optimization in contrastive learning methods?
To prevent the problem of collapsed solutions, they update the target encoder and the online encoder differently [12].
[ 12 ]
[ { "id": "2105.06323_all_0", "text": " Over the past decade, one-class collaborative filtering (OCCF) problems (Pan et al., 2008; Hu et al., 2008) have been extensively researched to accurately infer a user’s preferred items, particularly for the recommender systems where only the users’ implicit feedback on...
Why did the author say that “the data used for pretraining” has been under-emphasized? Give evidence from Table 4.
The authors said that “the data used for pretraining” has been under-emphasized [3], because they improved model performance by using additional data for pretraining [51].
[ 3, 51 ]
[ { "id": "1907.11692_all_0", "text": " Self-training methods such as ELMo Peters et al. (2018), GPT Radford et al. (2018), BERT Devlin et al. (2019), XLM Lample and Conneau (2019), and XLNet Yang et al. (2019) have brought significant performance gains, but it can be challenging to determine which aspects of...
Why was Graph Neural Networks(GNNs) proposed before even if Convolutional Neural Networks(CNNs) have been successful in many tasks?
Because CNNs can only process grid-like structures, GNNs, which can process general graph structures, were proposed [0].
[ 0 ]
[ { "id": "1710.10903_all_0", "text": " Convolutional Neural Networks (CNNs) have been successfully applied to tackle problems such as image classification (He et al., 2016), semantic segmentation (Jégou et al., 2017) or machine translation (Gehring et al., 2016), where the underlying data representation has ...
Does the features learned by unsupervised learning improved the performance of supervised learning tasks?
The improvement in classification from unsupervised learning was not as big as expected, but the authors still obtained an additional improvement over a strong baseline [37]. If the unsupervised model learns useful representations, the classifier performs better, especially when there are only a few labelled examples [41]. Based on this evidence, it can be safely said that features learned by unsupervised learning improved the performance of supervised learning tasks [46].
[ 37, 41, 46 ]
[ { "id": "1502.04681_all_0", "text": " Understanding temporal sequences is important for solving many problems in the AI-set. Recently, recurrent neural networks using the Long Short Term Memory (LSTM) architecture (Hochreiter & Schmidhuber, 1997) have been used successfully to perform various supervised seq...
How foreground-background class imbalance is encountered for two stage detectors ?
In the two-stage mechanism for object detection, the first stage generates a sparse set of candidate object locations and the second stage classifies each candidate location as one of the foreground or background classes using a CNN [0].
[ 0 ]
[ { "id": "1708.02002_all_0", "text": " Current state-of-the-art object detectors are based on a two-stage, proposal-driven mechanism. As popularized in the R-CNN framework , the first stage generates a sparse set of candidate object locations and the second stage classifies each candidate location as one of ...
How would "assigning weak labels to classification data" improve detection results?
Assigning weak labels to classification data can improve the detection task because it also improved the segmentation task [72].
[ 72 ]
[ { "id": "1612.08242_all_0", "text": " General purpose object detection should be fast, accurate, and able to recognize a wide variety of objects. Since the introduction of neural networks, detection frameworks have become increasingly fast and accurate. However, most detection methods are still constrained ...
What are the different input types used for the proposed model?
Image patches and high-level percepts are the two types of inputs used in the proposed model [4].
[ 4 ]
[ { "id": "1502.04681_all_0", "text": " Understanding temporal sequences is important for solving many problems in the AI-set. Recently, recurrent neural networks using the Long Short Term Memory (LSTM) architecture (Hochreiter & Schmidhuber, 1997) have been used successfully to perform various supervised seq...
What is the main difference between synchronous vs asynchronous SGD?
In training Google's baseline model on JFT, asynchronous stochastic gradient descent (SGD) involved running replicas of the neural net on different sets of cores to compute gradients on given mini-batches, which are then sent to a shared parameter server that returns new values for the parameters [24].
[ 24 ]
[ { "id": "1503.02531_all_0", "text": " Many insects have a larval form that is optimized for extracting energy and nutrients from the environment and a completely different adult form that is optimized for the very different requirements of traveling and reproduction. In large-scale machine learning, we typi...
Why does inception-v4’s inception module use average pooling instead of max pooling?
One possible reason for using average pooling instead of max pooling is that max pooling introduces certain technical constraints, which are reduced by average pooling [2].
[ 2 ]
[ { "id": "1602.07261_all_0", "text": " Since the 2012 ImageNet competition  winning entry by Krizhevsky et al , their network “AlexNet” has been successfully applied to a larger variety of computer vision tasks, for example to object-detection , segmentation , human pose estimation , video classification , o...
Which part of the Inception architecture was replaced with residual connections?
The filter concatenation stage of the Inception Architecture was replaced with Residual connections [1].
[ 1 ]
[ { "id": "1602.07261_all_0", "text": " Since the 2012 ImageNet competition  winning entry by Krizhevsky et al , their network “AlexNet” has been successfully applied to a larger variety of computer vision tasks, for example to object-detection , segmentation , human pose estimation , video classification , o...
What is Zalando?
Zalando is Europe’s largest online fashion platform; the dataset images come from Zalando’s website [4].
[ 4 ]
[ { "id": "1708.07747_all_0", "text": " The MNIST dataset comprising of 10-class handwritten digits, was first introduced by LeCun et al. (1998) in 1998. At that time one could not have foreseen the stellar rise of deep learning techniques and their performance. Despite the fact that today deep learning can d...
Is the paper's method also could be interpreted as a weighted nearest-neighbor classifier applied within an embedding space?
The paper’s approach, prototypical networks, is based on the idea that there exists an embedding in which points cluster around a single prototype representation for each class [2]. Prototypical networks differ from matching networks in the few-shot case, but are equivalent in the one-shot scenario [10].
[ 2, 10 ]
[ { "id": "1703.05175_all_0", "text": " Few-shot classification (20, 16, 13) is a task in which a classifier must be adapted to accommodate new classes not seen in training, given only a few examples of each of these classes. A naive approach, such as re-training the model on the new data, would severely over...
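The prototype idea above can be sketched briefly. This is a minimal NumPy illustration of nearest-prototype classification in an embedding space (the embeddings, function names, and hard argmin decision are illustrative; the paper uses a learned embedding and a softmax over negative distances):

```python
import numpy as np

def prototypes(support_emb, support_y, n_classes):
    """Class prototype = mean of the embedded support points per class."""
    return np.stack([support_emb[support_y == c].mean(axis=0)
                     for c in range(n_classes)])

def classify(query_emb, protos):
    """Assign each query to the class of its nearest prototype.
    (A softmax over negative squared distances would give probabilities,
    which is where the weighted nearest-neighbor view comes from.)"""
    d2 = ((query_emb[:, None, :] - protos[None, :, :]) ** 2).sum(axis=2)
    return np.argmin(d2, axis=1)
```

In the one-shot case each prototype is just the single support point, which is why prototypical networks and matching networks coincide there.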
Can clustering-based self-supervised approaches learn a piece of local information? If not, task applicability would be limited.
MIRA does not perform well on the detection task [32].
[ 32 ]
[ { "id": "2211.02284_all_0", "text": " There has been a growing interest in using a large-scale dataset to build powerful machine learning models . Self-supervised learning (SSL), which aims to learn a useful representation without labels, is suitable for this trend; it is actively studied in the fields of n...
How are loops detected and validated?
In ORB-SLAM2 a full BA optimization is used for loop detection and validation [27]. Loop detection is part of Loop closing [28].
[ 27, 28 ]
[ { "id": "1610.06475_all_0", "text": " Simultaneous Localization and Mapping (SLAM) has been a hot research topic in the last two decades in the Computer Vision and Robotics communities, and has recently attracted the attention of high-technological companies. SLAM techniques build a map of an unknown enviro...
why PointNet is highly robust to small perturbation of input points as well as to corruption through point insertion (outliers) or deletion (missing data)?
The paper shows that its neural network can universally approximate continuous set functions [40]. By the continuity of set functions, intuitively, a small perturbation of the input point set should not greatly change the function values, such as classification or segmentation scores [34].
[ 40, 34 ]
[ { "id": "1612.00593_all_0", "text": " In this paper we explore deep learning architectures capable of reasoning about 3D geometric data such as point clouds or meshes. Typical convolutional architectures require highly regular input data formats, like those of image grids or 3D voxels, in order to perform w...
What is the activation function for a ShuffleNet Unit?
Activation functions in the ShuffleNet unit are used only after the first 1x1 group convolution and after the last concatenation of the shortcut and residual paths, following the suggestions of the referenced papers [3, 9, 40] [12]. The only non-linear activation function used is ReLU [19].
[ 12, 19 ]
[ { "id": "1707.01083_all_0", "text": " Building deeper and larger convolutional neural networks (CNNs) is a primary trend for solving major visual recognition tasks (21, 9, 33, 5, 28, 24). The most accurate CNNs usually have hundreds of layers and thousands of channels (9, 34, 32, 40), thus requiring computa...