| input | answer | gold_ctxs | ctxs |
|---|---|---|---|
How did the authors find potential causes of cross-lingual transfer? | The authors do not discuss how they identified these potential causes [22]. | [
22
] | [
{
"id": "2204.08110_all_0",
"text": " Pretrained language models have become an integral part of NLP systems. They come in two flavors: monolingual, where the model is trained on text from a single language, and multilingual, where the model is jointly trained on data from many different languages. Monoling... |
What if there is no underlying relationship? | A translation between domains with no underlying relationship will have input x and output y paired up in a meaningless way [5]. | [
5
] | [
{
"id": "1703.10593_all_0",
"text": " What did Claude Monet see as he placed his easel by the bank of the Seine near Argenteuil on a lovely spring day in 1873 (Figure 1, top-left)? A color photograph, had it been invented, may have documented a crisp blue sky and a glassy river reflecting it. Monet conveyed... |
What does inductive bias mean? | In this paper, inductive bias refers to the performance gain of a pretrained model across different structures [7]. | [
7
] | [
{
"id": "2210.12302_all_0",
"text": " Pretrained Language Models (LMs) have shown singular succcess on a range of natural language understandings tasks, to the extent that they have become foundational for contemporary NLP systems. Several works have investigated why pretraining works so well Warstadt et al... |
Why was R-FCN unable to converge using only one score map? | R-FCN depends on its score maps for its output, as the score maps feed the position-sensitive RoI pooling layer [3]. Therefore, when k = 1 and there is only one score map, the RoIs do not capture any spatial information, and the model fails to learn the task [31]. | [
3,
31
] | [
{
"id": "1605.06409_all_0",
"text": " A prevalent family (8, 6, 18) of deep networks for object detection can be divided into two subnetworks by the Region-of-Interest (RoI) pooling layer : (i) a shared, “fully convolutional” subnetwork independent of RoIs, and (ii) an RoI-wise subnetwork that does not shar... |
What is the full form of ILSVRC? | Since there is no information about the full form of ILSVRC in this paper, the question cannot be answered and requires external knowledge [14]. | [
14
] | [
{
"id": "1411.4555_all_0",
"text": " Being able to automatically describe the content of an image using properly formed English sentences is a very challenging task, but it could have great impact, for instance by helping visually impaired people better understand the content of images on the web. This task... |
Are batch normalisation and dropout used for the same reasons? | This work follows standard practice for CV classifiers by using batch normalization and dropout in each block [22]. Dropout is used to reduce the risk of overfitting, which can decrease performance [44]. | [
22,
44
] | [
{
"id": "1801.06146_all_0",
"text": " Inductive transfer learning has had a large impact on computer vision (CV). Applied CV models (including object detection, classification, and segmentation) are rarely trained from scratch, but instead are fine-tuned from models that have been pretrained on ImageNet, MS... |
What does “scale invariance” mean? | Scale invariance means the model performs consistently on objects at different scales [32]. | [
32
] | [
{
"id": "1504.08083_all_0",
"text": " Recently, deep ConvNets (14, 16) have significantly improved image classification and object detection (9, 19) accuracy. Compared to image classification, object detection is a more challenging task that requires more complex methods to solve. Due to this complexity, c... |
What does it mean for perception or feedback to be open loop? | The open-loop method observes the scene prior to the grasp, extracts image patches, chooses the patch with the highest probability of a successful grasp, and then uses a known camera calibration to move the gripper to that location [30]. | [
30
] | [
{
"id": "1603.02199_all_0",
"text": " When humans and animals engage in object manipulation behaviors, the interaction inherently involves a fast feedback loop between perception and action. Even complex manipulation tasks, such as extracting a single object from a cluttered bin, can be performed with hardl... |
What is the hole algorithm? | The authors do not explain exactly what the hole algorithm is [17]. | [
17
] | [
{
"id": "1605.06409_all_0",
"text": " A prevalent family (8, 6, 18) of deep networks for object detection can be divided into two subnetworks by the Region-of-Interest (RoI) pooling layer : (i) a shared, “fully convolutional” subnetwork independent of RoIs, and (ii) an RoI-wise subnetwork that does not shar... |
Does the paper show how each component of KERM can contribute to passage re-ranking performance quantitatively and qualitatively? | This work conducted ablation studies to investigate the contribution of each component in the performance of KERM [44]. By testing different settings for the knowledge injector, this work found that performance decreases without knowledge interaction and also without knowledge propagation [51]. By testing the model without global or local distillation, they also demonstrated that performance decreases without global distillation and efficiency decreases without either global or local distillation [54]. | [
44,
51,
54
] | [
{
"id": "2204.11673_all_0",
"text": " Passage Re-ranking is a crucial stage in modern information retrieval systems, which aims to reorder a small set of candidate passages to be presented to users. To put the most relevant passages on top of a ranking list, a re-ranker is usually designed with powerful cap... |
What is the evaluation metric to evaluate a semantic segmentation model? | Pixel accuracy, mean accuracy, mean IU, and frequency-weighted IU are the metrics used [59]. | [
59
] | [
{
"id": "1411.4038_all_0",
"text": " Convolutional networks are driving advances in recognition. Convnets are not only improving for whole-image classification (19, 31, 32), but also making progress on local tasks with structured output. These include advances in bounding box object detection (29, 12, 17), ... |
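All four metrics named in the answer can be computed from a single confusion matrix. A minimal sketch, assuming the FCN paper's definitions; the function name and toy arrays below are illustrative, not from the paper:

```python
import numpy as np

def segmentation_metrics(pred, gt, n_classes):
    # Confusion matrix: rows index the ground-truth class, columns the prediction.
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)
    for g, p in zip(gt.ravel(), pred.ravel()):
        cm[g, p] += 1
    tp = np.diag(cm).astype(float)           # correctly classified pixels per class
    gt_total = cm.sum(axis=1).astype(float)  # pixels of each class in the ground truth
    pred_total = cm.sum(axis=0).astype(float)
    iu = tp / (gt_total + pred_total - tp)   # per-class intersection over union
    return {
        "pixel_acc": tp.sum() / cm.sum(),
        "mean_acc": (tp / gt_total).mean(),
        "mean_iu": iu.mean(),
        "fw_iu": (gt_total * iu).sum() / cm.sum(),
    }
```

On a perfect prediction all four metrics evaluate to 1.0, which makes the helper easy to sanity-check.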
What kind of loss function is used in training SSD? | The loss function used for training is a weighted sum of the localization loss (loc) and the confidence loss (conf): $L(x,c,l,g)=\frac{1}{N}(L_{conf}(x,c)+\alpha L_{loc}(x,l,g))$ (Eq. 1), where N is the number of matched default boxes [10]. | [
10
] | [
{
"id": "1512.02325_all_0",
"text": " Current state-of-the-art object detection systems are variants of the following approach: hypothesize bounding boxes, resample pixels or features for each box, and apply a high-quality classifier. This pipeline has prevailed on detection benchmarks since the Selective S... |
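The weighting in Eq. (1) can be sketched with scalar loss values; the function and the numbers are illustrative, not taken from the SSD implementation:

```python
def ssd_loss(l_conf, l_loc, num_matched, alpha=1.0):
    # Eq. (1): L(x, c, l, g) = (1/N) * (L_conf + alpha * L_loc).
    # The paper sets the loss to 0 when no default boxes are matched (N = 0).
    if num_matched == 0:
        return 0.0
    return (l_conf + alpha * l_loc) / num_matched
```

Normalizing by the number of matched default boxes N keeps the loss scale comparable across images with different numbers of matches.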
Which evaluation criteria was used to compare the performance of action recognition models? | The evaluation criteria are measured on RGB data (single or multiple frames) and on flow features [42]. | [
42
] | [
{
"id": "1502.04681_all_0",
"text": " Understanding temporal sequences is important for solving many problems in the AI-set. Recently, recurrent neural networks using the Long Short Term Memory (LSTM) architecture (Hochreiter & Schmidhuber, 1997) have been used successfully to perform various supervised seq... |
Describe how MobileNet uses depthwise separable convolution to reduce computation and the model size | MobileNets factorize a standard convolution into a depthwise convolution, with one filter per input channel, followed by a 1x1 pointwise convolution that combines the outputs; this factorization drastically reduces computation and model size [6]. | [
6
] | [
{
"id": "1704.04861_all_0",
"text": " Convolutional neural networks have become ubiquitous in computer vision ever since AlexNet popularized deep convolutional neural networks by winning the ImageNet Challenge: ILSVRC 2012 . The general trend has been to make deeper and more complicated networks in order t... |
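A small cost model reproduces the reduction factor of 1/N + 1/D_k² derived in the MobileNet paper; the helper function and the concrete layer sizes are illustrative:

```python
def conv_costs(dk, m, n, df):
    # Multiply-adds for a dk x dk convolution with m input channels,
    # n output channels, and a df x df output feature map.
    standard = dk * dk * m * n * df * df
    depthwise = dk * dk * m * df * df   # one dk x dk filter per input channel
    pointwise = m * n * df * df         # 1x1 convolution recombines channels
    return standard, depthwise + pointwise

standard, separable = conv_costs(dk=3, m=64, n=128, df=56)
reduction = separable / standard  # equals 1/n + 1/dk**2
```

For 3x3 kernels this works out to roughly an 8-9x saving, which matches the paper's analysis.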
The Caffe framework does not natively support a convolution layer that contains multiple filter resolutions. To get around this, the authors implement the expand layer with two separate convolution layers. What is the additional cost incurred by using two convolution layers? | The additional cost of using two convolutional layers may be that the parameters of the two layers are now trained separately: they are not jointly optimized to perform a task and share useful information during training. However, the output shape is not affected by the separation, i.e., this is numerically equivalent to having one layer that contains both 1x1 and 3x3 filters [20]. | [
20
] | [
{
"id": "1602.07360_all_0",
"text": " Much of the recent research on deep convolutional neural networks (CNNs) has focused on increasing accuracy on computer vision datasets. For a given accuracy level, there typically exist multiple CNN architectures that achieve that accuracy level. Given equivalent accur... |
Would more recent approaches such as DECAF extreme classification (2021) serve as a stronger baseline than the specialized models discussed in the paper? | The specialist models were started from the baseline model which was Google’s deep convolutional network for JFT [33]. The function and performance of DECAF, and how it compares to the JFT baseline model used in this work cannot be answered from this paper [24]. | [
33,
24
] | [
{
"id": "1503.02531_all_0",
"text": " Many insects have a larval form that is optimized for extracting energy and nutrients from the environment and a completely different adult form that is optimized for the very different requirements of traveling and reproduction. In large-scale machine learning, we typi... |
What is an example of a "network-centric" approach? | An example of such approach would be to consider a trained network, start with some initial input and compute the forward path activations [7]. | [
7
] | [
{
"id": "1506.06579_all_0",
"text": " The last several years have produced tremendous progress in training powerful, deep neural network models that are approaching and even surpassing human abilities on a variety of challenging machine learning tasks (Taigman et al., 2014; Schroff et al., 2015; Hannun et a... |
Why is the KG-Classifier adapter suggested? | It is suggested to compensate for the use of a mixture of synthetic QA data for fusion training, which is not exactly the target training task [15]. | [
15
] | [
{
"id": "2206.03715_all_0",
"text": " The ability to understand natural language through commonsense reasoning is one of the core focuses in the field of natural language processing. To measure and study the different aspects of commonsense reasoning, several datasets are developed, such as SocialIQA (Sap e... |
How is the authors' work different from the “fast gradient sign” method? | The fast gradient sign method is very quick but may lead to sub-optimal perturbations thus damaging the overall robustness estimation, and fine-tuning with such adversarial samples may sometimes result in a drop in the overall performance of the model [16]. On the other hand, DeepFool creates adversarial perturbations that are closer to the absolute minimum compared to others thus giving us a more reliable tool in terms of robustness estimation and fine-tuning [18]. | [
16,
18
] | [
{
"id": "1511.04599_all_0",
"text": " Deep neural networks are powerful learning models that achieve state-of-the-art pattern recognition performance in many research areas such as bioinformatics (1, 16), speech (12, 6), and computer vision (10, 8). Though deep networks have exhibited very good performance ... |
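The contrast in the answer can be made concrete: FGSM takes a single fixed-size step along the gradient sign, whereas DeepFool iteratively searches for a minimal perturbation. A sketch of the FGSM step only; the gradient values are made up:

```python
import numpy as np

def fgsm_step(grad, eps):
    # One L-infinity step of size eps along the sign of the loss gradient;
    # fast, but generally larger than the minimal perturbation DeepFool finds.
    return eps * np.sign(grad)

delta = fgsm_step(np.array([0.3, -0.7, 0.0]), eps=0.1)
```

Every nonzero coordinate moves by exactly eps, regardless of how far the input actually is from the decision boundary, which is why FGSM tends to overestimate the perturbation needed.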
Does the proposed approach in the paper require a labelled dataset? | ULMFiT first pretrains a language model on a large general-domain corpus, and does not require any additional in-domain documents or labels for this [41]. Then, the method will fine-tune the model on a target task using novel techniques [12]. | [
41,
12
] | [
{
"id": "1801.06146_all_0",
"text": " Inductive transfer learning has had a large impact on computer vision (CV). Applied CV models (including object detection, classification, and segmentation) are rarely trained from scratch, but instead are fine-tuned from models that have been pretrained on ImageNet, MS... |
The authors claim that low-frequency information in 3D is discriminative for object classification, is that true ? | The reasoning is that low-frequency information in 3D seems to be quite discriminative, because the authors use a resolution of only 30x30x30, which is a very low resolution in any case [15]. | [
15
] | [
{
"id": "1604.03265_all_0",
"text": " Understanding 3D environments is a vital element of modern computer vision research due to paramount relevance in many vision systems, spanning a wide field of application scenarios from self-driving cars to autonomous robots. Recent advancements in real-time SLAM techn... |
What is the truncation setting? | The truncation setting is the threshold used when truncating the latent distribution; the paper reports the FID/IS values at the truncation setting that attains the best FID [30]. | [
30
] | [
{
"id": "1809.11096_all_0",
"text": " The state of generative image modeling has advanced dramatically in recent years, with Generative Adversarial Networks (GANs, Goodfellow et al. (2014)) at the forefront of efforts to generate high-fidelity, diverse images with models learned directly from data. GAN trai... |
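In BigGAN the truncation setting is the threshold of the truncation trick: latent entries falling outside the threshold are resampled, trading sample diversity for fidelity. A minimal sketch under that assumption; the function name is my own:

```python
import numpy as np

def truncated_latent(dim, threshold, rng):
    # Resample any entry of z ~ N(0, 1) whose magnitude exceeds the
    # truncation threshold; smaller thresholds give higher-fidelity,
    # less diverse samples.
    z = rng.standard_normal(dim)
    out_of_range = np.abs(z) > threshold
    while out_of_range.any():
        z[out_of_range] = rng.standard_normal(out_of_range.sum())
        out_of_range = np.abs(z) > threshold
    return z

z = truncated_latent(128, threshold=0.5, rng=np.random.default_rng(0))
```

Sweeping the threshold and plotting FID against IS traces out the fidelity/diversity trade-off curve the paper evaluates.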
What will be the effect in performance if group numbers for convolution is increased? | For ShuffleNet, having more than 1 group seems to show consistently better results for all complexities [16]. As the model gets smaller, the performance gain seems to increase more as the number of groups increases [21]. However, for larger models, a too large number of groups led to saturation or a drop in classification error, possibly due to reduced representative capabilities [22]. | [
16,
21,
22
] | [
{
"id": "1707.01083_all_0",
"text": " Building deeper and larger convolutional neural networks (CNNs) is a primary trend for solving major visual recognition tasks (21, 9, 33, 5, 28, 24). The most accurate CNNs usually have hundreds of layers and thousands of channels (9, 34, 32, 40), thus requiring computa... |
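The effect of the group number on cost is easy to see in a parameter count; the helper below is illustrative. With g groups a 1x1 convolution is g times cheaper, which is why ShuffleNet can afford wider feature maps at the same complexity, until too many groups leave each group with too few input channels:

```python
def conv_params(c_in, c_out, k, groups=1):
    # A grouped convolution runs `groups` independent convolutions over
    # channel slices, dividing the weight count by the number of groups.
    assert c_in % groups == 0 and c_out % groups == 0
    return groups * (c_in // groups) * (c_out // groups) * k * k

dense = conv_params(240, 240, k=1, groups=1)    # 57,600 weights
grouped = conv_params(240, 240, k=1, groups=8)  #  7,200 weights
```

The saturation the answer mentions follows from the same arithmetic: each group sees only c_in/g channels, so very large g restricts what each filter can combine.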
What are the reasons of using residual learning ? | Residual learning solves the vanishing/exploding gradients problem, allowing for the model to converge faster and perform better [27]. | [
27
] | [
{
"id": "1511.04587_all_0",
"text": " We address the problem of generating a high-resolution (HR) image given a low-resolution (LR) image, commonly referred as single image super-resolution (SISR) , , . SISR is widely used in computer vision applications ranging from security and surveillance imaging to med... |
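The shortcut structure behind this can be sketched in a few lines of toy numpy code; this is an illustration, not the paper's network:

```python
import numpy as np

def residual_block(x, f):
    # y = F(x) + x: the layers only need to learn the residual F, and the
    # identity shortcut gives gradients a direct path to earlier layers.
    return f(x) + x

x = np.array([1.0, 2.0, 3.0])
# If the optimal mapping is near the identity, F can simply learn zero:
y = residual_block(x, lambda v: np.zeros_like(v))
```

Because the gradient of y with respect to x always contains an identity term from the shortcut, deep stacks of such blocks avoid vanishing gradients.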
What differentiates a bottleneck block from a residual block? | Inverted residuals differentiate the bottleneck block from a classic residual block: the shortcut connections run directly between the bottlenecks [17]. | [
17
] | [
{
"id": "1801.04381_all_0",
"text": " Neural networks have revolutionized many areas of machine intelligence, enabling superhuman accuracy for challenging image recognition tasks. However, the drive to improve accuracy often comes at a cost: modern state of the art networks require high computational resour... |
How did the authors make the model fast and accurate enough for real-time applications? | The authors achieved a fast and accurate model for real-time applications by using a deep-network-based object detector that does not resample pixels or features for bounding box hypotheses and yet is as accurate as approaches that do [1]. | [
1
] | [
{
"id": "1512.02325_all_0",
"text": " Current state-of-the-art object detection systems are variants of the following approach: hypothesize bounding boxes, resample pixels or features for each box, and apply a high-quality classifier. This pipeline has prevailed on detection benchmarks since the Selective S... |
Why didn't the authors try different methods of data augmentation for 3D objects? | The authors improve upon previous augmentation strategies and provide analyses comparing each combination of augmentation strategies (azimuth rotation (AZ), AZ + translation, and AZ + elevation rotation), concluding that the last gives the best results [19]. Note that augmentations on 3D objects are not as trivial as those in 2D [32], so providing novel insights on data augmentation is quite valuable [47]. | [
19,
32,
47
] | [
{
"id": "1604.03265_all_0",
"text": " Understanding 3D environments is a vital element of modern computer vision research due to paramount relevance in many vision systems, spanning a wide field of application scenarios from self-driving cars to autonomous robots. Recent advancements in real-time SLAM techn... |
Are task-independent sentence representations the same thing as embeddings? | Task-independent sentence representations learn text embeddings and can be computed efficiently using self-attention [5]. | [
5
] | [
{
"id": "1706.03762_all_0",
"text": " Recurrent neural networks, long short-term memory and gated recurrent neural networks in particular, have been firmly established as state of the art approaches in sequence modeling and transduction problems such as language modeling and machine translation (35, 2, 5)... |
How can the authors verify if the attention reflects the overall composition of the given image? | Injecting the cross-attention maps of the input image enabled the authors to preserve the original composition and structure [11]. As illustrated in Figure 4, the average attention maps are plotted, and pixels attend more strongly to the words that describe them [16]. | [
11,
16
] | [
{
"id": "2208.01626_all_0",
"text": " Recently, large-scale language-image (LLI) models, such as Imagen , DALL·E 2 and Parti , have shown phenomenal generative semantic and compositional power, and gained unprecedented attention from the research community and the public eye. These LLI models are trained o... |
Are convolution and up-convolution considered the transform and its inverse? | The up-convolution operation upsamples the feature maps back to the original resolution and also reduces the number of feature channels [8]. | [
8
] | [
{
"id": "1505.04597_all_0",
"text": " In the last two years, deep convolutional networks have outperformed the state of the art in many visual recognition tasks, e.g. (7, 3). While convolutional networks have already existed for a long time , their success was limited due to the size of the available traini... |
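The shape arithmetic of U-Net's 2x2 up-convolution with stride 2 can be checked directly; the helper and the concrete sizes below are illustrative:

```python
def up_conv_shape(h, w, c_in, stride=2):
    # U-Net's "up-convolution": a 2x2 transposed convolution with stride 2
    # doubles the spatial resolution and halves the channel count.
    return h * stride, w * stride, c_in // 2

shape = up_conv_shape(28, 28, 1024)
```

Each step of the expanding path therefore mirrors a step of the contracting path, trading channel depth back for spatial resolution.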
What do these initial results mean? | Video diffusion models present the first results on a large text-conditioned video generation task and achieve state-of-the-art results on popular video datasets [16]. The model is trained jointly on images and video to improve sample quality [20]. Moreover, the conditional sampling method the authors introduce shows better quality than the existing replacement method [21]. | [
16,
20,
21
] | [
{
"id": "2204.03458_all_0",
"text": " Diffusion models have recently been producing high quality results in image generation and audio generation (e.g. 28, 39, 40, 16, 23, 36, 48, 60, 42, 10, 29), and there is significant interest in validating diffusion models in new data modalities. In this work, we prese... |
What are some of the limitations of the YOLOv3 object detection model? | Based on the information given in the paper, some limitations of YOLOv3 are: it is still quite a bit behind models like RetinaNet on COCO's average mAP metric (AP averaged over IOU thresholds from 0.5 to 0.95); performance drops significantly as the IOU threshold increases, indicating that YOLOv3 struggles to get the boxes perfectly aligned with the object; and it has comparatively worse performance on medium and larger objects [19]. | [
19
] | [
{
"id": "1804.02767_all_0",
"text": " Sometimes you just kinda phone it in for a year, you know? I didn’t do a whole lot of research this year. Spent a lot of time on Twitter. Played around with GANs a little. I had a little momentum left over from last year ; I managed to make some improvements to YOLO. B... |
What is the purpose of the experiments described? | Using the ImageNet and CIFAR-10 datasets, classification is evaluated for plain and residual networks and compared with the state of the art [21]. | [
21
] | [
{
"id": "1512.03385_all_0",
"text": " Deep convolutional neural networks (22, 21) have led to a series of breakthroughs for image classification (21, 50, 40). Deep networks naturally integrate low/mid/high-level features and classifiers in an end-to-end multi-layer fashion, and the “levels” of features can... |
By how much does the proposed approach outperform CoVE? | On IMDb, the proposed approach reduced the error by 43.9% compared to CoVe, but on TREC-6 it did not improve performance significantly [37]. The results are shown in Table 2 [38]. | [
37,
38
] | [
{
"id": "1801.06146_all_0",
"text": " Inductive transfer learning has had a large impact on computer vision (CV). Applied CV models (including object detection, classification, and segmentation) are rarely trained from scratch, but instead are fine-tuned from models that have been pretrained on ImageNet, MS... |
What is the difference between conducting polynomial regression and predicting explicit planning with the learned representation? | Conducting polynomial regression is different from predicting explicit planning from the learned representation, since polynomial regression would be based on a finite set of data of a certain length [17]. | [
17
] | [
{
"id": "2208.14867_all_0",
"text": " Computational modeling of expressive music performance focuses on mimicking human behaviors that convey the music (1, 2). For piano performance, one common task is to render an expressive performance from a quantized musical score. It aims to reproduce the loudness and ... |
Is it true that they used the output from the bottom decoder layer for y_{i-1}, not the decoder-RNN output from the past decoding time step? | The authors used only the decoder-RNN output from the previous decoding time step in the bottom decoder layer to obtain the recurrent attention context, which is sent directly to all the remaining decoder layers [15]. | [
15
] | [
{
"id": "1609.08144_all_0",
"text": " Neural Machine Translation (NMT) (41, 2) has recently been introduced as a promising approach with the potential of addressing many shortcomings of traditional machine translation systems. The strength of NMT lies in its ability to learn directly, in an end-to-end fashi... |
How do black-box attacks perform on larger and finite perturbations? | Black-box attacks cannot succeed against a model that is robust to larger, finite perturbations [71]. | [
71
] | [
{
"id": "1602.02697_all_0",
"text": " A classifier is a ML model that learns a mapping between inputs and a set of classes. For instance, a malware detector is a classifier taking executables as inputs and assigning them to the benign or malware class. Efforts in the security (5, 2, 9, 18) and machine learn... |
What theoretical backing, if any, exists to support the authors' numerical arguments around how their techniques minimze catastrophic forgetting? | Fine-tuning the full model leads to low error early in training, but then error increases as the model overfits and loses knowledge captured through pretraining [49]. | [
49
] | [
{
"id": "1801.06146_all_0",
"text": " Inductive transfer learning has had a large impact on computer vision (CV). Applied CV models (including object detection, classification, and segmentation) are rarely trained from scratch, but instead are fine-tuned from models that have been pretrained on ImageNet, MS... |
If the same information from the latent vector is being added during decoding, why did the Memory scheme yield higher performance? | The authors theorize that the memory scheme is better since the latent information is accessible to every layer in the neural network, instead of being available to only two layers (input, output) in the embedding approach [22]. | [
22
] | [
{
"id": "2004.04092_all_0",
"text": " Pre-trained language models (PLMs) have substantially advanced the state-of-the-art across a variety of natural language processing (NLP) tasks Peters et al. (2018); Devlin et al. (2019); Yang et al. (2019); Radford et al. (2019); Liu et al. (2019); Keskar et al. (2019)... |
Is it possible to say the proposed method learns better representation? What is the meaning of the downstream tasks? | SSL methods learn useful representations by solving pretext tasks without labels [0]. Paragraph 7 hints that these benchmarks are a testbed for evaluating SSL methods [4]. | [
0,
4
] | [
{
"id": "2211.02284_all_0",
"text": " There has been a growing interest in using a large-scale dataset to build powerful machine learning models . Self-supervised learning (SSL), which aims to learn a useful representation without labels, is suitable for this trend; it is actively studied in the fields of n... |
How can Hint Network help with challenging auxiliary tasks? | The HintNet is designed to make challenging tasks more solvable by providing the model with additional information at the point of need, specifically by correcting the answer of the learner with its own answer from an augmented graph with hub nodes [18]. The amount of help (correction) provided by the HintNet is optimized to maximize the learner's gain, and the help is determined by weighting functions for HintNet, which are optimized by meta-learning [17]. | [
18,
17
] | [
{
"id": "2007.08294_all_0",
"text": " Graph neural networks have been proven effective to learn representations for various tasks such as node classification , link prediction , and graph classification . The powerful representation yields state-of-the-art performance in a variety of applications including... |
Why does YOLOv3 perform poorly at higher values of AP compared with RetinaNet? | YOLOv3 performs poorly because it struggles to get the bounding boxes perfectly aligned with the objects [20]. | [
20
] | [
{
"id": "1804.02767_all_0",
"text": " Sometimes you just kinda phone it in for a year, you know? I didn’t do a whole lot of research this year. Spent a lot of time on Twitter. Played around with GANs a little. I had a little momentum left over from last year ; I managed to make some improvements to YOLO. B... |
Do the authors use gradient accumulation while training the model? | This work uses Backpropagation Through Time for Text Classification (BPT3C), where a document is divided into fixed-length batches and the gradients are back-propagated through these batches [27]. | [
27
] | [
{
"id": "1801.06146_all_0",
"text": " Inductive transfer learning has had a large impact on computer vision (CV). Applied CV models (including object detection, classification, and segmentation) are rarely trained from scratch, but instead are fine-tuned from models that have been pretrained on ImageNet, MS... |
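The batching scheme described can be sketched as follows. This is a simplification of BPT3C that only shows the document splitting; the function name is my own:

```python
def split_document(tokens, bptt):
    # BPT3C divides a document into fixed-length batches; the model's hidden
    # state is carried across batches, and gradients are back-propagated
    # within each batch.
    return [tokens[i:i + bptt] for i in range(0, len(tokens), bptt)]

chunks = split_document(list(range(10)), bptt=4)
```

Splitting this way bounds the memory needed for backpropagation on arbitrarily long documents while still exposing the whole document to the model through the carried hidden state.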
What is an example attribute used in the joint training experiment? | This cannot be determined from the given paragraphs [2]. | [
2
] | [
{
"id": "1711.09020_all_0",
"text": " The task of image-to-image translation is to change a particular aspect of a given image to another, e.g., changing the facial expression of a person from smiling to frowning (see Fig. StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Tra... |
How is MIRA different to TWIST fundamentally? | TWIST's direct optimization of MI through model parameters leads to the suboptimal solution while MIRA optimizes MI between pseudo-label and data without updating model parameters [3]. | [
3
] | [
{
"id": "2211.02284_all_0",
"text": " There has been a growing interest in using a large-scale dataset to build powerful machine learning models . Self-supervised learning (SSL), which aims to learn a useful representation without labels, is suitable for this trend; it is actively studied in the fields of n... |
How can the limited set of common sub-word units (“wordpieces”) provide a good balance between the flexibility of “character”-delimited models and the efficiency of “word”-delimited models? | The authors assume this is because wordpieces deal efficiently with an essentially infinite vocabulary without resorting to characters only [33]. | [
33
] | [
{
"id": "1609.08144_all_0",
"text": " Neural Machine Translation (NMT) (41, 2) has recently been introduced as a promising approach with the potential of addressing many shortcomings of traditional machine translation systems. The strength of NMT lies in its ability to learn directly, in an end-to-end fashi... |
What sentence splitter did you use for chunking? | The abstract is split into chunks with sentence boundaries preserved; a passage is constructed by concatenating the title and one chunk [28]. | [
28
] | [
{
"id": "2004.14503_all_0",
"text": " Recent advances in neural retrieval have led to advancements on several document, passage and knowledge-base benchmarks Guo et al. (2016); Pang et al. (2016); Hui et al. (2017); Dai et al. (2018); Gillick et al. (2018); Nogueira and Cho (2019a); MacAvaney et al. (2019);... |
Why is the "hard attention model" non-differentiable? | The hard attention model selects one patch of the image to attend to at a time; however, this question cannot be fully answered within the scope of this paper [18]. | [
18
] | [
{
"id": "1508.04025_all_0",
"text": " Neural Machine Translation (NMT) achieved state-of-the-art performances in large-scale translation tasks such as from English to French (Luong et al., 2015) and English to German (Jean et al., 2015). NMT is appealing since it requires minimal domain knowledge and is con... |
Do Ciresan et al. treat each patch as a separate input in their previous approach in segmentation? | Yes, the Ciresan et al. approach treated each patch as a separate input for segmentation [1]. | [
1
] | [
{
"id": "1505.04597_all_0",
"text": " In the last two years, deep convolutional networks have outperformed the state of the art in many visual recognition tasks, e.g. (7, 3). While convolutional networks have already existed for a long time , their success was limited due to the size of the available traini... |
How did author reduce the noise in user behavior data? | For each user, authors keep the most recent 10 installation or uninstallation operations in a week [14]. | [
14
] | [
{
"id": "2005.13303_all_0",
"text": " Personalized mobile business, e.g., recommendations, and advertising, often require effective user representations. For better performance, user modeling in industrial applications often considers as much information as possible, including but not limited to gender, loc... |
Is there any problems from using web crawl data? If so, what is the problem? | Models perform worse on web-crawled data [13]. | [
13
] | [
{
"id": "2204.08110_all_0",
"text": " Pretrained language models have become an integral part of NLP systems. They come in two flavors: monolingual, where the model is trained on text from a single language, and multilingual, where the model is jointly trained on data from many different languages. Monoling... |
What are the metrics used to compare the performance between YOLO & DPM/RCNN? | Different approaches to evaluating object detection models are presented in the paper where they mostly use mean average precision (mAP) and frames per second (fps) for accuracy and speed respectively [51]. Qualitatively, the YOLO's errors are compared to R-CNN, and mAP on different classes of objects is shown [52]. Moreover, YOLO was shown to boost the performance of R-CNN, and better generalize for new domains [68]. | [
51,
52,
68
] | [
{
"id": "1506.02640_all_0",
"text": " Humans glance at an image and instantly know what objects are in the image, where they are, and how they interact. The human visual system is fast and accurate, allowing us to perform complex tasks like driving with little conscious thought. Fast, accurate algorithms fo... |
How did the authors decide lambda, and what is the optimal value? | lambda=10 is used to avoid any domain-targeted tuning [37]. | [
37
] | [
{
"id": "2004.14503_all_0",
"text": " Recent advances in neural retrieval have led to advancements on several document, passage and knowledge-base benchmarks Guo et al. (2016); Pang et al. (2016); Hui et al. (2017); Dai et al. (2018); Gillick et al. (2018); Nogueira and Cho (2019a); MacAvaney et al. (2019);... |
What does NILM mean? Is it different from GLUE? | NILM is a dataset for measuring Non-linguistic Inductive bias in Language Models [0]. It differs from GLUE, since GLUE focuses on tasks that require linguistic knowledge and reasoning [8]. | [
0,
8
] | [
{
"id": "2210.12302_all_0",
"text": " Pretrained Language Models (LMs) have shown singular succcess on a range of natural language understandings tasks, to the extent that they have become foundational for contemporary NLP systems. Several works have investigated why pretraining works so well Warstadt et al... |
What does distortion-editability tradeoff mean? | To fully answer this question, reference [43] must be reviewed [35]. | [
35
] | [
{
"id": "2208.01626_all_0",
"text": " Recently, large-scale language-image (LLI) models, such as Imagen , DALL·E 2 and Parti , have shown phenomenal generative semantic and compositional power, and gained unprecedented attention from the research community and the public eye. These LLI models are trained o... |
Is extrapolating the missing context enough to predict the pixels in the border region of the image? | Using an overlap-tile strategy that extrapolates the missing context by mirroring the input image, U-Net can predict the pixels in the border region of the image [4]. | [
4
] | [
{
"id": "1505.04597_all_0",
"text": " In the last two years, deep convolutional networks have outperformed the state of the art in many visual recognition tasks, e.g. (7, 3). While convolutional networks have already existed for a long time , their success was limited due to the size of the available traini... |
How are the input images labelled while fine-tuning the model? | The authors fine-tuned all the layers, including those conditioned on text embeddings, and these embeddings were created from the labels ("a [identifier] [class noun]") of all input photographs of the subject [17]. | [
17
] | [
{
"id": "2208.12242_all_0",
"text": " Can you imagine your own dog traveling around the world, or your favorite bag displayed in the most exclusive showroom in Paris? What about your parrot being the main character of an illustrated storybook? Rendering such imaginary scenes is a challenging task that requi... |
Can the number of frames be extended to more than 16? If so, what is the upper bound for the number of frames? | They train a new masked frame interpolation and extrapolation network ↑F, capable of increasing the number of frames of the generated video, either by frame interpolation for a smoother generated video, or by pre/post frame extrapolation for extending the video length [14]. Additionally, the spatial super-resolution models enable a higher (controllable) frame rate [24]. Therefore, using the extrapolation network ↑F, it is possible to extend the video length from 16 frames to 76 frames [25]. | [
14,
24,
25
] | [
{
"id": "2209.14792_all_0",
"text": " The Internet has fueled collecting billions of (alt-text, image) pairs from HTML pages (Schuhmann et al., 2022), enabling the recent breakthroughs in Text-to-Image (T2I) modeling. However, replicating this success for videos is limited since a similarly sized (text, vid... |
What was Faster R-CNN developed to overcome in Fast R-CNN? | Faster R-CNN uses an RPN component to generate region proposals, replacing the slow external proposal step (e.g., selective search) that Fast R-CNN relies on [20]. | [
20
] | [
{
"id": "1605.06409_all_0",
"text": " A prevalent family (8, 6, 18) of deep networks for object detection can be divided into two subnetworks by the Region-of-Interest (RoI) pooling layer : (i) a shared, “fully convolutional” subnetwork independent of RoIs, and (ii) an RoI-wise subnetwork that does not shar... |
Give examples of two techniques of employing CNNs to medical image | There are three main techniques that are used to apply CNNs to tasks involving medical images: 1) training from scratch, 2) using pre-trained CNNs as feature extractors, then using those features with hand-crafted features, and 3) performing unsupervised pre-training then using CNN for fine-tuning [1]. An example of the "training from scratch" technique for employing CNNs to medical images is a CNN that was trained from scratch for LN detection [3]. An example of the "CNN fine-tuning" technique is a CNN pre-trained on ImageNet that was used for X-ray and CT images for chest pathology identification and detection [8]. | [
1,
3,
8
] | [
{
"id": "1602.03409_all_0",
"text": " Tremendous progress has been made in image recognition, primarily due to the availability of large-scale annotated datasets (i.e. ImageNet (1, 2)) and the recent revival of deep convolutional neural networks (CNN) (3, 4). For data-driven learning, large-scale well-annot... |
How would stacking attention modules directly lead to a performance drop? Why is the attention residual learning mechanism necessary? | Because the mask values lie between 0 and 1, and naive stacking of attention modules means repeatedly multiplying features by the resulting masks, naive stacking causes a performance drop as the product of several modules will converge towards 0 [21]. The attention residual learning mechanism changes this by making the lower bound of the masked output the original features instead of 0 [22]. | [
21,
22
] | [
{
"id": "1704.06904_all_0",
"text": " Not only a friendly face but also red color will draw our attention. The mixed nature of attention has been studied extensively in the previous literatures (34, 16, 23, 40). Attention not only serves to select a focused location but also enhances different representatio... |
What hyperparameter values were used to train both the plain and highway networks? | For both the plain network and the highway network, the best hyperparameters were selected from 100 experiments [22]. | [
22
] | [
{
"id": "1507.06228_all_0",
"text": " Many recent empirical breakthroughs in supervised machine learning have been achieved through large and deep neural networks. Network depth (the number of successive computational layers) has played perhaps the most important role in these successes. For instance, withi... |
Why is a two-channel volumetric segmentation needed in the output? | Two-channel volumetric segmentation is used at the output to perform binary classification of foreground and background classes using soft-max [1]. Each volume represents the logits for one class at each pixel location [10]. | [
1,
10
] | [
{
"id": "1606.04797_all_0",
"text": " Recent research in computer vision and pattern recognition has highlighted the capabilities of Convolutional Neural Networks (CNNs) to solve challenging tasks such as classification, segmentation and object detection, achieving state-of-the-art performances. This succes... |
Where do the authors source their labelled dataset from? | The source of the labelled dataset in the paper is two news websites, namely CNN and Daily Mail [2]. The authors created the dataset of approximately one million data points from ~93k CNN and ~220k Daily Mail online news articles [6]. | [
2,
6
] | [
{
"id": "1506.03340_all_0",
"text": " Progress on the path from shallow bag-of-words information retrieval algorithms to machines capable of reading and understanding documents has been slow. Traditional approaches to machine reading and comprehension have been based on either hand engineered grammars , or ... |
Would the performance be improved if the PLM model is pre-trained or fine-tuned on bio-medical domain datasets? | In their experiments, the authors showed that all of the models performed poorly on the bio-medical domain due to the textual data of the domain not being covered widely in the PLMs’ pretraining dataset [1]. This lack of data can cause the PLM to struggle to reveal and capture knowledge specific to that domain [56]. | [
1,
56
] | [
{
"id": "2204.11673_all_0",
"text": " Passage Re-ranking is a crucial stage in modern information retrieval systems, which aims to reorder a small set of candidate passages to be presented to users. To put the most relevant passages on top of a ranking list, a re-ranker is usually designed with powerful cap... |
When is the cross-encoder powerful? Give an example. | The cross-encoder excels in cases requiring disambiguation with the given context [46]. Table 8 shows examples where the cross-encoder accurately identifies and links entities through the context while the bi-encoder fails [47]. | [
46,
47
] | [
{
"id": "1911.03814_all_0",
"text": " Scale is a key challenge for entity linking; there are millions of possible entities to consider for each mention. To efficiently filter or rank the candidates, existing methods use different sources of external information, including manually curated mention tables Gan... |
What is the difference between AIDA and AIDA+? | AIDA+ extends AIDA by adding missing mention links [9]. | [
9
] | [
{
"id": "2108.13530_all_0",
"text": " In this paper we explore a principled approach to solve entity linking (EL) jointly with coreference resolution (coref). Concretely, we formulate coref+EL as a single structured task over directed trees that conceives EL and coref as two complementary components: a core... |
What does "smooth feature regularization" mean? | The authors do not explicitly define what "smooth" means anywhere in the paper, though possible meanings could be interpolated from the author's statements in the papers [15]. The authors mention that the regularization term in Optimus is what helps a basic VAE learn a smooth feature space [37]. They also use t-SNE to visualize learned features, which indicates that "smooth" in this context just means cleaner, free-flowing boundaries of the latent space [39]. | [
15,
37,
39
] | [
{
"id": "2004.04092_all_0",
"text": " Pre-trained language models (PLMs) have substantially advanced the state-of-the-art across a variety of natural language processing (NLP) tasks Peters et al. (2018); Devlin et al. (2019); Yang et al. (2019); Radford et al. (2019); Liu et al. (2019); Keskar et al. (2019)... |
What makes SBM-Transformer novel compared to existing efficient Transformer variants? | SBM-Transformer is the first Transformer architecture that can data-adaptively choose between linear to full attention with respective computational costs [3]. | [
3
] | [
{
"id": "2210.15541_all_0",
"text": " The Transformer architecture has been the go-to method for encoding sequential data, due to its superior performance in various tasks such as machine translation , image classification , and protein language modeling . Its key strength stems from the multi-head attenti... |
How is using a conditional GAN to produce a latent vector from a label different to using the encoder in OPTIMUS with just the label as its input? | The conditional GAN generates a latent vector which is then passed to OPTIMUS' decoder, which produces the output [36]. | [
36
] | [
{
"id": "2004.04092_all_0",
"text": " Pre-trained language models (PLMs) have substantially advanced the state-of-the-art across a variety of natural language processing (NLP) tasks Peters et al. (2018); Devlin et al. (2019); Yang et al. (2019); Radford et al. (2019); Liu et al. (2019); Keskar et al. (2019)... |
How do the authors generate synthetic QA? | They generate synthetic QA by transforming a KG triplet into a question-answer pair [10]. | [
10
] | [
{
"id": "2206.03715_all_0",
"text": " The ability to understand natural language through commonsense reasoning is one of the core focuses in the field of natural language processing. To measure and study the different aspects of commonsense reasoning, several datasets are developed, such as SocialIQA (Sap e... |
Why does assuming unobserved user-item pairs are negative lead to limited performance for generative methods? | Assuming unobserved user-item pairs are negative leads to limited performance, since some pairs are positive but merely unobserved, and the number of such cases grows [3]. | [
3
] | [
{
"id": "2105.06323_all_0",
"text": " Over the past decade, one-class collaborative filtering (OCCF) problems (Pan et al., 2008; Hu et al., 2008) have been extensively researched to accurately infer a user’s preferred items, particularly for the recommender systems where only the users’ implicit feedback on... |
How did the authors handle complexity constraints for their mobile networks? | The authors apply a simple scaling factor to reduce the size of ShuffleNet to fit the computational constraints [14]. They also report the specific outcomes of running the network on mobile devices, and the reasons for them [17]. However, it is unclear what "handle complexity constraints" and "mobile networks" mean in the question [24]. | [
14,
17,
24
] | [
{
"id": "1707.01083_all_0",
"text": " Building deeper and larger convolutional neural networks (CNNs) is a primary trend for solving major visual recognition tasks (21, 9, 33, 5, 28, 24). The most accurate CNNs usually have hundreds of layers and thousands of channels (9, 34, 32, 40), thus requiring computa... |
What does STL stand for? | Single-Task Learning (STL): The model is pre-trained on a synthetic QA dataset generated from a single KG [23]. | [
23
] | [
{
"id": "2206.03715_all_0",
"text": " The ability to understand natural language through commonsense reasoning is one of the core focuses in the field of natural language processing. To measure and study the different aspects of commonsense reasoning, several datasets are developed, such as SocialIQA (Sap e... |
How is the tokenized BLEU different from the NIST BLEU? | For tokenized BLEU, all text is tokenized with tokenizer.perl and BLEU scores are computed with multi-bleu.perl [26]. | [
26
] | [
{
"id": "1508.04025_all_0",
"text": " Neural Machine Translation (NMT) achieved state-of-the-art performances in large-scale translation tasks such as from English to French (Luong et al., 2015) and English to German (Jean et al., 2015). NMT is appealing since it requires minimal domain knowledge and is con... |
How will SSD predict bounding boxes after training, when there is no ground truth anymore? | After training, SSD produces final detections by applying non-maximum suppression to the predicted bounding boxes scored for the presence of each object class instance [4]. | [
4
] | [
{
"id": "1512.02325_all_0",
"text": " Current state-of-the-art object detection systems are variants of the following approach: hypothesize bounding boxes, resample pixels or features for each box, and apply a high-quality classifier. This pipeline has prevailed on detection benchmarks since the Selective S... |
Why did the authors use a different set of phonemes for decoding and scoring? | The paper provides no information about using a different set of phonemes for decoding and scoring, so this question cannot be answered from the paper [27]. | [
27
] | [
{
"id": "1506.07503_all_0",
"text": " Recently, attention-based recurrent networks have been successfully applied to a wide variety of tasks, such as handwriting synthesis , machine translation , image caption generation and visual object classification .111An early version of this work was presented at th... |
What is the meaning of invariant to permutations? | A point cloud is just a set of points and is therefore invariant, or indifferent, to permutations of its members, necessitating certain symmetrizations in the net computation [17]. A point cloud is a set of points without a specific order, unlike pixel arrays in images or voxel arrays in volumetric grids [0]. | [
17,
0
] | [
{
"id": "1612.00593_all_0",
"text": " In this paper we explore deep learning architectures capable of reasoning about 3D geometric data such as point clouds or meshes. Typical convolutional architectures require highly regular input data formats, like those of image grids or 3D voxels, in order to perform w... |
How do Projected Attention Layers work? | A Projected Attention Layer (PAL) takes the hidden state from the previous layer and runs in parallel to the self-attention layer [49]. | [
49
] | [
{
"id": "2112.05364_all_0",
"text": " With transformer-based models (Vaswani et al., 2017) dominating the leaderboard for many key NLP tasks such as summarization (Liu and Lapata, 2019), topic segmentation (Lukasik et al., 2020), and sentiment analysis Adhikari et al. (2019), their core multi-head self-att... |
Why does the approach not simply add all feedback examples in memory to the prompt if they will be adding examples anyways? | The proposed approach (MemPrompt) probably does not add all feedback examples in memory to the prompt since the size of the prompt is limited to 2048 tokens [53]. | [
53
] | [
{
"id": "2201.06009_all_0",
"text": " Language models are now better than ever before at generating realistic content, but still lack commonsense Bender and Koller (2020); Marcus (2021). One failure mode due to a lack of commonsense is in misunderstanding a user’s intent. The typical remedy of retraining wi... |
For the query/passage encoder, did the authors use a siamese network (i.e., parameters are shared) or a non-siamese network? | A siamese network (shared parameters) is used for the query/passage encoder [16]. | [
16
] | [
{
"id": "2004.14503_all_0",
"text": " Recent advances in neural retrieval have led to advancements on several document, passage and knowledge-base benchmarks Guo et al. (2016); Pang et al. (2016); Hui et al. (2017); Dai et al. (2018); Gillick et al. (2018); Nogueira and Cho (2019a); MacAvaney et al. (2019);... |
How does residual learning address the degradation problem in deep neural networks? | Residual learning addresses the degradation problem by reformulating the layers to learn residual functions with reference to the layer inputs, instead of learning unreferenced functions [21]. | [
21
] | [
{
"id": "1512.03385_all_0",
"text": " Deep convolutional neural networks (22, 21) have led to a series of breakthroughs for image classification (21, 50, 40). Deep networks naturally integrate low/mid/high-level features and classifiers in an end-to-end multi-layer fashion, and the “levels” of features can... |
What are the similarities and differences between the NaturalInstructions dataset and the SuperNaturalInstructions dataset? | This paper only mentions and discusses the SuperNaturalInstructions (or SuperNI) dataset, and does not explicitly discuss the NaturalInstructions dataset [27]. However, based on the naming, it is possible that both the SuperNaturalInstructions and NaturalInstructions datasets contain annotated instructional data to help large language models (LLMs) perform a wider range of specialized tasks [32]. | [
27,
32
] | [
{
"id": "2212.10560_all_0",
"text": " The recent NLP literature has witnessed a tremendous amount of activity in building models that can follow natural language instructions (Mishra et al., 2022; Wei et al., 2022; Sanh et al., 2022; Wang et al., 2022; Ouyang et al., 2022; Chung et al., 2022, i.a.). These d... |
Would 'bijection' always be guaranteed in this case? | If there is a translator G:X\rightarrow Y and another translator F:Y\rightarrow X, then G and F should be inverses of each other, and both mappings should be bijections [6]. | [
6
] | [
{
"id": "1703.10593_all_0",
"text": " What did Claude Monet see as he placed his easel by the bank of the Seine near Argenteuil on a lovely spring day in 1873 (Figure 1, top-left)? A color photograph, had it been invented, may have documented a crisp blue sky and a glassy river reflecting it. Monet conveyed... |
How did the performance of the proposed model differ on shorter vs longer audio sequences? | While the proposed model showed good performance on shorter audio sequences (~200 phones), it failed to align most of phones on longer audio sequences [37]. | [
37
] | [
{
"id": "1506.07503_all_0",
"text": " Recently, attention-based recurrent networks have been successfully applied to a wide variety of tasks, such as handwriting synthesis , machine translation , image caption generation and visual object classification .111An early version of this work was presented at th... |
What are the benefits of being dependent on both the input data and the metric? | In comparison with volumetric CNNs that scan the space with fixed strides, the paper's local receptive fields are dependent on both the input data and the metric [4]. | [
4
] | [
{
"id": "1706.02413_all_0",
"text": " We are interested in analyzing geometric point sets which are collections of points in a Euclidean space. A particularly important type of geometric point set is point cloud captured by 3D scanners, e.g., from appropriately equipped autonomous vehicles. As a set, such d... |
Is there any performance degradation if too many frames are generated? | Figure 8 shows that the VDM baselines with factorized space-time attention fail to generate consistent content [36]. However, Tune-A-Video generates videos with better temporal consistency [18]. | [
36,
18
] | [
{
"id": "2212.11565_all_0",
"text": " The large-scale multimodal dataset , consisting of billions of text-image pairs crawled from the Internet, has enabled a breakthrough in Text-to-Image (T2I) generation (30, 35, 6, 42, 40). To replicate this success in Text-to-Video (T2V) generation, recent works (42, 15... |
Instead of editing the prompt based on previous cases, did they test a simpler approach where previous cases are concatenated to the prompt? | Yes, they did [40]. | [
40
] | [
{
"id": "2201.06009_all_0",
"text": " Language models are now better than ever before at generating realistic content, but still lack commonsense Bender and Koller (2020); Marcus (2021). One failure mode due to a lack of commonsense is in misunderstanding a user’s intent. The typical remedy of retraining wi... |
The authors claim that autonomous cars would be able to drive without specialized sensors using only fast and accurate algorithms; is that true? | Theoretically, if the detection algorithms were as fast and accurate as the human visual system, they could drive an autonomous car, but no further discussion is included in the paper [0]. At the time of the writing of the paper, even YOLO was still inferior to other detectors in terms of accuracy [8]. | [
0,
8
] | [
{
"id": "1506.02640_all_0",
"text": " Humans glance at an image and instantly know what objects are in the image, where they are, and how they interact. The human visual system is fast and accurate, allowing us to perform complex tasks like driving with little conscious thought. Fast, accurate algorithms fo... |
How well do LSTM-based encoder/decoders work for real-time applications, given their sequential nature? | Since the LSTM-based encoder/decoder method has worked successfully for real-time applications of a sequential nature, it is a suitable method [0]. | [
0
] | [
{
"id": "1502.04681_all_0",
"text": " Understanding temporal sequences is important for solving many problems in the AI-set. Recently, recurrent neural networks using the Long Short Term Memory (LSTM) architecture (Hochreiter & Schmidhuber, 1997) have been used successfully to perform various supervised seq... |
How many images did the dataset consist of and the number of unique patients ? | The ILD dataset has 905 image slices from 120 patients [14]. | [
14
] | [
{
"id": "1602.03409_all_0",
"text": " Tremendous progress has been made in image recognition, primarily due to the availability of large-scale annotated datasets (i.e. ImageNet (1, 2)) and the recent revival of deep convolutional neural networks (CNN) (3, 4). For data-driven learning, large-scale well-annot... |
Considering that GMPool requires matrix decomposition, how good is the efficiency aspect of the algorithm? Can the algorithm be used for large graphs? | GMPool may not be suitable for large graphs as is, due to its cubic time complexity [20]. | [
20
] | [
{
"id": "2209.02939_all_0",
"text": " Graph Neural Networks (GNNs) learn representations of individual nodes based on the connectivity structure of an input graph. For graph-level prediction tasks, the standard procedure globally pools all the node features into a single graph representation without weight ... |
What do hard-positive mining techniques mean? | Hard-positive mining techniques use spherical clusters for the embeddings of a single person [4]. | [
4
] | [
{
"id": "1503.03832_all_0",
"text": " In this paper we present a unified system for face verification (is this the same person), recognition (who is this person) and clustering (find common people among these faces). Our method is based on learning a Euclidean embedding per image using a deep convolutional ... |
What performance metric does IOU measure? | The IOU measures the overlap between the ground-truth RoI and the RoI detected by the model [13]. | [
13
] | [
{
"id": "1605.06409_all_0",
"text": " A prevalent family (8, 6, 18) of deep networks for object detection can be divided into two subnetworks by the Region-of-Interest (RoI) pooling layer : (i) a shared, “fully convolutional” subnetwork independent of RoIs, and (ii) an RoI-wise subnetwork that does not shar... |
How many features are used in each layer of SegNet? | 64 features are used in each layer of SegNet [1]. | [
1
] | [
{
"id": "1505.07293_all_0",
"text": " Semantic segmentation is an important step towards understanding and inferring different objects and their arrangements observed in a scene. This has wide array of applications ranging from estimating scene geometry, inferring support-relationships among objects to auto... |
What does "injecting" information into attention layers mean? | Pattern injection means pre-determining the weights of the transformer layer's scaled dot product attention values, such that run-time complexity can be lowered while maintaining the interpretability of the model [1]. | [
1
] | [
{
"id": "2112.05364_all_0",
"text": " With transformer-based models (Vaswani et al., 2017) dominating the leaderboard for many key NLP tasks such as summarization (Liu and Lapata, 2019), topic segmentation (Lukasik et al., 2020), and sentiment analysis Adhikari et al. (2019), their core multi-head self-att... |
What is the other example of frameworks that can be used in PaddlePaddle like Paddle Graph Learning? | This work mentions using the Paddle Graph Learning (PGL) framework from the deep learning framework PaddlePaddle [43]. | [
43
] | [
{
"id": "2204.11673_all_0",
"text": " Passage Re-ranking is a crucial stage in modern information retrieval systems, which aims to reorder a small set of candidate passages to be presented to users. To put the most relevant passages on top of a ranking list, a re-ranker is usually designed with powerful cap... |
Does DeBERTa have larger representation dimensions than BERT-large? | No; the hidden dimension of BERT-large is 1024, while DeBERTa's is 768 [12]. | [
12
] | [
{
"id": "2210.12302_all_0",
"text": " Pretrained Language Models (LMs) have shown singular succcess on a range of natural language understandings tasks, to the extent that they have become foundational for contemporary NLP systems. Several works have investigated why pretraining works so well Warstadt et al... |
Do the authors measure the quantify the impact on their model's performance when using RELU6 instead of RELU? | While the authors showed the effect of inverted residual connections and linear bottlenecks, they did not measure the impact of using RELU6 instead of RELU in the ablation study [41]. | [
41
] | [
{
"id": "1801.04381_all_0",
"text": " Neural networks have revolutionized many areas of machine intelligence, enabling superhuman accuracy for challenging image recognition tasks. However, the drive to improve accuracy often comes at a cost: modern state of the art networks require high computational resour... |
Does making higher resolution have to be incorporated into the network? Can't we do this as a separate process? | Video diffusion models modify little of the architecture to accommodate video data within the memory constraints of deep learning accelerators [1]. They use the standard diffusion model formalism [11]. In their method, one technique for making high-resolution video is the spatial upsampling introduced by Menick and Kalchbrenner (2019) [14]. Also, reconstruction guidance is extended to construct the high-resolution model [7]. | [
1,
11,
14,
7
] | [
{
"id": "2204.03458_all_0",
"text": " Diffusion models have recently been producing high quality results in image generation and audio generation (e.g. 28, 39, 40, 16, 23, 36, 48, 60, 42, 10, 29), and there is significant interest in validating diffusion models in new data modalities. In this work, we prese... |