Dataset columns:
input: string, length 14-315
answer: string, length 9-2.16k
gold_ctxs: list, 1-15 items
ctxs: list, 11-186 items
How do they utilize fully-connected layers for dense prediction on images?
They use in-network upsampling [10].
[ 10 ]
[ { "id": "1411.4038_all_0", "text": " Convolutional networks are driving advances in recognition. Convnets are not only improving for whole-image classification (19, 31, 32), but also making progress on local tasks with structured output. These include advances in bounding box object detection (29, 12, 17), ...
What are transformation modules? Is this related to transformers?
The authors explain how trainable “transformation modules” attached to a frozen (non-trainable) base model might allow for existing models to be used for new concepts, instead of finetuning or retraining (both of which have their associated challenges) [2].
[ 2 ]
[ { "id": "2208.01618_all_0", "text": " In a famous scene from the motion picture “Titanic”, Rose makes a request of Jack: “…draw me like one of your French girls”. Albeit simple, this request contains a wealth of information. It indicates that Jack should produce a drawing; It suggests that its style and com...
Why did the focal loss strategy not work for the authors?
The authors hypothesize that YOLOv3 may already be robust to the problem that focal loss tries to solve, because it has separate objectness predictions and conditional class predictions [26].
[ 26 ]
[ { "id": "1804.02767_all_0", "text": " Sometimes you just kinda phone it in for a year, you know? I didn’t do a whole lot of research this year. Spent a lot of time on Twitter. Played around with GANs a little. I had a little momentum left over from last year ; I managed to make some improvements to YOLO. B...
Why should we care about batch size, at the cost of performance, in unsupervised representation learning methods?
The current trend in self-supervised learning is to employ large-scale datasets [0]. We care about batch size because it corresponds to the speed of the method [4].
[ 0, 4 ]
[ { "id": "2211.02284_all_0", "text": " There has been a growing interest in using a large-scale dataset to build powerful machine learning models . Self-supervised learning (SSL), which aims to learn a useful representation without labels, is suitable for this trend; it is actively studied in the fields of n...
What kind of output variations are possible with dreambooth?
Output variants include changing the subject's location, species, color, shape, pose, expression, material, and semantics [14].
[ 14 ]
[ { "id": "2208.12242_all_0", "text": " Can you imagine your own dog traveling around the world, or your favorite bag displayed in the most exclusive showroom in Paris? What about your parrot being the main character of an illustrated storybook? Rendering such imaginary scenes is a challenging task that requi...
Why did the authors finetune the VAE on language understanding tasks?
The authors might be performing finetuning of the pretrained model and classifier weights to perform better on low resource language understanding tasks [37].
[ 37 ]
[ { "id": "2004.04092_all_0", "text": " Pre-trained language models (PLMs) have substantially advanced the state-of-the-art across a variety of natural language processing (NLP) tasks Peters et al. (2018); Devlin et al. (2019); Yang et al. (2019); Radford et al. (2019); Liu et al. (2019); Keskar et al. (2019)...
Which are the metrics used by authors to compare the performance of the models?
They use FID as their default metric for overall sample-quality comparisons, as it captures both diversity and fidelity and has been the de facto standard metric for state-of-the-art generative modeling work [16]. Moreover, they use Precision or IS to measure fidelity, and Recall to measure diversity or distribution coverage [17]. In Table 4, they report FID, sFID, IS, Precision, and Recall as metrics [22].
[ 16, 17, 22 ]
[ { "id": "2105.05233_all_0", "text": " Over the past few years, generative models have gained the ability to generate human-like natural language Brown et al. (2020), infinite high-quality synthetic images Brock et al. (2018); Karras et al. (2019b); Razavi et al. (2019) and highly diverse human speech and mu...
How many hyperparameter combinations were used for the random hyperparameter search?
They searched 300 possible hyperparameter combinations and then chose four of them that complement each other well [29].
[ 29 ]
[ { "id": "1506.06579_all_0", "text": " The last several years have produced tremendous progress in training powerful, deep neural network models that are approaching and even surpassing human abilities on a variety of challenging machine learning tasks (Taigman et al., 2014; Schroff et al., 2015; Hannun et a...
Did the authors use AlexNet for evaluation of SqueezeNet?
Yes, the authors state that they used AlexNet and the associated model compression results as a basis for comparison when evaluating SqueezeNet [22].
[ 22 ]
[ { "id": "1602.07360_all_0", "text": " Much of the recent research on deep convolutional neural networks (CNNs) has focused on increasing accuracy on computer vision datasets. For a given accuracy level, there typically exist multiple CNN architectures that achieve that accuracy level. Given equivalent accur...
How does the performance change when a dense retriever is evaluated on out-of-domain queries and documents that are different from the domain on which the retriever was trained?
When a retriever trained on a source domain is evaluated in an out-of-domain setting, its performance is reported to be lower than BM25's [2]. Dense retrievers are also said to be sensitive to domain shift: models that perform well on MS MARCO do not perform well on COVID-19 data [23]. There have been many studies on unsupervised sentence embedding learning, but these are said not to work well for unsupervised dense retrieval [7]. Therefore, the out-of-domain performance of the retriever may be worse [8].
[ 2, 23, 7, 8 ]
[ { "id": "2112.07577_all_0", "text": " Information Retrieval (IR) is a central component of many natural language applications. Traditionally, lexical methods (Robertson et al., 1994) have been used to search through text content. However, these methods suffer from the lexical gap (Berger et al., 2000) and a...
Explain the motivation of this paper
The motivation of this paper is to analyze whether pretraining on text is inherently about learning language, or whether pretraining injects non-linguistic reasoning into LMs [1].
[ 1 ]
[ { "id": "2210.12302_all_0", "text": " Pretrained Language Models (LMs) have shown singular succcess on a range of natural language understandings tasks, to the extent that they have become foundational for contemporary NLP systems. Several works have investigated why pretraining works so well Warstadt et al...
How can we solve the challenges of image segmentation?
In this paper, it is shown that a surprisingly simple, flexible, and fast system can surpass prior state-of-the-art instance segmentation results [1]. The authors use "object detection" to denote detection via bounding boxes, not masks, and "semantic segmentation" to denote per-pixel classification without differentiating instances [10]. Given this, one might expect a complex method to be required to achieve good results [36].
[ 1, 10, 36 ]
[ { "id": "1703.06870_all_0", "text": " The vision community has rapidly improved object detection and semantic segmentation results over a short period of time. In large part, these advances have been driven by powerful baseline systems, such as the Fast/Faster R-CNN (12, 36) and Fully Convolutional Network ...
Why didn't DropPath regularization work well for NASNets?
The authors found that ScheduledDropPath, a modified version of DropPath, works well for NASNets [22].
[ 22 ]
[ { "id": "1707.07012_all_0", "text": " Developing neural network image classification models often requires significant architecture engineering. Starting from the seminal work of  on using convolutional architectures (17, 34) for ImageNet  classification, successive advancements through architecture enginee...
Why is it a good idea to apply the convolutions across patches of the video instead of whole frames?
To extract motion information, it is a good idea to apply the convolutions across patches of the video instead of whole frames [47].
[ 47 ]
[ { "id": "1502.04681_all_0", "text": " Understanding temporal sequences is important for solving many problems in the AI-set. Recently, recurrent neural networks using the Long Short Term Memory (LSTM) architecture (Hochreiter & Schmidhuber, 1997) have been used successfully to perform various supervised seq...
With "memory of cases" here, do they simply mean a prompt that contains all of these cases as examples?
Yes, "memory of cases" in this context does mean a prompt with all of these relevant cases listed out as examples [18]. However, the MemPrompt model's input size is limited to 2048 tokens, so adding all possible matches to the prompt would not be feasible, which is why the authors' proposed approach specifically focuses on selecting which prompts to include [2].
[ 18, 2 ]
[ { "id": "2201.06009_all_0", "text": " Language models are now better than ever before at generating realistic content, but still lack commonsense Bender and Koller (2020); Marcus (2021). One failure mode due to a lack of commonsense is in misunderstanding a user’s intent. The typical remedy of retraining wi...
Basic units differ from language to language. What, then, should be changed in the model, and did the authors consider that part?
To account for differences in basic language units between English and Chinese, the authors set the basic units per language differently [18]. They then use language-dependent segmentation tools to obtain the word/phrase information [19].
[ 18, 19 ]
[ { "id": "1904.09223_all_0", "text": " Language representation pre-training Mikolov et al. (2013); Devlin et al. (2018) has been shown effective for improving many natural language processing tasks such as named entity recognition, sentiment analysis, and question answering. In order to get reliable word rep...
Have the authors experimented with extending to other auxiliary tasks other than meta-path prediction?
In this paper, the authors did not conduct experiments on extending the framework to other auxiliary tasks besides meta-path prediction [28]. However, the authors mention that it is a possible direction for future work [7].
[ 28, 7 ]
[ { "id": "2007.08294_all_0", "text": " Graph neural networks  have been proven effective to learn representations for various tasks such as node classification , link prediction , and graph classification . The powerful representation yields state-of-the-art performance in a variety of applications including...
What are the limitations of the SRCNN in the SISR task?
It relies on the context of only small image regions, it converges too slowly during training, and the network works for only one fixed scale [15].
[ 15 ]
[ { "id": "1511.04587_all_0", "text": " We address the problem of generating a high-resolution (HR) image given a low-resolution (LR) image, commonly referred as single image super-resolution (SISR) , , . SISR is widely used in computer vision applications ranging from security and surveillance imaging to med...
What is the mentioned threshold?
The threshold is used when traversing the tree downward, taking the highest-confidence path at every split [61].
[ 61 ]
[ { "id": "1612.08242_all_0", "text": " General purpose object detection should be fast, accurate, and able to recognize a wide variety of objects. Since the introduction of neural networks, detection frameworks have become increasingly fast and accurate. However, most detection methods are still constrained ...
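The traversal described above can be sketched as follows; the node format, labels, and confidence values are hypothetical illustrations, not taken from the paper:

```python
# Toy sketch (not the paper's code): walk a label hierarchy by always
# taking the highest-confidence child, and stop once the best child
# falls below the threshold. Node format: (label, confidence, children).
def traverse(node, threshold):
    label, conf, children = node
    while children:
        best = max(children, key=lambda c: c[1])
        if best[1] < threshold:
            break  # stop at the current, more general label
        label, conf, children = best
    return label

tree = ("animal", 0.9, [
    ("dog", 0.8, [("terrier", 0.3, []), ("husky", 0.7, [])]),
    ("cat", 0.1, []),
])
# with threshold 0.5 the path is animal -> dog -> husky;
# with threshold 0.75 the walk stops at the more general "dog"
```

A higher threshold therefore yields more general (safer) labels, while a lower one commits to more specific leaves.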
Did the authors consider only three user behaviors in mobile usage?
Yes, the authors consider only three behaviors: retention, installation, and uninstallation [69]. This is because the authors explicitly note these three behaviors and also model behavior-type embeddings only for those three [7].
[ 69, 7 ]
[ { "id": "2005.13303_all_0", "text": " Personalized mobile business, e.g., recommendations, and advertising, often require effective user representations. For better performance, user modeling in industrial applications often considers as much information as possible, including but not limited to gender, loc...
What are the metrics used to compare the performance of the residual network to the other models?
The main metrics used to compare the different methods were Top-1 and Top-5 error, test error on the CIFAR datasets, the number of parameters, and the number of FLOPs [28]. The mean absolute response of the output features of each stage was also used to compare their method with ResNet [32].
[ 28, 32 ]
[ { "id": "1704.06904_all_0", "text": " Not only a friendly face but also red color will draw our attention. The mixed nature of attention has been studied extensively in the previous literatures (34, 16, 23, 40). Attention not only serves to select a focused location but also enhances different representatio...
Does using horizontal connections depend on the amount and complexity of the data to be segmented?
Yes, more complex data can be finely segmented by using horizontal connections in the CNN network [11].
[ 11 ]
[ { "id": "1606.04797_all_0", "text": " Recent research in computer vision and pattern recognition has highlighted the capabilities of Convolutional Neural Networks (CNNs) to solve challenging tasks such as classification, segmentation and object detection, achieving state-of-the-art performances. This succes...
In Figure 6, why are the BLEU scores fluctuating when the sentence lengths are less than 40?
The authors' model is more effective at handling long sentences, as its quality does not degrade as sentences become longer [37].
[ 37 ]
[ { "id": "1508.04025_all_0", "text": " Neural Machine Translation (NMT) achieved state-of-the-art performances in large-scale translation tasks such as from English to French (Luong et al., 2015) and English to German (Jean et al., 2015). NMT is appealing since it requires minimal domain knowledge and is con...
What is an example of an autonomous car that uses CNN?
The Tesla (Model S, for example) Autopilot system uses a convolutional neural network to detect objects in its path [0].
[ 0 ]
[ { "id": "1602.07360_all_0", "text": " Much of the recent research on deep convolutional neural networks (CNNs) has focused on increasing accuracy on computer vision datasets. For a given accuracy level, there typically exist multiple CNN architectures that achieve that accuracy level. Given equivalent accur...
The paper mentions using Daily News and CNN bullet-point summaries to generate queries. Would the authors' approach towards building this supervised dataset work effectively if these news sources created the summaries by merely extracting sentences from the whole article, instead of rephrasing and condensing text?
The authors emphasize in multiple places that their approach relies on the fact that DailyMail and CNN both use abstractive summaries for their bullet points [2]. This probably implies that the authors' approach would not work for news sources that merely use excerpts or extracts as summaries [6].
[ 2, 6 ]
[ { "id": "1506.03340_all_0", "text": " Progress on the path from shallow bag-of-words information retrieval algorithms to machines capable of reading and understanding documents has been slow. Traditional approaches to machine reading and comprehension have been based on either hand engineered grammars , or ...
How does the channel shuffle operation work for two groups?
In channel shuffle with two groups, each group is divided into two subgroups and shuffled so that each new group contains a subgroup from both old groups [10]. For example, |A|B| -> |aa|bb| -> |ab|ab| [16]. In terms of performance, two groups work consistently better than a single group and consistently worse than more than two groups [22].
[ 10, 16, 22 ]
[ { "id": "1707.01083_all_0", "text": " Building deeper and larger convolutional neural networks (CNNs) is a primary trend for solving major visual recognition tasks (21, 9, 33, 5, 28, 24). The most accurate CNNs usually have hundreds of layers and thousands of channels (9, 34, 32, 40), thus requiring computa...
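The |aa|bb| -> |ab|ab| shuffle described above is the usual reshape-transpose-flatten trick; here is a minimal pure-Python sketch of it (a stand-in for the tensor reshape/transpose, not the original implementation):

```python
# Channel shuffle for g groups: view the channel list as a (g, n//g)
# grid, then read it out column-major so the groups interleave.
def channel_shuffle(channels, groups):
    n = len(channels)
    per = n // groups
    rows = [channels[i * per:(i + 1) * per] for i in range(groups)]
    # column-major read-out gives each new group a piece of every old group
    return [rows[r][c] for c in range(per) for r in range(groups)]

# two groups, matching the |aa|bb| -> |ab|ab| example
channel_shuffle(["a1", "a2", "b1", "b2"], 2)  # -> ["a1", "b1", "a2", "b2"]
```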
YOLO detectors are now used everywhere, in both civil and military applications. How concerned should researchers be about the positive and negative uses of their work?
One sarcastic comment expresses the authors' concern that Google, Facebook, and similar corporations use these kinds of models to harvest and exploit our personal information [31]; a similar sarcastic comment targets the military [32]. The authors hold that researchers should take responsibility for their work and consider its possible consequences for the world [33].
[ 31, 32, 33 ]
[ { "id": "1804.02767_all_0", "text": " Sometimes you just kinda phone it in for a year, you know? I didn’t do a whole lot of research this year. Spent a lot of time on Twitter. Played around with GANs a little. I had a little momentum left over from last year ; I managed to make some improvements to YOLO. B...
Was the performance difference between Self-Instruct training and SuperNI training significant?
While it does appear that there is a measurable performance improvement from SuperNI to Self-Instruct, quantifying the impact and magnitude of that improvement is not straightforward [2]. Evaluations with ROUGE-L scores find that the absolute difference between the two methods is not very large, though additional information and context may be needed to judge what that absolute difference means [23]. The authors do claim that they outperform T0 and SuperNI by a large margin, which is strong evidence that the difference was indeed significant, but such claims must be taken with a grain of salt, since authors are usually incentivized to show that their models are the best [25].
[ 2, 23, 25 ]
[ { "id": "2212.10560_all_0", "text": " The recent NLP literature has witnessed a tremendous amount of activity in building models that can follow natural language instructions (Mishra et al., 2022; Wei et al., 2022; Sanh et al., 2022; Wang et al., 2022; Ouyang et al., 2022; Chung et al., 2022, i.a.). These d...
What is the difference between zero-shot fusion and original AdapterFusion?
In contrast to AdapterFusion, where the focus is on learning to transfer knowledge to a specific target task, zero-shot fusion aims to generalize this transfer to any arbitrary target task [13].
[ 13 ]
[ { "id": "2206.03715_all_0", "text": " The ability to understand natural language through commonsense reasoning is one of the core focuses in the field of natural language processing. To measure and study the different aspects of commonsense reasoning, several datasets are developed, such as SocialIQA (Sap e...
How do the authors claim that their approach overcomes the problems ultrasound posed for earlier approaches?
V-Net addresses the problems that patch-based CNNs have with ultrasound by operating on the full 3D image volume [19].
[ 19 ]
[ { "id": "1606.04797_all_0", "text": " Recent research in computer vision and pattern recognition has highlighted the capabilities of Convolutional Neural Networks (CNNs) to solve challenging tasks such as classification, segmentation and object detection, achieving state-of-the-art performances. This succes...
Why did the authors have to scale the classifier gradients by a constant factor larger than 1?
When using a scale of 1, they observed that the classifier assigned reasonable probabilities (around 50%) to the desired classes for the final samples, but these samples did not match the intended classes upon visual inspection [35]. Scaling up the classifier gradients remedied this problem, and the class probabilities from the classifier increased to nearly 100% [41]. Using a larger gradient scale focuses more on the modes of the classifier, which is potentially desirable for producing higher-fidelity (but less diverse) samples [42].
[ 35, 41, 42 ]
[ { "id": "2105.05233_all_0", "text": " Over the past few years, generative models have gained the ability to generate human-like natural language Brown et al. (2020), infinite high-quality synthetic images Brock et al. (2018); Karras et al. (2019b); Razavi et al. (2019) and highly diverse human speech and mu...
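The gradient-scaling idea can be sketched as shifting the model's predicted mean by the scaled classifier gradient. This is a simplified stand-in, not the paper's implementation; the names `mu`, `sigma2`, `grad_log_p`, and the numbers are illustrative:

```python
# Hedged sketch of classifier-guided sampling: the predicted mean of the
# reverse-diffusion step is shifted by variance * scale * grad log p(y|x).
# With scale > 1 the shift toward the classifier's preferred class grows.
def guided_mean(mu, sigma2, grad_log_p, scale):
    # perturb each coordinate of the predicted mean toward higher p(y | x)
    return [m + sigma2 * scale * g for m, g in zip(mu, grad_log_p)]

guided_mean([0.0, 1.0], 0.5, [2.0, -1.0], 10.0)  # -> [10.0, -4.0]
```

With `scale=1.0` the same call would shift the mean only by `[1.0, -0.5]`, which matches the observation above that a unit scale perturbs samples too weakly.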
What are the pros and cons of a global approach and a local approach?
A drawback of global attention is that it must attend to all words on the source side for each target word, which is expensive and can become impractical when translating longer sequences; even so, global attention gives a significant boost of +2.8 BLEU over the base system, and the local approach gives a further improvement of +0.9 BLEU on top of the global attention model [17]. The local approach also achieves lower AERs [19]. In addition, the local approach is simpler, easier to implement and train, and computationally less expensive [2], as it focuses on only a small subset of the source positions per target word [31].
[ 17, 19, 2, 31 ]
[ { "id": "1508.04025_all_0", "text": " Neural Machine Translation (NMT) achieved state-of-the-art performances in large-scale translation tasks such as from English to French (Luong et al., 2015) and English to German (Jean et al., 2015). NMT is appealing since it requires minimal domain knowledge and is con...
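The "small subset of source positions" in the local approach can be sketched as a window of width 2D+1 around a predicted position p_t with a Gaussian falloff; the function below is an illustrative simplification (uniform window, sigma = D/2), not the authors' code:

```python
import math

# Toy sketch of local attention: weight only the source positions inside
# [p_t - D, p_t + D], with a Gaussian falloff centered on p_t, instead of
# attending to every source position as global attention does.
def local_weights(src_len, p_t, D):
    sigma = D / 2.0
    lo, hi = max(0, int(p_t) - D), min(src_len - 1, int(p_t) + D)
    w = {s: math.exp(-((s - p_t) ** 2) / (2 * sigma ** 2))
         for s in range(lo, hi + 1)}
    z = sum(w.values())
    return {s: v / z for s, v in w.items()}  # normalized attention weights
```

For a 100-word source sentence with `p_t = 50` and `D = 5`, only 11 positions get nonzero weight, which is where the computational savings over global attention come from.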
What does MTL stand for?
Conventionally, MTL refers to learning multiple tasks [8]. Here, for the experiments, the authors call the model pre-trained on multiple synthetic QA datasets MTL [23].
[ 8, 23 ]
[ { "id": "2206.03715_all_0", "text": " The ability to understand natural language through commonsense reasoning is one of the core focuses in the field of natural language processing. To measure and study the different aspects of commonsense reasoning, several datasets are developed, such as SocialIQA (Sap e...
What is the goal of this work?
The vision community has rapidly improved object detection and semantic segmentation results over a short period of time [0].
[ 0 ]
[ { "id": "1703.06870_all_0", "text": " The vision community has rapidly improved object detection and semantic segmentation results over a short period of time. In large part, these advances have been driven by powerful baseline systems, such as the Fast/Faster R-CNN (12, 36) and Fully Convolutional Network ...
Does all the layers of the MobileNet use depthwise separable convolution?
The first layer of MobileNet is a full convolution, and the rest are depthwise separable convolutions [27].
[ 27 ]
[ { "id": "1704.04861_all_0", "text": " Convolutional neural networks have become ubiquitous in computer vision ever since AlexNet popularized deep convolutional neural networks by winning the ImageNet Challenge: ILSVRC 2012 . The general trend has been to make deeper and more complicated networks in order t...
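The reason all but the first layer are depthwise separable can be seen in a back-of-envelope parameter count (bias terms omitted; the layer sizes below are illustrative, not MobileNet's actual configuration):

```python
# Parameter counts for a k x k convolution with c_in input and c_out
# output channels: standard vs depthwise separable (depthwise + 1x1 pointwise).
def standard_conv_params(k, c_in, c_out):
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    return k * k * c_in + c_in * c_out  # depthwise filters + pointwise mix

standard_conv_params(3, 64, 128)   # -> 73728
separable_conv_params(3, 64, 128)  # -> 8768, roughly 8.4x fewer parameters
```

The savings grow with the number of output channels, which is why factorizing every layer after the first full convolution pays off.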
What makes it possible for SAT to study the expressiveness of the output representation?
Thanks to the unique design of SAT, which relies on a subgraph structure extractor, it becomes possible to study the expressiveness of the output representations [28].
[ 28 ]
[ { "id": "2202.03036_all_0", "text": " Graph neural networks (GNNs) have been established as powerful and flexible tools for graph representation learning, with successful applications in drug discovery (Gaudelet et al., 2021), protein design (Ingraham et al., 2019), social network analysis (Fan et al., 2019...
What is the loss function used by the authors?
The authors used a prior-preservation loss function [4].
[ 4 ]
[ { "id": "2208.12242_all_0", "text": " Can you imagine your own dog traveling around the world, or your favorite bag displayed in the most exclusive showroom in Paris? What about your parrot being the main character of an illustrated storybook? Rendering such imaginary scenes is a challenging task that requi...
What does "kernel smoother" mean?
A kernel smoother is a kernel defined on node features that captures the local structure of nodes by computing the similarity between node pairs [15].
[ 15 ]
[ { "id": "2202.03036_all_0", "text": " Graph neural networks (GNNs) have been established as powerful and flexible tools for graph representation learning, with successful applications in drug discovery (Gaudelet et al., 2021), protein design (Ingraham et al., 2019), social network analysis (Fan et al., 2019...
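A small illustration of that idea, using an RBF kernel on node feature vectors; the kernel choice and the bandwidth `gamma` are assumptions for illustration, not the paper's exact definition:

```python
import math

# Illustrative kernel on node features: similarity is 1 for identical
# feature vectors and decays toward 0 as the vectors move apart, so
# nearby (similar) nodes dominate the smoothed representation.
def rbf_kernel(x, y, gamma=1.0):
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq_dist)

rbf_kernel([1.0, 0.0], [1.0, 0.0])  # identical node features -> 1.0
```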
Would more recent frameworks, such as JAX be considered imperative or declarative?
While some recent frameworks (e.g., TensorFlow) have aspects of both the imperative and declarative paradigms, the question cannot be answered from this paper, since it contains no evidence about how the paradigms have shifted over time [1].
[ 1 ]
[ { "id": "1512.01274_all_0", "text": " The scale and complexity of machine learning (ML) algorithms are becoming increasingly large. Almost all recent ImageNet challenge  winners employ neural networks with very deep layers, requiring billions of floating-point operations to process one single sample. The ri...
The authors explored the possibility of using residual networks on the inception model to reduce complexity. Is that true?
True [1]. The authors explored the possibility of using residual networks on the inception model to reduce complexity [27].
[ 1, 27 ]
[ { "id": "1602.07261_all_0", "text": " Since the 2012 ImageNet competition  winning entry by Krizhevsky et al , their network “AlexNet” has been successfully applied to a larger variety of computer vision tasks, for example to object-detection , segmentation , human pose estimation , video classification , o...
What is the goal behind transforming axial, coronal, and sagittal representations to RGB?
The goal behind transforming the axial, coronal and sagittal representations to RGB is to help the learning process of transfer learning models that were pre-trained on ImageNet [11].
[ 11 ]
[ { "id": "1602.03409_all_0", "text": " Tremendous progress has been made in image recognition, primarily due to the availability of large-scale annotated datasets (i.e. ImageNet (1, 2)) and the recent revival of deep convolutional neural networks (CNN) (3, 4). For data-driven learning, large-scale well-annot...
What kinds of pretrained language models do they mention?
The authors mention BERT, RoBERTa, T5, mBERT, and XLM-R [14].
[ 14 ]
[ { "id": "2204.08110_all_0", "text": " Pretrained language models have become an integral part of NLP systems. They come in two flavors: monolingual, where the model is trained on text from a single language, and multilingual, where the model is jointly trained on data from many different languages. Monoling...
What is the "verb-noun structure" and what does it show?
The "verb-noun structure" of a sentence is obtained by identifying the primary verb (action word) in the sentence and the corresponding noun that the action is performed on [15].
[ 15 ]
[ { "id": "2212.10560_all_0", "text": " The recent NLP literature has witnessed a tremendous amount of activity in building models that can follow natural language instructions (Mishra et al., 2022; Wei et al., 2022; Sanh et al., 2022; Wang et al., 2022; Ouyang et al., 2022; Chung et al., 2022, i.a.). These d...
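A toy illustration of extracting such a verb-noun pair; the real pipeline uses an actual parser, so the hand-supplied POS tags and the first-verb/first-noun heuristic below are purely illustrative:

```python
# Naive verb-noun extraction over (word, POS-tag) pairs: take the first
# verb as the action and the first noun after it as the thing acted on.
def verb_noun(tagged_tokens):
    verb = next((w for w, t in tagged_tokens if t == "VERB"), None)
    seen_verb = False
    for w, t in tagged_tokens:
        if w == verb:
            seen_verb = True
        elif seen_verb and t == "NOUN":
            return (verb, w)
    return (verb, None)

verb_noun([("please", "INTJ"), ("write", "VERB"), ("an", "DET"), ("essay", "NOUN")])
# -> ("write", "essay")
```

Pairs like ("write", "essay") make it easy to plot the diversity of generated instructions as a verb-noun distribution.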
How well do the proposed model in this paper and the model in Dai and Le (2015) generalize to documents of varying lengths?
This work's method was evaluated on six widely studied datasets that vary in document length, and the experimental results demonstrated that it outperforms existing approaches on all of them [12]. This indicates that the method generalizes to documents of varying lengths [30]. Compared to Dai and Le, this method also achieved a lower error rate on IMDb, showing that it generalizes better to document lengths reflective of the real world [37].
[ 12, 30, 37 ]
[ { "id": "1801.06146_all_0", "text": " Inductive transfer learning has had a large impact on computer vision (CV). Applied CV models (including object detection, classification, and segmentation) are rarely trained from scratch, but instead are fine-tuned from models that have been pretrained on ImageNet, MS...
Can we generalize Transformers to translate between any language pair, not just to or from English, e.g., German-to-Arabic?
Since the Transformer performed very well on English-to-French and English-to-German translation tasks and can be trained significantly faster than architectures based on recurrent or convolutional layers, it can be hoped that it will work for languages other than English as well [48].
[ 48 ]
[ { "id": "1706.03762_all_0", "text": " Recurrent neural networks, long short-term memory and gated recurrent neural networks in particular, have been firmly established as state of the art approaches in sequence modeling and transduction problems such as language modeling and machine translation (35, 2, 5)...
Do the differences in hardware include different gripper shapes?
The differences in hardware include different gripper shapes, illustrating the range of variation in gripper wear and geometry, as shown in Figure 7 [13]. Uneven wear and tear on the robots resulted in many differences in the shape of the gripper fingers [25].
[ 13, 25 ]
[ { "id": "1603.02199_all_0", "text": " When humans and animals engage in object manipulation behaviors, the interaction inherently involves a fast feedback loop between perception and action. Even complex manipulation tasks, such as extracting a single object from a cluttered bin, can be performed with hardl...
I was wondering whether these results came from various settings (e.g., training only on a video dataset).
Video Diffusion Models presents an initial attempt at text-conditioned video generation under various settings, such as classifier-free guidance, joint video-image training, and unconditional as well as conditional generation [1]. The authors include several additional image frames for the joint video-image training [15]. Moreover, they adjust the weight of classifier-free guidance and use the newly proposed reconstruction guidance as the conditioning method for autoregressive extension and simultaneous spatial and temporal super-resolution [21].
[ 1, 15, 21 ]
[ { "id": "2204.03458_all_0", "text": " Diffusion models have recently been producing high quality results in image generation and audio generation (e.g. 28, 39, 40, 16, 23, 36, 48, 60, 42, 10, 29), and there is significant interest in validating diffusion models in new data modalities. In this work, we prese...
Why do the authors claim that their proposed model is "sample efficient"?
The authors claim that their proposed method enables sample-efficient transfer learning through experiments where they showed that, with only 100 labelled examples, ULMFiT could match the performance of training from scratch with 10 to 20 times more data [5].
[ 5 ]
[ { "id": "1801.06146_all_0", "text": " Inductive transfer learning has had a large impact on computer vision (CV). Applied CV models (including object detection, classification, and segmentation) are rarely trained from scratch, but instead are fine-tuned from models that have been pretrained on ImageNet, MS...
What does curriculum learning aim at?
Curriculum learning is a method for selecting good triplets [4].
[ 4 ]
[ { "id": "1503.03832_all_0", "text": " In this paper we present a unified system for face verification (is this the same person), recognition (who is this person) and clustering (find common people among these faces). Our method is based on learning a Euclidean embedding per image using a deep convolutional ...
What is backpropagation through time?
Backpropagation through time (BPTT) is used to train language models to enable gradient propagation for large input sequences [27].
[ 27 ]
[ { "id": "1801.06146_all_0", "text": " Inductive transfer learning has had a large impact on computer vision (CV). Applied CV models (including object detection, classification, and segmentation) are rarely trained from scratch, but instead are fine-tuned from models that have been pretrained on ImageNet, MS...
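A minimal, self-contained sketch of how BPTT accumulates gradients backward through a sequence, shown on a toy scalar linear RNN; the model, loss, and names are illustrative, not from the paper:

```python
# BPTT on the scalar RNN h_t = w * h_{t-1} + x_t with loss L = h_T:
# run the forward pass storing every state, then walk backward through
# time, accumulating each timestep's contribution to dL/dw. The backward
# pass touches every step, which is why long inputs need truncation.
def bptt_grad(w, xs, h0=0.0):
    hs = [h0]
    for x in xs:                      # forward pass, storing all states
        hs.append(w * hs[-1] + x)
    grad, dh = 0.0, 1.0               # dL/dh_T = 1
    for t in range(len(xs), 0, -1):   # backward through time
        grad += dh * hs[t - 1]        # dL/dw contribution at step t
        dh *= w                       # propagate dL/dh back to step t-1
    return grad
```

For `w = 0.5`, `h0 = 0`, and inputs `[x1, x2]`, the loss is `h2 = w*x1 + x2`, so the gradient collapses to `x1`, which the routine reproduces.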
Why is the recognition model also referred to as a probabilistic encoder?
A connection between linear auto-encoders and a certain class of generative linear-Gaussian models has long been known [1]. Therefore, in this paper, given a datapoint \mathbf{x}, the recognition model produces a distribution (e.g. a Gaussian) [17] over the possible values of the code \mathbf{z} from which the datapoint \mathbf{x} could have been generated [22].
[ 1, 17, 22 ]
[ { "id": "1312.6114_all_0", "text": " How can we perform efficient approximate inference and learning with directed probabilistic models whose continuous latent variables and/or parameters have intractable posterior distributions? The variational Bayesian (VB) approach involves the optimization of an approxi...
How is this paper and other previous works which have explored the ability of RNN and Transformer architecture?
Previous works only focus on the learnability of tasks [6].
[ 6 ]
[ { "id": "2210.12302_all_0", "text": " Pretrained Language Models (LMs) have shown singular succcess on a range of natural language understandings tasks, to the extent that they have become foundational for contemporary NLP systems. Several works have investigated why pretraining works so well Warstadt et al...
What does the author mean by “empirically” consider some token sequences to be forgotten?
Since the definition of forgetting depends on a held-out validation corpus, token sequences are considered 'empirically' forgotten [27].
[ 27 ]
[ { "id": "2210.01504_all_0", "text": " Recent work has shown that an adversary can extract training data from Pretrained Language Models (LMs) including Personally Identifiable Information (PII) such as names, phone numbers, and email addresses, and other information such as licensed code, private clinical n...
What if we do not share the parameters?
No sharing of the parameters means no sharing of computation, which slows down inference [53].
[ 53 ]
[ { "id": "1706.02413_all_0", "text": " We are interested in analyzing geometric point sets which are collections of points in a Euclidean space. A particularly important type of geometric point set is point cloud captured by 3D scanners, e.g., from appropriately equipped autonomous vehicles. As a set, such d...
Why didn't the authors use the previous studies mentioned in the Introduction section as baseline models?
The authors did not use the previous studies as the baseline models since the proposed work attempts a new approach that disregards a typical assumption from the previous studies [3].
[ 3 ]
[ { "id": "2208.14867_all_0", "text": " Computational modeling of expressive music performance focuses on mimicking human behaviors that convey the music (1, 2). For piano performance, one common task is to render an expressive performance from a quantized musical score. It aims to reproduce the loudness and ...
What is the number of images in the dataset, that is gathered by the authors to train the architecture for place recognition?
They used weak supervision as a solution for the lack of labelled data [33]. They gathered a large dataset of multiple panoramic images depicting the same place from different viewpoints over time from the Google Street View Time Machine, which provides only weak supervision [34]. They relied on Pitts250k, which contains 250k database images downloaded from Google Street View and 24k test queries generated from Street View but taken at different times, years apart [5].
[ 33, 34, 5 ]
[ { "id": "1511.07247_all_0", "text": " Visual place recognition has received a significant amount of attention in the past years both in computer vision (66, 35, 10, 64, 65, 24, 9, 81, 4, 80, 63) and robotics communities (15, 16, 46, 44, 75) motivated by, e.g., applications in autonomous driving , augmented ...
How does the SSD match the default bounding box with ground truth ones?
To match ground truth boxes with default boxes, the authors use best Jaccard overlap [10]. Default boxes are then also matched to any ground truth box with Jaccard overlap higher than a threshold of 0.5 [9].
[ 10, 9 ]
[ { "id": "1512.02325_all_0", "text": " Current state-of-the-art object detection systems are variants of the following approach: hypothesize bounding boxes, resample pixels or features for each box, and apply a high-quality classifier. This pipeline has prevailed on detection benchmarks since the Selective S...
Why can the slightest change in the textual prompt lead to a completely different output image in large-scale language-image models?
Because large models trained on large datasets offer little control over the generated image: the output depends on the random seed and on the interaction between the pixels and the text embedding through the diffusion process, which is reflected in the spatial information of the internal layers of the generative model [0].
[ 0 ]
[ { "id": "2208.01626_all_0", "text": " Recently, large-scale language-image (LLI) models, such as Imagen , DALL·E 2  and Parti , have shown phenomenal generative semantic and compositional power, and gained unprecedented attention from the research community and the public eye. These LLI models are trained o...
Does utilizing the multi-hop neighbor information in meta-graph help improve the performance of the proposed model?
Through experiments, the authors demonstrated that the model's performance (i.e., MRR@10) decreased without knowledge propagation, becoming comparable to vanilla ERNIE, which shows that multi-hop neighbors are essential for ranking performance [33]. This result can be attributed to how using multi-hop neighbors allows knowledge to propagate between the query and the passage [36].
[ 33, 36 ]
[ { "id": "2204.11673_all_0", "text": " Passage Re-ranking is a crucial stage in modern information retrieval systems, which aims to reorder a small set of candidate passages to be presented to users. To put the most relevant passages on top of a ranking list, a re-ranker is usually designed with powerful cap...
How do the authors claim that the proposed method improves the accuracy-latency tradeoff over existing SoTA CNN models?
They compared their network with state-of-the-art models in Table 6 [67]. Table 6 shows that the baseline model achieves higher accuracy than the compared models at similar latency [68].
[ 67, 68 ]
[ { "id": "2009.02009_all_0", "text": " As there are growing needs of deep learning applications based on convolutional neural network(CNN) in embedded systems, improving the accuracy of CNNs under a given set of constraints on latency and energy consumption has brought keen interest to researchers as a chall...
How did SSD handle Object localization better than F-CNN ?
SSD does better object localization than F-CNN because it directly learns to regress the object shape and classify object categories, instead of using two decoupled steps [19].
[ 19 ]
[ { "id": "1512.02325_all_0", "text": " Current state-of-the-art object detection systems are variants of the following approach: hypothesize bounding boxes, resample pixels or features for each box, and apply a high-quality classifier. This pipeline has prevailed on detection benchmarks since the Selective S...
Objects are divided into how many classes in the ILSVRC dataset?
It's divided into 1000 classes [45].
[ 45 ]
[ { "id": "1409.0575_all_0", "text": " The ImageNet Large Scale Visual Recognition Challenge (ILSVRC) has been running annually for five years (since 2010) and has become the standard benchmark for large-scale object recognition.111In this paper, we will be using the term object recognition broadly to encompa...
What is the goal of using the width multiplier, and how is it used?
The width multiplier reduces computational cost and parameter count by thinning the network uniformly at each layer, defining a new, reduced structure that must be trained from scratch [30].
[ 30 ]
[ { "id": "1704.04861_all_0", "text": " Convolutional neural networks have become ubiquitous in computer vision ever since AlexNet popularized deep convolutional neural networks by winning the ImageNet Challenge: ILSVRC 2012 . The general trend has been to make deeper and more complicated networks in order t...
What is the MNIST dataset?
The MNIST dataset comprising of 10-class handwritten digits, was first introduced by LeCun et al [0].
[ 0 ]
[ { "id": "1708.07747_all_0", "text": " The MNIST dataset comprising of 10-class handwritten digits, was first introduced by LeCun et al. (1998) in 1998. At that time one could not have foreseen the stellar rise of deep learning techniques and their performance. Despite the fact that today deep learning can d...
What if the text prompt is exactly the same? I guess there can be cases where the text prompt is the same, but the videos are different. Did the authors remove such cases prior to running the evaluation?
The authors note as a limitation that their approach cannot learn associations between text and phenomena that can only be inferred from videos [1]. How to incorporate these (e.g., generating a video of a person waving their hand left-to-right or right-to-left), along with generating longer videos with multiple scenes and events depicting more detailed stories, is left for future work [2].
[ 1, 2 ]
[ { "id": "2209.14792_all_0", "text": " The Internet has fueled collecting billions of (alt-text, image) pairs from HTML pages (Schuhmann et al., 2022), enabling the recent breakthroughs in Text-to-Image (T2I) modeling. However, replicating this success for videos is limited since a similarly sized (text, vid...
Compared to AEVB, what is the drawback of Wake Sleep algorithm?
Wake-Sleep has the same computational complexity as AEVB per datapoint [20]. Moreover, a drawback of the wake-sleep algorithm is that it requires a concurrent optimization of two objective functions, which together do not correspond to optimization of (a bound of) the marginal likelihood and its optimization is slow compared to AEVB [30].
[ 20, 30 ]
[ { "id": "1312.6114_all_0", "text": " How can we perform efficient approximate inference and learning with directed probabilistic models whose continuous latent variables and/or parameters have intractable posterior distributions? The variational Bayesian (VB) approach involves the optimization of an approxi...
How did the authors sample a node's local neighborhood features to generate the embeddings?
The authors sample the required neighborhood sets (up to depth K) [15].
[ 15 ]
[ { "id": "1706.02216_all_0", "text": " Low-dimensional vector embeddings of nodes in large graphs111While it is common to refer to these data structures as social or biological networks, we use the term graph to avoid ambiguity with neural network terminology. have proved extremely useful as feature inputs f...
How are the missed predictions reinjected to help SSD learn from negative predictions?
The negative samples are sorted by highest confidence loss for each default box, and the top ones are picked [14].
[ 14 ]
[ { "id": "1512.02325_all_0", "text": " Current state-of-the-art object detection systems are variants of the following approach: hypothesize bounding boxes, resample pixels or features for each box, and apply a high-quality classifier. This pipeline has prevailed on detection benchmarks since the Selective S...
How does SPPnet address the drawback of R-CNN?
SPPnet computes a convolutional feature map for the entire input image and then classifies each object proposal using a feature vector extracted from the shared feature map [5].
[ 5 ]
[ { "id": "1504.08083_all_0", "text": " Recently, deep ConvNets (14, 16) have significantly improved image classification and object detection (9, 19) accuracy. Compared to image classification, object detection is a more challenging task that requires more complex methods to solve. Due to this complexity, c...
Are there any similar approaches to OPTIMUS, but on bigger and more modern architectures (e.g., GPT-J, T5)?
This paper does not mention any existing approaches similar to OPTIMUS that use larger models such as GPT-J or T5 [44]. The authors do mention in multiple places that using VAEs (which is what OPTIMUS is) is not very common in the field, and that existing attempts to use VAEs for language modelling typically use smaller models that are not very deep [2].
[ 44, 2 ]
[ { "id": "2004.04092_all_0", "text": " Pre-trained language models (PLMs) have substantially advanced the state-of-the-art across a variety of natural language processing (NLP) tasks Peters et al. (2018); Devlin et al. (2019); Yang et al. (2019); Radford et al. (2019); Liu et al. (2019); Keskar et al. (2019)...
What are the main differences between the large-scale training datasets MS-Celeb-1M, VGGface2, and Megaface in terms of depth versus breadth, long tail distribution, and data engineering practices?
In terms of depth vs. breadth, VGGFace2 is the dataset with the most depth among the three [52]. Although it contains a smaller number of subjects, it includes a large number of images per subject [53]. This lets models focus on intra-class variations such as lighting, age, and pose [54].
[ 52, 53, 54 ]
[ { "id": "1804.06655_all_0", "text": " Face recognition (FR) has been the prominent biometric technique for identity authentication and has been widely used in many areas, such as military, finance, public security and daily life. FR has been a long-standing research topic in the CVPR community. In the early...
How is Mask R-CNN used to estimate human poses ?
A keypoint's position is modeled as a one-hot binary mask, and Mask R-CNN is adopted to predict K masks, one for each of the K keypoint types (e.g., left shoulder, right elbow); only minimal modification is needed to adapt the system to predict instance-specific poses [52].
[ 52 ]
[ { "id": "1703.06870_all_0", "text": " The vision community has rapidly improved object detection and semantic segmentation results over a short period of time. In large part, these advances have been driven by powerful baseline systems, such as the Fast/Faster R-CNN (12, 36) and Fully Convolutional Network ...
How do the authors verify that adding h-swish and SE is beneficial?
Previous works showed that using the swish activation function instead of ReLU improves accuracy, and h-swish has a similar impact [54]. The authors verify that replacing ReLU with h-swish and adding SE improves accuracy by around 1% [69].
[ 54, 69 ]
[ { "id": "2009.02009_all_0", "text": " As there are growing needs of deep learning applications based on convolutional neural network(CNN) in embedded systems, improving the accuracy of CNNs under a given set of constraints on latency and energy consumption has brought keen interest to researchers as a chall...
How much percentage computation do pointwise convolutions take up in each residual unit?
Only for ResNeXt do the pointwise convolutions take up 93.4% of the multiplication-adds [13]. However, it is impossible to say what percentage the pointwise convolutions take for all the models mentioned in the paper [8].
[ 13, 8 ]
[ { "id": "1707.01083_all_0", "text": " Building deeper and larger convolutional neural networks (CNNs) is a primary trend for solving major visual recognition tasks (21, 9, 33, 5, 28, 24). The most accurate CNNs usually have hundreds of layers and thousands of channels (9, 34, 32, 40), thus requiring computa...
What is the main mathematical difference between the attentive LSTM reader and the vanilla Deep LSTM?
The main difference between the attention-based LSTM and the vanilla one is that the former addresses the limitation of vanilla LSTM’s fixed and limited context size by taking into account the entire context of every token via a token-level attention mechanism [22].
[ 22 ]
[ { "id": "1506.03340_all_0", "text": " Progress on the path from shallow bag-of-words information retrieval algorithms to machines capable of reading and understanding documents has been slow. Traditional approaches to machine reading and comprehension have been based on either hand engineered grammars , or ...
What is the reason behind Mask R-CNN outperforming all the winners of COCO 2015 and 2016?
The reason is that Faster R-CNN has two outputs for each candidate object, a class label and a bounding-box offset [12]; to this, Mask R-CNN adds a third branch that outputs the object mask [3].
[ 12, 3 ]
[ { "id": "1703.06870_all_0", "text": " The vision community has rapidly improved object detection and semantic segmentation results over a short period of time. In large part, these advances have been driven by powerful baseline systems, such as the Fast/Faster R-CNN (12, 36) and Fully Convolutional Network ...
Are default boxes and predicted boxes different?
The default boxes are used during training to tune the model's weights [13]. The predicted boxes are compared with the matched default boxes to optimise the model [14]. There is only one predicted box per object, while the number of default boxes can be huge [9].
[ 13, 14, 9 ]
[ { "id": "1512.02325_all_0", "text": " Current state-of-the-art object detection systems are variants of the following approach: hypothesize bounding boxes, resample pixels or features for each box, and apply a high-quality classifier. This pipeline has prevailed on detection benchmarks since the Selective S...
Is it true that one-stage detectors are computationally less expensive than two-stage detectors?
According to the cited evidence, a one-stage detector has to process a much larger set of candidate object locations regularly sampled across an image [4].
[ 4 ]
[ { "id": "1708.02002_all_0", "text": " Current state-of-the-art object detectors are based on a two-stage, proposal-driven mechanism. As popularized in the R-CNN framework , the first stage generates a sparse set of candidate object locations and the second stage classifies each candidate location as one of ...
What is the used model inspired from?
It was inspired by VGG-net [75].
[ 75 ]
[ { "id": "1511.04587_all_0", "text": " We address the problem of generating a high-resolution (HR) image given a low-resolution (LR) image, commonly referred as single image super-resolution (SISR) , , . SISR is widely used in computer vision applications ranging from security and surveillance imaging to med...
What is the optimal transport used in SwAV?
Adding an equipartition constraint to the objective induces the optimal transport problem used in SwAV [20].
[ 20 ]
[ { "id": "2211.02284_all_0", "text": " There has been a growing interest in using a large-scale dataset to build powerful machine learning models . Self-supervised learning (SSL), which aims to learn a useful representation without labels, is suitable for this trend; it is actively studied in the fields of n...
Why can’t we use a string of random letters as an identifier while fine-tuning the model?
Concatenating random characters to create a unique identifier is problematic because the tokenizer may split such a string into tokens for which both the language model and the diffusion model have strong priors [18].
[ 18 ]
[ { "id": "2208.12242_all_0", "text": " Can you imagine your own dog traveling around the world, or your favorite bag displayed in the most exclusive showroom in Paris? What about your parrot being the main character of an illustrated storybook? Rendering such imaginary scenes is a challenging task that requi...
Why was an output layer of 1×1 convolution added at the end of the architecture?
The 1\times 1 convolution computes a linear combination of the outputs of the depthwise convolution [14].
[ 14 ]
[ { "id": "1704.04861_all_0", "text": " Convolutional neural networks have become ubiquitous in computer vision ever since AlexNet popularized deep convolutional neural networks by winning the ImageNet Challenge: ILSVRC 2012 . The general trend has been to make deeper and more complicated networks in order t...
Why is it beneficial to fix the prototype embedding g to have unit length?
In zero-shot learning, since the meta-data vector and the query point come from different input domains, the paper found it empirically beneficial to fix the prototype embedding g to have unit length; the query embedding f, however, was not constrained [15].
[ 15 ]
[ { "id": "1703.05175_all_0", "text": " Few-shot classification (20, 16, 13) is a task in which a classifier must be adapted to accommodate new classes not seen in training, given only a few examples of each of these classes. A naive approach, such as re-training the model on the new data, would severely over...
What are the uses of approximate posterior inference of the latent variable z given an observed value x for parameters θ?
For coding or data representation tasks, efficient approximate posterior inference of the latent variable \mathbf{z} given an observed value \mathbf{x} is useful because the unobserved variables \mathbf{z} can be interpreted as a latent representation or code [10]. In this paper, the authors assume an approximate posterior of the form q_{\boldsymbol{\phi}}(\mathbf{z}|\mathbf{x}) [18]. They introduce a recognition model q_{\boldsymbol{\phi}}(\mathbf{z}|\mathbf{x}): an approximation to the intractable true posterior p_{\boldsymbol{\theta}}(\mathbf{z}|\mathbf{x}) [19]. In contrast to mean-field variational inference, where the variational parameters are computed from closed-form expectations, this algorithm learns the recognition model parameters \boldsymbol{\phi} jointly with the generative model parameters \boldsymbol{\theta} [2]. Given a datapoint \mathbf{x}, the recognition model produces a distribution (e.g. a Gaussian) [3] over the possible values of the code \mathbf{z} from which the datapoint \mathbf{x} could have been generated [5].
[ 10, 18, 19, 2, 3, 5 ]
[ { "id": "1312.6114_all_0", "text": " How can we perform efficient approximate inference and learning with directed probabilistic models whose continuous latent variables and/or parameters have intractable posterior distributions? The variational Bayesian (VB) approach involves the optimization of an approxi...
How does the paper show that the clustering result can be interpreted as users' intent?
It does not [13]. K is a hyperparameter best interpreted as the number of user intents, but it does not necessarily equal the actual number of user intents [46].
[ 13, 46 ]
[ { "id": "2202.02519_all_0", "text": " Recommender systems have been widely used in many scenarios to provide personalized items to users over massive vocabularies of items. The core of an effective recommender system is to accurately predict users’ interests toward items based on their historical interactio...
What is the relative improvement achieved by the authors over the other benchmarks for image retrieval?
Their architecture improves over state-of-the-art compact image representations on standard image retrieval benchmarks by a large margin, obtaining an mAP of 63.5%, 73.5% and 79.9% on Oxford 5k, Paris 6k and Holidays, respectively; this is a +20% relative improvement on Oxford 5k [47]. Their representations, learnt end-to-end, outperform pretrained image representations and off-the-shelf CNN descriptors [7].
[ 47, 7 ]
[ { "id": "1511.07247_all_0", "text": " Visual place recognition has received a significant amount of attention in the past years both in computer vision (66, 35, 10, 64, 65, 24, 9, 81, 4, 80, 63) and robotics communities (15, 16, 46, 44, 75) motivated by, e.g., applications in autonomous driving , augmented ...
What is the motivation behind learning disentangled representations of data ?
The motivation behind learning disentangled representations of data is a desire to achieve interpretability, particularly the decomposability of latent representations so that they admit intuitive explanations [0]. Another motivation is to provide a generalised framework for decomposition [48].
[ 0, 48 ]
[ { "id": "1812.02833_all_0", "text": " An oft-stated motivation for learning disentangled representations of data with deep generative models is a desire to achieve interpretability (5, 10)—particularly the decomposability (see §3.2.1 in 33) of latent representations to admit intuitive explanations. Most wor...
How are 2D multi resolution filter approaches similar to 3D approaches ?
Similar to 2D multi-resolution filtering approaches, the 3D multi-resolution approaches such as this one capture information at multiple scales [34].
[ 34 ]
[ { "id": "1604.03265_all_0", "text": " Understanding 3D environments is a vital element of modern computer vision research due to paramount relevance in many vision systems, spanning a wide field of application scenarios from self-driving cars to autonomous robots. Recent advancements in real-time SLAM techn...
Why is this method unable to achieve compelling results with any of the baselines in Figure 5, 6?
The paper’s generator architectures, which are tailored for good performance on appearance changes, can cause failures on tasks that require larger changes [53]. For example, on the task of dog→cat transfiguration, the learned translation degenerates into making minimal changes to the input [54]. Some failure cases are caused by the distribution characteristics of the training datasets [55].
[ 53, 54, 55 ]
[ { "id": "1703.10593_all_0", "text": " What did Claude Monet see as he placed his easel by the bank of the Seine near Argenteuil on a lovely spring day in 1873 (Figure 1, top-left)? A color photograph, had it been invented, may have documented a crisp blue sky and a glassy river reflecting it. Monet conveyed...
What is the definition of intra-class variability?
Intra-class variability here means images containing multiple objects at a variety of scales [35].
[ 35 ]
[ { "id": "1809.11096_all_0", "text": " The state of generative image modeling has advanced dramatically in recent years, with Generative Adversarial Networks (GANs, Goodfellow et al. (2014)) at the forefront of efforts to generate high-fidelity, diverse images with models learned directly from data. GAN trai...
The paper mentions GMPool can be used with any GNN architecture besides DMPNN. Are there any results leveraging more recent GNN architectures such as GIN or Graph Transformers?
While the authors chose DMPNN due to its superior performance over GNN architectures, the proposed pooling layer is module-agnostic and can be combined with any GNN [12].
[ 12 ]
[ { "id": "2209.02939_all_0", "text": " Graph Neural Networks (GNNs) learn representations of individual nodes based on the connectivity structure of an input graph. For graph-level prediction tasks, the standard procedure globally pools all the node features into a single graph representation without weight ...
What is the speed of YOLO, when it pushes its mAP performance to 63.4% ?
When the basic YOLO model reaches 63.4% mAP on the Pascal dataset, it runs at 45 fps [5]. On the other hand, Fast YOLO shows 53.7% mAP but runs at more than 150 fps [53].
[ 5, 53 ]
[ { "id": "1506.02640_all_0", "text": " Humans glance at an image and instantly know what objects are in the image, where they are, and how they interact. The human visual system is fast and accurate, allowing us to perform complex tasks like driving with little conscious thought. Fast, accurate algorithms fo...
In what ways does the ELBO objective in this work allow for control of decomposability?
The ELBO objective allows control of the two factors of decomposability through an additional regularisation term [49].
[ 49 ]
[ { "id": "1812.02833_all_0", "text": " An oft-stated motivation for learning disentangled representations of data with deep generative models is a desire to achieve interpretability (5, 10)—particularly the decomposability (see §3.2.1 in 33) of latent representations to admit intuitive explanations. Most wor...
Why does higher performance than hierarchical VAE (hVAE) show that it is important to pre-train a latent space? Did hVAE not pre-train a latent space or was the approach different?
This paper does not contain detailed information about VAEs so answering it with information from this paper is not possible [31].
[ 31 ]
[ { "id": "2004.04092_all_0", "text": " Pre-trained language models (PLMs) have substantially advanced the state-of-the-art across a variety of natural language processing (NLP) tasks Peters et al. (2018); Devlin et al. (2019); Yang et al. (2019); Radford et al. (2019); Liu et al. (2019); Keskar et al. (2019)...
What are a couple of examples of "lexical QA tasks"?
The five kinds of lexical question-and-answer tasks that the authors mention are: synonyms, antonyms, homophones, definitions and sentence usage generation [25]. An example of a synonym task could be "what is a word that has the same meaning as encumbrance" [26].
[ 25, 26 ]
[ { "id": "2201.06009_all_0", "text": " Language models are now better than ever before at generating realistic content, but still lack commonsense Bender and Koller (2020); Marcus (2021). One failure mode due to a lack of commonsense is in misunderstanding a user’s intent. The typical remedy of retraining wi...
Can we try to re-generate those unreachable images using recent methods to enhance medical tasks?
Unreachable images can be generated using elastic-deformation-based augmentation methods [15].
[ 15 ]
[ { "id": "1505.04597_all_0", "text": " In the last two years, deep convolutional networks have outperformed the state of the art in many visual recognition tasks, e.g. (7, 3). While convolutional networks have already existed for a long time , their success was limited due to the size of the available traini...
What is the ratio of the total number of articles collected from CNN and Daily News?
Assuming “Daily News” refers to “Daily Mail”, one of the websites the authors sourced the data from, the ratio of CNN to Daily Mail articles is approximately 93:220, i.e. about 1:2.37 [6].
[ 6 ]
[ { "id": "1506.03340_all_0", "text": " Progress on the path from shallow bag-of-words information retrieval algorithms to machines capable of reading and understanding documents has been slow. Traditional approaches to machine reading and comprehension have been based on either hand engineered grammars , or ...
What are similarities and differences between Key, Value, and Query?
Key, value, and query are all vectors, and all are used to compute attention: the output is a weighted sum of the values, where the weight assigned to each value is computed from the compatibility of the query with the corresponding key [12].
[ 12 ]
[ { "id": "1706.03762_all_0", "text": " Recurrent neural networks, long short-term memory and gated recurrent neural networks in particular, have been firmly established as state of the art approaches in sequence modeling and transduction problems such as language modeling and machine translation (35, 2, 5)...
What are actor-critic algorithms and how do they differ to other RL algorithms like Q-learning?
Actor-critic models are a class of reinforcement learning algorithms [9]. Unlike Q-learning, which is value-based and derives its policy implicitly from the learned action-value function, actor-critic methods maintain an explicit policy (the actor) that is updated using a learned value function (the critic).
[ 9 ]
[ { "id": "2210.01241_all_0", "text": " The ultimate aim of language technology is to interact with humans. However, most language models are trained without direct signals of human preference, with supervised target strings serving as (a sometimes crude) proxy. One option to incorporate user feedback is via ...
Why do the authors think RWR failed to perform on more challenging tasks?
RWR is the only gradient-based algorithm we implemented that does not require any hyperparameter tuning [42].
[ 42 ]
[ { "id": "1604.06778_all_0", "text": " Reinforcement learning addresses the problem of how agents should learn to take actions to maximize cumulative reward through interactions with the environment. The traditional approach for reinforcement learning algorithms requires carefully chosen feature representati...