| input | answer | gold_ctxs | ctxs |
|---|---|---|---|
Is there any benefit to using fader control instead of numbers (e.g., percentages)? | Fader control allows users to control the magnitude of the effect induced by specific words [32]. | [
32
] | [
{
"id": "2208.01626_all_0",
"text": " Recently, large-scale language-image (LLI) models, such as Imagen , DALL·E 2 and Parti , have shown phenomenal generative semantic and compositional power, and gained unprecedented attention from the research community and the public eye. These LLI models are trained o... |
Yolo makes different kinds of mistakes, but it is still really accurate; wouldn't that play against it when using it to boost Fast R-CNN? | Due to YOLO's architecture, it handles background regions better, since it sees a larger context (it processes the entire image end-to-end) when predicting bounding boxes compared to other models [51]. However, YOLO struggles with localizing objects, especially small ones [6]. On the other hand, Fast R-CNN can localize objects much better, but it makes almost 3 times more background errors (13.6% of its errors) than YOLO (4.75%) [63]. Thus, assisting the best Fast R-CNN model with YOLO gives a 3.2% boost in mAP (from 71.8% to 75.0%), because YOLO handles the background better [64]. | [
51,
6,
63,
64
] | [
{
"id": "1506.02640_all_0",
"text": " Humans glance at an image and instantly know what objects are in the image, where they are, and how they interact. The human visual system is fast and accurate, allowing us to perform complex tasks like driving with little conscious thought. Fast, accurate algorithms fo... |
What components of the proposed method aggregate explicit knowledge into implicit knowledge for query and passage embedding? | This work proposes an aggregation module that employs a PLM and a graph neural network (GMN) to model the interaction between explicit and implicit knowledge [33]. The PLM encodes text to obtain word representations (i.e., implicit knowledge), and the GMN encodes knowledge meta-graphs to obtain entity representations (i.e., explicit knowledge) [7]. The module then combines the word and entity representations, fusing the implicit and explicit knowledge [8]. | [
33,
7,
8
] | [
{
"id": "2204.11673_all_0",
"text": " Passage Re-ranking is a crucial stage in modern information retrieval systems, which aims to reorder a small set of candidate passages to be presented to users. To put the most relevant passages on top of a ranking list, a re-ranker is usually designed with powerful cap... |
Does this use text input as well or not? I thought it should use a text prompt to reflect a natural flow of images, but it does not seem to. | Make-A-Video adopts an unsupervised learning method that leverages a joint text-image prior, so it does not need paired text-video data [0]. However, training the prior \operatorname{P} does require text input [1]. | [
0,
1
] | [
{
"id": "2209.14792_all_0",
"text": " The Internet has fueled collecting billions of (alt-text, image) pairs from HTML pages (Schuhmann et al., 2022), enabling the recent breakthroughs in Text-to-Image (T2I) modeling. However, replicating this success for videos is limited since a similarly sized (text, vid... |
What is an example of an FPGA? | The Xilinx Virtex-7 FPGA, which has a maximum of 8.5 MB (i.e., 68 Mbits) of on-chip memory, is one example [0]. | [
0
] | [
{
"id": "1602.07360_all_0",
"text": " Much of the recent research on deep convolutional neural networks (CNNs) has focused on increasing accuracy on computer vision datasets. For a given accuracy level, there typically exist multiple CNN architectures that achieve that accuracy level. Given equivalent accur... |
What is the training method used for decreasing the gap between monolingual and multilingual models? | It is fine-tuning [21]. | [
21
] | [
{
"id": "2204.08110_all_0",
"text": " Pretrained language models have become an integral part of NLP systems. They come in two flavors: monolingual, where the model is trained on text from a single language, and multilingual, where the model is jointly trained on data from many different languages. Monoling... |
What is CTC-training? | According to the related work section of the paper, CTC training is a deep-learning-based speech recognition method that performs MAP inference over the alignment, treated as a latent random variable [23]. The paper does not detail how CTC training works; the details are presumed to be in reference [13] [24]. | [
23,
24
] | [
{
"id": "1506.07503_all_0",
"text": " Recently, attention-based recurrent networks have been successfully applied to a wide variety of tasks, such as handwriting synthesis , machine translation , image caption generation and visual object classification .111An early version of this work was presented at th... |
How did the authors come to the conclusion that the pose network likely uses image correspondence and the depth estimation network likely recognizes common structural features? | New scenes can be synthesized given camera poses, which requires image correspondence [11]. Alternatively, a target view can be synthesized given per-pixel depth in that image, plus the pose and visibility in a nearby view, which involves structural features [35]. | [
11,
35
] | [
{
"id": "1704.07813_all_0",
"text": " Humans are remarkably capable of inferring ego-motion and the 3D structure of a scene even over short timescales. For instance, in navigating along a street, we can easily locate obstacles and react quickly to avoid them. Years of research in geometric computer vision h... |
I agree that increasing the size of the passage embedding vectors is better, especially for long passages. What I am curious about is the motivation behind introducing the weight matrix. What if we just use the CLS vector? | The weight matrix preserves the original size of h_{\text{CLS}} and performs better than down-projecting the CLS vector to a lower dimension [17]. | [
17
] | [
{
"id": "2004.14503_all_0",
"text": " Recent advances in neural retrieval have led to advancements on several document, passage and knowledge-base benchmarks Guo et al. (2016); Pang et al. (2016); Hui et al. (2017); Dai et al. (2018); Gillick et al. (2018); Nogueira and Cho (2019a); MacAvaney et al. (2019);... |
They generated the cross-attention output weight by calculating the similarity between spatial features of the noise image and textual embedding. Is it right? | True, it is computed from the similarity between a query matrix Q of the projected noisy image and a key matrix K of the projected textual embedding [13]. | [
13
] | [
{
"id": "2208.01626_all_0",
"text": " Recently, large-scale language-image (LLI) models, such as Imagen , DALL·E 2 and Parti , have shown phenomenal generative semantic and compositional power, and gained unprecedented attention from the research community and the public eye. These LLI models are trained o... |
Doesn’t breaking the problem into sub problems increase computation? | Yes, breaking the problem into sub-problems increases computation [10]. | [
10
] | [
{
"id": "2212.13894_all_0",
"text": " Automated reasoning, the ability to draw valid conclusions from explicitly provided knowledge, has been a fundamental goal for AI since its early days McCarthy (1959); Hewitt (1969). Furthermore, logical reasoning, especially reasoning with unstructured, natural text is... |
Give two examples of public BERT-style English corpora. | CC-News and OpenWebText are public BERT-style English corpora [20]. | [
20
] | [
{
"id": "1907.11692_all_0",
"text": " Self-training methods such as ELMo Peters et al. (2018), GPT Radford et al. (2018), BERT Devlin et al. (2019), XLM Lample and Conneau (2019), and XLNet Yang et al. (2019) have brought significant performance gains, but it can be challenging to determine which aspects of... |
How can the authors say that their results illustrate the importance of previously overlooked design decisions in BERT? | They can claim this because they significantly improved performance by training the model longer, with bigger batches, over more data; removing the next-sentence-prediction objective; training on longer sequences; and dynamically changing the masking pattern applied to the training data [70]. | [
70
] | [
{
"id": "1907.11692_all_0",
"text": " Self-training methods such as ELMo Peters et al. (2018), GPT Radford et al. (2018), BERT Devlin et al. (2019), XLM Lample and Conneau (2019), and XLNet Yang et al. (2019) have brought significant performance gains, but it can be challenging to determine which aspects of... |
What are the benefits of selective pooling? | Selective pooling allows the model to learn position-sensitive score maps, which contain information crucial for learning [12]. This can be seen in the k = 1 case, where there is only one score map and no spatial information can be learned: the model fails to converge [31]. Selective pooling also changes the architecture so that no additional layers are needed after the final RoI layer, which greatly decreases computation time [7]. | [
12,
31,
7
] | [
{
"id": "1605.06409_all_0",
"text": " A prevalent family (8, 6, 18) of deep networks for object detection can be divided into two subnetworks by the Region-of-Interest (RoI) pooling layer : (i) a shared, “fully convolutional” subnetwork independent of RoIs, and (ii) an RoI-wise subnetwork that does not shar... |
Was the whole ImageNet dataset used for the 10 epochs of resolution fine-tuning? | After initial training on images at 224\times 224, the proposed network is fine-tuned at a larger size, 448\times 448 [11]. During initial training, the network is first trained on the standard ImageNet 1000-class classification dataset for 160 epochs using stochastic gradient descent with a starting learning rate of 0.1, polynomial rate decay with a power of 4, weight decay of 0.0005, and momentum of 0.9, using the Darknet neural network framework [42]. Similarly, YOLOv2 is also fine-tuned on the standard ImageNet 1000-class dataset [43]. | [
11,
42,
43
] | [
{
"id": "1612.08242_all_0",
"text": " General purpose object detection should be fast, accurate, and able to recognize a wide variety of objects. Since the introduction of neural networks, detection frameworks have become increasingly fast and accurate. However, most detection methods are still constrained ... |
By ignoring IoU between 0.4 and 0.5, Are we losing some positive samples too? | As evident from the above sentence, since we are ignoring these anchors, it is possible that we lose some positive samples if their IoU is between 0.4 and 0.5 [28]. | [
28
] | [
{
"id": "1708.02002_all_0",
"text": " Current state-of-the-art object detectors are based on a two-stage, proposal-driven mechanism. As popularized in the R-CNN framework , the first stage generates a sparse set of candidate object locations and the second stage classifies each candidate location as one of ... |
How can the attention mechanism connecting the bottom layer of the decoder to the top layer of the encoder contribute to improving parallelism? | First, we have to establish that LSTM layers reduce parallelism, as each layer must wait for both the forward and backward directions of the previous layer to finish [25]. Then notice in Figure 1 that the model architecture consists of 8 LSTM encoder layers (1 bi-directional and 7 uni-directional) and 8 decoder layers [25]. | [
25,
25
] | [
{
"id": "1609.08144_all_0",
"text": " Neural Machine Translation (NMT) (41, 2) has recently been introduced as a promising approach with the potential of addressing many shortcomings of traditional machine translation systems. The strength of NMT lies in its ability to learn directly, in an end-to-end fashi... |
This paper deals with NIL. Is this true? | Yes, they add new annotations of linked and NIL coreference clusters [2]. | [
2
] | [
{
"id": "2108.13530_all_0",
"text": " In this paper we explore a principled approach to solve entity linking (EL) jointly with coreference resolution (coref). Concretely, we formulate coref+EL as a single structured task over directed trees that conceives EL and coref as two complementary components: a core... |
What is the purpose of using a non-isotropic Gaussian prior in the VAE model? | The purpose of using a non-isotropic Gaussian prior in the VAE model is to obtain better disentanglement scores, with further improvement achieved when the prior variance is learnt [35]. | [
35
] | [
{
"id": "1812.02833_all_0",
"text": " An oft-stated motivation for learning disentangled representations of data with deep generative models is a desire to achieve interpretability (5, 10)—particularly the decomposability (see §3.2.1 in 33) of latent representations to admit intuitive explanations. Most wor... |
How can focal loss be extended to the multi-class problem? | The paper states that extending the focal loss to the multi-class case is straightforward and works well; for simplicity, the authors focus on the binary loss in their work [13]. | [
13
] | [
{
"id": "1708.02002_all_0",
"text": " Current state-of-the-art object detectors are based on a two-stage, proposal-driven mechanism. As popularized in the R-CNN framework , the first stage generates a sparse set of candidate object locations and the second stage classifies each candidate location as one of ... |
Is it correct that the separation border for a cell is determined mainly by the critical nearer cells and classes, and that as these cells become more distant in the map, the border term converges to the weight assigned to that class? | Yes, the loss function accounts for changes in the distance between cells [6]. | [
6
] | [
{
"id": "1505.04597_all_0",
"text": " In the last two years, deep convolutional networks have outperformed the state of the art in many visual recognition tasks, e.g. (7, 3). While convolutional networks have already existed for a long time , their success was limited due to the size of the available traini... |
The paper's pre-trained network is nearly identical to “AlexNet”. Does it use the same training set as "AlexNet"? | Yes, both were trained on the ImageNet 2012 dataset, but the paper's network first subtracted the per-pixel mean of the ImageNet examples before feeding training examples to the network [12]. Hence, the direct input to the network, x, can be thought of as a zero-centered input [20]. | [
12,
20
] | [
{
"id": "1506.06579_all_0",
"text": " The last several years have produced tremendous progress in training powerful, deep neural network models that are approaching and even surpassing human abilities on a variety of challenging machine learning tasks (Taigman et al., 2014; Schroff et al., 2015; Hannun et a... |
Is the additional image I0 unchanged throughout the entire training process? | The additional image I0 is unchanged throughout the entire training process [17]. | [
17
] | [
{
"id": "1603.02199_all_0",
"text": " When humans and animals engage in object manipulation behaviors, the interaction inherently involves a fast feedback loop between perception and action. Even complex manipulation tasks, such as extracting a single object from a cluttered bin, can be performed with hardl... |
How are the embeddings of visual and textual features fused during the noise prediction process? | They are fused using cross-attention layers. As illustrated in Figure 3, the deep spatial features of the noisy image φ(zt) are projected to a query matrix Q = lQ(φ(zt)), and the textual embedding is projected to a key matrix K = lK(ψ(P)) and a value matrix V = lV(ψ(P)), via learned linear projections lQ, lK, lV [12]. | [
12
] | [
{
"id": "2208.01626_all_0",
"text": " Recently, large-scale language-image (LLI) models, such as Imagen , DALL·E 2 and Parti , have shown phenomenal generative semantic and compositional power, and gained unprecedented attention from the research community and the public eye. These LLI models are trained o... |
What does "information highways" mean ? | 'information highways' means that some information is not lost while passing through the layer [32]. | [
32
] | [
{
"id": "1507.06228_all_0",
"text": " Many recent empirical breakthroughs in supervised machine learning have been achieved through large and deep neural networks. Network depth (the number of successive computational layers) has played perhaps the most important role in these successes. For instance, withi... |
Which networks introduced efficient depthwise separable convolution into the building blocks of a state-of-the-art network? | Although AlexNet introduced the idea of group convolutions, Xception and ResNeXt generalized depthwise separable convolutions and achieved state-of-the-art results under large computational budgets (~1 GFLOPs) [1]. | [
1
] | [
{
"id": "1707.01083_all_0",
"text": " Building deeper and larger convolutional neural networks (CNNs) is a primary trend for solving major visual recognition tasks (21, 9, 33, 5, 28, 24). The most accurate CNNs usually have hundreds of layers and thousands of channels (9, 34, 32, 40), thus requiring computa... |
Why does it have to be fixed? Can't we extend it to more frames? | Due to memory constraints of deep learning accelerators, a fixed number of video frames should be used [1]. If memory constraints are addressed, a larger number of frames can be used [11]. To address this issue, they introduce joint training on video and image [22]. They concatenate random independent image frames to the end of each video sampled from the dataset to consider more frames during training and implement a memory optimization to fit more independent examples in a batch [23]. | [
1,
11,
22,
23
] | [
{
"id": "2204.03458_all_0",
"text": " Diffusion models have recently been producing high quality results in image generation and audio generation (e.g. 28, 39, 40, 16, 23, 36, 48, 60, 42, 10, 29), and there is significant interest in validating diffusion models in new data modalities. In this work, we prese... |
Why was ResNet network chosen as baseline method | ResNet was state-of-the-art at the time, according to the paper [38]. Therefore, it makes sense to compare their method with ResNet [45]. | [
38,
45
] | [
{
"id": "1704.06904_all_0",
"text": " Not only a friendly face but also red color will draw our attention. The mixed nature of attention has been studied extensively in the previous literatures (34, 16, 23, 40). Attention not only serves to select a focused location but also enhances different representatio... |
Can a joint model fit the coreference resolution task? | Yes, the joint model achieves superior performance on the coreference resolution task [14]. | [
14
] | [
{
"id": "2108.13530_all_0",
"text": " In this paper we explore a principled approach to solve entity linking (EL) jointly with coreference resolution (coref). Concretely, we formulate coref+EL as a single structured task over directed trees that conceives EL and coref as two complementary components: a core... |
What is dropout and how does it alleviate overfitting? | Dropout is a neural network component parametrized with a probability [42]. The paper does not discuss how it alleviates overfitting [49]. | [
42,
49
] | [
{
"id": "1602.02410_all_0",
"text": " Language Modeling (LM) is a task central to Natural Language Processing (NLP) and Language Understanding. Models which can accurately place distributions over sentences not only encode complexities of language such as grammatical structure, but also distill a fair amoun... |
How do the authors show that utilizing augmented views of positive interactions can improve performance, especially on sparser datasets? | They show it experimentally: stochastic data augmentation achieved a large improvement compared to using fixed neighborhood information as the encoder inputs [44]. | [
44
] | [
{
"id": "2105.06323_all_0",
"text": " Over the past decade, one-class collaborative filtering (OCCF) problems (Pan et al., 2008; Hu et al., 2008) have been extensively researched to accurately infer a user’s preferred items, particularly for the recommender systems where only the users’ implicit feedback on... |
How do these automated metrics for human preferences differ and what factors do they consider when predicting human preferences? | The automated metrics that are mentioned while discussing related work are BERTScore (Zhang et al, 2019), BLEURT (Sellam et al, 2020), and Ouyang et al (2022) [0]. | [
0
] | [
{
"id": "2210.01241_all_0",
"text": " The ultimate aim of language technology is to interact with humans. However, most language models are trained without direct signals of human preference, with supervised target strings serving as (a sometimes crude) proxy. One option to incorporate user feedback is via ... |
What point is different in GATs in terms of assigning weights compared to GCN? | GATs implicitly assign different weights to neighboring nodes, while GCN does not [16]. | [
16
] | [
{
"id": "1710.10903_all_0",
"text": " Convolutional Neural Networks (CNNs) have been successfully applied to tackle problems such as image classification (He et al., 2016), semantic segmentation (Jégou et al., 2017) or machine translation (Gehring et al., 2016), where the underlying data representation has ... |
What is the difference between lymph nodes and the heart or liver? | Lymph nodes have no predetermined orientation relative to the human anatomy, while the heart and liver do [11]. | [
11
] | [
{
"id": "1602.03409_all_0",
"text": " Tremendous progress has been made in image recognition, primarily due to the availability of large-scale annotated datasets (i.e. ImageNet (1, 2)) and the recent revival of deep convolutional neural networks (CNN) (3, 4). For data-driven learning, large-scale well-annot... |
Can the FashionMNIST dataset be used to train and test deep learning models? | Yes, the FashionMNIST dataset can be used to train and test deep learning models [10]. | [
10
] | [
{
"id": "1708.07747_all_0",
"text": " The MNIST dataset comprising of 10-class handwritten digits, was first introduced by LeCun et al. (1998) in 1998. At that time one could not have foreseen the stellar rise of deep learning techniques and their performance. Despite the fact that today deep learning can d... |
What does "multi-orientation pooling" means ? | Multi-orientation pooling is a learning strategy in which the rotations around vertical axis are combined with the elevation rotations, although I am not sure what are elevation rotations [20]. | [
20
] | [
{
"id": "1604.03265_all_0",
"text": " Understanding 3D environments is a vital element of modern computer vision research due to paramount relevance in many vision systems, spanning a wide field of application scenarios from self-driving cars to autonomous robots. Recent advancements in real-time SLAM techn... |
Why was the random edge exploration technique used during training of SBM-Transformer? | The random edge exploration technique allows SBM-Transformer to avoid the problem of having edge probabilities accidentally collapsing to zero and to explore new edges and resuscitate their sampling probabilities if necessary [9]. | [
9
] | [
{
"id": "2210.15541_all_0",
"text": " The Transformer architecture has been the go-to method for encoding sequential data, due to its superior performance in various tasks such as machine translation , image classification , and protein language modeling . Its key strength stems from the multi-head attenti... |
Could we consider "randomly sampled negative faces added to each mini-batch" as a means of making the model more regularized against negative faces? | The model can be regularized against negative and poorly labeled faces by adding hard positive and negative examples to each mini-batch [24]. | [
24
] | [
{
"id": "1503.03832_all_0",
"text": " In this paper we present a unified system for face verification (is this the same person), recognition (who is this person) and clustering (find common people among these faces). Our method is based on learning a Euclidean embedding per image using a deep convolutional ... |
What does "stochastic" mean in the stochastic data augmentation technique that the author introduced? | Stochastic means it use random neighborhood information of each user and item during data augmentation [44]. | [
44
] | [
{
"id": "2105.06323_all_0",
"text": " Over the past decade, one-class collaborative filtering (OCCF) problems (Pan et al., 2008; Hu et al., 2008) have been extensively researched to accurately infer a user’s preferred items, particularly for the recommender systems where only the users’ implicit feedback on... |
What value of π is used for the experiments? | The authors used \pi = 0.01 for all the experiments [34]. | [
34
] | [
{
"id": "1708.02002_all_0",
"text": " Current state-of-the-art object detectors are based on a two-stage, proposal-driven mechanism. As popularized in the R-CNN framework , the first stage generates a sparse set of candidate object locations and the second stage classifies each candidate location as one of ... |
List the supervised and unsupervised tasks on which the proposed model is tested. | The supervised task is action recognition, and the unsupervised task is representation reconstruction, which can be inferred from P4 [5]. | [
5
] | [
{
"id": "1502.04681_all_0",
"text": " Understanding temporal sequences is important for solving many problems in the AI-set. Recently, recurrent neural networks using the Long Short Term Memory (LSTM) architecture (Hochreiter & Schmidhuber, 1997) have been used successfully to perform various supervised seq... |
BLINK is Scalable. Is this true? | The paper shows the scalability of the proposed simple two-stage method with the experiments conducted on the zero-shot entity-linking dataset where external entity knowledge is not available, which enables the model to be used on various entity linking tasks that contain millions of possible entities to consider [0]. The state-of-the-art result and the extensive evaluation of the accuracy-speed trade-off support that the proposed method is efficient and scalable [1]. | [
0,
1
] | [
{
"id": "1911.03814_all_0",
"text": " Scale is a key challenge for entity linking; there are millions of possible entities to consider for each mention. To efficiently filter or rank the candidates, existing methods use different sources of external information, including manually curated mention tables Gan... |
How does NSP play a role in BERT? | NSP helps improve BERT's ability to distinguish whether observed document segments come from the same or from distinct documents [33]. | [
33
] | [
{
"id": "1907.11692_all_0",
"text": " Self-training methods such as ELMo Peters et al. (2018), GPT Radford et al. (2018), BERT Devlin et al. (2019), XLM Lample and Conneau (2019), and XLNet Yang et al. (2019) have brought significant performance gains, but it can be challenging to determine which aspects of... |
In BUIR, how is the online encoder updated compared to the target encoder? | The online encoder is updated by gradients back-propagated from the loss, minimizing the error between its output and the target, whereas the target encoder is updated by a momentum update, i.e., as a moving average of the online encoder's weights [16]. | [
16
] | [
{
"id": "2105.06323_all_0",
"text": " Over the past decade, one-class collaborative filtering (OCCF) problems (Pan et al., 2008; Hu et al., 2008) have been extensively researched to accurately infer a user’s preferred items, particularly for the recommender systems where only the users’ implicit feedback on... |
Compare accuracy and speed of Darknet-53 with ResNet-101. | Darknet-53 performs better than ResNet-101 and is 1.5\times faster [16]. | [
16
] | [
{
"id": "1804.02767_all_0",
"text": " Sometimes you just kinda phone it in for a year, you know? I didn’t do a whole lot of research this year. Spent a lot of time on Twitter. Played around with GANs a little. I had a little momentum left over from last year ; I managed to make some improvements to YOLO. B... |
How do you compare background and foreground samples as γ changes? | The effect of changing γ on the distribution of the loss for positive examples is minor [43]. | [
43
] | [
{
"id": "1708.02002_all_0",
"text": " Current state-of-the-art object detectors are based on a two-stage, proposal-driven mechanism. As popularized in the R-CNN framework , the first stage generates a sparse set of candidate object locations and the second stage classifies each candidate location as one of ... |
What is the computational cost of standard convolutions and what does it depend on? | Standard convolutions have a computational cost of D_{K}\cdot D_{K}\cdot M\cdot N\cdot D_{F}\cdot D_{F}, which depends multiplicatively on the number of input channels M, the number of output channels N, the kernel size D_{K}\times D_{K}, and the feature map size D_{F}\times D_{F} [12]. | [
12
] | [
{
"id": "1704.04861_all_0",
"text": " Convolutional neural networks have become ubiquitous in computer vision ever since AlexNet popularized deep convolutional neural networks by winning the ImageNet Challenge: ILSVRC 2012 . The general trend has been to make deeper and more complicated networks in order t... |
Few-shot learning and semi-supervised learning are the same term. Is this true? | Unlike few-shot learning, semi-supervised learning is a training strategy that uses both labeled and unlabeled examples [4]. | [
4
] | [
{
"id": "1711.04043_all_0",
"text": " Supervised end-to-end learning has been extremely successful in computer vision, speech, or machine translation tasks, thanks to improvements in optimization technology, larger datasets and streamlined designs of deep convolutional or recurrent architectures. Despite th... |
How is MIRA similar or different compared to other clustering-based methods? (e.g. SwaV) | MIRA does not require any artificial constraints or techniques in training, unlike other self-supervised methods [18]. However, MIRA uses some of the techniques used in the other paper [2]. | [
18,
2
] | [
{
"id": "2211.02284_all_0",
"text": " There has been a growing interest in using a large-scale dataset to build powerful machine learning models . Self-supervised learning (SSL), which aims to learn a useful representation without labels, is suitable for this trend; it is actively studied in the fields of n... |
How do they show the importance of mask vectors? | Mask vectors are necessary in order to ignore unspecified labels and to link different domains [17]. | [
17
] | [
{
"id": "1711.09020_all_0",
"text": " The task of image-to-image translation is to change a particular aspect of a given image to another, e.g., changing the facial expression of a person from smiling to frowning (see Fig. StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Tra... |
What is Disentanglement ? | Disentanglement refers to independence among features in a representation [12]. The idea stems back to traditional methods such as ICA and conventional autoencoders [15]. Disentanglement implicitly makes a choice of decomposition: that the latent features are independent of one another [4]. Much of the prior work in the field has either implicitly or explicitly presumed a slightly more ambitious definition of disentanglement than considered above: that it is a measure of how well one captures true factors of variation (which happen to be independent by construction for synthetic data), rather than just independent factors [9]. | [
12,
15,
4,
9
] | [
{
"id": "1812.02833_all_0",
"text": " An oft-stated motivation for learning disentangled representations of data with deep generative models is a desire to achieve interpretability (5, 10)—particularly the decomposability (see §3.2.1 in 33) of latent representations to admit intuitive explanations. Most wor... |
How are SNLI sort and SNLI shuffle different? | SNLI sort consists of sentences whose words are sorted [29]. | [
29
] | [
{
"id": "2210.12302_all_0",
"text": " Pretrained Language Models (LMs) have shown singular succcess on a range of natural language understandings tasks, to the extent that they have become foundational for contemporary NLP systems. Several works have investigated why pretraining works so well Warstadt et al... |
What makes the extremely deep architectures important to study? | Many recent research breakthroughs have been achieved with deep architectures, and their ability to express many kinds of functions makes them important to study [0]. | [
0
] | [
{
"id": "1507.06228_all_0",
"text": " Many recent empirical breakthroughs in supervised machine learning have been achieved through large and deep neural networks. Network depth (the number of successive computational layers) has played perhaps the most important role in these successes. For instance, withi... |
Why did the simple bypass achieve a higher accuracy improvement than complex bypass? | The paper does not give a reason; it reports this outcome as one of its interesting experimental findings [8]. | [
8
] | [
{
"id": "1602.07360_all_0",
"text": " Much of the recent research on deep convolutional neural networks (CNNs) has focused on increasing accuracy on computer vision datasets. For a given accuracy level, there typically exist multiple CNN architectures that achieve that accuracy level. Given equivalent accur... |
What are the metrics used to evaluate the model performance in question answering experiments? | Exact Match and F1 score [27]. | [
27
] | [
{
"id": "1611.01603_all_0",
"text": " The tasks of machine comprehension (MC) and question answering (QA) have gained significant popularity over the past few years within the natural language processing and computer vision communities. Systems trained end-to-end now achieve promising results on a variety o... |
What is the difference in test results according to the presence or absence of adapters? | KG-C adapter improves the average accuracy of zero-shot fusion by 0.4% [27]. | [
27
] | [
{
"id": "2206.03715_all_0",
"text": " The ability to understand natural language through commonsense reasoning is one of the core focuses in the field of natural language processing. To measure and study the different aspects of commonsense reasoning, several datasets are developed, such as SocialIQA (Sap e... |
What problem have the authors tried to solve? | The authors tried to solve subject-driven generation, i.e., synthesizing novel depictions of a given subject in different contexts [6]. | [
6
] | [
{
"id": "2208.12242_all_0",
"text": " Can you imagine your own dog traveling around the world, or your favorite bag displayed in the most exclusive showroom in Paris? What about your parrot being the main character of an illustrated storybook? Rendering such imaginary scenes is a challenging task that requi... |
For channel shuffle, were they applied for first pointwise convolution or second ? | The channel shuffle in the ShuffleNet unit occurs only after the first pointwise group convolution [10]. | [
10
] | [
{
"id": "1707.01083_all_0",
"text": " Building deeper and larger convolutional neural networks (CNNs) is a primary trend for solving major visual recognition tasks (21, 9, 33, 5, 28, 24). The most accurate CNNs usually have hundreds of layers and thousands of channels (9, 34, 32, 40), thus requiring computa... |
What are the reasons that there is no large-scale annotated medical image dataset such as the ImageNet? | The authors say that no such dataset exists because data acquisition and annotation in the medical image field is hard and costly [0]. | [
0
] | [
{
"id": "1602.03409_all_0",
"text": " Tremendous progress has been made in image recognition, primarily due to the availability of large-scale annotated datasets (i.e. ImageNet (1, 2)) and the recent revival of deep convolutional neural networks (CNN) (3, 4). For data-driven learning, large-scale well-annot... |
How are the results of the input-first and output-first approach different? | If by results, you are referring to the outputs of these approaches, then the final output will look very similar - the output for each instance will consist of a tuple of an (input, output) where input and output follow the instructions for a certain task [8]. However, the order in which this output is generated will differ -- for example, in "input first" approach, the input is generated first, while in the output first case, the language model is conditioned to provide the required output [9]. On the other hand, if, by results, you are referring to "performance" of both of these approaches, the authors mention that the input first approach performs very poorly on classification instances, which is why they proposed the alternative approach of output-first generation for classification tasks [10]. | [
8,
9,
10
] | [
{
"id": "2212.10560_all_0",
"text": " The recent NLP literature has witnessed a tremendous amount of activity in building models that can follow natural language instructions (Mishra et al., 2022; Wei et al., 2022; Sanh et al., 2022; Wang et al., 2022; Ouyang et al., 2022; Chung et al., 2022, i.a.). These d... |
What is the ratio of background errors that YOLO makes compared to Fast R-CNN? | YOLO is 3 times less likely to make background mistakes than Fast R-CNN (which has 13.6% background false positives), as it can reason about the entire image and see the larger context [51]. On top of that, combining YOLO and Fast R-CNN gives a 3.2% improvement in accuracy [6]. | [
51,
6
] | [
{
"id": "1506.02640_all_0",
"text": " Humans glance at an image and instantly know what objects are in the image, where they are, and how they interact. The human visual system is fast and accurate, allowing us to perform complex tasks like driving with little conscious thought. Fast, accurate algorithms fo... |
What distance metric does this paper use? | [In the paper, in a point set sampled from a metric space, the neighborhood of a point is defined by metric distance [17]. For instance, suppose \mathcal{X}=(M,d) is a discrete metric space whose metric is inherited from a Euclidean space \mathbb{R}^{n}, where M\subseteq\mathbb{R}^{n} is the set of points and d is the distance metric [49]. The paper targets points sampled from a metric space and explicitly considers the underlying distance metric in its design] [7]. | [
17,
49,
7
] | [
{
"id": "1706.02413_all_0",
"text": " We are interested in analyzing geometric point sets which are collections of points in a Euclidean space. A particularly important type of geometric point set is point cloud captured by 3D scanners, e.g., from appropriately equipped autonomous vehicles. As a set, such d... |
Is it true that any regular exponential family distribution pψ(z|θ) with parameters θ and cumulant function ψ can be written in terms of a uniquely determined regular Bregman divergence? | [Yes, any regular exponential family distribution p_{\psi}(\mathbf{z}|\bm{\theta}) with parameters \bm{\theta} and cumulant function \psi can be written in terms of a uniquely determined regular Bregman divergence as: p_{\psi}(\mathbf{z}|\bm{\theta})=\exp\{\mathbf{z}^{T}\bm{\theta}-\psi(\bm{\theta})-g_{\psi}(\mathbf{z})\}=\exp\{-d_{\varphi}(\mathbf{z},\bm{\mu}(\bm{\theta}))-g_{\varphi}(\mathbf{z})\}] [8]. | [
8
] | [
{
"id": "1703.05175_all_0",
"text": " Few-shot classification (20, 16, 13) is a task in which a classifier must be adapted to accommodate new classes not seen in training, given only a few examples of each of these classes. A naive approach, such as re-training the model on the new data, would severely over... |
Is a 7x7 convolution similar in computational complexity to two 7x1 and 1x7 convolutions? | Based on the cited sentence, the computational complexity of a 7x7 convolution is comparable to that of a 7x1 followed by a 1x7 convolution [3]. | [
3
] | [
{
"id": "1602.07261_all_0",
"text": " Since the 2012 ImageNet competition winning entry by Krizhevsky et al , their network “AlexNet” has been successfully applied to a larger variety of computer vision tasks, for example to object-detection , segmentation , human pose estimation , video classification , o... |
How can highest-loss examples be selected? | Losses are calculated individually for each RoI, then sorted [14]. | [
14
] | [
{
"id": "1605.06409_all_0",
"text": " A prevalent family (8, 6, 18) of deep networks for object detection can be divided into two subnetworks by the Region-of-Interest (RoI) pooling layer : (i) a shared, “fully convolutional” subnetwork independent of RoIs, and (ii) an RoI-wise subnetwork that does not shar... |
What is the difference between a 1-d CNN and a 2-layer highway network? | The paper does not discuss the difference between 1D CNN and highway networks [16]. | [
16
] | [
{
"id": "1602.02410_all_0",
"text": " Language Modeling (LM) is a task central to Natural Language Processing (NLP) and Language Understanding. Models which can accurately place distributions over sentences not only encode complexities of language such as grammatical structure, but also distill a fair amoun... |
What are the different aspects that MRR@10 and Recall@50/200/1000 capture, as evaluation metrics for end-to-end retrieval performance ? | In fact, using ColBERT in the end-to-end setup is superior in terms of MRR@10 to re-ranking with the same model due to the improved recall [64]. | [
64
] | [
{
"id": "2004.12832_all_0",
"text": " Over the past few years, the Information Retrieval (IR) community has witnessed the introduction of a host of neural ranking models, including DRMM (Guo et al., 2016), KNRM (Xiong et al., 2017; Dai et al., 2018), and Duet (Mitra et al., 2017; Mitra and Craswell, 2019). ... |
Do we really no longer need hand-crafted features in the ML life cycle? | Deep neural networks such as CNNs are a much better option than handcrafted features for computer vision problems such as segmentation [0]. | [
0
] | [
{
"id": "1606.04797_all_0",
"text": " Recent research in computer vision and pattern recognition has highlighted the capabilities of Convolutional Neural Networks (CNNs) to solve challenging tasks such as classification, segmentation and object detection, achieving state-of-the-art performances. This succes... |
What does "anisotropic probing kernel" means ? | The anisotropic probing kernels can be seen as a special type of convolutional layer [17]. These kernels are elongated in 3D and can thus encode long-range interactions between the points [26]. They are an alternative to using standard computer graphics rendering [27]. Using anisotropic probing kernels helps to capture the global structure of the 3D volume [28]. | [
17,
26,
27,
28
] | [
{
"id": "1604.03265_all_0",
"text": " Understanding 3D environments is a vital element of modern computer vision research due to paramount relevance in many vision systems, spanning a wide field of application scenarios from self-driving cars to autonomous robots. Recent advancements in real-time SLAM techn... |
What does this condition include? Text input? | Video Diffusion Models can be conditioned on text descriptions or image frames [15]. When conditioned on a text description, they generate a video depicting that text [18]. | [
15,
18
] | [
{
"id": "2204.03458_all_0",
"text": " Diffusion models have recently been producing high quality results in image generation and audio generation (e.g. 28, 39, 40, 16, 23, 36, 48, 60, 42, 10, 29), and there is significant interest in validating diffusion models in new data modalities. In this work, we prese... |
Did increase in Anchor Density improves AP? | Increasing anchor density does improve the AP value, but beyond 6-9 anchors, there was no further gain [49]. | [
49
] | [
{
"id": "1708.02002_all_0",
"text": " Current state-of-the-art object detectors are based on a two-stage, proposal-driven mechanism. As popularized in the R-CNN framework , the first stage generates a sparse set of candidate object locations and the second stage classifies each candidate location as one of ... |
Is the V-Net model used in other medical tasks or on other MRI data as a pretrained model? What are the results? | Yes, V-Net was tested as a pretrained model on 30 MRI volumes for which the ground truth segmentation was hidden [20]. | [
20
] | [
{
"id": "1606.04797_all_0",
"text": " Recent research in computer vision and pattern recognition has highlighted the capabilities of Convolutional Neural Networks (CNNs) to solve challenging tasks such as classification, segmentation and object detection, achieving state-of-the-art performances. This succes... |
What does “RoI” mean? | The RoI is a rectangular window into a conv feature map [10]. | [
10
] | [
{
"id": "1504.08083_all_0",
"text": " Recently, deep ConvNets (14, 16) have significantly improved image classification and object detection (9, 19) accuracy. Compared to image classification, object detection is a more challenging task that requires more complex methods to solve. Due to this complexity, c... |
How are the deep spatial features \phi(z_t) of the noisy image different from the noisy image z_t itself? | A noisy image z_t is the output of a diffusion step; the difference between z_t and its deep spatial features \phi(z_t) cannot be answered from this paper alone, as it is assumed to be background knowledge in machine learning [18]. | [
18
] | [
{
"id": "2208.01626_all_0",
"text": " Recently, large-scale language-image (LLI) models, such as Imagen , DALL·E 2 and Parti , have shown phenomenal generative semantic and compositional power, and gained unprecedented attention from the research community and the public eye. These LLI models are trained o... |
How did the authors verify that YOLO learns a very general representation of objects? | Since YOLO is trained end-to-end on full images, it can encode contextual information about each class and its appearance [4]. Moreover, it can learn shapes, sizes, and the relationships between objects [6]. Thus it was shown to generalize to artwork, even though artwork differs pixel-wise from natural images, and it makes half as many background mistakes as R-CNN [7]. | [
4,
6,
7
] | [
{
"id": "1506.02640_all_0",
"text": " Humans glance at an image and instantly know what objects are in the image, where they are, and how they interact. The human visual system is fast and accurate, allowing us to perform complex tasks like driving with little conscious thought. Fast, accurate algorithms fo... |
Why does existing knowledge enhanced PLMs (such as CokeBERT and CoLake) cannot be used directly for re-ranking tasks? | While approaches like CokeBERT and CoLake integrate sophisticated knowledge into PLMs through knowledge graphs, they did not focus specifically on using knowledge to empower PLMs for re-ranking tasks [12]. The reasons for why CokeBERT or CoLake cannot be directly used in re-ranking cannot be answered from this paper [24]. | [
12,
24
] | [
{
"id": "2204.11673_all_0",
"text": " Passage Re-ranking is a crucial stage in modern information retrieval systems, which aims to reorder a small set of candidate passages to be presented to users. To put the most relevant passages on top of a ranking list, a re-ranker is usually designed with powerful cap... |
How much does the exact retrieval increase the latency compared to approximate nearest neighbor search? | The latency increase of exact retrieval compared to approximate nearest neighbor search cannot be answered from this paper [22]. | [
22
] | [
{
"id": "2004.14503_all_0",
"text": " Recent advances in neural retrieval have led to advancements on several document, passage and knowledge-base benchmarks Guo et al. (2016); Pang et al. (2016); Hui et al. (2017); Dai et al. (2018); Gillick et al. (2018); Nogueira and Cho (2019a); MacAvaney et al. (2019);... |
What was the size of the model? | The authors' model consists of 4 LSTM layers, each with 1000 cells, and 1000-dimensional embeddings [28]. | [
28
] | [
{
"id": "1508.04025_all_0",
"text": " Neural Machine Translation (NMT) achieved state-of-the-art performances in large-scale translation tasks such as from English to French (Luong et al., 2015) and English to German (Jean et al., 2015). NMT is appealing since it requires minimal domain knowledge and is con... |
How does the contrastive loss function work in deep face recognition? | In Euclidean space, contrastive loss pulls together positive pairs and pushes apart negative pairs: \mathcal{L}=y_{ij}\max\left(0,\left\|f(x_{i})-f(x_{j})\right\|_{2}-\epsilon^{+}\right)+(1-y_{ij})\max\left(0,\epsilon^{-}-\left\|f(x_{i})-f(x_{j})\right\|_{2}\right) (Eq. 2), where y_{ij}=1 means x_{i} and x_{j} are matching samples and y_{ij}=0 means non-matching samples [21]. f(\cdot) is the feature embedding, and \epsilon^{+} and \epsilon^{-} control the margins of the matching and non-matching pairs respectively [22]. | [
21,
22
] | [
{
"id": "1804.06655_all_0",
"text": " Face recognition (FR) has been the prominent biometric technique for identity authentication and has been widely used in many areas, such as military, finance, public security and daily life. FR has been a long-standing research topic in the CVPR community. In the early... |
Why use a single word embedding instead of multiple? Which could capture more expressivity | The authors did experiment with both single and multi word embeddings, and found that the single-pseudo word approach allowed greater editability, while still having similar accuracy and reconstruction quality when compared to multi-word approaches [61]. These reasons might explain why the authors chose a single-word embedding as their main approach [62]. | [
61,
62
] | [
{
"id": "2208.01618_all_0",
"text": " In a famous scene from the motion picture “Titanic”, Rose makes a request of Jack: “…draw me like one of your French girls”. Albeit simple, this request contains a wealth of information. It indicates that Jack should produce a drawing; It suggests that its style and com... |
What does TACKBP-2010 mean? | TACKBP-2010 is a dataset for evaluating entity linking systems that is widely used in research in this field, e.g., by Khalife and Vazirgiannis (2018) and Raiman and Raiman (2018) [29]. This dataset was created in 2010 and contains the entities of the TAC Reference Knowledge Base, which holds 818,741 entities with titles, descriptions, and other meta information [36]. This paper also used TACKBP-2010 for fine-tuning the model [38]. | [
29,
36,
38
] | [
{
"id": "1911.03814_all_0",
"text": " Scale is a key challenge for entity linking; there are millions of possible entities to consider for each mention. To efficiently filter or rank the candidates, existing methods use different sources of external information, including manually curated mention tables Gan... |
If MXNet supported only a push mechanism (and no pull mechanism) available for inter-rank communication, would it still be possible to implement an all_reduce primitive on MXNet? | Since there is no evidential information about all_reduce and inter-rank communication, this question cannot be answered and requires external knowledge [0]. | [
0
] | [
{
"id": "1512.01274_all_0",
"text": " The scale and complexity of machine learning (ML) algorithms are becoming increasingly large. Almost all recent ImageNet challenge winners employ neural networks with very deep layers, requiring billions of floating-point operations to process one single sample. The ri... |
What was Memorization Accuracy Metric first used to quantify? | MA was first used to quantify the training dynamics of large LMs [26]. | [
26
] | [
{
"id": "2210.01504_all_0",
"text": " Recent work has shown that an adversary can extract training data from Pretrained Language Models (LMs) including Personally Identifiable Information (PII) such as names, phone numbers, and email addresses, and other information such as licensed code, private clinical n... |
What are the six classes of the data used for training ? | The six classes are healthy, emphysema, ground glass, fibrosis, micronodules, and consolidation [14]. | [
14
] | [
{
"id": "1602.03409_all_0",
"text": " Tremendous progress has been made in image recognition, primarily due to the availability of large-scale annotated datasets (i.e. ImageNet (1, 2)) and the recent revival of deep convolutional neural networks (CNN) (3, 4). For data-driven learning, large-scale well-annot... |
Why is it crucial for the pipeline to identify whether the instruction represents a classification task? How are classification tasks particularly distinct or special? | This step is crucial because the authors’ pipeline uses a different approach for classification tasks [6]. For non-classification tasks, the authors first prompt a language model to come up with the input fields required, then provide sample inputs, for which the language model generates outputs [8]. However, for classification tasks, the authors first generate the list of classes, and then require the model to provide an example for that instruction for each class [9]. They do this because the first approach, used for non-classification instructions, does not work well for unbalanced classes [10]. | [
6,
8,
9,
10
] | [
{
"id": "2212.10560_all_0",
"text": " The recent NLP literature has witnessed a tremendous amount of activity in building models that can follow natural language instructions (Mishra et al., 2022; Wei et al., 2022; Sanh et al., 2022; Wang et al., 2022; Ouyang et al., 2022; Chung et al., 2022, i.a.). These d... |
Why are dense 1x1 convolutions computationally expensive? | The 1x1 convolutions are expensive in extremely reduced versions of Xception and ResNeXt, as they can account for 93.4% of the multiplication-adds in each residual unit [1]. | [
1
] | [
{
"id": "1707.01083_all_0",
"text": " Building deeper and larger convolutional neural networks (CNNs) is a primary trend for solving major visual recognition tasks (21, 9, 33, 5, 28, 24). The most accurate CNNs usually have hundreds of layers and thousands of channels (9, 34, 32, 40), thus requiring computa... |
What are quantized depth planes, probabilistic disparity maps, and view-dependent flow fields? | Quantized depth planes, probabilistic disparity maps, and view-dependent flow fields are all methods for representing the underlying geometry of a scene [4]. | [
4
] | [
{
"id": "1704.07813_all_0",
"text": " Humans are remarkably capable of inferring ego-motion and the 3D structure of a scene even over short timescales. For instance, in navigating along a street, we can easily locate obstacles and react quickly to avoid them. Years of research in geometric computer vision h... |
Given the triggered sentences, how can this problem be rectified? | The triggered sentence is "One solution to the aforementioned problems is to integrate the strength and reliability of classical AI models in logical reasoning with LMs Garcez and Lamb " [2]. | [
2
] | [
{
"id": "2212.13894_all_0",
"text": " Automated reasoning, the ability to draw valid conclusions from explicitly provided knowledge, has been a fundamental goal for AI since its early days McCarthy (1959); Hewitt (1969). Furthermore, logical reasoning, especially reasoning with unstructured, natural text is... |
How is the DPR retriever different from BM25? | BM25 and DPR are both examples of retrievers used over large-scale passage collections [11]. BM25 is described as a traditional sparse retriever, while DPR leverages a PLM to empower the retriever with a single vector representation [43]. How BM25 and DPR function is not described in detail in this paper, so their differences cannot be fully answered from it [14]. | [
11,
43,
14
] | [
{
"id": "2204.11673_all_0",
"text": " Passage Re-ranking is a crucial stage in modern information retrieval systems, which aims to reorder a small set of candidate passages to be presented to users. To put the most relevant passages on top of a ranking list, a re-ranker is usually designed with powerful cap... |
Going deep through network layers makes it harder to remember shallower local information; wouldn't that make segmentation harder? | Since the non-anatomy part has much larger spatial support than the anatomy, and the receptive field of the features increases as we move down the layers of a CNN, the proposed CNN handles the local information fine [0]. | [
0
] | [
{
"id": "1606.04797_all_0",
"text": " Recent research in computer vision and pattern recognition has highlighted the capabilities of Convolutional Neural Networks (CNNs) to solve challenging tasks such as classification, segmentation and object detection, achieving state-of-the-art performances. This succes... |
Why was a zero-centered input used for training the paper's DNN, instead of using the training images as input directly? | Zero-mean input data, and standardization in general, improve the convergence properties of backpropagation (BP) training, helping it reach the desired solution faster [20]. | [
20
] | [
{
"id": "1506.06579_all_0",
"text": " The last several years have produced tremendous progress in training powerful, deep neural network models that are approaching and even surpassing human abilities on a variety of challenging machine learning tasks (Taigman et al., 2014; Schroff et al., 2015; Hannun et a... |
What are the examples in which important structural information is removed when masking the image content? | Examples such as modifying textures of specific objects or changing bicycles in an image to a car [1]. | [
1
] | [
{
"id": "2208.01626_all_0",
"text": " Recently, large-scale language-image (LLI) models, such as Imagen , DALL·E 2 and Parti , have shown phenomenal generative semantic and compositional power, and gained unprecedented attention from the research community and the public eye. These LLI models are trained o... |
How does input image resolution affect the accuracy of the SSD framework? | The SSD framework is relatively more accurate on higher-resolution images than on lower-resolution ones [1]. | [
1
] | [
{
"id": "1512.02325_all_0",
"text": " Current state-of-the-art object detection systems are variants of the following approach: hypothesize bounding boxes, resample pixels or features for each box, and apply a high-quality classifier. This pipeline has prevailed on detection benchmarks since the Selective S... |
The authors claim that LSTM networks allow the flow of information across many layers without attenuation; is that true? | Inspired by LSTM, the authors designed an information highway that adaptively passes information across layers, which remains effective when there are many layers, so LSTM-style gating is likewise effective across many layers [32]. | [
32
] | [
{
"id": "1507.06228_all_0",
"text": " Many recent empirical breakthroughs in supervised machine learning have been achieved through large and deep neural networks. Network depth (the number of successive computational layers) has played perhaps the most important role in these successes. For instance, withi... |
What is the numeric value of dropout fraction used during the training phase? | The neural net in this work was strongly regularized using dropout and weight constraints as described in [5] [12]. | [
12
] | [
{
"id": "1503.02531_all_0",
"text": " Many insects have a larval form that is optimized for extracting energy and nutrients from the environment and a completely different adult form that is optimized for the very different requirements of traveling and reproduction. In large-scale machine learning, we typi... |
Which RGBD robotic grasping dataset was used for verification? | We used the extended version of the Cornell grasping dataset for our experiments [61]. | [
61
] | [
{
"id": "1301.3592_all_0",
"text": " Robotic grasping is a challenging problem involving perception, planning, and control. Some recent works (54, 56, 28, 67) address the perception aspect of this problem by converting it into a detection problem in which, given a noisy, partial view of the object from a ca... |
Why did the authors run the 7.3 non-English and computer languages test? What is the objective of this section? | They ran the non-English and computer-language tests to show that the benefits of pretraining have little to do with the format of the tasks [3]. The objective of this section is therefore to show that the advantages of pretraining persist to varying degrees [32]. | [
3,
32
] | [
{
"id": "2210.12302_all_0",
"text": " Pretrained Language Models (LMs) have shown singular succcess on a range of natural language understandings tasks, to the extent that they have become foundational for contemporary NLP systems. Several works have investigated why pretraining works so well Warstadt et al... |
Is it true that prototype computation can be viewed in terms of hard clustering on the support set? | [Yes, prototype computation can be viewed in terms of hard clustering on the support set, with one cluster per class and each support point assigned to its corresponding class cluster] [7]. | [
7
] | [
{
"id": "1703.05175_all_0",
"text": " Few-shot classification (20, 16, 13) is a task in which a classifier must be adapted to accommodate new classes not seen in training, given only a few examples of each of these classes. A naive approach, such as re-training the model on the new data, would severely over... |
How would accuracy and loss be affected by using the window-of-size-r self-attention approach with shorter sequences? | The window-of-size-r self-attention approach is recommended only for improving computational performance on tasks involving very long sequences [36]. | [
36
] | [
{
"id": "1706.03762_all_0",
"text": " Recurrent neural networks, long short-term memory and gated recurrent neural networks in particular, have been firmly established as state of the art approaches in sequence modeling and transduction problems such as language modeling and machine translation (35, 2, 5)... |
How do the authors categorize apps in this paper? | The authors select four typical app categories for the next week's prediction [15]. | [
15
] | [
{
"id": "2005.13303_all_0",
"text": " Personalized mobile business, e.g., recommendations, and advertising, often require effective user representations. For better performance, user modeling in industrial applications often considers as much information as possible, including but not limited to gender, loc... |