input stringlengths 14 315 | answer stringlengths 9 2.16k | gold_ctxs listlengths 1 15 | ctxs listlengths 11 186 |
|---|---|---|---|
What are the steps in the AEVB algorithm? | The AEVB algorithm draws a connection between directed probabilistic models (trained with a variational objective) and auto-encoders [0]. A related connection exists between linear auto-encoders and a certain class of generative linear-Gaussian models [1]. A neural network is used for the probabilistic encoder q_{\boldsymbol{\phi}}(\mathbf{z}|\mathbf{x}) (the approximation to the posterior of the generative model p_{\boldsymbol{\theta}}(\mathbf{x},\mathbf{z})), and the parameters \boldsymbol{\phi} and \boldsymbol{\theta} are optimized jointly with the AEVB algorithm [17]. Using the SGVB estimator to optimize a recognition model allows very efficient approximate posterior inference using simple ancestral sampling [22]. | [
0,
1,
17,
22
] | [
{
"id": "1312.6114_all_0",
"text": " How can we perform efficient approximate inference and learning with directed probabilistic models whose continuous latent variables and/or parameters have intractable posterior distributions? The variational Bayesian (VB) approach involves the optimization of an approxi... |
If performance can be attributed to instruction style and formatting, why did the authors not test baselines that were not fine-tuned but had few-shot examples? | It is unclear why the authors did not test their baselines with few-shot examples [26]. | [
26
] | [
{
"id": "2212.10560_all_0",
"text": " The recent NLP literature has witnessed a tremendous amount of activity in building models that can follow natural language instructions (Mishra et al., 2022; Wei et al., 2022; Sanh et al., 2022; Wang et al., 2022; Ouyang et al., 2022; Chung et al., 2022, i.a.). These d... |
What does a depthwise separable convolution mean? | A depthwise separable convolution is made up of two layers: depthwise convolutions and pointwise convolutions, where a depthwise convolution applies a single filter to each input channel and a pointwise convolution creates a linear combination of the outputs of the depthwise layer [13]. | [
13
] | [
{
"id": "1704.04861_all_0",
"text": " Convolutional neural networks have become ubiquitous in computer vision ever since AlexNet popularized deep convolutional neural networks by winning the ImageNet Challenge: ILSVRC 2012 . The general trend has been to make deeper and more complicated networks in order t... |
Beyond simple addition of a constant to the attention values, how can the patterns be applied back to the original model? | Patterns can be applied to a pretrained transformer either by adding a set of pre-computed constants or by adding an input-dependent weight matrix [20]. | [
20
] | [
{
"id": "2112.05364_all_0",
"text": " With transformer-based models (Vaswani et al., 2017) dominating the leaderboard for many key NLP tasks such as summarization (Liu and Lapata, 2019), topic segmentation (Lukasik et al., 2020), and sentiment analysis Adhikari et al. (2019), their core multi-head self-att... |
The authors claim that no attention mechanism has been applied to the image classification task before; is that true? | The related works that the authors mention do not use the same attention mechanism used in this paper, but it is impossible to know from this paper alone whether their claim that their attention method was never before applied to the image classification task is true [11]. | [
11
] | [
{
"id": "1704.06904_all_0",
"text": " Not only a friendly face but also red color will draw our attention. The mixed nature of attention has been studied extensively in the previous literatures (34, 16, 23, 40). Attention not only serves to select a focused location but also enhances different representatio... |
Why did the authors recompute some of the metrics using public samples or models? | P1 explains why the authors recomputed some of the metrics [17]. | [
17
] | [
{
"id": "2105.05233_all_0",
"text": " Over the past few years, generative models have gained the ability to generate human-like natural language Brown et al. (2020), infinite high-quality synthetic images Brock et al. (2018); Karras et al. (2019b); Razavi et al. (2019) and highly diverse human speech and mu... |
How is N, the number of disjoint sets of proposals, determined? | A set of object proposals is extracted from an image with an object detector [18], as the authors utilized a Faster R-CNN with ResNet-101 to obtain all proposals for each image [35]. | [
18,
35
] | [
{
"id": "2103.12204_all_0",
"text": " Image captioning, \\ie, generating fluent and meaningful descriptions to summarize the salient contents of an image, is a classic proxy task for comprehensive scene understanding . With the release of several large scale datasets and advanced encoder-decoder frameworks,... |
When discussing the information flow interpretation, the authors mention how the expressiveness and capacity of their model can be independently analysed. What does "capacity" in this context mean? | In this paper, capacity refers to the input/output domains of the bottleneck layers, which can be separated from the expressiveness part (the layer transformation) in the proposed architecture [19]. | [
19
] | [
{
"id": "1801.04381_all_0",
"text": " Neural networks have revolutionized many areas of machine intelligence, enabling superhuman accuracy for challenging image recognition tasks. However, the drive to improve accuracy often comes at a cost: modern state of the art networks require high computational resour... |
What is the BERTSum model and how does it differ from just the BERT model? | BERTSum is a variant of BERT specialized for the task of extractive summarization: picking out the sentences from a text that constitute its summary [24]. | [
24
] | [
{
"id": "2112.05364_all_0",
"text": " With transformer-based models (Vaswani et al., 2017) dominating the leaderboard for many key NLP tasks such as summarization (Liu and Lapata, 2019), topic segmentation (Lukasik et al., 2020), and sentiment analysis Adhikari et al. (2019), their core multi-head self-att... |
How could English models perform well on non-English POS tasks? | The authors do not discuss how this performance is achieved [18]. | [
18
] | [
{
"id": "2204.08110_all_0",
"text": " Pretrained language models have become an integral part of NLP systems. They come in two flavors: monolingual, where the model is trained on text from a single language, and multilingual, where the model is jointly trained on data from many different languages. Monoling... |
Can improvement of NER technology improve RE technology? | Entity names, spans and types are important to the performance of RE models (Peng et al.) [2]. | [
2
] | [
{
"id": "2102.01373_all_0",
"text": " As one of the fundamental information extraction (IE) tasks, relation extraction (RE) aims at identifying the relationship(s) between two entities in a given piece of text from a pre-defined set of relationships of interest. For example, given the sentence “Bill Gates f... |
Are the architectures searched by the original NAS and NASNet different? | From Table 1, the depth and number of parameters differ between the original NAS and NASNet [1]. | [
1
] | [
{
"id": "1707.07012_all_0",
"text": " Developing neural network image classification models often requires significant architecture engineering. Starting from the seminal work of on using convolutional architectures (17, 34) for ImageNet classification, successive advancements through architecture enginee... |
What does it mean to be "subpixel refined by patch correlation"? | The stereo keypoint's coordinates are formed from the coordinates of the left ORB feature and the horizontal coordinate of its right-image match, and the latter is refined to subpixel precision by patch correlation [18]. | [
18
] | [
{
"id": "1610.06475_all_0",
"text": " Simultaneous Localization and Mapping (SLAM) has been a hot research topic in the last two decades in the Computer Vision and Robotics communities, and has recently attracted the attention of high-technological companies. SLAM techniques build a map of an unknown enviro... |
What does “attention coefficient” mean? | The normalized attention coefficient is a coefficient used to compute a linear combination of the features [13]. | [
13
] | [
{
"id": "1710.10903_all_0",
"text": " Convolutional Neural Networks (CNNs) have been successfully applied to tackle problems such as image classification (He et al., 2016), semantic segmentation (Jégou et al., 2017) or machine translation (Gehring et al., 2016), where the underlying data representation has ... |
What does early stopping while training mean? | Early stopping halts training once performance on a held-out validation set stops improving; in this work, all methods other than ULMFiT were trained with early stopping [40]. | [
40
] | [
{
"id": "1801.06146_all_0",
"text": " Inductive transfer learning has had a large impact on computer vision (CV). Applied CV models (including object detection, classification, and segmentation) are rarely trained from scratch, but instead are fine-tuned from models that have been pretrained on ImageNet, MS... |
What are the results and performance without using label smoothing? | After applying label smoothing regularization, perplexity gets worse (the model learns to be more unsure), but accuracy and BLEU score improve [47]. | [
47
] | [
{
"id": "1706.03762_all_0",
"text": " Recurrent neural networks, long short-term memory and gated recurrent neural networks in particular, have been firmly established as state of the art approaches in sequence modeling and transduction problems such as language modeling and machine translation (35, 2, 5)... |
Why do the authors suggest there is no truly monolingual pre-trained model? | Well-known pre-training resources already include multilingual data [25]. | [
25
] | [
{
"id": "2204.08110_all_0",
"text": " Pretrained language models have become an integral part of NLP systems. They come in two flavors: monolingual, where the model is trained on text from a single language, and multilingual, where the model is jointly trained on data from many different languages. Monoling... |
How do the authors accommodate the video datasets? | Other diffusion models that generate images use a 2D U-Net, but the authors use a 3D U-Net to handle video [1]. A 3D U-Net diffusion model is used to generate a fixed number of video frames [16]. The 2D U-Net is modified by turning each 2D convolution into a space-only 3D convolution and inserting a temporal attention block that performs attention over the first (temporal) axis and treats the spatial axes as batch axes [20]. The authors concatenate random independent image frames to the end of each video sampled from the dataset, choosing these images from random videos within the same dataset [22]. | [
1,
16,
20,
22
] | [
{
"id": "2204.03458_all_0",
"text": " Diffusion models have recently been producing high quality results in image generation and audio generation (e.g. 28, 39, 40, 16, 23, 36, 48, 60, 42, 10, 29), and there is significant interest in validating diffusion models in new data modalities. In this work, we prese... |
What are the applications of SISR? | SISR is used in computer vision applications ranging from surveillance imaging to medical imaging to enhance low-resolution images [0]. | [
0
] | [
{
"id": "1511.04587_all_0",
"text": " We address the problem of generating a high-resolution (HR) image given a low-resolution (LR) image, commonly referred as single image super-resolution (SISR) , , . SISR is widely used in computer vision applications ranging from security and surveillance imaging to med... |
Does "previously unseen data" mean node that did not appear on training data? | Yes [1]. Unseen data indicates node that is not contained in training data [28]. The reason is that the purpose of this paper is to generate embeddings quickly for the systems which constantly encounter entirely new nodes and graphs [29]. | [
1,
28,
29
] | [
{
"id": "1706.02216_all_0",
"text": " Low-dimensional vector embeddings of nodes in large graphs111While it is common to refer to these data structures as social or biological networks, we use the term graph to avoid ambiguity with neural network terminology. have proved extremely useful as feature inputs f... |
Explain Class Prior Ablation | We observe that the class prior of the erroneous class remains entangled and we cannot create new images of our subject when the model is trained in this manner [29]. | [
29
] | [
{
"id": "2208.12242_all_0",
"text": " Can you imagine your own dog traveling around the world, or your favorite bag displayed in the most exclusive showroom in Paris? What about your parrot being the main character of an illustrated storybook? Rendering such imaginary scenes is a challenging task that requi... |
What are the advantages of using relative encoding compared to absolute encoding, which performs well? | The advantage of relative encoding compared to absolute encoding is the flexibility of using representations of position or distances into the self-attention mechanism directly [8]. | [
8
] | [
{
"id": "2202.03036_all_0",
"text": " Graph neural networks (GNNs) have been established as powerful and flexible tools for graph representation learning, with successful applications in drug discovery (Gaudelet et al., 2021), protein design (Ingraham et al., 2019), social network analysis (Fan et al., 2019... |
What are some methods that have been proposed to address the security vulnerabilities of deep face recognition systems, such as presentation attacks and adversarial attacks? | Despite some defense systems for face spoofing that use two-stream CNN, classification with CNN, and LSTM, there is still a presentation attack with a 3D model that can crack them [80]. Also, since the root cause of adversarial perturbations is unclear, methods like detecting and removing vulnerable layers are insufficient [84]. | [
80,
84
] | [
{
"id": "1804.06655_all_0",
"text": " Face recognition (FR) has been the prominent biometric technique for identity authentication and has been widely used in many areas, such as military, finance, public security and daily life. FR has been a long-standing research topic in the CVPR community. In the early... |
What is the most widely used dataset in the deep learning community? | The MNIST dataset is widely used in DL [0]. | [
0
] | [
{
"id": "1708.07747_all_0",
"text": " The MNIST dataset comprising of 10-class handwritten digits, was first introduced by LeCun et al. (1998) in 1998. At that time one could not have foreseen the stellar rise of deep learning techniques and their performance. Despite the fact that today deep learning can d... |
What is the difference between inverting to the latent space and inverting to a concept? Aren't you inverting to a latent space anyway? | A concept, in this work, is represented by a set of multiple (3 to 5) images [16]. The authors' approach attempts to invert these images collectively to identify one word, S*, that can represent the concept embodied in all these input images [5]. | [
16,
5
] | [
{
"id": "2208.01618_all_0",
"text": " In a famous scene from the motion picture “Titanic”, Rose makes a request of Jack: “…draw me like one of your French girls”. Albeit simple, this request contains a wealth of information. It indicates that Jack should produce a drawing; It suggests that its style and com... |
What do you mean by Top-1 and Top-5 error rate? | Top-1 and Top-5 error rates are evaluation metrics used to compare the performance of various models: the fraction of test images for which the correct label is not the model's top prediction (Top-1) or not among its five highest-scoring predictions (Top-5) [18]. | [
18
] | [
{
"id": "1602.07261_all_0",
"text": " Since the 2012 ImageNet competition winning entry by Krizhevsky et al , their network “AlexNet” has been successfully applied to a larger variety of computer vision tasks, for example to object-detection , segmentation , human pose estimation , video classification , o... |
How do they make StarGAN ignore unspecified labels? | By using a mask vector, StarGAN is able to ignore unspecified labels [17]. | [
17
] | [
{
"id": "1711.09020_all_0",
"text": " The task of image-to-image translation is to change a particular aspect of a given image to another, e.g., changing the facial expression of a person from smiling to frowning (see Fig. StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Tra... |
How was the value of n set to 10? | The value n = 10 is used because an extraction attack is considered successful when 10 token sequences are successfully extracted from the LM [27]. | [
27
] | [
{
"id": "2210.01504_all_0",
"text": " Recent work has shown that an adversary can extract training data from Pretrained Language Models (LMs) including Personally Identifiable Information (PII) such as names, phone numbers, and email addresses, and other information such as licensed code, private clinical n... |
How much does the quality of generated pseudo-queries affect retrieval performance on the target domain? | Larger generation models lead to improved generators [53]. | [
53
] | [
{
"id": "2004.14503_all_0",
"text": " Recent advances in neural retrieval have led to advancements on several document, passage and knowledge-base benchmarks Guo et al. (2016); Pang et al. (2016); Hui et al. (2017); Dai et al. (2018); Gillick et al. (2018); Nogueira and Cho (2019a); MacAvaney et al. (2019);... |
To obtain the final compact descriptor of the image, why did the authors use PCA instead of other compression algorithms? | The authors may have found PCA computationally less expensive and more memory- and time-efficient in their experiments than other methods [38]. PCA is used to reduce the descriptor to 4096 dimensions, learnt on the training set, which was discovered experimentally to help achieve state-of-the-art results on the challenging Tokyo 24/7 dataset, as comparisons show that the lower-dimensional fVLAD performs similarly to the full-size vector [40]. | [
38,
40
] | [
{
"id": "1511.07247_all_0",
"text": " Visual place recognition has received a significant amount of attention in the past years both in computer vision (66, 35, 10, 64, 65, 24, 9, 81, 4, 80, 63) and robotics communities (15, 16, 46, 44, 75) motivated by, e.g., applications in autonomous driving , augmented ... |
What are the hyperparameters used for inference? | The hyperparameters are: a learning rate of 0.02, bounding-box NMS with a threshold of 0.5, and the floating-number mask output resized to the RoI size and binarized at a threshold of 0.5 [32]. | [
32
] | [
{
"id": "1703.06870_all_0",
"text": " The vision community has rapidly improved object detection and semantic segmentation results over a short period of time. In large part, these advances have been driven by powerful baseline systems, such as the Fast/Faster R-CNN (12, 36) and Fully Convolutional Network ... |
Which benchmark has been used for evaluation? | We evaluate our proposed framework on five question-answering benchmarks for commonsense reasoning: SocialIQA (SIQA) (Sap et al., 2019b), CommonsenseQA (CSQA) (Talmor et al., 2018), Abductive NLI (a-NLI) (Bhagavatula et al., 2020), PhysicalIQA (PIQA) (Bisk et al., 2020), and WinoGrande (WG) (Sakaguchi et al., 2020) [19]. | [
19
] | [
{
"id": "2206.03715_all_0",
"text": " The ability to understand natural language through commonsense reasoning is one of the core focuses in the field of natural language processing. To measure and study the different aspects of commonsense reasoning, several datasets are developed, such as SocialIQA (Sap e... |
How will the generator perform when the target corpus have different query/document (answer) distributions from the training datasets? For example, what if users in target application often ask much longer questions (e.g., longer than 12 tokens), to express more complex query intents? | When the target corpus has query/document (answer) distributions that differ from the training datasets, the generator's performance depends on what it learned during training, so optimal performance on the target domain is not guaranteed [52]. | [
52
] | [
{
"id": "2004.14503_all_0",
"text": " Recent advances in neural retrieval have led to advancements on several document, passage and knowledge-base benchmarks Guo et al. (2016); Pang et al. (2016); Hui et al. (2017); Dai et al. (2018); Gillick et al. (2018); Nogueira and Cho (2019a); MacAvaney et al. (2019);... |
Does the phrase "data with larger spatial support than the typical size of the anatomy" refer to feature maps with a larger number of channels than the input map at the deepest layer, or does it refer to something else? | Yes the phrase refer to feature maps with a larger number of channels than the input map at the deepest layer [12]. | [
12
] | [
{
"id": "1606.04797_all_0",
"text": " Recent research in computer vision and pattern recognition has highlighted the capabilities of Convolutional Neural Networks (CNNs) to solve challenging tasks such as classification, segmentation and object detection, achieving state-of-the-art performances. This succes... |
Why did the authors use a polynomial function to extract explicit planning of the performance data? | The authors use a polynomial function to extract explicit planning because explicit planning is defined as a high-level sketch that the performer draws as the bigger plan of progressing musical expression throughout the piece [17]. Such a sketch is assumed to be "smoothed", since it derives from human thought that memorizes or imagines musical expression, which can also be represented in aural form by "singing out" the musical progression [3]. | [
17,
3
] | [
{
"id": "2208.14867_all_0",
"text": " Computational modeling of expressive music performance focuses on mimicking human behaviors that convey the music (1, 2). For piano performance, one common task is to render an expressive performance from a quantized musical score. It aims to reproduce the loudness and ... |
What is a meta-path? Please explain with examples. | A meta-path is a sequence of node types and edge types in a graph that describes a specific type of relationship between nodes [10]. An example is in a recommendation system, a meta-path could be "user-item-writtenseries-item-user" which describes a relationship between users who like the same book series [12]. | [
10,
12
] | [
{
"id": "2007.08294_all_0",
"text": " Graph neural networks have been proven effective to learn representations for various tasks such as node classification , link prediction , and graph classification . The powerful representation yields state-of-the-art performance in a variety of applications including... |
Explain the limitations of DreamBooth | The authors presented several drawbacks, the first of which is that the model cannot always accurately produce the required context [34]. The second failure mode is context-appearance entanglement, in which the subject's appearance alters as a result of the prompted context [35]. | [
34,
35
] | [
{
"id": "2208.12242_all_0",
"text": " Can you imagine your own dog traveling around the world, or your favorite bag displayed in the most exclusive showroom in Paris? What about your parrot being the main character of an illustrated storybook? Rendering such imaginary scenes is a challenging task that requi... |
What is the evidence for the authors' saying: “Our observation that this behavior is seen even when pretraining on synthetically generated languages”? | The authors said “our observation that this behavior is seen even when pretraining on synthetically generated languages” because they showed that the benefits of pretraining persist, to varying degrees, on non-linguistic tasks [3]. | [
3
] | [
{
"id": "2210.12302_all_0",
"text": " Pretrained Language Models (LMs) have shown singular succcess on a range of natural language understandings tasks, to the extent that they have become foundational for contemporary NLP systems. Several works have investigated why pretraining works so well Warstadt et al... |
Is there any other method to do this instead of simply averaging the values out? There can be a smarter way to do this. | Pseudo-3D convolutional layers facilitate information sharing between the spatial and temporal axes without succumbing to the heavy computational load of 3D convolutional layers [15]. Additionally, conditioning on a varying number of frames per second enables an additional augmentation method to tackle the limited volume of available videos at training time, and provides additional control over the generated video at inference time [16]. | [
15,
16
] | [
{
"id": "2209.14792_all_0",
"text": " The Internet has fueled collecting billions of (alt-text, image) pairs from HTML pages (Schuhmann et al., 2022), enabling the recent breakthroughs in Text-to-Image (T2I) modeling. However, replicating this success for videos is limited since a similarly sized (text, vid... |
In what ways does the authors' approach differ from how practitioners create datasets and pretrain models such as GPT? | The authors mention that their approach, which involves creating a labelled dataset and using it for supervised learning objectives, differs from existing work in the field, which focuses on unsupervised approaches [2]. The authors claim that unsupervised approaches are explored more because of the difficulties and challenges associated with building labelled datasets [34]. However, this paper does not contain information on GPT-style models specifically, so answering how the pretraining process for GPT differs from the authors' approach is not possible from the information in this paper [5]. | [
2,
34,
5
] | [
{
"id": "1506.03340_all_0",
"text": " Progress on the path from shallow bag-of-words information retrieval algorithms to machines capable of reading and understanding documents has been slow. Traditional approaches to machine reading and comprehension have been based on either hand engineered grammars , or ... |
Why is the conditional decoder difficult to optimize? | Since the conditional decoder has access to the last few frames, it often finds an easy way out by picking up a correlated frame, which is not necessarily an optimal solution [18]. | [
18
] | [
{
"id": "1502.04681_all_0",
"text": " Understanding temporal sequences is important for solving many problems in the AI-set. Recently, recurrent neural networks using the Long Short Term Memory (LSTM) architecture (Hochreiter & Schmidhuber, 1997) have been used successfully to perform various supervised seq... |
Why was the model trained with synthetic data rather than real 3D data directly? | The authors train on synthetic data for several reasons [5]. | [
5
] | [
{
"id": "1604.03265_all_0",
"text": " Understanding 3D environments is a vital element of modern computer vision research due to paramount relevance in many vision systems, spanning a wide field of application scenarios from self-driving cars to autonomous robots. Recent advancements in real-time SLAM techn... |
What is the role of encoding noise and the likelihood distribution in VAE representations? | In VAE representation, the encoding noise forces nearby encodings to relate to similar datapoints, while standard choices for the likelihood distribution ensure that information is stored in the encodings, not just in the generative network [45]. | [
45
] | [
{
"id": "1812.02833_all_0",
"text": " An oft-stated motivation for learning disentangled representations of data with deep generative models is a desire to achieve interpretability (5, 10)—particularly the decomposability (see §3.2.1 in 33) of latent representations to admit intuitive explanations. Most wor... |
What are the Y, U and V channels? | The first three channels are the image in YUV color space, used because it represents image intensity and color separately [42]. In particular, many of these features lack weights to the U and V channels (the 3rd and 4th channels), which correspond to color, allowing the system to be more robust to differently colored objects [69]. | [
42,
69
] | [
{
"id": "1301.3592_all_0",
"text": " Robotic grasping is a challenging problem involving perception, planning, and control. Some recent works (54, 56, 28, 67) address the perception aspect of this problem by converting it into a detection problem in which, given a noisy, partial view of the object from a ca... |
What is the effect of increasing the modulating factor (γ)? | Contrary to what the question implies, the modulating factor is (1-p_{\textrm{t}})^{\gamma}, not \gamma itself [17]. As \gamma increases, the loss contribution from easy examples is reduced and the range in which an example receives low loss is extended [19]. | [
17,
19
] | [
{
"id": "1708.02002_all_0",
"text": " Current state-of-the-art object detectors are based on a two-stage, proposal-driven mechanism. As popularized in the R-CNN framework , the first stage generates a sparse set of candidate object locations and the second stage classifies each candidate location as one of ... |
In this sentence, do the current target state and all source states mean hidden states of the encoder? | A possible answer is yes as a global attention model considers all the hidden states of the encoder when deriving the context [14]. However, it's not clear which sentence the questioner refers to and the question needs more elaboration [9]. | [
14,
9
] | [
{
"id": "1508.04025_all_0",
"text": " Neural Machine Translation (NMT) achieved state-of-the-art performances in large-scale translation tasks such as from English to French (Luong et al., 2015) and English to German (Jean et al., 2015). NMT is appealing since it requires minimal domain knowledge and is con... |
How were different versions of NASNets with different computational demands created? | Different versions of NASNets with different computational demands were created by varying the number of convolutional cells and the number of filters in the convolutional cells [3]. | [
3
] | [
{
"id": "1707.07012_all_0",
"text": " Developing neural network image classification models often requires significant architecture engineering. Starting from the seminal work of on using convolutional architectures (17, 34) for ImageNet classification, successive advancements through architecture enginee... |
What is the combinatorial action space? How is this different to general RL tasks? Are they not combinatorial? | The combinatorial action space here probably refers to the set of all possible actions that a RL agent for optimizing a language model could possibly take - here, the action set consists of the entire vocabulary of the language model, which can range to tens of thousands for typical GPT/T5 models used today [11]. This is unlike general RL tasks, where the action space is an order of magnitude smaller [7]. | [
11,
7
] | [
{
"id": "2210.01241_all_0",
"text": " The ultimate aim of language technology is to interact with humans. However, most language models are trained without direct signals of human preference, with supervised target strings serving as (a sometimes crude) proxy. One option to incorporate user feedback is via ... |
What is the purpose of FashionMNIST? | The dataset is intended as a drop-in replacement for MNIST while providing a more challenging alternative for benchmarking machine learning algorithms [10]. | [
10
] | [
{
"id": "1708.07747_all_0",
"text": " The MNIST dataset comprising of 10-class handwritten digits, was first introduced by LeCun et al. (1998) in 1998. At that time one could not have foreseen the stellar rise of deep learning techniques and their performance. Despite the fact that today deep learning can d... |
Doesn’t the possibility of having many rules make it ambiguous? | No, the possibility of having many rules does not make it ambiguous [22]. | [
22
] | [
{
"id": "2212.13894_all_0",
"text": " Automated reasoning, the ability to draw valid conclusions from explicitly provided knowledge, has been a fundamental goal for AI since its early days McCarthy (1959); Hewitt (1969). Furthermore, logical reasoning, especially reasoning with unstructured, natural text is... |
Why are there tradeoffs between sample variety and fidelity? | The tradeoff arises because IS does not penalize lack of variety in class-conditional models, so reducing the truncation threshold leads to a direct increase in IS (analogous to precision) [18]. FID penalizes lack of variety (analogous to recall) but also rewards precision, so we initially see a moderate improvement in FID, but as truncation approaches zero and variety diminishes, the FID sharply drops [30]. | [
18,
30
] | [
{
"id": "1809.11096_all_0",
"text": " The state of generative image modeling has advanced dramatically in recent years, with Generative Adversarial Networks (GANs, Goodfellow et al. (2014)) at the forefront of efforts to generate high-fidelity, diverse images with models learned directly from data. GAN trai... |
Why does YOLO struggle in localizing objects correctly ? | Although YOLO is a really fast model, it usually struggles with localizing small objects in a group or objects near each other [37]. In fact, localization errors take up more than half of all YOLO's errors [43]. It happens because YOLO has only a limited number of bounding boxes per grid cell and the loss function penalizes the errors in the large and small bounding boxes the same [63]. On top of that, the model uses coarse features to predict bounding boxes, and it may have problems with unusual aspect ratios and configurations of objects [68]. | [
37,
43,
63,
68
] | [
{
"id": "1506.02640_all_0",
"text": " Humans glance at an image and instantly know what objects are in the image, where they are, and how they interact. The human visual system is fast and accurate, allowing us to perform complex tasks like driving with little conscious thought. Fast, accurate algorithms fo... |
What is the value of η used by the authors in experimentation? | The perturbation constant used is η = 0.02 [8]. | [
8
] | [
{
"id": "1511.04599_all_0",
"text": " Deep neural networks are powerful learning models that achieve state-of-the-art pattern recognition performance in many research areas such as bioinformatics (1, 16), speech (12, 6), and computer vision (10, 8). Though deep networks have exhibited very good performance ... |
What is "bundle adjustment"? | Bundle adjustment (BA) is a method used to optimize the camera pose in the tracking thread, to optimize a local window of keyframes and points in the local mapping thread, and, after a loop closure, to optimize all keyframes and points [22]. Local BA optimizes a set of covisible keyframes \mathcal{K}_{L} and all points seen in those keyframes \mathcal{P}_{L} [25]. Full BA is the special case of local BA in which all keyframes and points in the map are optimized, except the origin keyframe, which is fixed to eliminate the gauge freedom [26]. | [
22,
25,
26
] | [
{
"id": "1610.06475_all_0",
"text": " Simultaneous Localization and Mapping (SLAM) has been a hot research topic in the last two decades in the Computer Vision and Robotics communities, and has recently attracted the attention of high-technological companies. SLAM techniques build a map of an unknown enviro... |
What does "online triplet mining method" mean? | The online triplet mining method generates triplets (two matching face thumbnails and a non-matching face thumbnail) from the training data [3]. | [
3
] | [
{
"id": "1503.03832_all_0",
"text": " In this paper we present a unified system for face verification (is this the same person), recognition (who is this person) and clustering (find common people among these faces). Our method is based on learning a Euclidean embedding per image using a deep convolutional ... |
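The triplet objective that this mining feeds can be sketched in a few lines of NumPy. This is an illustrative sketch, not the paper's implementation; the margin value is a common default chosen here for illustration.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    # Squared L2 distance from the anchor to the positive and to the negative.
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    # Hinge loss: the positive must be closer than the negative by `margin`.
    return max(d_pos - d_neg + margin, 0.0)
```

Online mining then selects, within a mini-batch, triplets that still violate this margin, since triplets with zero loss contribute no gradient.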
Is the difference between ORB-SLAM and ORB-SLAM2 that ORB-SLAM only supports monocular cameras? | Yes. ORB-SLAM2, which supports stereo and RGB-D cameras, is built on the monocular, feature-based ORB-SLAM [13]. This shows that ORB-SLAM only supports monocular cameras, in contrast to ORB-SLAM2 [2]. | [
13,
2
] | [
{
"id": "1610.06475_all_0",
"text": " Simultaneous Localization and Mapping (SLAM) has been a hot research topic in the last two decades in the Computer Vision and Robotics communities, and has recently attracted the attention of high-technological companies. SLAM techniques build a map of an unknown enviro... |
They claim that in brain tumours, there is a hierarchical layout of sub-components. Is this True ? Any related experiments that proved it ? | Previous literature states the importance of understanding the sub-component layout of brain tumors for diagnosis and treatment [1]. It can therefore be inferred that these sub-components arise in a hierarchical way as the brain tumor develops [55]. It seems unlikely that the authors conducted additional experiments on this point [62]. | [
1,
55,
62
] | [
{
"id": "1603.05959_all_0",
"text": " Segmentation and the subsequent quantitative assessment of lesions in medical images provide valuable information for the analysis of neuropathologies and are important for planning of treatment strategies, monitoring of disease progression and prediction of patient out... |
What is single image super-resolution? | Single image super-resolution is the task of generating a high-resolution image from a low-resolution one [0]. | [
0
] | [
{
"id": "1511.04587_all_0",
"text": " We address the problem of generating a high-resolution (HR) image given a low-resolution (LR) image, commonly referred as single image super-resolution (SISR) , , . SISR is widely used in computer vision applications ranging from security and surveillance imaging to med... |
How does maximizing log-likelihood lead to optimizing cross-entropy between target probability distribution and the model prediction? | The paper does not discuss the detailed workings of the established relation [17]. | [
17
] | [
{
"id": "1602.02410_all_0",
"text": " Language Modeling (LM) is a task central to Natural Language Processing (NLP) and Language Understanding. Models which can accurately place distributions over sentences not only encode complexities of language such as grammatical structure, but also distill a fair amoun... |
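Although the paper does not spell the relation out, it is easy to verify: with a one-hot target distribution, cross-entropy reduces term-by-term to the negative log-likelihood of the target token, so maximizing log-likelihood is the same as minimizing cross-entropy. A minimal sketch (function names are illustrative):

```python
import math

def nll(probs, target_index):
    # Negative log-likelihood of the target token under the model.
    return -math.log(probs[target_index])

def cross_entropy(probs, one_hot_target):
    # H(p, q) = -sum_i p_i * log(q_i); with a one-hot p, only the
    # target term survives, giving exactly the NLL above.
    return -sum(p * math.log(q) for p, q in zip(one_hot_target, probs) if p > 0)
```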
Look Figure 4. Give your one observation by comparing (a) and (b), or pretrained and non-pretrained. Reason them. | Pretrained LMs can perfectly learn the tasks with many fewer labeled examples than the non-pretrained models, in both tasks [18]. | [
18
] | [
{
"id": "2210.12302_all_0",
"text": " Pretrained Language Models (LMs) have shown singular succcess on a range of natural language understandings tasks, to the extent that they have become foundational for contemporary NLP systems. Several works have investigated why pretraining works so well Warstadt et al... |
What are the evidences that region-wise computations are of low cost ? | As seen in Table 5, Faster R-CNN is 25 times slower than R-FCN when mining for 300 RoIs, and 6 times slower when mining for 2000 RoIs, proving that region-wise computations are of low cost [33]. | [
33
] | [
{
"id": "1605.06409_all_0",
"text": " A prevalent family (8, 6, 18) of deep networks for object detection can be divided into two subnetworks by the Region-of-Interest (RoI) pooling layer : (i) a shared, “fully convolutional” subnetwork independent of RoIs, and (ii) an RoI-wise subnetwork that does not shar... |
What is DDIM? What does closed-form manner mean here? | DDIM is a sampling process that is described in detail in Song et al., 2020 [15] [51]. | [
15,
51
] | [
{
"id": "2208.01618_all_0",
"text": " In a famous scene from the motion picture “Titanic”, Rose makes a request of Jack: “…draw me like one of your French girls”. Albeit simple, this request contains a wealth of information. It indicates that Jack should produce a drawing; It suggests that its style and com... |
What is the contribution that augmentation data with deformation adds to the overall performance? | U-Net achieves good performance with less training time and memory by using deformation-based data augmentation [15]. | [
15
] | [
{
"id": "1505.04597_all_0",
"text": " In the last two years, deep convolutional networks have outperformed the state of the art in many visual recognition tasks, e.g. (7, 3). While convolutional networks have already existed for a long time , their success was limited due to the size of the available traini... |
Is the theoretical speedup greater than the actual speedup when comparing ShuffleNet to AlexNet on real hardware? | Yes, while the theoretical speedup of ShuffleNet is 18 times, the actual speedup is only ~13 times, compared to the AlexNet on the real hardware [3]. | [
3
] | [
{
"id": "1707.01083_all_0",
"text": " Building deeper and larger convolutional neural networks (CNNs) is a primary trend for solving major visual recognition tasks (21, 9, 33, 5, 28, 24). The most accurate CNNs usually have hundreds of layers and thousands of channels (9, 34, 32, 40), thus requiring computa... |
What are examples of noise for generated questions? | The passage collection of a target domain is fed into this generator to create noisy question-passage pairs [11]. | [
11
] | [
{
"id": "2004.14503_all_0",
"text": " Recent advances in neural retrieval have led to advancements on several document, passage and knowledge-base benchmarks Guo et al. (2016); Pang et al. (2016); Hui et al. (2017); Dai et al. (2018); Gillick et al. (2018); Nogueira and Cho (2019a); MacAvaney et al. (2019);... |
How does the depth of the residual networks affect their performance in the experiments? | The increased depth of residual networks improves their performance, lowers training error, and makes them generalize better to validation data [32]. | [
32
] | [
{
"id": "1512.03385_all_0",
"text": " Deep convolutional neural networks (22, 21) have led to a series of breakthroughs for image classification (21, 50, 40). Deep networks naturally integrate low/mid/high-level features and classifiers in an end-to-end multi-layer fashion, and the “levels” of features can... |
How did the authors optimize lambda.s and lambda.e? | \lambda_s and \lambda_e are the weights for the depth smoothness loss and the explainability regularization, respectively [18]. For all the experiments, the paper uses the fixed values \lambda_s = 0.5 and \lambda_e = 0.2 [23]. | [
18,
23
] | [
{
"id": "1704.07813_all_0",
"text": " Humans are remarkably capable of inferring ego-motion and the 3D structure of a scene even over short timescales. For instance, in navigating along a street, we can easily locate obstacles and react quickly to avoid them. Years of research in geometric computer vision h... |
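With the fixed weights reported above, the combined objective is simply a weighted sum of the loss terms. A sketch (the function and argument names are illustrative, not the paper's code):

```python
def total_loss(l_view, l_smooth, l_expl, lambda_s=0.5, lambda_e=0.2):
    # Weighted sum of the view-synthesis loss, the depth smoothness loss,
    # and the explainability regularization, using the fixed weights
    # reported in the answer above.
    return l_view + lambda_s * l_smooth + lambda_e * l_expl
```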
What point in StarGAN is valuable compared to Conditional GAN? | The paper's model can handle translation across multiple different domains with a single model [8]. | [
8
] | [
{
"id": "1711.09020_all_0",
"text": " The task of image-to-image translation is to change a particular aspect of a given image to another, e.g., changing the facial expression of a person from smiling to frowning (see Fig. StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Tra... |
Is the \delta a hyper-parameter? | Yes, it is a hyper-parameter: it is kept within a fixed range during training, and it is fixed within this range during inference [44]. | [
44
] | [
{
"id": "1609.08144_all_0",
"text": " Neural Machine Translation (NMT) (41, 2) has recently been introduced as a promising approach with the potential of addressing many shortcomings of traditional machine translation systems. The strength of NMT lies in its ability to learn directly, in an end-to-end fashi... |
How did previous OCCF studies mitigate the problem of performance depending largely on the negative sampling distribution? | Previous OCCF studies assume that all unobserved interactions are negative, which mitigates the problem of performance depending heavily on the negative sampling distribution [2]. | [
2
] | [
{
"id": "2105.06323_all_0",
"text": " Over the past decade, one-class collaborative filtering (OCCF) problems (Pan et al., 2008; Hu et al., 2008) have been extensively researched to accurately infer a user’s preferred items, particularly for the recommender systems where only the users’ implicit feedback on... |
How many questions are generated for each passage? | At most 5 salient sentences are generated from a passage [41]. | [
41
] | [
{
"id": "2004.14503_all_0",
"text": " Recent advances in neural retrieval have led to advancements on several document, passage and knowledge-base benchmarks Guo et al. (2016); Pang et al. (2016); Hui et al. (2017); Dai et al. (2018); Gillick et al. (2018); Nogueira and Cho (2019a); MacAvaney et al. (2019);... |
How does the network achieve the correction of its own mistakes? | The network corrects its own mistakes through continuous servoing: by observing the outcomes of its past actions, it can correct mistakes and achieve a high success rate [13]. The method uses continuous feedback to correct mistakes and reposition the gripper; the servoing mechanism provides the robot with fast feedback to perturbations and object motion, as well as robustness [2]. | [
13,
2
] | [
{
"id": "1603.02199_all_0",
"text": " When humans and animals engage in object manipulation behaviors, the interaction inherently involves a fast feedback loop between perception and action. Even complex manipulation tasks, such as extracting a single object from a cluttered bin, can be performed with hardl... |
For the images used for visualization in the paper, were they selected randomly or picked by the authors? | The authors' best practice was to combine the effects of different regularization methods to produce interpretable images [29]. | [
29
] | [
{
"id": "1506.06579_all_0",
"text": " The last several years have produced tremendous progress in training powerful, deep neural network models that are approaching and even surpassing human abilities on a variety of challenging machine learning tasks (Taigman et al., 2014; Schroff et al., 2015; Hannun et a... |
Does it have anything to do with the nature and complexity of data we are working with ? | In general, 3D MRI volume data is complex [12]. Prostate anomaly segmentation also makes the data under consideration unique [15]. | [
12,
15
] | [
{
"id": "1606.04797_all_0",
"text": " Recent research in computer vision and pattern recognition has highlighted the capabilities of Convolutional Neural Networks (CNNs) to solve challenging tasks such as classification, segmentation and object detection, achieving state-of-the-art performances. This succes... |
How does the deep residual learning framework address the degradation problem in deep convolutional neural networks? | The deep residual learning framework addresses the degradation problem by having the layers fit a residual mapping instead of the original mapping [4]. | [
4
] | [
{
"id": "1512.03385_all_0",
"text": " Deep convolutional neural networks (22, 21) have led to a series of breakthroughs for image classification (21, 50, 40). Deep networks naturally integrate low/mid/high-level features and classifiers in an end-to-end multi-layer fashion, and the “levels” of features can... |
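The idea can be sketched with a toy two-layer block in NumPy: the shortcut adds the input back to the block's output, so the weight layers only have to learn the residual. This is a sketch of the framework, not the paper's exact architecture (no convolutions or batch normalization here):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    # F(x): a small stack of weight layers with a ReLU in between.
    f = relu(x @ w1) @ w2
    # The shortcut adds the input back, so the layers only need to
    # learn the residual F(x) = H(x) - x rather than H(x) itself.
    return relu(f + x)
```

Note that with zero weights the block reduces to (a ReLU of) the identity, which is why extra residual blocks do not hurt training the way extra plain layers can.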
Why shouldn’t an existing word be used as an identifier while fine tuning? | Existing words in the training set of text-to-image diffusion models have stronger priors, hence they shouldn't be employed as identifiers during fine tuning [18]. | [
18
] | [
{
"id": "2208.12242_all_0",
"text": " Can you imagine your own dog traveling around the world, or your favorite bag displayed in the most exclusive showroom in Paris? What about your parrot being the main character of an illustrated storybook? Rendering such imaginary scenes is a challenging task that requi... |
How do they show that single word embeddings capture unique and varied concepts? | The authors explain how their single-word embedding approach is able to pick up on finer details (such as colour schemes, or complex images) that are difficult to express using natural language alone [36]. Additionally, their results indicate that their single-word embedding approach has comparable performance to multi-word embeddings, suggesting that their single-word embeddings are not inherently limited in how much information they encode [61]. | [
36,
61
] | [
{
"id": "2208.01618_all_0",
"text": " In a famous scene from the motion picture “Titanic”, Rose makes a request of Jack: “…draw me like one of your French girls”. Albeit simple, this request contains a wealth of information. It indicates that Jack should produce a drawing; It suggests that its style and com... |
Why are pure rotations hard to track for monocular SLAM? | Pure rotations are hard to track for monocular SLAM because depth is not observable from just one camera, so the scale of the map and estimated trajectory is unknown [1]. | [
1
] | [
{
"id": "1610.06475_all_0",
"text": " Simultaneous Localization and Mapping (SLAM) has been a hot research topic in the last two decades in the Computer Vision and Robotics communities, and has recently attracted the attention of high-technological companies. SLAM techniques build a map of an unknown enviro... |
I am wondering why the authors used community QA datasets for training the question generator. How about open-domain information retrieval datasets, such as MS MARCO, which also cover diverse domains? | Neural retrieval models trained on open-domain information retrieval datasets do not transfer well, especially to specialized domains [3]. | [
3
] | [
{
"id": "2004.14503_all_0",
"text": " Recent advances in neural retrieval have led to advancements on several document, passage and knowledge-base benchmarks Guo et al. (2016); Pang et al. (2016); Hui et al. (2017); Dai et al. (2018); Gillick et al. (2018); Nogueira and Cho (2019a); MacAvaney et al. (2019);... |
Did the authors ever try different criteria for choosing hyperparameters? | Regarding hyperparameter tuning, the paper states that for the DDPG algorithm the authors used the hyperparameters reported in Lillicrap et al. [33]. | [
33
] | [
{
"id": "1604.06778_all_0",
"text": " Reinforcement learning addresses the problem of how agents should learn to take actions to maximize cumulative reward through interactions with the environment. The traditional approach for reinforcement learning algorithms requires carefully chosen feature representati... |
How could restricting self attention to some window with size r be useful with long term dependencies? | Restricting self-attention to a window of size r improves computational performance, but its effect on long-term dependencies has not been explored in the paper [36]. | [
36
] | [
{
"id": "1706.03762_all_0",
"text": " Recurrent neural networks, long short-term memory and gated recurrent neural networks in particular, have been firmly established as state of the art approaches in sequence modeling and transduction problems such as language modeling and machine translation (35, 2, 5)... |
What are the main three properties of the model studied in this paper? | They study the depth of their model, its multi-scale property and how well it performs, and residual learning for faster convergence [40]. | [
40
] | [
{
"id": "1511.04587_all_0",
"text": " We address the problem of generating a high-resolution (HR) image given a low-resolution (LR) image, commonly referred as single image super-resolution (SISR) , , . SISR is widely used in computer vision applications ranging from security and surveillance imaging to med... |
How were the variations of the camera poses for the different robots determined? | A slightly different camera pose was selected for each robot, relative to the robot base [3]. Another related area to our method is visual servoing, which addresses moving a camera or end-effector to a desired pose using visual feedback [9]. | [
3,
9
] | [
{
"id": "1603.02199_all_0",
"text": " When humans and animals engage in object manipulation behaviors, the interaction inherently involves a fast feedback loop between perception and action. Even complex manipulation tasks, such as extracting a single object from a cluttered bin, can be performed with hardl... |
What type of new tasks was Self-Instruct able to generate which were not seen in previous human-created instruction datasets? | The paper mentions that their model is used to generate 52,000 instructions and 82,000 input-output pairs, for a wide range of tasks, which the authors have made publicly available [34]. It might be possible to find more information on the specific tasks that were created by looking at this synthetic dataset, but the paper itself does not contain concrete examples [4]. | [
34,
4
] | [
{
"id": "2212.10560_all_0",
"text": " The recent NLP literature has witnessed a tremendous amount of activity in building models that can follow natural language instructions (Mishra et al., 2022; Wei et al., 2022; Sanh et al., 2022; Wang et al., 2022; Ouyang et al., 2022; Chung et al., 2022, i.a.). These d... |
How do the authors define plain networks and residual networks? | Plain networks do not use shortcut connections, whereas in residual networks a shortcut connection is added to each pair of filters [28]. | [
28
] | [
{
"id": "1512.03385_all_0",
"text": " Deep convolutional neural networks (22, 21) have led to a series of breakthroughs for image classification (21, 50, 40). Deep networks naturally integrate low/mid/high-level features and classifiers in an end-to-end multi-layer fashion, and the “levels” of features can... |
How is this v1 related to generating consistent content? | By conditioning on the first frame, the model can autoregressively extend the video with frames that share the same motion (verb) [11]. | [
11
] | [
{
"id": "2212.11565_all_0",
"text": " The large-scale multimodal dataset , consisting of billions of text-image pairs crawled from the Internet, has enabled a breakthrough in Text-to-Image (T2I) generation (30, 35, 6, 42, 40). To replicate this success in Text-to-Video (T2V) generation, recent works (42, 15... |
How does the NetVLAD layer differ from the original VLAD? | The original VLAD method uses hand-crafted features and applies the VLAD technique to them by concatenating multiple VLADs [19]. In contrast, the NetVLAD layer uses a CNN to extract features and applies the VLAD technique in a single layer by learning the aggregation weights of the residuals (x_i − c_k) in different parts of the descriptor space [21]. | [
19,
21
] | [
{
"id": "1511.07247_all_0",
"text": " Visual place recognition has received a significant amount of attention in the past years both in computer vision (66, 35, 10, 64, 65, 24, 9, 81, 4, 80, 63) and robotics communities (15, 16, 46, 44, 75) motivated by, e.g., applications in autonomous driving , augmented ... |
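The residual aggregation shared by VLAD and NetVLAD can be sketched as follows. The assignment weights are taken as given here; in NetVLAD they come from a learned softmax, while in the original VLAD they are hard (0/1) cluster assignments. This is an illustrative sketch, not either paper's implementation:

```python
import numpy as np

def vlad(descriptors, centers, assign):
    # descriptors: (N, D) local features; centers: (K, D) cluster centers;
    # assign: (N, K) assignment weights of each descriptor to each cluster.
    K, D = centers.shape
    v = np.zeros((K, D))
    for k in range(K):
        # Accumulate (soft-)weighted residuals (x_i - c_k) for cluster k.
        v[k] = (assign[:, k:k + 1] * (descriptors - centers[k])).sum(axis=0)
    return v.ravel()  # concatenate the K residual sums into one vector
```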
Did authors perform a hyperparameter search before deciding the values of batch size, epsilon and L_C during training? | While the paper reports the values the authors set for those hyperparameters, it contains no information about a hyperparameter search [29]. | [
29
] | [
{
"id": "1506.07503_all_0",
"text": " Recently, attention-based recurrent networks have been successfully applied to a wide variety of tasks, such as handwriting synthesis , machine translation , image caption generation and visual object classification .111An early version of this work was presented at th... |
Does graph property prediction task create one representation corresponding to the graph? | Yes, in general a single graph representation is created by aggregating node representations in order to predict graph properties [24]. | [
24
] | [
{
"id": "2202.03036_all_0",
"text": " Graph neural networks (GNNs) have been established as powerful and flexible tools for graph representation learning, with successful applications in drug discovery (Gaudelet et al., 2021), protein design (Ingraham et al., 2019), social network analysis (Fan et al., 2019... |
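A minimal sketch of such a graph-level readout, using mean pooling (sum and max pooling are common alternatives); the function name is illustrative:

```python
import numpy as np

def graph_readout(node_embeddings):
    # Aggregate the (num_nodes, dim) node representations into a single
    # graph-level vector; mean pooling keeps the output size independent
    # of the number of nodes in the graph.
    return node_embeddings.mean(axis=0)
```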
What is the importance of normalization in deep networks and how does it contribute to the effectiveness of the deep residual learning framework? | Normalization is used both in initialization and in intermediate layers, which helps deep networks start converging [21]. | [
21
] | [
{
"id": "1512.03385_all_0",
"text": " Deep convolutional neural networks (22, 21) have led to a series of breakthroughs for image classification (21, 50, 40). Deep networks naturally integrate low/mid/high-level features and classifiers in an end-to-end multi-layer fashion, and the “levels” of features can... |
What is the difference between an online experiment and an offline experiment? | The authors conduct an online A/B test, and use three downstream applications as offline experiments [47]. For the online experiment, they run feed recommendation A/B testing from 2020-02-01 to 2020-02-10 in the “Good Morning” tab of Tencent Mobile Manager and the “Find” tab of Tencent Wi-Fi Manager [51]. For the offline experiments, they compare the baseline with four different versions of AETN on three typical downstream tasks [63]. | [
47,
51,
63
] | [
{
"id": "2005.13303_all_0",
"text": " Personalized mobile business, e.g., recommendations, and advertising, often require effective user representations. For better performance, user modeling in industrial applications often considers as much information as possible, including but not limited to gender, loc... |
Is there a disadvantage to using low-precision arithmetic for inference, such as decreased inference accuracy? | Quantized models can perform slightly worse; however, in this paper the authors applied constraints during training so that the model is quantizable with minimal impact on its output. The quantized model even performed slightly better than the non-quantized one, which they suggest could be due to the regularizing role those constraints played during training [43]. | [
43
] | [
{
"id": "1609.08144_all_0",
"text": " Neural Machine Translation (NMT) (41, 2) has recently been introduced as a promising approach with the potential of addressing many shortcomings of traditional machine translation systems. The strength of NMT lies in its ability to learn directly, in an end-to-end fashi... |
Why does RoBERTa use dynamic masking rather than static masking? | They use dynamic masking to avoid applying the same mask to a sequence in every training iteration [29]. | [
29
] | [
{
"id": "1907.11692_all_0",
"text": " Self-training methods such as ELMo Peters et al. (2018), GPT Radford et al. (2018), BERT Devlin et al. (2019), XLM Lample and Conneau (2019), and XLNet Yang et al. (2019) have brought significant performance gains, but it can be challenging to determine which aspects of... |
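A minimal sketch of dynamic masking: the masked positions are re-drawn each time a sequence is fed to the model, instead of being fixed once during preprocessing. The 15% rate matches BERT-style pretraining; the function itself is illustrative, not RoBERTa's implementation:

```python
import random

def dynamic_mask(tokens, mask_prob=0.15, mask_token="[MASK]", rng=None):
    # Re-draw the masked positions on every call, so the same sequence
    # gets a different mask each epoch (dynamic rather than static masking).
    rng = rng or random.Random()
    return [mask_token if rng.random() < mask_prob else t for t in tokens]
```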
What would happen if the authors replaced the three prediction layers with DPM (Deformable Convolution Layers) ? | If the authors replaced the three prediction layers with DPM, the performance of SSD would degrade compared to R-CNN and other methods [34]. | [
34
] | [
{
"id": "1512.02325_all_0",
"text": " Current state-of-the-art object detection systems are variants of the following approach: hypothesize bounding boxes, resample pixels or features for each box, and apply a high-quality classifier. This pipeline has prevailed on detection benchmarks since the Selective S... |
Why did the authors use only one composer rather than several composers together? | The authors use only one composer, Chopin, because Chopin's music has been one of the most common resources analyzed in the literature to investigate developments in Western musical expression with respect to various musical structures [21]. | [
21
] | [
{
"id": "2208.14867_all_0",
"text": " Computational modeling of expressive music performance focuses on mimicking human behaviors that convey the music (1, 2). For piano performance, one common task is to render an expressive performance from a quantized musical score. It aims to reproduce the loudness and ... |
Why can’t we create novel rendition of reference images using the pretrained model itself? | Because the output domain of the pretrained model is limited, we cannot use it to create novel renditions of reference images [1]. | [
1
] | [
{
"id": "2208.12242_all_0",
"text": " Can you imagine your own dog traveling around the world, or your favorite bag displayed in the most exclusive showroom in Paris? What about your parrot being the main character of an illustrated storybook? Rendering such imaginary scenes is a challenging task that requi... |
How are the RNN architectures used for the decoder in prior research different from each other? | Kalchbrenner and Blunsom used a standard RNN hidden unit for the decoder [6], while Sutskever et al. and Luong et al. stacked multiple layers of an RNN with Long Short-Term Memory (LSTM) hidden units for both the encoder and the decoder [7]. | [
6,
7
] | [
{
"id": "1508.04025_all_0",
"text": " Neural Machine Translation (NMT) achieved state-of-the-art performances in large-scale translation tasks such as from English to French (Luong et al., 2015) and English to German (Jean et al., 2015). NMT is appealing since it requires minimal domain knowledge and is con... |
Do the authors evaluate their architecture on non-mobile/cellphone type of edge devices such as FPGAs? | The authors only evaluated their architecture on mobile devices (Google Pixel 1) and did not evaluate it on non-mobile types of devices [1]. | [
1
] | [
{
"id": "1801.04381_all_0",
"text": " Neural networks have revolutionized many areas of machine intelligence, enabling superhuman accuracy for challenging image recognition tasks. However, the drive to improve accuracy often comes at a cost: modern state of the art networks require high computational resour... |
How did previous models incorporate the knowledge embedding directly to language model? | Existing models incorporate the knowledge embeddings through a pre-training process [0]. They use these knowledge embeddings as their initial representations [12]. After that, a self-attention model is also used to capture the contextual information [15]. | [
0,
12,
15
] | [
{
"id": "1904.09223_all_0",
"text": " Language representation pre-training Mikolov et al. (2013); Devlin et al. (2018) has been shown effective for improving many natural language processing tasks such as named entity recognition, sentiment analysis, and question answering. In order to get reliable word rep... |
For other than the painting -> photo case (e.g. photo -> painting), does L_identity still work well? | Without L_identity, the generators G and F are free to change the tint of input images even when there is no need to [48]. | [
48
] | [
{
"id": "1703.10593_all_0",
"text": " What did Claude Monet see as he placed his easel by the bank of the Seine near Argenteuil on a lovely spring day in 1873 (Figure 1, top-left)? A color photograph, had it been invented, may have documented a crisp blue sky and a glassy river reflecting it. Monet conveyed... |