| input (string, 14–315 chars) | answer (string, 9–2.16k chars) | gold_ctxs (list, 1–15 items) | ctxs (list, 11–186 items) |
|---|---|---|---|
The paper mentions Eigenvalue Decomposition (EVD) as well as Singular Value Decomposition numerous times. How are the two related, and how are they different? | The grouping matrix is symmetric and real, which guarantees that it has real eigenvalues and eigenvectors [8]. | [
8
] | [
{
"id": "2209.02939_all_0",
"text": " Graph Neural Networks (GNNs) learn representations of individual nodes based on the connectivity structure of an input graph. For graph-level prediction tasks, the standard procedure globally pools all the node features into a single graph representation without weight ... |
What is meant by “Proximal Policy Optimization”? | Proximal Policy Optimization is an optimization algorithm used to train the controller RNN [18]. | [
18
] | [
{
"id": "1707.07012_all_0",
"text": " Developing neural network image classification models often requires significant architecture engineering. Starting from the seminal work of on using convolutional architectures (17, 34) for ImageNet classification, successive advancements through architecture enginee... |
The author mentioned char-level embeddings. Did they show the experimental result with char-level embeddings? | Yes, they showed that result [30]. | [
30
] | [
{
"id": "1611.01603_all_0",
"text": " The tasks of machine comprehension (MC) and question answering (QA) have gained significant popularity over the past few years within the natural language processing and computer vision communities. Systems trained end-to-end now achieve promising results on a variety o... |
Why did the GAN-based image editing approach succeed only on highly curated datasets and struggle over large and diverse datasets? | The detail of images generated with GANs depends on the initial noise vector and on the interaction between pixels and the text embedding [33]. Unfortunately, the reason why the approach struggles on large and diverse datasets is not mentioned in this paper, and neither the embedding size nor any related information can be exploited to complete the answer [11]. | [
33,
11
] | [
{
"id": "2208.01626_all_0",
"text": " Recently, large-scale language-image (LLI) models, such as Imagen , DALL·E 2 and Parti , have shown phenomenal generative semantic and compositional power, and gained unprecedented attention from the research community and the public eye. These LLI models are trained o... |
How do the authors select SE blocks to remove? | Removing SE blocks whose activations have similar distributions over different image classes is known to incur only a marginal loss in accuracy [56]. Thus, for each channel c, the authors calculated the standard deviation \sigma_{c} of activation values over different images [57]. A small value of \sigma_{c} means that the SE block has a similar distribution over different images [58]. They therefore defined the metric as the average of \sigma_{c} over all channels [71]. | [
56,
57,
58,
71
] | [
{
"id": "2009.02009_all_0",
"text": " As there are growing needs of deep learning applications based on convolutional neural network(CNN) in embedded systems, improving the accuracy of CNNs under a given set of constraints on latency and energy consumption has brought keen interest to researchers as a chall... |
Would functional programming languages be considered to be a part of the imperative or the declarative paradigm, or neither? | Although the paper describes the difference between the imperative and declarative paradigms of programming languages, this question cannot be answered and requires external knowledge, since there is no evidential information about functional programming languages [1]. | [
1
] | [
{
"id": "1512.01274_all_0",
"text": " The scale and complexity of machine learning (ML) algorithms are becoming increasingly large. Almost all recent ImageNet challenge winners employ neural networks with very deep layers, requiring billions of floating-point operations to process one single sample. The ri... |
What are the benefits of hierarchical features for capturing local context? | PointNet lacks the ability to capture local context at different scales [11]. The paper introduces a hierarchical feature learning framework to resolve this limitation [40]. The idea of hierarchical feature learning has been very successful, and the convolutional neural network is one of its most prominent examples [48]. | [
11,
40,
48
] | [
{
"id": "1706.02413_all_0",
"text": " We are interested in analyzing geometric point sets which are collections of points in a Euclidean space. A particularly important type of geometric point set is point cloud captured by 3D scanners, e.g., from appropriately equipped autonomous vehicles. As a set, such d... |
Why did they use a different number of tokens for the BioASQ and Forum/NQ datasets? | They use a different number of tokens for the BioASQ and Forum/NQ datasets due to the difference in the average length of questions and answers in each dataset [12]. | [
12
] | [
{
"id": "2004.14503_all_0",
"text": " Recent advances in neural retrieval have led to advancements on several document, passage and knowledge-base benchmarks Guo et al. (2016); Pang et al. (2016); Hui et al. (2017); Dai et al. (2018); Gillick et al. (2018); Nogueira and Cho (2019a); MacAvaney et al. (2019);... |
The authors mention that they measure performance of their models using the character error rate metric, which they calculate using best path encoding. What does "best path" here mean? | Since there is no evidential information about the detail of best path decoding, this question cannot be answered and requires external knowledge, specifically the reference [39] [28]. | [
28
] | [
{
"id": "1503.04069_all_0",
"text": " Recurrent neural networks with Long Short-Term Memory (which we will concisely refer to as LSTMs) have emerged as an effective and scalable model for several learning problems related to sequential data. Earlier methods for attacking these problems have either been tail... |
Which factor is more related to model performance between pretraining data size and language similarity? | Pretraining data size is more related to model performance [22]. | [
22
] | [
{
"id": "2204.08110_all_0",
"text": " Pretrained language models have become an integral part of NLP systems. They come in two flavors: monolingual, where the model is trained on text from a single language, and multilingual, where the model is jointly trained on data from many different languages. Monoling... |
Would there be a performance gain if the model utilizes the IE (information extraction) model instead of the exact match for target entity recognition? | This work’s approach aims at focusing mostly on informative factors [26]. For example, the key sentence selection module focused on extracting only the most relevant sentences and the target entity recognition module focused on identifying only the most informative entities [4]. | [
26,
4
] | [
{
"id": "2204.11673_all_0",
"text": " Passage Re-ranking is a crucial stage in modern information retrieval systems, which aims to reorder a small set of candidate passages to be presented to users. To put the most relevant passages on top of a ranking list, a re-ranker is usually designed with powerful cap... |
How does increasing the embedding and projection size help with respect to the model? | The paper does not discuss how the positive effects are brought about [48]. | [
48
] | [
{
"id": "1602.02410_all_0",
"text": " Language Modeling (LM) is a task central to Natural Language Processing (NLP) and Language Understanding. Models which can accurately place distributions over sentences not only encode complexities of language such as grammatical structure, but also distill a fair amoun... |
Is "one-step-ahead conditional" here mean the same as the modelled probability of calculating the next token? | Yes, "one-step-ahead conditional", in this context refers to the calculation of probabilities for what the next token might be, given a sequence of past tokens in a sentence [8]. | [
8
] | [
{
"id": "2004.04092_all_0",
"text": " Pre-trained language models (PLMs) have substantially advanced the state-of-the-art across a variety of natural language processing (NLP) tasks Peters et al. (2018); Devlin et al. (2019); Yang et al. (2019); Radford et al. (2019); Liu et al. (2019); Keskar et al. (2019)... |
SSD adds six auxiliary convolution layers after the VGG16. In three of those layers, we make 6 predictions instead of 4 (as YOLO does). Why is this the case? | SSD adds 6 auxiliary convolution layers after VGG16 to produce detections with the following key features: a) multi-scale feature maps for detection, b) convolutional predictors for detection, and c) default boxes and aspect ratios [4]. | [
4
] | [
{
"id": "1512.02325_all_0",
"text": " Current state-of-the-art object detection systems are variants of the following approach: hypothesize bounding boxes, resample pixels or features for each box, and apply a high-quality classifier. This pipeline has prevailed on detection benchmarks since the Selective S... |
Why do the authors choose to extend Single-Path NAS as the search strategy, instead of famous NAS methods such as MNASNet? | The authors' aim is to build a fast NAS methodology [11]. Single-Path NAS can search for a good architecture faster than existing NAS techniques [18]. It builds a faster NAS technique by reducing the number of trainable parameters [37]. Another reason is that Single-Path NAS can be efficiently extended to support MixConv [5]. | [
11,
18,
37,
5
] | [
{
"id": "2009.02009_all_0",
"text": " As there are growing needs of deep learning applications based on convolutional neural network(CNN) in embedded systems, improving the accuracy of CNNs under a given set of constraints on latency and energy consumption has brought keen interest to researchers as a chall... |
Is it true that the fact that the optimal performance was not seen with 100% of data indicates the strong generalization ability? I suspect that there can be tail documents that can be challenging for the system to memorize even when 100% data is used for training. | It is true that the fact that the optimal performance was not seen with 100% of the data indicates strong generalization ability [52]. | [
52
] | [
{
"id": "2004.14503_all_0",
"text": " Recent advances in neural retrieval have led to advancements on several document, passage and knowledge-base benchmarks Guo et al. (2016); Pang et al. (2016); Hui et al. (2017); Dai et al. (2018); Gillick et al. (2018); Nogueira and Cho (2019a); MacAvaney et al. (2019);... |
What is "local attention"? | Local attention is a Transformer model that uses a sliding window of some fixed context window size [11]. | [
11
] | [
{
"id": "2210.15541_all_0",
"text": " The Transformer architecture has been the go-to method for encoding sequential data, due to its superior performance in various tasks such as machine translation , image classification , and protein language modeling . Its key strength stems from the multi-head attenti... |
Will these embeddings be based on measuring similarities between features of new faces and features extracted from faces which the model was trained on ? | FaceNet embeddings can be used to measure similarity between new faces and trained faces [0]. | [
0
] | [
{
"id": "1503.03832_all_0",
"text": " In this paper we present a unified system for face verification (is this the same person), recognition (who is this person) and clustering (find common people among these faces). Our method is based on learning a Euclidean embedding per image using a deep convolutional ... |
How much the computational complexity was reduced when using depthwise separable convolution? | 3\times 3 depthwise separable convolutions use 8–9 times less computation than standard convolutions [25]. | [
25
] | [
{
"id": "1704.04861_all_0",
"text": " Convolutional neural networks have become ubiquitous in computer vision ever since AlexNet popularized deep convolutional neural networks by winning the ImageNet Challenge: ILSVRC 2012 . The general trend has been to make deeper and more complicated networks in order t... |
Does RoBERTa also take as input a concatenation of two segments, as BERT did? | Unlike BERT, RoBERTa takes as input a concatenation of four sequences: the candidate answer with the corresponding question and passage [67]. | [
67
] | [
{
"id": "1907.11692_all_0",
"text": " Self-training methods such as ELMo Peters et al. (2018), GPT Radford et al. (2018), BERT Devlin et al. (2019), XLM Lample and Conneau (2019), and XLNet Yang et al. (2019) have brought significant performance gains, but it can be challenging to determine which aspects of... |
RoBERTa is based on BERT-large or BERT base? | RoBERTa is based on BERT-large [49]. | [
49
] | [
{
"id": "1907.11692_all_0",
"text": " Self-training methods such as ELMo Peters et al. (2018), GPT Radford et al. (2018), BERT Devlin et al. (2019), XLM Lample and Conneau (2019), and XLNet Yang et al. (2019) have brought significant performance gains, but it can be challenging to determine which aspects of... |
How should positional information be interpreted or captured with sines and cosines? | Positional encodings can be generated using sinusoidal functions whose wavelengths form a geometric progression, which can encode relative positions [31]. An advantage of sinusoidal positional encoding is that it may allow the model to extrapolate to sequence lengths longer than those encountered during training [32]. | [
31,
32
] | [
{
"id": "1706.03762_all_0",
"text": " Recurrent neural networks, long short-term memory and gated recurrent neural networks in particular, have been firmly established as state of the art approaches in sequence modeling and transduction problems such as language modeling and machine translation (35, 2, 5)... |
Why doesn't character-level embeddings degrade performance compared to word-level embeddings? | The paper discusses one advantage of character-level embeddings over word-level embeddings [28]. | [
28
] | [
{
"id": "1602.02410_all_0",
"text": " Language Modeling (LM) is a task central to Natural Language Processing (NLP) and Language Understanding. Models which can accurately place distributions over sentences not only encode complexities of language such as grammatical structure, but also distill a fair amoun... |
Define how fANOVA is performed. | fANOVA marginalizes over hyperparameter dimensions using regression trees to predict the marginal error for a single parameter while averaging over all other parameters [47]. | [
47
] | [
{
"id": "1503.04069_all_0",
"text": " Recurrent neural networks with Long Short-Term Memory (which we will concisely refer to as LSTMs) have emerged as an effective and scalable model for several learning problems related to sequential data. Earlier methods for attacking these problems have either been tail... |
Which part of the proposed model helps to solve the sparsity problem? | Both the transformer and retention-autoencoder parts try to solve the sparsity problem by using tied weight matrices [27]. The reason for this is that \mathbf{W}^{\Omega}=\mathbf{W}^{\Theta}=\mathbf{W}^{\Phi}=\mathbf{W}^{(4)}={\mathbf{W}^{a}}^{\mathrm{T}} [38]. Since \mathbf{W}^{\Omega} is used in the transformer part and \mathbf{W}^{(4)} is used in the autoencoder part, both of them try to solve the sparsity problem [39]. | [
27,
38,
39
] | [
{
"id": "2005.13303_all_0",
"text": " Personalized mobile business, e.g., recommendations, and advertising, often require effective user representations. For better performance, user modeling in industrial applications often considers as much information as possible, including but not limited to gender, loc... |
Why do the authors experiment on an NPU simulator, not a real hardware chip? | MIDAP can support DWConv and SE more efficiently than other NPUs [27]. However, MIDAP is not yet implemented as a real hardware chip; instead, its cycle-accurate simulator is open-sourced [59]. | [
27,
59
] | [
{
"id": "2009.02009_all_0",
"text": " As there are growing needs of deep learning applications based on convolutional neural network(CNN) in embedded systems, improving the accuracy of CNNs under a given set of constraints on latency and energy consumption has brought keen interest to researchers as a chall... |
What are the two factors the MobileNet hyperparameters will affect? | The MobileNet architecture introduces two hyper-parameters: the width multiplier and the resolution multiplier [1]. | [
1
] | [
{
"id": "1704.04861_all_0",
"text": " Convolutional neural networks have become ubiquitous in computer vision ever since AlexNet popularized deep convolutional neural networks by winning the ImageNet Challenge: ILSVRC 2012 . The general trend has been to make deeper and more complicated networks in order t... |
How many terms are used for the loss function of a generator? | It takes three terms [15]. | [
15
] | [
{
"id": "1711.09020_all_0",
"text": " The task of image-to-image translation is to change a particular aspect of a given image to another, e.g., changing the facial expression of a person from smiling to frowning (see Fig. StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Tra... |
What hyperparameters do each of the eight variants of LSTMs investigated by the authors of this paper have? | The authors investigated 1) number of LSTM blocks per hidden layer 2) learning rate 3) momentum 4) standard deviation of Gaussian input noise with random searches with uniform sampling [35]. | [
35
] | [
{
"id": "1503.04069_all_0",
"text": " Recurrent neural networks with Long Short-Term Memory (which we will concisely refer to as LSTMs) have emerged as an effective and scalable model for several learning problems related to sequential data. Earlier methods for attacking these problems have either been tail... |
Why is it necessary to maximize the coverage of anchor points? | Maximizing the coverage of anchor points is necessary in order to ensure that the local transformations are being applied evenly across the entire input space [10]. This allows for a more diverse set of augmented samples to be generated, which can help to improve the robustness and generalization of a model trained on the augmented data [11]. | [
10,
11
] | [
{
"id": "2110.05379_all_0",
"text": " Modern deep learning techniques, which established their popularity on structured data, began showing success on point clouds. Unlike images with clear lattice structures, each point cloud is an unordered set of points with no inherent structures that globally represent... |
What is the impact of the number of training videos on the performance of supervised and unsupervised tasks? | As the number of training videos increases, the performance on both supervised and unsupervised tasks increases [37]. | [
37
] | [
{
"id": "1502.04681_all_0",
"text": " Understanding temporal sequences is important for solving many problems in the AI-set. Recently, recurrent neural networks using the Long Short Term Memory (LSTM) architecture (Hochreiter & Schmidhuber, 1997) have been used successfully to perform various supervised seq... |
The shuffling task of data is left to the algorithm developer by the authors. Is this true? | Yes, it is true that the task of shuffling the data is left to the algorithm developer by the authors [8]. | [
8
] | [
{
"id": "1708.07747_all_0",
"text": " The MNIST dataset comprising of 10-class handwritten digits, was first introduced by LeCun et al. (1998) in 1998. At that time one could not have foreseen the stellar rise of deep learning techniques and their performance. Despite the fact that today deep learning can d... |
What is used to measure the quantities of non-English data? | Automatic language identification and manual qualitative analysis are used to measure non-English data [10]. The quantities are denominated in lines, tokens, and percentages across the paper [11]. | [
10,
11
] | [
{
"id": "2204.08110_all_0",
"text": " Pretrained language models have become an integral part of NLP systems. They come in two flavors: monolingual, where the model is trained on text from a single language, and multilingual, where the model is jointly trained on data from many different languages. Monoling... |
In models inserting the token expression ([CLS], x1, ..., xN, [SEP], y1, ..., yM, [EOS]), is the maximum value of N + M calculated in the RoBERTa case? | It is not true [5]. BERT takes two concatenated sequences as input, as in [\mathit{CLS}],x_{1},\ldots,x_{N},[\mathit{SEP}],y_{1},\ldots,y_{M}; N+M is calculated to control the maximum sequence length [67]. | [
5,
67
] | [
{
"id": "1907.11692_all_0",
"text": " Self-training methods such as ELMo Peters et al. (2018), GPT Radford et al. (2018), BERT Devlin et al. (2019), XLM Lample and Conneau (2019), and XLNet Yang et al. (2019) have brought significant performance gains, but it can be challenging to determine which aspects of... |
Why didn't residual connections help much for NASNets? | Inserting residual connections between cells does not improve performance [26]. | [
26
] | [
{
"id": "1707.07012_all_0",
"text": " Developing neural network image classification models often requires significant architecture engineering. Starting from the seminal work of on using convolutional architectures (17, 34) for ImageNet classification, successive advancements through architecture enginee... |
Why is there no need to label objects in videos for the encoder-decoder model? | Since the learned representation is just another form of the input, no labels are needed for any purpose [3]. | [
3
] | [
{
"id": "1502.04681_all_0",
"text": " Understanding temporal sequences is important for solving many problems in the AI-set. Recently, recurrent neural networks using the Long Short Term Memory (LSTM) architecture (Hochreiter & Schmidhuber, 1997) have been used successfully to perform various supervised seq... |
How has deep learning improved the accuracy of face recognition systems compared to traditional methods? | The traditional methods (i.e., holistic approaches, local-feature-based methods, shallow learning) were the approaches used before the boom of deep-learning-based techniques [1]. They could achieve an accuracy of 95%, while human accuracy was 97.53% [29]. The rapid progress of deep learning methods quickly equaled human performance (DeepFace, 97.35%) and later surpassed it with 99.8% [3]. | [
1,
29,
3
] | [
{
"id": "1804.06655_all_0",
"text": " Face recognition (FR) has been the prominent biometric technique for identity authentication and has been widely used in many areas, such as military, finance, public security and daily life. FR has been a long-standing research topic in the CVPR community. In the early... |
What is the example of unreliable relations in knowledge graph for passage re-ranking scenario? | Unreliable relations in a knowledge graph involve trivial factual triplets that do not bring substantial information gain [4]. For example, in ConceptNet, the entity “hepatitis” has relations with both “infectious disease” and “adult” [23]. | [
4,
23
] | [
{
"id": "2204.11673_all_0",
"text": " Passage Re-ranking is a crucial stage in modern information retrieval systems, which aims to reorder a small set of candidate passages to be presented to users. To put the most relevant passages on top of a ranking list, a re-ranker is usually designed with powerful cap... |
How much does the success of the EL metric vary depending on which n tokens are used as a prompt for this metric? | The average LM performance when varying n for the EL metric is shown in Table 13 [39]. | [
39
] | [
{
"id": "2210.01504_all_0",
"text": " Recent work has shown that an adversary can extract training data from Pretrained Language Models (LMs) including Personally Identifiable Information (PII) such as names, phone numbers, and email addresses, and other information such as licensed code, private clinical n... |
Why did the authors choose MIDAP as the target NPU to experiment on? | The end-to-end latency can be estimated quite accurately, and MIDAP can efficiently support the operations that lower the MAC utilization in other NPUs [26]. | [
26
] | [
{
"id": "2009.02009_all_0",
"text": " As there are growing needs of deep learning applications based on convolutional neural network(CNN) in embedded systems, improving the accuracy of CNNs under a given set of constraints on latency and energy consumption has brought keen interest to researchers as a chall... |
What does "bottom-up top-down feedforward structure" means ? | The bottom-up top-down feedforward structure is a combination of a bottom-up fast feedforward process that creates low resolution features maps to quickly collect global information, and a top-down attention feedback process that uses the global information along with the original feature maps to create features for inference [14]. | [
14
] | [
{
"id": "1704.06904_all_0",
"text": " Not only a friendly face but also red color will draw our attention. The mixed nature of attention has been studied extensively in the previous literatures (34, 16, 23, 40). Attention not only serves to select a focused location but also enhances different representatio... |
What is the main difference between the "maximum frequency" and "exclusive frequency" baselines? | The main difference between the “maximum frequency” and “exclusive frequency” baselines is that the latter eliminates all entities found in the query as potential answers [11]. | [
11
] | [
{
"id": "1506.03340_all_0",
"text": " Progress on the path from shallow bag-of-words information retrieval algorithms to machines capable of reading and understanding documents has been slow. Traditional approaches to machine reading and comprehension have been based on either hand engineered grammars , or ... |
What is the meaning of "using graph structures explicitly"? | The meaning of using graph structures explicitly is to explicity incorporate structural information into the self-attention [3]. The reason is that both P3 and P7 state the main contribution of SAT with paraphrasing [39]. | [
3,
39
] | [
{
"id": "2202.03036_all_0",
"text": " Graph neural networks (GNNs) have been established as powerful and flexible tools for graph representation learning, with successful applications in drug discovery (Gaudelet et al., 2021), protein design (Ingraham et al., 2019), social network analysis (Fan et al., 2019... |
How to create rare token identifier? | In order to get a list of rare token IDs, we first search for rare tokens in the vocabulary [18]. | [
18
] | [
{
"id": "2208.12242_all_0",
"text": " Can you imagine your own dog traveling around the world, or your favorite bag displayed in the most exclusive showroom in Paris? What about your parrot being the main character of an illustrated storybook? Rendering such imaginary scenes is a challenging task that requi... |
Is Inception-ResNet-v2 trained faster than pure Inception-v4, although their computational complexity is similar? | Since the step time of Inception-v4 is significantly slower in practice, we can conclude that Inception-ResNet-v2 trains faster than pure Inception-v4 even though their computational complexity is similar [11]. | [
11
] | [
{
"id": "1602.07261_all_0",
"text": " Since the 2012 ImageNet competition winning entry by Krizhevsky et al , their network “AlexNet” has been successfully applied to a larger variety of computer vision tasks, for example to object-detection , segmentation , human pose estimation , video classification , o... |
Does the statement "At each step the model is auto-regressive, consuming the previously generated symbols as additional input when generating the next" implies that we can use attention in handling time series forecasting? | Attention can be used for sequence modeling and can be used to build encoder decoder models which can handle time series forecasting [60]. | [
60
] | [
{
"id": "1706.03762_all_0",
"text": " Recurrent neural networks, long short-term memory and gated recurrent neural networks in particular, have been firmly established as state of the art approaches in sequence modeling and transduction problems such as language modeling and machine translation (35, 2, 5)... |
The authors extended which baseline framework to learn representations of image sequences? | The authors extended the LSTM classifier framework used as the baseline to learn representations of image sequences [36]. | [
36
] | [
{
"id": "1502.04681_all_0",
"text": " Understanding temporal sequences is important for solving many problems in the AI-set. Recently, recurrent neural networks using the Long Short Term Memory (LSTM) architecture (Hochreiter & Schmidhuber, 1997) have been used successfully to perform various supervised seq... |
What are identity shortcuts and projection shortcuts, and how are they used in the experiments? | The identity and projection shortcut are not defined in this section [35]. Identity shortcuts help in training, do not increase the complexity of the bottleneck architectures and solve degradation problem [36]. | [
35,
36
] | [
{
"id": "1512.03385_all_0",
"text": " Deep convolutional neural networks (22, 21) have led to a series of breakthroughs for image classification (21, 50, 40). Deep networks naturally integrate low/mid/high-level features and classifiers in an end-to-end multi-layer fashion, and the “levels” of features can... |
What is the loss used during training of Faster R-CNN ? | For an RoI associated with ground-truth class k, L_{mask} is only defined on the k-th mask, where L is the average binary cross-entropy loss [15]. | [
15
] | [
{
"id": "1703.06870_all_0",
"text": " The vision community has rapidly improved object detection and semantic segmentation results over a short period of time. In large part, these advances have been driven by powerful baseline systems, such as the Fast/Faster R-CNN (12, 36) and Fully Convolutional Network ... |
In what way can SBM-Transformer be considered better than Reformer? | SBM-Transformer allows more flexible attention mask structures between linear to full attention with respective computational costs, while Reformer can only use block-diagonal masks that cannot model hierarchical contexts [31]. | [
31
] | [
{
"id": "2210.15541_all_0",
"text": " The Transformer architecture has been the go-to method for encoding sequential data, due to its superior performance in various tasks such as machine translation , image classification , and protein language modeling . Its key strength stems from the multi-head attenti... |
What do you mean by model pruning? | Model pruning is a specific concept in deep learning: it refers to reducing the model size by removing redundant network connections or channels [0]. | [
0
] | [
{
"id": "1707.01083_all_0",
"text": " Building deeper and larger convolutional neural networks (CNNs) is a primary trend for solving major visual recognition tasks (21, 9, 33, 5, 28, 24). The most accurate CNNs usually have hundreds of layers and thousands of channels (9, 34, 32, 40), thus requiring computa... |
What are the FID values achieved by the authors using the Diffusion Model on ImageNet? | They obtain state-of-the-art image generation on ImageNet 64×64 [22], as well as on higher-resolution ImageNet [44]. Table 5 shows the performance of ADM; the metrics include FID, sFID, Prec, and Rec [46]. | [
22,
44,
46
] | [
{
"id": "2105.05233_all_0",
"text": " Over the past few years, generative models have gained the ability to generate human-like natural language Brown et al. (2020), infinite high-quality synthetic images Brock et al. (2018); Karras et al. (2019b); Razavi et al. (2019) and highly diverse human speech and mu... |
What is a BLEU score? | The BLEU score measures the precision of n-grams between generated sentences and reference sentences, and has been shown to correlate well with human evaluation [23]. | [
23
] | [
{
"id": "1411.4555_all_0",
"text": " Being able to automatically describe the content of an image using properly formed English sentences is a very challenging task, but it could have great impact, for instance by helping visually impaired people better understand the content of images on the web. This task... |
The paper's model implies that the discriminative parameters also contain significant “generative” structure from the training dataset. What is meant by "generative" structure? | Generative structure is how the data is distributed inside the space where it lives; for example, when learning to detect the jaguar class, the parameters encode not only the jaguar's spots (a rare property that distinguishes it) but, to some extent, also its four legs (the pattern by which the whole creature can be found) [33]. | [
33
] | [
{
"id": "1506.06579_all_0",
"text": " The last several years have produced tremendous progress in training powerful, deep neural network models that are approaching and even surpassing human abilities on a variety of challenging machine learning tasks (Taigman et al., 2014; Schroff et al., 2015; Hannun et a... |
Why does ORB-SLAM (short) perform so poorly when the turning magnitude is low, as seen in Figure 9? | The proposed model performs well compared with ORB-SLAM (short) when the turning magnitude is low [1]. ORB-SLAM (short) performs so poorly because it could not learn ego-motion [32]. | [
1,
32
] | [
{
"id": "1704.07813_all_0",
"text": " Humans are remarkably capable of inferring ego-motion and the 3D structure of a scene even over short timescales. For instance, in navigating along a street, we can easily locate obstacles and react quickly to avoid them. Years of research in geometric computer vision h... |
What characteristics of large-scale pre-trained language models made it remarkable successful for passage re-ranking task? | Large-scale pre-trained language models (PLMs) have been found to be successful for passage re-ranking due to their ability to learn semantic relevance in the latent space from massive textual corpus [0]. | [
0
] | [
{
"id": "2204.11673_all_0",
"text": " Passage Re-ranking is a crucial stage in modern information retrieval systems, which aims to reorder a small set of candidate passages to be presented to users. To put the most relevant passages on top of a ranking list, a re-ranker is usually designed with powerful cap... |
How can the mask extracted directly from the attention maps mitigate the limitation of inversion process? | Extracted masks directly from the attention maps can restore the unedited regions of the original image [36]. | [
36
] | [
{
"id": "2208.01626_all_0",
"text": " Recently, large-scale language-image (LLI) models, such as Imagen , DALL·E 2 and Parti , have shown phenomenal generative semantic and compositional power, and gained unprecedented attention from the research community and the public eye. These LLI models are trained o... |
What kinds of RNN architectures were used for the decoder in various prior research? | Previous work included vanilla RNN, LSTM, and GRU in the decoder architecture [10]. Sutskever and Luong stacked multiple layers of RNN with LSTM hidden units for both the decoder and encoder [6], and Cho, Bahdanau, and Jean all adopted GRU [7]. | [
10,
6,
7
] | [
{
"id": "1508.04025_all_0",
"text": " Neural Machine Translation (NMT) achieved state-of-the-art performances in large-scale translation tasks such as from English to French (Luong et al., 2015) and English to German (Jean et al., 2015). NMT is appealing since it requires minimal domain knowledge and is con... |
How is this x converted to y using which network? | First, a prior network \operatorname{\textbf{P}}, that during inference generates image embeddings y_{e} given text embeddings x_{e} and BPE encoded text tokens \hat{x} [11]. Second, a decoder network \operatorname{\textbf{D}} that generates a low-resolution 64\times 64 RGB image \hat{y}_{l}, conditioned on the image embeddings y_{e} [13]. | [
11,
13
] | [
{
"id": "2209.14792_all_0",
"text": " The Internet has fueled collecting billions of (alt-text, image) pairs from HTML pages (Schuhmann et al., 2022), enabling the recent breakthroughs in Text-to-Image (T2I) modeling. However, replicating this success for videos is limited since a similarly sized (text, vid... |
What type of defense strategies are evaded in this paper | Two types of defense are evaded: adversarial training and defensive distillation [70]. | [
70
] | [
{
"id": "1602.02697_all_0",
"text": " A classifier is a ML model that learns a mapping between inputs and a set of classes. For instance, a malware detector is a classifier taking executables as inputs and assigning them to the benign or malware class. Efforts in the security (5, 2, 9, 18) and machine learn... |
The authors said that they achieved SOTA RE models. Give evidence for this statement. | Their improved RE baseline achieved SOTA performance on the RE-TACRED dataset with an F1 score of 91.1% [14]. Moreover, using RoBERTa (Liu et al., 2019) as the backbone [18], they improved the baseline model on TACRED and TACREV with F1 scores of 74.6% and 83.2%, respectively [3]. | [
14,
18,
3
] | [
{
"id": "2102.01373_all_0",
"text": " As one of the fundamental information extraction (IE) tasks, relation extraction (RE) aims at identifying the relationship(s) between two entities in a given piece of text from a pre-defined set of relationships of interest. For example, given the sentence “Bill Gates f... |
How can attention patterns from larger models be applied to smaller models if the models might differ in the size and number of attention layers? | BERTSum is the summarization model, and Cross-Segment BERT is the topic segmentation model [31]. | [
31
] | [
{
"id": "2112.05364_all_0",
"text": " With transformer-based models (Vaswani et al., 2017) dominating the leaderboard for many key NLP tasks such as summarization (Liu and Lapata, 2019), topic segmentation (Lukasik et al., 2020), and sentiment analysis Adhikari et al. (2019), their core multi-head self-att... |
How can GAT actually benefit from learning regardless of the graph structure? | The paper gives no clue as to what GAT gains from being able to learn regardless of the graph structure [32]. | [
32
] | [
{
"id": "1710.10903_all_0",
"text": " Convolutional Neural Networks (CNNs) have been successfully applied to tackle problems such as image classification (He et al., 2016), semantic segmentation (Jégou et al., 2017) or machine translation (Gehring et al., 2016), where the underlying data representation has ... |
What's the effect of expanding channel size? | Expanding the channel size substantially improves the performance of the model [34]. | [
34
] | [
{
"id": "1809.11096_all_0",
"text": " The state of generative image modeling has advanced dramatically in recent years, with Generative Adversarial Networks (GANs, Goodfellow et al. (2014)) at the forefront of efforts to generate high-fidelity, diverse images with models learned directly from data. GAN trai... |
How did the authors optimize alpha and beta for the activation function of the prediction layers? | Fixed values of alpha=10 and beta=0.01 are used to constrain the predicted depth to be always positive within a reasonable range [19]. | [
19
] | [
{
"id": "1704.07813_all_0",
"text": " Humans are remarkably capable of inferring ego-motion and the 3D structure of a scene even over short timescales. For instance, in navigating along a street, we can easily locate obstacles and react quickly to avoid them. Years of research in geometric computer vision h... |
How is the proposed loss function different from that of the original Single-Path NAS?
| The previous method needs additional search cost for hyperparameters, since it has no information about the target latency [49]. The authors' method directly includes the target latency, resulting in an easier search process [77]. | [
49,
77
] | [
{
"id": "2009.02009_all_0",
"text": " As there are growing needs of deep learning applications based on convolutional neural network(CNN) in embedded systems, improving the accuracy of CNNs under a given set of constraints on latency and energy consumption has brought keen interest to researchers as a chall... |
What is the best performing CNN model in the abdominal LN detection? | The best performing model is GoogLeNet-TL [37]. | [
37
] | [
{
"id": "1602.03409_all_0",
"text": " Tremendous progress has been made in image recognition, primarily due to the availability of large-scale annotated datasets (i.e. ImageNet (1, 2)) and the recent revival of deep convolutional neural networks (CNN) (3, 4). For data-driven learning, large-scale well-annot... |
According to the authors, the VGG-16 version of Faster R-CNN is 6 times slower than YOLO; what is the actual speed of the model? | Table 1 reveals that the actual speed of Faster R-CNN with VGG-16 is 7 fps with 73.2% mAP [58]. At the same time, YOLO has more than 6 times the speed, at 45 fps with 63.4% mAP on Pascal VOC 2007 [20]. | [
58,
20
] | [
{
"id": "1506.02640_all_0",
"text": " Humans glance at an image and instantly know what objects are in the image, where they are, and how they interact. The human visual system is fast and accurate, allowing us to perform complex tasks like driving with little conscious thought. Fast, accurate algorithms fo... |
In terms of image synthesis, do the GANs perform better than VQ-VAE or not? | Fidelity can be higher, but GANs are not always better because of their low diversity [1]. In the Table 5 ImageNet 256×256 experiment, BigGAN-deep beats VQ-VAE-2 on FID, sFID, and Precision but loses on Recall [2]. | [
1,
2
] | [
{
"id": "2105.05233_all_0",
"text": " Over the past few years, generative models have gained the ability to generate human-like natural language Brown et al. (2020), infinite high-quality synthetic images Brock et al. (2018); Karras et al. (2019b); Razavi et al. (2019) and highly diverse human speech and mu... |
What is the one major structural difference between the ELMo model and the others (BERT small, BERT large, DeBERTa)? | ELMo is an LSTM-based language model, while BERT and DeBERTa are transformer-based language models [12]. | [
12
] | [
{
"id": "2210.12302_all_0",
"text": " Pretrained Language Models (LMs) have shown singular succcess on a range of natural language understandings tasks, to the extent that they have become foundational for contemporary NLP systems. Several works have investigated why pretraining works so well Warstadt et al... |
What is DPM trained with? | DPM is trained on the ImageNet detection task [67]. | [
67
] | [
{
"id": "1612.08242_all_0",
"text": " General purpose object detection should be fast, accurate, and able to recognize a wide variety of objects. Since the introduction of neural networks, detection frameworks have become increasingly fast and accurate. However, most detection methods are still constrained ... |
What is the average video sequence length used for experiments in this study? | The UCF-101 dataset contains 13,320 videos with an average length of 6.2 seconds [22]. | [
22
] | [
{
"id": "1502.04681_all_0",
"text": " Understanding temporal sequences is important for solving many problems in the AI-set. Recently, recurrent neural networks using the Long Short Term Memory (LSTM) architecture (Hochreiter & Schmidhuber, 1997) have been used successfully to perform various supervised seq... |
A reparameterization of the variational lower bound yields a lower bound estimator that can be straightforwardly optimized using standard stochastic gradient methods. Is this estimator differentiable? | Yes, the reparameterization trick is useful, and a differentiable estimator can be constructed with it [0]. This trick can be used to obtain a differentiable estimator of the variational lower bound [10]. The authors show how a reparameterization of the variational lower bound yields a simple differentiable unbiased estimator of the lower bound [13]. The proposed estimator can be straightforwardly differentiated and optimized using standard stochastic gradient methods [2]. | [
0,
10,
13,
2
] | [
{
"id": "1312.6114_all_0",
"text": " How can we perform efficient approximate inference and learning with directed probabilistic models whose continuous latent variables and/or parameters have intractable posterior distributions? The variational Bayesian (VB) approach involves the optimization of an approxi... |
Is there any specific reason that Bw and Bh use an exponential function for the location prediction? | The exponential function is used to keep the predicted width and height positive, so that they scale the bounding box priors within a reasonable range [25]. | [
25
] | [
{
"id": "1612.08242_all_0",
"text": " General purpose object detection should be fast, accurate, and able to recognize a wide variety of objects. Since the introduction of neural networks, detection frameworks have become increasingly fast and accurate. However, most detection methods are still constrained ... |
What are the metrics used for monocular depth and camera motion estimation? | Depth maps are computed and matched across different scales for the monocular depth metric [25]. The ATE metric is used for camera motion estimation [31]. | [
25,
31
] | [
{
"id": "1704.07813_all_0",
"text": " Humans are remarkably capable of inferring ego-motion and the 3D structure of a scene even over short timescales. For instance, in navigating along a street, we can easily locate obstacles and react quickly to avoid them. Years of research in geometric computer vision h... |
How does IS and NCE compare in terms of model performance? | IS performs better [27]. | [
27
] | [
{
"id": "1602.02410_all_0",
"text": " Language Modeling (LM) is a task central to Natural Language Processing (NLP) and Language Understanding. Models which can accurately place distributions over sentences not only encode complexities of language such as grammatical structure, but also distill a fair amoun... |
In the model architecture described in this paper, how many residual connections are used? | The authors used 8 LSTM layers for the encoder and 8 LSTM layers for the decoder, with residual connections in both networks; each layer has 1,024 nodes [2]. Though it is not entirely clear how many residual connections there are, a possible answer is 16,384 [20]. | [
2,
20
] | [
{
"id": "1609.08144_all_0",
"text": " Neural Machine Translation (NMT) (41, 2) has recently been introduced as a promising approach with the potential of addressing many shortcomings of traditional machine translation systems. The strength of NMT lies in its ability to learn directly, in an end-to-end fashi... |
They claim that the attention mechanism brings more discriminative feature representations; is that true? | The attention masks successfully learn meaningful information from the dataset, and their usage resulted in state-of-the-art results, which indicates that the attention mechanism does learn more discriminative features [20]. | [
20
] | [
{
"id": "1704.06904_all_0",
"text": " Not only a friendly face but also red color will draw our attention. The mixed nature of attention has been studied extensively in the previous literatures (34, 16, 23, 40). Attention not only serves to select a focused location but also enhances different representatio... |
Why does a deeper network with smaller kernel size have better performance? | Deeper networks exhibit better performance as they introduce more non-linearities and converge towards better local optima [26]. However, adding more layers increases both computation time and the number of parameters [27]. | [
26,
27
] | [
{
"id": "1603.05959_all_0",
"text": " Segmentation and the subsequent quantitative assessment of lesions in medical images provide valuable information for the analysis of neuropathologies and are important for planning of treatment strategies, monitoring of disease progression and prediction of patient out... |
Why not just use membership inference attack recall [1,2] and exposure metric [3], which are commonly used and established metrics? These two basically do what the currently proposed metrics do. | These metrics are dependent on the specific attacks, while ours is agnostic of the type of attack [12]. | [
12
] | [
{
"id": "2210.01504_all_0",
"text": " Recent work has shown that an adversary can extract training data from Pretrained Language Models (LMs) including Personally Identifiable Information (PII) such as names, phone numbers, and email addresses, and other information such as licensed code, private clinical n... |
What type of ML models have been successful on the FashionMNIST dataset? | The authors talk about using various ML and DL models and report their accuracy, but exactly which models are used is not specified [1]. | [
1
] | [
{
"id": "1708.07747_all_0",
"text": " The MNIST dataset comprising of 10-class handwritten digits, was first introduced by LeCun et al. (1998) in 1998. At that time one could not have foreseen the stellar rise of deep learning techniques and their performance. Despite the fact that today deep learning can d... |
Do the authors perform hyperparameter tuning for each dataset independently? | The authors did not perform hyperparameter tuning for each dataset [35]. | [
35
] | [
{
"id": "1801.06146_all_0",
"text": " Inductive transfer learning has had a large impact on computer vision (CV). Applied CV models (including object detection, classification, and segmentation) are rarely trained from scratch, but instead are fine-tuned from models that have been pretrained on ImageNet, MS... |
What does it mean for a surface to be Lambertian? | A Lambertian surface is one for which the photo-consistency error is meaningful [16]. | [
16
] | [
{
"id": "1704.07813_all_0",
"text": " Humans are remarkably capable of inferring ego-motion and the 3D structure of a scene even over short timescales. For instance, in navigating along a street, we can easily locate obstacles and react quickly to avoid them. Years of research in geometric computer vision h... |
Give one example Relation Extraction question and its answer. | The Relation Extraction (RE) task is the task of finding the relationship between two entities [0]. | [
0
] | [
{
"id": "2102.01373_all_0",
"text": " As one of the fundamental information extraction (IE) tasks, relation extraction (RE) aims at identifying the relationship(s) between two entities in a given piece of text from a pre-defined set of relationships of interest. For example, given the sentence “Bill Gates f... |
If there are only a few examples to learn from, we call it few-shot learning. Guess the meaning of zero-shot learning. | Few-shot learning is learning from a few examples [1]; by analogy, zero-shot learning means learning with no examples of the target classes. | [
1
] | [
{
"id": "1711.04043_all_0",
"text": " Supervised end-to-end learning has been extremely successful in computer vision, speech, or machine translation tasks, thanks to improvements in optimization technology, larger datasets and streamlined designs of deep convolutional or recurrent architectures. Despite th... |
What is the difference between Mask R-CNN and Faster R-CNN? | Mask R-CNN has pixel-to-pixel alignment whereas Faster R-CNN doesn't. Mask R-CNN extends Faster R-CNN by adding a branch for predicting segmentation masks on each Region of Interest (RoI), in parallel with the existing branch for classification and bounding box regression [12]. | [
12
] | [
{
"id": "1703.06870_all_0",
"text": " The vision community has rapidly improved object detection and semantic segmentation results over a short period of time. In large part, these advances have been driven by powerful baseline systems, such as the Fast/Faster R-CNN (12, 36) and Fully Convolutional Network ... |
The authors say, "a very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions". What is this technique called? | The authors define an “ensemble of models” as a set of separate models with the same architecture and training procedure, but different randomly initialized parameters whose predictions are then averaged to increase performance [19]. | [
19
] | [
{
"id": "1503.02531_all_0",
"text": " Many insects have a larval form that is optimized for extracting energy and nutrients from the environment and a completely different adult form that is optimized for the very different requirements of traveling and reproduction. In large-scale machine learning, we typi... |
What is the average number of words in a Wikipedia article from the Wikitext-103 dataset? | The Wikitext-103 dataset consists of 28,595 preprocessed Wikipedia articles and 103 million words [15]. | [
15
] | [
{
"id": "1801.06146_all_0",
"text": " Inductive transfer learning has had a large impact on computer vision (CV). Applied CV models (including object detection, classification, and segmentation) are rarely trained from scratch, but instead are fine-tuned from models that have been pretrained on ImageNet, MS... |
the generated videos inherit the vastness (diversity in aesthetic, fantastical depictions, etc.) of today’s image generation models. | Modeling videos requires expensive computation, and high-quality video data collection is challenging [36]. Thus, large-scale paired text-video data is expensive as well [6]. Because of these limitations, the progress of T2V generation lags behind [8]. | [
36,
6,
8
] | [
{
"id": "2209.14792_all_0",
"text": " The Internet has fueled collecting billions of (alt-text, image) pairs from HTML pages (Schuhmann et al., 2022), enabling the recent breakthroughs in Text-to-Image (T2I) modeling. However, replicating this success for videos is limited since a similarly sized (text, vid... |
How do the authors create embeddings for each post in the Reddit data? | For the Reddit data, the authors encode GloVe word vectors with GraphSAGE [30]. | [
30
] | [
{
"id": "1706.02216_all_0",
"text": " Low-dimensional vector embeddings of nodes in large graphs111While it is common to refer to these data structures as social or biological networks, we use the term graph to avoid ambiguity with neural network terminology. have proved extremely useful as feature inputs f... |
What is the batch size for multi-scale training? | Image size is changed after every 10 batches during multi-scale training [32]. | [
32
] | [
{
"id": "1612.08242_all_0",
"text": " General purpose object detection should be fast, accurate, and able to recognize a wide variety of objects. Since the introduction of neural networks, detection frameworks have become increasingly fast and accurate. However, most detection methods are still constrained ... |
Could the model just choose the embedding function to be zero just to overcome the very small distance that might occur when choosing the hardest negatives during triplet selection? | No, zero embeddings can result in a collapsed model [29]. | [
29
] | [
{
"id": "1503.03832_all_0",
"text": " In this paper we present a unified system for face verification (is this the same person), recognition (who is this person) and clustering (find common people among these faces). Our method is based on learning a Euclidean embedding per image using a deep convolutional ... |
Is the Walker task notable for having hard-to-escape local optima? | In this category, we implement six locomotion tasks of varying dynamics and difficulty: Swimmer (Purcell, 1977; Coulom, 2002; Levine & Koltun, 2013; Schulman et al., 2015a), Hopper (Murthy & Raibert, 1984; Erez et al., 2011; Levine & Koltun, 2013; Schulman et al., 2015a), Walker (Raibert & Hodgins, 1991; Erez et al., 2011; Levine & Koltun, 2013; Schulman et al., 2015a), Half-Cheetah (Wawrzyński, 2007; Heess et al., 2015b), Ant (Schulman et al., 2015b), Simple Humanoid (Tassa et al., 2012; Schulman et al., 2015b), and Full Humanoid (Tassa et al., 2012). The goal for all the tasks is to move forward as quickly as possible [11]. | [
11
] | [
{
"id": "1604.06778_all_0",
"text": " Reinforcement learning addresses the problem of how agents should learn to take actions to maximize cumulative reward through interactions with the environment. The traditional approach for reinforcement learning algorithms requires carefully chosen feature representati... |
What is the reason for using the reconstruction metric calculated from zero explicit planning? | The reconstruction metric that measures the performance for predicting the structure attribute is calculated from zero explicit planning [27]. | [
27
] | [
{
"id": "2208.14867_all_0",
"text": " Computational modeling of expressive music performance focuses on mimicking human behaviors that convey the music (1, 2). For piano performance, one common task is to render an expressive performance from a quantized musical score. It aims to reproduce the loudness and ... |
What are the stages that Mask R-CNN consists of? | The two stages are the RPN and a second stage in which, in addition to predicting the class and box offset, Mask R-CNN also produces a binary mask for each RoI [14]. | [
14
] | [
{
"id": "1703.06870_all_0",
"text": " The vision community has rapidly improved object detection and semantic segmentation results over a short period of time. In large part, these advances have been driven by powerful baseline systems, such as the Fast/Faster R-CNN (12, 36) and Fully Convolutional Network ... |
Why did the authors choose a greedy approach for general classifier? | The DeepFool method is designed iteratively starting from very simple binary classifiers to more general non-linear differentiable classifiers [11]. The effectiveness of the greedy algorithm is justified by previous work and the results show very small perturbations, thus the authors claim that it is a viable method [12]. | [
11,
12
] | [
{
"id": "1511.04599_all_0",
"text": " Deep neural networks are powerful learning models that achieve state-of-the-art pattern recognition performance in many research areas such as bioinformatics (1, 16), speech (12, 6), and computer vision (10, 8). Though deep networks have exhibited very good performance ... |
Is random search (RS) more efficient than reinforcement learning (RL) for learning neural architectures? | No, reinforcement learning is more efficient than random search for learning neural architectures [33]. | [
33
] | [
{
"id": "1707.07012_all_0",
"text": " Developing neural network image classification models often requires significant architecture engineering. Starting from the seminal work of on using convolutional architectures (17, 34) for ImageNet classification, successive advancements through architecture enginee... |
The paper's algorithm yields very small perturbations which are believed to be good approximations of the minimal perturbation. Quantitatively, how far is the paper's approximation from the minimal perturbation? | The authors only claim that DeepFool can be used as a baseline for adversarial perturbation calculation and that it heavily depends on existing optimization methods [18]. In the paper, its effectiveness is proven relative to other state-of-the-art methods [20]. Although an analysis of how far the estimated perturbation is from the actual minimal perturbation can be found in referenced papers, no more sophisticated analysis is given in the paper [3]. Thus, it is difficult to answer the question fully [8]. | [
18,
20,
3,
8
] | [
{
"id": "1511.04599_all_0",
"text": " Deep neural networks are powerful learning models that achieve state-of-the-art pattern recognition performance in many research areas such as bioinformatics (1, 16), speech (12, 6), and computer vision (10, 8). Though deep networks have exhibited very good performance ... |
According to the paper, is BERT overfitted? | They say that the good performance of the fine-tuned model is not caused by task-specific knowledge [27]. | [
27
] | [
{
"id": "2210.12302_all_0",
"text": " Pretrained Language Models (LMs) have shown singular succcess on a range of natural language understandings tasks, to the extent that they have become foundational for contemporary NLP systems. Several works have investigated why pretraining works so well Warstadt et al... |
Why did the authors say that they adopt a much simpler approach for SQuAD compared to past work? | The authors said they adopt a much simpler approach for SQuAD compared to past work to emphasize that they only fine-tune RoBERTa using the SQuAD training data and use the same learning rate for all layers, unlike previous works [61]. | [
61
] | [
{
"id": "1907.11692_all_0",
"text": " Self-training methods such as ELMo Peters et al. (2018), GPT Radford et al. (2018), BERT Devlin et al. (2019), XLM Lample and Conneau (2019), and XLNet Yang et al. (2019) have brought significant performance gains, but it can be challenging to determine which aspects of... |