In its loss function YoloV3 uses logistic regression with multilabel classification or Softmax over all class probabilities?
The authors use independent logistic classifiers with binary cross-entropy loss rather than a softmax over all classes [8].
[ 8 ]
[ { "id": "1804.02767_all_0", "text": " Sometimes you just kinda phone it in for a year, you know? I didn’t do a whole lot of research this year. Spent a lot of time on Twitter. Played around with GANs a little. I had a little momentum left over from last year ; I managed to make some improvements to YOLO. B...
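The distinction above can be sketched in a few lines: with independent logistic (sigmoid) classifiers and binary cross-entropy, several classes can be positive at once, whereas a softmax forces the class probabilities to compete. A minimal NumPy illustration (the logits and labels are made up for the example):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce_multilabel_loss(logits, targets):
    # Independent logistic classifier per class: each class is a separate
    # binary decision, so one object may belong to several classes at once.
    p = sigmoid(logits)
    return -np.mean(targets * np.log(p) + (1 - targets) * np.log(1 - p))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

logits = np.array([2.0, 1.5, -1.0])   # hypothetical per-class scores
targets = np.array([1.0, 1.0, 0.0])   # multilabel ground truth: two positives

loss = bce_multilabel_loss(logits, targets)
probs = sigmoid(logits)

# With sigmoids, both positive classes can exceed 0.5 simultaneously ...
assert probs[0] > 0.5 and probs[1] > 0.5
# ... whereas softmax probabilities must compete and sum to 1.
assert abs(softmax(logits).sum() - 1.0) < 1e-9
```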
Do we care in general for resolution while modelling language?
Yes: when Transformers are used for language modeling, averaging the attention-weighted positions reduces the effective resolution, which the authors counteract with multi-head attention [4].
[ 4 ]
[ { "id": "1706.03762_all_0", "text": " Recurrent neural networks, long short-term memory and gated recurrent neural networks in particular, have been firmly established as state of the art approaches in sequence modeling and transduction problems such as language modeling and machine translation (35, 2, 5)...
What was the process of creating FashionMNIST?
It is based on images from Zalando's website. Every fashion product on Zalando has a set of pictures shot by professional photographers, demonstrating different aspects of the product [4], i.e., front and back looks, details, looks with a model, and an outfit [7].
[ 4, 7 ]
[ { "id": "1708.07747_all_0", "text": " The MNIST dataset comprising of 10-class handwritten digits, was first introduced by LeCun et al. (1998) in 1998. At that time one could not have foreseen the stellar rise of deep learning techniques and their performance. Despite the fact that today deep learning can d...
What is mutual information means in the paper?
In this paper, mutual information means the mutual information between the pseudo-label and the data, measured without any artificial constraints [12].
[ 12 ]
[ { "id": "2211.02284_all_0", "text": " There has been a growing interest in using a large-scale dataset to build powerful machine learning models . Self-supervised learning (SSL), which aims to learn a useful representation without labels, is suitable for this trend; it is actively studied in the fields of n...
What metrics are used for the evaluation of SLAM systems?
For the evaluation of SLAM systems, two different metrics are used: the absolute translation RMSE (t_abs) proposed in [3], and the average relative translation (t_rel) and rotation (r_rel) errors [32].
[ 32 ]
[ { "id": "1610.06475_all_0", "text": " Simultaneous Localization and Mapping (SLAM) has been a hot research topic in the last two decades in the Computer Vision and Robotics communities, and has recently attracted the attention of high-technological companies. SLAM techniques build a map of an unknown enviro...
What is the expectation of training FCNs end-to-end for pixelwise prediction and from supervised pre-training?
They expect the FCN to exceed the state of the art without further machinery [2].
[ 2 ]
[ { "id": "1411.4038_all_0", "text": " Convolutional networks are driving advances in recognition. Convnets are not only improving for whole-image classification (19, 31, 32), but also making progress on local tasks with structured output. These include advances in bounding box object detection (29, 12, 17), ...
What are some of the specific challenges that FR models face in real-world applications and how have researchers attempted to address these challenges through the design of specialized algorithms?
Cross-pose FR is still a challenging problem for existing algorithms: a drop of over 10% in accuracy was observed when moving from frontal-frontal to frontal-profile verification [67]. Techniques like DREAM and PIM were employed to perform frontalization in the deep feature space and to learn pose-invariant representations [79]. Cross-age FR is also a natural problem, as facial appearance changes over time [68]. There were attempts to synthesize images from the same age group with a generative probabilistic model, and conditional GANs were used to generate an identity-preserved face with a target age [69]. Further, local manifold adaptation (LMA) and pyramidal adversarial discriminator approaches were tried to deal with the imperfect identity preservation of GAN-synthesized images [70]. Alternatively, decomposing identity and age from each other was another direction [71]; latent identity analysis (LIA) and decomposition in a spherical coordinate system are methods from that direction [72]. Lastly, CNN fine-tuning, siamese deep networks, feature extraction, and deep learning with CNNs were some of the notable approaches [73]. Makeup FR is another real-world problem that needs a solution, as makeup can drastically change the appearance of the subject [74]. A bi-level adversarial network (BLAN) was used to generate non-makeup images from makeup images [75], and fine-tuning a triplet network with a small makeup dataset was another attempt [76]. Facial disguise, in particular, is a big issue for FR, as people may want either to hide their identity or to impersonate another [77]. Identity hiding increases intra-class variation, while impersonation decreases inter-class distinction [78]. Using DCNNs and finding a transformation matrix with PCA for disguised faces, fine-tuning models on disguised faces, hard example mining, and learning representations of images in terms of colors, shapes, and textures are some of the attempts to solve the issue [80].
NIR-VIS FR is needed to match NIR (near-infrared spectrum) images, which usually come from surveillance contexts, to VIS (visible-light spectrum) images, as most of the available datasets contain VIS images [81].
[ 67, 79, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 80, 81 ]
[ { "id": "1804.06655_all_0", "text": " Face recognition (FR) has been the prominent biometric technique for identity authentication and has been widely used in many areas, such as military, finance, public security and daily life. FR has been a long-standing research topic in the CVPR community. In the early...
How can the GATs work well without any assumption of node order?
GATs operate on the entirety of a node's neighborhood, as GCNs do, and the attention coefficients are computed in a way that is invariant to the ordering of the neighbors, so no assumption about node order is required [16].
[ 16 ]
[ { "id": "1710.10903_all_0", "text": " Convolutional Neural Networks (CNNs) have been successfully applied to tackle problems such as image classification (He et al., 2016), semantic segmentation (Jégou et al., 2017) or machine translation (Gehring et al., 2016), where the underlying data representation has ...
What is "KL vanishing" with respect to VAEs?
The authors do not explicitly explain the KL vanishing problem in detail, but they cite a recent work, Bowman et al. (2016) [14], that probably contains more detailed information on this problem [23]. Additionally, the authors explain that KL vanishing is a problem that happens specifically with Variational Autoencoders (i.e., regular AEs do not seem to have this problem) [4][6].
[ 14, 23, 4, 6 ]
[ { "id": "2004.04092_all_0", "text": " Pre-trained language models (PLMs) have substantially advanced the state-of-the-art across a variety of natural language processing (NLP) tasks Peters et al. (2018); Devlin et al. (2019); Yang et al. (2019); Radford et al. (2019); Liu et al. (2019); Keskar et al. (2019)...
Why author emphasized their model as “Zero-shot”?
In the zero-shot settings, the set of documents/mentions/entities from training data is not visible in test data, which means the information of the entity that should be linked at test time is not learned directly from the training set [0]. This setting is related to scalability, which is important for the entity linking tasks since there can be lots of possible entity candidates for each mention [5]. The proposed BERT-based models can deal with these settings and show their accuracy and efficiency in scale [8].
[ 0, 5, 8 ]
[ { "id": "1911.03814_all_0", "text": " Scale is a key challenge for entity linking; there are millions of possible entities to consider for each mention. To efficiently filter or rank the candidates, existing methods use different sources of external information, including manually curated mention tables Gan...
Who were the authors of the instructions?
The authors of the paper created the first set of 175 instructions themselves [5]. After that, they used an iterative bootstrapping process in which they used GPT3 to create more tasks (and instructions) [7]. In the end, they ended up with a dataset of 52k instructions [17].
[ 5, 7, 17 ]
[ { "id": "2212.10560_all_0", "text": " The recent NLP literature has witnessed a tremendous amount of activity in building models that can follow natural language instructions (Mishra et al., 2022; Wei et al., 2022; Sanh et al., 2022; Wang et al., 2022; Ouyang et al., 2022; Chung et al., 2022, i.a.). These d...
Fast YOLO processes double the mAP of other real-time detectors, what is the actual value of the mAP ?
The baseline YOLO model achieves 63.4% mAP at 45 fps on the Pascal VOC dataset, while Fast YOLO reaches 52.7% mAP at 155 fps [5]. Still, both are more than twice as accurate as other real-time detectors [53]. However, the YOLO network was observed to struggle with small objects, although it generalizes well to other domains [68].
[ 5, 53, 68 ]
[ { "id": "1506.02640_all_0", "text": " Humans glance at an image and instantly know what objects are in the image, where they are, and how they interact. The human visual system is fast and accurate, allowing us to perform complex tasks like driving with little conscious thought. Fast, accurate algorithms fo...
What does the tradeoff look like when basic network architectures are represented in low-bit computations?
Although representing network architectures in low-bit form is mentioned as a technique for reducing the computational cost of the model, the paper does not mention anything about the tradeoff of the technique [0].
[ 0 ]
[ { "id": "1707.01083_all_0", "text": " Building deeper and larger convolutional neural networks (CNNs) is a primary trend for solving major visual recognition tasks (21, 9, 33, 5, 28, 24). The most accurate CNNs usually have hundreds of layers and thousands of channels (9, 34, 32, 40), thus requiring computa...
Why did the authors chose the ModelNet dataset for evaluating the developed architectures ?
The authors follow previous works, such as VoxNet [24], 3DShapeNets [33], and MVCNN [32], that also use the ModelNet test set to evaluate their approaches [38]. To be able to compare with them and provide more quantitative results, this paper also evaluates on ModelNet's test set [47].
[ 38, 47 ]
[ { "id": "1604.03265_all_0", "text": " Understanding 3D environments is a vital element of modern computer vision research due to paramount relevance in many vision systems, spanning a wide field of application scenarios from self-driving cars to autonomous robots. Recent advancements in real-time SLAM techn...
What metric did the authors use to measure generalizability on low-resource language understanding tasks?
The main performance metric the authors use to measure generalizability on low-resource tasks is the GLUE benchmark [40]. More broadly, the authors explain that their model is suitable for low-resource settings to begin with, since it can be specialized at low cost (through feature-based approaches) and can function with very little labelled data [41].
[ 40, 41 ]
[ { "id": "2004.04092_all_0", "text": " Pre-trained language models (PLMs) have substantially advanced the state-of-the-art across a variety of natural language processing (NLP) tasks Peters et al. (2018); Devlin et al. (2019); Yang et al. (2019); Radford et al. (2019); Liu et al. (2019); Keskar et al. (2019)...
How do they solve the problem that the risk of the training process of self-attention could be unstable using multi-head attention?
To stabilize the learning process of self-attention, they employ multi-head attention, similarly to Vaswani et al. [14]: several independent attention mechanisms are executed and their outputs are aggregated (concatenated, or averaged in the final layer).
[ 14 ]
[ { "id": "1710.10903_all_0", "text": " Convolutional Neural Networks (CNNs) have been successfully applied to tackle problems such as image classification (He et al., 2016), semantic segmentation (Jégou et al., 2017) or machine translation (Gehring et al., 2016), where the underlying data representation has ...
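The stabilization idea can be sketched as running several independent attention heads over the same node features and concatenating their outputs. The head count, dimensions, and toy scoring function below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

def attention_head(h, W, a):
    # One self-attention head over N node features: project, score all pairs,
    # normalize with a softmax over neighbors, and aggregate their features.
    z = h @ W                                  # (N, Fp) projected features
    scores = z @ a                             # toy per-node score; (N,)
    e = scores[:, None] + scores[None, :]      # pairwise attention logits
    alpha = np.exp(e - e.max(axis=1, keepdims=True))
    alpha /= alpha.sum(axis=1, keepdims=True)  # rows sum to 1
    return alpha @ z

N, F, Fp, K = 5, 8, 4, 3                       # nodes, in-dim, out-dim, heads
h = rng.normal(size=(N, F))

# K independent heads, concatenated: averages out the variance of any single
# attention mechanism, which is what stabilizes learning.
heads = [attention_head(h, rng.normal(size=(F, Fp)), rng.normal(size=Fp))
         for _ in range(K)]
out = np.concatenate(heads, axis=1)            # (N, K * Fp)
assert out.shape == (N, K * Fp)
```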
What does it mean for view synthesis to be the supervisory signal?
Using view synthesis as the supervisory signal means that, given an input view, the proposed depth and pose prediction CNNs are trained by synthesizing new images of the scene from different poses [11]. Recent methods have explored this idea; however, all previous work requires posed image sets during training, while the proposed framework can be applied to standard videos without pose information [13].
[ 11, 13 ]
[ { "id": "1704.07813_all_0", "text": " Humans are remarkably capable of inferring ego-motion and the 3D structure of a scene even over short timescales. For instance, in navigating along a street, we can easily locate obstacles and react quickly to avoid them. Years of research in geometric computer vision h...
How was performance measured and was the performance of the human-guided knowledge distilled model significantly higher?
Interpretability is measured with the PDR framework [3]. Summarization performance, measured in ROUGE, is 15% better [45], and topic segmentation performance, measured in F1, is 12% better [52].
[ 3, 45, 52 ]
[ { "id": "2112.05364_all_0", "text": " With transformer-based models (Vaswani et al., 2017) dominating the leaderboard for many key NLP tasks such as summarization (Liu and Lapata, 2019), topic segmentation (Lukasik et al., 2020), and sentiment analysis Adhikari et al. (2019), their core multi-head self-att...
How is the BLEU-1 score different from the BLEU-4 score?
While BLEU-4 computes precision at the 4-gram level, BLEU-1 computes precision at the unigram (1-gram) level [23].
[ 23 ]
[ { "id": "1411.4555_all_0", "text": " Being able to automatically describe the content of an image using properly formed English sentences is a very challenging task, but it could have great impact, for instance by helping visually impaired people better understand the content of images on the web. This task...
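The difference can be made concrete with a modified n-gram precision sketch (the sentences are invented for illustration; full BLEU additionally combines the n-gram precisions geometrically and applies a brevity penalty):

```python
from collections import Counter

def ngram_precision(candidate, reference, n):
    # Modified n-gram precision: clip each candidate n-gram's count by the
    # maximum count of that n-gram in the reference.
    cand = [tuple(candidate[i:i + n]) for i in range(len(candidate) - n + 1)]
    ref = Counter(tuple(reference[i:i + n]) for i in range(len(reference) - n + 1))
    clipped = sum(min(c, ref[g]) for g, c in Counter(cand).items())
    return clipped / max(len(cand), 1)

cand = "a dog sits on the mat".split()
ref = "the dog sits on the mat".split()

p1 = ngram_precision(cand, ref, 1)   # BLEU-1: unigram precision
p4 = ngram_precision(cand, ref, 4)   # BLEU-4 core: 4-gram precision

# Unigram matches are easier to achieve than exact 4-gram matches,
# so BLEU-1 is generally the more lenient score.
assert p1 > p4
```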
What is the ratio of 1x1 filters in the total number of filters?
The question needs more context, but if it is asking about the ratio of 1×1 filters within each Fire module, then the answer follows from the module's configuration of 1×1 squeeze filters and 1×1 vs. 3×3 expand filters [17].
[ 17 ]
[ { "id": "1602.07360_all_0", "text": " Much of the recent research on deep convolutional neural networks (CNNs) has focused on increasing accuracy on computer vision datasets. For a given accuracy level, there typically exist multiple CNN architectures that achieve that accuracy level. Given equivalent accur...
What is the example of ideal control signal?
Figure 1(a) and Figure 1(b) are examples of the two indispensable characteristics of an ideal control signal; Figure 1(a) illustrates the "Event Compatibility" characteristic, as "man", "wave", and "surfboard" are all involved in the activity of riding [2].
[ 2 ]
[ { "id": "2103.12204_all_0", "text": " Image captioning, \\ie, generating fluent and meaningful descriptions to summarize the salient contents of an image, is a classic proxy task for comprehensive scene understanding . With the release of several large scale datasets and advanced encoder-decoder frameworks,...
How the proposed autoencoder architecture prevent overfitting or identity mapping?
The two factors that keep the model from learning an identity mapping and prevent overfitting are the fixed number of hidden units and forcing the decoder to reconstruct the input representation recursively [14].
[ 14 ]
[ { "id": "1502.04681_all_0", "text": " Understanding temporal sequences is important for solving many problems in the AI-set. Recently, recurrent neural networks using the Long Short Term Memory (LSTM) architecture (Hochreiter & Schmidhuber, 1997) have been used successfully to perform various supervised seq...
How does equation 2 let the supernet search kernel size?
They define a trainable threshold value t, and compare the norm of the kernel weights with the threshold, to determine the kernel size [19].
[ 19 ]
[ { "id": "2009.02009_all_0", "text": " As there are growing needs of deep learning applications based on convolutional neural network(CNN) in embedded systems, improving the accuracy of CNNs under a given set of constraints on latency and energy consumption has brought keen interest to researchers as a chall...
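A rough sketch of the idea, not the paper's exact Equation 2: compare the norms of nested sub-kernels against a trainable threshold t to pick an effective kernel size. The ring decomposition and selection rule below are illustrative assumptions:

```python
import numpy as np

def effective_kernel_size(kernel, t):
    # Sketch of threshold-based kernel-size search over a 5x5 kernel:
    # keep the larger size only while its outermost weight "ring" has
    # enough norm to matter, compared against the threshold t.
    k5 = np.linalg.norm(kernel)                # norm of full 5x5
    k3 = np.linalg.norm(kernel[1:4, 1:4])      # norm of inner 3x3
    outer_5 = np.sqrt(max(k5**2 - k3**2, 0.0)) # norm of the outermost ring
    k1 = abs(kernel[2, 2])                     # center weight
    outer_3 = np.sqrt(max(k3**2 - k1**2, 0.0))
    if outer_5 > t:
        return 5
    if outer_3 > t:
        return 3
    return 1

kernel = np.zeros((5, 5))
kernel[1:4, 1:4] = 1.0              # only the inner 3x3 carries weight
size = effective_kernel_size(kernel, t=0.5)
assert size == 3                    # outer 5x5 ring is below threshold
```

Because t is trainable, gradient descent can shrink or grow the effective kernel size during the supernet search rather than enumerating sizes discretely.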
Is it true that ball query's local neighborhood could make the local region feature more generalizable across space?
It is true: compared with kNN, ball query's local neighborhood guarantees a fixed region scale, which makes local region features more generalizable across space [18].
[ 18 ]
[ { "id": "1706.02413_all_0", "text": " We are interested in analyzing geometric point sets which are collections of points in a Euclidean space. A particularly important type of geometric point set is point cloud captured by 3D scanners, e.g., from appropriately equipped autonomous vehicles. As a set, such d...
Can we use U-Net architecture in self-driving car and providing a segmentation map for the scene around?
The paper only discusses the application of U-Net to the segmentation of biomedical images [22].
[ 22 ]
[ { "id": "1505.04597_all_0", "text": " In the last two years, deep convolutional networks have outperformed the state of the art in many visual recognition tasks, e.g. (7, 3). While convolutional networks have already existed for a long time , their success was limited due to the size of the available traini...
Did the authors use commonly used one-vs-all scheme for extending DeepFool method to the multiclass case?
Yes, the authors use the common one-vs-all scheme \hat{k}(\bm{x})=\operatorname*{arg\,max}_{k}f_{k}(\bm{x}) [11].
[ 11 ]
[ { "id": "1511.04599_all_0", "text": " Deep neural networks are powerful learning models that achieve state-of-the-art pattern recognition performance in many research areas such as bioinformatics (1, 16), speech (12, 6), and computer vision (10, 8). Though deep networks have exhibited very good performance ...
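The one-vs-all decision rule quoted above is simply an argmax over the per-class score functions; a tiny worked example with made-up scores:

```python
import numpy as np

def predict(f_scores):
    # One-vs-all multiclass rule: \hat{k}(x) = argmax_k f_k(x)
    # f_scores holds the k per-class scores f_k(x) for one input x.
    return int(np.argmax(f_scores))

scores = np.array([0.1, 2.3, -0.5])  # hypothetical f_1(x), f_2(x), f_3(x)
assert predict(scores) == 1          # class with the highest score wins
```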
Why does the proposed model need advanced preprocessing and feature generation to function well?
The proposed model is universal, meaning that it does not require custom feature engineering or preprocessing [34]. For example, the pre-processing is taken from earlier work and only adds special tokens to capture aspects relevant for classification [12].
[ 34, 12 ]
[ { "id": "1801.06146_all_0", "text": " Inductive transfer learning has had a large impact on computer vision (CV). Applied CV models (including object detection, classification, and segmentation) are rarely trained from scratch, but instead are fine-tuned from models that have been pretrained on ImageNet, MS...
What methods refine the graph containing external knowledge in 1) global and 2) local way?
The knowledge graph is distilled globally by taking an existing knowledge graph and pruning unreliable or noise relations based on TransE embeddings [6].
[ 6 ]
[ { "id": "2204.11673_all_0", "text": " Passage Re-ranking is a crucial stage in modern information retrieval systems, which aims to reorder a small set of candidate passages to be presented to users. To put the most relevant passages on top of a ranking list, a re-ranker is usually designed with powerful cap...
What happen to the pixels with no content in the segmentation map?
There are no pixels without any content because for the pixels in the border region of the image, the missing context is extrapolated by mirroring the input image [4].
[ 4 ]
[ { "id": "1505.04597_all_0", "text": " In the last two years, deep convolutional networks have outperformed the state of the art in many visual recognition tasks, e.g. (7, 3). While convolutional networks have already existed for a long time , their success was limited due to the size of the available traini...
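The mirroring strategy corresponds to reflect padding; a small NumPy sketch (the array contents and pad width are arbitrary):

```python
import numpy as np

# U-Net's valid convolutions shrink the output, so context missing at the
# image border must be extrapolated; mirroring (reflect padding) fills it
# with repeated interior pixels rather than invented content.
img = np.arange(16, dtype=float).reshape(4, 4)
pad = 2                                   # context the network needs
padded = np.pad(img, pad, mode="reflect")

assert padded.shape == (8, 8)
# The mirrored border repeats interior pixels across the image edge.
assert padded[pad - 1, pad] == img[1, 0]
```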
How does the β-VAE objective contribute to disentangling in this work?
The authors use the β-VAE objective to show that its contribution to disentangling is primarily through direct control of the level of overlap between encodings of the data, expressed by maximising the entropy of the encoding distribution [25].
[ 25 ]
[ { "id": "1812.02833_all_0", "text": " An oft-stated motivation for learning disentangled representations of data with deep generative models is a desire to achieve interpretability (5, 10)—particularly the decomposability (see §3.2.1 in 33) of latent representations to admit intuitive explanations. Most wor...
How can Lambada be adapted for other NLP tasks?
Lambada can be adapted to other NLP tasks because it is an algorithm for text-based deductive logical reasoning that combines the capacity of LMs to handle naturalistic text input with the backward-chaining (BC) algorithm for high-level reasoning [58]. Lambada achieves significant improvements over existing approaches such as Chain-of-Thought and Selection-Inference in terms of prediction accuracy and proof accuracy [55].
[ 58, 55 ]
[ { "id": "2212.13894_all_0", "text": " Automated reasoning, the ability to draw valid conclusions from explicitly provided knowledge, has been a fundamental goal for AI since its early days McCarthy (1959); Hewitt (1969). Furthermore, logical reasoning, especially reasoning with unstructured, natural text is...
How did the attention method contribute to word alignments?
The local attention method produced sharper alignment weights than the global one, because it is designed to focus on only a subset of source words at each time step [44].
[ 44 ]
[ { "id": "1508.04025_all_0", "text": " Neural Machine Translation (NMT) achieved state-of-the-art performances in large-scale translation tasks such as from English to French (Luong et al., 2015) and English to German (Jean et al., 2015). NMT is appealing since it requires minimal domain knowledge and is con...
Does previous researches, which paper mentioned, using GNN?
Yes; previous research mentioned in the paper, such as Duvenaud et al. [10], uses graph neural networks.
[ 10 ]
[ { "id": "1711.04043_all_0", "text": " Supervised end-to-end learning has been extremely successful in computer vision, speech, or machine translation tasks, thanks to improvements in optimization technology, larger datasets and streamlined designs of deep convolutional or recurrent architectures. Despite th...
Attention maps are calculated by query of spatial feature of the noisy image (\phi(z_t)) and key of textual embedding (\psi(P)). Is it true?
True: the deep spatial features of the noisy image are projected to a query matrix, the textual embedding is projected to a key matrix and a value matrix through learned linear projections, and the attention maps are then computed from the query and key matrices [12].
[ 12 ]
[ { "id": "2208.01626_all_0", "text": " Recently, large-scale language-image (LLI) models, such as Imagen , DALL·E 2  and Parti , have shown phenomenal generative semantic and compositional power, and gained unprecedented attention from the research community and the public eye. These LLI models are trained o...
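A minimal sketch of such cross-attention, with hypothetical dimensions: queries come from the image features φ(z_t), keys and values from the text embedding ψ(P), all via learned linear projections:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
n_pixels, n_tokens, d_model, d = 16, 4, 8, 8   # toy sizes

phi_z = rng.normal(size=(n_pixels, d_model))   # spatial features of noisy image
psi_p = rng.normal(size=(n_tokens, d_model))   # textual embedding

W_q = rng.normal(size=(d_model, d))            # learned linear projections
W_k = rng.normal(size=(d_model, d))
W_v = rng.normal(size=(d_model, d))

Q = phi_z @ W_q                                # queries from image features
K = psi_p @ W_k                                # keys from text embedding
V = psi_p @ W_v                                # values from text embedding

attn = softmax(Q @ K.T / np.sqrt(d))           # attention maps: (pixels, tokens)
out = attn @ V                                 # text-conditioned pixel features

assert attn.shape == (n_pixels, n_tokens)
assert np.allclose(attn.sum(axis=1), 1.0)      # each pixel's map is a distribution
```

Each row of `attn` is one pixel's distribution over prompt tokens, which is exactly the per-token attention map the paper manipulates.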
What is the ImageNet classification dataset and how is it used in the experiments?
The experiments use the ImageNet 2012 classification dataset [36], which consists of 1000 classes [27].
[ 27 ]
[ { "id": "1512.03385_all_0", "text": " Deep convolutional neural networks (22, 21) have led to a series of breakthroughs for image classification (21, 50, 40). Deep networks naturally integrate low/mid/high-level features and classifiers in an end-to-end multi-layer fashion, and the “levels” of features can...
How did ERINE model incorporate knowledge into the language model?
ERNIE uses multi-level masking to incorporate knowledge into the language model, namely entity-level masking and phrase-level masking [2]. Learning enhanced language representations through entity-level and phrase-level masking is a main purpose of ERNIE [6].
[ 2, 6 ]
[ { "id": "1904.09223_all_0", "text": " Language representation pre-training Mikolov et al. (2013); Devlin et al. (2018) has been shown effective for improving many natural language processing tasks such as named entity recognition, sentiment analysis, and question answering. In order to get reliable word rep...
How controlling the variance specifically affects the level of overlap?
Controlling the variance is an effective means of achieving the desired overlap behaviour [37]: increasing the variance increases the level of overlap [44].
[ 37, 44 ]
[ { "id": "1812.02833_all_0", "text": " An oft-stated motivation for learning disentangled representations of data with deep generative models is a desire to achieve interpretability (5, 10)—particularly the decomposability (see §3.2.1 in 33) of latent representations to admit intuitive explanations. Most wor...
What types of control signals are present?
Objective control signals are the only type mentioned in this paper [1], so a full answer cannot be given from this paper's information alone [2].
[ 1, 2 ]
[ { "id": "2103.12204_all_0", "text": " Image captioning, \\ie, generating fluent and meaningful descriptions to summarize the salient contents of an image, is a classic proxy task for comprehensive scene understanding . With the release of several large scale datasets and advanced encoder-decoder frameworks,...
What does non-linguistic means?
Non-linguistic means something not related to linguistic information; it includes tasks such as quantitative computation and decimal operations [1].
[ 1 ]
[ { "id": "2210.12302_all_0", "text": " Pretrained Language Models (LMs) have shown singular succcess on a range of natural language understandings tasks, to the extent that they have become foundational for contemporary NLP systems. Several works have investigated why pretraining works so well Warstadt et al...
How could we handle more varied and extreme transformations in the unsupervised setting?
The paper mentions that handling more varied and extreme transformations, especially geometric changes, is an important problem for future work [53].
[ 53 ]
[ { "id": "1703.10593_all_0", "text": " What did Claude Monet see as he placed his easel by the bank of the Seine near Argenteuil on a lovely spring day in 1873 (Figure 1, top-left)? A color photograph, had it been invented, may have documented a crisp blue sky and a glassy river reflecting it. Monet conveyed...
Is the "classic paradigm for view synthesis" referring to the same "methods that directly map from input view to the target views"?
Classic-paradigm methods for view synthesis establish direct correspondence among multiple input views in order to obtain novel views [4].
[ 4 ]
[ { "id": "1704.07813_all_0", "text": " Humans are remarkably capable of inferring ego-motion and the 3D structure of a scene even over short timescales. For instance, in navigating along a street, we can easily locate obstacles and react quickly to avoid them. Years of research in geometric computer vision h...
What happens when we perform unlearning for really big LMs?
Larger LMs are stronger unlearners: they take fewer epochs to forget specific target token sequences and retain most of their previous capabilities, compared to smaller LMs [36].
[ 36 ]
[ { "id": "2210.01504_all_0", "text": " Recent work has shown that an adversary can extract training data from Pretrained Language Models (LMs) including Personally Identifiable Information (PII) such as names, phone numbers, and email addresses, and other information such as licensed code, private clinical n...
For the speech recognition task, based on the information provided by the authors on the total number of samples in the dataset, how long (in seconds) is each training sample?
According to the authors, they used about 2000 hours of spoken English data, which yielded about 700M training examples [18]. (2000 hours is about 7.2M seconds, so each training example corresponds to roughly 0.01 s of audio.)
[ 18 ]
[ { "id": "1503.02531_all_0", "text": " Many insects have a larval form that is optimized for extracting energy and nutrients from the environment and a completely different adult form that is optimized for the very different requirements of traveling and reproduction. In large-scale machine learning, we typi...
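The per-sample duration implied by the question can be checked with simple arithmetic (taking the 2000 hours and 700M figures at face value):

```python
# Back-of-the-envelope: total audio seconds divided by number of examples.
hours = 2000
examples = 700e6
seconds_per_example = hours * 3600 / examples  # 7.2e6 s / 7e8 examples

# ~0.0103 s per example, i.e. about 10 ms — frame-level rather than
# utterance-level training samples.
assert abs(seconds_per_example - 0.0103) < 1e-3
```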
What is the role of non-English data for English pretrained models in the finding?
It can enhance cross-lingual transfer and generalization [25].
[ 25 ]
[ { "id": "2204.08110_all_0", "text": " Pretrained language models have become an integral part of NLP systems. They come in two flavors: monolingual, where the model is trained on text from a single language, and multilingual, where the model is jointly trained on data from many different languages. Monoling...
What is volumetric neural network?
A volumetric neural network takes 3D volumes as input and uses 3D convolution filters [23].
[ 23 ]
[ { "id": "1606.04797_all_0", "text": " Recent research in computer vision and pattern recognition has highlighted the capabilities of Convolutional Neural Networks (CNNs) to solve challenging tasks such as classification, segmentation and object detection, achieving state-of-the-art performances. This succes...
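A naive sketch of what a 3D convolution does, sliding a 3D filter along all three axes of a volume (pure NumPy, "valid" mode, no strides or channels for simplicity):

```python
import numpy as np

def conv3d_valid(volume, kernel):
    # Naive valid 3D convolution: the filter slides along depth, height
    # and width of the volume, unlike a 2D filter that ignores depth.
    D, H, W = volume.shape
    d, h, w = kernel.shape
    out = np.zeros((D - d + 1, H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = np.sum(volume[i:i+d, j:j+h, k:k+w] * kernel)
    return out

vol = np.ones((4, 4, 4))           # toy 3D volume (e.g. a scan sub-block)
ker = np.ones((3, 3, 3)) / 27.0    # 3x3x3 averaging filter
out = conv3d_valid(vol, ker)
assert out.shape == (2, 2, 2)      # valid mode shrinks each axis by d-1
assert np.allclose(out, 1.0)       # average of ones is one
```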
To me, it sounds like an excuse for not collecting the dataset. How difficult is it to collect an open-source text-to-video dataset? What kind of procedure does it contain?
It is hard to collect such a dataset because a similarly sized (text, video) dataset cannot be easily gathered [0]. For human evaluation, they employed annotators and filtered them according to their criteria [29]. Therefore, they are not making an excuse for not collecting the dataset [4].
[ 0, 29, 4 ]
[ { "id": "2209.14792_all_0", "text": " The Internet has fueled collecting billions of (alt-text, image) pairs from HTML pages (Schuhmann et al., 2022), enabling the recent breakthroughs in Text-to-Image (T2I) modeling. However, replicating this success for videos is limited since a similarly sized (text, vid...
What kinds of relevant documents are missing, when lexical matching is used for retrieval?
Using lexical matching makes it difficult to identify synonyms or to distinguish between ambiguous words [0].
[ 0 ]
[ { "id": "2112.07577_all_0", "text": " Information Retrieval (IR) is a central component of many natural language applications. Traditionally, lexical methods (Robertson et al., 1994) have been used to search through text content. However, these methods suffer from the lexical gap (Berger et al., 2000) and a...
Why author said that underperformance of non-pretrained models comes from small data?
The author attributes the underperformance of non-pretrained models to small data because, if the model's parameter size is too large compared to the data size, training can suffer from under-fitting [21].
[ 21 ]
[ { "id": "2210.12302_all_0", "text": " Pretrained Language Models (LMs) have shown singular succcess on a range of natural language understandings tasks, to the extent that they have become foundational for contemporary NLP systems. Several works have investigated why pretraining works so well Warstadt et al...
Was Transfer learning beneficial on the CADe process?
Transfer learning was shown to be beneficial in the paper's experiments, as seen by the differences in performance between AlexNet-TL/GoogLeNet-TL and their non-transfer learning counterparts [44].
[ 44 ]
[ { "id": "1602.03409_all_0", "text": " Tremendous progress has been made in image recognition, primarily due to the availability of large-scale annotated datasets (i.e. ImageNet (1, 2)) and the recent revival of deep convolutional neural networks (CNN) (3, 4). For data-driven learning, large-scale well-annot...
The authors mention that their baseline model was able to decode sequences upto 120 "phones" long, when processing an audio segment with repeated sounds or utterances. What does "phones" mean, in this context?
While the paper does not define the term "phone", it is used as the unit of the audio sequence according to the experimental setup [27]; in speech recognition, a phone typically denotes a basic unit of speech sound, similar to a phoneme.
[ 27 ]
[ { "id": "1506.07503_all_0", "text": " Recently, attention-based recurrent networks have been successfully applied to a wide variety of tasks, such as handwriting synthesis , machine translation , image caption generation  and visual object classification .111An early version of this work was presented at th...
Why author said that it can be challenging to determine which aspects of the methods contribute the most?
It is challenging to determine which aspects of the methods contribute the most because training is computationally expensive, limiting the amount of tuning that can be done, and is often performed with private training data of varying sizes, limiting our ability to measure the effects of the modeling advances [0].
[ 0 ]
[ { "id": "1907.11692_all_0", "text": " Self-training methods such as ELMo Peters et al. (2018), GPT Radford et al. (2018), BERT Devlin et al. (2019), XLM Lample and Conneau (2019), and XLNet Yang et al. (2019) have brought significant performance gains, but it can be challenging to determine which aspects of...
How are the bullet-point summaries converted to queries?
Each article in the news websites they used (CNN, DailyMail) has a couple of bullet points containing an abstractive summary of the article [2]. They convert each bullet point into a Cloze style question and answer using entity detection algorithms [6].
[ 2, 6 ]
[ { "id": "1506.03340_all_0", "text": " Progress on the path from shallow bag-of-words information retrieval algorithms to machines capable of reading and understanding documents has been slow. Traditional approaches to machine reading and comprehension have been based on either hand engineered grammars , or ...
They achieved state-of-the-art performance on several benchmark datasets. Is it true?
It is true [40].
[ 40 ]
[ { "id": "1411.4038_all_0", "text": " Convolutional networks are driving advances in recognition. Convnets are not only improving for whole-image classification (19, 31, 32), but also making progress on local tasks with structured output. These include advances in bounding box object detection (29, 12, 17), ...
How was the CNN parameters initialized?
Parameters were initialized by sampling from random Gaussian distributions [25].
[ 25 ]
[ { "id": "1602.03409_all_0", "text": " Tremendous progress has been made in image recognition, primarily due to the availability of large-scale annotated datasets (i.e. ImageNet (1, 2)) and the recent revival of deep convolutional neural networks (CNN) (3, 4). For data-driven learning, large-scale well-annot...
How the architecture is chosen
The adversary (attacking party) must at least have some partial knowledge of the input (e.g., images, text) and expected output (e.g., classification) in order to select the architecture of the attacking system [20]. The adversary selects an appropriate architecture adapted to the input-output relation [24]. For instance, if the task is image classification or machine vision, a convolutional neural network is the best choice [47]. The parameters of the system (Deep Neural Network), like training epochs, number of layers, nodes, etc., have relatively little impact on the success of the attack, so they do not determine the architecture [53].
[ 20, 24, 47, 53 ]
[ { "id": "1602.02697_all_0", "text": " A classifier is a ML model that learns a mapping between inputs and a set of classes. For instance, a malware detector is a classifier taking executables as inputs and assigning them to the benign or malware class. Efforts in the security (5, 2, 9, 18) and machine learn...
How to deal with overfitting due to small input set while fine tuning the text-to-img model?
For small input sets, fine-tuning large image generation models can overfit context and subject appearance [16].
[ 16 ]
[ { "id": "2208.12242_all_0", "text": " Can you imagine your own dog traveling around the world, or your favorite bag displayed in the most exclusive showroom in Paris? What about your parrot being the main character of an illustrated storybook? Rendering such imaginary scenes is a challenging task that requi...
How does an imperfect system create a synthesized view reasonable enough to cheat metrics?
An imperfect system can create a synthesized view reasonable enough to cheat the metrics, but only for textureless scenes [2].
[ 2 ]
[ { "id": "1704.07813_all_0", "text": " Humans are remarkably capable of inferring ego-motion and the 3D structure of a scene even over short timescales. For instance, in navigating along a street, we can easily locate obstacles and react quickly to avoid them. Years of research in geometric computer vision h...
What are the benefits of applying clustering-based methods to self-supervised learning?
It’s conceptually simple [2].
[ 2 ]
[ { "id": "2211.02284_all_0", "text": " There has been a growing interest in using a large-scale dataset to build powerful machine learning models . Self-supervised learning (SSL), which aims to learn a useful representation without labels, is suitable for this trend; it is actively studied in the fields of n...
What's the reason for sharing features between R-FCN and RPN
The feature maps contain information from the input image [16].
[ 16 ]
[ { "id": "1605.06409_all_0", "text": " A prevalent family (8, 6, 18) of deep networks for object detection can be divided into two subnetworks by the Region-of-Interest (RoI) pooling layer : (i) a shared, “fully convolutional” subnetwork independent of RoIs, and (ii) an RoI-wise subnetwork that does not shar...
How does the performance of ML algorithms on the FashionMNIST dataset compare to those on real world fashion images?
Images and labels are stored in the same file format as the MNIST data set, which is designed for storing vectors and multidimensional matrices [4]. The original picture has a light-gray background (hexadecimal color: #fdfdfd) and is stored in 762 × 1000 JPEG format [8].
[ 4, 8 ]
[ { "id": "1708.07747_all_0", "text": " The MNIST dataset comprising of 10-class handwritten digits, was first introduced by LeCun et al. (1998) in 1998. At that time one could not have foreseen the stellar rise of deep learning techniques and their performance. Despite the fact that today deep learning can d...
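As a side note on the answer above, the MNIST/Fashion-MNIST image-file format (IDX) can be sketched in a few lines. The header layout below is the standard IDX3 layout (big-endian magic number, image count, rows, cols); the byte string itself is synthesized here purely for illustration:

```python
import struct

# IDX3 image-file header: big-endian magic (2051 for images),
# number of images, rows, cols; raw pixel bytes follow.
header = struct.pack(">IIII", 2051, 60000, 28, 28)
magic, n, rows, cols = struct.unpack(">IIII", header[:16])
print(magic, n, rows, cols)  # → 2051 60000 28 28
```

The same layout with magic 2049 (and a single count field) holds for the label files.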
What are some other approaches for semantic similarity and how do they differ to Sentence Transformers in architecture and performance?
The authors mention that they experiment with using a sentence transformer (Reimers and Gurevych, 2019) and a custom Seq2Seq model called GUD-IR for their retrieval function [36]. The paper does not contain any information on any other models (apart from these two) that could be used for semantic similarity [38].
[ 36, 38 ]
[ { "id": "2201.06009_all_0", "text": " Language models are now better than ever before at generating realistic content, but still lack commonsense Bender and Koller (2020); Marcus (2021). One failure mode due to a lack of commonsense is in misunderstanding a user’s intent. The typical remedy of retraining wi...
What are the reasons for the remarkable progress in the image recognition task?
The authors cite the increases in the number of well-constructed large-scale datasets and the usage of CNNs as the main reasons for the progress in the field of image recognition [0].
[ 0 ]
[ { "id": "1602.03409_all_0", "text": " Tremendous progress has been made in image recognition, primarily due to the availability of large-scale annotated datasets (i.e. ImageNet (1, 2)) and the recent revival of deep convolutional neural networks (CNN) (3, 4). For data-driven learning, large-scale well-annot...
What if we train this model with small dataset?
The model generates fewer samples of the classes for which data is scarce [33].
[ 33 ]
[ { "id": "1809.11096_all_0", "text": " The state of generative image modeling has advanced dramatically in recent years, with Generative Adversarial Networks (GANs, Goodfellow et al. (2014)) at the forefront of efforts to generate high-fidelity, diverse images with models learned directly from data. GAN trai...
The forward step of SBM-Transformer requires additional parameters and computation compared to the original Transformer architecture due to SBM sampling. Is this additional cost outweighed by exploiting sparsity?
SBM-Transformer is efficient compared to existing baselines in terms of FLOP count and peak memory use, but can result in longer runtimes due to sparse tensor operations being less optimized on GPU kernels [33].
[ 33 ]
[ { "id": "2210.15541_all_0", "text": " The Transformer architecture has been the go-to method for encoding sequential data, due to its superior performance in various tasks such as machine translation , image classification , and protein language modeling . Its key strength stems from the multi-head attenti...
If we have 100000 easy examples (0.1 each) and 100 hard examples (2.3 each),Is it possible calculate percentage loss difference between them?
Since the imbalance between easy and hard examples is large (100,000 vs. 100), it is not possible to calculate the percentage loss difference between them [16].
[ 16 ]
[ { "id": "1708.02002_all_0", "text": " Current state-of-the-art object detectors are based on a two-stage, proposal-driven mechanism. As popularized in the R-CNN framework , the first stage generates a sparse set of candidate object locations and the second stage classifies each candidate location as one of ...
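The aggregate totals implied by the numbers in the question can be worked out directly (the figures come from the question, not from the paper); this is exactly the imbalance effect that focal loss targets, where many easy examples dominate the total loss:

```python
# Aggregate loss contributions for the scenario in the question:
# 100,000 easy examples at loss 0.1 each vs. 100 hard examples at 2.3 each.
n_easy, loss_easy = 100_000, 0.1
n_hard, loss_hard = 100, 2.3

total_easy = n_easy * loss_easy   # 10000.0
total_hard = n_hard * loss_hard   # ~230.0

share_easy = total_easy / (total_easy + total_hard)
print(f"easy examples contribute {share_easy:.1%} of the total loss")
# → easy examples contribute 97.8% of the total loss
```

Despite each easy example being cheap, the easy set contributes roughly 43× more total loss than the hard set, which is the motivation for down-weighting easy examples.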
What do challenging auxiliary tasks mean?
Challenging auxiliary tasks refer to tasks that are difficult for the model to learn, which can negatively impact the performance of the primary task [17].
[ 17 ]
[ { "id": "2007.08294_all_0", "text": " Graph neural networks  have been proven effective to learn representations for various tasks such as node classification , link prediction , and graph classification . The powerful representation yields state-of-the-art performance in a variety of applications including...
How were the probability percentages chosen for the two heuristics of the continuous serving algorithm?
We use two heuristics in particular: first, we close the gripper whenever the network predicts that (I_t, ∅), where ∅ corresponds to no motion, will succeed with a probability that is at least 90% of the best inferred motion v_t* [21].
[ 21 ]
[ { "id": "1603.02199_all_0", "text": " When humans and animals engage in object manipulation behaviors, the interaction inherently involves a fast feedback loop between perception and action. Even complex manipulation tasks, such as extracting a single object from a cluttered bin, can be performed with hardl...
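The 90% heuristic quoted above can be sketched as a one-line predicate (the function name and signature are hypothetical, not from the paper):

```python
def should_close_gripper(p_no_motion, p_best_motion, threshold=0.9):
    """Close the gripper when stopping (no motion) is predicted to succeed
    with at least 90% of the best motion's success probability."""
    return p_no_motion >= threshold * p_best_motion

assert should_close_gripper(0.85, 0.9)       # 0.85 >= 0.81, so close
assert not should_close_gripper(0.5, 0.9)    # 0.5 < 0.81, keep moving
```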
How are WordPiece Embeddings and Byte Pair Encoding tokenization different, and why do BERT and GPT-2 use them respectively?
BERT and GPT-2 are different kinds of models, which is why they might use different encoding schemes [1]. BERT is primarily an encoder model, while GPT-like models are generative models that autoregressively predict the next token based on the series of tokens seen so far [20]. This primary difference in model class might explain why BERT uses WordPiece while GPT-2 uses BPE tokenization [21]. More details on why these specific tokenization schemes are used for each model cannot be found in this paper [9].
[ 1, 20, 21, 9 ]
[ { "id": "2004.04092_all_0", "text": " Pre-trained language models (PLMs) have substantially advanced the state-of-the-art across a variety of natural language processing (NLP) tasks Peters et al. (2018); Devlin et al. (2019); Yang et al. (2019); Radford et al. (2019); Liu et al. (2019); Keskar et al. (2019)...
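To make the BPE/WordPiece distinction above concrete, here is a minimal sketch of one frequency-based BPE merge step on a toy corpus. This is not the actual GPT-2 or BERT tokenizer; WordPiece differs mainly in selecting merges by training-likelihood gain rather than raw pair frequency:

```python
from collections import Counter

def most_frequent_pair(words):
    """Count adjacent symbol pairs over a corpus of (symbols -> frequency)."""
    pairs = Counter()
    for symbols, freq in words.items():
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    return max(pairs, key=pairs.get)

def merge_pair(words, pair):
    """Apply one BPE merge: replace every occurrence of `pair` with one symbol."""
    merged = {}
    for symbols, freq in words.items():
        out, i = [], 0
        while i < len(symbols):
            if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == pair:
                out.append(symbols[i] + symbols[i + 1])
                i += 2
            else:
                out.append(symbols[i])
                i += 1
        merged[tuple(out)] = freq
    return merged

# Toy corpus: words split into characters, with frequencies.
corpus = {("l", "o", "w"): 5, ("l", "o", "t"): 2, ("n", "e", "w"): 3}
pair = most_frequent_pair(corpus)   # ("l", "o") occurs 7 times
corpus = merge_pair(corpus, pair)   # "l"+"o" fused into a "lo" symbol
```

Real tokenizers repeat this merge loop thousands of times to build the subword vocabulary.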
Beyond correctness, why did the authors not evaluate the actual quality, meaning, or usefulness of the generated instructions?
One reason to explain why the authors did not perform more comprehensive quality evaluation of the generated outputs is the difficulty in judging the output of the model [17]. Some tasks cannot be quickly verified by the average human (one example the authors provide for this is converting first-order logic into natural language - a task that only experts with the appropriate domain knowledge can perform) [28]. However, despite this challenge, the authors do perform some analysis to gauge the overall quality of the generated samples [29].
[ 17, 28, 29 ]
[ { "id": "2212.10560_all_0", "text": " The recent NLP literature has witnessed a tremendous amount of activity in building models that can follow natural language instructions (Mishra et al., 2022; Wei et al., 2022; Sanh et al., 2022; Wang et al., 2022; Ouyang et al., 2022; Chung et al., 2022, i.a.). These d...
How can anisotropic probing kernel encode long-range interactions between points of 3D objects ?
The anisotropic probing kernel is designed specifically to be able to capture long-range interactions between 3D points of the objects [17]. In particular, the kernel is elongated and captures only voxels of the same height and along the probing direction [28].
[ 17, 28 ]
[ { "id": "1604.03265_all_0", "text": " Understanding 3D environments is a vital element of modern computer vision research due to paramount relevance in many vision systems, spanning a wide field of application scenarios from self-driving cars to autonomous robots. Recent advancements in real-time SLAM techn...
Did the authors use the entire Pittsburgh (Pitts250k) dataset for experiments or did they use a subset of the dataset?
They use the entire Pitts250k dataset and divide it into three roughly equal parts for training, validation and testing, each containing around 83k database images and 8k queries, which are geographically disjoint [33].
[ 33 ]
[ { "id": "1511.07247_all_0", "text": " Visual place recognition has received a significant amount of attention in the past years both in computer vision (66, 35, 10, 64, 65, 24, 9, 81, 4, 80, 63) and robotics communities (15, 16, 46, 44, 75) motivated by, e.g., applications in autonomous driving , augmented ...
What are the difference and similarities between volumetric representations CNN & multi-view representations CNN ?
The similarities between the volumetric and multi-view representations are: when stored as tensors, both representations can easily be used to train convolutional neural networks, i.e., volumetric CNNs and multi-view CNNs [11]. To maintain a similar computational cost, the multi-view CNN down-samples each rendered view to 227x227 pixels, while the volumetric CNN uses a 30x30x30 occupancy grid [12]. Note that 30x30x30 is approximately 227x227 [13].
[ 11, 12, 13 ]
[ { "id": "1604.03265_all_0", "text": " Understanding 3D environments is a vital element of modern computer vision research due to paramount relevance in many vision systems, spanning a wide field of application scenarios from self-driving cars to autonomous robots. Recent advancements in real-time SLAM techn...
How many question-context tuple is used to train a model in question answering experiment?
90,000 tuples are used to train [27].
[ 27 ]
[ { "id": "1611.01603_all_0", "text": " The tasks of machine comprehension (MC) and question answering (QA) have gained significant popularity over the past few years within the natural language processing and computer vision communities. Systems trained end-to-end now achieve promising results on a variety o...
How does authors claim that squeeze-and-excitation block removal is beneficial?
Previous work shows the potential of removing SE blocks, and the authors confirm the benefit of removal with an experimental result [56].
[ 56 ]
[ { "id": "2009.02009_all_0", "text": " As there are growing needs of deep learning applications based on convolutional neural network(CNN) in embedded systems, improving the accuracy of CNNs under a given set of constraints on latency and energy consumption has brought keen interest to researchers as a chall...
The proof of Theorem 1 is a direct application of previous results on sparse Transformers. What is the exact significance of this theoretical result?
We show that the low-rank structure of the underlying SBMs does not degrade the expressive power of the Transformer, and that SBM-Transformer can universally approximate arbitrary functions with O(n) connections [22].
[ 22 ]
[ { "id": "2210.15541_all_0", "text": " The Transformer architecture has been the go-to method for encoding sequential data, due to its superior performance in various tasks such as machine translation , image classification , and protein language modeling . Its key strength stems from the multi-head attenti...
Using cross-encoder is time-consuming but accurate. Is this true?
Since the cross-encoder has large memory consumption and compute footprint, it is time-consuming and not suitable for tasks that require fast inference [24]. However, it is relatively accurate compared to the bi-encoder, which is the reason the author utilized knowledge distillation so that they can obtain some accuracy gain from the cross-encoder [3].
[ 24, 3 ]
[ { "id": "1911.03814_all_0", "text": " Scale is a key challenge for entity linking; there are millions of possible entities to consider for each mention. To efficiently filter or rank the candidates, existing methods use different sources of external information, including manually curated mention tables Gan...
Which benchmark dataset used in this paper has the largest data size?
It is hard to say which single benchmark set is the largest, because a definition of size is not given [18].
[ 18 ]
[ { "id": "1710.10903_all_0", "text": " Convolutional Neural Networks (CNNs) have been successfully applied to tackle problems such as image classification (He et al., 2016), semantic segmentation (Jégou et al., 2017) or machine translation (Gehring et al., 2016), where the underlying data representation has ...
What does SAE stand for?
SAE stands for sparse auto-encoder; in the first phase, unsupervised feature learning is used to initialize the hidden-layer weights W^[1] and W^[2] [37].
[ 37 ]
[ { "id": "1301.3592_all_0", "text": " Robotic grasping is a challenging problem involving perception, planning, and control. Some recent works (54, 56, 28, 67) address the perception aspect of this problem by converting it into a detection problem in which, given a noisy, partial view of the object from a ca...
How about other terms like adjective?
The authors examine the ability of attribute modification with adjectives, such as color changes [1]. Also, the style transfer examples demonstrate this [11].
[ 1, 11 ]
[ { "id": "2212.11565_all_0", "text": " The large-scale multimodal dataset , consisting of billions of text-image pairs crawled from the Internet, has enabled a breakthrough in Text-to-Image (T2I) generation (30, 35, 6, 42, 40). To replicate this success in Text-to-Video (T2V) generation, recent works (42, 15...
What was the model that achieved the best performance on abdominal LN Detection?
The best performing model for abdominal LN detection was a Cifar-10 CNN [8].
[ 8 ]
[ { "id": "1602.03409_all_0", "text": " Tremendous progress has been made in image recognition, primarily due to the availability of large-scale annotated datasets (i.e. ImageNet (1, 2)) and the recent revival of deep convolutional neural networks (CNN) (3, 4). For data-driven learning, large-scale well-annot...
What is meant by the author saying that replacing pooling layers with convolutional layers reduces the memory footprint during training?
Replacing pooling layers with convolutional layers results in a smaller memory footprint because there is no need to map the output of pooling layers back to their inputs during the back-propagation step of training [8].
[ 8 ]
[ { "id": "1606.04797_all_0", "text": " Recent research in computer vision and pattern recognition has highlighted the capabilities of Convolutional Neural Networks (CNNs) to solve challenging tasks such as classification, segmentation and object detection, achieving state-of-the-art performances. This succes...
What is the PromptSource dataset and what kind of data does it include?
The PromptSource dataset is a dataset of human-written instructions for various tasks [0]. It probably includes instructions (and examples) for various sorts of tasks [23].
[ 0, 23 ]
[ { "id": "2212.10560_all_0", "text": " The recent NLP literature has witnessed a tremendous amount of activity in building models that can follow natural language instructions (Mishra et al., 2022; Wei et al., 2022; Sanh et al., 2022; Wang et al., 2022; Ouyang et al., 2022; Chung et al., 2022, i.a.). These d...
Do the authors use any datasets other than TIMIT to gauge the generalizability of their model?
The authors used no dataset other than TIMIT for the evaluation of the model [3].
[ 3 ]
[ { "id": "1506.07503_all_0", "text": " Recently, attention-based recurrent networks have been successfully applied to a wide variety of tasks, such as handwriting synthesis , machine translation , image caption generation  and visual object classification .111An early version of this work was presented at th...
How do shortcut connections, or identity mapping, contribute to the effectiveness of the deep residual learning framework?
Identity-mapping shortcut connections let the network skip one or more layers without introducing any extra parameters or computational complexity [21].
[ 21 ]
[ { "id": "1512.03385_all_0", "text": " Deep convolutional neural networks (22, 21) have led to a series of breakthroughs for image classification (21, 50, 40). Deep networks naturally integrate low/mid/high-level features and classifiers in an end-to-end multi-layer fashion, and the “levels” of features can...
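The no-extra-parameter property of identity shortcuts mentioned above can be sketched numerically; this is a toy NumPy block, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, W):
    """One plain layer: linear map followed by ReLU."""
    return np.maximum(W @ x, 0.0)

def residual_block(x, W1, W2):
    """Two layers wrapped with an identity shortcut: y = F(x) + x.
    The shortcut itself carries no weights, so it adds zero parameters."""
    return layer(layer(x, W1), W2) + x

d = 4
x = rng.standard_normal(d)

# If the residual branch F learns to output zero, the block is exactly
# the identity function -- the property that eases optimizing deep nets.
y = residual_block(x, np.zeros((d, d)), np.zeros((d, d)))
assert np.allclose(y, x)
```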
Instead of automatically identifying important attention patterns, why should a human be involved in this process?
Human involvement can provide interpretable results from the identified patterns and the performance enhancement from the pattern injection [15].
[ 15 ]
[ { "id": "2112.05364_all_0", "text": " With transformer-based models (Vaswani et al., 2017) dominating the leaderboard for many key NLP tasks such as summarization (Liu and Lapata, 2019), topic segmentation (Lukasik et al., 2020), and sentiment analysis Adhikari et al. (2019), their core multi-head self-att...
How does YOLOv3 improve upon previous versions of the YOLO object detection algorithm?
YOLOv3 is faster and better than YOLO [14]. It has more layers [22]. The authors also tried some small tricks and experiments which further improved the overall performance [3].
[ 14, 22, 3 ]
[ { "id": "1804.02767_all_0", "text": " Sometimes you just kinda phone it in for a year, you know? I didn’t do a whole lot of research this year. Spent a lot of time on Twitter. Played around with GANs a little. I had a little momentum left over from last year ; I managed to make some improvements to YOLO. B...
What datasets provide important corner cases, which was mentioned by paper?
They considered DWIE (Zaporojets et al., 2021) and AIDA (Hoffart et al., 2011) [9].
[ 9 ]
[ { "id": "2108.13530_all_0", "text": " In this paper we explore a principled approach to solve entity linking (EL) jointly with coreference resolution (coref). Concretely, we formulate coref+EL as a single structured task over directed trees that conceives EL and coref as two complementary components: a core...
The cited papers are in the NLP domain, while this paper targets Text-to-Video generation. How did the authors have confidence in adopting unsupervised learning techniques that could perform well in this Text-to-Video domain as well?
Unsupervised learning has long had great success in advancing the field of natural language processing (NLP), and this paper is inspired by those successes [0]. Thus the authors were confident in adopting unsupervised learning in the Text-to-Video domain [1].
[ 0, 1 ]
[ { "id": "2209.14792_all_0", "text": " The Internet has fueled collecting billions of (alt-text, image) pairs from HTML pages (Schuhmann et al., 2022), enabling the recent breakthroughs in Text-to-Image (T2I) modeling. However, replicating this success for videos is limited since a similarly sized (text, vid...
How efficient it is?
Tune-A-Video is based on a pre-trained T2I diffusion model and only updates the projection matrices in attention blocks, with the rest of the parameters being frozen [36]. Moreover, SC-Attn reduces the computational complexity compared to CogView2 [5].
[ 36, 5 ]
[ { "id": "2212.11565_all_0", "text": " The large-scale multimodal dataset , consisting of billions of text-image pairs crawled from the Internet, has enabled a breakthrough in Text-to-Image (T2I) generation (30, 35, 6, 42, 40). To replicate this success in Text-to-Video (T2V) generation, recent works (42, 15...
In paper authors make the predictions at three different scales, but what is advantage of making object detections at different scales?
By using multi-scaled prediction, YOLOv3 has improved performance for small objects [12]. Also, the subsequent scales benefit from previous scales and the previous features from earlier layers [21].
[ 12, 21 ]
[ { "id": "1804.02767_all_0", "text": " Sometimes you just kinda phone it in for a year, you know? I didn’t do a whole lot of research this year. Spent a lot of time on Twitter. Played around with GANs a little. I had a little momentum left over from last year ; I managed to make some improvements to YOLO. B...
What is fusion?
Fusion is used in the KinectFusion method, in which all depth data from the sensor is fused into a volumetric dense model that is then used to track the camera pose [3].
[ 3 ]
[ { "id": "1610.06475_all_0", "text": " Simultaneous Localization and Mapping (SLAM) has been a hot research topic in the last two decades in the Computer Vision and Robotics communities, and has recently attracted the attention of high-technological companies. SLAM techniques build a map of an unknown enviro...
Is the regularization here intended to make sure that the prior distribution is similar to a given distribution like a Gaussian distribution?
Yes, regularization is used to ensure that the model, Optimus, can organize sentences in a manner similar to some specified prior distribution [14]. Additionally, the authors discuss how the degree of regularization can be controlled through a parameter, beta [37].
[ 14, 37 ]
[ { "id": "2004.04092_all_0", "text": " Pre-trained language models (PLMs) have substantially advanced the state-of-the-art across a variety of natural language processing (NLP) tasks Peters et al. (2018); Devlin et al. (2019); Yang et al. (2019); Radford et al. (2019); Liu et al. (2019); Keskar et al. (2019)...
Which variants of LSTM encoder-decoder models are used in this study?
Future Predictor, Composite Model, Conditional Future Predictor, and Composite Model with Conditional Future Predictor are the variants of LSTM encoder-decoder models used in this study [15].
[ 15 ]
[ { "id": "1502.04681_all_0", "text": " Understanding temporal sequences is important for solving many problems in the AI-set. Recently, recurrent neural networks using the Long Short Term Memory (LSTM) architecture (Hochreiter & Schmidhuber, 1997) have been used successfully to perform various supervised seq...
How to get local or global features in the PointCloud domain
An SVM or multi-layer perceptron classifier can be trained on the global shape features for classification [27]. However, point segmentation requires a combination of local and global knowledge [28].
[ 27, 28 ]
[ { "id": "1612.00593_all_0", "text": " In this paper we explore deep learning architectures capable of reasoning about 3D geometric data such as point clouds or meshes. Typical convolutional architectures require highly regular input data formats, like those of image grids or 3D voxels, in order to perform w...
From the sign agreement, one can see improvement in accuracy with facts. Why?
When we allow the model to choose just one fact, the accuracy is 0.94, but that jumps to near-perfect accuracy when we enable the model to select two facts [53].
[ 53 ]
[ { "id": "2212.13894_all_0", "text": " Automated reasoning, the ability to draw valid conclusions from explicitly provided knowledge, has been a fundamental goal for AI since its early days McCarthy (1959); Hewitt (1969). Furthermore, logical reasoning, especially reasoning with unstructured, natural text is...
Why it is sufficient to predict a binary mask without concern for the categories once the instance has been classified as a whole ?
It is sufficient to predict a binary mask without concern for the categories once the instance has been classified as a whole because Mask R-CNN decouples mask and class prediction: as the existing box branch predicts the class label, a mask is generated for each class without competition among classes (by a per-pixel sigmoid and a binary loss) [39].
[ 39 ]
[ { "id": "1703.06870_all_0", "text": " The vision community has rapidly improved object detection and semantic segmentation results over a short period of time. In large part, these advances have been driven by powerful baseline systems, such as the Fast/Faster R-CNN (12, 36) and Fully Convolutional Network ...
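The decoupling described above can be sketched with NumPy (shapes and values are illustrative, not from the paper): the binary loss touches only the ground-truth class's mask, so the per-class masks never compete as they would under a softmax:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical mask-head output: K class-specific mask logits over an m x m RoI.
K, m = 3, 4
rng = np.random.default_rng(0)
mask_logits = rng.standard_normal((K, m, m))
gt_class = 1                                     # label from the box branch
gt_mask = (rng.random((m, m)) > 0.5).astype(float)

# Per-pixel sigmoid + binary cross-entropy on the gt class's mask only:
# the other K-1 masks do not enter the loss at all.
p = sigmoid(mask_logits[gt_class])
bce = -(gt_mask * np.log(p) + (1 - gt_mask) * np.log(1 - p)).mean()
```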
Why did the authors use a mix of 1x1 and 3x3 filters in the expand layer of fire module?
The authors used a mix of 1x1 and 3x3 filters in the expand layer of the fire module to reduce the number of parameters while still benefiting from a reasonable input receptive field and from the correlations and useful information extracted by the 3x3 filters of the CNN [14]. To keep the parameter count of a CNN small, the number of input channels to the 3x3 filters must be decreased, and here comes the role of the 1x1 filters, while the 3x3 filters are used to capture larger spatial features (assuming only 3x3 and 1x1 kernels) [17].
[ 14, 17 ]
[ { "id": "1602.07360_all_0", "text": " Much of the recent research on deep convolutional neural networks (CNNs) has focused on increasing accuracy on computer vision datasets. For a given accuracy level, there typically exist multiple CNN architectures that achieve that accuracy level. Given equivalent accur...
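The parameter-count argument above can be checked with simple arithmetic; the channel counts below are illustrative, not the paper's exact configuration:

```python
# Parameter counts (ignoring biases) for an expand stage fed either
# directly by C input channels or through a 1x1 "squeeze" layer first.
C = 128          # input channels
s = 16           # squeeze 1x1 filters
e1, e3 = 64, 64  # expand 1x1 and 3x3 filters

direct_3x3 = 3 * 3 * C * (e1 + e3)   # conventional all-3x3 layer, no squeeze
fire = (1 * 1 * C * s                # squeeze 1x1
        + 1 * 1 * s * e1             # expand 1x1
        + 3 * 3 * s * e3)            # expand 3x3
print(direct_3x3, fire)  # → 147456 12288
```

Squeezing the input channels before the 3x3 filters cuts this stage's parameters by roughly 12x while keeping the same output width (e1 + e3 channels).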
How does the author design the model to receive unfixed-size input?
The paper's model can process multiple different domains [28].
[ 28 ]
[ { "id": "1611.01603_all_0", "text": " The tasks of machine comprehension (MC) and question answering (QA) have gained significant popularity over the past few years within the natural language processing and computer vision communities. Systems trained end-to-end now achieve promising results on a variety o...
Did the authors use different sampling strategies than F-RCNN or other detection algorithms?
Yes, the authors used YOLO as a sampling strategy [21].
[ 21 ]
[ { "id": "1512.02325_all_0", "text": " Current state-of-the-art object detection systems are variants of the following approach: hypothesize bounding boxes, resample pixels or features for each box, and apply a high-quality classifier. This pipeline has prevailed on detection benchmarks since the Selective S...
How would the loss function of YoloV3 look after changing Mean squared errors with the logistic regression cross-entropy error terms?
Binary cross-entropy is used for the class predictions [25]. Logistic activation is used and is better than the linear activation [8].
[ 25, 8 ]
[ { "id": "1804.02767_all_0", "text": " Sometimes you just kinda phone it in for a year, you know? I didn’t do a whole lot of research this year. Spent a lot of time on Twitter. Played around with GANs a little. I had a little momentum left over from last year ; I managed to make some improvements to YOLO. B...
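The per-class logistic (binary cross-entropy) setup described above can be sketched in a few lines, with toy logits and labels that are not from the paper:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Independent per-class logistic classifiers, as in YOLOv3's class
# predictions; labels may overlap (e.g. "woman" and "person"), so the
# target is multi-hot rather than one-hot.
logits = np.array([2.0, -1.0, 0.5])   # one logit per class (toy values)
labels = np.array([1.0, 0.0, 1.0])    # multi-hot target

p = sigmoid(logits)
bce = -(labels * np.log(p) + (1 - labels) * np.log(1 - p)).mean()

# Unlike a softmax over classes, the probabilities need not sum to 1:
print(p.sum())
```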