input stringlengths 14 315 | answer stringlengths 9 2.16k | gold_ctxs listlengths 1 15 | ctxs listlengths 11 186 |
|---|---|---|---|
Why would the fact that PR2 had a greater gripping force be a valid reason for the difference in performance if "valid grasps which was not executed correctly by Yogi" were still counted as true positives for Yogi? | PR2 yielded a higher success rate as seen in Table V, succeeding in 89% of trials [103]. This is largely due to the much wider span of PR2’s gripper from open to closed and its ability to fully close from its widest position, as well as PR2’s ability to apply a larger gripping force [99]. | [
103,
99
] | [
{
"id": "1301.3592_all_0",
"text": " Robotic grasping is a challenging problem involving perception, planning, and control. Some recent works (54, 56, 28, 67) address the perception aspect of this problem by converting it into a detection problem in which, given a noisy, partial view of the object from a ca... |
How do the temporal context-aware embedding and the twin-attention network enable LSAN to be lightweight compared to SASRec? | The authors do not discuss this [5]. | [
5
] | [
{
"id": "2202.02519_all_0",
"text": " Recommender systems have been widely used in many scenarios to provide personalized items to users over massive vocabularies of items. The core of an effective recommender system is to accurately predict users’ interests toward items based on their historical interactio... |
Is there a rule or criterion for having a certain number of frames per second in a video? I think the number of frames per second can introduce some bias in training. | To manage the computational requirements of training, the authors only train on a small subset of, say, 16 frames at a time [11]. For their newly introduced joint training on video and image modeling, they concatenate random independent image frames to the end of each video sampled from the dataset [18]. Due to memory constraints, they use a fixed number of frames, but these randomly sampled frames help to reduce bias in training [19]. For evaluation, they adopt a fixed number of conditioning frames and generate a sequence of video frames to compare against other baselines [20]. | [
11,
18,
19,
20
] | [
{
"id": "2204.03458_all_0",
"text": " Diffusion models have recently been producing high quality results in image generation and audio generation (e.g. 28, 39, 40, 16, 23, 36, 48, 60, 42, 10, 29), and there is significant interest in validating diffusion models in new data modalities. In this work, we prese... |
What is AdapterFusion? | AdapterFusion is a multi-task learning method based on an attention-like mechanism [3]. It aggregates pre-trained adapters in a non-destructive manner, mitigating catastrophic forgetting and interference between tasks [9]. | [
3,
9
] | [
{
"id": "2206.03715_all_0",
"text": " The ability to understand natural language through commonsense reasoning is one of the core focuses in the field of natural language processing. To measure and study the different aspects of commonsense reasoning, several datasets are developed, such as SocialIQA (Sap e... |
How did the token-masking policy help in the results? | The authors theorize that the token-masking policy in the proposed method (NLPO) was a key reason why it was able to outperform the PPO-based model [12]. They hypothesize that the masking function acts as a dynamic constraint added to the algorithm, one that more effectively filters and captures relevant information about the state thanks to its dynamic nature [22]. | [
12,
22
] | [
{
"id": "2210.01241_all_0",
"text": " The ultimate aim of language technology is to interact with humans. However, most language models are trained without direct signals of human preference, with supervised target strings serving as (a sometimes crude) proxy. One option to incorporate user feedback is via ... |
The authors crop the CNN at the last convolutional layer and view it as a dense descriptor extractor. Why did the authors do the cropping at the last convolutional layer and not in the middle? | The authors mention that they crop the CNN at the last convolutional layer and view it as a dense descriptor extractor because they found this to work well in experiments, i.e., instance retrieval and texture recognition [16]. | [
16
] | [
{
"id": "1511.07247_all_0",
"text": " Visual place recognition has received a significant amount of attention in the past years both in computer vision (66, 35, 10, 64, 65, 24, 9, 81, 4, 80, 63) and robotics communities (15, 16, 46, 44, 75) motivated by, e.g., applications in autonomous driving , augmented ... |
How did the authors design the meta-path prediction task? | The authors designed the meta-path prediction task as a variation of link prediction [12]. In meta-path prediction, instead of just predicting links between two nodes, the task is to predict the presence of a specific sequence of heterogeneous composite relations, called a meta-path [10]. | [
12,
10
] | [
{
"id": "2007.08294_all_0",
"text": " Graph neural networks have been proven effective to learn representations for various tasks such as node classification , link prediction , and graph classification . The powerful representation yields state-of-the-art performance in a variety of applications including... |
What are the metrics used for comparing Inception-v4, Inception-ResNet-v1/2 and their ensembles? | The metrics used for the comparison of ensembles are: a) computational cost, b) recognition performance, c) step time, and d) top-5 error [11]. | [
11
] | [
{
"id": "1602.07261_all_0",
"text": " Since the 2012 ImageNet competition winning entry by Krizhevsky et al , their network “AlexNet” has been successfully applied to a larger variety of computer vision tasks, for example to object-detection , segmentation , human pose estimation , video classification , o... |
Does only Blended Diffusion (Avrahami et al.) use user-provided masks to guide manipulation? | No, a previous work by Bau et al. [1] demonstrated how to use user-provided masks to guide manipulation, and most LLI-based methods also require masks defined by the user [5]. | [
1,
5
] | [
{
"id": "2208.01626_all_0",
"text": " Recently, large-scale language-image (LLI) models, such as Imagen , DALL·E 2 and Parti , have shown phenomenal generative semantic and compositional power, and gained unprecedented attention from the research community and the public eye. These LLI models are trained o... |
Is it true that slowly approximating the online encoder keeps the target encoder from converging to the collapsed solution? | Yes, slowly approximating the online encoder keeps the target encoder from converging to the collapsed solution [19]. | [
19
] | [
{
"id": "2105.06323_all_0",
"text": " Over the past decade, one-class collaborative filtering (OCCF) problems (Pan et al., 2008; Hu et al., 2008) have been extensively researched to accurately infer a user’s preferred items, particularly for the recommender systems where only the users’ implicit feedback on... |
The output VLAD image representation matrix is converted into a vector and, after normalization, used as the image representation. What is the normalization method used by the authors? | Each column of the representation matrix is L2-normalized, the matrix is converted into a vector, and finally the new vector is L2-normalized [18]. | [
18
] | [
{
"id": "1511.07247_all_0",
"text": " Visual place recognition has received a significant amount of attention in the past years both in computer vision (66, 35, 10, 64, 65, 24, 9, 81, 4, 80, 63) and robotics communities (15, 16, 46, 44, 75) motivated by, e.g., applications in autonomous driving , augmented ... |
What does Omniglot mean? | Omniglot is a dataset consisting of 1623 characters from 50 different alphabets [38]. | [
38
] | [
{
"id": "1711.04043_all_0",
"text": " Supervised end-to-end learning has been extremely successful in computer vision, speech, or machine translation tasks, thanks to improvements in optimization technology, larger datasets and streamlined designs of deep convolutional or recurrent architectures. Despite th... |
What are the rescorers they used? | A supervised neural rescorer is used [47]. | [
47
] | [
{
"id": "2004.14503_all_0",
"text": " Recent advances in neural retrieval have led to advancements on several document, passage and knowledge-base benchmarks Guo et al. (2016); Pang et al. (2016); Hui et al. (2017); Dai et al. (2018); Gillick et al. (2018); Nogueira and Cho (2019a); MacAvaney et al. (2019);... |
Is it right to define partition in Euclidean space? | The paper shows the generalizability of its approach to non-Euclidean space as well [43]. | [
43
] | [
{
"id": "1706.02413_all_0",
"text": " We are interested in analyzing geometric point sets which are collections of points in a Euclidean space. A particularly important type of geometric point set is point cloud captured by 3D scanners, e.g., from appropriately equipped autonomous vehicles. As a set, such d... |
What is the motivation behind using residual learning in deep neural networks? | Residual learning is used to solve the degradation problem in deep neural networks [21]. | [
21
] | [
{
"id": "1512.03385_all_0",
"text": " Deep convolutional neural networks (22, 21) have led to a series of breakthroughs for image classification (21, 50, 40). Deep networks naturally integrate low/mid/high-level features and classifiers in an end-to-end multi-layer fashion, and the “levels” of features can... |
What is the effect of having larger values of β? | Large \beta is not universally beneficial for disentanglement, as the level of overlap can be increased too far [25]. Increasing \beta can reinforce existing inductive biases, wherein mean-field assumptions encourage representations which reduce dependence between the latent dimensions [30]. Increasing \beta increases the level of overlap in q_{\phi}(\bm{z}), as a consequence of increasing the encoder variance for individual datapoints [37]. | [
25,
30,
37
] | [
{
"id": "1812.02833_all_0",
"text": " An oft-stated motivation for learning disentangled representations of data with deep generative models is a desire to achieve interpretability (5, 10)—particularly the decomposability (see §3.2.1 in 33) of latent representations to admit intuitive explanations. Most wor... |
Transformers are typically used with multiple attention layers and heads. Why did the authors use a single-layer single-head Transformer architecture for the synthetic task of finding repeated tokens? | Using a single-layer and single-head architecture forces a constrained setting where the sole head must perform full attention to compare each token to all the others in order to attain 100% accuracy [26]. | [
26
] | [
{
"id": "2210.15541_all_0",
"text": " The Transformer architecture has been the go-to method for encoding sequential data, due to its superior performance in various tasks such as machine translation , image classification , and protein language modeling . Its key strength stems from the multi-head attenti... |
Why did the authors choose the RWPE model to compare the effective k value? | The authors choose RWPE as the absolute positional encoding to show that the outperformance of SAT is due to its structure-awareness [25]. The reason is that with k=0, SAT is equivalent to a vanilla Transformer using RWPE, which is not structure-aware [39]. | [
25,
39
] | [
{
"id": "2202.03036_all_0",
"text": " Graph neural networks (GNNs) have been established as powerful and flexible tools for graph representation learning, with successful applications in drug discovery (Gaudelet et al., 2021), protein design (Ingraham et al., 2019), social network analysis (Fan et al., 2019... |
Do "generation" and "inference" here mean the same as "decoder" and "encoder", respectively? | Yes, in this context, "generation" refers to the "decoder" and "inference" refers to the "encoder" [11]. | [
11
] | [
{
"id": "2004.04092_all_0",
"text": " Pre-trained language models (PLMs) have substantially advanced the state-of-the-art across a variety of natural language processing (NLP) tasks Peters et al. (2018); Devlin et al. (2019); Yang et al. (2019); Radford et al. (2019); Liu et al. (2019); Keskar et al. (2019)... |
Why is focusing on latency-aware NAS important? | Other criteria, such as the complexity of the network or the number of MAC operations, are not a proper measure of latency [68]. Thus, targeting latency directly is important [4]. | [
68,
4
] | [
{
"id": "2009.02009_all_0",
"text": " As there are growing needs of deep learning applications based on convolutional neural network(CNN) in embedded systems, improving the accuracy of CNNs under a given set of constraints on latency and energy consumption has brought keen interest to researchers as a chall... |
How are the Optimus pre-training objectives and its information bottleneck approach any different from those of traditional VAEs (used for image generation)? | The paper explains how information-theoretic principles can be used to measure the predictive power of a model and its compactness (a measure of how complex the learned representations are), and frames the two as a tradeoff [16]. The authors explain how they inject conditioning vectors into GPT without having to pretrain it again specifically for this purpose, and also discuss how they combine GPT and BERT [4]. | [
16,
4
] | [
{
"id": "2004.04092_all_0",
"text": " Pre-trained language models (PLMs) have substantially advanced the state-of-the-art across a variety of natural language processing (NLP) tasks Peters et al. (2018); Devlin et al. (2019); Yang et al. (2019); Radford et al. (2019); Liu et al. (2019); Keskar et al. (2019)... |
What are the reasons that made the authors choose 3D data from CAD models and RGB-D sensors? | The authors chose CAD models and RGB-D data for several reasons [40]. First, to demonstrate that, while learning only from synthetic CAD models, they are still able to generalize to real-world RGB-D reconstructions [5]. Second, the RGB-D dataset is newly proposed in this paper, and it is purposely difficult (it contains occlusions and reconstruction noise) [9]. | [
40,
5,
9
] | [
{
"id": "1604.03265_all_0",
"text": " Understanding 3D environments is a vital element of modern computer vision research due to paramount relevance in many vision systems, spanning a wide field of application scenarios from self-driving cars to autonomous robots. Recent advancements in real-time SLAM techn... |
The authors claim that DeepMedic behaves very well in preserving the hierarchical structure of tumours; is that true? Have they tried it across different types of varying cases? | Figure 12 shows successful cases of segmentation for the hierarchy of brain tumors [59]. | [
59
] | [
{
"id": "1603.05959_all_0",
"text": " Segmentation and the subsequent quantitative assessment of lesions in medical images provide valuable information for the analysis of neuropathologies and are important for planning of treatment strategies, monitoring of disease progression and prediction of patient out... |
Which ML model, used for training the substitute, achieves the accuracy? | The ML model is a DNN (deep neural network) combined with LR (logistic regression), with the two refinements introduced in Section 6: a periodic step size and reservoir sampling [36]. | [
36
] | [
{
"id": "1602.02697_all_0",
"text": " A classifier is a ML model that learns a mapping between inputs and a set of classes. For instance, a malware detector is a classifier taking executables as inputs and assigning them to the benign or malware class. Efforts in the security (5, 2, 9, 18) and machine learn... |
[Section 4.2]: What characteristics do the aforementioned datasets in transfer learning have? | They cover a wide range of classification tasks, including texture and scene classification [31]. | [
31
] | [
{
"id": "2211.02284_all_0",
"text": " There has been a growing interest in using a large-scale dataset to build powerful machine learning models . Self-supervised learning (SSL), which aims to learn a useful representation without labels, is suitable for this trend; it is actively studied in the fields of n... |
Do the authors claim that bigger datasets would improve the performance and expressiveness of reading comprehension models? | Based on the information in this paper alone, it is unclear if a bigger dataset would improve the performance of reading comprehension models [1]. While authors explain that a key contribution they make is the creation of a real-world, massive labelled reading comprehension dataset, it is unclear if such a dataset is essential to improve the performance of reading comprehension models - the authors pitch their dataset-building approach also as a way of evaluating performance of these models, which is different from the dataset itself leading to better performance [34]. | [
1,
34
] | [
{
"id": "1506.03340_all_0",
"text": " Progress on the path from shallow bag-of-words information retrieval algorithms to machines capable of reading and understanding documents has been slow. Traditional approaches to machine reading and comprehension have been based on either hand engineered grammars , or ... |
What are some challenges that researchers have encountered when generating 3D face images from 2D images? | The paper mainly talks about research on 3D face data reconstruction in terms of "one-to-many augmentation" methods [42]. Also, attempts to enlarge 3D face datasets with the same method are mentioned [77]. | [
42,
77
] | [
{
"id": "1804.06655_all_0",
"text": " Face recognition (FR) has been the prominent biometric technique for identity authentication and has been widely used in many areas, such as military, finance, public security and daily life. FR has been a long-standing research topic in the CVPR community. In the early... |
How is the computation cost of MSG different from MRG? | Compared with MSG, MRG is computationally more efficient since it avoids feature extraction in large-scale neighborhoods at the lowest levels [26]. The MSG approach is computationally expensive since it runs a local PointNet on large-scale neighborhoods for every centroid point [29]. | [
26,
29
] | [
{
"id": "1706.02413_all_0",
"text": " We are interested in analyzing geometric point sets which are collections of points in a Euclidean space. A particularly important type of geometric point set is point cloud captured by 3D scanners, e.g., from appropriately equipped autonomous vehicles. As a set, such d... |
What does it mean the realistic sample? | A realistic sample in this context refers to a 3D object that has undergone a smooth deformation, meaning that the shape of the object changes gradually rather than abruptly [13]. The realistic samples that the authors aim to generate are those that resemble real-world objects with diverse shapes and deformations, such as airplanes with varying wing lengths and directions, guitars with different sizes and aspect ratios, and people with different heights and postures [9]. | [
13,
9
] | [
{
"id": "2110.05379_all_0",
"text": " Modern deep learning techniques, which established their popularity on structured data, began showing success on point clouds. Unlike images with clear lattice structures, each point cloud is an unordered set of points with no inherent structures that globally represent... |
How did the authors ensure a fair comparison between the 9 variants of LSTMs they analysed? | To ensure a fair comparison, the authors tuned the hyperparameters individually for each variant and used random search to 1) obtain good hyperparameters and 2) collect enough samples for analyzing the general effect of each variant [20]. | [
20
] | [
{
"id": "1503.04069_all_0",
"text": " Recurrent neural networks with Long Short-Term Memory (which we will concisely refer to as LSTMs) have emerged as an effective and scalable model for several learning problems related to sequential data. Earlier methods for attacking these problems have either been tail... |
Why was decomposition introduced ? | Decomposition can be thought of as imposing a desired structure on the learned representations at a high level [15]. It is used to introduce a generalization of disentanglement [3]. | [
15,
3
] | [
{
"id": "1812.02833_all_0",
"text": " An oft-stated motivation for learning disentangled representations of data with deep generative models is a desire to achieve interpretability (5, 10)—particularly the decomposability (see §3.2.1 in 33) of latent representations to admit intuitive explanations. Most wor... |
What is the big reason of making difficult to decide whether a control signal is sample-suitable in advance? | Because it must be a reasonable description for the specific image sample [2]. | [
2
] | [
{
"id": "2103.12204_all_0",
"text": " Image captioning, \\ie, generating fluent and meaningful descriptions to summarize the salient contents of an image, is a classic proxy task for comprehensive scene understanding . With the release of several large scale datasets and advanced encoder-decoder frameworks,... |
Is the channel shuffle operation similar to that of random sparse convolution? | The group convolution and channel shuffle operations are clearly described and evaluated in the paper [10]. The authors claim that random sparse convolution is similar to channel shuffle with group convolution [11]. However, since random sparse convolution serves a different purpose and is not described, it is hard to tell from the paper alone how exactly they are similar [23]. | [
10,
11,
23
] | [
{
"id": "1707.01083_all_0",
"text": " Building deeper and larger convolutional neural networks (CNNs) is a primary trend for solving major visual recognition tasks (21, 9, 33, 5, 28, 24). The most accurate CNNs usually have hundreds of layers and thousands of channels (9, 34, 32, 40), thus requiring computa... |
Why does cutout data augmentation improve the NASNet-A model error rate? | From the cited paragraph, we can see that cutout data augmentation achieves a state-of-the-art error rate of 2.40%, which is better than the previous record [23]. | [
23
] | [
{
"id": "1707.07012_all_0",
"text": " Developing neural network image classification models often requires significant architecture engineering. Starting from the seminal work of on using convolutional architectures (17, 34) for ImageNet classification, successive advancements through architecture enginee... |
How does the cluster-based method learn meaningful representations from scratch? | Clustering methods encourage the representations to encode the semantic structures of the data [1]. While this can be prone to collapse, these methods rely on extra techniques to avoid it [2]. | [
1,
2
] | [
{
"id": "2211.02284_all_0",
"text": " There has been a growing interest in using a large-scale dataset to build powerful machine learning models . Self-supervised learning (SSL), which aims to learn a useful representation without labels, is suitable for this trend; it is actively studied in the fields of n... |
Where did the authors apply "mask apps in installation and uninstallation"? | To calculate the mask loss in the masked-app-prediction stage, the authors randomly mask apps in installation and uninstallation but keep the corresponding date and behavior type [45]. | [
45
] | [
{
"id": "2005.13303_all_0",
"text": " Personalized mobile business, e.g., recommendations, and advertising, often require effective user representations. For better performance, user modeling in industrial applications often considers as much information as possible, including but not limited to gender, loc... |
What does an "adversarial perturbation" mean? | An adversarial perturbation is a small, unnoticeable change to the data that fools the given model (i.e., yields a different class after applying the perturbation) [0]. It allows understanding the limits of existing architectures and quantifying the robustness of the models [1]. | [
0,
1
] | [
{
"id": "1511.04599_all_0",
"text": " Deep neural networks are powerful learning models that achieve state-of-the-art pattern recognition performance in many research areas such as bioinformatics (1, 16), speech (12, 6), and computer vision (10, 8). Though deep networks have exhibited very good performance ... |
For training, did the authors intentionally use a single relevant passage, or did they have no choice because the training dataset provides only one relevant passage, i.e., because of annotation scarcity? | The authors had no choice because the training dataset provides only one relevant passage, i.e., because of annotation scarcity [11]. | [
11
] | [
{
"id": "2004.14503_all_0",
"text": " Recent advances in neural retrieval have led to advancements on several document, passage and knowledge-base benchmarks Guo et al. (2016); Pang et al. (2016); Hui et al. (2017); Dai et al. (2018); Gillick et al. (2018); Nogueira and Cho (2019a); MacAvaney et al. (2019);... |
What is the instance of quantization artifacts? | Point clouds and meshes are not in a regular format, so most researchers typically transform such data into regular 3D voxel grids or collections of images [0]. | [
0
] | [
{
"id": "1612.00593_all_0",
"text": " In this paper we explore deep learning architectures capable of reasoning about 3D geometric data such as point clouds or meshes. Typical convolutional architectures require highly regular input data formats, like those of image grids or 3D voxels, in order to perform w... |
Why does zero-shot evaluation has been suggested as a genuine measure for reasoning capability? | It is hard to measure reasoning capability using individual datasets because the model cannot learn how to perform general semantic reasoning [0]. | [
0
] | [
{
"id": "2206.03715_all_0",
"text": " The ability to understand natural language through commonsense reasoning is one of the core focuses in the field of natural language processing. To measure and study the different aspects of commonsense reasoning, several datasets are developed, such as SocialIQA (Sap e... |
While the proposed approach uses Adam, would there be any systemic issues with using SGD or RMSProp with discriminative fine-tuning? | Discriminative fine-tuning tunes each layer with different learning rates [18]. The systemic issues that will occur from using SGD or RMSProp with discriminative fine-tuning cannot be answered from this paper [19]. | [
18,
19
] | [
{
"id": "1801.06146_all_0",
"text": " Inductive transfer learning has had a large impact on computer vision (CV). Applied CV models (including object detection, classification, and segmentation) are rarely trained from scratch, but instead are fine-tuned from models that have been pretrained on ImageNet, MS... |
Why is there a larger number of basic characters used for Asian languages than for Western languages? | The number of basic characters depends on the data, and the answer to this question is beyond the scope of this paper [31]. | [
31
] | [
{
"id": "1609.08144_all_0",
"text": " Neural Machine Translation (NMT) (41, 2) has recently been introduced as a promising approach with the potential of addressing many shortcomings of traditional machine translation systems. The strength of NMT lies in its ability to learn directly, in an end-to-end fashi... |
What does the proposed method BUIR require instead of negative sampling for training? | BUIR requires positive user-item pairs instead of negative sampling for training [35]. | [
35
] | [
{
"id": "2105.06323_all_0",
"text": " Over the past decade, one-class collaborative filtering (OCCF) problems (Pan et al., 2008; Hu et al., 2008) have been extensively researched to accurately infer a user’s preferred items, particularly for the recommender systems where only the users’ implicit feedback on... |
Is there no temporal dependency between the latent variable for explicit planning? | The latent variable for explicit planning has no temporal dependency [14]. The latent variable is derived from the standard normal distribution without the dependency on the score features [33]. | [
14,
33
] | [
{
"id": "2208.14867_all_0",
"text": " Computational modeling of expressive music performance focuses on mimicking human behaviors that convey the music (1, 2). For piano performance, one common task is to render an expressive performance from a quantized musical score. It aims to reproduce the loudness and ... |
Based on the authors' definition of the loss function used during training, will data points that contain longer sentences be likelier to have a higher absolute value of loss, and if so, why? | Since the loss function is defined as a sum of the negative log likelihoods rather than an average, its value tends to increase in proportion to the length N of the sentence S [18]. | [
18
] | [
{
"id": "1411.4555_all_0",
"text": " Being able to automatically describe the content of an image using properly formed English sentences is a very challenging task, but it could have great impact, for instance by helping visually impaired people better understand the content of images on the web. This task... |
What is an example of a "module" in a CNN? | A module can be thought of as a block of several layers, possibly with different filter sizes and dimensions, that performs some specific functionality [5]. | [
5
] | [
{
"id": "1602.07360_all_0",
"text": " Much of the recent research on deep convolutional neural networks (CNNs) has focused on increasing accuracy on computer vision datasets. For a given accuracy level, there typically exist multiple CNN architectures that achieve that accuracy level. Given equivalent accur... |
What metrics are used to measure performance on segmentation tasks? | mIOU is used to measure performance on segmentation tasks [39]. | [
39
] | [
{
"id": "1801.04381_all_0",
"text": " Neural networks have revolutionized many areas of machine intelligence, enabling superhuman accuracy for challenging image recognition tasks. However, the drive to improve accuracy often comes at a cost: modern state of the art networks require high computational resour... |
What are differences between “warping error”, the “Rand error” and the “pixel error” ? | If both queries and documents are short, fine-granular interaction is not required [17]. | [
17
] | [
{
"id": "1505.04597_all_0",
"text": " In the last two years, deep convolutional networks have outperformed the state of the art in many visual recognition tasks, e.g. (7, 3). While convolutional networks have already existed for a long time , their success was limited due to the size of the available traini... |
How did the authors show that the methods performed worse on the data coming from the second clinical center? Using which metrics? | Through Tables 2 to 5, the authors show that the performance of DeepMedic in terms of DSC, precision, sensitivity, ASSD, and Hausdorff distance on the BRATS and ISLES test datasets is worse than the performance of DeepMedic when trained with the BRATS and ISLES training datasets [60]. | [
60
] | [
{
"id": "1603.05959_all_0",
"text": " Segmentation and the subsequent quantitative assessment of lesions in medical images provide valuable information for the analysis of neuropathologies and are important for planning of treatment strategies, monitoring of disease progression and prediction of patient out... |
The authors mentioned that they used an all-zero frame of values to denote the end of sequence. Is this choice of all zeroes arbitrary (i.e., any unique set of values could be used to encode end of sequence) or is there some other benefit to choosing all zeroes? | Since the paper provides no evidence about the reason for or benefit of using the all-zero frame, this question cannot be answered without external knowledge [27]. | [
27
] | [
{
"id": "1506.07503_all_0",
"text": " Recently, attention-based recurrent networks have been successfully applied to a wide variety of tasks, such as handwriting synthesis , machine translation , image caption generation and visual object classification .111An early version of this work was presented at th... |
Why did the authors choose a 3D input format of 30x30x30? | The volumetric representation is costly: in order to keep the same computational cost as the multi-view representation of 227x227, the volumetric representation can only have a 30x30x30 resolution [13]. | [
13
] | [
{
"id": "1604.03265_all_0",
"text": " Understanding 3D environments is a vital element of modern computer vision research due to paramount relevance in many vision systems, spanning a wide field of application scenarios from self-driving cars to autonomous robots. Recent advancements in real-time SLAM techn... |
How could we recognize previously unseen person using k-NN? | k-NN classification can be used to recognize the unseen faces by computing the distance between the FaceNet embeddings [1]. | [
1
] | [
{
"id": "1503.03832_all_0",
"text": " In this paper we present a unified system for face verification (is this the same person), recognition (who is this person) and clustering (find common people among these faces). Our method is based on learning a Euclidean embedding per image using a deep convolutional ... |
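The k-NN identification described in the answer above can be sketched as follows. This is a minimal illustration, not the paper's pipeline: toy 2-D vectors stand in for FaceNet's learned embeddings, and the gallery names and values are made up.

```python
import numpy as np

def knn_identify(gallery, labels, query, k=3):
    """Classify a query embedding by majority vote of its k nearest
    gallery embeddings under squared Euclidean distance."""
    d = np.sum((gallery - query) ** 2, axis=1)
    nearest = np.argsort(d)[:k]
    votes = [labels[i] for i in nearest]
    return max(set(votes), key=votes.count)

# toy 2-D "embeddings": two identities clustered apart
gallery = np.array([[0.0, 0.0], [0.1, 0.1], [0.2, 0.0],
                    [5.0, 5.0], [5.1, 4.9], [4.9, 5.2]])
labels = ["alice", "alice", "alice", "bob", "bob", "bob"]
pred = knn_identify(gallery, labels, np.array([4.8, 5.0]))
```

A previously unseen person is handled by simply adding their embeddings and label to the gallery; no retraining of the embedding network is needed.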
How does the timestamp \tau control for stylization, specification of object attributes, or global manipulations for editing image by text prompt? | The overall composition is reflected by the attenion maps, which can be injected during the diffusion process at controled time-step, which allows the necessary freedom for adapting the new prompt [17]. Composite: True [23]. | [
17,
23
] | [
{
"id": "2208.01626_all_0",
"text": " Recently, large-scale language-image (LLI) models, such as Imagen , DALL·E 2 and Parti , have shown phenomenal generative semantic and compositional power, and gained unprecedented attention from the research community and the public eye. These LLI models are trained o... |
Can adaptive-architecture networks be used in other tasks besides face recognition? | Adaptive-architecture networks have been successfully applied to various tasks like image classification, semantic segmentation, and more [31]. | [
31
] | [
{
"id": "1804.06655_all_0",
"text": " Face recognition (FR) has been the prominent biometric technique for identity authentication and has been widely used in many areas, such as military, finance, public security and daily life. FR has been a long-standing research topic in the CVPR community. In the early... |
How are hard attention models different from soft attention models? | Soft attention model's weights are placed "softly" over all patches in the source image [18]. | [
18
] | [
{
"id": "1508.04025_all_0",
"text": " Neural Machine Translation (NMT) achieved state-of-the-art performances in large-scale translation tasks such as from English to French (Luong et al., 2015) and English to German (Jean et al., 2015). NMT is appealing since it requires minimal domain knowledge and is con... |
Why should we focus on self-supervised methods? For example, limited labels can create a tremendous gap relative to self-supervised approaches. | Self-supervised learning methods perform well in semi-supervised learning, transfer learning, and object detection [0]. | [
0
] | [
{
"id": "2211.02284_all_0",
"text": " There has been a growing interest in using a large-scale dataset to build powerful machine learning models . Self-supervised learning (SSL), which aims to learn a useful representation without labels, is suitable for this trend; it is actively studied in the fields of n... |
What is CifarNet? | CifarNet was a CNN model that was used for the object recognition task using the Cifar10 dataset [21]. | [
21
] | [
{
"id": "1602.03409_all_0",
"text": " Tremendous progress has been made in image recognition, primarily due to the availability of large-scale annotated datasets (i.e. ImageNet (1, 2)) and the recent revival of deep convolutional neural networks (CNN) (3, 4). For data-driven learning, large-scale well-annot... |
Are there any differences between VGG-16 and YOLO's custom framework besides size? | The YOLO framework uses a custom network based on the Googlenet architecture [39]. | [
39
] | [
{
"id": "1612.08242_all_0",
"text": " General purpose object detection should be fast, accurate, and able to recognize a wide variety of objects. Since the introduction of neural networks, detection frameworks have become increasingly fast and accurate. However, most detection methods are still constrained ... |
How fast is the proposed method compared to the naive approach? | Proposed method is five times faster than the naive approach [21]. | [
21
] | [
{
"id": "1411.4038_all_0",
"text": " Convolutional networks are driving advances in recognition. Convnets are not only improving for whole-image classification (19, 31, 32), but also making progress on local tasks with structured output. These include advances in bounding box object detection (29, 12, 17), ... |
Is the odd classification task linguistic? | Odd classification is a linguistic task because it is not included in the six non-linguistic tasks [27]. | [
27
] | [
{
"id": "2210.12302_all_0",
"text": " Pretrained Language Models (LMs) have shown singular succcess on a range of natural language understandings tasks, to the extent that they have become foundational for contemporary NLP systems. Several works have investigated why pretraining works so well Warstadt et al... |
How does YOLOv3 calculate the sizes of the anchor boxes? | The authors tried multiples of the initial anchor sizes specified by the 9 clusters [24]. The clusters are specified at cell D58 [4]. | [
24,
4
] | [
{
"id": "1804.02767_all_0",
"text": " Sometimes you just kinda phone it in for a year, you know? I didn’t do a whole lot of research this year. Spent a lot of time on Twitter. Played around with GANs a little. I had a little momentum left over from last year ; I managed to make some improvements to YOLO. B... |
Why can the carry gate C be expressed as a function of the transform gate T with C = 1 - T? | By defining C = 1 - T, the authors made the network automatically learn how much information to change and how much to leave as is [11]. | [
11
] | [
{
"id": "1507.06228_all_0",
"text": " Many recent empirical breakthroughs in supervised machine learning have been achieved through large and deep neural networks. Network depth (the number of successive computational layers) has played perhaps the most important role in these successes. For instance, withi... |
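The coupled-gate formulation y = H(x)·T(x) + x·(1 − T(x)) from the answer above can be sketched as a single highway layer. The weights below are illustrative, not learned; a strongly negative gate bias drives T toward 0 so the layer passes its input through nearly unchanged.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def highway_layer(x, Wh, bh, Wt, bt):
    """y = H(x)*T(x) + x*(1 - T(x)): with C = 1 - T, a single gate T
    learns how much of the transform H to apply versus how much of the
    input x to carry through unchanged."""
    H = np.tanh(Wh @ x + bh)   # candidate transform
    T = sigmoid(Wt @ x + bt)   # transform gate in (0, 1)
    return H * T + x * (1.0 - T)

x = np.array([0.5, -1.0])
I = np.eye(2)
# strongly negative gate bias pushes T -> 0, so the layer approximates identity
y = highway_layer(x, I, np.zeros(2), I, np.full(2, -10.0))
```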
Compare the results of MobileNet and full convolutions on the ImageNet dataset | Depthwise separable convolutions reduced ImageNet accuracy by only about 1% while saving considerably on multiply-adds and model parameters [36]. | [
36
] | [
{
"id": "1704.04861_all_0",
"text": " Convolutional neural networks have become ubiquitous in computer vision ever since AlexNet popularized deep convolutional neural networks by winning the ImageNet Challenge: ILSVRC 2012 . The general trend has been to make deeper and more complicated networks in order t... |
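The multiply-add savings mentioned above can be checked with a quick cost model. The layer sizes below are illustrative, not taken from the paper; the ratio reduces analytically to 1/c_out + 1/k².

```python
def conv_cost(h, w, k, c_in, c_out):
    """Multiply-adds for a standard k x k convolution."""
    return h * w * k * k * c_in * c_out

def separable_cost(h, w, k, c_in, c_out):
    """Depthwise (k x k per channel) followed by pointwise (1 x 1)."""
    return h * w * k * k * c_in + h * w * c_in * c_out

std = conv_cost(14, 14, 3, 512, 512)
sep = separable_cost(14, 14, 3, 512, 512)
ratio = sep / std   # roughly 1/9 for 3x3 kernels with many channels
```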
Will doing batch calls use cached values? | Yes, batch calls use cached values [30]. | [
30
] | [
{
"id": "2212.13894_all_0",
"text": " Automated reasoning, the ability to draw valid conclusions from explicitly provided knowledge, has been a fundamental goal for AI since its early days McCarthy (1959); Hewitt (1969). Furthermore, logical reasoning, especially reasoning with unstructured, natural text is... |
Why are the localization, object detection, and image segmentation downstream tasks underwhelming? | Clustering-based methods rely on pseudo-labels for representation learning [1]. Therefore, the testbed is focused on classification-based benchmarks [4]. | [
1,
4
] | [
{
"id": "2211.02284_all_0",
"text": " There has been a growing interest in using a large-scale dataset to build powerful machine learning models . Self-supervised learning (SSL), which aims to learn a useful representation without labels, is suitable for this trend; it is actively studied in the fields of n... |
Can we use Faster R-CNN for multi-task learning? | Mask R-CNN is better than Faster R-CNN in multitasking [58]. | [
58
] | [
{
"id": "1703.06870_all_0",
"text": " The vision community has rapidly improved object detection and semantic segmentation results over a short period of time. In large part, these advances have been driven by powerful baseline systems, such as the Fast/Faster R-CNN (12, 36) and Fully Convolutional Network ... |
What does "online hard example mining (OHEM)" mean? | In OHEM, each example is scored by its loss, non-maximum suppression (nms) is then applied, and a minibatch is constructed with the highest-loss examples [45]. | [
45
] | [
{
"id": "1708.02002_all_0",
"text": " Current state-of-the-art object detectors are based on a two-stage, proposal-driven mechanism. As popularized in the R-CNN framework , the first stage generates a sparse set of candidate object locations and the second stage classifies each candidate location as one of ... |
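The selection step of OHEM described above can be sketched as follows. This is a minimal illustration with made-up losses; the NMS step on overlapping proposals is omitted.

```python
import numpy as np

def ohem_select(losses, batch_size):
    """Online hard example mining: rank examples by loss and build the
    minibatch from the highest-loss (hardest) ones."""
    order = np.argsort(losses)[::-1]   # indices sorted by descending loss
    return order[:batch_size]

losses = np.array([0.05, 2.3, 0.40, 1.7, 0.01])
hard = ohem_select(losses, 2)          # the two hardest examples
```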
Give a situation where a global model is necessary. | A global model is necessary when correct candidates from other mentions in the cluster should be used [16]. | [
16
] | [
{
"id": "2108.13530_all_0",
"text": " In this paper we explore a principled approach to solve entity linking (EL) jointly with coreference resolution (coref). Concretely, we formulate coref+EL as a single structured task over directed trees that conceives EL and coref as two complementary components: a core... |
What are the models that yielded the least competitive detection accuracy results on the Thoracoabdominal Lymph Node Detection? | CifarNet, AlexNet-ImNet and GoogLeNet-RI-H were the models that had the worst results [34]. | [
34
] | [
{
"id": "1602.03409_all_0",
"text": " Tremendous progress has been made in image recognition, primarily due to the availability of large-scale annotated datasets (i.e. ImageNet (1, 2)) and the recent revival of deep convolutional neural networks (CNN) (3, 4). For data-driven learning, large-scale well-annot... |
How does KG-Classifier affect zero-shot fusion? | Zero-shot fusion with the KG-C adapter fuses the knowledge from different experts with subtle differences rather than focusing heavily on a single expert [29]. | [
29
] | [
{
"id": "2206.03715_all_0",
"text": " The ability to understand natural language through commonsense reasoning is one of the core focuses in the field of natural language processing. To measure and study the different aspects of commonsense reasoning, several datasets are developed, such as SocialIQA (Sap e... |
What is batch normalization? | While P0 shows that the authors use batch normalization, it does not contain the definition [14]. | [
14
] | [
{
"id": "1411.4555_all_0",
"text": " Being able to automatically describe the content of an image using properly formed English sentences is a very challenging task, but it could have great impact, for instance by helping visually impaired people better understand the content of images on the web. This task... |
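Since the cited excerpt does not define it, here is a minimal sketch of batch normalization as it is generally understood, not taken from the cited paper: activations are normalized to zero mean and unit variance over the batch, then rescaled and shifted by learnable parameters.

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize a batch of activations to zero mean and unit variance
    per feature, then apply a learnable scale (gamma) and shift (beta)."""
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta

x = np.array([[1.0, 2.0], [3.0, 6.0], [5.0, 10.0]])  # (batch, features)
y = batch_norm(x)
```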
How did the author decide the size of the neighborhood? | The size of neighborhoods is set as 25 for 1-hop and 10 for 2-hop [28]. | [
28
] | [
{
"id": "1706.02216_all_0",
"text": " Low-dimensional vector embeddings of nodes in large graphs111While it is common to refer to these data structures as social or biological networks, we use the term graph to avoid ambiguity with neural network terminology. have proved extremely useful as feature inputs f... |
Is it true that FPS could cover the whole set? | For evenly covering the whole set, the paper selects centroids among the input point set by a farthest point sampling (FPS) algorithm [15]. Given input points \{x_{1},x_{2},\ldots,x_{n}\}, the paper uses iterative FPS to choose a subset of points \{x_{i_{1}},x_{i_{2}},\ldots,x_{i_{m}}\}, such that x_{i_{j}} is the most distant point (in metric distance) from the set \{x_{i_{1}},x_{i_{2}},\ldots,x_{i_{j-1}}\} with regard to the rest of the points [4]. | [
15,
4
] | [
{
"id": "1706.02413_all_0",
"text": " We are interested in analyzing geometric point sets which are collections of points in a Euclidean space. A particularly important type of geometric point set is point cloud captured by 3D scanners, e.g., from appropriately equipped autonomous vehicles. As a set, such d... |
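The iterative FPS procedure described in the answer above can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation; the point coordinates are made up, chosen so the extreme points must be picked.

```python
import numpy as np

def farthest_point_sampling(points, m, seed=0):
    """Iteratively pick m centroids: each new centroid is the point
    farthest (in Euclidean distance) from all centroids chosen so far,
    which spreads the samples evenly over the set."""
    rng = np.random.default_rng(seed)
    n = points.shape[0]
    chosen = [int(rng.integers(n))]          # arbitrary first centroid
    dist = np.full(n, np.inf)                # distance to nearest chosen point
    for _ in range(m - 1):
        dist = np.minimum(dist, np.linalg.norm(points - points[chosen[-1]], axis=1))
        chosen.append(int(np.argmax(dist)))  # farthest from the chosen set
    return chosen

pts = np.array([[0.0, 0.0], [0.1, 0.0], [10.0, 0.0], [0.0, 10.0]])
idx = farthest_point_sampling(pts, 3)
```

Whichever point is picked first, the two outlying points end up among the centroids, illustrating how FPS covers the whole set better than random sampling.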
The goal of the authors regarding the microarchitectural design space was to understand the impact of CNN architectural choices on model size and accuracy. Were they able to draw a conclusive impact? | They conclude that model size can be reduced while still obtaining the same or higher accuracy with fewer parameters by manipulating architectural design strategies, as demonstrated by their architecture, SqueezeNet [1]. Although the authors design and execute experiments with the goal of providing intuitions about the shape of the microarchitectural design space with respect to their proposed design strategies, SqueezeNet and other models reside in a broad and largely unexplored design space of CNN architectures that needs more investigation [28]. | [
1,
28
] | [
{
"id": "1602.07360_all_0",
"text": " Much of the recent research on deep convolutional neural networks (CNNs) has focused on increasing accuracy on computer vision datasets. For a given accuracy level, there typically exist multiple CNN architectures that achieve that accuracy level. Given equivalent accur... |
What do we mean by hidden units? | The chosen number of hidden units is based on prior literature on auto-encoders and, like the choice of dataset, it affects overfitting [23]. Through the hidden units we can learn a latent representation, and their number affects the performance of applications such as image denoising, inpainting and super-resolution [26]. | [
23,
26
] | [
{
"id": "1312.6114_all_0",
"text": " How can we perform efficient approximate inference and learning with directed probabilistic models whose continuous latent variables and/or parameters have intractable posterior distributions? The variational Bayesian (VB) approach involves the optimization of an approxi... |
What is the key difference between GANs and Diffusion models that leads to prior work on inversions not being helpful here? | GANs and diffusion models are very different classes of models [13]. GANs attempt to invert an image by transforming an input image into a latent vector - this process of inversion occurs by attempting to optimize the latent vector directly, or alternatively by training an image encoder model on a large dataset of images [15]. Diffusion models, on the other hand, primarily function by adding noise to an image, and then training a model to denoise (or remove noise) from these noisy images [23]. The authors attempted to extend their model using ideas from these two classes of models but stated that neither of them resulted in significantly better performance [48]. The exact reason why these strategies do not work is not explicitly discussed in the paper [6]. | [
13,
15,
23,
48,
6
] | [
{
"id": "2208.01618_all_0",
"text": " In a famous scene from the motion picture “Titanic”, Rose makes a request of Jack: “…draw me like one of your French girls”. Albeit simple, this request contains a wealth of information. It indicates that Jack should produce a drawing; It suggests that its style and com... |
From the left graph of Figure 1, we observe that even the deepest highway network has the same or worse performance than the plain network, so what are the benefits of using highway networks with deeper layers? | Although highway networks do not perform best, they do not break down significantly when stacked deeply [25]. Also, there is freedom in setting the depth, and they can be trained well with vanilla SGD [33]. In addition, meaningful outputs come out of all layers and information can be passed on dynamically [37]. | [
25,
33,
37
] | [
{
"id": "1507.06228_all_0",
"text": " Many recent empirical breakthroughs in supervised machine learning have been achieved through large and deep neural networks. Network depth (the number of successive computational layers) has played perhaps the most important role in these successes. For instance, withi... |
Give two examples which fit in the following case: “Despite these successes, this learning setup does not cover many aspects where learning is nonetheless possible and desirable.” | There was great success in computer vision and speech tasks [0]. | [
0
] | [
{
"id": "1711.04043_all_0",
"text": " Supervised end-to-end learning has been extremely successful in computer vision, speech, or machine translation tasks, thanks to improvements in optimization technology, larger datasets and streamlined designs of deep convolutional or recurrent architectures. Despite th... |
What if, instead of concatenating the feedback to the prompt, the prompt was automatically edited according to the feedback? | Strictly speaking, the proposed approach is editing the prompt - even additions or concatenations to a user prompt also count as "editing" the prompt [2]. However, the current paper contains no information on what would happen if the prompt were somehow edited without concatenation [22]. | [
2,
22
] | [
{
"id": "2201.06009_all_0",
"text": " Language models are now better than ever before at generating realistic content, but still lack commonsense Bender and Koller (2020); Marcus (2021). One failure mode due to a lack of commonsense is in misunderstanding a user’s intent. The typical remedy of retraining wi... |
Why do NMT systems sometimes produce output sentences that do not translate all parts of the input sentence? | Authors implemented a coverage penalty to encourage the model to translate all of the provided input, however, it's not clear why sometimes NMT systems fail to translate all parts of the input [63]. | [
63
] | [
{
"id": "1609.08144_all_0",
"text": " Neural Machine Translation (NMT) (41, 2) has recently been introduced as a promising approach with the potential of addressing many shortcomings of traditional machine translation systems. The strength of NMT lies in its ability to learn directly, in an end-to-end fashi... |
Does a lower FVD value mean more coherent generation? What is a coherent video in the first place? | Coherent means a video that stays semantically similar in spite of large differences between frames, where having real-world knowledge of how objects move is crucial [31]. | [
31
] | [
{
"id": "2209.14792_all_0",
"text": " The Internet has fueled collecting billions of (alt-text, image) pairs from HTML pages (Schuhmann et al., 2022), enabling the recent breakthroughs in Text-to-Image (T2I) modeling. However, replicating this success for videos is limited since a similarly sized (text, vid... |
What are the signs that showed that BigDeep+ has been overfitting ? | As seen in Figure 8, despite BigDeep+ having a similar capacity to DeepMedic, the mean validation accuracy of BigDeep+ converges to a lower accuracy than that of DeepMedic [46]. | [
46
] | [
{
"id": "1603.05959_all_0",
"text": " Segmentation and the subsequent quantitative assessment of lesions in medical images provide valuable information for the analysis of neuropathologies and are important for planning of treatment strategies, monitoring of disease progression and prediction of patient out... |
They claim that LSTIM can synthesize unseen compositions. Is this true? What are some examples? | Assuming that “LSTIM” stands for “large-scale text-to-image models”, the authors mention a list of related work that demonstrate the efficacy of models such as these to reason over natural language queries and generate new art or images [1]. | [
1
] | [
{
"id": "2208.01618_all_0",
"text": " In a famous scene from the motion picture “Titanic”, Rose makes a request of Jack: “…draw me like one of your French girls”. Albeit simple, this request contains a wealth of information. It indicates that Jack should produce a drawing; It suggests that its style and com... |
What kinds of alignment functions are used for their attention-based models? | Authors used location, dot, general and concat alignment functions in their experiments [38]. | [
38
] | [
{
"id": "1508.04025_all_0",
"text": " Neural Machine Translation (NMT) achieved state-of-the-art performances in large-scale translation tasks such as from English to French (Luong et al., 2015) and English to German (Jean et al., 2015). NMT is appealing since it requires minimal domain knowledge and is con... |
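The named alignment functions can be sketched as Luong-style score functions between a decoder state h_t and an encoder state h_s. This is a toy illustration with made-up vectors and weights; the location-based variant, which scores from h_t alone, is omitted here.

```python
import numpy as np

def score(h_t, h_s, mode, Wa=None, va=None):
    """Alignment score between decoder state h_t and encoder state h_s:
    'dot' (h_t . h_s), 'general' (h_t^T Wa h_s), or
    'concat' (va^T tanh(Wa [h_t; h_s]))."""
    if mode == "dot":
        return h_t @ h_s
    if mode == "general":
        return h_t @ Wa @ h_s
    if mode == "concat":
        return va @ np.tanh(Wa @ np.concatenate([h_t, h_s]))
    raise ValueError(mode)

h_t = np.array([1.0, 0.0])
h_s = np.array([0.5, 0.5])
s_dot = score(h_t, h_s, "dot")
s_gen = score(h_t, h_s, "general", Wa=np.eye(2))  # identity Wa reduces to dot
```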
What software was used to run the optimizations for BA? | For BA optimization, the Levenberg–Marquardt method implemented in the g2o software is used [22]. | [
22
] | [
{
"id": "1610.06475_all_0",
"text": " Simultaneous Localization and Mapping (SLAM) has been a hot research topic in the last two decades in the Computer Vision and Robotics communities, and has recently attracted the attention of high-technological companies. SLAM techniques build a map of an unknown enviro... |
What metrics are used to measure the robustness of the model? | No metrics are mentioned for explicitly measuring robustness [33]. | [
33
] | [
{
"id": "2202.02519_all_0",
"text": " Recommender systems have been widely used in many scenarios to provide personalized items to users over massive vocabularies of items. The core of an effective recommender system is to accurately predict users’ interests toward items based on their historical interactio... |
The authors mention that their primary network has a compute cost of 300 million multiply-adds. By how many orders of magnitude would this compute cost increase if the authors did not use bottleneck layers? | The compute cost when using traditional layers is h · w · d' · d'' · k^2, so the cost would increase by a factor of d'' · k^2 / (t(d' + k^2 + d'')) [18]. | [
18
] | [
{
"id": "1801.04381_all_0",
"text": " Neural networks have revolutionized many areas of machine intelligence, enabling superhuman accuracy for challenging image recognition tasks. However, the drive to improve accuracy often comes at a cost: modern state of the art networks require high computational resour... |
What is the difference in error rate on the IMDb dataset with and without pretraining? | The authors compared using no pretraining with pretraining on WikiText-103 and showed that pretraining was useful in improving performance for small to large-sized datasets [43]. | [
43
] | [
{
"id": "1801.06146_all_0",
"text": " Inductive transfer learning has had a large impact on computer vision (CV). Applied CV models (including object detection, classification, and segmentation) are rarely trained from scratch, but instead are fine-tuned from models that have been pretrained on ImageNet, MS... |
What are the eight different LSTM variants that the authors experimented with? | The authors conducted the experiment with these LSTM variants of the vanilla architecture to empirically compare different LSTM variants: No Input Gate (NIG), No Forget Gate (NFG), No Output Gate (NOG), No Input Activation Function (NIAF), No Output Activation Function (NOAF), Coupled Input and Forget Gate (CIFG), No Peepholes (NP), Full Gate Recurrence (FGR) [19]. | [
19
] | [
{
"id": "1503.04069_all_0",
"text": " Recurrent neural networks with Long Short-Term Memory (which we will concisely refer to as LSTMs) have emerged as an effective and scalable model for several learning problems related to sequential data. Earlier methods for attacking these problems have either been tail... |
How is the structural information of graphs different from the positional information of graphs? | Structural information of graphs serves as a measure of structural similarity between nodes [2]. The reason is that most existing approaches fail to identify structural similarities between nodes, compared to SAIT, which tries to capture structural similarities among nodes by encoding structural information [26]. | [
2,
26
] | [
{
"id": "2202.03036_all_0",
"text": " Graph neural networks (GNNs) have been established as powerful and flexible tools for graph representation learning, with successful applications in drug discovery (Gaudelet et al., 2021), protein design (Ingraham et al., 2019), social network analysis (Fan et al., 2019... |
Does performing augmentation "on-the-fly" prior to each optimization iteration slow down the learning process? | On the fly data augmentation will decrease the storage requirements and will increase the speed of each training iteration [17]. | [
17
] | [
{
"id": "1606.04797_all_0",
"text": " Recent research in computer vision and pattern recognition has highlighted the capabilities of Convolutional Neural Networks (CNNs) to solve challenging tasks such as classification, segmentation and object detection, achieving state-of-the-art performances. This succes... |
How would using max-pooling, min-pooling, or mean-pooling instead of the proposed concat-pooling impact memory utilisation during training? | Concat-pooling functions by concatenating the hidden state of the last time step of the document with both the max-pooled and mean-pooled representations of the hidden states [23]. | [
23
] | [
{
"id": "1801.06146_all_0",
"text": " Inductive transfer learning has had a large impact on computer vision (CV). Applied CV models (including object detection, classification, and segmentation) are rarely trained from scratch, but instead are fine-tuned from models that have been pretrained on ImageNet, MS... |
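Concat-pooling as described above can be sketched as follows. Note that max- and mean-pooling each produce a d-dimensional vector from the full sequence of hidden states, so the concatenated output is 3d-dimensional, whereas any single pooling alone would be d-dimensional. The hidden-state values here are toy numbers.

```python
import numpy as np

def concat_pool(hidden_states):
    """Concatenate the last time step's hidden state with the
    max-pooled and mean-pooled hidden states over all time steps."""
    last = hidden_states[-1]
    maxp = hidden_states.max(axis=0)
    meanp = hidden_states.mean(axis=0)
    return np.concatenate([last, maxp, meanp])

H = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [3.0, 1.0]])   # (time steps, hidden dim)
pooled = concat_pool(H)       # 3 * hidden dim output
```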
In order to determine whether two domains are similar or not, how could we define 'similarity'? | "Similarity" may be defined as a function between input and output; however, it may vary from one particular task or formulation to another [11]. | [
11
] | [
{
"id": "1703.10593_all_0",
"text": " What did Claude Monet see as he placed his easel by the bank of the Seine near Argenteuil on a lovely spring day in 1873 (Figure 1, top-left)? A color photograph, had it been invented, may have documented a crisp blue sky and a glassy river reflecting it. Monet conveyed... |
How were the original ILD images reconstructed? | A process consisting of deconvolution, back-propagation with convolution, and un-pooling from the activation maps of the pooling units was used to reconstruct the original ILD images [58]. | [
58
] | [
{
"id": "1602.03409_all_0",
"text": " Tremendous progress has been made in image recognition, primarily due to the availability of large-scale annotated datasets (i.e. ImageNet (1, 2)) and the recent revival of deep convolutional neural networks (CNN) (3, 4). For data-driven learning, large-scale well-annot... |
What factors could the authors have used while deciding the number of specialists to allocate for their task? | Through the results shown in Table 4, the authors saw a general trend that accuracy improved when more specialists covered a particular class [34]. | [
34
] | [
{
"id": "1503.02531_all_0",
"text": " Many insects have a larval form that is optimized for extracting energy and nutrients from the environment and a completely different adult form that is optimized for the very different requirements of traveling and reproduction. In large-scale machine learning, we typi... |
What if we change the order of g and h in equation (1)? | A symmetric function takes n vectors as input and outputs a new vector that is invariant to the input order [21]. For example, the + and * operators are symmetric binary functions [24]. | [
21,
24
] | [
{
"id": "1612.00593_all_0",
"text": " In this paper we explore deep learning architectures capable of reasoning about 3D geometric data such as point clouds or meshes. Typical convolutional architectures require highly regular input data formats, like those of image grids or 3D voxels, in order to perform w... |
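A minimal illustration of why the order of g and h matters: h is applied per point, and g must be the symmetric aggregator (here an elementwise max, as in PointNet), which makes the result invariant to the ordering of the input points. The toy per-point feature map h below is made up.

```python
import numpy as np

def set_feature(points, h, g=lambda feats: feats.max(axis=0)):
    """f(x1..xn) = g(h(x1), ..., h(xn)): apply h per point, then a
    symmetric aggregation g (elementwise max), so the output does not
    depend on the order of the input points."""
    return g(np.stack([h(p) for p in points]))

h = lambda p: np.array([p.sum(), p.prod()])   # toy per-point feature map
pts = np.array([[1.0, 2.0], [3.0, 4.0], [0.5, 0.5]])
out1 = set_feature(pts, h)
out2 = set_feature(pts[::-1], h)              # reversed order, same set
```

Swapping g and h would apply the aggregator before the per-point transform, destroying this permutation invariance in general.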
How does the choice of layers in which to downsample affect the size of activation maps? | Downsampling aims to collect summary statistics of different regions of a feature map; it can be implemented with stride >1 in convolution or pooling layers, which affects the size of the activation maps [15]. | [
15
] | [
{
"id": "1602.07360_all_0",
"text": " Much of the recent research on deep convolutional neural networks (CNNs) has focused on increasing accuracy on computer vision datasets. For a given accuracy level, there typically exist multiple CNN architectures that achieve that accuracy level. Given equivalent accur... |
What are examples where we have annotation holes? | Lexical approaches like BM25 and docT5query have a rather low Hole@10 value of 64% and 28%, indicating that the annotation pool contained the top-hits from lexical retrieval systems [40]. | [
40
] | [
{
"id": "2104.08663_all_0",
"text": " Major natural language processing (NLP) problems rely on a practical and efficient retrieval component as a first step to find relevant information. Challenging problems include open-domain question-answering , claim-verification , duplicate question detection , and man... |
Why didn't the authors try the listening test for the samples from non-zero, realistic explicit planning, like other performance rendering studies? | The authors did not try the listening test for the samples from non-zero, realistic explicit planning for the following reason [31]. Such realistic explicit planning would have to be inserted by the user, or inferred from the posterior distribution with respect to the ground-truth data (e.g., Classical music with various musical expressions), but the existing expressions may already be constrained by the written guidelines [33]. The written expression can be a strong bias to the listeners, so that a new expression that goes against the original can be perceived as awkward regardless of how natural the expression itself is [21]. | [
31,
33,
21
] | [
{
"id": "2208.14867_all_0",
"text": " Computational modeling of expressive music performance focuses on mimicking human behaviors that convey the music (1, 2). For piano performance, one common task is to render an expressive performance from a quantized musical score. It aims to reproduce the loudness and ... |
How are the Zipf distribution and the Uniform distribution different? | The Zipf distribution consists of words which are picked with a unigram probability that follows Zipf's law [34]. | [
34
] | [
{
"id": "2210.12302_all_0",
"text": " Pretrained Language Models (LMs) have shown singular succcess on a range of natural language understandings tasks, to the extent that they have become foundational for contemporary NLP systems. Several works have investigated why pretraining works so well Warstadt et al... |
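The contrast can be sketched by constructing both distributions over a toy vocabulary. The vocabulary size and Zipf exponent below are illustrative, not from the paper: under Zipf's law the probability of the word at rank r is proportional to 1/r^s, whereas the uniform distribution gives every word the same probability.

```python
import numpy as np

def zipf_unigram_probs(vocab_size, s=1.0):
    """Unigram probabilities following Zipf's law: p(rank r) ~ 1 / r^s."""
    ranks = np.arange(1, vocab_size + 1)
    weights = 1.0 / ranks ** s
    return weights / weights.sum()

p_zipf = zipf_unigram_probs(5)      # heavily skewed toward low ranks
p_uniform = np.full(5, 1.0 / 5)     # every word equally likely
```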