| aid (string, 9-15 chars) | mid (string, 7-10 chars) | abstract (string, 78-2.56k chars) | related_work (string, 92-1.77k chars) | ref_abstract (dict) |
|---|---|---|---|---|
1811.01533
|
2898843852
|
Transfer learning for deep neural networks is the process of first training a base network on a source dataset, and then transferring the learned features (the network’s weights) to a second network to be trained on a target dataset. This idea has been shown to improve deep neural networks’ generalization capabilities in many computer vision tasks such as image recognition and object localization. Apart from these applications, deep Convolutional Neural Networks (CNNs) have also recently gained popularity in the Time Series Classification (TSC) community. However, unlike for image recognition problems, transfer learning techniques have not yet been thoroughly investigated for the TSC task. This is surprising, as the accuracy of deep learning models for TSC could potentially be improved if the model is fine-tuned from a pre-trained neural network instead of being trained from scratch. In this paper, we fill this gap by investigating how to transfer deep CNNs for the TSC task. To evaluate the potential of transfer learning, we performed extensive experiments using the UCR archive, the largest publicly available TSC benchmark, containing 85 datasets. For each dataset in the archive, we pre-trained a model and then fine-tuned it on the other datasets, resulting in 7140 different deep neural networks. These experiments revealed that transfer learning can improve or degrade the model’s predictions depending on the dataset used for transfer. Therefore, in an effort to predict the best source dataset for a given target dataset, we propose a new method relying on Dynamic Time Warping to measure inter-dataset similarities. We describe how our method can guide the transfer to choose the best source dataset, leading to an improvement in accuracy on 71 out of 85 datasets.
|
Now that we have established the necessary definitions, we will dive into the recent applications of transfer learning to time series data mining tasks. Transfer learning is sometimes confused with the domain adaptation approach @cite_28 @cite_6 . The main difference with the latter method is that the model is jointly trained on the source and target datasets @cite_34 ; the target instances are used during training in order to minimize the discrepancy between the source's and the target's instances. In @cite_45 , a domain adaptation approach was proposed to predict human indoor occupancy based on the carbon dioxide concentration in the room. In @cite_47 , the generative capabilities of hidden Markov models were used in a domain adaptation approach to recognize human activities based on a sensor network.
|
{
"cite_N": [
"@cite_28",
"@cite_6",
"@cite_45",
"@cite_47",
"@cite_34"
],
"mid": [
"2165698076",
"2159291411",
"",
"2139975922",
"2395579298"
],
"abstract": [
"A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research.",
"Recent studies reveal that a deep neural network can learn transferable features which generalize well to novel tasks for domain adaptation. However, as deep features eventually transition from general to specific along the network, the feature transferability drops significantly in higher layers with increasing domain discrepancy. Hence, it is important to formally reduce the dataset bias and enhance the transferability in task-specific layers. In this paper, we propose a new Deep Adaptation Network (DAN) architecture, which generalizes deep convolutional neural network to the domain adaptation scenario. In DAN, hidden representations of all task-specific layers are embedded in a reproducing kernel Hilbert space where the mean embeddings of different domain distributions can be explicitly matched. The domain discrepancy is further reduced using an optimal multikernel selection method for mean embedding matching. DAN can learn transferable features with statistical guarantees, and can scale linearly by unbiased estimate of kernel embedding. Extensive empirical evidence shows that the proposed architecture yields state-of-the-art image classification error rates on standard domain adaptation benchmarks.",
"",
"Activities of daily living are good indicators of the health status of elderly. Therefore, automating the monitoring of these activities is a crucial step in future care giving. However, many models for activity recognition rely on labeled examples of activities for learning the model parameters. Due to the high variability of different contexts, parameters learned for one context can not automatically be used in another. In this paper, we present a method that allows us to transfer knowledge of activity recognition from one context to the next, a task called transfer learning. We show the effectiveness of our method using real world datasets.",
"Machine learning and data mining techniques have been used in numerous real-world applications. An assumption of traditional machine learning methodologies is the training data and testing data are taken from the same domain, such that the input feature space and data distribution characteristics are the same. However, in some real-world machine learning scenarios, this assumption does not hold. There are cases where training data is expensive or difficult to collect. Therefore, there is a need to create high-performance learners trained with more easily obtained data from different domains. This methodology is referred to as transfer learning. This survey paper formally defines transfer learning, presents information on current solutions, and reviews applications applied to transfer learning. Lastly, there is information listed on software downloads for various transfer learning solutions and a discussion of possible future research work. The transfer learning solutions surveyed are independent of data size and can be applied to big data environments."
]
}
|
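The abstract above proposes Dynamic Time Warping (DTW) to measure inter-dataset similarity and pick the best source dataset for transfer. A minimal sketch of the classic DTW recurrence, together with one simple nearest-prototype similarity measure; the function names and the prototype-based aggregation are illustrative assumptions, not the authors' exact method:

```python
import numpy as np

def dtw_distance(x, y):
    """Classic O(n*m) dynamic time warping between two 1-D series,
    using a squared point-wise cost and the three-way recurrence."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (x[i - 1] - y[j - 1]) ** 2
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(np.sqrt(D[n, m]))

def dataset_similarity(source_prototypes, target_prototypes):
    """Average DTW from each target class prototype to its nearest
    source class prototype: smaller means a more promising source."""
    return sum(
        min(dtw_distance(t, s) for s in source_prototypes)
        for t in target_prototypes
    ) / len(target_prototypes)
```

Ranking all candidate source datasets by `dataset_similarity` and transferring from the closest one mirrors the selection strategy the abstract describes.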
1811.01533
|
2898843852
|
Transfer learning for deep neural networks is the process of first training a base network on a source dataset, and then transferring the learned features (the network’s weights) to a second network to be trained on a target dataset. This idea has been shown to improve deep neural networks’ generalization capabilities in many computer vision tasks such as image recognition and object localization. Apart from these applications, deep Convolutional Neural Networks (CNNs) have also recently gained popularity in the Time Series Classification (TSC) community. However, unlike for image recognition problems, transfer learning techniques have not yet been thoroughly investigated for the TSC task. This is surprising, as the accuracy of deep learning models for TSC could potentially be improved if the model is fine-tuned from a pre-trained neural network instead of being trained from scratch. In this paper, we fill this gap by investigating how to transfer deep CNNs for the TSC task. To evaluate the potential of transfer learning, we performed extensive experiments using the UCR archive, the largest publicly available TSC benchmark, containing 85 datasets. For each dataset in the archive, we pre-trained a model and then fine-tuned it on the other datasets, resulting in 7140 different deep neural networks. These experiments revealed that transfer learning can improve or degrade the model’s predictions depending on the dataset used for transfer. Therefore, in an effort to predict the best source dataset for a given target dataset, we propose a new method relying on Dynamic Time Warping to measure inter-dataset similarities. We describe how our method can guide the transfer to choose the best source dataset, leading to an improvement in accuracy on 71 out of 85 datasets.
|
Perhaps the recent work in @cite_41 is the closest to ours in terms of using transfer learning to improve the accuracy of deep neural networks for TSC. In that work, the authors designed a CNN with an attention mechanism to encode the time series in a supervised manner. Before fine-tuning the model on a target dataset, it is first jointly pre-trained on several source datasets whose themes @cite_40 differ from the target dataset's theme, which limits the choice of the source dataset to only one. Additionally, unlike @cite_41 , we take a pre-designed deep learning model without modifying it or adding regularizers. This enables us to attribute any improvement in accuracy solely to the transfer learning procedure, which we describe in detail in the following section.
|
{
"cite_N": [
"@cite_41",
"@cite_40"
],
"mid": [
"2799773290",
"2555077524"
],
"abstract": [
"We study the use of a time series encoder to learn representations that are useful on data set types with which it has not been trained on. The encoder is formed of a convolutional neural network whose temporal output is summarized by a convolutional attention mechanism. This way, we obtain a compact, fixed-length representation from longer, variable-length time series. We evaluate the performance of the proposed approach on a well-known time series classification benchmark, considering full adaptation, partial adaptation, and no adaptation of the encoder to the new data type. Results show that such strategies are competitive with the state-of-the-art, often outperforming conceptually-matching approaches. Besides accuracy scores, the facility of adaptation and the efficiency of pre-trained encoders make them an appealing option for the processing of scarcely- or non-labeled time series.",
"In the last 5 years there have been a large number of new time series classification algorithms proposed in the literature. These algorithms have been evaluated on subsets of the 47 data sets in the University of California, Riverside time series classification archive. The archive has recently been expanded to 85 data sets, over half of which have been donated by researchers at the University of East Anglia. Aspects of previous evaluations have made comparisons between algorithms difficult. For example, several different programming languages have been used, experiments involved a single train test split and some used normalised data whilst others did not. The relaunch of the archive provides a timely opportunity to thoroughly evaluate algorithms on a larger number of datasets. We have implemented 18 recently proposed algorithms in a common Java framework and compared them against two standard benchmark classifiers (and each other) by performing 100 resampling experiments on each of the 85 datasets. We use these results to test several hypotheses relating to whether the algorithms are significantly more accurate than the benchmarks and each other. Our results indicate that only nine of these algorithms are significantly more accurate than both benchmarks and that one classifier, the collective of transformation ensembles, is significantly more accurate than all of the others. All of our experiments and results are reproducible: we release all of our code, results and experimental details and we hope these experiments form the basis for more robust testing of new algorithms in the future."
]
}
|
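The related work above contrasts joint pre-training with the plain pre-train-then-fine-tune recipe. In weight terms, that recipe amounts to copying every transferable layer and re-initializing only the task-specific output layer, whose width depends on the target's class count. A toy numpy sketch; `init_model` and `transfer` are hypothetical names for exposition, not code from either paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def init_model(n_classes, n_filters=8, kernel=5):
    """Toy 1-D CNN parameters: one conv layer plus one output layer."""
    return {
        "conv_w": rng.standard_normal((n_filters, kernel)),
        "out_w": rng.standard_normal((n_filters, n_classes)),
    }

def transfer(source_model, n_target_classes):
    """Keep the pre-trained convolutional filters; re-initialize only
    the output layer to match the target dataset's number of classes."""
    n_filters = source_model["conv_w"].shape[0]
    return {
        "conv_w": source_model["conv_w"].copy(),  # transferred weights
        "out_w": rng.standard_normal((n_filters, n_target_classes)),  # fresh head
    }
```

Fine-tuning then continues training all of the target model's weights on the target dataset, rather than starting from a random initialization.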
1811.01044
|
2898756916
|
Objective: to develop quantitative methods for the clinical interpretation of the ballistocardiogram (BCG), a signal generated by the repetitive motion of the human body due to sudden ejection of blood into the great vessels with each heart beat. Methods: a closed-loop mathematical model of the cardiovascular system is proposed to theoretically simulate the mechanisms generating the BCG signal, which is then compared with the signal acquired via accelerometry on a suspended bed. Results: simulated arterial pressure waveforms and ventricular functions are in very good qualitative and quantitative agreement with those reported in the clinical literature. The simulated BCG signal exhibits the typical I, J, K, L, M and N peaks that characterize BCG signals measured experimentally, and its comparison with experimental measurements is very satisfactory both qualitatively and quantitatively. Conclusion: the proposed closed-loop model can reproduce the predominant features of BCG signals on the basis of fundamental mechanisms in cardiovascular physiology. Significance: this work provides a quantitative framework for the clinical interpretation of BCG signals. The present study considers a healthy human body and will be extended to include variability among individuals and to simulate pathological conditions.
|
The first computer-aided approach for quantitative interpretation of BCG signals was proposed in @cite_45 , where the electric analogy to fluid flow was leveraged to describe the motion of blood through the arterial system during the cardiac cycle and to calculate the resulting BCG signal. Since then, only a few studies have addressed the theoretical interpretation of BCG signals. In @cite_0 , a three-dimensional finite element model of blood flow in the thoracic aorta was used to show that the traction at the vessel wall is of similar magnitude to recorded BCG forces. In @cite_48 , a simplified model based on the equilibrium of forces within the aorta was proposed to show that blood pressure gradients in the ascending and descending aorta are major contributors to the BCG signal.
|
{
"cite_N": [
"@cite_0",
"@cite_48",
"@cite_45"
],
"mid": [
"2160792674",
"2519059642",
"2078318670"
],
"abstract": [
"The ballistocardiogram (BCG) signal represents the movements of the body in response to cardiac ejection of blood. The BCG signal can change considerably under various physiological states; however, little information exists in literature describing how these forces are generated. A physical analysis is presented using a finite element model of thoracic aortic vasculature to quantify forces generated by the blood flow during the cardiac cycle. The traction at the fluid-solid interface of this deformable wall model generates a Central Aortic Force (CAF) which appears of similar magnitude to recorded BCG forces. The increased pulse pressure in an exercise simulation caused a significant increase in CAF, which is consistent with recent BCG measurements in exercise recovery.",
"For more than a century, it has been known that the body recoils each time the heart ejects blood into the arteries. These subtle cardiogenic body movements have been measured with increasingly convenient ballistocardiography (BCG) instruments over the years. A typical BCG measurement shows several waves, most notably the “I”, “J”, and “K” waves. However, the mechanism for the genesis of these waves has remained elusive. We formulated a simple mathematical model of the BCG waveform. We showed that the model could predict the BCG waves as well as physiologic timings and amplitudes of the major waves. The validated model reveals that the principal mechanism for the genesis of the BCG waves is blood pressure gradients in the ascending and descending aorta. This new mechanistic insight may be exploited to allow BCG to realize its potential for unobtrusive monitoring and diagnosis of cardiovascular health and disease.",
"A brief review is given of a mathematical model of the systemic arterial tree that was developed to find a quantitative interpretation of the human longitudinal ballistocardiogram. Derivation and description are presented of an electrical analog of the left ventricle and the systemic arterial tree that has fewer limitations than the mathematical model. Electrical equivalents of blood pressures, blood flows, vascular impedances, plethysmograms, and ballistocardiogram can be easily measured as a function of time, and in absolute value. Samples of such results are reproduced and compared with data reported in the literature."
]
}
|
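The BCG record above simulates arterial pressures with a closed-loop cardiovascular model. As a far simpler illustration of the same lumped-parameter modelling style (an expository assumption, not the paper's model), the two-element Windkessel equation C dP/dt = Q(t) - P/R can be integrated with forward Euler:

```python
import numpy as np

def windkessel(q_in, dt=1e-3, R=1.0, C=1.0, p0=80.0):
    """Forward-Euler integration of the two-element Windkessel model
    C * dP/dt = Q(t) - P / R, with R the peripheral resistance and C
    the arterial compliance. q_in is the inflow sampled every dt."""
    p = np.empty(len(q_in))
    p_prev = p0
    for k, q in enumerate(q_in):
        p_prev = p_prev + dt * (q - p_prev / R) / C
        p[k] = p_prev
    return p
```

During diastole (zero inflow) the computed pressure decays exponentially with time constant RC, which is the qualitative behaviour such lumped models are meant to capture.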
1811.00839
|
2898912444
|
Directed graphs have been widely used in Community Question Answering services (CQAs) to model asymmetric relationships among different types of nodes in CQA graphs, e.g., question, answer, user. Asymmetric transitivity is an essential property of directed graphs, since it can play an important role in downstream graph inference and analysis. Question difficulty and user expertise follow the characteristic of asymmetric transitivity. Maintaining such properties, while reducing the graph to a lower dimensional vector embedding space, has been the focus of much recent research. In this paper, we tackle the challenge of directed graph embedding with asymmetric transitivity preservation and then leverage the proposed embedding method to solve a fundamental task in CQAs: how to appropriately route and assign newly posted questions to users with the suitable expertise and interest in CQAs. The technique incorporates graph hierarchy and reachability information naturally by relying on a non-linear transformation that operates on the core reachability and implicit hierarchy within such graphs. Subsequently, the methodology levers a factorization-based approach to generate two embedding vectors for each node within the graph, to capture the asymmetric transitivity. Extensive experiments show that our framework consistently and significantly outperforms the state-of-the-art baselines on two diverse real-world tasks: link prediction, and question difficulty estimation and expert finding in online forums like Stack Exchange. Particularly, our framework can support inductive embedding learning for newly posted questions (unseen nodes during training), and therefore can properly route and assign these kinds of questions to experts in CQAs.
|
Graph embedding approaches fall into three broad categories, as classified by @cite_20 : (1) factorization based, (2) random walk based @cite_0 @cite_23 @cite_5 , and (3) deep learning based @cite_18 @cite_8 @cite_17 . Our proposed ATP is factorization based, and hence we focus on factorization-based techniques in this section.
|
{
"cite_N": [
"@cite_18",
"@cite_8",
"@cite_0",
"@cite_23",
"@cite_5",
"@cite_20",
"@cite_17"
],
"mid": [
"2574817444",
"2743104969",
"2154851992",
"2242161203",
"2799012401",
"",
"2949435814"
],
"abstract": [
"Information network mining often requires examination of linkage relationships between nodes for analysis. Recently, network representation has emerged to represent each node in a vector format, embedding network structure, so off-the-shelf machine learning methods can be directly applied for analysis. To date, existing methods only focus on one aspect of node information and cannot leverage node labels. In this paper, we propose TriDNR, a tri-party deep network representation model, using information from three parties: node structure, node content, and node labels (if available) to jointly learn optimal node representation. TriDNR is based on our new coupled deep natural language module, whose learning is enforced at three levels: (1) at the network structure level, TriDNR exploits inter-node relationship by maximizing the probability of observing surrounding nodes given a node in random walks; (2) at the node content level, TriDNR captures node-word correlation by maximizing the co-occurrence of word sequence given a node; and (3) at the node label level, TriDNR models label-word correspondence by maximizing the probability of word sequence given a class label. The tri-party information is jointly fed into the neural network model to mutually enhance each other to learn optimal representation, and results in up to 79 classification accuracy gain, compared to state-of-the-art methods.",
"We study the problem of representation learning in heterogeneous networks. Its unique challenges come from the existence of multiple types of nodes and links, which limit the feasibility of the conventional network embedding techniques. We develop two scalable representation learning models, namely metapath2vec and metapath2vec++. The metapath2vec model formalizes meta-path-based random walks to construct the heterogeneous neighborhood of a node and then leverages a heterogeneous skip-gram model to perform node embeddings. The metapath2vec++ model further enables the simultaneous modeling of structural and semantic correlations in heterogeneous networks. Extensive experiments show that metapath2vec and metapath2vec++ are able to not only outperform state-of-the-art embedding models in various heterogeneous network mining tasks, such as node classification, clustering, and similarity search, but also discern the structural and semantic correlations between diverse network objects.",
"We present DeepWalk, a novel approach for learning latent representations of vertices in a network. These latent representations encode social relations in a continuous vector space, which is easily exploited by statistical models. DeepWalk generalizes recent advancements in language modeling and unsupervised feature learning (or deep learning) from sequences of words to graphs. DeepWalk uses local information obtained from truncated random walks to learn latent representations by treating walks as the equivalent of sentences. We demonstrate DeepWalk's latent representations on several multi-label network classification tasks for social networks such as BlogCatalog, Flickr, and YouTube. Our results show that DeepWalk outperforms challenging baselines which are allowed a global view of the network, especially in the presence of missing information. DeepWalk's representations can provide F1 scores up to 10 higher than competing methods when labeled data is sparse. In some experiments, DeepWalk's representations are able to outperform all baseline methods while using 60 less training data. DeepWalk is also scalable. It is an online learning algorithm which builds useful incremental results, and is trivially parallelizable. These qualities make it suitable for a broad class of real world applications such as network classification, and anomaly detection.",
"Representation learning has shown its effectiveness in many tasks such as image classification and text mining. Network representation learning aims at learning distributed vector representation for each vertex in a network, which is also increasingly recognized as an important aspect for network analysis. Most network representation learning methods investigate network structures for learning. In reality, network vertices contain rich information (such as text), which cannot be well applied with algorithmic frameworks of typical representation learning methods. By proving that DeepWalk, a state-of-the-art network representation method, is actually equivalent to matrix factorization (MF), we propose text-associated DeepWalk (TADW). TADW incorporates text features of vertices into network representation learning under the framework of matrix factorization. We evaluate our method and various baseline methods by applying them to the task of multi-class classification of vertices. The experimental results show that our method outperforms other baselines on all three datasets, especially when networks are noisy and training ratio is small. The source code of this paper can be obtained from https://github.com/albertyang33/TADW.",
"This work develops a representation learning method for bipartite networks. While existing works have developed various embedding methods for network data, they have primarily focused on homogeneous networks in general and overlooked the special properties of bipartite networks. As such, these methods can be suboptimal for embedding bipartite networks. In this paper, we propose a new method named BiNE, short for Bipartite Network Embedding, to learn the vertex representations for bipartite networks. By performing biased random walks purposefully, we generate vertex sequences that can well preserve the long-tail distribution of vertices in the original bipartite network. We then propose a novel optimization framework by accounting for both the explicit relations (i.e., observed links) and implicit relations (i.e., unobserved but transitive links) in learning the vertex representations. We conduct extensive experiments on several real datasets covering the tasks of link prediction (classification), recommendation (personalized ranking), and visualization. Both quantitative results and qualitative analysis verify the effectiveness and rationality of our BiNE method.",
"",
"In this paper, we propose a novel framework, called Semi-supervised Embedding in Attributed Networks with Outliers (SEANO), to learn a low-dimensional vector representation that systematically captures the topological proximity, attribute affinity and label similarity of vertices in a partially labeled attributed network (PLAN). Our method is designed to work in both transductive and inductive settings while explicitly alleviating noise effects from outliers. Experimental results on various datasets drawn from the web, text and image domains demonstrate the advantages of SEANO over state-of-the-art methods in semi-supervised classification under transductive as well as inductive settings. We also show that a subset of parameters in SEANO is interpretable as outlier score and can significantly outperform baseline methods when applied for detecting network outliers. Finally, we present the use of SEANO in a challenging real-world setting -- flood mapping of satellite images and show that it is able to outperform modern remote sensing algorithms for this task."
]
}
|
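ATP, like the factorization-based family discussed above, derives two embedding vectors per node so that asymmetric proximities are preserved. A compact sketch of that general recipe in the spirit of HOPE with Katz proximity (dense linear algebra for clarity; ATP's actual construction relies on reachability and hierarchy instead):

```python
import numpy as np

def katz_embeddings(A, beta=0.1, dim=2):
    """Factor the Katz proximity S = (I - beta*A)^-1 @ (beta*A) of a
    directed adjacency matrix A via SVD into a source vector and a
    target vector per node, so src[u] . tgt[v] ~ S[u, v] (asymmetric)."""
    n = A.shape[0]
    S = np.linalg.inv(np.eye(n) - beta * A) @ (beta * A)
    U, sigma, Vt = np.linalg.svd(S)
    src = U[:, :dim] * np.sqrt(sigma[:dim])     # outgoing ("source") role
    tgt = Vt[:dim, :].T * np.sqrt(sigma[:dim])  # incoming ("target") role
    return src, tgt
```

On the chain 0 -> 1 -> 2 the reconstruction src @ tgt.T keeps the proximity from 0 to 1 positive while the proximity from 1 to 0 stays zero, which is exactly the asymmetric transitivity that a single symmetric vector per node cannot express.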
1811.00839
|
2898912444
|
Directed graphs have been widely used in Community Question Answering services (CQAs) to model asymmetric relationships among different types of nodes in CQA graphs, e.g., question, answer, user. Asymmetric transitivity is an essential property of directed graphs, since it can play an important role in downstream graph inference and analysis. Question difficulty and user expertise follow the characteristic of asymmetric transitivity. Maintaining such properties, while reducing the graph to a lower dimensional vector embedding space, has been the focus of much recent research. In this paper, we tackle the challenge of directed graph embedding with asymmetric transitivity preservation and then leverage the proposed embedding method to solve a fundamental task in CQAs: how to appropriately route and assign newly posted questions to users with the suitable expertise and interest in CQAs. The technique incorporates graph hierarchy and reachability information naturally by relying on a non-linear transformation that operates on the core reachability and implicit hierarchy within such graphs. Subsequently, the methodology levers a factorization-based approach to generate two embedding vectors for each node within the graph, to capture the asymmetric transitivity. Extensive experiments show that our framework consistently and significantly outperforms the state-of-the-art baselines on two diverse real-world tasks: link prediction, and question difficulty estimation and expert finding in online forums like Stack Exchange. Particularly, our framework can support inductive embedding learning for newly posted questions (unseen nodes during training), and therefore can properly route and assign these kinds of questions to experts in CQAs.
|
It has recently been shown that many popular random walk based approaches, such as DeepWalk @cite_0 , LINE @cite_31 , and node2vec @cite_27 , can be unified into a matrix factorization framework with closed forms @cite_38 . However, these methods ignore the asymmetric nature of the path sampling procedure and train the model symmetrically, which restricts their applications. Since node pairs two hops away are regarded as negative labels, LINE can only preserve symmetric second-order proximity when applied to directed graphs @cite_14 .
|
{
"cite_N": [
"@cite_38",
"@cite_14",
"@cite_0",
"@cite_27",
"@cite_31"
],
"mid": [
"2761896323",
"2605234117",
"2154851992",
"2366141641",
"1888005072"
],
"abstract": [
"Since the invention of word2vec, the skip-gram model has significantly advanced the research of network embedding, such as the recent emergence of the DeepWalk, LINE, PTE, and node2vec approaches. In this work, we show that all of the aforementioned models with negative sampling can be unified into the matrix factorization framework with closed forms. Our analysis and proofs reveal that: (1) DeepWalk empirically produces a low-rank transformation of a network's normalized Laplacian matrix; (2) LINE, in theory, is a special case of DeepWalk when the size of vertices' context is set to one; (3) As an extension of LINE, PTE can be viewed as the joint factorization of multiple networks' Laplacians; (4) node2vec is factorizing a matrix related to the stationary distribution and transition probability tensor of a 2nd-order random walk. We further provide the theoretical connections between skip-gram based network embedding algorithms and the theory of graph Laplacian. Finally, we present the NetMF method as well as its approximation algorithm for computing network embedding. Our method offers significant improvements over DeepWalk and LINE for conventional network mining tasks. This work lays the theoretical foundation for skip-gram based network embedding methods, leading to a better understanding of latent network representation learning.",
"Graph Embedding methods are aimed at mapping each vertex into a low dimensional vector space, which preserves certain structural relationships among the vertices in the original graph. Recently, several works have been proposed to learn embeddings based on sampled paths from the graph, e.g., DeepWalk, Line, Node2Vec. However, their methods only preserve symmetric proximities, which could be insufficient in many applications, even the underlying graph is undirected. Besides, they lack of theoretical analysis of what exactly the relationships they preserve in their embedding space. In this paper, we propose an asymmetric proximity preserving (APP) graph embedding method via random walk with restart, which captures both asymmetric and high-order similarities between node pairs. We give theoretical analysis that our method implicitly preserves the Rooted PageRank score for any two vertices. We conduct extensive experiments on tasks of link prediction and node recommendation on open source datasets, as well as online recommendation services in Alibaba Group, in which the training graph has over 290 million vertices and 18 billion edges, showing our method to be highly scalable and effective.",
"We present DeepWalk, a novel approach for learning latent representations of vertices in a network. These latent representations encode social relations in a continuous vector space, which is easily exploited by statistical models. DeepWalk generalizes recent advancements in language modeling and unsupervised feature learning (or deep learning) from sequences of words to graphs. DeepWalk uses local information obtained from truncated random walks to learn latent representations by treating walks as the equivalent of sentences. We demonstrate DeepWalk's latent representations on several multi-label network classification tasks for social networks such as BlogCatalog, Flickr, and YouTube. Our results show that DeepWalk outperforms challenging baselines which are allowed a global view of the network, especially in the presence of missing information. DeepWalk's representations can provide F1 scores up to 10 higher than competing methods when labeled data is sparse. In some experiments, DeepWalk's representations are able to outperform all baseline methods while using 60 less training data. DeepWalk is also scalable. It is an online learning algorithm which builds useful incremental results, and is trivially parallelizable. These qualities make it suitable for a broad class of real world applications such as network classification, and anomaly detection.",
"Prediction tasks over nodes and edges in networks require careful effort in engineering features used by learning algorithms. Recent research in the broader field of representation learning has led to significant progress in automating prediction by learning the features themselves. However, present feature learning approaches are not expressive enough to capture the diversity of connectivity patterns observed in networks. Here we propose node2vec, an algorithmic framework for learning continuous feature representations for nodes in networks. In node2vec, we learn a mapping of nodes to a low-dimensional space of features that maximizes the likelihood of preserving network neighborhoods of nodes. We define a flexible notion of a node's network neighborhood and design a biased random walk procedure, which efficiently explores diverse neighborhoods. Our algorithm generalizes prior work which is based on rigid notions of network neighborhoods, and we argue that the added flexibility in exploring neighborhoods is the key to learning richer representations. We demonstrate the efficacy of node2vec over existing state-of-the-art techniques on multi-label classification and link prediction in several real-world networks from diverse domains. Taken together, our work represents a new way for efficiently learning state-of-the-art task-independent representations in complex networks.",
"This paper studies the problem of embedding very large information networks into low-dimensional vector spaces, which is useful in many tasks such as visualization, node classification, and link prediction. Most existing graph embedding methods do not scale for real world information networks which usually contain millions of nodes. In this paper, we propose a novel network embedding method called LINE, which is suitable for arbitrary types of information networks: undirected, directed, and/or weighted. The method optimizes a carefully designed objective function that preserves both the local and global network structures. An edge-sampling algorithm is proposed that addresses the limitation of the classical stochastic gradient descent and improves both the effectiveness and the efficiency of the inference. Empirical experiments prove the effectiveness of the LINE on a variety of real-world information networks, including language networks, social networks, and citation networks. The algorithm is very efficient, which is able to learn the embedding of a network with millions of vertices and billions of edges in a few hours on a typical single machine. The source code of the LINE is available online at https://github.com/tangjianpku/LINE."
]
}
|
1811.00839
|
2898912444
|
Directed graphs have been widely used in Community Question Answering services (CQAs) to model asymmetric relationships among different types of nodes in CQA graphs, e.g., question, answer, user. Asymmetric transitivity is an essential property of directed graphs, since it can play an important role in downstream graph inference and analysis. Question difficulty and user expertise follow the characteristic of asymmetric transitivity. Maintaining such properties, while reducing the graph to a lower dimensional vector embedding space, has been the focus of much recent research. In this paper, we tackle the challenge of directed graph embedding with asymmetric transitivity preservation and then leverage the proposed embedding method to solve a fundamental task in CQAs: how to appropriately route and assign newly posted questions to users with the suitable expertise and interest in CQAs. The technique incorporates graph hierarchy and reachability information naturally by relying on a non-linear transformation that operates on the core reachability and implicit hierarchy within such graphs. Subsequently, the methodology leverages a factorization-based approach to generate two embedding vectors for each node within the graph, to capture the asymmetric transitivity. Extensive experiments show that our framework consistently and significantly outperforms the state-of-the-art baselines on two diverse real-world tasks: link prediction, and question difficulty estimation and expert finding in online forums like Stack Exchange. Particularly, our framework can support inductive embedding learning for newly posted questions (unseen nodes during training), and therefore can properly route and assign these kinds of questions to experts in CQAs.
|
Higher-order proximity is considered by many traditional similarity measurements and has been shown to be effective in many real-world tasks. HOPE @cite_33 proposed to use high-order proximities (AA, CN, RPR, and KI) to approximate asymmetric transitivity. Theoretical analysis shows that APP implicitly preserves the RPR @cite_14 . However, cycles in directed graphs, as shown in Figure , can hurt the asymmetric transitivity preservation of HOPE and APP, and hence severely limit the capability of the learned embedding vectors in graph inference and analysis.
|
{
"cite_N": [
"@cite_14",
"@cite_33"
],
"mid": [
"2605234117",
"2387462954"
],
"abstract": [
"Graph Embedding methods are aimed at mapping each vertex into a low dimensional vector space, which preserves certain structural relationships among the vertices in the original graph. Recently, several works have been proposed to learn embeddings based on sampled paths from the graph, e.g., DeepWalk, Line, Node2Vec. However, their methods only preserve symmetric proximities, which could be insufficient in many applications, even when the underlying graph is undirected. Besides, they lack theoretical analysis of exactly what relationships they preserve in their embedding space. In this paper, we propose an asymmetric proximity preserving (APP) graph embedding method via random walk with restart, which captures both asymmetric and high-order similarities between node pairs. We give theoretical analysis that our method implicitly preserves the Rooted PageRank score for any two vertices. We conduct extensive experiments on tasks of link prediction and node recommendation on open source datasets, as well as online recommendation services in Alibaba Group, in which the training graph has over 290 million vertices and 18 billion edges, showing our method to be highly scalable and effective.",
"Graph embedding algorithms embed a graph into a vector space where the structure and the inherent properties of the graph are preserved. The existing graph embedding methods cannot preserve the asymmetric transitivity well, which is a critical property of directed graphs. Asymmetric transitivity depicts the correlation among directed edges, that is, if there is a directed path from u to v, then there is likely a directed edge from u to v. Asymmetric transitivity can help in capturing structures of graphs and recovering from partially observed graphs. To tackle this challenge, we propose the idea of preserving asymmetric transitivity by approximating high-order proximity which are based on asymmetric transitivity. In particular, we develop a novel graph embedding algorithm, High-Order Proximity preserved Embedding (HOPE for short), which is scalable to preserve high-order proximities of large scale graphs and capable of capturing the asymmetric transitivity. More specifically, we first derive a general formulation that cover multiple popular high-order proximity measurements, then propose a scalable embedding algorithm to approximate the high-order proximity measurements based on their general formulation. Moreover, we provide a theoretical upper bound on the RMSE (Root Mean Squared Error) of the approximation. Our empirical experiments on a synthetic dataset and three real-world datasets demonstrate that HOPE can approximate the high-order proximities significantly better than the state-of-the-art algorithms and outperform the state-of-the-art algorithms in tasks of reconstruction, link prediction and vertex recommendation."
]
}
|
1811.00926
|
2616029431
|
Modern websites include various types of third-party content such as JavaScript, images, stylesheets, and Flash objects in order to create interactive user interfaces. In addition to explicit inclusion of third-party content by website publishers, ISPs and browser extensions are hijacking web browsing sessions with increasing frequency to inject third-party content (e.g., ads). However, third-party content can also introduce security risks to users of these websites, unbeknownst to both website operators and users. Because of the often highly dynamic nature of these inclusions as well as the use of advanced cloaking techniques in contemporary malware, it is exceedingly difficult to preemptively recognize and block inclusions of malicious third-party content before it has the chance to attack the user’s system.
|
Several recent research projects @cite_7 @cite_1 @cite_11 attempted to improve the security of browsers by isolating browser components in order to minimize data sharing among software components. The main issue with these approaches is that they do not perform any isolation between JavaScript loaded from different domains and the web application itself, letting untrusted scripts access the main web application's code and data. Efforts such as AdJail @cite_12 attempt to protect privacy by isolating ads into an iframe-based sandbox. However, this approach restricts contextually targeted advertising, in which ad scripts need access to the host page's content.
|
{
"cite_N": [
"@cite_1",
"@cite_12",
"@cite_7",
"@cite_11"
],
"mid": [
"1907897959",
"36927914",
"2159079348",
"1705596515"
],
"abstract": [
"Current web browsers are complex, have enormous trusted computing bases, and provide attackers with easy access to modern computer systems. In this paper we introduce the Illinois Browser Operating System (IBOS), a new operating system and a new browser that reduces the trusted computing base for web browsers. In our architecture we expose browser-level abstractions at the lowest software layer, enabling us to remove almost all traditional OS components and services from our trusted computing base by mapping browser abstractions to hardware abstractions directly. We show that this architecture is flexible enough to enable new browser security policies, can still support traditional applications, and adds little overhead to the overall browsing experience.",
"Web publishers frequently integrate third-party advertisements into web pages that also contain sensitive publisher data and end-user personal data. This practice exposes sensitive page content to confidentiality and integrity attacks launched by advertisements. In this paper, we propose a novel framework for addressing security threats posed by third-party advertisements. The heart of our framework is an innovative isolation mechanism that enables publishers to transparently interpose between advertisements and end users. The mechanism supports finegrained policy specification and enforcement, and does not affect the user experience of interactive ads. Evaluation of our framework suggests compatibility with several mainstream ad networks, security from many threats from advertisements and acceptable performance overheads.",
"Current Web browsers are plagued with vulnerabilities, providing hackers with easy access to computer systems via browser-based attacks. Browser security efforts that retrofit existing browsers have had limited success because the design of modern browsers is fundamentally flawed. To enable more secure web browsing, we design and implement a new browser, called the OP Web browser, that attempts to improve the state-of-the-art in browser security. Our overall design approach is to combine operating system design principles with formal methods to design a more secure Web browser by drawing on the expertise of both communities. Our overall design philosophy is to partition the browser into smaller subsystems and make all communication between subsystems simple and explicit. At the core of our design is a small browser kernel that manages the browser subsystems and interposes on all communications between them to enforce our new browser security features. To show the utility of our browser architecture, we design and implement three novel security features. First, we develop novel and flexible security policies that allows us to include plugins within our security framework. Our policy removes the burden of security from plugin writers, and gives plugins the flexibility to use innovative network architectures to deliver content while still maintaining the confidentiality and integrity of our browser, even if attackers compromise the plugin. Second, we use formal methods to prove that the address bar displayed within our browser user interface always shows the correct address for the current Web page. Third, we design and implement a browser-level information-flow tracking system to enable post-mortem analysis of browser-based attacks. 
If an attacker is able to compromise our browser, we highlight the subset of total activity that is causally related to the attack, thus allowing users and system administrators to determine easily which Web site led to the compromise and to assess the damage of a successful attack. To evaluate our design, we implemented OP and tested both performance and filesystem impact. To test performance, we measure latency to verify that OP's performance penalty from security features is minimal from a user's perspective. Our experiments show that the speed of the OP browser is, on average, comparable to Firefox and that the audit log occupies around 80 KB per page.",
"Original web browsers were applications designed to view static web content. As web sites evolved into dynamic web applications that compose content from multiple web sites, browsers have become multiprincipal operating environments with resources shared among mutually distrusting web site principals. Nevertheless, no existing browsers, including new architectures like IE 8, Google Chrome, and OP, have a multi-principal operating system construction that gives a browser-based OS the exclusive control to manage the protection of all system resources among web site principals. In this paper, we introduce Gazelle, a secure web browser constructed as a multi-principal OS. Gazelle's browser kernel is an operating system that exclusively manages resource protection and sharing across web site principals. This construction exposes intricate design issues that no previous work has identified, such as crossprotection-domain display and events protection. We elaborate on these issues and provide comprehensive solutions. Our prototype implementation and evaluation experience indicates that it is realistic to turn an existing browser into a multi-principal OS that yields significantly stronger security and robustness with acceptable performance."
]
}
|
1811.00926
|
2616029431
|
Modern websites include various types of third-party content such as JavaScript, images, stylesheets, and Flash objects in order to create interactive user interfaces. In addition to explicit inclusion of third-party content by website publishers, ISPs and browser extensions are hijacking web browsing sessions with increasing frequency to inject third-party content (e.g., ads). However, third-party content can also introduce security risks to users of these websites, unbeknownst to both website operators and users. Because of the often highly dynamic nature of these inclusions as well as the use of advanced cloaking techniques in contemporary malware, it is exceedingly difficult to preemptively recognize and block inclusions of malicious third-party content before it has the chance to attack the user’s system.
|
There are multiple approaches to automatically detecting malicious web domains. Madtracer @cite_34 has been proposed to automatically capture malvertising cases, but it is not as precise as our approach in identifying the causal relationships among different domains. EXPOSURE @cite_33 employs passive DNS analysis techniques to detect malicious domains. SpiderWeb @cite_24 detects malicious web pages by crowd-sourcing redirection chains. Segugio @cite_42 tracks new malware-control domain names in very large ISP networks. WebWitness @cite_27 automatically traces back malware download paths to understand attack trends. While these techniques can be used to automatically detect malicious websites and update blacklists, they are not online systems and may not be effective against malicious third-party inclusions, since users expect a certain level of performance while browsing the Web.
|
{
"cite_N": [
"@cite_33",
"@cite_42",
"@cite_24",
"@cite_27",
"@cite_34"
],
"mid": [
"1954903228",
"1498756827",
"2117202485",
"2182421051",
"1985683032"
],
"abstract": [
"The domain name service (DNS) plays an important role in the operation of the Internet, providing a two-way mapping between domain names and their numerical identifiers. Given its fundamental role, it is not surprising that a wide variety of malicious activities involve the domain name service in one way or another. For example, bots resolve DNS names to locate their command and control servers, and spam mails contain URLs that link to domains that resolve to scam servers. Thus, it seems beneficial to monitor the use of the DNS system for signs that indicate that a certain name is used as part of a malicious operation. In this paper, we introduce EXPOSURE, a system that employs large-scale, passive DNS analysis techniques to detect domains that are involved in malicious activity. We use 15 features that we extract from the DNS traffic that allow us to characterize different properties of DNS names and the ways that they are queried. Our experiments with a large, real-world data set consisting of 100 billion DNS requests, and a real-life deployment for two weeks in an ISP show that our approach is scalable and that we are able to automatically identify unknown malicious domains that are misused in a variety of malicious activity (such as for botnet command and control, spamming, and phishing).",
"In this paper, we propose Segugio, a novel defense system that allows for efficiently tracking the occurrence of new malware-control domain names in very large ISP networks. Segugio passively monitors the DNS traffic to build a machine-domain bipartite graph representing who is querying what. After labelling nodes in this query behavior graph that are known to be either benign or malware-related, we propose a novel approach to accurately detect previously unknown malware-control domains. We implemented a proof-of-concept version of Segugio and deployed it in large ISP networks that serve millions of users. Our experimental results show that Segugio can track the occurrence of new malware-control domains with up to 94% true positives (TPs) at less than 0.1% false positives (FPs). In addition, we provide the following results: (1) we show that Segugio can also detect control domains related to new, previously unseen malware families, with 85% TPs at 0.1% FPs, (2) Segugio's detection models learned on traffic from a given ISP network can be deployed into a different ISP network and still achieve very high detection accuracy, (3) new malware-control domains can be detected days or even weeks before they appear in a large commercial domain name blacklist, and (4) we show that Segugio clearly outperforms Notos, a previously proposed domain name reputation system.",
"The web is one of the most popular vectors to spread malware. Attackers lure victims to visit compromised web pages or entice them to click on malicious links. These victims are redirected to sites that exploit their browsers or trick them into installing malicious software using social engineering. In this paper, we tackle the problem of detecting malicious web pages from a novel angle. Instead of looking at particular features of a (malicious) web page, we analyze how a large and diverse set of web browsers reach these pages. That is, we use the browsers of a collection of web users to record their interactions with websites, as well as the redirections they go through to reach their final destinations. We then aggregate the different redirection chains that lead to a specific web page and analyze the characteristics of the resulting redirection graph. As we will show, these characteristics can be used to detect malicious pages. We argue that our approach is less prone to evasion than previous systems, allows us to also detect scam pages that rely on social engineering rather than only those that exploit browser vulnerabilities, and can be implemented efficiently. We developed a system, called SpiderWeb, which implements our proposed approach. We show that this system works well in detecting web pages that deliver malware.",
"Most modern malware download attacks occur via the browser, typically due to social engineering and drive-by downloads. In this paper, we study the \"origin\" of malware download attacks experienced by real network users, with the objective of improving malware download defenses. Specifically, we study the web paths followed by users who eventually fall victim to different types of malware downloads. To this end, we propose a novel incident investigation system, named WebWitness. Our system targets two main goals: 1) automatically trace back and label the sequence of events (e.g., visited web pages) preceding malware downloads, to highlight how users reach attack pages on the web; and 2) leverage these automatically labeled in-the-wild malware download paths to better understand current attack trends, and to develop more effective defenses. We deployed WebWitness on a large academic network for a period of ten months, where we collected and categorized thousands of live malicious download paths. An analysis of this labeled data allowed us to design a new defense against drive-by downloads that rely on injecting malicious content into (hacked) legitimate web pages. For example, we show that by leveraging the incident investigation information output by WebWitness we can decrease the infection rate for this type of drive-by downloads by almost six times, on average, compared to existing URL blacklisting approaches.",
"With the Internet becoming the dominant channel for marketing and promotion, online advertisements are also increasingly used for illegal purposes such as propagating malware, scamming, click frauds, etc. To understand the gravity of these malicious advertising activities, which we call malvertising, we perform a large-scale study through analyzing ad-related Web traces crawled over a three-month period. Our study reveals the rampancy of malvertising: hundreds of top ranking Web sites fell victims and leading ad networks such as DoubleClick were infiltrated. To mitigate this threat, we identify prominent features from malicious advertising nodes and their related content delivery paths, and leverage them to build a new detection system called MadTracer. MadTracer automatically generates detection rules and utilizes them to inspect advertisement delivery processes and detect malvertising activities. Our evaluation shows that MadTracer was capable of capturing a large number of malvertising cases, 15 times as many as Google Safe Browsing and Microsoft Forefront did together, at a low false detection rate. It also detected new attacks, including a type of click-fraud attack that has never been reported before."
]
}
|
1811.00926
|
2616029431
|
Modern websites include various types of third-party content such as JavaScript, images, stylesheets, and Flash objects in order to create interactive user interfaces. In addition to explicit inclusion of third-party content by website publishers, ISPs and browser extensions are hijacking web browsing sessions with increasing frequency to inject third-party content (e.g., ads). However, third-party content can also introduce security risks to users of these websites, unbeknownst to both website operators and users. Because of the often highly dynamic nature of these inclusions as well as the use of advanced cloaking techniques in contemporary malware, it is exceedingly difficult to preemptively recognize and block inclusions of malicious third-party content before it has the chance to attack the user’s system.
|
Another approach is to search and restrict third-party code included in web applications @cite_26 @cite_19 @cite_32 . For example, ADsafe @cite_9 removes dangerous JavaScript features (e.g., eval), enforcing a whitelist of allowed JavaScript functionality considered safe. It is also possible to protect against malicious JavaScript ads by enforcing policies at runtime @cite_4 @cite_31 . For example, @cite_13 introduce a client-side framework that allows web applications to enforce fine-grained security policies for DOM elements. AdSentry @cite_17 provides a shadow JavaScript engine that runs untrusted ad scripts in a sandboxed environment.
|
{
"cite_N": [
"@cite_26",
"@cite_4",
"@cite_9",
"@cite_32",
"@cite_19",
"@cite_31",
"@cite_13",
"@cite_17"
],
"mid": [
"2405466026",
"2121194882",
"",
"2137584523",
"2132733485",
"2032095999",
"2123582298",
"2166406630"
],
"abstract": [
"Publishers wish to sandbox third-party advertisements to protect themselves from malicious advertisements. One promising approach, used by ADsafe, Dojo Secure, and Jacaranda, sandboxes advertisements by statically verifying that their JavaScript conforms to a safe subset of the language. These systems blacklist known dangerous properties that would let advertisements escape the sandbox. Unfortunately, this approach does not prevent advertisements from accessing new methods added to the built-in prototype objects by the hosting page. In this paper, we show that one-third of the Alexa US Top 100 web sites would be exploitable by an ADsafe-verified advertisement. We propose an improved statically verified JavaScript subset that whitelists known-safe properties using namespaces. Our approach maintains the expressiveness and performance of static verification while improving security.",
"This paper introduces a method to control JavaScript execution. The aim is to prevent or modify inappropriate behaviour caused by e.g. malicious injected scripts or poorly designed third-party code. The approach is based on modifying the code so as to make it self-protecting: the protection mechanism (security policy) is embedded into the code itself and intercepts security relevant API calls. The challenges come from the nature of the JavaScript language: any variables in the scope of the program can be redefined, and code can be created and run on-the-fly. This creates potential problems, respectively, for tamper-proofing the protection mechanism, and for ensuring that no security relevant events bypass the protection. Unlike previous approaches to instrument and monitor JavaScript to enforce or adjust behaviour, the solution we propose is lightweight in that (i) it does not require a modified browser, and (ii) it does not require any run-time parsing and transformation of code (including dynamically generated code). As a result, the method has low run-time overhead compared to other methods satisfying (i), and the lack of need for browser modifications means that the policy can even be applied on the server to mitigate some effects of cross-site scripting bugs.",
"",
"Web sites that incorporate untrusted content may use browser- or language-based methods to keep such content from maliciously altering pages, stealing sensitive information, or causing other harm. We study language-based methods for filtering and rewriting JavaScript code, using Yahoo! ADSafe and Facebook FBJS as motivating examples. We explain the core problems by describing previously unknown vulnerabilities and subtleties, and develop a foundation for improved solutions based on an operational semantics of the full ECMA-262 language. We also discuss how to apply our analysis to address the JavaScript isolation problems we discovered.",
"The advent of Web 2.0 has lead to the proliferation of client-side code that is typically written in JavaScript. This code is often combined -- or mashed-up -- with other code and content from disparate, mutually untrusting parties, leading to undesirable security and reliability consequences. This paper proposes GATEKEEPER, a mostly static approach for soundly enforcing security and reliability policies for JavaScript programs. GATEKEEPER is a highly extensible system with a rich, expressive policy language, allowing the hosting site administrator to formulate their policies as succinct Datalog queries. The primary application of GATEKEEPER this paper explores is in reasoning about JavaScript widgets such as those hosted by widget portals Live.com and Google IG. Widgets submitted to these sites can be either malicious or just buggy and poorly written, and the hosting site has the authority to reject the submission of widgets that do not meet the site's security policies. To show the practicality of our approach, we describe nine representative security and reliability policies. Statically checking these policies results in 1,341 verified warnings in 684 widgets, no false negatives, due to the soundness of our analysis, and false positives affecting only two widgets.",
"Vulnerability-driven filtering of network data can offer a fast and easy-to-deploy alternative or intermediary to software patching, as exemplified in Shield [ 2004]. In this article, we take Shield's vision to a new domain, inspecting and cleansing not just static content, but also dynamic content. The dynamic content we target is the dynamic HTML in Web pages, which have become a popular vector for attacks. The key challenge in filtering dynamic HTML is that it is undecidable to statically determine whether an embedded script will exploit the browser at runtime. We avoid this undecidability problem by rewriting web pages and any embedded scripts into safe equivalents, inserting checks so that the filtering is done at runtime. The rewritten pages contain logic for recursively applying runtime checks to dynamically generated or modified web content, based on known vulnerabilities. We have built and evaluated BrowserShield, a general framework that performs this dynamic instrumentation of embedded scripts, and that admits policies for customized runtime actions like vulnerability-driven filtering. We also explore other applications on top of BrowserShield.",
"Much of the power of modern Web comes from the ability of a Web page to combine content and JavaScript code from disparate servers on the same page. While the ability to create such mash-ups is attractive for both the user and the developer because of extra functionality, code inclusion effectively opens the hosting site up for attacks and poor programming practices within every JavaScript library or API it chooses to use. In other words, expressiveness comes at the price of losing control. To regain the control, it is therefore valuable to provide means for the hosting page to restrict the behavior of the code that the page may include. This paper presents ConScript, a client-side advice implementation for security, built on top of Internet Explorer 8. ConScript allows the hosting page to express fine-grained application-specific security policies that are enforced at runtime. In addition to presenting 17 widely-ranging security and reliability policies that ConScript enables, we also show how policies can be generated automatically through static analysis of server-side code or runtime analysis of client-side code. We also present a type system that helps ensure correctness of ConScript policies. To show the practicality of ConScript in a range of settings, we compare the overhead of ConScript enforcement and conclude that it is significantly lower than that of other systems proposed in the literature, both on micro-benchmarks as well as large, widely-used applications such as MSN, GMail, Google Maps, and Live Desktop.",
"Internet advertising is one of the most popular online business models. JavaScript-based advertisements (ads) are often directly embedded in a web publisher's page to display ads relevant to users (e.g., by checking the user's browser environment and page content). However, as third-party code, the ads pose a significant threat to user privacy. Worse, malicious ads can exploit browser vulnerabilities to compromise users' machines and install malware. To protect users from these threats, we propose AdSentry, a comprehensive confinement solution for JavaScript-based advertisements. The crux of our approach is to use a shadow JavaScript engine to sandbox untrusted ads. In addition, AdSentry enables flexible regulation on ad script behaviors by completely mediating its access to the web page (including its DOM) without limiting the JavaScript functionality exposed to the ads. Our solution allows both web publishers and end users to specify access control policies to confine ads' behaviors. We have implemented a proof-of-concept prototype of AdSentry that transparently supports the Mozilla Firefox browser. Our experiments with a number of ads-related attacks successfully demonstrate its practicality and effectiveness. The performance measurement indicates that our system incurs a small performance overhead."
]
}
|
1811.00845
|
2898999186
|
We propose in this paper a combined model of Long Short Term Memory and Convolutional Neural Networks (LSTM-CNN) that exploits word embeddings and positional embeddings for cross-sentence n-ary relation extraction. The proposed model brings together the properties of both LSTMs and CNNs, to simultaneously exploit long-range sequential information and capture the most informative features, essential for cross-sentence n-ary relation extraction. The LSTM-CNN model is evaluated on a standard dataset for cross-sentence n-ary relation extraction, where it significantly outperforms baselines such as CNNs, LSTMs and also a combined CNN-LSTM model. The paper also shows that the LSTM-CNN model outperforms the current state-of-the-art methods on cross-sentence n-ary relation extraction.
|
There is a large body of research on intra-sentence relation extraction @cite_2 . However, our main focus in this paper is on cross-sentence relation extraction, so we limit the discussion below to that setting. Research on cross-sentence relation extraction has extensively used features drawn from dependency trees @cite_3 @cite_11 @cite_12 , tree kernels @cite_15 @cite_30 , and graph LSTMs @cite_12 . Further, studies on inter-sentence relation extraction have limited their attention to extracting binary relations present across sentences @cite_3 @cite_11 @cite_15 @cite_30 . Recently, peng2017cross proposed graph LSTMs not only to consider binary relations, but also @math -ary relations across sentences. Although graph LSTMs are useful for modeling @math -ary relations across sentences, the process of creating directed acyclic graphs covering words in multiple sentences is complex and error-prone. It is non-obvious where to connect two parse trees, and parse errors compound during the graph creation step. Moreover, the co-reference resolution and discourse features used in that work do not always improve the performance of cross-sentence relation extraction.
|
{
"cite_N": [
"@cite_30",
"@cite_3",
"@cite_2",
"@cite_15",
"@cite_12",
"@cite_11"
],
"mid": [
"2578456568",
"110692952",
"2785105945",
"2100258064",
"",
"2522187036"
],
"abstract": [
"",
"In natural language, relationships between entities can be asserted within a single sentence or over many sentences in a document. Many information extraction systems are constrained to extracting binary relations that are asserted within a single sentence (single-sentence relations) and this limits the proportion of relations they can extract since those expressed across multiple sentences (inter-sentential relations) are not considered. The analysis in this paper focuses on finding the distribution of inter-sentential and single-sentence relations in two corpora used for the evaluation of information extraction systems: the MUC6 corpus and the ACE corpus from 2003. In order to carry out this analysis we had to manually mark up all the management succession relations described in the MUC6 corpus. It was found that inter-sentential relations constitute 28.5% and 9.4% of the total number of relations in MUC6 and ACE03 respectively. This places upper bounds on the recall of information extraction systems that do not consider relations that are asserted across multiple sentences (71.5% and 90.6% respectively).",
"Relation extraction, as an important part of information extraction, can be used for many applications such as question-answering and knowledge base population. To thoroughly comprehend relation extraction, the paper reviews it mainly concentrating on its mainstream methods. Besides, open information extraction (OIE), as a different relation extraction paradigm, is introduced as well. Also, we exploit the challenges and directions for relation extraction. We hope the paper will give the overview of relation extraction and help guide the path ahead.",
"This paper proposes state-of-the-art models for time-event relation extraction (TERE). The models are specifically designed to work effectively with relations that span multiple sentences and paragraphs, i.e., inter-sentence TERE. Our main idea is: (i) to build a computational representation of the context of the two target relation arguments, and (ii) to encode it as structural features in Support Vector Machines using tree kernels. Results on two data sets – Machine Reading and TimeBank – with 3-fold cross-validation show that the combination of traditional feature vectors and the new structural features improves on the state of the art for inter-sentence TERE by about 20%, achieving a 30.2 F1 score on inter-sentence TERE alone, and 47.2 F1 for all TERE (inter and intra sentence combined).",
"",
"The growing demand for structured knowledge has led to great interest in relation extraction, especially in cases with limited supervision. However, existing distance supervision approaches only extract relations expressed in single sentences. In general, cross-sentence relation extraction is under-explored, even in the supervised-learning setting. In this paper, we propose the first approach for applying distant supervision to cross- sentence relation extraction. At the core of our approach is a graph representation that can incorporate both standard dependencies and discourse relations, thus providing a unifying way to model relations within and across sentences. We extract features from multiple paths in this graph, increasing accuracy and robustness when confronted with linguistic variation and analysis error. Experiments on an important extraction task for precision medicine show that our approach can learn an accurate cross-sentence extractor, using only a small existing knowledge base and unlabeled text from biomedical research articles. Compared to the existing distant supervision paradigm, our approach extracted twice as many relations at similar precision, thus demonstrating the prevalence of cross-sentence relations and the promise of our approach."
]
}
|
1811.01147
|
2898832237
|
Recent studies show that 85% of women have changed their traveled route to avoid harassment and assault. Despite this, current mapping tools do not empower users with information to take charge of their personal safety. We propose SafeRoute, a novel solution to the problem of navigating cities and avoiding street harassment and crime. Unlike other street navigation applications, SafeRoute introduces a new type of path generation via deep reinforcement learning. This enables us to successfully optimize for multi-criteria path-finding and incorporate representation learning within our framework. Our agent learns to pick favorable streets to create a safe and short path with a reward function that incorporates safety and efficiency. Given access to recent crime reports in many urban cities, we train our model for experiments in Boston, New York, and San Francisco. We test our model on areas of these cities, specifically the populated downtown regions where tourists and those unfamiliar with the streets walk. We evaluate SafeRoute and successfully improve over state-of-the-art methods by up to 17% in local average distance from crimes while decreasing path length by up to 7%.
|
In @cite_7 , a deep reinforcement learning agent was trained to localize itself on a 2D map from a 3D first-person perspective and find the shortest path out of mazes it has never seen before. The RPA model uses a combination of model-based and model-free learning for visual navigation with textual instructions @cite_0 . Recently, Google's DeepMind has created a deep reinforcement learning model that trains on Google Street View in order to navigate cities without a map @cite_3 . For the task of reaching a destination point, the model represents the target in relation to its distances from nearby landmarks. One of the drawbacks of image-based navigation is the amount of data required for training. Furthermore, SafeRoute attempts to co-optimize the two goals of safety and distance, which would require additional training not only to find a target from unstructured image data but also to classify unsafe streets and avoid them. Graph-based navigation appears in some deep reinforcement learning frameworks. DeepPath @cite_16 uses deep reinforcement learning to infer missing links within a knowledge graph. Another method trains on recordings of maze navigation to build a topological map and later navigate to a destination within the maze @cite_11 .
|
{
"cite_N": [
"@cite_7",
"@cite_3",
"@cite_0",
"@cite_16",
"@cite_11"
],
"mid": [
"2770898551",
"2795911278",
"2790797399",
"",
"2786472725"
],
"abstract": [
"The ability to use a 2D map to navigate a complex 3D environment is quite remarkable, and even difficult for many humans. Localization and navigation is also an important problem in domains such as robotics, and has recently become a focus of the deep reinforcement learning community. In this paper we teach a reinforcement learning agent to read a map in order to find the shortest way out of a random maze it has never seen before. Our system combines several state-of-the-art methods such as A3C and incorporates novel elements such as a recurrent localization cell. Our agent learns to localize itself based on 3D first person images and an approximate orientation angle. The agent generalizes well to bigger mazes, showing that it learned useful localization and navigation capabilities.",
"Navigating through unstructured environments is a basic capability of intelligent creatures, and thus is of fundamental interest in the study and development of artificial intelligence. Long-range navigation is a complex cognitive task that relies on developing an internal representation of space, grounded by recognisable landmarks and robust visual processing, that can simultaneously support continuous self-localisation (\"I am here\") and a representation of the goal (\"I am going there\"). Building upon recent research that applies deep reinforcement learning to maze navigation problems, we present an end-to-end deep reinforcement learning approach that can be applied on a city scale. Recognising that successful navigation relies on integration of general policies with locale-specific knowledge, we propose a dual pathway architecture that allows locale-specific features to be encapsulated, while still enabling transfer to multiple cities. We present an interactive navigation environment that uses Google StreetView for its photographic content and worldwide coverage, and demonstrate that our learning method allows agents to learn to navigate multiple cities and to traverse to target destinations that may be kilometres away. The project webpage this http URL contains a video summarising our research and showing the trained agent in diverse city environments and on the transfer task, the form to request the StreetLearn dataset and links to further resources. The StreetLearn environment code is available at this https URL",
"Existing research studies on vision and language grounding for robot navigation focus on improving model-free deep reinforcement learning (DRL) models in synthetic environments. However, model-free DRL models do not consider the dynamics in the real-world environments, and they often fail to generalize to new scenes. In this paper, we take a radical approach to bridge the gap between synthetic studies and real-world practices---We propose a novel, planned-ahead hybrid reinforcement learning model that combines model-free and model-based reinforcement learning to solve a real-world vision-language navigation task. Our look-ahead module tightly integrates a look-ahead policy model with an environment model that predicts the next state and the reward. Experimental results suggest that our proposed method significantly outperforms the baselines and achieves the best on the real-world Room-to-Room dataset. Moreover, our scalable method is more generalizable when transferring to unseen environments, and the relative success rate is increased by 15.5% on the unseen test set.",
"",
"We introduce a new memory architecture for navigation in previously unseen environments, inspired by landmark-based navigation in animals. The proposed semi-parametric topological memory (SPTM) consists of a (non-parametric) graph with nodes corresponding to locations in the environment and a (parametric) deep network capable of retrieving nodes from the graph based on observations. The graph stores no metric information, only connectivity of locations corresponding to the nodes. We use SPTM as a planning module in a navigation system. Given only 5 minutes of footage of a previously unseen maze, an SPTM-based navigation agent can build a topological map of the environment and use it to confidently navigate towards goals. The average success rate of the SPTM agent in goal-directed navigation across test environments is higher than the best-performing baseline by a factor of three. A video of the agent is available at this https URL"
]
}
|
1811.01027
|
2898800889
|
The explosive growth and increasing sophistication of Android malware call for new defensive techniques that are capable of protecting mobile users against novel threats. In this paper, we first extract the runtime Application Programming Interface (API) call sequences from Android apps, and then analyze higher-level semantic relations within the ecosystem to comprehensively characterize the apps. To model different types of entities (i.e., app, API, IMEI, signature, affiliation) and the rich semantic relations among them, we then construct a structural heterogeneous information network (HIN) and present a meta-path based approach to depict the relatedness over apps. To efficiently classify nodes (e.g., apps) in the constructed HIN, we propose the HinLearning method to first obtain in-sample node embeddings and then learn representations of out-of-sample nodes without rerunning or adjusting the HIN embeddings at the first attempt. Afterwards, we design a deep neural network (DNN) classifier taking the learned HIN representations as inputs for Android malware detection. A comprehensive experimental study on large-scale real sample collections from Tencent Security Lab is performed to compare various baselines. Promising experimental results demonstrate that our developed system AiDroid, which integrates our proposed method, outperforms others in real-time Android malware detection. AiDroid has already been incorporated into the Tencent Mobile Security product, which serves millions of users worldwide.
|
In recent years, there have been ample research studies on developing intelligent Android malware detection systems using machine learning and data mining techniques @cite_11 @cite_13 @cite_23 @cite_8 @cite_7 @cite_26 . For example, DroidDolphin @cite_13 built classifiers based on dynamic analysis, while DroidMat @cite_11 and DroidMiner @cite_21 constructed their models based on static analysis. However, most of the existing systems merely utilize content-based features for detection. To further address the challenges of Android malware detection, in our preliminary work we proposed HinDroid @cite_14 , which considered higher-level semantic relations among apps and APIs and introduced HIN for the first time in Android malware detection; however, HinDroid was primarily designed for a static HIN, without considering newly arriving nodes.
|
{
"cite_N": [
"@cite_26",
"@cite_14",
"@cite_7",
"@cite_8",
"@cite_21",
"@cite_23",
"@cite_13",
"@cite_11"
],
"mid": [
"2324464293",
"2744097819",
"2732916693",
"2772265308",
"121173099",
"2531296565",
"2024071684",
"2007857904"
],
"abstract": [
"Android users are constantly threatened by an increasing number of malicious applications (apps), generically called malware. Malware constitutes a serious threat to user privacy, money, device and file integrity. In this paper we note that, by studying their actions, we can classify malware into a small number of behavioral classes, each of which performs a limited set of misbehaviors that characterize them. These misbehaviors can be defined by monitoring features belonging to different Android levels. In this paper we present MADAM, a novel host-based malware detection system for Android devices which simultaneously analyzes and correlates features at four levels: kernel, application, user and package, to detect and stop malicious behaviors. MADAM has been specifically designed to take into account those behaviors that are characteristics of almost every real malware which can be found in the wild. MADAM detects and effectively blocks more than 96 percent of malicious apps, which come from three large datasets with about 2,800 apps, by exploiting the cooperation of two parallel classifiers and a behavioral signature-based detector. Extensive experiments, which also includes the analysis of a testbed of 9,804 genuine apps, have been conducted to show the low false alarm rate, the negligible performance overhead and limited battery consumption.",
"With explosive growth of Android malware and due to the severity of its damages to smart phone users, the detection of Android malware has become increasingly important in cybersecurity. The increasing sophistication of Android malware calls for new defensive techniques that are capable against novel threats and harder to evade. In this paper, to detect Android malware, instead of using Application Programming Interface (API) calls only, we further analyze the different relationships between them and create higher-level semantics which require more effort for attackers to evade the detection. We represent the Android applications (apps), related APIs, and their rich relationships as a structured heterogeneous information network (HIN). Then we use a meta-path based approach to characterize the semantic relatedness of apps and APIs. We use each meta-path to formulate a similarity measure over Android apps, and aggregate different similarities using multi-kernel learning. Then each meta-path is automatically weighted by the learning algorithm to make predictions. To the best of our knowledge, this is the first work to use structured HIN for Android malware detection. Comprehensive experiments on real sample collections from Comodo Cloud Security Center are conducted to compare various malware detection approaches. Promising experimental results demonstrate that our developed system HinDroid outperforms other alternative Android malware detection techniques.",
"In the Internet age, malware (such as viruses, trojans, ransomware, and bots) has posed serious and evolving security threats to Internet users. To protect legitimate users from these threats, anti-malware software products from different companies, including Comodo, Kaspersky, Kingsoft, and Symantec, provide the major defense against malware. Unfortunately, driven by economic benefits, the number of new malware samples has explosively increased: anti-malware vendors are now confronted with millions of potential malware samples per year. In order to keep on combating the increase in malware samples, there is an urgent need to develop intelligent methods for effective and efficient malware detection from the real and large daily sample collection. In this article, we first provide a brief overview of malware as well as the anti-malware industry, and present the industrial needs for malware detection. We then survey intelligent malware detection methods. In these methods, the process of detection is usually divided into two stages: feature extraction and classification/clustering. The performance of such intelligent malware detection approaches critically depends on the extracted features and the methods for classification/clustering. We provide a comprehensive investigation of both the feature extraction and the classification/clustering techniques. We also discuss the additional issues and challenges of malware detection using data mining techniques and finally forecast the trends of malware development.",
"With smart phones being indispensable in people's everyday life, Android malware has posed serious threats to their security, making its detection of utmost concern. To protect legitimate users from the evolving Android malware attacks, machine learning-based systems have been successfully deployed and offer unparalleled flexibility in automatic Android malware detection. In these systems, based on different feature representations, various kinds of classifiers are constructed to detect Android malware. Unfortunately, as classifiers become more widely deployed, the incentive for defeating them increases. In this paper, we explore the security of machine learning in Android malware detection on the basis of a learning-based classifier with the input of a set of features extracted from the Android applications (apps). We consider different importances of the features associated with their contributions to the classification problem as well as their manipulation costs, and present a novel feature selection method (named SecCLS) to make the classifier harder to be evaded. To improve the system security while not compromising the detection accuracy, we further propose an ensemble learning approach (named SecENS) by aggregating the individual classifiers that are constructed using our proposed feature selection method SecCLS. Accordingly, we develop a system called SecureDroid which integrates our proposed methods (i.e., SecCLS and SecENS) to enhance security of machine learning-based Android malware detection. Comprehensive experiments on the real sample collections from Comodo Cloud Security Center are conducted to validate the effectiveness of SecureDroid against adversarial Android malware attacks by comparisons with other alternative defense methods. Our proposed secure-learning paradigm can also be readily applied to other malware detection tasks.",
"Most existing malicious Android app detection approaches rely on manually selected detection heuristics, features, and models. In this paper, we describe a new, complementary system, called DroidMiner, which uses static analysis to automatically mine malicious program logic from known Android malware, abstracts this logic into a sequence of threat modalities, and then seeks out these threat modality patterns in other unknown (or newly published) Android apps. We formalize a two-level behavioral graph representation used to capture Android app program logic, and design new techniques to identify and label elements of the graph that capture malicious behavioral patterns (or malicious modalities). After the automatic learning of these malicious behavioral models, DroidMiner can scan a new Android app to (i) determine whether it contains malicious modalities, (ii) diagnose the malware family to which it is most closely associated, and (iii) provide further evidence as to why the app is considered to be malicious by including a concise description of identified malicious behaviors. We evaluate DroidMiner using 2,466 malicious apps, identified from a corpus of over 67,000 third-party market Android apps, plus an additional set of over 10,000 official market Android apps. Using this set of real-world apps, we demonstrate that DroidMiner achieves a 95.3% detection rate, with only a 0.4% false positive rate. We further evaluate DroidMiner's ability to classify malicious apps under their proper family labels, and measure its label accuracy at 92%.",
"Because of the explosive growth of Android malware and due to the severity of its damages, the detection of Android malware has become an increasing important topic in cyber security. Currently, the major defense against Android malware is commercial mobile security products which mainly use signature-based method for detection. However, attackers can easily devise methods, such as obfuscation and repackaging, to evade the detection, which calls for new defensive techniques that are harder to evade. In this paper, resting on the analysis of Application Programming Interface (API) calls extracted from the smali files, we further categorize the API calls which belong to the some method in the smali code into a block. Based on the generated code blocks, we then apply a deep learning framework (i.e., Deep Belief Network) for newly unknown Android malware detection. Using a real sample collection from Comodo Cloud Security Center, a comprehensive experimental study is performed to compare various malware detection approaches. Promising experimental results demonstrate that DroidDelver which integrates our proposed method outperform other alternative Android malware detection techniques.",
"Smartphones are getting more and more popular nowadays, with various kinds of applications to make our lives more convenient. Unfortunately, malicious applications, also known as malware, arise as well. A user is often tempted into installing malware without any awareness, and the malware steals the user's personal information. Some malware sends SMS messages or makes phone calls, which results in additional charges. Thus, detection of malware is critical to protect smartphone users. In this paper, we proposed DroidDolphin, a dynamic malware analysis framework which leverages the technologies of GUI-based testing, big data analysis, and machine learning to detect malicious Android applications. Based on our automatic testing tools, we were able to extract useful static and dynamic features from a training dataset composed of 32,000 benign and 32,000 malicious applications. Our preliminary results showed that the prediction accuracy reaches 86.1% and the F-score reaches 0.857. As the dataset increases, the accuracy of detection increases significantly, which makes this methodology promising.",
"Recently, the threat of Android malware is spreading rapidly, especially repackaged Android malware. Although understanding Android malware using dynamic analysis can provide a comprehensive view, it is still subject to high cost in environment deployment and manual effort in investigation. In this study, we propose a static feature-based mechanism to provide a static analysis paradigm for detecting Android malware. The mechanism considers static information including permissions, deployment of components, Intent message passing and API calls for characterizing the behavior of Android applications. In order to recognize different intentions of Android malware, different kinds of clustering algorithms can be applied to enhance the malware modeling capability. Besides, we leverage the proposed mechanism and develop a system, called DroidMat. First, DroidMat extracts the information (e.g., requested permissions, Intent message passing, etc.) from each application's manifest file, and regards components (Activity, Service, Receiver) as entry points, drilling down to trace API calls related to permissions. Next, it applies the K-means algorithm to enhance the malware modeling capability. The number of clusters is decided by the Singular Value Decomposition (SVD) method on the low rank approximation. Finally, it uses the kNN algorithm to classify the application as benign or malicious. The experiment result shows that the recall rate of our approach is better than that of the well-known tool Androguard, published in Black Hat 2011, which focuses on Android malware analysis. In addition, DroidMat is efficient since it takes only half the time of Androguard to predict 1738 apps as benign apps or Android malware."
]
}
|
1811.01027
|
2898800889
|
The explosive growth and increasing sophistication of Android malware call for new defensive techniques that are capable of protecting mobile users against novel threats. In this paper, we first extract the runtime Application Programming Interface (API) call sequences from Android apps, and then analyze higher-level semantic relations within the ecosystem to comprehensively characterize the apps. To model different types of entities (i.e., app, API, IMEI, signature, affiliation) and the rich semantic relations among them, we then construct a structural heterogeneous information network (HIN) and present a meta-path based approach to depict the relatedness over apps. To efficiently classify nodes (e.g., apps) in the constructed HIN, we propose the HinLearning method to first obtain in-sample node embeddings and then learn representations of out-of-sample nodes without rerunning or adjusting the HIN embeddings at the first attempt. Afterwards, we design a deep neural network (DNN) classifier taking the learned HIN representations as inputs for Android malware detection. A comprehensive experimental study on large-scale real sample collections from Tencent Security Lab is performed to compare various baselines. Promising experimental results demonstrate that our developed system AiDroid, which integrates our proposed method, outperforms others in real-time Android malware detection. AiDroid has already been incorporated into the Tencent Mobile Security product, which serves millions of users worldwide.
|
To solve the problem of network representation learning, DeepWalk @cite_1 , LINE @cite_25 and node2vec @cite_6 were first proposed for homogeneous network embedding; HIN2vec @cite_22 , metapath2vec @cite_10 , metagraph2vec @cite_19 , and PME @cite_16 were later proposed for HIN representation learning. However, few of them can deal with out-of-sample nodes, i.e., nodes that arrive after the HIN embedding process. Though algorithms @cite_18 @cite_24 have been proposed to infer embeddings for out-of-sample nodes in a HIN, they necessitate adjusting the in-sample node embeddings and retraining the downstream classifier. Efficient representation learning for out-of-sample nodes in a HIN without rerunning or adjusting the HIN embeddings is needed for our application in real-time Android malware detection.
|
{
"cite_N": [
"@cite_18",
"@cite_22",
"@cite_1",
"@cite_6",
"@cite_24",
"@cite_19",
"@cite_16",
"@cite_10",
"@cite_25"
],
"mid": [
"2062797058",
"2767774008",
"2154851992",
"2366141641",
"2793059793",
"2808927717",
"2809645418",
"2743104969",
"1888005072"
],
"abstract": [
"Data embedding is used in many machine learning applications to create low-dimensional feature representations, which preserves the structure of data points in their original space. In this paper, we examine the scenario of a heterogeneous network with nodes and content of various types. Such networks are notoriously difficult to mine because of the bewildering combination of heterogeneous contents and structures. The creation of a multidimensional embedding of such data opens the door to the use of a wide variety of off-the-shelf mining techniques for multidimensional data. Despite the importance of this problem, limited efforts have been made on embedding a network of scalable, dynamic and heterogeneous data. In such cases, both the content and linkage structure provide important cues for creating a unified feature representation of the underlying network. In this paper, we design a deep embedding algorithm for networked data. A highly nonlinear multi-layered embedding function is used to capture the complex interactions between the heterogeneous data in a network. Our goal is to create a multi-resolution deep embedding function, that reflects both the local and global network structures, and makes the resulting embedding useful for a variety of data mining tasks. In particular, we demonstrate that the rich content and linkage information in a heterogeneous network can be captured by such an approach, so that similarities among cross-modal data can be measured directly in a common embedding space. Once this goal has been achieved, a wide variety of data mining problems can be solved by applying off-the-shelf algorithms designed for handling vector representations. Our experiments on real-world network datasets show the effectiveness and scalability of the proposed algorithm as compared to the state-of-the-art embedding methods.",
"In this paper, we propose a novel representation learning framework, namely HIN2Vec, for heterogeneous information networks (HINs). The core of the proposed framework is a neural network model, also called HIN2Vec, designed to capture the rich semantics embedded in HINs by exploiting different types of relationships among nodes. Given a set of relationships specified in forms of meta-paths in an HIN, HIN2Vec carries out multiple prediction training tasks jointly based on a target set of relationships to learn latent vectors of nodes and meta-paths in the HIN. In addition to model design, several issues unique to HIN2Vec, including regularization of meta-path vectors, node type selection in negative sampling, and cycles in random walks, are examined. To validate our ideas, we learn latent vectors of nodes using four large-scale real HIN datasets, including Blogcatalog, Yelp, DBLP and U.S. Patents, and use them as features for multi-label node classification and link prediction applications on those networks. Empirical results show that HIN2Vec soundly outperforms the state-of-the-art representation learning models for network data, including DeepWalk, LINE, node2vec, PTE, HINE and ESim, by 6.6% to 23.8% of @math - @math in multi-label node classification and 5% to 70.8% of @math in link prediction.",
"We present DeepWalk, a novel approach for learning latent representations of vertices in a network. These latent representations encode social relations in a continuous vector space, which is easily exploited by statistical models. DeepWalk generalizes recent advancements in language modeling and unsupervised feature learning (or deep learning) from sequences of words to graphs. DeepWalk uses local information obtained from truncated random walks to learn latent representations by treating walks as the equivalent of sentences. We demonstrate DeepWalk's latent representations on several multi-label network classification tasks for social networks such as BlogCatalog, Flickr, and YouTube. Our results show that DeepWalk outperforms challenging baselines which are allowed a global view of the network, especially in the presence of missing information. DeepWalk's representations can provide F1 scores up to 10% higher than competing methods when labeled data is sparse. In some experiments, DeepWalk's representations are able to outperform all baseline methods while using 60% less training data. DeepWalk is also scalable. It is an online learning algorithm which builds useful incremental results, and is trivially parallelizable. These qualities make it suitable for a broad class of real world applications such as network classification, and anomaly detection.",
"Prediction tasks over nodes and edges in networks require careful effort in engineering features used by learning algorithms. Recent research in the broader field of representation learning has led to significant progress in automating prediction by learning the features themselves. However, present feature learning approaches are not expressive enough to capture the diversity of connectivity patterns observed in networks. Here we propose node2vec, an algorithmic framework for learning continuous feature representations for nodes in networks. In node2vec, we learn a mapping of nodes to a low-dimensional space of features that maximizes the likelihood of preserving network neighborhoods of nodes. We define a flexible notion of a node's network neighborhood and design a biased random walk procedure, which efficiently explores diverse neighborhoods. Our algorithm generalizes prior work which is based on rigid notions of network neighborhoods, and we argue that the added flexibility in exploring neighborhoods is the key to learning richer representations. We demonstrate the efficacy of node2vec over existing state-of-the-art techniques on multi-label classification and link prediction in several real-world networks from diverse domains. Taken together, our work represents a new way for efficiently learning state-of-the-art task-independent representations in complex networks.",
"The human resources network, which involves enterprise social networks and job networks, can be abstracted as heterogeneous networks or multi-layers networks. Adjusting the position assignments to maximize employee productivity and minimize the company’s cost is the goal of organization optimization. Taking the churn and interaction among the staff into account, this paper puts forward a dynamic optimization model for human resource adjustment, which is based on heterogeneous network, to describe the influence among individuals who are in personal relationship or professional relationship. More specifically, intimacy and loyalty are constructed to form the basis of churn rate, which indicate the influence of the personal and professional relationship respectively. With the operation of the organization, the change of intimacy and loyalty leads to the churn process, which are simulated with Monte Carlo method in a dynamic process among the heterogeneous network. After churning, an optimal strategy of recruitment and position adjustment is obtained using the Genetic Algorithm. In general, the human resource optimization process consists of three periodic parts: loyalty and intimacy transformation, staff churn simulation and position assignment. Finally, a case study of an organization with 370 employee positions is carried out to demonstrate the whole process.",
"Due to its severe damages and threats to the security of the Internet and computing devices, malware detection has caught the attention of both anti-malware industry and researchers for decades. To combat the evolving malware attacks, in this paper, we first study how to utilize both content- and relation-based features to characterize sly malware; to model different types of entities (i.e., file, archive, machine, API, DLL ) and the rich semantic relationships among them (i.e., file-archive, file-machine, file-file, API-DLL, file-API relations), we then construct a structural heterogeneous information network (HIN) and present meta-graph based approach to depict the relatedness over files. To measure the relatedness over files on the constructed HIN, since malware detection is a cost-sensitive task, it calls for efficient methods to learn latent representations for HIN. To address this challenge, based on the built meta-graph schemes, we propose a new HIN embedding model metagraph2vec on the first attempt to learn the low-dimensional representations for the nodes in HIN, where both the HIN structures and semantics are maximally preserved for malware detection. A comprehensive experimental study on the real sample collections from Comodo Cloud Security Center is performed to compare various malware detection approaches. The promising experimental results demonstrate that our developed system Scorpion which integrate our proposed method outperforms other alternative malware detection techniques. The developed system has already been incorporated into the scanning tool of Comodo Antivirus product.",
"Heterogenous information network embedding aims to embed heterogenous information networks (HINs) into low dimensional spaces, in which each vertex is represented as a low-dimensional vector, and both global and local network structures in the original space are preserved. However, most of existing heterogenous information network embedding models adopt the dot product to measure the proximity in the low dimensional space, and thus they can only preserve the first-order proximity and are insufficient to capture the global structure. Compared with homogenous information networks, there are multiple types of links (i.e., multiple relations) in HINs, and the link distribution w.r.t relations is highly skewed. To address the above challenging issues, we propose a novel heterogenous information network embedding model PME based on the metric learning to capture both first-order and second-order proximities in a unified way. To alleviate the potential geometrical inflexibility of existing metric learning approaches, we propose to build object and relation embeddings in separate object space and relation spaces rather than in a common space. Afterwards, we learn embeddings by firstly projecting vertices from object space to corresponding relation space and then calculate the proximity between projected vertices. To overcome the heavy skewness of the link distribution w.r.t relations and avoid \"over-sampling'' or \"under-sampling'' for each relation, we propose a novel loss-aware adaptive sampling approach for the model optimization. Extensive experiments have been conducted on a large-scale HIN dataset, and the experimental results show superiority of our proposed PME model in terms of prediction accuracy and scalability.",
"We study the problem of representation learning in heterogeneous networks. Its unique challenges come from the existence of multiple types of nodes and links, which limit the feasibility of the conventional network embedding techniques. We develop two scalable representation learning models, namely metapath2vec and metapath2vec++. The metapath2vec model formalizes meta-path-based random walks to construct the heterogeneous neighborhood of a node and then leverages a heterogeneous skip-gram model to perform node embeddings. The metapath2vec++ model further enables the simultaneous modeling of structural and semantic correlations in heterogeneous networks. Extensive experiments show that metapath2vec and metapath2vec++ are able to not only outperform state-of-the-art embedding models in various heterogeneous network mining tasks, such as node classification, clustering, and similarity search, but also discern the structural and semantic correlations between diverse network objects.",
"This paper studies the problem of embedding very large information networks into low-dimensional vector spaces, which is useful in many tasks such as visualization, node classification, and link prediction. Most existing graph embedding methods do not scale for real world information networks which usually contain millions of nodes. In this paper, we propose a novel network embedding method called the \"LINE\", which is suitable for arbitrary types of information networks: undirected, directed, and/or weighted. The method optimizes a carefully designed objective function that preserves both the local and global network structures. An edge-sampling algorithm is proposed that addresses the limitation of the classical stochastic gradient descent and improves both the effectiveness and the efficiency of the inference. Empirical experiments prove the effectiveness of the LINE on a variety of real-world information networks, including language networks, social networks, and citation networks. The algorithm is very efficient, which is able to learn the embedding of a network with millions of vertices and billions of edges in a few hours on a typical single machine. The source code of the LINE is available online at https://github.com/tangjianpku/LINE ."
]
}
|
1811.00942
|
2898766142
|
In recent years, we have witnessed a dramatic shift towards techniques driven by neural networks for a variety of NLP tasks. Undoubtedly, neural language models (NLMs) have reduced perplexity by impressive amounts. This progress, however, comes at a substantial cost in performance, in terms of inference latency and energy consumption, which is particularly of concern in deployments on mobile devices. This paper, which examines the quality-performance tradeoff of various language modeling techniques, represents to our knowledge the first to make this observation. We compare state-of-the-art NLMs with "classic" Kneser-Ney (KN) LMs in terms of energy usage, latency, perplexity, and prediction accuracy using two standard benchmarks. On a Raspberry Pi, we find that orders of increase in latency and energy usage correspond to less change in perplexity, while the difference is much less pronounced on a desktop.
|
evaluate recent neural language models; however, their focus is not on the computational footprint of each model, but rather the perplexity. To further reduce perplexity, many neural language model extensions exist, such as continuous cache pointer @cite_8 and mixture of softmaxes @cite_5 . Since our focus is on comparing "core" neural and non-neural approaches, we disregard these extra optimization techniques in all of our models.
|
{
"cite_N": [
"@cite_5",
"@cite_8"
],
"mid": [
"2767321762",
"2951672049"
],
"abstract": [
"We formulate language modeling as a matrix factorization problem, and show that the expressiveness of Softmax-based models (including the majority of neural language models) is limited by a Softmax bottleneck. Given that natural language is highly context-dependent, this further implies that in practice Softmax with distributed word embeddings does not have enough capacity to model natural language. We propose a simple and effective method to address this issue, and improve the state-of-the-art perplexities on Penn Treebank and WikiText-2 to 47.69 and 40.68 respectively.",
"We propose an extension to neural network language models to adapt their prediction to the recent history. Our model is a simplified version of memory augmented networks, which stores past hidden activations as memory and accesses them through a dot product with the current hidden activation. This mechanism is very efficient and scales to very large memory sizes. We also draw a link between the use of external memory in neural network and cache models used with count based language models. We demonstrate on several language model datasets that our approach performs significantly better than recent memory augmented networks."
]
}
|
1811.00942
|
2898766142
|
In recent years, we have witnessed a dramatic shift towards techniques driven by neural networks for a variety of NLP tasks. Undoubtedly, neural language models (NLMs) have reduced perplexity by impressive amounts. This progress, however, comes at a substantial cost in performance, in terms of inference latency and energy consumption, which is particularly of concern in deployments on mobile devices. This paper, which examines the quality-performance tradeoff of various language modeling techniques, represents to our knowledge the first to make this observation. We compare state-of-the-art NLMs with "classic" Kneser-Ney (KN) LMs in terms of energy usage, latency, perplexity, and prediction accuracy using two standard benchmarks. On a Raspberry Pi, we find that orders of increase in latency and energy usage correspond to less change in perplexity, while the difference is much less pronounced on a desktop.
|
Quasi-Recurrent Neural Networks Quasi-recurrent neural networks (QRNNs; bradbury2016quasi ) achieve current state of the art in word-level language modeling @cite_11 . A quasi-recurrent layer comprises two separate parts: a convolution layer with three weights, and a recurrent pooling layer. Given an input @math , the convolution layer is where @math denotes the sigmoid function, @math represents masked convolution across time, and @math are convolution weights with @math input channels, @math output channels, and a window size of @math . In the recurrent pooling layer, the convolution outputs are combined sequentially: Multiple QRNN layers can be stacked for deeper hierarchical representation, with the output @math being fed as the input into the subsequent layer: In language modeling, a four-layer QRNN is a standard architecture @cite_11 .
|
{
"cite_N": [
"@cite_11"
],
"mid": [
"2792376130"
],
"abstract": [
"Many of the leading approaches in language modeling introduce novel, complex and specialized architectures. We take existing state-of-the-art word level language models based on LSTMs and QRNNs and extend them to both larger vocabularies as well as character-level granularity. When properly tuned, LSTMs and QRNNs achieve state-of-the-art results on character-level (Penn Treebank, enwik8) and word-level (WikiText-103) datasets, respectively. Results are obtained in only 12 hours (WikiText-103) to 2 days (enwik8) using a single modern GPU."
]
}
|
1811.01068
|
2949332723
|
We present 3D Pick & Mix, a new 3D shape retrieval system that provides users with a new level of freedom to explore 3D shape and Internet image collections by introducing the ability to reason about objects at the level of their constituent parts. While classic retrieval systems can only formulate simple searches such as "find the 3D model that is most similar to the input image" our new approach can formulate advanced and semantically meaningful search queries such as: "find me the 3D model that best combines the design of the legs of the chair in image 1 but with no armrests, like the chair in image 2". Many applications could benefit from such rich queries: users could browse through catalogues of furniture and pick and mix parts, combining for example the legs of a chair from one shop and the armrests from another shop.
|
Modeling of 3D object parts: We will differentiate between 3D segmentation approaches that seek to ensure consistency in the resulting segmentation across different examples of the same object class (co-segmentation) and those that seek a semantically meaningful segmentation (semantic segmentation). Some recent examples of approaches that perform co-segmentation can be found in @cite_17 @cite_23 , but as we seek to describe parts that have meaning to humans we will focus on the latter. We can find examples of semantic 3D parts in approaches like @cite_0 . @cite_0 provides accurate semantic region annotations for large geometric datasets with a fraction of the effort by alternating between using few manual annotations from an expert and a system that propagates labels to new models. We exploit the ShapeNet annotations provided by @cite_0 as the ground truth part shape when constructing our joint manifold.
|
{
"cite_N": [
"@cite_0",
"@cite_23",
"@cite_17"
],
"mid": [
"2553307952",
"2549445985",
"2949896890"
],
"abstract": [
"Large repositories of 3D shapes provide valuable input for data-driven analysis and modeling tools. They are especially powerful once annotated with semantic information such as salient regions and functional parts. We propose a novel active learning method capable of enriching massive geometric datasets with accurate semantic region annotations. Given a shape collection and a user-specified region label our goal is to correctly demarcate the corresponding regions with minimal manual work. Our active framework achieves this goal by cycling between manually annotating the regions, automatically propagating these annotations across the rest of the shapes, manually verifying both human and automatic annotations, and learning from the verification results to improve the automatic propagation algorithm. We use a unified utility function that explicitly models the time cost of human input across all steps of our method. This allows us to jointly optimize for the set of models to annotate and for the set of models to verify based on the predicted impact of these actions on the human efficiency. We demonstrate that incorporating verification of all produced labelings within this unified objective improves both accuracy and efficiency of the active learning procedure. We automatically propagate human labels across a dynamic shape network using a conditional random field (CRF) framework, taking advantage of global shape-to-shape similarities, local feature similarities, and point-to-point correspondences. By combining these diverse cues we achieve higher accuracy than existing alternatives. We validate our framework on existing benchmarks demonstrating it to be significantly more efficient at using human input compared to previous techniques. 
We further validate its efficiency and robustness by annotating a massive shape dataset, labeling over 93,000 shape parts, across multiple model classes, and providing a labeled part collection more than one order of magnitude larger than existing ones.",
"We introduce a co-analysis technique designed for correspondence inference within large shape collections. Such collections are naturally rich in variation, adding ambiguity to the notoriously difficult problem of correspondence computation. We leverage the robustness of correspondences between similar shapes to address the difficulties associated with this problem. In our approach, pairs of similar shapes are extracted from the collection, analyzed and matched in an efficient and reliable manner, culminating in the construction of a network of correspondences that connects the entire collection. The correspondence between any pair of shapes then amounts to a simple propagation along the minimax path between the two shapes in the network. At the heart of our approach is the introduction of a robust, structure-oriented shape matching method. Leveraging the idea of projective analysis, we partition 2D projections of a shape to obtain a set of 1D ordered regions, which are both simple and efficient to match. We lift the matched projections back to the 3D domain to obtain a pairwise shape correspondence. The emphasis given to structural compatibility is a central tool in estimating the reliability and completeness of a computed correspondence, uncovering any non-negligible semantic discrepancies that may exist between shapes. These detected differences are a deciding factor in the establishment of a network aiming to capture local similarities. We demonstrate that the combination of the presented observations into a co-analysis method allows us to establish reliable correspondences among shapes within large collections.",
"We present a learning framework for abstracting complex shapes by learning to assemble objects using 3D volumetric primitives. In addition to generating simple and geometrically interpretable explanations of 3D objects, our framework also allows us to automatically discover and exploit consistent structure in the data. We demonstrate that using our method allows predicting shape representations which can be leveraged for obtaining a consistent parsing across the instances of a shape collection and constructing an interpretable shape similarity measure. We also examine applications for image-based prediction as well as shape manipulation."
]
}
|
1811.01045
|
2963487393
|
Subspace clustering algorithms are notorious for their scalability issues because building and processing large affinity matrices are demanding. In this paper, we introduce a method that simultaneously learns an embedding space along subspaces within it to minimize a notion of reconstruction error, thus addressing the problem of subspace clustering in an end-to-end learning paradigm. To achieve our goal, we propose a scheme to update subspaces within a deep neural network. This in turn frees us from the need of having an affinity matrix to perform clustering. Unlike previous attempts, our method can easily scale up to large datasets, making it unique in the context of unsupervised learning with deep architectures. Our experiments show that our method significantly improves the clustering accuracy while enjoying cheaper memory footprints.
|
Linear subspace clustering methods can be classified as algebraic algorithms, iterative methods, statistical methods and spectral clustering-based methods @cite_1 . Among them, spectral clustering-based methods @cite_27 @cite_5 @cite_38 @cite_37 @cite_6 @cite_21 have become dominant in the literature. In general, spectral clustering-based methods solve the problem in two steps: encode a notion of similarity between pairs of data points into an affinity matrix; then, apply normalized cuts @cite_24 or spectral clustering @cite_12 on this affinity matrix. To construct the affinity matrix, recent methods tend to rely on the concept of self-expressiveness, which seeks to express each point in a cluster as a linear combination of other points sharing some common notions (i.e., coming from the same subspace).
|
{
"cite_N": [
"@cite_38",
"@cite_37",
"@cite_21",
"@cite_1",
"@cite_6",
"@cite_24",
"@cite_27",
"@cite_5",
"@cite_12"
],
"mid": [
"1992300915",
"2071422140",
"2372584135",
"",
"2114554887",
"2121947440",
"",
"1997201895",
"2165874743"
],
"abstract": [
"In this paper, we tackle the problem of clustering data points drawn from a union of linear (or affine) subspaces. To this end, we introduce an efficient subspace clustering algorithm that estimates dense connections between the points lying in the same subspace. In particular, instead of following the standard compressive sensing approach, we formulate subspace clustering as a Frobenius norm minimization problem, which inherently yields denser connections between the data points. While in the noise-free case we rely on the self-expressiveness of the observations, in the presence of noise we simultaneously learn a clean dictionary to represent the data. Our formulation lets us address the subspace clustering problem efficiently. More specifically, the solution can be obtained in closed-form for outlier-free observations, and by performing a series of linear operations in the presence of outliers. Interestingly, we show that our Frobenius norm formulation shares the same solution as the popular nuclear norm minimization approach when the data is free of any noise, or, in the case of corrupted data, when a clean dictionary is learned. Our experimental evaluation on motion segmentation and face clustering demonstrates the benefits of our algorithm in terms of clustering accuracy and efficiency.",
"The problems of motion segmentation and face clustering can be addressed in a framework of subspace clustering methods. In this paper, we tackle the more general problem of clustering data points lying in a union of low-dimensional linear(or affine) subspaces, which can be naturally applied in motion segmentation and face clustering. For data points drawn from linear (or affine) subspaces, we propose a novel algorithm called Null Space Clustering (NSC), utilizing the null space of the data matrix to construct the affinity matrix. To better deal with noise and outliers, it is converted to an equivalent problem with Frobenius norm minimization, which can be solved efficiently. We demonstrate that the proposed NSC leads to improved performance in terms of clustering accuracy and efficiency when compared to state-of-the-art algorithms on two well-known datasets, i.e., Hopkins 155 and Extended Yale B.",
"State-of-the-art subspace clustering methods are based on expressing each data point as a linear combination of other data points while regularizing the matrix of coefficients with @math , @math or nuclear norms. @math regularization is guaranteed to give a subspace-preserving affinity (i.e., there are no connections between points from different subspaces) under broad theoretical conditions, but the clusters may not be connected. @math and nuclear norm regularization often improve connectivity, but give a subspace-preserving affinity only for independent subspaces. Mixed @math , @math and nuclear norm regularizations offer a balance between the subspace-preserving and connectedness properties, but this comes at the cost of increased computational complexity. This paper studies the geometry of the elastic net regularizer (a mixture of the @math and @math norms) and uses it to derive a provably correct and scalable active set method for finding the optimal coefficients. Our geometric analysis also provides a theoretical justification and a geometric interpretation for the balance between the connectedness (due to @math regularization) and subspace-preserving (due to @math regularization) properties for elastic net subspace clustering. Our experiments show that the proposed active set method not only achieves state-of-the-art clustering performance, but also efficiently handles large-scale datasets.",
"",
"The Shape Interaction Matrix (SIM) is one of the earliest approaches to performing subspace clustering (i.e., separating points drawn from a union of subspaces). In this paper, we revisit the SIM and reveal its connections to several recent subspace clustering methods. Our analysis lets us derive a simple, yet effective algorithm to robustify the SIM and make it applicable to realistic scenarios where the data is corrupted by noise. We justify our method by intuitive examples and the matrix perturbation theory. We then show how this approach can be extended to handle missing data, thus yielding an efficient and general subspace clustering algorithm. We demonstrate the benefits of our approach over state-of-the-art subspace clustering methods on several challenging motion segmentation and face clustering problems, where the data includes corruptions and missing measurements.",
"We propose a novel approach for solving the perceptual grouping problem in vision. Rather than focusing on local features and their consistencies in the image data, our approach aims at extracting the global impression of an image. We treat image segmentation as a graph partitioning problem and propose a novel global criterion, the normalized cut, for segmenting the graph. The normalized cut criterion measures both the total dissimilarity between the different groups as well as the total similarity within the groups. We show that an efficient computational technique based on a generalized eigenvalue problem can be used to optimize this criterion. We applied this approach to segmenting static images, as well as motion sequences, and found the results to be very encouraging.",
"",
"In this paper, we address the subspace clustering problem. Given a set of data samples (vectors) approximately drawn from a union of multiple subspaces, our goal is to cluster the samples into their respective subspaces and remove possible outliers as well. To this end, we propose a novel objective function named Low-Rank Representation (LRR), which seeks the lowest rank representation among all the candidates that can represent the data samples as linear combinations of the bases in a given dictionary. It is shown that the convex program associated with LRR solves the subspace clustering problem in the following sense: When the data is clean, we prove that LRR exactly recovers the true subspace structures; when the data are contaminated by outliers, we prove that under certain conditions LRR can exactly recover the row space of the original data and detect the outlier as well; for data corrupted by arbitrary sparse errors, LRR can also approximately recover the row space with theoretical guarantees. Since the subspace membership is provably determined by the row space, these further imply that LRR can perform robust subspace clustering and error correction in an efficient and effective way.",
"Despite many empirical successes of spectral clustering methods—algorithms that cluster points using eigenvectors of matrices derived from the data—there are several unresolved issues. First, there are a wide variety of algorithms that use the eigenvectors in slightly different ways. Second, many of these algorithms have no proof that they will actually compute a reasonable clustering. In this paper, we present a simple spectral clustering algorithm that can be implemented using a few lines of Matlab. Using tools from matrix perturbation theory, we analyze the algorithm, and give conditions under which it can be expected to do well. We also show surprisingly good experimental results on a number of challenging clustering problems."
]
}
|
1811.01045
|
2963487393
|
Subspace clustering algorithms are notorious for their scalability issues because building and processing large affinity matrices are demanding. In this paper, we introduce a method that simultaneously learns an embedding space along subspaces within it to minimize a notion of reconstruction error, thus addressing the problem of subspace clustering in an end-to-end learning paradigm. To achieve our goal, we propose a scheme to update subspaces within a deep neural network. This in turn frees us from the need of having an affinity matrix to perform clustering. Unlike previous attempts, our method can easily scale up to large datasets, making it unique in the context of unsupervised learning with deep architectures. Our experiments show that our method significantly improves the clustering accuracy while enjoying cheaper memory footprints.
|
The literature on true end-to-end learning of subspace clustering is surprisingly limited. Furthermore, and to the best of our knowledge, none of the deep algorithms can handle medium-size datasets, let alone large ones (among all the datasets that have been tested, COIL100 with 7,200 images seems to be the largest). In hybrid methods such as @cite_19 , hand-crafted features (e.g., SIFT @cite_11 or HOG @cite_26 ) are fed into a deep auto-encoder with a sparse subspace clustering (SSC) prior. The final clustering is then obtained by applying k-means or SSC to the learned auto-encoder features. Instead of using hand-crafted features, Deep Subspace Clustering Networks (DSC-NET) @cite_20 employ a deep convolutional auto-encoder to nonlinearly map the images to a latent space, and make use of a self-expressive layer between the encoder and the decoder to learn the affinities between all the data points. By learning the affinity matrix within the neural network, state-of-the-art results on several traditional small datasets are reported in @cite_20 . Nevertheless, since it relies on the whole dataset to create the affinity matrix, DSC-NET cannot scale to large datasets.
|
{
"cite_N": [
"@cite_19",
"@cite_26",
"@cite_20",
"@cite_11"
],
"mid": [
"2571899125",
"2161969291",
"2963365397",
"2151103935"
],
"abstract": [
"Subspace clustering aims to cluster unlabeled samples into multiple groups by implicitly seeking a subspace to fit each group. Most of existing methods are based on a shallow linear model, which may fail in handling data with nonlinear structure. In this paper, we propose a novel subspace clustering method -- deeP subspAce clusteRing with sparsiTY prior (PARTY) -- based on a new deep learning architecture. PARTY explicitly learns to progressively transform input data into nonlinear latent space and to be adaptive to the local and global subspace structure simultaneously. In particular, considering local structure, PARTY learns representation for the input data with minimal reconstruction error. Moreover, PARTY incorporates a prior sparsity information into the hidden representation learning to preserve the sparse reconstruction relation over the whole data set. To the best of our knowledge, PARTY is the first deep learning based subspace clustering method. Extensive experiments verify the effectiveness of our method.",
"We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.",
"We present a novel deep neural network architecture for unsupervised subspace clustering. This architecture is built upon deep auto-encoders, which non-linearly map the input data into a latent space. Our key idea is to introduce a novel self-expressive layer between the encoder and the decoder to mimic the \"self-expressiveness\" property that has proven effective in traditional subspace clustering. Being differentiable, our new self-expressive layer provides a simple but effective way to learn pairwise affinities between all data points through a standard back-propagation procedure. Being nonlinear, our neural-network based method is able to cluster data points having complex (often nonlinear) structures. We further propose pre-training and fine-tuning strategies that let us effectively learn the parameters of our subspace clustering networks. Our experiments show that the proposed method significantly outperforms the state-of-the-art unsupervised subspace clustering methods.",
"This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance."
]
}
|
1811.01045
|
2963487393
|
Subspace clustering algorithms are notorious for their scalability issues because building and processing large affinity matrices are demanding. In this paper, we introduce a method that simultaneously learns an embedding space along with subspaces within it to minimize a notion of reconstruction error, thus addressing the problem of subspace clustering in an end-to-end learning paradigm. To achieve our goal, we propose a scheme to update subspaces within a deep neural network. This in turn frees us from the need of having an affinity matrix to perform clustering. Unlike previous attempts, our method can easily scale up to large datasets, making it unique in the context of unsupervised learning with deep architectures. Our experiments show that our method significantly improves the clustering accuracy while enjoying cheaper memory footprints.
|
SSC by Orthogonal Matching Pursuit (SSC-OMP) @cite_8 is probably the only subspace clustering method that could be considered "scalable". The main idea is to replace the large-scale convex optimization procedure with the OMP algorithm when constructing the affinity matrix. That said, SSC-OMP still makes use of spectral clustering and hence fails to truly push subspace clustering to large-scale datasets.
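For intuition, the OMP construction that replaces the convex program can be sketched in a few lines: each point greedily selects a handful of other points whose span best reconstructs it, and the resulting sparse coefficients define the affinity. The following NumPy sketch is illustrative only (fixed sparsity budget, spectral step omitted) and not the authors' implementation:

```python
import numpy as np

def omp_self_expression(X, k_nonzero=3, tol=1e-6):
    """Sparse self-expressive coefficients via Orthogonal Matching Pursuit.

    X: (d, n) array whose columns are data points. Returns an (n, n)
    matrix C with zero diagonal such that Xn[:, j] ~= Xn @ C[:, j],
    where Xn is the column-wise l2-normalized data.
    """
    d, n = X.shape
    Xn = X / (np.linalg.norm(X, axis=0, keepdims=True) + 1e-12)
    C = np.zeros((n, n))
    for j in range(n):
        residual = Xn[:, j].copy()
        support = []
        for _ in range(k_nonzero):
            corr = np.abs(Xn.T @ residual)
            corr[j] = -np.inf              # forbid the trivial self-representation
            corr[support] = -np.inf        # do not pick the same atom twice
            support.append(int(np.argmax(corr)))
            # Re-fit all coefficients on the current support (the "orthogonal" step)
            coef, *_ = np.linalg.lstsq(Xn[:, support], Xn[:, j], rcond=None)
            residual = Xn[:, j] - Xn[:, support] @ coef
            if np.linalg.norm(residual) < tol:
                break
        C[support, j] = coef
    return C
```

Spectral clustering on the symmetrized affinity |C| + |C|^T would then yield the segmentation; it is precisely this global affinity-plus-spectral step that keeps SSC-OMP from fully scaling.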
|
{
"cite_N": [
"@cite_8"
],
"mid": [
"2963840432"
],
"abstract": [
"Subspace clustering methods based on l1, l2 or nuclear norm regularization have become very popular due to their simplicity, theoretical guarantees and empirical success. However, the choice of the regularizer can greatly impact both theory and practice. For instance, l1 regularization is guaranteed to give a subspace-preserving affinity (i.e., there are no connections between points from different subspaces) under broad conditions (e.g., arbitrary subspaces and corrupted data). However, it requires solving a large scale convex optimization problem. On the other hand, l2 and nuclear norm regularization provide efficient closed form solutions, but require very strong assumptions to guarantee a subspace-preserving affinity, e.g., independent subspaces and uncorrupted data. In this paper we study a subspace clustering method based on orthogonal matching pursuit. We show that the method is both computationally efficient and guaranteed to give a subspace-preserving affinity under broad conditions. Experiments on synthetic data verify our theoretical analysis, and applications in handwritten digit and face clustering show that our approach achieves the best trade off between accuracy and efficiency. Moreover, our approach is the first one to handle 100,000 data points."
]
}
|
1811.01045
|
2963487393
|
Subspace clustering algorithms are notorious for their scalability issues because building and processing large affinity matrices are demanding. In this paper, we introduce a method that simultaneously learns an embedding space along with subspaces within it to minimize a notion of reconstruction error, thus addressing the problem of subspace clustering in an end-to-end learning paradigm. To achieve our goal, we propose a scheme to update subspaces within a deep neural network. This in turn frees us from the need of having an affinity matrix to perform clustering. Unlike previous attempts, our method can easily scale up to large datasets, making it unique in the context of unsupervised learning with deep architectures. Our experiments show that our method significantly improves the clustering accuracy while enjoying cheaper memory footprints.
|
@math -Subspace Clustering (@math -SC) @cite_32 @cite_23 , an iterative method, can be considered a generalization of the @math -means algorithm. @math -SC shows fast convergence behavior and can handle both linear and affine subspaces explicitly. However, @math -SC methods are sensitive to outliers and initialization. Attempts to make @math -SC methods more robust include the work of Zhang et al. @cite_17 and Balzano et al. @cite_34 . In the former, the best @math subspaces are selected from a large number of candidate subspaces using a greedy combinatorial algorithm @cite_17 , making the method robust to data corruptions. Balzano et al. propose a variant of the @math -subspaces method, named @math -GROUSE, which can handle missing data in subspace clustering. However, the resulting methods do not seem to produce results competitive with methods relying on affinity matrices.
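To make the analogy with k-means concrete, here is a toy NumPy sketch of the basic k-subspaces iteration for linear subspaces through the origin: alternately refit each subspace to its assigned points via a truncated SVD, then reassign each point to the subspace with the smallest reconstruction residual. This illustrates the generic scheme, not any of the cited variants; all names are illustrative.

```python
import numpy as np

def k_subspaces(X, n_clusters, dim, n_iter=50, seed=0):
    """Toy k-subspaces clustering (linear subspaces through the origin).

    X: (n, d) data. Returns integer labels (n,) and a list of (d, dim)
    orthonormal bases, one per cluster.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    labels = rng.integers(0, n_clusters, size=n)     # random initial assignment
    bases = [np.linalg.qr(rng.standard_normal((d, dim)))[0]
             for _ in range(n_clusters)]
    for _ in range(n_iter):
        # M-step analogue: refit each basis with the top-`dim` right singular vectors
        for k in range(n_clusters):
            pts = X[labels == k]
            if len(pts) >= dim:
                _, _, Vt = np.linalg.svd(pts, full_matrices=False)
                bases[k] = Vt[:dim].T
        # E-step analogue: assign each point to its closest subspace
        residuals = np.stack(
            [np.linalg.norm(X - (X @ U) @ U.T, axis=1) for U in bases], axis=1)
        new_labels = residuals.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break                                    # converged
        labels = new_labels
    return labels, bases
```

As with k-means, convergence is fast but the result depends on the random initialization and degrades with outliers, which is exactly the weakness the robust variants above target.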
|
{
"cite_N": [
"@cite_34",
"@cite_32",
"@cite_23",
"@cite_17"
],
"mid": [
"2162991150",
"1606778734",
"2118858274",
"2004744336"
],
"abstract": [
"Linear subspace models have recently been successfully employed to model highly incomplete high-dimensional data, but they are sometimes too restrictive to model the data well. Modeling data as a union of subspaces gives more flexibility and leads to the problem of Subspace Clustering, or clustering vectors into groups that lie in or near the same subspace. Low-rank matrix completion allows one to estimate a single subspace from incomplete data, and this work has recently been extended for the union of subspaces problem [3]. However, the algorithm analyzed there is computationally demanding. Here we present a fast algorithm that combines GROUSE, an incremental matrix completion algorithm, and k-subspaces, the alternating minimization heuristic for solving the subspace clustering problem. k-GROUSE is two orders of magnitude faster than the algorithm proposed in [3] and relies on a slightly more general projection theorem which we present here.",
"Recently, Bradley and Mangasarian studied the problem of finding the nearest plane to m given points in ℝn in the least square sense. They showed that the problem reduces to finding the least eigenvalue and associated eigenvector of a certain n×n symmetric positive-semidefinite matrix. We extend this result to the general problem of finding the nearest q-flat to m points, with 0≤q≤n−1.",
"In many applications it is desirable to cluster high dimensional data along various subspaces, which we refer to as projective clustering. We propose a new objective function for projective clustering, taking into account the inherent trade-off between the dimension of a subspace and the induced clustering error. We then present an extension of the k-means clustering algorithm for projective clustering in arbitrary subspaces, and also propose techniques to avoid local minima. Unlike previous algorithms, ours can choose the dimension of each cluster independently and automatically. Furthermore, experimental results show that our algorithm is significantly more accurate than the previous approaches.",
"We present a simple and fast geometric method for modeling data by a union of affine subspaces. The method begins by forming a collection of local best-fit affine subspaces, i.e., subspaces approximating the data in local neighborhoods. The correct sizes of the local neighborhoods are determined automatically by the Jones' β 2 numbers (we prove under certain geometric conditions that our method finds the optimal local neighborhoods). The collection of subspaces is further processed by a greedy selection procedure or a spectral method to generate the final model. We discuss applications to tracking-based motion segmentation and clustering of faces under different illuminating conditions. We give extensive experimental evidence demonstrating the state of the art accuracy and speed of the suggested algorithms on these problems and also on synthetic hybrid linear data as well as the MNIST handwritten digits data; and we demonstrate how to use our algorithms for fast determination of the number of affine subspaces."
]
}
|
1811.00945
|
2899513582
|
To achieve the long-term goal of machines being able to engage humans in conversation, our models should be engaging. We focus on communication grounded in images, whereby a dialogue is conducted based on a given photo, a setup that is naturally engaging to humans (, 2014). We collect a large dataset of grounded human-human conversations, where humans are asked to play the role of a given personality, as the use of personality in conversation has also been shown to be engaging (, 2018). Our dataset, Image-Chat, consists of 202k dialogues and 401k utterances over 202k images using 215 possible personality traits. We then design a set of natural architectures using state-of-the-art image and text representations, considering various ways to fuse the components. Automatic metrics and human evaluations show the efficacy of our approach, in particular where our best performing model is preferred over human conversationalists 47.7% of the time.
|
The majority of work in dialogue is not grounded in perception, e.g., much recent work explores sequence-to-sequence models or retrieval models for goal-directed or chit-chat tasks. While these tasks are text-based only, many of the techniques developed can likely be transferred for use in multimodal systems, for example using state-of-the-art transformer representations for text @cite_0 as a sub-component.
|
{
"cite_N": [
"@cite_0"
],
"mid": [
"2795571593"
],
"abstract": [
"Computer-based conversational agents are becoming ubiquitous. However, for these systems to be engaging and valuable to the user, they must be able to express emotion, in addition to providing informative responses. Humans rely on much more than language during conversations; visual information is key to providing context. We present the first example of an image-grounded conversational agent using visual sentiment, facial expression and scene features. We show that key qualities of the generated dialogue can be manipulated by the features used for training the agent. We evaluate our model on a large and very challenging real-world dataset of conversations from social media (Twitter). The image-grounding leads to significantly more informative, emotional and specific responses, and the exact qualities can be tuned depending on the image features used. Furthermore, our model improves the objective quality of dialogue responses when evaluated on standard natural language metrics."
]
}
|
1811.00945
|
2899513582
|
To achieve the long-term goal of machines being able to engage humans in conversation, our models should be engaging. We focus on communication grounded in images, whereby a dialogue is conducted based on a given photo, a setup that is naturally engaging to humans (, 2014). We collect a large dataset of grounded human-human conversations, where humans are asked to play the role of a given personality, as the use of personality in conversation has also been shown to be engaging (, 2018). Our dataset, Image-Chat, consists of 202k dialogues and 401k utterances over 202k images using 215 possible personality traits. We then design a set of natural architectures using state-of-the-art image and text representations, considering various ways to fuse the components. Automatic metrics and human evaluations show the efficacy of our approach, in particular where our best performing model is preferred over human conversationalists 47.7% of the time.
|
In the area of language and vision, one of the most widely studied tasks is image captioning, which involves producing a single-turn utterance given an image. This typically means generating a descriptive sentence for the input image, in contrast to producing a conversational utterance as in dialogue. Popular datasets include COCO and Flickr30k. Again, a variety of sequence-to-sequence and retrieval models have been applied. These tasks measure the ability of models to understand the content of an image, but not to carry out an engaging conversation grounded in perception. Some works have extended image captioning from being purely factual towards more engaging captions by incorporating style and personality while still being single turn, e.g. . In particular, the work of @cite_5 builds a large dataset involving personality-based image captions. Our work builds upon this dataset and extends it to multi-turn dialogue.
|
{
"cite_N": [
"@cite_5"
],
"mid": [
"2898609520"
],
"abstract": [
"Standard image captioning tasks such as COCO and Flickr30k are factual, neutral in tone and (to a human) state the obvious (e.g., \"a man playing a guitar\"). While such tasks are useful to verify that a machine understands the content of an image, they are not engaging to humans as captions. With this in mind we define a new task, Personality-Captions, where the goal is to be as engaging to humans as possible by incorporating controllable style and personality traits. We collect and release a large dataset of 201,858 of such captions conditioned over 215 possible traits. We build models that combine existing work from (i) sentence representations (, 2018) with Transformers trained on 1.7 billion dialogue examples; and (ii) image representations (, 2018) with ResNets trained on 3.5 billion social media images. We obtain state-of-the-art performance on Flickr30k and COCO, and strong performance on our new task. Finally, online evaluations validate that our task and models are engaging to humans, with our best model close to human performance."
]
}
|
1811.01075
|
2972868728
|
Next generation Unmanned Aerial Vehicles (UAVs) must reliably avoid moving obstacles. Existing dynamic collision avoidance methods are effective where obstacle trajectories are linear or known, but such assumptions do not hold in many real-world UAV applications. We propose an efficient method of predicting an obstacle's motion based only on recent observations, via online training of an LSTM neural network. Given such predictions, we define a Nonlinear Probabilistic Velocity Obstacle (NPVO), which can be used to select a velocity that is collision-free with a given probability. We take a step towards formal verification of our approach, using statistical model checking to approximate the probability that our system will mispredict an obstacle's motion. Given such a probability, we prove upper bounds on the probability of collision in multi-agent and reciprocal collision avoidance scenarios. Furthermore, we demonstrate in simulation that our method avoids collisions where state-of-the-art methods fail.
|
We propose a novel algorithm to predict the motion of moving obstacles via online training of an LSTM Recurrent Neural Network (RNN) @cite_5 , using dropout to obtain uncertainty estimates over these predictions @cite_19 @cite_17 . To our knowledge, this is the first work proposing an online obstacle motion prediction system for collision avoidance without a priori environmental knowledge or an extensive offline training set.
|
{
"cite_N": [
"@cite_19",
"@cite_5",
"@cite_17"
],
"mid": [
"2964059111",
"",
"2963266340"
],
"abstract": [
"Deep learning tools have gained tremendous attention in applied machine learning. However such tools for regression and classification do not capture model uncertainty. In comparison, Bayesian models offer a mathematically grounded framework to reason about model uncertainty, but usually come with a prohibitive computational cost. In this paper we develop a new theoretical framework casting dropout training in deep neural networks (NNs) as approximate Bayesian inference in deep Gaussian processes. A direct result of this theory gives us tools to model uncertainty with dropout NNs - extracting information from existing models that has been thrown away so far. This mitigates the problem of representing uncertainty in deep learning without sacrificing either computational complexity or test accuracy. We perform an extensive study of the properties of dropout's uncertainty. Various network architectures and nonlinearities are assessed on tasks of regression and classification, using MNIST as an example. We show a considerable improvement in predictive log-likelihood and RMSE compared to existing state-of-the-art methods, and finish by using dropout's uncertainty in deep reinforcement learning.",
"",
"Recurrent neural networks (RNNs) stand at the forefront of many recent developments in deep learning. Yet a major difficulty with these models is their tendency to overfit, with dropout shown to fail when applied to recurrent layers. Recent results at the intersection of Bayesian modelling and deep learning offer a Bayesian interpretation of common deep learning techniques such as dropout. This grounding of dropout in approximate Bayesian inference suggests an extension of the theoretical results, offering insights into the use of dropout with RNN models. We apply this new variational inference based dropout technique in LSTM and GRU models, assessing it on language modelling and sentiment analysis tasks. The new approach outperforms existing techniques, and to the best of our knowledge improves on the single model state-of-the-art in language modelling with the Penn Treebank (73.4 test perplexity). This extends our arsenal of variational tools in deep learning."
]
}
|
1811.01075
|
2972868728
|
Next generation Unmanned Aerial Vehicles (UAVs) must reliably avoid moving obstacles. Existing dynamic collision avoidance methods are effective where obstacle trajectories are linear or known, but such assumptions do not hold in many real-world UAV applications. We propose an efficient method of predicting an obstacle's motion based only on recent observations, via online training of an LSTM neural network. Given such predictions, we define a Nonlinear Probabilistic Velocity Obstacle (NPVO), which can be used to select a velocity that is collision-free with a given probability. We take a step towards formal verification of our approach, using statistical model checking to approximate the probability that our system will mispredict an obstacle's motion. Given such a probability, we prove upper bounds on the probability of collision in multi-agent and reciprocal collision avoidance scenarios. Furthermore, we demonstrate in simulation that our method avoids collisions where state-of-the-art methods fail.
|
Given probabilistic predictions of obstacle movement, we propose an uncertainty-aware multi-agent dynamic collision avoidance algorithm based on Nonlinear Probabilistic Velocity Obstacles (NPVO), a novel extension of existing velocity obstacle notions. These include Probabilistic Velocity Obstacles, which provide an uncertainty-aware policy in static environments @cite_1 , and Nonlinear Velocity Obstacles @cite_10 , which can guarantee collision avoidance for obstacles moving along known trajectories. Our NPVO, which considers the future behavior of an obstacle in a probabilistic sense, is a generalization of these notions.
|
{
"cite_N": [
"@cite_10",
"@cite_1"
],
"mid": [
"2120794503",
"2157304242"
],
"abstract": [
"This paper generalizes the concept of velocity obstacles given by (1998) to obstacles moving along arbitrary trajectories. We introduce the nonlinear velocity obstacle, which takes into account the shape, velocity and path curvature of the moving obstacle. The nonlinear v-obstacle allows selecting a single avoidance maneuver (if one exists) that avoids any number of obstacles moving on any known trajectories. For unknown trajectories, the nonlinear v-obstacles can be used to generate local avoidance maneuvers based on the current velocity and path curvature of the moving obstacle. This elevates the planning strategy to a second order method, compared to the first order avoidance using the linear v-obstacle, and zero order avoidance using only position information. Analytic expressions for the nonlinear v-obstacle are derived for general trajectories in the plane. The nonlinear v-obstacles are demonstrated in a complex traffic example.",
"Most of present work for autonomous navigation in dynamic environment doesn't take into account the dynamics of the obstacles or the limits of the perception system. To face these problems we applied the probabilistic velocity obstacle (PVO) approach (Kluge and Prassler, 2004) to a dynamic occupancy grid. The paper presents a method to estimate the probability of collision where uncertainty in position, shape and velocity of the obstacles, occlusions and limited sensor range contribute directly to the computation. A simple navigation algorithm is then presented in order to apply the method to collision avoidance and goal driven control. Simulation results show that the robot is able to adapt its behaviour to the level of available knowledge and navigate safely among obstacles with a constant linear velocity. Extensions to non-linear, non-constant velocities are proposed."
]
}
|
1811.01075
|
2972868728
|
Next generation Unmanned Aerial Vehicles (UAVs) must reliably avoid moving obstacles. Existing dynamic collision avoidance methods are effective where obstacle trajectories are linear or known, but such assumptions do not hold in many real-world UAV applications. We propose an efficient method of predicting an obstacle's motion based only on recent observations, via online training of an LSTM neural network. Given such predictions, we define a Nonlinear Probabilistic Velocity Obstacle (NPVO), which can be used to select a velocity that is collision-free with a given probability. We take a step towards formal verification of our approach, using statistical model checking to approximate the probability that our system will mispredict an obstacle's motion. Given such a probability, we prove upper bounds on the probability of collision in multi-agent and reciprocal collision avoidance scenarios. Furthermore, we demonstrate in simulation that our method avoids collisions where state-of-the-art methods fail.
|
The state-of-the-art "Optimal Reciprocal Collision Avoidance" (ORCA) algorithm uses reciprocal velocity obstacles to control multiple agents in unstructured environments @cite_4 . This approach is popular due to its ease of implementation and its guarantee that a collision-free trajectory will be found for @math time, if one is available. However, ORCA assumes that all obstacles in the workspace are either static or operating according to the same policy. Violations of this assumption can lead to catastrophic behavior, as shown in Figure . We demonstrate in simulation that our NPVO approach is able to avoid such obstacles, and prove bounds on our algorithm's safety and scalability in reciprocal and multi-agent scenarios.
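For intuition, the linear velocity-obstacle test that such methods build on reduces to a closest-approach computation: a candidate velocity is unsafe if, assuming both agents move at constant velocity, the inter-agent distance drops below the combined radius within the time horizon. A toy sketch under those assumptions (disc-shaped agents; names are illustrative, not from the cited implementations):

```python
import numpy as np

def in_velocity_obstacle(p_a, p_b, v_a, v_b, radius, tau):
    """True if choosing velocity v_a collides with an obstacle at p_b moving
    at constant velocity v_b within the time horizon tau.

    radius is the sum of both agents' radii (disc-shaped agents).
    """
    rel_p = np.asarray(p_b, float) - np.asarray(p_a, float)
    rel_v = np.asarray(v_a, float) - np.asarray(v_b, float)
    # Center distance at time t is ||rel_p - t * rel_v||; minimize over [0, tau].
    speed_sq = rel_v @ rel_v
    if speed_sq < 1e-12:                      # equal velocities: distance is constant
        return bool(np.linalg.norm(rel_p) <= radius)
    t_star = np.clip((rel_p @ rel_v) / speed_sq, 0.0, tau)
    closest = rel_p - t_star * rel_v
    return bool(np.linalg.norm(closest) <= radius)
```

ORCA layers reciprocity on top of such tests by having every agent take half of the avoidance responsibility; the failure mode discussed above appears precisely when an obstacle does not follow the assumed constant-velocity or reciprocal policy.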
|
{
"cite_N": [
"@cite_4"
],
"mid": [
"192919555"
],
"abstract": [
"In this paper, we present a formal approach to reciprocal n-body collision avoidance, where multiple mobile robots need to avoid collisions with each other while moving in a common workspace. In our formulation, each robot acts fully independently, and does not communicate with other robots. Based on the definition of velocity obstacles [5], we derive sufficient conditions for collision-free motion by reducing the problem to solving a low-dimensional linear program. We test our approach on several dense and complex simulation scenarios involving thousands of robots and compute collision-free actions for all of them in only a few milliseconds. To the best of our knowledge, this method is the first that can guarantee local collision-free motion for a large number of robots in a cluttered workspace."
]
}
|
1811.01075
|
2972868728
|
Next generation Unmanned Aerial Vehicles (UAVs) must reliably avoid moving obstacles. Existing dynamic collision avoidance methods are effective where obstacle trajectories are linear or known, but such assumptions do not hold in many real-world UAV applications. We propose an efficient method of predicting an obstacle's motion based only on recent observations, via online training of an LSTM neural network. Given such predictions, we define a Nonlinear Probabilistic Velocity Obstacle (NPVO), which can be used to select a velocity that is collision-free with a given probability. We take a step towards formal verification of our approach, using statistical model checking to approximate the probability that our system will mispredict an obstacle's motion. Given such a probability, we prove upper bounds on the probability of collision in multi-agent and reciprocal collision avoidance scenarios. Furthermore, we demonstrate in simulation that our method avoids collisions where state-of-the-art methods fail.
|
Finally, we take an important step towards rigorous verification of our framework. Existing results for formal verification of systems based on techniques like LSTM are highly limited @cite_18 @cite_13 @cite_6 , but formal guarantees are of vital importance for safety-critical applications like collision avoidance. We propose a novel statistical model checking formulation to approximate the probability that an obstacle will remain within certain bounds. Along the way, we demonstrate that predictions generated by our algorithm are robust to perception uncertainty in the form of additive Gaussian noise.
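The statistical model checking step described above amounts to Monte Carlo estimation of a property's probability over sampled traces, together with a confidence interval. A minimal sketch of that idea follows; the property and noise model here are placeholders, not the paper's system:

```python
import numpy as np

def smc_probability(holds, n_samples=20000, seed=0):
    """Estimate P(property holds) by sampling traces.

    `holds` maps an rng to True/False for one sampled trace. Returns the
    point estimate and a 95% normal-approximation half-width.
    """
    rng = np.random.default_rng(seed)
    hits = sum(bool(holds(rng)) for _ in range(n_samples))
    p_hat = hits / n_samples
    half_width = 1.96 * np.sqrt(max(p_hat * (1.0 - p_hat), 1e-12) / n_samples)
    return p_hat, half_width

# Placeholder property: a one-step Gaussian prediction error stays inside a bound.
def error_in_bounds(rng, bound=1.96):
    return abs(rng.normal()) < bound
```

An estimated misprediction probability obtained this way is what would feed upper bounds on the probability of collision, as in the approach above.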
|
{
"cite_N": [
"@cite_18",
"@cite_13",
"@cite_6"
],
"mid": [
"2963857521",
"2543296129",
"2594877703"
],
"abstract": [
"Neural networks provide state-of-the-art results for most machine learning tasks. Unfortunately, neural networks are vulnerable to adversarial examples: given an input x and any target classification t, it is possible to find a new input x' that is similar to x but classified as t. This makes it difficult to apply neural networks in security-critical areas. Defensive distillation is a recently proposed approach that can take an arbitrary neural network, and increase its robustness, reducing the success rate of current attacks' ability to find adversarial examples from 95 to 0.5 .In this paper, we demonstrate that defensive distillation does not significantly increase the robustness of neural networks by introducing three new attack algorithms that are successful on both distilled and undistilled neural networks with 100 probability. Our attacks are tailored to three distance metrics used previously in the literature, and when compared to previous adversarial example generation algorithms, our attacks are often much more effective (and never worse). Furthermore, we propose using high-confidence adversarial examples in a simple transferability test we show can also be used to break defensive distillation. We hope our attacks will be used as a benchmark in future defense attempts to create neural networks that resist adversarial examples.",
"Deep neural networks have achieved impressive experimental results in image classification, but can surprisingly be unstable with respect to adversarial perturbations, that is, minimal changes to the input image that cause the network to misclassify it. With potential applications including perception modules and end-to-end controllers for self-driving cars, this raises concerns about their safety. We develop a novel automated verification framework for feed-forward multi-layer neural networks based on Satisfiability Modulo Theory (SMT). We focus on safety of image classification decisions with respect to image manipulations, such as scratches or changes to camera angle or lighting conditions that would result in the same class being assigned by a human, and define safety for an individual decision in terms of invariance of the classification within a small neighbourhood of the original image. We enable exhaustive search of the region by employing discretisation, and propagate the analysis layer by layer. Our method works directly with the network code and, in contrast to existing methods, can guarantee that adversarial examples, if they exist, are found for the given region and family of manipulations. If found, adversarial examples can be shown to human testers and or used to fine-tune the network. We implement the techniques using Z3 and evaluate them on state-of-the-art networks, including regularised and deep learning networks. We also compare against existing techniques to search for adversarial examples and estimate network robustness.",
"Deep neural networks have emerged as a widely used and effective means for tackling complex, real-world problems. However, a major obstacle in applying them to safety-critical systems is the great difficulty in providing formal guarantees about their behavior. We present a novel, scalable, and efficient technique for verifying properties of deep neural networks (or providing counter-examples). The technique is based on the simplex method, extended to handle the non-convex Rectified Linear Unit (ReLU) activation function, which is a crucial ingredient in many modern neural networks. The verification procedure tackles neural networks as a whole, without making any simplifying assumptions. We evaluated our technique on a prototype deep neural network implementation of the next-generation airborne collision avoidance system for unmanned aircraft (ACAS Xu). Results show that our technique can successfully prove properties of networks that are an order of magnitude larger than the largest networks verified using existing methods."
]
}
|
1811.00438
|
2963235042
|
Learning feature detection has been a largely unexplored area when compared to handcrafted feature detection. Recent learning formulations use the covariant constraint in their loss function to learn covariant detectors. However, just learning from the covariant constraint can lead to detection of unstable features. To impart further stability, detectors are trained to extract pre-determined features obtained by hand-crafted detectors. However, in the process they lose the ability to detect novel features. In an attempt to overcome the above limitations, we propose an improved scheme by incorporating covariant constraints in the form of triplets in addition to an affine covariant constraint. We show that using these additional constraints one can learn to detect novel and stable features without using pre-determined features for training. Extensive experiments show our model achieves state-of-the-art performance in repeatability score on well-known datasets such as Vgg-Affine, EF, and Webcam.
|
Detecting interest points in images has been dominated by heuristic methods. These methods consistently identify specific visual structures between images which have undergone transformations. The visual structures are chosen so as to make detection covariant to certain transformations. Hand-crafted heuristic detectors can be classified roughly into two categories based on the visual structure they detect: i) points and ii) blobs. Point-based detectors are covariant towards translation and rotation; examples include Harris @cite_23 and Edge-Foci @cite_1 . Scale and affine covariant versions of Harris @cite_23 are proposed in @cite_7 and @cite_2 respectively. Blob detectors, which include DoG @cite_14 and SURF @cite_15 , are implicitly covariant to scale changes by virtue of using scale-space pyramids for detection. Affine adaptation of blob detection is also proposed in @cite_2 .
|
{
"cite_N": [
"@cite_14",
"@cite_7",
"@cite_1",
"@cite_23",
"@cite_2",
"@cite_15"
],
"mid": [
"2151103935",
"2119747362",
"2083833836",
"2111308925",
"2172188317",
"1677409904"
],
"abstract": [
"This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance.",
"This paper presents a new method for detecting scale invariant interest points. The method is based on two recent results on scale space: (1) Interest points can be adapted to scale and give repeatable results (geometrically stable). (2) Local extrema over scale of normalized derivatives indicate the presence of characteristic local structures. Our method first computes a multi-scale representation for the Harris interest point detector. We then select points at which a local measure (the Laplacian) is maximal over scales. This allows a selection of distinctive points for which the characteristic scale is known. These points are invariant to scale, rotation and translation as well as robust to illumination changes and limited changes of viewpoint. For indexing, the image is characterized by a set of scale invariant points; the scale associated with each point allows the computation of a scale invariant descriptor. Our descriptors are, in addition, invariant to image rotation and affine illumination changes, and robust to small perspective deformations. Experimental results for indexing show an excellent performance up to a scale factor of 4 for a database with more than 5000 images.",
"In this paper, we describe an interest point detector using edge foci. Unlike traditional detectors that compute interest points directly from image intensities, we use normalized intensity edges and their orientations. We hypothesize that detectors based on the presence of oriented edges are more robust to non-linear lighting variations and background clutter than intensity based techniques. Specifically, we detect edge foci, which are points in the image that are roughly equidistant from edges with orientations perpendicular to the point. The scale of the interest point is defined by the distance between the edge foci and the edges. We quantify the performance of our detector using the interest point's repeatability, uniformity of spatial distribution, and the uniqueness of the resulting descriptors. Results are found using traditional datasets and new datasets with challenging non-linear lighting variations and occlusions.",
"The problem we are addressing in Alvey Project MMI149 is that of using computer vision to understand the unconstrained 3D world, in which the viewed scenes will in general contain too wide a diversity of objects for topdown recognition techniques to work. For example, we desire to obtain an understanding of natural scenes, containing roads, buildings, trees, bushes, etc., as typified by the two frames from a sequence illustrated in Figure 1. The solution to this problem that we are pursuing is to use a computer vision system based upon motion analysis of a monocular image sequence from a mobile camera. By extraction and tracking of image features, representations of the 3D analogues of these features can be constructed.",
"In this paper we propose a novel approach for detecting interest points invariant to scale and affine transformations. Our scale and affine invariant detectors are based on the following recent results: (1) Interest points extracted with the Harris detector can be adapted to affine transformations and give repeatable results (geometrically stable). (2) The characteristic scale of a local structure is indicated by a local extremum over scale of normalized derivatives (the Laplacian). (3) The affine shape of a point neighborhood is estimated based on the second moment matrix. Our scale invariant detector computes a multi-scale representation for the Harris interest point detector and then selects points at which a local measure (the Laplacian) is maximal over scales. This provides a set of distinctive points which are invariant to scale, rotation and translation as well as robust to illumination changes and limited changes of viewpoint. The characteristic scale determines a scale invariant region for each point. We extend the scale invariant detector to affine invariance by estimating the affine shape of a point neighborhood. An iterative algorithm modifies location, scale and neighborhood of each point and converges to affine invariant points. This method can deal with significant affine transformations including large scale changes. The characteristic scale and the affine shape of neighborhood determine an affine invariant region for each point. We present a comparative evaluation of different detectors and show that our approach provides better results than existing methods. The performance of our detector is also confirmed by excellent matching results; the image is described by a set of scale and affine invariant descriptors computed on the regions associated with our points.",
"In this paper, we present a novel scale- and rotation-invariant interest point detector and descriptor, coined SURF (Speeded Up Robust Features). It approximates or even outperforms previously proposed schemes with respect to repeatability, distinctiveness, and robustness, yet can be computed and compared much faster. This is achieved by relying on integral images for image convolutions; by building on the strengths of the leading existing detectors and descriptors (in casu, using a Hessian matrix-based measure for the detector, and a distribution-based descriptor); and by simplifying these methods to the essential. This leads to a combination of novel detection, description, and matching steps. The paper presents experimental results on a standard evaluation set, as well as on imagery obtained in the context of a real-life object recognition application. Both show SURF's strong performance."
]
}
|
1811.00438
|
2963235042
|
Learning feature detection has been a largely unexplored area when compared to handcrafted feature detection. Recent learning formulations use the covariant constraint in their loss function to learn covariant detectors. However, just learning from the covariant constraint can lead to detection of unstable features. To impart further stability, detectors are trained to extract pre-determined features obtained by hand-crafted detectors. However, in the process they lose the ability to detect novel features. In an attempt to overcome the above limitations, we propose an improved scheme by incorporating covariant constraints in the form of triplets in addition to an affine covariant constraint. We show that using these additional constraints one can learn to detect novel and stable features without using pre-determined features for training. Extensive experiments show our model achieves state-of-the-art performance in repeatability score on well-known datasets such as Vgg-Affine, EF, and Webcam.
|
There are fewer learning-based approaches compared to hand-crafted ones. The most common line of work involves detecting anchor points based on existing detectors. TILDE @cite_18 is an example of such a detector; it learns to detect DoG points between images taken from the same viewpoint but having drastic illumination differences. A point is assumed to be repeatable if it is detected consistently in most images sharing a common scene. By additionally introducing into the training set locations from those images where these points were not originally detected, TILDE outperforms DoG in terms of repeatability. TaSK @cite_9 and LIFT @cite_17 also detect anchor points based on similar strategies. The downside of such approaches is that the performance of the learned detector depends on the anchor detector used, i.e., for certain transformations the learned detector can also reflect the poor performance of the anchor detector. Another line of work focuses on increasing the repeatability of existing detectors. For instance, FAST-ER @cite_11 improves the repeatability of FAST @cite_4 key-points by optimizing its parameters. In Quad-Network @cite_16 , an unsupervised approach is proposed where patches extracted from two images are assigned a score, and ranking consistency between corresponding patches helps in improving repeatability.
|
{
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_9",
"@cite_17",
"@cite_16",
"@cite_11"
],
"mid": [
"1945298332",
"2584333262",
"2100737973",
"2320444803",
"2556970001",
"2170282673"
],
"abstract": [
"We introduce a learning-based approach to detect repeatable keypoints under drastic imaging changes of weather and lighting conditions to which state-of-the-art keypoint detectors are surprisingly sensitive. We first identify good keypoint candidates in multiple training images taken from the same viewpoint. We then train a regressor to predict a score map whose maxima are those points so that they can be found by simple non-maximum suppression. As there are no standard datasets to test the influence of these kinds of changes, we created our own, which we will make publicly available. We will show that our method significantly outperforms the state-of-the-art methods in such challenging conditions, while still achieving state-of-the-art performance on untrained standard datasets.",
"Where feature points are used in real-time frame-rate applications, a high-speed feature detector is necessary. Feature detectors such as SIFT (DoG), Harris and SUSAN are good methods which yield high quality features, however they are too computationally intensive for use in real-time applications of any complexity. Here we show that machine learning can be used to derive a feature detector which can fully process live PAL video using less than 7% of the available processing time. By comparison neither the Harris detector (120%) nor the detection stage of SIFT (300%) can operate at full frame rate. Clearly a high-speed detector is of limited use if the features produced are unsuitable for downstream processing. In particular, the same scene viewed from two different positions should yield features which correspond to the same real-world 3D locations[1]. Hence the second contribution of this paper is a comparison of corner detectors based on this criterion applied to 3D scenes. This comparison supports a number of claims made elsewhere concerning existing corner detectors. Further, contrary to our initial expectations, we show that despite being principally constructed for speed, our detector significantly outperforms existing feature detectors according to this criterion.",
"In this paper, we show that a better performance can be achieved by training a keypoint detector to only find those points that are suitable to the needs of the given task. We demonstrate our approach in an urban environment, where the keypoint detector should focus on stable man-made structures and ignore objects that undergo natural changes such as vegetation and clouds. We use WaldBoost learning with task specific training samples in order to train a keypoint detector with this capability. We show that our approach generalizes to a broad class of problems where the task is known beforehand.",
"We introduce a novel Deep Network architecture that implements the full feature point handling pipeline, that is, detection, orientation estimation, and feature description. While previous works have successfully tackled each one of these problems individually, we show how to learn to do all three in a unified manner while preserving end-to-end differentiability. We then demonstrate that our Deep pipeline outperforms state-of-the-art methods on a number of benchmark datasets, without the need of retraining.",
"Several machine learning tasks require to represent the data using only a sparse set of interest points. An ideal detector is able to find the corresponding interest points even if the data undergo a transformation typical for a given domain. Since the task is of high practical interest in computer vision, many hand-crafted solutions were proposed. In this paper, we ask a fundamental question: can we learn such detectors from scratch? Since it is often unclear what points are interesting, human labelling cannot be used to find a truly unbiased solution. Therefore, the task requires an unsupervised formulation. We are the first to propose such a formulation: training a neural network to rank points in a transformation-invariant manner. Interest points are then extracted from the top/bottom quantiles of this ranking. We validate our approach on two tasks: standard RGB image interest point detection and challenging cross-modal interest point detection between RGB and depth images. We quantitatively show that our unsupervised method performs better than or on par with baselines.",
"The repeatability and efficiency of a corner detector determines how likely it is to be useful in a real-world application. The repeatability is important because the same scene viewed from different positions should yield features which correspond to the same real-world 3D locations. The efficiency is important because this determines whether the detector combined with further processing can operate at frame rate. Three advances are described in this paper. First, we present a new heuristic for feature detection and, using machine learning, we derive a feature detector from this which can fully process live PAL video using less than 5 percent of the available processing time. By comparison, most other detectors cannot even operate at frame rate (Harris detector 115 percent, SIFT 195 percent). Second, we generalize the detector, allowing it to be optimized for repeatability, with little loss of efficiency. Third, we carry out a rigorous comparison of corner detectors based on the above repeatability criterion applied to 3D scenes. We show that, despite being principally constructed for speed, on these stringent tests, our heuristic detector significantly outperforms existing feature detectors. Finally, the comparison demonstrates that using machine learning produces significant improvements in repeatability, yielding a detector that is both very fast and of very high quality."
]
}
|
1811.00438
|
2963235042
|
Learning feature detection has been a largely unexplored area when compared to handcrafted feature detection. Recent learning formulations use the covariant constraint in their loss function to learn covariant detectors. However, just learning from the covariant constraint can lead to detection of unstable features. To impart further stability, detectors are trained to extract pre-determined features obtained by hand-crafted detectors. However, in the process they lose the ability to detect novel features. In an attempt to overcome the above limitations, we propose an improved scheme by incorporating covariant constraints in the form of triplets in addition to an affine covariant constraint. We show that using these additional constraints one can learn to detect novel and stable features without using pre-determined features for training. Extensive experiments show our model achieves state-of-the-art performance in repeatability score on well-known datasets such as Vgg-Affine, EF, and Webcam.
|
More recently, Lenc et al. @cite_10 proposed a Siamese CNN based method to learn covariant detectors. In this method, two patches related by a transformation are fed to the network, which regresses two points (one for each patch). Applying a loss ensuring that the two regressed points differ by the same transformation lets the network detect points covariant to that transformation. However, a major drawback of the method is that it does not ensure the network regresses to a stable and repeatable feature point, which can lead to the CNN model being trained sub-optimally. In order to alleviate this drawback, Zhang et al. @cite_13 proposed a method using standard patches to ensure that the network regresses to the keypoints. The standard patches are centered around a feature detected by detectors such as TILDE-P24 @cite_18 . Though this method of standard patches is generic in nature, the only transformation extensively studied is translation.
|
{
"cite_N": [
"@cite_13",
"@cite_18",
"@cite_10"
],
"mid": [
"2749224737",
"1945298332",
"2345643369"
],
"abstract": [
"Robust covariant local feature detectors are important for detecting local features that are (1) discriminative of the image content and (2) can be repeatably detected at consistent locations when the image undergoes diverse transformations. Such detectors are critical for applications such as image search and scene reconstruction. Many learning-based local feature detectors address one of these two problems while overlooking the other. In this work, we propose a novel learning-based method to simultaneously address both issues. Specifically, we extend the covariant constraint proposed by Lenc and Vedaldi [8] by defining the concepts of standard patch and canonical feature and leverage these to train a novel robust covariant detector. We show that the introduction of these concepts greatly simplifies the learning stage of the covariant detector, and also makes the detector much more robust. Extensive experiments show that our method outperforms previous hand-crafted and learning-based detectors by large margins in terms of repeatability.",
"We introduce a learning-based approach to detect repeatable keypoints under drastic imaging changes of weather and lighting conditions to which state-of-the-art keypoint detectors are surprisingly sensitive. We first identify good keypoint candidates in multiple training images taken from the same viewpoint. We then train a regressor to predict a score map whose maxima are those points so that they can be found by simple non-maximum suppression. As there are no standard datasets to test the influence of these kinds of changes, we created our own, which we will make publicly available. We will show that our method significantly outperforms the state-of-the-art methods in such challenging conditions, while still achieving state-of-the-art performance on untrained standard datasets.",
"Local covariant feature detection, namely the problem of extracting viewpoint invariant features from images, has so far largely resisted the application of machine learning techniques. In this paper, we propose the first fully general formulation for learning local covariant feature detectors. We propose to cast detection as a regression problem, enabling the use of powerful regressors such as deep neural networks. We then derive a covariance constraint that can be used to automatically learn which visual structures provide stable anchors for local feature detection. We support these ideas theoretically, proposing a novel analysis of local features in term of geometric transformations, and we show that all common and many uncommon detectors can be derived in this framework. Finally, we present empirical results on translation and rotation covariant detectors on standard feature benchmarks, showing the power and flexibility of the framework."
]
}
|
1811.00487
|
2898690403
|
In mobile wireless sensor networks (MWSNs), each sensor has the ability not only to sense and transmit data but also to move to some specific location. Because the movement of sensors consumes much more power than that in sensing and communication, the problem of scheduling mobile sensors to cover all targets and maintain network connectivity such that the total movement distance of mobile sensors is minimized has received a great deal of attention. However, in reality, due to a limited budget or numerous targets, mobile sensors may not be enough to cover all targets or form a connected network. Therefore, targets must be weighted by their importance. The more important a target, the higher the weight of the target. A more general problem for target coverage and network connectivity, termed the Maximum Weighted Target Coverage and Sensor Connectivity with Limited Mobile Sensors (MWTCSCLMS) problem, is studied. In this paper, an approximation algorithm, termed the weighted-maximum-coverage-based algorithm (WMCBA), is proposed for the subproblem of the MWTCSCLMS problem. Based on the WMCBA, the Steiner-tree-based algorithm (STBA) is proposed for the MWTCSCLMS problem. Simulation results demonstrate that the STBA provides better performance than the other methods.
|
The coverage problem is an important issue in a wireless sensor network, in which each sensor has the mission of monitoring a region through its sensing range. Different applications have various coverage requirements @cite_13 @cite_21 @cite_24 @cite_1 . In @cite_13 , the area coverage problem is discussed, namely how to deploy sensors to form a wireless sensor network such that a particular area is fully covered and network connectivity is ensured. In @cite_21 , the area coverage problem is studied with the goal of deploying sensors to form a connected wireless sensor network even if unpredicted obstacles exist in the sensing field. In @cite_24 , the problem of constructing a minimum-size connected wireless sensor network such that the critical grids in a sensing field are all covered by sensors is addressed. In @cite_1 , the barrier coverage problem, that is, the problem of deploying sensors to construct a barrier such that invaders will be detected by at least one sensor, is studied.
|
{
"cite_N": [
"@cite_24",
"@cite_1",
"@cite_21",
"@cite_13"
],
"mid": [
"2145869511",
"2011532811",
"",
"1992828312"
],
"abstract": [
"Wireless sensor networks are formed by connected sensors that each have the ability to collect, process, and store environmental information as well as communicate with others via inter-sensor wireless communication. These characteristics allow wireless sensor networks to be used in a wide range of applications. In many applications, such as environmental monitoring, battlefield surveillance, nuclear, biological, and chemical (NBC) attack detection, and so on, critical areas and common areas must be distinguished adequately, and it is more practical and efficient to monitor critical areas rather than common areas if the sensor field is large, or the available budget cannot provide enough sensors to fully cover the entire sensor field. This provides the motivation for the problem of deploying the minimum number of sensors on grid points to construct a connected wireless sensor network able to fully cover critical square grids, termed CRITICAL-SQUARE-GRID COVERAGE. In this paper, we propose an approximation algorithm for CRITICAL-SQUARE-GRID COVERAGE. Simulations show that the proposed algorithm provides a good solution for CRITICAL-SQUARE-GRID COVERAGE.",
"Proposing a new approach to barrier coverage in wireless sensor networks. Modeling barrier coverage with a stochastic edge-weighted graph. Finding an optimal solution for the network stochastic edge-weighted coverage graph. Comparing the performance of the proposed method with the greedy and optimal methods. Barrier coverage is one of the most important applications of wireless sensor networks. It is used to detect mobile objects entering the boundary of a sensor network field. Energy efficiency is one of the main concerns in barrier coverage for wireless sensor networks and its solution can be widely used in sensor barrier applications, such as intrusion detectors and border security. In this work, we take energy efficiency as the objective of the study on barrier coverage. The cost in the present paper can be any performance measurement and normally is defined as any resource which is consumed by the sensor barrier. In this paper, the barrier coverage problem is first modeled based on a stochastic coverage graph. Then, a distributed learning automata-based method is proposed to find a near optimal solution to the stochastic barrier coverage problem. The stochastic barrier coverage problem seeks to find the minimum required number of sensor nodes to construct a sensor barrier path. To study the performance of the proposed method, computer simulations are conducted. The simulation results show that the proposed algorithm significantly outperforms the greedy-based algorithm and the optimal method in terms of the number of network barrier paths.",
"",
"This paper introduces an optimal deployment algorithm of sensors in a given region to provide desired coverage and connectivity for a wireless sensor network. Our paper utilizes two separate procedures for covering different regions of a symmetrical rectangular area. The proposed method divides the given area of interest into two distinct sub-regions termed as the central and edge regions. In each region, a unique scheme is used to determine the number and location of sensors required to monitor and completely cover the region keeping the connectivity and coverage ranges of the sensors, their hardware specification and the dimensions of the region concerned as the constraints. Our scheme reduces the overhead in determining the position of sensors for deployment by following a coverage and connectivity algorithm of lesser complexity rather than those present in related schemes. Finally, we compare our deployment scheme with the interpolation scheme of [2] in regions of different dimensions and different coverage and connectivity levels with different sensing ranges of the sensors to show our cost efficiency over the latter."
]
}
|
1811.00487
|
2898690403
|
In mobile wireless sensor networks (MWSNs), each sensor has the ability not only to sense and transmit data but also to move to some specific location. Because the movement of sensors consumes much more power than that in sensing and communication, the problem of scheduling mobile sensors to cover all targets and maintain network connectivity such that the total movement distance of mobile sensors is minimized has received a great deal of attention. However, in reality, due to a limited budget or numerous targets, mobile sensors may not be enough to cover all targets or form a connected network. Therefore, targets must be weighted by their importance. The more important a target, the higher the weight of the target. A more general problem for target coverage and network connectivity, termed the Maximum Weighted Target Coverage and Sensor Connectivity with Limited Mobile Sensors (MWTCSCLMS) problem, is studied. In this paper, an approximation algorithm, termed the weighted-maximum-coverage-based algorithm (WMCBA), is proposed for the subproblem of the MWTCSCLMS problem. Based on the WMCBA, the Steiner-tree-based algorithm (STBA) is proposed for the MWTCSCLMS problem. Simulation results demonstrate that the STBA provides better performance than the other methods.
|
In MWSNs, when mobile sensors are randomly deployed in a sensing field, their mobility can be exploited to improve the coverage quality and the network connectivity. In @cite_8 , a survey on utilizing node mobility to extend the network lifetime is provided. In @cite_6 , algorithms are proposed to dispatch mobile sensors to designated locations such that the area of interest can be @math -covered. In @cite_22 , when mobile sensors have different sensing ranges, algorithms based on the multiplicatively weighted Voronoi diagram are proposed to find coverage holes so that the coverage area can be improved. In @cite_10 , an algorithm is proposed to relocate the minimum number of redundant mobile sensors to maintain connectivity between a region of interest and a center of interest in which a particular event occurs, where the mobile sensors are initially deployed in the region of interest and the center of interest lies outside it. In @cite_15 , a distributed algorithm is proposed to move mobile sensors to cover all targets and satisfy the minimum allowed detection probability such that the network lifetime is maximized.
|
{
"cite_N": [
"@cite_22",
"@cite_8",
"@cite_6",
"@cite_15",
"@cite_10"
],
"mid": [
"1987044550",
"2134887250",
"2102099586",
"2013227485",
"2004462085"
],
"abstract": [
"",
"Sensors are used to monitor and control the physical environment. In mobile sensor networks, nodes can self-propel via springs, wheels, or they can be attached to transporters, such as vehicles. Sensors have limited energy supply and the sensor network is expected to be functional for a long time, so optimizing the energy consumption to prolong the network lifetime becomes an important issue. In static sensor networks, if sensors are uniformly deployed, sensors near the sinks die first. This is because besides sending their own sensed data, they also participate in forwarding data on behalf of other sensors located farther away from the sink. This uneven energy consumption results in network partitioning and limitation of the network lifetime. In this paper, we survey mechanisms that utilize nodes' mobility to extend the network lifetime. We divide these mechanisms into three groups: mechanisms using mobile sinks, mechanisms using mobile sensors redeployment, and mechanisms using mobile relays. Using mobile sinks, energy is saved by using shorter multi-hop data delivery paths and the set of sensors located near a sink changes over time, thus the energy consumption is balanced in the whole network. Using mobile sensors, the initial deployment can be improved through sensor relocation so as to balance energy consumption and extend network lifetime. Mobile nodes can also be used as relays, which can inherit the responsibilities of the co-locating static sensors or they can carry data to the sink to reduce the cost of long distance communication. We provide overviews and comparisons among different mechanisms.",
"One of the research issues in wireless sensor networks (WSNs) is how to efficiently deploy sensors to cover an area. In this paper, we solve the k-coverage sensor deployment problem to achieve multi-level coverage of an area I. We consider two sub-problems: k-coverage placement and distributed dispatch problems. The placement problem asks how to determine the minimum number of sensors required and their locations in I to guarantee that I is k-covered and the network is connected; the dispatch problem asks how to schedule mobile sensors to move to the designated locations according to the result computed by the placement strategy such that the energy consumption due to movement is minimized. Our solutions to the placement problem consider both the binary and probabilistic sensing models, and allow an arbitrary relationship between the communication distance and sensing distance of sensors. For the dispatch problem, we propose a competition-based and a pattern-based scheme. The former allows mobile sensors to bid for their closest locations, while the latter allows sensors to derive the target locations on their own. Our proposed schemes are efficient in terms of the number of sensors required and are distributed in nature. Simulation results are presented to verify their effectiveness.",
"One of the main operations in wireless sensor networks is the surveillance of a set of events (targets) that occur in the field. In practice, a node monitors an event accurately when it is located close to it, while the opposite happens when the node is moving away from the target. This detection accuracy can be represented by a probabilistic distribution. Since the network nodes are usually randomly deployed, some of the events are monitored by a few nodes and others by many nodes. In applications where there is a need for full coverage and a minimum allowed detection accuracy, a single node may not be able to sufficiently cover an event by itself. In this case, two or more nodes are needed to collaborate and to cover a single target. Moreover, all the nodes must be connected with a base station that collects the monitoring data. In this paper we describe the problem of the minimum sampling quality, where an event must be sufficiently detected for the maximum possible amount of time. Since the probability of detecting a single target using randomly deployed static nodes is quite low, we present a localized algorithm based on mobile nodes. Our algorithm sacrifices a part of the energy of the nodes by moving them to a new location in order to satisfy the desired detection accuracy. It divides the monitoring process in rounds to extend the network lifetime, while it ensures connectivity with the base station. Furthermore, since the network lifetime is strongly related to the number of rounds, we propose two redeployment schemes that enhance the performance of our approach by balancing the number of sensors between densely covered areas and areas that are poorly covered. Finally, our evaluation results show over a 10-fold improvement in network lifetime compared to the case where the sensors are static. Our approaches also outperform a virtual forces algorithm when connectivity with the base station is required. The redeployment schemes present a good balance between network lifetime and convergence time.",
"We propose a sensor node relocation approach in wireless sensor networks to maintain connectivity between a Region Of Interest (ROI) where the sensor nodes are initially deployed and a Center Of Interest (COI) outside the ROI where a particular event happens. Our proposed approach, called Chain Based Relocation Approach (CBRA), aims to relocate a minimum number of redundant sensors from their initial positions within the ROI towards the COI to maintain the connectivity between the ROI and the COI. CBRA uses steps which determine the redundant nodes' set, the propagation of the COI coordinates within the ROI and then the selection and the relocation of the redundant nodes towards the COI. The selection of the redundant nodes is based on an average energy consumption model to balance the energy consumption among the sensor nodes when they are relocated depending on their initial and final positions. We evaluate the performance of CBRA using performance metrics such as energy consumption, the number of relocated nodes, relocation time and number of transmitted messages. Sensor nodes are relocated using a chain-based method between the ROI and the COI. In addition, if one relocated sensor node fails, the connectivity between the COI and the ROI is affected. To address this possible failure, we propose a fault tolerant recovery procedure to repair the route between the COI and the ROI. Finally, we compare the performance of CBRA with two other approaches."
]
}
|
1811.00487
|
2898690403
|
In mobile wireless sensor networks (MWSNs), each sensor has the ability not only to sense and transmit data but also to move to a specific location. Because the movement of sensors consumes much more power than sensing and communication, the problem of scheduling mobile sensors to cover all targets and maintain network connectivity such that the total movement distance of mobile sensors is minimized has received a great deal of attention. However, in reality, due to a limited budget or numerous targets, there may not be enough mobile sensors to cover all targets or form a connected network. Therefore, targets must be weighted by their importance: the more important a target, the higher its weight. A more general problem for target coverage and network connectivity, termed the Maximum Weighted Target Coverage and Sensor Connectivity with Limited Mobile Sensors (MWTCSCLMS) problem, is studied. In this paper, an approximation algorithm, termed the weighted-maximum-coverage-based algorithm (WMCBA), is proposed for a subproblem of the MWTCSCLMS problem. Based on the WMCBA, the Steiner-tree-based algorithm (STBA) is proposed for the MWTCSCLMS problem. Simulation results demonstrate that the STBA provides better performance than the other methods.
|
In this paper, a set of @math mobile sensors @math is pre-deployed in a sensing field. We assume that each mobile sensor in @math has the same sensing range @math to sense targets. In addition, the data sink and each mobile sensor have the same transmission range @math to communicate with the other mobile sensors. Given a set of @math targets @math with known locations in the field, mobile sensors can be scheduled to move in any direction and stop anywhere @cite_29 to cover targets or to connect with the data sink and the other mobile sensors. In reality, not all of the targets in the field may be covered, due to the limited number of mobile sensors. Targets in the sensing field must therefore be weighted by their importance; that is, the more important a target, the higher its weight. Hereafter, the weight of target @math is denoted by @math .
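As a toy illustration of the weighted-coverage objective described above, the standard greedy heuristic for weighted maximum coverage under a sensor budget can be sketched as follows. This is not the paper's WMCBA; the candidate positions, the target sets they cover, and the weights below are all hypothetical.

```python
# Greedy sketch for weighted maximum coverage with a limited sensor
# budget (illustrative only; NOT the WMCBA from the paper).
def greedy_weighted_coverage(candidate_covers, weights, budget):
    """candidate_covers: list of sets of target ids that each candidate
    sensor position would cover; weights: dict target id -> weight;
    budget: number of mobile sensors available."""
    covered, chosen = set(), []
    for _ in range(budget):
        # Pick the candidate that adds the most currently uncovered weight.
        gains = [sum(weights[t] for t in cov - covered)
                 for cov in candidate_covers]
        best = max(range(len(candidate_covers)), key=lambda i: gains[i])
        if gains[best] == 0:
            break  # no remaining candidate adds new weight
        chosen.append(best)
        covered |= candidate_covers[best]
    return chosen, sum(weights[t] for t in covered)

covers = [{0, 1}, {1, 2}, {3}]          # hypothetical candidate positions
w = {0: 5, 1: 1, 2: 2, 3: 4}            # hypothetical target weights
print(greedy_weighted_coverage(covers, w, 2))  # → ([0, 2], 10)
```

The greedy rule (always take the candidate with the largest marginal weight gain) is the usual (1 - 1/e)-approximation for this subproblem class.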
|
{
"cite_N": [
"@cite_29"
],
"mid": [
"2157633675"
],
"abstract": [
"Recent years have witnessed the deployments of wireless sensor networks in a class of mission-critical applications such as object detection and tracking. These applications often impose stringent Quality-of-Service requirements including high detection probability, low false alarm rate, and bounded detection delay. Although a dense all-static network may initially meet these Quality-of-Service requirements, it does not adapt to unpredictable dynamics in network conditions (e.g., coverage holes caused by death of nodes) or physical environments (e.g., changed spatial distribution of events). This paper exploits reactive mobility to improve the target detection performance of wireless sensor networks. In our approach, mobile sensors collaborate with static sensors and move reactively to achieve the required detection performance. Specifically, mobile sensors initially remain stationary and are directed to move toward a possible target only when a detection consensus is reached by a group of sensors. The accuracy of the final detection result is then improved, as the measurements of mobile sensors have higher Signal-to-Noise Ratios after the movement. We develop a sensor movement scheduling algorithm that achieves near-optimal system detection performance under a given detection delay bound. The effectiveness of our approach is validated by extensive simulations using the real data traces collected by 23 sensor nodes."
]
}
|
1811.00606
|
2898843412
|
Most neural Information Retrieval (Neu-IR) models derive query-to-document ranking scores based on term-level matching. Inspired by TileBars, a classical term distribution visualization method, in this paper, we propose a novel Neu-IR model that handles query-to-document matching at the subtopic and higher levels. Our system first splits the documents into topical segments, "visualizes" the matchings between the query and the segments, and then feeds an interaction matrix into a Neu-IR model, DeepTileBars, to obtain the final ranking scores. DeepTileBars models the relevance signals occurring at different granularities in a document's topic hierarchy. It better captures the discourse structure of a document and thus the matching patterns. Although its design and implementation are light-weight, DeepTileBars outperforms other state-of-the-art Neu-IR models on benchmark datasets including the Text REtrieval Conference (TREC) 2010-2012 Web Tracks and LETOR 4.0.
|
Only a few pieces of research share a similar intention to ours, i.e., visualizing the relevance signals in a document for deep learning. The works closest to ours are MatchPyramid, HiNT, and ViP @cite_1 .
|
{
"cite_N": [
"@cite_1"
],
"mid": [
"2766213870"
],
"abstract": [
"When applying learning to rank algorithms to Web search, a large number of features are usually designed to capture the relevance signals. Most of these features are computed based on the extracted textual elements, link analysis, and user logs. However, Web pages are not solely linked texts, but have structured layout organizing a large variety of elements in different styles. Such layout itself can convey useful visual information, indicating the relevance of a Web page. For example, the query-independent layout (i.e., raw page layout) can help identify the page quality, while the query-dependent layout (i.e., page rendered with matched query words) can further tell rich structural information (e.g., size, position and proximity) of the matching signals. However, such visual information of layout has been seldom utilized in Web search in the past. In this work, we propose to learn rich visual features automatically from the layout of Web pages (i.e., Web page snapshots) for relevance ranking. Both query-independent and query-dependent snapshots are considered as the new inputs. We then propose a novel visual perception model inspired by human's visual search behaviors on page viewing to extract the visual features. This model can be learned end-to-end together with traditional human-crafted features. We also show that such visual features can be efficiently acquired in the online setting with an extended inverted indexing scheme. Experiments on benchmark collections demonstrate that learning visual features from Web page snapshots can significantly improve the performance of relevance ranking in ad-hoc Web retrieval tasks."
]
}
|
1811.00656
|
2898877033
|
In this work, we describe a new deep-learning-based method that can effectively distinguish AI-generated fake videos (referred to as DeepFake videos hereafter) from real videos. Our method is based on the observation that current DeepFake algorithms can only generate images of limited resolutions, which then need to be warped to match the original faces in the source video. Such transforms leave distinctive artifacts in the resulting DeepFake videos, and we show that they can be effectively captured by convolutional neural networks (CNNs). Compared to previous methods, which use a large amount of real and DeepFake-generated images to train a CNN classifier, our method does not need DeepFake-generated images as negative training examples, since we target the artifacts of affine face warping as the distinctive feature to distinguish real and fake images. The advantages of our method are two-fold: (1) such artifacts can be simulated directly using simple image processing operations on an image to make it a negative example; since training a DeepFake model to generate negative examples is time-consuming and resource-demanding, our method saves plenty of time and resources in training data collection; (2) since such artifacts generally exist in DeepFake videos from different sources, our method is more robust than others. Our method is evaluated on two sets of DeepFake video datasets to demonstrate its effectiveness in practice.
|
AI-based Video Synthesis Algorithms The new generation of AI-based video synthesis algorithms are based on recent developments in deep learning models, especially generative adversarial networks (GANs) @cite_9 . A GAN model consists of two deep neural networks trained in tandem: the generator network aims to produce images that cannot be distinguished from the real training images, while the discriminator network aims to tell them apart. When training completes, the generator is used to synthesize images with a realistic appearance.
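The minimax objective behind this training scheme can be made concrete with a small numeric sketch. The discriminator outputs below are hypothetical probabilities, not from any trained model; the loss definitions follow the original GAN formulation of the cited paper.

```python
import numpy as np

# GAN minimax objective: D maximizes E[log D(x)] + E[log(1 - D(G(z)))],
# while G minimizes the second term. d_real / d_fake are D's output
# probabilities on real and generated samples (hypothetical values).
def d_loss(d_real, d_fake):
    # D ascends the objective, so its training loss is the negation.
    return -(np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake)))

def g_loss(d_fake):
    # Original (saturating) generator objective, minimized by G.
    return np.mean(np.log(1.0 - d_fake))

d_real = np.array([0.9, 0.8])   # D is fairly confident on real images
d_fake = np.array([0.1, 0.2])   # D mostly rejects generated images
print(d_loss(d_real, d_fake), g_loss(d_fake))
```

A well-trained discriminator drives `d_loss` down while a well-trained generator drives `d_fake` toward 0.5, where, as the cited paper shows, the unique equilibrium lies.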
|
{
"cite_N": [
"@cite_9"
],
"mid": [
"2099471712"
],
"abstract": [
"We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples."
]
}
|
1811.00656
|
2898877033
|
In this work, we describe a new deep-learning-based method that can effectively distinguish AI-generated fake videos (referred to as DeepFake videos hereafter) from real videos. Our method is based on the observation that current DeepFake algorithms can only generate images of limited resolutions, which then need to be warped to match the original faces in the source video. Such transforms leave distinctive artifacts in the resulting DeepFake videos, and we show that they can be effectively captured by convolutional neural networks (CNNs). Compared to previous methods, which use a large amount of real and DeepFake-generated images to train a CNN classifier, our method does not need DeepFake-generated images as negative training examples, since we target the artifacts of affine face warping as the distinctive feature to distinguish real and fake images. The advantages of our method are two-fold: (1) such artifacts can be simulated directly using simple image processing operations on an image to make it a negative example; since training a DeepFake model to generate negative examples is time-consuming and resource-demanding, our method saves plenty of time and resources in training data collection; (2) since such artifacts generally exist in DeepFake videos from different sources, our method is more robust than others. Our method is evaluated on two sets of DeepFake video datasets to demonstrate its effectiveness in practice.
|
The GAN model inspired many subsequent works on image synthesis, such as @cite_2 @cite_17 @cite_19 @cite_15 @cite_14 @cite_0 @cite_25 @cite_7 @cite_24 @cite_18 . Liu et al. @cite_25 proposed an unsupervised image-to-image translation framework based on coupled GANs, which aims to learn a joint representation of images in different domains. This algorithm is the basis for the DeepFake algorithm.
|
{
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_7",
"@cite_0",
"@cite_19",
"@cite_24",
"@cite_2",
"@cite_15",
"@cite_25",
"@cite_17"
],
"mid": [
"2963767194",
"2553897675",
"2962793481",
"2963709863",
"",
"2963917969",
"648143168",
"2963073614",
"2962947361",
"2173520492"
],
"abstract": [
"Recent studies have shown remarkable success in image-to-image translation for two domains. However, existing approaches have limited scalability and robustness in handling more than two domains, since different models should be built independently for every pair of image domains. To address this limitation, we propose StarGAN, a novel and scalable approach that can perform image-to-image translations for multiple domains using only a single model. Such a unified model architecture of StarGAN allows simultaneous training of multiple datasets with different domains within a single network. This leads to StarGAN's superior quality of translated images compared to existing models as well as the novel capability of flexibly translating an input image to any desired target domain. We empirically demonstrate the effectiveness of our approach on facial attribute transfer and facial expression synthesis tasks.",
"We study the problem of transferring a sample in one domain to an analog sample in another domain. Given two related domains, S and T, we would like to learn a generative function G that maps an input sample from S to the domain T, such that the output of a given function f, which accepts inputs in either domains, would remain unchanged. Other than the function f, the training data is unsupervised and consist of a set of samples from each domain. The Domain Transfer Network (DTN) we present employs a compound loss function that includes a multiclass GAN loss, an f-constancy component, and a regularizing component that encourages G to map samples from T to themselves. We apply our method to visual domains including digits and face images and demonstrate its ability to generate convincing novel images of previously unseen entities, while preserving their identity.",
"Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. We present an approach for learning to translate an image from a source domain X to a target domain Y in the absence of paired examples. Our goal is to learn a mapping G : X → Y such that the distribution of images from G(X) is indistinguishable from the distribution Y using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping F : Y → X and introduce a cycle consistency loss to push F(G(X)) ≈ X (and vice versa). Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. Quantitative comparisons against several prior methods demonstrate the superiority of our approach.",
"With recent progress in graphics, it has become more tractable to train models on synthetic images, potentially avoiding the need for expensive annotations. However, learning from synthetic images may not achieve the desired performance due to a gap between synthetic and real image distributions. To reduce this gap, we propose Simulated+Unsupervised (S+U) learning, where the task is to learn a model to improve the realism of a simulators output using unlabeled real data, while preserving the annotation information from the simulator. We develop a method for S+U learning that uses an adversarial network similar to Generative Adversarial Networks (GANs), but with synthetic images as inputs instead of random vectors. We make several key modifications to the standard GAN algorithm to preserve annotations, avoid artifacts, and stabilize training: (i) a self-regularization term, (ii) a local adversarial loss, and (iii) updating the discriminator using a history of refined images. We show that this enables generation of highly realistic images, which we demonstrate both qualitatively and with a user study. We quantitatively evaluate the generated images by training models for gaze estimation and hand pose estimation. We show a significant improvement over using synthetic images, and achieve state-of-the-art results on the MPIIGaze dataset without any labeled real data.",
"",
"We introduce a data-driven approach for unsupervised video retargeting that translates content from one domain to another while preserving the style native to a domain, i.e., if contents of John Oliver’s speech were to be transferred to Stephen Colbert, then the generated content speech should be in Stephen Colbert’s style. Our approach combines both spatial and temporal information along with adversarial losses for content translation and style preservation. In this work, we first study the advantages of using spatiotemporal constraints over spatial constraints for effective retargeting. We then demonstrate the proposed approach for the problems where information in both space and time matters such as face-to-face translation, flower-to-flower, wind and cloud synthesis, sunrise and sunset.",
"In this paper we introduce a generative parametric model capable of producing high quality samples of natural images. Our approach uses a cascade of convolutional networks within a Laplacian pyramid framework to generate images in a coarse-to-fine fashion. At each level of the pyramid, a separate generative convnet model is trained using the Generative Adversarial Nets (GAN) approach [11]. Samples drawn from our model are of significantly higher quality than alternate approaches. In a quantitative assessment by human evaluators, our CIFAR10 samples were mistaken for real images around 40% of the time, compared to 10% for samples drawn from a GAN baseline model. We also show samples from models trained on the higher resolution images of the LSUN scene dataset.",
"We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Moreover, since the release of the pix2pix software associated with this paper, hundreds of twitter users have posted their own artistic experiments using our system. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without handengineering our loss functions either.",
"Unsupervised image-to-image translation aims at learning a joint distribution of images in different domains by using images from the marginal distributions in individual domains. Since there exists an infinite set of joint distributions that can arrive at the given marginal distributions, one could infer nothing about the joint distribution from the marginal distributions without additional assumptions. To address the problem, we make a shared-latent space assumption and propose an unsupervised image-to-image translation framework based on Coupled GANs. We compare the proposed framework with competing approaches and present high quality image translation results on various challenging unsupervised image translation tasks, including street scene image translation, animal image translation, and face image translation. We also apply the proposed framework to domain adaptation and achieve state-of-the-art performance on benchmark datasets. Code and additional results are available at https://github.com/mingyuliutw/unit.",
"In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks - demonstrating their applicability as general image representations."
]
}
|
1811.00656
|
2898877033
|
In this work, we describe a new deep-learning-based method that can effectively distinguish AI-generated fake videos (referred to as DeepFake videos hereafter) from real videos. Our method is based on the observation that current DeepFake algorithms can only generate images of limited resolutions, which then need to be warped to match the original faces in the source video. Such transforms leave distinctive artifacts in the resulting DeepFake videos, and we show that they can be effectively captured by convolutional neural networks (CNNs). Compared to previous methods, which use a large amount of real and DeepFake-generated images to train a CNN classifier, our method does not need DeepFake-generated images as negative training examples, since we target the artifacts of affine face warping as the distinctive feature to distinguish real and fake images. The advantages of our method are two-fold: (1) such artifacts can be simulated directly using simple image processing operations on an image to make it a negative example; since training a DeepFake model to generate negative examples is time-consuming and resource-demanding, our method saves plenty of time and resources in training data collection; (2) since such artifacts generally exist in DeepFake videos from different sources, our method is more robust than others. Our method is evaluated on two sets of DeepFake video datasets to demonstrate its effectiveness in practice.
|
The creation of a DeepFake video starts with an input video of a specific individual (‘target’) and generates another video in which the target’s face is replaced with that of another individual (‘source’), based on a GAN model trained to translate between the faces of the target and the source, see Figure . More recently, Zhu et al. @cite_7 proposed a cycle-consistency loss to further push the performance of GANs, in a model named Cycle-GAN. Bansal et al. @cite_24 went a step further and proposed Recycle-GAN, which incorporates temporal information and spatial cues with conditional generative adversarial networks. StarGAN @cite_18 learns the mapping across multiple domains using only a single generator and discriminator.
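The cycle-consistency idea behind Cycle-GAN can be sketched numerically: for mappings G: X→Y and F: Y→X, the loss penalizes ||F(G(x)) - x||₁ plus the symmetric term ||G(F(y)) - y||₁. The G and F below are stand-in linear maps chosen purely for illustration, not learned networks.

```python
import numpy as np

G = lambda x: 2.0 * x   # hypothetical forward mapping X -> Y
F = lambda y: 0.5 * y   # hypothetical inverse mapping Y -> X

def cycle_loss(x, y):
    # L1 reconstruction error of the two round trips x -> G -> F -> x
    # and y -> F -> G -> y, as in the Cycle-GAN objective.
    return np.mean(np.abs(F(G(x)) - x)) + np.mean(np.abs(G(F(y)) - y))

x = np.array([1.0, -2.0])
y = np.array([4.0, 0.5])
print(cycle_loss(x, y))  # 0.0 here, because F inverts G exactly
```

In training, this term is added to the usual adversarial losses; it constrains the otherwise under-determined unpaired translation problem by forcing the two generators to be approximate inverses.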
|
{
"cite_N": [
"@cite_24",
"@cite_18",
"@cite_7"
],
"mid": [
"2963917969",
"2963767194",
"2962793481"
],
"abstract": [
"We introduce a data-driven approach for unsupervised video retargeting that translates content from one domain to another while preserving the style native to a domain, i.e., if contents of John Oliver’s speech were to be transferred to Stephen Colbert, then the generated content speech should be in Stephen Colbert’s style. Our approach combines both spatial and temporal information along with adversarial losses for content translation and style preservation. In this work, we first study the advantages of using spatiotemporal constraints over spatial constraints for effective retargeting. We then demonstrate the proposed approach for the problems where information in both space and time matters such as face-to-face translation, flower-to-flower, wind and cloud synthesis, sunrise and sunset.",
"Recent studies have shown remarkable success in image-to-image translation for two domains. However, existing approaches have limited scalability and robustness in handling more than two domains, since different models should be built independently for every pair of image domains. To address this limitation, we propose StarGAN, a novel and scalable approach that can perform image-to-image translations for multiple domains using only a single model. Such a unified model architecture of StarGAN allows simultaneous training of multiple datasets with different domains within a single network. This leads to StarGAN's superior quality of translated images compared to existing models as well as the novel capability of flexibly translating an input image to any desired target domain. We empirically demonstrate the effectiveness of our approach on facial attribute transfer and facial expression synthesis tasks.",
"Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. We present an approach for learning to translate an image from a source domain X to a target domain Y in the absence of paired examples. Our goal is to learn a mapping G : X → Y such that the distribution of images from G(X) is indistinguishable from the distribution Y using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping F : Y → X and introduce a cycle consistency loss to push F(G(X)) ≈ X (and vice versa). Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. Quantitative comparisons against several prior methods demonstrate the superiority of our approach."
]
}
|
1811.00656
|
2898877033
|
In this work, we describe a new deep learning based method that can effectively distinguish AI-generated fake videos (referred to as DeepFake videos hereafter) from real videos. Our method is based on the observation that current DeepFake algorithms can only generate images of limited resolutions, which need to be further warped to match the original faces in the source video. Such transforms leave distinctive artifacts in the resulting DeepFake videos, and we show that they can be effectively captured by convolutional neural networks (CNNs). Compared to previous methods, which use a large amount of real and DeepFake generated images to train a CNN classifier, our method does not need DeepFake generated images as negative training examples, since we target the artifacts in affine face warping as the distinctive feature to distinguish real and fake images. The advantages of our method are two-fold: (1) such artifacts can be simulated directly using simple image processing operations on an image to make it a negative example. Since training a DeepFake model to generate negative examples is time-consuming and resource-demanding, our method saves plenty of time and resources in training data collection; (2) since such artifacts generally exist in DeepFake videos from different sources, our method is more robust than others. Our method is evaluated on two sets of DeepFake video datasets for its effectiveness in practice.
|
Resampling Detection. The artifacts introduced by the DeepFake production pipeline are in essence due to affine transforms applied to the synthesized face. In the literature of digital media forensics, detecting transforms or the underlying resampling algorithm has been extensively studied, e.g., @cite_26 @cite_28 @cite_20 @cite_22 @cite_3 @cite_21 @cite_12 @cite_1 @cite_27 @cite_23 @cite_10 . However, the performance of these methods is affected by post-processing steps, such as image/video compression, which are not subject to simple modeling. Besides, these methods usually aim to estimate the exact resampling operation from whole images, but for our purpose, a simpler solution can be obtained by just comparing regions of potentially synthesized faces with the rest of the image -- the latter are expected to be free of such artifacts, while the existence of such artifacts in the former is a telltale cue for the video being a DeepFake.
|
{
"cite_N": [
"@cite_26",
"@cite_22",
"@cite_28",
"@cite_21",
"@cite_1",
"@cite_3",
"@cite_27",
"@cite_23",
"@cite_10",
"@cite_12",
"@cite_20"
],
"mid": [
"2155180118",
"1975528596",
"2163470764",
"2532188585",
"168198616",
"2159244236",
"2142610050",
"2085958611",
"2963777235",
"2028384798",
"2162156567"
],
"abstract": [
"The unique stature of photographs as a definitive recording of events is being diminished due, in part, to the ease with which digital images can be manipulated and altered. Although good forgeries may leave no visual clues of having been tampered with, they may, nevertheless, alter the underlying statistics of an image. For example, we describe how resampling (e.g., scaling or rotating) introduces specific statistical correlations, and describe how these correlations can be automatically detected in any portion of an image. This technique works in the absence of any digital watermark or signature. We show the efficacy of this approach on uncompressed TIFF images, and JPEG and GIF images with minimal compression. We expect this technique to be among the first of many tools that will be needed to expose digital forgeries.",
"This paper revisits the state-of-the-art resampling detector, which is based on periodic artifacts in the residue of a local linear predictor. Inspired by recent findings from the literature, we take a closer look at the complex detection procedure and model the detected artifacts in the spatial and frequency domain by means of the variance of the prediction residue. We give an exact formulation on how transformation parameters influence the appearance of periodic artifacts and analytically derive the expected position of characteristic resampling peaks. We present an equivalent accelerated and simplified detector, which is orders of magnitudes faster than the conventional scheme and experimentally shown to be comparably reliable.",
"Usually digital image forgeries are created by copy-pasting a portion of an image onto some other image. While doing so, it is often necessary to resize the pasted portion of the image to suit the sampling grid of the host image. The resampling operation changes certain characteristics of the pasted portion, which when detected serves as a clue of tampering. In this paper, we present deterministic techniques to detect resampling, and localize the portion of the image that has been tampered with. Two of the techniques are in pixel domain and two others in frequency domain. We study the efficacy of our techniques against JPEG compression and subsequent resampling of the entire tampered image.",
"Resampling detection has become a standard tool in digital image forensics. This paper investigates the important case of resampling detection in re-compressed JPEG images. We show how blocking artifacts of the previous compression step can help to increase the otherwise drastically reduced detection performance in JPEG compressed images. We give a formulation on how affine transformations of JPEG compressed images affect state-of-the-art resampling detectors and derive a new efficient detection variant, which better suits this relevant detection scenario. The principal appropriateness of using JPEG pre-compression artifacts for the detection of resampling in re-compressed images is backed with experimental evidence on a large image set and for a variety of different JPEG qualities.",
"To create convincing forged images, manipulated images or parts of them are usually exposed to some geometric operations which require a resampling step. Therefore, detecting traces of resampling became an important approach in the field of image forensics. In this paper, we revisit existing techniques for resampling detection and design some targeted attacks in order to assess their reliability. We show that the combination of multiple resampling and hybrid median filtering works well for hiding traces of resampling. Moreover, we propose an improved technique for detecting resampling using image forensic tools. Experimental evaluations show that the proposed technique is good for resampling detection and more robust against some targeted attacks.",
"Resampling detection has become a standard tool for forensic analyses of digital images. This paper presents new variants of image transformation operations which are undetectable by resampling detectors based on periodic variations in the residual signal of local linear predictors in the spatial domain. The effectiveness of the proposed method is supported with evidence from experiments on a large image database for various parameter settings. We benchmark detectability as well as the resulting image quality against conventional linear and bicubic interpolation and interpolation with a sinc kernel. These early findings on \"counter-forensic\" techniques put into question the reliability of known forensic tools against smart counterfeiters in general, and might serve as benchmarks and motivation for the development of much improved forensic techniques.",
"Millions of photos are uploaded to social networking sites every day. However, the authenticity of these images has been severely questioned. Recent advances in image editing software have helped forged images spread widely. While these fake images leave no visual clues, they can still be detected by inspecting the traces left by the resampling process. In this paper, we propose a novel rotation-tolerant resampling detection method, and design a blind image forgery detection algorithm based on this resampling detection method. A measurement called \"Rate-Distance\" is devised for measuring the distance between two resampled images with different resampling history. Images are classified based on their \"Rate-Distances\". Through experimental results, we demonstrate that the proposed method can achieve high detection accuracy.",
"This study presents a method for resampling detection. By combining texture analysis with resampling detection, the task of resampling detection is considered as a texture classification problem. In other words, the influence of resampling operations on a raw single-sampled image is viewed as an alteration of the image texture in a fine scale. First, local linear transform is used to obtain textural detail sub-bands. A 36-D feature vector is then extracted from the normalized characteristic function moments of textural detail sub-bands to train a support vector machine classifier. Finally, experimental results are reported on three databases, with each having almost 10,000 images. Comparison with the previous study reveals that the proposed method is effective for resampling detection. In addition, extensive experiments on cover and stego bitmap images illustrate that the proposed method is essential for constructing accurate targeted and blind steganalysis methods for heterogeneous images, raw single-sampled images, and images resampled at different scales.",
"Resampling is an important signature of manipulated images. In this paper, we propose two methods to detect and localize image manipulations based on a combination of resampling features and deep learning. In the first method, the Radon transform of resampling features are computed on overlapping image patches. Deep learning classifiers and a Gaussian conditional random field model are then used to create a heatmap. Tampered regions are located using a Random Walker segmentation method. In the second method, resampling features computed on overlapping image patches are passed through a Long short-term memory (LSTM) based network for classification and localization. We compare the performance of detection and localization of both these methods. Our experimental results show that both techniques are effective in detecting and localizing digital image forgeries.",
"Detection of resampling traces for digital image blind authentication has been addressed recently by A. C. Gallagher and later extended by B. Mahdian and S. Saic. On the other side, it is well known from the synchronization area in communications that prefiltering is an appropriate tool to improve the performance of those schemes exploiting the underlying cyclostationarity of communication signals. Thus, the detection of resampling manipulations improves significantly when the derivative of the interpolated signal is used for covariance computation. This work focuses on the role of prefiltering as a way of boosting resampling traces and, in particular, on the use of derivation.",
"In this paper, we analyze and analytically describe the specific statistical changes brought into the covariance structure of signal by the interpolation process. We show that interpolated signals and their derivatives contain specific detectable periodic properties. Based on this, we propose a blind, efficient, and automatic method capable of finding traces of resampling and interpolation. The proposed method can be very useful in many areas, especially in image security and authentication. For instance, when two or more images are spliced together, to create high quality and consistent image forgeries, almost always geometric transformations, such as scaling, rotation, or skewing are needed. These procedures are typically based on a resampling and interpolation step. By having a method capable of detecting the traces of resampling, we can significantly reduce the successful usage of such forgeries. Among other points, the presented method is also very useful in estimation of the geometric transformations factors."
]
}
|
1811.00656
|
2898877033
|
In this work, we describe a new deep learning based method that can effectively distinguish AI-generated fake videos (referred to as DeepFake videos hereafter) from real videos. Our method is based on the observation that current DeepFake algorithms can only generate images of limited resolutions, which need to be further warped to match the original faces in the source video. Such transforms leave distinctive artifacts in the resulting DeepFake videos, and we show that they can be effectively captured by convolutional neural networks (CNNs). Compared to previous methods, which use a large amount of real and DeepFake generated images to train a CNN classifier, our method does not need DeepFake generated images as negative training examples, since we target the artifacts in affine face warping as the distinctive feature to distinguish real and fake images. The advantages of our method are two-fold: (1) such artifacts can be simulated directly using simple image processing operations on an image to make it a negative example. Since training a DeepFake model to generate negative examples is time-consuming and resource-demanding, our method saves plenty of time and resources in training data collection; (2) since such artifacts generally exist in DeepFake videos from different sources, our method is more robust than others. Our method is evaluated on two sets of DeepFake video datasets for its effectiveness in practice.
|
GAN Generated Image/Video Detection. Detecting GAN generated images or videos has also made progress recently. Li et al. @cite_11 observed that DeepFake faces lack realistic eye blinking, as training images obtained over the Internet usually do not include photographs with the subject's eyes closed. The lack of eye blinking is detected with a CNN/RNN model to expose DeepFake videos. However, this detection can be circumvented by purposely incorporating images with closed eyes in training. Li et al. @cite_5 exploited the color disparity between GAN generated images and real images in non-RGB color spaces to classify them. However, it is not clear if this method is extensible to inspecting local regions as in the case of DeepFake. Afchar et al. @cite_4 trained a convolutional neural network to directly classify real faces and fake faces generated by DeepFake and Face2Face @cite_13 . While it shows promising performance, this holistic approach has its drawbacks. In particular, it requires both real and fake images as training data, and generating the fake images using AI-based synthesis algorithms is less efficient than the simple mechanism for training data generation in our method.
|
{
"cite_N": [
"@cite_13",
"@cite_5",
"@cite_4",
"@cite_11"
],
"mid": [
"2301937176",
"2888519208",
"2891145043",
"2806757392"
],
"abstract": [
"We present a novel approach for real-time facial reenactment of a monocular target video sequence (e.g., Youtube video). The source sequence is also a monocular video stream, captured live with a commodity webcam. Our goal is to animate the facial expressions of the target video by a source actor and re-render the manipulated output video in a photo-realistic fashion. To this end, we first address the under-constrained problem of facial identity recovery from monocular video by non-rigid model-based bundling. At run time, we track facial expressions of both source and target video using a dense photometric consistency measure. Reenactment is then achieved by fast and efficient deformation transfer between source and target. The mouth interior that best matches the re-targeted expression is retrieved from the target sequence and warped to produce an accurate fit. Finally, we convincingly re-render the synthesized target face on top of the corresponding video stream such that it seamlessly blends with the real-world illumination. We demonstrate our method in a live setup, where Youtube videos are reenacted in real time.",
"With the powerful deep network architectures, such as generative adversarial networks and variational autoencoders, large amounts of photorealistic images can be generated. The generated images, already fooling human eyes successfully, are not initially targeted for deceiving image authentication systems. However, research communities as well as public media show great concerns on whether these images would lead to serious security issues. In this paper, we address the problem of detecting deep network generated (DNG) images by analyzing the disparities in color components between real scene images and DNG images. Existing deep networks generate images in RGB color space and have no explicit constraints on color correlations; therefore, DNG images have more obvious differences from real images in other color spaces, such as HSV and YCbCr, especially in the chrominance components. Besides, the DNG images are different from the real ones when considering red, green, and blue components together. Based on these observations, we propose a feature set to capture color image statistics for detecting the DNG images. Moreover, three different detection scenarios in practice are considered and the corresponding detection strategies are designed. Extensive experiments have been conducted on face image datasets to evaluate the effectiveness of the proposed method. The experimental results show that the proposed method is able to distinguish the DNG images from real ones with high accuracies.",
"This paper presents a method to automatically and efficiently detect face tampering in videos, and particularly focuses on two recent techniques used to generate hyper-realistic forged videos: Deepfake and Face2Face. Traditional image forensics techniques are usually not well suited to videos due to the compression that strongly degrades the data. Thus, this paper follows a deep learning approach and presents two networks, both with a low number of layers to focus on the mesoscopic properties of images. We evaluate those fast networks on both an existing dataset and a dataset we have constituted from online videos. The tests demonstrate a very successful detection rate with more than 98% for Deepfake and 95% for Face2Face.",
"The new developments in deep generative networks have significantly improve the quality and efficiency in generating realistically-looking fake face videos. In this work, we describe a new method to expose fake face videos generated with neural networks. Our method is based on detection of eye blinking in the videos, which is a physiological signal that is not well presented in the synthesized fake videos. Our method is tested over benchmarks of eye-blinking detection datasets and also show promising performance on detecting videos generated with DeepFake."
]
}
|
1811.00473
|
2963740313
|
A feature learning task involves training models that are capable of inferring good representations (transformations of the original space) from input data alone. When working with limited or unlabelled data, and also when multiple visual domains are considered, methods that rely on large annotated datasets, such as Convolutional Neural Networks (CNNs), cannot be employed. In this paper we investigate different auto-encoder (AE) architectures, which require no labels, and explore training strategies to learn representations from images. The models are evaluated considering both the reconstruction error of the images and the feature spaces in terms of their discriminative power. We study the role of dense and convolutional layers on the results, as well as the depth and capacity of the networks, since those are shown to affect both the dimensionality reduction and the capability of generalising for different visual domains. Classification results with AE features were as discriminative as pre-trained CNN features. Our findings can be used as guidelines for the design of unsupervised representation learning methods within and across domains.
|
While Unsupervised Representation Learning is a well-studied topic in the broad field of machine learning and image understanding @cite_18 , not much work has been done towards the analysis of those feature spaces when working with cross-domain models. The problem we want to tackle by studying the feature space in a cross-domain scenario can be defined as a form of Transfer Learning task @cite_9 , where one wants knowledge from one known domain to transfer to, and consequently improve, learning tasks within a different domain.
|
{
"cite_N": [
"@cite_9",
"@cite_18"
],
"mid": [
"2165698076",
"2122922389"
],
"abstract": [
"A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research.",
"We present a new machine learning framework called \"self-taught learning\" for using unlabeled data in supervised classification tasks. We do not assume that the unlabeled data follows the same class labels or generative distribution as the labeled data. Thus, we would like to use a large number of unlabeled images (or audio samples, or text documents) randomly downloaded from the Internet to improve performance on a given image (or audio, or text) classification task. Such unlabeled data is significantly easier to obtain than in typical semi-supervised or transfer learning settings, making self-taught learning widely applicable to many practical learning problems. We describe an approach to self-taught learning that uses sparse coding to construct higher-level features using the unlabeled data. These features form a succinct input representation and significantly improve classification performance. When using an SVM for classification, we further show how a Fisher kernel can be learned for this representation."
]
}
|
1811.00473
|
2963740313
|
A feature learning task involves training models that are capable of inferring good representations (transformations of the original space) from input data alone. When working with limited or unlabelled data, and also when multiple visual domains are considered, methods that rely on large annotated datasets, such as Convolutional Neural Networks (CNNs), cannot be employed. In this paper we investigate different auto-encoder (AE) architectures, which require no labels, and explore training strategies to learn representations from images. The models are evaluated considering both the reconstruction error of the images and the feature spaces in terms of their discriminative power. We study the role of dense and convolutional layers on the results, as well as the depth and capacity of the networks, since those are shown to affect both the dimensionality reduction and the capability of generalising for different visual domains. Classification results with AE features were as discriminative as pre-trained CNN features. Our findings can be used as guidelines for the design of unsupervised representation learning methods within and across domains.
|
One of the few studies that leverage the use of auto-encoders in a cross-domain adaptation problem comes from @cite_10 ; the authors solved their problem of low sample numbers by training an auto-encoder on an unrelated dataset with many samples and using the learned model to extract local features on their target domain; finally, they concatenated those features with a new set learned through a usual CNN classifier setup on the desired domain. This study showed the potential for transfer learning with AE architectures but did not experiment with an AE-only design for their models, an application we address in this paper.
|
{
"cite_N": [
"@cite_10"
],
"mid": [
"2592385986"
],
"abstract": [
"We aim to apply sparse autoencoder-based unsupervised feature learning to emotional semantic analysis for textile images. To tackle the problem of limited training data, we present a cross-domain feature learning scheme for emotional textile image classification using convolutional autoencoders. We further propose a correlation-analysis-based feature selection method for the weights learned by sparse autoencoders to reduce the number of features extracted from large size images. First, we randomly collect image patches on an unlabeled image dataset in the source domain and learn local features with a sparse autoencoder. We then conduct feature selection according to the correlation between different weight vectors corresponding to the autoencoder’s hidden units. We finally adopt a convolutional neural network including a pooling layer to obtain global feature activations of textile images in the target domain and send these global feature vectors into logistic regression models for emotional image classification. The cross-domain unsupervised feature learning method achieves 65% to 78% average accuracy in the cross-validation experiments corresponding to eight emotional categories and performs better than conventional methods. Feature selection can reduce the computational cost of global feature extraction by about 50% while improving classification performance."
]
}
|
1811.00686
|
2899426887
|
In this note we consider setups in which variational objectives for Bayesian neural networks can be computed in closed form. In particular we focus on single-layer networks in which the activation function is piecewise polynomial (e.g. ReLU). In this case we show that for a Normal likelihood and structured Normal variational distributions one can compute a variational lower bound in closed form. In addition we compute the predictive mean and variance in closed form. Finally, we also show how to compute approximate lower bounds for other likelihoods (e.g. softmax classification). In experiments we show how the resulting variational objectives can help improve training and provide fast test time predictions.
|
The approach most closely related to ours is probably the deterministic approximations in ref. (indeed they compute some of the same ReLU integrals that we do). While we focus on single-layer neural networks, the distinct advantage of their approximation scheme is that it can be applied to networks of arbitrary depth. Thus some of our results are potentially complementary to theirs. Reference also constructs deterministic variational objectives for the specific case of the ReLU activation function. Reference @cite_0 considers quadratic piecewise linear bounds for the logistic-log-partition function in the context of Bernoulli-logistic latent Gaussian models. Finally, approaches for variance reduction in the stochastic variational inference setting include and .
|
{
"cite_N": [
"@cite_0"
],
"mid": [
"759726671"
],
"abstract": [
"Bernoulli-logistic latent Gaussian models (bLGMs) are a useful model class, but accurate parameter estimation is complicated by the fact that the marginal likelihood contains an intractable logistic-Gaussian integral. In this work, we propose the use of fixed piecewise linear and quadratic upper bounds to the logistic-log-partition (LLP) function as a way of circumventing this intractable integral. We describe a framework for approximately computing minimax optimal piecewise quadratic bounds, as well as a generalized expectation maximization algorithm based on using piecewise bounds to estimate bLGMs. We prove a theoretical result relating the maximum error in the LLP bound to the maximum error in the marginal likelihood estimate. Finally, we present empirical results showing that piecewise bounds can be significantly more accurate than previously proposed variational bounds."
]
}
|
1811.00685
|
2899267345
|
For each family of finite classical groups, and their associated simple quotients, we provide an explicit presentation on a specific generating set of size at most 8. Since there exist efficient algorithms to construct this generating set in any copy of the group, our presentations can be used to verify claimed isomorphisms between representations of the classical group. The presentations are available in Magma.
|
Babai and Szemerédi @cite_21 formulated the Short Presentation Conjecture: there exists a constant @math such that every finite simple group @math has a presentation of bit-length @math . The results of @cite_26 @cite_12 @cite_7 establish this conjecture with @math for all finite simple groups, with the possible exception of the Ree groups @math .
|
{
"cite_N": [
"@cite_7",
"@cite_21",
"@cite_12",
"@cite_26"
],
"mid": [
"",
"1980351200",
"2053954083",
"1964155237"
],
"abstract": [
"",
"We build a theory of black box groups, and apply it to matrix groups over finite fields. Elements of a black box group are encoded by strings of uniform length and group operations are performed by an oracle. Subgroups are given by a list of generators. We prove that for such subgroups, membership and divisor of the order are in NP^B. (B is the black box oracle.) Under a plausible mathematical hypothesis on short presentations of finite simple groups, non-membership and exact order will also be in NP^B and thus in NP^B ∩ coNP^B.",
"We give a presentation of length O(log^2 |G|) for the groups G ≅ PSU3(q). This result has applications in recent algorithms to compute the structure of permutation groups and matrix groups.",
"We conjecture that every finite group G has a short presentation (in terms of generators and relations) in the sense that the total length of the relations is (log|G|)^O(1). We show that it suffices to prove this conjecture for simple groups. Motivated by applications in computational complexity theory, we conjecture that for finite simple groups, such a short presentation is computable in polynomial time from the standard name of G, assuming in the case of Lie type simple groups over GF(p^m) that an irreducible polynomial f of degree m over GF(p) and a primitive root of GF(p^m) are given. We verify this (stronger) conjecture for all finite simple groups except for the three families of rank 1 twisted groups: we do not handle the unitary groups PSU(3, q) = 2A2(q), the Suzuki groups Sz(q) = 2B2(q), and the Ree groups R(q) = 2G2(q). In particular, all finite groups G without composition factors of these types have presentations of length O((log|G|)^3). For groups of Lie type (normal or twisted) of rank ≥ 2, we use a reduced version of the Curtis–Steinberg–Tits presentation."
]
}
|
1811.00685
|
2899267345
|
For each family of finite classical groups, and their associated simple quotients, we provide an explicit presentation on a specific generating set of size at most 8. Since there exist efficient algorithms to construct this generating set in any copy of the group, our presentations can be used to verify claimed isomorphisms between representations of the classical group. The presentations are available in Magma.
|
The conjecture was motivated by potential complexity applications to questions about matrix groups defined over finite fields (see @cite_21 for details); its proof also provided verification for the first constructive recognition algorithms for classical groups, developed by Kantor and Seress @cite_4 .
|
{
"cite_N": [
"@cite_21",
"@cite_4"
],
"mid": [
"1980351200",
"1969754062"
],
"abstract": [
"We build a theory of black box groups, and apply it to matrix groups over finite fields. Elements of a black box group are encoded by strings of uniform length and group operations are performed by an oracle. Subgroups are given by a list of generators. We prove that for such subgroups, membership and divisor of the order are in NP^B. (B is the black box oracle.) Under a plausible mathematical hypothesis on short presentations of finite simple groups, non-membership and exact order will also be in NP^B and thus in NP^B ∩ coNP^B.",
"Introduction Preliminaries Special linear groups: @math Orthogonal groups: @math Symplectic groups: @math Unitary groups: @math Proofs of Theorems 1.1 and 1.1, and of corollaries 1.2-1.4 Permutation group algorithms Concluding remarks References."
]
}
|
1811.00685
|
2899267345
|
For each family of finite classical groups, and their associated simple quotients, we provide an explicit presentation on a specific generating set of size at most 8. Since there exist efficient algorithms to construct this generating set in any copy of the group, our presentations can be used to verify claimed isomorphisms between representations of the classical group. The presentations are available in Magma.
|
Rarely do we know explicit words (short or otherwise) to express standard generators in terms of the generators used in @cite_10 , or vice versa, so it is not feasible to convert their presentations to ones on standard generators. One significant obstruction is that sometimes generators used there are identified only by specifying properties they must satisfy. Nor do constructive recognition algorithms employing their generators exist for classical groups, so these presentations cannot be used directly to verify the necessary isomorphisms. Theorem 10.1 of @cite_10 illustrates an additional concern: it employs a generator @math specified only up to certain properties. This element does not exist in @math ; in private communication in 2012, the authors of @cite_10 fixed the error. Thus, while the results of @cite_18 @cite_19 provide spectacular answers to long-standing challenging problems, we believe that our presentations are necessary for our significant algorithmic application.
|
{
"cite_N": [
"@cite_19",
"@cite_18",
"@cite_10"
],
"mid": [
"",
"2123560844",
"1997911818"
],
"abstract": [
"",
"There is a constant @math such that all nonabelian finite simple groups of rank @math over @math , with the possible exception of the Ree groups @math , have presentations with at most @math generators and relations and total length at most @math . As a corollary, we deduce a conjecture of Holt: there is a constant @math such that for every finite simple group @math , every prime @math and every irreducible @math -module",
"All nonabelian finite simple groups of rank @math over a field of size @math , with the possible exception of the Ree groups @math , have presentations with at most @math relations and bit-length @math . Moreover, @math and @math have presentations with 3 generators @math 7 relations and bit-length @math , while @math has a presentation with 7 generators, @math relations and bit-length @math"
]
}
|
1906.08541
|
2951898572
|
Graph convolution networks (GCN) have emerged as the leading method to classify nodes in networks, and have reached the highest accuracy in multiple node classification tasks. In the absence of available tagged samples, active learning methods have been developed to obtain the highest accuracy using the minimal number of queries to an oracle. The current best active learning methods use the sample class uncertainty as their selection criterion. However, in graph-based classification, the class of each node is often related to the class of its neighbors. As such, the uncertainty in the class of a node's neighbors may be a more appropriate selection criterion. We here propose two such criteria, one extending the classical uncertainty measure, and the other extending the page-rank algorithm. We show that the latter is optimal when the fraction of tagged nodes is low; when this fraction grows to one over the average degree, the regional uncertainty performs better than all existing methods. While we have tested these methods on graphs, they can be extended to any classification problem where a distance metric can be defined between the input samples. All the code used can be accessed at: this https URL All the datasets used can be accessed at: this https URL
|
Uncertainty sampling is a general framework for measuring informativeness @cite_25 , where a learner queries the instance that it is most uncertain how to label. Culotta and McCallum (2005) @cite_43 employ a simple uncertainty-based strategy for sequence models called least confidence (LC): @math . Here, @math is the most likely label. This approach queries the instance for which the current model has the least confidence in its most likely labeling. An earlier study (2001) @cite_4 proposes another uncertainty strategy, which queries the instance with the smallest margin between the posteriors for its two most likely labels: @math , where @math and @math are the first and second best labels, respectively. Another uncertainty-based measure of informativeness is entropy (Shannon, 1948) @cite_29 . For a discrete random variable Y , the entropy is given by @math , and represents the information needed to "encode" the distribution of outcomes for Y . As such, it is often thought of as a measure of uncertainty in machine learning. We can thus calculate one of the uncertainty measures for each vertex based on the current model and choose the most uncertain instances to reveal.
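The three measures above can be sketched in a few lines of NumPy (a minimal illustration, not the code of the cited papers; the toy probability vectors are invented):

```python
import numpy as np

def least_confidence(probs):
    """LC score: 1 minus the probability of the most likely label."""
    return 1.0 - np.max(probs)

def margin(probs):
    """Margin score: gap between the two most likely labels (smaller = more uncertain)."""
    top2 = np.sort(probs)[-2:]
    return top2[1] - top2[0]

def entropy(probs):
    """Shannon entropy of the predicted label distribution."""
    p = np.asarray(probs, dtype=float)
    p = p[p > 0]                       # avoid log(0)
    return -np.sum(p * np.log(p))

# Query the instance the model is least certain about:
P = np.array([[0.9, 0.05, 0.05],      # confident prediction
              [0.4, 0.35, 0.25]])     # uncertain prediction
query = int(np.argmax([entropy(p) for p in P]))   # -> 1
```

Note that LC looks only at the top label, margin at the top two, and entropy at the whole distribution, which is why they can rank instances differently in the multi-class case.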
|
{
"cite_N": [
"@cite_43",
"@cite_29",
"@cite_4",
"@cite_25"
],
"mid": [
"2117763124",
"2041404167",
"1580375566",
"1513874326"
],
"abstract": [
"A common obstacle preventing the rapid deployment of supervised machine learning algorithms is the lack of labeled training data. This is particularly expensive to obtain for structured prediction tasks, where each training instance may have multiple, interacting labels, all of which must be correctly annotated for the instance to be of use to the learner. Traditional active learning addresses this problem by optimizing the order in which the examples are labeled to increase learning efficiency. However, this approach does not consider the difficulty of labeling each example, which can vary widely in structured prediction tasks. For example, the labeling predicted by a partially trained system may be easier to correct for some instances than for others. We propose a new active learning paradigm which reduces not only how many instances the annotator must label, but also how difficult each instance is to annotate. The system also leverages information from partially correct predictions to efficiently solicit annotations from the user. We validate this active learning framework in an interactive information extraction system, reducing the total number of annotation actions by 22 .",
"Scientific knowledge grows at a phenomenal pace--but few books have had as lasting an impact or played as important a role in our modern world as The Mathematical Theory of Communication, published originally as a paper on communication theory more than fifty years ago. Republished in book form shortly thereafter, it has since gone through four hardcover and sixteen paperback printings. It is a revolutionary work, astounding in its foresight and contemporaneity. The University of Illinois Press is pleased and honored to issue this commemorative reprinting of a classic.",
"Information extraction from HTML documents requires a classifier capable of assigning semantic labels to the words or word sequences to be extracted. If completely labeled documents are available for training, well-known Markov model techniques can be used to learn such classifiers. In this paper, we consider the more challenging task of learning hidden Markov models (HMMs) when only partially (sparsely) labeled documents are available for training. We first give detailed account of the task and its appropriate loss function, and show how it can be minimized given an HMM. We describe an EM style algorithm for learning HMMs from partially labeled data. We then present an active learning algorithm that selects \"difficult\" unlabeled tokens and asks the user to label them. We study empirically by how much active learning reduces the required data labeling effort, or increases the quality of the learned model achievable with a given amount of user effort.",
"Abstract Uncertainty sampling methods iteratively request class labels for training instances whose classes are uncertain despite the previous labeled instances. These methods can greatly reduce the number of instances that an expert need label. One problem with this approach is that the classifier best suited for an application may be too expensive to train or use during the selection of instances. We test the use of one classifier (a highly efficient probabilistic one) to select examples for training another (the C4.5 rule induction program). Despite being chosen by this heterogeneous approach, the uncertainty samples yielded classifiers with lower error rates than random samples ten times larger."
]
}
|
1906.08541
|
2951898572
|
Graph convolution networks (GCN) have emerged as the leading method to classify nodes in networks, and have reached the highest accuracy in multiple node classification tasks. In the absence of available tagged samples, active learning methods have been developed to obtain the highest accuracy using the minimal number of queries to an oracle. The current best active learning methods use the sample class uncertainty as their selection criterion. However, in graph-based classification, the class of each node is often related to the class of its neighbors. As such, the uncertainty in the class of a node's neighbors may be a more appropriate selection criterion. We here propose two such criteria, one extending the classical uncertainty measure, and the other extending the page-rank algorithm. We show that the latter is optimal when the fraction of tagged nodes is low; when this fraction grows to one over the average degree, the regional uncertainty performs better than all existing methods. While we have tested these methods on graphs, they can be extended to any classification problem where a distance metric can be defined between the input samples. All the code used can be accessed at: this https URL All the datasets used can be accessed at: this https URL
|
In representative sampling, one assumes that informative instances are "representative" of the underlying distribution. @cite_39 considered a query strategy for nearest-neighbor methods that selects queries that are (i) least similar to the labeled instances, and (ii) most similar to the unlabeled instances. Nguyen and Smeulders (2004) @cite_14 proposed a density-based approach that first clusters instances and tries to avoid querying outliers by propagating label information to instances in the same cluster. Settles and Craven (2008) suggested a general density-weighting technique combining both uncertainty and representativeness. They query instances as follows: @math , where @math represents the informativeness of x according to some "base" query strategy A, and @math are the unlabeled samples. The second term weights the informativeness of x by its average similarity to all other instances in the input distribution (as approximated by U), subject to a parameter @math that controls the relative importance of the density term @cite_6 . Zhu and Wang also proposed sampling by a combination of uncertainty and density to solve the outlier problem that arises with some uncertainty techniques @cite_42 .
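A minimal sketch of the density-weighting idea, assuming cosine similarity as the similarity measure; the function name `information_density` and the toy feature vectors are ours, not from the cited work:

```python
import numpy as np

def information_density(base_scores, X, beta=1.0):
    """Weight each base informativeness score by the instance's average
    cosine similarity to the unlabeled pool, raised to the power beta."""
    X = np.asarray(X, dtype=float)
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    S = Xn @ Xn.T                     # pairwise cosine similarity
    density = S.mean(axis=1)          # average similarity to the pool
    return np.asarray(base_scores) * density ** beta

# The last instance is an outlier: despite the highest base uncertainty,
# density weighting demotes it in favor of a representative instance.
X = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
scores = information_density([0.5, 0.5, 0.6], X)
best = int(np.argmax(scores))         # -> 1, not the outlier 2
```

This is exactly the outlier-avoidance behavior the paragraph describes: the density term suppresses instances that are dissimilar to the rest of the input distribution.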
|
{
"cite_N": [
"@cite_14",
"@cite_42",
"@cite_6",
"@cite_39"
],
"mid": [
"1978633512",
"2096623936",
"2171671120",
"1590983731"
],
"abstract": [
"The paper is concerned with two-class active learning. While the common approach for collecting data in active learning is to select samples close to the classification boundary, better performance can be achieved by taking into account the prior data distribution. The main contribution of the paper is a formal framework that incorporates clustering into active learning. The algorithm first constructs a classifier on the set of the cluster representatives, and then propagates the classification decision to the other samples via a local noise model. The proposed model allows to select the most representative samples as well as to avoid repeatedly labeling samples in the same cluster. During the active learning process, the clustering is adjusted using the coarse-to-fine strategy in order to balance between the advantage of large clusters and the accuracy of the data representation. The results of experiments in image databases show a better performance of our algorithm compared to the current methods.",
"To solve the knowledge bottleneck problem, active learning has been widely used for its ability to automatically select the most informative unlabeled examples for human annotation. One of the key enabling techniques of active learning is uncertainty sampling, which uses one classifier to identify unlabeled examples with the least confidence. Uncertainty sampling often presents problems when outliers are selected. To solve the outlier problem, this paper presents two techniques, sampling by uncertainty and density (SUD) and density-based re-ranking. Both techniques prefer not only the most informative example in terms of uncertainty criterion, but also the most representative example in terms of density criterion. Experimental results of active learning for word sense disambiguation and text classification tasks using six real-world evaluation data sets demonstrate the effectiveness of the proposed methods.",
"Active learning is well-suited to many problems in natural language processing, where unlabeled data may be abundant but annotation is slow and expensive. This paper aims to shed light on the best active learning approaches for sequence labeling tasks such as information extraction and document segmentation. We survey previously used query selection strategies for sequence models, and propose several novel algorithms to address their shortcomings. We also conduct a large-scale empirical comparison using multiple corpora, which demonstrates that our proposed methods advance the state of the art.",
"This paper proposes an efficient example sampling method for example-based word sense disambiguation systems. To construct a database of practical size, a considerable overhead for manual sense disambiguation (overhead for supervision) is required. In addition, the time complexity of searching a large-sized database poses a considerable problem (overhead for search). To counter these problems, our method selectively samples a smaller-sized effective subset from a given example set for use in word sense disambiguation. Our method is characterized by the reliance on the notion of training utility: the degree to which each example is informative for future example sampling when used for the training of the system. The system progressively collects examples by selecting those with greatest utility. The paper reports the effectiveness of our method through experiments on about one thousand sentences. Compared to experiments with other example sampling methods, our method reduced both the overhead for supervision and the overhead for search, without the degeneration of the performance of the system."
]
}
|
1906.08541
|
2951898572
|
Graph convolution networks (GCN) have emerged as the leading method to classify nodes in networks, and have reached the highest accuracy in multiple node classification tasks. In the absence of available tagged samples, active learning methods have been developed to obtain the highest accuracy using the minimal number of queries to an oracle. The current best active learning methods use the sample class uncertainty as their selection criterion. However, in graph-based classification, the class of each node is often related to the class of its neighbors. As such, the uncertainty in the class of a node's neighbors may be a more appropriate selection criterion. We here propose two such criteria, one extending the classical uncertainty measure, and the other extending the page-rank algorithm. We show that the latter is optimal when the fraction of tagged nodes is low; when this fraction grows to one over the average degree, the regional uncertainty performs better than all existing methods. While we have tested these methods on graphs, they can be extended to any classification problem where a distance metric can be defined between the input samples. All the code used can be accessed at: this https URL All the datasets used can be accessed at: this https URL
|
In modularity-based AL approaches, nodes are divided into communities. Macskassy proposed revealing the most central node in each community, then dividing each community into sub-communities and sampling the most central node in each sub-community, and so on. Since community-based methods do not seem to work by themselves, Macskassy suggested a hybrid method combining communities, centrality, and uncertainty within an Empirical Risk Minimization (ERM) framework @cite_33 . Ping and Zhu proposed exploiting community structure to perform batch-mode active learning; they used communities to account for the overlap in information content among the "best" instances @cite_22 .
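The per-community selection step can be illustrated with a toy sketch; here we substitute degree centrality for the betweenness centrality used by Macskassy, and the adjacency list and two-community partition are invented:

```python
# One query per community: reveal its most central (here: highest-degree) node.

def query_per_community(adj, communities):
    """Return one query per community: its highest-degree node."""
    return [max(comm, key=lambda v: len(adj[v])) for comm in communities]

# Two triangles joined by the bridge edge 2-3:
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
       3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
communities = [[0, 1, 2], [3, 4, 5]]
queries = query_per_community(adj, communities)   # -> [2, 3]
```

The recursive variant described above would then re-partition each community and repeat the same selection inside the sub-communities.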
|
{
"cite_N": [
"@cite_22",
"@cite_33"
],
"mid": [
"2771759651",
"2114900710"
],
"abstract": [
"Active learning for networked data, which focuses on accurately predicting the labels of other nodes by knowing the labels of a small subset of nodes, is attracting more and more researchers because it is very useful, especially in cases where labeled data are expensive to obtain. However, most existing research either applies only to networks with assortative community structure, or focuses on node attribute data with links, or is designed for working in single mode, which in general incurs a higher learning and query cost than batch active learning. In view of this, in this paper, we propose a batch mode active learning method which uses information-theoretic techniques and random walks to select which nodes to label. The proposed method requires only network topology as its input, does not need to know the number of blocks in advance, and makes no initial assumptions about how the blocks connect. We test our method on two different types of networks, with assortative and disassortative structure, and then compare our method with a single mode active learning method that is similar to our method except for working in single mode, and with several simple batch mode active learning methods using information-theoretic techniques and simple heuristics, such as employing degree or betweenness centrality. The experimental results show that the proposed method in this paper significantly outperforms them.",
"Active and semi-supervised learning are important techniques when labeled data are scarce. Recently a method was suggested for combining active learning with a semi-supervised learning algorithm that uses Gaussian fields and harmonic functions. This classifier is relational in nature: it relies on having the data presented as a partially labeled graph (also known as a within-network learning problem). This work showed yet again that empirical risk minimization (ERM) was the best method to find the next instance to label and provided an efficient way to compute ERM with the semi-supervised classifier. The computational problem with ERM is that it relies on computing the risk for all possible instances. If we could limit the candidates that should be investigated, then we can speed up active learning considerably. In the case where the data is graphical in nature, we can leverage the graph structure to rapidly identify instances that are likely to be good candidates for labeling. This paper describes a novel hybrid approach of using community finding and social network analytic centrality measures to identify good candidates for labeling and then using ERM to find the best instance in this candidate set. We show on real-world data that we can limit the ERM computations to a fraction of instances with comparable performance."
]
}
|
1906.08541
|
2951898572
|
Graph convolution networks (GCN) have emerged as the leading method to classify nodes in networks, and have reached the highest accuracy in multiple node classification tasks. In the absence of available tagged samples, active learning methods have been developed to obtain the highest accuracy using the minimal number of queries to an oracle. The current best active learning methods use the sample class uncertainty as their selection criterion. However, in graph-based classification, the class of each node is often related to the class of its neighbors. As such, the uncertainty in the class of a node's neighbors may be a more appropriate selection criterion. We here propose two such criteria, one extending the classical uncertainty measure, and the other extending the page-rank algorithm. We show that the latter is optimal when the fraction of tagged nodes is low; when this fraction grows to one over the average degree, the regional uncertainty performs better than all existing methods. While we have tested these methods on graphs, they can be extended to any classification problem where a distance metric can be defined between the input samples. All the code used can be accessed at: this https URL All the datasets used can be accessed at: this https URL
|
Centrality-based approaches focus on nodes that are more central (e.g. of higher degree). The assumption is that the central nodes will have a major impact on the unknown labels, as in Macskassy's ERM algorithm, where betweenness centrality was shown to be a good centrality measure @cite_33 . Cai and Chang proposed calculating a node representativeness score based on graph centrality. They tested several centrality measures: degree centrality, betweenness centrality, harmonic centrality, closeness centrality, and page-rank centrality. They concluded that page-rank centrality is superior, and suggest using it when the prediction model is not informative enough @cite_9 .
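Page-rank-based selection can be sketched with a plain power iteration (not the cited authors' code; the star graph below is a toy example):

```python
import numpy as np

def pagerank(adj, d=0.85, iters=100):
    """Power-iteration PageRank on an undirected adjacency dict (a sketch)."""
    nodes = sorted(adj)
    idx = {v: i for i, v in enumerate(nodes)}
    n = len(nodes)
    M = np.zeros((n, n))
    for v, nbrs in adj.items():
        for u in nbrs:
            M[idx[u], idx[v]] = 1.0 / len(nbrs)   # column-stochastic transitions
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - d) / n + d * M @ r               # damped power iteration
    return dict(zip(nodes, r))

def pick_queries(adj, labeled, k=1):
    """Query the k most page-rank-central unlabeled nodes."""
    pr = pagerank(adj)
    pool = [v for v in adj if v not in labeled]
    return sorted(pool, key=lambda v: -pr[v])[:k]

# Star graph: the hub is the natural first query.
adj = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
first = pick_queries(adj, labeled=set())
```

Once some nodes are labeled they drop out of the pool, so repeated calls walk down the centrality ranking.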
|
{
"cite_N": [
"@cite_9",
"@cite_33"
],
"mid": [
"2614195334",
"2114900710"
],
"abstract": [
"Graph embedding provides an efficient solution for graph analysis by converting the graph into a low-dimensional space which preserves the structure information. In contrast to the graph structure data, the i.i.d. node embedding can be processed efficiently in terms of both time and space. Current semi-supervised graph embedding algorithms assume the labelled nodes are given, which may not always be true in the real world. Since manually labelling all training data is impractical, how to select the subset of training data to label so as to maximize performance on the graph analysis task is of great importance. This motivates our proposed active graph embedding (AGE) framework, in which we design a general active learning query strategy for any semi-supervised graph embedding algorithm. AGE selects the most informative nodes as the training labelled nodes based on the graphical information (i.e., node centrality) as well as the learnt node embedding (i.e., node classification uncertainty and node embedding representativeness). Different query criteria are combined with time-sensitive parameters which shift the focus from graph-based query criteria to embedding-based criteria as the learning progresses. Experiments have been conducted on three public data sets and the results verified the effectiveness of each component of our query strategy and the power of combining them using time-sensitive parameters. Our code is available online at: this https URL.",
"Active and semi-supervised learning are important techniques when labeled data are scarce. Recently a method was suggested for combining active learning with a semi-supervised learning algorithm that uses Gaussian fields and harmonic functions. This classifier is relational in nature: it relies on having the data presented as a partially labeled graph (also known as a within-network learning problem). This work showed yet again that empirical risk minimization (ERM) was the best method to find the next instance to label and provided an efficient way to compute ERM with the semi-supervised classifier. The computational problem with ERM is that it relies on computing the risk for all possible instances. If we could limit the candidates that should be investigated, then we can speed up active learning considerably. In the case where the data is graphical in nature, we can leverage the graph structure to rapidly identify instances that are likely to be good candidates for labeling. This paper describes a novel hybrid approach of using community finding and social network analytic centrality measures to identify good candidates for labeling and then using ERM to find the best instance in this candidate set. We show on real-world data that we can limit the ERM computations to a fraction of instances with comparable performance."
]
}
|
1906.08541
|
2951898572
|
Graph convolution networks (GCN) have emerged as the leading method to classify nodes in networks, and have reached the highest accuracy in multiple node classification tasks. In the absence of available tagged samples, active learning methods have been developed to obtain the highest accuracy using the minimal number of queries to an oracle. The current best active learning methods use the sample class uncertainty as their selection criterion. However, in graph-based classification, the class of each node is often related to the class of its neighbors. As such, the uncertainty in the class of a node's neighbors may be a more appropriate selection criterion. We here propose two such criteria, one extending the classical uncertainty measure, and the other extending the page-rank algorithm. We show that the latter is optimal when the fraction of tagged nodes is low; when this fraction grows to one over the average degree, the regional uncertainty performs better than all existing methods. While we have tested these methods on graphs, they can be extended to any classification problem where a distance metric can be defined between the input samples. All the code used can be accessed at: this https URL All the datasets used can be accessed at: this https URL
|
In label propagation approaches, the implicit assumption is of label smoothness over the graph or over the projection of the graph into some manifold in @math . Ming Ji proposed selecting the data points to label such that the total variance of the Gaussian field over unlabeled examples, as well as the expected prediction error of the harmonic Gaussian field classifier, is minimized. An efficient computation scheme was then proposed to solve the corresponding optimization problem with no additional parameters @cite_48 . Yifei Ma extended sub-modularity guarantees from V-optimality to @math -optimality using properties specific to Gaussian Markov random fields (GMRFs) @cite_46 . Dimitris Berberidis proposed sampling the nodes with the highest expected change of the unknown labels. Thus, in contrast with the expected error reduction and entropy minimization approaches, which actively sample with the goal of increasing the "confidence" of the model, this approach focuses on maximally perturbing the model with each node sampled. The intuition is that by sampling the nodes with the largest impact, one may take faster steps towards an increasingly accurate model @cite_0 .
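The harmonic Gaussian-field machinery these methods build on can be sketched directly: with the graph Laplacian partitioned into labeled (l) and unlabeled (u) blocks, the harmonic predictor is f_u = -L_uu^{-1} L_ul y_l and the predictive covariance is L_uu^{-1} (a minimal NumPy illustration on a toy path graph):

```python
import numpy as np

def harmonic_prediction(W, labeled, y_l):
    """Harmonic Gaussian-field predictor and per-node predictive variance
    (a sketch): f_u = -L_uu^{-1} L_ul y_l, Cov = L_uu^{-1}."""
    L = np.diag(W.sum(axis=1)) - W                     # graph Laplacian
    u = [i for i in range(len(W)) if i not in labeled]
    cov = np.linalg.inv(L[np.ix_(u, u)])
    f_u = -cov @ L[np.ix_(u, labeled)] @ np.asarray(y_l, float)
    return u, f_u, np.diag(cov)   # variance-based criteria query the max-variance node

# Path graph 0-1-2-3 with the endpoints labeled 0.0 and 1.0:
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
u, f_u, var = harmonic_prediction(W, labeled=[0, 3], y_l=[0.0, 1.0])
# f_u interpolates the labels linearly along the path: [1/3, 2/3]
```

Variance-minimization criteria such as V-optimality score candidate queries by how much labeling them would shrink this predictive covariance.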
|
{
"cite_N": [
"@cite_0",
"@cite_48",
"@cite_46"
],
"mid": [
"2964204618",
"2135069250",
"2111843353"
],
"abstract": [
"This paper deals with active sampling of graph nodes representing training data for binary classification. The graph may be given or constructed using similarity measures among nodal features. Leveraging the graph for classification builds on the premise that labels across neighboring nodes are correlated according to a categorical Markov random field (MRF). This model is further relaxed to a Gaussian (G)MRF with labels taking continuous values—an approximation that not only mitigates the combinatorial complexity of the categorical model, but also offers optimal unbiased soft predictors of the unlabeled nodes. The proposed sampling strategy is based on querying the node whose label disclosure is expected to inflict the largest change on the GMRF, and in this sense it is the most informative on average. Connections are established to other sampling methods including uncertainty sampling, variance minimization, and sampling based on the @math optimality criterion. A simple yet effective heuristic is also introduced for increasing the exploration capabilities of the sampler, and reducing bias of the resultant classifier, by adjusting the confidence on the model label predictions. The novel sampling strategies are based on quantities that are readily available without the need for model retraining, rendering them computationally efficient and scalable to large graphs. Numerical tests using synthetic and real data demonstrate that the proposed methods achieve accuracy that is comparable or superior to the state of the art even at reduced runtime.",
"We consider the problem of active learning over the vertices in a graph, without feature representation. Our study is based on the common graph smoothness assumption, which is formulated in a Gaussian random field model. We analyze the probability distribution over the unlabeled vertices conditioned on the label information, which is a multivariate normal with the mean being the harmonic solution over the field. Then we select the nodes to label such that the total variance of the distribution on the unlabeled data, as well as the expected prediction error, is minimized. In this way, the classifier we obtain is theoretically more robust. Compared with existing methods, our algorithm has the advantage of selecting data in a batch offline mode with solid theoretical support. We show improved performance over existing label selection criteria on several real world data sets.",
"A common classifier for unlabeled nodes on undirected graphs uses label propagation from the labeled nodes, equivalent to the harmonic predictor on Gaussian random fields (GRFs). For active learning on GRFs, the commonly used V-optimality criterion queries nodes that reduce the L2 (regression) loss. V-optimality satisfies a submodularity property showing that greedy reduction produces a (1 - 1/e) globally optimal solution. However, L2 loss may not characterise the true nature of 0/1 loss in classification problems and thus may not be the best choice for active learning. We consider a new criterion we call Σ-optimality, which queries the node that minimizes the sum of the elements in the predictive covariance. Σ-optimality directly optimizes the risk of the surveying problem, which is to determine the proportion of nodes belonging to one class. In this paper we extend submodularity guarantees from V-optimality to Σ-optimality using properties specific to GRFs. We further show that GRFs satisfy the suppressor-free condition in addition to the conditional independence inherited from Markov random fields. We test Σ-optimality on real-world graphs with both synthetic and real data and show that it outperforms V-optimality and other related methods on classification."
]
}
|
1906.08827
|
2950564960
|
In MRI, motion correction for the fetal body poses a particular challenge due to the presence of local non-rigid transformations of organs caused by bending and stretching. The existing slice-to-volume (SVR) reconstruction methods provide an efficient solution for the fetal brain, which undergoes only rigid transformation, or for the 4D fetal heart with rigid states correlated to cardiac phases. However, for fetal body reconstruction, rigid registration cannot resolve the issue of misregistrations due to deformable motion. This results in propagation of registration error to the reconstructed volume and subsequent degradation of features. We propose a novel approach for non-rigid motion correction in 3D volumes based on an extension of the classical SVR method with a hierarchical deformable registration scheme and structure-based outlier rejection. The deformable SVR (DSVR) method allows high-resolution reconstruction of the fetal trunk, and the robust scheme for structure-based rejection of misregistered slices minimises the impact of registration error. The method's performance is evaluated by comparison to the SVR and patch-to-volume registration methods for reconstruction of the fetal trunk on a series of fetal MRI datasets from the 28-30 weeks gestational age (GA) range with varying degrees of motion corruption. An additional phantom study with simulated non-rigid motion is used for the assessment of consistency of DSVR-reconstructed volumes.
|
The original concept of applying SVR for reconstruction of the fetal brain from motion-corrupted MRI stacks was proposed in @cite_21 . It includes slice-to-volume registration interleaved with scattered data interpolation based on a weighted sum of Gaussian kernels representing the point spread function (PSF). During the following decade, the SVR reconstruction framework was gradually formalised and optimised with B-spline interpolation @cite_1 , SR reconstruction @cite_9 @cite_15 , edge-preserving regularisation @cite_26 , outlier rejection @cite_9 @cite_19 , intensity matching @cite_22 , total variation regularisation @cite_3 , a sinc PSF model and GPU-parallelisation @cite_20 .
|
{
"cite_N": [
"@cite_26",
"@cite_22",
"@cite_9",
"@cite_21",
"@cite_1",
"@cite_3",
"@cite_19",
"@cite_15",
"@cite_20"
],
"mid": [
"1582236847",
"2024251722",
"2135264048",
"2085681975",
"2166966312",
"1891734390",
"",
"",
"1994514622"
],
"abstract": [
"Super-resolution techniques provide a route to studying fine scale anatomical detail using multiple lower resolution acquisitions. In particular, techniques that do not depend on regular sampling can be used in medical imaging situations where imaging time and resolution are limited by subject motion. We investigate in this work the use of a super-resolution technique for anisotropic fetal brain MR data reconstruction without modifying the data acquisition protocol. The approach, which consists of iterative motion correction and high resolution image estimation, is compared with a previously used scattered data interpolation-based reconstruction method. To optimize acquisition time, an evaluation of the influence of the number of input images and image noise is also performed. Evaluation on simulated MR images and real data show significant improvements in performance provided by the super-resolution approach.",
"We propose a method for the reconstruction of volumetric fetal MRI from 2D slices, comprising super-resolution reconstruction of the volume interleaved with slice-to-volume registration to correct for the motion. The method incorporates novel intensity matching of acquired 2D slices and robust statistics which completely excludes identified misregistered or corrupted voxels and slices. The reconstruction method is applied to motion-corrupted data simulated from MRI of a preterm neonate, as well as 10 clinically acquired thick-slice fetal MRI scans and three scan-sequence optimized thin-slice fetal datasets. The proposed method produced high quality reconstruction results from all the datasets to which it was applied. Quantitative analysis performed on simulated and clinical data shows that both intensity matching and robust statistics result in statistically significant improvement of super-resolution reconstruction. The proposed novel EM-based robust statistics also improves the reconstruction when compared to previously proposed Huber robust statistics. The best results are obtained when thin-slice data and the correct approximation of the point spread function is used. This paper addresses the need for a comprehensive reconstruction algorithm of 3D fetal MRI, so far lacking in the scientific literature.",
"Fast magnetic resonance imaging slice acquisition techniques such as single shot fast spin echo are routinely used in the presence of uncontrollable motion. These techniques are widely used for fetal magnetic resonance imaging (MRI) and MRI of moving subjects and organs. Although high-quality slices are frequently acquired by these techniques, inter-slice motion leads to severe motion artifacts that are apparent in out-of-plane views. Slice sequential acquisitions do not enable 3-D volume representation. In this study, we have developed a novel technique based on a slice acquisition model, which enables the reconstruction of a volumetric image from multiple-scan slice acquisitions. The super-resolution volume reconstruction is formulated as an inverse problem of finding the underlying structure generating the acquired slices. We have developed a robust M-estimation solution which minimizes a robust error norm function between the model-generated slices and the acquired slices. The accuracy and robustness of this novel technique has been quantitatively assessed through simulations with digital brain phantom images as well as high-resolution newborn images. We also report here successful application of our new technique for the reconstruction of volumetric fetal brain MRI from clinically acquired data.",
"Rationale and Objectives This paper describes a novel approach to forming high-resolution MR images of the human fetal brain. It addresses the key problem of fetal motion by proposing a registration-refined compounding of multiple sets of orthogonal fast two-dimensional MRI slices, which are currently acquired for clinical studies, into a single high-resolution MRI volume. Materials and Methods A robust multiresolution slice alignment is applied iteratively to the data to correct motion of the fetus that occurs between two-dimensional acquisitions. This is combined with an intensity correction step and a super-resolution reconstruction step, to form a single high isotropic resolution volume of the fetal brain. Results Experimental validation on synthetic image data with known motion types and underlying anatomy, together with retrospective application to sets of clinical acquisitions, are included. Conclusion Results indicate that this method promises a unique route to acquiring high-resolution MRI of the fetal brain in vivo allowing comparable quality to that of neonatal MRI. Such data provide a highly valuable window into the process of normal and abnormal brain development, which is directly applicable in a clinical setting.",
"Motion degrades magnetic resonance (MR) images and prevents acquisition of self-consistent and high-quality volume images. A novel methodology, Snapshot magnetic resonance imaging (MRI) with volume reconstruction (SVR) has been developed for imaging moving subjects at high resolution and high signal-to-noise ratio (SNR). The method combines registered 2D slices from sequential dynamic single-shot scans. The SVR approach requires that the anatomy in question is not changing shape or size and is moving at a rate that allows snapshot images to be acquired. After imaging the target volume repeatedly to guarantee sufficient sampling everywhere, a robust slice-to-volume registration method has been implemented that achieves alignment of each slice within 0.3 mm in the examples tested. Multilevel scattered interpolation has been used to obtain high-fidelity reconstruction with root-mean-square (rms) error that is less than the noise level in the images. The SVR method has been performed successfully for brain studies on subjects that cannot stay still, and in some cases were moving substantially during scanning. For example, awake neonates, deliberately moved adults and, especially, on fetuses, for which no conventional high-resolution 3D method is currently available. Fine structure of the in-utero fetal brain is clearly revealed for the first time and substantial SNR improvement is realized by having many individually acquired slices contribute to each voxel in the reconstructed image.",
"Abstract Although fetal anatomy can be adequately viewed in new multi-slice MR images, many critical limitations remain for quantitative data analysis. To this end, several research groups have recently developed advanced image processing methods, often denoted by super-resolution (SR) techniques, to reconstruct from a set of clinical low-resolution (LR) images, a high-resolution (HR) motion-free volume. It is usually modeled as an inverse problem where the regularization term plays a central role in the reconstruction quality. Literature has been quite attracted by Total Variation energies because of their ability in edge preserving but only standard explicit steepest gradient techniques have been applied for optimization. In a preliminary work, it has been shown that novel fast convex optimization techniques could be successfully applied to design an efficient Total Variation optimization algorithm for the super-resolution problem. In this work, two major contributions are presented. Firstly, we will briefly review the Bayesian and Variational dual formulations of current state-of-the-art methods dedicated to fetal MRI reconstruction. Secondly, we present an extensive quantitative evaluation of our SR algorithm previously introduced on both simulated fetal and real clinical data (with both normal and pathological subjects). Specifically, we study the robustness of regularization terms in front of residual registration errors and we also present a novel strategy for automatically selecting the weight of the regularization as regards the data fidelity term. Our results show that our TV implementation is highly robust in front of motion artifacts and that it offers the best trade-off between speed and accuracy for fetal MRI recovery as in comparison with state-of-the art methods.",
"",
"",
"Capturing an enclosing volume of moving subjects and organs using fast individual image slice acquisition has shown promise in dealing with motion artefacts. Motion between slice acquisitions results in spatial inconsistencies that can be resolved by slice-to-volume reconstruction (SVR) methods to provide high quality 3D image data. Existing algorithms are, however, typically very slow, specialised to specific applications and rely on approximations, which impedes their potential clinical use. In this paper, we present a fast multi-GPU accelerated framework for slice-to-volume reconstruction. It is based on optimised 2D 3D registration, super-resolution with automatic outlier rejection and an additional (optional) intensity bias correction. We introduce a novel and fully automatic procedure for selecting the image stack with least motion to serve as an initial registration target. We evaluate the proposed method using artificial motion corrupted phantom data as well as clinical data, including tracked freehand ultrasound of the liver and fetal Magnetic Resonance Imaging. We achieve speed-up factors greater than 30 compared to a single CPU system and greater than 10 compared to currently available state-of-the-art multi-core CPU methods. We ensure high reconstruction accuracy by exact computation of the point-spread function for every input data point, which has not previously been possible due to computational limitations. Our framework and its implementation is scalable for available computational infrastructures and tests show a speed-up factor of 1.70 for each additional GPU. This paves the way for the online application of image based reconstruction methods during clinical examinations. The source code for the proposed approach is publicly available."
]
}
|
1906.08827
|
2950564960
|
In MRI, motion correction for the fetal body poses a particular challenge due to the presence of local non-rigid transformations of organs caused by bending and stretching. The existing slice-to-volume (SVR) reconstruction methods provide an efficient solution for the fetal brain that undergoes only rigid transformation or the 4D fetal heart with rigid states correlated to cardiac phases. However, for fetal body reconstruction, rigid registration cannot resolve the issue of misregistrations due to deformable motion. This results in propagation of registration error to the reconstructed volume and subsequent degradation of features. We propose a novel approach for non-rigid motion correction in 3D volumes based on an extension of the classical SVR method with a hierarchical deformable registration scheme and structure-based outlier rejection. The Deformable SVR (DSVR) method allows high resolution reconstruction of the fetal trunk and the robust scheme for structure-based rejection of misregistered slices minimises the impact of registration error. The method performance is evaluated by comparison to the SVR and patch-to-volume registration methods for reconstruction of the fetal trunk on a series of fetal MRI datasets from the 28-30 weeks gestational age (GA) range with varying degree of motion corruption. An additional phantom study with simulated non-rigid motion is used for the assessment of consistency of DSVR reconstructed volumes.
|
Rigid SVR-based reconstruction of deformable organs was addressed by the patch-to-volume registration (PVR) approach based on registration of patches for large FoV motion compensation @cite_27 and an optimised version of @cite_22 for placenta reconstruction @cite_16 . Given the known cardiac phases of each of the slices, SVR can also be employed for 4D fetal cardiac reconstruction from dynamic MRI @cite_2 .
|
{
"cite_N": [
"@cite_27",
"@cite_16",
"@cite_22",
"@cite_2"
],
"mid": [
"2549627956",
"2929362235",
"2024251722",
"2604178943"
],
"abstract": [
"In this paper, we present a novel method for the correction of motion artifacts that are present in fetal magnetic resonance imaging (MRI) scans of the whole uterus. Contrary to current slice-to-volume registration (SVR) methods, requiring an inflexible anatomical enclosure of a single investigated organ, the proposed patch-to-volume reconstruction (PVR) approach is able to reconstruct a large field of view of non-rigidly deforming structures. It relaxes rigid motion assumptions by introducing a specific amount of redundant information that is exploited with parallelized patchwise optimization, super-resolution, and automatic outlier rejection. We further describe and provide an efficient parallel implementation of PVR allowing its execution within reasonable time on commercially available graphics processing units, enabling its use in the clinical practice. We evaluate PVR’s computational overhead compared with standard methods and observe improved reconstruction accuracy in the presence of affine motion artifacts compared with conventional SVR in synthetic experiments. Furthermore, we have evaluated our method qualitatively and quantitatively on real fetal MRI data subject to maternal breathing and sudden fetal movements. We evaluate peak-signal-to-noise ratio, structural similarity index, and cross correlation with respect to the originally acquired data and provide a method for visual inspection of reconstruction uncertainty. We further evaluate the distance error for selected anatomical landmarks in the fetal head, as well as calculating the mean and maximum displacements resulting from automatic non-rigid registration to a motion-free ground truth image. These experiments demonstrate a successful application of PVR motion compensation to the whole fetal body, uterus, and placenta.",
"Abstract Recent advances in fetal magnetic resonance imaging (MRI) open the door to improved detection and characterization of fetal and placental abnormalities. Since interpreting MRI data can be complex and ambiguous, there is a need for robust computational methods able to quantify placental anatomy (including its vasculature) and function. In this work, we propose a novel fully-automated method to segment the placenta and its peripheral blood vessels from fetal MRI. First, a super-resolution reconstruction of the uterus is generated by combining axial, sagittal and coronal views. The placenta is then segmented using 3D Gabor filters, texture features and Support Vector Machines. A uterus edge-based instance selection is proposed to identify the support vectors defining the placenta boundary. Subsequently, peripheral blood vessels are extracted through a curvature-based corner detector. Our approach is validated on a rich set of 44 control and pathological cases: singleton and (normal monochorionic) twin pregnancies between 25–37 weeks of gestation. Dice coefficients of 0.82 ± 0.02 and 0.81 ± 0.08 are achieved for placenta and its vasculature segmentation, respectively. A comparative analysis with state of the art convolutional neural networks (CNN), namely, 3D U-Net, V-Net, DeepMedic, Holistic3D Net, HighRes3D Net and Dense V-Net is also conducted for placenta localization, with our method outperforming all CNN approaches. Results suggest that our methodology can aid the diagnosis and surgical planning of severe fetal disorders.",
"We propose a method for the reconstruction of volumetric fetal MRI from 2D slices, comprising super-resolution reconstruction of the volume interleaved with slice-to-volume registration to correct for the motion. The method incorporates novel intensity matching of acquired 2D slices and robust statistics which completely excludes identified misregistered or corrupted voxels and slices. The reconstruction method is applied to motion-corrupted data simulated from MRI of a preterm neonate, as well as 10 clinically acquired thick-slice fetal MRI scans and three scan-sequence optimized thin-slice fetal datasets. The proposed method produced high quality reconstruction results from all the datasets to which it was applied. Quantitative analysis performed on simulated and clinical data shows that both intensity matching and robust statistics result in statistically significant improvement of super-resolution reconstruction. The proposed novel EM-based robust statistics also improves the reconstruction when compared to previously proposed Huber robust statistics. The best results are obtained when thin-slice data and the correct approximation of the point spread function is used. This paper addresses the need for a comprehensive reconstruction algorithm of 3D fetal MRI, so far lacking in the scientific literature.",
"Purpose Development of a MRI acquisition and reconstruction strategy to depict fetal cardiac anatomy in the presence of maternal and fetal motion. Methods The proposed strategy involves i) acquisition and reconstruction of highly accelerated dynamic MRI, followed by image-based ii) cardiac synchronization, iii) motion correction, iv) outlier rejection, and finally v) cardiac cine reconstruction. Postprocessing entirely was automated, aside from a user-defined region of interest delineating the fetal heart. The method was evaluated in 30 mid- to late gestational age singleton pregnancies scanned without maternal breath-hold. Results The combination of complementary acquisition reconstruction and correction rejection steps in the pipeline served to improve the quality of the reconstructed 2D cine images, resulting in increased visibility of small, dynamic anatomical features. Artifact-free cine images successfully were produced in 36 of 39 acquired data sets; prolonged general fetal movements precluded processing of the remaining three data sets. Conclusions The proposed method shows promise as a motion-tolerant framework to enable further detail in MRI studies of the fetal heart and great vessels. Processing data in image-space allowed for spatial and temporal operations to be applied to the fetal heart in isolation, separate from extraneous changes elsewhere in the field of view. Magn Reson Med 79:327–338, 2018. © 2017 The Authors Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of International Society for Magnetic Resonance in Medicine. This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited."
]
}
|
1906.08827
|
2950564960
|
In MRI, motion correction for the fetal body poses a particular challenge due to the presence of local non-rigid transformations of organs caused by bending and stretching. The existing slice-to-volume (SVR) reconstruction methods provide an efficient solution for the fetal brain that undergoes only rigid transformation or the 4D fetal heart with rigid states correlated to cardiac phases. However, for fetal body reconstruction, rigid registration cannot resolve the issue of misregistrations due to deformable motion. This results in propagation of registration error to the reconstructed volume and subsequent degradation of features. We propose a novel approach for non-rigid motion correction in 3D volumes based on an extension of the classical SVR method with a hierarchical deformable registration scheme and structure-based outlier rejection. The Deformable SVR (DSVR) method allows high resolution reconstruction of the fetal trunk and the robust scheme for structure-based rejection of misregistered slices minimises the impact of registration error. The method performance is evaluated by comparison to the SVR and patch-to-volume registration methods for reconstruction of the fetal trunk on a series of fetal MRI datasets from the 28-30 weeks gestational age (GA) range with varying degree of motion corruption. An additional phantom study with simulated non-rigid motion is used for the assessment of consistency of DSVR reconstructed volumes.
|
With respect to the application of deformable SVR for motion correction, the existing solutions primarily focus on registration of intra-operative slices with a pre-operative planning volume @cite_6 @cite_10 , multimodal registration (e.g., histology to MRI) @cite_4 @cite_14 @cite_13 or motion correction within a single volume @cite_7 . The majority of monomodal methods are based on rigid SVR for global alignment followed by Free Form Deformation (FFD) registration for correction of non-rigid shape changes. Recently, @cite_17 formalised a deformable graph-based SVR approach validated on a 3D heart MRI dataset. However, the existing implementation is limited to in-plane deformations only. Model-based SVR methods integrating biomechanical models for physics-based regularisation were proposed in the works of @cite_28 @cite_8 .
|
{
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_7",
"@cite_8",
"@cite_28",
"@cite_10",
"@cite_6",
"@cite_13",
"@cite_17"
],
"mid": [
"",
"2163751498",
"2125570966",
"",
"2030811296",
"",
"2166698883",
"",
"2622673649"
],
"abstract": [
"",
"Registration of histological images to three-dimensional imaging modalities is an important step in quantitative analysis of brain structure, in architectonic mapping of the brain, and in investigation of the pathology of a brain disease. Reconstruction of histology volume from serial sections is a well-established procedure, but it does not address registration of individual slices from sparse sections, which is the aim of the slice-to-volume approach. This study presents a flexible framework for intensity-based slice-to-volume nonrigid registration algorithms with a geometric transformation deformation field parametrized by various classes of spline functions: thin-plate splines (TPS), Gaussian elastic body splines (GEBS), or cubic B-splines. Algorithms are applied to cross-modality registration of histological and magnetic resonance images of the human brain. Registration performance is evaluated across a range of optimization algorithms and intensity-based cost functions. For a particular case of histological data, best results are obtained with a TPS three-dimensional (3D) warp, a new unconstrained optimization algorithm (NEWUOA), and a correlation-coefficient-based cost function.",
"Breathing motion is one of the main sources of artifacts in MRI acquisitions that can severely impair diagnosis. In MRI with continuously moving table, the application of common motion compensation approaches such as breath holding or the synchronization of the measurement with the breathing motion can be problematic. In this study, a technique for the reduction of breathing-motion artifacts for MRI with continuously moving table is presented, which reconstructs motion-consistent volumes from data acquired during free breathing. Axial images are acquired rapidly compared to the period of the breathing motion and consistently combined using a combination of rigid and nonrigid slice-to-volume registration. This new technique is compared to a previously reported artifact reduction method for MRI with continuously moving table that is based on the same acquisition scheme. While the latter method only suppresses ghosting artifacts, the new technique is shown to additionally reduce blurring, misregistrations, and signal cancellations in the reconstructed images. Magn Reson Med 63:701–712, 2010. © 2010 Wiley-Liss, Inc.",
"",
"A method is proposed for automatic registration of 3D preoperative magnetic resonance images of deformable tissue to a sequence of its 2D intraoperative images. The algorithm employs a dynamic continuum mechanics model of the deformation and similarity (distance) measures such as correlation ratio, mutual information or sum of squared differences for registration. The registration is solely based on information present in the 3D preoperative and 2D intraoperative images and does not require fiducial markers, feature extraction or image segmentation. Results of experiments with a biopsy training breast phantom show that the proposed method can perform well in the presence of large deformations. This is particularly useful for clinical applications such as MR-based breast biopsy where large tissue deformations occur.",
"",
"Purpose: MRI-guided prostate needle biopsy requires compensation for organ motion between target planning and needle placement. Two questions are studied and answered in this paper: 1) is rigid registration sufficient in tracking the targets with an error smaller than the clinically significant size of prostate cancer and 2) what is the effect of the number of intraoperative slices on registration accuracy and speed? Methods: we propose multislice-to-volume registration algorithms for tracking the biopsy targets within the prostate. Three orthogonal plus additional transverse intraoperative slices are acquired in the approximate center of the prostate and registered with a high-resolution target planning volume. Both rigid and deformable scenarios were implemented. Both simulated and clinical MRI-guided robotic prostate biopsy data were used to assess tracking accuracy. Results: average registration errors in clinical patient data were 2.6 mm for the rigid algorithm and 2.1 mm for the deformable algorithm. Conclusion: rigid tracking appears to be promising. Three tracking slices yield significantly high registration speed with an affordable error.",
"",
"Deformable image registration is a fundamental problem in computer vision and medical image computing. In this paper we investigate the use of graphical models in the context of a particular type of image registration problem, known as slice-to-volume registration. We introduce a scalable, modular and flexible formulation that can accommodate low-rank and high order terms, that simultaneously selects the plane and estimates the in-plane deformation through a single shot optimization approach. The proposed framework is instantiated into different variants seeking either a compromise between computational efficiency (soft plane selection constraints and approximate definition of the data similarity terms through pair-wise components) or exact definition of the data terms and the constraints on the plane selection. Simulated and real-data in the context of ultrasound and magnetic resonance registration (where both framework instantiations as well as different optimization strategies are considered) demonstrate the potentials of our method."
]
}
|
1906.08743
|
2950098109
|
Manipulating video content is easier than ever. Due to the misuse potential of manipulated content, multiple detection techniques that analyze the pixel data from the videos have been proposed. However, clever manipulators should also carefully forge the metadata and auxiliary header information, which is harder to do for videos than images. In this paper, we propose to identify forged videos by analyzing their multimedia stream descriptors with simple binary classifiers, completely avoiding the pixel space. Using well-known datasets, our results show that this scalable approach can achieve a high manipulation detection score if the manipulators have not done a careful data sanitization of the multimedia stream descriptors.
|
The multimedia forensics research community has a long history of trying to address the problem of detecting manipulations in video sequences. @cite_34 provide an extensive and thorough overview of the main research directions and solutions that have been explored in the last decade. More recent work has focused on specific video manipulations, such as local tampering detection in video sequences @cite_27 @cite_28 , video re-encoding detection @cite_3 @cite_13 , splicing detection in videos @cite_31 @cite_17 @cite_32 , and near-duplicate video detection @cite_23 @cite_7 . @cite_24 @cite_35 also present solutions that use 3D PatchMatch @cite_26 for video forgery detection and localization, whereas @cite_19 suggest using data-driven machine learning based approaches. Solutions tailored to detecting the latest video manipulation techniques have also been recently presented. These include the works of @cite_40 @cite_12 on detecting Deepfakes and @cite_33 @cite_38 on Face2Face @cite_39 manipulation detection.
|
{
"cite_N": [
"@cite_35",
"@cite_3",
"@cite_38",
"@cite_39",
"@cite_23",
"@cite_17",
"@cite_26",
"@cite_7",
"@cite_28",
"@cite_32",
"@cite_19",
"@cite_27",
"@cite_40",
"@cite_34",
"@cite_12",
"@cite_33",
"@cite_24",
"@cite_31",
"@cite_13"
],
"mid": [
"2598783175",
"2053541750",
"2913399670",
"2301937176",
"",
"",
"1993120651",
"",
"",
"",
"2725613633",
"2016828421",
"2914447220",
"2106609510",
"2911424785",
"2794857359",
"",
"2149072313",
""
],
"abstract": [
"We propose a new algorithm for the reliable detection and localization of video copy–move forgeries. Discovering well-crafted video copy–moves may be very difficult, especially when some uniform background is copied to occlude foreground objects. To reliably detect both additive and occlusive copy–moves, we use a dense-field approach, with invariant features that guarantee robustness to several postprocessing operations. To limit complexity, a suitable video-oriented version of PatchMatch is used, with a multiresolution search strategy, and a focus on volumes of interest. Performance assessment relies on a new dataset, designed ad hoc , with realistic copy–moves and a wide variety of challenging situations. Experimental results show the proposed method to detect and localize video copy–moves with good accuracy even in adverse conditions.",
"Bit rate is one of the important criterions for digital video quality. With some video tools, however, video bit rate can be easily increased without improving the video quality at all. In such a case, a claimed high bit rate video would actually have poor visual quality if it is up-converted from an original lower bit rate version. Therefore, exposing fake bit rate videos becomes an important issue for digital video forensics. To the best of our knowledge, although some methods have been proposed for exposing fake bit rate MPEG-2 videos, no relative work has been reported to further estimate their original bit rates. In this paper, we first analyze the statistical artifacts of these fake bit rate videos, including the requantization artifacts based on the first-digit law in the DCT frequency domain (12-D) and the changes of the structural similarity indexes between the query video and its sequential bit rate down-converted versions in the spatial domain (4-D), and then we propose a compact yet very effective 16-D feature vector for exposing fake bit rate videos and further estimating their original bit rates. The extensive experiments evaluated on hundreds of video sequences with four different resolutions and two typical compression schemes (i.e., MPEG-2 and H.264 AVC) have shown the effectiveness of the proposed method compared with the existing relative ones.",
"High quality face editing in videos is a growing concern and spreads distrust in video content. However, upon closer examination, many face editing algorithms exhibit artifacts that resemble classical computer vision issues that stem from face tracking and editing. As a consequence, we wonder how difficult it is to expose artificial faces from current generators? To this end, we review current facial editing methods and several characteristic artifacts from their processing pipelines. We also show that relatively simple visual artifacts can be already quite effective in exposing such manipulations, including Deepfakes and Face2Face. Since the methods are based on visual features, they are easily explicable also to non-technical experts. The methods are easy to implement and offer capabilities for rapid adjustment to new manipulation types with little data available. Despite their simplicity, the methods are able to achieve AUC values of up to 0.866.",
"We present a novel approach for real-time facial reenactment of a monocular target video sequence (e.g., Youtube video). The source sequence is also a monocular video stream, captured live with a commodity webcam. Our goal is to animate the facial expressions of the target video by a source actor and re-render the manipulated output video in a photo-realistic fashion. To this end, we first address the under-constrained problem of facial identity recovery from monocular video by non-rigid model-based bundling. At run time, we track facial expressions of both source and target video using a dense photometric consistency measure. Reenactment is then achieved by fast and efficient deformation transfer between source and target. The mouth interior that best matches the re-targeted expression is retrieved from the target sequence and warped to produce an accurate fit. Finally, we convincingly re-render the synthesized target face on top of the corresponding video stream such that it seamlessly blends with the real-world illumination. We demonstrate our method in a live setup, where Youtube videos are reenacted in real time.",
"",
"",
"This paper presents interactive image editing tools using a new randomized algorithm for quickly finding approximate nearest-neighbor matches between image patches. Previous research in graphics and vision has leveraged such nearest-neighbor searches to provide a variety of high-level digital image editing tools. However, the cost of computing a field of such matches for an entire image has eluded previous efforts to provide interactive performance. Our algorithm offers substantial performance improvements over the previous state of the art (20-100x), enabling its use in interactive editing tools. The key insights driving the algorithm are that some good patch matches can be found via random sampling, and that natural coherence in the imagery allows us to propagate such matches quickly to surrounding areas. We offer theoretical analysis of the convergence properties of the algorithm, as well as empirical and practical evidence for its high quality and performance. This one simple algorithm forms the basis for a variety of tools -- image retargeting, completion and reshuffling -- that can be used together in the context of a high-level image editing application. Finally, we propose additional intuitive constraints on the synthesis process that offer the user a level of control unavailable in previous methods.",
"",
"",
"",
"Video forgery detection is becoming an important issue in recent years, because modern editing software provide powerful and easy-to-use tools to manipulate videos. In this paper we propose to perform detection by means of deep learning, with an architecture based on autoencoders and recurrent neural networks. A training phase on a few pristine frames allows the autoencoder to learn an intrinsic model of the source. Then, forged material is singled out as anomalous, as it does not fit the learned model, and is encoded with a large reconstruction error. Recursive networks, implemented with the long short-term memory model, are used to exploit temporal dependencies. Preliminary results on forged videos show the potential of this approach.",
"Due to the ease with which digital information can be altered, many digital forensic techniques have been developed to authenticate multimedia content. Similarly, a number of anti-forensic operations have recently been designed to make digital forgeries undetectable by forensic techniques. However, like the digital manipulations they are designed to hide, many anti-forensic operations leave behind their own forensically detectable traces. As a result, a digital forger must balance the trade-off between completely erasing evidence of their forgery and introducing new evidence of anti-forensic manipulation. Because a forensic investigator is typically bound by a constraint on their probability of false alarm (P_fa), they must also balance a trade-off between the accuracy with which they detect forgeries and the accuracy with which they detect the use of anti-forensics. In this paper, we analyze the interaction between a forger and a forensic investigator by examining the problem of authenticating digital videos. Specifically, we study the problem of adding or deleting a sequence of frames from a digital video. We begin by developing a theoretical model of the forensically detectable fingerprints that frame deletion or addition leaves behind, then use this model to improve upon the video frame deletion or addition detection technique proposed by Wang and Farid. Next, we propose an anti-forensic technique designed to fool video forensic techniques and develop a method for detecting the use of anti-forensics. We introduce a new set of techniques for evaluating the performance of anti-forensic operations and develop a game theoretic framework for analyzing the interplay between a forensic investigator and a forger. We use these new techniques to evaluate the performance of each of our proposed forensic and anti-forensic techniques, and identify the optimal actions of both the forger and forensic investigator.",
"The new developments in deep generative networks have significantly improved the quality and efficiency in generating realistic-looking fake face videos. In this work, we describe a new method to expose fake face videos generated with deep neural network models. Our method is based on detection of eye blinking in the videos, which is a physiological signal that is not well presented in the synthesized fake videos. Our method is evaluated over benchmarks of eye-blinking detection datasets and shows promising performance on detecting videos generated with the DNN-based software DeepFake.",
"Validating a given multimedia content is nowadays quite a hard task because of the huge amount of possible alterations that could have been operated on it. In order to face this problem, image and video experts have proposed a wide set of solutions to reconstruct the processing history of a given multimedia signal. These strategies rely on the fact that non-reversible operations applied to a signal leave some traces (\"footprints\") that can be identified and classified in order to reconstruct the possible alterations that have been operated on the original source. These solutions permit also to identify which source generated a specific image or video content given some device-related peculiarities. The paper aims at providing an overview of the existing video processing techniques, considering all the possible alterations that can be operated on a single signal and also the possibility of identifying the traces that could reveal important information about its origin and use.",
"In recent months a machine learning based free software tool has made it easy to create believable face swaps in videos that leaves few traces of manipulation, in what are known as \"deepfake\" videos. Scenarios where these realistic fake videos are used to create political distress, blackmail someone or fake terrorism events are easily envisioned. This paper proposes a temporal-aware pipeline to automatically detect deepfake videos. Our system uses a convolutional neural network (CNN) to extract frame-level features. These features are then used to train a recurrent neural network (RNN) that learns to classify if a video has been subject to manipulation or not. We evaluate our method against a large set of deepfake videos collected from multiple video websites. We show how our system can achieve competitive results in this task while using a simple architecture.",
"With recent advances in computer vision and graphics, it is now possible to generate videos with extremely realistic synthetic faces, even in real time. Countless applications are possible, some of which raise a legitimate alarm, calling for reliable detectors of fake videos. In fact, distinguishing between original and manipulated video can be a challenge for humans and computers alike, especially when the videos are compressed or have low resolution, as it often happens on social networks. Research on the detection of face manipulations has been seriously hampered by the lack of adequate datasets. To this end, we introduce a novel face manipulation dataset of about half a million edited images (from over 1000 videos). The manipulations have been generated with a state-of-the-art face editing approach. It exceeds all existing video manipulation datasets by at least an order of magnitude. Using our new dataset, we introduce benchmarks for classical image forensic tasks, including classification and segmentation, considering videos compressed at various quality levels. In addition, we introduce a benchmark evaluation for creating indistinguishable forgeries with known ground truth; for instance with generative refinement models.",
"",
"We propose a new approach for locating forged regions in a video using correlation of noise residue. In our method, block-level correlation values of noise residual are extracted as a feature for classification. We model the distribution of correlation of temporal noise residue in a forged video as a Gaussian mixture model (GMM). We propose a two-step scheme to estimate the model parameters. Consequently, a Bayesian classifier is used to find the optimal threshold value based on the estimated parameters. Two video inpainting schemes are used to simulate two different types of forgery processes for performance evaluation. Simulation results show that our method achieves promising accuracy in video forgery detection.",
""
]
}
|
1906.08743
|
2950098109
|
Manipulating video content is easier than ever. Due to the misuse potential of manipulated content, multiple detection techniques that analyze the pixel data from the videos have been proposed. However, clever manipulators should also carefully forge the metadata and auxiliary header information, which is harder to do for videos than images. In this paper, we propose to identify forged videos by analyzing their multimedia stream descriptors with simple binary classifiers, completely avoiding the pixel space. Using well-known datasets, our results show that this scalable approach can achieve a high manipulation detection score if the manipulators have not done a careful data sanitization of the multimedia stream descriptors.
|
As covered by @cite_34 , image-based forensic techniques that leverage camera noise residuals @cite_16 , image compression artifacts @cite_1 , or geometric and physics inconsistencies in the scene @cite_37 can also be used in videos when applied frame by frame. In @cite_6 and @cite_21 , Exif image metadata is used to detect image brightness and contrast adjustments, and splicing manipulations in images, respectively. Finally, @cite_5 use video file container metadata for video integrity verification and source device identification. To the best of our knowledge, video manipulation detection techniques that exploit the multimedia stream descriptors have not been previously proposed.
|
{
"cite_N": [
"@cite_37",
"@cite_21",
"@cite_1",
"@cite_6",
"@cite_16",
"@cite_5",
"@cite_34"
],
"mid": [
"",
"",
"1994743750",
"2062233254",
"2141698957",
"2887134435",
"2106609510"
],
"abstract": [
"",
"",
"In this paper, we propose a forensic algorithm to discriminate between original and forged regions in JPEG images, under the hypothesis that the tampered image presents a double JPEG compression, either aligned (A-DJPG) or nonaligned (NA-DJPG). Unlike previous approaches, the proposed algorithm does not need to manually select a suspect region in order to test the presence or the absence of double compression artifacts. Based on an improved and unified statistical model characterizing the artifacts that appear in the presence of both A-DJPG or NA-DJPG, the proposed algorithm automatically computes a likelihood map indicating the probability for each 8 × 8 discrete cosine transform block of being doubly compressed. The validity of the proposed approach has been assessed by evaluating the performance of a detector based on thresholding the likelihood map, considering different forensic scenarios. The effectiveness of the proposed method is also confirmed by tests carried on realistic tampered images. An interesting property of the proposed Bayesian approach is that it can be easily extended to work with traces left by other kinds of processing.",
"EXchangeable Image File format (EXIF) is a metadata header containing shot-related camera settings such as aperture, exposure time, ISO speed etc. These settings can affect the photo content in many ways. In this paper, we investigate the underlying EXIF-Image correlation and propose a novel model, which correlates image statistical noise features with several commonly used EXIF features. By formulating each EXIF feature as a weighted combination of different image statistical noise features, we first select a compact image statistical noise feature set using sequential floating forward selection. The underlying correlation as a set of regression weights is then solved using a least squares solution. When applying our learned correlation to detect image manipulation, we achieve average test accuracies of 94.6 , 94.1 and 94.9 in three different cameras to detect the presence of common image brightness and contrast adjustment.",
"Digital images can be captured or generated by a variety of sources including digital cameras, scanners and computer graphics softwares. In many cases it is important to be able to determine the source of a digital image such as for criminal and forensic investigation. This paper presents methods for distinguishing between an image captured using a digital camera, a computer generated image and an image captured using a scanner. The method proposed here is based on the differences in the image generation processes used in these devices and is independent of the image content. The method is based on using features of the residual pattern noise that exist in images obtained from digital cameras and scanners. The residual noise present in computer generated images does not have structures similar to the pattern noise of cameras and scanners. The experiments show that a feature based approach using an SVM classifier gives high accuracy.",
"Video forensics keeps developing new technologies to verify the authenticity and the integrity of digital videos. While most of the existing methods rely on the analysis of the video data stream, recently, a new line of research was introduced to investigate video life cycle based on the analysis of the video container. Anyway, existing contributions in this field are based on manual comparison of video container structure and content, which is time demanding and error-prone. In this paper, we introduce a method for unsupervised analysis of video file containers, and present two main forensic applications of such method: the first one deals with video integrity verification, based on the dissimilarity between a reference and a query file container; the second one focuses on the identification and classification of the source device brand, based on the analysis of containers structure and content. Noticeably, the latter application relies on the likelihood-ratio framework, which is more and more approved by the forensic community as the appropriate way to exhibit findings in court. We tested and proved the effectiveness of both applications on a dataset composed by 578 videos taken with modern smartphones from major brands and models. The proposed approaches are proved to be valuable also for requiring an extremely small computational cost as opposed to all available techniques based on the video stream analysis or manual inspection of file containers.",
"Validating a given multimedia content is nowadays quite a hard task because of the huge amount of possible alterations that could have been operated on it. In order to face this problem, image and video experts have proposed a wide set of solutions to reconstruct the processing history of a given multimedia signal. These strategies rely on the fact that non-reversible operations applied to a signal leave some traces (\"footprints\") that can be identified and classified in order to reconstruct the possible alterations that have been operated on the original source. These solutions permit also to identify which source generated a specific image or video content given some device-related peculiarities. The paper aims at providing an overview of the existing video processing techniques, considering all the possible alterations that can be operated on a single signal and also the possibility of identifying the traces that could reveal important information about its origin and use."
]
}
|
1906.08487
|
2950593761
|
Recent neural conversation models that attempted to incorporate emotion and generate empathetic responses either focused on conditioning the output to a given emotion, or incorporating the current user emotional state. While these approaches have been successful to some extent in generating more diverse and seemingly engaging utterances, they do not factor in how the user would feel towards the generated dialogue response. Hence, in this paper, we advocate such look-ahead of user emotion as the key to modeling and generating empathetic dialogue responses. We thus train a Sentiment Predictor to estimate the user sentiment look-ahead towards the generated system responses, which is then used as the reward function for generating more empathetic responses. Human evaluation results show that our model outperforms other baselines in empathy, relevance, and fluency.
|
Recognizing sentiment and emotion has been a relatively well understood and researched task @cite_1 @cite_13 @cite_15 @cite_27 that has been deemed necessary for generating empathetic dialogues @cite_19 @cite_32 @cite_30 @cite_34 @cite_26 . Taking this further, successfully introduce a framework of controlling the sentiment and emoji of the generated response, while released a new Twitter conversation dataset distantly supervised with emojis. Meanwhile, also introduce new datasets for empathetic dialogues and train multi-task models on them. Finally, Deep RL has gained popularity for the ability to optimize non-differentiable metrics in summarization @cite_11 @cite_14 , dialogue @cite_3 , and emotional chatbots @cite_25 .
|
{
"cite_N": [
"@cite_30",
"@cite_26",
"@cite_14",
"@cite_1",
"@cite_32",
"@cite_3",
"@cite_19",
"@cite_27",
"@cite_15",
"@cite_34",
"@cite_13",
"@cite_25",
"@cite_11"
],
"mid": [
"2905807898",
"",
"",
"2251939518",
"",
"2963167310",
"2465954854",
"2949776718",
"2963756346",
"2955429306",
"2740582239",
"2768195931",
"2963248296"
],
"abstract": [
"Abstract Big Data and Deep Learning algorithms combined with enormous computing power have paved ways for significant technological advancements. Technology is evolving to anticipate, understand and address our unmet needs. However, to fully meet human needs, machines or computers must deeply understand human behavior including emotions. Emotions are physiological states generated in humans as a reaction to internal or external events. They are complex and studied across numerous fields including computer science. As humans, on reading “Why don't you ever text me!”, we can either interpret it as a sad or an angry emotion and the same ambiguity exists for machines as well. Lack of facial expressions and voice modulations make detecting emotions in text a challenging problem. However, in today's online world, humans are increasingly communicating using text messaging applications and digital agents. Hence, it is imperative for machines to understand emotions in textual dialogue to provide emotionally aware responses to users. In this paper, we propose a novel Deep Learning based approach to detect emotions - Happy, Sad and Angry in textual dialogues. The essence of our approach lies in combining both semantic and sentiment based representations for more accurate emotion detection. We use semi-automated techniques to gather large scale training data with diverse ways of expressing emotions to train our model. Evaluation of our approach on real world dialogue datasets reveals that it significantly outperforms traditional Machine Learning baselines as well as other off-the-shelf Deep Learning models.",
"",
"",
"Semantic word spaces have been very useful but cannot express the meaning of longer phrases in a principled way. Further progress towards understanding compositionality in tasks such as sentiment detection requires richer supervised training and evaluation resources and more powerful models of composition. To remedy this, we introduce a Sentiment Treebank. It includes fine-grained sentiment labels for 215,154 phrases in the parse trees of 11,855 sentences and presents new challenges for sentiment compositionality. To address them, we introduce the Recursive Neural Tensor Network. When trained on the new treebank, this model outperforms all previous methods on several metrics. It pushes the state of the art in single-sentence positive/negative classification from 80 up to 85.4 . The accuracy of predicting fine-grained sentiment labels for all phrases reaches 80.7 , an improvement of 9.7 over bag of features baselines. Lastly, it is the only model that can accurately capture the effects of negation and its scope at various tree levels for both positive and negative phrases.",
"",
"",
"",
"In this paper, we propose Emo2Vec which encodes emotional semantics into vectors. We train Emo2Vec by multi-task learning six different emotion-related tasks, including emotion sentiment analysis, sarcasm classification, stress detection, abusive language classification, insult detection, and personality recognition. Our evaluation of Emo2Vec shows that it outperforms existing affect-related representations, such as Sentiment-Specific Word Embedding and DeepMoji embeddings with much smaller training corpora. When concatenated with GloVe, Emo2Vec achieves competitive performances to state-of-the-art results on several tasks using a simple logistic regression classifier.",
"Computer vision has benefited from initializing multiple deep layers with weights pretrained on large supervised training sets like ImageNet. Natural language processing (NLP) typically sees initialization of only the lowest layer of deep models with pretrained word vectors. In this paper, we use a deep LSTM encoder from an attentional sequence-to-sequence model trained for machine translation (MT) to contextualize word vectors. We show that adding these context vectors (CoVe) improves performance over using only unsupervised word and character vectors on a wide variety of common NLP tasks: sentiment analysis (SST, IMDb), question classification (TREC), entailment (SNLI), and question answering (SQuAD). For fine-grained sentiment analysis and entailment, CoVe improves performance of our baseline models to the state of the art.",
"",
"NLP tasks are often limited by scarcity of manually annotated data. In social media sentiment analysis and related tasks, researchers have therefore used binarized emoticons and specific hashtags as forms of distant supervision. Our paper shows that by extending the distant supervision to a more diverse set of noisy labels, the models can learn richer representations. Through emoji prediction on a dataset of 1246 million tweets containing one of 64 common emojis we obtain state-of-the-art performance on 8 benchmark datasets within sentiment, emotion and sarcasm detection using a single pretrained model. Our analyses confirm that the diversity of our emotional labels yield a performance improvement over previous distant supervision approaches.",
"Generating emotional language is a key step towards building empathetic natural language processing agents. However, a major challenge for this line of research is the lack of large-scale labeled training data, and previous studies are limited to only small sets of human annotated sentiment labels. Additionally, explicitly controlling the emotion and sentiment of generated text is also difficult. In this paper, we take a more radical approach: we exploit the idea of leveraging Twitter data that are naturally labeled with emojis. More specifically, we collect a large corpus of Twitter conversations that include emojis in the response, and assume the emojis convey the underlying emotions of the sentence. We then introduce a reinforced conditional variational encoder approach to train a deep generative model on these conversations, which allows us to use emojis to control the emotion of the generated text. Experimentally, we show in our quantitative and qualitative analyses that the proposed models can successfully generate high-quality abstractive conversation responses in accordance with designated emotions.",
"Abstract: Many natural language processing applications use language models to generate text. These models are typically trained to predict the next word in a sequence, given the previous words and some context such as an image. However, at test time the model is expected to generate the entire sequence from scratch. This discrepancy makes generation brittle, as errors may accumulate along the way. We address this issue by proposing a novel sequence level training algorithm that directly optimizes the metric used at test time, such as BLEU or ROUGE. On three different tasks, our approach outperforms several strong baselines for greedy generation. The method is also competitive when these baselines employ beam search, while being several times faster."
]
}
|
1906.08591
|
2949138582
|
Large-scale labeled datasets are the indispensable fuel that ignites the AI revolution as we see today. Most such datasets are constructed using crowdsourcing services such as Amazon Mechanical Turk which provides noisy labels from non-experts at a fair price. The sheer size of such datasets mandates that it is only feasible to collect a few labels per data point. We formulate the problem of test-time label aggregation as a statistical estimation problem of inferring the expected voting score in an ideal world where all workers label all items. By imitating workers with supervised learners and using them in a doubly robust estimation framework, we prove that the variance of estimation can be substantially reduced, even if the learner is a poor approximation. Synthetic and real-world experiments show that by combining the doubly robust approach with adaptive worker item selection, we often need as low as 0.1 labels per data point to achieve nearly the same accuracy as in the ideal world where all workers label all data points.
|
We briefly summarize the related work. Our study is motivated by the many trailblazing approaches in label-aggregation including the wisdom-of-crowds , Dawid-Skene model , minimax entropy approach @cite_4 , permutation-based model , worker cluster model , crowdsourced regression model @cite_1 and so on. Our contribution is complementary as we can take any of these models as blackboxes and hopefully improve true-label inference.
|
{
"cite_N": [
"@cite_1",
"@cite_4"
],
"mid": [
"2949107948",
"1814633089"
],
"abstract": [
"Crowdsourcing platforms emerged as popular venues for purchasing human intelligence at low cost for large volume of tasks. As many low-paid workers are prone to give noisy answers, a common practice is to add redundancy by assigning multiple workers to each task and then simply average out these answers. However, to fully harness the wisdom of the crowd, one needs to learn the heterogeneous quality of each worker. We resolve this fundamental challenge in crowdsourced regression tasks, i.e., the answer takes continuous labels, where identifying good or bad workers becomes much more non-trivial compared to a classification setting of discrete labels. In particular, we introduce a Bayesian iterative scheme and show that it provably achieves the optimal mean squared error. Our evaluations on synthetic and real-world datasets support our theoretical results and show the superiority of the proposed scheme.",
"There is a rapidly increasing interest in crowdsourcing for data labeling. By crowdsourcing, a large number of labels can be often quickly gathered at low cost. However, the labels provided by the crowdsourcing workers are usually not of high quality. In this paper, we propose a minimax conditional entropy principle to infer ground truth from noisy crowdsourced labels. Under this principle, we derive a unique probabilistic labeling model jointly parameterized by worker ability and item difficulty. We also propose an objective measurement principle, and show that our method is the only method which satisfies this objective measurement principle. We validate our method through a variety of real crowdsourcing datasets with binary, multiclass or ordinal labels."
]
}
|
1906.08511
|
2950323881
|
Zero-shot learning (ZSL) and cold-start recommendation (CSR) are two challenging problems in computer vision and recommender system, respectively. In general, they are independently investigated in different communities. This paper, however, reveals that ZSL and CSR are two extensions of the same intension. Both of them, for instance, attempt to predict unseen classes and involve two spaces, one for direct feature representation and the other for supplementary description. Yet there is no existing approach which addresses CSR from the ZSL perspective. This work, for the first time, formulates CSR as a ZSL problem, and a tailor-made ZSL method is proposed to handle CSR. Specifically, we propose a Low-rank Linear Auto-Encoder (LLAE), which challenges three cruxes, i.e., domain shift, spurious correlations and computing efficiency, in this paper. LLAE consists of two parts, a low-rank encoder maps user behavior into user attributes and a symmetric decoder reconstructs user behavior from user attributes. Extensive experiments on both ZSL and CSR tasks verify that the proposed method is a win-win formulation, i.e., not only can CSR be handled by ZSL models with a significant performance improvement compared with several conventional state-of-the-art methods, but the consideration of CSR can benefit ZSL as well.
|
Zero-shot learning. A basic assumption behind conventional visual recognition algorithms is that some instances of the test class are included in the training set, so that other test instances can be recognized by learning from the training samples. For a large-scale dataset, however, collecting training samples for new and rare objects is painful. A curious mind may ask if we can recognize an unseen object with some semantic description just like human beings do. To that end, zero-shot learning @cite_13 @cite_18 has been proposed. Typically, ZSL algorithms learn a projection which maps the visual space to the semantic space, or the reverse. Different models are proposed based on different projection strategies. From a macro perspective, existing ZSL methods can be grouped into three categories: 1) Learning a mapping function from the visual space to the semantic space @cite_9 @cite_18 ; 2) Learning a mapping function from the semantic space to the visual space @cite_26 ; 3) Learning a latent space which is shared by the visual domain and the semantic domain @cite_27 @cite_13 .
|
{
"cite_N": [
"@cite_18",
"@cite_26",
"@cite_9",
"@cite_27",
"@cite_13"
],
"mid": [
"2736590139",
"2209594346",
"2128532956",
"1858576077",
"2952567519"
],
"abstract": [
"Zero-shot learning for visual recognition has received much interest in the most recent years. However, the semantic gap across visual features and their underlying semantics is still the biggest obstacle in zero-shot learning. To fight off this hurdle, we propose an effective Low-rank Embedded Semantic Dictionary learning (LESD) through ensemble strategy. Specifically, we formulate a novel framework to jointly seek a low-rank embedding and semantic dictionary to link visual features with their semantic representations, which manages to capture shared features across different observed classes. Moreover, ensemble strategy is adopted to learn multiple semantic dictionaries to constitute the latent basis for the unseen classes. Consequently, our model could extract a variety of visual characteristics within objects, which can be well generalized to unknown categories. Extensive experiments on several zero-shot benchmarks verify that the proposed model can outperform the state-of-the-art approaches.",
"Zero-shot learning (ZSL) can be considered as a special case of transfer learning where the source and target domains have different tasks label spaces and the target domain is unlabelled, providing little guidance for the knowledge transfer. A ZSL method typically assumes that the two domains share a common semantic representation space, where a visual feature vector extracted from an image video can be projected embedded using a projection function. Existing approaches learn the projection function from the source domain and apply it without adaptation to the target domain. They are thus based on naive knowledge transfer and the learned projections are prone to the domain shift problem. In this paper a novel ZSL method is proposed based on unsupervised domain adaptation. Specifically, we formulate a novel regularised sparse coding framework which uses the target domain class labels' projections in the semantic space to regularise the learned target domain projection thus effectively overcoming the projection domain shift problem. Extensive experiments on four object and action recognition benchmark datasets show that the proposed ZSL method significantly outperforms the state-of-the-arts.",
"We study the problem of object recognition for categories for which we have no training examples, a task also called zero--data or zero-shot learning. This situation has hardly been studied in computer vision research, even though it occurs frequently; the world contains tens of thousands of different object classes, and image collections have been formed and suitably annotated for only a few of them. To tackle the problem, we introduce attribute-based classification: Objects are identified based on a high-level description that is phrased in terms of semantic attributes, such as the object's color or shape. Because the identification of each such property transcends the specific learning task at hand, the attribute classifiers can be prelearned independently, for example, from existing image data sets unrelated to the current task. Afterward, new classes can be detected based on their attribute representation, without the need for a new training phase. In this paper, we also introduce a new data set, Animals with Attributes, of over 30,000 images of 50 animal classes, annotated with 85 semantic attributes. Extensive experiments on this and two more data sets show that attribute-based classification indeed is able to categorize images without access to any training images of the target classes.",
"In this paper we consider a version of the zero-shot learning problem where seen class source and target domain data are provided. The goal during test-time is to accurately predict the class label of an unseen target domain instance based on revealed source domain side information ( attributes) for unseen classes. Our method is based on viewing each source or target data as a mixture of seen class proportions and we postulate that the mixture patterns have to be similar if the two instances belong to the same unseen class. This perspective leads us to learning source target embedding functions that map an arbitrary source target domain data into a same semantic space where similarity can be readily measured. We develop a max-margin framework to learn these similarity functions and jointly optimize parameters by means of cross validation. Our test results are compelling, leading to significant improvement in terms of accuracy on most benchmark datasets for zero-shot recognition.",
"Zero-shot recognition (ZSR) deals with the problem of predicting class labels for target domain instances based on source domain side information (e.g. attributes) of unseen classes. We formulate ZSR as a binary prediction problem. Our resulting classifier is class-independent. It takes an arbitrary pair of source and target domain instances as input and predicts whether or not they come from the same class, i.e. whether there is a match. We model the posterior probability of a match since it is a sufficient statistic and propose a latent probabilistic model in this context. We develop a joint discriminative learning framework based on dictionary learning to jointly learn the parameters of our model for both domains, which ultimately leads to our class-independent classifier. Many of the existing embedding methods can be viewed as special cases of our probabilistic model. On ZSR our method shows 4.90 improvement over the state-of-the-art in accuracy averaged across four benchmark datasets. We also adapt ZSR method for zero-shot retrieval and show 22.45 improvement accordingly in mean average precision (mAP)."
]
}
|
1906.08511
|
2950323881
|
Zero-shot learning (ZSL) and cold-start recommendation (CSR) are two challenging problems in computer vision and recommender system, respectively. In general, they are independently investigated in different communities. This paper, however, reveals that ZSL and CSR are two extensions of the same intension. Both of them, for instance, attempt to predict unseen classes and involve two spaces, one for direct feature representation and the other for supplementary description. Yet there is no existing approach which addresses CSR from the ZSL perspective. This work, for the first time, formulates CSR as a ZSL problem, and a tailor-made ZSL method is proposed to handle CSR. Specifically, we propose a Low-rank Linear Auto-Encoder (LLAE), which challenges three cruxes, i.e., domain shift, spurious correlations and computing efficiency, in this paper. LLAE consists of two parts, a low-rank encoder maps user behavior into user attributes and a symmetric decoder reconstructs user behavior from user attributes. Extensive experiments on both ZSL and CSR tasks verify that the proposed method is a win-win formulation, i.e., not only can CSR be handled by ZSL models with a significant performance improvement compared with several conventional state-of-the-art methods, but the consideration of CSR can benefit ZSL as well.
|
Cold-start recommendation. Among the models which address cold-start recommendation, we focus on those which exploit side information, e.g., user attributes, personal information and user social network data, to alleviate the cold-start problem. These models can be roughly grouped into three categories: similarity-based models @cite_28 @cite_6 , matrix factorization models @cite_8 @cite_32 , and feature mapping models @cite_19 .
|
{
"cite_N": [
"@cite_8",
"@cite_28",
"@cite_32",
"@cite_6",
"@cite_19"
],
"mid": [
"2064754513",
"2000764607",
"2148320811",
"2124744438",
"2102982709"
],
"abstract": [
"A key element of the social networks on the internet such as Facebook and Flickr is that they encourage users to create connections between themselves, other users and objects. One important task that has been approached in the literature that deals with such data is to use social graphs to predict user behavior (e.g. joining a group of interest). More specifically, we study the cold-start problem, where users only participate in some relations, which we will call social relations, but not in the relation on which the predictions are made, which we will refer to as target relations. We propose a formalization of the problem and a principled approach to it based on multi-relational factorization techniques. Furthermore, we derive a principled feature extraction scheme from the social data to extract predictors for a classifier on the target relation. Experiments conducted on real world datasets show that our approach outperforms current methods.",
"We examine the cold-start recommendation task in an online retail setting for users who have not yet purchased (or interacted in a meaningful way with) any available items but who have granted access to limited side information, such as basic demographic data (gender, age, location) or social network information (Facebook friends or page likes). We formalize neighborhood-based methods for cold-start collaborative filtering in a generalized matrix algebra framework that does not require purchase data for target users when their side information is available. In real-data experiments with 30,000 users who purchased 80,000+ books and had 9,000,000+ Facebook friends and 6,000,000+ page likes, we show that using Facebook page likes for cold-start recommendation yields up to a 3-fold improvement in mean average precision (mAP) and up to 6-fold improvements in Precision@k and Recall@k compared to most-popular-item, demographic, and Facebook friend cold-start recommenders. These results demonstrate the substantial predictive power of social network content, and its significant utility in a challenging problem - recommendation for cold-start users.",
"This paper examines the problem of social collaborative filtering (CF) to recommend items of interest to users in a social network setting. Unlike standard CF algorithms using relatively simple user and item features, recommendation in social networks poses the more complex problem of learning user preferences from a rich and complex set of user profile and interaction information. Many existing social CF methods have extended traditional CF matrix factorization, but have overlooked important aspects germane to the social setting. We propose a unified framework for social CF matrix factorization by introducing novel objective functions for training. Our new objective functions have three key features that address main drawbacks of existing approaches: (a) we fully exploit feature-based user similarity, (b) we permit direct learning of user-to-user information diffusion, and (c) we leverage co-preference (dis)agreement between two users to learn restricted areas of common interest. We evaluate these new social CF objectives, comparing them to each other and to a variety of (social) CF baselines, and analyze user behavior on live user trials in a custom-developed Facebook App involving data collected over five months from over 100 App users and their 37,000+ friends.",
"Abundance of information in recent years has become a serious challenge for web users. Recommender systems (RSs) have been often utilized to alleviate this issue. RSs prune large information spaces to recommend the most relevant items to users by considering their preferences. Nonetheless, in situations where users or items have few opinions, the recommendations cannot be made properly. This notable shortcoming in practical RSs is called cold-start problem. In the present study, we propose a novel approach to address this problem by incorporating social networking features. Coined as enhanced content-based algorithm using social networking (ECSN), the proposed algorithm considers the submitted ratings of faculty mates and friends besides user’s own preferences. The effectiveness of ECSN algorithm was evaluated by implementing it in MyExpert, a newly designed academic social network (ASN) for academics in Malaysia. Real feedbacks from live interactions of MyExpert users with the recommended items are recorded for 12 consecutive weeks in which four different algorithms, namely, random, collaborative, content-based, and ECSN were applied every three weeks. The empirical results show significant performance of ECSN in mitigating the cold-start problem besides improving the prediction accuracy of recommendations when compared with other studied recommender algorithms.",
"Cold-start scenarios in recommender systems are situations in which no prior events, like ratings or clicks, are known for certain users or items. To compute predictions in such cases, additional information about users (user attributes, e.g. gender, age, geographical location, occupation) and items (item attributes, e.g. genres, product categories, keywords) must be used. We describe a method that maps such entity (e.g. user or item) attributes to the latent features of a matrix (or higher-dimensional) factorization model. With such mappings, the factors of a MF model trained by standard techniques can be applied to the new-user and the new-item problem, while retaining its advantages, in particular speed and predictive accuracy. We use the mapping concept to construct an attribute-aware matrix factorization model for item recommendation from implicit, positive-only feedback. Experiments on the new-item problem show that this approach provides good predictive accuracy, while the prediction time only grows by a constant factor."
]
}
|
1906.08593
|
2953342208
|
Attention is a very efficient way to model the relationship between two sequences by comparing how similar two intermediate representations are. Initially demonstrated in NMT, it is a standard in all NLU tasks today when efficient interaction between sequences is considered. However, we show that attention, by virtue of its composition, works best only when it is given that there is a match somewhere between two sequences. It does not very well adapt to cases when there is no similarity between two sequences or if the relationship is contrastive. We propose an Conflict model which is very similar to how attention works but which emphasizes mostly on how well two sequences repel each other and finally empirically show how this method in conjunction with attention can boost the overall performance.
|
@cite_7 first introduced attention in neural machine translation, using a feed-forward network over the addition of encoder and decoder states to compute the alignment score. Our work is very similar to this, except that we use the element-wise difference instead of the addition to build our conflict function. @cite_9 proposed scaled dot-product attention in their Transformer model, which is fast and memory-efficient; thanks to the scaling factor, it does not suffer from gradients zeroing out. On the other hand, @cite_5 experimented with global and local attention, distinguished by how many hidden states the attention function takes into account. Their experiments revolved around three attention functions - dot, concat and general - and they found that the dot product works best for global attention. Our work also belongs to the global attention family, as we consider all the hidden states of the sequence.
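For reference, the scaled dot-product attention mentioned above can be sketched as follows (single head, no masking, a simplification of the full Transformer formulation):

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)   # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V.
    Q: (m x d_k) queries, K: (n x d_k) keys, V: (n x d_v) values."""
    d_k = Q.shape[-1]
    # the 1/sqrt(d_k) factor keeps the softmax out of its saturated regime,
    # which is what prevents gradients from zeroing out
    scores = Q @ K.T / np.sqrt(d_k)
    weights = softmax(scores, axis=-1)
    return weights @ V, weights
```

The attention weights sum to one over the keys, so the output is always a convex combination of the values; this is exactly the property the conflict function discussed above has to work around when two sequences repel each other.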
|
{
"cite_N": [
"@cite_5",
"@cite_9",
"@cite_7"
],
"mid": [
"2949335953",
"2963403868",
"2964308564"
],
"abstract": [
"An attentional mechanism has lately been used to improve neural machine translation (NMT) by selectively focusing on parts of the source sentence during translation. However, there has been little work exploring useful architectures for attention-based NMT. This paper examines two simple and effective classes of attentional mechanism: a global approach which always attends to all source words and a local one that only looks at a subset of source words at a time. We demonstrate the effectiveness of both approaches over the WMT translation tasks between English and German in both directions. With local attention, we achieve a significant gain of 5.0 BLEU points over non-attentional systems which already incorporate known techniques such as dropout. Our ensemble model using different attention architectures has established a new state-of-the-art result in the WMT'15 English to German translation task with 25.9 BLEU points, an improvement of 1.0 BLEU points over the existing best system backed by NMT and an n-gram reranker.",
"The dominant sequence transduction models are based on complex recurrent or convolutional neural networks that include an encoder and a decoder. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles, by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.0 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature.",
"Abstract: Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consists of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition."
]
}
|
1906.08470
|
2951141188
|
Automatically extracted metadata from scholarly documents in PDF formats is usually noisy and heterogeneous, often containing incomplete fields and erroneous values. One common way of cleaning metadata is to use a bibliographic reference dataset. The challenge is to match records between corpora with high precision. The existing solution which is based on information retrieval and string similarity on titles works well only if the titles are cleaned. We introduce a system designed to match scholarly document entities with noisy metadata against a reference dataset. The blocking function uses the classic BM25 algorithm to find the matching candidates from the reference data that has been indexed by ElasticSearch. The core components use supervised methods which combine features extracted from all available metadata fields. The system also leverages available citation information to match entities. The combination of metadata and citation achieves high accuracy that significantly outperforms the baseline method on the same test dataset. We apply this system to match the database of CiteSeerX against Web of Science, PubMed, and DBLP. This method will be deployed in the CiteSeerX system to clean metadata and link records to other scholarly big datasets.
|
Information Retrieval-based : This method searches one or more attributes of an entity in the target corpus against an index of the reference corpus and ranks the candidates using a similarity metric. This approach was used for matching with DBLP @cite_2 . The reference dataset (DBLP) was indexed by Apache Solr. Metadata from the noisy dataset ( ) were used to query the corresponding fields, and candidates were selected based on similarity scores. It was found that using 3-grams of titles and Jaccard similarity with a threshold of @math achieves the best F1-measure of @math (on @math ,000 records) without applying any blocking function to reduce the search space, so this method cannot be scaled up to large digital libraries containing tens of millions of records.
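The 3-gram/Jaccard title comparison described above can be sketched as follows; the 0.7 threshold here is an illustrative assumption, not the value reported in the cited work:

```python
def char_ngrams(text, n=3):
    """Character n-grams of a whitespace-normalized, lowercased title."""
    t = " ".join(text.lower().split())
    return {t[i:i + n] for i in range(len(t) - n + 1)}

def jaccard(a, b):
    """|A intersect B| / |A union B|; defined as 0.0 when both sets are empty."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def title_match(title1, title2, n=3, threshold=0.7):
    """Declare a match when the n-gram Jaccard similarity clears the threshold."""
    return jaccard(char_ngrams(title1, n), char_ngrams(title2, n)) >= threshold
```

Note that this pairwise test alone is quadratic in the number of records, which is precisely why a blocking function is needed before it can scale to large digital libraries.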
|
{
"cite_N": [
"@cite_2"
],
"mid": [
"2253675773"
],
"abstract": [
"The CiteSeer x digital library stores and indexes research articles in Computer Science and related fields. Although its main purpose is to make it easier for researchers to search for scientific information, CiteSeer x has been proven as a powerful resource in many data mining, machine learning and information retrieval applications that use rich metadata, e.g., titles, abstracts, authors, venues, references lists, etc. The metadata extraction in CiteSeer x is done using automated techniques. Although fairly accurate, these techniques still result in noisy metadata. Since the performance of models trained on these data highly depends on the quality of the data, we propose an approach to CiteSeer x metadata cleaning that incorporates information from an external data source. The result is a subset of CiteSeer x , which is substantially cleaner than the entire set. Our goal is to make the new dataset available to the research community to facilitate future work in Information Retrieval."
]
}
|
1906.08470
|
2951141188
|
Automatically extracted metadata from scholarly documents in PDF formats is usually noisy and heterogeneous, often containing incomplete fields and erroneous values. One common way of cleaning metadata is to use a bibliographic reference dataset. The challenge is to match records between corpora with high precision. The existing solution which is based on information retrieval and string similarity on titles works well only if the titles are cleaned. We introduce a system designed to match scholarly document entities with noisy metadata against a reference dataset. The blocking function uses the classic BM25 algorithm to find the matching candidates from the reference data that has been indexed by ElasticSearch. The core components use supervised methods which combine features extracted from all available metadata fields. The system also leverages available citation information to match entities. The combination of metadata and citation achieves high accuracy that significantly outperforms the baseline method on the same test dataset. We apply this system to match the database of CiteSeerX against Web of Science, PubMed, and DBLP. This method will be deployed in the CiteSeerX system to clean metadata and link records to other scholarly big datasets.
|
Topical-based : This method is used to resolve and match entities that are represented by free text, e.g., Wiki articles. The challenge is that different sources may use different languages or terminologies to describe the same topic. A probabilistic model was proposed to integrate topic extraction and entity matching into a unified model @cite_8 . As we do not have access to the full text of the reference datasets, this method is not applicable to our problem.
|
{
"cite_N": [
"@cite_8"
],
"mid": [
"2079659743"
],
"abstract": [
"Given an entity in a source domain, finding its matched entities from another (target) domain is an important task in many applications. Traditionally, the problem was usually addressed by first extracting major keywords corresponding to the source entity and then query relevant entities from the target domain using those keywords. However, the method would inevitably fails if the two domains have less or no overlapping in the content. An extreme case is that the source domain is in English and the target domain is in Chinese. In this paper, we formalize the problem as entity matching across heterogeneous sources and propose a probabilistic topic model to solve the problem. The model integrates the topic extraction and entity matching, two core subtasks for dealing with the problem, into a unified model. Specifically, for handling the text disjointing problem, we use a cross-sampling process in our model to extract topics with terms coming from all the sources, and leverage existing matching relations through latent topic layers instead of at text layers. Benefit from the proposed model, we can not only find the matched documents for a query entity, but also explain why these documents are related by showing the common topics they share. Our experiments in two real-world applications show that the proposed model can extensively improve the matching performance (+19.8 and +7.1 in two applications respectively) compared with several alternative methods."
]
}
|
1906.08467
|
2951866079
|
Convolutional neural networks have a significant improvement in the accuracy of Object detection. As convolutional neural networks become deeper, the accuracy of detection is also obviously improved, and more floating-point calculations are needed. Many researchers use the knowledge distillation method to improve the accuracy of student networks by transferring knowledge from a deeper and larger teachers network to a small student network, in object detection. Most methods of knowledge distillation need to designed complex cost functions and they are aimed at the two-stage object detection algorithm. This paper proposes a clean and effective knowledge distillation method for the one-stage object detection. The feature maps generated by teacher network and student network are used as true samples and fake samples respectively, and generate adversarial training for both to improve the performance of the student network in one-stage object detection.
|
CNN For Detection : Deep learning architectures for object detection are mainly divided into two types: 1) one-stage object detectors, such as the SSD proposed by Liu @cite_21 , which directly regress the position and category of each object with a convolutional neural network; 2) two-stage object detectors, such as Fast R-CNN @cite_9 and the later Faster-RCNN @cite_12 and R-FCN @cite_13 , which first regress proposal boxes with a convolutional neural network, then classify each proposal box, and finally refine the location and category.
|
{
"cite_N": [
"@cite_9",
"@cite_21",
"@cite_13",
"@cite_12"
],
"mid": [
"",
"2193145675",
"2407521645",
"2953106684"
],
"abstract": [
"",
"We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. SSD is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stages and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Experimental results on the PASCAL VOC, COCO, and ILSVRC datasets confirm that SSD has competitive accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. For (300 300 ) input, SSD achieves 74.3 mAP on VOC2007 test at 59 FPS on a Nvidia Titan X and for (512 512 ) input, SSD achieves 76.9 mAP, outperforming a comparable state of the art Faster R-CNN model. Compared to other single stage methods, SSD has much better accuracy even with a smaller input image size. Code is available at https: github.com weiliu89 caffe tree ssd.",
"We present region-based, fully convolutional networks for accurate and efficient object detection. In contrast to previous region-based detectors such as Fast Faster R-CNN [7, 19] that apply a costly per-region subnetwork hundreds of times, our region-based detector is fully convolutional with almost all computation shared on the entire image. To achieve this goal, we propose position-sensitive score maps to address a dilemma between translation-invariance in image classification and translation-variance in object detection. Our method can thus naturally adopt fully convolutional image classifier backbones, such as the latest Residual Networks (ResNets) [10], for object detection. We show competitive results on the PASCAL VOC datasets (e.g., 83.6 mAP on the 2007 set) with the 101-layer ResNet. Meanwhile, our result is achieved at a test-time speed of 170ms per image, 2.5-20x faster than the Faster R-CNN counterpart. Code is made publicly available at: https: github.com daijifeng001 r-fcn.",
"State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet and Fast R-CNN have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features---using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model, our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available."
]
}
|
1906.08467
|
2951866079
|
Convolutional neural networks have a significant improvement in the accuracy of Object detection. As convolutional neural networks become deeper, the accuracy of detection is also obviously improved, and more floating-point calculations are needed. Many researchers use the knowledge distillation method to improve the accuracy of student networks by transferring knowledge from a deeper and larger teachers network to a small student network, in object detection. Most methods of knowledge distillation need to designed complex cost functions and they are aimed at the two-stage object detection algorithm. This paper proposes a clean and effective knowledge distillation method for the one-stage object detection. The feature maps generated by teacher network and student network are used as true samples and fake samples respectively, and generate adversarial training for both to improve the performance of the student network in one-stage object detection.
|
Network Compression : Many researchers believe that deep neural networks are over-parameterized, with too many redundant neurons and connections. He [8] observed that the neurons in each layer of a convolutional neural network are sparse, and used lasso regression to find the most representative neurons of each layer, which are then used to reconstruct the output of that layer. Zhuang @cite_10 believe that layer-by-layer channel pruning harms the discriminative ability of convolutional neural networks, so they preserve that ability by adding an auxiliary loss during the fine-tuning and pruning stages.
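A rough sketch of reconstruction-driven channel selection in the spirit of the approach above; for brevity it uses greedy least-squares selection as a stand-in for the lasso regression step, so it illustrates the idea rather than reproducing the cited method:

```python
import numpy as np

def select_channels(X, y, k):
    """Greedily pick k channels whose least-squares reconstruction of the
    layer output y is best. X: (samples x channels) per-channel responses,
    y: (samples,) the layer output to reconstruct. This greedy search is a
    simplified proxy for lasso-based channel selection."""
    chosen = []
    remaining = list(range(X.shape[1]))
    for _ in range(k):
        best, best_err = None, np.inf
        for c in remaining:
            cols = chosen + [c]
            w, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
            err = np.linalg.norm(X[:, cols] @ w - y)
            if err < best_err:
                best, best_err = c, err
        chosen.append(best)
        remaining.remove(best)
    return sorted(chosen)
```

After selection, the surviving channels would be re-weighted (here by the final least-squares fit) to reconstruct the pruned layer's output, and the whole network fine-tuned.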
|
{
"cite_N": [
"@cite_10"
],
"mid": [
"2963094099"
],
"abstract": [
"Channel pruning is one of the predominant approaches for deep model compression. Existing pruning methods either (i) train from scratch with sparsity constraints on channels, or (ii) minimize the reconstruction error between the pre-trained feature maps and the compressed ones. Both strategies suffer from limitations: the former kind is computationally expensive and difficult to converge, whilst the latter kind optimizes the reconstruction error but ignores the discriminative power of channels. To overcome these drawbacks, we investigate a simple-yet-effective method, named discrimination-aware channel pruning (DCP), which seeks to select those channels that really contribute to discriminative power. To this end, we introduce additional losses into the network to increase the discriminative power of intermediate layers. We then propose to select the most discriminative channels for each layer, where both an additional loss and the reconstruction error are considered. Last, we propose a greedy algorithm to make channel selection and parameter optimization in an iterative way. Extensive experiments demonstrate the effectiveness of our method. For example, on ILSVRC-12, our pruned ResNet-50 with 30 reduction of channels even outperforms the original model by 0.39 in top-1 accuracy."
]
}
|
1906.08467
|
2951866079
|
Convolutional neural networks have brought a significant improvement in the accuracy of object detection. As convolutional neural networks become deeper, detection accuracy clearly improves, but more floating-point computation is needed. In object detection, many researchers use knowledge distillation to improve the accuracy of a small student network by transferring knowledge from a deeper and larger teacher network. Most knowledge distillation methods require designing complex cost functions, and they are aimed at two-stage object detection algorithms. This paper proposes a clean and effective knowledge distillation method for one-stage object detection. The feature maps generated by the teacher network and the student network are used as real samples and fake samples respectively, and adversarial training on both improves the performance of the student network in one-stage object detection.
|
Network quantization : Wu @cite_24 use the k-means clustering algorithm to accelerate and compress the convolutional and fully connected layers of a model, obtaining better quantization results by reducing the estimation error of each layer's output response, and propose an effective training scheme to suppress the accumulated multi-layer error after quantization. Jacob B @ proposed a method that quantizes weights and inputs to uint8 and biases to uint32; at the same time, the forward pass uses quantization while the backward error correction is not quantized, so as to preserve both the accuracy and the inference speed of the convolutional neural network during training.
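Codebook-style quantization can be sketched with a small 1-D k-means over the weights. This is a generic illustration of shared-value quantization, not the exact scheme of @cite_24; the function and its parameters are assumptions.

```python
import numpy as np

def kmeans_quantize(weights, k, iters=20):
    """Quantize a weight tensor to k shared values via 1-D k-means.
    Returns the quantized tensor and the k-entry codebook."""
    w = weights.ravel()
    # initialize centroids evenly across the weight range
    codebook = np.linspace(w.min(), w.max(), k)
    for _ in range(iters):
        # assign each weight to its nearest centroid
        assign = np.argmin(np.abs(w[:, None] - codebook[None, :]), axis=1)
        for c in range(k):
            if np.any(assign == c):
                codebook[c] = w[assign == c].mean()
    assign = np.argmin(np.abs(w[:, None] - codebook[None, :]), axis=1)
    return codebook[assign].reshape(weights.shape), codebook
```

Each weight is then stored as a small codebook index instead of a float, which is where the compression comes from.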
|
{
"cite_N": [
"@cite_24"
],
"mid": [
"2233116163"
],
"abstract": [
"Recently, convolutional neural networks (CNN) have demonstrated impressive performance in various computer vision tasks. However, high performance hardware is typically indispensable for the application of CNN models due to the high computation complexity, which prohibits their further extensions. In this paper, we propose an efficient framework, namely Quantized CNN, to simultaneously speed-up the computation and reduce the storage and memory overhead of CNN models. Both filter kernels in convolutional layers and weighting matrices in fully-connected layers are quantized, aiming at minimizing the estimation error of each layer's response. Extensive experiments on the ILSVRC-12 benchmark demonstrate 4 6× speed-up and 15 20× compression with merely one percentage loss of classification accuracy. With our quantized CNN model, even mobile devices can accurately classify images within one second."
]
}
|
1906.08467
|
2951866079
|
Convolutional neural networks have brought a significant improvement in the accuracy of object detection. As convolutional neural networks become deeper, detection accuracy clearly improves, but more floating-point computation is needed. In object detection, many researchers use knowledge distillation to improve the accuracy of a small student network by transferring knowledge from a deeper and larger teacher network. Most knowledge distillation methods require designing complex cost functions, and they are aimed at two-stage object detection algorithms. This paper proposes a clean and effective knowledge distillation method for one-stage object detection. The feature maps generated by the teacher network and the student network are used as real samples and fake samples respectively, and adversarial training on both improves the performance of the student network in one-stage object detection.
|
In this paper, we use the one-stage object detector SSD @cite_21. The SSD architecture is divided into two main parts: 1) the backbone network, which serves as the feature extractor, and 2) the SSD head, which uses the features extracted by the backbone to detect the category and location of each object. To obtain a better knowledge distillation effect, it is important to make rational use of both parts.
|
{
"cite_N": [
"@cite_21"
],
"mid": [
"2193145675"
],
"abstract": [
"We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. SSD is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stages and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Experimental results on the PASCAL VOC, COCO, and ILSVRC datasets confirm that SSD has competitive accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. For (300 300 ) input, SSD achieves 74.3 mAP on VOC2007 test at 59 FPS on a Nvidia Titan X and for (512 512 ) input, SSD achieves 76.9 mAP, outperforming a comparable state of the art Faster R-CNN model. Compared to other single stage methods, SSD has much better accuracy even with a smaller input image size. Code is available at https: github.com weiliu89 caffe tree ssd."
]
}
|
1906.08494
|
2952973179
|
For planning rearrangements of objects in a clutter, it is required to know the goal configuration of the objects. However, in real life scenarios, this information is not available most of the time. We introduce a novel method that computes a collision-free placement of objects on a cluttered surface, while minimizing the total number and amount of displacements of the existing moveable objects. Our method applies nested local searches that perform multi-objective optimizations guided by heuristics. Experimental evaluations demonstrate high computational efficiency and success rate of our method, as well as good quality of solutions.
|
Related work in robotics Rearrangement of multiple movable objects, a challenging problem that involves planning, manipulation and geometric reasoning, has received much attention in robotics. In particular, planning for geometric rearrangement with multiple movable objects, and its variations such as navigation among movable obstacles @cite_29 @cite_5 , has been studied using various approaches. Since even a simplified variant of the rearrangement problem, with only one movable obstacle, has been proved to be NP-hard @cite_16 @cite_11 , most studies introduce several important restrictions, such as monotonicity of plans @cite_18 @cite_25 @cite_10 @cite_1 @cite_7 , where each object can be moved at most once. Recent works have focused on generating non-monotonic plans @cite_13 @cite_4 @cite_30 @cite_6 @cite_22 @cite_23 . However, most of these studies @cite_18 @cite_25 @cite_10 @cite_1 @cite_4 @cite_30 @cite_6 @cite_22 assume that the goal configuration is known. Finding suitable arrangements for objects on a cluttered surface has received relatively less attention.
|
{
"cite_N": [
"@cite_13",
"@cite_30",
"@cite_18",
"@cite_4",
"@cite_22",
"@cite_7",
"@cite_29",
"@cite_1",
"@cite_6",
"@cite_23",
"@cite_5",
"@cite_16",
"@cite_10",
"@cite_25",
"@cite_11"
],
"mid": [
"2087191078",
"2293698677",
"2139039898",
"2015241127",
"",
"2109447584",
"2540258482",
"2141753064",
"2418783521",
"2888726795",
"2104991009",
"1990677634",
"",
"1989604857",
"1999708036"
],
"abstract": [
"We introduce a novel computational method for geometric rearrangement of multiple movable objects on a cluttered surface, where objects can change locations more than once by pick and or push actions. This method consists of four stages: (i) finding tentative collision-free final configurations for all objects (all the new objects together with all other objects in the clutter) while also trying to minimize the number of object relocations, (ii) gridization of the continuous plane for a discrete placement of the initial configurations and the tentative final configurations of objects on the cluttered surface, (iii) finding a sequence of feasible pick and push actions to achieve the final discrete placement for the objects in the clutter from their initial discrete place, while simultaneously minimizing the number of object relocations, and (iv) finding feasible final configurations for all objects according to the optimal task plan calculated in stage (iii). For (i) and (iv), we introduce algorithms that utilize local search with random restarts; for (ii), we introduce a mathematical modeling of the discretization problem and use the state-of-the-art ASP reasoners to solve it; for (iii) we introduce a formal hybrid reasoning framework that allows embedding of geometric reasoning in task planning, and use the expressive formalisms and reasoners of ASP. We illustrate the usefulness of our integrated AI approach with several scenarios that cannot be solved by the existing approaches. We also provide a dynamic simulation for one of the scenarios, as supplementary material.",
"",
"In this paper, we describe a planner for a humanoid robot that is capable of finding a path in an environment with movable objects, whereas previous motion planner only deals with an environment with fixed objects. We address an environment manipulation problem for a humanoid robot that finds a walking path from the given start location to the goal location while displacing obstructing objects on the walking path. This problem requires more complex configuration space than previous researches using a mobile robot especially in a manipulation phase, since a humanoid robot has many degrees of freedom in its arm than a forklift type robot. Our approach is to build environment manipulation task graph that decompose the given task into subtasks which are solved using navigation path planner or whole body motion planner. We also propose a standing location search and a displacing obstacle location search for connecting subtasks. Efficient method to solve manipulation planning that relies on whole body inverse kinematics and motion planning technology is also shown. Finally, we show experimental results in an environment with movable objects such as chairs and trash boxes. The planner finds an action sequence consists of walking paths and manipulating obstructing objects to walk from the start position to the goal position.",
"This work proposes a method for efficiently computing manipulation paths to rearrange similar objects in a cluttered space. Rearrangement is a challenging problem as it involves combinatorially large, continuous configuration spaces due to the presence of multiple bodies and kinematically complex manipulators. This work leverages ideas from multi-robot motion planning and manipulation planning to propose appropriate graphical representations for this challenge. These representations allow to quickly reason whether manipulation paths allow the transition between entire sets of object arrangements without having to explicitly store these arrangements. The proposed method also takes advantage of precomputation given a manipulation roadmap for transferring a single object in the space. The approach is evaluated in simulation for a realistic model of a Baxter robot and executed on the real system, showing that the method solves complex instances and is promising in terms of scalability and success ratio.",
"",
"We present a novel planning algorithm for the problem of placing objects on a cluttered surface such as a table, counter or floor. The planner (1) selects a placement for the target object and (2) constructs a sequence of manipulation actions that create space for the object. When no continuous space is large enough for direct placement, the planner leverages means-end analysis and dynamic simulation to find a sequence of linear pushes that clears the necessary space. Our heuristic for determining candidate placement poses for the target object is used to guide the manipulation search. We show successful results for our algorithm in simulation.",
"In this paper, we address the problem of navigation among movable obstacles (NAMO): a practical extension to navigation for humanoids and other dexterous mobile robots. The robot is permitted to reconfigure the environment by moving obstacles and clearing free space for a path. Simpler problems have been shown to be P-SPACE hard. For real-world scenarios with large numbers of movable obstacles, complete motion planning techniques are largely intractable. This paper presents a resolution complete planner for a subclass of NAMO problems. Our planner takes advantage of the navigational structure through state-space decomposition and heuristic search. The planning complexity is reduced to the difficulty of the specific navigation task, rather than the dimensionality of the multi-object domain. We demonstrate real-time results for spaces that contain large numbers of movable obstacles. We also present a practical framework for single-agent search that can be used in algorithmic reasoning about this domain.",
"We present DARRT, a sampling-based algorithm for planning with multiple types of manipulation. Given a robot, a set of movable objects, and a set of actions for manipulating the objects, DARRT returns a sequence of manipulation actions that move the robot and objects from an initial configuration to a final configuration. The manipulation actions may be non-prehensile,meaning that the object is not rigidly attached to the robot, such as push, tilt, or pull. We describe a simple extension to the RRT algorithm to search the combined space of robot and objects and present an implementation of DARRT on the Willow Garage PR2 robot.",
"Manipulating multiple movable obstacles is a hard problem that involves searching high-dimensional C-spaces. A milestone method for this problem was able to compute solutions for monotone instances. These are problems where every object needs to be transferred at most once to achieve a desired arrangement. The method uses backtracking search to find the order with which objects should be moved. This paper first proposes an approximate but significantly faster alternative for monotone rearrangement instances. The method defines a dependency graph between objects given minimum constraint removal paths (MCR) to transfer each object to its target. From this graph, the approach discovers the order of moving objects by performing topological sorting without backtracking search. The approximation arises from the limitation to consider only MCR paths, which minimize, however, the number of conflicts between objects. To solve non-monotone instances, this primitive is incorporated in a higher-level incremental search algorithm for general rearrangement planning, which operates similar to Bi-RRT. Given a start and a goal object arrangement, tree structures of reachable new arrangements are generated by using the primitive as an expansion procedure. The integrated solution achieves probabilistic completeness for the general non-monotone case and based on simulated experiments it achieves very good success ratios, solution times and path quality relative to alternatives.",
"We present a method enabling a robot to automatically arrange objects using task and motion planning. Given an input scene consisting of cluttered objects, our method first constructs a target layout of objects as a guidance to the robot for arranging them. For constructing the layout, we use positive examples and pre-extract hierarchical, spatial and pairwise relationships between objects, to understand the user preference on arranging objects. Our method then enables a robot to arrange input objects to reach their target configurations using any task and motion planner. To efficiently arrange the objects, we also propose a priority layer that decides an order of arranging objects to take a small amount of actions. This is achieved by utilizing a dependency graph between objects. We test our method in three different scenes with varying numbers of objects, and apply our method to two well-known task and motion planners with the virtual PR2 robot. We demonstrate that we can use the robot to automatically arrange objects, and show that our priority layer reduces the total running time up to 2.15 times in those tested planners.",
"This paper presents artificial constraints as a method for guiding heuristic search in the computationally challenging domain of motion planning among movable obstacles. The robot is permitted to manipulate unspecified obstacles in order to create space for a path. A plan is an ordered sequence of paths for robot motion and object manipulation. We show that under monotone assumptions, anticipating future manipulation paths results in constraints on both the choice of objects and their placements at earlier stages in the plan. We present an algorithm that uses this observation to incrementally reduce the search space and quickly find solutions to previously unsolved classes of movable obstacle problems. Our planner is developed for arbitrary robot geometry and kinematics. It is presented with an implementation for the domain of navigation among movable obstacles.",
"",
"",
"This paper presents the resolve spatial constraints (RSC) algorithm for manipulation planning in a domain with movable obstacles. Empirically we show that our algorithm quickly generates plans for simulated articulated robots in a highly nonlinear search space of exponential dimension. RSC is a reverse-time search that samples future robot actions and constrains the space of prior object displacements. To optimize the efficiency of RSC, we identify methods for sampling object surfaces and generating connecting paths between grasps and placements. In addition to experimental analysis of RSC, this paper looks into object placements and task-space motion constraints among other unique features of the three dimensional manipulation planning domain.",
"We prove NP-hardness of a wide class of pushing-block puzzles similar to the classic Sokoban, generalizing several previous results [E.D. , in: Proc. 12th Canad. Conf. Comput. Geom., 2000, pp. 211-219; E.D. , Technical Report, January 2000; A Dhagat, J. O'Rourke, in: Proc. 4th Canad. Conf. Comput. Geom., 1992, pp. 188-191; D. Dor, U. Zwick, Computational Geometry 13 (4) (1999) 215-228; J. O'Rourke, Technical Report, November 1999; G. Wilfong, Ann. Math. Artif. Intell. 3 (1991) 131-150]. The puzzles consist of unit square blocks on an integer lattice; all blocks are movable. The robot may move horizontally and vertically in order to reach a specified goal position. The puzzle variants differ in the number of blocks that the robot can push at once, ranging from at most one (PUSH-1) up to arbitrarily many (PUSH-*). Other variations were introduced to make puzzles more tractable, in which blocks must slide their maximal extent when pushed (PUSHPUSH), and in which the robot's path must not revisit itself (PUSH-X). We prove that all of these puzzles are NP-hard."
]
}
|
1906.08494
|
2952973179
|
For planning rearrangements of objects in a clutter, it is required to know the goal configuration of the objects. However, in real life scenarios, this information is not available most of the time. We introduce a novel method that computes a collision-free placement of objects on a cluttered surface, while minimizing the total number and amount of displacements of the existing moveable objects. Our method applies nested local searches that perform multi-objective optimizations guided by heuristics. Experimental evaluations demonstrate high computational efficiency and success rate of our method, as well as good quality of solutions.
|
@cite_7 propose an algorithm that searches for a suitable placement for a single object on a cluttered surface by discretizing the possible orientations of the object, convolving object pixels with the ones on the table, and identifying candidate regions for the object placement that result in minimal penetration with other objects. A placement is then produced by sampling these regions; however, this placement may not be collision-free. They then plan a sequence of linear push actions to rearrange the clutter and clear space for the new object so that this placement becomes collision-free. Note that this approach has several limitations: multiple new objects are not considered, the surface and the possible object orientations are discretized, and the final configuration is not necessarily collision-free.
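The sliding-window placement search can be illustrated with a brute-force sketch: score every offset of a binary object footprint over a binary occupancy grid and keep the one with minimal penetration. This is a simplification of the convolution step described above; the names and setup are hypothetical.

```python
import numpy as np

def best_placement(occupancy, footprint):
    """Slide a binary object footprint over a binary occupancy grid and
    return the top-left offset with minimal penetration (overlap count)."""
    H, W = occupancy.shape
    h, w = footprint.shape
    best, best_overlap = None, None
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            # penetration = number of occupied cells under the footprint
            overlap = np.sum(occupancy[r:r+h, c:c+w] * footprint)
            if best_overlap is None or overlap < best_overlap:
                best, best_overlap = (r, c), overlap
    return best, best_overlap
```

A zero best overlap means a collision-free placement exists at that resolution; a positive one indicates which region needs to be cleared by pushing.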
|
{
"cite_N": [
"@cite_7"
],
"mid": [
"2109447584"
],
"abstract": [
"We present a novel planning algorithm for the problem of placing objects on a cluttered surface such as a table, counter or floor. The planner (1) selects a placement for the target object and (2) constructs a sequence of manipulation actions that create space for the object. When no continuous space is large enough for direct placement, the planner leverages means-end analysis and dynamic simulation to find a sequence of linear pushes that clears the necessary space. Our heuristic for determining candidate placement poses for the target object is used to guide the manipulation search. We show successful results for our algorithm in simulation."
]
}
|
1906.08494
|
2952973179
|
For planning rearrangements of objects in a clutter, it is required to know the goal configuration of the objects. However, in real life scenarios, this information is not available most of the time. We introduce a novel method that computes a collision-free placement of objects on a cluttered surface, while minimizing the total number and amount of displacements of the existing moveable objects. Our method applies nested local searches that perform multi-objective optimizations guided by heuristics. Experimental evaluations demonstrate high computational efficiency and success rate of our method, as well as good quality of solutions.
|
@cite_21 aim to find sensible placements for furniture by initially generating a random arrangement, then rearranging it to minimize a cost function that measures the difference between the current arrangement and several positive examples provided by the user. @cite_23 follow up on this idea and modify the algorithm to make it more suitable for robotic applications. Neither study considers heavily cluttered scenes or utilizes high-resolution collision checks. In @cite_23 , the task is to rearrange objects already present in the scene to achieve a tidier arrangement; no new objects are added, and there are no constraints forcing a certain set of objects to be on certain surfaces. Furthermore, in these studies, even the initial state is a feasible (collision-free) configuration, and the goal is to improve it in terms of a measure of tidiness.
|
{
"cite_N": [
"@cite_21",
"@cite_23"
],
"mid": [
"2130634053",
"2888726795"
],
"abstract": [
"We present a system that automatically synthesizes indoor scenes realistically populated by a variety of furniture objects. Given examples of sensibly furnished indoor scenes, our system extracts, in advance, hierarchical and spatial relationships for various furniture objects, encoding them into priors associated with ergonomic factors, such as visibility and accessibility, which are assembled into a cost function whose optimization yields realistic furniture arrangements. To deal with the prohibitively large search space, the cost function is optimized by simulated annealing using a Metropolis-Hastings state search step. We demonstrate that our system can synthesize multiple realistic furniture arrangements and, through a perceptual study, investigate whether there is a significant difference in the perceived functionality of the automatically synthesized results relative to furniture arrangements produced by human designers.",
"We present a method enabling a robot to automatically arrange objects using task and motion planning. Given an input scene consisting of cluttered objects, our method first constructs a target layout of objects as a guidance to the robot for arranging them. For constructing the layout, we use positive examples and pre-extract hierarchical, spatial and pairwise relationships between objects, to understand the user preference on arranging objects. Our method then enables a robot to arrange input objects to reach their target configurations using any task and motion planner. To efficiently arrange the objects, we also propose a priority layer that decides an order of arranging objects to take a small amount of actions. This is achieved by utilizing a dependency graph between objects. We test our method in three different scenes with varying numbers of objects, and apply our method to two well-known task and motion planners with the virtual PR2 robot. We demonstrate that we can use the robot to automatically arrange objects, and show that our priority layer reduces the total running time up to 2.15 times in those tested planners."
]
}
|
1906.08494
|
2952973179
|
For planning rearrangements of objects in a clutter, it is required to know the goal configuration of the objects. However, in real life scenarios, this information is not available most of the time. We introduce a novel method that computes a collision-free placement of objects on a cluttered surface, while minimizing the total number and amount of displacements of the existing moveable objects. Our method applies nested local searches that perform multi-objective optimizations guided by heuristics. Experimental evaluations demonstrate high computational efficiency and success rate of our method, as well as good quality of solutions.
|
@cite_17 @cite_28 @cite_3 extract object-to-object and object-to-human features from databases of 3D environments and learn semantic geometric preferences for object-surface pairs. They then discretize the surfaces' point clouds into placing areas by random sampling and solve a maximum matching problem to assign each object's pose to a suitable placing area. This approach only considers placements from a predetermined set of discrete configurations and does not address the more challenging continuous version of the problem.
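The assignment step can be illustrated with a brute-force maximum matching over a small score matrix; real systems would use the Hungarian algorithm, but for a handful of objects enumerating permutations suffices. This is a hedged sketch, not the cited implementation.

```python
from itertools import permutations

def best_assignment(score):
    """Assign each object to a distinct placing area, maximizing the total
    semantic-preference score. score[i][j] = preference of object i for
    area j; assumes n_objects <= n_areas. Brute-force maximum matching."""
    n_obj, n_area = len(score), len(score[0])
    best, best_val = None, float("-inf")
    for perm in permutations(range(n_area), n_obj):
        val = sum(score[i][perm[i]] for i in range(n_obj))
        if val > best_val:
            best, best_val = perm, val
    return list(best), best_val
```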
|
{
"cite_N": [
"@cite_28",
"@cite_3",
"@cite_17"
],
"mid": [
"1877780944",
"2405422284",
"2091862867"
],
"abstract": [
"We consider the problem of learning object arrangements in a 3D scene. The key idea here is to learn how objects relate to human poses based on their affordances, ease of use and reachability. In contrast to modeling object-object relationships, modeling human-object relationships scales linearly in the number of objects. We design appropriate density functions based on 3D spatial features to capture this. We learn the distribution of human poses in a scene using a variant of the Dirichlet process mixture model that allows sharing of the density function parameters across the same object types. Then we can reason about arrangements of the objects in the room based on these meaningful human poses. In our extensive experiments on 20 different rooms with a total of 47 objects, our algorithm predicted correct placements with an average error of 1:6 meters from ground truth. In arranging five real scenes, it received a score of 4.3 5 compared to 3.7 for the best baseline method.",
"While a significant body of work has been done on grasping objects, there is little prior work on placing and arranging objects in the environment. In this work, we consider placing multiple objects in complex placing areas, where neither the object nor the placing area may have been seen by the robot before. Specifically, the placements should not only be stable, but should also follow human usage preferences.We present learning and inference algorithms that consider these aspects in placing. In detail, given a set of 3D scenes containing objects, our method, based on Dirichlet process mixture models, samples human poses in each scene and learns how objects relate to those human poses. Then given a new room, our algorithm is able to select meaningful human poses and use them to determine where to place new objects.We evaluate our approach on a variety of scenes in simulation, as well as on robotic experiments.",
"Placing is a necessary skill for a personal robot to have in order to perform tasks such as arranging objects in a disorganized room. The object placements should not only be stable but also be in their semantically preferred placing areas and orientations. This is challenging because an environment can have a large variety of objects and placing areas that may not have been seen by the robot before. In this paper, we propose a learning approach for placing multiple objects in different placing areas in a scene. Given point-clouds of the objects and the scene, we design appropriate features and use a graphical model to encode various properties, such as the stacking of objects, stability, object-area relationship and common placing constraints. The inference in our model is an integer linear program, which we solve efficiently via an linear programming relaxation. We extensively evaluate our approach on 98 objects from 16 categories being placed into 40 areas. Our robotic experiments show a success rate of 98 in placing known objects and 82 in placing new objects stably. We use our method on our robots for performing tasks such as loading several dish-racks, a bookshelf and a fridge with multiple items."
]
}
|
1906.08494
|
2952973179
|
For planning rearrangements of objects in a clutter, it is required to know the goal configuration of the objects. However, in real life scenarios, this information is not available most of the time. We introduce a novel method that computes a collision-free placement of objects on a cluttered surface, while minimizing the total number and amount of displacements of the existing moveable objects. Our method applies nested local searches that perform multi-objective optimizations guided by heuristics. Experimental evaluations demonstrate high computational efficiency and success rate of our method, as well as good quality of solutions.
|
In our previous work @cite_13 , we proposed an object placement algorithm based on a local search guided by heuristics and random restarts. This work significantly extends our earlier study by introducing an innermost potential field, as well as two nested local searches wrapped around this basic search algorithm, to improve the efficiency and quality of solutions as well as the success rate. Our results indicate orders-of-magnitude improvements in CPU time and success rate in cluttered scenarios.
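The basic building block, local search with random restarts, can be sketched for the toy case of placing one disc among fixed discs. This only illustrates search-with-restarts, not the authors' full nested algorithm; all names and parameters are hypothetical.

```python
import random

def local_search_place(obstacles, radius, bounds, restarts=20, steps=200):
    """Random-restart local search for a collision-free disc placement:
    minimize total penetration depth with fixed discs (x, y, radius),
    restarting from a random pose when stuck."""
    def penetration(x, y):
        p = 0.0
        for (ox, oy, orad) in obstacles:
            d = ((x - ox) ** 2 + (y - oy) ** 2) ** 0.5
            p += max(0.0, radius + orad - d)   # overlap depth, 0 if clear
        return p

    (xmin, xmax), (ymin, ymax) = bounds
    best = None
    for _ in range(restarts):
        x = random.uniform(xmin, xmax)
        y = random.uniform(ymin, ymax)
        for _ in range(steps):
            cur = penetration(x, y)
            if cur == 0.0:
                return (x, y)                  # collision-free pose found
            # try a small random move, keep it only if it helps
            nx = min(max(x + random.uniform(-0.5, 0.5), xmin), xmax)
            ny = min(max(y + random.uniform(-0.5, 0.5), ymin), ymax)
            if penetration(nx, ny) < cur:
                x, y = nx, ny
        if best is None or penetration(x, y) < penetration(*best):
            best = (x, y)
    return best
```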
|
{
"cite_N": [
"@cite_13"
],
"mid": [
"2087191078"
],
"abstract": [
"We introduce a novel computational method for geometric rearrangement of multiple movable objects on a cluttered surface, where objects can change locations more than once by pick and or push actions. This method consists of four stages: (i) finding tentative collision-free final configurations for all objects (all the new objects together with all other objects in the clutter) while also trying to minimize the number of object relocations, (ii) gridization of the continuous plane for a discrete placement of the initial configurations and the tentative final configurations of objects on the cluttered surface, (iii) finding a sequence of feasible pick and push actions to achieve the final discrete placement for the objects in the clutter from their initial discrete place, while simultaneously minimizing the number of object relocations, and (iv) finding feasible final configurations for all objects according to the optimal task plan calculated in stage (iii). For (i) and (iv), we introduce algorithms that utilize local search with random restarts; for (ii), we introduce a mathematical modeling of the discretization problem and use the state-of-the-art ASP reasoners to solve it; for (iii) we introduce a formal hybrid reasoning framework that allows embedding of geometric reasoning in task planning, and use the expressive formalisms and reasoners of ASP. We illustrate the usefulness of our integrated AI approach with several scenarios that cannot be solved by the existing approaches. We also provide a dynamic simulation for one of the scenarios, as supplementary material."
]
}
|
1906.08494
|
2952973179
|
For planning rearrangements of objects in a clutter, it is required to know the goal configuration of the objects. However, in real life scenarios, this information is not available most of the time. We introduce a novel method that computes a collision-free placement of objects on a cluttered surface, while minimizing the total number and amount of displacements of the existing moveable objects. Our method applies nested local searches that perform multi-objective optimizations guided by heuristics. Experimental evaluations demonstrate high computational efficiency and success rate of our method, as well as good quality of solutions.
|
Related work in other areas. A closely related problem to object placement, studied in computer graphics and operations research, is the packing problem (also known as the knapsack problem), where the goal is to place as many objects as possible in a non-overlapping configuration within a given empty container. The packing problem is NP-hard @cite_0 . It has been widely studied in 2D (cf. the survey @cite_14 ). It has also been studied in 3D under various conditions and restrictions @cite_26 @cite_9 @cite_15 (e.g., packing a set of polyhedrons into a fixed-size polyhedron without considering rotations @cite_2 , orthogonal packing of tetris-like items into rectangular bins @cite_12 @cite_20 ).
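The flavor of the greedy heuristics used for such packing problems can be illustrated with a minimal 2D "shelf" heuristic. This is a sketch for intuition only, not any of the cited algorithms; the function name and the fixed-width container are assumptions.

```python
# Minimal greedy "shelf" heuristic for 2D rectangle packing (illustrative
# sketch): rectangles are placed left-to-right on horizontal shelves inside
# a container of fixed width and unbounded height.
def shelf_pack(rects, container_width):
    """rects: list of (width, height) pairs, placed in the given order.
    Returns a list of (x, y) bottom-left positions, or None if some
    rectangle is wider than the container."""
    placements = []
    shelf_x = 0  # next free x position on the current shelf
    shelf_y = 0  # y coordinate of the bottom of the current shelf
    shelf_h = 0  # height of the tallest rectangle on the current shelf
    for w, h in rects:
        if w > container_width:
            return None
        if shelf_x + w > container_width:
            # Current shelf is full: open a new shelf on top of it.
            shelf_y += shelf_h
            shelf_x = 0
            shelf_h = 0
        placements.append((shelf_x, shelf_y))
        shelf_x += w
        shelf_h = max(shelf_h, h)
    return placements
```

Real 3D variants such as those cited above must additionally handle rotations, irregular shapes, and overlap tests between polyhedra, which is where most of the computational difficulty lies.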
|
{
"cite_N": [
"@cite_14",
"@cite_26",
"@cite_9",
"@cite_0",
"@cite_2",
"@cite_15",
"@cite_20",
"@cite_12"
],
"mid": [
"1979415050",
"1520820726",
"2788182033",
"2048802156",
"2126733148",
"2889160798",
"",
"2101057470"
],
"abstract": [
"Abstract Cutting and packing problems appear under various names in literature, e.g. cutting stock or trim loss problem, bin or strip packing problem, vehicle, pallet or container loading problem, nesting problem, knapsack problem etc. The paper develops a consistent and systematic approach for a comprehensive typology integrating the various kinds of problems. The typology is founded on the basic logical structure of cutting and packing problems. The purpose is to unify the different use of notions in the literature and to concentrate further research on special types of problems.",
"We propose a new constructive algorithm, called HAPE3D, which is a heuristic algorithm based on the principle of minimum total potential energy for the 3D irregular packing problem, involving packing a set of irregularly shaped polyhedrons into a box-shaped container with fixed width and length but unconstrained height. The objective is to allocate all the polyhedrons in the container, and thus minimize the waste or maximize profit. HAPE3D can deal with arbitrarily shaped polyhedrons, which can be rotated around each coordinate axis at different angles. The most outstanding merit is that HAPE3D does not need to calculate no-fit polyhedron (NFP), which is a huge obstacle for the 3D packing problem. HAPE3D can also be hybridized with a meta-heuristic algorithm such as simulated annealing. Two groups of computational experiments demonstrate the good performance of HAPE3D and prove that it can be hybridized quite well with a meta-heuristic algorithm to further improve the packing quality.",
"Abstract We study the problem of packing a given collection of arbitrary, in general concave, polyhedra into a cuboid of minimal volume. Continuous rotations and translations of polyhedra are allowed. In addition, minimal allowable distances between polyhedra are taken into account. We derive an exact mathematical model using adjusted radical free quasi phi-functions for concave polyhedra to describe non-overlapping and distance constraints. The model is a nonlinear programming formulation. We develop an efficient solution algorithm, which employs a fast starting point algorithm and a new compaction procedure. The procedure reduces our problem to a sequence of nonlinear programming subproblems of considerably smaller dimension and a smaller number of nonlinear inequalities. The benefit of this approach is borne out by the computational results, which include a comparison with previously published instances and new instances.",
"This paper investigates the combinatorial and computational aspects of certain extremal geometric problems in two and three dimensions. Specifically, we examine the problem of intersecting a convex subdivision with a line in order to maximize the number of intersections. A similar problem is to maximize the number of intersected facets in a cross-section of a three-dimensional convex polytope. Related problems concern maximum chains in certain families of posets defined over the regions of a convex subdivision. In most cases we are able to prove sharp bounds on the asymptotic behavior of the corresponding extremal functions. We also describe polynomial algorithms for all the problems discussed.",
"We present an efficient solution method for packing d-dimensional polytopes within the bounds of a polytope container. The central geometric operation of the method is an exact one-dimensional translation of a given polytope to a position which minimizes its volume of overlap with all other polytopes. We give a detailed description and a proof of a simple algorithm for this operation in which one only needs to know the set of (d-1)-dimensional facets in each polytope. Handling non-convex polytopes or even interior holes is a natural part of this algorithm. The translation algorithm is used as part of a local search heuristic and a meta-heuristic technique, guided local search, is used to escape local minima. Additional details are given for the three-dimensional case and results are reported for the problem of packing polyhedra in a rectangular parallelepiped. Utilization of container space is improved by an average of more than 14 percentage points compared to previous methods. The translation algorithm can also be used to solve the problem of maximizing the volume of intersection of two polytopes given a fixed translation direction. For two polytopes with complexity O(n) and O(m) and a fixed dimension, the running time is O(nmlog(nm)) for both the minimization and maximization variants of the translation algorithm.",
"",
"",
"The problem addressed in this paper is that of orthogonally packing a given set of rectangular-shaped items into the minimum number of three-dimensional rectangular bins. The problem is strongly NP-hard and extremely difficult to solve in practice. Lower bounds are discussed, and it is proved that the asymptotic worst-case performance ratio of the continuous lower bound is ?. An exact algorithm for filling a single bin is developed, leading to the definition of an exact branch-and-bound algorithm for the three-dimensional bin packing problem, which also incorporates original approximation algorithms. Extensive computational results, involving instances with up to 90 items, are presented: It is shown that many instances can be solved to optimality within a reasonable time limit."
]
}
|
1906.08484
|
2951203646
|
In a recent work, studied the following "fair" variants of classical clustering problems such as @math -means and @math -median: given a set of @math data points in @math and a binary type associated to each data point, the goal is to cluster the points while ensuring that the proportion of each type in each cluster is roughly the same as its underlying proportion. Subsequent work has focused on either extending this setting to when each data point has multiple, non-disjoint sensitive types such as race and gender, or to address the problem that the clustering algorithms in the above work do not scale well. The main contribution of this paper is an approach to clustering with fairness constraints that involve multiple, non-disjoint types, that is also scalable . Our approach is based on novel constructions of coresets: for the @math -median objective, we construct an @math -coreset of size @math where @math is the number of distinct collections of groups that a point may belong to, and for the @math -means objective, we show how to construct an @math -coreset of size @math . The former result is the first known coreset construction for the fair clustering problem with the @math -median objective, and the latter result removes the dependence on the size of the full dataset as in and generalizes it to multiple, non-disjoint types. Plugging our coresets into existing algorithms for fair clustering such as results in the fastest algorithms for several cases. Empirically, we assess our approach over the Adult and Bank dataset, and show that the coreset sizes are much smaller than the full dataset; applying coresets indeed accelerates the running time of computing the fair clustering objective while ensuring that the resulting objective difference is small.
|
There are increasingly many works on fair clustering algorithms. @cite_19 introduced the fair clustering problem for a binary type and obtained approximation algorithms for fair @math -median/center. @cite_18 improved the running time to nearly linear for fair @math -median, but the approximation ratio is @math . Rösner and Schmidt @cite_3 designed a 14-approximation algorithm for fair @math -center, and the ratio was improved to 5 by @cite_15 . For fair @math -means, @cite_25 introduced the notion of fair coresets and presented an efficient streaming algorithm. More generally, @cite_15 proposed a bi-criteria approximation for fair @math -median/means/center/supplier/facility location. Very recently, @cite_39 presented a bi-criteria approximation algorithm for the fair @math -clustering problem (Definition ) with arbitrary group structures (potentially overlapping), and @cite_13 improved their results by proposing the first constant-factor approximation algorithm. It remains open to design a near-linear-time @math -approximation algorithm for the fair @math -clustering problem.
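The balance constraint shared by these works can be made concrete with a small checker for the binary-type case. This is an illustrative helper, not any cited algorithm; the additive tolerance `tol` is an assumed relaxation of "roughly the same proportion".

```python
# Check the fair-representation constraint for a binary sensitive type:
# in every cluster, the fraction of type-1 points must stay within +/- tol
# of the global type-1 fraction.
def is_balanced(point_types, assignment, k, tol):
    """point_types: list of 0/1 types, one per point.
    assignment: cluster id (0..k-1) for each point.
    Returns True iff every non-empty cluster is balanced within tol."""
    n = len(point_types)
    global_frac = sum(point_types) / n
    for c in range(k):
        members = [t for t, a in zip(point_types, assignment) if a == c]
        if not members:
            continue  # empty clusters impose no constraint
        frac = sum(members) / len(members)
        if abs(frac - global_frac) > tol:
            return False
    return True
```

The multiple, non-disjoint types studied in the more recent works generalize this check to one such constraint per group, which is what makes coreset constructions for the overlapping case nontrivial.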
|
{
"cite_N": [
"@cite_18",
"@cite_3",
"@cite_39",
"@cite_19",
"@cite_15",
"@cite_13",
"@cite_25"
],
"mid": [
"2929382047",
"2785893262",
"",
"2963992001",
"2901801762",
"2947175317",
"2908166511"
],
"abstract": [
"",
"The @math -center problem is a classical combinatorial optimization problem which asks to find @math centers such that the maximum distance of any input point in a set @math to its assigned center is minimized. The problem allows for elegant @math -approximations. However, the situation becomes significantly more difficult when constraints are added to the problem. We raise the question whether general methods can be derived to turn an approximation algorithm for a clustering problem with some constraints into an approximation algorithm that respects one constraint more. Our constraint of choice is privacy: Here, we are asked to only open a center when at least @math clients will be assigned to it. We show how to combine privacy with several other constraints.",
"",
"We study the question of fair clustering under the disparate impact doctrine, where each protected class must have approximately equal representation in every cluster. We formulate the fair clustering problem under both the k-center and the k-median objectives, and show that even with two protected classes the problem is challenging, as the optimum solution can violate common conventions---for instance a point may no longer be assigned to its nearest cluster center! En route we introduce the concept of fairlets, which are minimal sets that satisfy fair representation while approximately preserving the clustering objective. We show that any fair clustering problem can be decomposed into first finding good fairlets, and then using existing machinery for traditional clustering algorithms. While finding good fairlets can be NP-hard, we proceed to obtain efficient approximation algorithms based on minimum cost flow. We empirically demonstrate the by quantifying the value of fair clustering on real-world datasets with sensitive attributes.",
"Clustering is a fundamental tool in data mining. It partitions points into groups (clusters) and may be used to make decisions for each point based on its group. However, this process may harm protected (minority) classes if the clustering algorithm does not adequately represent them in desirable clusters -- especially if the data is already biased. At NIPS 2017, proposed a model for fair clustering requiring the representation in each cluster to (approximately) preserve the global fraction of each protected class. Restricting to two protected classes, they developed both a 4-approximation for the fair @math -center problem and a @math -approximation for the fair @math -median problem, where @math is a parameter for the fairness model. For multiple protected classes, the best known result is a 14-approximation for fair @math -center. We extend and improve the known results. Firstly, we give a 5-approximation for the fair @math -center problem with multiple protected classes. Secondly, we propose a relaxed fairness notion under which we can give bicriteria constant-factor approximations for all of the classical clustering objectives @math -center, @math -supplier, @math -median, @math -means and facility location. The latter approximations are achieved by a framework that takes an arbitrary existing unfair (integral) solution and a fair (fractional) LP solution and combines them into an essentially fair clustering with a weakly supervised rounding scheme. In this way, a fair clustering can be established belatedly, in a situation where the centers are already fixed.",
"Reducing hidden bias in the data and ensuring fairness in algorithmic data analysis has recently received significant attention. We complement several recent papers in this line of research by introducing a general method to reduce bias in the data through random projections in a fair'' subspace. We apply this method to densest subgraph and @math -means. For densest subgraph, our approach based on fair projections allows to recover both theoretically and empirically an almost optimal, fair, dense subgraph hidden in the input data. We also show that, under the small set expansion hypothesis, approximating this problem beyond a factor of @math is NP-hard and we show a polynomial time algorithm with a matching approximation bound. We further apply our method to @math -means. In a previous paper, [NIPS 2017] showed that problems such as @math -means can be approximated up to a constant factor while ensuring that none of two protected class (e.g., gender, ethnicity) is disparately impacted. We show that fair projections generalize the concept of fairlet introduced by to any number of protected attributes and improve empirically the quality of the resulting clustering. We also present the first constant-factor approximation for an arbitrary number of protected attributes thus settling an open problem recently addressed in several works.",
"We study fair clustering problems as proposed by Here, points have a sensitive attribute and all clusters in the solution are required to be balanced with respect to it (to counteract any form of data-inherent bias). Previous algorithms for fair clustering do not scale well. We show how to model and compute so-called coresets for fair clustering problems, which can be used to significantly reduce the input data size. We prove that the coresets are composable and show how to compute them in a streaming setting. We also propose a novel combination of the coreset construction with a sketching technique due to which may be of independent interest. We conclude with an empirical evaluation."
]
}
|
1906.08570
|
2951585370
|
Hindi question answering systems suffer from a lack of data. To address the same, this paper presents an approach towards automatic question generation. We present a rule-based system for question generation in Hindi by formalizing question transformation methods based on karaka-dependency theory. We use a Hindi dependency parser to mark the karaka roles and use IndoWordNet a Hindi ontology to detect the semantic category of the karaka role heads to generate the interrogatives. We analyze how one sentence can have multiple generations from the same karaka role's rule. The generations are manually annotated by multiple annotators on a semantic and syntactic scale for evaluation. Further, we constrain our generation with the help of various semantic and syntactic filters so as to improve the generation quality. Using these methods, we are able to generate diverse questions, significantly more than number of sentences fed to the system.
|
Previous works on question generation relied on templates @cite_13 @cite_5 . Further work has explored neural generation: factoid questions have been generated with neural networks @cite_7 @cite_11 trained on sizeable corpora. However, neural generation methods require a sizeable corpus to train machine learning models, which makes it difficult for such models to support Hindi, a language that is resource-scarce in this regard.
|
{
"cite_N": [
"@cite_5",
"@cite_13",
"@cite_7",
"@cite_11"
],
"mid": [
"1813062533",
"19665345",
"2304545146",
"2886505372"
],
"abstract": [
"The question-answering system developed by this research matches one-sentence-long user questions to a number of question templates that cover the conceptual model of the database and describe the concepts, their attributes, and the relationships in form of natural language questions. A question template resembles a frequently asked question (FAQ). Unlike a static FAQ, however, a question template may contain entity slots that are replaced by data instances from the underlying database. During the question-answering process, the system retrieves relevant data instances and question templates, and offers one or several interpretations of the original question. The user selects an interpretation to be answered.",
"Self-questioning is an important reading comprehension strategy, so it would be useful for an intelligent tutor to help students apply it to any given text. Our goal is to help children generate questions that make them think about the text in ways that improve their comprehension and retention. However, teaching and scaffolding self-questioning involve analyzing both the text and the students' responses. This requirement poses a tricky challenge to generating such instruction automatically, especially for children too young to respond by typing. This paper describes how to generate self-questioning instruction for an automated reading tutor. Following expert pedagogy, we decompose strategy instruction into describing, modeling, scaffolding, and prompting the strategy. We present a working example to illustrate how we generate each of these four phases of instruction for a given text. We identify some relevant criteria and use them to evaluate the generated instruction on a corpus of 513 children's stories.",
"Over the past decade, large-scale supervised learning corpora have enabled machine learning researchers to make substantial advances. However, to this date, there are no large-scale question-answer corpora available. In this paper we present the 30M Factoid Question-Answer Corpus, an enormous question answer pair corpus produced by applying a novel neural network architecture on the knowledge base Freebase to transduce facts into natural language questions. The produced question answer pairs are evaluated both by human evaluators and using automatic evaluation metrics, including well-established machine translation and sentence similarity metrics. Across all evaluation criteria the question-generation model outperforms the competing template-based baseline. Furthermore, when presented to human evaluators, the generated questions appear comparable in quality to real human-generated questions.",
"Asking intelligent and relevant questions is an important capability of conversational systems such as chatbots. Neural network-based approaches represent the state-of-the-art in automatic question generation (QG). In this work, we attempt to strengthen them significantly by adopting a holistic and novel generator-evaluator framework that directly optimizes objectives that reward semantics and structure. In this paper, we present a novel deep reinforcement learning based framework for automatic question generation. The generator of the framework is a sequence-to-sequence model, whereas the evaluator model of the framework evaluates and assigns a reward to each predicted question. The overall model is trained by learning the parameters of the generator network which maximizes the reward.Our framework allows us to directly optimize any task-specific score including evaluation measures such as BLEU, GLEU, ROUGE-L,etc., suitable for sequence to sequence tasks such as QG. Our evaluation shows that our approach significantly outperforms state-of-the-art systems on the widely-used SQuAD benchmark in both automatic and human evaluation."
]
}
|
1906.08570
|
2951585370
|
Hindi question answering systems suffer from a lack of data. To address the same, this paper presents an approach towards automatic question generation. We present a rule-based system for question generation in Hindi by formalizing question transformation methods based on karaka-dependency theory. We use a Hindi dependency parser to mark the karaka roles and use IndoWordNet a Hindi ontology to detect the semantic category of the karaka role heads to generate the interrogatives. We analyze how one sentence can have multiple generations from the same karaka role's rule. The generations are manually annotated by multiple annotators on a semantic and syntactic scale for evaluation. Further, we constrain our generation with the help of various semantic and syntactic filters so as to improve the generation quality. Using these methods, we are able to generate diverse questions, significantly more than number of sentences fed to the system.
|
Question answering in Indian languages is poorly represented, with only limited work done for them @cite_24 @cite_18 . Among these languages, Hindi has been researched most extensively, yet only a few rule-based QA systems have been proposed @cite_14 @cite_16 . The system of @cite_8 works on code-mixed Hindi-English data. The Hindi QA corpus of @cite_12 was originally created in English; the Hindi version was produced artificially through translation. These methods do not involve human-annotated Hindi data.
|
{
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_8",
"@cite_24",
"@cite_16",
"@cite_12"
],
"mid": [
"",
"",
"2749078814",
"2477575531",
"2786864293",
"1525961042"
],
"abstract": [
"",
"",
"Code-Mixing (CM) is a natural phenomenon observed in many multilingual societies and is becoming the preferred medium of expression and communication in online and social media fora. In spite of this, current Question Answering (QA) systems do not support CM and are only designed to work with a single interaction language. This assumption makes it inconvenient for multi-lingual users to interact naturally with the QA system especially in scenarios where they do not know the right word in the target language. In this paper, we present WebShodh - an end-end web-based Factoid QA system for CM languages. We demonstrate our system with two CM language pairs: Hinglish (Matrix language: Hindi, Embedded language: English) and Tenglish (Matrix language: Telugu, Embedded language: English). Lack of language resources such as annotated corpora, POS taggers or parsers for CM languages poses a huge challenge for automated processing and analysis. In view of this resource scarcity, we only assume the existence of bi-lingual dictionaries from the matrix languages to English and use it for lexically translating the question into English. Later, we use this loosely translated question for our downstream analysis such as Answer Type(AType) prediction, answer retrieval and ranking. Evaluation of our system reveals that we achieve an MRR of 0.37 and 0.32 for Hinglish and Tenglish respectively. We hosted this system online and plan to leverage it for collecting more CM questions and answers data for further improvement.",
"A Question Answering (QA) System is fairly an Information Retrieval(IR) system in which a query is stated to the system and it relocates the correct or closest results to the specific question asked in natural language. It is one of the consequences of Natural Language Interface to Database (NLIDB). The paper discusses the implementation of a Hindi Language QA system developed using Machine Learning approach. The implemented QA system is divided into three phases: Accessing natural language (NL) Query; where the input query is read, preprocessed and get tokenized; next is feature extraction (FE) phase; where specific features vectors are identified from the results of previous phase and finally the Classification phase; where the Naive Baye's classifier has been used, along with the knowledge base already stored in the system. This paper reflects that the concepts of similarity and classification provide better results than the use of ‘equals’ concept by defining the overall accuracy of finding the relevant answers of the specific questions asked by the user.",
"In Contemporary world, life styles and interactions have changed in all applications domain due to increasing advances of internet technology. Due to recent advances in information explosion, tries to build an intelligent question answering system where user may communicate with a machine in natural language to get response to user question using different strategies like Natural Language Processing (NLP), Artificial Intelligence, Information Retrieval and Human Computer Interaction. Natural Language Processing is a technique where computer behave like human, which helps people to talk to the computer in their own language rather than computer commands. The skills needed to build intelligent answering system includes tokenization, parsing, parts of speech tagging, question classification, query construction, sentence understanding, document retrieval, keyword ranking, classifier, answer extraction and validation. The current study intends to develop an intelligent system for user queries in natural language for precise answer.",
"One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent. To measure progress towards that goal, we argue for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. We believe many existing learning systems can currently not solve them, and hence our aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems. We also extend and improve the recently introduced Memory Networks model, and show it is able to solve some, but not all, of the tasks."
]
}
|
1906.08570
|
2951585370
|
Hindi question answering systems suffer from a lack of data. To address the same, this paper presents an approach towards automatic question generation. We present a rule-based system for question generation in Hindi by formalizing question transformation methods based on karaka-dependency theory. We use a Hindi dependency parser to mark the karaka roles and use IndoWordNet a Hindi ontology to detect the semantic category of the karaka role heads to generate the interrogatives. We analyze how one sentence can have multiple generations from the same karaka role's rule. The generations are manually annotated by multiple annotators on a semantic and syntactic scale for evaluation. Further, we constrain our generation with the help of various semantic and syntactic filters so as to improve the generation quality. Using these methods, we are able to generate diverse questions, significantly more than number of sentences fed to the system.
|
Dependency parsing in Indian languages follows the Indian grammatical tradition of Panini, an early @math century BC linguist. According to Paninian Grammar, Hindi dependency roles can be explained in terms of karakas, which loosely correlate to the typical dependency labels used in English. The head of a sentence is the main verb, while the rest of the phrases are children of the main verb; the roles of these child nodes with respect to the main verb are the karaka roles. For example, in the sentence shown in Figure , the relations are specified as the karta, or doer; the karma, or patient of the verb; and the time at which this action took place. Karakas are sometimes expressed with their case markers. @cite_4 describes the annotation scheme of these dependency parsers. In the following sections, we explain the rules we developed for each karaka role, along with how each karaka can be understood in terms of thematic roles. We use the dependency parser for Hindi provided at https://bitbucket.org/iscnlp/parser
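The transformation from a karaka-labeled parse to questions can be sketched as a simple rule table: each karaka role present in the sentence yields one question in which that role's phrase is replaced by a wh-word. The role names, the wh-word mapping, and the parse format below are simplified assumptions in English gloss; the actual system operates on the output of a Hindi dependency parser together with IndoWordNet.

```python
# Illustrative sketch of karaka-rule-based question generation.
# KARAKA_TO_WH is a hypothetical, simplified rule table.
KARAKA_TO_WH = {
    "karta": "who",   # doer / agent
    "karma": "what",  # patient / object
    "kaal": "when",   # time of the action
}

def generate_questions(parse):
    """parse: {'verb': str, 'roles': {karaka_role: phrase}}.
    For each karaka role present, emit one question with that role's
    phrase replaced by the corresponding wh-word."""
    questions = []
    for role, wh in KARAKA_TO_WH.items():
        if role in parse["roles"]:
            rest = " ".join(
                p for r, p in parse["roles"].items() if r != role
            )
            questions.append(f"{wh} {parse['verb']} {rest}?")
    return questions
```

Note that Hindi keeps question words in situ, so this direct substitution is closer to grammatical Hindi than the crude English glosses above suggest; the semantic and syntactic filters described later prune the generations that such naive substitution gets wrong.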
|
{
"cite_N": [
"@cite_4"
],
"mid": [
"2405140167"
],
"abstract": [
"The paper introduces a dependency annotation effort which aims to fully annotate a million word Hindi corpus. It is the first attempt of its kind to develop a large scale tree-bank for an Indian language. In this paper we provide the motivation for following the Paninian framework as the annotation scheme and argue that the Paninian framework is better suited to model the various linguistic phenomena manifest in Indian languages. We present the basic annotation scheme. We also show how the scheme handles some phenomenon such as complex verbs, ellipses, etc. Empirical results of some experiments done on the currently annotated sentences are also reported."
]
}
|
1906.08332
|
2953369397
|
This study explores a simple but strong baseline for person re-identification (ReID). Person ReID with deep neural networks has progressed and achieved high performance in recent years. However, many state-of-the-art methods design complex network structures and concatenate multi-branch features. In the literature, some effective training tricks briefly appear in several papers or source codes. The present study collects and evaluates these effective training tricks in person ReID. By combining these tricks, the model achieves 94.5 rank-1 and 85.9 mean average precision on Market1501 with only using the global features of ResNet50. The performance surpasses all existing global- and part-based baselines in person ReID. We propose a novel neck structure named as batch normalization neck (BNNeck). BNNeck adds a batch normalization layer after global pooling layer to separate metric and classification losses into two different feature spaces because we observe they are inconsistent in one embedding space. Extended experiments show that BNNeck can boost the baseline, and our baseline can improve the performance of existing state-of-the-art methods. Our codes and models are available at: this https URL.
|
Recent studies on person ReID mostly focus on building deep convolutional neural networks (CNNs) to represent the features of person images in an end-to-end learning manner. GoogLeNet @cite_9 , ResNet @cite_12 , and DenseNet @cite_45 are widely used backbone networks. The baselines can be classified into two main genres according to the loss function: classification loss and metric loss. For classification loss, Zheng @cite_24 proposed ID-discriminative embedding (IDE), which trains the re-ID model as image classification, fine-tuned from ImageNet @cite_16 pre-trained models. Classification loss is also called ID loss in person ReID because IDE is trained with classification loss. However, ID loss requires an extra fully connected (FC) layer to predict the logits of person IDs during training. In the inference stage, this FC layer is removed, and the feature from the last pooling layer is used as the representation vector of the person image.
|
{
"cite_N": [
"@cite_9",
"@cite_24",
"@cite_45",
"@cite_16",
"@cite_12"
],
"mid": [
"2097117768",
"2549957142",
"2963446712",
"2108598243",
"2194775991"
],
"abstract": [
"We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.",
"In this article, we revisit two popular convolutional neural networks in person re-identification (re-ID): verification and identification models. The two models have their respective advantages and limitations due to different loss functions. Here, we shed light on how to combine the two models to learn more discriminative pedestrian descriptors. Specifically, we propose a Siamese network that simultaneously computes the identification loss and verification loss. Given a pair of training images, the network predicts the identities of the two input images and whether they belong to the same identity. Our network learns a discriminative embedding and a similarity measurement at the same time, thus taking full usage of the re-ID annotations. Our method can be easily applied on different pretrained networks. Albeit simple, the learned embedding improves the state-of-the-art performance on two public person re-ID benchmarks. Further, we show that our architecture can also be applied to image retrieval. The code is available at https: github.com layumi 2016_person_re-ID.",
"Recent work has shown that convolutional networks can be substantially deeper, more accurate, and efficient to train if they contain shorter connections between layers close to the input and those close to the output. In this paper, we embrace this observation and introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion. Whereas traditional convolutional networks with L layers have L connections—one between each layer and its subsequent layer—our network has L(L+1) 2 direct connections. For each layer, the feature-maps of all preceding layers are used as inputs, and its own feature-maps are used as inputs into all subsequent layers. DenseNets have several compelling advantages: they alleviate the vanishing-gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters. We evaluate our proposed architecture on four highly competitive object recognition benchmark tasks (CIFAR-10, CIFAR-100, SVHN, and ImageNet). DenseNets obtain significant improvements over the state-of-the-art on most of them, whilst requiring less memory and computation to achieve high performance. Code and pre-trained models are available at https: github.com liuzhuang13 DenseNet.",
"The explosion of image data on the Internet has the potential to foster more sophisticated and robust models and algorithms to index, retrieve, organize and interact with images and multimedia data. But exactly how such data can be harnessed and organized remains a critical problem. We introduce here a new database called “ImageNet”, a large-scale ontology of images built upon the backbone of the WordNet structure. ImageNet aims to populate the majority of the 80,000 synsets of WordNet with an average of 500-1000 clean and full resolution images. This will result in tens of millions of annotated images organized by the semantic hierarchy of WordNet. This paper offers a detailed analysis of ImageNet in its current state: 12 subtrees with 5247 synsets and 3.2 million images in total. We show that ImageNet is much larger in scale and diversity and much more accurate than the current image datasets. Constructing such a large-scale database is a challenging task. We describe the data collection scheme with Amazon Mechanical Turk. Lastly, we illustrate the usefulness of ImageNet through three simple applications in object recognition, image classification and automatic object clustering. We hope that the scale, accuracy, diversity and hierarchical structure of ImageNet can offer unparalleled opportunities to researchers in the computer vision community and beyond.",
"Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation."
]
}
|
Different from ID loss, metric loss regards the ReID task as a clustering or ranking problem. The most widely used metric-learning baseline trains the model with triplet loss @cite_28 . A triplet consists of three images: an anchor, a positive, and a negative sample. The anchor and positive samples belong to the same person ID, whereas the negative sample belongs to a different person ID. Triplet loss minimizes the distance from the anchor to the positive sample and maximizes the distance from the anchor to the negative one. However, triplet loss is strongly influenced by how the triplets are sampled. Inspired by FaceNet @cite_30 , Hermans et al. proposed online hard example mining for triplet loss (TriHard loss). Most current methods are built on the TriHard baseline. Combining ID loss with TriHard loss is also a popular way of acquiring a strong baseline @cite_38 .
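The batch-hard mining step can be made concrete with a small sketch. This is a minimal toy on 1-D embeddings, assuming squared-Euclidean distance and a hinge margin; the embeddings, labels, and margin value are hypothetical, not the paper's settings.

```python
# Minimal batch-hard ("TriHard") triplet loss sketch: for every anchor,
# pick the hardest (farthest) positive and hardest (closest) negative in
# the batch, then apply a hinge with margin. Values are illustrative.

def trihard_loss(embeddings, labels, margin=0.3):
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    total = 0.0
    for i, (anchor, la) in enumerate(zip(embeddings, labels)):
        pos = [dist(anchor, e)
               for j, (e, l) in enumerate(zip(embeddings, labels))
               if l == la and j != i]
        neg = [dist(anchor, e)
               for e, l in zip(embeddings, labels) if l != la]
        if not pos or not neg:
            continue  # anchor has no valid triplet in this batch
        total += max(0.0, max(pos) - min(neg) + margin)
    return total / len(embeddings)

embs = [[0.0], [0.1], [1.0], [1.2]]   # two well-separated identities
labels = [0, 0, 1, 1]
loss = trihard_loss(embs, labels)     # margin already satisfied -> 0.0
```

With the default margin the hinge is inactive for every anchor; a larger margin (e.g. 1.0) makes the loss positive, which is exactly the signal that pushes hard negatives away.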
|
{
"cite_N": [
"@cite_28",
"@cite_38",
"@cite_30"
],
"mid": [
"2432402544",
"2946574625",
"2096733369"
],
"abstract": [
"Person re-identification across disjoint camera views has been widely applied in video surveillance yet it is still a challenging problem. One of the major challenges lies in the lack of spatial and temporal cues, which makes it difficult to deal with large variations of lighting conditions, viewing angles, body poses, and occlusions. Recently, several deep-learning-based person re-identification approaches have been proposed and achieved remarkable performance. However, most of those approaches extract discriminative features from the whole frame at one glimpse without differentiating various parts of the persons to identify. It is essentially important to examine multiple highly discriminative local regions of the person images in details through multiple glimpses for dealing with the large appearance variance. In this paper, we propose a new soft attention-based model, i.e. , the end-to-end comparative attention network (CAN), specifically tailored for the task of person re-identification. The end-to-end CAN learns to selectively focus on parts of pairs of person images after taking a few glimpses of them and adaptively comparing their appearance. The CAN model is able to learn which parts of images are relevant for discerning persons and automatically integrates information from different parts to determine whether a pair of images belongs to the same person. In other words, our proposed CAN model simulates the human perception process to verify whether two images are from the same person. Extensive experiments on four benchmark person re-identification data sets, including CUHK01, CHUHK03, Market-1501, and VIPeR, clearly demonstrate that our proposed end-to-end CAN for person re-identification outperforms well established baselines significantly and offer the new state-of-the-art performance.",
"Abstract Person re-identification (ReID) is a challenging problem, where global features of person images are not enough to solve unaligned image pairs. Many previous works used human pose information to acquire aligned local features to boost the performance. However, those methods need extra labeled data to train an available human pose estimation model. In this paper, we propose a novel method named Dynamically Matching Local Information (DMLI) that could dynamically align local information without requiring extra supervision. DMLI could achieve better performance, especially when encountering the human pose misalignment caused by inaccurate person detection boxes. Then, we propose a deep model name AlignedReID++ which is jointly learned with global features and local feature based on DMLI. AlignedReID++ improves the performance of global features, and could use DMLI to further increase accuracy in the inference phase. Experiments show effectiveness of our proposed method in comparison with several state-of-the-art person ReID approaches. Additionally, it achieves rank-1 accuracy of 92.8 on Market1501 and 86.2 on DukeMTMCReID with ResNet50. The code and models have been released 2 .",
"Despite significant recent advances in the field of face recognition [10, 14, 15, 17], implementing face verification and recognition efficiently at scale presents serious challenges to current approaches. In this paper we present a system, called FaceNet, that directly learns a mapping from face images to a compact Euclidean space where distances directly correspond to a measure of face similarity. Once this space has been produced, tasks such as face recognition, verification and clustering can be easily implemented using standard techniques with FaceNet embeddings as feature vectors."
]
}
|
Stripe-based methods, which divide the image into several horizontal stripes and extract local features for each stripe, play an important role in person ReID. Inspired by PCB, typical methods include AlignedReID++ @cite_38 , MGN @cite_39 , SCPNet @cite_11 , etc. Stripe-based local features are effective in boosting the performance of the ReID model. However, they often suffer from pose misalignment.
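The stripe idea can be sketched as uniform horizontal partitioning followed by per-stripe pooling. This is an illustrative toy on a tiny (H x C) feature map, not any specific method's implementation; the map, stripe count, and average pooling are assumptions.

```python
# PCB-style stripe pooling sketch: split a feature map into equal
# horizontal stripes and average-pool each stripe into its own local
# descriptor. Shapes and values are illustrative only.

def stripe_pool(feature_map, num_stripes):
    h = len(feature_map)
    assert h % num_stripes == 0, "height must divide evenly into stripes"
    step = h // num_stripes
    descriptors = []
    for s in range(num_stripes):
        rows = feature_map[s * step:(s + 1) * step]
        channels = len(rows[0])
        # average over the stripe's rows, channel by channel
        pooled = [sum(r[c] for r in rows) / len(rows)
                  for c in range(channels)]
        descriptors.append(pooled)
    return descriptors

fmap = [[1.0, 2.0],
        [3.0, 4.0],
        [5.0, 6.0],
        [7.0, 8.0]]                        # H=4 rows, C=2 channels
parts = stripe_pool(fmap, num_stripes=2)   # one descriptor per stripe
```

The rigid, position-based split is also what makes these methods fragile: if the body shifts vertically between two images, corresponding stripes no longer cover the same body part, which is the pose-misalignment problem mentioned above.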
|
{
"cite_N": [
"@cite_38",
"@cite_11",
"@cite_39"
],
"mid": [
"2946574625",
"2954309578",
"2795758732"
],
"abstract": [
"Abstract Person re-identification (ReID) is a challenging problem, where global features of person images are not enough to solve unaligned image pairs. Many previous works used human pose information to acquire aligned local features to boost the performance. However, those methods need extra labeled data to train an available human pose estimation model. In this paper, we propose a novel method named Dynamically Matching Local Information (DMLI) that could dynamically align local information without requiring extra supervision. DMLI could achieve better performance, especially when encountering the human pose misalignment caused by inaccurate person detection boxes. Then, we propose a deep model name AlignedReID++ which is jointly learned with global features and local feature based on DMLI. AlignedReID++ improves the performance of global features, and could use DMLI to further increase accuracy in the inference phase. Experiments show effectiveness of our proposed method in comparison with several state-of-the-art person ReID approaches. Additionally, it achieves rank-1 accuracy of 92.8 on Market1501 and 86.2 on DukeMTMCReID with ResNet50. The code and models have been released 2 .",
"Holistic person re-identification (ReID) has received extensive study in the past few years and achieves impressive progress. However, persons are often occluded by obstacles or other persons in practical scenarios, which makes partial person re-identification non-trivial. In this paper, we propose a spatial-channel parallelism network (SCPNet) in which each channel in the ReID feature pays attention to a given spatial part of the body. The spatial-channel corresponding relationship supervises the network to learn discriminative feature for both holistic and partial person re-identification. The single model trained on four holistic ReID datasets achieves competitive accuracy on these four datasets, as well as outperforms the state-of-the-art methods on two partial ReID datasets without training.",
"The combination of global and partial features has been an essential solution to improve discriminative performances in person re-identification (Re-ID) tasks. Previous part-based methods mainly focus on locating regions with specific pre-defined semantics to learn local representations, which increases learning difficulty but not efficient or robust to scenarios with large variances. In this paper, we propose an end-to-end feature learning strategy integrating discriminative information with various granularities. We carefully design the Multiple Granularity Network (MGN), a multi-branch deep network architecture consisting of one branch for global feature representations and two branches for local feature representations. Instead of learning on semantic regions, we uniformly partition the images into several stripes, and vary the number of parts in different local branches to obtain local feature representations with multiple granularities. Comprehensive experiments implemented on the mainstream evaluation datasets including Market-1501, DukeMTMC-reid and CUHK03 indicate that our method robustly achieves state-of-the-art performances and outperforms any existing approaches by a large margin. For example, on Market-1501 dataset in single query mode, we obtain a top result of Rank-1 mAP=96.6 94.2 with this method after re-ranking."
]
}
|
Mask-guided methods @cite_13 @cite_33 @cite_7 use masks as external cues to remove background clutter at the pixel level and to encode body shape information. For example, Song @cite_13 proposed a mask-guided contrastive attention model that applies binary segmentation masks to learn features separately from the body and background regions. Kalayeh @cite_33 proposed SPReID, which uses human semantic parsing to harness local visual cues for person ReID. Mask-guided models rely heavily on an accurate pedestrian segmentation model.
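The core mechanism, pooling features only where a binary mask marks the body, can be shown with a toy example. The flattened responses and the mask values here are hypothetical; in the cited methods the mask comes from a segmentation model.

```python
# Mask-guided pooling sketch: a binary foreground mask selects body
# positions so background clutter is excluded from the descriptor.
# Values are illustrative; a real mask comes from a segmentation model.

def masked_average(feature_map, mask):
    vals = [f for f, m in zip(feature_map, mask) if m == 1]
    return sum(vals) / len(vals) if vals else 0.0

features = [0.9, 0.8, 0.1, 0.2]   # flattened spatial responses
mask     = [1,   1,   0,   0]     # 1 = body pixel, 0 = background
body_feat = masked_average(features, mask)   # pools body region only
full_feat = sum(features) / len(features)    # background included
```

The gap between `body_feat` and `full_feat` illustrates the benefit, and also the dependence: a wrong mask would pool the wrong positions, which is why these models need accurate segmentation.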
|
{
"cite_N": [
"@cite_13",
"@cite_33",
"@cite_7"
],
"mid": [
"2798775284",
"2963805953",
"2797394071"
],
"abstract": [
"Person Re-identification (ReID) is an important yet challenging task in computer vision. Due to the diverse background clutters, variations on viewpoints and body poses, it is far from solved. How to extract discriminative and robust features invariant to background clutters is the core problem. In this paper, we first introduce the binary segmentation masks to construct synthetic RGB-Mask pairs as inputs, then we design a mask-guided contrastive attention model (MGCAM) to learn features separately from the body and background regions. Moreover, we propose a novel region-level triplet loss to restrain the features learnt from different regions, i.e., pulling the features from the full image and body region close, whereas pushing the features from backgrounds away. We may be the first one to successfully introduce the binary mask into person ReID task and the first one to propose region-level contrastive learning. We evaluate the proposed method on three public datasets, including MARS, Market-1501 and CUHK03. Extensive experimental results show that the proposed method is effective and achieves the state-of-the-art results. Mask and code will be released upon request.",
"Person re-identification is a challenging task mainly due to factors such as background clutter, pose, illumination and camera point of view variations. These elements hinder the process of extracting robust and discriminative representations, hence preventing different identities from being successfully distinguished. To improve the representation learning, usually local features from human body parts are extracted. However, the common practice for such a process has been based on bounding box part detection. In this paper, we propose to adopt human semantic parsing which, due to its pixel-level accuracy and capability of modeling arbitrary contours, is naturally a better alternative. Our proposed SPReID integrates human semantic parsing in person re-identification and not only considerably outperforms its counter baseline, but achieves state-of-the-art performance. We also show that, by employing a simple yet effective training strategy, standard popular deep convolutional architectures such as Inception-V3 and ResNet-152, with no modification, while operating solely on full image, can dramatically outperform current state-of-the-art. Our proposed methods improve state-of-the-art person re-identification on: Market-1501 [48] by 17 in mAP and 6 in rank-1, CUHK03 [24] by 4 in rank-1 and DukeMTMC-reID [50] by 24 in mAP and 10 in rank-1.",
"Person retrieval faces many challenges including cluttered background, appearance variations (e.g., illumination, pose, occlusion) among different camera views and the similarity among different person's images. To address these issues, we put forward a novel mask based deep ranking neural network with a skipped fusing layer. Firstly, to alleviate the problem of cluttered background, masked images with only the foreground regions are incorporated as input in the proposed neural network. Secondly, to reduce the impact of the appearance variations, the multi-layer fusion scheme is developed to obtain more discriminative fine-grained information. Lastly, considering person retrieval is a special image retrieval task, we propose a novel ranking loss to optimize the whole network. The proposed ranking loss can further mitigate the interference problem of similar negative samples when producing ranking results. The extensive experiments validate the superiority of the proposed method compared with the state-of-the-art methods on many benchmark datasets."
]
}
|
Attention-based methods @cite_15 @cite_6 @cite_43 @cite_35 involve an attention mechanism to extract additional discriminative features. In comparison with pixel-level masks, an attention region can be regarded as an automatically learned high-level 'mask'. A popular model is the Harmonious Attention CNN (HA-CNN) proposed by Li @cite_6 . HA-CNN combines the learning of soft pixel and hard regional attention with simultaneous optimization of feature representations. An advantage of attention-based models is that they do not require a segmentation model to acquire mask information.
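The contrast with hard masks can be illustrated with a minimal soft-attention sketch: a softmax over per-position scores yields a continuous weighting instead of a binary mask. The scores here are hand-picked for illustration; in models like HA-CNN they are learned end-to-end.

```python
import math

# Soft spatial attention sketch: scores over positions are normalized
# with a softmax into attention weights (a learned, continuous "mask"),
# and the output is the attention-weighted sum of the features.
# Scores and features are illustrative only.

def soft_attention(features, scores):
    m = max(scores)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    attended = sum(w * f for w, f in zip(weights, features))
    return attended, weights

features = [1.0, 5.0, 2.0]
scores = [0.1, 3.0, 0.2]   # position 1 is deemed most informative
out, w = soft_attention(features, scores)
```

Because the weights are differentiable, the network can learn where to look directly from the ReID objective, without any external segmentation supervision.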
|
{
"cite_N": [
"@cite_43",
"@cite_15",
"@cite_6",
"@cite_35"
],
"mid": [
"2963736028",
"2964044605",
"2962926870",
"2963910742"
],
"abstract": [
"Video-based person re-identification matches video clips of people across non-overlapping cameras. Most existing methods tackle this problem by encoding each video frame in its entirety and computing an aggregate representation across all frames. In practice, people are often partially occluded, which can corrupt the extracted features. Instead, we propose a new spatiotemporal attention model that automatically discovers a diverse set of distinctive body parts. This allows useful information to be extracted from all frames without succumbing to occlusions and misalignments. The network learns multiple spatial attention models and employs a diversity regularization term to ensure multiple models do not discover the same body part. Features extracted from local image regions are organized by spatial attention model and are combined using temporal attention. As a result, the network learns latent representations of the face, torso and other body parts using the best available image patches from the entire video sequence. Extensive evaluations on three datasets show that our framework outperforms the state-of-the-art approaches by large margins on multiple metrics.",
"Typical person re-identification (ReID) methods usually describe each pedestrian with a single feature vector and match them in a task-specific metric space. However, the methods based on a single feature vector are not sufficient enough to overcome visual ambiguity, which frequently occurs in real scenario. In this paper, we propose a novel end-to-end trainable framework, called Dual ATtention Matching network (DuATM), to learn context-aware feature sequences and perform attentive sequence comparison simultaneously. The core component of our DuATM framework is a dual attention mechanism, in which both intrasequence and inter-sequence attention strategies are used for feature refinement and feature-pair alignment, respectively. Thus, detailed visual cues contained in the intermediate feature sequences can be automatically exploited and properly compared. We train the proposed DuATM network as a siamese network via a triplet loss assisted with a decorrelation loss and a cross-entropy loss. We conduct extensive experiments on both image and video based ReID benchmark datasets. Experimental results demonstrate the significant advantages of our approach compared to the state-of-the-art methods.",
"Existing person re-identification (re-id) methods either assume the availability of well-aligned person bounding box images as model input or rely on constrained attention selection mechanisms to calibrate misaligned images. They are therefore sub-optimal for re-id matching in arbitrarily aligned person images potentially with large human pose variations and unconstrained auto-detection errors. In this work, we show the advantages of jointly learning attention selection and feature representation in a Convolutional Neural Network (CNN) by maximising the complementary information of different levels of visual attention subject to re-id discriminative learning constraints. Specifically, we formulate a novel Harmonious Attention CNN (HA-CNN) model for joint learning of soft pixel attention and hard regional attention along with simultaneous optimisation of feature representations, dedicated to optimise person re-id in uncontrolled (misaligned) images. Extensive comparative evaluations validate the superiority of this new HA-CNN model for person re-id over a wide variety of state-of-the-art methods on three large-scale benchmarks including CUHK03, Market-1501, and DukeMTMC-ReID.",
"Person re-identification (ReID) is to identify pedestrians observed from different camera views based on visual appearance. It is a challenging task due to large pose variations, complex background clutters and severe occlusions. Recently, human pose estimation by predicting joint locations was largely improved in accuracy. It is reasonable to use pose estimation results for handling pose variations and background clutters, and such attempts have obtained great improvement in ReID performance. However, we argue that the pose information was not well utilized and hasn't yet been fully exploited for person ReID. In this work, we introduce a novel framework called Attention-Aware Compositional Network (AACN) for person ReID. AACN consists of two main components: Pose-guided Part Attention (PPA) and Attention-aware Feature Composition (AFC). PPA is learned and applied to mask out undesirable background features in pedestrian feature maps. Furthermore, pose-guided visibility scores are estimated for body parts to deal with part occlusion in the proposed AFC module. Extensive experiments with ablation analysis show the effectiveness of our method, and state-of-the-art results are achieved on several public datasets, including Market-1501, CUHK03, CUHK01, SenseReID, CUHK03-NP and DukeMTMC-reID."
]
}
|
GAN-based methods @cite_5 @cite_23 @cite_25 @cite_17 address the limited data available for person ReID. Zheng @cite_5 first used a GAN @cite_22 to generate images for enriching ReID datasets; the GAN model randomly generates unlabeled, low-quality images. Building on @cite_5 , PTGAN @cite_23 and CamStyle @cite_25 were proposed to bridge domain and camera gaps in person ReID, respectively. Qian @cite_17 proposed PNGAN, which transforms a person image into normalized poses to obtain a pose-independent pedestrian feature; the final feature combines the pose-independent features with the original ReID features. With the development of GANs, many GAN-based methods have been proposed to generate high-quality images for supervised and unsupervised person ReID tasks.
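A concrete piece of this line of work is how unlabeled generated images are supervised: the LSRO scheme cited above assigns them a uniform distribution over all identities and scores them with the same cross-entropy as real images. The sketch below is a toy illustration of that targeting rule; the predicted distribution and class count are hypothetical.

```python
import math

# LSRO targeting sketch: real images keep a one-hot identity label,
# while GAN-generated (unlabeled) images get a uniform distribution
# over all K identities; both use the same cross-entropy. Values are
# illustrative only.

def cross_entropy(probs, target_dist):
    return -sum(t * math.log(p) for t, p in zip(target_dist, probs))

def lsro_target(num_classes, real_label=None):
    if real_label is None:                       # GAN-generated sample
        return [1.0 / num_classes] * num_classes
    return [1.0 if k == real_label else 0.0 for k in range(num_classes)]

probs = [0.7, 0.2, 0.1]                 # model's predicted ID distribution
real_loss = cross_entropy(probs, lsro_target(3, real_label=0))
fake_loss = cross_entropy(probs, lsro_target(3))   # uniform target
```

The uniform target penalizes confident predictions on generated images, so they act as a regularizer on the classifier rather than as extra labeled identities.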
|
{
"cite_N": [
"@cite_22",
"@cite_23",
"@cite_5",
"@cite_25",
"@cite_17"
],
"mid": [
"2099471712",
"2963047834",
"2585635281",
"2898047322",
"2964186374"
],
"abstract": [
"We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.",
"Although the performance of person Re-Identification (ReID) has been significantly boosted, many challenging issues in real scenarios have not been fully investigated, e.g., the complex scenes and lighting variations, viewpoint and pose changes, and the large number of identities in a camera network. To facilitate the research towards conquering those issues, this paper contributes a new dataset called MSMT17 with many important features, e.g., 1) the raw videos are taken by a 15-camera network deployed in both indoor and outdoor scenes, 2) the videos cover a long period of time and present complex lighting variations, and 3) it contains currently the largest number of annotated identities, i.e., 4,101 identities and 126,441 bounding boxes. We also observe that a domain gap commonly exists between datasets, which essentially causes a severe performance drop when training and testing on different datasets. As a result, available training data cannot be effectively leveraged for new testing domains. To relieve the expensive costs of annotating new training samples, we propose a Person Transfer Generative Adversarial Network (PTGAN) to bridge the domain gap. Comprehensive experiments show that the domain gap could be substantially narrowed down by the PTGAN.",
"The main contribution of this paper is a simple semisupervised pipeline that only uses the original training set without collecting extra data. It is challenging in 1) how to obtain more training data only from the training set and 2) how to use the newly generated data. In this work, the generative adversarial network (GAN) is used to generate unlabeled samples. We propose the label smoothing regularization for outliers (LSRO). This method assigns a uniform label distribution to the unlabeled images, which regularizes the supervised model and improves the baseline. We verify the proposed method on a practical problem: person re-identification (re-ID). This task aims to retrieve a query person from other cameras. We adopt the deep convolutional generative adversarial network (DCGAN) for sample generation, and a baseline convolutional neural network (CNN) for representation learning. Experiments show that adding the GAN-generated data effectively improves the discriminative ability of learned CNN embeddings. On three large-scale datasets, Market-1501, CUHK03 and DukeMTMC-reID, we obtain +4.37%, +1.6% and +2.46% improvement in rank-1 precision over the baseline CNN, respectively. We additionally apply the proposed method to fine-grained bird recognition and achieve a +0.6% improvement over a strong baseline. The code is available at https://github.com/layumi/Person-reID_GAN.",
"Person re-identification (re-ID) is a cross-camera retrieval task that suffers from image style variations caused by different cameras. The art implicitly addresses this problem by learning a camera-invariant descriptor subspace. In this paper, we explicitly consider this challenge by introducing camera style (CamStyle). CamStyle can serve as a data augmentation approach that reduces the risk of deep network overfitting and that smooths the CamStyle disparities. Specifically, with a style transfer model, labeled training images can be style transferred to each camera, and along with the original training samples, form the augmented training set. This method, while increasing data diversity against overfitting, also incurs a considerable level of noise. In the effort to alleviate the impact of noise, the label smooth regularization (LSR) is adopted. The vanilla version of our method (without LSR) performs reasonably well on few-camera systems in which overfitting often occurs. With LSR, we demonstrate consistent improvement in all systems regardless of the extent of overfitting. We also report competitive accuracy compared with the state of the art on Market-1501 and DukeMTMC-reID. Importantly, CamStyle can be employed to the challenging problems of one view learning and unsupervised domain adaptation (UDA) in person re-identification (re-ID), both of which have critical research and application significance. The former only has labeled data in one camera view and the latter only has labeled data in the source domain. Experimental results show that CamStyle significantly improves the performance of the baseline in the two problems. Specifically, for UDA, CamStyle achieves state-of-the-art accuracy based on a baseline deep re-ID model on Market-1501 and DukeMTMC-reID. Our code is available at: https://github.com/zhunzhong07/CamStyle .",
"Person Re-identification (re-id) faces two major challenges: the lack of cross-view paired training data and learning discriminative identity-sensitive and view-invariant features in the presence of large pose variations. In this work, we address both problems by proposing a novel deep person image generation model for synthesizing realistic person images conditional on the pose. The model is based on a generative adversarial network (GAN) designed specifically for pose normalization in re-id, thus termed pose-normalization GAN (PN-GAN). With the synthesized images, we can learn a new type of deep re-id features free of the influence of pose variations. We show that these features are complementary to features learned with the original images. Importantly, a more realistic unsupervised learning setting is considered in this work, and our model is shown to have the potential to be generalizable to a new re-id dataset without any fine-tuning. The codes will be released at https://github.com/naiq/PN_GAN."
]
}
|
1906.08308
|
2950009908
|
Prior work on the complexity of bribery assumes that the bribery happens simultaneously, and that the briber has full knowledge of all voters' votes. But neither of those assumptions always holds. In many real-world settings, votes come in sequentially, and the briber may have a use-it-or-lose-it moment to decide whether to bribe/alter a given vote, and at the time of making that decision, the briber may not know what votes remaining voters are planning on casting. In this paper, we introduce a model for, and initiate the study of, bribery in such an online, sequential setting. We show that even for election systems whose winner-determination problem is polynomial-time computable, an online, sequential setting may vastly increase the complexity of bribery, in fact jumping the problem up to completeness for high levels of the polynomial hierarchy or even PSPACE. On the other hand, we show that for some natural, important election systems, such a dramatic complexity increase does not occur, and we pinpoint the complexity of their bribery problems in the online, sequential setting.
|
Our approach to the briber's goal, which is assuming worst-case revelations of information, is inspired by the approach used in the area known as online algorithms @cite_1 .
|
{
"cite_N": [
"@cite_1"
],
"mid": [
"1552828154"
],
"abstract": [
"Preface 1. Introduction to competitive analysis: the list accessing problem 2. Introduction to randomized algorithms: the list accessing problem 3. Paging: deterministic algorithms 4. Paging: randomized algorithms 5. Alternative models for paging: beyond pure competitive analysis 6. Game theoretic foundations 7. Request-answer games 8. Competitive analysis and zero-sum games 9. Metrical task systems 10. The k-server problem 11. Randomized k-server algorithms 12. Load-balancing 13. Call admission and circuit-routing 14. Search, trading and portfolio selection 15. Competitive analysis and decision making under uncertainty Appendices Bibliography Index."
]
}
|
1906.08416
|
2951322441
|
We investigate the robustness properties of image recognition models equipped with two features inspired by human vision, an explicit episodic memory and a shape bias, at the ImageNet scale. As reported in previous work, we show that an explicit episodic memory improves the robustness of image recognition models against small-norm adversarial perturbations under some threat models. It does not, however, improve the robustness against more natural, and typically larger, perturbations. Learning more robust features during training appears to be necessary for robustness in this second sense. We show that features derived from a model that was encouraged to learn global, shape-based representations (, 2019) do not only improve the robustness against natural perturbations, but when used in conjunction with an episodic memory, they also provide additional robustness against adversarial perturbations. Finally, we address three important design choices for the episodic memory: memory size, dimensionality of the memories and the retrieval method. We show that to make the episodic memory more compact, it is preferable to reduce the number of memories by clustering them, instead of reducing their dimensionality.
|
To our knowledge, the idea of using an episodic cache memory to improve the adversarial robustness of image classifiers was first proposed in @cite_2 and @cite_4 . @cite_2 considered a differentiable memory that was trained end-to-end with the rest of the model. This makes their model computationally much more expensive than the cache models considered here, where the cache uses pre-trained features instead. The deep k-nearest neighbor model in @cite_4 and the "CacheOnly" model described in @cite_8 are closer to our cache models in this respect, however these works did not consider models at the ImageNet scale. More recently, @cite_19 did consider cache models at the ImageNet scale (and beyond) and demonstrated substantial improvements in adversarial robustness under certain threat models.
|
{
"cite_N": [
"@cite_19",
"@cite_8",
"@cite_4",
"@cite_2"
],
"mid": [
"2919093153",
"2963380118",
"2793165286",
"2788475065"
],
"abstract": [
"Does the human mind resemble the machine-learning systems that mirror its performance? Convolutional neural networks (CNNs) have achieved human-level benchmarks in classifying novel images. These advances support technologies such as autonomous vehicles and machine diagnosis; but beyond this, they serve as candidate models for human vision itself. However, unlike humans, CNNs are “fooled” by adversarial examples—nonsense patterns that machines recognize as familiar objects, or seemingly irrelevant image perturbations that nevertheless alter the machine’s classification. Such bizarre behaviors challenge the promise of these new advances; but do human and machine judgments fundamentally diverge? Here, we show that human and machine classification of adversarial images are robustly related: In 8 experiments on 5 prominent and diverse adversarial imagesets, human subjects correctly anticipated the machine’s preferred label over relevant foils—even for images described as “totally unrecognizable to human eyes”. Human intuition may be a surprisingly reliable guide to machine (mis)classification—with consequences for minds and machines alike.",
"Training large-scale image recognition models is computationally expensive. This raises the question of whether there might be simple ways to improve the test performance of an already trained model without having to re-train or even fine-tune it with new data. Here, we show that, surprisingly, this is indeed possible. The key observation we make is that the layers of a deep network close to the output layer contain independent, easily extractable class-relevant information that is not contained in the output layer itself. We propose to extract this extra class-relevant information using a simple key-value cache memory to improve the classification performance of the model at test time. Our cache memory is directly inspired by a similar cache model previously proposed for language modeling (, 2017). This cache component does not require any training or fine-tuning; it can be applied to any pre-trained model and, by properly setting only two hyper-parameters, leads to significant improvements in its classification performance. Improvements are observed across several architectures and datasets. In the cache component, using features extracted from layers close to the output (but not from the output layer itself) as keys leads to the largest improvements. Concatenating features from multiple layers to form keys can further improve performance over using single-layer features as keys. The cache component also has a regularizing effect, a simple consequence of which is that it substantially increases the robustness of models against adversarial attacks.",
"Deep neural networks (DNNs) enable innovative applications of machine learning like image recognition, machine translation, or malware detection. However, deep learning is often criticized for its lack of robustness in adversarial settings (e.g., vulnerability to adversarial inputs) and general inability to rationalize its predictions. In this work, we exploit the structure of deep learning to enable new learning-based inference and decision strategies that achieve desirable properties such as robustness and interpretability. We take a first step in this direction and introduce the Deep k-Nearest Neighbors (DkNN). This hybrid classifier combines the k-nearest neighbors algorithm with representations of the data learned by each layer of the DNN: a test input is compared to its neighboring training points according to the distance that separates them in the representations. We show the labels of these neighboring points afford confidence estimates for inputs outside the model's training manifold, including on malicious inputs like adversarial examples--and therein provides protections against inputs that are outside the model's understanding. This is because the nearest neighbors can be used to estimate the nonconformity of, i.e., the lack of support for, a prediction in the training data. The neighbors also constitute human-interpretable explanations of predictions. We evaluate the DkNN algorithm on several datasets, and show the confidence estimates accurately identify inputs outside the model, and that the explanations provided by nearest neighbors are intuitive and useful in understanding model failures.",
"We propose a retrieval-augmented convolutional network and propose to train it with local mixup, a novel variant of the recently proposed mixup algorithm. The proposed hybrid architecture combining a convolutional network and an off-the-shelf retrieval engine was designed to mitigate the adverse effect of off-manifold adversarial examples, while the proposed local mixup addresses on-manifold ones by explicitly encouraging the classifier to locally behave linearly on the data manifold. Our evaluation of the proposed approach against five readily-available adversarial attacks on three datasets--CIFAR-10, SVHN and ImageNet--demonstrate the improved robustness compared to the vanilla convolutional network."
]
}
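The key-value cache memory discussed in these abstracts (keys are features from a pre-trained network, values are class labels, and a query is classified by retrieving its nearest keys) can be sketched as follows. The cosine-similarity retrieval and majority vote are assumptions for illustration; this is not the exact retrieval rule of any one cited paper.

```python
import numpy as np

class CacheClassifier:
    """Key-value cache over a (assumed pre-extracted) feature space:
    keys are training features, values are their labels. A query
    retrieves its k nearest keys and takes a majority vote."""
    def __init__(self, keys, labels, k=3):
        self.keys = keys / np.linalg.norm(keys, axis=1, keepdims=True)
        self.labels = np.asarray(labels)
        self.k = k

    def predict(self, query):
        q = query / np.linalg.norm(query)
        sims = self.keys @ q                   # cosine similarity to all keys
        nearest = np.argsort(-sims)[: self.k]  # indices of the top-k keys
        return int(np.bincount(self.labels[nearest]).argmax())

# toy usage: two well-separated feature clusters
rng = np.random.default_rng(1)
keys = np.vstack([rng.normal(5, 0.1, (10, 4)), rng.normal(-5, 0.1, (10, 4))])
labels = [0] * 10 + [1] * 10
cache = CacheClassifier(keys, labels)
print(cache.predict(np.full(4, 5.0)))   # 0
print(cache.predict(np.full(4, -5.0)))  # 1
```

Because the cache requires no training, it can be bolted onto an already trained model, which is what makes it attractive as a cheap robustness add-on.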
|
1906.08172
|
2949520286
|
Building applications that perceive the world around them is challenging. A developer needs to (a) select and develop corresponding machine learning algorithms and models, (b) build a series of prototypes and demos, (c) balance resource consumption against the quality of the solutions, and finally (d) identify and mitigate problematic cases. The MediaPipe framework addresses all of these challenges. A developer can use MediaPipe to build prototypes by combining existing perception components, to advance them to polished cross-platform applications and measure system performance and resource consumption on target platforms. We show that these features enable a developer to focus on the algorithm or model development and use MediaPipe as an environment for iteratively improving their application with results reproducible across different devices and platforms. MediaPipe will be open-sourced at this https URL.
|
Media analysis is an active area of research in both academia and industry. Typically, a media file or camera input containing both audio and video is extracted into separate streams via a media decoder, which are then analyzed separately. Neural net engines like TensorFlow @cite_1 , PyTorch @cite_7 , CNTK @cite_2 , and MXNet @cite_9 represent their neural networks as directed graphs whose nodes are simple and deterministic: one input generates one output, which allows very efficient execution of a compute graph built from such low-level semantics. MediaPipe, on the other hand, operates at a much higher semantic level and allows more complex and dynamic behavior: one input can generate zero, one, or multiple outputs, which cannot be modeled with neural networks. This flexibility allows MediaPipe to excel at analyzing media at a higher semantic level.
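The contrast drawn here (a neural-net graph node maps one input to exactly one output, while a MediaPipe-style stream node may emit zero, one, or many outputs per input) can be illustrated with a toy sketch. The function names are hypothetical and this is not the MediaPipe API.

```python
def nn_node(tensor):
    # A dataflow-graph op in the TensorFlow/MXNet sense:
    # exactly one output per input element.
    return [2 * x for x in tensor]

def mediapipe_style_node(frame_scores):
    # A higher-level stream node: one input packet may yield
    # zero, one, or many output packets (e.g. detections per frame).
    return [s for s in frame_scores if s > 0.5]

print(nn_node([1, 2, 3]))                     # [2, 4, 6]
print(mediapipe_style_node([0.9, 0.1, 0.7]))  # [0.9, 0.7]
```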
|
{
"cite_N": [
"@cite_9",
"@cite_1",
"@cite_7",
"@cite_2"
],
"mid": [
"2186615578",
"2402144811",
"2899771611",
"2513383847"
],
"abstract": [
"MXNet is a multi-language machine learning (ML) library to ease the development of ML algorithms, especially for deep neural networks. Embedded in the host language, it blends declarative symbolic expression with imperative tensor computation. It offers auto differentiation to derive gradients. MXNet is computation and memory efficient and runs on various heterogeneous systems, ranging from mobile devices to distributed GPU clusters. This paper describes both the API design and the system implementation of MXNet, and explains how embedding of both symbolic expression and tensor operation is handled in a unified fashion. Our preliminary experiments reveal promising results on large scale deep neural network applications using multiple GPU machines.",
"TensorFlow is a machine learning system that operates at large scale and in heterogeneous environments. TensorFlow uses dataflow graphs to represent computation, shared state, and the operations that mutate that state. It maps the nodes of a dataflow graph across many machines in a cluster, and within a machine across multiple computational devices, including multicore CPUs, general-purpose GPUs, and custom-designed ASICs known as Tensor Processing Units (TPUs). This architecture gives flexibility to the application developer: whereas in previous \"parameter server\" designs the management of shared state is built into the system, TensorFlow enables developers to experiment with novel optimizations and training algorithms. TensorFlow supports a variety of applications, with a focus on training and inference on deep neural networks. Several Google services use TensorFlow in production, we have released it as an open-source project, and it has become widely used for machine learning research. In this paper, we describe the TensorFlow dataflow model and demonstrate the compelling performance that TensorFlow achieves for several real-world applications.",
"",
"This tutorial will introduce the Computational Network Toolkit, or CNTK, Microsoft's cutting-edge open-source deep-learning toolkit for Windows and Linux. CNTK is a powerful computation-graph based deep-learning toolkit for training and evaluating deep neural networks. Microsoft product groups use CNTK, for example to create the Cortana speech models and web ranking. CNTK supports feed-forward, convolutional, and recurrent networks for speech, image, and text workloads, also in combination. Popular network types are supported either natively (convolution) or can be described as a CNTK configuration (LSTM, sequence-to-sequence). CNTK scales to multiple GPU servers and is designed around efficiency. The tutorial will give an overview of CNTK's general architecture and describe the specific methods and algorithms used for automatic differentiation, recurrent-loop inference and execution, memory sharing, on-the-fly randomization of large corpora, and multi-server parallelization. We will then show how typical uses looks like for relevant tasks like image recognition, sequence-to-sequence modeling, and speech recognition."
]
}
|
1906.08172
|
2949520286
|
Building applications that perceive the world around them is challenging. A developer needs to (a) select and develop corresponding machine learning algorithms and models, (b) build a series of prototypes and demos, (c) balance resource consumption against the quality of the solutions, and finally (d) identify and mitigate problematic cases. The MediaPipe framework addresses all of these challenges. A developer can use MediaPipe to build prototypes by combining existing perception components, to advance them to polished cross-platform applications and measure system performance and resource consumption on target platforms. We show that these features enable a developer to focus on the algorithm or model development and use MediaPipe as an environment for iteratively improving their application with results reproducible across different devices and platforms. MediaPipe will be open-sourced at this https URL.
|
The research project Ptolemy @cite_11 studies concurrent systems, but it heavily focuses on modeling and simulation for the purpose of studying such systems. With MediaPipe, a developer can build and analyze concurrent systems via graphs, and further deploy such systems as performant applications. However, MediaPipe is targeted towards applications in the audio/video processing domain, and is not limited to modeling the performance of concurrent systems.
|
{
"cite_N": [
"@cite_11"
],
"mid": [
"2169180789"
],
"abstract": [
"Modern embedded computing systems tend to be heterogeneous in the sense of being composed of subsystems with very different characteristics, which communicate and interact in a variety of ways-synchronous or asynchronous, buffered or unbuffered, etc. Obviously, when designing such systems, a modeling language needs to reflect this heterogeneity. Today's modeling environments usually offer a variant of what we call amorphous heterogeneity to address this problem. This paper argues that modeling systems in this manner leads to unexpected and hard-to-analyze interactions between the communication mechanisms and proposes a more structured approach to heterogeneity, called hierarchical heterogeneity, to solve this problem. It proposes a model structure and semantic framework that support this form of heterogeneity, and discusses the issues arising from heterogeneous component interaction and the desire for component reuse. It introduces the notion of domain polymorphism as a way to address these issues."
]
}
|
1906.08157
|
2953190883
|
In this work we present a novel approach to solving concurrent multiagent planning problems in which several agents act in parallel. Our approach relies on a compilation from concurrent multiagent planning to classical planning, allowing us to use an off-the-shelf classical planner to solve the original multiagent problem. The solution can be directly interpreted as a concurrent plan that satisfies a given set of concurrency constraints, while avoiding the exponential blowup associated with concurrent actions. Our planner is the first to handle action effects that are conditional on what other agents are doing. Theoretically, we show that the compilation is sound and complete. Empirically, we show that our compilation can solve challenging multiagent planning problems that require concurrent actions.
|
JonssonR11 present a best-response approach for MAPs with concurrent actions, where each agent attempts to improve its own part of a concurrent plan while the actions of all other agents are fixed. However, their approach only serves to improve an existing concurrent plan, and is unable to compute an initial concurrent plan. FMAP @cite_12 is a partial-order planner that also allows agents to execute actions in parallel, but the authors do not present experimental results for MAP domains that require concurrency.
|
{
"cite_N": [
"@cite_12"
],
"mid": [
"2015886927"
],
"abstract": [
"This paper proposes FMAP (Forward Multi-Agent Planning), a fully-distributed multi-agent planning method that integrates planning and coordination. Although FMAP is specifically aimed at solving problems that require cooperation among agents, the flexibility of the domain-independent planning model allows FMAP to tackle multi-agent planning tasks of any type. In FMAP, agents jointly explore the plan space by building up refinement plans through a complete and flexible forward-chaining partial-order planner. The search is guided by h_DTG, a novel heuristic function that is based on the concepts of Domain Transition Graph and frontier state and is optimized to evaluate plans in distributed environments. Agents in FMAP apply an advanced privacy model that allows them to adequately keep private information while communicating only the data of the refinement plans that is relevant to each of the participating agents. Experimental results show that FMAP is a general-purpose approach that efficiently solves tightly-coupled domains that have specialized agents and cooperative goals as well as loosely-coupled problems. Specifically, the empirical evaluation shows that FMAP outperforms current MAP systems at solving complex planning tasks that are adapted from the International Planning Competition benchmarks."
]
}
|
1906.08157
|
2953190883
|
In this work we present a novel approach to solving concurrent multiagent planning problems in which several agents act in parallel. Our approach relies on a compilation from concurrent multiagent planning to classical planning, allowing us to use an off-the-shelf classical planner to solve the original multiagent problem. The solution can be directly interpreted as a concurrent plan that satisfies a given set of concurrency constraints, while avoiding the exponential blowup associated with concurrent actions. Our planner is the first to handle action effects that are conditional on what other agents are doing. Theoretically, we show that the compilation is sound and complete. Empirically, we show that our compilation can solve challenging multiagent planning problems that require concurrent actions.
|
BrafmanZoran14 extended the MAFS multiagent distributed algorithm @cite_6 to support actions requiring concurrency while preserving privacy. Messages are exchanged between agents in order to inform each other about the expansion of relevant states. Consequently, agents explore the search space together while preserving privacy. As pointed out by ShekharB18, this approach has two main problems: (1) it does not consider the issue of subsumed actions, and (2) it does not support concurrent actions that affect each other's preconditions.
|
{
"cite_N": [
"@cite_6"
],
"mid": [
"2098999170"
],
"abstract": [
"This paper deals with the problem of classical planning for multiple cooperative agents who have private information about their local state and capabilities they do not want to reveal. Two main approaches have recently been proposed to solve this type of problem - one is based on reduction to distributed constraint satisfaction, and the other on partial-order planning techniques. In classical single-agent planning, constraint-based and partial-order planning techniques are currently dominated by heuristic forward search. The question arises whether it is possible to formulate a distributed heuristic forward search algorithm for privacy-preserving classical multi-agent planning. Our work provides a positive answer to this question in the form of a general approach to distributed state-space search in which each agent performs only the part of the state expansion relevant to it. The resulting algorithms are simple and efficient - outperforming previous algorithms by orders of magnitude - while offering similar flexibility to that of forward-search algorithms for single-agent planning. Furthermore, one particular variant of our general approach yields a distributed version of the A* algorithm that is the first cost-optimal distributed algorithm for privacy-preserving planning."
]
}
|
1906.08157
|
2953190883
|
In this work we present a novel approach to solving concurrent multiagent planning problems in which several agents act in parallel. Our approach relies on a compilation from concurrent multiagent planning to classical planning, allowing us to use an off-the-shelf classical planner to solve the original multiagent problem. The solution can be directly interpreted as a concurrent plan that satisfies a given set of concurrency constraints, while avoiding the exponential blowup associated with concurrent actions. Our planner is the first to handle action effects that are conditional on what other agents are doing. Theoretically, we show that the compilation is sound and complete. Empirically, we show that our compilation can solve challenging multiagent planning problems that require concurrent actions.
|
Compilations from multiagent to classical planning have also been considered by other authors. muise-codmap15 proposed a transformation that respects privacy among agents; the resulting classical planning problem was then solved using a centralized classical planner, as in our approach. In addition, compilations to classical planning have also been used in temporal planning, obtaining state-of-the-art results in many of the International Planning Competition domains @cite_9 .
|
{
"cite_N": [
"@cite_9"
],
"mid": [
"2397290112"
],
"abstract": [
"In this paper we describe two novel algorithms for temporal planning. The first algorithm, TP, is an adaptation of the TEMPO algorithm. It compiles each temporal action into two classical actions, corresponding to the start and end of the temporal action, but handles the temporal constraints on actions through a modification of the Fast Downward planning system. The second algorithm, TPSHE, is a pure compilation from temporal to classical planning for the case in which required concurrency only appears in the form of single hard envelopes. We describe novel classes of temporal planning instances for which TPSHE is provably sound and complete. Compiling a temporal instance into a classical one gives a lot of freedom in terms of the planner or heuristic used to solve the instance. In experiments TPSHE significantly outperforms all planners from the temporal track of the International Planning Competition."
]
}
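The start/end compilation described in this abstract (each durative action becomes two instantaneous classical actions, with an "executing" fluent forcing the end action to follow its start) can be sketched schematically. The dictionary encoding below is a hypothetical illustration, not the TP/TPSHE implementation.

```python
def compile_temporal_action(name, pre_start, eff_start, pre_end, eff_end):
    """Split one durative action into instantaneous start/end actions.
    An 'executing' fluent added by the start action and required (then
    deleted) by the end action links the two halves."""
    token = f"executing-{name}"
    start = {"name": f"{name}-start",
             "pre": list(pre_start),
             "add": list(eff_start) + [token],
             "del": []}
    end = {"name": f"{name}-end",
           "pre": list(pre_end) + [token],
           "add": list(eff_end),
           "del": [token]}
    return start, end

start, end = compile_temporal_action(
    "move", ["at-A"], ["moving"], ["moving"], ["at-B"])
print(end["pre"])  # ['moving', 'executing-move']
```

The same token-based trick underlies the compilation used in this paper for concurrent multiagent actions: auxiliary fluents make an off-the-shelf classical planner respect ordering and concurrency constraints.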
|
1906.08339
|
2950450425
|
The health outcomes of high-need patients can be substantially influenced by the degree of patient engagement in their own care. The role of care managers includes that of enrolling patients into care programs and keeping them sufficiently engaged in the program, so that patients can attain various goals. The attainment of these goals is expected to improve the patients' health outcomes. In this paper, we present a real world data-driven method and the behavioral engagement scoring pipeline for scoring the engagement level of a patient in two regards: (1) Their interest in enrolling into a relevant care program, and (2) their interest and commitment to program goals. We use this score to predict a patient's propensity to respond (i.e., to a call for enrollment into a program, or to an assigned program goal). Using real-world care management data, we show that our scoring method successfully predicts patient engagement. We also show that we are able to provide interpretable insights to care managers, using prototypical patients as a point of reference, without sacrificing prediction performance.
|
To further differentiate engagement strategies from CMDS data, in this paper we particularly focus on methods that account for case-based reasoning to improve the interpretability of clustering models, e.g., selecting prototypical cases to represent the learned clusters @cite_29 . In particular, we incorporate locally supervised metric learning @cite_19 and prototypical case-based reasoning in a machine learning model to identify explainable engagement behavioral profiles and to produce personalized engagement scores.
|
{
"cite_N": [
"@cite_19",
"@cite_29"
],
"mid": [
"2147898714",
"2130485404"
],
"abstract": [
"Effective patient similarity assessment is important for clinical decision support. It enables the capture of past experience as manifested in the collective longitudinal medical records of patients to help clinicians assess the likely outcomes resulting from their decisions and actions. However, it is challenging to devise a patient similarity metric that is clinically relevant and semantically sound. Patient similarity is highly context sensitive: it depends on factors such as the disease, the particular stage of the disease, and co-morbidities. One way to discern the semantics in a particular context is to take advantage of physicians’ expert knowledge as reflected in labels assigned to some patients. In this paper we present a method that leverages localized supervised metric learning to effectively incorporate such expert knowledge to arrive at semantically sound patient similarity measures. Experiments using data obtained from the MIMIC II database demonstrate the effectiveness of this approach.",
"We present the Bayesian Case Model (BCM), a general framework for Bayesian case-based reasoning (CBR) and prototype classification and clustering. BCM brings the intuitive power of CBR to a Bayesian generative framework. The BCM learns prototypes, the \"quintessential\" observations that best represent clusters in a dataset, by performing joint inference on cluster labels, prototypes and important features. Simultaneously, BCM pursues sparsity by learning subspaces, the sets of features that play important roles in the characterization of the prototypes. The prototype and subspace representation provides quantitative benefits in interpretability while preserving classification accuracy. Human subject experiments verify statistically significant improvements to participants' understanding when using explanations produced by BCM, compared to those given by prior art."
]
}
|
1811.00256
|
2898813360
|
Human activity recognition has drawn considerable attention recently in the field of computer vision due to the development of commodity depth cameras, by which the human activity is represented as a sequence of 3D skeleton postures. Assuming human body 3D joint locations of an activity lie on a manifold, the problem of recognizing human activity is formulated as the computation of activity manifold-manifold distance (AMMD). In this paper, we first design an efficient division method to decompose a manifold into ordered continuous maximal linear patches (CMLPs) that denote meaningful action snippets of the action sequence. Then the CMLP is represented by its position (average value of points) and the first principal component, which specify the major posture and main evolving direction of an action snippet, respectively. Finally, we compute the distance between CMLPs by taking both the posture and direction into consideration. Based on these preparations, an intuitive distance measure that preserves the local order of action snippets is proposed to compute AMMD. The performance on two benchmark datasets demonstrates the effectiveness of the proposed approach.
|
Manifold-based representations and related algorithms have attracted much attention in image and video analysis. Taking the temporal dimension into consideration, @cite_12 exploited locality preserving projections to project a given sequence of moving silhouettes associated with an action video into a low-dimensional space. Modeling each image set as a manifold, @cite_25 formulated image set classification for face recognition as a problem of calculating the manifold-manifold distance (MMD). The authors extracted maximal linear patches (MLPs) to form a nonlinear manifold and integrated the distances between pairs of MLPs to compute MMD. Similar to image sets, in which each set comprises images of the same person under large variations, the human body 3D joint locations of an activity can be viewed as a non-linear manifold embedded in a higher-dimensional space. However, in this case, the MLP is not a proper decomposition for the activity manifold since it may disorder the geometric structure of the action sequence.
|
{
"cite_N": [
"@cite_25",
"@cite_12"
],
"mid": [
"2000771160",
"2106094637"
],
"abstract": [
"In this paper, we address the problem of classifying image sets for face recognition, where each set contains images belonging to the same subject and typically covering large variations. By modeling each image set as a manifold, we formulate the problem as the computation of the distance between two manifolds, called manifold-manifold distance (MMD). Since an image set can come in three pattern levels, point, subspace, and manifold, we systematically study the distance among the three levels and formulate them in a general multilevel MMD framework. Specifically, we express a manifold by a collection of local linear models, each depicted by a subspace. MMD is then converted to integrate the distances between pairs of subspaces from one of the involved manifolds. We theoretically and experimentally study several configurations of the ingredients of MMD. The proposed method is applied to the task of face recognition with image sets, where identification is achieved by seeking the minimum MMD from the probe to the gallery of image sets. Our experiments demonstrate that, as a general set similarity measure, MMD consistently outperforms other competing nondiscriminative methods and is also promisingly comparable to the state-of-the-art discriminative methods.",
"In this paper, we learn explicit representations for dynamic shape manifolds of moving humans for the task of action recognition. We exploit locality preserving projections (LPP) for dimensionality reduction, leading to a low-dimensional embedding of human movements. Given a sequence of moving silhouettes associated to an action video, by LPP, we project them into a low-dimensional space to characterize the spatiotemporal property of the action, as well as to preserve much of the geometric structure. To match the embedded action trajectories, the median Hausdorff distance or normalized spatiotemporal correlation is used for similarity measures. Action classification is then achieved in a nearest-neighbor framework. To evaluate the proposed method, extensive experiments have been carried out on a recent dataset including ten actions performed by nine different subjects. The experimental results show that the proposed method is able to not only recognize human actions effectively, but also considerably tolerate some challenging conditions, e.g., partial occlusion, low-quality videos, changes in viewpoints, scales, and clothes; within-class variations caused by different subjects with different physical build; styles of motion; etc"
]
}
|
1811.00266
|
2899273198
|
When reading a text, it is common to become stuck on unfamiliar words and phrases, such as polysemous words with novel senses, rarely used idioms, internet slang, or emerging entities. If we humans cannot figure out the meaning of those expressions from the immediate local context, we consult dictionaries for definitions or search documents or the web to find other global context to help in interpretation. Can machines help us do this work? Which type of context is more important for machines to solve the problem? To answer these questions, we undertake a task of describing a given phrase in natural language based on its local and global contexts. To solve this task, we propose a neural description model that consists of two context encoders and a description decoder. In contrast to the existing methods for non-standard English explanation [Ni+ 2017] and definition generation [Noraset+ 2017; Gadetsky+ 2018], our model appropriately takes important clues from both local and global contexts. Experimental results on three existing datasets (including WordNet, Oxford and Urban Dictionaries) and a dataset newly created from Wikipedia demonstrate the effectiveness of our method over previous work.
|
Our task is closely related to word sense disambiguation (WSD) @cite_11 , which identifies a pre-defined sense for the target word from its context. Although WSD can be used to solve our task by retrieving the definition sentence for the identified sense, it requires a substantial amount of training data to handle the different set of meanings of each word, and cannot handle words (or senses) that are not registered in the dictionary. Although some studies have attempted to detect novel senses of words for given contexts @cite_16 @cite_8 , they do not provide definition sentences. Our task avoids these difficulties in WSD by directly generating descriptions for phrases or words with their contexts. It also allows us to flexibly tailor a fine-grained definition to the specific context.
|
{
"cite_N": [
"@cite_8",
"@cite_16",
"@cite_11"
],
"mid": [
"2251383907",
"1972372808",
"1951840372"
],
"abstract": [
"Unsupervised word sense disambiguation (WSD) methods are an attractive approach to all-words WSD due to their non-reliance on expensive annotated data. Unsupervised estimates of sense frequency have been shown to be very useful for WSD due to the skewed nature of word sense distributions. This paper presents a fully unsupervised topic modelling-based approach to sense frequency estimation, which is highly portable to different corpora and sense inventories, in being applicable to any part of speech, and not requiring a hierarchical sense inventory, parsing or parallel text. We demonstrate the effectiveness of the method over the tasks of predominant sense learning and sense distribution acquisition, and also the novel tasks of detecting senses which aren’t attested in the corpus, and identifying novel senses in the corpus which aren’t captured in the sense inventory.",
"We address the problem of unknown word sense detection: the identification of corpus occurrences that are not covered by a given sense inventory. We model this as an instance of outlier detection, using a simple nearest neighbor-based approach to measuring the resemblance of a new item to a training set. In combination with a method that alleviates data sparseness by sharing training data across lemmas, the approach achieves a precision of 0.77 and recall of 0.82.",
"Systems and methods for word sense disambiguation, including discerning one or more senses or occurrences, distinguishing between senses or occurrences, and determining a meaning for a sense or occurrence of a subject term. In a collection of documents containing terms and a reference collection containing at least one meaning associated with a term, the method includes forming a vector space representation of terms and documents. In some embodiments, the vector space is a latent semantic index vector space. In some embodiments, occurrences are clustered to discern or distinguish a sense of a term. In preferred embodiments, meaning of a sense or occurrence is assigned based on either correlation with an external reference source, or proximity to a reference source that has been indexed into the space."
]
}
|
1811.00266
|
2899273198
|
When reading a text, it is common to become stuck on unfamiliar words and phrases, such as polysemous words with novel senses, rarely used idioms, internet slang, or emerging entities. If we humans cannot figure out the meaning of those expressions from the immediate local context, we consult dictionaries for definitions or search documents or the web to find other global context to help in interpretation. Can machines help us do this work? Which type of context is more important for machines to solve the problem? To answer these questions, we undertake a task of describing a given phrase in natural language based on its local and global contexts. To solve this task, we propose a neural description model that consists of two context encoders and a description decoder. In contrast to the existing methods for non-standard English explanation [Ni+ 2017] and definition generation [Noraset+ 2017; Gadetsky+ 2018], our model appropriately takes important clues from both local and global contexts. Experimental results on three existing datasets (including WordNet, Oxford and Urban Dictionaries) and a dataset newly created from Wikipedia demonstrate the effectiveness of our method over previous work.
|
Paraphrasing @cite_10 @cite_13 (or text simplification @cite_18 ) can be used to rephrase words with unknown senses. However, the targets of paraphrase acquisition are words (or phrases) with no specified context. Although several studies @cite_19 @cite_2 @cite_5 consider sub-sentential (context-sensitive) paraphrases, they do not aim to obtain a definition-like description as a paraphrase of a word.
|
{
"cite_N": [
"@cite_18",
"@cite_10",
"@cite_19",
"@cite_2",
"@cite_5",
"@cite_13"
],
"mid": [
"",
"2145815109",
"1567833515",
"2009987829",
"",
"2051593977"
],
"abstract": [
"",
"Paraphrasing methods recognize, generate, or extract phrases, sentences, or longer natural language expressions that convey almost the same information. Textual entailment methods, on the other hand, recognize, generate, or extract pairs of natural language expressions, such that a human who reads (and trusts) the first element of a pair would most likely infer that the other element is also true. Paraphrasing can be seen as bidirectional textual entailment and methods from the two areas are often similar. Both kinds of methods are useful, at least in principle, in a wide range of natural language processing applications, including question answering, summarization, text generation, and machine translation. We summarize key ideas from the two areas by considering in turn recognition, generation, and extraction methods, also pointing to prominent articles and resources.",
"Lexical paraphrasing is an inherently context sensitive problem because a word's meaning depends on context. Most paraphrasing work finds patterns and templates that can replace other patterns or templates in some context, but we are attempting to make decisions for a specific context. In this paper we develop a global classifier that takes a word vand its context, along with a candidate word u, and determines whether ucan replace vin the given context while maintaining the original meaning. We develop an unsupervised, bootstrapped, learning approach to this problem. Key to our approach is the use of a very large amount of unlabeled data to derive a reliable supervision signal that is then used to train a supervised learning algorithm. We demonstrate that our approach performs significantly better than state-of-the-art paraphrasing approaches, and generalizes well to unseen pairs of words.",
"The ability to generate or to recognize paraphrases is key to the vast majority of NLP applications. As correctly exploiting context during translation has been shown to be successful, using context information for paraphrasing could also lead to improved performance. In this article, we adopt the pivot approach based on parallel multilingual corpora proposed by (Bannard and Callison-Burch, 2005), which finds short paraphrases by finding appropriate pivot phrases in one or several auxiliary languages and back-translating these pivot phrases into the original language. We show how context can be exploited both when attempting to find pivot phrases, and when looking for the most appropriate paraphrase in the original subsentential \"envelope\". This framework allows the use of paraphrasing units ranging from words to large sub-sentential fragments for which context information from the sentence can be successfully exploited. We report experiments on a text revision task, and show that in these experiments our contextual sub-sentential paraphrasing system outperforms a strong baseline system.",
"",
"The task of paraphrasing is inherently familiar to speakers of all languages. Moreover, the task of automatically generating or extracting semantic equivalences for the various units of language-words, phrases, and sentences-is an important part of natural language processing (NLP) and is being increasingly employed to improve the performance of several NLP applications. In this article, we attempt to conduct a comprehensive and application-independent survey of data-driven phrasal and sentential paraphrase generation methods, while also conveying an appreciation for the importance and potential use of paraphrases in the field of NLP research. Recent work done in manual and automatic construction of paraphrase corpora is also examined. We also discuss the strategies used for evaluating paraphrase generation techniques and briefly explore some future trends in paraphrase generation."
]
}
|
1811.00266
|
2899273198
|
When reading a text, it is common to become stuck on unfamiliar words and phrases, such as polysemous words with novel senses, rarely used idioms, internet slang, or emerging entities. If we humans cannot figure out the meaning of those expressions from the immediate local context, we consult dictionaries for definitions or search documents or the web to find other global context to help in interpretation. Can machines help us do this work? Which type of context is more important for machines to solve the problem? To answer these questions, we undertake a task of describing a given phrase in natural language based on its local and global contexts. To solve this task, we propose a neural description model that consists of two context encoders and a description decoder. In contrast to the existing methods for non-standard English explanation [Ni+ 2017] and definition generation [Noraset+ 2017; Gadetsky+ 2018], our model appropriately takes important clues from both local and global contexts. Experimental results on three existing datasets (including WordNet, Oxford and Urban Dictionaries) and a dataset newly created from Wikipedia demonstrate the effectiveness of our method over previous work.
|
Our task of describing a phrase with its given context is a generalization of these three tasks @cite_12 @cite_17 @cite_3 , and the proposed method naturally utilizes both the local and global contexts of the word in question.
|
{
"cite_N": [
"@cite_3",
"@cite_12",
"@cite_17"
],
"mid": [
"2798287132",
"2116261113",
"2756725603"
],
"abstract": [
"",
"Long short-term memory (LSTM) can solve many tasks not solvable by previous learning algorithms for recurrent neural networks (RNNs). We identify a weakness of LSTM networks processing continual input streams without explicitly marked sequence ends. Without resets, the internal state values may grow indefinitely and eventually cause the network to break down. Our remedy is an adaptive \"forget gate\" that enables an LSTM cell to learn to reset itself at appropriate times, thus releasing internal resources. We review an illustrative benchmark problem on which standard LSTM outperforms other RNN algorithms. All algorithms (including LSTM) fail to solve a continual version of that problem. LSTM with forget gates, however, easily solves it in an elegant way.",
"We describe a data-driven approach for automatically explaining new, non-standard English expressions in a given sentence, building on a large dataset that includes 15 years of crowdsourced examples from UrbanDictionary.com. Unlike prior studies that focus on matching keywords from a slang dictionary, we investigate the possibility of learning a neural sequence-to-sequence model that generates explanations of unseen non-standard English expressions given context. We propose a dual encoder approach---a word-level encoder learns the representation of context, and a second character-level encoder to learn the hidden representation of the target non-standard expression. Our model can produce reasonable definitions of new non-standard English expressions given their context with certain confidence."
]
}
|
1811.00145
|
2891468099
|
While recent developments in autonomous vehicle (AV) technology highlight substantial progress, we lack tools for rigorous and scalable testing. Real-world testing, the @math evaluation environment, places the public in danger, and, due to the rare nature of accidents, will require billions of miles in order to statistically validate performance claims. We implement a simulation framework that can test an entire modern autonomous driving system, including, in particular, systems that employ deep-learning perception and control algorithms. Using adaptive importance-sampling methods to accelerate rare-event probability evaluation, we estimate the probability of an accident under a base distribution governing standard traffic behavior. We demonstrate our framework on a highway scenario, accelerating system evaluation by @math - @math times over naive Monte Carlo sampling methods and @math - @math times (where @math is the number of processors) over real-world testing.
|
AV testing generally consists of three paradigms. The first, largely attributable to regulatory efforts, uses a finite set of basic competencies (e.g., the Euro NCAP Test Protocol @cite_14 ); while this methodology is successful in designing safety features such as airbags and seat-belts, the non-adaptive nature of static testing is less effective for the complex software systems found in AVs. Alternatively, real-world testing---deployment of vehicles with human oversight---exposes the vehicle to a wider variety of unpredictable test conditions. However, as we outlined above, these methods pose a danger to the public and require a prohibitive number of driving hours due to the rare nature of accidents @cite_36 . Simulation-based falsification (in our context, simply finding any crash) has also been successfully utilized @cite_37 ; this approach does not maintain a link to the likelihood of the occurrence of a particular event, which we believe to be key in acting to prioritize and correct AV behavior.
|
{
"cite_N": [
"@cite_36",
"@cite_37",
"@cite_14"
],
"mid": [
"2809947384",
"2564317698",
"347283462"
],
"abstract": [
"Industrial cyber-physical systems are hybrid systems with strict safety requirements. Despite not having a formal semantics, most of these systems are modeled using Stateflow Simulink for mainly two reasons: (1) it is easier to model, test, and simulate using these tools, and (2) dynamics of these systems are not supported by most other tools. Furthermore, with the ever growing complexity of cyber-physical systems, grows the gap between what can be modeled using an automatic formal verification tool and models of industrial cyber-physical systems. In this paper, we present a simple formal model for self-deriving cars. While after some simplification, safety of this system has already been proven manually, to the best of our knowledge, no automatic formal verification tool supports its dynamics. We hope this serves as a challenge problem for formal verification tools targeting industrial applications.",
"This paper proposes an approach to automatically generating test cases for testing motion controllers of autonomous vehicular systems. Test scenarios may consist of single or multiple vehicles under test at the same time. Tests are performed in simulation environments. The approach is based on using a robustness metric for evaluating simulation outcomes as a cost function. Initial states and inputs are updated by stochastic optimization methods between the tests for achieving smaller robustness values. The test generation framework has been implemented in the toolbox S-TaLiRo. The proposed framework's ability to generate interesting test cases is demonstrated by a case study.",
"Euro NCAP has released its updated rating scheme for 2013-2016 that outlines, amongst other technologies, the implementation of Autonomous Emergency Braking (AEB) technologies within the overall rating scheme. Three types of AEB technologies will be included in the rating scheme, starting with low speed car-to-car AEB City and higher speed car-to-car AEB Inter-Urban in 2014, followed two years later by AEB Pedestrian. In 2011 the Primary Safety Technical Working Group (PNCAP TWG) started working on AEB protocols, where Euro NCAP members have contributed to the development of the Test and Assessment protocols. They have been developed in a relatively short time, by finding the commonalities and discussing the differences between different initiatives from industry, insurers and others that were the main source of input to the working group. Recently, both AEB City and AEB Inter-Urban protocols were finalized. The test protocol details a series of tests, following an incremental speed approach for systems with AEB and Forward Collision Warning (FCW) functionality, and specifies in detail the target vehicle to ensure the highest level of reproducibility and repeatability. The assessment protocols identify the scoring principle and relative weight of each scenario for inclusion in the overall rating scheme. This paper describes both protocols."
]
}
|
1811.00145
|
2891468099
|
While recent developments in autonomous vehicle (AV) technology highlight substantial progress, we lack tools for rigorous and scalable testing. Real-world testing, the @math evaluation environment, places the public in danger, and, due to the rare nature of accidents, will require billions of miles in order to statistically validate performance claims. We implement a simulation framework that can test an entire modern autonomous driving system, including, in particular, systems that employ deep-learning perception and control algorithms. Using adaptive importance-sampling methods to accelerate rare-event probability evaluation, we estimate the probability of an accident under a base distribution governing standard traffic behavior. We demonstrate our framework on a highway scenario, accelerating system evaluation by @math - @math times over naive Monte Carlo sampling methods and @math - @math times (where @math is the number of processors) over real-world testing.
|
Even given a consistent and complete notion of blame, verification remains highly intractable from a computational standpoint. Efficient algorithms only exist for restricted classes of systems in the domain of AVs, and they are fundamentally difficult to scale. Specifically, AVs---unlike previous successful applications of verification methods to application domains such as microprocessors @cite_39 ---include both continuous and discrete dynamics. This class of dynamics falls within the purview of hybrid systems @cite_12 , for which exhaustive verification is largely undecidable @cite_33 .
|
{
"cite_N": [
"@cite_33",
"@cite_12",
"@cite_39"
],
"mid": [
"2085838366",
"2144040052",
""
],
"abstract": [
"Hybrid automata model systems with both digital and analog components, such as embedded control programs. Many verification tasks for such programs can be expressed as reachability problems for hybrid automata. By improving on previous decidability and undecidability results, we identify a boundary between decidability and undecidability for the reachability problem of hybrid automata. On the positive side, we give an (optimal) PSPACE reachability algorithm for the case of initialized rectangular automata, where all analog variables follow independent trajectories within piecewise-linear envelopes and are reinitialized whenever the envelope changes. Our algorithm is based on the construction of a timed automaton that contains all reachability information about a given initialized rectangular automaton. The translation has practical significance for verification, because it guarantees the termination of symbolic procedures for the reachability analysis of initialized rectangular automata. The translation also preserves the?-languages of initialized rectangular automata with bounded nondeterminism. On the negative side, we show that several slight generalizations of initialized rectangular automata lead to an undecidable reachability problem. In particular, we prove that the reachability problem is undecidable for timed automata augmented with a single stopwatch.",
"The aim of this course is to introduce some fundamental concepts from the area of hybrid systems, that is dynamical systems that involve the interaction of continuous (real valued) states and discrete (finite valued) states. Applications where these types of dynamics play a prominent role will be highlighted. We will introduce general methods for investigating properties such as existence of solutions, reachability and decidability of hybrid systems. The methods will be demonstrated on the motivating applications. Students who successfully complete the course should be able to appreciate the diversity of phenomena that arise in hybrid systems and how discrete “discrete” entities and concepts such as automata, decidability and bisimulation can coexist with continuous entities and concepts, such as differential equations.",
""
]
}
|
1811.00145
|
2891468099
|
While recent developments in autonomous vehicle (AV) technology highlight substantial progress, we lack tools for rigorous and scalable testing. Real-world testing, the @math evaluation environment, places the public in danger, and, due to the rare nature of accidents, will require billions of miles in order to statistically validate performance claims. We implement a simulation framework that can test an entire modern autonomous driving system, including, in particular, systems that employ deep-learning perception and control algorithms. Using adaptive importance-sampling methods to accelerate rare-event probability evaluation, we estimate the probability of an accident under a base distribution governing standard traffic behavior. We demonstrate our framework on a highway scenario, accelerating system evaluation by @math - @math times over naive Monte Carlo sampling methods and @math - @math times (where @math is the number of processors) over real-world testing.
|
Verifying individual components of the perception pipeline, even as standalone systems, is a nascent, active area of research (see @cite_55 @cite_31 @cite_21 and many others). Current subsystem verification techniques for deep neural networks @cite_32 @cite_16 @cite_5 do not scale to state-of-the-art models and largely investigate the robustness of the network with respect to small perturbations of a single sample. There are two key assumptions in these works: that the label of the input is unchanged within the radius of allowable perturbations, and that the resulting expansion of the test set covers a meaningful portion of possible inputs to the network. Unfortunately, for realistic cases in AVs it is likely that perturbations to the state of the world (which in turn generates an image) change the label. Furthermore, the combinatorial nature of scenario configurations casts serious doubt on any claims of coverage.
|
{
"cite_N": [
"@cite_55",
"@cite_21",
"@cite_32",
"@cite_5",
"@cite_31",
"@cite_16"
],
"mid": [
"2951635495",
"2709553318",
"2280163991",
"2950499086",
"",
"2950147618"
],
"abstract": [
"We give algorithms with provable guarantees that learn a class of deep nets in the generative model view popularized by Hinton and others. Our generative model is an @math node multilayer neural net that has degree at most @math for some @math and each edge has a random edge weight in @math . Our algorithm learns almost all networks in this class with polynomial running time. The sample complexity is quadratic or cubic depending upon the details of the model. The algorithm uses layerwise learning. It is based upon a novel idea of observing correlations among features and using these to infer the underlying edge structure via a global graph recovery procedure. The analysis of the algorithm reveals interesting structure of neural networks with random edge weights.",
"This paper presents a margin-based multiclass generalization bound for neural networks that scales with their margin-normalized \"spectral complexity\": their Lipschitz constant, meaning the product of the spectral norms of the weight matrices, times a certain correction factor. This bound is empirically investigated for a standard AlexNet network trained with SGD on the mnist and cifar10 datasets, with both original and random labels; the bound, the Lipschitz constants, and the excess risks are all in direct correlation, suggesting both that SGD selects predictors whose complexity scales with the difficulty of the learning task, and secondly that the presented bound is sensitive to this complexity.",
"Efficient exploration in complex environments remains a major challenge for reinforcement learning. We propose bootstrapped DQN, a simple algorithm that explores in a computationally and statistically efficient manner through use of randomized value functions. Unlike dithering strategies such as epsilon-greedy exploration, bootstrapped DQN carries out temporally-extended (or deep) exploration; this can lead to exponentially faster learning. We demonstrate these benefits in complex stochastic MDPs and in the large-scale Arcade Learning Environment. Bootstrapped DQN substantially improves learning times and performance across most Atari games.",
"Neural networks have demonstrated considerable success on a wide variety of real-world problems. However, networks trained only to optimize for training accuracy can often be fooled by adversarial examples - slightly perturbed inputs that are misclassified with high confidence. Verification of networks enables us to gauge their vulnerability to such adversarial examples. We formulate verification of piecewise-linear neural networks as a mixed integer program. On a representative task of finding minimum adversarial distortions, our verifier is two to three orders of magnitude quicker than the state-of-the-art. We achieve this computational speedup via tight formulations for non-linearities, as well as a novel presolve algorithm that makes full use of all information available. The computational speedup allows us to verify properties on convolutional networks with an order of magnitude more ReLUs than networks previously verified by any complete verifier. In particular, we determine for the first time the exact adversarial accuracy of an MNIST classifier to perturbations with bounded @math norm @math : for this classifier, we find an adversarial example for 4.38% of samples, and a certificate of robustness (to perturbations with bounded norm) for the remainder. Across all robust training procedures and network architectures considered, we are able to certify more samples than the state-of-the-art and find more adversarial examples than a strong first-order attack.",
"",
"Deep neural networks have emerged as a widely used and effective means for tackling complex, real-world problems. However, a major obstacle in applying them to safety-critical systems is the great difficulty in providing formal guarantees about their behavior. We present a novel, scalable, and efficient technique for verifying properties of deep neural networks (or providing counter-examples). The technique is based on the simplex method, extended to handle the non-convex Rectified Linear Unit (ReLU) activation function, which is a crucial ingredient in many modern neural networks. The verification procedure tackles neural networks as a whole, without making any simplifying assumptions. We evaluated our technique on a prototype deep neural network implementation of the next-generation airborne collision avoidance system for unmanned aircraft (ACAS Xu). Results show that our technique can successfully prove properties of networks that are an order of magnitude larger than the largest networks verified using existing methods."
]
}
|
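The verifiers cited in the row above bound a network's outputs under small input perturbations. As a hedged illustration only (not the algorithm of any cited tool), the following sketches interval bound propagation through a single affine-plus-ReLU layer; the toy weights `W` and `b` and the unit input box are assumptions chosen for the example.

```python
# Interval bound propagation (IBP): push an input box [lo, hi]
# through y = ReLU(W x + b) and obtain a sound output box.
def affine_bounds(W, b, lo, hi):
    out_lo, out_hi = [], []
    for row, bias in zip(W, b):
        # Each weight contributes its extreme value depending on its sign.
        l = bias + sum(w * (lo[j] if w >= 0 else hi[j]) for j, w in enumerate(row))
        h = bias + sum(w * (hi[j] if w >= 0 else lo[j]) for j, w in enumerate(row))
        out_lo.append(l)
        out_hi.append(h)
    return out_lo, out_hi

def relu_bounds(lo, hi):
    # ReLU is monotone, so it maps boxes to boxes elementwise.
    return [max(0.0, v) for v in lo], [max(0.0, v) for v in hi]

W = [[1.0, -1.0], [0.5, 0.5]]   # toy layer weights (illustrative assumption)
b = [0.0, -0.25]
lo, hi = affine_bounds(W, b, [0.0, 0.0], [1.0, 1.0])  # input box [0, 1]^2
lo, hi = relu_bounds(lo, hi)
```

Complete verifiers tighten these loose interval bounds (e.g., with mixed-integer or simplex-based reasoning), but the box-propagation step above is the common starting point.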
1811.00145
|
2891468099
|
While recent developments in autonomous vehicle (AV) technology highlight substantial progress, we lack tools for rigorous and scalable testing. Real-world testing, the @math evaluation environment, places the public in danger, and, due to the rare nature of accidents, will require billions of miles in order to statistically validate performance claims. We implement a simulation framework that can test an entire modern autonomous driving system, including, in particular, systems that employ deep-learning perception and control algorithms. Using adaptive importance-sampling methods to accelerate rare-event probability evaluation, we estimate the probability of an accident under a base distribution governing standard traffic behavior. We demonstrate our framework on a highway scenario, accelerating system evaluation by @math - @math times over naive Monte Carlo sampling methods and @math - @math times (where @math is the number of processors) over real-world testing.
|
To summarize, a fundamental tradeoff emerges when comparing the requirements of our risk-based framework to other testing paradigms, such as real-world testing or formal verification. Real-world testing endangers the public but is still in some sense a gold standard. Verified subsystems provide evidence that the AV should drive safely even if the estimated distribution shifts, but verification techniques are limited by computational intractability as well as the need for both white-box models and the completeness of specifications that assign blame ( @cite_44 ). In turn, our risk-based framework is most useful when the base distribution @math is accurate, but even when @math is misspecified, our adaptive importance sampling techniques can still efficiently identify dangerous scenarios, especially those that may be missed by verification methods assigning blame. Our framework offers significant speedups over real-world testing and allows efficient evaluation of black-box AV input/output behavior, providing a powerful tool to aid in the design of safe AVs.
|
{
"cite_N": [
"@cite_44"
],
"mid": [
"2749747771"
],
"abstract": [
"In recent years, car makers and tech companies have been racing towards self driving cars. It seems that the main parameter in this race is who will have the first car on the road. The goal of this paper is to add to the equation two additional crucial parameters. The first is standardization of safety assurance --- what are the minimal requirements that every self-driving car must satisfy, and how can we verify these requirements. The second parameter is scalability --- engineering solutions that lead to unleashed costs will not scale to millions of cars, which will push interest in this field into a niche academic corner, and drive the entire field into a \"winter of autonomous driving\". In the first part of the paper we propose a white-box, interpretable, mathematical model for safety assurance, which we call Responsibility-Sensitive Safety (RSS). In the second part we describe a design of a system that adheres to our safety assurance requirements and is scalable to millions of cars."
]
}
|
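The rare-event acceleration described in the row above can be sketched with plain (non-adaptive) importance sampling. The Gaussian model, the proposal shift, and the sample count below are illustrative assumptions, not the paper's simulator: we estimate the tiny tail probability P(X > 4) for a standard normal X by sampling from a proposal centered on the rare region and reweighting by the likelihood ratio.

```python
import math
import random

def rare_event_prob(threshold, shift, n, seed=0):
    # Estimate P(X > threshold) for X ~ N(0, 1) by sampling from the
    # shifted proposal N(shift, 1) and reweighting each hit by the
    # likelihood ratio p(x)/q(x) = exp(-shift*x + shift^2/2).
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(shift, 1.0)
        if x > threshold:
            total += math.exp(-shift * x + shift * shift / 2.0)
    return total / n

# Naive Monte Carlo would need hundreds of millions of draws to see
# this event (true value is about 3.17e-5); the shifted proposal hits
# the rare region on roughly half of its draws.
est = rare_event_prob(4.0, 4.0, 100_000)
```

Adaptive schemes like the one in the row go further by tuning the proposal online, but the reweighting identity is the same.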
1811.00228
|
2964352056
|
The recent advances of deep learning in both computer vision (CV) and natural language processing (NLP) provide us with a new way of understanding semantics, by which we can deal with more challenging tasks such as automatic description generation from natural images. In this challenge, the encoder-decoder framework has achieved promising performance when a convolutional neural network (CNN) is used as image encoder and a recurrent neural network (RNN) as decoder. In this paper, we introduce a sequential guiding network that guides the decoder during word generation. The new model is an extension of the encoder-decoder framework with attention that has an additional guiding long short-term memory (LSTM) and can be trained in an end-to-end manner by using image-description pairs. We validate our approach by conducting extensive experiments on a benchmark dataset, i.e., MS COCO Captions. The proposed model achieves significant improvements compared to other state-of-the-art deep learning models.
|
A pure sequence-to-sequence architecture for image captioning is proposed in @cite_38 . Different from previous approaches, their model represents images as a sequence of detected objects and a is introduced to help the model focus on important objects. While resulting in a more complex architecture, their approach claims state-of-the-art results in all metrics. Instead of training via (penalized) maximum likelihood estimation, some recent works use Policy Gradient (PG) methods to directly optimize the non-differentiable testing metrics, claiming a boost in terms of performance measures. While @cite_8 optimizes for the standard CIDEr metric, @cite_18 proposed to optimize a new testing metric, SPIDEr, a linear combination of CIDEr @cite_29 and SPICE @cite_35 , which they found better correlated with human judgment. However, in this line of work, it is not yet clear whether the improvement in testing metrics results in captions of better quality.
|
{
"cite_N": [
"@cite_38",
"@cite_18",
"@cite_35",
"@cite_8",
"@cite_29"
],
"mid": [
"2964168617",
"2599772929",
"2506483933",
"2963084599",
"1956340063"
],
"abstract": [
"",
"Current image captioning methods are usually trained via maximum likelihood estimation. However, the log-likelihood score of a caption does not correlate well with human assessments of quality. Standard syntactic evaluation metrics, such as BLEU, METEOR and ROUGE, are also not well correlated. The newer SPICE and CIDEr metrics are better correlated, but have traditionally been hard to optimize for. In this paper, we show how to use a policy gradient (PG) method to directly optimize a linear combination of SPICE and CIDEr (a combination we call SPIDEr): the SPICE score ensures our captions are semantically faithful to the image, while CIDEr score ensures our captions are syntactically fluent. The PG method we propose improves on the prior MIXER approach, by using Monte Carlo rollouts instead of mixing MLE training with PG. We show empirically that our algorithm leads to easier optimization and improved results compared to MIXER. Finally, we show that using our PG method we can optimize any of the metrics, including the proposed SPIDEr metric which results in image captions that are strongly preferred by human raters compared to captions generated by the same model but trained to optimize MLE or the COCO metrics.",
"There is considerable interest in the task of automatically generating image captions. However, evaluation is challenging. Existing automatic evaluation metrics are primarily sensitive to n-gram overlap, which is neither necessary nor sufficient for the task of simulating human judgment. We hypothesize that semantic propositional content is an important component of human caption evaluation, and propose a new automated caption evaluation metric defined over scene graphs coined SPICE. Extensive evaluations across a range of models and datasets indicate that SPICE captures human judgments over model-generated captions better than other automatic metrics (e.g., system-level correlation of 0.88 with human judgments on the MS COCO dataset, versus 0.43 for CIDEr and 0.53 for METEOR). Furthermore, SPICE can answer questions such as \"which caption-generator best understands colors?\" and \"can caption-generators count?\"",
"Recently it has been shown that policy-gradient methods for reinforcement learning can be utilized to train deep end-to-end systems directly on non-differentiable metrics for the task at hand. In this paper we consider the problem of optimizing image captioning systems using reinforcement learning, and show that by carefully optimizing our systems using the test metrics of the MSCOCO task, significant gains in performance can be realized. Our systems are built using a new optimization approach that we call self-critical sequence training (SCST). SCST is a form of the popular REINFORCE algorithm that, rather than estimating a baseline to normalize the rewards and reduce variance, utilizes the output of its own test-time inference algorithm to normalize the rewards it experiences. Using this approach, estimating the reward signal (as actor-critic methods must do) and estimating normalization (as REINFORCE algorithms typically do) is avoided, while at the same time harmonizing the model with respect to its test-time inference procedure. Empirically we find that directly optimizing the CIDEr metric with SCST and greedy decoding at test-time is highly effective. Our results on the MSCOCO evaluation server establish a new state-of-the-art on the task, improving the best result in terms of CIDEr from 104.9 to 114.7.",
"Automatically describing an image with a sentence is a long-standing challenge in computer vision and natural language processing. Due to recent progress in object detection, attribute classification, action recognition, etc., there is renewed interest in this area. However, evaluating the quality of descriptions has proven to be challenging. We propose a novel paradigm for evaluating image descriptions that uses human consensus. This paradigm consists of three main parts: a new triplet-based method of collecting human annotations to measure consensus, a new automated metric that captures consensus, and two new datasets: PASCAL-50S and ABSTRACT-50S that contain 50 sentences describing each image. Our simple metric captures human judgment of consensus better than existing metrics across sentences generated by various sources. We also evaluate five state-of-the-art image description approaches using this new protocol and provide a benchmark for future comparisons. A version of CIDEr named CIDEr-D is available as a part of MS COCO evaluation server to enable systematic evaluation and benchmarking."
]
}
|
1811.00240
|
2899091287
|
We propose a multilingual model to recognize Big Five Personality traits from text data in four different languages: English, Spanish, Dutch and Italian. Our analysis shows that words having a similar semantic meaning in different languages do not necessarily correspond to the same personality traits. Therefore, we propose a personality alignment method, GlobalTrait, which has a mapping for each trait from the source language to the target language (English), such that words that correlate positively to each trait are close together in the multilingual vector space. Using these aligned embeddings for training, we can transfer personality related training features from high-resource languages such as English to other low-resource languages, and get better multilingual results, when compared to using simple monolingual and unaligned multilingual embeddings. We achieve an average F-score increase (across all three languages except English) from 65 to 73.4 (+8.4), when comparing our monolingual model to multilingual using CNN with personality aligned embeddings. We also show relatively good performance in the regression tasks, and better classification results when evaluating our model on a separate Chinese dataset.
|
Deep Learning models such as Convolutional Neural Networks (CNNs) have gained popularity in the task of text classification @cite_7 @cite_8 . This is because CNNs are good at capturing text features via its convolution operation, which can be applied on the text by taking the distributed representation of the words, called word embeddings, as input. Learning such distributed representation comes from the hypothesis that words that appear in similar contexts have similar meaning @cite_6 . Different works have been carried out in the past to learn such representations of words, such as @cite_12 @cite_27 , and more recently @cite_19 . Cross-lingual or multilingual word embeddings try to capture such semantic information of words across two or more languages, such that the words that have similar meaning in different languages are close together in the vector space @cite_15 @cite_0 . For our task we use a more recent approach @cite_22 , which does not require parallel data and learns a mapping from the source language embedding space to the target language in an unsupervised fashion.
|
{
"cite_N": [
"@cite_22",
"@cite_7",
"@cite_8",
"@cite_6",
"@cite_0",
"@cite_19",
"@cite_27",
"@cite_15",
"@cite_12"
],
"mid": [
"2762484717",
"2120615054",
"2949541494",
"",
"",
"2952566282",
"2250539671",
"342285082",
"2950133940"
],
"abstract": [
"State-of-the-art methods for learning cross-lingual word embeddings have relied on bilingual dictionaries or parallel corpora. Recent studies showed that the need for parallel data supervision can be alleviated with character-level information. While these methods showed encouraging results, they are not on par with their supervised counterparts and are limited to pairs of languages sharing a common alphabet. In this work, we show that we can build a bilingual dictionary between two languages without using any parallel corpora, by aligning monolingual word embedding spaces in an unsupervised way. Without using any character information, our model even outperforms existing supervised methods on cross-lingual tasks for some language pairs. Our experiments demonstrate that our method works very well also for distant language pairs, like English-Russian or English-Chinese. We finally describe experiments on the English-Esperanto low-resource language pair, on which there only exists a limited amount of parallel data, to show the potential impact of our method in fully unsupervised machine translation. Our code, embeddings and dictionaries are publicly available.",
"The ability to accurately represent sentences is central to language understanding. We describe a convolutional architecture dubbed the Dynamic Convolutional Neural Network (DCNN) that we adopt for the semantic modelling of sentences. The network uses Dynamic k-Max Pooling, a global pooling operation over linear sequences. The network handles input sentences of varying length and induces a feature graph over the sentence that is capable of explicitly capturing short and long-range relations. The network does not rely on a parse tree and is easily applicable to any language. We test the DCNN in four experiments: small scale binary and multi-class sentiment prediction, six-way question classification and Twitter sentiment prediction by distant supervision. The network achieves excellent performance in the first three tasks and a greater than 25% error reduction in the last task with respect to the strongest baseline.",
"We report on a series of experiments with convolutional neural networks (CNN) trained on top of pre-trained word vectors for sentence-level classification tasks. We show that a simple CNN with little hyperparameter tuning and static vectors achieves excellent results on multiple benchmarks. Learning task-specific vectors through fine-tuning offers further gains in performance. We additionally propose a simple modification to the architecture to allow for the use of both task-specific and static vectors. The CNN models discussed herein improve upon the state of the art on 4 out of 7 tasks, which include sentiment analysis and question classification.",
"",
"",
"Continuous word representations, trained on large unlabeled corpora are useful for many natural language processing tasks. Popular models that learn such representations ignore the morphology of words, by assigning a distinct vector to each word. This is a limitation, especially for languages with large vocabularies and many rare words. In this paper, we propose a new approach based on the skipgram model, where each word is represented as a bag of character @math -grams. A vector representation is associated to each character @math -gram; words being represented as the sum of these representations. Our method is fast, allowing to train models on large corpora quickly and allows us to compute word representations for words that did not appear in the training data. We evaluate our word representations on nine different languages, both on word similarity and analogy tasks. By comparing to recently proposed morphological word representations, we show that our vectors achieve state-of-the-art performance on these tasks.",
"Recent methods for learning vector space representations of words have succeeded in capturing fine-grained semantic and syntactic regularities using vector arithmetic, but the origin of these regularities has remained opaque. We analyze and make explicit the model properties needed for such regularities to emerge in word vectors. The result is a new global logbilinear regression model that combines the advantages of the two major model families in the literature: global matrix factorization and local context window methods. Our model efficiently leverages statistical information by training only on the nonzero elements in a word-word cooccurrence matrix, rather than on the entire sparse matrix or on individual context windows in a large corpus. The model produces a vector space with meaningful substructure, as evidenced by its performance of 75% on a recent word analogy task. It also outperforms related models on similarity tasks and named entity recognition.",
"The distributional hypothesis of Harris (1954), according to which the meaning of words is evidenced by the contexts they occur in, has motivated several effective techniques for obtaining vector space semantic representations of words using unannotated text corpora. This paper argues that lexico-semantic content should additionally be invariant across languages and proposes a simple technique based on canonical correlation analysis (CCA) for incorporating multilingual evidence into vectors generated monolingually. We evaluate the resulting word representations on standard lexical semantic evaluation tasks and show that our method produces substantially better semantic representations than monolingual techniques.",
"The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of \"Canada\" and \"Air\" cannot be easily combined to obtain \"Air Canada\". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible."
]
}
|
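The cross-lingual embedding alignment discussed in the row above (mapping a source embedding space onto a target one) has a closed-form solution in the orthogonal case. As a hedged sketch under strong simplifying assumptions: the toy 2-D vectors below stand in for word embeddings (real systems align hundreds of dimensions via SVD), and only a rotation is fitted.

```python
import math

def best_rotation_2d(src, tgt):
    # Closed-form orthogonal Procrustes in 2-D: the rotation angle that
    # minimizes sum ||R x_i - y_i||^2 over paired points (x_i, y_i) is
    # atan2 of the summed cross products over the summed dot products.
    s = sum(x[0] * y[1] - x[1] * y[0] for x, y in zip(src, tgt))
    c = sum(x[0] * y[0] + x[1] * y[1] for x, y in zip(src, tgt))
    return math.atan2(s, c)

def rotate(points, theta):
    ct, st = math.cos(theta), math.sin(theta)
    return [(ct * x - st * y, st * x + ct * y) for x, y in points]

# Toy "source-language" vectors and a 90-degree-rotated "target" space.
src = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
tgt = rotate(src, math.pi / 2)
theta = best_rotation_2d(src, tgt)
aligned = rotate(src, theta)       # src mapped into the target space
```

Supervised aligners fit this map from a seed dictionary of word pairs; the unsupervised method cited in the row learns the pairing itself via adversarial training before applying the same kind of orthogonal refinement.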
1811.00264
|
2899272023
|
To cluster data that are not linearly separable in the original feature space, @math -means clustering was extended to the kernel version. However, the performance of kernel @math -means clustering largely depends on the choice of kernel function. To mitigate this problem, multiple kernel learning has been introduced into the @math -means clustering to obtain an optimal kernel combination for clustering. Despite the success of multiple kernel @math -means clustering in various scenarios, few of the existing works update the combination coefficients based on the diversity of kernels, which leads to the result that the selected kernels contain high redundancy and would degrade the clustering performance and efficiency. In this paper, we propose a simple but efficient strategy that selects a diverse subset from the pre-specified kernels as the representative kernels, and then incorporate the subset selection process into the framework of multiple @math -means clustering. The representative kernels can be indicated as the significant combination weights. Due to the non-convexity of the obtained objective function, we develop an alternating minimization method to optimize the combination coefficients of the selected kernels and the cluster membership alternately. We evaluate the proposed approach on several benchmark and real-world datasets. The experimental results demonstrate the competitiveness of our approach in comparison with the state-of-the-art methods.
|
Multi-view clustering attempts to obtain consistent cluster structures from different views @cite_9 @cite_23 @cite_14 @cite_28 @cite_20 . In @cite_9 , multi-view versions of clustering approaches, including @math -means, expectation maximization and hierarchical agglomerative methods, are studied for document clustering to demonstrate their advantages over single-view counterparts. The work in @cite_14 proposes to constrain the similarity graph from one view with the spectral embedding from the other view in the framework of spectral clustering using the idea of . Based on , @cite_23 presents a simple subspace learning method for multi-view clustering under a natural assumption that different views are uncorrelated given the label of the cluster. In consideration of the limitation that most existing work on data fusion assumes the same weight for features from one source, @cite_28 provides a novel framework for multi-view clustering which learns a weight for individual feature via a structured sparsity regularization.
|
{
"cite_N": [
"@cite_14",
"@cite_28",
"@cite_9",
"@cite_23",
"@cite_20"
],
"mid": [
"2101324110",
"2105709960",
"",
"2142674578",
"2780925023"
],
"abstract": [
"We propose a spectral clustering algorithm for the multi-view setting where we have access to multiple views of the data, each of which can be independently used for clustering. Our spectral clustering algorithm has a flavor of co-training, which is already a widely used idea in semi-supervised learning. We work on the assumption that the true underlying clustering would assign a point to the same cluster irrespective of the view. Hence, we constrain our approach to only search for the clusterings that agree across the views. Our algorithm does not have any hyperparameters to set, which is a major advantage in unsupervised learning. We empirically compare with a number of baseline methods on synthetic and real-world datasets to show the efficacy of the proposed algorithm.",
"Combining information from various data sources has become an important research topic in machine learning with many scientific applications. Most previous studies employ kernels or graphs to integrate different types of features, which routinely assume one weight for one type of features. However, for many problems, the importance of features in one source to an individual cluster of data can be varied, which makes the previous approaches ineffective. In this paper, we propose a novel multi-view learning model to integrate all features and learn the weight for every feature with respect to each cluster individually via new joint structured sparsity-inducing norms. The proposed multi-view learning framework allows us not only to perform clustering tasks, but also to deal with classification tasks by an extension when the labeling knowledge is available. A new efficient algorithm is derived to solve the formulated objective with rigorous theoretical proof on its convergence. We applied our new data fusion method to five broadly used multi-view data sets for both clustering and classification. In all experimental results, our method clearly outperforms other related state-of-the-art methods.",
"",
"Clustering data in high dimensions is believed to be a hard problem in general. A number of efficient clustering algorithms developed in recent years address this problem by projecting the data into a lower-dimensional subspace, e.g. via Principal Components Analysis (PCA) or random projections, before clustering. Here, we consider constructing such projections using multiple views of the data, via Canonical Correlation Analysis (CCA). Under the assumption that the views are un-correlated given the cluster label, we show that the separation conditions required for the algorithm to be successful are significantly weaker than prior results in the literature. We provide results for mixtures of Gaussians and mixtures of log concave distributions. We also provide empirical support from audio-visual speaker clustering (where we desire the clusters to correspond to speaker ID) and from hierarchical Wikipedia document clustering (where one view is the words in the document and the other is the link structure).",
"With advances in information acquisition technologies, multi-view data become ubiquitous. Multi-view learning has thus become more and more popular in machine learning and data mining fields. Multi-view unsupervised or semi-supervised learning, such as co-training, co-regularization has gained considerable attention. Although recently, multi-view clustering (MVC) methods have been developed rapidly, there has not been a survey to summarize and analyze the current progress. Therefore, this paper reviews the common strategies for combining multiple views of data and based on this summary we propose a novel taxonomy of the MVC approaches. We further discuss the relationships between MVC and multi-view representation, ensemble clustering, multi-task clustering, multi-view supervised and semi-supervised learning. Several representative real-world applications are elaborated. To promote future development of MVC, we envision several open problems that may require further investigation and thorough examination."
]
}
|
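The multiple kernel @math -means work in the last row builds on plain kernel @math -means. A minimal single-kernel sketch follows; the RBF kernel, the deterministic initialization, and the toy data are illustrative assumptions (the cited methods would replace `K` with a learned combination of several kernel matrices).

```python
import math

def rbf(a, b, gamma=1.0):
    # Gaussian (RBF) kernel between two points.
    return math.exp(-gamma * sum((u - v) ** 2 for u, v in zip(a, b)))

def kernel_kmeans(X, k, iters=20, gamma=1.0):
    # Kernel k-means: distances to cluster centroids are computed purely
    # from the kernel matrix, never from explicit feature-space coordinates:
    #   d(i, C) = K[i][i] - 2/|C| * sum_{j in C} K[i][j]
    #             + 1/|C|^2 * sum_{j,l in C} K[j][l]
    n = len(X)
    K = [[rbf(X[i], X[j], gamma) for j in range(n)] for i in range(n)]
    labels = [i % k for i in range(n)]            # simple deterministic init
    for _ in range(iters):
        clusters = [[j for j in range(n) if labels[j] == c] for c in range(k)]
        new = []
        for i in range(n):
            best, best_d = 0, float("inf")
            for c, idx in enumerate(clusters):
                if not idx:
                    continue
                m = len(idx)
                d = (K[i][i]
                     - 2.0 * sum(K[i][j] for j in idx) / m
                     + sum(K[j][l] for j in idx for l in idx) / (m * m))
                if d < best_d:
                    best, best_d = c, d
            new.append(best)
        if new == labels:                          # converged
            break
        labels = new
    return labels

# Two well-separated toy blobs.
X = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (5.0, 5.0), (5.1, 5.0), (5.0, 5.1)]
labels = kernel_kmeans(X, 2)
```

On this toy data the algorithm recovers the two blobs in one pass; the diversity-regularized multiple-kernel variants then alternate between this assignment step and re-weighting the kernel combination.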