| aid | mid | abstract | related_work | ref_abstract |
|---|---|---|---|---|
1811.05696
|
2901846148
|
Neural generative models have become popular and achieved promising performance on short-text conversation tasks. They are generally trained to build a 1-to-1 mapping from the input post to its output response. However, a given post is often associated with multiple replies simultaneously in real applications. Previous research on this task mainly focuses on improving the relevance and informativeness of the top one generated response for each post. Very few works study generating multiple accurate and diverse responses for the same post. In this paper, we propose a novel response generation model, which considers a set of responses jointly and generates multiple diverse responses simultaneously. A reinforcement learning algorithm is designed to solve our model. Experiments on two short-text conversation tasks validate that the multiple responses generated by our model obtain higher quality and larger diversity compared with various state-of-the-art generative models.
|
The Seq2seq framework has been widely used for conversational response generation @cite_11 @cite_22 @cite_0 . Such models learn the mapping from an input @math to one output @math by maximizing the pairwise probability of @math . During testing, these models target only one response. To obtain multiple responses, beam search can be used; however, the resulting sequences are often very similar. Many approaches have been proposed to re-rank diverse, meaningful answers into higher positions. For example, Li et al. (li2016simple) proposed a simple, fast decoding algorithm that directly encourages response diversity through the scoring function used in beam search. Shao et al. (shao2017generating) heuristically re-ranked the responses segment by segment to inject diversity earlier in the decoding process. These methods only modify the decoding steps, and still often generate responses that use different words but carry similar semantics.
|
{
"cite_N": [
"@cite_0",
"@cite_22",
"@cite_11"
],
"mid": [
"2963963856",
"",
"1591706642"
],
"abstract": [
"We propose Neural Responding Machine (NRM), a neural network-based response generator for Short-Text Conversation. NRM takes the general encoder-decoder framework: it formalizes the generation of response as a decoding process based on the latent representation of the input text, while both encoding and decoding are realized with recurrent neural networks (RNN). The NRM is trained with a large amount of one-round conversation data collected from a microblogging service. Empirical study shows that NRM can generate grammatically correct and content-wise appropriate responses to over 75% of the input text, outperforming state-of-the-art methods in the same setting, including retrieval-based and SMT-based models.",
"",
"Conversational modeling is an important task in natural language understanding and machine intelligence. Although previous approaches exist, they are often restricted to specific domains (e.g., booking an airline ticket) and require hand-crafted rules. In this paper, we present a simple approach for this task which uses the recently proposed sequence to sequence framework. Our model converses by predicting the next sentence given the previous sentence or sentences in a conversation. The strength of our model is that it can be trained end-to-end and thus requires much fewer hand-crafted rules. We find that this straightforward model can generate simple conversations given a large conversational training dataset. Our preliminary results suggest that, despite optimizing the wrong objective function, the model is able to converse well. It is able to extract knowledge from both a domain specific dataset, and from a large, noisy, and general domain dataset of movie subtitles. On a domain-specific IT helpdesk dataset, the model can find a solution to a technical problem via conversations. On a noisy open-domain movie transcript dataset, the model can perform simple forms of common sense reasoning. As expected, we also find that the lack of consistency is a common failure mode of our model."
]
}
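The diversity-promoting decoding that the related-work passage above attributes to Li et al. can be sketched as a sibling-rank penalty added to the ordinary beam-search score: the k-th best child of the same parent hypothesis is penalized by gamma * k, which pushes candidates from different parents into the beam. The function name and the toy scoring interface below are illustrative, not the authors' implementation.

```python
def diverse_beam_step(beams, next_logprobs, beam_width, gamma=1.0):
    """One step of beam search with a sibling-rank diversity penalty.

    beams: list of (tokens, score) hypotheses.
    next_logprobs: fn(tokens) -> {token: log-prob of that next token}.
    gamma: penalty per sibling rank; gamma=0 recovers plain beam search.
    """
    candidates = []
    for tokens, score in beams:
        expansions = sorted(next_logprobs(tokens).items(),
                            key=lambda kv: kv[1], reverse=True)
        # Penalize the k-th best child of the same parent by gamma * k,
        # so siblings of one strong parent do not crowd out other parents.
        for rank, (tok, lp) in enumerate(expansions):
            candidates.append((tokens + [tok], score + lp - gamma * rank))
    candidates.sort(key=lambda c: c[1], reverse=True)
    return candidates[:beam_width]
```

With gamma=0 the two surviving hypotheses typically share the strongest parent; a positive gamma promotes children of different parents, which is exactly the scoring-function change (rather than a model change) that the passage describes.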
|
1811.05696
|
2901846148
|
Neural generative models have become popular and achieved promising performance on short-text conversation tasks. They are generally trained to build a 1-to-1 mapping from the input post to its output response. However, a given post is often associated with multiple replies simultaneously in real applications. Previous research on this task mainly focuses on improving the relevance and informativeness of the top one generated response for each post. Very few works study generating multiple accurate and diverse responses for the same post. In this paper, we propose a novel response generation model, which considers a set of responses jointly and generates multiple diverse responses simultaneously. A reinforcement learning algorithm is designed to solve our model. Experiments on two short-text conversation tasks validate that the multiple responses generated by our model obtain higher quality and larger diversity compared with various state-of-the-art generative models.
|
A few works have explored the different factors that drive the generation of diverse responses. For task-oriented dialogue systems, Wen et al. proposed modeling a latent space to represent intentions. Zhao et al. adopted the CVAE to learn discourse-level diversity for dialog models, and further incorporated a distribution over potential dialogue acts for better discourse-level diversity. Unlike them, our work targets the single-round open-domain conversation task, which assumes no discourse-level information. Moreover, it has been observed that in a CVAE with a fixed Gaussian prior, the learned conditional posteriors tend to collapse to a single mode, yielding little diversity in the generated results @cite_23 . In our model, choosing two latent words with very different meanings should yield different generated responses.
|
{
"cite_N": [
"@cite_23"
],
"mid": [
"2963594498"
],
"abstract": [
"This paper explores image caption generation using conditional variational auto-encoders (CVAEs). Standard CVAEs with a fixed Gaussian prior yield descriptions with too little variability. Instead, we propose two models that explicitly structure the latent space around K components corresponding to different types of image content, and combine components to create priors for images that contain multiple types of content simultaneously (e.g., several kinds of objects). Our first model uses a Gaussian Mixture model (GMM) prior, while the second one defines a novel Additive Gaussian (AG) prior that linearly combines component means. We show that both models produce captions that are more diverse and more accurate than a strong LSTM baseline or a “vanilla” CVAE with a fixed Gaussian prior, with AG-CVAE showing particular promise."
]
}
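The mode-collapse issue with a fixed Gaussian prior that the passage cites (@cite_23) can be illustrated numerically: latent codes drawn from a single Gaussian concentrate around one mode, while a mixture prior (the GMM/AG construction in that work) spreads them across well-separated components, so a decoder conditioned on them sees far more varied inputs. This is a self-contained 1-D toy sketch, not the cited models.

```python
import random
import statistics

def sample_prior(mixture, n=1000, seed=0):
    """Draw 1-D latent codes from either a single standard Gaussian prior
    (mixture=False) or an equal-weight two-component Gaussian mixture with
    well-separated means (mixture=True)."""
    rng = random.Random(seed)
    means = [-3.0, 3.0] if mixture else [0.0]
    return [rng.gauss(rng.choice(means), 1.0) for _ in range(n)]

# Spread of latent codes: the mixture prior covers two modes, the fixed
# Gaussian prior clusters everything around a single one.
spread_single = statistics.pstdev(sample_prior(False))
spread_mixed = statistics.pstdev(sample_prior(True))
```

The measured spread of the mixture samples is roughly three times that of the single-Gaussian samples, mirroring the intuition in the passage that two latent codes with very different "meanings" (here, distant mixture components) should induce different outputs.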
|
1811.05652
|
2901251508
|
Cardinality constrained submodular function maximization, which aims to select a subset of size at most @math to maximize a monotone submodular utility function, is the key in many data mining and machine learning applications such as data summarization and maximum coverage problems. When data is given as a stream, streaming submodular optimization (SSO) techniques are desired. Existing SSO techniques can only apply to insertion-only streams where each element has an infinite lifespan, and sliding-window streams where each element has a same lifespan (i.e., window size). However, elements in some data streams may have arbitrary different lifespans, and this requires addressing SSO over streams with inhomogeneous-decays (SSO-ID). This work formulates the SSO-ID problem and presents three algorithms: BasicStreaming is a basic streaming algorithm that achieves an @math approximation factor; HistApprox improves the efficiency significantly and achieves an @math approximation factor; HistStreaming is a streaming version of HistApprox and uses heuristics to further improve the efficiency. Experiments conducted on real data demonstrate that HistStreaming can find high quality solutions and is up to two orders of magnitude faster than the naive Greedy algorithm.
|
Cardinality Constrained Submodular Function Maximization. Submodular optimization lies at the core of many data mining and machine learning applications, because the objectives of many optimization problems have a diminishing-returns property that is captured by submodularity. In the past few years, submodular optimization has been applied to a wide variety of scenarios, including sensor placement @cite_15 , outbreak detection @cite_17 , search result diversification @cite_9 , feature selection @cite_1 , data summarization @cite_14 @cite_20 , and influence maximization @cite_13 , to name just a few. The greedy algorithm @cite_21 serves as a silver bullet for the cardinality constrained submodular maximization problem. Improving its efficiency has also attracted considerable interest, through lazy evaluation @cite_19 , disk-based optimization @cite_8 , distributed computation @cite_5 @cite_12 , sampling @cite_14 , etc.
|
{
"cite_N": [
"@cite_14",
"@cite_8",
"@cite_9",
"@cite_21",
"@cite_1",
"@cite_19",
"@cite_5",
"@cite_15",
"@cite_13",
"@cite_12",
"@cite_20",
"@cite_17"
],
"mid": [
"2950865888",
"2020594369",
"1993320088",
"2068748923",
"2156504490",
"1898824936",
"2611972874",
"2131824593",
"",
"2101246692",
"2803250016",
"2141403143"
],
"abstract": [
"Is it possible to maximize a monotone submodular function faster than the widely used lazy greedy algorithm (also known as accelerated greedy), both in theory and practice? In this paper, we develop the first linear-time algorithm for maximizing a general monotone submodular function subject to a cardinality constraint. We show that our randomized algorithm, STOCHASTIC-GREEDY, can achieve a @math approximation guarantee, in expectation, to the optimum solution in time linear in the size of the data and independent of the cardinality constraint. We empirically demonstrate the effectiveness of our algorithm on submodular functions arising in data summarization, including training large-scale kernel methods, exemplar-based clustering, and sensor placement. We observe that STOCHASTIC-GREEDY practically achieves the same utility value as lazy greedy but runs much faster. More surprisingly, we observe that in many practical scenarios STOCHASTIC-GREEDY does not evaluate the whole fraction of data points even once and still achieves indistinguishable results compared to lazy greedy.",
"The problem of Set Cover - to find the smallest subcollection of sets that covers some universe - is at the heart of many data and analysis tasks. It arises in a wide range of settings, including operations research, machine learning, planning, data quality and data mining. Although finding an optimal solution is NP-hard, the greedy algorithm is widely used, and typically finds solutions that are close to optimal. However, a direct implementation of the greedy approach, which picks the set with the largest number of uncovered items at each step, does not behave well when the input is very large and disk resident. The greedy algorithm must make many random accesses to disk, which are unpredictable and costly in comparison to linear scans. In order to scale Set Cover to large datasets, we provide a new algorithm which finds a solution that is provably close to that of greedy, but which is much more efficient to implement using modern disk technology. Our experiments show a ten-fold improvement in speed on moderately-sized datasets, and an even greater improvement on larger datasets.",
"We study the problem of answering ambiguous web queries in a setting where there exists a taxonomy of information, and that both queries and documents may belong to more than one category according to this taxonomy. We present a systematic approach to diversifying results that aims to minimize the risk of dissatisfaction of the average user. We propose an algorithm that well approximates this objective in general, and is provably optimal for a natural special case. Furthermore, we generalize several classical IR metrics, including NDCG, MRR, and MAP, to explicitly account for the value of diversification. We demonstrate empirically that our algorithm scores higher in these generalized metrics compared to results produced by commercial search engines.",
"We present a new sketch for summarizing network data. The sketch has the following properties which make it useful in communication-efficient aggregation in distributed streaming scenarios, such as sensor networks: the sketch is duplicate insensitive, i.e., reinsertions of the same data will not affect the sketch and hence the estimates of aggregates. Unlike previous duplicate-insensitive sketches for sensor data aggregation [S. , Synposis diffusion for robust aggregation in sensor networks, in Proceedings of the 2nd International Conference on Embedded Network Sensor Systems, (2004), pp. 250-262], [J. , Approximate aggregation techniques for sensor databases, in Proceedings of the 20th International Conference on Data Engineering (ICDE), 2004, pp. 449-460], it is also time decaying, so that the weight of a data item in the sketch can decrease with time according to a user-specified decay function. The sketch can give provably approximate guarantees for various aggregates of data, including the sum, median, quantiles, and frequent elements. The size of the sketch and the time taken to update it are both polylogarithmic in the size of the relevant data. Further, multiple sketches computed over distributed data can be combined without loss of accuracy. To our knowledge, this is the first sketch that combines all the above properties.",
"We present a unifying framework for information theoretic feature selection, bringing almost two decades of research on heuristic filter criteria under a single theoretical interpretation. This is in response to the question: \"what are the implicit statistical assumptions of feature selection criteria based on mutual information?\". To answer this, we adopt a different strategy than is usual in the feature selection literature--instead of trying to define a criterion, we derive one, directly from a clearly specified objective function: the conditional likelihood of the training labels. While many hand-designed heuristic criteria try to optimize a definition of feature 'relevancy' and 'redundancy', our approach leads to a probabilistic framework which naturally incorporates these concepts. As a result we can unify the numerous criteria published over the last two decades, and show them to be low-order approximations to the exact (but intractable) optimisation problem. The primary contribution is to show that common heuristics for information based feature selection (including Markov Blanket algorithms as a special case) are approximate iterative maximisers of the conditional likelihood. A large empirical study provides strong evidence to favour certain classes of criteria, in particular those that balance the relative size of the relevancy redundancy terms. Overall we conclude that the JMI criterion (Yang and Moody, 1999; , 2008) provides the best tradeoff in terms of accuracy, stability, and flexibility with small data samples.",
"Given a finite set E and a real valued function f on P(E) (the power set of E) the optimal subset problem (P) is to find S ⊂ E maximizing f over P(E). Many combinatorial optimization problems can be formulated in these terms. Here, a family of approximate solution methods is studied : the greedy algorithms.",
"We study the problem of efficiently optimizing submodular functions under cardinality constraints in distributed setting. Recently, several distributed algorithms for this problem have been introduced which either achieve a sub-optimal solution or they run in super-constant number of rounds of computation. Unlike previous work, we aim to design distributed algorithms in multiple rounds with almost optimal approximation guarantees at the cost of outputting a larger number of elements. Toward this goal, we present a distributed algorithm that, for any e > 0 and any constant r, outputs a set S of O(rk e1 r) items in r rounds, and achieves a (1-e)-approximation of the value of the optimum set with k items. This is the first distributed algorithm that achieves an approximation factor of (1-e) running in less than log 1 e number of rounds. We also prove a hardness result showing that the output of any 1-e approximation distributed algorithm limited to one distributed round should have at least Ω(k e) items. In light of this hardness result, our distributed algorithm in one round, r = 1, is asymptotically tight in terms of the output size. We support the theoretical guarantees with an extensive empirical study of our algorithm showing that achieving almost optimum solutions is indeed possible in a few rounds for large-scale real datasets.",
"When monitoring spatial phenomena, which can often be modeled as Gaussian processes (GPs), choosing sensor locations is a fundamental task. There are several common strategies to address this task, for example, geometry or disk models, placing sensors at the points of highest entropy (variance) in the GP model, and A-, D-, or E-optimal design. In this paper, we tackle the combinatorial optimization problem of maximizing the mutual information between the chosen locations and the locations which are not selected. We prove that the problem of finding the configuration that maximizes mutual information is NP-complete. To address this issue, we describe a polynomial-time approximation that is within (1-1/e) of the optimum by exploiting the submodularity of mutual information. We also show how submodularity can be used to obtain online bounds, and design branch and bound search procedures. We then extend our algorithm to exploit lazy evaluations and local structure in the GP, yielding significant speedups. We also extend our approach to find placements which are robust against node failures and uncertainties in the model. These extensions are again associated with rigorous theoretical approximation guarantees, exploiting the submodularity of the objective function. We demonstrate the advantages of our approach towards optimizing mutual information in a very extensive empirical study on two real-world data sets.",
"",
"Greedy algorithms are practitioners' best friends - they are intuitive, simple to implement, and often lead to very good solutions. However, implementing greedy algorithms in a distributed setting is challenging since the greedy choice is inherently sequential, and it is not clear how to take advantage of the extra processing power. Our main result is a powerful sampling technique that aids in parallelization of sequential algorithms. We then show how to use this primitive to adapt a broad class of greedy algorithms to the MapReduce paradigm; this class includes maximum cover and submodular maximization subject to p-system constraints. Our method yields efficient algorithms that run in a logarithmic number of rounds, while obtaining solutions that are arbitrarily close to those produced by the standard sequential greedy algorithm. We begin with algorithms for modular maximization subject to a matroid constraint, and then extend this approach to obtain approximation algorithms for submodular maximization subject to knapsack or p-system constraints. Finally, we empirically validate our algorithms, and show that they achieve the same quality of the solution as standard greedy algorithms but run in a substantially fewer number of rounds.",
"The sheer scale of modern datasets has resulted in a dire need for summarization techniques that identify representative elements in a dataset. Fortunately, the vast majority of data summarization tasks satisfy an intuitive diminishing returns condition known as submodularity, which allows us to find nearly-optimal solutions in linear time. We focus on a two-stage submodular framework where the goal is to use some given training functions to reduce the ground set so that optimizing new functions (drawn from the same distribution) over the reduced set provides almost as much value as optimizing them over the entire ground set. In this paper, we develop the first streaming and distributed solutions to this problem. In addition to providing strong theoretical guarantees, we demonstrate both the utility and efficiency of our algorithms on real-world tasks including image summarization and ride-share optimization.",
"Given a water distribution network, where should we place sensors to quickly detect contaminants? Or, which blogs should we read to avoid missing important stories?. These seemingly different problems share common structure: Outbreak detection can be modeled as selecting nodes (sensor locations, blogs) in a network, in order to detect the spreading of a virus or information as quickly as possible. We present a general methodology for near optimal sensor placement in these and related problems. We demonstrate that many realistic outbreak detection objectives (e.g., detection likelihood, population affected) exhibit the property of \"submodularity\". We exploit submodularity to develop an efficient algorithm that scales to large problems, achieving near optimal placements, while being 700 times faster than a simple greedy algorithm. We also derive online bounds on the quality of the placements obtained by any algorithm. Our algorithms and bounds also handle cases where nodes (sensor locations, blogs) have different costs. We evaluate our approach on several large real-world problems, including a model of a water distribution network from the EPA, and real blog data. The obtained sensor placements are provably near optimal, providing a constant fraction of the optimal solution. We show that the approach scales, achieving speedups and savings in storage of several orders of magnitude. We also show how the approach leads to deeper insights in both applications, answering multicriteria trade-off, cost-sensitivity and generalization questions."
]
}
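The greedy "silver bullet" (@cite_21) that the related-work passage above refers to can be sketched on maximum coverage, a canonical monotone submodular objective: pick, k times, the candidate with the largest marginal gain. The function and data-structure names below are illustrative, not from any cited paper.

```python
def greedy_max_coverage(sets, k):
    """Classic greedy for cardinality-constrained monotone submodular
    maximization, shown on maximum coverage. Repeatedly picking the set
    with the largest marginal gain achieves a (1 - 1/e) approximation.

    sets: dict name -> iterable of covered elements.
    Returns (chosen names in pick order, set of covered elements).
    """
    covered, chosen = set(), []
    for _ in range(k):
        best, best_gain = None, 0
        for name, elems in sets.items():
            if name in chosen:
                continue
            gain = len(set(elems) - covered)  # marginal gain f(S + e) - f(S)
            if gain > best_gain:
                best, best_gain = name, gain
        if best is None:  # no candidate adds anything new
            break
        chosen.append(best)
        covered |= set(sets[best])
    return chosen, covered
```

Each pick re-scans all candidates, which is exactly the cost that lazy evaluation, sampling, and the distributed variants mentioned in the passage aim to reduce.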
|
1811.05652
|
2901251508
|
Cardinality constrained submodular function maximization, which aims to select a subset of size at most @math to maximize a monotone submodular utility function, is the key in many data mining and machine learning applications such as data summarization and maximum coverage problems. When data is given as a stream, streaming submodular optimization (SSO) techniques are desired. Existing SSO techniques can only apply to insertion-only streams where each element has an infinite lifespan, and sliding-window streams where each element has a same lifespan (i.e., window size). However, elements in some data streams may have arbitrary different lifespans, and this requires addressing SSO over streams with inhomogeneous-decays (SSO-ID). This work formulates the SSO-ID problem and presents three algorithms: BasicStreaming is a basic streaming algorithm that achieves an @math approximation factor; HistApprox improves the efficiency significantly and achieves an @math approximation factor; HistStreaming is a streaming version of HistApprox and uses heuristics to further improve the efficiency. Experiments conducted on real data demonstrate that HistStreaming can find high quality solutions and is up to two orders of magnitude faster than the naive Greedy algorithm.
|
Streaming Submodular Optimization (SSO). SSO is another way to improve the efficiency of solving submodular optimization problems, and it has gained interest in recent years due to the rise of big data and high-speed streams, where an algorithm can access only a small fraction of the data at any point in time. Early work designed streaming algorithms that traverse the streaming data for a few rounds, which suits the MapReduce framework. Later work designed the first one-round streaming algorithm for insertion-only streams, which is adopted as the basic building block in our algorithms. SSO over sliding-window streams has recently been studied by two lines of work, both of which leverage smooth histograms @cite_11 . Our algorithms can be viewed as a generalization of these existing methods, and our SSO techniques apply to streams with inhomogeneous decays.
|
{
"cite_N": [
"@cite_11"
],
"mid": [
"2128846062"
],
"abstract": [
"In the streaming model elements arrive sequentially and can be observed only once. Maintaining statistics and aggregates is an important and non-trivial task in the model. This becomes even more challenging in the sliding windows model, where statistics must be maintained only over the most recent n elements. In their pioneering paper, Datar, Gionis, Indyk and Motwani [15] presented exponential histograms, an effective method for estimating statistics on sliding windows. In this paper we present a new smooth histograms method that improves the approximation error rate obtained via exponential histograms. Furthermore, our smooth histograms method not only captures and improves multiple previous results on sliding windows but also extends the class of functions that can be approximated on sliding windows. In particular, we provide the first approximation algorithms for the following functions: Lp norms for p ∉ [1, 2], frequency moments, length of increasing subsequence and geometric mean."
]
}
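The one-round insertion-only building block the passage above describes can be sketched as threshold-based streaming: keep an arriving element only if its marginal gain clears a threshold derived from a guess v of the optimum value; running several guesses in parallel yields the usual (1/2 - eps) guarantee. This single-guess sketch is a simplification for illustration, not the paper's BasicStreaming/HistApprox/HistStreaming algorithms.

```python
def threshold_streaming(stream, f, k, v):
    """One-pass streaming selection for a monotone submodular f under a
    cardinality constraint k. An element is kept iff its marginal gain is
    at least (v/2 - f(S)) / (k - |S|), where v is a guess of the optimum.
    Simplified single-guess sketch of the building block described above.
    """
    S = []
    for e in stream:
        if len(S) >= k:
            break  # budget exhausted; remaining elements are ignored
        gain = f(S + [e]) - f(S)  # marginal gain of e given current S
        if gain >= (v / 2 - f(S)) / (k - len(S)):
            S.append(e)
    return S
```

Each element is examined once and either kept or discarded forever, which is what makes this rule suitable as a subroutine for the decayed-stream setting the abstract targets.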
|
1811.05614
|
2901910659
|
Many successful methods have been proposed for learning low dimensional representations on large-scale networks, while almost all existing methods are designed in inseparable processes, learning embeddings for entire networks even when only a small proportion of nodes are of interest. This leads to great inconvenience, especially on super-large or dynamic networks, where these methods become almost impossible to implement. In this paper, we formalize the problem of separated matrix factorization, based on which we elaborate a novel objective function that preserves both local and global information. We further propose SepNE, a simple and flexible network embedding algorithm which independently learns representations for different subsets of nodes in separated processes. By implementing separability, our algorithm reduces the redundant efforts to embed irrelevant nodes, yielding scalability to super-large networks, automatic implementation in distributed learning and further adaptations. We demonstrate the effectiveness of this approach on several real-world networks with different scales and subjects. With comparable accuracy, our approach significantly outperforms state-of-the-art baselines in running times on large networks.
|
There is a massive literature on NE problems. Traditional dimension-reduction approaches @cite_8 @cite_5 @cite_7 are applicable to network data through Graph Laplacian Eigenmaps or proximity MF. Recently, various skip-gram-based NE models and applications have been proposed @cite_15 @cite_3 @cite_17 . Besides, the pioneering work of @cite_16 proved an equivalence between skip-gram models and matrix factorization, which further led to new proximity metrics under the proximity MF framework @cite_0 @cite_2 @cite_4 . Edge reconstruction algorithms @cite_13 were proposed to gain scalability on large networks. Neural networks, including autoencoders @cite_11 @cite_12 and CNNs @cite_10 @cite_9 , have also been leveraged for NE problems. There is also a new trend @cite_18 of leveraging structural information instead of proximity in NE.
|
{
"cite_N": [
"@cite_18",
"@cite_11",
"@cite_4",
"@cite_7",
"@cite_8",
"@cite_10",
"@cite_9",
"@cite_3",
"@cite_0",
"@cite_2",
"@cite_5",
"@cite_15",
"@cite_16",
"@cite_13",
"@cite_12",
"@cite_17"
],
"mid": [
"2607500032",
"",
"2761896323",
"2156718197",
"2053186076",
"",
"2962767366",
"2962756421",
"2242161203",
"2090891622",
"2001141328",
"2154851992",
"2125031621",
"1888005072",
"2964234429",
"2808087697"
],
"abstract": [
"Structural identity is a concept of symmetry in which network nodes are identified according to the network structure and their relationship to other nodes. Structural identity has been studied in theory and practice over the past decades, but only recently has it been addressed with representational learning techniques. This work presents struc2vec, a novel and flexible framework for learning latent representations for the structural identity of nodes. struc2vec uses a hierarchy to measure node similarity at different scales, and constructs a multilayer graph to encode structural similarities and generate structural context for nodes. Numerical experiments indicate that state-of-the-art techniques for learning node representations fail in capturing stronger notions of structural identity, while struc2vec exhibits much superior performance in this task, as it overcomes limitations of prior approaches. As a consequence, numerical experiments indicate that struc2vec improves performance on classification tasks that depend more on structural identity.",
"",
"Since the invention of word2vec, the skip-gram model has significantly advanced the research of network embedding, such as the recent emergence of the DeepWalk, LINE, PTE, and node2vec approaches. In this work, we show that all of the aforementioned models with negative sampling can be unified into the matrix factorization framework with closed forms. Our analysis and proofs reveal that: (1) DeepWalk empirically produces a low-rank transformation of a network's normalized Laplacian matrix; (2) LINE, in theory, is a special case of DeepWalk when the size of vertices' context is set to one; (3) As an extension of LINE, PTE can be viewed as the joint factorization of multiple networks' Laplacians; (4) node2vec is factorizing a matrix related to the stationary distribution and transition probability tensor of a 2nd-order random walk. We further provide the theoretical connections between skip-gram based network embedding algorithms and the theory of graph Laplacian. Finally, we present the NetMF method as well as its approximation algorithm for computing network embedding. Our method offers significant improvements over DeepWalk and LINE for conventional network mining tasks. This work lays the theoretical foundation for skip-gram based network embedding methods, leading to a better understanding of latent network representation learning.",
"Drawing on the correspondence between the graph Laplacian, the Laplace-Beltrami operator on a manifold, and the connections to the heat equation, we propose a geometrically motivated algorithm for constructing a representation for data sampled from a low dimensional manifold embedded in a higher dimensional space. The algorithm provides a computationally efficient approach to nonlinear dimensionality reduction that has locality preserving properties and a natural connection to clustering. Several applications are considered.",
"Many areas of science depend on exploratory data analysis and visualization. The need to analyze large amounts of multivariate data raises the fundamental problem of dimensionality reduction: how to discover compact representations of high-dimensional data. Here, we introduce locally linear embedding (LLE), an unsupervised learning algorithm that computes low-dimensional, neighborhood-preserving embeddings of high-dimensional inputs. Unlike clustering methods for local dimensionality reduction, LLE maps its inputs into a single global coordinate system of lower dimensionality, and its optimizations do not involve local minima. By exploiting the local symmetries of linear reconstructions, LLE is able to learn the global structure of nonlinear manifolds, such as those generated by images of faces or documents of text. How do we judge similarity? Our mental representations of the world are formed by processing large numbers of sensory in",
"",
"Low-dimensional embeddings of nodes in large graphs have proved extremely useful in a variety of prediction tasks, from content recommendation to identifying protein functions. However, most existing approaches require that all nodes in the graph are present during training of the embeddings; these previous approaches are inherently transductive and do not naturally generalize to unseen nodes. Here we present GraphSAGE, a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings. Instead of training individual embeddings for each node, we learn a function that generates embeddings by sampling and aggregating features from a node's local neighborhood. Our algorithm outperforms strong baselines on three inductive node-classification benchmarks: we classify the category of unseen nodes in evolving information graphs based on citation and Reddit post data, and we show that our algorithm generalizes to completely unseen graphs using a multi-graph dataset of protein-protein interactions.",
"Prediction tasks over nodes and edges in networks require careful effort in engineering features used by learning algorithms. Recent research in the broader field of representation learning has led to significant progress in automating prediction by learning the features themselves. However, present feature learning approaches are not expressive enough to capture the diversity of connectivity patterns observed in networks. Here we propose node2vec, an algorithmic framework for learning continuous feature representations for nodes in networks. In node2vec, we learn a mapping of nodes to a low-dimensional space of features that maximizes the likelihood of preserving network neighborhoods of nodes. We define a flexible notion of a node's network neighborhood and design a biased random walk procedure, which efficiently explores diverse neighborhoods. Our algorithm generalizes prior work which is based on rigid notions of network neighborhoods, and we argue that the added flexibility in exploring neighborhoods is the key to learning richer representations. We demonstrate the efficacy of node2vec over existing state-of-the-art techniques on multi-label classification and link prediction in several real-world networks from diverse domains. Taken together, our work represents a new way for efficiently learning state-of-the-art task-independent representations in complex networks.",
"Representation learning has shown its effectiveness in many tasks such as image classification and text mining. Network representation learning aims at learning distributed vector representation for each vertex in a network, which is also increasingly recognized as an important aspect for network analysis. Most network representation learning methods investigate network structures for learning. In reality, network vertices contain rich information (such as text), which cannot be well applied with algorithmic frameworks of typical representation learning methods. By proving that DeepWalk, a state-of-the-art network representation method, is actually equivalent to matrix factorization (MF), we propose text-associated DeepWalk (TADW). TADW incorporates text features of vertices into network representation learning under the framework of matrix factorization. We evaluate our method and various baseline methods by applying them to the task of multi-class classification of vertices. The experimental results show that, our method outperforms other baselines on all three datasets, especially when networks are noisy and training ratio is small. The source code of this paper can be obtained from https: github.com albertyang33 TADW.",
"In this paper, we present GraRep , a novel model for learning vertex representations of weighted graphs. This model learns low dimensional vectors to represent vertices appearing in a graph and, unlike existing work, integrates global structural information of the graph into the learning process. We also formally analyze the connections between our work and several previous research efforts, including the DeepWalk model of as well as the skip-gram model with negative sampling of We conduct experiments on a language network, a social network as well as a citation network and show that our learned global representations can be effectively used as features in tasks such as clustering, classification and visualization. Empirical results demonstrate that our representation significantly outperforms other state-of-the-art methods in such tasks.",
"Scientists working with large volumes of high-dimensional data, such as global climate patterns, stellar spectra, or human gene distributions, regularly confront the problem of dimensionality reduction: finding meaningful low-dimensional structures hidden in their high-dimensional observations. The human brain confronts the same problem in everyday perception, extracting from its high-dimensional sensory inputs—30,000 auditory nerve fibers or 106 optic nerve fibers—a manageably small number of perceptually relevant features. Here we describe an approach to solving dimensionality reduction problems that uses easily measured local metric information to learn the underlying global geometry of a data set. Unlike classical techniques such as principal component analysis (PCA) and multidimensional scaling (MDS), our approach is capable of discovering the nonlinear degrees of freedom that underlie complex natural observations, such as human handwriting or images of a face under different viewing conditions. In contrast to previous algorithms for nonlinear dimensionality reduction, ours efficiently computes a globally optimal solution, and, for an important class of data manifolds, is guaranteed to converge asymptotically to the true structure.",
"We present DeepWalk, a novel approach for learning latent representations of vertices in a network. These latent representations encode social relations in a continuous vector space, which is easily exploited by statistical models. DeepWalk generalizes recent advancements in language modeling and unsupervised feature learning (or deep learning) from sequences of words to graphs. DeepWalk uses local information obtained from truncated random walks to learn latent representations by treating walks as the equivalent of sentences. We demonstrate DeepWalk's latent representations on several multi-label network classification tasks for social networks such as BlogCatalog, Flickr, and YouTube. Our results show that DeepWalk outperforms challenging baselines which are allowed a global view of the network, especially in the presence of missing information. DeepWalk's representations can provide F1 scores up to 10 higher than competing methods when labeled data is sparse. In some experiments, DeepWalk's representations are able to outperform all baseline methods while using 60 less training data. DeepWalk is also scalable. It is an online learning algorithm which builds useful incremental results, and is trivially parallelizable. These qualities make it suitable for a broad class of real world applications such as network classification, and anomaly detection.",
"We analyze skip-gram with negative-sampling (SGNS), a word embedding method introduced by , and show that it is implicitly factorizing a word-context matrix, whose cells are the pointwise mutual information (PMI) of the respective word and context pairs, shifted by a global constant. We find that another embedding method, NCE, is implicitly factorizing a similar matrix, where each cell is the (shifted) log conditional probability of a word given its context. We show that using a sparse Shifted Positive PMI word-context matrix to represent words improves results on two word similarity tasks and one of two analogy tasks. When dense low-dimensional vectors are preferred, exact factorization with SVD can achieve solutions that are at least as good as SGNS's solutions for word similarity tasks. On analogy questions SGNS remains superior to SVD. We conjecture that this stems from the weighted nature of SGNS's factorization.",
"This paper studies the problem of embedding very large information networks into low-dimensional vector spaces, which is useful in many tasks such as visualization, node classification, and link prediction. Most existing graph embedding methods do not scale for real world information networks which usually contain millions of nodes. In this paper, we propose a novel network embedding method called the LINE,'' which is suitable for arbitrary types of information networks: undirected, directed, and or weighted. The method optimizes a carefully designed objective function that preserves both the local and global network structures. An edge-sampling algorithm is proposed that addresses the limitation of the classical stochastic gradient descent and improves both the effectiveness and the efficiency of the inference. Empirical experiments prove the effectiveness of the LINE on a variety of real-world information networks, including language networks, social networks, and citation networks. The algorithm is very efficient, which is able to learn the embedding of a network with millions of vertices and billions of edges in a few hours on a typical single machine. The source code of the LINE is available online https: github.com tangjianpku LINE .",
"",
""
]
}
|
1811.05614
|
2901910659
|
Many successful methods have been proposed for learning low dimensional representations on large-scale networks, while almost all existing methods are designed in inseparable processes, learning embeddings for entire networks even when only a small proportion of nodes are of interest. This leads to great inconvenience, especially on super-large or dynamic networks, where these methods become almost impossible to implement. In this paper, we formalize the problem of separated matrix factorization, based on which we elaborate a novel objective function that preserves both local and global information. We further propose SepNE, a simple and flexible network embedding algorithm which independently learns representations for different subsets of nodes in separated processes. By implementing separability, our algorithm reduces the redundant efforts to embed irrelevant nodes, yielding scalability to super-large networks, automatic implementation in distributed learning and further adaptations. We demonstrate the effectiveness of this approach on several real-world networks with different scales and subjects. With comparable accuracy, our approach significantly outperforms state-of-the-art baselines in running times on large networks.
|
The most similar work to ours is @cite_6 , which adopted a similar partition to achieve separability, although the rest of that work differs substantially from ours. Moreover, it focused mainly on technical issues in distributed learning and preserved only link information, whereas SepNE is a more general idea with a more elaborate optimization objective.
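The separability idea behind SepNE can be illustrated with a minimal sketch (this is purely illustrative, not the authors' algorithm: the function name, the SVD-based factorization, and the toy graph are all assumptions). Only the proximity-matrix rows belonging to the nodes of interest are factorized, so irrelevant nodes are never embedded:

```python
import numpy as np

def embed_subset(A, subset, dim):
    """Factorize only the rows of a proximity matrix that belong to the
    nodes of interest, instead of embedding the entire network.

    A      : (n, n) adjacency / proximity matrix
    subset : indices of the nodes we want embeddings for
    dim    : embedding dimension
    """
    M = A[subset, :]                      # (|S|, n) slice: local view of the graph
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U[:, :dim] * np.sqrt(s[:dim])  # embeddings for the subset only

# Toy graph: 6 nodes arranged in a ring
n = 6
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0

Z = embed_subset(A, [0, 2, 4], dim=2)
print(Z.shape)  # (3, 2)
```

The point of the sketch is only the access pattern: the factorization touches `A[subset, :]` and never the remaining rows, which is what makes separated processes (and distribution over machines) possible.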
|
{
"cite_N": [
"@cite_6"
],
"mid": [
"2142535891"
],
"abstract": [
"Natural graphs, such as social networks, email graphs, or instant messaging patterns, have become pervasive through the internet. These graphs are massive, often containing hundreds of millions of nodes and billions of edges. While some theoretical models have been proposed to study such graphs, their analysis is still difficult due to the scale and nature of the data. We propose a framework for large-scale graph decomposition and inference. To resolve the scale, our framework is distributed so that the data are partitioned over a shared-nothing set of machines. We propose a novel factorization technique that relies on partitioning a graph so as to minimize the number of neighboring vertices rather than edges across partitions. Our decomposition is based on a streaming algorithm. It is network-aware as it adapts to the network topology of the underlying computational hardware. We use local copies of the variables and an efficient asynchronous communication protocol to synchronize the replicated values in order to perform most of the computation without having to incur the cost of network communication. On a graph of 200 million vertices and 10 billion edges, derived from an email communication network, our algorithm retains convergence properties while allowing for almost linear scalability in the number of computers."
]
}
|
1811.05521
|
2901671134
|
Deep learning models are vulnerable to external attacks. In this paper, we propose a Reinforcement Learning (RL) based approach to generate adversarial examples for the pre-trained (target) models. We assume a semi black-box setting where the only access an adversary has to the target model is the class probabilities obtained for the input queries. We train a Deep Q Network (DQN) agent which, with experience, learns to attack only a small portion of image pixels to generate non-targeted adversarial images. Initially, an agent explores an environment by sequentially modifying random sets of image pixels and observes its effect on the class probabilities. At the end of an episode, it receives a positive (negative) reward if it succeeds (fails) to alter the label of the image. Experimental results with MNIST, CIFAR-10 and Imagenet datasets demonstrate that our RL framework is able to learn an effective attack policy.
|
Deep learning models are vulnerable to external attacks such as adversarial inputs: examples from a dataset can be perturbed in a manner that leaves the human-assigned label unchanged yet forces machine learning models to misclassify them. Such examples pose a potential threat to machine learning models deployed in the real world @cite_1 . @cite_0 proposed a differential-evolution-based approach that generates adversarial examples by modifying a single pixel of the input image. We suspect that different regions of an image have variable sensitivity to adversarial attack, and we attempt to exploit this to generate adversarial images with minimal changes to the input image.
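The intuition that pixels differ in sensitivity can be probed with a greedy sketch (a toy stand-in, not the differential-evolution method of @cite_0; the linear "classifier" and all names are illustrative assumptions). Each pixel is perturbed in turn and the change that most reduces the true-class probability is kept:

```python
import numpy as np

def pixel_sensitivity_attack(predict, image, true_label, delta=1.0):
    """Greedy probe of per-pixel sensitivity: perturb each pixel in turn and
    keep the single change that most reduces the probability of the true
    class. `predict` maps an image to a vector of class probabilities."""
    base = predict(image)[true_label]
    best_drop, best_pos = 0.0, None
    for idx in np.ndindex(image.shape):
        perturbed = image.copy()
        perturbed[idx] += delta
        drop = base - predict(perturbed)[true_label]
        if drop > best_drop:
            best_drop, best_pos = drop, idx
    adv = image.copy()
    if best_pos is not None:
        adv[best_pos] += delta
    return adv, best_drop

# Toy linear "classifier" over a 3x3 image, two classes (illustrative only)
W = np.random.default_rng(0).normal(size=(2, 9))
def predict(img):
    logits = W @ img.ravel()
    e = np.exp(logits - logits.max())
    return e / e.sum()

img = np.zeros((3, 3))
adv, drop = pixel_sensitivity_attack(predict, img, true_label=0)
print(drop >= 0.0)  # True: the probe never increases the true-class probability
```

Such a probe only needs class probabilities for queries, matching the semi black-box setting; the most sensitive pixels it finds are natural candidates for a minimal-change attack.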
|
{
"cite_N": [
"@cite_0",
"@cite_1"
],
"mid": [
"2765424254",
"2460937040"
],
"abstract": [
"Recent research has revealed that the output of Deep Neural Networks (DNN) can be easily altered by adding relatively small perturbations to the input vector. In this paper, we analyze an attack in an extremely limited scenario where only one pixel can be modified. For that we propose a novel method for generating one-pixel adversarial perturbations based on differential evolution. It requires less adversarial information and can fool more types of networks. The results show that 70.97 of the natural images can be perturbed to at least one target class by modifying just one pixel with 97.47 confidence on average. Thus, the proposed attack explores a different take on adversarial machine learning in an extreme limited scenario, showing that current DNNs are also vulnerable to such low dimension attacks.",
"Most existing machine learning classifiers are highly vulnerable to adversarial examples. An adversarial example is a sample of input data which has been modified very slightly in a way that is intended to cause a machine learning classifier to misclassify it. In many cases, these modifications can be so subtle that a human observer does not even notice the modification at all, yet the classifier still makes a mistake. Adversarial examples pose security concerns because they could be used to perform an attack on machine learning systems, even if the adversary has no access to the underlying model. Up to now, all previous work have assumed a threat model in which the adversary can feed data directly into the machine learning classifier. This is not always the case for systems operating in the physical world, for example those which are using signals from cameras and other sensors as an input. This paper shows that even in such physical world scenarios, machine learning systems are vulnerable to adversarial examples. We demonstrate this by feeding adversarial images obtained from cell-phone camera to an ImageNet Inception classifier and measuring the classification accuracy of the system. We find that a large fraction of adversarial examples are classified incorrectly even when perceived through the camera."
]
}
|
1811.05521
|
2901671134
|
Deep learning models are vulnerable to external attacks. In this paper, we propose a Reinforcement Learning (RL) based approach to generate adversarial examples for the pre-trained (target) models. We assume a semi black-box setting where the only access an adversary has to the target model is the class probabilities obtained for the input queries. We train a Deep Q Network (DQN) agent which, with experience, learns to attack only a small portion of image pixels to generate non-targeted adversarial images. Initially, an agent explores an environment by sequentially modifying random sets of image pixels and observes its effect on the class probabilities. At the end of an episode, it receives a positive (negative) reward if it succeeds (fails) to alter the label of the image. Experimental results with MNIST, CIFAR-10 and Imagenet datasets demonstrate that our RL framework is able to learn an effective attack policy.
|
We experimented with models trained on MNIST and CIFAR-10. We also demonstrate early results on the DenseNet-121 @cite_3 model trained on ImageNet. Results indicate that, with minimal changes to the input image, the RL agent is able to fool the target models. The Python code for the experiments with the MNIST and CIFAR-10 datasets is available at https://github.com/mandareln/deep-q-learning-adversarial
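The attack loop described in the abstract can be caricatured with a tiny tabular sketch (a drastic simplification of the paper's DQN: a bandit-style Q update over pixel actions, a toy linear target model, and illustrative names throughout). The agent only queries class probabilities and is rewarded when the predicted label flips:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy target model: linear softmax over a flattened 4x4 image, 3 classes
W = rng.normal(size=(3, 16))
def class_probs(img):
    z = W @ img.ravel()
    e = np.exp(z - z.max())
    return e / e.sum()

# Tabular Q-values, one per pixel action; the agent observes only class
# probabilities (semi black-box) and gets +1 when the label flips.
n_actions, eps, lr = 16, 0.2, 0.5
Q = np.zeros(n_actions)
for episode in range(200):
    img = np.zeros((4, 4))
    start_label = int(np.argmax(class_probs(img)))
    for step in range(3):                        # small pixel budget per episode
        a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q))
        img.ravel()[a] += 1.0                    # perturb one pixel
        flipped = int(np.argmax(class_probs(img))) != start_label
        reward = 1.0 if flipped else -0.1
        Q[a] += lr * (reward - Q[a])             # 1-step bandit-style update
        if flipped:
            break
print(Q.shape)  # (16,): a learned preference over which pixel to attack
```

A real DQN would condition the Q-values on the observed probabilities (the state) and use a replay buffer and target network; the sketch keeps only the reward structure.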
|
{
"cite_N": [
"@cite_3"
],
"mid": [
"2511730936"
],
"abstract": [
"Recent work has shown that convolutional networks can be substantially deeper, more accurate, and efficient to train if they contain shorter connections between layers close to the input and those close to the output. In this paper, we embrace this observation and introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion. Whereas traditional convolutional networks with L layers have L connections - one between each layer and its subsequent layer - our network has L(L+1) 2 direct connections. For each layer, the feature-maps of all preceding layers are used as inputs, and its own feature-maps are used as inputs into all subsequent layers. DenseNets have several compelling advantages: they alleviate the vanishing-gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters. We evaluate our proposed architecture on four highly competitive object recognition benchmark tasks (CIFAR-10, CIFAR-100, SVHN, and ImageNet). DenseNets obtain significant improvements over the state-of-the-art on most of them, whilst requiring less computation to achieve high performance. Code and pre-trained models are available at this https URL ."
]
}
|
1811.05090
|
2949622622
|
We introduce the variational filtering EM algorithm, a simple, general-purpose method for performing variational inference in dynamical latent variable models using information from only past and present variables, i.e. filtering. The algorithm is derived from the variational objective in the filtering setting and consists of an optimization procedure at each time step. By performing each inference optimization procedure with an iterative amortized inference model, we obtain a computationally efficient implementation of the algorithm, which we call amortized variational filtering. We present experiments demonstrating that this general-purpose method improves performance across several deep dynamical latent variable models.
|
Amortized variational inference @cite_48 @cite_0 has enabled many recently proposed probabilistic deep dynamical latent variable models, with applications to video @cite_7 @cite_32 @cite_14 @cite_17 @cite_2 @cite_22 @cite_21 @cite_50 @cite_3 @cite_27 , speech @cite_16 @cite_44 @cite_43 @cite_46 @cite_3 , handwriting @cite_16 , music @cite_8 , etc. While these models differ in their functional mappings, most fall within the general form of Eq. . Crucially, simply encoding the observation at each step is insufficient to accurately perform approximate inference, as the prior can vary across steps. Thus, with each model, a hand-crafted amortized inference procedure has been proposed. For instance, many filtering inference methods re-use various components of the generative model @cite_16 @cite_44 @cite_2 @cite_50 , while some methods introduce separate recurrent neural networks into the filtering procedure @cite_13 @cite_50 or encode the previous latent sample @cite_9 . Specifying a filtering method has been an engineering effort, as we have lacked a theoretical framework.
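A single filtering step of the general form discussed above can be sketched as follows (a generic, hedged illustration with a Gaussian prior carried over from the previous step and a Gaussian approximate posterior; the identity encoder/decoder and all names are assumptions, not taken from any of the cited models):

```python
import numpy as np

def gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
    """KL( N(mu_q, var_q) || N(mu_p, var_p) ), elementwise then summed."""
    return 0.5 * np.sum(
        logvar_p - logvar_q
        + (np.exp(logvar_q) + (mu_q - mu_p) ** 2) / np.exp(logvar_p)
        - 1.0
    )

def filtering_step(x_t, mu_prior, logvar_prior, encode, decode, rng):
    """One step of variational filtering: the approximate posterior at time t
    is fit against a prior that depends on past steps, and the per-step free
    energy (reconstruction error + KL) is returned."""
    mu_q, logvar_q = encode(x_t, mu_prior)              # amortized inference
    z = mu_q + np.exp(0.5 * logvar_q) * rng.normal(size=mu_q.shape)
    recon = np.sum((x_t - decode(z)) ** 2)              # stand-in for -log p(x|z)
    kl = gaussian_kl(mu_q, logvar_q, mu_prior, logvar_prior)
    return recon + kl, mu_q, logvar_q

# Toy model: near-identity encoder/decoder on 2-D data (illustrative only)
rng = np.random.default_rng(0)
encode = lambda x, mu_p: (0.5 * (x + mu_p), np.zeros_like(x) - 1.0)
decode = lambda z: z
fe, mu, lv = filtering_step(np.ones(2), np.zeros(2), np.zeros(2), encode, decode, rng)
print(fe > 0.0)  # True: the per-step free energy is positive in this toy setup
```

The key structural point is that the prior arguments change at every step, which is exactly why a fixed per-observation encoder is insufficient and why each cited model hand-crafts how the encoder sees the prior.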
|
{
"cite_N": [
"@cite_22",
"@cite_3",
"@cite_44",
"@cite_43",
"@cite_2",
"@cite_8",
"@cite_48",
"@cite_21",
"@cite_46",
"@cite_17",
"@cite_7",
"@cite_32",
"@cite_27",
"@cite_50",
"@cite_16",
"@cite_14",
"@cite_9",
"@cite_0",
"@cite_13"
],
"mid": [
"",
"",
"",
"",
"",
"2396566817",
"",
"",
"",
"",
"2952390294",
"",
"",
"",
"2950067852",
"",
"2396178844",
"",
"2166851633"
],
"abstract": [
"",
"",
"",
"",
"",
"How can we efficiently propagate uncertainty in a latent state representation with recurrent neural networks? This paper introduces stochastic recurrent neural networks which glue a deterministic recurrent neural network and a state space model together to form a stochastic and sequential neural generative model. The clear separation of deterministic and stochastic layers allows a structured variational inference network to track the factorization of the model's posterior distribution. By retaining both the nonlinear recursive structure of a recurrent neural network and averaging over the uncertainty in a latent path, like a state space model, we improve the state of the art results on the Blizzard and TIMIT speech modeling data sets by a large margin, while achieving comparable performances to competing methods on polyphonic music modeling.",
"",
"",
"",
"",
"In a given scene, humans can often easily predict a set of immediate future events that might happen. However, generalized pixel-level anticipation in computer vision systems is difficult because machine learning struggles with the ambiguity inherent in predicting the future. In this paper, we focus on predicting the dense trajectory of pixels in a scene, specifically what will move in the scene, where it will travel, and how it will deform over the course of one second. We propose a conditional variational autoencoder as a solution to this problem. In this framework, direct inference from the image shapes the distribution of possible trajectories, while latent variables encode any necessary information that is not available in the image. We show that our method is able to successfully predict events in a wide variety of scenes and can produce multiple different predictions when the future is ambiguous. Our algorithm is trained on thousands of diverse, realistic videos and requires absolutely no human labeling. In addition to non-semantic action prediction, we find that our method learns a representation that is applicable to semantic vision tasks.",
"",
"",
"",
"In this paper, we explore the inclusion of latent random variables into the dynamic hidden state of a recurrent neural network (RNN) by combining elements of the variational autoencoder. We argue that through the use of high-level latent random variables, the variational RNN (VRNN)1 can model the kind of variability observed in highly structured sequential data such as natural speech. We empirically evaluate the proposed model against related sequential models on four speech datasets and one handwriting dataset. Our results show the important roles that latent random variables can play in the RNN dynamic hidden state.",
"",
"We introduce Deep Variational Bayes Filters (DVBF), a new method for unsupervised learning of latent Markovian state space models. Leveraging recent advances in Stochastic Gradient Variational Bayes, DVBF can overcome intractable inference distributions by means of variational inference. Thus, it can handle highly nonlinear input data with temporal and spatial dependencies such as image sequences without domain knowledge. Our experiments show that enabling backpropagation through transitions enforces state space assumptions and significantly improves information content of the latent embedding. This also enables realistic long-term prediction.",
"",
"We develop stochastic variational inference, a scalable algorithm for approximating posterior distributions. We develop this technique for a large class of probabilistic models and we demonstrate it with two probabilistic topic models, latent Dirichlet allocation and the hierarchical Dirichlet process topic model. Using stochastic variational inference, we analyze several large collections of documents: 300K articles from Nature, 1.8M articles from The New York Times, and 3.8M articles from Wikipedia. Stochastic inference can easily handle data sets of this size and outperforms traditional variational inference, which can only handle a smaller subset. (We also show that the Bayesian nonparametric topic model outperforms its parametric counterpart.) Stochastic variational inference lets us apply complex Bayesian models to massive data sets."
]
}
|
1811.05021
|
2950782168
|
Dialogue Act (DA) classification is a challenging problem in dialogue interpretation, which aims to attach semantic labels to utterances and characterize the speaker's intention. Many existing approaches formulate DA classification as anything from multi-class classification to structured prediction, and they suffer from two limitations: a) these methods are either handcrafted-feature-based or have limited memories; b) adversarial examples cannot be correctly classified with traditional training methods. To address these issues, we first cast the problem as a question answering problem and propose an improved dynamic memory network with a hierarchical pyramidal utterance encoder. Moreover, we apply adversarial training to train our proposed model. We evaluate our model on two public datasets, i.e., the Switchboard dialogue act corpus and the MapTask corpus. Extensive experiments show that our proposed model is not only robust but also achieves better performance than several state-of-the-art baselines.
|
Most existing work on DA classification falls into two classes: a) treating DA classification as a multi-class classification problem @cite_38 @cite_40 , and b) treating it as a sequence labeling problem @cite_24 . Recently, deep learning approaches have improved many state-of-the-art results in NLP, including DA classification accuracy on open-domain conversations @cite_36 @cite_13 @cite_0 @cite_41 . @cite_36 use a mixture of CNNs and RNNs to represent utterances, where CNNs extract local features from each utterance and RNNs build a general view of the whole dialogue. @cite_13 design a deep neural network that benefits from pre-trained word embeddings combined with a variant RNN structure for the DA classification task. @cite_0 also investigate the performance of standard RNNs and CNNs on DA classification and achieve state-of-the-art results on the MRDA corpus using CNNs. @cite_41 propose a model based on CNNs and RNNs that incorporates preceding short texts as context to classify the current DA. Unlike these models, we cast the DA classification task as a question answering problem.
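The hierarchical CNN-over-words plus RNN-over-utterances design shared by several of these models can be sketched in a few lines (a toy numpy skeleton with random, untrained weights; every name and dimension is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_utterance(word_vecs, Wc):
    """CNN-style local features: window-2 convolution + max-pool over time."""
    pairs = np.concatenate([word_vecs[:-1], word_vecs[1:]], axis=1)  # (T-1, 2d)
    feats = np.tanh(pairs @ Wc)                                       # (T-1, h)
    return feats.max(axis=0)                                          # (h,)

def encode_dialogue(utt_vecs, Wh, Wx):
    """RNN over utterance vectors: a context-aware state per utterance,
    which a classifier head would map to DA labels."""
    h = np.zeros(Wh.shape[0])
    states = []
    for u in utt_vecs:
        h = np.tanh(Wh @ h + Wx @ u)
        states.append(h)
    return np.stack(states)

d, hid = 4, 3
Wc = rng.normal(size=(2 * d, hid))
Wh = rng.normal(size=(hid, hid))
Wx = rng.normal(size=(hid, hid))

dialogue = [rng.normal(size=(5, d)), rng.normal(size=(3, d))]  # two utterances
utts = np.stack([encode_utterance(u, Wc) for u in dialogue])
states = encode_dialogue(utts, Wh, Wx)
print(states.shape)  # (2, 3): one classification state per utterance
```

The two levels mirror the division of labor described above: the convolutional encoder captures intra-utterance features, while the recurrent layer carries dialogue context across utterances.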
|
{
"cite_N": [
"@cite_38",
"@cite_36",
"@cite_41",
"@cite_24",
"@cite_0",
"@cite_40",
"@cite_13"
],
"mid": [
"",
"1526096287",
"2297405797",
"2401527985",
"2295434193",
"1579930314",
"2573626026"
],
"abstract": [
"",
"The compositionality of meaning extends beyond the single sentence. Just as words combine to form the meaning of sentences, so do sentences combine to form the meaning of paragraphs, dialogues and general discourse. We introduce both a sentence model and a discourse model corresponding to the two levels of compositionality. The sentence model adopts convolution as the central operation for composing semantic vectors and is based on a novel hierarchical convolutional neural network. The discourse model extends the sentence model and is based on a recurrent neural network that is conditioned in a novel way both on the current sentence and on the current speaker. The discourse model is able to capture both the sequentiality of sentences and the interaction between different speakers. Without feature engineering or pretraining and with simple greedy decoding, the discourse model coupled to the sentence model obtains state of the art performance on a dialogue act classification experiment.",
"Recent approaches based on artificial neural networks (ANNs) have shown promising results for short-text classification. However, many short texts occur in sequences (e.g., sentences in a document or utterances in a dialog), and most existing ANN-based systems do not leverage the preceding short texts when classifying a subsequent one. In this work, we present a model based on recurrent neural networks and convolutional neural networks that incorporates the preceding short texts. Our model achieves state-of-the-art results on three different datasets for dialog act prediction.",
"We use a combination of linear support vector machines and hidden markov models for dialog act tagging in the HCRC MapTask corpus, and obtain better results than those previously reported. Support vector machines allow easy integration of sparse highdimensional text features and dense low-dimensional acoustic features, and produce posterior probabilities usable by sequence labelling algorithms. The relative contribution of text and acoustic features for each class of dialog act is analyzed. Index Terms: dialog acts, discourse, support vector machines, classification probabilities, Viterbi algorithm.",
"This paper presents a novel latent variable recurrent neural network architecture for jointly modeling sequences of words and (possibly latent) discourse relations between adjacent sentences. A recurrent neural network generates individual words, thus reaping the benefits of discriminatively-trained vector representations. The discourse relations are represented with a latent variable, which can be predicted or marginalized, depending on the task. The resulting model can therefore employ a training objective that includes not only discourse relation classification, but also word prediction. As a result, it outperforms state-of-the-art alternatives for two tasks: implicit discourse relation classification in the Penn Discourse Treebank, and dialog act classification in the Switchboard corpus. Furthermore, by marginalizing over latent discourse relations at test time, we obtain a discourse informed language model, which improves over a strong LSTM baseline.",
"In this paper we present a multidimensional approach to utterance segmentation and automatic dialogue act classification. We show that the use of multiple dimensions in distinguishing and annotating units not only supports a more accurate analysis of human communication, but can also help to solve some notorious problems concerning the segmentation of dialogue into functional units. We introduce the use of per-dimension segmentation for dialogue act taxonomies that feature multi-functionality and show that better classification results are obtained when using a separate segmentation for each dimension than when using one segmentation that fits all dimensions. Three machine learning techniques are applied and compared on the task of automatic classification of multiple communicative functions of utterances. The results are encouraging and indicate that communicative functions in important dimensions are easy machinelearnable.",
"This paper applies a deep long-short term memory (LSTM) structure to classify dialogue acts in open-domain conversations."
]
}
|
1811.05021
|
2950782168
|
Dialogue Act (DA) classification is a challenging problem in dialogue interpretation, which aims to attach semantic labels to utterances and characterize the speaker's intention. Currently, many existing approaches formulate the DA classification problem ranging from multi-classification to structured prediction, which suffer from two limitations: a) these methods are either handcrafted feature-based or have limited memories. b) adversarial examples can't be correctly classified by traditional training methods. To address these issues, in this paper we first cast the problem into a question and answering problem and proposed an improved dynamic memory networks with hierarchical pyramidal utterance encoder. Moreover, we apply adversarial training to train our proposed model. We evaluate our model on two public datasets, i.e., Switchboard dialogue act corpus and the MapTask corpus. Extensive experiments show that our proposed model is not only robust, but also achieves better performance when compared with some state-of-the-art baselines.
|
One line of research related to dynamic memory networks concerns attention and memory mechanisms @cite_46 @cite_4, which have been successfully applied to many tasks such as text generation @cite_49 @cite_3 and question answering @cite_18 @cite_2. In these works, memory is encoded as a continuous representation, and operations on memory (e.g., reading and writing) are typically implemented with neural networks. The attention mechanism can be viewed as a compositional function, where lower-level representations are regarded as the memory and the function assigns a weight to each lower position when computing an upper-level representation. Such attention-based approaches have achieved promising performance on a variety of NLP tasks @cite_44. Building on these works, @cite_48 developed the dynamic memory network, which combines a memory-updating mechanism with an attention mechanism, and @cite_39 proposed an improved dynamic memory network with modifications to the memory and input modules. Dynamic memory networks have been successfully applied in many scenarios such as question answering and sentiment analysis. To the best of our knowledge, this is the first time a dynamic memory network has been applied to DA classification.
|
{
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_48",
"@cite_3",
"@cite_39",
"@cite_44",
"@cite_49",
"@cite_2",
"@cite_46"
],
"mid": [
"2741903908",
"2133564696",
"2131494463",
"",
"",
"2949335953",
"2953022248",
"2739749670",
"2950527759"
],
"abstract": [
"",
"Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consists of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition.",
"Most tasks in natural language processing can be cast into question answering (QA) problems over language input. We introduce the dynamic memory network (DMN), a neural network architecture which processes input sequences and questions, forms episodic memories, and generates relevant answers. Questions trigger an iterative attention process which allows the model to condition its attention on the inputs and the result of previous iterations. These results are then reasoned over in a hierarchical recurrent sequence model to generate answers. The DMN can be trained end-to-end and obtains state-of-the-art results on several types of tasks and datasets: question answering (Facebook's bAbI dataset), text classification for sentiment analysis (Stanford Sentiment Treebank) and sequence modeling for part-of-speech tagging (WSJ-PTB). The training for these different tasks relies exclusively on trained word vector representations and input-question-answer triplets.",
"",
"",
"An attentional mechanism has lately been used to improve neural machine translation (NMT) by selectively focusing on parts of the source sentence during translation. However, there has been little work exploring useful architectures for attention-based NMT. This paper examines two simple and effective classes of attentional mechanism: a global approach which always attends to all source words and a local one that only looks at a subset of source words at a time. We demonstrate the effectiveness of both approaches over the WMT translation tasks between English and German in both directions. With local attention, we achieve a significant gain of 5.0 BLEU points over non-attentional systems which already incorporate known techniques such as dropout. Our ensemble model using different attention architectures has established a new state-of-the-art result in the WMT'15 English to German translation task with 25.9 BLEU points, an improvement of 1.0 BLEU points over the existing best system backed by NMT and an n-gram reranker.",
"Automatically generating a natural language description of an image has attracted interests recently both because of its importance in practical applications and because it connects two major artificial intelligence fields: computer vision and natural language processing. Existing approaches are either top-down, which start from a gist of an image and convert it into words, or bottom-up, which come up with words describing various aspects of an image and then combine them. In this paper, we propose a new algorithm that combines both approaches through a model of semantic attention. Our algorithm learns to selectively attend to semantic concept proposals and fuse them into hidden states and outputs of recurrent neural networks. The selection and fusion form a feedback connecting the top-down and bottom-up computation. We evaluate our algorithm on two public benchmarks: Microsoft COCO and Flickr30K. Experimental results show that our algorithm significantly outperforms the state-of-the-art approaches consistently across different evaluation metrics.",
"Machine comprehension(MC) style question answering is a representative problem in natural language processing. Previous methods rarely spend time on the improvement of encoding layer, especially the embedding of syntactic information and name entity of the words, which are very crucial to the quality of encoding. Moreover, existing attention methods represent each query word as a vector or use a single vector to represent the whole query sentence, neither of them can handle the proper weight of the key words in query sentence. In this paper, we introduce a novel neural network architecture called Multi-layer Embedding with Memory Network(MEMEN) for machine reading task. In the encoding layer, we employ classic skip-gram model to the syntactic and semantic information of the words to train a new kind of embedding layer. We also propose a memory network of full-orientation matching of the query and passage to catch more pivotal information. Experiments show that our model has competitive results both from the perspectives of precision and efficiency in Stanford Question Answering Dataset(SQuAD) among all published results and achieves the state-of-the-art results on TriviaQA dataset.",
"We extend the capabilities of neural networks by coupling them to external memory resources, which they can interact with by attentional processes. The combined system is analogous to a Turing Machine or Von Neumann architecture but is differentiable end-to-end, allowing it to be efficiently trained with gradient descent. Preliminary results demonstrate that Neural Turing Machines can infer simple algorithms such as copying, sorting, and associative recall from input and output examples."
]
}
|
1811.05014
|
2952693835
|
This paper introduces a fast and efficient network architecture, NeXtVLAD, to aggregate frame-level features into a compact feature vector for large-scale video classification. Briefly speaking, the basic idea is to decompose a high-dimensional feature into a group of relatively low-dimensional vectors with attention before applying NetVLAD aggregation over time. This NeXtVLAD approach turns out to be both effective and parameter efficient in aggregating temporal information. In the 2nd Youtube-8M video understanding challenge, a single NeXtVLAD model with less than 80M parameters achieves a GAP score of 0.87846 in private leaderboard. A mixture of 3 NeXtVLAD models results in 0.88722, which is ranked 3rd over 394 teams. The code is publicly available at this https URL.
|
Before the era of deep neural networks, researchers proposed many encoding methods, including BoW (Bag of visual Words) @cite_26, FV (Fisher Vector) @cite_30 and VLAD (Vector of Locally Aggregated Descriptors) @cite_13, to aggregate local image descriptors into a compact global vector, aiming at more compact image representations and better performance in large-scale visual recognition. Such aggregation methods were also applied to large-scale video classification in some early works @cite_15 @cite_31. Recently, @cite_21 proposed a differentiable module, NetVLAD, to integrate VLAD into neural networks, achieving significant improvement on the task of place recognition. The architecture was then proved to be very effective in aggregating spatial and temporal information for compact video representation @cite_18 @cite_28.
|
{
"cite_N": [
"@cite_30",
"@cite_18",
"@cite_26",
"@cite_15",
"@cite_28",
"@cite_21",
"@cite_31",
"@cite_13"
],
"mid": [
"",
"",
"2131846894",
"2142194269",
"2608988379",
"2951019013",
"2034328688",
"2012592962"
],
"abstract": [
"",
"",
"We describe an approach to object and scene retrieval which searches for and localizes all the occurrences of a user outlined object in a video. The object is represented by a set of viewpoint invariant region descriptors so that recognition can proceed successfully despite changes in viewpoint, illumination and partial occlusion. The temporal continuity of the video within a shot is used to track the regions in order to reject unstable regions and reduce the effects of noise in the descriptors. The analogy with text retrieval is in the implementation where matches on descriptors are pre-computed (using vector quantization), and inverted file systems and document rankings are used. The result is that retrieved is immediate, returning a ranked list of key frames shots in the manner of Google. The method is illustrated for matching in two full length feature films.",
"The aim of this paper is to address recognition of natural human actions in diverse and realistic video settings. This challenging but important subject has mostly been ignored in the past due to several problems one of which is the lack of realistic and annotated video datasets. Our first contribution is to address this limitation and to investigate the use of movie scripts for automatic annotation of human actions in videos. We evaluate alternative methods for action retrieval from scripts and show benefits of a text-based classifier. Using the retrieved action samples for visual learning, we next turn to the problem of action classification in video. We present a new method for video classification that builds upon and extends several recent ideas including local space-time features, space-time pyramids and multi-channel non-linear SVMs. The method is shown to improve state-of-the-art results on the standard KTH action dataset by achieving 91.8 accuracy. Given the inherent problem of noisy labels in automatic annotation, we particularly investigate and show high tolerance of our method to annotation errors in the training set. We finally apply the method to learning and classifying challenging action classes in movies and show promising results.",
"In this work, we introduce a new video representation for action classification that aggregates local convolutional features across the entire spatio-temporal extent of the video. We do so by integrating state-of-the-art two-stream networks [42] with learnable spatio-temporal feature aggregation [6]. The resulting architecture is end-to-end trainable for whole-video classification. We investigate different strategies for pooling across space and time and combining signals from the different streams. We find that: (i) it is important to pool jointly across space and time, but (ii) appearance and motion streams are best aggregated into their own separate representations. Finally, we show that our representation outperforms the two-stream base architecture by a large margin (13 relative) as well as outperforms other baselines with comparable base architectures on HMDB51, UCF101, and Charades video classification benchmarks.",
"We tackle the problem of large scale visual place recognition, where the task is to quickly and accurately recognize the location of a given query photograph. We present the following three principal contributions. First, we develop a convolutional neural network (CNN) architecture that is trainable in an end-to-end manner directly for the place recognition task. The main component of this architecture, NetVLAD, is a new generalized VLAD layer, inspired by the \"Vector of Locally Aggregated Descriptors\" image representation commonly used in image retrieval. The layer is readily pluggable into any CNN architecture and amenable to training via backpropagation. Second, we develop a training procedure, based on a new weakly supervised ranking loss, to learn parameters of the architecture in an end-to-end manner from images depicting the same places over time downloaded from Google Street View Time Machine. Finally, we show that the proposed architecture significantly outperforms non-learnt image representations and off-the-shelf CNN descriptors on two challenging place recognition benchmarks, and improves over current state-of-the-art compact image representations on standard image retrieval benchmarks.",
"Local space-time features capture local events in video and can be adapted to the size, the frequency and the velocity of moving patterns. In this paper, we demonstrate how such features can be used for recognizing complex motion patterns. We construct video representations in terms of local space-time features and integrate such representations with SVM classification schemes for recognition. For the purpose of evaluation we introduce a new video database containing 2391 sequences of six human actions performed by 25 people in four different scenarios. The presented results of action recognition justify the proposed method and demonstrate its advantage compared to other relative approaches for action recognition.",
"We address the problem of image search on a very large scale, where three constraints have to be considered jointly: the accuracy of the search, its efficiency, and the memory usage of the representation. We first propose a simple yet efficient way of aggregating local image descriptors into a vector of limited dimension, which can be viewed as a simplification of the Fisher kernel representation. We then show how to jointly optimize the dimension reduction and the indexing algorithm, so that it best preserves the quality of vector comparison. The evaluation shows that our approach significantly outperforms the state of the art: the search accuracy is comparable to the bag-of-features approach for an image representation that fits in 20 bytes. Searching a 10 million image dataset takes about 50ms."
]
}
|
1811.05014
|
2952693835
|
This paper introduces a fast and efficient network architecture, NeXtVLAD, to aggregate frame-level features into a compact feature vector for large-scale video classification. Briefly speaking, the basic idea is to decompose a high-dimensional feature into a group of relatively low-dimensional vectors with attention before applying NetVLAD aggregation over time. This NeXtVLAD approach turns out to be both effective and parameter efficient in aggregating temporal information. In the 2nd Youtube-8M video understanding challenge, a single NeXtVLAD model with less than 80M parameters achieves a GAP score of 0.87846 in private leaderboard. A mixture of 3 NeXtVLAD models results in 0.88722, which is ranked 3rd over 394 teams. The code is publicly available at this https URL.
|
Recently, with the availability of large-scale video datasets @cite_27 @cite_0 @cite_23 and the massive computation power of GPUs, deep neural networks have achieved remarkable advances in large-scale video classification @cite_14 @cite_16 @cite_11 @cite_4. These approaches can be roughly divided into four categories: (a) Spatiotemporal Convolutional Networks @cite_27 @cite_11 @cite_4, which mainly rely on convolution and pooling to aggregate temporal information along with spatial information; (b) Two-Stream Networks @cite_16 @cite_2 @cite_9 @cite_7, which utilize stacked optical flow to recognize human motion in addition to the context frame images; (c) Recurrent Spatial Networks @cite_14 @cite_5, which apply recurrent neural networks such as LSTMs or GRUs to model temporal information in videos; and (d) other approaches @cite_8 @cite_3 @cite_17 @cite_6, which use alternative solutions to generate compact features for video representation and classification.
|
{
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_7",
"@cite_8",
"@cite_9",
"@cite_17",
"@cite_3",
"@cite_6",
"@cite_0",
"@cite_27",
"@cite_23",
"@cite_2",
"@cite_5",
"@cite_16",
"@cite_11"
],
"mid": [
"28988658",
"2952633803",
"",
"1926645898",
"",
"2950551233",
"",
"",
"1927052826",
"",
"2524365899",
"2342662179",
"2180092181",
"2952186347",
"1983364832"
],
"abstract": [
"We propose in this paper a fully automated deep model, which learns to classify human actions without using any prior knowledge. The first step of our scheme, based on the extension of Convolutional Neural Networks to 3D, automatically learns spatio-temporal features. A Recurrent Neural Network is then trained to classify each sequence considering the temporal evolution of the learned features for each timestep. Experimental results on the KTH dataset show that the proposed approach outperforms existing deep models, and gives comparable results with the best related works.",
"We propose a simple, yet effective approach for spatiotemporal feature learning using deep 3-dimensional convolutional networks (3D ConvNets) trained on a large scale supervised video dataset. Our findings are three-fold: 1) 3D ConvNets are more suitable for spatiotemporal feature learning compared to 2D ConvNets; 2) A homogeneous architecture with small 3x3x3 convolution kernels in all layers is among the best performing architectures for 3D ConvNets; and 3) Our learned features, namely C3D (Convolutional 3D), with a simple linear classifier outperform state-of-the-art methods on 4 different benchmarks and are comparable with current best methods on the other 2 benchmarks. In addition, the features are compact: achieving 52.8 accuracy on UCF101 dataset with only 10 dimensions and also very efficient to compute due to the fast inference of ConvNets. Finally, they are conceptually very simple and easy to train and use.",
"",
"In this paper we present a method to capture video-wide temporal information for action recognition. We postulate that a function capable of ordering the frames of a video temporally (based on the appearance) captures well the evolution of the appearance within the video. We learn such ranking functions per video via a ranking machine and use the parameters of these as a new video representation. The proposed method is easy to interpret and implement, fast to compute and effective in recognizing a wide variety of actions. We perform a large number of evaluations on datasets for generic action recognition (Hollywood2 and HMDB51), fine-grained actions (MPII- cooking activities) and gestures (Chalearn). Results show that the proposed method brings an absolute improvement of 7–10 , while being compatible with and complementary to further improvements in appearance and local motion based methods.",
"",
"We introduce the concept of \"dynamic image\", a novel compact representation of videos useful for video analysis, particularly in combination with convolutional neural networks (CNNs). A dynamic image encodes temporal data such as RGB or optical flow videos by using the concept of rank pooling'. The idea is to learn a ranking machine that captures the temporal evolution of the data and to use the parameters of the latter as a representation. When a linear ranking machine is used, the resulting representation is in the form of an image, which we call dynamic because it summarizes the video dynamics in addition of appearance. This is a powerful idea because it allows to convert any video to an image so that existing CNN models pre-trained for the analysis of still images can be immediately extended to videos. We also present an efficient and effective approximate rank pooling operator, accelerating standard rank pooling algorithms by orders of magnitude, and formulate that as a CNN layer. This new layer allows generalizing dynamic images to dynamic feature maps. We demonstrate the power of the new representations on standard benchmarks in action recognition achieving state-of-the-art performance.",
"",
"",
"In spite of many dataset efforts for human action recognition, current computer vision algorithms are still severely limited in terms of the variability and complexity of the actions that they can recognize. This is in part due to the simplicity of current benchmarks, which mostly focus on simple actions and movements occurring on manually trimmed videos. In this paper we introduce ActivityNet, a new large-scale video benchmark for human activity understanding. Our benchmark aims at covering a wide range of complex human activities that are of interest to people in their daily living. In its current version, ActivityNet provides samples from 203 activity classes with an average of 137 untrimmed videos per class and 1.41 activity instances per video, for a total of 849 video hours. We illustrate three scenarios in which ActivityNet can be used to compare algorithms for human activity understanding: untrimmed video classification, trimmed activity classification and activity detection.",
"",
"Many recent advancements in Computer Vision are attributed to large datasets. Open-source software packages for Machine Learning and inexpensive commodity hardware have reduced the barrier of entry for exploring novel approaches at scale. It is possible to train models over millions of examples within a few days. Although large-scale datasets exist for image understanding, such as ImageNet, there are no comparable size video classification datasets. In this paper, we introduce YouTube-8M, the largest multi-label video classification dataset, composed of 8 million videos (500K hours of video), annotated with a vocabulary of 4800 visual entities. To get the videos and their labels, we used a YouTube video annotation system, which labels videos with their main topics. While the labels are machine-generated, they have high-precision and are derived from a variety of human-based signals including metadata and query click signals. We filtered the video labels (Knowledge Graph entities) using both automated and manual curation strategies, including asking human raters if the labels are visually recognizable. Then, we decoded each video at one-frame-per-second, and used a Deep CNN pre-trained on ImageNet to extract the hidden representation immediately prior to the classification layer. Finally, we compressed the frame features and make both the features and video-level labels available for download. We trained various (modest) classification models on the dataset, evaluated them using popular evaluation metrics, and report them as baselines. Despite the size of the dataset, some of our models train to convergence in less than a day on a single machine using TensorFlow. We plan to release code for training a TensorFlow model and for computing metrics.",
"Recent applications of Convolutional Neural Networks (ConvNets) for human action recognition in videos have proposed different solutions for incorporating the appearance and motion information. We study a number of ways of fusing ConvNet towers both spatially and temporally in order to best take advantage of this spatio-temporal information. We make the following findings: (i) that rather than fusing at the softmax layer, a spatial and temporal network can be fused at a convolution layer without loss of performance, but with a substantial saving in parameters, (ii) that it is better to fuse such networks spatially at the last convolutional layer than earlier, and that additionally fusing at the class prediction layer can boost accuracy, finally (iii) that pooling of abstract convolutional features over spatiotemporal neighbourhoods further boosts performance. Based on these studies we propose a new ConvNet architecture for spatiotemporal fusion of video snippets, and evaluate its performance on standard benchmarks where this architecture achieves state-of-the-art results.",
"We propose an approach to learn spatio-temporal features in videos from intermediate visual representations we call \"percepts\" using Gated-Recurrent-Unit Recurrent Networks (GRUs).Our method relies on percepts that are extracted from all level of a deep convolutional network trained on the large ImageNet dataset. While high-level percepts contain highly discriminative information, they tend to have a low-spatial resolution. Low-level percepts, on the other hand, preserve a higher spatial resolution from which we can model finer motion patterns. Using low-level percepts can leads to high-dimensionality video representations. To mitigate this effect and control the model number of parameters, we introduce a variant of the GRU model that leverages the convolution operations to enforce sparse connectivity of the model units and share parameters across the input spatial locations. We empirically validate our approach on both Human Action Recognition and Video Captioning tasks. In particular, we achieve results equivalent to state-of-art on the YouTube2Text dataset using a simpler text-decoder model and without extra 3D CNN features.",
"We investigate architectures of discriminatively trained deep Convolutional Networks (ConvNets) for action recognition in video. The challenge is to capture the complementary information on appearance from still frames and motion between frames. We also aim to generalise the best performing hand-crafted features within a data-driven learning framework. Our contribution is three-fold. First, we propose a two-stream ConvNet architecture which incorporates spatial and temporal networks. Second, we demonstrate that a ConvNet trained on multi-frame dense optical flow is able to achieve very good performance in spite of limited training data. Finally, we show that multi-task learning, applied to two different action classification datasets, can be used to increase the amount of training data and improve the performance on both. Our architecture is trained and evaluated on the standard video actions benchmarks of UCF-101 and HMDB-51, where it is competitive with the state of the art. It also exceeds by a large margin previous attempts to use deep nets for video classification.",
"We consider the automated recognition of human actions in surveillance videos. Most current methods build classifiers based on complex handcrafted features computed from the raw inputs. Convolutional neural networks (CNNs) are a type of deep model that can act directly on the raw inputs. However, such models are currently limited to handling 2D inputs. In this paper, we develop a novel 3D CNN model for action recognition. This model extracts features from both the spatial and the temporal dimensions by performing 3D convolutions, thereby capturing the motion information encoded in multiple adjacent frames. The developed model generates multiple channels of information from the input frames, and the final feature representation combines information from all channels. To further boost the performance, we propose regularizing the outputs with high-level features and combining the predictions of a variety of different models. We apply the developed models to recognize human actions in the real-world environment of airport surveillance videos, and they achieve superior performance in comparison to baseline methods."
]
}
|
1811.05013
|
2901725278
|
We explore blindfold (question-only) baselines for Embodied Question Answering. The EmbodiedQA task requires an agent to answer a question by intelligently navigating in a simulated environment, gathering necessary visual information only through first-person vision before finally answering. Consequently, a blindfold baseline which ignores the environment and visual information is a degenerate solution, yet we show through our experiments on the EQAv1 dataset that a simple question-only baseline achieves state-of-the-art results on the EmbodiedQA task in all cases except when the agent is spawned extremely close to the object.
|
: introduced the PACMAN-RL+Q model, which is bootstrapped with expert shortest-path demonstrations and later fine-tuned with REINFORCE @cite_28. This model consists of a hierarchical navigation module, comprising a planner and a controller, and a question answering module that acts when the navigation module has given up control. A later work introduces Neural Modular Control (NMC), a hierarchical policy network that operates over expert sub-policy sketches. The master and sub-policies are initialized with Behavior Cloning (BC) and later fine-tuned with Asynchronous Advantage Actor-Critic (A3C) @cite_21.
|
{
"cite_N": [
"@cite_28",
"@cite_21"
],
"mid": [
"2119717200",
"2260756217"
],
"abstract": [
"This article presents a general class of associative reinforcement learning algorithms for connectionist networks containing stochastic units. These algorithms, called REINFORCE algorithms, are shown to make weight adjustments in a direction that lies along the gradient of expected reinforcement in both immediate-reinforcement tasks and certain limited forms of delayed-reinforcement tasks, and they do this without explicitly computing gradient estimates or even storing information from which such estimates could be computed. Specific examples of such algorithms are presented, some of which bear a close relationship to certain existing algorithms while others are novel but potentially interesting in their own right. Also given are results that show how such algorithms can be naturally integrated with backpropagation. We close with a brief discussion of a number of additional issues surrounding the use of such algorithms, including what is known about their limiting behaviors as well as further considerations that might be used to help develop similar but potentially more powerful reinforcement learning algorithms.",
"We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input."
]
}
|
1811.05013
|
2901725278
|
We explore blindfold (question-only) baselines for Embodied Question Answering. The EmbodiedQA task requires an agent to answer a question by intelligently navigating in a simulated environment, gathering necessary visual information only through first-person vision before finally answering. Consequently, a blindfold baseline which ignores the environment and visual information is a degenerate solution, yet we show through our experiments on the EQAv1 dataset that a simple question-only baseline achieves state-of-the-art results on the EmbodiedQA task in all cases except when the agent is spawned extremely close to the object.
|
Many recent studies in language and vision show how biases in a dataset allow models to perform well on a task without leveraging the meaning of the text or image in the underlying dataset. A simple CNN-BoW model was shown to achieve state-of-the-art results @cite_11 on the Visual7W @cite_9 task while also performing surprisingly well compared to the most complex systems proposed for the VQA dataset @cite_3 and other joint vision and language tasks @cite_4 @cite_20 . Simple nearest neighbor approaches have been shown to perform well on image captioning datasets @cite_23 . This phenomenon has also been observed in language processing tasks. On the Story-cloze task, which was presented to evaluate common-sense reasoning, state-of-the-art performance was achieved by ignoring the narrative and training a linear classifier with features related to the writing style of the two potential endings, rather than their content. Similar observations were made on Natural Language Inference (NLI) datasets, where methods ignoring the context and relying only on the hypothesis perform remarkably well @cite_13 @cite_5 . Most recently, question-only and passage-only baselines on several QA datasets highlighted similar issues @cite_18 .
|
{
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_9",
"@cite_3",
"@cite_23",
"@cite_5",
"@cite_13",
"@cite_20",
"@cite_11"
],
"mid": [
"2885826215",
"2123024445",
"2136462581",
"2950761309",
"1706899115",
"",
"2798358706",
"2882995289",
"2776202271"
],
"abstract": [
"Many recent papers address reading comprehension, where examples consist of (question, passage, answer) tuples. Presumably, a model must combine information from both questions and passages to predict corresponding answers. However, despite intense interest in the topic, with hundreds of published papers vying for leaderboard dominance, basic questions about the difficulty of many popular benchmarks remain unanswered. In this paper, we establish sensible baselines for the bAbI, SQuAD, CBT, CNN, and Who-did-What datasets, finding that question- and passage-only models often perform surprisingly well. On @math out of @math bAbI tasks, passage-only models achieve greater than @math accuracy, sometimes matching the full model. Interestingly, while CBT provides @math -sentence stories only the last is needed for comparably accurate prediction. By comparison, SQuAD and CNN appear better-constructed.",
"Modern visual recognition systems are often limited in their ability to scale to large numbers of object categories. This limitation is in part due to the increasing difficulty of acquiring sufficient training data in the form of labeled images as the number of object categories grows. One remedy is to leverage data from other sources - such as text data - both to train visual models and to constrain their predictions. In this paper we present a new deep visual-semantic embedding model trained to identify visual objects using both labeled image data as well as semantic information gleaned from unannotated text. We demonstrate that this model matches state-of-the-art performance on the 1000-class ImageNet object recognition challenge while making more semantically reasonable errors, and also show that the semantic information can be exploited to make predictions about tens of thousands of image labels not observed during training. Semantic knowledge improves such zero-shot predictions achieving hit rates of up to 18 across thousands of novel labels never seen by the visual model.",
"We have seen great progress in basic perceptual tasks such as object recognition and detection. However, AI models still fail to match humans in high-level vision tasks due to the lack of capacities for deeper reasoning. Recently the new task of visual question answering (QA) has been proposed to evaluate a model’s capacity for deep image understanding. Previous works have established a loose, global association between QA sentences and images. However, many questions and answers, in practice, relate to local regions in the images. We establish a semantic link between textual descriptions and image regions by object-level grounding. It enables a new type of QA with visual answers, in addition to textual answers used in previous work. We study the visual QA tasks in a grounded setting with a large collection of 7W multiple-choice QA pairs. Furthermore, we evaluate human performance and several baseline models on the QA tasks. Finally, we propose a novel LSTM model with spatial attention to tackle the 7W QA tasks.",
"We propose the task of free-form and open-ended Visual Question Answering (VQA). Given an image and a natural language question about the image, the task is to provide an accurate natural language answer. Mirroring real-world scenarios, such as helping the visually impaired, both the questions and answers are open-ended. Visual questions selectively target different areas of an image, including background details and underlying context. As a result, a system that succeeds at VQA typically needs a more detailed understanding of the image and complex reasoning than a system producing generic image captions. Moreover, VQA is amenable to automatic evaluation, since many open-ended answers contain only a few words or a closed set of answers that can be provided in a multiple-choice format. We provide a dataset containing 0.25M images, 0.76M questions, and 10M answers (www.visualqa.org), and discuss the information it provides. Numerous baselines and methods for VQA are provided and compared with human performance. Our VQA demo is available on CloudCV (this http URL).",
"We explore a variety of nearest neighbor baseline approaches for image captioning. These approaches find a set of nearest neighbor images in the training set from which a caption may be borrowed for the query image. We select a caption for the query image by finding the caption that best represents the \"consensus\" of the set of candidate captions gathered from the nearest neighbor images. When measured by automatic evaluation metrics on the MS COCO caption evaluation server, these approaches perform as well as many recent approaches that generate novel captions. However, human studies show that a method that generates novel captions is still preferred over the nearest neighbor approach.",
"",
"We propose a hypothesis only baseline for diagnosing Natural Language Inference (NLI). Especially when an NLI dataset assumes inference is occurring based purely on the relationship between a context and a hypothesis, it follows that assessing entailment relations while ignoring the provided context is a degenerate solution. Yet, through experiments on ten distinct NLI datasets, we find that this approach, which we refer to as a hypothesis-only model, is able to significantly outperform a majority class baseline across a number of NLI datasets. Our analysis suggests that statistical irregularities may allow a model to perform NLI in some datasets beyond what should be achievable without access to the context.",
"Multimodal representations of text and images have become popular in recent years. Text however has inherent ambiguities when describing visual scenes, leading to the recent development of datasets with detailed graphical descriptions in the form of scene graphs. We consider the task of joint representation of semantically precise scene graphs and images. We propose models for representing scene graphs and aligning them with images. We investigate methods based on bag-of-words, subpath representations, as well as neural networks. Our investigation proposes and contrasts several models which can address this task and highlights some unique challenges in both designing models and evaluation.",
"We introduce The House Of inteRactions (THOR), a framework for visual AI research, available at this http URL AI2-THOR consists of near photo-realistic 3D indoor scenes, where AI agents can navigate in the scenes and interact with objects to perform tasks. AI2-THOR enables research in many different domains including but not limited to deep reinforcement learning, imitation learning, learning by interaction, planning, visual question answering, unsupervised representation learning, object detection and segmentation, and learning models of cognition. The goal of AI2-THOR is to facilitate building visually intelligent models and push the research forward in this domain."
]
}
|
1811.05042
|
2901606971
|
Unsupervised domain adaptation methods aim to alleviate performance degradation caused by domain-shift by learning domain-invariant representations. Existing deep domain adaptation methods focus on holistic feature alignment by matching source and target holistic feature distributions, without considering local features and their multi-mode statistics. We show that the learned local feature patterns are more generic and transferable and a further local feature distribution matching enables fine-grained feature alignment. In this paper, we present a method for learning domain-invariant local feature patterns and jointly aligning holistic and local feature statistics. Comparisons to the state-of-the-art unsupervised domain adaptation methods on two popular benchmark datasets demonstrate the superiority of our approach and its effectiveness on alleviating negative transfer.
|
Our work is also related to feature aggregation methods, such as vectors of locally aggregated descriptors (VLAD) @cite_11 , bag of visual words (BoW) @cite_24 , and Fisher vectors (FV) @cite_21 . Previously, these methods were usually applied to aggregate hand-crafted keypoint descriptors, such as SIFT, as a post-processing step, and only recently have they been extended to encode deep convolutional features with end-to-end training @cite_23 . VLAD has been successfully applied to image retrieval @cite_12 , place recognition @cite_23 , action recognition @cite_9 , etc. We build on the end-to-end trainable VLAD and extend it to learn generic local feature patterns and facilitate local feature alignment for unsupervised domain adaptation.
|
{
"cite_N": [
"@cite_9",
"@cite_21",
"@cite_24",
"@cite_23",
"@cite_12",
"@cite_11"
],
"mid": [
"",
"2147238549",
"2131846894",
"2951019013",
"2949266290",
"2012592962"
],
"abstract": [
"",
"Within the field of pattern classification, the Fisher kernel is a powerful framework which combines the strengths of generative and discriminative approaches. The idea is to characterize a signal with a gradient vector derived from a generative probability model and to subsequently feed this representation to a discriminative classifier. We propose to apply this framework to image categorization where the input signals are images and where the underlying generative model is a visual vocabulary: a Gaussian mixture model which approximates the distribution of low-level features in images. We show that Fisher kernels can actually be understood as an extension of the popular bag-of-visterms. Our approach demonstrates excellent performance on two challenging databases: an in-house database of 19 object scene categories and the recently released VOC 2006 database. It is also very practical: it has low computational needs both at training and test time and vocabularies trained on one set of categories can be applied to another set without any significant loss in performance.",
"We describe an approach to object and scene retrieval which searches for and localizes all the occurrences of a user outlined object in a video. The object is represented by a set of viewpoint invariant region descriptors so that recognition can proceed successfully despite changes in viewpoint, illumination and partial occlusion. The temporal continuity of the video within a shot is used to track the regions in order to reject unstable regions and reduce the effects of noise in the descriptors. The analogy with text retrieval is in the implementation where matches on descriptors are pre-computed (using vector quantization), and inverted file systems and document rankings are used. The result is that retrieval is immediate, returning a ranked list of key frames/shots in the manner of Google. The method is illustrated for matching in two full length feature films.",
"We tackle the problem of large scale visual place recognition, where the task is to quickly and accurately recognize the location of a given query photograph. We present the following three principal contributions. First, we develop a convolutional neural network (CNN) architecture that is trainable in an end-to-end manner directly for the place recognition task. The main component of this architecture, NetVLAD, is a new generalized VLAD layer, inspired by the \"Vector of Locally Aggregated Descriptors\" image representation commonly used in image retrieval. The layer is readily pluggable into any CNN architecture and amenable to training via backpropagation. Second, we develop a training procedure, based on a new weakly supervised ranking loss, to learn parameters of the architecture in an end-to-end manner from images depicting the same places over time downloaded from Google Street View Time Machine. Finally, we show that the proposed architecture significantly outperforms non-learnt image representations and off-the-shelf CNN descriptors on two challenging place recognition benchmarks, and improves over current state-of-the-art compact image representations on standard image retrieval benchmarks.",
"Deep convolutional neural networks have been successfully applied to image classification tasks. When these same networks have been applied to image retrieval, the assumption has been made that the last layers would give the best performance, as they do in classification. We show that for instance-level image retrieval, lower layers often perform better than the last layers in convolutional neural networks. We present an approach for extracting convolutional features from different layers of the networks, and adopt VLAD encoding to encode features into a single vector for each image. We investigate the effect of different layers and scales of input images on the performance of convolutional features using the recent deep networks OxfordNet and GoogLeNet. Experiments demonstrate that intermediate layers or higher layers with finer scales produce better results for image retrieval, compared to the last layer. When using compressed 128-D VLAD descriptors, our method obtains state-of-the-art results and outperforms other VLAD and CNN based approaches on two out of three test datasets. Our work provides guidance for transferring deep networks trained on image classification to image retrieval tasks.",
"We address the problem of image search on a very large scale, where three constraints have to be considered jointly: the accuracy of the search, its efficiency, and the memory usage of the representation. We first propose a simple yet efficient way of aggregating local image descriptors into a vector of limited dimension, which can be viewed as a simplification of the Fisher kernel representation. We then show how to jointly optimize the dimension reduction and the indexing algorithm, so that it best preserves the quality of vector comparison. The evaluation shows that our approach significantly outperforms the state of the art: the search accuracy is comparable to the bag-of-features approach for an image representation that fits in 20 bytes. Searching a 10 million image dataset takes about 50ms."
]
}
|
1811.05118
|
2900900626
|
Face anti-spoofing is significant to the security of face recognition systems. Previous works on depth-supervised learning have proved its effectiveness for face anti-spoofing. Nevertheless, they only considered depth as an auxiliary supervision in a single frame. Different from these methods, we develop a new method to estimate depth information from multiple RGB frames and propose a depth-supervised architecture which can efficiently encode spatiotemporal information for presentation attack detection. It includes two novel modules: an optical flow guided feature block (OFFB) and a convolution gated recurrent units (ConvGRU) module, which are designed to extract short-term and long-term motion to discriminate living and spoofing faces. Extensive experiments demonstrate that the proposed approach achieves state-of-the-art results on four benchmark datasets, namely OULU-NPU, SiW, CASIA-MFSD, and Replay-Attack.
|
Since face anti-spoofing is essentially a binary classification problem, most previous anti-spoofing methods purely train a classifier under binary supervision, e.g., treating spoofing faces as 0 and living faces as 1. Binary classifiers include traditional classifiers and neural networks. Prior works usually rely on hand-crafted features, such as LBP @cite_27 @cite_39 @cite_8 , SIFT @cite_4 , SURF @cite_31 , HoG @cite_1 @cite_5 , and DoG @cite_33 @cite_28 with traditional classifiers, such as SVM and Random Forest. Since these manually-engineered features are often sensitive to varying conditions, such as camera devices, lighting conditions and presentation attack instruments (PAIs), traditional methods often generalize poorly.
|
{
"cite_N": [
"@cite_4",
"@cite_33",
"@cite_8",
"@cite_28",
"@cite_1",
"@cite_39",
"@cite_27",
"@cite_5",
"@cite_31"
],
"mid": [
"2418633638",
"2063661788",
"2107227001",
"1889383825",
"2095252718",
"2106852298",
"1770095230",
"2042883034",
"2551249768"
],
"abstract": [
"With the wide deployment of the face recognition systems in applications from deduplication to mobile device unlocking, security against the face spoofing attacks requires increased attention; such attacks can be easily launched via printed photos, video replays, and 3D masks of a face. We address the problem of face spoof detection against the print (photo) and replay (photo or video) attacks based on the analysis of image distortion ( e.g. , surface reflection, moire pattern, color distortion, and shape deformation) in spoof face images (or video frames). The application domain of interest is smartphone unlock, given that the growing number of smartphones have the face unlock and mobile payment capabilities. We build an unconstrained smartphone spoof attack database (MSU USSA) containing more than 1000 subjects. Both the print and replay attacks are captured using the front and rear cameras of a Nexus 5 smartphone. We analyze the image distortion of the print and replay attacks using different: 1) intensity channels (R, G, B, and grayscale); 2) image regions (entire image, detected face, and facial component between nose and chin); and 3) feature descriptors. We develop an efficient face spoof detection system on an Android smartphone. Experimental results on the public-domain Idiap Replay-Attack, CASIA FASD, and MSU-MFSD databases, and the MSU USSA database show that the proposed approach is effective in face spoof detection for both the cross-database and intra-database testing scenarios. User studies of our Android face spoof detection system involving 20 participants show that the proposed approach works very well in real application scenarios.",
"Spoofing face recognition systems with photos or videos of someone else is not difficult. Sometimes, all one needs is to display a picture on a laptop monitor or a printed photograph to the biometric system. In order to detect this kind of spoofs, in this paper we present a solution that works either with printed or LCD displayed photographs, even under bad illumination conditions without extra-devices or user involvement. Tests conducted on large databases show good improvements of classification accuracy as well as true positive and false positive rates compared to the state-of-the-art.",
"Current face biometric systems are vulnerable to spoofing attacks. A spoofing attack occurs when a person tries to masquerade as someone else by falsifying data and thereby gaining illegitimate access. Inspired by image quality assessment, characterization of printing artifacts, and differences in light reflection, we propose to approach the problem of spoofing detection from texture analysis point of view. Indeed, face prints usually contain printing quality defects that can be well detected using texture features. Hence, we present a novel approach based on analyzing facial image textures for detecting whether there is a live person in front of the camera or a face print. The proposed approach analyzes the texture of the facial images using multi-scale local binary patterns (LBP). Compared to many previous works, our proposed approach is robust, computationally fast and does not require user-cooperation. In addition, the texture features that are used for spoofing detection can also be used for face recognition. This provides a unique feature space for coupling spoofing detection and face recognition. Extensive experimental analysis on a publicly available database showed excellent results compared to existing works.",
"Spoofing with photograph or video is one of the most common manner to circumvent a face recognition system. In this paper, we present a real-time and non-intrusive method to address this based on individual images from a generic webcamera. The task is formulated as a binary classification problem, in which, however, the distribution of positive and negative are largely overlapping in the input space, and a suitable representation space is hence of importance. Using the Lambertian model, we propose two strategies to extract the essential information about different surface properties of a live human face or a photograph, in terms of latent samples. Based on these, we develop two new extensions to the sparse logistic regression model which allow quick and accurate spoof detection. Primary experiments on a large photo imposter database show that the proposed method gives preferable detection performance compared to others.",
"The face recognition community has finally started paying more attention to the long-neglected problem of spoofing attacks and the number of countermeasures is gradually increasing. Fairly good results have been reported on the publicly available databases but it is reasonable to assume that there exists no superior anti-spoofing technique due to the varying nature of attack scenarios and acquisition conditions. Therefore, we propose to approach the problem of face spoofing as a set of attack-specific subproblems that are solvable with a proper combination of complementary countermeasures. Inspired by how we humans can perform reliable spoofing detection only based on the available scene and context information, this work provides the first investigation in research literature that attempts to detect the presence of spoofing medium in the observed scene. We experiment with two publicly available databases consisting of several fake face attacks of different nature under varying conditions and imaging qualities. The experiments show excellent results beyond the state of the art. More importantly, our cross-database evaluation depicts that the proposed approach has promising generalization capabilities.",
"User authentication is an important step to protect information and in this field face biometrics is advantageous. Face biometrics is natural, easy to use and less human-invasive. Unfortunately, recent work has revealed that face biometrics is vulnerable to spoofing attacks using low-tech equipments. This article assesses how well existing face anti-spoofing countermeasures can work in a more realistic condition. Experiments carried out with two freely available video databases (Replay Attack Database and CASIA Face Anti-Spoofing Database) show low generalization and possible database bias in the evaluated countermeasures. To generalize and deal with the diversity of attacks in a real world scenario we introduce two strategies that show promising results.",
"User authentication is an important step to protect information and in this field face biometrics is advantageous. Face biometrics is natural, easy to use and less human-invasive. Unfortunately, recent work has revealed that face biometrics is vulnerable to spoofing attacks using low-tech cheap equipments. This article presents a countermeasure against such attacks based on the LBP−TOP operator combining both space and time information into a single multiresolution texture descriptor. Experiments carried out with the REPLAY ATTACK database show a Half Total Error Rate (HTER) improvement from 15.16% to 7.60%.",
"Spoofing attacks mainly include printing artifacts, electronic screens and ultra-realistic face masks or models. In this paper, we propose a component-based face coding approach for liveness detection. The proposed method consists of four steps: (1) locating the components of face; (2) coding the low-level features respectively for all the components; (3) deriving the high-level face representation by pooling the codes with weights derived from Fisher criterion; (4) concatenating the histograms from all components into a classifier for identification. The proposed framework makes good use of micro differences between genuine faces and fake faces. Meanwhile, the inherent appearance differences among different components are retained. Extensive experiments on three published standard databases demonstrate that the method can achieve the best liveness detection performance in three databases.",
"The vulnerabilities of face biometric authentication systems to spoofing attacks have received a significant attention during the recent years. Some of the proposed countermeasures have achieved impressive results when evaluated on intratests, i.e., the system is trained and tested on the same database. Unfortunately, most of these techniques fail to generalize well to unseen attacks, e.g., when the system is trained on one database and then evaluated on another database. This is a major concern in biometric antispoofing research that is mostly overlooked. In this letter, we propose a novel solution based on describing the facial appearance by applying Fisher vector encoding on speeded-up robust features extracted from different color spaces. The evaluation of our countermeasure on three challenging benchmark face-spoofing databases, namely the CASIA face antispoofing database, the replay-attack database, and MSU mobile face spoof database, showed excellent and stable performance across all the three datasets. Most importantly, in interdatabase tests, our proposed approach outperforms the state of the art and yields very promising generalization capabilities, even when only limited training data are used."
]
}
|
1811.05118
|
2900900626
|
Face anti-spoofing is significant to the security of face recognition systems. Previous works on depth-supervised learning have proved its effectiveness for face anti-spoofing. Nevertheless, they only considered depth as an auxiliary supervision in a single frame. Different from these methods, we develop a new method to estimate depth information from multiple RGB frames and propose a depth-supervised architecture which can efficiently encode spatiotemporal information for presentation attack detection. It includes two novel modules: an optical flow guided feature block (OFFB) and a convolution gated recurrent units (ConvGRU) module, which are designed to extract short-term and long-term motion to discriminate living and spoofing faces. Extensive experiments demonstrate that the proposed approach achieves state-of-the-art results on four benchmark datasets, namely OULU-NPU, SiW, CASIA-MFSD, and Replay-Attack.
|
Since then, CNNs have achieved great breakthroughs with the help of hardware development and data abundance. Recently, CNNs have also been widely used in face anti-spoofing tasks @cite_10 @cite_9 @cite_3 @cite_34 @cite_26 @cite_22 @cite_6 . However, most deep learning methods simply consider face anti-spoofing as a binary classification problem with a softmax loss. Both @cite_26 and @cite_22 fine-tune a pre-trained VGG-face model and take it as a feature extractor for the subsequent classification. Nagpal et al. @cite_9 comprehensively study the influence of different network architectures and hyperparameters on face anti-spoofing. Feng @cite_34 and Li @cite_26 feed different kinds of face images into the CNN to learn discriminative features on living and spoofing faces.
|
{
"cite_N": [
"@cite_26",
"@cite_22",
"@cite_9",
"@cite_3",
"@cite_6",
"@cite_34",
"@cite_10"
],
"mid": [
"2578178601",
"2522438482",
"2802734296",
"2778720069",
"1704933117",
"2510926985",
"2617869948"
],
"abstract": [
"Recently deep Convolutional Neural Networks have been successfully applied in many computer vision tasks and achieved promising results. So some works have introduced deep learning into face anti-spoofing. However, most approaches just use the final fully-connected layer to distinguish the real and fake faces. Inspired by the idea that each convolutional kernel can be regarded as a part filter, we extract deep partial features from the convolutional neural network (CNN) to distinguish the real and fake faces. In our proposed approach, the CNN is first fine-tuned on the face spoofing datasets. Then, the block principal component analysis (PCA) method is utilized to reduce the dimensionality of features to avoid the over-fitting problem. Lastly, a support vector machine (SVM) is employed to distinguish the real and fake faces. The experiments, evaluated on two publicly available databases, Replay-Attack and CASIA, show the proposed method can obtain satisfactory results compared to the state-of-the-art methods.",
"With the wide applications of user authentication based on face recognition, face spoof attacks against face recognition systems are drawing increasing attentions. While emerging approaches of face antispoofing have been reported in recent years, most of them limit to the non-realistic intra-database testing scenarios instead of the cross-database testing scenarios. We propose a robust representation integrating deep texture features and face movement cue like eye-blink as countermeasures for presentation attacks like photos and replays. We learn deep texture features from both aligned facial images and whole frames, and use a frame difference based approach for eye-blink detection. A face video clip is classified as live if it is categorized as live using both cues. Cross-database testing on public-domain face databases shows that the proposed approach significantly outperforms the state-of-the-art.",
"In the current era, biometric based access control is becoming more popular due to its simplicity and ease to use by the users. It reduces the manual work of identity recognition and facilitates the automatic processing. Face is one of the most important biometric visual information that can be easily captured without user cooperation in uncontrolled environment. Precise detection of spoofed faces should be on the high priority to make face based identity recognition and access control robust against possible attacks. The recently evolved Convolutional Neural Network (CNN) based deep learning technique has been proved as one of the excellent method to deal with the visual information very effectively. The CNN learns the hierarchical features at intermediate layers automatically from the data. Several CNN based methods such as Inception and ResNet have shown outstanding performance for image classification problem. This paper does a performance evaluation of CNNs for face anti-spoofing. The Inception and ResNet CNN architectures are used in this study. The results are computed over benchmark MSU Mobile Face Spoofing Database. The experiments are done by considering the different aspects such as depth of the model, random weight initialization vs weight transfer, fine tuning vs training from scratch and different learning rate. The favorable results are obtained using these CNN architectures for face anti-spoofing in different settings.",
"Face anti-spoofing is very significant to the security of face recognition. Many existing literatures focus on the study of photo attack. For the video attack, however, the related research efforts are still insufficient. In this paper, instead of extracting features from a single image, features are learned from video frames. To realize face anti-spoofing, the spatiotemporal features of continuous video frames are extracted using 3D convolution neural network (CNN) from the short video frame level. Experimental results show that the two sets of face anti-spoofing public databases, Replay-Attack and CASIA, have achieved the HTER (Half Total Error Rate) of 0.04 and 10.65 , respectively, which is better than the state-of-the-art.",
"Though having achieved some progresses, the hand-crafted texture features, e.g., LBP [23], LBP-TOP [11] are still unable to capture the most discriminative cues between genuine and fake faces. In this paper, instead of designing feature by ourselves, we rely on the deep convolutional neural network (CNN) to learn features of high discriminative ability in a supervised manner. Combined with some data pre-processing, the face anti-spoofing performance improves drastically. In the experiments, over 70 relative decrease of Half Total Error Rate (HTER) is achieved on two challenging datasets, CASIA [36] and REPLAY-ATTACK [7] compared with the state-of-the-art. Meanwhile, the experimental results from inter-tests between two datasets indicates CNN can obtain features with better generalization ability. Moreover, the nets trained using combined data from two datasets have less biases between two datasets.",
"A multi-cues integration framework is proposed using a hierarchical neural network.Bottleneck representations are effective in multi-cues feature fusion.Shearlet is utilized to perform face image quality assessment.Motion-based face liveness features are automatically learned using autoencoders. Many trait-specific countermeasures to face spoofing attacks have been developed for security of face authentication. However, there is no superior face anti-spoofing technique to deal with every kind of spoofing attack in varying scenarios. In order to improve the generalization ability of face anti-spoofing approaches, an extendable multi-cues integration framework for face anti-spoofing using a hierarchical neural network is proposed, which can fuse image quality cues and motion cues for liveness detection. Shearlet is utilized to develop an image quality-based liveness feature. Dense optical flow is utilized to extract motion-based liveness features. A bottleneck feature fusion strategy can integrate different liveness features effectively. The proposed approach was evaluated on three public face anti-spoofing databases. A half total error rate (HTER) of 0 and an equal error rate (EER) of 0 were achieved on both REPLAY-ATTACK database and 3D-MAD database. An EER of 5.83 was achieved on CASIA-FASD database.",
"Face recognition systems are gaining momentum with current developments in computer vision. At the same time, tactics to mislead these systems are getting more complex, and counter-measure approaches are necessary. Following the current progress with convolutional neural networks (CNN) in classification tasks, we present an approach based on transfer learning using a pre-trained CNN model using only static features to recognize photo, video or mask attacks. We tested our approach on the REPLAY-ATTACK and 3DMAD public databases. On the REPLAY-ATTACK database our accuracy was 99.04 and the half total error rate (HTER) of 1.20 . For the 3DMAD, our accuracy was of 100.00 and HTER 0.00 . Our results are comparable to the state-of-the-art."
]
}
|
1811.05118
|
2900900626
|
Face anti-spoofing is significant to the security of face recognition systems. Previous works on depth-supervised learning have proved its effectiveness for face anti-spoofing. Nevertheless, they only considered depth as an auxiliary supervision in a single frame. Different from these methods, we develop a new method to estimate depth information from multiple RGB frames and propose a depth-supervised architecture which can efficiently encode spatiotemporal information for presentation attack detection. It includes two novel modules: an optical flow guided feature block (OFFB) and a convolution gated recurrent units (ConvGRU) module, which are designed to extract short-term and long-term motion to discriminate living and spoofing faces. Extensive experiments demonstrate that the proposed approach achieves state-of-the-art results on four benchmark datasets, namely OULU-NPU, SiW, CASIA-MFSD, and Replay-Attack.
|
Liu @cite_7 proposes a face anti-spoofing method that combines a spatial perspective (depth) with a temporal perspective (rPPG). They regard facial depth as an auxiliary supervision, along with rPPG signals. For temporal information, they use a simple RNN to learn the corresponding rPPG signals. However, because of this simple sequence processing, they require a non-rigid registration layer to remove the influence of facial poses and expressions, ignoring that unnatural changes of facial pose or expression are themselves significant spoofing cues.
|
{
"cite_N": [
"@cite_7"
],
"mid": [
"2963656031"
],
"abstract": [
"Face anti-spoofing is crucial to prevent face recognition systems from a security breach. Previous deep learning approaches formulate face anti-spoofing as a binary classification problem. Many of them struggle to grasp adequate spoofing cues and generalize poorly. In this paper, we argue the importance of auxiliary supervision to guide the learning toward discriminative and generalizable cues. A CNN-RNN model is learned to estimate the face depth with pixel-wise supervision, and to estimate rPPG signals with sequence-wise supervision. The estimated depth and rPPG are fused to distinguish live vs. spoof faces. Further, we introduce a new face anti-spoofing database that covers a large range of illumination, subject, and pose variations. Experiments show that our model achieves the state-of-the-art results on both intra- and cross-database testing."
]
}
|
1811.05118
|
2900900626
|
Face anti-spoofing is significant to the security of face recognition systems. Previous works on depth-supervised learning have proved its effectiveness for face anti-spoofing. Nevertheless, they only considered depth as an auxiliary supervision in a single frame. Different from these methods, we develop a new method to estimate depth information from multiple RGB frames and propose a depth-supervised architecture which can efficiently encode spatiotemporal information for presentation attack detection. It includes two novel modules: an optical flow guided feature block (OFFB) and a convolution gated recurrent units (ConvGRU) module, which are designed to extract short-term and long-term motion to discriminate living and spoofing faces. Extensive experiments demonstrate that the proposed approach achieves state-of-the-art results on four benchmark datasets, namely OULU-NPU, SiW, CASIA-MFSD, and Replay-Attack.
|
Temporal information plays a vital role in face anti-spoofing tasks. @cite_13 @cite_22 @cite_15 focus on the movement of key parts of the face; for example, @cite_13 @cite_22 make spoofing predictions based on eye blinking. These methods are vulnerable to replay attacks since they rely excessively on a single cue. Gan @cite_3 proposes a 3D convolution network to distinguish live from spoof faces. The 3D convolution network is a stacked structure that learns temporal features in a supervised manner, but it depends on a significant amount of data and performs poorly on small databases. Xu @cite_21 proposes an architecture combining LSTM units with a CNN for binary classification. Feng @cite_34 presents a work that takes an optical flow magnitude map and Shearlet features as inputs to a CNN; it shows that the optical flow map exhibits obvious differences between living faces and different kinds of spoofing faces. None of these prior temporal-based methods captures valid temporal information with a well-designed structure.
|
{
"cite_N": [
"@cite_22",
"@cite_21",
"@cite_3",
"@cite_15",
"@cite_34",
"@cite_13"
],
"mid": [
"2522438482",
"2409050142",
"2778720069",
"",
"2510926985",
"2151343288"
],
"abstract": [
"With the wide applications of user authentication based on face recognition, face spoof attacks against face recognition systems are drawing increasing attentions. While emerging approaches of face antispoofing have been reported in recent years, most of them limit to the non-realistic intra-database testing scenarios instead of the cross-database testing scenarios. We propose a robust representation integrating deep texture features and face movement cue like eye-blink as countermeasures for presentation attacks like photos and replays. We learn deep texture features from both aligned facial images and whole frames, and use a frame difference based approach for eye-blink detection. A face video clip is classified as live if it is categorized as live using both cues. Cross-database testing on public-domain face databases shows that the proposed approach significantly outperforms the state-of-the-art.",
"Temporal features is important for face anti-spoofing. Unfortunately existing methods have limitations to explore such temporal features. In this work, we propose a deep neural network architecture combining Long Short-Term Memory (LSTM) units with Convolutional Neural Networks (CNN). Our architecture works well for face anti-spoofing by utilizing the LSTM units' ability of finding long relation from its input sequences as well as extracting local and dense features through convolution operations. Our best model shows significant performance improvement over general CNN architecture (5.93 vs. 7.34 ), and hand-crafted features (5.93 vs. 10.00 ) on CASIA dataset.",
"Face anti-spoofing is very significant to the security of face recognition. Many existing literatures focus on the study of photo attack. For the video attack, however, the related research efforts are still insufficient. In this paper, instead of extracting features from a single image, features are learned from video frames. To realize face anti-spoofing, the spatiotemporal features of continuous video frames are extracted using 3D convolution neural network (CNN) from the short video frame level. Experimental results show that the two sets of face anti-spoofing public databases, Replay-Attack and CASIA, have achieved the HTER (Half Total Error Rate) of 0.04 and 10.65 , respectively, which is better than the state-of-the-art.",
"",
"A multi-cues integration framework is proposed using a hierarchical neural network.Bottleneck representations are effective in multi-cues feature fusion.Shearlet is utilized to perform face image quality assessment.Motion-based face liveness features are automatically learned using autoencoders. Many trait-specific countermeasures to face spoofing attacks have been developed for security of face authentication. However, there is no superior face anti-spoofing technique to deal with every kind of spoofing attack in varying scenarios. In order to improve the generalization ability of face anti-spoofing approaches, an extendable multi-cues integration framework for face anti-spoofing using a hierarchical neural network is proposed, which can fuse image quality cues and motion cues for liveness detection. Shearlet is utilized to develop an image quality-based liveness feature. Dense optical flow is utilized to extract motion-based liveness features. A bottleneck feature fusion strategy can integrate different liveness features effectively. The proposed approach was evaluated on three public face anti-spoofing databases. A half total error rate (HTER) of 0 and an equal error rate (EER) of 0 were achieved on both REPLAY-ATTACK database and 3D-MAD database. An EER of 5.83 was achieved on CASIA-FASD database.",
"We present a real-time liveness detection approach against photograph spoofing in face recognition, by recognizing spontaneous eyeblinks, which is a non-intrusive manner. The approach requires no extra hardware except for a generic webcamera. Eyeblink sequences often have a complex underlying structure. We formulate blink detection as inference in an undirected conditional graphical framework, and are able to learn a compact and efficient observation and transition potentials from data. For purpose of quick and accurate recognition of the blink behavior, eye closity, an easily-computed discriminative measure derived from the adaptive boosting algorithm, is developed, and then smoothly embedded into the conditional model. An extensive set of experiments are presented to show effectiveness of our approach and how it outperforms the cascaded Adaboost and HMM in task of eyeblink detection."
]
}
|
1811.05085
|
2901907021
|
Sentence specificity quantifies the level of detail in a sentence, characterizing the organization of information in discourse. While this information is useful for many downstream applications, specificity prediction systems predict very coarse labels (binary or ternary) and are trained on and tailored toward specific domains (e.g., news). The goal of this work is to generalize specificity prediction to domains where no labeled data is available and to output more nuanced real-valued specificity ratings. We present an unsupervised domain adaptation system for sentence specificity prediction, specifically designed to output real-valued estimates from binary training labels. To calibrate the values of these predictions appropriately, we regularize the posterior distribution of the labels towards a reference distribution. We show that our framework generalizes well to three different domains, with a 50-68% reduction in mean absolute error compared to the current state-of-the-art system trained for news sentence specificity. We also demonstrate the potential of our work in improving the quality and informativeness of dialogue generation systems.
|
Sentence specificity prediction as a task is proposed by , who repurposed discourse relation annotations from WSJ articles @cite_17 for sentence specificity training. incorporated more news sentences as unlabeled data. developed a system to predict sentence specificity for classroom discussions; however, the data is not publicly available. All these systems are classifiers trained with categorical data (2 or 3 classes).
|
{
"cite_N": [
"@cite_17"
],
"mid": [
"2166957049"
],
"abstract": [
"We present the second version of the Penn Discourse Treebank, PDTB-2.0, describing its lexically-grounded annotations of discourse relations and their two abstract object arguments over the 1 million word Wall Street Journal corpus. We describe all aspects of the annotation, including (a) the argument structure of discourse relations, (b) the sense annotation of the relations, and (c) the attribution of discourse relations and each of their arguments. We list the differences between PDTB-1.0 and PDTB-2.0. We present representative statistics for several aspects of the annotation in the corpus."
]
}
|
1811.05085
|
2901907021
|
Sentence specificity quantifies the level of detail in a sentence, characterizing the organization of information in discourse. While this information is useful for many downstream applications, specificity prediction systems predict very coarse labels (binary or ternary) and are trained on and tailored toward specific domains (e.g., news). The goal of this work is to generalize specificity prediction to domains where no labeled data is available and to output more nuanced real-valued specificity ratings. We present an unsupervised domain adaptation system for sentence specificity prediction, specifically designed to output real-valued estimates from binary training labels. To calibrate the values of these predictions appropriately, we regularize the posterior distribution of the labels towards a reference distribution. We show that our framework generalizes well to three different domains, with a 50-68% reduction in mean absolute error compared to the current state-of-the-art system trained for news sentence specificity. We also demonstrate the potential of our work in improving the quality and informativeness of dialogue generation systems.
|
We use Self-Ensembling @cite_14 as our underlying framework. Self-Ensembling builds on top of Temporal Ensembling @cite_24 and the Mean-Teacher network @cite_10 , both of which were originally proposed for semi-supervised learning. In visual domain adaptation, Self-Ensembling shows superior performance to many recently proposed approaches @cite_25 @cite_18 @cite_11 @cite_7 @cite_8 @cite_19 , including GAN-based approaches. To the best of our knowledge, this approach has not been used on language data.
|
{
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_7",
"@cite_8",
"@cite_24",
"@cite_19",
"@cite_10",
"@cite_25",
"@cite_11"
],
"mid": [
"",
"2767722847",
"2953136327",
"2949987290",
"2951970475",
"2605488490",
"",
"1882958252",
"2949573501"
],
"abstract": [
"",
"This paper explores the use of self-ensembling for visual domain adaptation problems. Our technique is derived from the mean teacher variant (, 2017) of temporal ensembling (;, 2017), a technique that achieved state of the art results in the area of semi-supervised learning. We introduce a number of modifications to their approach for challenging domain adaptation scenarios and evaluate its effectiveness. Our approach achieves state of the art results in a variety of benchmarks, including our winning entry in the VISDA-2017 visual domain adaptation challenge. In small image benchmarks, our algorithm not only outperforms prior art, but can also achieve accuracy that is close to that of a classifier trained in a supervised fashion.",
"We propose associative domain adaptation, a novel technique for end-to-end domain adaptation with neural networks, the task of inferring class labels for an unlabeled target domain based on the statistical properties of a labeled source domain. Our training scheme follows the paradigm that in order to effectively derive class labels for the target domain, a network should produce statistically domain invariant embeddings, while minimizing the classification error on the labeled source domain. We accomplish this by reinforcing associations between source and target data directly in embedding space. Our method can easily be added to any existing classification network with no structural and almost no computational overhead. We demonstrate the effectiveness of our approach on various benchmarks and achieve state-of-the-art results across the board with a generic convolutional neural network architecture not specifically tuned to the respective tasks. Finally, we show that the proposed association loss produces embeddings that are more effective for domain adaptation compared to methods employing maximum mean discrepancy as a similarity measure in embedding space.",
"Adversarial learning methods are a promising approach to training robust deep networks, and can generate complex samples across diverse domains. They also can improve recognition despite the presence of domain shift or dataset bias: several adversarial approaches to unsupervised domain adaptation have recently been introduced, which reduce the difference between the training and test domain distributions and thus improve generalization performance. Prior generative approaches show compelling visualizations, but are not optimal on discriminative tasks and can be limited to smaller shifts. Prior discriminative approaches could handle larger domain shifts, but imposed tied weights on the model and did not exploit a GAN-based loss. We first outline a novel generalized framework for adversarial adaptation, which subsumes recent state-of-the-art approaches as special cases, and we use this generalized view to better relate the prior approaches. We propose a previously unexplored instance of our general framework which combines discriminative modeling, untied weight sharing, and a GAN loss, which we call Adversarial Discriminative Domain Adaptation (ADDA). We show that ADDA is more effective yet considerably simpler than competing domain-adversarial methods, and demonstrate the promise of our approach by exceeding state-of-the-art unsupervised adaptation results on standard cross-domain digit classification tasks and a new more difficult cross-modality object classification task.",
"In this paper, we present a simple and efficient method for training deep neural networks in a semi-supervised setting where only a small portion of training data is labeled. We introduce self-ensembling, where we form a consensus prediction of the unknown labels using the outputs of the network-in-training on different epochs, and most importantly, under different regularization and input augmentation conditions. This ensemble prediction can be expected to be a better predictor for the unknown labels than the output of the network at the most recent training epoch, and can thus be used as a target for training. Using our method, we set new records for two standard semi-supervised learning benchmarks, reducing the (non-augmented) classification error rate from 18.44 to 7.05 in SVHN with 500 labels and from 18.63 to 16.55 in CIFAR-10 with 4000 labels, and further to 5.12 and 12.16 by enabling the standard augmentations. We additionally obtain a clear improvement in CIFAR-100 classification accuracy by using random images from the Tiny Images dataset as unlabeled extra inputs during training. Finally, we demonstrate good tolerance to incorrect labels.",
"Domain Adaptation is an actively researched problem in Computer Vision. In this work, we propose an approach that leverages unsupervised data to bring the source and target distributions closer in a learned joint feature space. We accomplish this by inducing a symbiotic relationship between the learned embedding and a generative adversarial network. This is in contrast to methods which use the adversarial framework for realistic data generation and retraining deep models with such data. We demonstrate the strength and generality of our approach by performing experiments on three different tasks with varying levels of difficulty: (1) Digit classification (MNIST, SVHN and USPS datasets) (2) Object recognition using OFFICE dataset and (3) Domain adaptation from synthetic to real data. Our method achieves state-of-the art performance in most experimental settings and by far the only GAN-based method that has been shown to work well across different datasets such as OFFICE and DIGITS.",
"",
"Top-performing deep architectures are trained on massive amounts of labeled data. In the absence of labeled data for a certain task, domain adaptation often provides an attractive option given that labeled data of similar nature but from a different domain (e.g. synthetic images) are available. Here, we propose a new approach to domain adaptation in deep architectures that can be trained on large amount of labeled data from the source domain and large amount of unlabeled data from the target domain (no labeled target-domain data is necessary). As the training progresses, the approach promotes the emergence of \"deep\" features that are (i) discriminative for the main learning task on the source domain and (ii) invariant with respect to the shift between the domains. We show that this adaptation behaviour can be achieved in almost any feed-forward model by augmenting it with few standard layers and a simple new gradient reversal layer. The resulting augmented architecture can be trained using standard backpropagation. Overall, the approach can be implemented with little effort using any of the deep-learning packages. The method performs very well in a series of image classification experiments, achieving adaptation effect in the presence of big domain shifts and outperforming previous state-of-the-art on Office datasets.",
"The effectiveness of generative adversarial approaches in producing images according to a specific style or visual domain has recently opened new directions to solve the unsupervised domain adaptation problem. It has been shown that source labeled images can be modified to mimic target samples making it possible to train directly a classifier in the target domain, despite the original lack of annotated data. Inverse mappings from the target to the source domain have also been evaluated but only passing through adapted feature spaces, thus without new image generation. In this paper we propose to better exploit the potential of generative adversarial networks for adaptation by introducing a novel symmetric mapping among domains. We jointly optimize bi-directional image transformations combining them with target self-labeling. Moreover we define a new class consistency loss that aligns the generators in the two directions imposing to conserve the class identity of an image passing through both domain mappings. A detailed qualitative and quantitative analysis of the reconstructed images confirm the power of our approach. By integrating the two domain specific classifiers obtained with our bi-directional network we exceed previous state-of-the-art unsupervised adaptation results on four different benchmark datasets."
]
}
|
1811.05232
|
2900681891
|
Here we propose a general theoretical method for analyzing the risk bound in the presence of adversaries. Specifically, we try to fit the adversarial learning problem into the minimax framework. We first show that the original adversarial learning problem can be reduced to a minimax statistical learning problem by introducing a transport map between distributions. Then, we prove a new risk bound for this minimax problem in terms of covering numbers under a weak version of Lipschitz condition. Our method can be applied to multi-class classification problems and commonly used loss functions such as the hinge and ramp losses. As some illustrative examples, we derive the adversarial risk bounds for SVMs, deep neural networks, and PCA, and our bounds have two data-dependent terms, which can be optimized for achieving adversarial robustness.
|
Two main approaches are used to analyze the generalization bound of a learning algorithm. The first is based on the complexity of the hypothesis class, such as the VC dimension @cite_42 @cite_47 for binary classification, Rademacher and Gaussian complexities @cite_23 @cite_13 , and the covering number @cite_49 @cite_24 @cite_4 . Note that hypothesis complexity-based analyses of generalization error are algorithm independent and consider the worst-case generalization over all functions in the hypothesis class. In contrast, the second approach is based on the properties of a learning algorithm and is therefore algorithm dependent. The properties characterizing the generalization of a learning algorithm include, for example, algorithmic stability @cite_18 @cite_38 @cite_45 , robustness @cite_3 , and algorithmic luckiness @cite_20 . Some other methods exist for analyzing the generalization error in machine learning, such as the PAC-Bayesian approach @cite_10 @cite_11 , compression-based bounds @cite_43 @cite_5 , and information-theoretic approaches @cite_14 @cite_0 @cite_1 @cite_36 .
|
{
"cite_N": [
"@cite_36",
"@cite_42",
"@cite_3",
"@cite_43",
"@cite_5",
"@cite_10",
"@cite_20",
"@cite_38",
"@cite_18",
"@cite_4",
"@cite_23",
"@cite_49",
"@cite_14",
"@cite_1",
"@cite_24",
"@cite_0",
"@cite_45",
"@cite_47",
"@cite_13",
"@cite_11"
],
"mid": [
"",
"",
"1892947258",
"2170207925",
"",
"2741952635",
"2188409822",
"",
"27434444",
"",
"2178357350",
"1968436459",
"2963862692",
"",
"",
"",
"",
"2149298154",
"",
""
],
"abstract": [
"",
"",
"We derive generalization bounds for learning algorithms based on their robustness: the property that if a testing sample is \"similar\" to a training sample, then the testing error is close to the training error. This provides a novel approach, different from complexity or stability arguments, to study generalization of learning algorithms. One advantage of the robustness approach, compared to previous methods, is the geometric intuition it conveys. Consequently, robustness-based analysis is easy to extend to learning in non-standard setups such as Markovian samples or quantile loss. We further show that a weak notion of robustness is both sufficient and necessary for generalizability, which implies that robustness is a fundamental property that is required for learning algorithms to work.",
"We discuss basic prediction theory and its impact on classification success evaluation, implications for learning algorithm design, and uses in learning algorithm execution. This tutorial is meant to be a comprehensive compilation of results which are both theoretically rigorous and quantitatively useful.There are two important implications of the results presented here. The first is that common practices for reporting results in classification should change to use the test set bound. The second is that train set bounds can sometimes be used to directly motivate learning algorithms.",
"",
"We present a generalization bound for feedforward neural networks in terms of the product of the spectral norm of the layers and the Frobenius norm of the weights. The generalization bound is derived using a PAC-Bayes analysis.",
"One of the central questions in statistical learning theory is to determine the conditions under which agents can learn from experience. This includes the necessary and sufficient conditions for generalization from a given finite training set to new observations. In this paper, we prove that algorithmic stability in the inference process is equivalent to uniform generalization across all parametric loss functions. We provide various interpretations of this result. For instance, a relationship is proved between stability and data processing, which reveals that algorithmic stability can be improved by post-processing the inferred hypothesis or by augmenting training examples with artificial noise prior to learning. In addition, we establish a relationship between algorithmic stability and the size of the observation space, which provides a formal justification for dimensionality reduction methods. Finally, we connect algorithmic stability to the size of the hypothesis space, which recovers the classical PAC result that the size (complexity) of the hypothesis space should be controlled in order to improve algorithmic stability and improve generalization.",
"",
"The problem of characterizing learnability is the most basic question of statistical learning theory. A fundamental and long-standing answer, at least for the case of supervised classification and regression, is that learnability is equivalent to uniform convergence of the empirical risk to the population risk, and that if a problem is learnable, it is learnable via empirical risk minimization. In this paper, we consider the General Learning Setting (introduced by Vapnik), which includes most statistical learning problems as special cases. We show that in this setting, there are non-trivial learning problems where uniform convergence does not hold, empirical risk minimization fails, and yet they are learnable using alternative mechanisms. Instead of uniform convergence, we identify stability as the key necessary and sufficient condition for learnability. Moreover, we show that the conditions for learnability in the general setting are significantly more complex than in supervised classification and regression.",
"",
"We investigate the use of certain data-dependent estimates of the complexity of a function class, called Rademacher and Gaussian complexities. In a decision theoretic setting, we prove general risk bounds in terms of these complexities. We consider function classes that can be expressed as combinations of functions from basis classes and show how the Rademacher and Gaussian complexities of such a function class can be bounded in terms of the complexity of the basis classes. We give examples of the application of these techniques in finding data-dependent risk bounds for decision trees, neural networks and support vector machines.",
"The covering number of a ball of a reproducing kernel Hilbert space as a subset of the continuous function space plays an important role in Learning Theory. We give estimates for this covering number by means of the regularity of the Mercer kernel K. For convolution type kernels K(x, t) = k(x - t) on [0, 1]^n, we provide estimates depending on the decay of k̂, the Fourier transform of k. In particular, when k̂ decays exponentially, our estimate for this covering number is better than all the previous results and covers many important Mercer kernels. A counter example is presented to show that the eigenfunctions of the Hilbert-Schmidt operator LK associated with a Mercer kernel K may not be uniformly bounded. Hence some previous methods used for estimating the covering number in Learning Theory are not valid. We also provide an example of a Mercer kernel to show that LK^{1/2} may not be generated by a Mercer kernel.",
"We derive upper bounds on the generalization error of a learning algorithm in terms of the mutual information between its input and output. The bounds provide an information-theoretic understanding of generalization in learning problems, and give theoretical guidelines for striking the right balance between data fit and generalization by controlling the input-output mutual information. We propose a number of methods for this purpose, among which are algorithms that regularize the ERM algorithm with relative entropy or with random noise. Our work extends and leads to nontrivial improvements on the recent results of Russo and Zou.",
"",
"",
"",
"",
"Statistical learning theory was introduced in the late 1960's. Until the 1990's it was a purely theoretical analysis of the problem of function estimation from a given collection of data. In the middle of the 1990's new types of learning algorithms (called support vector machines) based on the developed theory were proposed. This made statistical learning theory not only a tool for the theoretical analysis but also a tool for creating practical algorithms for estimating multidimensional functions. This article presents a very general overview of statistical learning theory including both theoretical and algorithmic aspects of the theory. The goal of this overview is to demonstrate how the abstract learning theory established conditions for generalization which are more general than those discussed in classical statistical paradigms and how the understanding of these conditions inspired new algorithmic approaches to function estimation problems.",
"",
""
]
}
|
1811.05082
|
2901440799
|
Target-based sentiment analysis involves opinion target extraction and target sentiment classification. However, most of the existing works usually studied one of these two sub-tasks alone, which hinders their practical use. This paper aims to solve the complete task of target-based sentiment analysis in an end-to-end fashion, and presents a novel unified model which applies a unified tagging scheme. Our framework involves two stacked recurrent neural networks: The upper one predicts the unified tags to produce the final output results of the primary target-based sentiment analysis; The lower one performs an auxiliary target boundary prediction aiming at guiding the upper network to improve the performance of the primary task. To explore the inter-task dependency, we propose to explicitly model the constrained transitions from target boundaries to target sentiment polarities. We also propose to maintain the sentiment consistency within an opinion target via a gate mechanism which models the relation between the features for the current word and the previous word. We conduct extensive experiments on three benchmark datasets and our framework achieves consistently superior results.
|
As mentioned in the Introduction, Target-Based Sentiment Analysis (TBSA) is usually divided into two sub-tasks, namely the Opinion Target Extraction (OTE) task and the Target Sentiment Classification (TSC) task. Although these two sub-tasks are treated as separate tasks and solved individually in most cases, for more practical applications they should be solved in one framework. Given an input sentence, the output of a method should contain not only the extracted opinion targets, but also the sentiment predictions towards them. Some previous works attempted to discover the relationship between these two sub-tasks and gave a more integrated solution for solving the complete TBSA task. Concretely, @cite_17 employed Conditional Random Fields (CRF) together with hand-crafted linguistic features to detect the boundary of the target mention and predict the sentiment polarity. @cite_2 further improved the performance of the CRF-based method by introducing a fully connected layer to consolidate the linguistic features and word embeddings. However, they found that a pipeline method can beat both the jointly trained model and the unified model. In this paper, we reexamine the task and propose a new unified solution which outperforms all previously reported methods.
|
{
"cite_N": [
"@cite_2",
"@cite_17"
],
"mid": [
"2252007242",
"2251900677"
],
"abstract": [
"Open domain targeted sentiment is the joint information extraction task that finds target mentions together with the sentiment towards each mention from a text corpus. The task is typically modeled as a sequence labeling problem, and solved using state-of-the-art labelers such as CRF. We empirically study the effect of word embeddings and automatic feature combinations on the task by extending a CRF baseline using neural networks, which have demonstrated large potentials for sentiment analysis. Results show that the neural model can give better results by significantly increasing the recall. In addition, we propose a novel integration of neural and discrete features, which combines their relative advantages, leading to significantly higher results compared to both baselines.",
"We propose a novel approach to sentiment analysis for a low resource setting. The intuition behind this work is that sentiment expressed towards an entity, targeted sentiment, may be viewed as a span of sentiment expressed across the entity. This representation allows us to model sentiment detection as a sequence tagging problem, jointly discovering people and organizations along with whether there is sentiment directed towards them. We compare performance in both Spanish and English on microblog data, using only a sentiment lexicon as an external resource. By leveraging linguistically-informed features within conditional random fields (CRFs) trained to minimize empirical risk, our best models in Spanish significantly outperform a strong baseline, and reach around 90% accuracy on the combined task of named entity recognition and sentiment prediction. Our models in English, trained on a much smaller dataset, are not yet statistically significant against their baselines."
]
}
|
1811.05200
|
2900456739
|
We resolve the Ramsey problem for @math for all polynomials @math over @math .
|
Ramsey theory has witnessed exciting development recently. We refer the readers to the papers of Green and Sanders @cite_0 and of Moreira @cite_2 for the problem involving sum and product of @math and @math , and to the paper of Chow, Lindqvist and Prendiville @cite_13 for generalisation of Rado’s criterion to higher powers for sufficiently many variables.
|
{
"cite_N": [
"@cite_0",
"@cite_13",
"@cite_2"
],
"mid": [
"1864140179",
"2808507395",
"2964120202"
],
"abstract": [
"Suppose that @math is coloured with @math colours. Then there is some colour class containing at least @math quadruples of the form @math .",
"We establish partition regularity of the generalised Pythagorean equation in five or more variables. Furthermore, we show how Rado's characterisation of a partition regular equation remains valid over the set of positive @math th powers, provided the equation has at least @math variables. We thus completely describe which diagonal forms are partition regular and which are not, given sufficiently many variables. In addition, we prove a supersaturated version of Rado's theorem for a linear equation restricted either to squares minus one or to logarithmically-smooth numbers.",
""
]
}
|
1811.05303
|
2900560443
|
Translating natural language to SQL queries for table-based question answering is a challenging problem and has received significant attention from the research community. In this work, we extend a pointer-generator and investigate the order-matters problem in semantic parsing for SQL. Even though our model is a straightforward extension of a general-purpose pointer-generator, it outperforms early works for WikiSQL and remains competitive to concurrently introduced, more complex models. Moreover, we provide a deeper investigation of the potential order-matters problem that could arise due to having multiple correct decoding paths, and investigate the use of REINFORCE as well as a dynamic oracle in this context.
|
Earlier works on semantic parsing relied on CCG and other grammars @cite_12 @cite_27 . With the recent advances in recurrent neural networks and attention, neural translation based approaches for semantic parsing have been developed @cite_21 @cite_7 @cite_29 .
|
{
"cite_N": [
"@cite_7",
"@cite_29",
"@cite_21",
"@cite_27",
"@cite_12"
],
"mid": [
"2950632879",
"2610002206",
"2224454470",
"2252136820",
"2111742432"
],
"abstract": [
"Harnessing the statistical power of neural networks to perform language understanding and symbolic reasoning is difficult, when it requires executing efficient discrete operations against a large knowledge-base. In this work, we introduce a Neural Symbolic Machine, which contains (a) a neural \"programmer\", i.e., a sequence-to-sequence model that maps language utterances to programs and utilizes a key-variable memory to handle compositionality (b) a symbolic \"computer\", i.e., a Lisp interpreter that performs program execution, and helps find good programs by pruning the search space. We apply REINFORCE to directly optimize the task reward of this structured prediction problem. To train with weak supervision and improve the stability of REINFORCE, we augment it with an iterative maximum-likelihood training process. NSM outperforms the state-of-the-art on the WebQuestionsSP dataset when trained from question-answer pairs only, without requiring any feature engineering or domain-specific knowledge.",
"Tasks like code generation and semantic parsing require mapping unstructured (or partially structured) inputs to well-formed, executable outputs. We introduce abstract syntax networks, a modeling framework for these problems. The outputs are represented as abstract syntax trees (ASTs) and constructed by a decoder with a dynamically-determined modular structure paralleling the structure of the output tree. On the benchmark Hearthstone dataset for code generation, our model obtains 79.2 BLEU and 22.7 exact match accuracy, compared to previous state-of-the-art values of 67.1 and 6.1. Furthermore, we perform competitively on the Atis, Jobs, and Geo semantic parsing datasets with no task-specific engineering.",
"Semantic parsing aims at mapping natural language to machine interpretable meaning representations. Traditional approaches rely on high-quality lexicons, manually-built templates, and linguistic features which are either domain- or representation-specific. In this paper we present a general method based on an attention-enhanced encoder-decoder model. We encode input utterances into vector representations, and generate their logical forms by conditioning the output sequences or trees on the encoding vectors. Experimental results on four datasets show that our approach performs competitively without using hand-engineered features and is easy to adapt across domains and meaning representations.",
"In this paper, we train a semantic parser that scales up to Freebase. Instead of relying on annotated logical forms, which is especially expensive to obtain at large scale, we learn from question-answer pairs. The main challenge in this setting is narrowing down the huge number of possible logical predicates for a given question. We tackle this problem in two ways: First, we build a coarse mapping from phrases to predicates using a knowledge base and a large text corpus. Second, we use a bridging operation to generate additional predicates based on neighboring predicates. On the dataset of Cai and Yates (2013), despite not having annotated logical forms, our system outperforms their state-of-the-art parser. Additionally, we collected a more realistic and challenging dataset of question-answer pairs and improves over a natural baseline.",
"We consider the problem of learning to parse sentences to lambda-calculus representations of their underlying semantics and present an algorithm that learns a weighted combinatory categorial grammar (CCG). A key idea is to introduce non-standard CCG combinators that relax certain parts of the grammar—for example allowing flexible word order, or insertion of lexical items— with learned costs. We also present a new, online algorithm for inducing a weighted CCG. Results for the approach on ATIS data show 86 F-measure in recovering fully correct semantic analyses and 95.9 F-measure by a partial-match criterion, a more than 5 improvement over the 90.3 partial-match figure reported by He and Young (2006)."
]
}
|
1811.05303
|
2900560443
|
Translating natural language to SQL queries for table-based question answering is a challenging problem and has received significant attention from the research community. In this work, we extend a pointer-generator and investigate the order-matters problem in semantic parsing for SQL. Even though our model is a straightforward extension of a general-purpose pointer-generator, it outperforms early works for WikiSQL and remains competitive to concurrently introduced, more complex models. Moreover, we provide a deeper investigation of the potential order-matters problem that could arise due to having multiple correct decoding paths, and investigate the use of REINFORCE as well as a dynamic oracle in this context.
|
Labels provided for supervision in semantic parsing datasets can be given either as execution results or as an executable program (logical form). Training semantic parsers on logical forms yields better results than having only the execution results @cite_30 but requires a more elaborate data collection scheme. Significant research effort has been dedicated to train semantic parsers only with execution results. Using policy gradient methods (such as REINFORCE) is a common strategy @cite_7 @cite_14 . Alternative methods @cite_22 @cite_31 @cite_2 exist, which also maximize the likelihood of the execution results.
|
{
"cite_N": [
"@cite_30",
"@cite_14",
"@cite_22",
"@cite_7",
"@cite_2",
"@cite_31"
],
"mid": [
"2511149293",
"2751448157",
"2757361303",
"2950632879",
"2610403318",
"2612228435"
],
"abstract": [
"We demonstrate the value of collecting semantic parse labels for knowledge base question answering. In particular, (1) unlike previous studies on small-scale datasets, we show that learning from labeled semantic parses significantly improves overall performance, resulting in absolute 5 point gain compared to learning from answers, (2) we show that with an appropriate user interface, one can obtain semantic parses with high accuracy and at a cost comparable or lower than obtaining just answers, and (3) we have created and shared the largest semantic-parse labeled dataset to date in order to advance research in question answering.",
"Relational databases store a significant amount of the world's data. However, accessing this data currently requires users to understand a query language such as SQL. We propose Seq2SQL, a deep neural network for translating natural language questions to corresponding SQL queries. Our model uses rewards from in-the-loop query execution over the database to learn a policy to generate the query, which contains unordered parts that are less suitable for optimization via cross entropy loss. Moreover, Seq2SQL leverages the structure of SQL to prune the space of generated queries and significantly simplify the generation problem. In addition to the model, we release WikiSQL, a dataset of 80654 hand-annotated examples of questions and SQL queries distributed across 24241 tables from Wikipedia that is an order of magnitude larger than comparable datasets. By applying policy based reinforcement learning with a query execution environment to WikiSQL, Seq2SQL outperforms a state-of-the-art semantic parser, improving execution accuracy from 35.9% to 59.4% and logical form accuracy from 23.4% to 48.3%.",
"",
"Harnessing the statistical power of neural networks to perform language understanding and symbolic reasoning is difficult, when it requires executing efficient discrete operations against a large knowledge-base. In this work, we introduce a Neural Symbolic Machine, which contains (a) a neural \"programmer\", i.e., a sequence-to-sequence model that maps language utterances to programs and utilizes a key-variable memory to handle compositionality (b) a symbolic \"computer\", i.e., a Lisp interpreter that performs program execution, and helps find good programs by pruning the search space. We apply REINFORCE to directly optimize the task reward of this structured prediction problem. To train with weak supervision and improve the stability of REINFORCE, we augment it with an iterative maximum-likelihood training process. NSM outperforms the state-of-the-art on the WebQuestionsSP dataset when trained from question-answer pairs only, without requiring any feature engineering or domain-specific knowledge.",
"Our goal is to learn a semantic parser that maps natural language utterances into executable programs when only indirect supervision is available: examples are labeled with the correct execution result, but not the program itself. Consequently, we must search the space of programs for those that output the correct result, while not being misled by spurious programs: incorrect programs that coincidentally output the correct result. We connect two common learning paradigms, reinforcement learning (RL) and maximum marginal likelihood (MML), and then present a new learning algorithm that combines the strengths of both. The new algorithm guards against spurious programs by combining the systematic search traditionally employed in MML with the randomized exploration of RL, and by updating parameters such that probability is spread more evenly across consistent programs. We apply our learning algorithm to a new neural semantic parser and show significant gains over existing state-of-the-art results on a recent context-dependent semantic parsing task.",
""
]
}
|
1811.04695
|
2951722761
|
We present methods for the automatic classification of patent applications using an annotated dataset provided by the organizers of the ALTA 2018 shared task - Classifying Patent Applications. The goal of the task is to use computational methods to categorize patent applications according to a coarse-grained taxonomy of eight classes based on the International Patent Classification (IPC). We tested a variety of approaches for this task and the best results, 0.778 micro-averaged F1-Score, were achieved by SVM ensembles using a combination of words and characters as features. Our team, BMZ, was ranked first among 14 teams in the competition.
|
Applications of NLP and IR to legal texts include the use of text summarization methods @cite_10 to summarize legal documents and, most recently, court ruling prediction; a few papers have been published on this topic, including one reporting around 70% accuracy. Regarding the classification of patent applications, the task described in this paper, the related WIPO-alpha dataset was used in the experiments and is often used in such studies. WIPO-alpha consists of patents numbering in the thousands (and it grows every year) and is usually used in its hierarchical form @cite_7 . Recently, word embeddings and LSTMs were applied to the task @cite_14 ; there, the experiments were conducted hierarchically, but only in a superficial manner.
|
{
"cite_N": [
"@cite_14",
"@cite_10",
"@cite_7"
],
"mid": [
"",
"2897586237",
"2137332449"
],
"abstract": [
"",
"In this paper, we give an overview of the Legal Judgment Prediction (LJP) competition at the Chinese AI and Law challenge (CAIL2018). This competition focuses on LJP, which aims to predict the judgment results according to the given facts. Specifically, in CAIL2018, we proposed three subtasks of LJP for the contestants, i.e., predicting relevant law articles, charges and prison terms given the fact descriptions. CAIL2018 has attracted several hundred participants (601 teams, 1,144 contestants from 269 organizations). In this paper, we provide a detailed overview of the task definition, related works, outstanding methods and competition results in CAIL2018.",
"Text categorization is the classification to assign a text document to an appropriate category in a predefined set of categories. We focus on the special case when categories are organized in hierarchy. We present a new approach on this recently emerged subfield of text categorization. The algorithm applies an iterative learning module that allow of gradually creating a classifier by trial-and-error-like method. We present a software that has been developed on the basis of the algorithm to illustrate the capability of the algorithm on large data collection. We experimented on the very large benchmark collection, on the WIPO-alpha (World Intellectual Property Organization, Geneva, Switzerland, 2002) English patent database that consists of about 75000 XML documents distributed over 5000 categories. Our software is able to index the corpus quickly and creates a classifier in a few iteration cycles. We present the results achieved by the classifier w.r.t. various test setting"
]
}
|
1811.04695
|
2951722761
|
We present methods for the automatic classification of patent applications using an annotated dataset provided by the organizers of the ALTA 2018 shared task - Classifying Patent Applications. The goal of the task is to use computational methods to categorize patent applications according to a coarse-grained taxonomy of eight classes based on the International Patent Classification (IPC). We tested a variety of approaches for this task and the best results, 0.778 micro-averaged F1-Score, were achieved by SVM ensembles using a combination of words and characters as features. Our team, BMZ, was ranked first among 14 teams in the competition.
|
The hierarchical problem of WIPO-alpha was investigated in depth with SVMs @cite_8 @cite_13 @cite_6 . These studies showed that using a hierarchical approach produced better results. Many studies showed that evaluating a hierarchical classification task is not trivial and that many measures can integrate the class ontology; still, using multiple hierarchical measures can introduce bias @cite_1 . Yet the text classification field has improved considerably in the last 3-4 years. This is one reason why, when re-engaging with the WIPO-alpha dataset, investigating only the top nodes of the WIPO class ontology might be a good starting point for future tasks.
|
{
"cite_N": [
"@cite_1",
"@cite_13",
"@cite_6",
"@cite_8"
],
"mid": [
"1480958671",
"2302704358",
"",
"53773398"
],
"abstract": [
"Multi-label Classification (MC) often deals with hierarchically organized class taxonomies. In contrast to Hierarchical Multilabel Classification (HMC), where the class hierarchy is assumed to be known a priori, we are interested in the opposite case where it is unknown and should be extracted from multi-label data automatically. In this case the predictive performance of a classifier can be assessed by well-known Performance Measures (PMs) used in flat MC such as precision and recall. The fact that these PMs treat all class labels as independent labels, in contrast to hierarchically structured taxonomies, is a problem. As an alternative, special hierarchical PMs can be used that utilize hierarchy knowledge and apply this knowledge to the extracted hierarchy. This type of hierarchical PM has only recently been mentioned in literature. The aim of this study is first to verify whether HMC measures do significantly improve quality assessment in this setting. In addition, we seek to find a proper measure that reflects the potential quality of extracted hierarchies in the best possible way. We empirically compare ten hierarchical and four traditional flat PMs in order to investigate relations between them. The performance measurements obtained for predictions of four multi-label classifiers ML-ARAM, ML-kNN, BoosTexter and SVM on four datasets from the text mining domain are analyzed by means of hierarchical clustering and by calculating pairwise statistical consistency and discriminancy.",
"Patent lawsuits are costly and time-consuming. An ability to forecast a patent litigation and time to litigation allows companies to better allocate budget and time in managing their patent portfolios. We develop predictive models for estimating the likelihood of litigation for patents and the expected time to litigation based on both textual and non-textual features. Our work focuses on improving the state-of-the-art by relying on a different set of features and employing more sophisticated algorithms with more realistic data. The rate of patent litigations is very low, which consequently makes the problem difficult. The initial model for predicting the likelihood is further modified to capture a time-to-litigation perspective.",
"",
"Automatically extracting semantic information about word meaning and document topic from text typically involves an extensive number of classes. Such classes may represent predefined word senses, topics or document categories and are often organized in a taxonomy. The latter encodes important information, which should be exploited in learning classifiers from labeled training data. To that extent, this paper presents an extension of multiclass Support Vector Machine learning which can incorporate prior knowledge about class relationships. The latter can be encoded in the form of class attributes, similarities between classes or even a kernel function defined over the set of classes. The paper also discusses how to specify and optimize meaningful loss functions based on the relative position of classes in the taxonomy. We include experimental results for text categorization and for word sense classification. Many real-world classification tasks are multiclass problems involving large numbers of classes. This is in particular true for application domains like information retrieval and natural language processing, where classes may correspond to document categories or word senses: several thousand or even tens of thousands of classes are not uncommon. For instance, the International Patent Classification (IPC) scheme [8] consists of approximately 69,000 classes (called groups) that are used to categorize patent documents and WordNet 2.0 [3] consists of almost 80,000 word senses (called synsets) defined by lexicographers to classify the meaning of English nouns. Multiclass problems of this scale pose a severe challenge for learning algorithms and classification accuracies obtained by even the best classification methods are often disappointingly poor."
]
}
|
1811.04772
|
2900129362
|
This work addresses the question whether it is possible to design a computer-vision based automatic threat recognition (ATR) system so that it can adapt to changing specifications of a threat without having to create a new ATR each time. The changes in threat specifications, which may be warranted by intelligence reports and world events, are typically regarding the physical characteristics of what constitutes a threat: its material composition, its shape, its method of concealment, etc. Here we present our design of an AATR system (Adaptive ATR) that can adapt to changing specifications in materials characterization (meaning density, as measured by its x-ray attenuation coefficient), its mass, and its thickness. Our design uses a two-stage cascaded approach, in which the first stage is characterized by a high recall rate over the entire range of possibilities for the threat parameters that are allowed to change. The purpose of the second stage is to then fine-tune the performance of the overall system for the current threat specifications. The computational effort for this fine-tuning for achieving a desired PD/PFA rate is far less than what it would take to create a new classifier with the same overall performance for the new set of threat specifications.
|
ATR based on CT imaging for airport baggage inspection is made challenging by the artifacts that result from metallic objects that can be in arbitrary locations in a bag @cite_11 @cite_10 ; by a lack of a priori structural information as compared to medical applications of CT @cite_16 ; and by large variability in the CT density range among the objects found in bags.
|
{
"cite_N": [
"@cite_16",
"@cite_10",
"@cite_11"
],
"mid": [
"2171409947",
"2539154361",
"1137062290"
],
"abstract": [
"We present a survey of techniques for the reduction of streaking artefacts caused by metallic objects in X-ray Computed Tomography (CT) images. A comprehensive review of the existing state-of-the-art Metal Artefact Reduction (MAR) techniques, drawn predominantly from the medical CT literature, is supported by an experimental comparison of twelve MAR techniques. The experimentation is grounded in an evaluation based on a standard scientific comparison protocol for MAR methods, using a software generated medical phantom image as well as a clinical CT scan. The experimentation is extended by considering novel applications of CT imagery consisting of metal objects in non-tissue surroundings acquired from the aviation security screening domain. We address the shortage of thorough performance analyses in the existing MAR literature by conducting a qualitative as well as quantitative comparative evaluation of the selected techniques. We find the difficulty in generating accurate priors to be the predominant factor limiting the effectiveness of the state-of-the-art medical MAR techniques when applied to non-medical CT imagery. This study thus extends previous works by: comparing several state-of-the-art MAR techniques; considering both medical and non-medical applications and performing a thorough performance analysis, considering both image quality as well as computational demands.",
"In dual energy computerized tomography (DECT) which is widely used in industrial areas and security inspection, metal artifact reduction (MAR) is a troublesome problem. Pronounced streaks appear in the atomic number reconstruction and the value appears to be highly inaccurate when metal objects are present. In this article, a practical MAR method for DECT is proposed. Firstly, sinogram segmentation based on active contour model is implemented to obtain the metal projection region (MPR). Then, TV inpainting for sinogram was applied before reconstruction. Experiments demonstrate that, with our MAR method, the accuracy and image quality of the atomic number can be greatly improved.",
"In aviation security, checked luggage is screened by computed tomography scanning. Metal objects in the bags create artifacts that degrade image quality. Though there exist metal artifact reduction (MAR) methods mainly in medical imaging literature, they require knowledge of the materials in the scan, or are outlier rejection methods. To improve and evaluate a MAR method we previously introduced, that does not require knowledge of the materials in the scan, and gives good results on data with large quantities and different kinds of metal. We describe in detail an optimization which de-emphasizes metal projections and has a constraint for beam hardening and scatter. This method isolates and reduces artifacts in an intermediate image, which is then fed to a previously published sinogram replacement method. We evaluate the algorithm for luggage data containing multiple and large metal objects. We define measures of artifact reduction, and compare this method against others in MAR literature. Metal artifacts were reduced in our test images, even for multiple and large metal objects, without much loss of structure or resolution. Our MAR method outperforms the methods with which we compared it. Our approach does not make assumptions about image content, nor does it discard metal projections."
]
}
|
1811.04697
|
2949863864
|
We present our submission to the WMT18 Multimodal Translation Task. The main feature of our submission is applying a self-attentive network instead of a recurrent neural network. We evaluate two methods of incorporating the visual features in the model: first, we include the image representation as another input to the network; second, we train the model to predict the visual features and use it as an auxiliary objective. For our submission, we acquired both textual and multimodal additional data. Both of the proposed methods yield significant improvements over recurrent networks and self-attentive textual baselines.
|
Currently, most of the work has been done within the framework of sequence-to-sequence learning. Although some of the proposed approaches use explicit image analysis, most methods use an image representation obtained using image classification networks pre-trained on ImageNet, usually VGG19 @cite_0 or ResNet.
|
{
"cite_N": [
"@cite_0"
],
"mid": [
"1686810756"
],
"abstract": [
"In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision."
]
}
|
1811.04595
|
2900167402
|
Answering questions according to multi-modal context is a challenging problem as it requires a deep integration of different data sources. Existing approaches only employ partial interactions among data sources in one attention hop. In this paper, we present the Holistic Multi-modal Memory Network (HMMN) framework which fully considers the interactions between different input sources (multi-modal context, question) in each hop. In addition, it takes answer choices into consideration during the context retrieval stage. Therefore, the proposed framework effectively integrates multi-modal context, question, and answer information, which leads to more informative context retrieved for question answering. Our HMMN framework achieves state-of-the-art accuracy on MovieQA dataset. Extensive ablation studies show the importance of holistic reasoning and contributions of different attention strategies.
|
In contrast to VQA, which only involves visual context, multi-modal question answering takes multiple modalities as context, and has attracted great interest. Kembhavi et al. @cite_24 presented the Textbook Question Answering (TextbookQA) dataset, which consists of lessons from middle school science curricula with both textual and diagrammatic context. In @cite_4, the PororoQA dataset was introduced, which is constructed from the children's cartoon Pororo with video, dialogue, and description. Tapaswi et al. @cite_35 introduced the movie question answering (MovieQA) dataset, which aims to evaluate story understanding from both video and subtitle modalities. In this paper, we focus on the MovieQA dataset, and related approaches are discussed as follows.
|
{
"cite_N": [
"@cite_24",
"@cite_35",
"@cite_4"
],
"mid": [
"2746097825",
"",
"2726160912"
],
"abstract": [
"We introduce the task of Multi-Modal Machine Comprehension (M3C), which aims at answering multimodal questions given a context of text, diagrams and images. We present the Textbook Question Answering (TQA) dataset that includes 1,076 lessons and 26,260 multi-modal questions, taken from middle school science curricula. Our analysis shows that a significant portion of questions require complex parsing of the text and the diagrams and reasoning, indicating that our dataset is more complex compared to previous machine comprehension and visual question answering datasets. We extend state-of-the-art methods for textual machine comprehension and visual question answering to the TQA dataset. Our experiments show that these models do not perform well on TQA. The presented dataset opens new challenges for research in question answering and reasoning across multiple modalities.",
"",
"Question-answering (QA) on video contents is a significant challenge for achieving human-level intelligence as it involves both vision and language in real-world settings. Here we demonstrate the possibility of an AI agent performing video story QA by learning from a large amount of cartoon videos. We develop a video-story learning model, i.e. Deep Embedded Memory Networks (DEMN), to reconstruct stories from a joint scene-dialogue video stream using a latent embedding space of observed data. The video stories are stored in a long-term memory component. For a given question, an LSTM-based attention model uses the long-term memory to recall the best question-story-answer triplet by focusing on specific words containing key information. We trained the DEMN on a novel QA dataset of children's cartoon video series, Pororo. The dataset contains 16,066 scene-dialogue pairs of 20.5-hour videos, 27,328 fine-grained sentences for scene description, and 8,913 story-related QA pairs. Our experimental results show that the DEMN outperforms other QA models. This is mainly due to 1) the reconstruction of video stories in a scene-dialogue combined form that utilize the latent embedding and 2) attention. DEMN also achieved state-of-the-art results on the MovieQA benchmark."
]
}
|
1811.04608
|
2900432658
|
A restricted Boltzmann machine (RBM) learns a probability distribution over its input samples and has numerous uses like dimensionality reduction, classification and generative modeling. Conventional RBMs accept vectorized data that dismisses potentially important structural information in the original tensor (multi-way) input. Matrix-variate and tensor-variate RBMs, named MvRBM and TvRBM, have been proposed but are all restrictive by model construction, which leads to a weak model expression power. This work presents the matrix product operator RBM (MPORBM) that utilizes a tensor network generalization of Mv/TvRBM, preserves input formats in both the visible and hidden layers, and results in higher expressive power. A novel training algorithm integrating contrastive divergence and an alternating optimization procedure is also developed. Numerical experiments compare the MPORBM with the traditional RBM and MvRBM for data classification and image completion and denoising tasks. The expressive power of the MPORBM as a function of the MPO-rank is also investigated.
|
Real-life data are extensively multiway. Researchers have been motivated to develop corresponding multiway RBMs. For example, @cite_9 proposed a factored conditional RBM for modeling motion style. In their model, both historical and current motion vectors are considered as inputs so that the pairwise association between them is captured. However, since the visible layer is in vector form, the spatial information in the multiway data is not retained. In @cite_1, a three-way factored conditional RBM was proposed where a three-way weight tensor is employed to capture the correlations between the input, output, and hidden variables. However, their training data still requires vectorization.
|
{
"cite_N": [
"@cite_9",
"@cite_1"
],
"mid": [
"2115096495",
"2161000554"
],
"abstract": [
"The Conditional Restricted Boltzmann Machine (CRBM) is a recently proposed model for time series that has a rich, distributed hidden state and permits simple, exact inference. We present a new model, based on the CRBM that preserves its most important computational properties and includes multiplicative three-way interactions that allow the effective interaction weight between two units to be modulated by the dynamic state of a third unit. We factor the three-way weight tensor implied by the multiplicative model, reducing the number of parameters from O(N3) to O(N2). The result is an efficient, compact model whose effectiveness we demonstrate by modeling human motion. Like the CRBM, our model can capture diverse styles of motion with a single set of parameters, and the three-way interactions greatly improve the model's ability to blend motion styles or to transition smoothly among them.",
"Deep belief nets have been successful in modeling handwritten characters, but it has proved more difficult to apply them to real images. The problem lies in the restricted Boltzmann machine (RBM) which is used as a module for learning deep belief nets one layer at a time. The Gaussian-Binary RBMs that have been used to model real-valued data are not a good way to model the covariance structure of natural images. We propose a factored 3-way RBM that uses the states of its hidden units to represent abnormalities in the local covariance structure of an image. This provides a probabilistic framework for the widely used simple/complex cell architecture. Our model learns binary features that work very well for object recognition on the “tiny images” data set. Even better features are obtained by then using standard binary RBM’s to learn a deeper model."
]
}
|
1811.04608
|
2900432658
|
A restricted Boltzmann machine (RBM) learns a probability distribution over its input samples and has numerous uses like dimensionality reduction, classification and generative modeling. Conventional RBMs accept vectorized data that dismisses potentially important structural information in the original tensor (multi-way) input. Matrix-variate and tensor-variate RBMs, named MvRBM and TvRBM, have been proposed but are all restrictive by model construction, which leads to a weak model expression power. This work presents the matrix product operator RBM (MPORBM) that utilizes a tensor network generalization of Mv/TvRBM, preserves input formats in both the visible and hidden layers, and results in higher expressive power. A novel training algorithm integrating contrastive divergence and an alternating optimization procedure is also developed. Numerical experiments compare the MPORBM with the traditional RBM and MvRBM for data classification and image completion and denoising tasks. The expressive power of the MPORBM as a function of the MPO-rank is also investigated.
|
The above works both aim to capture the interaction among different vector inputs and are hence not directly applicable to matrix and tensor data. The first RBM designed for tensor inputs is @cite_3, which is called a tensor-variate RBM (TvRBM). In TvRBM, the visible layer is represented as a tensor but the hidden layer is still a vector. Furthermore, the connection between the visible and hidden layers is described by a canonical polyadic (CP) tensor decomposition @cite_10. However, this CP weight tensor is claimed to constrain the model representation capability @cite_4.
|
{
"cite_N": [
"@cite_4",
"@cite_10",
"@cite_3"
],
"mid": [
"2551541394",
"2024165284",
""
],
"abstract": [
"Restricted Boltzmann Machine (RBM) is an important generative model modeling vectorial data. While applying an RBM in practice to images, the data have to be vectorized. This results in high-dimensional data and valuable spatial information has got lost in vectorization. In this paper, a Matrix-Variate Restricted Boltzmann Machine (MVRBM) model is proposed by generalizing the classic RBM to explicitly model matrix data. In the new RBM model, both input and hidden variables are in matrix forms which are connected by bilinear transforms. The MVRBM has much less model parameters while retaining comparable performance as the classic RBM. The advantages of the MVRBM have been demonstrated on three real-world applications: handwritten digit denoising, reconstruction and recognition.",
"This survey provides an overview of higher-order tensor decompositions, their applications, and available software. A tensor is a multidimensional or @math -way array. Decompositions of higher-order tensors (i.e., @math -way arrays with @math ) have applications in psycho-metrics, chemometrics, signal processing, numerical linear algebra, computer vision, numerical analysis, data mining, neuroscience, graph analysis, and elsewhere. Two particular tensor decompositions can be considered to be higher-order extensions of the matrix singular value decomposition: CANDECOMP PARAFAC (CP) decomposes a tensor as a sum of rank-one tensors, and the Tucker decomposition is a higher-order form of principal component analysis. There are many other tensor decompositions, including INDSCAL, PARAFAC2, CANDELINC, DEDICOM, and PARATUCK2 as well as nonnegative variants of all of the above. The N-way Toolbox, Tensor Toolbox, and Multilinear Engine are examples of software packages for working with tensors.",
""
]
}
|
1811.04608
|
2900432658
|
A restricted Boltzmann machine (RBM) learns a probability distribution over its input samples and has numerous uses like dimensionality reduction, classification and generative modeling. Conventional RBMs accept vectorized data that dismisses potentially important structural information in the original tensor (multi-way) input. Matrix-variate and tensor-variate RBMs, named MvRBM and TvRBM, have been proposed but are all restrictive by model construction, which leads to a weak model expression power. This work presents the matrix product operator RBM (MPORBM) that utilizes a tensor network generalization of Mv/TvRBM, preserves input formats in both the visible and hidden layers, and results in higher expressive power. A novel training algorithm integrating contrastive divergence and an alternating optimization procedure is also developed. Numerical experiments compare the MPORBM with the traditional RBM and MvRBM for data classification and image completion and denoising tasks. The expressive power of the MPORBM as a function of the MPO-rank is also investigated.
|
Another RBM related model that utilizes tensor input is the matrix-variate RBM (MvRBM) @cite_4 . The visible and hidden layers in an MvRBM are both matrices. Nonetheless, to limit the number of parameters, an MvRBM models the connection between the visible and hidden layers through two separate matrices, which restricts the ability of the model to capture correlations between different data modes.
|
{
"cite_N": [
"@cite_4"
],
"mid": [
"2551541394"
],
"abstract": [
"Restricted Boltzmann Machine (RBM) is an important generative model modeling vectorial data. While applying an RBM in practice to images, the data have to be vectorized. This results in high-dimensional data and valuable spatial information has got lost in vectorization. In this paper, a Matrix-Variate Restricted Boltzmann Machine (MVRBM) model is proposed by generalizing the classic RBM to explicitly model matrix data. In the new RBM model, both input and hidden variables are in matrix forms which are connected by bilinear transforms. The MVRBM has much less model parameters while retaining comparable performance as the classic RBM. The advantages of the MVRBM have been demonstrated on three real-world applications: handwritten digit denoising, reconstruction and recognition."
]
}
|
1811.04608
|
2900432658
|
A restricted Boltzmann machine (RBM) learns a probability distribution over its input samples and has numerous uses like dimensionality reduction, classification and generative modeling. Conventional RBMs accept vectorized data that dismisses potentially important structural information in the original tensor (multi-way) input. Matrix-variate and tensor-variate RBMs, named MvRBM and TvRBM, have been proposed but are all restrictive by model construction, which leads to a weak model expression power. This work presents the matrix product operator RBM (MPORBM) that utilizes a tensor network generalization of Mv/TvRBM, preserves input formats in both the visible and hidden layers, and results in higher expressive power. A novel training algorithm integrating contrastive divergence and an alternating optimization procedure is also developed. Numerical experiments compare the MPORBM with the traditional RBM and MvRBM for data classification and image completion and denoising tasks. The expressive power of the MPORBM as a function of the MPO-rank is also investigated.
|
All these issues have motivated the MPORBM. Specifically, the MPORBM not only employs tensorial visible and hidden layers, but also utilizes a general and powerful tensor network, namely an MPO, to connect them. By doing so, an MPORBM achieves a more powerful model representation capacity than the MvRBM and at the same time greatly reduces the number of model parameters compared to a standard RBM. Note that a mapping of the standard RBM with tensor networks has been described in @cite_12. However, their work does not generalize the standard RBM to tensorial inputs and is therefore still based on visible and hidden units in vector form.
|
{
"cite_N": [
"@cite_12"
],
"mid": [
"2582761306"
],
"abstract": [
"The restricted Boltzmann machine (RBM) is one of the fundamental building blocks of deep learning. RBM finds wide applications in dimensional reduction, feature extraction, and recommender systems via modeling the probability distributions of a variety of input data including natural images, speech signals, and customer ratings, etc. We build a bridge between RBM and tensor network states (TNS) widely used in quantum many-body physics research. We devise efficient algorithms to translate an RBM into the commonly used TNS. Conversely, we give sufficient and necessary conditions to determine whether a TNS can be transformed into an RBM of given architectures. Revealing these general and constructive connections can cross-fertilize both deep learning and quantum many-body physics. Notably, by exploiting the entanglement entropy bound of TNS, we can rigorously quantify the expressive power of RBM on complex data sets. Insights into TNS and its entanglement capacity can guide the design of more powerful deep learning architectures. On the other hand, RBM can represent quantum many-body states with fewer parameters compared to TNS, which may allow more efficient classical simulations."
]
}
|
1811.04689
|
2953145910
|
Recent work has shown that exploiting relations between labels improves the performance of multi-label classification. We propose a novel framework based on generative adversarial networks (GANs) to model label dependency. The discriminator learns to model label dependency by discriminating real and generated label sets. To fool the discriminator, the classifier, or generator, learns to generate label sets with dependencies close to real data. Extensive experiments and comparisons on two large-scale image classification benchmark datasets (MS-COCO and NUS-WIDE) show that the discriminator improves generalization ability for different kinds of models.
|
The work mentioned above mainly considers the global representation of the whole image, ignoring the relationships between semantic labels and local image regions, which are difficult to decipher given complex backgrounds. To handle such cases, @cite_16 propose a Hypothesis-CNN-Pooling framework to aggregate the label scores of each proposal using category-wise max-pooling. @cite_6 transform the multi-label recognition problem into a multi-class, multi-instance learning problem and make use of label-view information of the proposals to enhance features. Newer work @cite_4 @cite_9 uses long short-term memory (LSTM) units to iteratively discover a sequence of attentional and informative regions and further predict labeling scores.
|
{
"cite_N": [
"@cite_9",
"@cite_16",
"@cite_4",
"@cite_6"
],
"mid": [
"2963300078",
"",
"2780096433",
"2410641892"
],
"abstract": [
"This paper proposes a novel deep architecture to address multi-label image recognition, a fundamental and practical task towards general visual understanding. Current solutions for this task usually rely on an extra step of extracting hypothesis regions (i.e., region proposals), resulting in redundant computation and sub-optimal performance. In this work, we achieve the interpretable and contextualized multi-label image classification by developing a recurrent memorized-attention module. This module consists of two alternately performed components: i) a spatial transformer layer to locate attentional regions from the convolutional feature maps in a region-proposal-free way and ii) an LSTM (Long-Short Term Memory) sub-network to sequentially predict semantic labeling scores on the located regions while capturing the global dependencies of these regions. The LSTM also output the parameters for computing the spatial transformer. On large-scale benchmarks of multi-label image classification (e.g., MS-COCO and PASCAL VOC 07), our approach demonstrates superior performances over other existing state-of-the-arts in both accuracy and efficiency.",
"",
"Recognizing multiple labels of images is a fundamental but challenging task in computer vision, and remarkable progress has been attained by localizing semantic-aware image regions and predicting their labels with deep convolutional neural networks. The step of hypothesis regions (region proposals) localization in these existing multi-label image recognition pipelines, however, usually takes redundant computation cost, e.g., generating hundreds of meaningless proposals with non-discriminative information and extracting their features, and the spatial contextual dependency modeling among the localized regions are often ignored or over-simplified. To resolve these issues, this paper proposes a recurrent attention reinforcement learning framework to iteratively discover a sequence of attentional and informative regions that are related to different semantic objects and further predict label scores conditioned on these regions. Besides, our method explicitly models long-term dependencies among these attentional regions that help to capture semantic label co-occurrence and thus facilitate multi-label recognition. Extensive experiments and comparisons on two large-scale benchmarks (i.e., PASCAL VOC and MS-COCO) show that our model achieves superior performance over existing state-of-the-art methods in both performance and efficiency as well as explicitly identifying image-level semantic labels to specific object regions.",
"Convolutional neural networks (CNNs) have shown great performance as general feature representations for object recognition applications. However, for multi-label images that contain multiple objects from different categories, scales and locations, global CNN features are not optimal. In this paper, we incorporate local information to enhance the feature discriminative power. In particular, we first extract object proposals from each image. With each image treated as a bag and object proposals extracted from it treated as instances, we transform the multi-label recognition problem into a multi-class multi-instance learning problem. Then, in addition to extracting the typical CNN feature representation from each proposal, we propose to make use of ground-truth bounding box annotations (strong labels) to add another level of local information by using nearest-neighbor relationships of local regions to form a multi-view pipeline. The proposed multi-view multiinstance framework utilizes both weak and strong labels effectively, and more importantly it has the generalization ability to even boost the performance of unseen categories by partial strong labels from other categories. Our framework is extensively compared with state-of-the-art handcrafted feature based methods and CNN based methods on two multi-label benchmark datasets. The experimental results validate the discriminative power and the generalization ability of the proposed framework. With strong labels, our framework is able to achieve state-of-the-art results in both datasets."
]
}
|
1811.04689
|
2953145910
|
Recent work has shown that exploiting relations between labels improves the performance of multi-label classification. We propose a novel framework based on generative adversarial networks (GANs) to model label dependency. The discriminator learns to model label dependency by discriminating real and generated label sets. To fool the discriminator, the classifier, or generator, learns to generate label sets with dependencies close to real data. Extensive experiments and comparisons on two large-scale image classification benchmark datasets (MS-COCO and NUS-WIDE) show that the discriminator improves generalization ability for different kinds of models.
|
Generative adversarial networks (GANs) are a learning framework consisting of a generator and a discriminator. The former tries to transform a known distribution (e.g., Gaussian) into the distribution of the real data, while the latter measures the difference between the target and generated distributions. However, this framework requires the composition of the generator and discriminator to be fully differentiable. In this paper, since the label distributions are discrete multi-hot vectors, instead of using non-differentiable parts such as , we borrow the idea of the Wasserstein GAN (WGAN) @cite_25, which uses the earth mover's distance so that it can measure the distance between a discrete and a continuous distribution. Moreover, inspired by Jang @cite_24 and Li @cite_0, we also use the Gumbel sigmoid to reparameterize the sampling procedure and make it differentiable.
|
{
"cite_N": [
"@cite_24",
"@cite_0",
"@cite_25"
],
"mid": [
"2547875792",
"2787284976",
"2605135824"
],
"abstract": [
"Categorical variables are a natural choice for representing discrete structure in the world. However, stochastic neural networks rarely use categorical latent variables due to the inability to backpropagate through samples. In this work, we present an efficient gradient estimator that replaces the non-differentiable sample from a categorical distribution with a differentiable sample from a novel Gumbel-Softmax distribution. This distribution has the essential property that it can be smoothly annealed into a categorical distribution. We show that our Gumbel-Softmax estimator outperforms state-of-the-art gradient estimators on structured output prediction and unsupervised generative modeling tasks with categorical latent variables, and enables large speedups on semi-supervised classification.",
"Long Short-Term Memory (LSTM) is one of the most widely used recurrent structures in sequence modeling. It aims to use gates to control information flow (e.g., whether to skip some information or not) in the recurrent computations, although its practical implementation based on soft gates only partially achieves this goal. In this paper, we propose a new way for LSTM training, which pushes the output values of the gates towards 0 or 1. By doing so, we can better control the information flow: the gates are mostly open or closed, instead of in a middle state, which makes the results more interpretable. Empirical studies show that (1) Although it seems that we restrict the model capacity, there is no performance drop: we achieve better or comparable performances due to its better generalization ability; (2) The outputs of gates are not sensitive to their inputs: we can easily compress the LSTM unit in multiple ways, e.g., low-rank approximation and low-precision approximation. The compressed models are even better than the baseline models without compression.",
"Generative Adversarial Networks (GANs) are powerful generative models, but suffer from training instability. The recently proposed Wasserstein GAN (WGAN) makes progress toward stable training of GANs, but sometimes can still generate only low-quality samples or fail to converge. We find that these problems are often due to the use of weight clipping in WGAN to enforce a Lipschitz constraint on the critic, which can lead to undesired behavior. We propose an alternative to clipping weights: penalize the norm of gradient of the critic with respect to its input. Our proposed method performs better than standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning, including 101-layer ResNets and language models over discrete data. We also achieve high quality generations on CIFAR-10 and LSUN bedrooms."
]
}
|
1811.04797
|
2949925331
|
Distracted pedestrians, like distracted drivers, are an increasingly dangerous threat and precursors to pedestrian accidents in urban communities, often resulting in grave injuries and fatalities. Mitigating such hazards to pedestrian safety requires employment of pedestrian safety systems and applications that are effective in detecting them. Designing such frameworks is possible with the availability of sophisticated mobile and wearable devices equipped with high-precision on-board sensors capable of capturing fine-grained user movements and context, especially distracted activities. However, the key technical challenge is accurate recognition of distractions with minimal resources in real-time given the computation and communication limitations of these devices. Several recently published works improve distracted pedestrian safety by leveraging on complex activity recognition frameworks using mobile and wearable sensors to detect pedestrian distractions. Their primary focus, however, was to achieve high detection accuracy, and therefore most designs are either resource intensive and unsuitable for implementation on mainstream mobile devices, or computationally slow and not useful for real-time pedestrian safety applications, or require specialized hardware and less likely to be adopted by most users. In the quest for a pedestrian safety system, we design an efficient and real-time pedestrian distraction detection technique that overcomes some of these shortcomings. We demonstrate its practicality by implementing prototypes on commercially-available mobile and wearable devices and evaluating them using data collected from participants in realistic pedestrian experiments. Using these evaluations, we show that our technique achieves a favorable balance between computational efficiency, detection accuracy and energy consumption compared to some other techniques in the literature.
|
Several research efforts in the literature have employed mobile and/or wearable devices to improve pedestrian safety by detecting hazardous contexts using users' smartphone cameras @cite_29 @cite_30 @cite_1 @cite_21 . @cite_29 utilized the rear camera of the smartphone to detect vehicles approaching a distracted user (or pedestrian) in order to promptly deliver a danger alert or notification. @cite_1 used image processing techniques and multi-sensor (barometer, accelerometer and gyroscope) information on smartphones to detect surrounding objects. Similarly, @cite_21 used real-time video processing of road traffic to help partially sighted pedestrians spot obstacles in their path. @cite_30 is another proposal that applied image processing techniques to a smartphone camera feed to find multiple obstacles in a user's path simultaneously, unlike @cite_21 . One significant drawback of all these proposals is that they employ costly and resource-intensive image capture and processing techniques, which can adversely impact the performance and battery life of mobile devices, thus diminishing their chances of being continually adopted by users. Reliance on a smartphone's camera also restricts the ability of these techniques to operate when the camera is obstructed, for example, in a user's pocket.
|
{
"cite_N": [
"@cite_30",
"@cite_29",
"@cite_21",
"@cite_1"
],
"mid": [
"2027120765",
"2090340221",
"1490526505",
"1966120703"
],
"abstract": [
"Using mobile phones while walking for activities that require continuous focus on the screen, such as texting, has become more and more popular in the last years. To avoid colliding with obstacles, such as lampposts and pedestrians, focus has to be taken off the screen in regular intervals. In this paper we introduce SpareEye, an Android application that warns the smartphone user from obstacles in her way. We use only the camera of the phone and no special hardware, ensuring that it requires minimal effort from the user to use the application during everyday life. Experimental results show that we can detect obstacles with high accuracy, with only some false positives and few false negatives.",
"Research in social science has shown that mobile phone conversations distract users, presenting a significant impact to pedestrian safety; for example, a mobile phone user deep in conversation while crossing a street is generally more at risk than other pedestrians not engaged in such behavior. We propose WalkSafe, an Android smartphone application that aids people that walk and talk, improving the safety of pedestrian mobile phone users. WalkSafe uses the back camera of the mobile phone to detect vehicles approaching the user, alerting the user of a potentially unsafe situation; more specifically WalkSafe i) uses machine learning algorithms implemented on the phone to detect the front views and back views of moving vehicles and ii) exploits phone APIs to save energy by running the vehicle detection algorithm only during active calls. We present our initial design, implementation and evaluation of the WalkSafe App that is capable of real-time detection of the front and back views of cars, indicating cars are approaching or moving away from the user, respectively. WalkSafe is implemented on Android phones and alerts the user of unsafe conditions using sound and vibration from the phone. WalkSafe is available on Android Market.",
"In this paper, we present a real-time obstacle detection system for the mobility improvement for the visually impaired using a handheld Smartphone. Though there are many existing assistants for the visually impaired, there is not a single one that is low cost, ultra-portable, non-intrusive and able to detect the low-height objects on the floor. This paper proposes a system to detect any objects attached to the floor regardless of their height. Unlike some existing systems where only histogram or edge information is used, the proposed system combines both cues and overcomes some limitations of existing systems. The obstacles on the floor in front of the user can be reliably detected in real time using the proposed system implemented on a Smartphone. The proposed system has been tested in different types of floor conditions and a field trial on five blind participants has been conducted. The experimental results demonstrate its reliability in comparison to existing systems.",
"Accident detection and alarm system is very important to detect possible accidents or dangers for the peoples using their mobile devices while walking, i.e., distracted walking. In this paper, we introduce an automatic accident detection and alarm system, called AutoADAS, which is fully implemented and tested on the real mobile devices. The proposed system can be activated either manually or automatically when user walks. Under the manual mode, user activates the system before distracted walking while under the automatic mode, a \"user behaviour profiling\" module is used to recognize (distracted) walking behaviours and an \"object detection\" module is activated. Using image processing and camera field of view (FOV), the distance and angle between the user and detected objects are estimated and then applied to identify whether any potential accidents can happen. The \"accident analysis and prediction\" module includes: temporal alarm that inputs the user's walking speed and distance with respect to the detected objects and outputs temporal accident prediction; spatial alarm that inputs the user's walking direction and angle with respect to the detected objects and outputs spatial accident prediction. Once the proposed system positively predicts a potential accident, the \"alarm and suggestion\" module alerts the user with text, sound or vibration."
]
}
|
1811.04797
|
2949925331
|
Distracted pedestrians, like distracted drivers, are an increasingly dangerous threat and precursors to pedestrian accidents in urban communities, often resulting in grave injuries and fatalities. Mitigating such hazards to pedestrian safety requires employment of pedestrian safety systems and applications that are effective in detecting them. Designing such frameworks is possible with the availability of sophisticated mobile and wearable devices equipped with high-precision on-board sensors capable of capturing fine-grained user movements and context, especially distracted activities. However, the key technical challenge is accurate recognition of distractions with minimal resources in real-time given the computation and communication limitations of these devices. Several recently published works improve distracted pedestrian safety by leveraging on complex activity recognition frameworks using mobile and wearable sensors to detect pedestrian distractions. Their primary focus, however, was to achieve high detection accuracy, and therefore most designs are either resource intensive and unsuitable for implementation on mainstream mobile devices, or computationally slow and not useful for real-time pedestrian safety applications, or require specialized hardware and less likely to be adopted by most users. In the quest for a pedestrian safety system, we design an efficient and real-time pedestrian distraction detection technique that overcomes some of these shortcomings. We demonstrate its practicality by implementing prototypes on commercially-available mobile and wearable devices and evaluating them using data collected from participants in realistic pedestrian experiments. Using these evaluations, we show that our technique achieves a favorable balance between computational efficiency, detection accuracy and energy consumption compared to some other techniques in the literature.
|
Techniques for aiding pedestrian safety that do not rely on camera input, but rather on a smartphone's microphone @cite_12 or GPS @cite_6 , have also been proposed. For instance, @cite_12 employs sound features extracted from the smartphone's microphone to detect oncoming vehicles, while @cite_6 recognizes potential collisions between pedestrians and oncoming vehicles using the smartphone's GPS. One main shortcoming of these systems is that they are useful only in detecting outdoor traffic-related hazard scenarios.
|
{
"cite_N": [
"@cite_6",
"@cite_12"
],
"mid": [
"2601803760",
"2115104908"
],
"abstract": [
"We implement a collision prevention system, called pSafety, which instantaneously informs pedestrians and drivers of the potential threatening accidents. Unlike other systems, pSafety alerts pedestrians of threatening vehicles coming from not only the line-of-sight, but also non-line-of-sight due to obstructions of the wall corner or other vehicles. pSafety collects GPS information from smartphones of pedestrians and vehicle drivers through mobile networks. The main challenge of pSafety is that current smartphones demonstrate larger distance errors on the order of a few meters due to its intrinsic low-cost GPS receivers. To address the impact of large-error positioning to pSafety, we regard each participant on the map as a sector that indicates a predicted location. We subsequently design the Sector Overlap Detection Algorithm, called SODA, to detect whether two sectors are overlapping in time O(1). To avoid warning fatigue, we additionally provide a threat ranking method to evaluate the degree of risk for each potential collision event. Through our designed App, pedestrians and drivers both could receive a clear view of potential risks and then take proper actions to avoid accidents. In our implementation, we show that pSafety rapidly informs participants (i.e., pedestrians and drivers) and provides each participant a sufficient response time to avoid collision.",
"Pedestrians' use of Motion Pictures Expert Group audio layer 3 players or mobile phones can pose the risk of being hit by motor vehicles. We present an approach for detecting a crash risk level using the computing power and the microphone of mobile devices that can be used to alert the user in advance of an approaching vehicle so as to avoid a crash. A single feature extractor classifier is not usually able to deal with the diversity of risky acoustic scenarios. In this paper, we address the problem of detection of vehicles approaching a pedestrian by a novel simple nonresource intensive acoustic method. The method uses a set of existing statistical tools to mine signal features. Audio features are adaptively thresholded for relevance and classified with a three-component heuristic. The resulting acoustic hazard detection system has a very low false-positive detection rate. The results of this study could help mobile device manufacturers to embed the presented features into future potable devices and contribute to road safety."
]
}
|
1811.04797
|
2949925331
|
Distracted pedestrians, like distracted drivers, are an increasingly dangerous threat and precursors to pedestrian accidents in urban communities, often resulting in grave injuries and fatalities. Mitigating such hazards to pedestrian safety requires employment of pedestrian safety systems and applications that are effective in detecting them. Designing such frameworks is possible with the availability of sophisticated mobile and wearable devices equipped with high-precision on-board sensors capable of capturing fine-grained user movements and context, especially distracted activities. However, the key technical challenge is accurate recognition of distractions with minimal resources in real-time given the computation and communication limitations of these devices. Several recently published works improve distracted pedestrian safety by leveraging on complex activity recognition frameworks using mobile and wearable sensors to detect pedestrian distractions. Their primary focus, however, was to achieve high detection accuracy, and therefore most designs are either resource intensive and unsuitable for implementation on mainstream mobile devices, or computationally slow and not useful for real-time pedestrian safety applications, or require specialized hardware and less likely to be adopted by most users. In the quest for a pedestrian safety system, we design an efficient and real-time pedestrian distraction detection technique that overcomes some of these shortcomings. We demonstrate its practicality by implementing prototypes on commercially-available mobile and wearable devices and evaluating them using data collected from participants in realistic pedestrian experiments. Using these evaluations, we show that our technique achieves a favorable balance between computational efficiency, detection accuracy and energy consumption compared to some other techniques in the literature.
|
Furthermore, techniques that employ specialized devices and sensors for improving pedestrian safety have also been proposed. @cite_7 uses information from specialized motion sensors attached to pedestrians' shoes to profile step and slope patterns in order to detect curbs, ramps and other obstructions. Similarly, Ramos and Irani @cite_2 used a depth camera (paired with a smartphone), while Ahn and Kim @cite_23 and @cite_15 employed an ultrasonic sensor for detecting pedestrian hazards and/or for guided navigation. Besides relying on specialized sensors or hardware, these systems address pedestrian safety only by detecting obstacles or other potential hazards (to pedestrians).
|
{
"cite_N": [
"@cite_15",
"@cite_23",
"@cite_7",
"@cite_2"
],
"mid": [
"1512548251",
"2095416579",
"2082422581",
"2058723972"
],
"abstract": [
"It is well recognized that walking while using mobile phones will make people more susceptible at various risks. Existing studies to improve smartphone users' safety are mainly limited to detecting incoming vehicles. They are not able to address some more common and equally dangerous accidents such as trips, falling from stairs, platforms or falling into an open manhole. These hazards are generally caused by sudden change of ground. In this paper, we propose UltraSee, the first system that is able to detect sudden change of ground for pedestrian mobile phone users. UltraSee augments smartphones with a small ultrasonic sensor which can detect the abrupt change of distance ahead. UltraSee also leverages the context information of smartphone usage such as screen status and holding orientation to improve detection accuracy and reduce energy consumption as well as unnecessary alarms. We have carried out extensive experiments in different scenarios and by different users. The results show that UltraSee can achieve accident detection rate of 94 with false positive rate of 4.4 and reduce unnecessary alarms by 90 . In terms of energy consumption, UltraSee costs only about 20 energy compared to the existing works that only rely on smartphone cameras.",
"Recently, multitasking on smart phones during navigation has emerged as a problematic social behavior due to its potential danger. In this paper, we propose to alleviate the situation by employing a sensor system for obstacle detection and aid the multi-tasking user so that one can safely navigate and carry out the on-going secondary task as effectively as possible. As such, we have implemented an ultrasonic sensor system interfaced into the smart phone that can constantly appraise the user of the incoming obstacles. We ran experiments to validate our approach, checking whether such a system would help user bump less into obstacles than without, and observe their multitasking behavior such as the physical attentional switch. Our experiments have shown that with the aid of the sensor system, the user's attention switch was significantly reduced, however, there was no differences in the performance (e.g. no. of collision) because the momentary strategic spatial and path planning was quite effective with slow pace and light pedestrian traffic. We conclude therefore human dual or multitasking ability is sufficient to overlap \"casual\" video watching and \"slow\" navigation. However, with more stringent task, e.g. text messaging and heavier traffic, we expect the reduced attention shift to ultimately improve the performance as well.",
"This video is a demonstration of the work discussed in our full paper available in the MobiSys'15 proceedings. The video illustrates a sensing technology for fine-grained location classification in an urban environment, for enhancing pedestrian safety. Our system seeks to detect the transitions from sidewalk locations to in-street locations, to enable applications such as alerting texting pedestrians when they step into the street. Existing positioning technologies are not sufficiently precise to allow distinguishing a position on the sidewalk from a position in the street, as explored in our previous work. To this end, we use shoe-mounted inertial sensors for location classification based on surface gradient profile and step patterns. This approach is different from existing shoe sensing solutions that focus on dead reckoning and inertial navigation. The shoe sensors relay inertial sensor measurements to a smartphone, which extracts the step pattern and the inclination of the ground a pedestrian is walking on. This allows detecting transitions such as stepping over a curb or walking down sidewalk ramps that lead into the street. We carried out walking trials in metropolitan environments in United States (Manhattan) and Europe (Turin). The results from these experiments show that we can accurately determine transitions between sidewalk and street locations to identify pedestrian risk.",
"Mobile device use while walking, or eyes-busy mobile interaction, is a leading cause of life-threatening pedestrian collisions. We introduce CrashAlert, a system that augments mobile devices with a depth camera, to provide distance and location visual cues of obstacles on the user's path. In a realistic environment outside the lab, CrashAlert users improve their handling of potential collisions, dodging and slowing down for simple ones while lifting their head in more complex situations. Qualitative results outline the value of extending users' peripheral alertness in eyes-busy mobile interaction through non-intrusive depth cues, as used in CrashAlert. We present the design features of our system and lessons learned from our evaluation."
]
}
|
1811.04797
|
2949925331
|
Distracted pedestrians, like distracted drivers, are an increasingly dangerous threat and precursors to pedestrian accidents in urban communities, often resulting in grave injuries and fatalities. Mitigating such hazards to pedestrian safety requires employment of pedestrian safety systems and applications that are effective in detecting them. Designing such frameworks is possible with the availability of sophisticated mobile and wearable devices equipped with high-precision on-board sensors capable of capturing fine-grained user movements and context, especially distracted activities. However, the key technical challenge is accurate recognition of distractions with minimal resources in real-time given the computation and communication limitations of these devices. Several recently published works improve distracted pedestrian safety by leveraging on complex activity recognition frameworks using mobile and wearable sensors to detect pedestrian distractions. Their primary focus, however, was to achieve high detection accuracy, and therefore most designs are either resource intensive and unsuitable for implementation on mainstream mobile devices, or computationally slow and not useful for real-time pedestrian safety applications, or require specialized hardware and less likely to be adopted by most users. In the quest for a pedestrian safety system, we design an efficient and real-time pedestrian distraction detection technique that overcomes some of these shortcomings. We demonstrate its practicality by implementing prototypes on commercially-available mobile and wearable devices and evaluating them using data collected from participants in realistic pedestrian experiments. Using these evaluations, we show that our technique achieves a favorable balance between computational efficiency, detection accuracy and energy consumption compared to some other techniques in the literature.
|
The problem of detecting distracted pedestrians can be generalized as a concurrent activity recognition (CAR) problem, where the goal is to detect concurrent pedestrian activities of being mobile (e.g., walking, running or climbing/descending stairs) and being distracted (e.g., texting, eating or reading). CAR techniques that can distinguish different combinations of elementary activities have been extensively used in the literature for complex human activity recognition. For instance, @cite_25 @cite_5 used motion sensor data from two smartphones, one in the trouser pocket and the other on the wrist, to recognize activities that involve hand gestures, such as smoking, eating, drinking coffee and giving a talk. @cite_9 also employed multi-sensor time series data to recognize sequential, concurrent, and generic complex activities by building a dictionary of time series patterns to represent atomic activities.
|
{
"cite_N": [
"@cite_5",
"@cite_9",
"@cite_25"
],
"mid": [
"2556269267",
"",
"2304267454"
],
"abstract": [
"Smoking is known to be one of the main causes for premature deaths. A reliable smoking detection method can enable applications for an insight into a user's smoking behaviour and for use in smoking cessation programs. However, it is difficult to accurately detect smoking because it can be performed in various postures or in combination with other activities, it is less-repetitive, and it may be confused with other similar activities, such as drinking and eating. In this paper, we propose to use a two-layer hierarchical smoking detection algorithm (HLSDA) that uses a classifier at the first layer, followed by a lazy context-rule-based correction method that utilizes neighbouring segments to improve the detection. We evaluated our algorithm on a dataset of 45 hours collected over a three month period where 11 participants performed 17 hours (230 cigarettes) of smoking while sitting, standing, walking, and in a group conversation. The rest of 28 hours consists of other similar activities, such as eating, and drinking. We show that our algorithm improves recall as well as precision for smoking compared to a single layer classification approach. For smoking activity, we achieve an F-measure of 90-97 in person-dependent evaluations and 83-94 in person-independent evaluations. In most cases, our algorithm corrects up to 50 of the misclassified smoking segments. Our algorithm also improves the detection of eating and drinking in a similar way. We make our dataset and data logger publicly available for the reproducibility of our work.",
"",
"The position of on-body motion sensors plays an important role in human activity recognition. Most often, mobile phone sensors at the trouser pocket or an equivalent position are used for this purpose. However, this position is not suitable for recognizing activities that involve hand gestures, such as smoking, eating, drinking coffee and giving a talk. To recognize such activities, wrist-worn motion sensors are used. However, these two positions are mainly used in isolation. To use richer context information, we evaluate three motion sensors (accelerometer, gyroscope and linear acceleration sensor) at both wrist and pocket positions. Using three classifiers, we show that the combination of these two positions outperforms the wrist position alone, mainly at smaller segmentation windows. Another problem is that less-repetitive activities, such as smoking, eating, giving a talk and drinking coffee, cannot be recognized easily at smaller segmentation windows unlike repetitive activities, like walking, jogging and biking. For this purpose, we evaluate the effect of seven window sizes (2–30 s) on thirteen activities and show how increasing window size affects these various activities in different ways. We also propose various optimizations to further improve the recognition of these activities. For reproducibility, we make our dataset publicly available."
]
}
|
1811.04797
|
2949925331
|
Distracted pedestrians, like distracted drivers, are an increasingly dangerous threat and precursors to pedestrian accidents in urban communities, often resulting in grave injuries and fatalities. Mitigating such hazards to pedestrian safety requires employment of pedestrian safety systems and applications that are effective in detecting them. Designing such frameworks is possible with the availability of sophisticated mobile and wearable devices equipped with high-precision on-board sensors capable of capturing fine-grained user movements and context, especially distracted activities. However, the key technical challenge is accurate recognition of distractions with minimal resources in real-time given the computation and communication limitations of these devices. Several recently published works improve distracted pedestrian safety by leveraging on complex activity recognition frameworks using mobile and wearable sensors to detect pedestrian distractions. Their primary focus, however, was to achieve high detection accuracy, and therefore most designs are either resource intensive and unsuitable for implementation on mainstream mobile devices, or computationally slow and not useful for real-time pedestrian safety applications, or require specialized hardware and less likely to be adopted by most users. In the quest for a pedestrian safety system, we design an efficient and real-time pedestrian distraction detection technique that overcomes some of these shortcomings. We demonstrate its practicality by implementing prototypes on commercially-available mobile and wearable devices and evaluating them using data collected from participants in realistic pedestrian experiments. Using these evaluations, we show that our technique achieves a favorable balance between computational efficiency, detection accuracy and energy consumption compared to some other techniques in the literature.
|
However, several shortcomings in these approaches, as outlined below, prevent them from being effectively used in pedestrian safety applications. For instance, the approach in @cite_5 requires the system to keep track of the time segments that precede and follow the current one, and is thus unsuitable for pedestrian safety applications that require real-time operation and feedback. Others are not suitable for implementation on resource-constrained mobile and wearable devices, primarily due to their use of complex feature sets and classification functions. As discussed before, one of the main functional requirements for a mobile/wearable device based CAR framework for pedestrian safety is computational and energy efficiency. Earlier research efforts in energy-aware recognition mechanisms @cite_10 have achieved a favorable balance between classification accuracy and energy consumption, but these schemes have been successful in recognizing only simple activities, such as standing, walking and sitting, and not concurrent activities. Recently, @cite_17 proposed an energy-aware CAR framework for real-time applications by using a minimal feature set to recognize individual data segments and a hierarchical classification mechanism for concurrent activity recognition. However, their framework employs specialized wearable device hardware and may not work on commercial off-the-shelf mobile devices, making it less likely to be adopted by users.
|
{
"cite_N": [
"@cite_5",
"@cite_10",
"@cite_17"
],
"mid": [
"2556269267",
"",
"1980909913"
],
"abstract": [
"Smoking is known to be one of the main causes for premature deaths. A reliable smoking detection method can enable applications for an insight into a user's smoking behaviour and for use in smoking cessation programs. However, it is difficult to accurately detect smoking because it can be performed in various postures or in combination with other activities, it is less-repetitive, and it may be confused with other similar activities, such as drinking and eating. In this paper, we propose to use a two-layer hierarchical smoking detection algorithm (HLSDA) that uses a classifier at the first layer, followed by a lazy context-rule-based correction method that utilizes neighbouring segments to improve the detection. We evaluated our algorithm on a dataset of 45 hours collected over a three month period where 11 participants performed 17 hours (230 cigarettes) of smoking while sitting, standing, walking, and in a group conversation. The rest of 28 hours consists of other similar activities, such as eating, and drinking. We show that our algorithm improves recall as well as precision for smoking compared to a single layer classification approach. For smoking activity, we achieve an F-measure of 90-97 in person-dependent evaluations and 83-94 in person-independent evaluations. In most cases, our algorithm corrects up to 50 of the misclassified smoking segments. Our algorithm also improves the detection of eating and drinking in a similar way. We make our dataset and data logger publicly available for the reproducibility of our work.",
"",
"This paper presents an energy-aware method for recognizing time series acceleration data containing both activities and gestures using a wearable device coupled with a smartphone. In our method, we use a small wearable device to collect accelerometer data from a user's wrist, recognizing each data segment using a minimal feature set chosen automatically for that segment. For each collected data segment, if our model finds that recognizing the segment requires high-cost features that the wearable device cannot extract, such as dynamic time warping for gesture recognition, then the segment is transmitted to the smartphone where the high-cost features are extracted and recognition is performed. Otherwise, only the minimum required set of low-cost features are extracted from the segment on the wearable device and only the recognition result, i.e., label, is transmitted to the smartphone in place of the raw data, reducing transmission costs. Our method automatically constructs this adaptive processing pipeline solely from training data."
]
}
|
1811.04797
|
2949925331
|
Distracted pedestrians, like distracted drivers, are an increasingly dangerous threat and precursors to pedestrian accidents in urban communities, often resulting in grave injuries and fatalities. Mitigating such hazards to pedestrian safety requires employment of pedestrian safety systems and applications that are effective in detecting them. Designing such frameworks is possible with the availability of sophisticated mobile and wearable devices equipped with high-precision on-board sensors capable of capturing fine-grained user movements and context, especially distracted activities. However, the key technical challenge is accurate recognition of distractions with minimal resources in real-time given the computation and communication limitations of these devices. Several recently published works improve distracted pedestrian safety by leveraging on complex activity recognition frameworks using mobile and wearable sensors to detect pedestrian distractions. Their primary focus, however, was to achieve high detection accuracy, and therefore most designs are either resource intensive and unsuitable for implementation on mainstream mobile devices, or computationally slow and not useful for real-time pedestrian safety applications, or require specialized hardware and less likely to be adopted by most users. In the quest for a pedestrian safety system, we design an efficient and real-time pedestrian distraction detection technique that overcomes some of these shortcomings. We demonstrate its practicality by implementing prototypes on commercially-available mobile and wearable devices and evaluating them using data collected from participants in realistic pedestrian experiments. Using these evaluations, we show that our technique achieves a favorable balance between computational efficiency, detection accuracy and energy consumption compared to some other techniques in the literature.
|
In our preliminary work @cite_11 towards designing an efficient CAR model suitable for detecting distracted pedestrian activities, we proposed a novel CAR technique called DFAM. We also undertook a preliminary evaluation of DFAM primarily in a personalized setting, where both training and testing of DFAM were conducted using the same participants' motion data. In this paper, we significantly expand our DFAM evaluation in a personalized setting, and comprehensively evaluate a generalized setting where the training and testing are conducted on disjoint sets of participants. Such a generalized evaluation can validate whether DFAM can accurately detect distracted pedestrian activities even when personalized training data from a user is not available. We were also able to significantly reduce the resource footprint of our initial DFAM implementation by designing a hierarchical activity recognition model, presented and evaluated in Section .
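The hierarchical, cost-aware dispatch underlying this line of work (cheap on-device recognition first, costly processing only when needed) can be sketched as below; all names and the confidence threshold are hypothetical illustrations, not DFAM's actual design:

```python
def tiered_recognition(segment, cheap_model, costly_model, threshold=0.8):
    """Run the low-cost on-device model first; fall back to the costly
    model (standing in for offloading raw data to the phone) only when
    the cheap model's confidence is below the threshold."""
    label, confidence = cheap_model(segment)
    if confidence >= threshold:
        return label, 'device'   # only the label leaves the wearable
    label, _ = costly_model(segment)
    return label, 'phone'        # raw segment had to be offloaded

# Toy stand-in models: 'easy' segments resolve on-device, the rest escalate.
cheap = lambda seg: ('walking', 0.95) if seg == 'easy' else ('unknown', 0.30)
costly = lambda seg: ('gesture', 0.99)
print(tiered_recognition('easy', cheap, costly))  # ('walking', 'device')
print(tiered_recognition('hard', cheap, costly))  # ('gesture', 'phone')
```

The design choice being illustrated is that the expensive path is taken per segment, not per deployment, which is what keeps average transmission and computation costs low.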
|
{
"cite_N": [
"@cite_11"
],
"mid": [
"2963574905"
],
"abstract": [
"Pedestrian safety continues to be a significant concern in urban communities with distraction being one of the main contributing factor behind serious accidents involving pedestrians. The advent of sophisticated mobile and wearable devices, equipped with high-precision on-board sensors capable of measuring fine-grained user movements and context, provides a tremendous opportunity for designing effective pedestrian safety systems and applications. Accurate recognition of pedestrian distractions in real-time given the memory, computation and communication limitations of these devices, however, remains a key technical challenge in the design of such systems. Earlier research efforts in this direction have primarily focused on achieving high distraction detection accuracy, resulting in techniques that are either resource intensive and unsuitable for implementation on mainstream mobile devices, or computationally slow and not real-time, or require specialized hardware and thus less likely to be adopted by most users. Our goal in this paper is to design a pedestrian distraction detection technique that overcomes some of these shortcomings (of existing techniques) and achieves a favorable balance between computational efficiency, detection accuracy, and energy consumption."
]
}
|
1811.04533
|
2949658958
|
Feature pyramids are widely exploited by both the state-of-the-art one-stage object detectors (e.g., DSSD, RetinaNet, RefineDet) and the two-stage object detectors (e.g., Mask R-CNN, DetNet) to alleviate the problem arising from scale variation across object instances. Although these object detectors with feature pyramids achieve encouraging results, they have some limitations due to that they only simply construct the feature pyramid according to the inherent multi-scale, pyramidal architecture of the backbones which are actually designed for object classification task. Newly, in this work, we present a method called Multi-Level Feature Pyramid Network (MLFPN) to construct more effective feature pyramids for detecting objects of different scales. First, we fuse multi-level features (i.e. multiple layers) extracted by backbone as the base feature. Second, we feed the base feature into a block of alternating joint Thinned U-shape Modules and Feature Fusion Modules and exploit the decoder layers of each u-shape module as the features for detecting objects. Finally, we gather up the decoder layers with equivalent scales (sizes) to develop a feature pyramid for object detection, in which every feature map consists of the layers (features) from multiple levels. To evaluate the effectiveness of the proposed MLFPN, we design and train a powerful end-to-end one-stage object detector we call M2Det by integrating it into the architecture of SSD, which gets better detection performance than state-of-the-art one-stage detectors. Specifically, on MS-COCO benchmark, M2Det achieves AP of 41.0 at speed of 11.8 FPS with single-scale inference strategy and AP of 44.2 with multi-scale inference strategy, which is the new state-of-the-art results among one-stage detectors. The code will be made available on this https URL.
|
The first one is the featurized image pyramid (a series of resized copies of the input image), used to produce semantically representative multi-scale features. Features from images of different scales yield predictions separately, and these predictions work together to give the final prediction. In terms of recognition accuracy and localization precision, features from variously sized images do surpass features based merely on single-scale images. Methods such as @cite_22 and SNIP @cite_9 employed this tactic. Despite the performance gain, this strategy can be costly in both time and memory, which forbids its application in real-time tasks. Considering this major drawback, methods such as SNIP @cite_9 can choose to employ featurized image pyramids only during the test phase as a fallback, whereas other methods including Fast R-CNN @cite_21 and Faster R-CNN @cite_2 chose not to use this strategy by default.
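The featurized-image-pyramid input described here can be sketched as follows, as a minimal NumPy version using nearest-neighbour resizing; real detectors resize with proper interpolation and then run the full network on every copy, which is exactly the cost this paragraph points out:

```python
import numpy as np

def image_pyramid(image, scales=(1.0, 0.5, 0.25)):
    """Build a featurized-image-pyramid input: one resized copy per scale.

    Nearest-neighbour resizing keeps the sketch dependency-free; a full
    detector would then run on each copy and merge the predictions.
    """
    h, w = image.shape[:2]
    pyramid = []
    for s in scales:
        nh, nw = max(1, int(h * s)), max(1, int(w * s))
        # Index maps implementing nearest-neighbour resampling.
        rows = np.arange(nh) * h // nh
        cols = np.arange(nw) * w // nw
        pyramid.append(image[rows[:, None], cols])
    return pyramid

levels = image_pyramid(np.zeros((64, 96)))
print([lvl.shape for lvl in levels])  # [(64, 96), (32, 48), (16, 24)]
```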
|
{
"cite_N": [
"@cite_9",
"@cite_21",
"@cite_22",
"@cite_2"
],
"mid": [
"2951581050",
"",
"2341497066",
"2953106684"
],
"abstract": [
"An analysis of different techniques for recognizing and detecting objects under extreme scale variation is presented. Scale specific and scale invariant design of detectors are compared by training them with different configurations of input data. By evaluating the performance of different network architectures for classifying small objects on ImageNet, we show that CNNs are not robust to changes in scale. Based on this analysis, we propose to train and test detectors on the same scales of an image-pyramid. Since small and large objects are difficult to recognize at smaller and larger scales respectively, we present a novel training scheme called Scale Normalization for Image Pyramids (SNIP) which selectively back-propagates the gradients of object instances of different sizes as a function of the image scale. On the COCO dataset, our single model performance is 45.7 and an ensemble of 3 networks obtains an mAP of 48.3 . We use off-the-shelf ImageNet-1000 pre-trained models and only train with bounding box supervision. Our submission won the Best Student Entry in the COCO 2017 challenge. Code will be made available at this http URL .",
"",
"The field of object detection has made significant advances riding on the wave of region-based ConvNets, but their training procedure still includes many heuristics and hyperparameters that are costly to tune. We present a simple yet surprisingly effective online hard example mining (OHEM) algorithm for training region-based ConvNet detectors. Our motivation is the same as it has always been – detection datasets contain an overwhelming number of easy examples and a small number of hard examples. Automatic selection of these hard examples can make training more effective and efficient. OHEM is a simple and intuitive algorithm that eliminates several heuristics and hyperparameters in common use. But more importantly, it yields consistent and significant boosts in detection performance on benchmarks like PASCAL VOC 2007 and 2012. Its effectiveness increases as datasets become larger and more difficult, as demonstrated by the results on the MS COCO dataset. Moreover, combined with complementary advances in the field, OHEM leads to state-of-the-art results of 78.9 and 76.3 mAP on PASCAL VOC 2007 and 2012 respectively.",
"State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet and Fast R-CNN have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features---using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model, our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available."
]
}
|
1811.04533
|
2949658958
|
Feature pyramids are widely exploited by both the state-of-the-art one-stage object detectors (e.g., DSSD, RetinaNet, RefineDet) and the two-stage object detectors (e.g., Mask R-CNN, DetNet) to alleviate the problem arising from scale variation across object instances. Although these object detectors with feature pyramids achieve encouraging results, they have some limitations due to that they only simply construct the feature pyramid according to the inherent multi-scale, pyramidal architecture of the backbones which are actually designed for object classification task. Newly, in this work, we present a method called Multi-Level Feature Pyramid Network (MLFPN) to construct more effective feature pyramids for detecting objects of different scales. First, we fuse multi-level features (i.e. multiple layers) extracted by backbone as the base feature. Second, we feed the base feature into a block of alternating joint Thinned U-shape Modules and Feature Fusion Modules and exploit the decoder layers of each u-shape module as the features for detecting objects. Finally, we gather up the decoder layers with equivalent scales (sizes) to develop a feature pyramid for object detection, in which every feature map consists of the layers (features) from multiple levels. To evaluate the effectiveness of the proposed MLFPN, we design and train a powerful end-to-end one-stage object detector we call M2Det by integrating it into the architecture of SSD, which gets better detection performance than state-of-the-art one-stage detectors. Specifically, on MS-COCO benchmark, M2Det achieves AP of 41.0 at speed of 11.8 FPS with single-scale inference strategy and AP of 44.2 with multi-scale inference strategy, which is the new state-of-the-art results among one-stage detectors. The code will be made available on this https URL.
|
The second one is detecting objects in feature pyramids extracted from the inherent layers within the network, while taking only a single-scale image as input. This strategy demands significantly less additional memory and computational cost than the first, enabling deployment during both the training and test phases of real-time networks. Moreover, the feature-pyramid construction module can be easily revised and fitted into state-of-the-art detectors based on deep neural networks. MS-CNN @cite_24 , SSD @cite_23 , DSSD @cite_11 , FPN @cite_6 , YOLOv3 @cite_0 , RetinaNet @cite_12 , and RefineDet @cite_8 adopted this tactic in different ways.
|
{
"cite_N": [
"@cite_8",
"@cite_6",
"@cite_24",
"@cite_0",
"@cite_23",
"@cite_12",
"@cite_11"
],
"mid": [
"",
"2949533892",
"2490270993",
"2796347433",
"2193145675",
"2743473392",
"2579985080"
],
"abstract": [
"",
"Feature pyramids are a basic component in recognition systems for detecting objects at different scales. But recent deep learning object detectors have avoided pyramid representations, in part because they are compute and memory intensive. In this paper, we exploit the inherent multi-scale, pyramidal hierarchy of deep convolutional networks to construct feature pyramids with marginal extra cost. A top-down architecture with lateral connections is developed for building high-level semantic feature maps at all scales. This architecture, called a Feature Pyramid Network (FPN), shows significant improvement as a generic feature extractor in several applications. Using FPN in a basic Faster R-CNN system, our method achieves state-of-the-art single-model results on the COCO detection benchmark without bells and whistles, surpassing all existing single-model entries including those from the COCO 2016 challenge winners. In addition, our method can run at 5 FPS on a GPU and thus is a practical and accurate solution to multi-scale object detection. Code will be made publicly available.",
"A unified deep neural network, denoted the multi-scale CNN (MS-CNN), is proposed for fast multi-scale object detection. The MS-CNN consists of a proposal sub-network and a detection sub-network. In the proposal sub-network, detection is performed at multiple output layers, so that receptive fields match objects of different scales. These complementary scale-specific detectors are combined to produce a strong multi-scale object detector. The unified network is learned end-to-end, by optimizing a multi-task loss. Feature upsampling by deconvolution is also explored, as an alternative to input upsampling, to reduce the memory and computation costs. State-of-the-art object detection performance, at up to 15 fps, is reported on datasets, such as KITTI and Caltech, containing a substantial number of small objects.",
"We present some updates to YOLO! We made a bunch of little design changes to make it better. We also trained this new network that's pretty swell. It's a little bigger than last time but more accurate. It's still fast though, don't worry. At 320x320 YOLOv3 runs in 22 ms at 28.2 mAP, as accurate as SSD but three times faster. When we look at the old .5 IOU mAP detection metric YOLOv3 is quite good. It achieves 57.9 mAP@50 in 51 ms on a Titan X, compared to 57.5 mAP@50 in 198 ms by RetinaNet, similar performance but 3.8x faster. As always, all the code is online at this https URL",
"We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. SSD is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stages and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Experimental results on the PASCAL VOC, COCO, and ILSVRC datasets confirm that SSD has competitive accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. For (300 300 ) input, SSD achieves 74.3 mAP on VOC2007 test at 59 FPS on a Nvidia Titan X and for (512 512 ) input, SSD achieves 76.9 mAP, outperforming a comparable state of the art Faster R-CNN model. Compared to other single stage methods, SSD has much better accuracy even with a smaller input image size. Code is available at https: github.com weiliu89 caffe tree ssd.",
"The highest accuracy object detectors to date are based on a two-stage approach popularized by R-CNN, where a classifier is applied to a sparse set of candidate object locations. In contrast, one-stage detectors that are applied over a regular, dense sampling of possible object locations have the potential to be faster and simpler, but have trailed the accuracy of two-stage detectors thus far. In this paper, we investigate why this is the case. We discover that the extreme foreground-background class imbalance encountered during training of dense detectors is the central cause. We propose to address this class imbalance by reshaping the standard cross entropy loss such that it down-weights the loss assigned to well-classified examples. Our novel Focal Loss focuses training on a sparse set of hard examples and prevents the vast number of easy negatives from overwhelming the detector during training. To evaluate the effectiveness of our loss, we design and train a simple dense detector we call RetinaNet. Our results show that when trained with the focal loss, RetinaNet is able to match the speed of previous one-stage detectors while surpassing the accuracy of all existing state-of-the-art two-stage detectors. Code is at: this https URL",
"The main contribution of this paper is an approach for introducing additional context into state-of-the-art general object detection. To achieve this we first combine a state-of-the-art classifier (Residual-101[14]) with a fast detection framework (SSD[18]). We then augment SSD+Residual-101 with deconvolution layers to introduce additional large-scale context in object detection and improve accuracy, especially for small objects, calling our resulting system DSSD for deconvolutional single shot detector. While these two contributions are easily described at a high-level, a naive implementation does not succeed. Instead we show that carefully adding additional stages of learned transformations, specifically a module for feed-forward connections in deconvolution and a new output module, enables this new approach and forms a potential way forward for further detection research. Results are shown on both PASCAL VOC and COCO detection. Our DSSD with @math input achieves 81.5 mAP on VOC2007 test, 80.0 mAP on VOC2012 test, and 33.2 mAP on COCO, outperforming a state-of-the-art method R-FCN[3] on each dataset."
]
}
|
1811.04533
|
2949658958
|
Feature pyramids are widely exploited by both the state-of-the-art one-stage object detectors (e.g., DSSD, RetinaNet, RefineDet) and the two-stage object detectors (e.g., Mask R-CNN, DetNet) to alleviate the problem arising from scale variation across object instances. Although these object detectors with feature pyramids achieve encouraging results, they have some limitations due to that they only simply construct the feature pyramid according to the inherent multi-scale, pyramidal architecture of the backbones which are actually designed for object classification task. Newly, in this work, we present a method called Multi-Level Feature Pyramid Network (MLFPN) to construct more effective feature pyramids for detecting objects of different scales. First, we fuse multi-level features (i.e. multiple layers) extracted by backbone as the base feature. Second, we feed the base feature into a block of alternating joint Thinned U-shape Modules and Feature Fusion Modules and exploit the decoder layers of each u-shape module as the features for detecting objects. Finally, we gather up the decoder layers with equivalent scales (sizes) to develop a feature pyramid for object detection, in which every feature map consists of the layers (features) from multiple levels. To evaluate the effectiveness of the proposed MLFPN, we design and train a powerful end-to-end one-stage object detector we call M2Det by integrating it into the architecture of SSD, which gets better detection performance than state-of-the-art one-stage detectors. Specifically, on MS-COCO benchmark, M2Det achieves AP of 41.0 at speed of 11.8 FPS with single-scale inference strategy and AP of 44.2 with multi-scale inference strategy, which is the new state-of-the-art results among one-stage detectors. The code will be made available on this https URL.
|
To the best of our knowledge, MS-CNN @cite_24 proposed two sub-networks and was the first to incorporate multi-scale features into deep convolutional neural networks for object detection. Its proposal sub-net exploited feature maps of several resolutions to detect multi-scale objects in an image. SSD @cite_23 exploited feature maps from the later layers of the VGG16 base-net, plus extra feature layers, for predictions at multiple scales. FPN @cite_6 utilized lateral connections and a top-down pathway to produce a feature pyramid, achieving more powerful representations. DSSD @cite_11 implemented deconvolution layers to aggregate context and enhance the high-level semantics of shallow features. RefineDet @cite_8 adopted two-step cascade regression, which achieved remarkable progress in accuracy while keeping the efficiency of SSD.
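The FPN-style top-down pathway with lateral connections can be roughly illustrated as below; this is a NumPy sketch in which the learned 1x1 lateral convolutions are reduced to hypothetical per-level projection matrices and nearest-neighbour upsampling stands in for the interpolation an actual implementation would use:

```python
import numpy as np

def upsample2x(x):
    """Nearest-neighbour 2x upsampling of a (C, H, W) feature map."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

def fpn_topdown(features, laterals):
    """FPN-style top-down pathway: start from the coarsest map and, at each
    finer level, add the 1x1-projected lateral feature to the upsampled map.

    `features` are backbone maps ordered fine-to-coarse; `laterals` are
    per-level projection matrices mapping each map's channels to the
    shared pyramid width (a stand-in for learned 1x1 convolutions).
    """
    # Project every backbone map to the common channel width.
    proj = [np.einsum('oc,chw->ohw', W, f) for W, f in zip(laterals, features)]
    out = [proj[-1]]                   # coarsest level passes through
    for p in reversed(proj[:-1]):      # walk from coarse to fine
        out.append(upsample2x(out[-1]) + p)
    return out[::-1]                   # return fine-to-coarse again

rng = np.random.default_rng(0)
feats = [rng.normal(size=(8, 16, 16)),
         rng.normal(size=(16, 8, 8)),
         rng.normal(size=(32, 4, 4))]
lats = [rng.normal(size=(4, c)) for c in (8, 16, 32)]
pyr = fpn_topdown(feats, lats)
print([p.shape for p in pyr])  # [(4, 16, 16), (4, 8, 8), (4, 4, 4)]
```

Every pyramid level ends up with the same channel width while keeping its own spatial resolution, which is what lets a single detection head run on all levels.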
|
{
"cite_N": [
"@cite_8",
"@cite_6",
"@cite_24",
"@cite_23",
"@cite_11"
],
"mid": [
"",
"2949533892",
"2490270993",
"2193145675",
"2579985080"
],
"abstract": [
"",
"Feature pyramids are a basic component in recognition systems for detecting objects at different scales. But recent deep learning object detectors have avoided pyramid representations, in part because they are compute and memory intensive. In this paper, we exploit the inherent multi-scale, pyramidal hierarchy of deep convolutional networks to construct feature pyramids with marginal extra cost. A top-down architecture with lateral connections is developed for building high-level semantic feature maps at all scales. This architecture, called a Feature Pyramid Network (FPN), shows significant improvement as a generic feature extractor in several applications. Using FPN in a basic Faster R-CNN system, our method achieves state-of-the-art single-model results on the COCO detection benchmark without bells and whistles, surpassing all existing single-model entries including those from the COCO 2016 challenge winners. In addition, our method can run at 5 FPS on a GPU and thus is a practical and accurate solution to multi-scale object detection. Code will be made publicly available.",
"A unified deep neural network, denoted the multi-scale CNN (MS-CNN), is proposed for fast multi-scale object detection. The MS-CNN consists of a proposal sub-network and a detection sub-network. In the proposal sub-network, detection is performed at multiple output layers, so that receptive fields match objects of different scales. These complementary scale-specific detectors are combined to produce a strong multi-scale object detector. The unified network is learned end-to-end, by optimizing a multi-task loss. Feature upsampling by deconvolution is also explored, as an alternative to input upsampling, to reduce the memory and computation costs. State-of-the-art object detection performance, at up to 15 fps, is reported on datasets, such as KITTI and Caltech, containing a substantial number of small objects.",
"We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. SSD is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stages and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Experimental results on the PASCAL VOC, COCO, and ILSVRC datasets confirm that SSD has competitive accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. For (300 300 ) input, SSD achieves 74.3 mAP on VOC2007 test at 59 FPS on a Nvidia Titan X and for (512 512 ) input, SSD achieves 76.9 mAP, outperforming a comparable state of the art Faster R-CNN model. Compared to other single stage methods, SSD has much better accuracy even with a smaller input image size. Code is available at https: github.com weiliu89 caffe tree ssd.",
"The main contribution of this paper is an approach for introducing additional context into state-of-the-art general object detection. To achieve this we first combine a state-of-the-art classifier (Residual-101[14]) with a fast detection framework (SSD[18]). We then augment SSD+Residual-101 with deconvolution layers to introduce additional large-scale context in object detection and improve accuracy, especially for small objects, calling our resulting system DSSD for deconvolutional single shot detector. While these two contributions are easily described at a high-level, a naive implementation does not succeed. Instead we show that carefully adding additional stages of learned transformations, specifically a module for feed-forward connections in deconvolution and a new output module, enables this new approach and forms a potential way forward for further detection research. Results are shown on both PASCAL VOC and COCO detection. Our DSSD with @math input achieves 81.5 mAP on VOC2007 test, 80.0 mAP on VOC2012 test, and 33.2 mAP on COCO, outperforming a state-of-the-art method R-FCN[3] on each dataset."
]
}
|
1811.04516
|
2949078140
|
We show that it is possible to reduce a high-dimensional object like a neural network agent into a low-dimensional vector representation with semantic meaning that we call agent embeddings, akin to word or face embeddings. This can be done by collecting examples of existing networks, vectorizing their weights, and then learning a generative model over the weight space in a supervised fashion. We investigate a pole-balancing task, Cart-Pole, as a case study and show that multiple new pole-balancing networks can be generated from their agent embeddings without direct access to training data from the Cart-Pole simulator. In general, the learned embedding space is helpful for mapping out the space of solutions for a given task. We observe in the case of Cart-Pole the surprising finding that good agents make different decisions despite learning similar representations, whereas bad agents make similar (bad) decisions while learning dissimilar representations. Linearly interpolating between the latent embeddings for a good agent and a bad agent yields an agent embedding that generates a network with intermediate performance, where the performance can be tuned according to the coefficient of interpolation. Linear extrapolation in the latent space also results in performance boosts, up to a point.
|
One line of work that has proven useful in increasing our understanding of deep neural network models is that of convergent learning @cite_9 , which measures correlations between the weights of different neural networks with the same architecture to determine the similarity of representations learned by these different networks. Convergent learning investigations have hitherto, to our knowledge, only been done on image classifiers, but we extend them to reinforcement learning agents in this paper.
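The correlation-based comparison at the heart of convergent learning can be sketched as below: a toy NumPy version in which the second net's units are a noisy permutation of the first's, and a greedy one-to-one assignment (a cheap stand-in for the bipartite matching used in the paper) recovers the alignment:

```python
import numpy as np

def unit_correlations(acts_a, acts_b):
    """Correlation matrix between units of two networks.

    acts_a, acts_b: (n_samples, n_units) activations of the same layer in
    two independently trained nets, recorded on the same inputs.
    """
    a = (acts_a - acts_a.mean(0)) / (acts_a.std(0) + 1e-8)
    b = (acts_b - acts_b.mean(0)) / (acts_b.std(0) + 1e-8)
    return a.T @ b / len(a)

def greedy_match(corr):
    """Greedy one-to-one assignment of units by descending correlation."""
    corr = corr.copy()
    pairs = []
    for _ in range(min(corr.shape)):
        i, j = np.unravel_index(np.argmax(corr), corr.shape)
        pairs.append((i, j, corr[i, j]))
        corr[i, :] = -np.inf   # each unit matched at most once
        corr[:, j] = -np.inf
    return pairs

rng = np.random.default_rng(1)
A = rng.normal(size=(500, 5))              # activations of net A
perm = np.array([2, 0, 4, 1, 3])
B = A[:, perm] + 0.01 * rng.normal(size=(500, 5))  # net B = permuted A + noise
pairs = greedy_match(unit_correlations(A, B))
print(sorted((i, j) for i, j, _ in pairs))  # recovers the permutation
```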
|
{
"cite_N": [
"@cite_9"
],
"mid": [
"2178314882"
],
"abstract": [
"Recent success in training deep neural networks have prompted active investigation into the features learned on their intermediate layers. Such research is difficult because it requires making sense of non-linear computations performed by millions of parameters, but valuable because it increases our ability to understand current models and create improved versions of them. In this paper we investigate the extent to which neural networks exhibit what we call convergent learning, which is when the representations learned by multiple nets converge to a set of features which are either individually similar between networks or where subsets of features span similar low-dimensional spaces. We propose a specific method of probing representations: training multiple networks and then comparing and contrasting their individual, learned representations at the level of neurons or groups of neurons. We begin research into this question using three techniques to approximately align different neural networks on a feature level: a bipartite matching approach that makes one-to-one assignments between neurons, a sparse prediction approach that finds one-to-many mappings, and a spectral clustering approach that finds many-to-many mappings. This initial investigation reveals a few previously unknown properties of neural networks, and we argue that future research into the question of convergent learning will yield many more. The insights described here include (1) that some features are learned reliably in multiple networks, yet other features are not consistently learned; (2) that units learn to span low-dimensional subspaces and, while these subspaces are common to multiple networks, the specific basis vectors learned are not; (3) that the representation codes show evidence of being a mix between a local code and slightly, but not fully, distributed codes across multiple units."
]
}
|
1811.04516
|
2949078140
|
We show that it is possible to reduce a high-dimensional object like a neural network agent into a low-dimensional vector representation with semantic meaning that we call agent embeddings, akin to word or face embeddings. This can be done by collecting examples of existing networks, vectorizing their weights, and then learning a generative model over the weight space in a supervised fashion. We investigate a pole-balancing task, Cart-Pole, as a case study and show that multiple new pole-balancing networks can be generated from their agent embeddings without direct access to training data from the Cart-Pole simulator. In general, the learned embedding space is helpful for mapping out the space of solutions for a given task. We observe in the case of Cart-Pole the surprising finding that good agents make different decisions despite learning similar representations, whereas bad agents make similar (bad) decisions while learning dissimilar representations. Linearly interpolating between the latent embeddings for a good agent and a bad agent yields an agent embedding that generates a network with intermediate performance, where the performance can be tuned according to the coefficient of interpolation. Linear extrapolation in the latent space also results in performance boosts, up to a point.
|
Generative modeling is the technique of learning the underlying data distribution of a training set, with the objective of generating new data points similar to those from the training set. Deep neural networks have been used to build generative models for images @cite_25 , audio @cite_3 , video @cite_26 , natural language sentences @cite_14 , DNA sequences @cite_15 , and even protein structures @cite_12 . Complex semantic attributes can often be reduced to simple linear vectors and linear arithmetic in the latent spaces of these generative models.
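The latent-space linear arithmetic mentioned here amounts to the sketch below; the decoder is a hypothetical linear stand-in for the learned generator over weight space, used only to show where an interpolated embedding would go:

```python
import numpy as np

def interpolate_embeddings(z_a, z_b, alpha):
    """Linear interpolation in latent space: alpha=0 gives z_a, alpha=1 gives z_b."""
    return (1.0 - alpha) * z_a + alpha * z_b

def decode(z, W, b):
    """Hypothetical linear decoder from an embedding to a flat weight vector."""
    return W @ z + b

z_good = np.array([1.0, 0.0])   # embedding of a well-performing agent
z_bad = np.array([0.0, 2.0])    # embedding of a poorly performing agent
z_mid = interpolate_embeddings(z_good, z_bad, 0.5)
print(z_mid)  # [0.5 1. ]
```

Sweeping `alpha` through [0, 1] (or slightly beyond, for extrapolation) traces a line in the latent space; decoding each point yields a family of networks whose behavior varies with the interpolation coefficient.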
|
{
"cite_N": [
"@cite_14",
"@cite_26",
"@cite_3",
"@cite_15",
"@cite_25",
"@cite_12"
],
"mid": [
"2132339004",
"2520707650",
"2519091744",
"2963008857",
"2173520492",
"2891185006"
],
"abstract": [
"A goal of statistical language modeling is to learn the joint probability function of sequences of words in a language. This is intrinsically difficult because of the curse of dimensionality: a word sequence on which the model will be tested is likely to be different from all the word sequences seen during training. Traditional but very successful approaches based on n-grams obtain generalization by concatenating very short overlapping sequences seen in the training set. We propose to fight the curse of dimensionality by learning a distributed representation for words which allows each training sentence to inform the model about an exponential number of semantically neighboring sentences. The model learns simultaneously (1) a distributed representation for each word along with (2) the probability function for word sequences, expressed in terms of these representations. Generalization is obtained because a sequence of words that has never been seen before gets high probability if it is made of words that are similar (in the sense of having a nearby representation) to words forming an already seen sentence. Training such large models (with millions of parameters) within a reasonable time is itself a significant challenge. We report on experiments using neural networks for the probability function, showing on two text corpora that the proposed approach significantly improves on state-of-the-art n-gram models, and that the proposed approach allows to take advantage of longer contexts.",
"We capitalize on large amounts of unlabeled video in order to learn a model of scene dynamics for both video recognition tasks (e.g. action classification) and video generation tasks (e.g. future prediction). We propose a generative adversarial network for video with a spatio-temporal convolutional architecture that untangles the scene's foreground from the background. Experiments suggest this model can generate tiny videos up to a second at full frame rate better than simple baselines, and we show its utility at predicting plausible futures of static images. Moreover, experiments and visualizations show the model internally learns useful features for recognizing actions with minimal supervision, suggesting scene dynamics are a promising signal for representation learning. We believe generative video models can impact many applications in video understanding and simulation.",
"",
"Disentangling factors of variation has always been a challenging problem in representation learning. Existing algorithms suffer from many limitations, such as unpredictable disentangling factors, bad quality of generated images from encodings, lack of identity information, etc. In this paper, we proposed a supervised algorithm called DNA-GAN trying to disentangle different attributes of images. The latent representations of images are DNA-like, in which each individual piece represents an independent factor of variation. By annihilating the recessive piece and swapping a certain piece of two latent representations, we obtain another two different representations which could be decoded into images. In order to obtain realistic images and also disentangled representations, we introduced the discriminator for adversarial training. Experiments on Multi-PIE and CelebA datasets demonstrate the effectiveness of our method and the advantage of overcoming limitations existing in other methods.",
"In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks - demonstrating their applicability as general image representations.",
"Analyzing the structure and function of proteins is a key part of understanding biology at the molecular and cellular level. In addition, a major engineering challenge is to design new proteins in a principled and methodical way. Current computational modeling methods for protein design are slow and often require human oversight and intervention. Here, we apply Generative Adversarial Networks (GANs) to the task of generating protein structures, toward application in fast de novo protein design. We encode protein structures in terms of pairwise distances between alpha-carbons on the protein backbone, which eliminates the need for the generative model to learn translational and rotational symmetries. We then introduce a convex formulation of corruption-robust 3D structure recovery to fold the protein structures from generated pairwise distance maps, and solve these problems using the Alternating Direction Method of Multipliers. We test the effectiveness of our models by predicting completions of corrupted protein structures and show that the method is capable of quickly producing biochemically viable solutions."
]
}
|
1811.04516
|
2949078140
|
We show that it is possible to reduce a high-dimensional object like a neural network agent into a low-dimensional vector representation with semantic meaning that we call agent embeddings, akin to word or face embeddings. This can be done by collecting examples of existing networks, vectorizing their weights, and then learning a generative model over the weight space in a supervised fashion. We investigate a pole-balancing task, Cart-Pole, as a case study and show that multiple new pole-balancing networks can be generated from their agent embeddings without direct access to training data from the Cart-Pole simulator. In general, the learned embedding space is helpful for mapping out the space of solutions for a given task. We observe in the case of Cart-Pole the surprising finding that good agents make different decisions despite learning similar representations, whereas bad agents make similar (bad) decisions while learning dissimilar representations. Linearly interpolating between the latent embeddings for a good agent and a bad agent yields an agent embedding that generates a network with intermediate performance, where the performance can be tuned according to the coefficient of interpolation. Linear extrapolation in the latent space also results in performance boosts, up to a point.
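The interpolation and extrapolation between agent embeddings described above reduce to simple vector arithmetic in the latent space. A minimal sketch with made-up 4-d embeddings standing in for the learned codes of a good and a bad Cart-Pole agent:

```python
import numpy as np

# Hypothetical agent embeddings (illustrative values only).
z_good = np.array([1.0, 0.0, 2.0, -1.0])
z_bad = np.array([-1.0, 2.0, 0.0, 1.0])

def interpolate(z_a, z_b, t):
    """Linear interpolation in latent space: t=0 gives z_a, t=1 gives z_b."""
    return (1.0 - t) * z_a + t * z_b

z_mid = interpolate(z_good, z_bad, 0.5)      # an "intermediate" agent
z_extra = interpolate(z_good, z_bad, -0.25)  # extrapolation past the good agent
```

Decoding `z_mid` with the learned generative model would yield a network whose performance sits between the two endpoints, tunable via the coefficient `t`.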
|
The salient aspect of meta-learning that our work is connected to is the use of neural networks to generate other neural networks. This has been done before in the context of hyperparameter optimization, where one neural network is used to tune the hyperparameters of another neural network. zoph2016neural used a neural network as a reinforcement learning agent to select architectural choices (like the width of the convolution kernel or the operations in a recurrent cell) in the design of another neural network. This is known as neural architecture search, and several efficiency improvements to the original idea have since been proposed @cite_24 @cite_0 . smithson2016neural modeled hyperparameter optimization in a neural network as a response surface that can be approximated by another neural network. andrychowicz2016learning,ravi2016optimization used an external LSTM to meta-learn the optimization function used to update a child network.
|
{
"cite_N": [
"@cite_24",
"@cite_0"
],
"mid": [
"2785366763",
"2951104886"
],
"abstract": [
"We propose Efficient Neural Architecture Search (ENAS), a fast and inexpensive approach for automatic model design. In ENAS, a controller learns to discover neural network architectures by searching for an optimal subgraph within a large computational graph. The controller is trained with policy gradient to select a subgraph that maximizes the expected reward on the validation set. Meanwhile the model corresponding to the selected subgraph is trained to minimize a canonical cross entropy loss. Thanks to parameter sharing between child models, ENAS is fast: it delivers strong empirical performances using much fewer GPU-hours than all existing automatic model design approaches, and notably, 1000x less expensive than standard Neural Architecture Search. On the Penn Treebank dataset, ENAS discovers a novel architecture that achieves a test perplexity of 55.8, establishing a new state-of-the-art among all methods without post-training processing. On the CIFAR-10 dataset, ENAS designs novel architectures that achieve a test error of 2.89 , which is on par with NASNet (, 2018), whose test error is 2.65 .",
"This paper addresses the scalability challenge of architecture search by formulating the task in a differentiable manner. Unlike conventional approaches of applying evolution or reinforcement learning over a discrete and non-differentiable search space, our method is based on the continuous relaxation of the architecture representation, allowing efficient search of the architecture using gradient descent. Extensive experiments on CIFAR-10, ImageNet, Penn Treebank and WikiText-2 show that our algorithm excels in discovering high-performance convolutional architectures for image classification and recurrent architectures for language modeling, while being orders of magnitude faster than state-of-the-art non-differentiable techniques. Our implementation has been made publicly available to facilitate further research on efficient architecture search algorithms."
]
}
|
1811.04588
|
2949533007
|
Concepts, which represent a group of different instances sharing common properties, are essential information in knowledge representation. Most conventional knowledge embedding methods encode both entities (concepts and instances) and relations as vectors in a low dimensional semantic space equally, ignoring the difference between concepts and instances. In this paper, we propose a novel knowledge graph embedding model named TransC by differentiating concepts and instances. Specifically, TransC encodes each concept in a knowledge graph as a sphere and each instance as a vector in the same semantic space. We use the relative positions to model the relations between concepts and instances (i.e., instanceOf), and the relations between concepts and sub-concepts (i.e., subClassOf). We evaluate our model on both link prediction and triple classification tasks on the dataset based on YAGO. Experimental results show that TransC outperforms state-of-the-art methods, and captures the semantic transitivity of the instanceOf and subClassOf relations. Our codes and datasets can be obtained from https: this http URL.
|
RESCAL @cite_7 is the first bilinear model. It associates each entity with a vector to capture its latent semantics, and represents each relation as a matrix that models pairwise interactions between latent factors. In recent years, many extensions of RESCAL have been proposed by restricting its bilinear function. For example, DistMult @cite_11 simplifies RESCAL by restricting the relation matrices to diagonal matrices. HolE @cite_23 combines the expressive power of RESCAL with the efficiency and simplicity of DistMult, representing both entities and relations as vectors in @math . ComplEx @cite_10 extends DistMult by introducing complex-valued embeddings so as to better model asymmetric relations.
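The progression of scoring functions described above can be sketched in a few lines. This is an illustrative sketch only (random vectors, no training), but it shows how DistMult restricts RESCAL's relation matrix to a diagonal, why DistMult is symmetric, and how ComplEx recovers asymmetry with complex embeddings:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
h, t = rng.normal(size=d), rng.normal(size=d)  # head / tail entity vectors

# RESCAL: full relation matrix R, score = h^T R t
R = rng.normal(size=(d, d))
rescal_score = h @ R @ t

# DistMult: R restricted to diag(r), so the score is a triple product.
r = rng.normal(size=d)
distmult_score = h @ np.diag(r) @ t
assert np.isclose(distmult_score, np.sum(h * r * t))  # equivalent form

# DistMult is symmetric in h and t, so it cannot model asymmetric relations.
assert np.isclose(distmult_score, t @ np.diag(r) @ h)

# ComplEx: complex embeddings; Re(sum h_i r_i conj(t_i)) is generally
# asymmetric because conjugation breaks the h/t symmetry.
hc = rng.normal(size=d) + 1j * rng.normal(size=d)
tc = rng.normal(size=d) + 1j * rng.normal(size=d)
rc = rng.normal(size=d) + 1j * rng.normal(size=d)
complex_score = np.real(np.sum(hc * rc * np.conj(tc)))
```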
|
{
"cite_N": [
"@cite_10",
"@cite_23",
"@cite_7",
"@cite_11"
],
"mid": [
"2432356473",
"2949972983",
"205829674",
"2951077644"
],
"abstract": [
"In statistical relational learning, the link prediction problem is key to automatically understand the structure of large knowledge bases. As in previous studies, we propose to solve this problem through latent factorization. However, here we make use of complex valued embeddings. The composition of complex embeddings can handle a large variety of binary relations, among them symmetric and antisymmetric relations. Compared to state-of-the-art models such as Neural Tensor Network and Holographic Embeddings, our approach based on complex embeddings is arguably simpler, as it only uses the Hermitian dot product, the complex counterpart of the standard dot product between real vectors. Our approach is scalable to large datasets as it remains linear in both space and time, while consistently outperforming alternative approaches on standard link prediction benchmarks.",
"Learning embeddings of entities and relations is an efficient and versatile method to perform machine learning on relational data such as knowledge graphs. In this work, we propose holographic embeddings (HolE) to learn compositional vector space representations of entire knowledge graphs. The proposed method is related to holographic models of associative memory in that it employs circular correlation to create compositional representations. By using correlation as the compositional operator HolE can capture rich interactions but simultaneously remains efficient to compute, easy to train, and scalable to very large datasets. In extensive experiments we show that holographic embeddings are able to outperform state-of-the-art methods for link prediction in knowledge graphs and relational learning benchmark datasets.",
"Relational learning is becoming increasingly important in many areas of application. Here, we present a novel approach to relational learning based on the factorization of a three-way tensor. We show that unlike other tensor approaches, our method is able to perform collective learning via the latent components of the model and provide an efficient algorithm to compute the factorization. We substantiate our theoretical considerations regarding the collective learning capabilities of our model by the means of experiments on both a new dataset and a dataset commonly used in entity resolution. Furthermore, we show on common benchmark datasets that our approach achieves better or on-par results, if compared to current state-of-the-art relational learning solutions, while it is significantly faster to compute.",
"We consider learning representations of entities and relations in KBs using the neural-embedding approach. We show that most existing models, including NTN (, 2013) and TransE (, 2013b), can be generalized under a unified learning framework, where entities are low-dimensional vectors learned from a neural network and relations are bilinear and or linear mapping functions. Under this framework, we compare a variety of embedding models on the link prediction task. We show that a simple bilinear formulation achieves new state-of-the-art results for the task (achieving a top-10 accuracy of 73.2 vs. 54.7 by TransE on Freebase). Furthermore, we introduce a novel approach that utilizes the learned relation embeddings to mine logical rules such as \"BornInCity(a,b) and CityInCountry(b,c) => Nationality(a,c)\". We find that embeddings learned from the bilinear objective are particularly good at capturing relational semantics and that the composition of relations is characterized by matrix multiplication. More interestingly, we demonstrate that our embedding-based rule extraction approach successfully outperforms a state-of-the-art confidence-based rule mining approach in mining Horn rules that involve compositional reasoning."
]
}
|
1811.04588
|
2949533007
|
Concepts, which represent a group of different instances sharing common properties, are essential information in knowledge representation. Most conventional knowledge embedding methods encode both entities (concepts and instances) and relations as vectors in a low dimensional semantic space equally, ignoring the difference between concepts and instances. In this paper, we propose a novel knowledge graph embedding model named TransC by differentiating concepts and instances. Specifically, TransC encodes each concept in a knowledge graph as a sphere and each instance as a vector in the same semantic space. We use the relative positions to model the relations between concepts and instances (i.e., instanceOf), and the relations between concepts and sub-concepts (i.e., subClassOf). We evaluate our model on both link prediction and triple classification tasks on the dataset based on YAGO. Experimental results show that TransC outperforms state-of-the-art methods, and captures the semantic transitivity of the instanceOf and subClassOf relations. Our codes and datasets can be obtained from https: this http URL.
|
External information, such as text, is significant for knowledge representation. TEKE @cite_4 uses external context information from a text corpus to represent both entities and words in a joint vector space with alignment models. DKRL @cite_9 directly learns entity representations from entity descriptions. @cite_26 @cite_24 @cite_3 use logical rules to strengthen representations of knowledge graphs. None of the models above differentiates between concepts and instances. To the best of our knowledge, our proposed TransC is the first attempt to represent concepts, instances, and relations differently in the same space.
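The geometric intuition behind TransC's relative-position modeling can be sketched as follows. All centers, radii, and vectors here are made up for illustration; the point is that instanceOf becomes sphere membership and subClassOf becomes sphere containment, which is transitive by construction:

```python
import numpy as np

# instanceOf: an instance vector i lies inside the concept sphere (p, m).
def is_instance_of(i, p, m):
    return np.linalg.norm(i - p) <= m

# subClassOf: sphere (p1, m1) is contained in sphere (p2, m2).
def is_subclass_of(p1, m1, p2, m2):
    return np.linalg.norm(p1 - p2) + m1 <= m2

concept_center = np.array([0.0, 0.0])
concept_radius = 1.0
instance = np.array([0.3, 0.4])  # distance 0.5 from the center

assert is_instance_of(instance, concept_center, concept_radius)

# A sub-concept sphere fully inside the concept sphere: containment is
# transitive, which yields the semantic transitivity noted above.
assert is_subclass_of(np.array([0.1, 0.0]), 0.5, concept_center, 1.0)
```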
|
{
"cite_N": [
"@cite_26",
"@cite_4",
"@cite_9",
"@cite_3",
"@cite_24"
],
"mid": [
"2274308990",
"2571811098",
"2499696929",
"2296268288",
"2563063592"
],
"abstract": [
"Knowledge bases (KBs) are often greatly incomplete, necessitating a demand for KB completion. A promising approach is to embed KBs into latent spaces and make inferences by learning and operating on latent representations. Such embedding models, however, do not make use of any rules during inference and hence have limited accuracy. This paper proposes a novel approach which incorporates rules seamlessly into embedding models for KB completion. It formulates inference as an integer linear programming (ILP) problem, with the objective function generated from embedding models and the constraints translated from rules. Solving the ILP problem results in a number of facts which 1) are the most preferred by the embedding models, and 2) comply with all the rules. By incorporating rules, our approach can greatly reduce the solution space and significantly improve the inference accuracy of embedding models. We further provide a slacking technique to handle noise in KBs, by explicitly modeling the noise with slack variables. Experimental results on two publicly available data sets show that our approach significantly and consistently outperforms state-of-the-art embedding models in KB completion. Moreover, the slacking technique is effective in identifying erroneous facts and ambiguous entities, with a precision higher than 90 .",
"Learning the representations of a knowledge graph has attracted significant research interest in the field of intelligent Web. By regarding each relation as one translation from head entity to tail entity, translation-based methods including TransE, TransH and TransR are simple, effective and achieving the state-of-the-art performance. However, they still suffer the following issues: (i) low performance when modeling 1-to-N, N-to-1 and N-to-N relations. (ii) limited performance due to the structure sparseness of the knowledge graph. In this paper, we propose a novel knowledge graph representation learning method by taking advantage of the rich context information in a text corpus. The rich textual context information is incorporated to expand the semantic structure of the knowledge graph and each relation is enabled to own different representations for different head and tail entities to better handle 1-to-N, N-to-1 and N-to-N relations. Experiments on multiple benchmark datasets show that our proposed method successfully addresses the above issues and significantly outperforms the state-of-the-art methods.",
"Representation learning (RL) of knowledge graphs aims to project both entities and relations into a continuous low-dimensional space. Most methods concentrate on learning representations with knowledge triples indicating relations between entities. In fact, in most knowledge graphs there are usually concise descriptions for entities, which cannot be well utilized by existing methods. In this paper, we propose a novel RL method for knowledge graphs taking advantages of entity descriptions. More specifically, we explore two encoders, including continuous bag-of-words and deep convolutional neural models to encode semantics of entity descriptions. We further learn knowledge representations with both triples and descriptions. We evaluate our method on two tasks, including knowledge graph completion and entity classification. Experimental results on real-world datasets show that, our method outperforms other baselines on the two tasks, especially under the zero-shot setting, which indicates that our method is capable of building representations for novel entities according to their descriptions. The source code of this paper can be obtained from https: github.com xrb92 DKRL.",
"Matrix factorization approaches to relation extraction provide several attractive features: they support distant supervision, handle open schemas, and leverage unlabeled data. Unfortunately, these methods share a shortcoming with all other distantly supervised approaches: they cannot learn to extract target relations without existing data in the knowledge base, and likewise, these models are inaccurate for relations with sparse data. Rule-based extractors, on the other hand, can be easily extended to novel relations and improved for existing but inaccurate relations, through first-order formulae that capture auxiliary domain knowledge. However, usually a large set of such formulae is necessary to achieve generalization. In this paper, we introduce a paradigm for learning low-dimensional embeddings of entity-pairs and relations that combine the advantages of matrix factorization with first-order logic domain knowledge. We introduce simple approaches for estimating such embeddings, as well as a novel training algorithm to jointly optimize over factual and first-order logic information. Our results show that this method is able to learn accurate extractors with little or no distant supervision alignments, while at the same time generalizing to textual patterns that do not appear in the formulae.",
""
]
}
|
1906.10557
|
2955378840
|
This position paper describes an experiment conducted to understand the relationships between different physiological measures including pupil Diameter, Blinking Rate, Heart Rate, and Heart Rate Variability in order to develop an estimation of users' mental load in real-time (see Sidebar 1). Our experiment involved performing a task to spot a correct or an incorrect word or sentence with different difficulties in order to induce mental load. We briefly present the analysis of task performance and response time for the items of the experiment task.
|
In relation to our long-term goal, we currently find empirical evidence that only a few of these physiological behaviours are correlated with each other. For instance, pupil dilation and eye-blink rate were found to correlate with each other in a digit-sorting task @cite_5 . However, to the best of our knowledge, the relationships among all of these behaviours have not been explored, and exploring them is necessary to efficiently estimate cognitive or mental load. As highlighted above, we need to understand the relationships between different behaviours because this can help in creating a linear mixed-effects regression model that may estimate mental load in real-time during HCI or HRI. Consequently, in this paper we present the initial findings of a study conducted to collect data on these behaviours synchronously, which will help us analyze the relationships between different physiological behaviours and later create a dataset that can be used to estimate cognitive load in real-time.
|
{
"cite_N": [
"@cite_5"
],
"mid": [
"2154200694"
],
"abstract": [
"Pupil dilation and blinks provide complementary, mutually exclusive indices of information processing. Though each index is associated with cognitive load, the occurrence of a blink precludes the measurement of pupil diameter. These indices have generally been assessed in independent literatures. We examine the extent to which these measures are related on two cognitive tasks using a novel method that quantifies the proportion of trials on which blinks occur at each sample acquired during the trial. This measure allows cross-correlation of continuous pupil-dilation and blink waveforms. Results indicate that blinks occur during early sensory processing and following sustained information processing. Pupil dilation better reflects sustained information processing. Together these indices provide a rich picture of the time course of information processing, from early reactivity through sustained cognition, and after stimulusrelated cognition ends. Descriptors: Blinks, Pupil dilation, Cognitive load, Attention, Stroop"
]
}
|
1906.10491
|
2955283348
|
Dense semantic 3D reconstruction is typically formulated as a discrete or continuous problem over label assignments in a voxel grid, combining semantic and depth likelihoods in a Markov Random Field framework. The depth and semantic information is incorporated as a unary potential, smoothed by a pairwise regularizer. However, modelling likelihoods as a unary potential does not model the problem correctly leading to various undesirable visibility artifacts. We propose to formulate an optimization problem that directly optimizes the reprojection error of the 3D model with respect to the image estimates, which corresponds to the optimization over rays, where the cost function depends on the semantic class and depth of the first occupied voxel along the ray. The 2-label formulation is made feasible by transforming it into a graph-representable form under QPBO relaxation, solvable using graph cut. The multi-label problem is solved by applying alpha-expansion using the same relaxation in each expansion move. Our method was indeed shown to be feasible in practice, running comparably fast to the competing methods, while not suffering from ray potential approximation artifacts.
|
The silhouettes of objects in the input image contain important information about the geometry. They constrain the solution such that every ray passing through the silhouette must contain at least one occupied voxel, and every ray outside of the silhouette consists of free-space voxels only. This constraint has been used in the form of a convex relaxation in @cite_5 . @cite_19 proposes an intelligent unary ballooning visibility term based on the consensus from different views. In @cite_4 , the silhouettes are handled in a two-stage process, where the initial surface is reprojected into each image and the interior is heuristically corrected using the sets of erroneous pixels, by finding the most photo-consistent voxels along the ray. Recently, approaches that jointly reason about geometry and semantic labels have also been proposed @cite_0 . For volumetric 3D reconstruction in indoor environments, a Conditional Random Field (CRF) model was proposed in @cite_9 . It includes higher-order potentials over groups of voxels to incorporate priors from 2D object detections and 3D surface detections in the raw input depth data. Furthermore, potentials over rays are used to enforce visibility of only one voxel along a ray.
|
{
"cite_N": [
"@cite_4",
"@cite_9",
"@cite_0",
"@cite_19",
"@cite_5"
],
"mid": [
"2163309730",
"2171481071",
"2150134683",
"2150375237",
"2141690762"
],
"abstract": [
"We formulate multi-view 3D shape reconstruction as the computation of a minimum cut on the dual graph of a semi- regular, multi-resolution, tetrahedral mesh. Our method does not assume that the surface lies within a finite band around the visual hull or any other base surface. Instead, it uses photo-consistency to guide the adaptive subdivision of a coarse mesh of the bounding volume. This generates a multi-resolution volumetric mesh that is densely tesselated in the parts likely to contain the unknown surface. The graph-cut on the dual graph of this tetrahedral mesh produces a minimum cut corresponding to a triangulated surface that minimizes a global surface cost functional. Our method makes no assumptions about topology and can recover deep concavities when enough cameras observe them. Our formulation also allows silhouette constraints to be enforced during the graph-cut step to counter its inherent bias for producing minimal surfaces. Local shape refinement via surface deformation is used to recover details in the reconstructed surface. Reconstructions of the Multi- View Stereo Evaluation benchmark datasets and other real datasets show the effectiveness of our method.",
"Scene understanding is an important yet very challenging problem in computer vision. In the past few years, researchers have taken advantage of the recent diffusion of depth-RGB (RGB-D) cameras to help simplify the problem of inferring scene semantics. However, while the added 3D geometry is certainly useful to segment out objects with different depth values, it also adds complications in that the 3D geometry is often incorrect because of noisy depth measurements and the actual 3D extent of the objects is usually unknown because of occlusions. In this paper we propose a new method that allows us to jointly refine the 3D reconstruction of the scene (raw depth values) while accurately segmenting out the objects or scene elements from the 3D reconstruction. This is achieved by introducing a new model which we called Voxel-CRF. The Voxel-CRF model is based on the idea of constructing a conditional random field over a 3D volume of interest which captures the semantic and 3D geometric relationships among different elements (voxels) of the scene. Such model allows to jointly estimate (1) a dense voxel-based 3D reconstruction and (2) the semantic labels associated with each voxel even in presence of partial occlusions using an approximate yet efficient inference strategy. We evaluated our method on the challenging NYU Depth dataset (Version 1 and 2). Experimental results show that our method achieves competitive accuracy in inferring scene semantics and visually appealing results in improving the quality of the 3D reconstruction. We also demonstrate an interesting application of object removal and scene completion from RGB-D images.",
"Both image segmentation and dense 3D modeling from images represent an intrinsically ill-posed problem. Strong regularizers are therefore required to constrain the solutions from being 'too noisy'. Unfortunately, these priors generally yield overly smooth reconstructions and or segmentations in certain regions whereas they fail in other areas to constrain the solution sufficiently. In this paper we argue that image segmentation and dense 3D reconstruction contribute valuable information to each other's task. As a consequence, we propose a rigorous mathematical framework to formulate and solve a joint segmentation and dense reconstruction problem. Image segmentations provide geometric cues about which surface orientations are more likely to appear at a certain location in space whereas a dense 3D reconstruction yields a suitable regularization for the segmentation problem by lifting the labeling from 2D images to 3D space. We show how appearance-based cues and 3D surface orientation priors can be learned from training data and subsequently used for class-specific regularization. Experimental results on several real data sets highlight the advantages of our joint formulation.",
"We present a new formulation to multi-view stereo that treats the problem as probabilistic 3D segmentation. Previous work has used the stereo photo-consistency criterion as a detector of the boundary between the 3D scene and the surrounding empty space. Here we show how the same criterion can also provide a foreground background model that can predict if a 3D location is inside or outside the scene. This model replaces the commonly used naive foreground model based on ballooning which is known to perform poorly in concavities. We demonstrate how the probabilistic visibility is linked to previous work on depth-map fusion and we present a multi-resolution graph-cut implementation using the new ballooning term that is very efficient both in terms of computation time and memory requirements.",
"We propose a convex formulation for silhouette and stereo fusion in 3D reconstruction from multiple images. The key idea is to show that the reconstruction problem can be cast as one of minimizing a convex functional, where the exact silhouette consistency is imposed as convex constraints that restrict the domain of feasible functions. As a consequence, we can retain the original stereo-weighted surface area as a cost functional without heuristic modifications of this energy by balloon terms or other strategies, yet still obtain meaningful (nonempty) reconstructions which are guaranteed to be silhouette-consistent. We prove that the proposed convex relaxation approach provides solutions that lie within a bound of the optimal solution. Compared to existing alternatives, the proposed method does not depend on initialization and leads to a simpler and more robust numerical scheme for imposing silhouette consistency obtained by projection onto convex sets. We show that this projection can be solved exactly using an efficient algorithm. We propose a parallel implementation of the resulting convex optimization problem on a graphics card. Given a photoconsistency map and a set of image silhouettes, we are able to compute highly accurate and silhouette-consistent reconstructions for challenging real-world data sets. In particular, experimental results demonstrate that the proposed silhouette constraints help to preserve fine-scale details of the reconstructed shape. Computation times depend on the resolution of the input imagery and vary between a few seconds and a couple of minutes for all experiments in this paper."
]
}
|
1906.10244
|
2953793649
|
Financial decisions impact our lives, and thus everyone from the regulator to the consumer is interested in fair, sound, and explainable decisions. There is increasing competitive desire and regulatory incentive to deploy AI mindfully within financial services. An important mechanism towards that end is to explain AI decisions to various stakeholders. State-of-the-art explainable AI systems mostly serve AI engineers and offer little to no value to business decision makers, customers, and other stakeholders. Towards addressing this gap, in this work we consider the scenario of explaining loan denials. We build the first-of-its-kind dataset that is representative of loan-applicant friendly explanations. We design a novel Generative Adversarial Network (GAN) that can accommodate smaller datasets, to generate user-friendly textual explanations. We demonstrate how our system can also generate explanations serving different purposes: those that help educate the loan applicants, or help them take appropriate action towards a future approval.
|
A nice summary concerning explainability from an AI engineer's perspective is provided in @cite_13 and @cite_18 . In @cite_16 , the authors highlight the regions in an image that were most important to the model in classifying the image. However, such explanations are not useful to an end-user in either understanding the AI's decision or in debugging the model @cite_1 . In @cite_6 , the authors discuss the main factors used by the AI system in arriving at a certain decision and also discuss how changing a factor changes the decision. This kind of explanation helps AI engineers with debugging. While impressive in helping an AI engineer, these works are not accessible to a wider set of users.
|
{
"cite_N": [
"@cite_18",
"@cite_1",
"@cite_6",
"@cite_16",
"@cite_13"
],
"mid": [
"2762968537",
"2949690500",
"2765811634",
"2962858109",
"2439568532"
],
"abstract": [
"We characterize three notions of explainable AI that cut across research fields: opaque systems that offer no insight into its algorithmic mechanisms; interpretable systems where users can mathematically analyze its algorithmic mechanisms; and comprehensible systems that emit symbols enabling user-driven explanations of how a conclusion is reached. The paper is motivated by a corpus analysis of NIPS, ACL, COGSCI, and ICCV/ECCV paper titles showing differences in how work on explainable AI is positioned in various fields. We close by introducing a fourth notion: truly explainable systems, where automated reasoning is central to output crafted explanations without requiring human post processing as final step of the generative process.",
"A rich line of research attempts to make deep neural networks more transparent by generating human-interpretable 'explanations' of their decision process, especially for interactive tasks like Visual Question Answering (VQA). In this work, we analyze if existing explanations indeed make a VQA model -- its responses as well as failures -- more predictable to a human. Surprisingly, we find that they do not. On the other hand, we find that human-in-the-loop approaches that treat the model as a black-box do.",
"The ubiquity of systems using artificial intelligence or \"AI\" has brought increasing attention to how those systems should be regulated. The choice of how to regulate AI systems will require care. AI systems have the potential to synthesize large amounts of data, allowing for greater levels of personalization and precision than ever before---applications range from clinical decision support to autonomous driving and predictive policing. That said, there exist legitimate concerns about the intentional and unintentional negative consequences of AI systems. There are many ways to hold AI systems accountable. In this work, we focus on one: explanation. Questions about a legal right to explanation from AI systems was recently debated in the EU General Data Protection Regulation, and thus thinking carefully about when and how explanation from AI systems might improve accountability is timely. In this work, we review contexts in which explanation is currently required under the law, and then list the technical considerations that must be considered if we desired AI systems that could provide kinds of explanations that are currently required of humans.",
"We propose a technique for producing ‘visual explanations’ for decisions from a large class of Convolutional Neural Network (CNN)-based models, making them more transparent. Our approach – Gradient-weighted Class Activation Mapping (Grad-CAM), uses the gradients of any target concept (say logits for ‘dog’ or even a caption), flowing into the final convolutional layer to produce a coarse localization map highlighting the important regions in the image for predicting the concept. Unlike previous approaches, Grad-CAM is applicable to a wide variety of CNN model-families: (1) CNNs with fully-connected layers (e.g. VGG), (2) CNNs used for structured outputs (e.g. captioning), (3) CNNs used in tasks with multi-modal inputs (e.g. visual question answering) or reinforcement learning, without architectural changes or re-training. We combine Grad-CAM with existing fine-grained visualizations to create a high-resolution class-discriminative visualization, Guided Grad-CAM, and apply it to image classification, image captioning, and visual question answering (VQA) models, including ResNet-based architectures. In the context of image classification models, our visualizations (a) lend insights into failure modes of these models (showing that seemingly unreasonable predictions have reasonable explanations), (b) outperform previous methods on the ILSVRC-15 weakly-supervised localization task, (c) are more faithful to the underlying model, and (d) help achieve model generalization by identifying dataset bias. For image captioning and VQA, our visualizations show even non-attention based models can localize inputs. Finally, we design and conduct human studies to measure if Grad-CAM explanations help users establish appropriate trust in predictions from deep networks and show that Grad-CAM helps untrained users successfully discern a ‘stronger’ deep network from a ‘weaker’ one even when both make identical predictions.
Our code is available at https://github.com/ramprs/grad-cam along with a demo on CloudCV [2] and video at youtu.be/COjUB9Izk6E.",
"Supervised machine learning models boast remarkable predictive capabilities. But can you trust your model? Will it work in deployment? What else can it tell you about the world? We want models to be not only good, but interpretable. And yet the task of interpretation appears underspecified. Papers provide diverse and sometimes non-overlapping motivations for interpretability, and offer myriad notions of what attributes render models interpretable. Despite this ambiguity, many papers proclaim interpretability axiomatically, absent further explanation. In this paper, we seek to refine the discourse on interpretability. First, we examine the motivations underlying interest in interpretability, finding them to be diverse and occasionally discordant. Then, we address model properties and techniques thought to confer interpretability, identifying transparency to humans and post-hoc explanations as competing notions. Throughout, we discuss the feasibility and desirability of different notions, and question the oft-made assertions that linear models are interpretable and that deep neural networks are not."
]
}
|
1906.10244
|
2953793649
|
Financial decisions impact our lives, and thus everyone from the regulator to the consumer is interested in fair, sound, and explainable decisions. There is increasing competitive desire and regulatory incentive to deploy AI mindfully within financial services. An important mechanism towards that end is to explain AI decisions to various stakeholders. State-of-the-art explainable AI systems mostly serve AI engineers and offer little to no value to business decision makers, customers, and other stakeholders. Towards addressing this gap, in this work we consider the scenario of explaining loan denials. We build the first-of-its-kind dataset that is representative of loan-applicant friendly explanations. We design a novel Generative Adversarial Network (GAN) that can accommodate smaller datasets, to generate user-friendly textual explanations. We demonstrate how our system can also generate explanations serving different purposes: those that help educate the loan applicants, or help them take appropriate action towards a future approval.
|
More recently, there have been efforts in understanding the human interpretability of AI systems. The authors in @cite_20 provide a taxonomy for human interpretability of AI systems. A non-AI engineer perspective regarding explanations of AI systems is provided in miller . A nice perspective on user-centered explanations is provided in @cite_12 , wherein the author emphasizes the need for persuasive explanations. The authors in @cite_14 explore the notion of interactivity from the lens of the user. In @cite_0 , the authors discuss how humans understand explanations from machine learning systems through a user study. The metrics used to measure human interpretability are those concerning explanation length, number of concepts used in the explanation, and the number of repeated terms. Interpretability is measured in terms of the time to response and the accuracy of the response. While these efforts are significant in quantifying human interpretability, they fall short of generating user-friendly explanations, which is the focus of this work.
|
{
"cite_N": [
"@cite_0",
"@cite_14",
"@cite_12",
"@cite_20"
],
"mid": [
"2785327160",
"1501005121",
"2769542353",
"2594475271"
],
"abstract": [
"Recent years have seen a boom in interest in machine learning systems that can provide a human-understandable rationale for their predictions or decisions. However, exactly what kinds of explanation are truly human-interpretable remains poorly understood. This work advances our understanding of what makes explanations interpretable in the specific context of verification. Suppose we have a machine learning system that predicts X, and we provide rationale for this prediction X. Given an input, an explanation, and an output, is the output consistent with the input and the supposed rationale? Via a series of user-studies, we identify what kinds of increases in complexity have the greatest effect on the time it takes for humans to verify the rationale, and which seem relatively insensitive.",
"Intelligent systems that learn interactively from their end-users are quickly becoming widespread. Until recently, this progress has been fueled mostly by advances in machine learning; however, more and more researchers are realizing the importance of studying users of these systems. In this article we promote this approach and demonstrate how it can result in better user experiences and more effective learning systems. We present a number of case studies that characterize the impact of interactivity, demonstrate ways in which some existing systems fail to account for the user, and explore new ways for learning systems to interact with their users. We argue that the design process for interactive machine learning systems should involve users at all stages: explorations that reveal human interaction patterns and inspire novel interaction methods, as well as refinement stages to tune details of the interface and choose among alternatives. After giving a glimpse of the progress that has been made so far, we discuss the challenges that we face in moving the field forward.",
"Transparency, user trust, and human comprehension are popular ethical motivations for interpretable machine learning. In support of these goals, researchers evaluate model explanation performance using humans and real world applications. This alone presents a challenge in many areas of artificial intelligence. In this position paper, we propose a distinction between descriptive and persuasive explanations. We discuss reasoning suggesting that functional interpretability may be correlated with cognitive function and user preferences. If this is indeed the case, evaluation and optimization using functional metrics could perpetuate implicit cognitive bias in explanations that threaten transparency. Finally, we propose two potential research directions to disambiguate cognitive function and explanation models, retaining control over the tradeoff between accuracy and interpretability.",
"As machine learning systems become ubiquitous, there has been a surge of interest in interpretable machine learning: systems that provide explanation for their outputs. These explanations are often used to qualitatively assess other criteria such as safety or non-discrimination. However, despite the interest in interpretability, there is very little consensus on what interpretable machine learning is and how it should be measured. In this position paper, we first define interpretability and describe when interpretability is needed (and when it is not). Next, we suggest a taxonomy for rigorous evaluation and expose open questions towards a more rigorous science of interpretable machine learning."
]
}
|
1906.10267
|
2953516263
|
The problem of multi-domain learning of deep networks is considered. An adaptive layer is induced per target domain and a novel procedure, denoted covariance normalization (CovNorm), proposed to reduce its parameters. CovNorm is a data driven method of fairly simple implementation, requiring two principal component analyses (PCA) and fine-tuning of a mini-adaptation layer. Nevertheless, it is shown, both theoretically and experimentally, to have several advantages over previous approaches, such as batch normalization or geometric matrix approximations. Furthermore, CovNorm can be deployed both when target datasets are available sequentially or simultaneously. Experiments show that, in both cases, it has performance comparable to a fully fine-tuned network, using as few as 0.13% of the corresponding parameters per target domain.
|
Multitask learning: Multi-task learning @cite_39 @cite_34 addresses the solution of multiple tasks by the same model. It assumes that all tasks have the same visual domain. Popular examples include classification and bounding box regression in object detection @cite_37 @cite_33 , joint estimation of surface normals and depth @cite_50 or segmentation @cite_40 , joint representation in terms of attributes and facial landmarks @cite_35 @cite_49 , among others. Multitask learning is sometimes also used to solve auxiliary tasks that strengthen performance of a task of interest, e.g. by accounting for context @cite_36 , or representing objects in terms of classes and attributes @cite_4 @cite_40 @cite_24 @cite_48 . Recently, there have been attempts to learn models that solve many problems jointly @cite_43 @cite_7 @cite_17 .
|
{
"cite_N": [
"@cite_35",
"@cite_37",
"@cite_4",
"@cite_33",
"@cite_7",
"@cite_36",
"@cite_48",
"@cite_39",
"@cite_24",
"@cite_43",
"@cite_40",
"@cite_50",
"@cite_49",
"@cite_34",
"@cite_17"
],
"mid": [
"1896424170",
"",
"2170881581",
"639708223",
"",
"874179280",
"",
"",
"2605805765",
"2963677766",
"",
"",
"",
"2742079690",
"2964185501"
],
"abstract": [
"Facial landmark detection has long been impeded by the problems of occlusion and pose variation. Instead of treating the detection task as a single and independent problem, we investigate the possibility of improving detection robustness through multi-task learning. Specifically, we wish to optimize facial landmark detection together with heterogeneous but subtly correlated tasks, e.g. head pose estimation and facial attribute inference. This is non-trivial since different tasks have different learning difficulties and convergence rates. To address this problem, we formulate a novel tasks-constrained deep model, with task-wise early stopping to facilitate learning convergence. Extensive evaluations show that the proposed task-constrained learning (i) outperforms existing methods, especially in dealing with faces with severe occlusion and pose variation, and (ii) reduces model complexity drastically compared to the state-of-the-art method based on cascaded deep model [21].",
"",
"We address the problem of cross-domain image retrieval, considering the following practical application: given a user photo depicting a clothing image, our goal is to retrieve the same or attribute-similar clothing items from online shopping stores. This is a challenging problem due to the large discrepancy between online shopping images, usually taken in ideal lighting/pose/background conditions, and user photos captured in uncontrolled conditions. To address this problem, we propose a Dual Attribute-aware Ranking Network (DARN) for retrieval feature learning. More specifically, DARN consists of two sub-networks, one for each domain, whose retrieval feature representations are driven by semantic attribute learning. We show that this attribute-guided learning is a key factor for retrieval accuracy improvement. In addition, to further align with the nature of the retrieval problem, we impose a triplet visual similarity constraint for learning to rank across the two subnetworks. Another contribution of our work is a large-scale dataset which makes the network learning feasible. We exploit customer review websites to crawl a large set of online shopping images and corresponding offline user photos with fine-grained clothing attributes, i.e., around 450,000 online shopping images and about 90,000 exact offline counterpart images of those online ones. All these images are collected from real-world consumer websites reflecting the diversity of the data modality, which makes this dataset unique and rare in the academic community. We extensively evaluate the retrieval performance of networks in different configurations. The top-20 retrieval accuracy is doubled when using the proposed DARN rather than the current popular solution using pre-trained CNN features only (0.570 vs. 0.268).",
"State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [1] and Fast R-CNN [2] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features—using the recently popular terminology of neural networks with ’attention’ mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model [3] , our detection system has a frame rate of 5 fps ( including all steps ) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available.",
"",
"There are multiple cues in an image which reveal what action a person is performing. For example, a jogger has a pose that is characteristic for jogging, but the scene (e.g. road, trail) and the presence of other joggers can be an additional source of information. In this work, we exploit the simple observation that actions are accompanied by contextual cues to build a strong action recognition system. We adapt RCNN to use more than one region for classification while still maintaining the ability to localize the action. We call our system R*CNN. The action-specific models and the feature maps are trained jointly, allowing for action specific representations to emerge. R*CNN achieves 90.2% mean AP on the PASCAL VOC Action dataset, outperforming all other approaches in the field by a significant margin. Last, we show that R*CNN is not limited to action recognition. In particular, R*CNN can also be used to tackle fine-grained tasks such as attribute classification. We validate this claim by reporting state-of-the-art performance on the Berkeley Attributes of People dataset.",
"",
"",
"The role of semantics in zero-shot learning is considered. The effectiveness of previous approaches is analyzed according to the form of supervision provided. While some learn semantics independently, others only supervise the semantic subspace explained by training classes. Thus, the former is able to constrain the whole space but lacks the ability to model semantic correlations. The latter addresses this issue but leaves part of the semantic space unsupervised. This complementarity is exploited in a new convolutional neural network (CNN) framework, which proposes the use of semantics as constraints for recognition. Although a CNN trained for classification has no transfer ability, this can be encouraged by learning a hidden semantic layer together with a semantic code for classification. Two forms of semantic constraints are then introduced. The first is a loss-based regularizer that introduces a generalization constraint on each semantic predictor. The second is a codeword regularizer that favors semantic-to-class mappings consistent with prior semantic knowledge while allowing these to be learned from data. Significant improvements over the state-of-the-art are achieved on several datasets.",
"Numerous deep learning applications benefit from multitask learning with multiple regression and classification objectives. In this paper we make the observation that the performance of such systems is strongly dependent on the relative weighting between each task's loss. Tuning these weights by hand is a difficult and expensive process, making multi-task learning prohibitive in practice. We propose a principled approach to multi-task deep learning which weighs multiple loss functions by considering the homoscedastic uncertainty of each task. This allows us to simultaneously learn various quantities with different units or scales in both classification and regression settings. We demonstrate our model learning per-pixel depth regression, semantic and instance segmentation from a monocular input image. Perhaps surprisingly, we show our model can learn multi-task weightings and outperform separate models trained individually on each task.",
"",
"",
"",
"Multi-Task Learning (MTL) is a learning paradigm in machine learning and its aim is to leverage useful information contained in multiple related tasks to help improve the generalization performance of all the tasks. In this paper, we give a survey for MTL. First, we classify different MTL algorithms into several categories, including feature learning approach, low-rank approach, task clustering approach, task relation learning approach, and decomposition approach, and then discuss the characteristics of each approach. In order to improve the performance of learning tasks further, MTL can be combined with other learning paradigms including semi-supervised learning, active learning, unsupervised learning, reinforcement learning, multi-view learning and graphical models. When the number of tasks is large or the data dimensionality is high, it is difficult for batch MTL models to handle this situation, and online, parallel and distributed MTL models as well as dimensionality reduction and feature hashing are reviewed to reveal their computational and storage advantages. Many real-world applications use MTL to boost their performance and we review representative works. Finally, we present theoretical analyses and discuss several future directions for MTL.",
"Do visual tasks have a relationship, or are they unrelated? For instance, could having surface normals simplify estimating the depth of an image? Intuition answers these questions positively, implying existence of a structure among visual tasks. Knowing this structure has notable values; it is the concept underlying transfer learning and provides a principled way for identifying redundancies across tasks, e.g., to seamlessly reuse supervision among related tasks or solve many tasks in one system without piling up the complexity. We propose a fully computational approach for modeling the structure of space of visual tasks. This is done via finding (first and higher-order) transfer learning dependencies across a dictionary of twenty six 2D, 2.5D, 3D, and semantic tasks in a latent space. The product is a computational taxonomic map for task transfer learning. We study the consequences of this structure, e.g. nontrivial emerged relationships, and exploit them to reduce the demand for labeled data. We provide a set of tools for computing and probing this taxonomical structure including a solver users can employ to find supervision policies for their use cases."
]
}
|
1906.10267
|
2953516263
|
The problem of multi-domain learning of deep networks is considered. An adaptive layer is induced per target domain and a novel procedure, denoted covariance normalization (CovNorm), proposed to reduce its parameters. CovNorm is a data driven method of fairly simple implementation, requiring two principal component analyses (PCA) and fine-tuning of a mini-adaptation layer. Nevertheless, it is shown, both theoretically and experimentally, to have several advantages over previous approaches, such as batch normalization or geometric matrix approximations. Furthermore, CovNorm can be deployed both when target datasets are available sequentially or simultaneously. Experiments show that, in both cases, it has performance comparable to a fully fine-tuned network, using as few as 0.13% of the corresponding parameters per target domain.
|
Most multitask learning approaches emphasize the learning of the interrelationships between tasks. This is frequently accomplished by using a single network, combining domain agnostic lower-level network layers with task specific network heads and loss functions @cite_35 @cite_50 @cite_36 @cite_4 @cite_33 @cite_7 , or some more sophisticated forms of network branching @cite_15 . The branching architecture is incompatible with MDL, where each task has its own input, different from those of all other tasks. Even when multi-task learning is addressed with multiple tower networks, the emphasis tends to be on inter-tower connections, e.g. through cross-stitching @cite_40 @cite_38 . In MDL, such connections are not feasible, because different networks can join the ecology of Figure asynchronously, as devices are turned on and off.
|
{
"cite_N": [
"@cite_35",
"@cite_38",
"@cite_4",
"@cite_33",
"@cite_7",
"@cite_36",
"@cite_40",
"@cite_50",
"@cite_15"
],
"mid": [
"1896424170",
"2963268748",
"2170881581",
"639708223",
"",
"874179280",
"",
"",
"2549401308"
],
"abstract": [
"Facial landmark detection has long been impeded by the problems of occlusion and pose variation. Instead of treating the detection task as a single and independent problem, we investigate the possibility of improving detection robustness through multi-task learning. Specifically, we wish to optimize facial landmark detection together with heterogeneous but subtly correlated tasks, e.g. head pose estimation and facial attribute inference. This is non-trivial since different tasks have different learning difficulties and convergence rates. To address this problem, we formulate a novel tasks-constrained deep model, with task-wise early stopping to facilitate learning convergence. Extensive evaluations show that the proposed task-constrained learning (i) outperforms existing methods, especially in dealing with faces with severe occlusion and pose variation, and (ii) reduces model complexity drastically compared to the state-of-the-art method based on cascaded deep model [21].",
"Residual learning has recently surfaced as an effective means of constructing very deep neural networks for object recognition. However, current incarnations of residual networks do not allow for the modeling and integration of complex relations between closely coupled recognition tasks or across domains. Such problems are often encountered in multimedia applications involving large-scale content recognition. We propose a novel extension of residual learning for deep networks that enables intuitive learning across multiple related tasks using cross-connections called cross-residuals. These cross-residual connections can be viewed as a form of in-network regularization and enable greater network generalization. We show how cross-residual learning (CRL) can be integrated in multitask networks to jointly train and detect visual concepts across several tasks. We present a single multitask cross-residual network with >40% fewer parameters that is able to achieve competitive, or even better, detection performance on a visual sentiment concept detection problem normally requiring multiple specialized single-task networks. The resulting multitask cross-residual network also achieves better detection performance by about 10.4% over a standard multitask residual network without cross-residuals with even a small amount of cross-task weighting.",
"We address the problem of cross-domain image retrieval, considering the following practical application: given a user photo depicting a clothing image, our goal is to retrieve the same or attribute-similar clothing items from online shopping stores. This is a challenging problem due to the large discrepancy between online shopping images, usually taken in ideal lighting/pose/background conditions, and user photos captured in uncontrolled conditions. To address this problem, we propose a Dual Attribute-aware Ranking Network (DARN) for retrieval feature learning. More specifically, DARN consists of two sub-networks, one for each domain, whose retrieval feature representations are driven by semantic attribute learning. We show that this attribute-guided learning is a key factor for retrieval accuracy improvement. In addition, to further align with the nature of the retrieval problem, we impose a triplet visual similarity constraint for learning to rank across the two subnetworks. Another contribution of our work is a large-scale dataset which makes the network learning feasible. We exploit customer review websites to crawl a large set of online shopping images and corresponding offline user photos with fine-grained clothing attributes, i.e., around 450,000 online shopping images and about 90,000 exact offline counterpart images of those online ones. All these images are collected from real-world consumer websites reflecting the diversity of the data modality, which makes this dataset unique and rare in the academic community. We extensively evaluate the retrieval performance of networks in different configurations. The top-20 retrieval accuracy is doubled when using the proposed DARN rather than the current popular solution using pre-trained CNN features only (0.570 vs. 0.268).",
"State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [1] and Fast R-CNN [2] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features—using the recently popular terminology of neural networks with ’attention’ mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model [3] , our detection system has a frame rate of 5 fps ( including all steps ) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available.",
"",
"There are multiple cues in an image which reveal what action a person is performing. For example, a jogger has a pose that is characteristic for jogging, but the scene (e.g. road, trail) and the presence of other joggers can be an additional source of information. In this work, we exploit the simple observation that actions are accompanied by contextual cues to build a strong action recognition system. We adapt RCNN to use more than one region for classification while still maintaining the ability to localize the action. We call our system R*CNN. The action-specific models and the feature maps are trained jointly, allowing for action specific representations to emerge. R*CNN achieves 90.2% mean AP on the PASCAL VOC Action dataset, outperforming all other approaches in the field by a significant margin. Last, we show that R*CNN is not limited to action recognition. In particular, R*CNN can also be used to tackle fine-grained tasks such as attribute classification. We validate this claim by reporting state-of-the-art performance on the Berkeley Attributes of People dataset.",
"",
"",
"Multi-task learning aims to improve generalization performance of multiple prediction tasks by appropriately sharing relevant information across them. In the context of deep neural networks, this idea is often realized by hand-designed network architectures with layers that are shared across tasks and branches that encode task-specific features. However, the space of possible multi-task deep architectures is combinatorially large and often the final architecture is arrived at by manual exploration of this space, which can be both error-prone and tedious. We propose an automatic approach for designing compact multi-task deep learning architectures. Our approach starts with a thin multi-layer network and dynamically widens it in a greedy manner during training. By doing so iteratively, it creates a tree-like deep architecture, on which similar tasks reside in the same branch until at the top layers. Evaluation on person attributes classification tasks involving facial and clothing attributes suggests that the models produced by the proposed method are fast, compact and can closely match or exceed the state-of-the-art accuracy from strong baselines by much more expensive models."
]
}
|
1906.10267
|
2953516263
|
The problem of multi-domain learning of deep networks is considered. An adaptive layer is induced per target domain and a novel procedure, denoted covariance normalization (CovNorm), proposed to reduce its parameters. CovNorm is a data-driven method of fairly simple implementation, requiring two principal component analyses (PCA) and fine-tuning of a mini-adaptation layer. Nevertheless, it is shown, both theoretically and experimentally, to have several advantages over previous approaches, such as batch normalization or geometric matrix approximations. Furthermore, CovNorm can be deployed both when target datasets are available sequentially or simultaneously. Experiments show that, in both cases, it has performance comparable to a fully fine-tuned network, using as few as 0.13% of the corresponding parameters per target domain.
|
Lifelong learning: Lifelong learning aims to learn multiple tasks sequentially with a shared model. This can be done by adapting the parameters of a network or adapting the network architecture. Since training data is discarded upon its use, constraints are needed to force the model to remember what was previously learned. Methods that only change parameters either use the model output on previous tasks @cite_12 , previous parameter values @cite_46 , or previous network activations @cite_3 to regularize the learning of the target task. They are very effective at parameter sharing, since a single model solves all tasks. However, this model is not optimal for any specific task, and can perform poorly on all tasks, depending on the mismatch between source and target domains @cite_16 . We show that they can significantly underperform MDL with CovNorm. Methods that adapt the network architecture usually add a tower per new task @cite_27 @cite_9 . These methods have much larger complexity than MDL, since several towers can be needed to solve a single task @cite_27 , and there is no sharing of fixed layers across tasks.
|
{
"cite_N": [
"@cite_9",
"@cite_3",
"@cite_27",
"@cite_46",
"@cite_16",
"@cite_12"
],
"mid": [
"2554616628",
"2605911906",
"2426267443",
"2605043629",
"2964189064",
"2473930607"
],
"abstract": [
"In this paper we introduce a model of lifelong learning, based on a Network of Experts. New tasks/experts are learned and added to the model sequentially, building on what was learned before. To ensure scalability of this process, data from previous tasks cannot be stored and hence is not available when learning a new task. A critical issue in such context, not addressed in the literature so far, relates to the decision which expert to deploy at test time. We introduce a set of gating autoencoders that learn a representation for the task at hand, and, at test time, automatically forward the test sample to the relevant expert. This also brings memory efficiency as only one expert network has to be loaded into memory at any given time. Further, the autoencoders inherently capture the relatedness of one task to another, based on which the most relevant prior model to be used for training a new expert, with fine-tuning or learning-without-forgetting, can be selected. We evaluate our method on image classification and video prediction problems.",
"This paper introduces a new lifelong learning solution where a single model is trained for a sequence of tasks. The main challenge that vision systems face in this context is catastrophic forgetting: as they tend to adapt to the most recently seen task, they lose performance on the tasks that were learned previously. Our method aims at preserving the knowledge of the previous tasks while learning a new one by using autoencoders. For each task, an under-complete autoencoder is learned, capturing the features that are crucial for its achievement. When a new task is presented to the system, we prevent the reconstructions of the features with these autoencoders from changing, which has the effect of preserving the information on which the previous tasks are mainly relying. At the same time, the features are given space to adjust to the most recent environment as only their projection into a low dimension submanifold is controlled. The proposed system is evaluated on image classification tasks and shows a reduction of forgetting over the state-of-the-art",
"Methods and systems for performing a sequence of machine learning tasks. One system includes a sequence of deep neural networks (DNNs), including: a first DNN corresponding to a first machine learning task, wherein the first DNN comprises a first plurality of indexed layers, and each layer in the first plurality of indexed layers is configured to receive a respective layer input and process the layer input to generate a respective layer output; and one or more subsequent DNNs corresponding to one or more respective machine learning tasks, wherein each subsequent DNN comprises a respective plurality of indexed layers, and each layer in a respective plurality of indexed layers with index greater than one receives input from a preceding layer of the respective subsequent DNN, and one or more preceding layers of respective preceding DNNs, wherein a preceding layer is a layer whose index is one less than the current index.",
"Catastrophic forgetting is a problem of neural networks that loses the information of the first task after training the second task. Here, we propose a method, i.e. incremental moment matching (IMM), to resolve this problem. IMM incrementally matches the moment of the posterior distribution of the neural network which is trained on the first and the second task, respectively. To make the search space of posterior parameter smooth, the IMM procedure is complemented by various transfer learning techniques including weight transfer, L2-norm of the old and the new parameter, and a variant of dropout with the old parameter. We analyze our approach on a variety of datasets including the MNIST, CIFAR-10, Caltech-UCSD-Birds, and Lifelog datasets. The experimental results show that IMM achieves state-of-the-art performance by balancing the information between an old and a new network.",
"A major open problem on the road to artificial intelligence is the development of incrementally learning systems that learn about more and more concepts over time from a stream of data. In this work, we introduce a new training strategy, iCaRL, that allows learning in such a class-incremental way: only the training data for a small number of classes has to be present at the same time and new classes can be added progressively. iCaRL learns strong classifiers and a data representation simultaneously. This distinguishes it from earlier works that were fundamentally limited to fixed data representations and therefore incompatible with deep learning architectures. We show by experiments on CIFAR-100 and ImageNet ILSVRC 2012 data that iCaRL can learn many classes incrementally over a long period of time where other strategies quickly fail.",
"When building a unified vision system or gradually adding new apabilities to a system, the usual assumption is that training data for all tasks is always available. However, as the number of tasks grows, storing and retraining on such data becomes infeasible. A new problem arises where we add new capabilities to a Convolutional Neural Network (CNN), but the training data for its existing capabilities are unavailable. We propose our Learning without Forgetting method, which uses only new task data to train the network while preserving the original capabilities. Our method performs favorably compared to commonly used feature extraction and fine-tuning adaption techniques and performs similarly to multitask learning that uses original task data we assume unavailable. A more surprising observation is that Learning without Forgetting may be able to replace fine-tuning with similar old and new task datasets for improved new task performance."
]
}
|
1906.10253
|
2955499770
|
The strength or weakness of an algorithm is ultimately governed by the confidence of its result. When the domain of the problem is large (e.g. traversal of a high-dimensional space), a perfect solution cannot be obtained, so approximations must be made. These approximations often lead to a reported quantity of interest (QOI) which varies between runs, decreasing the confidence of any single run. When the algorithm further computes this final QOI based on uncertain or noisy data, the variability (or lack of confidence) of the final QOI increases. Unbounded, these two sources of uncertainty (algorithmic approximations and uncertainty in input data) can result in a reported statistic that has low correlation with ground truth. In biological applications, this is especially applicable, as the search space is generally approximated at least to some degree (e.g. a high percentage of protein structures are invalid or energetically unfavorable) and the explicit conversion from continuous to discrete space for protein representation implies some uncertainty in the input data. This research applies uncertainty quantification techniques to the difficult protein-protein docking problem, first showing the variability that exists in existing software, and then providing a method for computing probabilistic certificates in the form of Chernoff-like bounds. Finally, this paper leverages these probabilistic certificates to accurately bound the uncertainty in docking from two docking algorithms, providing a QOI that is both robust and statistically meaningful.
|
Amadei et al. @cite_18 suggested it was possible to separate the protein configurational space into two subspaces, the "essential" subspace (only a few degrees of freedom that give rise to the large-scale motions of the protein) and the "physically constrained" subspace, consisting of largely Gaussian motion. Thus, identifying a small number of "hinge" residues that give rise to the "essential" subspace has been the focus of another group of work. Shamsuddin et al. @cite_17 constructed @math basis vectors to define the motion of rigid bodies joined by hinge residues, while HingeProt @cite_1 is a popular method for identifying hinges using cross-correlation of movements from a normal modes analysis (NMA).
|
{
"cite_N": [
"@cite_18",
"@cite_1",
"@cite_17"
],
"mid": [
"2016280956",
"",
"2106301834"
],
"abstract": [
"Analysis of extended molecular dynamics (MD) simulations of several proteins in aqueous solutions reveals that it is possible to separate the configurational space into two subspaces: (1) an ‘essential’ subspace containing only a few degrees of freedom in which anharmonic motion occurs that comprises most of the positional fluctuations; and (2) the remaining space in which the motion has a narrow Gaussian distribution and which can be considered as ‘physically constrained’.",
"",
"Proteins play a significant role in virtually all biological processes, with motions of interest occurring on the timescale of pico- to nano-seconds. Existing experimental techniques are not able to reliably provide the level of detail required for studying the exact mechanisms of the molecular motion and the underlying structure-function relationship, essential for effective drug screening and design. Theoretical models and computational tools are instrumental for gaining better mechanistic understanding and predictive power. We focus on “hinge” proteins, which exhibit a rotational movement of one region of the protein relative to another; this motion is similar to that allowed by revolute joints. By applying rigidity theoretic techniques that analyze the protein's geometric structure, we predict a relative axis of motion between a given pair of domains. To the best of our knowledge, this is the first computational method to predict such an axis based on a single conformational state. The most closely related approaches for predicting hinges either use a single conformation to identify the residues permitting the motion (Stonehinge, HingeProt), or require two conformations as input (DynDom). Our results show that rigidity theory can be applied to analyze proteins and accurately predict information that may elucidate conformational changes tied to protein function."
]
}
|
1906.10515
|
2955278383
|
To achieve fully autonomous navigation, vehicles need to compute an accurate model of their direct surrounding. In this paper, a 3D surface reconstruction algorithm from heterogeneous density 3D data is presented. The proposed method is based on a TSDF voxel-based representation, where an adaptive neighborhood kernel sourced on a Gaussian confidence evaluation is introduced. This enables to keep a good trade-off between the density of the reconstructed mesh and its accuracy. Experimental evaluations carried on both synthetic (CARLA) and real (KITTI) 3D data show a good performance compared to a state of the art method used for surface reconstruction.
|
Some methods directly use the 3D set of points received from the input sensor, which might be useful for visualization or obstacle detection tasks. However, the level of detail of the representation depends on the amount of data used, which rapidly becomes prohibitive for outdoor scenes. Conversely, other works propose a regularly sampled grid as introduced in @cite_14 , where occupancy information is stored into each cell. This enables handling big amounts of data more efficiently and reducing memory needs, especially by using recursive structures such as octrees @cite_11 @cite_17 . These approaches have become widely used for terrain traversability assessment, mapping and visualization. However, their discrete nature does not enable a continuous representation, which might be desirable for other tasks such as physical modeling.
|
{
"cite_N": [
"@cite_14",
"@cite_17",
"@cite_11"
],
"mid": [
"2154418813",
"",
"2146746326"
],
"abstract": [
"We describe the use of multiple wide-angle sonar range measurements to map the surroundings of an autonomous mobile robot. A sonar range reading provides information concerning empty and occupied volumes in a cone (subtending 30 degrees in our case) in front of the sensor. The reading is modelled as probability profiles projected onto a rasterized map, where somewhere occupied and everywhere empty areas are represented. Range measurements from multiple points of view (taken from multiple sensors on the robot, and from the same sensors after robot moves) are systematically integrated in the map. Overlapping empty volumes re-inforce each other, and serve to condense the range of occupied volumes. The map definition improves as more readings are added. The final map shows regions probably occupied, probably unoccupied, and unknown areas. The method deals effectively with clutter, and can be used for motion planning and for extended landmark recognition. This system has been tested on the Neptune mobile robot at CMU.",
"",
"The authors are building a prototype legged rover, called the Ambler (loosely an acronym for autonomous mobile exploration robot) and testing it on full-scale, rugged terrain of the sort that might be encountered on the Martian surface. They present an overview of their research program, focusing on locomotion, perception, planning, and control. They summarize some of the most important goals and requirements of a rover design and describe how locomotion, perception, and planning systems can satisfy these requirements. Since the program is relatively young (one year old at the time of writing) they identify issues and approaches and describe work in progress rather than report results. It is expected that many of the technologies developed will be applicable to other planetary bodies and to terrestrial concerns such as hazardous waste assessment and remediation, ocean floor exploration, and mining. >"
]
}
|
1906.10515
|
2955278383
|
To achieve fully autonomous navigation, vehicles need to compute an accurate model of their direct surrounding. In this paper, a 3D surface reconstruction algorithm from heterogeneous density 3D data is presented. The proposed method is based on a TSDF voxel-based representation, where an adaptive neighborhood kernel sourced on a Gaussian confidence evaluation is introduced. This enables to keep a good trade-off between the density of the reconstructed mesh and its accuracy. Experimental evaluations carried on both synthetic (CARLA) and real (KITTI) 3D data show a good performance compared to a state of the art method used for surface reconstruction.
|
Alternatively, the graphics community has explored different methods to create a triangular mesh from the 3D points of the scanned surface. These methods have a wide range of applications on completely different fields as described in @cite_16 . For simplicity, we distinguish between explicit and implicit methods.
|
{
"cite_N": [
"@cite_16"
],
"mid": [
"2366389387"
],
"abstract": [
"The area of surface reconstruction has seen substantial progress in the past two decades. The traditional problem addressed by surface reconstruction is to recover the digital representation of a physical shape that has been scanned, where the scanned data contain a wide variety of defects. While much of the earlier work has been focused on reconstructing a piece-wise smooth representation of the original shape, recent work has taken on more specialized priors to address significantly challenging data imperfections, where the reconstruction can take on different representations-not necessarily the explicit geometry. We survey the field of surface reconstruction, and provide a categorization with respect to priors, data imperfections and reconstruction output. By considering a holistic view of surface reconstruction, we show a detailed characterization of the field, highlight similarities between diverse reconstruction techniques and provide directions for future work in surface reconstruction."
]
}
|
1906.10515
|
2955278383
|
To achieve fully autonomous navigation, vehicles need to compute an accurate model of their direct surrounding. In this paper, a 3D surface reconstruction algorithm from heterogeneous density 3D data is presented. The proposed method is based on a TSDF voxel-based representation, where an adaptive neighborhood kernel sourced on a Gaussian confidence evaluation is introduced. This enables to keep a good trade-off between the density of the reconstructed mesh and its accuracy. Experimental evaluations carried on both synthetic (CARLA) and real (KITTI) 3D data show a good performance compared to a state of the art method used for surface reconstruction.
|
In @cite_6 , range information across different viewpoints is integrated to average a TSDF from which the scalar field is obtained. By using this technique, 3D modeling has been performed in both small indoor @cite_24 and large-scale outdoor scenes @cite_20 @cite_10 . These methods typically require a large number of viewpoints to output a dense reconstruction and are susceptible to outliers.
|
{
"cite_N": [
"@cite_24",
"@cite_20",
"@cite_10",
"@cite_6"
],
"mid": [
"1987648924",
"1716229439",
"",
"2009422376"
],
"abstract": [
"We present a system for accurate real-time mapping of complex and arbitrary indoor scenes in variable lighting conditions, using only a moving low-cost depth camera and commodity graphics hardware. We fuse all of the depth data streamed from a Kinect sensor into a single global implicit surface model of the observed scene in real-time. The current sensor pose is simultaneously obtained by tracking the live depth frame relative to the global model using a coarse-to-fine iterative closest point (ICP) algorithm, which uses all of the observed depth data available. We demonstrate the advantages of tracking against the growing full surface model compared with frame-to-frame tracking, obtaining tracking and mapping results in constant time within room sized scenes with limited drift and high accuracy. We also show both qualitative and quantitative results relating to various aspects of our tracking and mapping system. Modelling of natural scenes, in real-time with only commodity sensor and GPU hardware, promises an exciting step forward in augmented reality (AR), in particular, it allows dense surfaces to be reconstructed in real-time, with a level of detail and robustness beyond any solution yet presented using passive computer vision.",
"In this paper we present an extension to the KinectFusion algorithm that permits dense mesh-based mapping of extended scale environments in real-time. This is achieved through (i) altering the original algorithm such that the region of space being mapped by the KinectFusion algorithm can vary dynamically, (ii) extracting a dense point cloud from the regions that leave the KinectFusion volume due to this variation, and, (iii) incrementally adding the resulting points to a triangular mesh representation of the environment. The system is implemented as a set of hierarchical multi-threaded components which are capable of operating in real-time. The architecture facilitates the creation and integration of new modules with minimal impact on the performance on the dense volume tracking and surface reconstruction modules. We provide experimental results demonstrating the system’s ability to map areas considerably beyond the scale of the original KinectFusion algorithm including a two story apartment and an extended sequence taken from a car at night. In order to overcome failure of the iterative closest point (ICP) based odometry in areas of low geometric features we have evaluated the Fast Odometry from Vision (FOVIS) system as an alternative. We provide a comparison between the two approaches where we show a trade off between the reduced drift of the visual odometry approach and the higher local mesh quality of the ICP-based approach. Finally we present ongoing work on incorporating full simultaneous localisation and mapping (SLAM) pose-graph optimisation.",
"",
"A number of techniques have been developed for reconstructing surfaces by integrating groups of aligned range images. A desirable set of properties for such algorithms includes: incremental updating, representation of directional uncertainty, the ability to fill gaps in the reconstruction, and robustness in the presence of outliers. Prior algorithms possess subsets of these properties. In this paper, we present a volumetric method for integrating range images that possesses all of these properties. Our volumetric representation consists of a cumulative weighted signed distance function. Working with one range image at a time, we first scan-convert it to a distance function, then combine this with the data already acquired using a simple additive scheme. To achieve space efficiency, we employ a run-length encoding of the volume. To achieve time efficiency, we resample the range image to align with the voxel grid and traverse the range and voxel scanlines synchronously. We generate the final manifold by extracting an isosurface from the volumetric grid. We show that under certain assumptions, this isosurface is optimal in the least squares sense. To fill gaps in the model, we tessellate over the boundaries between regions seen to be empty and regions never observed. Using this method, we are able to integrate a large number of range images (as many as 70) yielding seamless, high-detail models of up to 2.6 million triangles."
]
}
|
1906.10284
|
2955238251
|
This paper introduces single-image 3D scene reconstruction from water reflection photography, i.e., images capturing direct and water-reflected real-world scenes. Water reflection offers an additional viewpoint to the direct sight, collectively forming a stereo pair. The water-reflected scene, however, includes internally scattered and reflected environmental illumination in addition to the scene radiance, which precludes direct stereo matching. We derive a principled iterative method that disentangles this scene radiometry and geometry for reconstructing 3D scene structure as well as its high-dynamic range appearance. In the presence of waves, we simultaneously recover the wave geometry as surface normal perturbations of the water surface. Most important, we show that the water reflection enables calibration of the camera. In other words, we show that capturing a direct and water-reflected scene in a single exposure forms a self-calibrating catadioptric stereo camera. We demonstrate our method on a number of images taken in the wild. The results demonstrate a new means for leveraging this accidental catadioptric camera.
|
Stereo matching with translucency @cite_20 @cite_3 and image-based reflection separation @cite_14 @cite_22 @cite_32 @cite_16 explicitly model transmission through semi-translucent surfaces. They utilize either 3D recovery or models in the Fourier domain for blind separation of reflected and transmitted images. They cannot, however, be applied to non-planar surfaces such as wavy water surfaces.
|
{
"cite_N": [
"@cite_14",
"@cite_22",
"@cite_32",
"@cite_3",
"@cite_16",
"@cite_20"
],
"mid": [
"2107530646",
"1558369361",
"1983897339",
"2170326985",
"1986257159",
"2122985851"
],
"abstract": [
"When we take a picture through transparent glass, the image we obtain is often a linear superposition of two images: The image of the scene beyond the glass plus the image of the scene reflected by the glass. Decomposing the single input image into two images is a massively ill-posed problem: In the absence of additional knowledge about the scene being viewed, there are an infinite number of valid decompositions. In this paper, we focus on an easier problem: user assisted separation in which the user interactively labels a small number of gradients as belonging to one of the layers. Even given labels on part of the gradients, the problem is still ill-posed and additional prior knowledge is needed. Following recent results on the statistics of natural images, we use a sparsity prior over derivative filters. This sparsity prior is optimized using the iterative reweighted least squares (IRLS) approach. Our results show that using a prior derived from the statistics of natural images gives a far superior performance compared to a Gaussian prior and it enables good separations from a modest number of labeled gradients.",
"In this paper we present an approach for separating two transparent layers in images and video sequences. Given two initial unknown physical mixtures, I 1 and I 2, of real scene layers, L 1 and L 2, we seek a layer separation which minimizes the structural correlations across the two layers, at every image point. Such a separation is achieved by transferring local grayscale structure from one image to the other wherever it is highly correlated with the underlying local grayscale structure in the other image, and vice versa. This bi-directional transfer operation, which we call the “layer information exchange”, is performed on diminishing window sizes, from global image windows (i.e., the entire image), down to local image windows, thus detecting similar grayscale structures at varying scales across pixels. We show the applicability of this approach to various real-world scenarios, including image and video transparency separation. In particular, we show that this approach can be used for separating transparent layers in images obtained under different polarizations, as well as for separating complex non-rigid transparent motions in video sequences. These can be done without prior knowledge of the layer mixing model (simple additive, alpha-matted composition with an unknown alpha-map, or other), and under unknown complex temporal changes (e.g., unknown varying lighting conditions).",
"We propose a physically-based approach to separate reflection using multiple polarized images with a background scene captured behind glass. The input consists of three polarized images, each captured from the same view point but with a different polarizer angle separated by 45 degrees. The output is the high-quality separation of the reflection and background layers from each of the input images. A main technical challenge for this problem is that the mixing coefficient for the reflection and background layers depends on the angle of incidence and the orientation of the plane of incidence, which are spatially varying over the pixels of an image. Exploiting physical properties of polarization for a double-surfaced glass medium, we propose a multiscale scheme which automatically finds the optimal separation of the reflection and background layers. Through experiments, we demonstrate that our approach can generate superior results to those of previous methods.",
"In our fractional stereo matching problem, a foreground object with a fractional boundary is blended with a background scene using unknown transparencies. Due to the spatially varying disparities in different layers, one foreground pixel may be blended with different background pixels in stereo images, making the color constancy commonly assumed in traditional stereo matching not hold any more. To tackle this problem, in this paper, we introduce a probabilistic framework constraining the matching of pixel colors, disparities, and alpha values in different layers, and propose an automatic optimization method to solve a maximum a posteriori (MAP) problem using expectation-maximization (EM), given only a short-baseline stereo input image pair. Our method encodes the effect of background occlusion by layer blending without requiring a special detection process. The alpha computation process in our unified framework can be regarded as a new approach by natural image matting, which handles appropriately the situation when the background color is similar to that of the foreground object. We demonstrate the efficacy of our method by experimenting with challenging stereo images and making comparisons with state-of-the-art methods.",
"Convolutive mixtures of images are common in photography of semi-reflections. They also occur in microscopy and tomography. Their formation process involves focusing on an object layer, over which defocused layers are superimposed. We seek blind source separation (BSS) of such mixtures. However, achieving this by direct optimization of mutual information is very complex and suffers from local minima. Thus, we devise an efficient approach to solve these problems. While achieving high quality image separation, we take steps that make the problem significantly simpler than a direct formulation of convolutive image mixtures. These steps make the problem practically convex, yielding a unique global solution to which convergence can be fast. The convolutive BSS problem is converted into a set of instantaneous (pointwise) problems, using a short time Fourier transform (STFT). Standard BSS solutions for instantaneous problems suffer, however, from scale and permutation ambiguities. We overcome these ambiguities by exploiting a parametric model of the defocus point spread function. Moreover, we enhance the efficiency of the approach by exploiting the sparsity of the STFT representation as a prior. We apply our algorithm to semi-reflections, and demonstrate it in experiments.",
"In this paper, we address the stereo matching problem in the presence of reflections and translucency, where image formation can be modeled as the additive superposition of layers at different depth. The presence of such effects violates the Lambertian assumption underlying traditional stereo vision algorithms, making it impossible to recover component depths using direct color matching based methods. We develop several techniques to estimate both depths and colors of the component layers. Depth hypotheses are enumerated in pairs, one from each layer, in a nested plane sweep. For each pair of depth hypotheses, we compute a component-color-independent matching error per pixel, using a spatial-temporal differencing technique. We then use graph cut optimization to solve for the depths of both layers. This is followed by an iterative color update algorithm whose convergence is proven in our paper. We show convincing results of depth and color estimates for both synthetic and real image sequences."
]
}
|
1906.10284
|
2955238251
|
This paper introduces single-image 3D scene reconstruction from water reflection photography, i.e., images capturing direct and water-reflected real-world scenes. Water reflection offers an additional viewpoint to the direct sight, collectively forming a stereo pair. The water-reflected scene, however, includes internally scattered and reflected environmental illumination in addition to the scene radiance, which precludes direct stereo matching. We derive a principled iterative method that disentangles this scene radiometry and geometry for reconstructing 3D scene structure as well as its high-dynamic range appearance. In the presence of waves, we simultaneously recover the wave geometry as surface normal perturbations of the water surface. Most important, we show that the water reflection enables calibration of the camera. In other words, we show that capturing a direct and water-reflected scene in a single exposure forms a self-calibrating catadioptric stereo camera. We demonstrate our method on a number of images taken in the wild. The results demonstrate a new means for leveraging this accidental catadioptric camera.
|
Nishino and Nayar proposed a corneal imaging system: K. Nishino and S. K. Nayar. Corneal Imaging System: Environment from Eyes. International Journal of Computer Vision, Oct 2006. @cite_30
|
{
"cite_N": [
"@cite_30"
],
"mid": [
"2032118045"
],
"abstract": [
"This paper provides a comprehensive analysis of exactly what visual information about the world is embedded within a single image of an eye. It turns out that the cornea of an eye and a camera viewing the eye form a catadioptric imaging system. We refer to this as a corneal imaging system. Unlike a typical catadioptric system, a corneal one is flexible in that the reflector (cornea) is not rigidly attached to the camera. Using a geometric model of the cornea based on anatomical studies, its 3D location and orientation can be estimated from a single image of the eye. Once this is done, a wide-angle view of the environment of the person can be obtained from the image. In addition, we can compute the projection of the environment onto the retina with its center aligned with the gaze direction. This foveated retinal image reveals what the person is looking at. We present a detailed analysis of the characteristics of the corneal imaging system including field of view, resolution and locus of viewpoints. When both eyes of a person are captured in an image, we have a stereo corneal imaging system. We analyze the epipolar geometry of this stereo system and show how it can be used to compute 3D structure. The framework we present in this paper for interpreting eye images is passive and non-invasive. It has direct implications for several fields including visual recognition, human-machine interfaces, computer graphics and human affect studies."
]
}
|
1906.10284
|
2955238251
|
This paper introduces single-image 3D scene reconstruction from water reflection photography, i.e., images capturing direct and water-reflected real-world scenes. Water reflection offers an additional viewpoint to the direct sight, collectively forming a stereo pair. The water-reflected scene, however, includes internally scattered and reflected environmental illumination in addition to the scene radiance, which precludes direct stereo matching. We derive a principled iterative method that disentangles this scene radiometry and geometry for reconstructing 3D scene structure as well as its high-dynamic range appearance. In the presence of waves, we simultaneously recover the wave geometry as surface normal perturbations of the water surface. Most important, we show that the water reflection enables calibration of the camera. In other words, we show that capturing a direct and water-reflected scene in a single exposure forms a self-calibrating catadioptric stereo camera. We demonstrate our method on a number of images taken in the wild. The results demonstrate a new means for leveraging this accidental catadioptric camera.
|
A. Torralba and W. T. Freeman. Accidental pinhole and pinspeck cameras: Revealing the scene outside the picture. In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2012, pp. 374–381. @cite_28
|
{
"cite_N": [
"@cite_28"
],
"mid": [
"2144742844"
],
"abstract": [
"We identify and study two types of “accidental” images that can be formed in scenes. The first is an accidental pinhole camera image. These images are often mistaken for shadows, but can reveal structures outside a room, or the unseen shape of the light aperture into the room. The second class of accidental images are “inverse” pinhole camera images, formed by subtracting an image with a small occluder present from a reference image without the occluder. The reference image can be an earlier frame of a video sequence. Both types of accidental images happen in a variety of different situations (an indoor scene illuminated by natural light, a street with a person walking under the shadow of a building, etc.). Accidental cameras can reveal information about the scene outside the image, the lighting conditions, or the aperture by which light enters the scene."
]
}
|
1906.10284
|
2955238251
|
This paper introduces single-image 3D scene reconstruction from water reflection photography, i.e., images capturing direct and water-reflected real-world scenes. Water reflection offers an additional viewpoint to the direct sight, collectively forming a stereo pair. The water-reflected scene, however, includes internally scattered and reflected environmental illumination in addition to the scene radiance, which precludes direct stereo matching. We derive a principled iterative method that disentangles this scene radiometry and geometry for reconstructing 3D scene structure as well as its high-dynamic range appearance. In the presence of waves, we simultaneously recover the wave geometry as surface normal perturbations of the water surface. Most important, we show that the water reflection enables calibration of the camera. In other words, we show that capturing a direct and water-reflected scene in a single exposure forms a self-calibrating catadioptric stereo camera. We demonstrate our method on a number of images taken in the wild. The results demonstrate a new means for leveraging this accidental catadioptric camera.
|
K. L. Bouman et al. Turning corners into cameras: Principles and methods. In Proc. IEEE International Conference on Computer Vision (ICCV), 2017, pp. 2270–2278. @cite_11
|
{
"cite_N": [
"@cite_11"
],
"mid": [
"2777077087"
],
"abstract": [
"We show that walls, and other obstructions with edges, can be exploited as naturally-occurring “cameras” that reveal the hidden scenes beyond them. In particular, we demonstrate methods for using the subtle spatio-temporal radiance variations that arise on the ground at the base of a wall's edge to construct a one-dimensional video of the hidden scene behind the wall. The resulting technique can be used for a variety of applications in diverse physical settings. From standard RGB video recordings, we use edge cameras to recover 1-D videos that reveal the number and trajectories of people moving in an occluded scene. We further show that adjacent wall edges, such as those that arise in the case of an open doorway, yield a stereo camera from which the 2-D location of hidden, moving objects can be recovered. We demonstrate our technique in a number of indoor and outdoor environments involving varied floor surfaces and illumination conditions."
]
}
|
1906.10284
|
2955238251
|
This paper introduces single-image 3D scene reconstruction from water reflection photography, i.e., images capturing direct and water-reflected real-world scenes. Water reflection offers an additional viewpoint to the direct sight, collectively forming a stereo pair. The water-reflected scene, however, includes internally scattered and reflected environmental illumination in addition to the scene radiance, which precludes direct stereo matching. We derive a principled iterative method that disentangles this scene radiometry and geometry for reconstructing 3D scene structure as well as its high-dynamic range appearance. In the presence of waves, we simultaneously recover the wave geometry as surface normal perturbations of the water surface. Most important, we show that the water reflection enables calibration of the camera. In other words, we show that capturing a direct and water-reflected scene in a single exposure forms a self-calibrating catadioptric stereo camera. We demonstrate our method on a number of images taken in the wild. The results demonstrate a new means for leveraging this accidental catadioptric camera.
|
J. Gluckman and S. K. Nayar. Catadioptric stereo using planar mirrors. International Journal of Computer Vision, 44(1):65–79, 2001. @cite_2
|
{
"cite_N": [
"@cite_2"
],
"mid": [
"1598351315"
],
"abstract": [
"By using mirror reflections of a scene, stereo images can be captured with a single camera (catadioptric stereo). In addition to simplifying data acquisition single camera stereo provides both geometric and radiometric advantages over traditional two camera stereo. In this paper, we discuss the geometry and calibration of catadioptric stereo with two planar mirrors. In particular, we will show that the relative orientation of a catadioptric stereo rig is restricted to the class of planar motions thus reducing the number of external calibration parameters from 6 to 5. Next we derive the epipolar geometry for catadioptric stereo and show that it has 6 degrees of freedom rather than 7 for traditional stereo. Furthermore, we show how focal length can be recovered from a single catadioptric image solely from a set of stereo correspondences. To test the accuracy of the calibration we present a comparison to Tsai camera calibration and we measure the quality of Euclidean reconstruction. In addition, we will describe a real-time system which demonstrates the viability of stereo with mirrors as an alternative to traditional two camera stereo."
]
}
|
1906.10284
|
2955238251
|
This paper introduces single-image 3D scene reconstruction from water reflection photography, i.e., images capturing direct and water-reflected real-world scenes. Water reflection offers an additional viewpoint to the direct sight, collectively forming a stereo pair. The water-reflected scene, however, includes internally scattered and reflected environmental illumination in addition to the scene radiance, which precludes direct stereo matching. We derive a principled iterative method that disentangles this scene radiometry and geometry for reconstructing 3D scene structure as well as its high-dynamic range appearance. In the presence of waves, we simultaneously recover the wave geometry as surface normal perturbations of the water surface. Most important, we show that the water reflection enables calibration of the camera. In other words, we show that capturing a direct and water-reflected scene in a single exposure forms a self-calibrating catadioptric stereo camera. We demonstrate our method on a number of images taken in the wild. The results demonstrate a new means for leveraging this accidental catadioptric camera.
|
E. H. Adelson and J. Y. Wang. Single lens stereo with a plenoptic camera. IEEE Trans. on Pattern Analysis and Machine Intelligence, 14(2):99–106, 1992. @cite_29
|
{
"cite_N": [
"@cite_29"
],
"mid": [
"2143050242"
],
"abstract": [
"Ordinary cameras gather light across the area of their lens aperture, and the light striking a given subregion of the aperture is structured somewhat differently than the light striking an adjacent subregion. By analyzing this optical structure, one can infer the depths of the objects in the scene, i.e. one can achieve single lens stereo. The authors describe a camera for performing this analysis. It incorporates a single main lens along with a lenticular array placed at the sensor plane. The resulting plenoptic camera provides information about how the scene would look when viewed from a continuum of possible viewpoints bounded by the main lens aperture. Deriving depth information is simpler than in a binocular stereo system because the correspondence problem is minimized. The camera extracts information about both horizontal and vertical parallax, which improves the reliability of the depth estimates. >"
]
}
|
1906.10284
|
2955238251
|
This paper introduces single-image 3D scene reconstruction from water reflection photography, i.e., images capturing direct and water-reflected real-world scenes. Water reflection offers an additional viewpoint to the direct sight, collectively forming a stereo pair. The water-reflected scene, however, includes internally scattered and reflected environmental illumination in addition to the scene radiance, which precludes direct stereo matching. We derive a principled iterative method that disentangles this scene radiometry and geometry for reconstructing 3D scene structure as well as its high-dynamic range appearance. In the presence of waves, we simultaneously recover the wave geometry as surface normal perturbations of the water surface. Most important, we show that the water reflection enables calibration of the camera. In other words, we show that capturing a direct and water-reflected scene in a single exposure forms a self-calibrating catadioptric stereo camera. We demonstrate our method on a number of images taken in the wild. The results demonstrate a new means for leveraging this accidental catadioptric camera.
|
S. Baker and S. Nayar. A theory of single-viewpoint catadioptric image formation. International Journal of Computer Vision, 35(2):175–196, 1999. @cite_23
|
{
"cite_N": [
"@cite_23"
],
"mid": [
"165571211"
],
"abstract": [
"Conventional video cameras have limited fields of view which make them restrictive for certain applications in computational vision. A catadioptric sensor uses a combination of lenses and mirrors placed in a carefully arranged configuration to capture a much wider field of view. One important design goal for catadioptric sensors is choosing the shapes of the mirrors in a way that ensures that the complete catadioptric system has a single effective viewpoint. The reason a single viewpoint is so desirable is that it is a requirement for the generation of pure perspective images from the sensed images. In this paper, we derive the complete class of single-lens single-mirror catadioptric sensors that have a single viewpoint. We describe all of the solutions in detail, including the degenerate ones, with reference to many of the catadioptric systems that have been proposed in the literature. In addition, we derive a simple expression for the spatial resolution of a catadioptric sensor in terms of the resolution of the cameras used to construct it. Moreover, we include detailed analysis of the defocus blur caused by the use of a curved mirror in a catadioptric sensor."
]
}
|
1906.10250
|
2955366701
|
Recently, the problem of allocating one resource per agent with initial endowments () has seen a renewed interest: indeed, while in the general domain Top Trading Cycle is known to be the only procedure guaranteeing Pareto-optimality, individual rationality, and strategy-proofness, the situation differs in single-peaked domains. Bade (2019) presented the Crawler, an alternative procedure enjoying the same properties (with the additional advantage of being implementable in obviously dominant strategies); while (2015) showed that allowing mutually beneficial swap-deals among the agents was already enough to guarantee Pareto-optimality. In this paper we significantly deepen our understanding of these decentralized procedures: we show in particular that single-peaked domains happen to be "maximal" if one wishes to guarantee this convergence property. Interestingly, we also observe that the set of allocations reachable by swap-deals always contains the outcome of the Crawler. To further investigate how these different mechanisms compare, we pay special attention to the average and minimum rank of the resource obtained by the agents in the outcome allocation. We provide theoretical bounds on the loss potentially induced by these procedures with respect to these criteria, and complement these results with an extensive experimental study which shows how different variants of swap dynamics behave. In fact, even the simplest dynamics exhibit very good results, and it is possible to further guide the process towards our objectives, if one is ready to sacrifice a bit in terms of decentralization. On our way, we also show that a simple variant of the Crawler allows one to check efficiently that an allocation is Pareto-optimal in single-peaked domains.
|
The theory of fair division was introduced by @cite_27, who defined "the Problem of Fair Division". This seminal work has led to an extensive literature in economics (see @cite_3, @cite_26, or @cite_24 for complete surveys). Most of these works focus on settings with divisible resources and/or allow for monetary transfers between the agents. @cite_4 and @cite_22 are among the first to consider indivisible resources. While in the former an additional divisible resource plays the role of money, in the latter all the resources are assumed to be indivisible.
|
{
"cite_N": [
"@cite_26",
"@cite_4",
"@cite_22",
"@cite_3",
"@cite_24",
"@cite_27"
],
"mid": [
"1577069963",
"",
"2152907289",
"2029467699",
"2885811088",
""
],
"abstract": [
"The concept of fair division is as old as civil society itself. Aristotle's \"equal treatment of equals\" was the first step toward a formal definition of distributive fairness. The concept of collective welfare, more than two centuries old, is a pillar of modern economic analysis. Reflecting fifty years of research, this book examines the contribution of modern microeconomic thinking to distributive justice. Taking the modern axiomatic approach, it compares normative arguments of distributive justice and their relation to efficiency and collective welfare. The book begins with the epistemological status of the axiomatic approach and the four classic principles of distributive justice: compensation, reward, exogenous rights, and fitness. It then presents the simple ideas of equal gains, equal losses, and proportional gains and losses. The book discusses three cardinal interpretations of collective welfare: Bentham's \"utilitarian\" proposal to maximize the sum of individual utilities, the Nash product, and the egalitarian leximin ordering. It also discusses the two main ordinal definitions of collective welfare: the majority relation and the Borda scoring method. The Shapley value is the single most important contribution of game theory to distributive justice. A formula to divide jointly produced costs or benefits fairly, it is especially useful when the pattern of externalities renders useless the simple ideas of equality and proportionality. The book ends with two versatile methods for dividing commodities efficiently and fairly when only ordinal preferences matter: competitive equilibrium with equal incomes and egalitarian equivalence. The book contains a wealth of empirical examples and exercises.",
"",
"Abstract An economic model of trading in commodities that are inherently indivisible, like houses, is investigated from a game-theoretic point of view. The concepts of balanced game and core are developed, and a general theorem of Scarf's is applied to prove that the market in question has a nonempty core, that is, at least one outcome that no subset of traders can improve upon. A number of examples are discussed, and the final section reviews a series of other models involving indivisible commodities, with references to the literature.",
"Governments and institutions, perhaps even more than markets, determine who gets what in our society. They make the crucial choices about who pays the taxes, who gets into college, who gets medical care, who gets drafted, where the hazardous waste dump is sited, and how much we pay for public services. Debate about these issues inevitably centres on the question of whether the solution is \"fair\". In \"Equity: In Theory and Practice\", H. Peyton Young offers a systematic explanation of what we mean by fairness in distributing public resources and burdens, and applies the theory to actual cases. Young begins by reviewing some of the major theories of social justice, showing that none of them explains how societies resolve distributive problems in practice. He then suggests an alternative approach to analyzing fairness in concrete situations: equity, he argues, does not boil down to a single formula, but represents a balance between competing principles of need, desert, and social utility. The studies Young uses to illustrate his approach include the design of income tax schedules, priority schemes for allocating scarce medical resources, formulas for distributing political representation, and criteria for setting fees for public services. Each represents a unique blend of historical perspective, rigorous analysis, and an emphasis on practical solutions.",
"In fair division of indivisible goods, using sequences of sincere choices (or picking sequences) is a natural way to allocate the objects. The idea is as follows: at each stage, a designated agent picks one object among those that remain. Another intuitive way to obtain an allocation is to give objects to agents in the first place, and to let agents exchange them as long as such \"deals\" are beneficial. This paper investigates these notions, when agents have additive preferences over objects, and unveils surprising connections between them, and with other efficiency and fairness notions. In particular, we show that an allocation is sequenceable if and only if it is optimal for a certain type of deals, namely cycle deals involving a single object. Furthermore, any Pareto-optimal allocation is sequenceable, but not the converse. Regarding fairness, we show that an allocation can be envy-free and non-sequenceable, but that every competitive equilibrium with equal incomes is sequenceable. To complete the picture, we show how some domain restrictions may affect the relations between these notions. Finally, we experimentally explore the links between the scales of efficiency and fairness.",
""
]
}
|
1906.10250
|
2955366701
|
Recently, the problem of allocating one resource per agent with initial endowments () has seen a renewed interest: indeed, while in the general domain Top Trading Cycle is known to be the only procedure guaranteeing Pareto-optimality, individual rationality, and strategy-proofness, the situation differs in single-peaked domains. Bade (2019) presented the Crawler, an alternative procedure enjoying the same properties (with the additional advantage of being implementable in obviously dominant strategies); while (2015) showed that allowing mutually beneficial swap-deals among the agents was already enough to guarantee Pareto-optimality. In this paper we significantly deepen our understanding of these decentralized procedures: we show in particular that single-peaked domains happen to be "maximal" if one wishes to guarantee this convergence property. Interestingly, we also observe that the set of allocations reachable by swap-deals always contains the outcome of the Crawler. To further investigate how these different mechanisms compare, we pay special attention to the average and minimum rank of the resource obtained by the agents in the outcome allocation. We provide theoretical bounds on the loss potentially induced by these procedures with respect to these criteria, and complement these results with an extensive experimental study which shows how different variants of swap dynamics behave. In fact, even the simplest dynamics exhibit very good results, and it is possible to further guide the process towards our objectives, if one is ready to sacrifice a bit in terms of decentralization. On our way, we also show that a simple variant of the Crawler allows one to check efficiently that an allocation is Pareto-optimal in single-peaked domains.
|
In the present paper we focus on the model defined by @cite_22, called or , in which there are exactly as many indivisible resources as agents and no money. @cite_22 defined the Top Trading Cycle (TTC) procedure, which has been extensively studied and shown to be unique when preferences are strict.
|
{
"cite_N": [
"@cite_22"
],
"mid": [
"2152907289"
],
"abstract": [
"Abstract An economic model of trading in commodities that are inherently indivisible, like houses, is investigated from a game-theoretic point of view. The concepts of balanced game and core are developed, and a general theorem of Scarf's is applied to prove that the market in question has a nonempty core, that is, at least one outcome that no subset of traders can improve upon. A number of examples are discussed, and the final section reviews a series of other models involving indivisible commodities, with references to the literature."
]
}
|
1906.10250
|
2955366701
|
Recently, the problem of allocating one resource per agent with initial endowments () has seen a renewed interest: indeed, while in the general domain Top Trading Cycle is known to be the only procedure guaranteeing Pareto-optimality, individual rationality, and strategy-proofness, the situation differs in single-peaked domains. Bade (2019) presented the Crawler, an alternative procedure enjoying the same properties (with the additional advantage of being implementable in obviously dominant strategies); while (2015) showed that allowing mutually beneficial swap-deals among the agents was already enough to guarantee Pareto-optimality. In this paper we significantly deepen our understanding of these decentralized procedures: we show in particular that single-peaked domains happen to be "maximal" if one wishes to guarantee this convergence property. Interestingly, we also observe that the set of allocations reachable by swap-deals always contains the outcome of the Crawler. To further investigate how these different mechanisms compare, we pay special attention to the average and minimum rank of the resource obtained by the agents in the outcome allocation. We provide theoretical bounds on the loss potentially induced by these procedures with respect to these criteria, and complement these results with an extensive experimental study which shows how different variants of swap dynamics behave. In fact, even the simplest dynamics exhibit very good results, and it is possible to further guide the process towards our objectives, if one is ready to sacrifice a bit in terms of decentralization. On our way, we also show that a simple variant of the Crawler allows one to check efficiently that an allocation is Pareto-optimal in single-peaked domains.
|
@cite_35 renewed interest in the assignment problem in the economic literature by investigating randomized procedures. Subsequent work by @cite_32 introduced single-peaked preferences in their setting. @cite_15 later considered deterministic and probabilistic solutions under single-peaked preferences.
|
{
"cite_N": [
"@cite_35",
"@cite_15",
"@cite_32"
],
"mid": [
"2014437184",
"1999442185",
"2088605609"
],
"abstract": [
"A random assignment is ordinally efficient if it is not stochastically dominated with respect to individual preferences over sure objects. Ordinal efficiency implies (is implied by) ex post (ex ante) efficiency. A simple algorithm characterizes ordinally efficient assignments: our solution, probabilistic serial (PS), is a central element within their set. Random priority (RP) orders agents from the uniform distribution, then lets them choose successively their best remaining object. RP is ex post, but not always ordinally, efficient. PS is envy-free, RP is not; RP is strategy-proof, PS is not. Ordinal efficiency, Strategyproofness, and equal treatment of equals are incompatible. Journal of Economic Literature Classification Numbers: C78, D61, D63.",
"We consider the problem of assigning agents to slots on a line, where only one agent can be served at a slot and each agent prefers to be served as close as possible to his target. Our focus is on aggregate gap minimizing methods, i.e., those that minimize the total gap between targets and assigned slots. We first consider deterministic assignment of agents to slots, and provide a direct method for testing if a given deterministic assignment is aggregate gap minimizing. We then consider probabilistic assignment of agents to slots, and make use of the previous method to propose an aggregate gap minimizing modification of the classic random priority method to solve this class of problems. We also provide some logical relations in our setting among standard axioms in the literature on assignment problems, and explore the robustness of our results to several extensions of our setting.",
"We consider the problem of assigning indivisible goods among a group of agents with lotteries when the preference profile is single-peaked. Unfortunately, even on this restricted domain of preferences, equal treatment of equals, stochastic dominance efficiency, and stochastic dominance strategy-proofness are incompatible."
]
}
|
1906.10250
|
2955366701
|
Recently, the problem of allocating one resource per agent with initial endowments () has seen a renewed interest: indeed, while in the general domain Top Trading Cycle is known to be the only procedure guaranteeing Pareto-optimality, individual rationality, and strategy-proofness, the situation differs in single-peaked domains. Bade (2019) presented the Crawler, an alternative procedure enjoying the same properties (with the additional advantage of being implementable in obviously dominant strategies); while (2015) showed that allowing mutually beneficial swap-deals among the agents was already enough to guarantee Pareto-optimality. In this paper we significantly deepen our understanding of these decentralized procedures: we show in particular that single-peaked domains happen to be "maximal" if one wishes to guarantee this convergence property. Interestingly, we also observe that the set of allocations reachable by swap-deals always contains the outcome of the Crawler. To further investigate how these different mechanisms compare, we pay special attention to the average and minimum rank of the resource obtained by the agents in the outcome allocation. We provide theoretical bounds on the loss potentially induced by these procedures with respect to these criteria, and complement these results with an extensive experimental study which shows how different variants of swap dynamics behave. In fact, even the simplest dynamics exhibit very good results, and it is possible to further guide the process towards our objectives, if one is ready to sacrifice a bit in terms of decentralization. On our way, we also show that a simple variant of the Crawler allows one to check efficiently that an allocation is Pareto-optimal in single-peaked domains.
|
@cite_9 defined a set of Pareto-efficient procedures generalizing the TTC algorithm when allowing for indifferences, which, however, includes procedures that are not strategy-proof. @cite_20 and @cite_37 independently proposed general frameworks for efficient and strategy-proof generalizations of the TTC procedure with indifferences.
|
{
"cite_N": [
"@cite_9",
"@cite_37",
"@cite_20"
],
"mid": [
"1588041343",
"2053487230",
""
],
"abstract": [
"The (Shapley-Scarf) housing market is a well-studied and fundamental model of an exchange economy. Each agent owns a single house and the goal is to reallocate the houses to the agents in a mutually beneficial and stable manner. Recently, Alcalde-Unzu and Molis (2011) and Jaramillo and Manjunath (2011) independently examined housing markets in which agents can express indifferences among houses. They proposed two important families of mechanisms, known as TTAS and TCR respectively. We formulate a family of mechanisms which not only includes TTAS and TCR but also satisfies many desirable properties of both families. As a corollary, we show that TCR is strict core selecting (if the strict core is non-empty). Finally, we settle an open question regarding the computational complexity of the TTAS mechanism. Our study also raises a number of interesting research questions.",
"We consider the problem of reallocating indivisible objects amongst a set of agents when the preference ordering of each agent may contain indifferences. The same model, but with strict preferences, goes back to the seminal work of Shapley and Scarf in 1974. When preferences are strict, we now know that the Top-Trading Cycles (TTC) mechanism invented by Gale is Pareto efficient, strategy-proof, and finds a core allocation, and that it is the only mechanism satisfying these properties. In the extensive literature on this problem since then, the TTC mechanism has been characterized in multiple ways, establishing its central role within the class of all allocation mechanisms. The question motivating our work is the extent to which these results can be generalized to the setting with indifferences. Our main contribution is a general framework to design strategyproof mechanisms that find a Pareto optimal allocation in the weak-core. Along the way, we establish a sufficient condition for a mechanism (within a broad class of mechanisms) to be strategyproof and use this condition to design fast algorithms for finding a \"good\" reallocation. Our results generalize and unify two (different) mechanisms for the reallocation problem derived, independently of each other, by Manjunath and Jaramillo, and Alcalde-Unzu and Molis.",
""
]
}
|
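The record above surveys generalizations of Gale's Top Trading Cycles (TTC) procedure. As an illustrative aside, here is a minimal sketch of the basic TTC for strict preferences; the function name and data layout (dicts of rankings and endowments) are assumptions of this sketch, not taken from any of the cited papers.

```python
def top_trading_cycles(prefs, endowment):
    """Gale's Top Trading Cycles for a housing market with strict preferences.

    prefs[i]     -- agent i's ranking of houses, best first.
    endowment[i] -- the house agent i initially owns.
    Returns a dict mapping each agent to the house they finally receive.
    """
    assignment = {}
    remaining = set(endowment)                     # agents still in the market
    owner = {h: a for a, h in endowment.items()}   # current owner of each house
    while remaining:
        # Each remaining agent points at the owner of their best remaining house.
        points_to = {
            a: owner[next(h for h in prefs[a] if owner[h] in remaining)]
            for a in remaining
        }
        # Follow the pointers until some agent repeats: that closes a cycle.
        seen, path = {}, []
        a = next(iter(remaining))
        while a not in seen:
            seen[a] = len(path)
            path.append(a)
            a = points_to[a]
        cycle = path[seen[a]:]
        # Everyone on the cycle trades: each gets the house they pointed at.
        for b in cycle:
            assignment[b] = next(h for h in prefs[b] if owner[h] in remaining)
        remaining -= set(cycle)
    return assignment
```

For instance, with three agents endowed with houses A, B, C where agents 0 and 1 each prefer the other's house, those two trade in the first round and agent 2 keeps his own house.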
1906.10250
|
2955366701
|
Recently, the problem of allocating one resource per agent with initial endowments () has seen a renewed interest: indeed, while in the general domain Top Trading Cycle is known to be the only procedure guaranteeing Pareto-optimality, individual rationality, and strategy-proofness, the situation differs in single-peaked domains. Bade (2019) presented the Crawler, an alternative procedure enjoying the same properties (with the additional advantage of being implementable in obviously dominant strategies); while (2015) showed that allowing mutually beneficial swap-deals among the agents was already enough to guarantee Pareto-optimality. In this paper we significantly deepen our understanding of these decentralized procedures: we show in particular that single-peaked domains happen to be "maximal" if one wishes to guarantee this convergence property. Interestingly, we also observe that the set of allocations reachable by swap-deals always contains the outcome of the Crawler. To further investigate how these different mechanisms compare, we pay special attention to the average and minimum rank of the resource obtained by the agents in the outcome allocation. We provide theoretical bounds on the loss potentially induced by these procedures with respect to these criteria, and complement these results with an extensive experimental study which shows how different variants of swap dynamics behave. In fact, even the simplest dynamics exhibit very good results, and it is possible to further guide the process towards our objectives, if one is ready to sacrifice a bit in terms of decentralization. On our way, we also show that a simple variant of the Crawler allows one to check efficiently that an allocation is Pareto-optimal in single-peaked domains.
|
@cite_1 explored another direction by restricting preferences to single-peaked domains. In this case she presents the Crawler, which satisfies the same properties as TTC, hence overcoming Ma's result on the single-peaked domain.
|
{
"cite_N": [
"@cite_1"
],
"mid": [
"2906482047"
],
"abstract": [
"The crawler is a new efficient, strategyproof, and individually rational mechanism for housing markets with single-peaked preferences. In a housing market each agent is endowed with exactly one house. These houses are ordered - by their size for example - and all agents' preferences are single-peaked with respect to that order. The crawler screens agents in order of their houses' sizes, starting with the smallest. The first agent who does not want to move to a larger house is matched with his most preferred house. Agents who currently occupy houses sized between this agent's original and chosen houses “crawl” to the next largest unmatched house. This process is repeated until all agents are matched. The crawler is easier to understand than Gale's top trading cycles and can be extended to allow for indifferences."
]
}
|
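The abstract above describes Bade's Crawler step by step. The following is a short sketch of that screening-and-crawling loop, under the assumption that houses are listed in their single-peaked order and agent i initially occupies the i-th house; the function name and data layout are illustrative, not from the paper.

```python
def crawler(prefs, houses):
    """Sketch of Bade's Crawler for a single-peaked housing market.

    houses   -- house names listed in the single-peaked order (e.g. by size);
                agent i initially occupies houses[i].
    prefs[i] -- agent i's full ranking of the houses, best first.
    Returns a dict mapping each agent to their assigned house.
    """
    matching = {}
    order = list(houses)                   # unmatched houses, smallest first
    occupants = list(range(len(houses)))   # occupants[k] lives in order[k]
    while order:
        for pos, agent in enumerate(occupants):
            best = next(h for h in prefs[agent] if h in order)
            # First agent who does not want a larger house gets matched.
            if order.index(best) <= pos:
                matching[agent] = best
                # Deleting the matched agent and house shifts every occupant
                # between them up by one position: exactly the "crawl".
                del occupants[pos]
                del order[order.index(best)]
                break
    return matching
```

In the test below, the agent in the largest house wants the medium one, so the middle agent crawls up into the vacated largest house.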
1906.10250
|
2955366701
|
Recently, the problem of allocating one resource per agent with initial endowments () has seen a renewed interest: indeed, while in the general domain Top Trading Cycle is known to be the only procedure guaranteeing Pareto-optimality, individual rationality, and strategy-proofness, the situation differs in single-peaked domains. Bade (2019) presented the Crawler, an alternative procedure enjoying the same properties (with the additional advantage of being implementable in obviously dominant strategies); while (2015) showed that allowing mutually beneficial swap-deals among the agents was already enough to guarantee Pareto-optimality. In this paper we significantly deepen our understanding of these decentralized procedures: we show in particular that single-peaked domains happen to be "maximal" if one wishes to guarantee this convergence property. Interestingly, we also observe that the set of allocations reachable by swap-deals always contains the outcome of the Crawler. To further investigate how these different mechanisms compare, we pay special attention to the average and minimum rank of the resource obtained by the agents in the outcome allocation. We provide theoretical bounds on the loss potentially induced by these procedures with respect to these criteria, and complement these results with an extensive experimental study which shows how different variants of swap dynamics behave. In fact, even the simplest dynamics exhibit very good results, and it is possible to further guide the process towards our objectives, if one is ready to sacrifice a bit in terms of decentralization. On our way, we also show that a simple variant of the Crawler allows one to check efficiently that an allocation is Pareto-optimal in single-peaked domains.
|
Following this line of work, we assume single-peaked preferences in this paper. This domain of preferences was introduced by @cite_36 and @cite_16 . It has been studied more specifically in voting theory and is now a very common preference domain. Numerous works have used single-peaked preferences in the context of fair division. @cite_23 studied the fair division problem with single-peaked preferences and divisible objects. He defines and characterizes the uniform rule, the unique strategy-proof, efficient and anonymous allocation rule in this setting. The fairness properties of this rule have been subsequently explored by @cite_17 and @cite_6 @cite_2 , who showed that it is envy-free, one-sided resource monotonic and consistent. As already mentioned, @cite_15 extended this research area to indivisible resources and considered the problem of assigning objects to a line under single-peaked preferences. Subsequently, @cite_18 investigated the computational aspects of this assignment problem.
|
{
"cite_N": [
"@cite_18",
"@cite_36",
"@cite_6",
"@cite_23",
"@cite_2",
"@cite_15",
"@cite_16",
"@cite_17"
],
"mid": [
"2598782820",
"2014607560",
"2088903596",
"2078793241",
"",
"1999442185",
"2036910311",
"2041385195"
],
"abstract": [
"We consider the problem of assigning agents to slots on a line, where only one agent can be served at a slot and each agent prefers to be served as close as possible to his target. We introduce a general approach to compute aggregate gap-minimizing assignments, as well as gap-egalitarian assignments. The approach relies on an algorithm which is shown to be faster than general purpose algorithms for the assignment problem. We also extend the approach to probabilistic assignments and explore the computational features of existing, as well as new, methods for this setting.",
"When a decision is reached by voting or is arrived at by a group all of whose members are not in complete accord, there is no part of economic theory which applies. This paper is intended to help fill this gap; to provide a type of reasoning which will contribute to the development of the theory of tradeunions, the firm, and the cartel; and to provide the basis for a theory of the equilibrium distribution of taxation or of public expenditure. Still other uses of the theory might be not less important. For reasons of space we avoid discussion of many points that demand fuller treatment and only attempt to indicate the course of the argument.",
"Abstract We consider the problem of fairly allocating an infinitely divisible commodity among agents with single-peaked preferences. We search for methods of performing this division, or solutions , that satisfy the following property of consistency : any recommendation made for any economy is in agreement with the recommendation made for any \"reduced\" economy obtained by imagining the departure of some of the agents with their allotted consumptions. Our main result is that essentially all efficient subsolutions of the no-envy solution satisfying consistency must contain a certain solution known as the uniform rule. We also characterize the uniform rule on the basis of a \"converse\" of consistency and the distributional requirement of individual rationality from equal division. Journal of Economic Literature Classification Numbers: D63, D71.",
"",
"",
"We consider the problem of assigning agents to slots on a line, where only one agent can be served at a slot and each agent prefers to be served as close as possible to his target. Our focus is on aggregate gap minimizing methods, i.e., those that minimize the total gap between targets and assigned slots. We first consider deterministic assignment of agents to slots, and provide a direct method for testing if a given deterministic assignment is aggregate gap minimizing. We then consider probabilistic assignment of agents to slots, and make use of the previous method to propose an aggregate gap minimizing modification of the classic random priority method to solve this class of problems. We also provide some logical relations in our setting among standard axioms in the literature on assignment problems, and explore the robustness of our results to several extensions of our setting.",
"Originally published in 1951, Social Choice and Individual Values introduced \"Arrow's Impossibility Theorem\" and founded the field of social choice theory in economics and political science. This new edition, including a new foreword by Nobel laureate Eric Maskin, reintroduces Arrow's seminal book to a new generation of students and researchers. \"Far beyond a classic, this small book unleashed the ongoing explosion of interest in social choice and voting theory. A half-century later, the book remains full of profound insight: its central message, 'Arrow's Theorem,' has changed the way we think.\"-Donald G. Saari, author of Decisions and Elections: Explaining the Unexpected",
"We consider the problem of allocating some amount of a commodity among a group of agents with single-peaked preferences. We show that the uniform rule is the only rule satisfying equal treatment of equals, Pareto efficiency, and strategy-proofness. This characterization strengthens two interesting results due to Sprumont (1991). Our method of proof involves only elementary arguments."
]
}
|
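Several records above restrict attention to single-peaked preferences. As an illustrative aside, here is the standard contiguous-interval test for single-peakedness of a strict ranking along a fixed axis; all names are placeholders of this sketch.

```python
def is_single_peaked(ranking, axis):
    """Test whether a strict ranking (best first) is single-peaked
    with respect to the left-to-right order given by axis.

    Walking down the ranking from the peak must always extend a
    contiguous interval of the axis; any jump breaks single-peakedness.
    """
    pos = {x: i for i, x in enumerate(axis)}
    lo = hi = pos[ranking[0]]          # start at the peak
    for x in ranking[1:]:
        if pos[x] == lo - 1:
            lo -= 1                    # interval grows one step to the left
        elif pos[x] == hi + 1:
            hi += 1                    # ... or one step to the right
        else:
            return False
    return True
```

For example, on the axis a < b < c < d, the ranking b > c > a > d is single-peaked (peak b, interval growing outwards), while b > d > a > c is not.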
1906.10250
|
2955366701
|
Recently, the problem of allocating one resource per agent with initial endowments () has seen a renewed interest: indeed, while in the general domain Top Trading Cycle is known to be the only procedure guaranteeing Pareto-optimality, individual rationality, and strategy-proofness, the situation differs in single-peaked domains. Bade (2019) presented the Crawler, an alternative procedure enjoying the same properties (with the additional advantage of being implementable in obviously dominant strategies); while (2015) showed that allowing mutually beneficial swap-deals among the agents was already enough to guarantee Pareto-optimality. In this paper we significantly deepen our understanding of these decentralized procedures: we show in particular that single-peaked domains happen to be "maximal" if one wishes to guarantee this convergence property. Interestingly, we also observe that the set of allocations reachable by swap-deals always contains the outcome of the Crawler. To further investigate how these different mechanisms compare, we pay special attention to the average and minimum rank of the resource obtained by the agents in the outcome allocation. We provide theoretical bounds on the loss potentially induced by these procedures with respect to these criteria, and complement these results with an extensive experimental study which shows how different variants of swap dynamics behave. In fact, even the simplest dynamics exhibit very good results, and it is possible to further guide the process towards our objectives, if one is ready to sacrifice a bit in terms of decentralization. On our way, we also show that a simple variant of the Crawler allows one to check efficiently that an allocation is Pareto-optimal in single-peaked domains.
|
Most of the procedures discussed so far, among which the TTC algorithm, are centralized procedures which rely on a benevolent entity to proceed. Of particular interest for us are decentralized procedures, on which a growing literature exists. @cite_21 and @cite_14 respectively introduced the descending demand procedure and the undercut procedure, two semi-decentralized procedures where the agents announce their preferred resources to a referee. These procedures return envy-free and balanced allocations when there are two agents. Another semi-decentralized procedure is the picking sequence, in which agents take turns picking one of the remaining resources.
|
{
"cite_N": [
"@cite_14",
"@cite_21"
],
"mid": [
"2156641215",
"2063574089"
],
"abstract": [
"We propose a procedure for dividing indivisible items between two players in which each player ranks the items from best to worst. It ensures that each player receives a subset of items that it values more than the other player's complementary subset, given that such an envy-free division is possible. We show that the possibility of one player's undercutting the other's proposal, and implementing the reduced subset for himself or herself, makes the proposer \"reasonable\" and generally leads to an envy-free division, even when the players rank items exactly the same. Although the undercut procedure is manipulable, each player's maximin strategy is to be truthful. Applications of the undercut procedure are briefly discussed.",
"The paper investigates how far a particular procedure, called the “descending demand procedure,” can take us in finding equitable allocations of indivisible goods. Both interpersonal and intrapersonal criteria of equitability are considered. It is shown that the procedure generally fares well on an interpersonal criterion of “balancedness”; specifically, the resulting allocations are Pareto-optimal and maximize the well-being of the worst-off individual. As a criterion of intrapersonal equitability, the property of envy-freeness is considered. To accommodate envy-freeness, a modification of the basic procedure is suggested. With two individuals, the modified procedure is shown to select the envy-free allocations that are balanced, i.e. the allocations that maximize the well-being of the worse-off individual among all envy-free allocations."
]
}
|
1906.10250
|
2955366701
|
Recently, the problem of allocating one resource per agent with initial endowments () has seen a renewed interest: indeed, while in the general domain Top Trading Cycle is known to be the only procedure guaranteeing Pareto-optimality, individual rationality, and strategy-proofness, the situation differs in single-peaked domains. Bade (2019) presented the Crawler, an alternative procedure enjoying the same properties (with the additional advantage of being implementable in obviously dominant strategies); while (2015) showed that allowing mutually beneficial swap-deals among the agents was already enough to guarantee Pareto-optimality. In this paper we significantly deepen our understanding of these decentralized procedures: we show in particular that single-peaked domains happen to be "maximal" if one wishes to guarantee this convergence property. Interestingly, we also observe that the set of allocations reachable by swap-deals always contains the outcome of the Crawler. To further investigate how these different mechanisms compare, we pay special attention to the average and minimum rank of the resource obtained by the agents in the outcome allocation. We provide theoretical bounds on the loss potentially induced by these procedures with respect to these criteria, and complement these results with an extensive experimental study which shows how different variants of swap dynamics behave. In fact, even the simplest dynamics exhibit very good results, and it is possible to further guide the process towards our objectives, if one is ready to sacrifice a bit in terms of decentralization. On our way, we also show that a simple variant of the Crawler allows one to check efficiently that an allocation is Pareto-optimal in single-peaked domains.
|
Following the development of multi-agent systems, fully decentralized procedures have been defined through local exchanges between agents, in the spirit of Pigou-Dalton deals. @cite_0 considered the problem of reallocating tasks among individually rational agents. @cite_10 and @cite_28 investigated different complexity problems in this setting. @cite_29 and @cite_8 respectively characterized the class of deals and the class of preferences required to reach socially optimal allocations. @cite_5 focused on reaching efficient and envy-free allocations. Similar procedures were also introduced in the area of two-sided matching.
|
{
"cite_N": [
"@cite_8",
"@cite_28",
"@cite_29",
"@cite_0",
"@cite_5",
"@cite_10"
],
"mid": [
"2155400215",
"2340867128",
"",
"2181896167",
"1574410118",
"2103331030"
],
"abstract": [
"We investigate the properties of an abstract negotiation framework where agents autonomously negotiate over allocations of indivisible resources. In this framework, reaching an allocation that is optimal may require very complex multilateral deals. Therefore, we are interested in identifying classes of valuation functions such that any negotiation conducted by means of deals involving only a single resource at a time is bound to converge to an optimal allocation whenever all agents model their preferences using these functions. In the case of negotiation with monetary side payments amongst self-interested but myopic agents, the class of modular valuation functions turns out to be such a class. That is, modularity is a sufficient condition for convergence in this framework. We also show that modularity is not a necessary condition. Indeed, there can be no condition on individual valuation functions that would be both necessary and sufficient in this sense. Evaluating conditions formulated with respect to the whole profile of valuation functions used by the agents in the system would be possible in theory, but turns out to be computationally intractable in practice. Our main result shows that the class of modular functions is maximal in the sense that no strictly larger class of valuation functions would still guarantee an optimal outcome of negotiation, even when we permit more general bilateral deals. We also establish similar results in the context of negotiation without side payments.",
"Reallocating resources to get mutually beneficial outcomes is a fundamental problem in various multi-agent settings. In the first part of the paper we focus on the setting in which agents express additive cardinal utilities over objects. We present computational hardness results as well as polynomial-time algorithms for testing Pareto optimality under different restrictions such as two utility values or lexicographic utilities. In the second part of the paper we assume that agents express only their (ordinal) preferences over single objects, and that their preferences are additively separable. In this setting, we present characterizations and polynomial-time algorithms for possible and necessary Pareto optimality.",
"",
"We analyze task reallocation where individually rational (IR) agents (re)contract tasks among themselves based on marginal costs. A task allocation graph is introduced as a tool for analyzing contract types. Traditional single task contracts always have a short path (sequence of contracts) to the optimal task allocation but an IR path may not exist, or it may not be short. We analyze an algorithm for finding the shortest IR path. Next we introduce cluster contracts, swaps, and multiagent contracts. Each of the four contract types avoids some local optima that the others do not. Even if the protocol is equipped with all four types, local optima exist. To attack this problem, we introduce OCSM-contracts which combine the ideas behind the four earlier types into an atomic contract type. If the protocol is equipped with OCSM-contracts, any sequence of IR contracts leads to the optimal task allocation in a finite number of steps: an oracle--or speculation--is not needed for choosing the path (no subset of OCSM-contracts suffices even with an oracle). This means that the multiagent search does not need to backtrack. This is a powerful result for small problem instances. For large ones, the anytime feature of our multicontract-type algorithm--with provably monotonic improvement of each agent’s solution--is more important.",
"Part 1 Machines that make deals: the premise machine encounters social engineering for machines scenarios how does this differ from AI? how does this differ from game theory? Part 2 Interaction mechanisms: the negotiation problem in different domains attributes of negotiation mechanisms assumptions incentive compatibility. Part 3 Task-oriented domains: domain definition attributes and examples a negotiation mechanism evaluation of the negotiation mechanism an alternative, one-step protocol mechanisms that maximize the product of utilities the bottom line. Part 4 Deception-free protocols: non-manipulable negotiation mechanisms probabilistic deals subadditive domains concave domains modular domains summary of incentive compatible mechanisms the bottom line. Part 5 State-oriented domains: side-effects in encounters domain definition attributes and examples a negotiation mechanism worth of a goal conflict resolution semi-co-operative deals in non-conflict situations unified negotiation protocols (UNP) multi-plan deals the hierarchy of deal types - summary unbounded worth of a goal - tidy agents the bottom line. Part 6 Strategic manipulation: negotiation with incomplete information incomplete information about worth of goals using the revelation principle to re-design the mechanisms the bottom line. Part 7 Worth-oriented domains: goal relaxation domain definition one agent best plan negotiation over sub-optimal states examples of worth functions the bottom line. Appendices: strict tolerant mechanisms some related work proofs.",
"We study the complexity of a multilateral negotiation framework where autonomous agents agree on a sequence of deals to exchange sets of discrete resources in order to both further their own goals and to achieve a distribution of resources that is socially optimal. When analysing such a framework, we can distinguish different aspects of complexity: How many deals are required to reach an optimal allocation of resources? How many communicative exchanges are required to agree on one such deal? How complex a communication language do we require? And finally, how complex is the reasoning task faced by each agent? This paper presents a number of results pertaining, in particular, to the first of these questions."
]
}
|
1906.10250
|
2955366701
|
Recently, the problem of allocating one resource per agent with initial endowments () has seen a renewed interest: indeed, while in the general domain Top Trading Cycle is known to be the only procedure guaranteeing Pareto-optimality, individual rationality, and strategy-proofness, the situation differs in single-peaked domains. Bade (2019) presented the Crawler, an alternative procedure enjoying the same properties (with the additional advantage of being implementable in obviously dominant strategies); while (2015) showed that allowing mutually beneficial swap-deals among the agents was already enough to guarantee Pareto-optimality. In this paper we significantly deepen our understanding of these decentralized procedures: we show in particular that single-peaked domains happen to be "maximal" if one wishes to guarantee this convergence property. Interestingly, we also observe that the set of allocations reachable by swap-deals always contains the outcome of the Crawler. To further investigate how these different mechanisms compare, we pay special attention to the average and minimum rank of the resource obtained by the agents in the outcome allocation. We provide theoretical bounds on the loss potentially induced by these procedures with respect to these criteria, and complement these results with an extensive experimental study which shows how different variants of swap dynamics behave. In fact, even the simplest dynamics exhibit very good results, and it is possible to further guide the process towards our objectives, if one is ready to sacrifice a bit in terms of decentralization. On our way, we also show that a simple variant of the Crawler allows one to check efficiently that an allocation is Pareto-optimal in single-peaked domains.
|
The idea of using swap deals was explored for instance by @cite_31 , who studied barter exchange networks. @cite_34 and @cite_12 studied dynamics of swap-deals by considering an underlying social network constraining the possible interactions of the agents, and focusing on complexity issues. These results were recently extended by @cite_19 .
|
{
"cite_N": [
"@cite_19",
"@cite_31",
"@cite_34",
"@cite_12"
],
"mid": [
"2064109654",
"2296489440",
"2741643477",
"2888333077"
],
"abstract": [
"",
"Of late online social networks have become popular, with interest spanning various aspects including search, analysis, mining, and their potential use for item barter exchange markets. The idea is that users can leverage their social network for exchanging items they possess with other users. The problem of generating recommendations for item exchanges between users, consisting of synchronous exchange cycles, has been investigated [2]. In this paper, we identify the shortcomings of the above exchange model and propose an asynchronous model that makes use of credit points. Rather than insist on exchanging items synchronously, we award points to users whenever they give items to other users, which can be redeemed later. Points and their redemption raise an issue of fairness, which intuitively means users who contribute more should have a greater priority over others for receiving items they wish for. We focus on fairness maximization and prove that it is NP-hard and cannot be approximated within any factor in polynomial time unless P=NP. We then develop efficient heuristic algorithms, and experimentally demonstrate their effectiveness and scalability on both synthetic data and a real dataset from readitswapit.co.uk.",
"This article deals with object allocation where each agent receives a single item. Starting from an initial endowment, the agents can be better off by exchanging their objects. However, not all trades are likely because some participants are unable to communicate. By considering that the agents are embedded in a social network, we propose to study the allocations emerging from a sequence of simple swaps between pairs of neighbors in the network. This model raises natural questions regarding (i) the reachability of a given assignment, (ii) the ability of an agent to obtain a given object, and (iii) the search of Pareto-efficient allocations. We investigate the complexity of these problems by providing, according to the structure of the social network, polynomial and NP-complete cases.",
"We examine a resource allocation problem where each agent is to be assigned exactly one object. Agents are initially endowed with a resource that they can swap with one another. However, not all exchanges are plausible: we represent required connections between agents with a social network. Agents may only perform pairwise exchanges with their neighbors and only if it brings them preferred objects. We analyze this distributed process through two dual questions. Could an agent obtain a certain object if the swaps occurred favourably? Can an agent be guaranteed a certain level of satisfaction regardless of the actual exchanges? These questions are investigated through parameterized complexity, focusing on budget constraints such as the number of exchanges an agent may be involved in or the total duration of the process."
]
}
|
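The records above study dynamics of swap-deals: pairs of agents exchange their items whenever both strictly prefer the other's item, until no such pair remains. A minimal sketch under illustrative names, ignoring the social-network constraints considered in the cited works:

```python
def swap_dynamics(prefs, allocation):
    """Apply mutually improving pairwise swaps until none is left.

    prefs[i]   -- agent i's ranking of items, best first.
    allocation -- dict agent -> item (one item per agent).
    Returns the allocation reached when no swap-deal remains.
    Terminates because every swap strictly lowers the sum of ranks.
    """
    rank = {a: {x: k for k, x in enumerate(prefs[a])} for a in prefs}
    alloc = dict(allocation)
    agents = list(alloc)
    improved = True
    while improved:
        improved = False
        for i in agents:
            for j in agents:
                if i < j and (rank[i][alloc[j]] < rank[i][alloc[i]]
                              and rank[j][alloc[i]] < rank[j][alloc[j]]):
                    # Both agents strictly prefer each other's item: swap.
                    alloc[i], alloc[j] = alloc[j], alloc[i]
                    improved = True
    return alloc
```

With two agents who each hold the other's favorite item, a single swap-deal suffices to reach the (Pareto-optimal) allocation.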
1906.10317
|
2956120409
|
We present an approach to estimate the severity of traffic-related accidents in aggregated (area-level) and disaggregated (point-level) data. Exploring spatial features, we measure the complexity of road networks using several area-level variables. Also using temporal and other situational features from open data for New York City, we use Gradient Boosting models for inference and measuring feature importance, along with Gaussian Processes to model spatial dependencies in the data. The results show the significant importance of complexity in the aggregated model, as well as of other features in prediction, which may be helpful in framing policies and targeting interventions for preventing severe traffic-related accidents and injuries.
|
A significant amount of literature can be attributed to modeling traffic accidents and their severity using a diverse set of variables. Features like curvature, road width, urban/rural area and gender of driver have been shown to be significant in accident modeling @cite_0 . Another work @cite_2 describes models for predicting the expected number of accidents at urban junctions and road links as accurately as possible, explaining more than 60% of the systematic variation.
|
{
"cite_N": [
"@cite_0",
"@cite_2"
],
"mid": [
"1973345185",
"1981655153"
],
"abstract": [
"Accident prediction models are invaluable tools that have many applications in road safety analysis. However, there are certain statistical issues related to accident modeling that either deserve further attention or have not been dealt with adequately in the road safety literature. This paper discusses and illustrates how to deal with two statistical issues related to modeling accidents using Poisson and negative binomial regression. The first issue is that of model building or deciding which explanatory variables to include in an accident prediction model. The study differentiates between applications for which it is advisable to avoid model over-fitting and other applications for which it is desirable to fit the model to the data as closely as possible. It then suggests procedures for developing parsimonious models, i.e., models that are not over-fitted, and best-fit models. The second issue discussed in the paper is that of outlier analysis. The study suggests a procedure for the identification and exclusion of extremely influential outliers from the development of Poisson and negative binomial regression models. The procedures suggested for model building and conducting outlier analysis are more straightforward to apply in the case of Poisson regression models because of an added complexity presented by the shape parameter of the negative binomial distribution. The paper, therefore, presents flowcharts detailing the application of the procedures when modeling is carried out using negative binomial regression. The described procedures are then applied in the development of negative binomial accident prediction models for the urban arterials of the cities of Vancouver and Richmond located in the province of British Columbia, Canada.",
"This paper describes some of the main findings from two separate studies on accident prediction models for urban junctions and urban road links described in [Uheldsmodel for bygader-Del1: Modeller for 3-og 4-benede kryds. Notat 22, The Danish Road Directorate, 1995; Uheldsmodel for bygader- Del2: Modeller for straekninger. Notat 59, The Danish Road Directorate, 1998] ( Greibe and Hemdorff, 1995 , Greibe and Hemdorff, 1998 ). The main objective for the studies was to establish simple, practicable accident models that can predict the expected number of accidents at urban junctions and road links as accurately as possible. The models can be used to identify factors affecting road safety and in relation to ‘black spot’ identification and network safety analysis undertaken by local road authorities. The accident prediction models are based on data from 1036 junctions and 142 km road links in urban areas. Generalised linear modelling techniques were used to relate accident frequencies to explanatory variables. The estimated accident prediction models for road links were capable of describing more than 60% of the systematic variation (‘percentage-explained’ value) while the models for junctions had lower values. This indicates that modelling accidents for road links is less complicated than for junctions, probably due to a more uniform accident pattern and a simpler traffic flow exposure or due to lack of adequate explanatory variables for junctions. Explanatory variables describing road design and road geometry proved to be significant for road link models but less important in junction models. The most powerful variable for all models was motor vehicle traffic flow."
]
}
|
1906.10324
|
2955620119
|
In real-world applications, the Perspective-n-Point (PnP) problem generally must be solved over a sequence of images in which a set of drift-prone features is tracked over time. In this paper, we consider both the temporal dependency of camera poses and the uncertainty of features for sequential camera pose estimation. Using the Extended Kalman Filter (EKF), an a priori estimate of the camera pose is calculated from the camera motion model and then corrected by minimizing the reprojection error of the reference points. Experimental results, using both simulated and real data, demonstrate that the proposed method improves the robustness of the camera pose estimation, in the presence of noise, compared to the state-of-the-art.
|
P3P @cite_8 is one of the first algorithms for the pose estimation of calibrated cameras from 3 correspondences between 3D reference points and their 2D projections. In addition, a variety of PnP algorithms derive closed-form solutions from a limited number of points, namely P4P @cite_8 and P5P @cite_6 . Since these algorithms use only a small subset of the available correspondences, they are sensitive to noise. Other traditional PnP solutions place no restriction on the number of points, but they are computationally expensive. For example, the time complexity of @cite_24 , the lowest-complexity method, is @math , yet it is very sensitive to noise. The time complexities of more robust methods like @cite_19 and @cite_27 increase significantly to @math and @math respectively.
|
{
"cite_N": [
"@cite_8",
"@cite_6",
"@cite_24",
"@cite_19",
"@cite_27"
],
"mid": [
"2085261163",
"2095627417",
"2097251816",
"2134237713",
"2122612384"
],
"abstract": [
"A new paradigm, Random Sample Consensus (RANSAC), for fitting a model to experimental data is introduced. RANSAC is capable of interpreting/smoothing data containing a significant percentage of gross errors, and is thus ideally suited for applications in automated image analysis where interpretation is based on the data provided by error-prone feature detectors. A major portion of this paper describes the application of RANSAC to the Location Determination Problem (LDP): Given an image depicting a set of landmarks with known locations, determine that point in space from which the image was obtained. In response to a RANSAC requirement, new results are derived on the minimum number of landmarks needed to obtain a solution, and algorithms are presented for computing these minimum-landmark solutions in closed form. These results provide the basis for an automatic system that can solve the LDP under difficult viewing conditions.",
"We describe two direct quasilinear methods for camera pose (absolute orientation) and calibration from a single image of 4 or 5 known 3D points. They generalize the 6 point 'Direct Linear Transform' method by incorporating partial prior camera knowledge, while still allowing some unknown calibration parameters to be recovered. Only linear algebra is required, the solution is unique in non-degenerate cases, and additional points can be included for improved stability. Both methods fail for coplanar points, but we give an experimental eigendecomposition based one that handles both planar and nonplanar cases. Our methods use recent polynomial solving technology, and we give a brief summary of this. One of our aims was to try to understand the numerical behaviour of modern polynomial solvers on some relatively simple test cases, with a view to other vision applications.",
"This paper concerns an efficient algorithm for the solution of the exterior orientation problem. Orthogonal decompositions are used to first isolate the unknown depths of feature points in the camera reference frame, allowing the problem to be reduced to an absolute orientation with scale problem, which is solved using the singular value decomposition (SVD). The key feature of this approach is the low computational cost compared to existing approaches.",
"The determination of camera position and orientation from known correspondences of 3D reference points and their images is known as pose estimation in computer vision and space resection in photogrammetry. It is well-known that from three corresponding points there are at most four algebraic solutions. Less appears to be known about the cases of four and five corresponding points. We propose a family of linear methods that yield a unique solution to 4- and 5-point pose determination for generic reference points. We first review the 3-point algebraic method. Then we present our two-step, 4-point and one-step, 5-point linear algorithms. The 5-point method can also be extended to handle more than five points. Finally, we demonstrate our methods on both simulated and real images. We show that they do not degenerate for coplanar configurations and even outperform the special linear algorithm for coplanar configurations in practice.",
"Estimation of camera pose from an image of n points or lines with known correspondence is a thoroughly studied problem in computer vision. Most solutions are iterative and depend on nonlinear optimization of some geometric constraint, either on the world coordinates or on the projections to the image plane. For real-time applications, we are interested in linear or closed-form solutions free of initialization. We present a general framework which allows for a novel set of linear solutions to the pose estimation problem for both n points and n lines. We then analyze the sensitivity of our solutions to image noise and show that the sensitivity analysis can be used as a conservative predictor of error for our algorithms. We present a number of simulations which compare our results to two other recent linear algorithms, as well as to iterative approaches. We conclude with tests on real imagery in an augmented reality setup."
]
}
|
1906.10324
|
2955620119
|
In real-world applications, the Perspective-n-Point (PnP) problem generally must be solved over a sequence of images in which a set of drift-prone features is tracked over time. In this paper, we consider both the temporal dependency of camera poses and the uncertainty of features for sequential camera pose estimation. Using the Extended Kalman Filter (EKF), an a priori estimate of the camera pose is calculated from the camera motion model and then corrected by minimizing the reprojection error of the reference points. Experimental results, using both simulated and real data, demonstrate that the proposed method improves the robustness of the camera pose estimation, in the presence of noise, compared to the state-of-the-art.
|
Efficient PnP (EPnP) @cite_15 was the first efficient non-iterative O(n) solution. EPnP represents the reference points by a weighted sum of four virtual control points. The problem is then solved using fourth-order polynomials with simple linearization techniques. The Robust PnP (RPnP) @cite_21 divides the reference points into 3-point subsets in order to generate a fourth-order polynomial for each subset; the squared sum of those polynomials is then used as a cost function.
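The control-point idea at the heart of EPnP can be illustrated in a few lines. The sketch below is not the full solver; the reference point and the four control points are invented. It only shows how a 3D point is expressed as a weighted sum of four non-coplanar virtual control points with weights summing to one:

```python
import numpy as np

# Illustrative sketch (not the full EPnP solver): express a 3D reference
# point as a weighted (barycentric) sum of four virtual control points.
C = np.array([[0., 0., 0.],
              [1., 0., 0.],
              [0., 1., 0.],
              [0., 0., 1.]])           # 4 non-coplanar control points

def barycentric_weights(p, C):
    # Solve sum_i a_i * C_i = p subject to sum_i a_i = 1 (4x4 system).
    A = np.vstack([C.T, np.ones(4)])
    b = np.append(p, 1.0)
    return np.linalg.solve(A, b)

p = np.array([0.3, 0.2, 0.4])
a = barycentric_weights(p, C)
print(a, C.T @ a)   # weights sum to 1 and reproduce p exactly
```

In EPnP these same weights are reused in the camera frame, which is what reduces the problem to estimating the camera-frame coordinates of only four points.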
|
{
"cite_N": [
"@cite_15",
"@cite_21"
],
"mid": [
"1991544872",
"1984667320"
],
"abstract": [
"We propose a non-iterative solution to the PnP problem--the estimation of the pose of a calibrated camera from n 3D-to-2D point correspondences--whose computational complexity grows linearly with n. This is in contrast to state-of-the-art methods that are O(n^5) or even O(n^8), without being more accurate. Our method is applicable for all n ≥ 4 and handles properly both planar and non-planar configurations. Our central idea is to express the n 3D points as a weighted sum of four virtual control points. The problem then reduces to estimating the coordinates of these control points in the camera referential, which can be done in O(n) time by expressing these coordinates as weighted sum of the eigenvectors of a 12×12 matrix and solving a small constant number of quadratic equations to pick the right weights. Furthermore, if maximal precision is required, the output of the closed-form solution can be used to initialize a Gauss-Newton scheme, which improves accuracy with negligible amount of additional time. The advantages of our method are demonstrated by thorough testing on both synthetic and real-data.",
"We propose a noniterative solution for the Perspective-n-Point (PnP) problem, which can robustly retrieve the optimum by solving a seventh order polynomial. The central idea consists of three steps: 1) to divide the reference points into 3-point subsets in order to achieve a series of fourth order polynomials, 2) to compute the sum of the square of the polynomials so as to form a cost function, and 3) to find the roots of the derivative of the cost function in order to determine the optimum. The advantages of the proposed method are as follows: First, it can stably deal with the planar case, ordinary 3D case, and quasi-singular case, and it is as accurate as the state-of-the-art iterative algorithms with much less computational time. Second, it is the first noniterative PnP solution that can achieve more accurate results than the iterative algorithms when no redundant reference points can be used (n ≤ 5). Third, large-size point sets can be handled efficiently because its computational complexity is O(n)."
]
}
|
1906.10324
|
2955620119
|
In real-world applications, the Perspective-n-Point (PnP) problem generally must be solved over a sequence of images in which a set of drift-prone features is tracked over time. In this paper, we consider both the temporal dependency of camera poses and the uncertainty of features for sequential camera pose estimation. Using the Extended Kalman Filter (EKF), an a priori estimate of the camera pose is calculated from the camera motion model and then corrected by minimizing the reprojection error of the reference points. Experimental results, using both simulated and real data, demonstrate that the proposed method improves the robustness of the camera pose estimation, in the presence of noise, compared to the state-of-the-art.
|
In the Direct-Least-Squares (DLS) @cite_20 method, the PnP problem is solved by minimizing a nonlinear geometric cost function. However, it suffers from rotational degeneracy since the Cayley representation is used for the rotations. The Accurate and Scalable PnP (ASPnP) @cite_5 and the Optimal PnP (OPnP) @cite_7 use a quaternion representation of rotation to overcome this problem and yield more accurate results.
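The rotation-parameterization issue can be made concrete. The sketch below (with made-up inputs) converts a possibly non-unit quaternion to a rotation matrix, as in the quaternion-based solvers; unlike the Cayley parameterization, it remains well defined for 180-degree rotations:

```python
import numpy as np

# Sketch: rotation matrix from a (possibly non-unit) quaternion, the
# representation used by OPnP-style solvers. The Cayley parameterization
# degenerates for 180-degree rotations, while a quaternion covers every
# rotation (q and -q give the same R).
def quat_to_rot(q):
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

# A 180-degree rotation about the z-axis: w = 0, the case where the
# Cayley representation breaks down.
R = quat_to_rot(np.array([0.0, 0.0, 0.0, 1.0]))
print(R)   # diag(-1, -1, 1)
```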
|
{
"cite_N": [
"@cite_5",
"@cite_7",
"@cite_20"
],
"mid": [
"2046335473",
"2142482451",
"2096758544"
],
"abstract": [
"",
"In this paper, we revisit the classical perspective-n-point (PnP) problem, and propose the first non-iterative O(n) solution that is fast, generally applicable and globally optimal. Our basic idea is to formulate the PnP problem into a functional minimization problem and retrieve all its stationary points by using the Gröbner basis technique. The novelty lies in a non-unit quaternion representation to parameterize the rotation and a simple but elegant formulation of the PnP problem into an unconstrained optimization problem. Interestingly, the polynomial system arising from its first-order optimality condition assumes two-fold symmetry, a nice property that can be utilized to improve speed and numerical stability of a Gröbner basis solver. Experiment results have demonstrated that, in terms of accuracy, our proposed solution is definitely better than the state-of-the-art O(n) methods, and even comparable with the reprojection error minimization method.",
"In this work, we present a Direct Least-Squares (DLS) method for computing all solutions of the perspective-n-point camera pose determination (PnP) problem in the general case (n ≥ 3). Specifically, based on the camera measurement equations, we formulate a nonlinear least-squares cost function whose optimality conditions constitute a system of three third-order polynomials. Subsequently, we employ the multiplication matrix to determine all the roots of the system analytically, and hence all minima of the LS, without requiring iterations or an initial guess of the parameters. A key advantage of our method is scalability, since the order of the polynomial system that we solve is independent of the number of points. We compare the performance of our algorithm with the leading PnP approaches, both in simulation and experimentally, and demonstrate that DLS consistently achieves accuracy close to the Maximum-Likelihood Estimator (MLE)."
]
}
|
1906.10324
|
2955620119
|
In real-world applications, the Perspective-n-Point (PnP) problem generally must be solved over a sequence of images in which a set of drift-prone features is tracked over time. In this paper, we consider both the temporal dependency of camera poses and the uncertainty of features for sequential camera pose estimation. Using the Extended Kalman Filter (EKF), an a priori estimate of the camera pose is calculated from the camera motion model and then corrected by minimizing the reprojection error of the reference points. Experimental results, using both simulated and real data, demonstrate that the proposed method improves the robustness of the camera pose estimation, in the presence of noise, compared to the state-of-the-art.
|
One of the fastest iterative PnP algorithms is the LHM method @cite_0 . It minimizes an error metric based on collinearity in object space and relies on an initial estimate of the camera pose obtained under a weak-perspective assumption. In contrast to LHM, the Procrustes PnP @cite_9 iteratively minimizes the error between the object points and the back-projected image points.
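The orthogonal Procrustes step underlying such methods admits a compact closed form via the SVD (the Kabsch solution). The sketch below uses synthetic points and is only meant to illustrate that core alignment step, not the full PnP pipeline:

```python
import numpy as np

# Orthogonal Procrustes step at the core of Procrustes-style PnP: find the
# rotation R minimizing ||R A - B||_F for paired 3xN point sets that share
# an origin (Kabsch algorithm). The data here is synthetic.
def procrustes_rotation(A, B):
    U, _, Vt = np.linalg.svd(B @ A.T)
    d = np.sign(np.linalg.det(U @ Vt))   # guard against reflections
    return U @ np.diag([1.0, 1.0, d]) @ Vt

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 10))
theta = 0.7
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
B = R_true @ A                            # rotated copy of the point set
print(np.allclose(procrustes_rotation(A, B), R_true))  # True
```

The determinant guard on the last singular direction is what keeps the estimate a proper rotation rather than a reflection.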
|
{
"cite_N": [
"@cite_0",
"@cite_9"
],
"mid": [
"2162440713",
"2061781270"
],
"abstract": [
"Determining the rigid transformation relating 2D images to known 3D geometry is a classical problem in photogrammetry and computer vision. Heretofore, the best methods for solving the problem have relied on iterative optimization methods which cannot be proven to converge and or which do not effectively account for the orthonormal structure of rotation matrices. We show that the pose estimation problem can be formulated as that of minimizing an error metric based on collinearity in object (as opposed to image) space. Using object space collinearity error, we derive an iterative algorithm which directly computes orthogonal rotation matrices and which is globally convergent. Experimentally, we show that the method is computationally efficient, that it is no less accurate than the best currently employed optimization methods, and that it outperforms all tested methods in robustness to outliers.",
"In this paper we formulate the Perspective-n-Point (a.k.a. exterior orientation) problem in terms of an instance of the anisotropic orthogonal Procrustes problem, and derive its solution. Experiments with synthetic and real data demonstrate that our method reaches the best trade-off between speed and accuracy. The MATLAB code reported in the paper testifies that it is also exceedingly simple to implement."
]
}
|
1906.10324
|
2955620119
|
In real-world applications, the Perspective-n-Point (PnP) problem generally must be solved over a sequence of images in which a set of drift-prone features is tracked over time. In this paper, we consider both the temporal dependency of camera poses and the uncertainty of features for sequential camera pose estimation. Using the Extended Kalman Filter (EKF), an a priori estimate of the camera pose is calculated from the camera motion model and then corrected by minimizing the reprojection error of the reference points. Experimental results, using both simulated and real data, demonstrate that the proposed method improves the robustness of the camera pose estimation, in the presence of noise, compared to the state-of-the-art.
|
With the possibility of outliers, it is necessary to incorporate an outlier-removal scheme like RANSAC @cite_8 . The Robust Efficient Procrustes PnP (REPPnP) @cite_1 combines an algebraic outlier rejection strategy with the linear formulation of the PnP solution in the EPnP algorithm. It sequentially removes correspondences whose algebraic errors exceed a specific threshold. Final results are obtained by iteratively solving the closed-form Orthogonal Procrustes problem. The authors extended their method to integrate image-point uncertainties, introducing the Covariant EPPnP (CEPPnP) @cite_17 . To incorporate feature uncertainties in EPnP, a Gaussian distribution models the error of each observed 2D feature point. The PnP solution is then formulated as a maximum likelihood problem, approximated by an unconstrained Sampson error function, which naturally penalizes the noisiest correspondences. However, as noted in the article, the feature uncertainties are assumed to be known in advance.
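The algebraic rejection idea can be illustrated on a generic homogeneous system. The toy sketch below (synthetic data; the sizes and threshold are arbitrary, and this is not the actual REPPnP system) iteratively estimates a 1D null space and discards rows whose algebraic residual exceeds a threshold:

```python
import numpy as np

# Toy sketch of algebraic outlier rejection on a homogeneous system
# M x = 0, loosely mirroring the REPPnP idea: estimate the 1D null space,
# drop rows whose algebraic residual |M_i x| exceeds a threshold, repeat.
def trim_null_space(M, thresh=0.1, max_iter=10):
    keep = np.ones(len(M), dtype=bool)
    x = None
    for _ in range(max_iter):
        _, _, Vt = np.linalg.svd(M[keep], full_matrices=False)
        x = Vt[-1]                     # right singular vector of the
        resid = np.abs(M @ x)          # smallest singular value
        new_keep = resid < thresh
        if np.array_equal(new_keep, keep):
            break
        keep = new_keep
    return x, keep

rng = np.random.default_rng(2)
x_true = np.array([1.0, -2.0, 0.5])
x_true /= np.linalg.norm(x_true)
basis = np.linalg.svd(x_true[None, :])[2][1:]   # orthogonal complement
inliers = rng.normal(size=(40, 2)) @ basis      # rows orthogonal to x_true
outliers = np.outer([1.0, -1.5, 2.0], x_true)   # rows violating M x = 0
x_hat, keep = trim_null_space(np.vstack([inliers, outliers]))
print(keep.sum(), abs(x_hat @ x_true))          # 40 rows kept, |cos| = 1
```

As in REPPnP, no full pose is computed and no points are reprojected during rejection; only the cheap algebraic residuals are thresholded.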
|
{
"cite_N": [
"@cite_1",
"@cite_17",
"@cite_8"
],
"mid": [
"1975461372",
"2039032788",
"2085261163"
],
"abstract": [
"We propose a real-time, robust to outliers and accurate solution to the Perspective-n-Point (PnP) problem. The main advantages of our solution are twofold: first, it integrates the outlier rejection within the pose estimation pipeline with a negligible computational overhead, and second, its scalability to arbitrarily large number of correspondences. Given a set of 3D-to-2D matches, we formulate the pose estimation problem as a low-rank homogeneous system where the solution lies on its 1D null space. Outlier correspondences are those rows of the linear system which perturb the null space and are progressively detected by projecting them on an iteratively estimated solution of the null space. Since our outlier removal process is based on an algebraic criterion which does not require computing the full-pose and reprojecting back all 3D points on the image plane at each step, we achieve speed gains of more than 100× compared to RANSAC strategies. An extensive experimental evaluation will show that our solution yields accurate results in situations with up to 50% of outliers, and can process more than 1000 correspondences in less than 5ms.",
"We propose a real-time and accurate solution to the Perspective-n-Point (PnP) problem –estimating the pose of a calibrated camera from n 3D-to-2D point correspondences– that exploits the fact that in practice the 2D position of not all 2D features is estimated with the same accuracy. Assuming a model of such feature uncertainties is known in advance, we reformulate the PnP problem as a maximum likelihood minimization approximated by an unconstrained Sampson error function, which naturally penalizes the most noisy correspondences. The advantages of this approach are thoroughly demonstrated in synthetic experiments where feature uncertainties are exactly known. Pre-estimating the features uncertainties in real experiments is, though, not easy. In this paper we model feature uncertainty as 2D Gaussian distributions representing the sensitivity of the 2D feature detectors to different camera viewpoints. When using these noise models with our PnP formulation we still obtain promising pose estimation results that outperform the most recent approaches.",
"A new paradigm, Random Sample Consensus (RANSAC), for fitting a model to experimental data is introduced. RANSAC is capable of interpreting/smoothing data containing a significant percentage of gross errors, and is thus ideally suited for applications in automated image analysis where interpretation is based on the data provided by error-prone feature detectors. A major portion of this paper describes the application of RANSAC to the Location Determination Problem (LDP): Given an image depicting a set of landmarks with known locations, determine that point in space from which the image was obtained. In response to a RANSAC requirement, new results are derived on the minimum number of landmarks needed to obtain a solution, and algorithms are presented for computing these minimum-landmark solutions in closed form. These results provide the basis for an automatic system that can solve the LDP under difficult viewing conditions."
]
}
|
1906.10324
|
2955620119
|
In real-world applications, the Perspective-n-Point (PnP) problem generally must be solved over a sequence of images in which a set of drift-prone features is tracked over time. In this paper, we consider both the temporal dependency of camera poses and the uncertainty of features for sequential camera pose estimation. Using the Extended Kalman Filter (EKF), an a priori estimate of the camera pose is calculated from the camera motion model and then corrected by minimizing the reprojection error of the reference points. Experimental results, using both simulated and real data, demonstrate that the proposed method improves the robustness of the camera pose estimation, in the presence of noise, compared to the state-of-the-art.
|
It is also worth mentioning that some algorithms like EPnP, REPPnP and CEPPnP propose separate solutions for planar and non-planar reference points. As a result, these methods may yield inaccurate results in cases with near-planar configurations @cite_7 .
|
{
"cite_N": [
"@cite_7"
],
"mid": [
"2142482451"
],
"abstract": [
"In this paper, we revisit the classical perspective-n-point (PnP) problem, and propose the first non-iterative O(n) solution that is fast, generally applicable and globally optimal. Our basic idea is to formulate the PnP problem into a functional minimization problem and retrieve all its stationary points by using the Gröbner basis technique. The novelty lies in a non-unit quaternion representation to parameterize the rotation and a simple but elegant formulation of the PnP problem into an unconstrained optimization problem. Interestingly, the polynomial system arising from its first-order optimality condition assumes two-fold symmetry, a nice property that can be utilized to improve speed and numerical stability of a Gröbner basis solver. Experiment results have demonstrated that, in terms of accuracy, our proposed solution is definitely better than the state-of-the-art O(n) methods, and even comparable with the reprojection error minimization method."
]
}
|
1906.10324
|
2955620119
|
In real-world applications, the Perspective-n-Point (PnP) problem generally must be solved over a sequence of images in which a set of drift-prone features is tracked over time. In this paper, we consider both the temporal dependency of camera poses and the uncertainty of features for sequential camera pose estimation. Using the Extended Kalman Filter (EKF), an a priori estimate of the camera pose is calculated from the camera motion model and then corrected by minimizing the reprojection error of the reference points. Experimental results, using both simulated and real data, demonstrate that the proposed method improves the robustness of the camera pose estimation, in the presence of noise, compared to the state-of-the-art.
|
MLPnP @cite_26 uses image-point uncertainties to derive a new maximum likelihood solution to the PnP problem. First, the uncertainties are propagated to the forward-projected bearing vectors. Then the null space of the bearing vectors is used to obtain a linear maximum likelihood solution. Finally, the result of the ML estimator is iteratively refined with Gauss-Newton optimization.
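The uncertainty-propagation step can be sketched with a numerical Jacobian. In the snippet below (the intrinsics and pixel covariance are made up, and this simplifies MLPnP's tangent-space reduction), a 2x2 image-point covariance is pushed through back-projection onto a unit bearing vector via Sigma_v = J Sigma_x J^T:

```python
import numpy as np

# First-order propagation of a 2x2 image-point covariance to a unit
# bearing vector, Sigma_v = J Sigma_x J^T. Intrinsics are invented.
fx = fy = 800.0
cx, cy = 320.0, 240.0

def bearing(u, v):
    d = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    return d / np.linalg.norm(d)

def propagate_cov(u, v, Sigma_px, eps=1e-5):
    # Numerical 3x2 Jacobian of the bearing w.r.t. the pixel coordinates.
    J = np.column_stack([
        (bearing(u + eps, v) - bearing(u - eps, v)) / (2 * eps),
        (bearing(u, v + eps) - bearing(u, v - eps)) / (2 * eps),
    ])
    return J @ Sigma_px @ J.T

Sigma_px = np.diag([1.0, 1.0])   # 1 px standard deviation per axis
Sigma_v = propagate_cov(400.0, 300.0, Sigma_px)
print(Sigma_v)   # rank-deficient 3x3: no variance along the bearing itself
```

Because the bearing has unit norm, the propagated covariance is singular along the bearing direction, which is why MLPnP works in the 2D tangent space of each vector.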
|
{
"cite_N": [
"@cite_26"
],
"mid": [
"2950738961"
],
"abstract": [
"In this paper, a statistically optimal solution to the Perspective-n-Point (PnP) problem is presented. Many solutions to the PnP problem are geometrically optimal, but do not consider the uncertainties of the observations. In addition, it would be desirable to have an internal estimation of the accuracy of the estimated rotation and translation parameters of the camera pose. Thus, we propose a novel maximum likelihood solution to the PnP problem that incorporates image observation uncertainties and remains real-time capable at the same time. Further, the presented method is general, as it works with 3D direction vectors instead of 2D image points and is thus able to cope with arbitrary central camera models. This is achieved by projecting (and thus reducing) the covariance matrices of the observations to the corresponding vector tangent space."
]
}
|
1906.10324
|
2955620119
|
In real-world applications, the Perspective-n-Point (PnP) problem generally must be solved over a sequence of images in which a set of drift-prone features is tracked over time. In this paper, we consider both the temporal dependency of camera poses and the uncertainty of features for sequential camera pose estimation. Using the Extended Kalman Filter (EKF), an a priori estimate of the camera pose is calculated from the camera motion model and then corrected by minimizing the reprojection error of the reference points. Experimental results, using both simulated and real data, demonstrate that the proposed method improves the robustness of the camera pose estimation, in the presence of noise, compared to the state-of-the-art.
|
There are some related studies in the field of visual servoing which use the EKF for camera pose estimation; for example, @cite_25 and @cite_22 use EKF and Iterative Adaptive EKF (IAEKF) respectively for real-time control of robot motion. Similar to our method, they formulate the control error in the image space. However, they use the Euler angle representation of the rotation matrix. Apart from the infamous gimbal lock problem, this adds to the computational complexity of the algorithm @cite_23 .
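The predict/correct structure shared by these EKF-based trackers can be shown on a toy linear model. The sketch below tracks a constant-velocity scalar target; the real filters estimate the full 6-DoF pose from reprojection errors, and all matrices here are invented for illustration:

```python
import numpy as np

# Generic EKF-style predict/update cycle on a toy constant-velocity model
# (state: position, velocity; only position is measured).
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])   # motion model
H = np.array([[1.0, 0.0]])              # measurement model
Q = 1e-3 * np.eye(2)                    # process noise
R = np.array([[1e-2]])                  # measurement noise

def ekf_step(x, P, z):
    # Predict: propagate the a priori estimate through the motion model.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: correct the prediction with the measurement residual.
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

x, P = np.zeros(2), np.eye(2)
for t in range(50):                      # target moving at 1 unit/s
    z = np.array([0.1 * t])              # noiseless position measurements
    x, P = ekf_step(x, P, z)
print(x)   # ≈ [4.9, 1.0]: position and velocity recovered
```

In the nonlinear pose-tracking case, F and H become Jacobians of the motion and reprojection models, but the two-step structure is identical; a quaternion state avoids the gimbal-lock issue noted above.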
|
{
"cite_N": [
"@cite_22",
"@cite_25",
"@cite_23"
],
"mid": [
"2166429996",
"569732077",
"1663961697"
],
"abstract": [
"The problem of estimating position and orientation (pose) of an object in real time constitutes an important issue for vision-based control of robots. Many vision-based pose-estimation schemes in robot control rely on an extended Kalman filter (EKF) that requires tuning of filter parameters. To obtain satisfactory results, EKF-based techniques rely on “known” noise statistics, initial object pose, and sufficiently high sampling rates for good approximation of measurement-function linearization. Deviations from such assumptions usually lead to degraded pose estimation during visual servoing. In this paper, a new algorithm, namely iterative adaptive EKF (IAEKF), is proposed by integrating mechanisms for noise adaptation and iterative-measurement linearization. The experimental results are provided to demonstrate the superiority of IAEKF in dealing with erroneous a priori statistics, poor pose initialization, variations in the sampling rate, and trajectory dynamics.",
"Abstract This paper concerns the position-based visual servo control of autonomous robotic manipulators in space. It focuses on the development of a real-time vision-based pose and motion estimation algorithm of a non-cooperative target by photogrammetry and extended Kalman filter for robotic manipulators to perform autonomous capture. Optical flow algorithm is adopted to track the target features in order to improve the image processing efficiency. Then, a close-loop position-based visual servo control strategy is devised to determine the desired pose of the end-effector at the rendezvous point based on the estimated pose and motion of the target. The corresponding desired joint angles of the robotic manipulator in the joint space are derived by the inverse kinematics of the robotic manipulator. The developed algorithm and position-based visual servo control strategy are validated experimentally by a custom built robotic manipulator with an eye-in-hand configuration. The experimental results demonstrate the proposed estimation algorithm and control scheme are feasible and effective.",
""
]
}
|
1906.10417
|
2954864319
|
Reinforcement learning (RL) methods have demonstrated their efficiency in simulation environments. However, many applications for which RL offers great potential, such as autonomous driving, are also safety critical and require a certified closed-loop behavior in order to meet safety specifications in the presence of physical constraints. This paper introduces a concept, called probabilistic model predictive safety certification (PMPSC), which can be combined with any RL algorithm and provides provable safety certificates in terms of state and input chance constraints for potentially large-scale systems. The certificate is realized through a stochastic tube that safely connects the current system state with a terminal set of states, that is known to be safe. A novel formulation in terms of a convex receding horizon problem allows a recursively feasible real-time computation of such probabilistic tubes, despite the presence of possibly unbounded disturbances. A design procedure for PMPSC relying on bayesian inference and recent advances in probabilistic set invariance is presented. Using a numerical car simulation, the method and its design procedure are illustrated by enhancing a simple RL algorithm with safety certificates.
|
Driven by the rapid progress in reinforcement learning, there is also a growing awareness of the safety aspects of machine learning systems @cite_5 ; see e.g. @cite_10 for a comprehensive overview. As opposed to most methods developed in the context of safe RL, the approach presented in this paper keeps the system safe at all times, including during exploration, and considers continuous state and action spaces. This is possible through the use of models and corresponding uncertainty estimates of the system, which can be sequentially improved by, e.g., an RL algorithm to allow greater exploration.
|
{
"cite_N": [
"@cite_5",
"@cite_10"
],
"mid": [
"2462906003",
"1845972764"
],
"abstract": [
"Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function (\"avoiding side effects\" and \"avoiding reward hacking\"), an objective function that is too expensive to evaluate frequently (\"scalable supervision\"), or undesirable behavior during the learning process (\"safe exploration\" and \"distributional shift\"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.",
"Safe Reinforcement Learning can be defined as the process of learning policies that maximize the expectation of the return in problems in which it is important to ensure reasonable system performance and or respect safety constraints during the learning and or deployment processes. We categorize and analyze two approaches of Safe Reinforcement Learning. The first is based on the modification of the optimality criterion, the classic discounted finite infinite horizon, with a safety factor. The second is based on the modification of the exploration process through the incorporation of external knowledge or the guidance of a risk metric. We use the proposed classification to survey the existing literature, as well as suggesting future directions for Safe Reinforcement Learning."
]
}
|
1906.10417
|
2954864319
|
Reinforcement learning (RL) methods have demonstrated their efficiency in simulation environments. However, many applications for which RL offers great potential, such as autonomous driving, are also safety critical and require a certified closed-loop behavior in order to meet safety specifications in the presence of physical constraints. This paper introduces a concept, called probabilistic model predictive safety certification (PMPSC), which can be combined with any RL algorithm and provides provable safety certificates in terms of state and input chance constraints for potentially large-scale systems. The certificate is realized through a stochastic tube that safely connects the current system state with a terminal set of states, that is known to be safe. A novel formulation in terms of a convex receding horizon problem allows a recursively feasible real-time computation of such probabilistic tubes, despite the presence of possibly unbounded disturbances. A design procedure for PMPSC relying on bayesian inference and recent advances in probabilistic set invariance is presented. Using a numerical car simulation, the method and its design procedure are illustrated by enhancing a simple RL algorithm with safety certificates.
|
Among model-free safe reinforcement learning methods, policy search algorithms have been proposed, e.g. @cite_44 , which provide safety guarantees in expectation by solving a constrained policy optimization problem using a modified trust-region policy gradient method @cite_50 . Efficient policy tuning with respect to best worst-case performance (including worst-case stability under physical constraints) can be achieved using Bayesian min-max optimization, see e.g. @cite_40 , or by safety-constrained Bayesian optimization as e.g. in @cite_52 @cite_38 . These techniques share the limitation that they need to be tailored to a task-specific class of policies. Furthermore, most of them require repeatedly executing experiments, which prohibits fully autonomous safe learning in closed loop.
|
{
"cite_N": [
"@cite_38",
"@cite_52",
"@cite_44",
"@cite_40",
"@cite_50"
],
"mid": [
"2100484286",
"2143346970",
"2962803570",
"2209113413",
"2949608212"
],
"abstract": [
"In this paper, the problem of safe exploration in the active learning context is considered. Safe exploration is especially important for data sampling from technical and industrial systems, e.g. combustion engines and gas turbines, where critical and unsafe measurements need to be avoided. The objective is to learn data-based regression models from such technical systems using a limited budget of measured, i.e. labelled, points while ensuring that critical regions of the considered systems are avoided during measurements. We propose an approach for learning such models and exploring new data regions based on Gaussian processes GP's. In particular, we employ a problem specific GP classifier to identify safe and unsafe regions, while using a differential entropy criterion for exploring relevant data regions. A theoretical analysis is shown for the proposed algorithm, where we provide an upper bound for the probability of failure. To demonstrate the efficiency and robustness of our safe exploration scheme in the active learning setting, we test the approach on a policy exploration task for the inverse pendulum hold up problem.",
"This paper introduces a learning-based robust control algorithm that provides robust stability and performance guarantees during learning. The approach uses Gaussian process (GP) regression based on data gathered during operation to update an initial model of the system and to gradually decrease the uncertainty related to this model. Embedding this data-based update scheme in a robust control framework guarantees stability during the learning process. Traditional robust control approaches have not considered online adaptation of the model and its uncertainty before. As a result, their controllers do not improve performance during operation. Typical machine learning algorithms that have achieved similar high-performance behavior by adapting the model and controller online do not provide the guarantees presented in this paper. In particular, this paper considers a stabilization task, linearizes the nonlinear, GP-based model around a desired operating point, and solves a convex optimization problem to obtain a linear robust controller. The resulting performance improvements due to the learning-based controller are demonstrated in experiments on a quadrotor vehicle.",
"For many applications of reinforcement learning it can be more convenient to specify both a reward function and constraints, rather than trying to design behavior through the reward function. For example, systems that physically interact with or around humans should satisfy safety constraints. Recent advances in policy search algorithms (, 2016; , 2015; , 2016; , 2016) have enabled new capabilities in high-dimensional control, but do not consider the constrained setting. We propose Constrained Policy Optimization (CPO), the first general-purpose policy search algorithm for constrained reinforcement learning with guarantees for near-constraint satisfaction at each iteration. Our method allows us to train neural network policies for high-dimensional control while making guarantees about policy behavior all throughout training. Our guarantees are based on a new theoretical result, which is of independent interest: we prove a bound relating the expected returns of two policies to an average divergence between them. We demonstrate the effectiveness of our approach on simulated robot locomotion tasks where the agent must satisfy constraints motivated by safety.",
"Robotic systems typically have numerous parameters, e.g. the choice of planning algorithm, real-valued parameters of motion and vision modules, and control parameters. We consider the problem of optimizing these parameters for best worst-case performance over a range of environments. To this end we first propose to evaluate system parameters by adversarially optimizing over environment parameters to find particularly hard environments. This is then nested in a game-theoretic minimax optimization setting, where an outerloop aims to find best worst-case system parameters. For both optimization levels we use Bayesian global optimization (GP-UCB) which provides the necessary confidence bounds to handle the stochasticity of the performance. We compare our method (Nested Minimax) with an existing relaxation method we adapted to become applicable in our setting. By construction our approach provides more robustness to performance stochasticity. We demonstrate the method for planning algorithm selection on a pick'n'place application and for control parameter optimization on a triple inverted pendulum for robustness to adversarial perturbations.",
"We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters."
]
}
|
1906.10417
|
2954864319
|
Reinforcement learning (RL) methods have demonstrated their efficiency in simulation environments. However, many applications for which RL offers great potential, such as autonomous driving, are also safety critical and require a certified closed-loop behavior in order to meet safety specifications in the presence of physical constraints. This paper introduces a concept, called probabilistic model predictive safety certification (PMPSC), which can be combined with any RL algorithm and provides provable safety certificates in terms of state and input chance constraints for potentially large-scale systems. The certificate is realized through a stochastic tube that safely connects the current system state with a terminal set of states, that is known to be safe. A novel formulation in terms of a convex receding horizon problem allows a recursively feasible real-time computation of such probabilistic tubes, despite the presence of possibly unbounded disturbances. A design procedure for PMPSC relying on bayesian inference and recent advances in probabilistic set invariance is presented. Using a numerical car simulation, the method and its design procedure are illustrated by enhancing a simple RL algorithm with safety certificates.
|
Closely related to the approach proposed in this paper, the concept of a safety framework for learning-based control emerged from robust reachability analysis, robust invariance, as well as classical Lyapunov-based methods @cite_25 @cite_27 @cite_13 @cite_24 . The concept consists of a safe set in the state space and a safety controller, as originally proposed in @cite_15 for the case of perfectly known system dynamics in the context of safety barrier functions. As long as the system state is contained in the safe set, any feasible input (including one from a learning-based controller) can be applied to the system. However, if such an input would cause the system to leave the safe set, the safety controller intervenes. Since this strategy is compatible with any learning-based control algorithm, it serves as a universal safety certification concept. Previously proposed concepts are limited to a robust treatment of the uncertainty in order to provide rigorous safety guarantees. This potentially results in conservative system behavior, or even ill-posedness of the overall safety requirement, e.g. in the case of the frequently considered Gaussian distributed additive system noise, which has unbounded support.
|
{
"cite_N": [
"@cite_24",
"@cite_27",
"@cite_15",
"@cite_13",
"@cite_25"
],
"mid": [
"2766730210",
"",
"1972149633",
"2962789054",
"2127192854"
],
"abstract": [
"Abstract Learning in interacting dynamical systems can lead to instabilities and violations of critical safety constraints, which is limiting its application to constrained system networks. This paper introduces two safety frameworks that can be applied together with any learning method for ensuring constraint satisfaction in a network of uncertain systems, which are coupled in the dynamics and in the state constraints. The proposed techniques make use of a safe set to modify control inputs that may compromise system safety, while accepting safe inputs from the learning procedure. Two different safe sets for distributed systems are proposed by extending recent results for structured invariant sets. The sets differ in their dynamical allocation to local sets and provide different trade-offs between required communication and achieved set size. The proposed algorithms are proven to keep the system in the safe set at all times and their effectiveness and behavior is illustrated in a numerical example.",
"",
"Abstract This paper presents a new safety feedback design for nonlinear systems based on barrier certificates and the idea of control Lyapunov functions. In contrast to existing methods, this approach ensures safety independently of abstract high-level tasks that might be unknown or change over time. Leaving as much freedom as possible to the safe system, the authors believe that the flexibility of this approach is very promising. The design is validated using an illustrative example.",
"The control of complex systems faces a trade-off between high performance and safety guarantees, which in particular restricts the application of learning-based methods to safety-critical systems. A recently proposed framework to address this issue is the use of a safety controller, which guarantees to keep the system within a safe region of the state space. This paper introduces efficient techniques for the synthesis of a safe set and control law, which offer improved scalability properties by relying on approximations based on convex optimization problems. The first proposed method requires only an approximate linear system model and Lipschitz continuity of the unknown nonlinear dynamics. The second method extends the results by showing how a Gaussian process prior on the unknown system dynamics can be used in order to reduce conservatism of the resulting safe set. We demonstrate the results with numerical examples, including an autonomous convoy of vehicles.",
"For some time now machine learning methods have been widely used in perception for autonomous robots. While there have been many results describing the performance of machine learning techniques with regards to their accuracy or convergence rates, relatively little work has been done on developing theoretical performance guarantees about their stability and robustness. As a result, many machine learning techniques are still limited to being used in situations where safety and robustness are not critical for success. One way to overcome this difficulty is by using reachability analysis, which can be used to compute regions of the state space, known as reachable sets, from which the system can be guaranteed to remain safe over some time horizon regardless of the disturbances. In this paper we show how reachability analysis can be combined with machine learning in a scenario in which an aerial robot is attempting to learn the dynamics of a ground vehicle using a camera with a limited field of view. The resulting simulation data shows that by combining these two paradigms, one can create robotic systems that feature the best qualities of each, namely high performance and guaranteed safety."
]
}
|
1906.10417
|
2954864319
|
Reinforcement learning (RL) methods have demonstrated their efficiency in simulation environments. However, many applications for which RL offers great potential, such as autonomous driving, are also safety critical and require a certified closed-loop behavior in order to meet safety specifications in the presence of physical constraints. This paper introduces a concept, called probabilistic model predictive safety certification (PMPSC), which can be combined with any RL algorithm and provides provable safety certificates in terms of state and input chance constraints for potentially large-scale systems. The certificate is realized through a stochastic tube that safely connects the current system state with a terminal set of states, that is known to be safe. A novel formulation in terms of a convex receding horizon problem allows a recursively feasible real-time computation of such probabilistic tubes, despite the presence of possibly unbounded disturbances. A design procedure for PMPSC relying on bayesian inference and recent advances in probabilistic set invariance is presented. Using a numerical car simulation, the method and its design procedure are illustrated by enhancing a simple RL algorithm with safety certificates.
|
Compared to previous research using similar model predictive control based safety mechanisms, such as @cite_54 @cite_22 @cite_46 @cite_36 , we introduce a probabilistic formulation of the safe set and consider safety in probability for all future times, allowing one to prescribe a desired degree of conservatism. The proposed method only requires an implicit description of the safe set, as opposed to an explicit representation, which enables scalability with respect to the state dimension while being independent of a particular RL algorithm.
|
{
"cite_N": [
"@cite_46",
"@cite_54",
"@cite_36",
"@cite_22"
],
"mid": [
"2586823359",
"2962775887",
"2945623569",
"2914264703"
],
"abstract": [
"Self-learning approaches, such as reinforcement learning, offer new possibilities for autonomous control of uncertain or time-varying systems. However, exploring an unknown environment under limited prediction capabilities is a challenge for a learning agent. If the environment is dangerous, free exploration can result in physical damage or in an otherwise unacceptable behavior. With respect to existing methods, the main contribution of this paper is the definition of a new approach that does not require global safety functions, nor specific formulations of the dynamics or of the environment, but relies on interval estimation of the dynamics of the agent during the exploration phase, assuming a limited capability of the agent to perceive the presence of incoming fatal states. Two algorithms are presented with this approach. The first is the Safety Handling Exploration with Risk Perception Algorithm (SHERPA), which provides safety by individuating temporary safety functions, called backups. SHERPA is shown in a simulated, simplified quadrotor task, for which dangerous states are avoided. The second algorithm, denominated OptiSHERPA, can safely handle more dynamically complex systems for which SHERPA is not sufficient through the use of safety metrics. An application of OptiSHERPA is simulated on an aircraft altitude control task.",
"While it has been repeatedly shown that learning-based controllers can provide superior performance, they often lack of safety guarantees. This paper aims at addressing this problem by introducing a model predictive safety certification (MPSC) scheme for linear systems with additive disturbances. The scheme verifies safety of a proposed learning-based input and modifies it as little as necessary in order to keep the system within a given set of constraints. Safety is thereby related to the existence of a model predictive controller (MPC) providing a feasible trajectory towards a safe target set. A robust MPC formulation accounts for the fact that the model is generally uncertain in the context of learning, which allows for proving constraint satisfaction at all times under the proposed MPSC strategy. The MPSC scheme can be used in order to expand any potentially conservative set of safe states and we provide an iterative technique for enlarging the safe set. Finally, a practical data-based design procedure for MPSC is proposed using scenario optimization.",
"Reinforcement learning is a promising approach to learning control policies for complex robotics tasks. A key challenge is ensuring safety of the learned control policy---e.g., that a walking robot does not fall over, or a quadcopter does not run into a wall. We focus on the setting where the dynamics are known, and the goal is to prove that a policy learned in simulation satisfies a given safety constraint. Existing approaches for ensuring safety suffer from a number of limitations---e.g., they do not scale to high-dimensional state spaces, or they only ensure safety for a fixed environment. We propose an approach based on shielding, which uses a backup controller to override the learned controller as necessary to ensure that safety holds. Rather than compute when to use the backup controller ahead-of-time, we perform this computation online. By doing so, we ensure that our approach is computationally efficient, and furthermore, can be used to ensure safety even in novel environments. We empirically demonstrate that our approach can ensure safety in experiments on cart-pole and on a bicycle with random obstacles.",
"This paper presents an online approach to safety critical control. The common approach for enforcing safety of a system requires the offline computation of a viable set, which is either hard and time consuming or very restrictive in terms of operational freedom for the system. The first part of this work shows how one can constrain a system to stay within reach of an appropriately chosen backup set in a minimally invasive way by performing online sensitivity analysis around a backup trajectory. For linear systems, we show how to use an optimal backup strategy in the form of a Model Predictive Controller (MPC) to maximize the operational freedom of the system. The second part of this work shows how to leverage this capability and factor in state constraints to enforce set invariance only based on online computations of sensitivities. For linear systems, the optimal strategy is again considered and we show how one can perform the sensitivity analysis based on a measure of feasibility of a state constrained MPC. This approach is illustrated in simulation on a linear inverted pendulum."
]
}
|
1811.04199
|
2900355810
|
Modern Convolutional Neural Networks (CNNs) are complex, encompassing millions of parameters. Their deployment exerts computational, storage and energy demands, particularly on embedded platforms. Existing approaches to prune or sparsify CNNs require retraining to maintain inference accuracy. Such retraining is not feasible in some contexts. In this paper, we explore the sparsification of CNNs by proposing three model-independent methods. Our methods are applied on-the-fly and require no retraining. We show that the state-of-the-art models' weights can be reduced by up to 73 (compression factor of 3.7x) without incurring more than 5 loss in Top-5 accuracy. Additional fine-tuning gains only 8 in sparsity, which indicates that our fast on-the-fly methods are effective.
|
Generally, this research can be classified into three broad categories: (1) pruning and weight sparsification @cite_19 @cite_12 @cite_9 @cite_13 @cite_30 @cite_26 , (2) structural pruning @cite_0 @cite_27 @cite_14 @cite_24 , and (3) low-rank approximation @cite_10 @cite_25 @cite_7 @cite_17 . Nearly all of this work uses retraining to fine-tune the resulting sparsified, pruned, or reduced model @cite_8 @cite_4 .
|
{
"cite_N": [
"@cite_30",
"@cite_26",
"@cite_14",
"@cite_4",
"@cite_7",
"@cite_8",
"@cite_10",
"@cite_9",
"@cite_0",
"@cite_19",
"@cite_27",
"@cite_24",
"@cite_13",
"@cite_25",
"@cite_12",
"@cite_17"
],
"mid": [
"",
"2119144962",
"1605005685",
"2754084392",
"2770051797",
"2551895583",
"2745660053",
"2189774688",
"2513419314",
"2736953746",
"2619096655",
"2619122421",
"2114766824",
"2963891483",
"2737244778",
"2095705004"
],
"abstract": [
"",
"Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems with limited hardware resources. To address this limitation, we introduce \"deep compression\", a three stage pipeline: pruning, trained quantization and Huffman coding, that work together to reduce the storage requirement of neural networks by 35x to 49x without affecting their accuracy. Our method first prunes the network by learning only the important connections. Next, we quantize the weights to enforce weight sharing, finally, we apply Huffman coding. After the first two steps we retrain the network to fine tune the remaining connections and the quantized centroids. Pruning, reduces the number of connections by 9x to 13x; Quantization then reduces the number of bits that represent each connection from 32 to 5. On the ImageNet dataset, our method reduced the storage required by AlexNet by 35x, from 240MB to 6.9MB, without loss of accuracy. Our method reduced the size of VGG-16 by 49x from 552MB to 11.3MB, again with no loss of accuracy. This allows fitting the model into on-chip SRAM cache rather than off-chip DRAM memory. Our compression method also facilitates the use of complex neural networks in mobile applications where application size and download bandwidth are constrained. Benchmarked on CPU, GPU and mobile GPU, compressed network has 3x to 4x layerwise speedup and 3x to 7x better energy efficiency.",
"Deep convolutional neural networks have shown promising results in image and speech recognition applications. The learning capability of the network improves with increasing depth and size of each layer. However this capability comes at the cost of increased computational complexity. Thus reduction in hardware complexity and faster classification are highly desired. This work proposes an optimization method for fixed point deep convolutional neural networks. The parameters of a pre-trained high precision network are first directly quantized using L2 error minimization. We quantize each layer one by one, while other layers keep computation with high precision, to know the layer-wise sensitivity on word-length reduction. Then the network is retrained with quantized weights. Two examples on object recognition, MNIST and CIFAR-10, are presented. Our results indicate that quantization induces sparsity in the network which reduces the effective number of network parameters and improves generalization. This work reduces the required memory storage by a factor of 1 10 and achieves better classification results than the high precision networks.",
"Deep compression refers to removing the redundancy of parameters and feature maps for deep learning models. Low-rank approximation and pruning for sparse structures play a vital role in many compression works. However, weight filters tend to be both low-rank and sparse. Neglecting either part of these structure information in previous methods results in iteratively retraining, compromising accuracy, and low compression rates. Here we propose a unified framework integrating the low-rank and sparse decomposition of weight matrices with the feature map reconstructions. Our model includes methods like pruning connections as special cases, and is optimized by a fast SVD-free algorithm. It has been theoretically proven that, with a small sample, due to its generalizability, our model can well reconstruct the feature maps on both training and test data, which results in less compromising accuracy prior to the subsequent retraining. With such a warm start to retrain, the compression method always possesses several merits: (a) higher compression rates, (b) little loss of accuracy, and (c) fewer rounds to compress deep models. The experimental results on several popular models such as AlexNet, VGG-16, and GoogLeNet show that our model can significantly reduce the parameters for both convolutional and fully-connected layers. As a result, our model reduces the size of VGG-16 by 15×, better than other recent compression methods that use a single strategy.",
"The @math -means clustering algorithm is a ubiquitous tool in data mining and machine learning that shows promising performance. However, its high computational cost has hindered its applications in broad domains. Researchers have successfully addressed these obstacles with dimensionality reduction methods. Recently, [1] develop a state-of-the-art random projection (RP) method for faster @math -means clustering. Their method delivers many improvements over other dimensionality reduction methods. For example, compared to the advanced singular value decomposition based feature extraction approach, [1] reduce the running time by a factor of @math for data matrix @math with @math data points and @math features, while losing only a factor of one in approximation accuracy. Unfortunately, they still require @math for matrix multiplication and this cost will be prohibitive for large values of @math and @math . To break this bottleneck, we carefully build a sparse embedded @math -means clustering algorithm which requires @math ( @math denotes the number of non-zeros in @math ) for fast matrix multiplication. Moreover, our proposed algorithm improves on [1]'s results for approximation accuracy by a factor of one. Our empirical studies corroborate our theoretical findings, and demonstrate that our approach is able to significantly accelerate @math -means clustering, while achieving satisfactory clustering performance.",
"The emergence of Deep neural networks has seen human-level performance on large scale computer vision tasks such as image classification. However these deep networks typically contain large amount of parameters due to dense matrix multiplications and convolutions. As a result, these architectures are highly memory intensive, making them less suitable for embedded vision applications. Sparse Computations are known to be much more memory efficient. In this work, we train and build neural networks which implicitly use sparse computations. We introduce additional gate variables to perform parameter selection and show that this is equivalent to using a spike-and-slab prior. We experimentally validate our method on both small and large networks which result in highly sparse neural network models.",
"Learning robust regression model from high-dimensional corrupted data is an essential and difficult problem in many practical applications. The state-of-the-art methods have studied low-rank regression models that are robust against typical noises (like Gaussian noise and out-sample sparse noise) or outliers, such that a regression model can be learned from clean data lying on underlying subspaces. However, few of the existing low-rank regression methods can handle the outliers noise lying on the sparsely corrupted disjoint subspaces. To address this issue, we propose a low-rank-sparse subspace representation for robust regression, hereafter referred to as LRS-RR in this paper. The main contribution include the following: (1) Unlike most of the existing regression methods, we propose an approach with two phases of low-rank-sparse subspace recovery and regression optimization being carried out simultaneously,(2) we also apply the linearized alternating direction method with adaptive penalty to solved the formulated LRS-RR problem and prove the convergence of the algorithm and analyze its complexity, (3) we demonstrate the efficiency of our method for the high-dimensional corrupted data on both synthetic data and two benchmark datasets against several state-of-the-art robust methods.",
"This paper proposes to learn high-performance deep ConvNets with sparse neural connections, referred to as sparse ConvNets, for face recognition. The sparse ConvNets are learned in an iterative way, each time one additional layer is sparsified and the entire model is re-trained given the initial weights learned in previous iterations. One important finding is that directly training the sparse ConvNet from scratch failed to find good solutions for face recognition, while using a previously learned denser model to properly initialize a sparser model is critical to continue learning effective features for face recognition. This paper also proposes a new neural correlation-based weight selection criterion and empirically verifies its effectiveness in selecting informative connections from previously learned models in each iteration. When taking a moderately sparse structure (26 -76 of weights in the dense model), the proposed sparse ConvNet model significantly improves the face recognition performance of the previous state-of-theart DeepID2+ models given the same training data, while it keeps the performance of the baseline model with only 12 of the original parameters.",
"High demand for computation resources severely hinders deployment of large-scale Deep Neural Networks (DNN) in resource constrained devices. In this work, we propose a Structured Sparsity Learning (SSL) method to regularize the structures (i.e., filters, channels, filter shapes, and layer depth) of DNNs. SSL can: (1) learn a compact structure from a bigger DNN to reduce computation cost; (2) obtain a hardware-friendly structured sparsity of DNN to efficiently accelerate the DNNs evaluation. Experimental results show that SSL achieves on average 5.1x and 3.1x speedups of convolutional layer computation of AlexNet against CPU and GPU, respectively, with off-the-shelf libraries. These speedups are about twice speedups of non-structured sparsity; (3) regularize the DNN structure to improve classification accuracy. The results show that for CIFAR-10, regularization on layer depth can reduce 20 layers of a Deep Residual Network (ResNet) to 18 layers while improve the accuracy from 91.25 to 92.60 , which is still slightly higher than that of original ResNet with 32 layers. For AlexNet, structure regularization by SSL also reduces the error by around 1 . Open source code is in this https URL",
"Sparsity helps reducing the computation complexity of DNNs by skipping the multiplication with zeros. The granularity of sparsity affects the efficiency of hardware architecture and the prediction accuracy. In this paper we quantitatively measure the accuracy-sparsity relationship with different granularity. Coarse-grained sparsity brings more regular sparsity pattern, making it easier for hardware acceleration, and our experimental results show that coarsegrained sparsity have very small impact on the sparsity ratio given no loss of accuracy. Moreover, due to the index saving effect, coarse-grained sparsity is able to obtain similar or even better compression rates than fine-grained sparsity at the same accuracy threshold. Our analysis, which is based on the framework of a recent sparse convolutional neural network (SCNN) accelerator, further demonstrates that it saves 30 – 35 of memory references compared with fine-grained sparsity.",
"Sparsity helps reduce the computational complexity of deep neural networks by skipping zeros. Taking advantage of sparsity is listed as a high priority in next generation DNN accelerators such as TPU. The structure of sparsity, i.e., the granularity of pruning, affects the efficiency of hardware accelerator design as well as the prediction accuracy. Coarse-grained pruning creates regular sparsity patterns, making it more amenable for hardware acceleration but more challenging to maintain the same accuracy. In this paper we quantitatively measure the trade-off between sparsity regularity and prediction accuracy, providing insights in how to maintain accuracy while having more a more structured sparsity pattern. Our experimental results show that coarse-grained pruning can achieve a sparsity ratio similar to unstructured pruning without loss of accuracy. Moreover, due to the index saving effect, coarse-grained pruning is able to obtain a better compression ratio than fine-grained sparsity at the same accuracy threshold. Based on the recent sparse convolutional neural network accelerator (SCNN), our experiments further demonstrate that coarse-grained sparsity saves about 2x the memory references compared to fine-grained sparsity. Since memory reference is more than two orders of magnitude more expensive than arithmetic operations, the regularity of sparse structure leads to more efficient hardware design.",
"Modern neural networks are often augmented with an attention mechanism, which tells the network where to focus within the input. We propose in this paper a new framework for sparse and structured attention, building upon a smoothed max operator. We show that the gradient of this operator defines a mapping from real values to probabilities, suitable as an attention mechanism. Our framework includes softmax and a slight generalization of the recently-proposed sparsemax as special cases. However, we also show how our framework can incorporate modern structured penalties, resulting in more interpretable attention mechanisms, that focus on entire segments or groups of an input. We derive efficient algorithms to compute the forward and backward passes of our attention mechanisms, enabling their use in a neural network trained with backpropagation. To showcase their potential as a drop-in replacement for existing ones, we evaluate our attention mechanisms on three large-scale tasks: textual entailment, machine translation, and sentence summarization. Our attention mechanisms improve interpretability without sacrificing performance; notably, on textual entailment and summarization, we outperform the standard attention mechanisms based on softmax and sparsemax.",
"We have used information-theoretic ideas to derive a class of practical and nearly optimal schemes for adapting the size of a neural network. By removing unimportant weights from a network, several improvements can be expected: better generalization, fewer training examples required, and improved speed of learning and or classification. The basic idea is to use second-derivative information to make a tradeoff between network complexity and training set error. Experiments confirm the usefulness of the methods on a real-world application.",
"Low-rank approximation is a common tool used to accelerate kernel methods: the @math kernel matrix @math is approximated via a rank- @math matrix @math which can be stored in much less space and processed more quickly. In this work we study the limits of computationally efficient low-rank kernel approximation. We show that for a broad class of kernels, including the popular Gaussian and polynomial kernels, computing a relative error @math -rank approximation to @math is at least as difficult as multiplying the input data matrix @math by an arbitrary matrix @math . Barring a breakthrough in fast matrix multiplication, when @math is not too large, this requires @math time where @math is the number of non-zeros in @math . This lower bound matches, in many parameter regimes, recent work on subquadratic time algorithms for low-rank approximation of general kernels [MM16,MW17], demonstrating that these algorithms are unlikely to be significantly improved, in particular to @math input sparsity runtimes. At the same time there is hope: we show for the first time that @math time approximation is possible for general radial basis function kernels (e.g., the Gaussian kernel) for the closely related problem of low-rank approximation of the kernelized dataset.",
"This paper presents methods to reduce the complexity of convolutional neural networks (CNN). These include: (1) A method to quickly and easily sparsify a given network. (2) Fine tune the sparse network to obtain the lost accuracy back (3) Quantize the network to be able to implement it using 8-bit fixed point multiplications efficiently. (4) We then show how an inference engine can be designed to take advantage of the sparsity. These techniques were applied to full frame semantic segmentation and the degradation due to the sparsity and quantization is found to be negligible. We show by analysis that the complexity reduction achieved is significant. Results of implementation on Texas Instruments TDA2x SoC [17] are presented. We have modified Caffe CNN framework to do the sparse, quantized training described in this paper. The source code for the training is made available at https: github.com tidsp caffe-jacinto",
"Deep neural nets with a large number of parameters are very powerful machine learning systems. However, overfitting is a serious problem in such networks. Large networks are also slow to use, making it difficult to deal with overfitting by combining the predictions of many different large neural nets at test time. Dropout is a technique for addressing this problem. The key idea is to randomly drop units (along with their connections) from the neural network during training. This prevents units from co-adapting too much. During training, dropout samples from an exponential number of different \"thinned\" networks. At test time, it is easy to approximate the effect of averaging the predictions of all these thinned networks by simply using a single unthinned network that has smaller weights. This significantly reduces overfitting and gives major improvements over other regularization methods. We show that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets."
]
}
|
1811.04199
|
2900355810
|
Modern Convolutional Neural Networks (CNNs) are complex, encompassing millions of parameters. Their deployment exerts computational, storage and energy demands, particularly on embedded platforms. Existing approaches to prune or sparsify CNNs require retraining to maintain inference accuracy. Such retraining is not feasible in some contexts. In this paper, we explore the sparsification of CNNs by proposing three model-independent methods. Our methods are applied on-the-fly and require no retraining. We show that the state-of-the-art models' weights can be reduced by up to 73% (compression factor of 3.7x) without incurring more than 5% loss in Top-5 accuracy. Additional fine-tuning gains only 8% in sparsity, which indicates that our fast on-the-fly methods are effective.
|
LeCun et al. @cite_13 proposed Optimal Brain Damage to prune neural connections using the saliency of model parameters. Others @cite_11 @cite_16 extend this work to use second order derivatives. More recently, @cite_19 @cite_27 @cite_31 explore coarse-grain and fine-grain pruning and evaluate the trade-off between accuracy and sparsity using recent CNNs.
|
{
"cite_N": [
"@cite_19",
"@cite_27",
"@cite_31",
"@cite_16",
"@cite_13",
"@cite_11"
],
"mid": [
"2736953746",
"2619096655",
"2963674932",
"2618305643",
"2114766824",
"2125389748"
],
"abstract": [
"Sparsity helps reducing the computation complexity of DNNs by skipping the multiplication with zeros. The granularity of sparsity affects the efficiency of hardware architecture and the prediction accuracy. In this paper we quantitatively measure the accuracy-sparsity relationship with different granularity. Coarse-grained sparsity brings more regular sparsity pattern, making it easier for hardware acceleration, and our experimental results show that coarsegrained sparsity have very small impact on the sparsity ratio given no loss of accuracy. Moreover, due to the index saving effect, coarse-grained sparsity is able to obtain similar or even better compression rates than fine-grained sparsity at the same accuracy threshold. Our analysis, which is based on the framework of a recent sparse convolutional neural network (SCNN) accelerator, further demonstrates that it saves 30 – 35 of memory references compared with fine-grained sparsity.",
"Sparsity helps reduce the computational complexity of deep neural networks by skipping zeros. Taking advantage of sparsity is listed as a high priority in next generation DNN accelerators such as TPU. The structure of sparsity, i.e., the granularity of pruning, affects the efficiency of hardware accelerator design as well as the prediction accuracy. Coarse-grained pruning creates regular sparsity patterns, making it more amenable for hardware acceleration but more challenging to maintain the same accuracy. In this paper we quantitatively measure the trade-off between sparsity regularity and prediction accuracy, providing insights in how to maintain accuracy while having more a more structured sparsity pattern. Our experimental results show that coarse-grained pruning can achieve a sparsity ratio similar to unstructured pruning without loss of accuracy. Moreover, due to the index saving effect, coarse-grained pruning is able to obtain a better compression ratio than fine-grained sparsity at the same accuracy threshold. Based on the recent sparse convolutional neural network accelerator (SCNN), our experiments further demonstrate that coarse-grained sparsity saves about 2x the memory references compared to fine-grained sparsity. Since memory reference is more than two orders of magnitude more expensive than arithmetic operations, the regularity of sparse structure leads to more efficient hardware design.",
"Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems. Also, conventional networks fix the architecture before training starts; as a result, training cannot improve the architecture. To address these limitations, we describe a method to reduce the storage and computation required by neural networks by an order of magnitude without affecting their accuracy by learning only the important connections. Our method prunes redundant connections using a three-step method. First, we train the network to learn which connections are important. Next, we prune the unimportant connections. Finally, we retrain the network to fine tune the weights of the remaining connections. On the ImageNet dataset, our method reduced the number of parameters of AlexNet by a factor of 9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar experiments with VGG-16 found that the total number of parameters can be reduced by 13x, from 138 million to 10.3 million, again with no loss of accuracy.",
"How to develop slim and accurate deep neural networks has become crucial for real- world applications, especially for those employed in embedded systems. Though previous work along this research line has shown some promising results, most existing methods either fail to significantly compress a well-trained deep network or require a heavy retraining process for the pruned deep network to re-boost its prediction performance. In this paper, we propose a new layer-wise pruning method for deep neural networks. In our proposed method, parameters of each individual layer are pruned independently based on second order derivatives of a layer-wise error function with respect to the corresponding parameters. We prove that the final prediction performance drop after pruning is bounded by a linear combination of the reconstructed errors caused at each layer. Therefore, there is a guarantee that one only needs to perform a light retraining process on the pruned network to resume its original prediction performance. We conduct extensive experiments on benchmark datasets to demonstrate the effectiveness of our pruning method compared with several state-of-the-art baseline methods.",
"We have used information-theoretic ideas to derive a class of practical and nearly optimal schemes for adapting the size of a neural network. By removing unimportant weights from a network, several improvements can be expected: better generalization, fewer training examples required, and improved speed of learning and or classification. The basic idea is to use second-derivative information to make a tradeoff between network complexity and training set error. Experiments confirm the usefulness of the methods on a real-world application.",
"We investigate the use of information from all second order derivatives of the error function to perform network pruning (i.e., removing unimportant weights from a trained network) in order to improve generalization, simplify networks, reduce hardware or storage requirements, increase the speed of further training, and in some cases enable rule extraction. Our method, Optimal Brain Surgeon (OBS), is Significantly better than magnitude-based methods and Optimal Brain Damage [Le Cun, Denker and Solla, 1990], which often remove the wrong weights. OBS permits the pruning of more weights than other methods (for the same error on the training set), and thus yields better generalization on test data. Crucial to OBS is a recursion relation for calculating the inverse Hessian matrix H-1 from training data and structural information of the net. OBS permits a 90 , a 76 , and a 62 reduction in weights over backpropagation with weight decay on three benchmark MONK's problems [, 1991]. Of OBS, Optimal Brain Damage, and magnitude-based methods, only OBS deletes the correct weights from a trained XOR network in every case. Finally, whereas Sejnowski and Rosenberg [1987] used 18,000 weights in their NETtalk network, we used OBS to prune a network to just 1560 weights, yielding better generalization."
]
}
|
1811.04199
|
2900355810
|
Modern Convolutional Neural Networks (CNNs) are complex, encompassing millions of parameters. Their deployment exerts computational, storage and energy demands, particularly on embedded platforms. Existing approaches to prune or sparsify CNNs require retraining to maintain inference accuracy. Such retraining is not feasible in some contexts. In this paper, we explore the sparsification of CNNs by proposing three model-independent methods. Our methods are applied on-the-fly and require no retraining. We show that the state-of-the-art models' weights can be reduced by up to 73% (compression factor of 3.7x) without incurring more than 5% loss in Top-5 accuracy. Additional fine-tuning gains only 8% in sparsity, which indicates that our fast on-the-fly methods are effective.
|
@cite_28 proposed a try-and-learn algorithm for pruning redundant filters in CNNs. They use a reward function to aggressively prune with minimal loss of accuracy; however, their method requires retraining as well as user input.
|
{
"cite_N": [
"@cite_28"
],
"mid": [
"2963140066"
],
"abstract": [
"Many state-of-the-art computer vision algorithms use large scale convolutional neural networks (CNNs) as basic building blocks. These CNNs are known for their huge number of parameters, high redundancy in weights, and tremendous computing resource consumptions. This paper presents a learning algorithm to simplify and speed up these CNNs. Specifically, we introduce a “try-and-learn” algorithm to train pruning agents that remove unnecessary CNN filters in a data-driven way. With the help of a novel reward function, our agents removes a significant number of filters in CNNs while maintaining performance at a desired level. Moreover, this method provides an easy control of the tradeoff between network performance and its scale. Performance of our algorithm is validated with comprehensive pruning experiments on several popular CNNs for visual recognition and semantic segmentation tasks."
]
}
|
1811.04199
|
2900355810
|
Modern Convolutional Neural Networks (CNNs) are complex, encompassing millions of parameters. Their deployment exerts computational, storage and energy demands, particularly on embedded platforms. Existing approaches to prune or sparsify CNNs require retraining to maintain inference accuracy. Such retraining is not feasible in some contexts. In this paper, we explore the sparsification of CNNs by proposing three model-independent methods. Our methods are applied on-the-fly and require no retraining. We show that the state-of-the-art models' weights can be reduced by up to 73% (compression factor of 3.7x) without incurring more than 5% loss in Top-5 accuracy. Additional fine-tuning gains only 8% in sparsity, which indicates that our fast on-the-fly methods are effective.
|
@cite_9 proposed sparse ConvNets, a framework that can be used to iteratively learn sparsified neural connections through correlations among neural activations. The connections are dropped iteratively, one layer at a time, and the model is retrained after each step.
|
{
"cite_N": [
"@cite_9"
],
"mid": [
"2189774688"
],
"abstract": [
"This paper proposes to learn high-performance deep ConvNets with sparse neural connections, referred to as sparse ConvNets, for face recognition. The sparse ConvNets are learned in an iterative way, each time one additional layer is sparsified and the entire model is re-trained given the initial weights learned in previous iterations. One important finding is that directly training the sparse ConvNet from scratch failed to find good solutions for face recognition, while using a previously learned denser model to properly initialize a sparser model is critical to continue learning effective features for face recognition. This paper also proposes a new neural correlation-based weight selection criterion and empirically verifies its effectiveness in selecting informative connections from previously learned models in each iteration. When taking a moderately sparse structure (26 -76 of weights in the dense model), the proposed sparse ConvNet model significantly improves the face recognition performance of the previous state-of-theart DeepID2+ models given the same training data, while it keeps the performance of the baseline model with only 12 of the original parameters."
]
}
|
1811.04199
|
2900355810
|
Modern Convolutional Neural Networks (CNNs) are complex, encompassing millions of parameters. Their deployment exerts computational, storage and energy demands, particularly on embedded platforms. Existing approaches to prune or sparsify CNNs require retraining to maintain inference accuracy. Such retraining is not feasible in some contexts. In this paper, we explore the sparsification of CNNs by proposing three model-independent methods. Our methods are applied on-the-fly and require no retraining. We show that the state-of-the-art models' weights can be reduced by up to 73% (compression factor of 3.7x) without incurring more than 5% loss in Top-5 accuracy. Additional fine-tuning gains only 8% in sparsity, which indicates that our fast on-the-fly methods are effective.
|
@cite_12 proposed a framework that compensates for the loss of accuracy after sparsification by retraining. They quantize their sparsified model for an embedded architecture and observe a nearly 4 @math improvement in the inference speed with 80% sparsity. @cite_0 propose SSL: a sparsifying framework to exploit and regularize structural sparsity of a sample DNN. Their evaluation using ResNet reduces a few layers while improving inference accuracy by around 1.5%. @cite_30 propose the SCNN accelerator architecture that utilizes a compressed encoding of sparse weights and activations. They observe up to 2.7 @math energy improvement during both re-training and inference.
|
{
"cite_N": [
"@cite_0",
"@cite_30",
"@cite_12"
],
"mid": [
"2513419314",
"",
"2737244778"
],
"abstract": [
"High demand for computation resources severely hinders deployment of large-scale Deep Neural Networks (DNN) in resource constrained devices. In this work, we propose a Structured Sparsity Learning (SSL) method to regularize the structures (i.e., filters, channels, filter shapes, and layer depth) of DNNs. SSL can: (1) learn a compact structure from a bigger DNN to reduce computation cost; (2) obtain a hardware-friendly structured sparsity of DNN to efficiently accelerate the DNNs evaluation. Experimental results show that SSL achieves on average 5.1x and 3.1x speedups of convolutional layer computation of AlexNet against CPU and GPU, respectively, with off-the-shelf libraries. These speedups are about twice speedups of non-structured sparsity; (3) regularize the DNN structure to improve classification accuracy. The results show that for CIFAR-10, regularization on layer depth can reduce 20 layers of a Deep Residual Network (ResNet) to 18 layers while improve the accuracy from 91.25 to 92.60 , which is still slightly higher than that of original ResNet with 32 layers. For AlexNet, structure regularization by SSL also reduces the error by around 1 . Open source code is in this https URL",
"",
"This paper presents methods to reduce the complexity of convolutional neural networks (CNN). These include: (1) A method to quickly and easily sparsify a given network. (2) Fine tune the sparse network to obtain the lost accuracy back (3) Quantize the network to be able to implement it using 8-bit fixed point multiplications efficiently. (4) We then show how an inference engine can be designed to take advantage of the sparsity. These techniques were applied to full frame semantic segmentation and the degradation due to the sparsity and quantization is found to be negligible. We show by analysis that the complexity reduction achieved is significant. Results of implementation on Texas Instruments TDA2x SoC [17] are presented. We have modified Caffe CNN framework to do the sparse, quantized training described in this paper. The source code for the training is made available at https: github.com tidsp caffe-jacinto"
]
}
|
1811.04281
|
2900080494
|
Accurate segmentation of brain tissue in magnetic resonance images (MRI) is a difficult task due to different types of brain abnormalities. Using information and features from multimodal MRI including T1, T1-weighted inversion recovery (T1-IR) and T2-FLAIR, and differential geometric features including the Jacobian determinant (JD) and the curl vector (CV) derived from the T1 modality, can result in a more accurate analysis of brain images. In this paper, we use the differential geometric information including JD and CV as image characteristics to measure the differences between different MRI images, which represent local size changes and local rotations of the brain image, and we can use them as one CNN channel with the other three modalities (T1-weighted, T1-IR and T2-FLAIR) to get more accurate results of brain segmentation. We test this method on two datasets including the IBSR dataset and the MRBrainS datasets based on the deep voxelwise residual network, namely VoxResNet, and obtain excellent improvement over single modality or three modalities, increasing the average DSC (Cerebrospinal Fluid (CSF), Gray Matter (GM) and White Matter (WM)) by about 1.5% on the well-known MRBrainS18 dataset and about 2.5% on the IBSR dataset. Moreover, we discuss that one modality combined with its JD or CV information can match the segmentation effect of three modalities, which can provide clinical convenience for diagnosis because only the T1-modality MRI image of a patient needs to be acquired. Finally, we also compare the segmentation performance of our method in two networks, VoxResNet and the U-Net network. The results show that VoxResNet performs better than the U-Net network with our method in brain MRI segmentation. We believe the proposed method can advance the performance in brain segmentation and clinical diagnosis.
|
@cite_18 proposed a 2D patch-wise CNN method to segment gray matter, white matter and cerebrospinal fluid from multimodal MR images of infants, which outperformed traditional methods and machine learning algorithms; @cite_10 proposed a semantic-wise fully convolutional network method and obtained better results than Zhang's method. Their overall DSC was 85.5%. In this paper, we use the differential geometric information including JD and CV, which represent the change rate of the area or volume of the brain image, and we can use them as one CNN channel with the other three modalities to get more accurate results of brain segmentation. We test this method on three datasets including the IBSR dataset, the MRBrainS13 dataset and the MRBrainS18 dataset, based on the deep voxelwise residual network VoxResNet @cite_11 .
|
{
"cite_N": [
"@cite_18",
"@cite_10",
"@cite_11"
],
"mid": [
"2082526668",
"2441649867",
"2518214538"
],
"abstract": [
"Abstract The segmentation of infant brain tissue images into white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) plays an important role in studying early brain development in health and disease. In the isointense stage (approximately 6–8 months of age), WM and GM exhibit similar levels of intensity in both T1 and T2 MR images, making the tissue segmentation very challenging. Only a small number of existing methods have been designed for tissue segmentation in this isointense stage; however, they only used a single T1 or T2 images, or the combination of T1 and T2 images. In this paper, we propose to use deep convolutional neural networks (CNNs) for segmenting isointense stage brain tissues using multi-modality MR images. CNNs are a type of deep models in which trainable filters and local neighborhood pooling operations are applied alternatingly on the raw input images, resulting in a hierarchy of increasingly complex features. Specifically, we used multi-modality information from T1, T2, and fractional anisotropy (FA) images as inputs and then generated the segmentation maps as outputs. The multiple intermediate layers applied convolution, pooling, normalization, and other operations to capture the highly nonlinear mappings between inputs and outputs. We compared the performance of our approach with that of the commonly used segmentation methods on a set of manually segmented isointense stage brain images. Results showed that our proposed model significantly outperformed prior methods on infant brain tissue segmentation. In addition, our results indicated that integration of multi-modality images led to significant performance improvement.",
"The segmentation of infant brain tissue images into white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) plays an important role in studying early brain development. In the isointense phase (approximately 6–8 months of age), WM and GM exhibit similar levels of intensity in both T1 and T2 MR images, resulting in extremely low tissue contrast and thus making the tissue segmentation very challenging. The existing methods for tissue segmentation in this isointense phase usually employ patch-based sparse labeling on single T1, T2 or fractional anisotropy (FA) modality or their simply-stacked combinations without fully exploring the multi-modality information. To address the challenge, in this paper, we propose to use fully convolutional networks (FCNs) for the segmentation of isointense phase brain MR images. Instead of simply stacking the three modalities, we train one network for each modality image, and then fuse their high-layer features together for final segmentation. Specifically, we conduct a convolution-pooling stream for multimodality information from T1, T2, and FA images separately, and then combine them in high-layer for finally generating the segmentation maps as the outputs. We compared the performance of our approach with that of the commonly used segmentation methods on a set of manually segmented isointense phase brain images. Results showed that our proposed model significantly outperformed previous methods in terms of accuracy. In addition, our results also indicated a better way of integrating multi-modality images, which leads to performance improvement.",
"Recently deep residual learning with residual units for training very deep neural networks advanced the state-of-the-art performance on 2D image recognition tasks, e.g., object detection and segmentation. However, how to fully leverage contextual representations for recognition tasks from volumetric data has not been well studied, especially in the field of medical image computing, where a majority of image modalities are in volumetric format. In this paper we explore the deep residual learning on the task of volumetric brain segmentation. There are at least two main contributions in our work. First, we propose a deep voxelwise residual network, referred as VoxResNet, which borrows the spirit of deep residual learning in 2D image recognition tasks, and is extended into a 3D variant for handling volumetric data. Second, an auto-context version of VoxResNet is proposed by seamlessly integrating the low-level image appearance features, implicit shape information and high-level context together for further improving the volumetric segmentation performance. Extensive experiments on the challenging benchmark of brain segmentation from magnetic resonance (MR) images corroborated the efficacy of our proposed method in dealing with volumetric data. We believe this work unravels the potential of 3D deep learning to advance the recognition performance on volumetric image segmentation."
]
}
|
1811.04387
|
2900179308
|
Convolutional networks have achieved great success in various vision tasks. This is mainly due to a considerable amount of research on network structure. In this study, instead of focusing on architectures, we focused on the convolution unit itself. The existing convolution unit has a fixed shape, and is limited to observing restricted receptive fields. In an earlier work, we proposed the active convolution unit (ACU), which can freely define its shape and learn by itself. In this paper, we propose a detailed analysis of the proposed unit and show that it is an efficient representation of a sparse weight convolution. Furthermore, we expand the unit to a grouped ACU, which can observe multiple receptive fields in one layer. We found that the performance of a naive grouped convolution is degraded by increasing the number of groups; however, the proposed unit retains the accuracy even though the number of parameters reduces. Based on this result, we suggest a depthwise ACU, and various experiments have shown that our unit is efficient and can replace the existing convolutions.
|
Our approach is based on the success of CNNs for image classification. This classification methodology has spread to various other applications, including semantic segmentation @cite_8 @cite_40 @cite_4 and object detection @cite_30 @cite_5 @cite_6 @cite_36 .
|
{
"cite_N": [
"@cite_30",
"@cite_4",
"@cite_8",
"@cite_36",
"@cite_6",
"@cite_40",
"@cite_5"
],
"mid": [
"2102605133",
"1923697677",
"2412782625",
"2193145675",
"",
"2952632681",
""
],
"abstract": [
"Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn.",
"Deep Convolutional Neural Networks (DCNNs) have recently shown state of the art performance in high level vision tasks, such as image classification and object detection. This work brings together methods from DCNNs and probabilistic graphical models for addressing the task of pixel-level classification (also called \"semantic image segmentation\"). We show that responses at the final layer of DCNNs are not sufficiently localized for accurate object segmentation. This is due to the very invariance properties that make DCNNs good for high level tasks. We overcome this poor localization property of deep networks by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF). Qualitatively, our \"DeepLab\" system is able to localize segment boundaries at a level of accuracy which is beyond previous methods. Quantitatively, our method sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 71.6% IOU accuracy in the test set. We show how these results can be obtained efficiently: Careful network re-purposing and a novel application of the 'hole' algorithm from the wavelet community allow dense computation of neural net responses at 8 frames per second on a modern GPU.",
"In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First , we highlight convolution with upsampled filters, or ‘atrous convolution’, as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second , we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-views, thus capturing objects as well as image context at multiple scales. Third , we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but has a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed “DeepLab” system sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7 percent mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.",
"We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. SSD is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stages and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Experimental results on the PASCAL VOC, COCO, and ILSVRC datasets confirm that SSD has competitive accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. For (300×300) input, SSD achieves 74.3% mAP on VOC2007 test at 59 FPS on a Nvidia Titan X and for (512×512) input, SSD achieves 76.9% mAP, outperforming a comparable state of the art Faster R-CNN model. Compared to other single stage methods, SSD has much better accuracy even with a smaller input image size. Code is available at https://github.com/weiliu89/caffe/tree/ssd.",
"",
"Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build \"fully convolutional\" networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet, the VGG net, and GoogLeNet) into fully convolutional networks and transfer their learned representations by fine-tuning to the segmentation task. We then define a novel architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20% relative improvement to 62.2% mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes one third of a second for a typical image.",
""
]
}
|
1811.04387
|
2900179308
|
Convolutional networks have achieved great success in various vision tasks. This is mainly due to a considerable amount of research on network structure. In this study, instead of focusing on architectures, we focused on the convolution unit itself. The existing convolution unit has a fixed shape, and is limited to observing restricted receptive fields. In an earlier work, we proposed the active convolution unit (ACU), which can freely define its shape and learn by itself. In this paper, we propose a detailed analysis of the proposed unit and show that it is an efficient representation of a sparse weight convolution. Furthermore, we expand the unit to a grouped ACU, which can observe multiple receptive fields in one layer. We found that the performance of a naive grouped convolution is degraded by increasing the number of groups; however, the proposed unit retains the accuracy even though the number of parameters reduces. Based on this result, we suggest a depthwise ACU, and various experiments have shown that our unit is efficient and can replace the existing convolutions.
|
Most research on CNNs has focused on developing architectures that achieve better results. AlexNet @cite_14 uses various types of convolutions and stacks several pooling layers to reduce spatial dimensions. VGG @cite_51 is based on the idea that a stack of two @math convolution layers is more effective than @math layers. This network is used broadly in many applications owing to the simplicity of its topology. GoogLeNet @cite_34 @cite_7 @cite_37 introduced the Inception layer, which composes various receptive fields. This network showed that a carefully crafted design can achieve better results while maintaining a constant computational budget. The residual network @cite_10 @cite_18 @cite_28 alleviates the vanishing-gradient problem by adding shortcut connections that implement identity mappings, allowing deeper networks to be configured. Later, many variants of residual networks were proposed @cite_49 @cite_2 @cite_33 .
|
{
"cite_N": [
"@cite_37",
"@cite_14",
"@cite_18",
"@cite_33",
"@cite_7",
"@cite_28",
"@cite_10",
"@cite_49",
"@cite_2",
"@cite_34",
"@cite_51"
],
"mid": [
"2949605076",
"",
"2302255633",
"",
"2950179405",
"2401231614",
"2949650786",
"2531425418",
"2220894487",
"2274287116",
"1686810756"
],
"abstract": [
"Convolutional networks are at the core of most state-of-the-art computer vision solutions for a wide variety of tasks. Since 2014 very deep convolutional networks started to become mainstream, yielding substantial gains in various benchmarks. Although increased model size and computational cost tend to translate to immediate quality gains for most tasks (as long as enough labeled data is provided for training), computational efficiency and low parameter count are still enabling factors for various use cases such as mobile vision and big-data scenarios. Here we explore ways to scale up networks in ways that aim at utilizing the added computation as efficiently as possible by suitably factorized convolutions and aggressive regularization. We benchmark our methods on the ILSVRC 2012 classification challenge validation set demonstrate substantial gains over the state of the art: 21.2% top-1 and 5.6% top-5 error for single frame evaluation using a network with a computational cost of 5 billion multiply-adds per inference and with using less than 25 million parameters. With an ensemble of 4 models and multi-crop evaluation, we report 3.5% top-5 error on the validation set (3.6% error on the test set) and 17.3% top-1 error on the validation set.",
"",
"Deep residual networks have emerged as a family of extremely deep architectures showing compelling accuracy and nice convergence behaviors. In this paper, we analyze the propagation formulations behind the residual building blocks, which suggest that the forward and backward signals can be directly propagated from one block to any other block, when using identity mappings as the skip connections and after-addition activation. A series of ablation experiments support the importance of these identity mappings. This motivates us to propose a new residual unit, which makes training easier and improves generalization. We report improved results using a 1001-layer ResNet on CIFAR-10 (4.62% error) and CIFAR-100, and a 200-layer ResNet on ImageNet. Code is available at: https://github.com/KaimingHe/resnet-1k-layers.",
"",
"We propose a deep convolutional neural network architecture codenamed \"Inception\", which was responsible for setting the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC 2014). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. This was achieved by a carefully crafted design that allows for increasing the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC 2014 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.",
"Deep residual networks were shown to be able to scale up to thousands of layers and still have improving performance. However, each fraction of a percent of improved accuracy costs nearly doubling the number of layers, and so training very deep residual networks has a problem of diminishing feature reuse, which makes these networks very slow to train. To tackle these problems, in this paper we conduct a detailed experimental study on the architecture of ResNet blocks, based on which we propose a novel architecture where we decrease depth and increase width of residual networks. We call the resulting network structures wide residual networks (WRNs) and show that these are far superior over their commonly used thin and very deep counterparts. For example, we demonstrate that even a simple 16-layer-deep wide residual network outperforms in accuracy and efficiency all previous deep residual networks, including thousand-layer-deep networks, achieving new state-of-the-art results on CIFAR, SVHN, COCO, and significant improvements on ImageNet. Our code and models are available at this https URL",
"Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.",
"Deep convolutional neural networks (DCNNs) have shown remarkable performance in image classification tasks in recent years. Generally, deep neural network architectures are stacks consisting of a large number of convolutional layers, and they perform downsampling along the spatial dimension via pooling to reduce memory usage. Concurrently, the feature map dimension (i.e., the number of channels) is sharply increased at downsampling locations, which is essential to ensure effective performance because it increases the diversity of high-level attributes. This also applies to residual networks and is very closely related to their performance. In this research, instead of sharply increasing the feature map dimension at units that perform downsampling, we gradually increase the feature map dimension at all units to involve as many locations as possible. This design, which is discussed in depth together with our new insights, has proven to be an effective means of improving generalization ability. Furthermore, we propose a novel residual unit capable of further improving the classification accuracy with our new network architecture. Experiments on benchmark CIFAR-10, CIFAR-100, and ImageNet datasets have shown that our network architecture has superior generalization ability compared to the original residual networks. Code is available at this https URL",
"We seek to improve deep neural networks by generalizing the pooling operations that play a central role in current architectures. We pursue a careful exploration of approaches to allow pooling to learn and to adapt to complex and variable patterns. The two primary directions lie in (1) learning a pooling function via (two strategies of) combining of max and average pooling, and (2) learning a pooling function in the form of a tree-structured fusion of pooling filters that are themselves learned. In our experiments every generalized pooling operation we explore improves performance when used in place of average or max pooling. We experimentally demonstrate that the proposed pooling operations provide a boost in invariance properties relative to conventional pooling and set the state of the art on several widely adopted benchmark datasets; they are also easy to implement, and can be applied within various deep neural network architectures. These benefits come with only a light increase in computational overhead during training and a very modest increase in the number of model parameters.",
"Very deep convolutional networks have been central to the largest advances in image recognition performance in recent years. One example is the Inception architecture that has been shown to achieve very good performance at relatively low computational cost. Recently, the introduction of residual connections in conjunction with a more traditional architecture has yielded state-of-the-art performance in the 2015 ILSVRC challenge; its performance was similar to the latest generation Inception-v3 network. This raises the question of whether there are any benefit in combining the Inception architecture with residual connections. Here we give clear empirical evidence that training with residual connections accelerates the training of Inception networks significantly. There is also some evidence of residual Inception networks outperforming similarly expensive Inception networks without residual connections by a thin margin. We also present several new streamlined architectures for both residual and non-residual Inception networks. These variations improve the single-frame recognition performance on the ILSVRC 2012 classification task significantly. We further demonstrate how proper activation scaling stabilizes the training of very wide residual Inception networks. With an ensemble of three residual and one Inception-v4, we achieve 3.08 percent top-5 error on the test set of the ImageNet classification (CLS) challenge",
"In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision."
]
}
|
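The related-work field in the row above cites VGG's observation that stacking small convolutions covers the receptive field of one larger kernel. A minimal sketch of the standard receptive-field recurrence (the function name and signature are my own, introduced for illustration, not code from the dataset or VGG) shows why two stacked 3×3 convolutions match a single 5×5 layer:

```python
def receptive_field(kernel_sizes, strides=None):
    """Effective receptive field of a stack of conv layers (stride 1 by default)."""
    strides = strides or [1] * len(kernel_sizes)
    rf, jump = 1, 1  # jump = cumulative stride seen by the current layer
    for k, s in zip(kernel_sizes, strides):
        rf += (k - 1) * jump
        jump *= s
    return rf

# Two 3x3 layers match a single 5x5; three match a 7x7 --
# with fewer parameters (2*9 < 25, 3*9 < 49 per channel pair)
# and an extra nonlinearity between each pair of layers.
assert receptive_field([3, 3]) == receptive_field([5]) == 5
assert receptive_field([3, 3, 3]) == receptive_field([7]) == 7
```

The same recurrence explains why strided or dilated layers (as in the atrous-convolution abstract above) enlarge the field of view without adding parameters: each unit of kernel growth is multiplied by the cumulative stride.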