aid | mid | abstract | related_work | ref_abstract |
|---|---|---|---|---|
1902.03185 | 2913294020 | The human ability to coordinate and cooperate has been vital to the development of societies for thousands of years. While it is not fully clear how this behavior arises, social norms are thought to be a key factor in this development. In contrast to laws set by authorities, norms tend to evolve in a bottom-up manner from interactions between members of a society. While much behavior can be explained through the use of social norms, it is difficult to measure the extent to which they shape society, as well as how they are affected by other societal dynamics. In this paper, we discuss the design and evaluation of a reinforcement learning model for understanding how the opportunity to choose who you interact with in a society affects the overall societal outcome and the strength of social norms. We first study the emergence of norms and then the emergence of cooperation in the presence of norms. In our model, agents interact with other agents in a society in the form of repeated matrix games: coordination games and cooperation games. In particular, in our model, at each stage, agents are either able to choose a partner to interact with or are forced to interact at random, and they learn using policy gradients. | The traditional reinforcement learning objective is to maximize cumulative reward over a trajectory of states and actions @cite_19. Instead of modeling the environment dynamics explicitly, the agent aims to optimize its behavior by interacting directly with the environment over a large number of episodes. | {
"cite_N": [
"@cite_19"
],
"mid": [
"1515851193"
],
"abstract": [
"From the Publisher: In Reinforcement Learning, Richard Sutton and Andrew Barto provide a clear and simple account of the key ideas and algorithms of reinforcement learning. Their discussion ranges from the history of the field's intellectual foundations to the most recent developments and applications. The only necessary mathematical background is familiarity with elementary concepts of probability."
]
} |
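The row above describes agents that learn to coordinate via policy gradients in repeated matrix games. A minimal, self-contained sketch of that idea is a REINFORCE-style update for two softmax-policy agents in a 2x2 coordination game; the payoff matrix, learning rate, and episode count here are illustrative assumptions, not the paper's actual setup:

```python
import numpy as np

rng = np.random.default_rng(0)
payoff = np.array([[1.0, 0.0],
                   [0.0, 1.0]])  # both agents earn 1 when their actions match

theta = np.zeros((2, 2))  # one row of action logits per agent

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

for step in range(2000):
    probs = [softmax(theta[i]) for i in range(2)]
    acts = [rng.choice(2, p=p) for p in probs]
    r = payoff[acts[0], acts[1]]
    for i in range(2):
        grad_logp = -probs[i]
        grad_logp[acts[i]] += 1.0        # grad of log softmax policy
        theta[i] += 0.1 * r * grad_logp  # REINFORCE update, reward-weighted

# Both agents settle on the same action: a shared convention emerges.
final = [int(np.argmax(theta[i])) for i in range(2)]
print(final[0] == final[1])
```

Because the reward is nonzero only when actions match, each matched play reinforces the same action for both agents, which is the positive-feedback loop behind norm emergence in this kind of model.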
1902.03185 | 2913294020 | The human ability to coordinate and cooperate has been vital to the development of societies for thousands of years. While it is not fully clear how this behavior arises, social norms are thought to be a key factor in this development. In contrast to laws set by authorities, norms tend to evolve in a bottom-up manner from interactions between members of a society. While much behavior can be explained through the use of social norms, it is difficult to measure the extent to which they shape society, as well as how they are affected by other societal dynamics. In this paper, we discuss the design and evaluation of a reinforcement learning model for understanding how the opportunity to choose who you interact with in a society affects the overall societal outcome and the strength of social norms. We first study the emergence of norms and then the emergence of cooperation in the presence of norms. In our model, agents interact with other agents in a society in the form of repeated matrix games: coordination games and cooperation games. In particular, in our model, at each stage, agents are either able to choose a partner to interact with or are forced to interact at random, and they learn using policy gradients. | The authors of @cite_16 propose a method to study norm emergence using Q-learning and policy hill climbing, though they note that Q-learning converges only to deterministic behavior, which could be problematic. This work highlighted the emergence of norms and the capability of learning agents to converge to similar patterns of behavior and achieve a social goal. The authors of @cite_15 use Q-learning in a variety of games that can be defined as social dilemmas; they account for Q-learning being deterministic by training under different environmental settings, e.g., in the presence of many or few resources.
The authors measure the effect of network size and learning capability on the outcome of the society, and conclude that it is likely that, as the size of the network increases, so does the complexity of the resulting behavior. Like other social dilemmas, such as the Iterated Prisoner's Dilemma (IPD), resource appropriation games such as the Tragedy of the Commons have been further studied using reinforcement learning techniques @cite_13. In that investigation, the authors use a reinforcement learning model to observe the interactions of evolving agents @cite_13 and describe agents' behavior through intuitive societal metrics that monitor important social dynamics such as equality, peace, and sustainability. | {
"cite_N": [
"@cite_15",
"@cite_16",
"@cite_13"
],
"mid": [
"2964345382",
"92376239",
"2963689090"
],
"abstract": [
"Matrix games like Prisoner's Dilemma have guided research on social dilemmas for decades. However, they necessarily treat the choice to cooperate or defect as an atomic action. In real-world social dilemmas these choices are temporally extended. Cooperativeness is a property that applies to policies, not elementary actions. We introduce sequential social dilemmas that share the mixed incentive structure of matrix game social dilemmas but also require agents to learn policies that implement their strategic intentions. We analyze the dynamics of policies learned by multiple self-interested independent learning agents, each using its own deep Q-network, on two Markov games we introduce here: 1. a fruit Gathering game and 2. a Wolfpack hunting game. We characterize how learned behavior in each domain changes as a function of environmental factors including resource abundance. Our experiments show how conflict can emerge from competition over shared resources and shed light on how the sequential nature of real world social dilemmas affects cooperation.",
"Behavioral norms are key ingredients that allow agent coordination where societal laws do not sufficiently constrain agent behaviors. Whereas social laws need to be enforced in a top-down manner, norms evolve in a bottom-up manner and are typically more self-enforcing. While effective norms can significantly enhance performance of individual agents and agent societies, there has been little work in multiagent systems on the formation of social norms. We propose a model that supports the emergence of social norms via learning from interaction experiences. In our model, individual agents repeatedly interact with other agents in the society over instances of a given scenario. Each interaction is framed as a stage game. An agent learns its policy to play the game over repeated interactions with multiple agents. We term this mode of learning social learning, which is distinct from an agent learning from repeated interactions against the same player. We are particularly interested in situations where multiple action combinations yield the same optimal payoff. The key research question is to find out if the entire population learns to converge to a consistent norm. In addition to studying such emergence of social norms among homogeneous learners via social learning, we study the effects of heterogeneous learners, population size, multiple social groups, etc.",
"Humanity faces numerous problems of common-pool resource appropriation. This class of multi-agent social dilemma includes the problems of ensuring sustainable use of fresh water, common fisheries, grazing pastures, and irrigation systems. Abstract models of common-pool resource appropriation based on non-cooperative game theory predict that self-interested agents will generally fail to find socially positive equilibria---a phenomenon called the tragedy of the commons. However, in reality, human societies are sometimes able to discover and implement stable cooperative solutions. Decades of behavioral game theory research have sought to uncover aspects of human behavior that make this possible. Most of that work was based on laboratory experiments where participants only make a single choice: how much to appropriate. Recognizing the importance of spatial and temporal resource dynamics, a recent trend has been toward experiments in more complex real-time video game-like environments. However, standard methods of non-cooperative game theory can no longer be used to generate predictions for this case. Here we show that deep reinforcement learning can be used instead. To that end, we study the emergent behavior of groups of independently learning agents in a partially observed Markov game modeling common-pool resource appropriation. Our experiments highlight the importance of trial-and-error learning in common-pool resource appropriation and shed light on the relationship between exclusion, sustainability, and inequality."
]
} |
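The related-work passage above contrasts Q-learning's deterministic convergence with "social learning" over random pairings. A toy version of that setup is a population of stateless epsilon-greedy Q-learners paired at random in a coordination game; the population size, payoffs, epsilon, and learning rate are illustrative choices, not values from the cited papers:

```python
import numpy as np

rng = np.random.default_rng(42)
n_agents, n_actions = 10, 2
Q = np.zeros((n_agents, n_actions))  # one stateless Q-table row per agent
alpha, eps = 0.1, 0.1

for episode in range(5000):
    i, j = rng.choice(n_agents, size=2, replace=False)  # random pairing
    acts = []
    for k in (i, j):
        if rng.random() < eps:
            acts.append(int(rng.integers(n_actions)))   # explore
        else:
            acts.append(int(np.argmax(Q[k])))           # exploit
    r = 1.0 if acts[0] == acts[1] else -1.0             # coordination payoff
    for k, a in zip((i, j), acts):
        Q[k, a] += alpha * (r - Q[k, a])                # stateless Q-update

# After many interactions the whole population greedily plays one action:
norm = [int(np.argmax(Q[k])) for k in range(n_agents)]
print(len(set(norm)))
```

Mismatches are penalized, so once a majority leans toward one action the rest are pulled along, which is the convergence-to-a-consistent-norm question the cited work studies.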
1902.03356 | 2972976851 | We propose meta-curvature (MC), a framework to learn curvature information for better generalization and fast model adaptation. MC expands on the model-agnostic meta-learner (MAML) by learning to transform the gradients in the inner optimization such that the transformed gradients achieve better generalization performance to a new task. For training large scale neural networks, we decompose the curvature matrix into smaller matrices in a novel scheme where we capture the dependencies of the model's parameters with a series of tensor products. We demonstrate the effects of our proposed method on several few-shot learning tasks and datasets. Without any task specific techniques and architectures, the proposed method achieves substantial improvement upon previous MAML variants and outperforms the recent state-of-the-art methods. Furthermore, we observe faster convergence rates of the meta-training process. Finally, we present an analysis that explains better generalization performance with the meta-trained curvature. | Meta-SGD @cite_37 suggests learning coordinate-wise learning rates. We can interpret this as a diagonal approximation to meta-curvature, in a similar vein to recent adaptive learning rate methods, such as @cite_27 @cite_43 @cite_18, which perform diagonal approximations of second-order matrices. Recently, @cite_9 suggested learning layer-wise learning rates through meta-training. However, neither method considers the dependencies between the parameters, which is the main focus of this work. | {
"cite_N": [
"@cite_37",
"@cite_18",
"@cite_9",
"@cite_43",
"@cite_27"
],
"mid": [
"2742093937",
"2146502635",
"2963303956",
"2964121744",
"2746314669"
],
"abstract": [
"Few-shot learning is challenging for learning algorithms that learn each task in isolation and from scratch. In contrast, meta-learning learns from many related tasks a meta-learner that can learn a new task more accurately and faster with fewer examples, where the choice of meta-learners is crucial. In this paper, we develop Meta-SGD, an SGD-like, easily trainable meta-learner that can initialize and adapt any differentiable learner in just one step, on both supervised learning and reinforcement learning. Compared to the popular meta-learner LSTM, Meta-SGD is conceptually simpler, easier to implement, and can be learned more efficiently. Compared to the latest meta-learner MAML, Meta-SGD has a much higher capacity by learning to learn not just the learner initialization, but also the learner update direction and learning rate, all in a single meta-learning process. Meta-SGD shows highly competitive performance for few-shot learning on regression, classification, and reinforcement learning.",
"We present a new family of subgradient methods that dynamically incorporate knowledge of the geometry of the data observed in earlier iterations to perform more informative gradient-based learning. Metaphorically, the adaptation allows us to find needles in haystacks in the form of very predictive but rarely seen features. Our paradigm stems from recent advances in stochastic optimization and online learning which employ proximal functions to control the gradient steps of the algorithm. We describe and analyze an apparatus for adaptively modifying the proximal function, which significantly simplifies setting a learning rate and results in regret guarantees that are provably as good as the best proximal function that can be chosen in hindsight. We give several efficient algorithms for empirical risk minimization problems with common and important regularization functions and domain constraints. We experimentally study our theoretical analysis and show that adaptive subgradient methods outperform state-of-the-art, yet non-adaptive, subgradient algorithms.",
"",
"Abstract: We introduce Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments. The method is straightforward to implement, is computationally efficient, has little memory requirements, is invariant to diagonal rescaling of the gradients, and is well suited for problems that are large in terms of data and/or parameters. The method is also appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients. The hyper-parameters have intuitive interpretations and typically require little tuning. Some connections to related algorithms, on which Adam was inspired, are discussed. We also analyze the theoretical convergence properties of the algorithm and provide a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework. Empirical results demonstrate that Adam works well in practice and compares favorably to other stochastic optimization methods. Finally, we discuss AdaMax, a variant of Adam based on the infinity norm.",
"Convolutional neural networks are capable of learning powerful representational spaces, which are necessary for tackling complex learning tasks. However, due to the model capacity required to capture such representations, they are often susceptible to overfitting and therefore require proper regularization in order to generalize well. In this paper, we show that the simple regularization technique of randomly masking out square regions of input during training, which we call cutout, can be used to improve the robustness and overall performance of convolutional neural networks. Not only is this method extremely easy to implement, but we also demonstrate that it can be used in conjunction with existing forms of data augmentation and other regularizers to further improve model performance. We evaluate this method by applying it to current state-of-the-art architectures on the CIFAR-10, CIFAR-100, and SVHN datasets, yielding new state-of-the-art results of 2.56%, 15.20%, and 1.30% test error respectively. Code is available at this https URL"
]
} |
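The related-work passage above frames Meta-SGD's coordinate-wise learning rates as a diagonal special case of a full meta-curvature transform. A small numerical sketch makes the hierarchy concrete; the dimensions, gradient values, and matrices below are illustrative stand-ins, not learned quantities from any of the cited methods:

```python
import numpy as np

d = 4
grad = np.array([0.5, -1.0, 0.2, 0.8])   # inner-loop gradient (made up)
theta = np.zeros(d)                       # current parameters

alpha_scalar = 0.01                            # MAML: one shared step size
alpha_diag = np.array([0.1, 0.01, 0.5, 0.05])  # Meta-SGD: per-parameter rates
M = np.eye(d) + 0.1 * np.ones((d, d))          # meta-curvature: full matrix can
                                               # couple parameters together

maml_step = theta - alpha_scalar * grad
metasgd_step = theta - alpha_diag * grad       # elementwise (diagonal) scaling
mc_step = theta - M @ grad                     # transformed gradient

# Meta-SGD is exactly the case where the curvature matrix is diagonal:
assert np.allclose(np.diag(alpha_diag) @ grad, alpha_diag * grad)
print(mc_step)
```

The off-diagonal entries of `M` are what let a full (or tensor-factored) curvature capture dependencies between parameters, which a diagonal or layer-wise scheme cannot represent.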
1902.03356 | 2972976851 | We propose meta-curvature (MC), a framework to learn curvature information for better generalization and fast model adaptation. MC expands on the model-agnostic meta-learner (MAML) by learning to transform the gradients in the inner optimization such that the transformed gradients achieve better generalization performance to a new task. For training large scale neural networks, we decompose the curvature matrix into smaller matrices in a novel scheme where we capture the dependencies of the model's parameters with a series of tensor products. We demonstrate the effects of our proposed method on several few-shot learning tasks and datasets. Without any task specific techniques and architectures, the proposed method achieves substantial improvement upon previous MAML variants and outperforms the recent state-of-the-art methods. Furthermore, we observe faster convergence rates of the meta-training process. Finally, we present an analysis that explains better generalization performance with the meta-trained curvature. | As a good test bed for evaluating few-shot learning, the few-shot classification task has seen huge progress. Triggered by @cite_51, many recent studies have focused on discovering effective inductive biases for the classification task. For example, network architectures that perform nearest-neighbor search @cite_51 @cite_54 were suggested. Some improved performance by modeling the interactions or correlations between training examples @cite_1 @cite_12 @cite_29 @cite_11 @cite_47. To overcome the data scarcity inherent in few-shot learning, generative models have been suggested to augment the training data @cite_4 @cite_44 or to generate model parameters for the specified task @cite_22 @cite_13. The state-of-the-art results are achieved by additionally training a 64-way classification task for pretraining @cite_13 @cite_19 @cite_11 with larger ResNet models @cite_13 @cite_19 @cite_47 @cite_3.
In this work, our focus is to improve the model-agnostic few-shot learner that is broadly applicable to other tasks, e.g., reinforcement learning setups. | {
"cite_N": [
"@cite_13",
"@cite_4",
"@cite_22",
"@cite_54",
"@cite_29",
"@cite_1",
"@cite_3",
"@cite_44",
"@cite_19",
"@cite_47",
"@cite_51",
"@cite_12",
"@cite_11"
],
"mid": [
"2625674597",
"2964249870",
"2964112702",
"",
"",
"2787501667",
"",
"2963845150",
"",
"",
"2963341924",
"",
""
],
"abstract": [
"In this paper, we are interested in the few-shot learning problem. In particular, we focus on a challenging scenario where the number of categories is large and the number of examples per novel category is very limited, e.g. 1, 2, or 3. Motivated by the close relationship between the parameters and the activations in a neural network associated with the same category, we propose a novel method that can adapt a pre-trained neural network to novel categories by directly predicting the parameters from the activations. Zero training is required in adaptation to novel categories, and fast inference is realized by a single forward pass. We evaluate our method by doing few-shot image recognition on the ImageNet dataset, which achieves the state-of-the-art classification accuracy on novel categories by a significant margin while keeping comparable performance on the large-scale categories. We also test our method on the MiniImageNet dataset and it strongly outperforms the previous state-of-the-art methods.",
"Learning to classify new categories based on just one or a few examples is a long-standing challenge in modern computer vision. In this work, we propose a simple yet effective method for few-shot (and one-shot) object recognition. Our approach is based on a modified auto-encoder, denoted Delta-encoder, that learns to synthesize new samples for an unseen category just by seeing few examples from it. The synthesized samples are then used to train a classifier. The proposed approach learns to both extract transferable intra-class deformations, or \"deltas\", between same-class pairs of training examples, and to apply those deltas to the few provided examples of a novel class (unseen during training) in order to efficiently synthesize samples from that new class. The proposed method improves over the state-of-the-art in one-shot object-recognition and compares favorably in the few-shot case. Upon acceptance code will be made available.",
"",
"",
"",
"Deep neural networks excel in regimes with large amounts of data, but tend to struggle when data is scarce or when they need to adapt quickly to changes in the task. In response, recent work in meta-learning proposes training a meta-learner on a distribution of similar tasks, in the hopes of generalization to novel but related tasks by learning a high-level strategy that captures the essence of the problem it is asked to solve. However, many recent meta-learning approaches are extensively hand-designed, either using architectures specialized to a particular application, or hard-coding algorithmic components that constrain how the meta-learner solves the task. We propose a class of simple and generic meta-learner architectures that use a novel combination of temporal convolutions and soft attention; the former to aggregate information from past experience and the latter to pinpoint specific pieces of information. In the most extensive set of meta-learning experiments to date, we evaluate the resulting Simple Neural AttentIve Learner (or SNAIL) on several heavily-benchmarked tasks. On all tasks, in both supervised and reinforcement learning, SNAIL attains state-of-the-art performance by significant margins.",
"",
"Humans can quickly learn new visual concepts, perhaps because they can easily visualize or imagine what novel objects look like from different views. Incorporating this ability to hallucinate novel instances of new concepts might help machine vision systems perform better low-shot learning, i.e., learning concepts from few examples. We present a novel approach to low-shot learning that uses this idea. Our approach builds on recent progress in meta-learning (\"learning to learn\") by combining a meta-learner with a \"hallucinator\" that produces additional training examples, and optimizing both models jointly. Our hallucinator can be incorporated into a variety of meta-learners and provides significant gains: up to a 6 point boost in classification accuracy when only a single training example is available, yielding state-of-the-art performance on the challenging ImageNet low-shot classification benchmark.",
"",
"",
"Learning from a few examples remains a key challenge in machine learning. Despite recent advances in important domains such as vision and language, the standard supervised deep learning paradigm does not offer a satisfactory solution for learning new concepts rapidly from little data. In this work, we employ ideas from metric learning based on deep neural features and from recent advances that augment neural networks with external memories. Our framework learns a network that maps a small labelled support set and an unlabelled example to its label, obviating the need for fine-tuning to adapt to new class types. We then define one-shot learning problems on vision (using Omniglot, ImageNet) and language tasks. Our algorithm improves one-shot accuracy on ImageNet from 87.6% to 93.2% and from 88.0% to 93.8% on Omniglot compared to competing approaches. We also demonstrate the usefulness of the same model on language modeling by introducing a one-shot task on the Penn Treebank.",
"",
""
]
} |
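The related-work passage above mentions architectures that classify a query by nearest-neighbor search against a small support set in an embedding space. A toy version of that matching step is shown below; the random "embeddings" stand in for what a trained network would produce, and the 3-way 2-shot sizes are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# 3-way, 2-shot support set: class centers plus small per-example noise
prototypes = rng.normal(size=(3, 16))
support_x = np.concatenate([p + 0.1 * rng.normal(size=(2, 16)) for p in prototypes])
support_y = np.repeat(np.arange(3), 2)   # labels [0, 0, 1, 1, 2, 2]

query = prototypes[2] + 0.1 * rng.normal(size=16)  # a query drawn from class 2
sims = np.array([cosine(query, x) for x in support_x])
pred = support_y[np.argmax(sims)]        # label of the nearest support example
print(pred)
```

In matching-network-style methods the embedding function is meta-trained so that this nearest-neighbor rule generalizes to classes never seen during training; here the query is correctly assigned to class 2 because it lies near that class's support examples.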
1902.03417 | 2954888401 | The urban wastewater sector is being pushed to optimize processes in order to reduce energy consumption without compromising its quality standards. Energy costs can represent a significant share of global operational costs (between 50% and 60%) for an energy-intensive consumer. Pumping is the largest consumer of electrical energy in a wastewater treatment plant. Thus, the optimal control of pump units can help utilities decrease operational costs. This work describes an innovative predictive control policy for wastewater variable-frequency pumps that minimizes electrical energy consumption, considering uncertainty forecasts for the wastewater intake rate and information collected by sensors accessible through the Supervisory Control and Data Acquisition system. The proposed control method combines statistical learning (regression and predictive models) and deep reinforcement learning (Proximal Policy Optimization). The following main original contributions are produced: i) model-free and data-driven predictive control; ii) a control philosophy focused on operating the tank with a variable wastewater set-point level; iii) use of supervised learning to generate synthetic data for pre-training the reinforcement learning policy, without the need to physically interact with the system. The results for a real case study over 90 days show a 16.7% decrease in electrical energy consumption while still achieving a 97% reduction in the number of alarms (tank level above 7.2 meters) when compared with the current operating scenario (operating with a fixed set-point level). The numerical analysis showed that the proposed data-driven method is able to explore the trade-off between the number of alarms and consumption minimization, offering different options to decision-makers.
| The authors of @cite_6 conducted a literature review of energy-efficiency actions for pumping systems, grouped into component design, selection and dimensioning, and control and adjustment of variable-speed pump units. The category "control and adjustment" was divided into the following sub-categories: (a) variable frequency drive control; (b) load shifting; (c) process optimization. | {
"cite_N": [
"@cite_6"
],
"mid": [
"2516233585"
],
"abstract": [
"The energy consumption across the globe is increasing at an alarming rate. This has already shown its impact in the depletion of energy sources and environmental issues (global warming, and weakening of the ozone layer). Certainly, this escalating tendency of energy insufficiency will get aggravated in the future. Efficiency enhancement initiatives are considered to be the key solution in reducing energy utilization and eventually resisting the global environmental impacts. Of the world’s total energy generated, pumping systems, especially centrifugal pumps, consume about 20%. Consequently, the primary focus of global energy policy makers is to enhance energy efficiency in pumping systems. As per the literature, remarkable energy savings can be accomplished by controlling the speed of the pumping system using Variable Frequency Drives (VFDs). For this reason, studies and researches focus primarily on VFD control techniques to improve the efficiency of the pumping system. This article also focuses on component selection, and system dimensioning in addition to the control techniques. Comparison of recent research outcomes of energy efficiency improvements in pumping system has been made to provide an insight for future research."
]
} |
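The review cited above attributes large VFD savings to speed control. The usual first-principles justification is the pump affinity laws (standard engineering relations, not taken from the cited paper): flow scales with speed, head with speed squared, and shaft power with speed cubed, so modest speed reductions yield large power reductions. A minimal illustration:

```python
# Ideal affinity-law scaling of pump shaft power with speed ratio.
def scaled_power(p_rated_kw: float, speed_ratio: float) -> float:
    """Shaft power at a reduced speed, assuming power ~ speed**3."""
    return p_rated_kw * speed_ratio ** 3

# Running a (hypothetical) 100 kW pump at 80% speed:
p = scaled_power(100.0, 0.8)
print(round(p, 1))  # 51.2 -> roughly half the power for 80% of the flow
```

Real pumps deviate from the ideal cubic law (motor/drive losses, static head), so this is an upper bound on the savings, but it explains why VFD control dominates the efficiency literature.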
1902.03417 | 2954888401 | The urban wastewater sector is being pushed to optimize processes in order to reduce energy consumption without compromising its quality standards. Energy costs can represent a significant share of global operational costs (between 50% and 60%) for an energy-intensive consumer. Pumping is the largest consumer of electrical energy in a wastewater treatment plant. Thus, the optimal control of pump units can help utilities decrease operational costs. This work describes an innovative predictive control policy for wastewater variable-frequency pumps that minimizes electrical energy consumption, considering uncertainty forecasts for the wastewater intake rate and information collected by sensors accessible through the Supervisory Control and Data Acquisition system. The proposed control method combines statistical learning (regression and predictive models) and deep reinforcement learning (Proximal Policy Optimization). The following main original contributions are produced: i) model-free and data-driven predictive control; ii) a control philosophy focused on operating the tank with a variable wastewater set-point level; iii) use of supervised learning to generate synthetic data for pre-training the reinforcement learning policy, without the need to physically interact with the system. The results for a real case study over 90 days show a 16.7% decrease in electrical energy consumption while still achieving a 97% reduction in the number of alarms (tank level above 7.2 meters) when compared with the current operating scenario (operating with a fixed set-point level). The numerical analysis showed that the proposed data-driven method is able to explore the trade-off between the number of alarms and consumption minimization, offering different options to decision-makers. | This problem is gaining new attention with demand-side management programs and opportunities created by the synergy between smart electric and water distribution networks @cite_8.
This new paradigm requires tractable convex mixed-integer non-linear programming (MINLP) formulations that are robust to highly dynamic electricity tariffs @cite_15, or the use of different variants of dynamic programming to solve the optimization problem @cite_30. Furthermore, opportunities such as participation in ancillary services (i.e., short-term operating reserve, firm frequency response, frequency control by demand management) will emerge for flexible pumping systems @cite_13. | {
"cite_N": [
"@cite_30",
"@cite_15",
"@cite_13",
"@cite_8"
],
"mid": [
"2081870710",
"2226806742",
"2296333757",
""
],
"abstract": [
"The optimal operation scheduling of a pumping station with multiple pumps is formulated as a dynamic programming problem. Based on the characteristics of the problem, an extended reduced dynamic programming algorithm (RDPA) is proposed to solve the problem. Both the energy cost and the maintenance cost are considered in the performance function of the optimization problem. The extended RDPA can significantly reduce the computational time when it is compared to conventional DP algorithms. Simulation shows the feasibility of the reduction of the operation cost.",
"We address the day-ahead pump scheduling problem for a class of branched water networks with one pumping station raising water to tanks at different places and levels. This common class is representative of rural drinking water distribution networks, though not exclusively so. Many sophisticated heuristic algorithms have been designed to tackle the challenging general problem. By focusing on a class of networks, we show that a pure model-based approach relying on a tractable mathematical program is pertinent for real-size applications. The practical advantages of this approach are that it produces optimal or near-optimal solutions with performance guarantees in near real-time, and that it is replicable without algorithmic development. We apply the approach to a real drinking water supply system and compare it to the current operational strategy based on historical data. An extensive empirical analysis assesses the financial and practical benefits: (1) it achieves significant savings in terms of operation costs and energy consumption, (2) its robustness to dynamic pricing means that demand-response can be efficiently implemented in this type of energy-intensive utility.",
"Significant changes in the power generation mix are posing new challenges for the balancing systems of the grid. Many of these challenges are in the secondary electricity grid regulation services and could be met through demand response (DR) services. We explore the opportunities for a water distribution system (WDS) to provide balancing services with demand response through pump scheduling and evaluate the associated benefits. Using a benchmark network and demand response mechanisms available in the UK, these benefits are assessed in terms of reduced green house gas (GHG) emissions from the grid due to the displacement of more polluting power sources and additional revenues for water utilities. The optimal pump scheduling problem is formulated as a mixed-integer optimisation problem and solved using a branch and bound algorithm. This new formulation finds the optimal level of power capacity to commit to the provision of demand response for a range of reserve energy provision and frequency response schemes offered in the UK. For the first time we show that DR from WDS can offer financial benefits to WDS operators while providing response energy to the grid with less greenhouse gas emissions than competing reserve energy technologies. Using a Monte Carlo simulation based on data from 2014, we demonstrate that the cost of providing the storage energy is less than the financial compensation available for the equivalent energy supply. The GHG emissions from the demand response provision from a WDS are also shown to be smaller than those of contemporary competing technologies such as open cycle gas turbines. The demand response services considered vary in their response time and duration as well as commitment requirements. The financial viability of a demand response service committed continuously is shown to be strongly dependent on the utilisation of the pumps and the electricity tariffs used by water utilities. 
Through the analysis of range of water demand scenarios and financial incentives using real market data, we demonstrate how a WDS can participate in a demand response scheme and generate financial gains and environmental benefits.",
""
]
} |
1902.03417 | 2954888401 | Urban wastewater sector is being pushed to optimize processes in order to reduce energy consumption without compromising its quality standards. Energy costs can represent a significant share of the global operational costs (between 50% and 60%) in an intensive energy consumer. Pumping is the largest consumer of electrical energy in a wastewater treatment plant. Thus, the optimal control of pump units can help the utilities to decrease operational costs. This work describes an innovative predictive control policy for wastewater variable-frequency pumps that minimize electrical energy consumption, considering uncertainty forecasts for wastewater intake rate and information collected by sensors accessible through the Supervisory Control and Data Acquisition system. The proposed control method combines statistical learning (regression and predictive models) and deep reinforcement learning (Proximal Policy Optimization). The following main original contributions are produced: i) model-free and data-driven predictive control; ii) control philosophy focused on operating the tank with a variable wastewater set-point level; iii) use of supervised learning to generate synthetic data for pre-training the reinforcement learning policy, without the need to physically interact with the system. The results for a real case-study during 90 days show a 16.7% decrease in electrical energy consumption while still achieving a 97% reduction in the number of alarms (tank level above 7.2 meters) when compared with the current operating scenario (operating with a fixed set-point level). The numerical analysis showed that the proposed data-driven method is able to explore the trade-off between number of alarms and consumption minimization, offering different options to decision-makers. 
| All the aforementioned works share the following characteristics: i) they focus on the operational scheduling of water distribution systems; ii) they do not cover real-time control of pumping units (i.e., continuous optimization) and are designed for fixed-speed motored pumps (i.e., integer variables); iii) the formulations rely on approximations of the hydraulic model. Moreover, in WWTPs, "long-term" flexibility (i.e., load shifting) is lower than in water distribution. Therefore, most of the potential energy savings result from "short-term" flexibility, i.e., energy optimization of variable-speed pumps close to real time. For instance, the MPC proposed by van in @cite_2 is only applicable in practice under certain conditions: i) constant or known water inflow rate; ii) large water reservoirs that allow pump scheduling over a 24-hour window (e.g., considering different electricity tariffs); iii) binary (on/off) control of pumps. | {
"cite_N": [
"@cite_2"
],
"mid": [
"2156368412"
],
"abstract": [
"This paper defines and simulates a closed-loop optimal control strategy for load shifting in a plant that is charged for electricity on both time-of-use (TOU) and maximum demand (MD). A model predictive control approach is used to implement the closed-loop optimal control model, and the optimization problem is solved with integer programming. The simulated control model yields near optimal switching times which reduce the TOU and MD costs. The results show a saving of 5.8 for the overall plant, and the largest portion of the saving is due to a reduction in MD. The effect of disturbances, model uncertainty and plant failure is also simulated to demonstrate the benefits of a model predictive control model."
]
} |
1902.03417 | 2954888401 | Urban wastewater sector is being pushed to optimize processes in order to reduce energy consumption without compromising its quality standards. Energy costs can represent a significant share of the global operational costs (between 50% and 60%) in an intensive energy consumer. Pumping is the largest consumer of electrical energy in a wastewater treatment plant. Thus, the optimal control of pump units can help the utilities to decrease operational costs. This work describes an innovative predictive control policy for wastewater variable-frequency pumps that minimize electrical energy consumption, considering uncertainty forecasts for wastewater intake rate and information collected by sensors accessible through the Supervisory Control and Data Acquisition system. The proposed control method combines statistical learning (regression and predictive models) and deep reinforcement learning (Proximal Policy Optimization). The following main original contributions are produced: i) model-free and data-driven predictive control; ii) control philosophy focused on operating the tank with a variable wastewater set-point level; iii) use of supervised learning to generate synthetic data for pre-training the reinforcement learning policy, without the need to physically interact with the system. The results for a real case-study during 90 days show a 16.7% decrease in electrical energy consumption while still achieving a 97% reduction in the number of alarms (tank level above 7.2 meters) when compared with the current operating scenario (operating with a fixed set-point level). The numerical analysis showed that the proposed data-driven method is able to explore the trade-off between number of alarms and consumption minimization, offering different options to decision-makers. | For this problem, the industry state of the art is to turn pumps on/off according to a level-based control system. 
For instance, patent @cite_22 describes a method for operating a pumping system of a WWTP in which the pump starts operating if the wastewater level in a tank exceeds a first level and stops pumping if the level drops below a second level. However, control algorithms based on soft computing are gaining attention and being explored for WWTPs and water distribution systems, mainly in cases where the physical (or mathematical) model is not available or is too complex to be integrated into a classical controller. | {
"cite_N": [
"@cite_22"
],
"mid": [
"1029101094"
],
"abstract": [
"A method is provided for operating a wastewater pumping station of a wastewater pumping network. The pumping station includes a pump, that starts pumping if a level of a wastewater in a tank exceeds a first wastewater level, and the pump stops pumping if the level of the wastewater in the tank drops below a second level. The method includes determining a magnitude of a parameter (Psys, Q, n, ΔP, Pelectrical, cos φ, I) expressing the load of the wastewater pumping network. If it is determined that the magnitude of the parameter has passed a specified threshold, the pump is activated to start pumping in an energy optimization mode. A control unit is also provided for the wastewater pumping station of the wastewater pumping network, and a system is provided for centrally controlling a plurality of pumps of wastewater pumping stations in a wastewater pumping network."
]
} |
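The two-threshold start/stop rule described in the patent abstract above can be sketched as a simple hysteresis controller. This is an editorial illustration only, not code from the cited patent; all names and threshold values are hypothetical.

```python
def hysteresis_control(level_m, pump_on, start_level_m=7.0, stop_level_m=2.0):
    """Two-threshold (hysteresis) pump rule: start pumping when the tank
    level exceeds the first threshold, stop when it drops below the
    second, and otherwise keep the current pump state."""
    if level_m > start_level_m:
        return True
    if level_m < stop_level_m:
        return False
    return pump_on
```

The dead band between the two thresholds is what prevents rapid on/off cycling of the pump around a single set-point.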
1902.03417 | 2954888401 | Urban wastewater sector is being pushed to optimize processes in order to reduce energy consumption without compromising its quality standards. Energy costs can represent a significant share of the global operational costs (between 50% and 60%) in an intensive energy consumer. Pumping is the largest consumer of electrical energy in a wastewater treatment plant. Thus, the optimal control of pump units can help the utilities to decrease operational costs. This work describes an innovative predictive control policy for wastewater variable-frequency pumps that minimize electrical energy consumption, considering uncertainty forecasts for wastewater intake rate and information collected by sensors accessible through the Supervisory Control and Data Acquisition system. The proposed control method combines statistical learning (regression and predictive models) and deep reinforcement learning (Proximal Policy Optimization). The following main original contributions are produced: i) model-free and data-driven predictive control; ii) control philosophy focused on operating the tank with a variable wastewater set-point level; iii) use of supervised learning to generate synthetic data for pre-training the reinforcement learning policy, without the need to physically interact with the system. The results for a real case-study during 90 days show a 16.7% decrease in electrical energy consumption while still achieving a 97% reduction in the number of alarms (tank level above 7.2 meters) when compared with the current operating scenario (operating with a fixed set-point level). The numerical analysis showed that the proposed data-driven method is able to explore the trade-off between number of alarms and consumption minimization, offering different options to decision-makers. | A fuzzy logic controller was described to regulate the aeration in the bioreactor of a WWTP @cite_31 . 
The controller integrated information from two signals, dissolved oxygen and oxidation-reduction potential values, to minimize electrical energy consumption. Reinforcement learning (RL) has also been applied to optimize different processes in WWTPs. A model-free control approach was proposed for advanced oxidation (Fenton) processes since, according to the authors, it is extremely difficult to develop a precise mathematical model and the system is subject to several uncertainties and time-evolving characteristics @cite_23 . As an alternative to proportional–integral–derivative (PID) controllers, Hernández-del- explored RL for oxygen control in the N-ammonia removal process, with the main objective of minimizing WWTP operational cost (including energy costs) @cite_24 . Water quality and energy consumption of the aeration process were optimized by combining boosting trees for feature selection with different machine learning algorithms (e.g., artificial neural networks, random forests) for modeling the relationship between input, controllable, and output variables @cite_7 . | {
"cite_N": [
"@cite_24",
"@cite_31",
"@cite_7",
"@cite_23"
],
"mid": [
"2020066883",
"1990365467",
"2479755133",
"1989275221"
],
"abstract": [
"The aim of this paper is to face one of the main problems in the control of wastewater treatment plants (WWTPs). It appears that the control system does not respond as it should because of changes on influent load or flow. In that case, it is required that a plant operator tunes up the parameters of the plant. The dissolved oxygen setpoint is one of those parameters. In this paper, we present a model-free reinforcement learning agent that autonomously learns to actively tune up the oxygen setpoint by itself. By active, we mean continuous, minute after minute, tuning up. By autonomous and adaptive, we mean that the agent learns just by itself from its direct interaction with the WWTP. This agent has been tested with data from the well-known public benchamark simulation model no. 1, and the results that are obtained allow us to conclude that it is possible to build agents that actively and autonomously adapt to each new scenario to control a WWTP.",
"Many uncertain factors affect the operation of Wastewater Treatment Plants. Due to the complexity of biological wastewater treatment processes, classical methods show significant difficulties when trying to control them automatically. Consequently soft computing techniques and, specifically, fuzzy logic appears to be a good candidate for controlling these ill-defined, time-varying and non-linear systems. This paper describes the development and implementation of a Fuzzy Logic Controller to regulate the aeration in the Taradell Wastewater Treatment Plant. The main goal of this control process is to save energy without decreasing the quality of the effluent discharged. The fuzzy controller integrates the information coming from two different signals: the Dissolved Oxygen and Oxidation-Reduction Potential values. The simulation results proved that fuzzy logic is a good tool for controlling the aeration of the wastewater treatment plant. The results obtained show that energy savings of more than 10 can be ac...",
"Abstract Being water quality oriented, large-scale industries such as wastewater treatment plants tend to overlook potential savings in energy consumption. Wastewater treatment process includes energy intensive equipment such as pumps and blowers to move and treat wastewater. Presently, a data-driven approach has been applied for aeration process modeling and optimization of one large scale wastewater in Midwest. More specifically, aeration process optimization is carried out with an aim to minimize energy usage without sacrificing water quality. Models developed by data mining algorithms are useful in developing a clear and concise relationship among input and output variables. Results indicate that a great deal of saving in energy can be made while keeping the water quality within limit. Limitation of the work is also discussed.",
"This article presents a proposal, based on the model-free learning control (MFLC) approach, for the control of the advanced oxidation process in wastewater plants. This is prompted by the fact that many organic pollutants in industrial wastewaters are resistant to conventional biological treatments, and the fact that advanced oxidation processes, controlled with learning controllers measuring the oxidation-reduction potential (ORP), give a cost-effective solution. The proposed automation strategy denoted MFLC-MSA is based on the integration of reinforcement learning with multiple step actions. This enables the most adequate control strategy to be learned directly from the process response to selected control inputs. Thus, the proposed methodology is satisfactory for oxidation processes of wastewater treatment plants, where the development of an adequate model for control design is usually too costly. The algorithm proposed has been tested in a lab pilot plant, where phenolic wastewater is oxidized to carboxylic acids and carbon dioxide. The obtained experimental results show that the proposed MFLC-MSA strategy can achieve good performance to guarantee on-specification discharge at maximum degradation rate using readily available measurements such as pH and ORP, inferential measurements of oxidation kinetics and peroxide consumption, respectively."
]
} |
1902.03417 | 2954888401 | Urban wastewater sector is being pushed to optimize processes in order to reduce energy consumption without compromising its quality standards. Energy costs can represent a significant share of the global operational costs (between 50% and 60%) in an intensive energy consumer. Pumping is the largest consumer of electrical energy in a wastewater treatment plant. Thus, the optimal control of pump units can help the utilities to decrease operational costs. This work describes an innovative predictive control policy for wastewater variable-frequency pumps that minimize electrical energy consumption, considering uncertainty forecasts for wastewater intake rate and information collected by sensors accessible through the Supervisory Control and Data Acquisition system. The proposed control method combines statistical learning (regression and predictive models) and deep reinforcement learning (Proximal Policy Optimization). The following main original contributions are produced: i) model-free and data-driven predictive control; ii) control philosophy focused on operating the tank with a variable wastewater set-point level; iii) use of supervised learning to generate synthetic data for pre-training the reinforcement learning policy, without the need to physically interact with the system. The results for a real case-study during 90 days show a 16.7% decrease in electrical energy consumption while still achieving a 97% reduction in the number of alarms (tank level above 7.2 meters) when compared with the current operating scenario (operating with a fixed set-point level). The numerical analysis showed that the proposed data-driven method is able to explore the trade-off between number of alarms and consumption minimization, offering different options to decision-makers. 
| More closely related to the present work, a rule-based method was described to control WWTP pumps according to the measured wastewater tank level and minimize electrical energy consumption using a fuzzy logic controller that works as follows: on rainy days pumps react faster and frequency is increased quickly to avoid flooding; on dry days pumps can react more slowly and frequency is decreased softly to prevent draining the tank @cite_19 . | {
"cite_N": [
"@cite_19"
],
"mid": [
"2122464586"
],
"abstract": [
"In this study, first, designing and operating conditions of a wastewater treatment plant located in the south of Iskenderun of Hatay, Turkey, is examined and analyzed. It is shown that influent pumps have large part of energy consumption in plant. Second, some simulation and experimental studies were performed on plant. Energy efficiency and plant optimization have provided 40 of energy consumption. Also, experimental study is simulated by Matlab Fuzzy LogicController. The experimental results are in good agreement with the simulation results. © 2013 American Institute of Chemical Engineers Environ Prog, 33: 556–563, 2014"
]
} |
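The rainy-day/dry-day rule of thumb from the fuzzy controller in the record above can be sketched as a crisp (non-fuzzy) rate-limited frequency update. This is a hedged illustration of the idea only; the function name, step sizes, and frequency bounds are hypothetical and not taken from @cite_19.

```python
def adjust_frequency_hz(freq_hz, day_type, level_m, setpoint_m,
                        fast_step=5.0, slow_step=1.0,
                        f_min=30.0, f_max=50.0):
    """Rainy days: react fast (raise frequency quickly to avoid flooding).
    Dry days: react slowly (change frequency softly to avoid draining
    the tank). The pump frequency is clamped to its operating range."""
    step = fast_step if day_type == "rainy" else slow_step
    freq_hz += step if level_m > setpoint_m else -step
    return min(f_max, max(f_min, freq_hz))
```

A fuzzy controller would blend these two behaviors continuously via membership functions rather than switching on a hard `day_type` label.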
1902.03417 | 2954888401 | Urban wastewater sector is being pushed to optimize processes in order to reduce energy consumption without compromising its quality standards. Energy costs can represent a significant share of the global operational costs (between 50% and 60%) in an intensive energy consumer. Pumping is the largest consumer of electrical energy in a wastewater treatment plant. Thus, the optimal control of pump units can help the utilities to decrease operational costs. This work describes an innovative predictive control policy for wastewater variable-frequency pumps that minimize electrical energy consumption, considering uncertainty forecasts for wastewater intake rate and information collected by sensors accessible through the Supervisory Control and Data Acquisition system. The proposed control method combines statistical learning (regression and predictive models) and deep reinforcement learning (Proximal Policy Optimization). The following main original contributions are produced: i) model-free and data-driven predictive control; ii) control philosophy focused on operating the tank with a variable wastewater set-point level; iii) use of supervised learning to generate synthetic data for pre-training the reinforcement learning policy, without the need to physically interact with the system. The results for a real case-study during 90 days show a 16.7% decrease in electrical energy consumption while still achieving a 97% reduction in the number of alarms (tank level above 7.2 meters) when compared with the current operating scenario (operating with a fixed set-point level). The numerical analysis showed that the proposed data-driven method is able to explore the trade-off between number of alarms and consumption minimization, offering different options to decision-makers. | Data mining algorithms were also explored to optimize pump operation, including the modeling of input and output variables. 
Wei and Kusiak applied static multi-layer perceptron and dynamic neural networks to forecast the influent flow in a WWTP @cite_10 . Zhang and Kusiak tested seven data mining algorithms based on 5-min and 30-min data to construct a pump energy consumption model and a water flow rate (after the pumps) model of the preliminary treatment process of a WWTP @cite_5 . These two data-driven models were incorporated into different formulations of an optimization problem to generate optimal pump schedules: (a) an MINLP problem to reduce energy consumption, solved with particle swarm optimization @cite_14 or with a greedy electromagnetism-like algorithm @cite_9 ; (b) a bi-objective optimization solved with an artificial immune network algorithm to minimize energy consumption and maximize the pumped wastewater flow rate @cite_34 . These optimization models can be enhanced with a discrete-state Markov process for modeling maintenance decisions @cite_12 . | {
"cite_N": [
"@cite_14",
"@cite_9",
"@cite_5",
"@cite_34",
"@cite_10",
"@cite_12"
],
"mid": [
"2129513038",
"",
"2020395529",
"2212491726",
"2008279066",
"2152004941"
],
"abstract": [
"This paper discusses energy savings in wastewater processing plant pump operations and proposes a pump system scheduling model to generate operational schedules to reduce energy consumption. A neural network algorithm is utilized to model pump energy consumption and fluid flow rate after pumping. The scheduling model is a mixed-integer nonlinear programming problem (MINLP). As solving a data-driven MINLP is challenging, a migrated particle swarm optimization algorithm is proposed. The modeling and optimization results show that the performance of the pump system can be significantly improved based on the computed schedules.",
"",
"In this paper, a data-mining approach for modeling pumps in the wastewater preliminary treatment process is discussed. Data-mining algorithms are utilized to develop pump performance models based on industrial data collected at a municipal wastewater processing plant. The performance of wastewater pumps is described by two parameters, pump energy consumption and water flow rate after the pumps. Two types of models, dynamic and steady state, are established to predict pump energy consumption and water flow rate. The first type of model is developed based on 5-min data and the second type is built based on 30-min data. The accuracy of the models has been validated.",
"In this paper, a data-driven framework for improving the performance of wastewater pumping systems has been developed by fusing knowledge including the data mining, mathematical modeling, and computational intelligence. Modeling pump system performance in terms of the energy consumption and pumped wastewater flow rate based on industrial data with neural networks is examined. A bi-objective optimization model incorporating data-driven components is formulated to minimize the energy consumption and maximize the pumped wastewater flow rate. An adaptive mechanism is developed to automatically determine weights associated with two objectives by considering the wet well level and influent flow rate. The optimization model is solved by an artificial immune network algorithm. A comparative analysis between the optimization results and the observed data is performed to demonstrate the improvement of the pumping system performance. Results indicate that saving energy while maintaining the pumping performance is potentially achievable with the proposed data-driven framework.",
"Predicting influent flow is important in the management of a wastewater treatment plant (WWTP). Because influent flow includes municipal sewage and rainfall runoff, it exhibits nonlinear spatial and temporal behavior and therefore makes it difficult to model. In this paper, a neural network approach is used to predict influent flow in the WWTP. The model inputs include historical influent data collected at a local WWTP, rainfall data and radar reflectivity data collected by the local weather station. A static multi-layer perceptron neural network performs well for the current time prediction but a time lag occurs and increases with the time horizon. A dynamic neural network with an online corrector is proposed to solve the time lag problem and increase the prediction accuracy for longer time horizons. The computational results show that the proposed neural network accurately predicts the influent flow for time horizons up to 300 min.",
"A data-driven model for scheduling pumps in a wastewater treatment process is proposed. The objective is to minimize the cost of pump operations and maintenance. A neural network algorithm is applied to model performance of the pumps using the data collected at a municipal wastewater treatment plant. The discrete-state Markov process is utilized to develop a model of maintenance decisions. The developed pump performance and maintenance models are integrated into a scheduling model. A hierarchical particle swarm optimization algorithm is designed to solve the proposed scheduling model. The concepts developed in this paper are illustrated with two case studies."
]
} |
1902.03417 | 2954888401 | Urban wastewater sector is being pushed to optimize processes in order to reduce energy consumption without compromising its quality standards. Energy costs can represent a significant share of the global operational costs (between 50% and 60%) in an intensive energy consumer. Pumping is the largest consumer of electrical energy in a wastewater treatment plant. Thus, the optimal control of pump units can help the utilities to decrease operational costs. This work describes an innovative predictive control policy for wastewater variable-frequency pumps that minimize electrical energy consumption, considering uncertainty forecasts for wastewater intake rate and information collected by sensors accessible through the Supervisory Control and Data Acquisition system. The proposed control method combines statistical learning (regression and predictive models) and deep reinforcement learning (Proximal Policy Optimization). The following main original contributions are produced: i) model-free and data-driven predictive control; ii) control philosophy focused on operating the tank with a variable wastewater set-point level; iii) use of supervised learning to generate synthetic data for pre-training the reinforcement learning policy, without the need to physically interact with the system. The results for a real case-study during 90 days show a 16.7% decrease in electrical energy consumption while still achieving a 97% reduction in the number of alarms (tank level above 7.2 meters) when compared with the current operating scenario (operating with a fixed set-point level). The numerical analysis showed that the proposed data-driven method is able to explore the trade-off between number of alarms and consumption minimization, offering different options to decision-makers. | Considering the reviewed literature, the present paper produces the following original contributions. 
It applies a model-free and data-driven control approach based on RL, in contrast to the use of meta-heuristics @cite_14 @cite_9 @cite_34 or fuzzy logic control @cite_19 . The control philosophy is focused on operating the tank with a variable water level, instead of controlling the frequency increase/decrease rate as in @cite_19 . The vector of system states in RL is extended to include probabilistic forecasts of the wastewater intake rate, implementing predictive control of the pumping system, which had not previously been proposed in the literature. Finally, a real-world implementation of the RL control method is made possible by applying data-mining algorithms to construct models from data and generate synthetic data for pre-training the RL algorithm, without the need to physically interact with the system. This also represents an original contribution compared to other control problems such as @cite_24 @cite_23 . | {
"cite_N": [
"@cite_14",
"@cite_9",
"@cite_24",
"@cite_19",
"@cite_23",
"@cite_34"
],
"mid": [
"2129513038",
"",
"2020066883",
"2122464586",
"1989275221",
"2212491726"
],
"abstract": [
"This paper discusses energy savings in wastewater processing plant pump operations and proposes a pump system scheduling model to generate operational schedules to reduce energy consumption. A neural network algorithm is utilized to model pump energy consumption and fluid flow rate after pumping. The scheduling model is a mixed-integer nonlinear programming problem (MINLP). As solving a data-driven MINLP is challenging, a migrated particle swarm optimization algorithm is proposed. The modeling and optimization results show that the performance of the pump system can be significantly improved based on the computed schedules.",
"",
"The aim of this paper is to face one of the main problems in the control of wastewater treatment plants (WWTPs). It appears that the control system does not respond as it should because of changes on influent load or flow. In that case, it is required that a plant operator tunes up the parameters of the plant. The dissolved oxygen setpoint is one of those parameters. In this paper, we present a model-free reinforcement learning agent that autonomously learns to actively tune up the oxygen setpoint by itself. By active, we mean continuous, minute after minute, tuning up. By autonomous and adaptive, we mean that the agent learns just by itself from its direct interaction with the WWTP. This agent has been tested with data from the well-known public benchamark simulation model no. 1, and the results that are obtained allow us to conclude that it is possible to build agents that actively and autonomously adapt to each new scenario to control a WWTP.",
"In this study, first, designing and operating conditions of a wastewater treatment plant located in the south of Iskenderun of Hatay, Turkey, is examined and analyzed. It is shown that influent pumps have large part of energy consumption in plant. Second, some simulation and experimental studies were performed on plant. Energy efficiency and plant optimization have provided 40 of energy consumption. Also, experimental study is simulated by Matlab Fuzzy LogicController. The experimental results are in good agreement with the simulation results. © 2013 American Institute of Chemical Engineers Environ Prog, 33: 556–563, 2014",
"This article presents a proposal, based on the model-free learning control (MFLC) approach, for the control of the advanced oxidation process in wastewater plants. This is prompted by the fact that many organic pollutants in industrial wastewaters are resistant to conventional biological treatments, and the fact that advanced oxidation processes, controlled with learning controllers measuring the oxidation-reduction potential (ORP), give a cost-effective solution. The proposed automation strategy denoted MFLC-MSA is based on the integration of reinforcement learning with multiple step actions. This enables the most adequate control strategy to be learned directly from the process response to selected control inputs. Thus, the proposed methodology is satisfactory for oxidation processes of wastewater treatment plants, where the development of an adequate model for control design is usually too costly. The algorithm proposed has been tested in a lab pilot plant, where phenolic wastewater is oxidized to carboxylic acids and carbon dioxide. The obtained experimental results show that the proposed MFLC-MSA strategy can achieve good performance to guarantee on-specification discharge at maximum degradation rate using readily available measurements such as pH and ORP, inferential measurements of oxidation kinetics and peroxide consumption, respectively.",
"In this paper, a data-driven framework for improving the performance of wastewater pumping systems has been developed by fusing knowledge including the data mining, mathematical modeling, and computational intelligence. Modeling pump system performance in terms of the energy consumption and pumped wastewater flow rate based on industrial data with neural networks is examined. A bi-objective optimization model incorporating data-driven components is formulated to minimize the energy consumption and maximize the pumped wastewater flow rate. An adaptive mechanism is developed to automatically determine weights associated with two objectives by considering the wet well level and influent flow rate. The optimization model is solved by an artificial immune network algorithm. A comparative analysis between the optimization results and the observed data is performed to demonstrate the improvement of the pumping system performance. Results indicate that saving energy while maintaining the pumping performance is potentially achievable with the proposed data-driven framework."
]
} |
1902.03245 | 2919060532 | Although artificial intelligence holds promise for addressing societal challenges, issues of exactly which tasks to automate and the extent to do so remain understudied. We approach the problem of task delegability from a human-centered perspective by developing a framework on human perception of task delegation to artificial intelligence. We consider four high-level factors that can contribute to a delegation decision: motivation, difficulty, risk, and trust. To obtain an empirical understanding of human preferences in different tasks, we build a dataset of 100 tasks from academic papers, popular media portrayal of AI, and everyday life. For each task, we administer a survey to collect judgments of each factor and ask subjects to pick the extent to which they prefer AI involvement. We find little preference for full AI control and a strong preference for machine-in-the-loop designs, in which humans play the leading role. Our framework can effectively predict human preferences in degrees of AI assistance. Among the four factors, trust is the most predictive of human preferences of optimal human-machine delegation. This framework represents a first step towards characterizing human preferences of automation across tasks. We hope this work may encourage and aid in future efforts towards understanding such individual attitudes; our goal is to inform the public and the AI research community rather than dictating any direction in technology development. | sheridan-parasuraman2000model's Levels of Automation is the closest to our work @cite_7 . However, their work is primarily concerned with performance-based criteria (e.g., capability, reliability, cost), while our primary interest involves human preferences. | {
"cite_N": [
"@cite_7"
],
"mid": [
"2123179704"
],
"abstract": [
"We outline a model for types and levels of automation that provides a framework and an objective basis for deciding which system functions should be automated and to what extent. Appropriate selection is important because automation does not merely supplant but changes human activity and can impose new coordination demands on the human operator. We propose that automation can be applied to four broad classes of functions: 1) information acquisition; 2) information analysis; 3) decision and action selection; and 4) action implementation. Within each of these types, automation can be applied across a continuum of levels from low to high, i.e., from fully manual to fully automatic. A particular system can involve automation of all four types at different levels. The human performance consequences of particular types and levels of automation constitute primary evaluative criteria for automation design using our model. Secondary evaluative criteria include automation reliability and the costs of decision action consequences, among others. Examples of recommended types and levels of automation are provided to illustrate the application of the model to automation design."
]
} |
1902.03245 | 2919060532 | Although artificial intelligence holds promise for addressing societal challenges, issues of exactly which tasks to automate and the extent to do so remain understudied. We approach the problem of task delegability from a human-centered perspective by developing a framework on human perception of task delegation to artificial intelligence. We consider four high-level factors that can contribute to a delegation decision: motivation, difficulty, risk, and trust. To obtain an empirical understanding of human preferences in different tasks, we build a dataset of 100 tasks from academic papers, popular media portrayal of AI, and everyday life. For each task, we administer a survey to collect judgments of each factor and ask subjects to pick the extent to which they prefer AI involvement. We find little preference for full AI control and a strong preference for machine-in-the-loop designs, in which humans play the leading role. Our framework can effectively predict human preferences in degrees of AI assistance. Among the four factors, trust is the most predictive of human preferences of optimal human-machine delegation. This framework represents a first step towards characterizing human preferences of automation across tasks. We hope this work may encourage and aid in future efforts towards understanding such individual attitudes; our goal is to inform the public and the AI research community rather than dictating any direction in technology development. | Shared-control design paradigms. Many tasks are amenable to a mix of human and machine control. Mixed-initiative systems and collaborative control have gained traction over function allocation @cite_26 @cite_38 , mainly through recognition that such systems can transform the task itself through conflicts and interdependence @cite_34 . | {
"cite_N": [
"@cite_38",
"@cite_26",
"@cite_34"
],
"mid": [
"2059216172",
"",
"2028626340"
],
"abstract": [
"Recent debate has centered on the relative promise of focusinguser-interface research on developing new metaphors and tools thatenhance users abilities to directly manipulate objects versusdirecting effort toward developing interface agents that provideautomation. In this paper, we review principles that show promisefor allowing engineers to enhance human-computer interactionthrough an elegant coupling of automated services with directmanipulation. Key ideas will be highlighted in terms of the Lookoutsystem for scheduling and meeting management.",
"",
"Coactive Design is a new approach to address the increasingly sophisticated roles that people and robots play as the use of robots expands into new, complex domains. The approach is motivated by the desire for robots to perform less like teleoperated tools or independent automatons and more like interdependent teammates. In this article, we describe what it means to be interdependent, why this is important, and the design implications that follow from this perspective. We argue for a human-robot system model that supports interdependence through careful attention to requirements for observability, predictability, and directability. We present a Coactive Design method and show how it can be a useful approach for developers trying to understand how to translate high-level teamwork concepts into reusable control algorithms, interface elements, and behaviors that enable robots to fulfill their envisioned role as teammates. As an example of the coactive design approach, we present our results from the DARPA Virtual Robotics Challenge, a competition designed to spur development of advanced robots that can assist humans in recovering from natural and man-made disasters. Twenty-six teams from eight countries competed in three different tasks providing an excellent evaluation of the relative effectiveness of different approaches to human-machine system design."
]
} |
1902.03245 | 2919060532 | Although artificial intelligence holds promise for addressing societal challenges, issues of exactly which tasks to automate and the extent to do so remain understudied. We approach the problem of task delegability from a human-centered perspective by developing a framework on human perception of task delegation to artificial intelligence. We consider four high-level factors that can contribute to a delegation decision: motivation, difficulty, risk, and trust. To obtain an empirical understanding of human preferences in different tasks, we build a dataset of 100 tasks from academic papers, popular media portrayal of AI, and everyday life. For each task, we administer a survey to collect judgments of each factor and ask subjects to pick the extent to which they prefer AI involvement. We find little preference for full AI control and a strong preference for machine-in-the-loop designs, in which humans play the leading role. Our framework can effectively predict human preferences in degrees of AI assistance. Among the four factors, trust is the most predictive of human preferences of optimal human-machine delegation. This framework represents a first step towards characterizing human preferences of automation across tasks. We hope this work may encourage and aid in future efforts towards understanding such individual attitudes; our goal is to inform the public and the AI research community rather than dictating any direction in technology development. | We broadly split work on such systems into categories. We find this split more flexible and practical for our application than the Levels of Automation. On one side we have human-in-the-loop ML designs, wherein humans assist machines. People handle edge cases, label, and refine system outputs. Such designs enjoy prevalence in applications from vision recognition to machine translation @cite_5 @cite_17 @cite_24 . | {
"cite_N": [
"@cite_24",
"@cite_5",
"@cite_17"
],
"mid": [
"2042296808",
"2103490241",
"2003238113"
],
"abstract": [
"",
"We present an interactive, hybrid human-computer method for object classification. The method applies to classes of objects that are recognizable by people with appropriate expertise (e.g., animal species or airplane model), but not (in general) by people without such expertise. It can be seen as a visual version of the 20 questions game, where questions based on simple visual attributes are posed interactively. The goal is to identify the true class while minimizing the number of questions asked, using the visual content of the image. We introduce a general framework for incorporating almost any off-the-shelf multi-class object recognition algorithm into the visual 20 questions game, and provide methodologies to account for imperfect user responses and unreliable computer vision algorithms. We evaluate our methods on Birds-200, a difficult dataset of 200 tightly-related bird species, and on the Animals With Attributes dataset. Our results demonstrate that incorporating user input drives up recognition accuracy to levels that are good enough for practical applications, while at the same time, computer vision reduces the amount of human interaction required.",
"Perceptual user interfaces (PUIs) are an important part of ubiquitous computing. Creating such interfaces is difficult because of the image and signal processing knowledge required for creating classifiers. We propose an interactive machine-learning (IML) model that allows users to train, classify view and correct the classifications. The concept and implementation details of IML are discussed and contrasted with classical machine learning models. Evaluations of two algorithms are also presented. We also briefly describe Image Processing with Crayons (Crayons), which is a tool for creating new camera-based interfaces using a simple painting metaphor. The Crayons tool embodies our notions of interactive machine learning"
]
} |
1902.03245 | 2919060532 | Although artificial intelligence holds promise for addressing societal challenges, issues of exactly which tasks to automate and the extent to do so remain understudied. We approach the problem of task delegability from a human-centered perspective by developing a framework on human perception of task delegation to artificial intelligence. We consider four high-level factors that can contribute to a delegation decision: motivation, difficulty, risk, and trust. To obtain an empirical understanding of human preferences in different tasks, we build a dataset of 100 tasks from academic papers, popular media portrayal of AI, and everyday life. For each task, we administer a survey to collect judgments of each factor and ask subjects to pick the extent to which they prefer AI involvement. We find little preference for full AI control and a strong preference for machine-in-the-loop designs, in which humans play the leading role. Our framework can effectively predict human preferences in degrees of AI assistance. Among the four factors, trust is the most predictive of human preferences of optimal human-machine delegation. This framework represents a first step towards characterizing human preferences of automation across tasks. We hope this work may encourage and aid in future efforts towards understanding such individual attitudes; our goal is to inform the public and the AI research community rather than dictating any direction in technology development. | Alternatively, a machine-in-the-loop paradigm places the human in the primary position of action and control while the machine assists. Examples of this include a creative-writing assistance system that generates contextual suggestions @cite_22 @cite_18 , and predicting situations in which people are likely to make errors in judgment in decision-making @cite_19 .
Even tasks which should not be automated may still benefit from machine assistance, especially if human performance is not the upper bound as kleinberg-bail found in judge bail decisions @cite_39 . | {
"cite_N": [
"@cite_19",
"@cite_18",
"@cite_22",
"@cite_39"
],
"mid": [
"2354301041",
"2236262502",
"",
"2551317447"
],
"abstract": [
"An increasing number of domains are providing us with detailed trace data on human decisions in settings where we can evaluate the quality of these decisions via an algorithm. Motivated by this development, an emerging line of work has begun to consider whether we can characterize and predict the kinds of decisions where people are likely to make errors. To investigate what a general framework for human error prediction might look like, we focus on a model system with a rich history in the behavioral sciences: the decisions made by chess players as they select moves in a game. We carry out our analysis at a large scale, employing datasets with several million recorded games, and using chess tablebases to acquire a form of ground truth for a subset of chess positions that have been completely solved by computers but remain challenging even for the best players in the world. We organize our analysis around three categories of features that we argue are present in most settings where the analysis of human error is applicable: the skill of the decision-maker, the time available to make the decision, and the inherent difficulty of the decision. We identify rich structure in all three of these categories of features, and find strong evidence that in our domain, features describing the inherent difficulty of an instance are significantly more powerful than features based on skill or time.",
"We present Creative Help, an application that helps writers by generating suggestions for the next sentence in a story as it being written. Users can modify or delete suggestions according to their own vision of the unfolding narrative. The application tracks users’ changes to suggestions in order to measure their perceived helpfulness to the story, with fewer edits indicating more helpful suggestions. We demonstrate how the edit distance between a suggestion and its resulting modification can be used to comparatively evaluate different models for generating suggestions. We describe a generation model that uses case-based reasoning to find relevant suggestions from a large corpus of stories. The application shows that this model generates suggestions that are more helpful than randomly selected suggestions at a level of marginal statistical significance. By giving users control over the generated content, Creative Help provides a new opportunity in open-domain interactive storytelling.",
"",
"Presented on October 24, 2016 at 10:00 a.m. in the Klaus Advanced Computing Building, room 1116"
]
} |
1902.03245 | 2919060532 | Although artificial intelligence holds promise for addressing societal challenges, issues of exactly which tasks to automate and the extent to do so remain understudied. We approach the problem of task delegability from a human-centered perspective by developing a framework on human perception of task delegation to artificial intelligence. We consider four high-level factors that can contribute to a delegation decision: motivation, difficulty, risk, and trust. To obtain an empirical understanding of human preferences in different tasks, we build a dataset of 100 tasks from academic papers, popular media portrayal of AI, and everyday life. For each task, we administer a survey to collect judgments of each factor and ask subjects to pick the extent to which they prefer AI involvement. We find little preference for full AI control and a strong preference for machine-in-the-loop designs, in which humans play the leading role. Our framework can effectively predict human preferences in degrees of AI assistance. Among the four factors, trust is the most predictive of human preferences of optimal human-machine delegation. This framework represents a first step towards characterizing human preferences of automation across tasks. We hope this work may encourage and aid in future efforts towards understanding such individual attitudes; our goal is to inform the public and the AI research community rather than dictating any direction in technology development. | Trust and reliance on machines. Finally, we consider the community's interest in trust. As automation grows in complexity, complete understanding becomes impossible; trust serves as a proxy for rational decisions in the face of uncertainty, and appropriate use of technology becomes a concern @cite_28 . As such, calibration of trust continues to be a popular avenue of research @cite_30 @cite_36 . identify three bases of trust in automation: performance, process, and purpose.
Performance describes the automation's ability to reliably achieve the operator's goals. Process describes the inner workings of the automation; examples include dependability, integrity, and interpretability (in particular, interpretable ML has received significant interest @cite_9 @cite_6 @cite_27 @cite_1 ). Finally, purpose refers to the intent behind the automation and its alignment with the user's goals. | {
"cite_N": [
"@cite_30",
"@cite_28",
"@cite_36",
"@cite_9",
"@cite_1",
"@cite_6",
"@cite_27"
],
"mid": [
"2090668966",
"2110171129",
"2809814728",
"",
"2594475271",
"2516809705",
"2551974706"
],
"abstract": [
"The invention contemplates a telephoto-lens construction comprising four parts, one or more of which is a multiple-element part, and so devised as to achieve superior achromatic quality over a wide field of view, within a structural length which is less than the focal length. This result is achieved by following a particular schedule of regions from which to select optical glasses for the respective parts, or for the respective multiple elements of one or more of such parts, and by forming lens elements with such selected glasses in accordance with a particular schedule of refractive powers.",
"Automation is often problematic because people fail to rely upon it appropriately. Because people respond to technology socially, trust influences reliance on automation. In particular, trust guides reliance when complexity and unanticipated situations make a complete understanding of the automation impractical. This review considers trust from the organizational, sociological, interpersonal, psychological, and neurological perspectives. It considers how the context, automation characteristics, and cognitive processes affect the appropriateness of trust. The context in which the automation is used influences automation performance and provides a goal-oriented perspective to assess automation characteristics along a dimension of attributional abstraction. These characteristics can influence trust through analytic, analogical, and affective processes. The challenges of extrapolating the concept of trust in people to trust in automation are discussed. A conceptual model integrates research regarding trust in automation and describes the dynamics of trust, the role of context, and the influence of display characteristics. Actual or potential applications of this research include improved designs of systems that require people to manage imperfect automation.",
"We conducted a study to investigate trust in and dependence upon robotic decision support among nurses and doctors on a labor and delivery floor. There is evidence that suggestions provided by embo...",
"",
"As machine learning systems become ubiquitous, there has been a surge of interest in interpretable machine learning: systems that provide explanation for their outputs. These explanations are often used to qualitatively assess other criteria such as safety or non-discrimination. However, despite the interest in interpretability, there is very little consensus on what interpretable machine learning is and how it should be measured. In this position paper, we first define interpretability and describe when interpretability is needed (and when it is not). Next, we suggest a taxonomy for rigorous evaluation and expose open questions towards a more rigorous science of interpretable machine learning.",
"",
"Example-based explanations are widely used in the effort to improve the interpretability of highly complex distributions. However, prototypes alone are rarely sufficient to represent the gist of the complexity. In order for users to construct better mental models and understand complex data distributions, we also need criticism to explain what are captured by prototypes. Motivated by the Bayesian model criticism framework, we develop MMD-critic which efficiently learns prototypes and criticism, designed to aid human interpretability. A human subject pilot study shows that the MMD-critic selects prototypes and criticism that are useful to facilitate human understanding and reasoning. We also evaluate the prototypes selected by MMD-critic via a nearest prototype classifier, showing competitive performance compared to baselines."
]
} |
1902.03338 | 2911321315 | WarpFlow is a fast, interactive data querying and processing system with a focus on petabyte-scale spatiotemporal datasets and Tesseract queries. With the rapid growth in smartphones and mobile navigation services, we now have an opportunity to radically improve urban mobility and reduce friction in how people and packages move globally every minute-mile, with data. WarpFlow speeds up three key metrics for data engineers working on such datasets -- time-to-first-result, time-to-full-scale-result, and time-to-trained-model for machine learning. | First, systems such as PostgreSQL @cite_5 , MySQL @cite_19 offer a host of geospatial extensions (e.g., PostGIS @cite_12 ). To tackle larger datasets on distributed clusters, recent analytical systems propose novel extensions and specialized in-memory data structures (e.g., for paths and trajectories) on Spark/Hadoop @cite_20 clusters @cite_35 @cite_9 @cite_1 @cite_3 @cite_37 @cite_21 . | {
"cite_N": [
"@cite_35",
"@cite_37",
"@cite_9",
"@cite_21",
"@cite_1",
"@cite_3",
"@cite_19",
"@cite_5",
"@cite_12",
"@cite_20"
],
"mid": [
"2065820792",
"2610195107",
"2436533802",
"2295338537",
"2751600782",
"2799180173",
"",
"",
"",
"2189465200"
],
"abstract": [
"Recently, MapReduce frameworks, e.g., Hadoop, have been used extensively in different applications that include tera-byte sorting, machine learning, and graph processing. With the huge volumes of spatial data coming from different sources, there is an increasing demand to exploit the efficiency of Hadoop, coupled with the flexibility of the MapReduce framework, in spatial data processing. However, Hadoop falls short in supporting spatial data efficiently as the core is unaware of spatial data properties. This paper describes SpatialHadoop; a full-edged MapReduce framework with native support for spatial data. SpatialHadoop is a comprehensive extension to Hadoop that injects spatial data awareness in each Hadoop layer, namely, the language, storage, MapReduce, and operations layers. In the language layer, SpatialHadoop adds a simple and ex- pressive high level language for spatial data types and operations. In the storage layer, SpatialHadoop adapts traditional spatial index structures, Grid, R-tree and R+-tree, to form a two-level spatial index. SpatialHadoop enriches the MapReduce layer by two new components, SpatialFileSplitter and SpatialRecordReader, for efficient and scalable spatial data processing. In the operations layer, SpatialHadoop is already equipped with a dozen of operations, including range query, kNN, and spatial join. The flexibility and open source nature of SpatialHadoop allows more spatial operations to be implemented efficiently using MapReduce. Extensive experiments on a real system prototype and real datasets show that SpatialHadoop achieves orders of magnitude better performance than Hadoop for spatial data processing.",
"In the realm of smart cities, telecommunication companies (telcos) are expected to play a protagonistic role as these can capture a variety of natural phenomena on an ongoing basis, e.g., traffic in a city, mobility patterns for emergency response or city planning. The key challenges for telcos in this era is to ingest in the most compact manner huge amounts of network logs, perform big data exploration and analytics on the generated data within a tolerable elapsed time. This paper introduces SPATE, an innovative telco big data exploration framework whose objectives are two-fold: (i) minimizing the storage space needed to incrementally retain data over time, and (ii) minimizing the response time for spatiotemporal data exploration queries over recent data. The storage layer of our framework uses lossless data compression to ingest recent streams of telco big data in the most compact manner retaining full resolution for data exploration tasks. The indexing layer of our system then takes care of the progressive loss of detail in information, coined decaying, as data ages with time. The exploration layer provides visual means to explore the generated spatio-temporal information space. We measure the efficiency of the proposed framework using a 5GB anonymized real telco network trace and a variety of telco-specific tasks, such as OLAP and OLTP querying, privacy-aware data sharing, multivariate statistics, clustering and regression. We show that out framework can achieve comparable response times to the state-of-the-art using an order of magnitude less storage space.",
"Large spatial data becomes ubiquitous. As a result, it is critical to provide fast, scalable, and high-throughput spatial queries and analytics for numerous applications in location-based services (LBS). Traditional spatial databases and spatial analytics systems are disk-based and optimized for IO efficiency. But increasingly, data are stored and processed in memory to achieve low latency, and CPU time becomes the new bottleneck. We present the Simba (Spatial In-Memory Big data Analytics) system that offers scalable and efficient in-memory spatial query processing and analytics for big spatial data. Simba is based on Spark and runs over a cluster of commodity machines. In particular, Simba extends the Spark SQL engine to support rich spatial queries and analytics through both SQL and the DataFrame API. It introduces indexes over RDDs in order to work with big spatial data and complex spatial operations. Lastly, Simba implements an effective query optimizer, which leverages its indexes and novel spatial-aware optimizations, to achieve both low latency and high throughput. Extensive experiments over large data sets demonstrate Simba's superior performance compared against other spatial analytics system.",
"This paper introduces G eo S park an in-memory cluster computing framework for processing large-scale spatial data. G eo S park consists of three layers: Apache Spark Layer, Spatial RDD Layer and Spatial Query Processing Layer. Apache Spark Layer provides basic Spark functionalities that include loading storing data to disk as well as regular RDD operations. Spatial RDD Layer consists of three novel Spatial Resilient Distributed Datasets (SRDDs) which extend regular Apache Spark RDDs to support geometrical and spatial objects. G eo S park provides a geometrical operations library that accesses Spatial RDDs to perform basic geometrical operations (e.g., Overlap, Intersect). System users can leverage the newly defined SRDDs to effectively develop spatial data processing programs in Spark. The Spatial Query Processing Layer efficiently executes spatial query processing algorithms (e.g., Spatial Range, Join, KNN query) on SRDDs. G eo S park also allows users to create a spatial index (e.g., R-tree, Quad-tree) that boosts spatial data processing performance in each SRDD partition. Preliminary experiments show that G eo S park achieves better run time performance than its Hadoop-based counterparts (e.g., SpatialHadoop).",
"Mobile and sensing devices have already become ubiquitous. They have made tracking moving objects an easy task. As a result, mobile applications like Uber and many IoT projects have generated massive amounts of trajectory data that can no longer be processed by a single machine efficiently. Among the typical query operations over trajectories, similarity search is a common yet expensive operator in querying trajectory data. It is useful for applications in different domains such as traffic and transportation optimizations, weather forecast and modeling, and sports analytics. It is also a fundamental operator for many important mining operations such as clustering and classification of trajectories. In this paper, we propose a distributed query framework to process trajectory similarity search over a large set of trajectories. We have implemented the proposed framework in Spark, a popular distributed data processing engine, by carefully considering different design choices. Our query framework supports both the Hausdorff distance the Frechet distance. Extensive experiments have demonstrated the excellent scalability and query efficiency achieved by our design, compared to other methods and design alternatives.",
"Trajectory analytics can benefit many real-world applications, e.g., frequent trajectory based navigation systems, road planning, car pooling, and transportation optimizations. Existing algorithms focus on optimizing this problem in a single machine. However, the amount of trajectories exceeds the storage and processing capability of a single machine, and it calls for large-scale trajectory analytics in distributed environments. The distributed trajectory analytics faces challenges of data locality aware partitioning, load balance, easy-to-use interface, and versatility to support various trajectory similarity functions. To address these challenges, we propose a distributed in-memory trajectory analytics system DITA. We propose an effective partitioning method, global index and local index, to address the data locality problem. We devise cost-based techniques to balance the workload. We develop a filter-verification framework to improve the performance. Moreover, DITA can support most of existing similarity functions to quantify the similarity between trajectories. We integrate our framework seamlessly into Spark SQL, and make it support SQL and DataFrame API interfaces. We have conducted extensive experiments on real world datasets, and experimental results show that DITA outperforms existing distributed trajectory similarity search and join approaches significantly.",
"",
"",
"",
"MapReduce and its variants have been highly successful in implementing large-scale data-intensive applications on commodity clusters. However, most of these systems are built around an acyclic data flow model that is not suitable for other popular applications. This paper focuses on one such class of applications: those that reuse a working set of data across multiple parallel operations. This includes many iterative machine learning algorithms, as well as interactive data analysis tools. We propose a new framework called Spark that supports these applications while retaining the scalability and fault tolerance of MapReduce. To achieve these goals, Spark introduces an abstraction called resilient distributed datasets (RDDs). An RDD is a read-only collection of objects partitioned across a set of machines that can be rebuilt if a partition is lost. Spark can outperform Hadoop by 10x in iterative machine learning jobs, and can be used to interactively query a 39 GB dataset with sub-second response time."
]
} |
1902.03338 | 2911321315 | WarpFlow is a fast, interactive data querying and processing system with a focus on petabyte-scale spatiotemporal datasets and Tesseract queries. With the rapid growth in smartphones and mobile navigation services, we now have an opportunity to radically improve urban mobility and reduce friction in how people and packages move globally every minute-mile, with data. WarpFlow speeds up three key metrics for data engineers working on such datasets -- time-to-first-result, time-to-full-scale-result, and time-to-trained-model for machine learning. | Specifically, the techniques in @cite_9 @cite_1 @cite_3 adopt Spark's RDD model @cite_20 and extend it with a two-level indexing structure. This helps prune RDD partitions but partitions containing matched data need to be paged into memory for further filtering. These techniques work well when (1) the data partition and indices fit in main memory on a distributed cluster, (2) data overflows are paged into local disks on the cluster, (3) the queries rely on the partition and block indices to retrieve only relevant data partitions into available memory. In such cases, the techniques work well to optimize CPU costs and can safely ignore IO costs in a cluster. However, for our pipelines, we deal with numerous large datasets on a shared cluster, so developers can run pipelines on these datasets concurrently. We need to optimize both CPU and IO costs making use of fine-grained indexing to selectively access the relevant data records, without first having to load the partitions. As we see later, our techniques scale for multiple, large datasets on networked file systems, while minimizing the resource footprint for cost efficiency. | {
"cite_N": [
"@cite_9",
"@cite_20",
"@cite_1",
"@cite_3"
],
"mid": [
"2436533802",
"2189465200",
"2751600782",
"2799180173"
],
"abstract": [
"Large spatial data becomes ubiquitous. As a result, it is critical to provide fast, scalable, and high-throughput spatial queries and analytics for numerous applications in location-based services (LBS). Traditional spatial databases and spatial analytics systems are disk-based and optimized for IO efficiency. But increasingly, data are stored and processed in memory to achieve low latency, and CPU time becomes the new bottleneck. We present the Simba (Spatial In-Memory Big data Analytics) system that offers scalable and efficient in-memory spatial query processing and analytics for big spatial data. Simba is based on Spark and runs over a cluster of commodity machines. In particular, Simba extends the Spark SQL engine to support rich spatial queries and analytics through both SQL and the DataFrame API. It introduces indexes over RDDs in order to work with big spatial data and complex spatial operations. Lastly, Simba implements an effective query optimizer, which leverages its indexes and novel spatial-aware optimizations, to achieve both low latency and high throughput. Extensive experiments over large data sets demonstrate Simba's superior performance compared against other spatial analytics system.",
"MapReduce and its variants have been highly successful in implementing large-scale data-intensive applications on commodity clusters. However, most of these systems are built around an acyclic data flow model that is not suitable for other popular applications. This paper focuses on one such class of applications: those that reuse a working set of data across multiple parallel operations. This includes many iterative machine learning algorithms, as well as interactive data analysis tools. We propose a new framework called Spark that supports these applications while retaining the scalability and fault tolerance of MapReduce. To achieve these goals, Spark introduces an abstraction called resilient distributed datasets (RDDs). An RDD is a read-only collection of objects partitioned across a set of machines that can be rebuilt if a partition is lost. Spark can outperform Hadoop by 10x in iterative machine learning jobs, and can be used to interactively query a 39 GB dataset with sub-second response time.",
"Mobile and sensing devices have already become ubiquitous. They have made tracking moving objects an easy task. As a result, mobile applications like Uber and many IoT projects have generated massive amounts of trajectory data that can no longer be processed by a single machine efficiently. Among the typical query operations over trajectories, similarity search is a common yet expensive operator in querying trajectory data. It is useful for applications in different domains such as traffic and transportation optimizations, weather forecast and modeling, and sports analytics. It is also a fundamental operator for many important mining operations such as clustering and classification of trajectories. In this paper, we propose a distributed query framework to process trajectory similarity search over a large set of trajectories. We have implemented the proposed framework in Spark, a popular distributed data processing engine, by carefully considering different design choices. Our query framework supports both the Hausdorff distance and the Frechet distance. Extensive experiments have demonstrated the excellent scalability and query efficiency achieved by our design, compared to other methods and design alternatives.",
"Trajectory analytics can benefit many real-world applications, e.g., frequent trajectory based navigation systems, road planning, car pooling, and transportation optimizations. Existing algorithms focus on optimizing this problem in a single machine. However, the amount of trajectories exceeds the storage and processing capability of a single machine, and it calls for large-scale trajectory analytics in distributed environments. The distributed trajectory analytics faces challenges of data locality aware partitioning, load balance, easy-to-use interface, and versatility to support various trajectory similarity functions. To address these challenges, we propose a distributed in-memory trajectory analytics system DITA. We propose an effective partitioning method, global index and local index, to address the data locality problem. We devise cost-based techniques to balance the workload. We develop a filter-verification framework to improve the performance. Moreover, DITA can support most of existing similarity functions to quantify the similarity between trajectories. We integrate our framework seamlessly into Spark SQL, and make it support SQL and DataFrame API interfaces. We have conducted extensive experiments on real world datasets, and experimental results show that DITA outperforms existing distributed trajectory similarity search and join approaches significantly."
]
} |
1902.03338 | 2911321315 | WarpFlow is a fast, interactive data querying and processing system with a focus on petabyte-scale spatiotemporal datasets and Tesseract queries. With the rapid growth in smartphones and mobile navigation services, we now have an opportunity to radically improve urban mobility and reduce friction in how people and packages move globally every minute-mile, with data. WarpFlow speeds up three key metrics for data engineers working on such datasets -- time-to-first-result, time-to-full-scale-result, and time-to-trained-model for machine learning. | Second, WarpFlow supports two execution environments for pipelines. For long running pipelines that need to deal with machine restarts and pipeline retries, systems like MapReduce @cite_16 , Flume @cite_25 and Spark @cite_20 adopt checkpoint logs that allow a system to recover from any state. For fast, interactive and short-running queries, systems like Dremel @cite_15 drop this overhead, support an always running cluster and push retries to the client applications. WarpFlow supports the best of both worlds, by offering the developer two modes by relying on two separate execution engines -- one for long running queries and one for fast, interactive queries. | {
"cite_N": [
"@cite_15",
"@cite_16",
"@cite_25",
"@cite_20"
],
"mid": [
"1994752578",
"2173213060",
"1978924650",
"2189465200"
],
"abstract": [
"Dremel is a scalable, interactive ad-hoc query system for analysis of read-only nested data. By combining multi-level execution trees and columnar data layout, it is capable of running aggregation queries over trillion-row tables in seconds. The system scales to thousands of CPUs and petabytes of data, and has thousands of users at Google. In this paper, we describe the architecture and implementation of Dremel, and explain how it complements MapReduce-based computing. We present a novel columnar storage representation for nested records and discuss experiments on few-thousand node instances of the system.",
"MapReduce is a programming model and an associated implementation for processing and generating large datasets that is amenable to a broad variety of real-world tasks. Users specify the computation in terms of a map and a reduce function, and the underlying runtime system automatically parallelizes the computation across large-scale clusters of machines, handles machine failures, and schedules inter-machine communication to make efficient use of the network and disks. Programmers find the system easy to use: more than ten thousand distinct MapReduce programs have been implemented internally at Google over the past four years, and an average of one hundred thousand MapReduce jobs are executed on Google's clusters every day, processing a total of more than twenty petabytes of data per day.",
"MapReduce and similar systems significantly ease the task of writing data-parallel code. However, many real-world computations require a pipeline of MapReduces, and programming and managing such pipelines can be difficult. We present FlumeJava, a Java library that makes it easy to develop, test, and run efficient data-parallel pipelines. At the core of the FlumeJava library are a couple of classes that represent immutable parallel collections, each supporting a modest number of operations for processing them in parallel. Parallel collections and their operations present a simple, high-level, uniform abstraction over different data representations and execution strategies. To enable parallel operations to run efficiently, FlumeJava defers their evaluation, instead internally constructing an execution plan dataflow graph. When the final results of the parallel operations are eventually needed, FlumeJava first optimizes the execution plan, and then executes the optimized operations on appropriate underlying primitives (e.g., MapReduces). The combination of high-level abstractions for parallel data and computation, deferred evaluation and optimization, and efficient parallel primitives yields an easy-to-use system that approaches the efficiency of hand-optimized pipelines. FlumeJava is in active use by hundreds of pipeline developers within Google.",
"MapReduce and its variants have been highly successful in implementing large-scale data-intensive applications on commodity clusters. However, most of these systems are built around an acyclic data flow model that is not suitable for other popular applications. This paper focuses on one such class of applications: those that reuse a working set of data across multiple parallel operations. This includes many iterative machine learning algorithms, as well as interactive data analysis tools. We propose a new framework called Spark that supports these applications while retaining the scalability and fault tolerance of MapReduce. To achieve these goals, Spark introduces an abstraction called resilient distributed datasets (RDDs). An RDD is a read-only collection of objects partitioned across a set of machines that can be rebuilt if a partition is lost. Spark can outperform Hadoop by 10x in iterative machine learning jobs, and can be used to interactively query a 39 GB dataset with sub-second response time."
]
} |
1902.03320 | 2953656161 | This paper develops a method for robots to integrate stability into actively seeking out informative measurements through coverage. We derive a controller using hybrid systems theory that allows us to consider safe equilibrium policies during active data collection. We show that our method is able to maintain Lyapunov attractiveness while still actively seeking out data. Using incremental sparse Gaussian processes, we define distributions which allow a robot to actively seek out informative measurements. We illustrate our methods for shape estimation using a cart double pendulum, dynamic model learning of a hovering quadrotor, and generating galloping gaits starting from stationary equilibrium by learning a dynamics model for the half-cheetah system from the Roboschool environment. | Our approach overcomes these issues by using a sample-based KL-divergence measure @cite_9 as a replacement for the ergodic metric. This form of measure has been used previously; however, it relied on motion primitives in order to compute control actions @cite_9 . We avoid this issue by using hybrid systems theory in order to compute a controller that sufficiently reduces the KL-divergence measure from an equilibrium stable policy. As a result, we can use approximate models of dynamical systems instead of complete dynamic reconstructions in order to actively collect data while ensuring safety in the exploration process through a notion of attractiveness. | {
"cite_N": [
"@cite_9"
],
"mid": [
"2734815958"
],
"abstract": [
"In search and surveillance applications in robotics, it is intuitive to spatially distribute robot trajectories with respect to the probability of locating targets in the domain. Ergodic coverage is one such approach to trajectory planning in which a robot is directed such that the percentage of time spent in a region is in proportion to the probability of locating targets in that region. In this work, we extend the ergodic coverage algorithm to robots operating in constrained environments and present a formulation that can capture sensor footprint and avoid obstacles and restricted areas in the domain. We demonstrate that our formulation easily extends to coordination of multiple robots equipped with different sensing capabilities to perform ergodic coverage of a domain."
]
} |
1902.03376 | 2950712152 | Evaluating the clinical similarities between pairwise patients is a fundamental problem in healthcare informatics. A proper patient similarity measure enables various downstream applications, such as cohort study and treatment comparative effectiveness research. One major carrier for conducting patient similarity research is Electronic Health Records(EHRs), which are usually heterogeneous, longitudinal, and sparse. Though existing studies on learning patient similarity from EHRs have shown being useful in solving real clinical problems, their applicability is limited due to the lack of medical interpretations. Moreover, most previous methods assume a vector-based representation for patients, which typically requires aggregation of medical events over a certain time period. As a consequence, temporal information will be lost. In this paper, we propose a patient similarity evaluation framework based on the temporal matching of longitudinal patient EHRs. Two efficient methods are presented, unsupervised and supervised, both of which preserve the temporal properties in EHRs. The supervised scheme takes a convolutional neural network architecture and learns an optimal representation of patient clinical records with medical concept embedding. The empirical results on real-world clinical data demonstrate substantial improvement over the baselines. We make our code and sample data available for further study. | In the healthcare informatics domain, many works focus on patient similarity. For example, @cite_3 proposed a patient similarity algorithm named SimSvm that uses a Support Vector Machine (SVM) to weight the similarity measures. @cite_6 proposed a patient similarity based disease prognosis strategy named SimProX, which uses a Local Spline Regression (LSR) based method to embed patient events into an intrinsic space and then measures patient similarity by the Euclidean distance in the embedded space. 
These methods do not take temporal information into consideration when evaluating patient similarities. Wang @cite_18 presented a One-Sided Convolutional Matrix Factorization for the detection of temporal patterns. Cheng @cite_2 @cite_15 proposed an adjustable temporal fusion scheme using CNN-extracted features. Patient similarity enables a variety of applications. In @cite_12 , Ng provided a personalized predictive healthcare model by matching clinically similar patients with a locally supervised metric learning measure. @cite_21 proposed the Integrated Method for Personalised Modelling (IMPM) to provide personalised treatment and personalised drug design. | {
"cite_N": [
"@cite_18",
"@cite_21",
"@cite_6",
"@cite_3",
"@cite_2",
"@cite_15",
"@cite_12"
],
"mid": [
"2103106495",
"2088744061",
"1535919221",
"2113098151",
"2511950764",
"2582955413",
""
],
"abstract": [
"Large collections of electronic clinical records today provide us with a vast source of information on medical practice. However, the utilization of those data for exploratory analysis to support clinical decisions is still limited. Extracting useful patterns from such data is particularly challenging because it is longitudinal, sparse and heterogeneous. In this paper, we propose a Nonnegative Matrix Factorization (NMF) based framework using a convolutional approach for open-ended temporal pattern discovery over large collections of clinical records. We call the method One-Sided Convolutional NMF (OSC-NMF). Our framework can mine common as well as individual shift-invariant temporal patterns from heterogeneous events over different patient groups, and handle sparsity as well as scalability problems well. Furthermore, we use an event matrix based representation that can encode quantitatively all key temporal concepts including order, concurrency and synchronicity. We derive efficient multiplicative update rules for OSC-NMF, and also prove theoretically its convergence. Finally, the experimental results on both synthetic and real world electronic patient data are presented to demonstrate the effectiveness of the proposed method.",
"Personalised modelling aims to create a unique computational diagnostic or prognostic model for an individual. The paper reports a new Integrated Method for Personalised Modelling (IMPM) that applies global optimisation of variables (features) and neighbourhood of appropriate data samples to create an accurate personalised model for an individual. The proposed IMPM allows for adaptation, monitoring and improvement of an individual's model. Three medical decision support problems are used as illustrations: cancer diagnosis and profiling; risk of disease evaluation based on whole genome SNPs data; chronic disease decision support. The method leads to improved accuracy and unique personalised profiling that could be used for personalised treatment and personalised drug design.",
"Prognosis refers to the prediction of the future health status of a patient. Providing prognostic insight to clinicians is critical for physician decision support. In this paper we present a collaborative disease prognosis strategy leveraging the information of the clinically similar patient cohort, using a Local Spline Regression (LSR) based similarity measure. To improve the reliability of the approach, the algorithm can also incorporate physician's feedback in the form of whether the patients in a retrieved cohort are indeed similar to the query patient. The proposed methodology was tested on a real clinical data set containing records of over two hundred thousand patients over three years. We report the retrieval as well as prognosis performance to demonstrate the effectiveness of the system.",
"Identifying historical records of patients who are similar to the new patient could help to retrieve similar reference cases for predicting the clinical outcome of the new patient. Amongst different potential applications, this study illustrates use of patient similarity in predicting survival of patients suffering from hepatocellular carcinoma (HCC) treated with locoregional chemotherapy. This study used 14 similarity measures derived from relevant clinical and imaging parameters to classify the HCC patient pairs into two classes, namely the difference between their survival time being longer or no longer than 12 months. Furthermore, this paper proposes and presents a patient similarity algorithm for the classification, named SimSVM. With the 14 similarity measures as input, SimSVM outputs the predicted class and the degree of similarity or dissimilarity. A dataset was collected from 30 patients, forming 300 and 135 patient pairs as training and test datasets respectively. The trained SimSVM with linear kernel gave the best accuracy (66.7%), sensitivity (64.8%) and specificity (67.9%) on the test dataset.",
"",
"The widespread availability of electronic health records (EHRs) promises to usher in the era of personalized medicine. However, the problem of extracting useful clinical representations from longitudinal EHR data remains challenging. In this paper, we explore deep neural network models with learned medical feature embedding to deal with the problems of high dimensionality and temporality. Specifically, we use a multi-layer convolutional neural network (CNN) to parameterize the model and is thus able to capture complex non-linear longitudinal evolution of EHRs. Our model can effectively capture local short temporal dependency in EHRs, which is beneficial for risk prediction. To account for high dimensionality, we use the embedding medical features in the CNN model which hold the natural medical concepts. Our initial experiments produce promising results and demonstrate the effectiveness of both the medical feature embedding and the proposed convolutional neural network in risk prediction on cohorts of congestive heart failure and diabetes patients compared with several strong baselines.",
""
]
} |
1902.03376 | 2950712152 | Evaluating the clinical similarities between pairwise patients is a fundamental problem in healthcare informatics. A proper patient similarity measure enables various downstream applications, such as cohort study and treatment comparative effectiveness research. One major carrier for conducting patient similarity research is Electronic Health Records(EHRs), which are usually heterogeneous, longitudinal, and sparse. Though existing studies on learning patient similarity from EHRs have shown being useful in solving real clinical problems, their applicability is limited due to the lack of medical interpretations. Moreover, most previous methods assume a vector-based representation for patients, which typically requires aggregation of medical events over a certain time period. As a consequence, temporal information will be lost. In this paper, we propose a patient similarity evaluation framework based on the temporal matching of longitudinal patient EHRs. Two efficient methods are presented, unsupervised and supervised, both of which preserve the temporal properties in EHRs. The supervised scheme takes a convolutional neural network architecture and learns an optimal representation of patient clinical records with medical concept embedding. The empirical results on real-world clinical data demonstrate substantial improvement over the baselines. We make our code and sample data available for further study. | Much research has been conducted on clustering patients with machine learning. To rate patients' health perceptions, Sewitch @cite_25 performed a k-means cluster analysis to identify patient groups based on discovered multivariate patterns. To capture the underlying structure of the history of present illness section in patient EHRs, Henao @cite_32 proposed a statistical model that groups patients based on the text of the initial history of present illness (HPI) and final diagnosis (DX) in a patient's EHR. 
For human disease gene expression data, Huang @cite_9 presented a new recursive K-means spectral clustering method (ReKS) to efficiently cluster human diseases. Most of these studies have demonstrated the effectiveness of their models in real-world experiments, which convinces us of the applicability of patient clustering to cohort discovery. | {
"cite_N": [
"@cite_9",
"@cite_32",
"@cite_25"
],
"mid": [
"1488479429",
"67428211",
"177896962"
],
"abstract": [
"Clustering of gene expression data simplifies subsequent data analyses and forms the basis of numerous approaches for biomarker identification, prediction of clinical outcome, and personalized therapeutic strategies. The most popular clustering methods such as K-means and hierarchical clustering are intuitive and easy to use, but they require arbitrary choices on their various parameters (number of clusters for K-means, and a threshold to cut the tree for hierarchical clustering). Human disease gene expression data are in general more difficult to cluster efficiently due to background (genotype) heterogeneity, disease stage and progression differences and disease subtyping; all of which cause gene expression datasets to be more heterogeneous. Spectral clustering has been recently introduced in many fields as a promising alternative to standard clustering methods. The idea is that pairwise comparisons can help reveal global features through the eigen techniques. In this paper, we developed a new recursive K-means spectral clustering method (ReKS) for disease gene expression data. We benchmarked ReKS on three large-scale cancer datasets and we compared it to different clustering methods with respect to execution time, background models and external biological knowledge. We found ReKS to be superior to the hierarchical methods and equally good to K-means, but much faster than them and without the requirement for a priori knowledge of K. Overall, ReKS offers an attractive alternative for efficient clustering of human disease data.",
"We propose a mixture model for text data designed to capture underlying structure in the history of present illness section of electronic medical records data. Additionally, we propose a method to induce bias that leads to more homogeneous sets of diagnoses for patients in each cluster. We apply our model to a collection of electronic records from an emergency department and compare our results to three other relevant models in order to assess performance. Results using standard metrics demonstrate that patient clusters from our model are more homogeneous when compared to others, and qualitative analyses suggest that our approach leads to interpretable patient sub-populations when applied to real data. Finally, we demonstrate an example of our patient clustering model to identify adverse drug events.",
"Abstract Background Little is known about how patients rate their health perceptions. Our objectives were to identify systematic multivariate patterns of perceptions using cluster analysis, and to investigate associations among the clusters, psychosocial characteristics and medication nonadherence. Methods Demographic, clinical and psychosocial data on 200 patients with inflammatory bowel disease (IBD) were collected prior to the index office visit and health perceptions were collected afterwards. Cluster analysis using a k -means method was used to identify subgroups of patients based on their responses to the Patient–Physician Discordance Scales (PPDS), an instrument that assesses perceptions of health status and of the clinical visit. Results We identified five different patient groups: a “healthy, not distressed, good communication, low expectation for medication testing” group; a “healthy, relatively distressed, good communication, high expectation for medication, low expectation for testing” group; a “symptomatic, distressed, good communication, high expectation for medication testing” group; a “healthy, not distressed, good communication, high expectation for medication testing” group; and a “relatively healthy, relatively distressed, poor communication, low expectation for medication testing” group. After adjustment for age, sex, language, form of IBD, and disease activity, statistically significant between-clusters differences were found in psychological distress, social support satisfaction and medication nonadherence. Conclusions Distinct patterns of patients' health perceptions correlated with psychological health and adherence to treatment. This categorization may be used to help identify patients at higher risks for ineffective communication and nonadherence to medication."
]
} |
1902.03376 | 2950712152 | Evaluating the clinical similarities between pairwise patients is a fundamental problem in healthcare informatics. A proper patient similarity measure enables various downstream applications, such as cohort study and treatment comparative effectiveness research. One major carrier for conducting patient similarity research is Electronic Health Records(EHRs), which are usually heterogeneous, longitudinal, and sparse. Though existing studies on learning patient similarity from EHRs have shown being useful in solving real clinical problems, their applicability is limited due to the lack of medical interpretations. Moreover, most previous methods assume a vector-based representation for patients, which typically requires aggregation of medical events over a certain time period. As a consequence, temporal information will be lost. In this paper, we propose a patient similarity evaluation framework based on the temporal matching of longitudinal patient EHRs. Two efficient methods are presented, unsupervised and supervised, both of which preserve the temporal properties in EHRs. The supervised scheme takes a convolutional neural network architecture and learns an optimal representation of patient clinical records with medical concept embedding. The empirical results on real-world clinical data demonstrate substantial improvement over the baselines. We make our code and sample data available for further study. | With medical concept embedding, we aim to compute the similarity between patients according to their EHRs. Since the representations of patient medical events do not share a common time dimension, we cannot compare patient event matrices directly. @cite_26 provided relevant similarity measures between temporal series of brain functional images belonging to different subjects. Similar to @cite_26 , we adopt the RV coefficient to measure patient similarities. 
Note, however, that this coefficient only captures linear relationships between two data sets. For a more systematic treatment of patient similarity, our model also measures the non-linear correlation between two patients using the dCov coefficient. Beyond these unsupervised approaches, we also adopt a supervised learning method: we modify a Convolutional Neural Network (CNN) to derive similarity scores for pairs of patients. Convolutional network models, originally invented for image processing, have found wide application in other domains. @cite_22 , @cite_31 and @cite_7 each obtain continuous representations of sentences or short texts with a convolutional deep network, from which similarity can be effectively established. | {
"cite_N": [
"@cite_31",
"@cite_26",
"@cite_22",
"@cite_7"
],
"mid": [
"2951359136",
"2024545575",
"1966443646",
"2128892113"
],
"abstract": [
"Semantic matching is of central importance to many natural language tasks. A successful matching algorithm needs to adequately model the internal structures of language objects and the interaction between them. As a step toward this goal, we propose convolutional neural network models for matching two sentences, by adapting the convolutional strategy in vision and speech. The proposed models not only nicely represent the hierarchical structures of sentences with their layer-by-layer composition and pooling, but also capture the rich matching patterns at different levels. Our models are rather generic, requiring no prior knowledge on language, and can hence be applied to matching tasks of different nature and in different languages. The empirical study on a variety of matching tasks demonstrates the efficacy of the proposed model on a variety of matching tasks and its superiority to competitor models.",
"Standard group analyses of fMRI data rely on spatial and temporal averaging of individuals. This averaging operation is only sensible when the mean is a good representation of the group. This is not the case if subjects are not homogeneous, and it is therefore a major concern in fMRI studies to assess this group homogeneity. We present a method that provides relevant distances or similarity measures between temporal series of brain functional images belonging to different subjects. The method allows a multivariate comparison between data sets of several subjects in the time or in the space domain. These analyses assess the global intersubject variability before averaging subjects and drawing conclusions across subjects, at the population level. We adapt the RV coefficient to measure meaningful spatial or temporal similarities and use multidimensional scaling to give a visual representation of each subject's position with respect to other subjects in the group. We also provide a measure for detecting subjects that may be outliers. Results show that the method is a powerful tool to detect subjects with specific temporal or spatial patterns, and that, despite the apparent loss of information, restricting the analysis to a homogeneous subgroup of subjects does not reduce the statistical sensitivity of standard group fMRI analyses.",
"Learning a similarity function between pairs of objects is at the core of learning to rank approaches. In information retrieval tasks we typically deal with query-document pairs, in question answering -- question-answer pairs. However, before learning can take place, such pairs needs to be mapped from the original space of symbolic words into some feature space encoding various aspects of their relatedness, e.g. lexical, syntactic and semantic. Feature engineering is often a laborious task and may require external knowledge sources that are not always available or difficult to obtain. Recently, deep learning approaches have gained a lot of attention from the research community and industry for their ability to automatically learn optimal feature representation for a given task, while claiming state-of-the-art performance in many tasks in computer vision, speech recognition and natural language processing. In this paper, we present a convolutional neural network architecture for reranking pairs of short texts, where we learn the optimal representation of text pairs and a similarity function to relate them in a supervised way from the available training data. Our network takes only words in the input, thus requiring minimal preprocessing. In particular, we consider the task of reranking short text pairs where elements of the pair are sentences. We test our deep learning system on two popular retrieval tasks from TREC: Question Answering and Microblog Retrieval. Our model demonstrates strong performance on the first task beating previous state-of-the-art systems by about 3 absolute points in both MAP and MRR and shows comparable results on tweet reranking, while enjoying the benefits of no manual feature engineering and no additional syntactic parsers.",
"Many machine learning problems can be interpreted as learning for matching two types of objects (e.g., images and captions, users and products, queries and documents, etc.). The matching level of two objects is usually measured as the inner product in a certain feature space, while the modeling effort focuses on mapping of objects from the original space to the feature space. This schema, although proven successful on a range of matching tasks, is insufficient for capturing the rich structure in the matching process of more complicated objects. In this paper, we propose a new deep architecture to more effectively model the complicated matching relations between two objects from heterogeneous domains. More specifically, we apply this model to matching tasks in natural language, e.g., finding sensible responses for a tweet, or relevant answers to a given question. This new architecture naturally combines the localness and hierarchy intrinsic to the natural language problems, and therefore greatly improves upon the state-of-the-art models."
]
} |
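As a concrete illustration of the RV coefficient used in the row above for patient similarity, here is a minimal sketch. It assumes each patient is represented by an events-by-features matrix (possibly with different numbers of time steps but a shared feature dimension) and compares their feature cross-product matrices; the exact construction in the paper may differ.

```python
import numpy as np

def rv_coefficient(X, Y):
    """RV coefficient between two event matrices X (t1 x d) and Y (t2 x d)
    sharing the feature dimension d. Compares the d x d cross-product
    (Gram) matrices; the result lies in [0, 1], with 1 for identical
    configurations (up to scaling)."""
    Sx = X.T @ X                       # d x d feature cross-product of patient 1
    Sy = Y.T @ Y                       # d x d feature cross-product of patient 2
    num = np.trace(Sx @ Sy)
    den = np.sqrt(np.trace(Sx @ Sx) * np.trace(Sy @ Sy))
    return num / den
```

Because only cross-product matrices are compared, the measure is invariant to the ordering and the global scaling of each patient's events, which is why it only captures linear relationships (motivating the dCov complement mentioned above).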
1902.02893 | 2919328485 | Reinforcement learning (RL) agents have traditionally been tasked with maximizing the value function of a Markov decision process (MDP), either in continuous settings, with fixed discount factor @math , or in episodic settings, with @math . While this has proven effective for specific tasks with well-defined objectives (e.g., games), it has never been established that fixed discounting is suitable for general purpose use (e.g., as a model of human preferences). This paper characterizes rationality in sequential decision making using a set of seven axioms and arrives at a form of discounting that generalizes traditional fixed discounting. In particular, our framework admits a state-action dependent "discount" factor that is not constrained to be less than 1, so long as there is eventual long run discounting. Although this broadens the range of possible preference structures in continuous settings, we show that there exists a unique "optimizing MDP" with fixed @math whose optimal value function matches the true utility of the optimal policy, and we quantify the difference between value and utility for suboptimal policies. Our work can be seen as providing a normative justification for (a slight generalization of) Martha White's RL task formalism (2017) and other recent departures from the traditional RL, and is relevant to task specification in RL, inverse RL and preference-based RL. | koopmans1960stationary provided the first axiomatic development of discounted additive utility over time. This and several follow-up works are summarized and expanded upon by koopmans1972representations and meyer1976preferences . The applicability of these and other existing discounted additive utility frameworks to general purpose RL is limited in several respects. 
For instance, as remarked by sobel2013discounting , most axiomatic justifications for discounting have assumed deterministic outcomes, and only a handful of analyses address stochasticity @cite_5 @cite_19 @cite_18 . Naively packaging deterministic outcome streams into arbitrary lotteries (as suggested by meyer1976preferences , .3) is difficult to interpret in the case of control (see our commentary in Subsection on the resolution of intra-trajectory uncertainty) and entirely hypothetical since the agent never makes such choices---compare our (slightly) less hypothetical "original position" approach in Subsection . | {
"cite_N": [
"@cite_19",
"@cite_5",
"@cite_18"
],
"mid": [
"2041192133",
"",
"2088108678"
],
"abstract": [
"In the literature on multiperiod planning under uncertainty, it is generally postulated that preferences may be represented by a von Neumann-Morgenstern utility index that is additive over time. This paper accomplishes two objectives: First, an axiomatic basis is provided for a more general class of non-additive utility indices defined over infinite consumption streams. Second, this class of utility functions is applied to extend existing results (J. Econ. Theory 4 (1972), 479–513; J. Econ. Theory 11 (1975), 329–339) on the nature of optimal growth under uncertainty. Of particular interest are the existence and stability of a stochastic steady state.",
"",
"Although most applications of discounting occur in risky settings, the best-known axiomatic justifications are deterministic. This paper provides an axiomatic rationale for discounting in a stochastic framework. Consider a representation of time and risk preferences with a binary relation on a real vector space of vector-valued discrete-time stochastic processes on a probability space. Four axioms imply that there are unique discount factors such that preferences among stochastic processes correspond to preferences among present value random vectors. The familiar axioms are weak ordering, continuity and nontriviality. The fourth axiom, decomposition, is non-standard and key. These axioms and the converse of decomposition are assumed in previous axiomatic justifications for discounting with nonlinear intraperiod utility functions in deterministic frameworks. Thus, the results here provide the weakest known sufficient conditions for discounting in deterministic or stochastic settings. In addition to the four axioms, if there exists a von Neumann-Morgenstern utility function corresponding to the binary relation, then that function is risk neutral (i.e., affine). In this sense, discounting axioms imply risk neutrality. Copyright Springer Science+Business Media, LLC 2013"
]
} |
1902.02893 | 2919328485 | Reinforcement learning (RL) agents have traditionally been tasked with maximizing the value function of a Markov decision process (MDP), either in continuous settings, with fixed discount factor @math , or in episodic settings, with @math . While this has proven effective for specific tasks with well-defined objectives (e.g., games), it has never been established that fixed discounting is suitable for general purpose use (e.g., as a model of human preferences). This paper characterizes rationality in sequential decision making using a set of seven axioms and arrives at a form of discounting that generalizes traditional fixed discounting. In particular, our framework admits a state-action dependent "discount" factor that is not constrained to be less than 1, so long as there is eventual long run discounting. Although this broadens the range of possible preference structures in continuous settings, we show that there exists a unique "optimizing MDP" with fixed @math whose optimal value function matches the true utility of the optimal policy, and we quantify the difference between value and utility for suboptimal policies. Our work can be seen as providing a normative justification for (a slight generalization of) Martha White's RL task formalism (2017) and other recent departures from the traditional RL, and is relevant to task specification in RL, inverse RL and preference-based RL. | frederick2002time provide a comprehensive empirical review of the discounted additive utility model as it pertains to human behavior and conclude that it has "little empirical support." While this is consistent with our normative position, it does not invalidate discounting, as humans are known to exhibit regular violations of rationality @cite_8 . Such violations are not surprising, but rather a necessary result of bounded rationality @cite_9 .
Our work is in a similar vein to russell2014update in that it argues that this boundedness necessitates new research directions. | {
"cite_N": [
"@cite_9",
"@cite_8"
],
"mid": [
"2076337359",
"1986646649"
],
"abstract": [
"Abstract : The purpose of this paper is to discuss the asymptotic behavior of the sequence (f sub n(i)) generated by a nonlinear recurrence relation. This problem arises in connection with an equipment replacement problem, cf. S. Dreyfus, A Note on an Industrial Replacement Process.",
"Alternative descriptions of a decision problem often give rise to different preferences, contrary to the principle of invariance that underlines the rational theory of choice. Violations of this theory are traced to the rules that govern the framing of decision and to the psychological principles of evaluation embodied in prospect theory. Invariance and dominance are obeyed when their application is transparent and often violated in other situations. Because these rules are normatively essential but descriptively invalid, no theory of choice can be both normatively adequate and descriptively accurate."
]
} |
1902.02893 | 2919328485 | Reinforcement learning (RL) agents have traditionally been tasked with maximizing the value function of a Markov decision process (MDP), either in continuous settings, with fixed discount factor @math , or in episodic settings, with @math . While this has proven effective for specific tasks with well-defined objectives (e.g., games), it has never been established that fixed discounting is suitable for general purpose use (e.g., as a model of human preferences). This paper characterizes rationality in sequential decision making using a set of seven axioms and arrives at a form of discounting that generalizes traditional fixed discounting. In particular, our framework admits a state-action dependent "discount" factor that is not constrained to be less than 1, so long as there is eventual long run discounting. Although this broadens the range of possible preference structures in continuous settings, we show that there exists a unique "optimizing MDP" with fixed @math whose optimal value function matches the true utility of the optimal policy, and we quantify the difference between value and utility for suboptimal policies. Our work can be seen as providing a normative justification for (a slight generalization of) Martha White's RL task formalism (2017) and other recent departures from the traditional RL, and is relevant to task specification in RL, inverse RL and preference-based RL. | The MDP is the predominant model used in RL research. It is commonly assumed that complex preference structures, including arbitrary human preferences, can be well represented by traditional MDP value functions @cite_22 @cite_20 . To the authors' knowledge, the best theoretical justification for this assumption is due to ng2000algorithms , discussed in Subsection . Some RL researchers have proposed generalizations of the traditional value functions that include a transition-dependent discount factor @cite_21 @cite_1 .
Our work can be understood as a normative justification for (a slight generalization of) these approaches. This, and further connections to the RL literature, are discussed in Section . | {
"cite_N": [
"@cite_21",
"@cite_1",
"@cite_22",
"@cite_20"
],
"mid": [
"",
"2963604043",
"1999874108",
"2626804490"
],
"abstract": [
"",
"One of the key challenges of artificial intelligence is to learn models that are effective in the context of planning. In this document we introduce the predictron architecture. The predictron consists of a fully abstract model, represented by a Markov reward process, that can be rolled forward multiple \"imagined\" planning steps. Each forward pass of the predictron accumulates internal rewards and values over multiple planning depths. The predictron is trained end-to-end so as to make these accumulated values accurately approximate the true value function, thereby focusing the model upon the aspects of the environment most relevant to planning. We applied the predictron to procedurally generated random mazes and a simulator for the game of pool. The predictron yielded significantly more accurate predictions than conventional deep neural network architectures.",
"We consider learning in a Markov decision process where we are not explicitly given a reward function, but where instead we can observe an expert demonstrating the task that we want to learn to perform. This setting is useful in applications (such as the task of driving) where it may be difficult to write down an explicit reward function specifying exactly how different desiderata should be traded off. We think of the expert as trying to maximize a reward function that is expressible as a linear combination of known features, and give an algorithm for learning the task demonstrated by the expert. Our algorithm is based on using \"inverse reinforcement learning\" to try to recover the unknown reward function. We show that our algorithm terminates in a small number of iterations, and that even though we may never recover the expert's reward function, the policy output by the algorithm will attain performance close to that of the expert, where here performance is measured with respect to the expert's unknown reward function.",
"For sophisticated reinforcement learning (RL) systems to interact usefully with real-world environments, we need to communicate complex goals to these systems. In this work, we explore goals defined in terms of (non-expert) human preferences between pairs of trajectory segments. We show that this approach can effectively solve complex RL tasks without access to the reward function, including Atari games and simulated robot locomotion, while providing feedback on less than one percent of our agent's interactions with the environment. This reduces the cost of human oversight far enough that it can be practically applied to state-of-the-art RL systems. To demonstrate the flexibility of our approach, we show that we can successfully train complex novel behaviors with about an hour of human time. These behaviors and environments are considerably more complex than any that have been previously learned from human feedback."
]
} |
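The transition-dependent discounting described in the row above can be sketched as follows. This is a hypothetical minimal illustration, not the paper's formalism: each step's reward is weighted by the product of the discounts of all preceding transitions, and traditional fixed discounting is the special case where every per-step discount equals a constant gamma.

```python
def discounted_value(rewards, discounts):
    """Return of a trajectory under transition-dependent discounting.
    rewards[t] is weighted by the product discounts[0] * ... * discounts[t-1].
    With discounts[t] == gamma for all t this reduces to the usual
    sum of gamma**t * rewards[t]."""
    value, weight = 0.0, 1.0
    for r, g in zip(rewards, discounts):
        value += weight * r   # reward weighted by accumulated discount
        weight *= g           # per-transition discount need not be < 1
    return value
```

Note that individual discounts may exceed 1, as long as the accumulated weight eventually shrinks (the "eventual long run discounting" condition in the abstract).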
1902.03057 | 2920456178 | Service robots are expected to be more autonomous and efficiently work in human-centric environments. For this type of robots, open-ended object recognition is a challenging task due to the high demand for two essential capabilities: (i) the accurate and real-time response, and (ii) the ability to learn new object categories from very few examples on-site. These capabilities are required for such robots since no matter how extensive the training data used for batch learning, the robot might be faced with an unknown object when operating in everyday environments. In this work, we present OrthographicNet, a deep transfer learning based approach, for 3D object recognition in open-ended domains. In particular, OrthographicNet generates a rotation and scale invariant global feature for a given object, enabling to recognize the same or similar objects seen from different perspectives. Experimental results show that our approach yields significant improvements over the previous state-of-the-art approaches concerning scalability, memory usage, and object recognition performance. Regarding real-time performance, two real-world demonstrations validate the promising performance of the proposed architecture. Moreover, our approach demonstrates the capability of learning from very few training examples in a real-world setting. | The pointset-based approaches are completely different from the other two. PointNet, proposed by @cite_31 , directly takes unordered point sets as inputs. PointNet learns a global representation of a point cloud by first computing individual point features with a per-point Multi-Layer Perceptron (MLP) and then aggregating all features of the given object. Recently, @cite_33 improved PointNet by exploiting local structures induced by the metric space. In particular, PointNet++ segments a point cloud into smaller clusters and then sends each cluster through a small PointNet.
This, however, leads to a more complicated architecture with reduced speed, making it unsuitable for real-time applications. In another work, Klokov et al. @cite_5 proposed Kd-Networks for the recognition of 3D objects represented as point clouds. We compare our method with state-of-the-art deep learning methods including 3DShapeNet @cite_21 , DeepPano @cite_7 , BeamNet @cite_1 , GeometryImage @cite_16 , and PointNet @cite_31 . | {
"cite_N": [
"@cite_33",
"@cite_7",
"@cite_21",
"@cite_1",
"@cite_5",
"@cite_31",
"@cite_16"
],
"mid": [
"2624503621",
"1629010235",
"",
"2952908780",
"2606987267",
"2555254696",
"2518780089"
],
"abstract": [
"Few prior works study deep learning on point sets. PointNet by is a pioneer in this direction. However, by design PointNet does not capture local structures induced by the metric space points live in, limiting its ability to recognize fine-grained patterns and generalizability to complex scenes. In this work, we introduce a hierarchical neural network that applies PointNet recursively on a nested partitioning of the input point set. By exploiting metric space distances, our network is able to learn local features with increasing contextual scales. With further observation that point sets are usually sampled with varying densities, which results in greatly decreased performance for networks trained on uniform densities, we propose novel set learning layers to adaptively combine features from multiple scales. Experiments show that our network called PointNet++ is able to learn deep point set features efficiently and robustly. In particular, results significantly better than state-of-the-art have been obtained on challenging benchmarks of 3D point clouds.",
"This letter introduces a robust representation of 3-D shapes, named DeepPano, learned with deep convolutional neural networks (CNN). Firstly, each 3-D shape is converted into a panoramic view, namely a cylinder projection around its principle axis. Then, a variant of CNN is specifically designed for learning the deep representations directly from such views. Different from typical CNN, a row-wise max-pooling layer is inserted between the convolution and fully-connected layers, making the learned representations invariant to the rotation around a principle axis. Our approach achieves state-of-the-art retrieval classification results on two large-scale 3-D model datasets (ModelNet-10 and ModelNet-40), outperforming typical methods by a large margin.",
"",
"This paper addresses 3D shape recognition. Recent work typically represents a 3D shape as a set of binary variables corresponding to 3D voxels of a uniform 3D grid centered on the shape, and resorts to deep convolutional neural networks(CNNs) for modeling these binary variables. Robust learning of such CNNs is currently limited by the small datasets of 3D shapes available, an order of magnitude smaller than other common datasets in computer vision. Related work typically deals with the small training datasets using a number of ad hoc, hand-tuning strategies. To address this issue, we formulate CNN learning as a beam search aimed at identifying an optimal CNN architecture, namely, the number of layers, nodes, and their connectivity in the network, as well as estimating parameters of such an optimal CNN. Each state of the beam search corresponds to a candidate CNN. Two types of actions are defined to add new convolutional filters or new convolutional layers to a parent CNN, and thus transition to children states. The utility function of each action is efficiently computed by transferring parameter values of the parent CNN to its children, thereby enabling an efficient beam search. Our experimental evaluation on the 3D ModelNet dataset demonstrates that our model pursuit using the beam search yields a CNN with superior performance on 3D shape classification than the state of the art.",
"We present a new deep learning architecture (called Kd-network) that is designed for 3D model recognition tasks and works with unstructured point clouds. The new architecture performs multiplicative transformations and share parameters of these transformations according to the subdivisions of the point clouds imposed onto them by Kd-trees. Unlike the currently dominant convolutional architectures that usually require rasterization on uniform two-dimensional or three-dimensional grids, Kd-networks do not rely on such grids in any way and therefore avoid poor scaling behaviour. In a series of experiments with popular shape recognition benchmarks, Kd-networks demonstrate competitive performance in a number of shape recognition tasks such as shape classification, shape retrieval and shape part segmentation.",
"During the last few years, Convolutional Neural Networks are slowly but surely becoming the default method solve many computer vision related problems. This is mainly due to the continuous success that they have achieved when applied to certain tasks such as image, speech, or object recognition. Despite all the efforts, object class recognition methods based on deep learning techniques still have room for improvement. Most of the current approaches do not fully exploit 3D information, which has been proven to effectively improve the performance of other traditional object recognition methods. In this work, we propose PointNet, a new approach inspired by VoxNet and 3D ShapeNets, as an improvement over the existing methods by using density occupancy grids representations for the input data, and integrating them into a supervised Convolutional Neural Network architecture. An extensive experimentation was carried out, using ModelNet - a large-scale 3D CAD models dataset - to train and test the system, to prove that our approach is on par with state-of-the-art methods in terms of accuracy while being able to perform recognition under real-time constraints.",
"Surfaces serve as a natural parametrization to 3D shapes. Learning surfaces using convolutional neural networks (CNNs) is a challenging task. Current paradigms to tackle this challenge are to either adapt the convolutional filters to operate on surfaces, learn spectral descriptors defined by the Laplace-Beltrami operator, or to drop surfaces altogether in lieu of voxelized inputs. Here we adopt an approach of converting the 3D shape into a ‘geometry image’ so that standard CNNs can directly be used to learn 3D shapes. We qualitatively and quantitatively validate that creating geometry images using authalic parametrization on a spherical domain is suitable for robust learning of 3D shape surfaces. This spherically parameterized shape is then projected and cut to convert the original 3D shape into a flat and regular geometry image. We propose a way to implicitly learn the topology and structure of 3D shapes using geometry images encoded with suitable features. We show the efficacy of our approach to learn 3D shape surfaces for classification and retrieval tasks on non-rigid and rigid shape datasets."
]
} |
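The PointNet-style aggregation described in the row above (a shared per-point MLP followed by a symmetric pooling operation) can be sketched in a few lines. This is an illustrative toy with hypothetical weight shapes, not the published architecture: the key property is that the max-pool makes the global feature invariant to the ordering of the input points.

```python
import numpy as np

def pointnet_global_feature(points, W1, b1, W2, b2):
    """Toy PointNet-style encoder: the same two-layer MLP is applied to
    every point independently, then a symmetric max-pool over points
    yields an order-invariant global feature. points: (n, 3) array."""
    h = np.maximum(points @ W1 + b1, 0.0)  # shared MLP layer 1 (ReLU)
    h = np.maximum(h @ W2 + b2, 0.0)       # shared MLP layer 2 (ReLU)
    return h.max(axis=0)                   # symmetric aggregation over points
```

PointNet++ then applies such an encoder repeatedly to local clusters of the cloud instead of once to the whole set, which captures local structure at the cost of the extra complexity noted above.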
1902.03057 | 2920456178 | Service robots are expected to be more autonomous and efficiently work in human-centric environments. For this type of robots, open-ended object recognition is a challenging task due to the high demand for two essential capabilities: (i) the accurate and real-time response, and (ii) the ability to learn new object categories from very few examples on-site. These capabilities are required for such robots since no matter how extensive the training data used for batch learning, the robot might be faced with an unknown object when operating in everyday environments. In this work, we present OrthographicNet, a deep transfer learning based approach, for 3D object recognition in open-ended domains. In particular, OrthographicNet generates a rotation and scale invariant global feature for a given object, enabling to recognize the same or similar objects seen from different perspectives. Experimental results show that our approach yields significant improvements over the previous state-of-the-art approaches concerning scalability, memory usage, and object recognition performance. Regarding real-time performance, two real-world demonstrations validate the promising performance of the proposed architecture. Moreover, our approach demonstrates the capability of learning from very few training examples in a real-world setting. | We also investigate the ability to learn novel classes quickly, which is formulated as a transfer learning problem. Recent deep transfer learning approaches assume that large amounts of training data are available for novel classes @cite_3 . In such situations, the strength of pre-trained CNNs for extracting features is well known @cite_3 @cite_23 . Unlike our approach, CNN-based approaches are not scale and rotation invariant.
Several researchers try to address this issue with data augmentation, either using Generative Adversarial Networks (GANs) @cite_13 or by modifying images through translation, flipping, rotation, and added noise @cite_34 ; i.e., CNNs are still required to learn the rotation equivariance properties from the data @cite_15 @cite_30 . Furthermore, unlike these CNN-based approaches, we assume that the training instances are extracted from the on-site experiences of a robot, and thus become gradually available over time, rather than being completely or partially available at the beginning of the learning process. Moreover, in our approach the set of classes grows continuously, while in the mentioned deep transfer learning approaches the set of classes is predefined. | {
"cite_N": [
"@cite_30",
"@cite_3",
"@cite_23",
"@cite_15",
"@cite_34",
"@cite_13"
],
"mid": [
"2952054889",
"2953391683",
"",
"2951770173",
"2783820753",
""
],
"abstract": [
"We introduce Group equivariant Convolutional Neural Networks (G-CNNs), a natural generalization of convolutional neural networks that reduces sample complexity by exploiting symmetries. G-CNNs use G-convolutions, a new type of layer that enjoys a substantially higher degree of weight sharing than regular convolution layers. G-convolutions increase the expressive capacity of the network without increasing the number of parameters. Group convolution layers are easy to use and can be implemented with negligible computational overhead for discrete groups generated by translations, reflections and rotations. G-CNNs achieve state of the art results on CIFAR10 and rotated MNIST.",
"Recent results indicate that the generic descriptors extracted from the convolutional neural networks are very powerful. This paper adds to the mounting evidence that this is indeed the case. We report on a series of experiments conducted for different recognition tasks using the publicly available code and model of the network which was trained to perform object classification on ILSVRC13. We use features extracted from the network as a generic image representation to tackle the diverse range of recognition tasks of object image classification, scene recognition, fine grained recognition, attribute detection and image retrieval applied to a diverse set of datasets. We selected these tasks and datasets as they gradually move further away from the original task and data the network was trained to solve. Astonishingly, we report consistent superior results compared to the highly tuned state-of-the-art systems in all the visual classification tasks on various datasets. For instance retrieval it consistently outperforms low memory footprint methods except for sculptures dataset. The results are achieved using a linear SVM classifier (or @math distance in case of retrieval) applied to a feature representation of size 4096 extracted from a layer in the net. The representations are further modified using simple augmentation techniques e.g. jittering. The results strongly suggest that features obtained from deep learning with convolutional nets should be the primary candidate in most visual recognition tasks.",
"",
"Many classes of images exhibit rotational symmetry. Convolutional neural networks are sometimes trained using data augmentation to exploit this, but they are still required to learn the rotation equivariance properties from the data. Encoding these properties into the network architecture, as we are already used to doing for translation equivariance by using convolutional layers, could result in a more efficient use of the parameter budget by relieving the model from learning them. We introduce four operations which can be inserted into neural network models as layers, and which can be combined to make these models partially equivariant to rotations. They also enable parameter sharing across different orientations. We evaluate the effect of these architectural modifications on three datasets which exhibit rotational symmetry and demonstrate improved performance with smaller models.",
"Humans can quickly learn new visual concepts, perhaps because they can easily visualize or imagine what novel objects look like from different views. Incorporating this ability to hallucinate novel instances of new concepts might help machine vision systems perform better low-shot learning, i.e., learning concepts from few examples. We present a novel approach to low-shot learning that uses this idea. Our approach builds on recent progress in meta-learning (\"learning to learn\") by combining a meta-learner with a \"hallucinator\" that produces additional training examples, and optimizing both models jointly. Our hallucinator can be incorporated into a variety of meta-learners and provides significant gains: up to a 6 point boost in classification accuracy when only a single training example is available, yielding state-of-the-art performance on the challenging ImageNet low-shot classification benchmark.",
""
]
} |
1902.02940 | 2919052601 | When optimizing against the mean loss over a distribution of predictions in the context of a regression task, then even if there is a distribution of targets the optimal prediction distribution is always a delta function at a single value. Methods of constructing generative models need to overcome this tendency. We consider a simple method of summarizing the prediction error, such that the optimal strategy corresponds to outputting a distribution of predictions with a support that matches the support of the distribution of targets --- optimizing against the minimal value of the loss given a set of samples from the prediction distribution, rather than the mean. We show that models trained against this loss learn to capture the support of the target distribution and, when combined with an auxiliary classifier-like prediction task, can be projected via rejection sampling to reproduce the full distribution of targets. The resulting method works well compared to other generative modeling approaches particularly in low dimensional spaces with highly non-trivial distributions, due to mode collapse solutions being globally suboptimal with respect to the extreme value loss. However, the method is less suited to high-dimensional spaces such as images due to the scaling of the number of samples needed in order to accurately estimate the extreme value loss when the dimension of the data manifold becomes large. | : Mixture density networks @cite_7 directly parameterize the output distribution with (generally) a sum of Gaussians, and then train the network to maximize the likelihood of the empirical distribution under the model. When the dimension of the target space is low, this is an effective way to capture complex multi-modal structure in the joint distribution. 
However, the covariance matrices to specify each Gaussian grow quadratically in the dimension of the target space, and the number of Gaussians needed to fit a curved manifold structure in high dimension may likewise grow quickly. MDNs were used in SketchRNN @cite_11 to model the continuation of lines in drawing figures and kanji. | {
"cite_N": [
"@cite_7",
"@cite_11"
],
"mid": [
"1579853615",
"2949439775"
],
"abstract": [
"Minimization of a sum-of-squares or cross-entropy error function leads to network outputs which approximate the conditional averages of the target data, conditioned on the input vector. For classification problems, with a suitably chosen target coding scheme, these averages represent the posterior probabilities of class membership, and so can be regarded as optimal. For problems involving the prediction of continuous variables, however, the conditional averages provide only a very limited description of the properties of the target variables. This is particularly true for problems in which the mapping to be learned is multi-valued, as often arises in the solution of inverse problems, since the average of several correct target values is not necessarily itself a correct value. In order to obtain a complete description of the data, for the purposes of predicting the outputs corresponding to new input vectors, we must model the conditional probability distribution of the target data, again conditioned on the input vector. In this paper we introduce a new class of network models obtained by combining a conventional neural network with a mixture density model. The complete system is called a Mixture Density Network, and can in principle represent arbitrary conditional probability distributions in the same way that a conventional neural network can represent arbitrary functions. We demonstrate the effectiveness of Mixture Density Networks using both a toy problem and a problem involving robot inverse kinematics.",
"We present sketch-rnn, a recurrent neural network (RNN) able to construct stroke-based drawings of common objects. The model is trained on thousands of crude human-drawn images representing hundreds of classes. We outline a framework for conditional and unconditional sketch generation, and describe new robust training methods for generating coherent sketch drawings in a vector format."
]
} |
1902.02940 | 2919052601 | When optimizing against the mean loss over a distribution of predictions in the context of a regression task, then even if there is a distribution of targets the optimal prediction distribution is always a delta function at a single value. Methods of constructing generative models need to overcome this tendency. We consider a simple method of summarizing the prediction error, such that the optimal strategy corresponds to outputting a distribution of predictions with a support that matches the support of the distribution of targets --- optimizing against the minimal value of the loss given a set of samples from the prediction distribution, rather than the mean. We show that models trained against this loss learn to capture the support of the target distribution and, when combined with an auxiliary classifier-like prediction task, can be projected via rejection sampling to reproduce the full distribution of targets. The resulting method works well compared to other generative modeling approaches particularly in low dimensional spaces with highly non-trivial distributions, due to mode collapse solutions being globally suboptimal with respect to the extreme value loss. However, the method is less suited to high-dimensional spaces such as images due to the scaling of the number of samples needed in order to accurately estimate the extreme value loss when the dimension of the data manifold becomes large. | : One method for generating samples from a high-dimensional distribution is to factorize the joint distribution into a product of one-dimensional conditional probabilities: @math . These one-dimensional distributions can each be learned separately by discretizing the possibilities and minimizing the categorical cross-entropy or KL divergence. 
Alternatively, if the variables @math have some structural relationship, such as words in a sentence or pixels in an image, then weight sharing and causal masking can be used to learn a single model which is able to generate all such distributions. Sampling from the distribution is done by sampling from each one-dimensional distribution in turn, conditioning on the previous samples. Alternatively, methods such as beam search can be used to find maximum likelihood outcomes or otherwise adjust the 'temperature' of the generator. Methods which use this approach include CharRNN @cite_18 , PixelRNN @cite_4 , PixelCNN @cite_25 @cite_28 , WaveNet @cite_3 , and Transformer networks @cite_9 . | {
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_28",
"@cite_9",
"@cite_3",
"@cite_25"
],
"mid": [
"196214544",
"2953318193",
"",
"2626778328",
"",
"2423557781"
],
"abstract": [
"Recurrent Neural Networks (RNNs) are very powerful sequence models that do not enjoy widespread use because it is extremely difficult to train them properly. Fortunately, recent advances in Hessian-free optimization have been able to overcome the difficulties associated with training RNNs, making it possible to apply them successfully to challenging sequence problems. In this paper we demonstrate the power of RNNs trained with the new Hessian-Free optimizer (HF) by applying them to character-level language modeling tasks. The standard RNN architecture, while effective, is not ideally suited for such tasks, so we introduce a new RNN variant that uses multiplicative (or \"gated\") connections which allow the current input character to determine the transition matrix from one hidden state vector to the next. After training the multiplicative RNN with the HF optimizer for five days on 8 high-end Graphics Processing Units, we were able to surpass the performance of the best previous single method for character-level language modeling – a hierarchical non-parametric sequence model. To our knowledge this represents the largest recurrent neural network application to date.",
"Modeling the distribution of natural images is a landmark problem in unsupervised learning. This task requires an image model that is at once expressive, tractable and scalable. We present a deep neural network that sequentially predicts the pixels in an image along the two spatial dimensions. Our method models the discrete probability of the raw pixel values and encodes the complete set of dependencies in the image. Architectural novelties include fast two-dimensional recurrent layers and an effective use of residual connections in deep recurrent networks. We achieve log-likelihood scores on natural images that are considerably better than the previous state of the art. Our main results also provide benchmarks on the diverse ImageNet dataset. Samples generated from the model appear crisp, varied and globally coherent.",
"",
"The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.",
"",
"This work explores conditional image generation with a new image density model based on the PixelCNN architecture. The model can be conditioned on any vector, including descriptive labels or tags, or latent embeddings created by other networks. When conditioned on class labels from the ImageNet database, the model is able to generate diverse, realistic scenes representing distinct animals, objects, landscapes and structures. When conditioned on an embedding produced by a convolutional network given a single image of an unseen face, it generates a variety of new portraits of the same person with different facial expressions, poses and lighting conditions. We also show that conditional PixelCNN can serve as a powerful decoder in an image autoencoder. Additionally, the gated convolutional layers in the proposed model improve the log-likelihood of PixelCNN to match the state-of-the-art performance of PixelRNN on ImageNet, with greatly reduced computational cost."
]
} |
1902.02940 | 2919052601 | When optimizing against the mean loss over a distribution of predictions in the context of a regression task, then even if there is a distribution of targets the optimal prediction distribution is always a delta function at a single value. Methods of constructing generative models need to overcome this tendency. We consider a simple method of summarizing the prediction error, such that the optimal strategy corresponds to outputting a distribution of predictions with a support that matches the support of the distribution of targets --- optimizing against the minimal value of the loss given a set of samples from the prediction distribution, rather than the mean. We show that models trained against this loss learn to capture the support of the target distribution and, when combined with an auxiliary classifier-like prediction task, can be projected via rejection sampling to reproduce the full distribution of targets. The resulting method works well compared to other generative modeling approaches particularly in low dimensional spaces with highly non-trivial distributions, due to mode collapse solutions being globally suboptimal with respect to the extreme value loss. However, the method is less suited to high-dimensional spaces such as images due to the scaling of the number of samples needed in order to accurately estimate the extreme value loss when the dimension of the data manifold becomes large. | : For binary data, Boltzmann machines @cite_30 and Restricted Boltzmann machines (RBMs) @cite_14 can directly learn an energy function @math over the space, such that the Boltzmann distribution over the space @math is matched to the empirical distribution. Boltzmann machines provide a direct estimation of the likelihood of samples, which can be useful for other applications. In the form of the hidden nodes, Boltzmann machines can learn a discrete latent space representing distributions over samples. 
However, Boltzmann machines are restricted in that the energy function must be chosen such that marginalization over the latent degrees of freedom can be done analytically; otherwise extensive Monte-Carlo simulation is needed for every update. This is generally a quadratic model with (in the case of the RBM) no within-layer links. This can make them tricky to generalize to new domains. | {
"cite_N": [
"@cite_30",
"@cite_14"
],
"mid": [
"2042492924",
"2116064496"
],
"abstract": [
"The computational power of massively parallel networks of simple processing elements resides in the communication bandwidth provided by the hardware connections between elements. These connections can allow a significant fraction of the knowledge of the system to be applied to an instance of a problem in a very short time. One kind of computation for which massively parallel networks appear to be well suited is large constraint satisfaction searches, but to use the connections efficiently two conditions must be met: First, a search technique that is suitable for parallel networks must be found. Second, there must be some way of choosing internal representations which allow the preexisting hardware connections to be used efficiently for encoding the constraints in the domain being searched. We describe a general parallel search method, based on statistical mechanics, and we show how it leads to a general learning rule for modifying the connection strengths so as to incorporate knowledge about a task domain in an efficient way. We describe some simple examples in which the learning algorithm creates internal representations that are demonstrably the most efficient way of using the preexisting connectivity structure.",
"It is possible to combine multiple latent-variable models of the same data by multiplying their probability distributions together and then renormalizing. This way of combining individual \"expert\" models makes it hard to generate samples from the combined model but easy to infer the values of the latent variables of each expert, because the combination rule ensures that the latent variables of different experts are conditionally independent when given the data. A product of experts (PoE) is therefore an interesting candidate for a perceptual system in which rapid inference is vital and generation is unnecessary. Training a PoE by maximizing the likelihood of the data is difficult because it is hard even to approximate the derivatives of the renormalization term in the combination rule. Fortunately, a PoE can be trained using a different objective function called \"contrastive divergence\" whose derivatives with regard to the parameters can be approximated accurately and efficiently. Examples are presented of contrastive divergence learning using several types of expert on several types of data."
]
} |
1902.02940 | 2919052601 | When optimizing against the mean loss over a distribution of predictions in the context of a regression task, then even if there is a distribution of targets the optimal prediction distribution is always a delta function at a single value. Methods of constructing generative models need to overcome this tendency. We consider a simple method of summarizing the prediction error, such that the optimal strategy corresponds to outputting a distribution of predictions with a support that matches the support of the distribution of targets --- optimizing against the minimal value of the loss given a set of samples from the prediction distribution, rather than the mean. We show that models trained against this loss learn to capture the support of the target distribution and, when combined with an auxiliary classifier-like prediction task, can be projected via rejection sampling to reproduce the full distribution of targets. The resulting method works well compared to other generative modeling approaches particularly in low dimensional spaces with highly non-trivial distributions, due to mode collapse solutions being globally suboptimal with respect to the extreme value loss. However, the method is less suited to high-dimensional spaces such as images due to the scaling of the number of samples needed in order to accurately estimate the extreme value loss when the dimension of the data manifold becomes large. | : Generative adversarial networks use a pair of networks in order to effectively estimate, then minimize, the difference between two data distributions. The discriminator network tries to classify samples as belonging to either the true or fake distribution, while the generator network uses the gradients propagated through the discriminator to try to fool it. The Nash equilibrium of this game is the point at which the fake and true distributions are identical. 
Due to training instability and phenomena such as mode collapse, there have been a number of refinements on the basic GAN idea such as LSGAN @cite_12 , Unrolled GANs @cite_8 , WGAN @cite_26 , WGAN-GP @cite_33 , Relativistic GANs @cite_17 , as well as fusing GANs with other techniques such as in VAEGAN @cite_19 , BEGAN @cite_5 , etc. Very high resolution outputs have been achieved by progressively stacking generators at a sequence of scales (Progressive GANs @cite_22 ). | {
"cite_N": [
"@cite_26",
"@cite_33",
"@cite_22",
"@cite_8",
"@cite_19",
"@cite_5",
"@cite_12",
"@cite_17"
],
"mid": [
"2739748921",
"2605135824",
"2766527293",
"2953246223",
"2202109488",
"2605195953",
"2593414223",
"2810518847"
],
"abstract": [
"",
"Generative Adversarial Networks (GANs) are powerful generative models, but suffer from training instability. The recently proposed Wasserstein GAN (WGAN) makes progress toward stable training of GANs, but sometimes can still generate only low-quality samples or fail to converge. We find that these problems are often due to the use of weight clipping in WGAN to enforce a Lipschitz constraint on the critic, which can lead to undesired behavior. We propose an alternative to clipping weights: penalize the norm of gradient of the critic with respect to its input. Our proposed method performs better than standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning, including 101-layer ResNets and language models over discrete data. We also achieve high quality generations on CIFAR-10 and LSUN bedrooms.",
"We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024^2. We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CelebA dataset.",
"We introduce a method to stabilize Generative Adversarial Networks (GANs) by defining the generator objective with respect to an unrolled optimization of the discriminator. This allows training to be adjusted between using the optimal discriminator in the generator's objective, which is ideal but infeasible in practice, and using the current value of the discriminator, which is often unstable and leads to poor solutions. We show how this technique solves the common problem of mode collapse, stabilizes training of GANs with complex recurrent generators, and increases diversity and coverage of the data distribution by the generator.",
"We present an autoencoder that leverages learned representations to better measure similarities in data space. By combining a variational autoencoder with a generative adversarial network we can use learned feature representations in the GAN discriminator as basis for the VAE reconstruction objective. Thereby, we replace element-wise errors with feature-wise errors to better capture the data distribution while offering invariance towards e.g. translation. We apply our method to images of faces and show that it outperforms VAEs with element-wise similarity measures in terms of visual fidelity. Moreover, we show that the method learns an embedding in which high-level abstract visual features (e.g. wearing glasses) can be modified using simple arithmetic.",
"We propose a new equilibrium enforcing method paired with a loss derived from the Wasserstein distance for training auto-encoder based Generative Adversarial Networks. This method balances the generator and discriminator during training. Additionally, it provides a new approximate convergence measure, fast and stable training and high visual quality. We also derive a way of controlling the trade-off between image diversity and visual quality. We focus on the image generation task, setting a new milestone in visual quality, even at higher resolutions. This is achieved while using a relatively simple model architecture and a standard training procedure.",
"Unsupervised learning with generative adversarial networks (GANs) has proven hugely successful. Regular GANs hypothesize the discriminator as a classifier with the sigmoid cross entropy loss function. However, we found that this loss function may lead to the vanishing gradients problem during the learning process. To overcome such a problem, we propose in this paper the Least Squares Generative Adversarial Networks (LSGANs) which adopt the least squares loss function for the discriminator. We show that minimizing the objective function of LSGAN yields minimizing the Pearson χ2 divergence. There are two benefits of LSGANs over regular GANs. First, LSGANs are able to generate higher quality images than regular GANs. Second, LSGANs perform more stable during the learning process. We evaluate LSGANs on LSUN and CIFAR-10 datasets and the experimental results show that the images generated by LSGANs are of better quality than the ones generated by regular GANs. We also conduct two comparison experiments between LSGANs and regular GANs to illustrate the stability of LSGANs.",
"In standard generative adversarial network (SGAN), the discriminator estimates the probability that the input data is real. The generator is trained to increase the probability that fake data is real. We argue that it should also simultaneously decrease the probability that real data is real because 1) this would account for a priori knowledge that half of the data in the mini-batch is fake, 2) this would be observed with divergence minimization, and 3) in optimal settings, SGAN would be equivalent to integral probability metric (IPM) GANs. We show that this property can be induced by using a relativistic discriminator which estimates the probability that the given real data is more realistic than a randomly sampled fake data. We also present a variant in which the discriminator estimates the probability that the given real data is more realistic than fake data, on average. We generalize both approaches to non-standard GAN loss functions and we refer to them respectively as Relativistic GANs (RGANs) and Relativistic average GANs (RaGANs). We show that IPM-based GANs are a subset of RGANs which use the identity function. Empirically, we observe that 1) RGANs and RaGANs are significantly more stable and generate higher quality data samples than their non-relativistic counterparts, 2) Standard RaGAN with gradient penalty generates data of better quality than WGAN-GP while only requiring a single discriminator update per generator update (reducing the time taken for reaching the state-of-the-art by 400%), and 3) RaGANs are able to generate plausible high resolutions images (256x256) from a very small sample (N=2011), while GAN and LSGAN cannot; these images are of significantly better quality than the ones generated by WGAN-GP and SGAN with spectral normalization."
]
} |
1902.02940 | 2919052601 | When optimizing against the mean loss over a distribution of predictions in the context of a regression task, then even if there is a distribution of targets the optimal prediction distribution is always a delta function at a single value. Methods of constructing generative models need to overcome this tendency. We consider a simple method of summarizing the prediction error, such that the optimal strategy corresponds to outputting a distribution of predictions with a support that matches the support of the distribution of targets --- optimizing against the minimal value of the loss given a set of samples from the prediction distribution, rather than the mean. We show that models trained against this loss learn to capture the support of the target distribution and, when combined with an auxiliary classifier-like prediction task, can be projected via rejection sampling to reproduce the full distribution of targets. The resulting method works well compared to other generative modeling approaches particularly in low dimensional spaces with highly non-trivial distributions, due to mode collapse solutions being globally suboptimal with respect to the extreme value loss. However, the method is less suited to high-dimensional spaces such as images due to the scaling of the number of samples needed in order to accurately estimate the extreme value loss when the dimension of the data manifold becomes large. | GANs can produce high-quality samples and retain the advantage of having a latent space to index the output distribution. However, training tends to be unstable, and among the various methods there tends to be a tradeoff between convergence and stability (with WGAN producing more stable training, but ultimately slightly worse converged results than the initial DCGAN @cite_24 formulation). GANs also do not produce a probability density associated with each sample. | {
"cite_N": [
"@cite_24"
],
"mid": [
"2173520492"
],
"abstract": [
"In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks - demonstrating their applicability as general image representations."
]
} |
1902.03084 | 2950036174 | Action prediction is to recognize the class label of an ongoing activity when only a part of it is observed. In this paper, we focus on online action prediction in streaming 3D skeleton sequences. A dilated convolutional network is introduced to model the motion dynamics in temporal dimension via a sliding window over the temporal axis. Since there are significant temporal scale variations in the observed part of the ongoing action at different time steps, a novel window scale selection method is proposed to make our network focus on the performed part of the ongoing action and try to suppress the possible incoming interference from the previous actions at each step. An activation sharing scheme is also proposed to handle the overlapping computations among the adjacent time steps, which enables our framework to run more efficiently. Moreover, to enhance the performance of our framework for action prediction with the skeletal input data, a hierarchy of dilated tree convolutions are also designed to learn the multi-level structured semantic representations over the skeleton joints at each frame. Our proposed approach is evaluated on four challenging datasets. The extensive experiments demonstrate the effectiveness of our method for skeleton-based online action prediction. | Most of the existing skeleton-based action recognition methods @cite_84 @cite_50 @cite_81 @cite_69 @cite_72 @cite_10 receive the fully observed segmented videos as input (each sample contains one full action instance), and derive a class label. The proposed skeleton-based online action prediction method takes one step forward in dealing with numerous action instances occurring in the untrimmed sequences, for which the current ongoing action can be only partly observed. There are a limited number of skeleton-based action recognition methods @cite_46 for untrimmed online sequences. 
Different from these works, the proposed SSNet framework predicts the class label of the current ongoing action by utilizing its predicted observation ratio. | {
"cite_N": [
"@cite_69",
"@cite_84",
"@cite_72",
"@cite_81",
"@cite_50",
"@cite_46",
"@cite_10"
],
"mid": [
"2526041356",
"2048821851",
"2415469094",
"2563154851",
"2964134613",
"2016240821",
"2604321021"
],
"abstract": [
"Recently, Convolutional Neural Networks (ConvNets) have shown promising performances in many computer vision tasks, especially image-based recognition. How to effectively use ConvNets for video-based recognition is still an open problem. In this paper, we propose a compact, effective yet simple method to encode spatio-temporal information carried in 3D skeleton sequences into multiple 2D images, referred to as Joint Trajectory Maps (JTM), and ConvNets are adopted to exploit the discriminative features for real-time human action recognition. The proposed method has been evaluated on three public benchmarks, i.e., MSRC-12 Kinect gesture dataset (MSRC-12), G3D dataset and UTD multimodal human action dataset (UTD-MHAD) and achieved the state-of-the-art results.",
"Recently introduced cost-effective depth sensors coupled with the real-time skeleton estimation algorithm of [16] have generated a renewed interest in skeleton-based human action recognition. Most of the existing skeleton-based approaches use either the joint locations or the joint angles to represent a human skeleton. In this paper, we propose a new skeletal representation that explicitly models the 3D geometric relationships between various body parts using rotations and translations in 3D space. Since 3D rigid body motions are members of the special Euclidean group SE(3), the proposed skeletal representation lies in the Lie group SE(3)×…×SE(3), which is a curved manifold. Using the proposed representation, human actions can be modeled as curves in this Lie group. Since classification of curves in this Lie group is not an easy task, we map the action curves from the Lie group to its Lie algebra, which is a vector space. We then perform classification using a combination of dynamic time warping, Fourier temporal pyramid representation and linear SVM. Experimental results on three action datasets show that the proposed representation performs better than many existing skeletal representations. The proposed approach also outperforms various state-of-the-art skeleton-based human action recognition approaches.",
"Temporal dynamics of postures over time is crucial for sequence-based action recognition. Human actions can be represented by the corresponding motions of articulated skeleton. Most of the existing approaches for skeleton based action recognition model the spatial-temporal evolution of actions based on hand-crafted features. As a kind of hierarchically adaptive filter banks, Convolutional Neural Network (CNN) performs well in representation learning. In this paper, we propose an end-to-end hierarchical architecture for skeleton based action recognition with CNN. Firstly, we represent a skeleton sequence as a matrix by concatenating the joint coordinates in each instant and arranging those vector representations in a chronological order. Then the matrix is quantified into an image and normalized to handle the variable-length problem. The final image is fed into a CNN model for feature extraction and recognition. For the specific structure of such images, the simple max-pooling plays an important role on spatial feature selection as well as temporal frequency adjustment, which can obtain more discriminative joint information for different actions and meanwhile address the variable-frequency problem. Experimental results demonstrate that our method achieves the state-of-the-art performance with high computational efficiency, especially surpassing the existing result by more than 15 percentage points on the challenging ChaLearn gesture recognition dataset.",
"In recent years, skeleton-based action recognition has become a popular 3D classification problem. State-of-the-art methods typically first represent each motion sequence as a high-dimensional trajectory on a Lie group with an additional dynamic time warping, and then shallowly learn favorable Lie group features. In this paper we incorporate the Lie group structure into a deep network architecture to learn more appropriate Lie group features for 3D action recognition. Within the network structure, we design rotation mapping layers to transform the input Lie group features into desirable ones, which are aligned better in the temporal domain. To reduce the high feature dimensionality, the architecture is equipped with rotation pooling layers for the elements on the Lie group. Furthermore, we propose a logarithm mapping layer to map the resulting manifold data into a tangent space that facilitates the application of regular output layers for the final classification. Evaluations of the proposed network for standard 3D human action recognition datasets clearly demonstrate its superiority over existing shallow Lie group feature learning methods as well as most conventional deep learning methods.",
"Recent approaches in depth-based human activity analysis achieved outstanding performance and proved the effectiveness of 3D representation for classification of action classes. Currently available depth-based and RGB+D-based action recognition benchmarks have a number of limitations, including the lack of training samples, distinct class labels, camera views and variety of subjects. In this paper we introduce a large-scale dataset for RGB+D human action recognition with more than 56 thousand video samples and 4 million frames, collected from 40 distinct subjects. Our dataset contains 60 different action classes including daily, mutual, and health-related actions. In addition, we propose a new recurrent neural network structure to model the long-term temporal correlation of the features for each body part, and utilize them for better action classification. Experimental results show the advantages of applying deep learning methods over state-of-the-art handcrafted features on the suggested cross-subject and cross-view evaluation criteria for our dataset. The introduction of this large scale dataset will enable the community to apply, develop and adapt various data-hungry learning techniques for the task of depth-based and RGB+D-based human activity analysis.",
"In this paper a novel evaluation framework for measuring the performance of real-time action recognition methods is presented. The evaluation framework will extend the time-based event detection metric to model multiple distinct action classes. The proposed metric provides more accurate indications of the performance of action recognition algorithms for games and other similar applications since it takes into consideration restrictions related to time and consecutive repetitions. Furthermore, a new dataset, G3D for real-time action recognition in gaming containing synchronised video, depth and skeleton data is provided. Our results indicate the need of an advanced metric especially designed for games and other similar real-time applications.",
"This paper presents a new method for 3D action recognition with skeleton sequences (i.e., 3D trajectories of human skeleton joints). The proposed method first transforms each skeleton sequence into three clips each consisting of several frames for spatial temporal feature learning using deep neural networks. Each clip is generated from one channel of the cylindrical coordinates of the skeleton sequence. Each frame of the generated clips represents the temporal information of the entire skeleton sequence, and incorporates one particular spatial relationship between the joints. The entire clips include multiple frames with different spatial relationships, which provide useful spatial structural information of the human skeleton. We propose to use deep convolutional neural networks to learn long-term temporal information of the skeleton sequence from the frames of the generated clips, and then use a Multi-Task Learning Network (MTLN) to jointly process all frames of the clips in parallel to incorporate spatial structural information for action recognition. Experimental results clearly show the effectiveness of the proposed new representation and feature learning method for 3D action recognition."
]
} |
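Several abstracts in the row above describe encoding a skeleton sequence as a 2D image (concatenating per-frame joint coordinates into a matrix, then quantizing it) before feeding a ConvNet. A minimal sketch of that encoding step, assuming a simple min-max quantization to 0..255; the function name and layout are illustrative, not any one paper's exact scheme:

```python
def skeleton_to_image(frames):
    """frames: T frames, each a list of (x, y, z) joint tuples.
    Returns a 2D list: rows index joint coordinates, columns index time,
    with values min-max quantized to the 0..255 range of a grayscale image."""
    # One column per frame: the frame's joints flattened into a vector.
    columns = [[c for joint in frame for c in joint] for frame in frames]
    flat = [v for col in columns for v in col]
    lo, hi = min(flat), max(flat)
    scale = 255.0 / (hi - lo) if hi > lo else 0.0
    return [
        [int(round((columns[t][r] - lo) * scale)) for t in range(len(columns))]
        for r in range(len(columns[0]))
    ]
```

A ConvNet then treats this matrix as an ordinary image; the papers above differ mainly in how joints are ordered, colored, or mapped to trajectories before this step.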
1902.03084 | 2950036174 | Action prediction is to recognize the class label of an ongoing activity when only a part of it is observed. In this paper, we focus on online action prediction in streaming 3D skeleton sequences. A dilated convolutional network is introduced to model the motion dynamics in temporal dimension via a sliding window over the temporal axis. Since there are significant temporal scale variations in the observed part of the ongoing action at different time steps, a novel window scale selection method is proposed to make our network focus on the performed part of the ongoing action and try to suppress the possible incoming interference from the previous actions at each step. An activation sharing scheme is also proposed to handle the overlapping computations among the adjacent time steps, which enables our framework to run more efficiently. Moreover, to enhance the performance of our framework for action prediction with the skeletal input data, a hierarchy of dilated tree convolutions are also designed to learn the multi-level structured semantic representations over the skeleton joints at each frame. Our proposed approach is evaluated on four challenging datasets. The extensive experiments demonstrate the effectiveness of our method for skeleton-based online action prediction. | Predicting (recognizing) an action before it gets fully performed has attracted a lot of research attention recently @cite_64 @cite_82 @cite_30 @cite_41 @cite_85 @cite_65 @cite_12. | {
"cite_N": [
"@cite_30",
"@cite_64",
"@cite_41",
"@cite_85",
"@cite_65",
"@cite_12",
"@cite_82"
],
"mid": [
"2118527252",
"66452226",
"",
"2093655440",
"2548211577",
"2623040213",
"2147615062"
],
"abstract": [
"Recognizing human activities in partially observed videos is a challenging problem and has many practical applications. When the unobserved subsequence is at the end of the video, the problem is reduced to activity prediction from unfinished activity streaming, which has been studied by many researchers. However, in the general case, an unobserved subsequence may occur at any time by yielding a temporal gap in the video. In this paper, we propose a new method that can recognize human activities from partially observed videos in the general case. Specifically, we formulate the problem into a probabilistic framework: 1) dividing each activity into multiple ordered temporal segments, 2) using spatiotemporal features of the training video samples in each segment as bases and applying sparse coding (SC) to derive the activity likelihood of the test video sample at each segment, and 3) finally combining the likelihood at each segment to achieve a global posterior for the activities. We further extend the proposed method to include more bases that correspond to a mixture of segments with different temporal lengths (MSSC), which can better represent the activities with large intra-class variations. We evaluate the proposed methods (SC and MSSC) on various real videos. We also evaluate the proposed methods on two special cases: 1) activity prediction where the unobserved subsequence is at the end of the video, and 2) human activity recognition on fully observed videos. Experimental results show that the proposed methods outperform existing state-of-the-art comparison methods.",
"The speed with which intelligent systems can react to an action depends on how soon it can be recognized. The ability to recognize ongoing actions is critical in many applications, for example, spotting criminal activity. It is challenging, since decisions have to be made based on partial videos of temporally incomplete action executions. In this paper, we propose a novel discriminative multi-scale model for predicting the action class from a partially observed video. The proposed model captures temporal dynamics of human actions by explicitly considering all the history of observed features as well as features in smaller temporal segments. We develop a new learning formulation, which elegantly captures the temporal evolution over time, and enforces the label consistency between segments and corresponding partial videos. Experimental results on two public datasets show that the proposed approach outperforms state-of-the-art action prediction methods.",
"",
"Early prediction of ongoing human activity has become more valuable in a large variety of time-critical applications. To build an effective representation for prediction, human activities can be characterized by a complex temporal composition of constituent simple actions and interacting objects. Different from early detection on short-duration simple actions, we propose a novel framework for long -duration complex activity prediction by discovering three key aspects of activity: Causality, Context-cue, and Predictability. The major contributions of our work include: (1) a general framework is proposed to systematically address the problem of complex activity prediction by mining temporal sequence patterns; (2) probabilistic suffix tree (PST) is introduced to model causal relationships between constituent actions, where both large and small order Markov dependencies between action units are captured; (3) the context-cue, especially interactive objects information, is modeled through sequential pattern mining (SPM), where a series of action and object co-occurrence are encoded as a complex symbolic sequence; (4) we also present a predictive accumulative function (PAF) to depict the predictability of each kind of activity. The effectiveness of our approach is evaluated on two experimental scenarios with two data sets for each: action-only prediction and context-aware prediction. Our method achieves superior performance for predicting global activity classes and local action units.",
"Interaction prediction has a wide range of applications such as robot controlling and prevention of dangerous events. In this paper, we introduce a new method to capture deep temporal information in videos for human interaction prediction. We propose to use flow coding images to represent the low-level motion information in videos and extract deep temporal features using a deep convolutional neural network architecture. We tested our method on the UT-Interaction dataset and the challenging TV human interaction dataset, and demonstrated the advantages of the proposed deep temporal features based on flow coding images. The proposed method, though using only the temporal information, outperforms the state of the art methods for human interaction prediction.",
"Predicting an interaction before it is fully executed is very important in applications, such as human-robot interaction and video surveillance. In a two-human interaction scenario, there are often contextual dependency structures between the global interaction context of the two humans and the local context of the different body parts of each human. In this paper, we propose to learn the structure of the interaction contexts and combine it with the spatial and temporal information of a video sequence to better predict the interaction class. The structural models, including the spatial and the temporal models, are learned with long short term memory (LSTM) networks to capture the dependency of the global and local contexts of each RGB frame and each optical flow image, respectively. LSTM networks are also capable of detecting the key information from global and local interaction contexts. Moreover, to effectively combine the structural models with the spatial and temporal models for interaction prediction, a ranking score fusion method is introduced to automatically compute the optimal weight of each model for score fusion. Experimental results on the BIT-Interaction Dataset and the UT-Interaction Dataset clearly demonstrate the benefits of the proposed method.",
"In this paper, we present a novel approach of human activity prediction. Human activity prediction is a probabilistic process of inferring ongoing activities from videos only containing onsets (i.e. the beginning part) of the activities. The goal is to enable early recognition of unfinished activities as opposed to the after-the-fact classification of completed activities. Activity prediction methodologies are particularly necessary for surveillance systems which are required to prevent crimes and dangerous activities from occurring. We probabilistically formulate the activity prediction problem, and introduce new methodologies designed for the prediction. We represent an activity as an integral histogram of spatio-temporal features, efficiently modeling how feature distributions change over time. The new recognition methodology named dynamic bag-of-words is developed, which considers sequential nature of human activities while maintaining advantages of the bag-of-words to handle noisy observations. Our experiments confirm that our approach reliably recognizes ongoing activities from streaming videos with a high accuracy."
]
} |
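The paper's abstract above models motion dynamics with a dilated convolutional network sliding over the temporal axis. A minimal pure-Python sketch of the underlying operation, a causal dilated 1D convolution; the zero padding on the left and the function name are my assumptions, not the paper's implementation:

```python
def dilated_causal_conv1d(seq, kernel, dilation=1):
    """seq: list of floats (one feature channel over time); kernel: weights.
    Output[t] depends only on the current and past steps spaced `dilation`
    apart, so the filter can run online as frames stream in."""
    k = len(kernel)
    out = []
    for t in range(len(seq)):
        acc = 0.0
        for i, w in enumerate(kernel):
            # Tap i reaches back (k - 1 - i) * dilation steps;
            # taps before the start of the sequence read zero.
            idx = t - (k - 1 - i) * dilation
            if idx >= 0:
                acc += w * seq[idx]
        out.append(acc)
    return out
```

Stacking such layers with growing dilation widens the receptive field exponentially while each output step still looks only at current and past frames, which is what makes the operation usable for online prediction.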
1902.03084 | 2950036174 | Action prediction is to recognize the class label of an ongoing activity when only a part of it is observed. In this paper, we focus on online action prediction in streaming 3D skeleton sequences. A dilated convolutional network is introduced to model the motion dynamics in temporal dimension via a sliding window over the temporal axis. Since there are significant temporal scale variations in the observed part of the ongoing action at different time steps, a novel window scale selection method is proposed to make our network focus on the performed part of the ongoing action and try to suppress the possible incoming interference from the previous actions at each step. An activation sharing scheme is also proposed to handle the overlapping computations among the adjacent time steps, which enables our framework to run more efficiently. Moreover, to enhance the performance of our framework for action prediction with the skeletal input data, a hierarchy of dilated tree convolutions are also designed to learn the multi-level structured semantic representations over the skeleton joints at each frame. Our proposed approach is evaluated on four challenging datasets. The extensive experiments demonstrate the effectiveness of our method for skeleton-based online action prediction. | Cao et al. @cite_30 formulated the prediction task as a posterior-maximization problem, and applied sparse coding for action prediction. Ryoo et al. @cite_82 represented each action as an integral histogram of spatio-temporal features. They also developed a recognition methodology called dynamic bag-of-words (DBoW) for activity prediction. Li et al. @cite_86 designed a predictive accumulative function. In their method, the human activities are represented as a temporal composition of constituent actionlets. Kong et al. @cite_64 proposed a discriminative multi-scale model for early action recognition. Ke et al. @cite_65 extracted deep features in optical flow images for activity prediction.
| {
"cite_N": [
"@cite_30",
"@cite_64",
"@cite_65",
"@cite_86",
"@cite_82"
],
"mid": [
"2118527252",
"66452226",
"2548211577",
"199115908",
"2147615062"
],
"abstract": [
"Recognizing human activities in partially observed videos is a challenging problem and has many practical applications. When the unobserved subsequence is at the end of the video, the problem is reduced to activity prediction from unfinished activity streaming, which has been studied by many researchers. However, in the general case, an unobserved subsequence may occur at any time by yielding a temporal gap in the video. In this paper, we propose a new method that can recognize human activities from partially observed videos in the general case. Specifically, we formulate the problem into a probabilistic framework: 1) dividing each activity into multiple ordered temporal segments, 2) using spatiotemporal features of the training video samples in each segment as bases and applying sparse coding (SC) to derive the activity likelihood of the test video sample at each segment, and 3) finally combining the likelihood at each segment to achieve a global posterior for the activities. We further extend the proposed method to include more bases that correspond to a mixture of segments with different temporal lengths (MSSC), which can better represent the activities with large intra-class variations. We evaluate the proposed methods (SC and MSSC) on various real videos. We also evaluate the proposed methods on two special cases: 1) activity prediction where the unobserved subsequence is at the end of the video, and 2) human activity recognition on fully observed videos. Experimental results show that the proposed methods outperform existing state-of-the-art comparison methods.",
"The speed with which intelligent systems can react to an action depends on how soon it can be recognized. The ability to recognize ongoing actions is critical in many applications, for example, spotting criminal activity. It is challenging, since decisions have to be made based on partial videos of temporally incomplete action executions. In this paper, we propose a novel discriminative multi-scale model for predicting the action class from a partially observed video. The proposed model captures temporal dynamics of human actions by explicitly considering all the history of observed features as well as features in smaller temporal segments. We develop a new learning formulation, which elegantly captures the temporal evolution over time, and enforces the label consistency between segments and corresponding partial videos. Experimental results on two public datasets show that the proposed approach outperforms state-of-the-art action prediction methods.",
"Interaction prediction has a wide range of applications such as robot controlling and prevention of dangerous events. In this paper, we introduce a new method to capture deep temporal information in videos for human interaction prediction. We propose to use flow coding images to represent the low-level motion information in videos and extract deep temporal features using a deep convolutional neural network architecture. We tested our method on the UT-Interaction dataset and the challenging TV human interaction dataset, and demonstrated the advantages of the proposed deep temporal features based on flow coding images. The proposed method, though using only the temporal information, outperforms the state of the art methods for human interaction prediction.",
"Early prediction of ongoing activity has been more and more valuable in a large variety of time-critical applications. To build an effective representation for prediction, human activities can be characterized by a complex temporal composition of constituent simple actions. Different from early recognition on short-duration simple activities, we propose a novel framework for long-duration complex activity prediction by discovering the causal relationships between constituent actions and the predictable characteristics of activities. The major contributions of our work include: (1) we propose a novel activity decomposition method by monitoring motion velocity which encodes a temporal decomposition of long activities into a sequence of meaningful action units; (2) Probabilistic Suffix Tree (PST) is introduced to represent both large and small order Markov dependencies between action units; (3) we present a Predictive Accumulative Function (PAF) to depict the predictability of each kind of activity. The effectiveness of the proposed method is evaluated on two experimental scenarios: activities with middle-level complexity and activities with high-level complexity. Our method achieves promising results and can predict global activity classes and local action units.",
"In this paper, we present a novel approach of human activity prediction. Human activity prediction is a probabilistic process of inferring ongoing activities from videos only containing onsets (i.e. the beginning part) of the activities. The goal is to enable early recognition of unfinished activities as opposed to the after-the-fact classification of completed activities. Activity prediction methodologies are particularly necessary for surveillance systems which are required to prevent crimes and dangerous activities from occurring. We probabilistically formulate the activity prediction problem, and introduce new methodologies designed for the prediction. We represent an activity as an integral histogram of spatio-temporal features, efficiently modeling how feature distributions change over time. The new recognition methodology named dynamic bag-of-words is developed, which considers sequential nature of human activities while maintaining advantages of the bag-of-words to handle noisy observations. Our experiments confirm that our approach reliably recognizes ongoing activities from streaming videos with a high accuracy."
]
} |
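Ryoo's integral-histogram representation cited above keeps, at every time step, the histogram of all features observed so far, so the distribution over any observed prefix is available without rescanning frames. A minimal sketch under that reading (pure running sums; the function name is illustrative):

```python
def integral_histograms(frame_hists):
    """frame_hists: one feature histogram (list of counts) per frame.
    Returns a list where entry t is the histogram accumulated over
    frames 0..t, i.e. an integral histogram along the temporal axis."""
    cum, total = [], [0] * len(frame_hists[0])
    for h in frame_hists:
        total = [a + b for a, b in zip(total, h)]
        cum.append(list(total))  # snapshot of the running totals
    return cum
```

A prediction method can then compare the prefix histogram at a given observation ratio against per-class models of how feature distributions evolve over time.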
1902.03084 | 2950036174 | Action prediction is to recognize the class label of an ongoing activity when only a part of it is observed. In this paper, we focus on online action prediction in streaming 3D skeleton sequences. A dilated convolutional network is introduced to model the motion dynamics in temporal dimension via a sliding window over the temporal axis. Since there are significant temporal scale variations in the observed part of the ongoing action at different time steps, a novel window scale selection method is proposed to make our network focus on the performed part of the ongoing action and try to suppress the possible incoming interference from the previous actions at each step. An activation sharing scheme is also proposed to handle the overlapping computations among the adjacent time steps, which enables our framework to run more efficiently. Moreover, to enhance the performance of our framework for action prediction with the skeletal input data, a hierarchy of dilated tree convolutions are also designed to learn the multi-level structured semantic representations over the skeleton joints at each frame. Our proposed approach is evaluated on four challenging datasets. The extensive experiments demonstrate the effectiveness of our method for skeleton-based online action prediction. | Hu et al. @cite_21 explored incorporating 3D skeleton information for real-time action prediction in segmented sequences, i.e., each sequence includes only one action. They introduced a soft regression strategy for action prediction. An accumulative frame feature was also designed to make their method work efficiently. However, their framework is not suitable for online action prediction in the continuous skeleton sequence that contains multiple action instances. | {
"cite_N": [
"@cite_21"
],
"mid": [
"2519711346"
],
"abstract": [
"In this paper, we propose a novel approach for predicting ongoing activities captured by a low-cost depth camera. Our approach avoids a usual assumption in existing activity prediction systems that the progress level of ongoing sequence is given. We overcome this limitation by learning a soft label for each subsequence and develop a soft regression framework for activity prediction to learn both predictor and soft labels jointly. In order to make activity prediction work in a real-time manner, we introduce a new RGB-D feature called “local accumulative frame feature (LAFF)”, which can be computed efficiently by constructing an integral feature map. Our experiments on two RGB-D benchmark datasets demonstrate that the proposed regression-based activity prediction model outperforms existing models significantly and also show that the activity prediction on RGB-D sequence is more accurate than that on RGB channel."
]
} |
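Hu et al.'s method above computes an accumulative frame feature efficiently through an integral feature map. A loose sketch of the accumulation idea, a running sum that yields the mean feature of all frames seen so far at constant cost per dimension; the class name and interface are illustrative assumptions, not the paper's LAFF definition:

```python
class AccumulativeFeature:
    """Maintain a running sum of per-frame feature vectors so the average
    feature over every frame observed so far is available in O(dim) per
    update, loosely mirroring an accumulative frame feature."""
    def __init__(self, dim):
        self.sum = [0.0] * dim
        self.count = 0

    def update(self, frame_feature):
        self.sum = [s + f for s, f in zip(self.sum, frame_feature)]
        self.count += 1
        # Current prediction feature: mean over all frames seen so far.
        return [s / self.count for s in self.sum]
```

Because the feature of the first t frames is derived from running sums, each new frame costs constant work per dimension, which matches the real-time requirement stressed in the abstract.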
1902.03084 | 2950036174 | Action prediction is to recognize the class label of an ongoing activity when only a part of it is observed. In this paper, we focus on online action prediction in streaming 3D skeleton sequences. A dilated convolutional network is introduced to model the motion dynamics in temporal dimension via a sliding window over the temporal axis. Since there are significant temporal scale variations in the observed part of the ongoing action at different time steps, a novel window scale selection method is proposed to make our network focus on the performed part of the ongoing action and try to suppress the possible incoming interference from the previous actions at each step. An activation sharing scheme is also proposed to handle the overlapping computations among the adjacent time steps, which enables our framework to run more efficiently. Moreover, to enhance the performance of our framework for action prediction with the skeletal input data, a hierarchy of dilated tree convolutions are also designed to learn the multi-level structured semantic representations over the skeleton joints at each frame. Our proposed approach is evaluated on four challenging datasets. The extensive experiments demonstrate the effectiveness of our method for skeleton-based online action prediction. | Besides the online action prediction task, the problem of temporal action detection @cite_37 @cite_17 @cite_43 @cite_26 @cite_57 @cite_27 @cite_25 @cite_3 @cite_59 @cite_32 @cite_40 @cite_83 also deals with untrimmed videos. Several methods attempted online detection @cite_49, while most action detection approaches are designed for the offline mode, conducting detection only after the whole long sequence has been observed @cite_26 @cite_66 @cite_74. | {
"cite_N": [
"@cite_37",
"@cite_26",
"@cite_32",
"@cite_3",
"@cite_66",
"@cite_57",
"@cite_43",
"@cite_27",
"@cite_40",
"@cite_83",
"@cite_59",
"@cite_49",
"@cite_74",
"@cite_25",
"@cite_17"
],
"mid": [
"2146048167",
"2187772033",
"2964107628",
"2963720581",
"2609017925",
"2550143307",
"2597958930",
"2964008341",
"2604113307",
"2964216549",
"2593722617",
"2341313195",
"2016208906",
"2031765333",
"2963321993"
],
"abstract": [
"Action recognition has often been posed as a classification problem, which assumes that a video sequence only has one action class label and different actions are independent. However, a single human body can perform multiple concurrent actions at the same time, and different actions interact with each other. This paper proposes a concurrent action detection model where the action detection is formulated as a structural prediction problem. In this model, an interval in a video sequence can be described by multiple action labels. A detected action interval is determined both by the unary local detector and the relations with other actions. We use a wavelet feature to represent the action sequence, and design a composite temporal logic descriptor to describe the action relations. The model parameters are trained by structural SVM learning. Given a long video sequence, a sequential decision window search algorithm is designed to detect the actions. Experiments on our newly collected concurrent action dataset demonstrate the strength of our method.",
"We describe the submission of the INRIA LEAR team to the THUMOS workshop in conjunction with ECCV 2014. Our system is based on Fisher vector (FV) encoding of dense trajectory features (DTF), which we also used in our 2013 submission. This year's submission additionally incorporates static-image features (SIFT, Color, and CNN) and audio features (ASR and MFCC) for the classification task. For the detection task, we combine scores from the classification task with FV-DTF features extracted from video slices. We found that these additional visual and audio features significantly improve the classification results. For localization we found that using the classification scores as a contextual feature besides local motion features leads to significant improvements.",
"",
"",
"Action recognition from well-segmented 3D skeleton video has been intensively studied. However, due to the difficulty in representing the 3D skeleton video and the lack of training data, action detection from streaming 3D skeleton video still lags far behind its recognition counterpart and image based object detection. In this paper, we propose a novel approach for this problem, which leverages both effective skeleton video encoding and deep regression based object detection from images. Our framework consists of two parts: skeleton-based video image mapping, which encodes a skeleton video to a color image in a temporal preserving way, and an end-to-end trainable fast skeleton action detector (Skeleton Boxes) based on image detection. Experimental results on the latest and largest PKU-MMD benchmark dataset demonstrate that our method outperforms the state-of-the-art methods with a large margin. We believe our idea would inspire and benefit future research in this important area.",
"The ability to identify and temporally segment fine-grained human actions throughout a video is crucial for robotics, surveillance, education, and beyond. Typical approaches decouple this problem by first extracting local spatiotemporal features from video frames and then feeding them into a temporal classifier that captures high-level temporal patterns. We describe a class of temporal models, which we call Temporal Convolutional Networks (TCNs), that use a hierarchy of temporal convolutions to perform fine-grained action segmentation or detection. Our Encoder-Decoder TCN uses pooling and upsampling to efficiently capture long-range temporal patterns whereas our Dilated TCN uses dilated convolutions. We show that TCNs are capable of capturing action compositions, segment durations, and long-range dependencies, and are over a magnitude faster to train than competing LSTM-based Recurrent Neural Networks. We apply these models to three challenging fine-grained datasets and show large improvements over the state of the art.",
"Temporal Action Proposal (TAP) generation is an important problem, as fast and accurate extraction of semantically important (e.g. human actions) segments from untrimmed videos is an important step for large-scale video analysis. We propose a novel Temporal Unit Regression Network (TURN) model. There are two salient aspects of TURN: (1) TURN jointly predicts action proposals and refines the temporal boundaries by temporal coordinate regression: (2) Fast computation is enabled by unit feature reuse: a long untrimmed video is decomposed into video units, which are reused as basic building blocks of temporal proposals. TURN outperforms the previous state-of-the-art methods under average recall (AR) by a large margin on THUMOS-14 and ActivityNet datasets, and runs at over 880 frames per second (FPS) on a TITAN X GPU. We further apply TURN as a proposal generation stage for existing temporal action localization pipelines, it outperforms state-of-the-art performance on THUMOS-14 and ActivityNet.",
"We present a Temporal Context Network (TCN) for precise temporal localization of human activities. Similar to the Faster-RCNN architecture, proposals are placed at equal intervals in a video which span multiple temporal scales. We propose a novel representation for ranking these proposals. Since pooling features only inside a segment is not sufficient to predict activity boundaries, we construct a representation which explicitly captures context around a proposal for ranking it. For each temporal segment inside a proposal, features are uniformly sampled at a pair of scales and are input to a temporal convolutional neural network for classification. After ranking proposals, non-maximum suppression is applied and classification is performed to obtain final detections. TCN outperforms state-of-the-art methods on the ActivityNet dataset and the THUMOS14 dataset.",
"Current action recognition methods heavily rely on trimmed videos for model training. However, it is expensive and time-consuming to acquire a large-scale trimmed video dataset. This paper presents a new weakly supervised architecture, called UntrimmedNet, which is able to directly learn action recognition models from untrimmed videos without the requirement of temporal annotations of action instances. Our UntrimmedNet couples two important components, the classification module and the selection module, to learn the action models and reason about the temporal duration of action instances, respectively. These two components are implemented with feed-forward networks, and UntrimmedNet is therefore an end-to-end trainable architecture. We exploit the learned models for action recognition (WSR) and detection (WSD) on the untrimmed video datasets of THUMOS14 and ActivityNet. Although our UntrimmedNet only employs weak supervision, our method achieves performance superior or comparable to that of those strongly supervised approaches on these two datasets.",
"Detecting actions in untrimmed videos is an important yet challenging task. In this paper, we present the structured segment network (SSN), a novel framework which models the temporal structure of each action instance via a structured temporal pyramid. On top of the pyramid, we further introduce a decomposed discriminative model comprising two classifiers, respectively for classifying actions and determining completeness. This allows the framework to effectively distinguish positive proposals from background or incomplete ones, thus leading to both accurate recognition and localization. These components are integrated into a unified network that can be efficiently trained in an end-to-end fashion. Additionally, a simple yet effective temporal action proposal scheme, dubbed temporal actionness grouping (TAG) is devised to generate high quality action proposals. On two challenging benchmarks, THUMOS14 and ActivityNet, our method remarkably outperforms previous state-of-the-art methods, demonstrating superior accuracy and strong adaptivity in handling actions with various temporal structures.",
"Temporal action localization is an important yet challenging problem. Given a long, untrimmed video consisting of multiple action instances and complex background contents, we need not only to recognize their action categories, but also to localize the start time and end time of each instance. Many state-of-the-art systems use segment-level classifiers to select and rank proposal segments of pre-determined boundaries. However, a desirable model should move beyond segment-level and make dense predictions at a fine granularity in time to determine precise temporal boundaries. To this end, we design a novel Convolutional-De-Convolutional (CDC) network that places CDC filters on top of 3D ConvNets, which have been shown to be effective for abstracting action semantics but reduce the temporal length of the input data. The proposed CDC filter performs the required temporal upsampling and spatial downsampling operations simultaneously to predict actions at the frame-level granularity. It is unique in jointly modeling action semantics in space-time and fine-grained temporal dynamics. We train the CDC network in an end-to-end manner efficiently. Our model not only achieves superior performance in detecting actions in every frame, but also significantly boosts the precision of localizing temporal boundaries. Finally, the CDC network demonstrates a very high efficiency with the ability to process 500 frames per second on a single GPU server. Source code and trained models are available online at https://bitbucket.org/columbiadvmm/cdc.",
"Human action recognition from well-segmented 3D skeleton data has been intensively studied and has been attracting an increasing attention. Online action detection goes one step further and is more challenging, which identifies the action type and localizes the action positions on the fly from the untrimmed stream data. In this paper, we study the problem of online action detection from streaming skeleton data. We propose a multi-task end-to-end Joint Classification-Regression Recurrent Neural Network to better explore the action type and temporal localization information. By employing a joint classification and regression optimization objective, this network is capable of automatically localizing the start and end points of actions more accurately. Specifically, by leveraging the merits of the deep Long Short-Term Memory (LSTM) subnetwork, the proposed model automatically captures the complex long-range temporal dynamics, which naturally avoids the typical sliding window design and thus ensures high computational efficiency. Furthermore, the subtask of regression optimization provides the ability to forecast the action prior to its occurrence. To evaluate our proposed model, we build a large streaming video dataset with annotations. Experimental results on our dataset and the public G3D dataset both demonstrate very promising performance of our scheme.",
"The detection of human action in videos of busy natural scenes with dynamic background is of interest for applications such as video surveillance. Taking a conventional fully supervised approach, the spatio-temporal locations of the action of interest have to be manually annotated frame by frame in the training videos, which is tedious and unreliable. In this paper, for the first time, a weakly supervised action detection method is proposed which only requires binary labels of the videos indicating the presence of the action of interest. Given a training set of binary labelled videos, the weakly supervised learning (WSL) problem is recast as a multiple instance learning (MIL) problem. A novel MIL algorithm is developed which differs from the existing MIL algorithms in that it locates the action of interest spatially and temporally by globally optimising both inter- and intra-class distance. We demonstrate through experiments that our WSL approach can achieve comparable detection performance to a fully supervised learning approach, and that the proposed MIL algorithm significantly outperforms the existing ones.",
"In this paper we introduce a real-time system for action detection. The system uses a small set of robust features extracted from 3D skeleton data. Features are effectively described based on the probability distribution of skeleton data. The descriptor computes a pyramid of sample covariance matrices and mean vectors to encode the relationship between the features. For handling the intra-class variations of actions, such as action temporal scale variations, the descriptor is computed using different window scales for each action. Discriminative elements of the descriptor are mined using feature selection. The system achieves accurate detection results on difficult unsegmented sequences. Experiments on MSRC-12 and G3D datasets show that the proposed system outperforms the state-of-the-art in detection accuracy with very low latency. To the best of our knowledge, we are the first to propose using multi-scale description in action detection from 3D skeleton data.",
"In this work we introduce a fully end-to-end approach for action detection in videos that learns to directly predict the temporal bounds of actions. Our intuition is that the process of detecting actions is naturally one of observation and refinement: observing moments in video, and refining hypotheses about when an action is occurring. Based on this insight, we formulate our model as a recurrent neural network-based agent that interacts with a video over time. The agent observes video frames and decides both where to look next and when to emit a prediction. Since backpropagation is not adequate in this non-differentiable setting, we use REINFORCE to learn the agent's decision policy. Our model achieves state-of-the-art results on the THUMOS'14 and ActivityNet datasets while observing only a fraction (2% or less) of the video frames."
]
} |
1902.03084 | 2950036174 | Action prediction is to recognize the class label of an ongoing activity when only a part of it is observed. In this paper, we focus on online action prediction in streaming 3D skeleton sequences. A dilated convolutional network is introduced to model the motion dynamics in temporal dimension via a sliding window over the temporal axis. Since there are significant temporal scale variations in the observed part of the ongoing action at different time steps, a novel window scale selection method is proposed to make our network focus on the performed part of the ongoing action and try to suppress the possible incoming interference from the previous actions at each step. An activation sharing scheme is also proposed to handle the overlapping computations among the adjacent time steps, which enables our framework to run more efficiently. Moreover, to enhance the performance of our framework for action prediction with the skeletal input data, a hierarchy of dilated tree convolutions are also designed to learn the multi-level structured semantic representations over the skeleton joints at each frame. Our proposed approach is evaluated on four challenging datasets. The extensive experiments demonstrate the effectiveness of our method for skeleton-based online action prediction. | Our task is different from action detection, as action detection mainly addresses accurate spatio-temporal segmentation, while action prediction focuses more on predicting the class of the current ongoing action from its observed part so far, even when only a small ratio of it is performed. Sliding window-based designs @cite_0 @cite_25 @cite_51 @cite_35 and action proposals @cite_27 have been adopted for action detection. Zanfir et al. @cite_0 used a sliding window with one fixed scale (obtained by cross validation) for action detection. Shou et al. @cite_62 adopted multi-scale windows for action detection via multi-stage networks. | {
"cite_N": [
"@cite_35",
"@cite_62",
"@cite_0",
"@cite_27",
"@cite_51",
"@cite_25"
],
"mid": [
"2016776918",
"2964214371",
"1983592444",
"2964008341",
"2540565217",
"2031765333"
],
"abstract": [
"The need for early detection of temporal events from sequential data arises in a wide spectrum of applications ranging from human-robot interaction to video security. While temporal event detection has been extensively studied, early detection is a relatively unexplored problem. This paper proposes a maximum-margin framework for training temporal event detectors to recognize partial events, enabling early detection. Our method is based on Structured Output SVM, but extends it to accommodate sequential data. Experiments on datasets of varying complexity, for detecting facial expressions, hand gestures, and human activities, demonstrate the benefits of our approach.",
"We address temporal action localization in untrimmed long videos. This is important because videos in real applications are usually unconstrained and contain multiple action instances plus video content of background scenes or other activities. To address this challenging issue, we exploit the effectiveness of deep networks in temporal action localization via three segment-based 3D ConvNets: (1) a proposal network identifies candidate segments in a long video that may contain actions, (2) a classification network learns one-vs-all action classification model to serve as initialization for the localization network, and (3) a localization network fine-tunes the learned classification network to localize each action instance. We propose a novel loss function for the localization network to explicitly consider temporal overlap and achieve high temporal localization accuracy. In the end, only the proposal network and the localization network are used during prediction. On two large-scale benchmarks, our approach achieves significantly superior performances compared with other state-of-the-art systems: mAP increases from 1.7 to 7.4 on MEXaction2 and increases from 15.0 to 19.0 on THUMOS 2014.",
"Human action recognition under low observational latency is receiving a growing interest in computer vision due to rapidly developing technologies in human-robot interaction, computer gaming and surveillance. In this paper we propose a fast, simple, yet powerful non-parametric Moving Pose (MP) framework for low-latency human action and activity recognition. Central to our methodology is a moving pose descriptor that considers both pose information as well as differential quantities (speed and acceleration) of the human body joints within a short time window around the current frame. The proposed descriptor is used in conjunction with a modified kNN classifier that considers both the temporal location of a particular frame within the action sequence as well as the discrimination power of its moving pose descriptor compared to other frames in the training set. The resulting method is non-parametric and enables low-latency recognition, one-shot learning, and action detection in difficult unsegmented sequences. Moreover, the framework is real-time, scalable, and outperforms more sophisticated approaches on challenging benchmarks like MSR-Action3D or MSR-DailyActivities3D.",
"We present a Temporal Context Network (TCN) for precise temporal localization of human activities. Similar to the Faster-RCNN architecture, proposals are placed at equal intervals in a video which span multiple temporal scales. We propose a novel representation for ranking these proposals. Since pooling features only inside a segment is not sufficient to predict activity boundaries, we construct a representation which explicitly captures context around a proposal for ranking it. For each temporal segment inside a proposal, features are uniformly sampled at a pair of scales and are input to a temporal convolutional neural network for classification. After ranking proposals, non-maximum suppression is applied and classification is performed to obtain final detections. TCN outperforms state-of-the-art methods on the ActivityNet dataset and the THUMOS14 dataset.",
"Online action detection (OAD) is challenging since 1) robust yet computationally expensive features cannot be straightforwardly used due to the real-time processing requirements and 2) the localization and classification of actions have to be performed even before they are fully observed. We propose a new random forest (RF)-based online action detection framework that addresses these challenges. Our algorithm uses computationally efficient skeletal joint features. High accuracy is achieved by using robust convolutional neural network (CNN)-based features which are extracted from the raw RGBD images, plus the temporal relationships between the current frame of interest, and the past and future frames. While these high-quality features are not available in real-time testing scenario, we demonstrate that they can be effectively exploited in training RF classifiers: We use these spatio-temporal contexts to craft RF's new split functions improving RFs' leaf node statistics. Experiments with challenging MSRAction3D, G3D, and OAD datasets demonstrate that our algorithm significantly improves the accuracy over the state-of-the-art on-line action detection algorithms while achieving the real-time efficiency of existing skeleton-based RF classifiers.",
"In this paper we introduce a real-time system for action detection. The system uses a small set of robust features extracted from 3D skeleton data. Features are effectively described based on the probability distribution of skeleton data. The descriptor computes a pyramid of sample covariance matrices and mean vectors to encode the relationship between the features. For handling the intra-class variations of actions, such as action temporal scale variations, the descriptor is computed using different window scales for each action. Discriminative elements of the descriptor are mined using feature selection. The system achieves accurate detection results on difficult unsegmented sequences. Experiments on MSRC-12 and G3D datasets show that the proposed system outperforms the state-of-the-art in detection accuracy with very low latency. To the best of our knowledge, we are the first to propose using multi-scale description in action detection from 3D skeleton data."
]
} |
1902.03043 | 2919854899 | Automatic prediction of emotion promises to revolutionise human-computer interaction. Recent trends involve fusion of multiple modalities - audio, visual, and physiological - to classify emotional state. However, practical considerations 'in the wild' limit collection of this physiological data to commoditised heartbeat sensors. Furthermore, real-world applications often require some measure of uncertainty over model output. We present here an end-to-end deep learning model for classifying emotional valence from unimodal heartbeat data. We further propose a Bayesian framework for modelling uncertainty over valence predictions, and describe a procedure for tuning output according to varying demands on confidence. We benchmarked our framework against two established datasets within the field and achieved peak classification accuracy of 90 . These results lay the foundation for applications of affective computing in real-world domains such as healthcare, where a high premium is placed on non-invasive collection of data, and predictive certainty. | Physiological markers of autonomic nervous activity include galvanic skin response (GSR), electroencephalogram (EEG), electromyogram (EMG), respiration, skin temperature (ST), electrocardiogram (ECG) and photoplethysmogram (PPG) @cite_4 . Note that a heartbeat time series can easily be extracted from both ECG and PPG in the form of inter-beat-intervals (IBIs). Existing approaches for ED P typically pool a number of biosignals as multimodal input to classifier algorithms @cite_11 @cite_0 @cite_35 . However, this directly contrasts the near-unimodal nature of affordable wearable devices in use today. Comparatively few studies narrow their scope in accordance with these practical limitations. | {
"cite_N": [
"@cite_0",
"@cite_35",
"@cite_4",
"@cite_11"
],
"mid": [
"2150150736",
"",
"2146010402",
"2164368909"
],
"abstract": [
"Signals from peripheral physiology (e.g., ECG, EMG, and GSR) in conjunction with machine learning techniques can be used for the automatic detection of affective states. The affect detector can be user-independent, where it is expected to generalize to novel users, or user-dependent, where it is tailored to a specific user. Previous studies have reported some success in detecting affect from physiological signals, but much of the work has focused on induced affect or acted expressions instead of contextually constrained spontaneous expressions of affect. This study addresses these issues by developing and evaluating user-independent and user-dependent physiology-based detectors of nonbasic affective states (e.g., boredom, confusion, curiosity) that were trained and validated on naturalistic data collected during interactions between 27 students and AutoTutor, an intelligent tutoring system with conversational dialogues. There is also no consensus on which techniques (i.e., feature selection or classification methods) work best for this type of data. Therefore, this study also evaluates the efficacy of affect detection using a host of feature selection and classification techniques on three physiological signals (ECG, EMG, and GSR) and their combinations. Two feature selection methods and nine classifiers were applied to the problem of recognizing eight affective states (boredom, confusion, curiosity, delight, flow/engagement, surprise, and neutral). The results indicated that the user-independent modeling approach was not feasible; however, a mean kappa score of 0.25 was obtained for user-dependent models that discriminated among the most frequent emotions. The results also indicated that k-nearest neighbor and Linear Bayes Normal Classifier (LBNC) classifiers yielded the best affect detection rates. Single channel ECG, EMG, and GSR and three-channel multimodal models were generally more diagnostic than two-channel models.",
"",
"Recent research in the field of Human Computer Interaction aims at recognizing the user's emotional state in order to provide a smooth interface between humans and computers. This would make life easier and can be used in vast applications involving areas such as education, medicine etc. Human emotions can be recognized by several approaches such as gesture, facial images, physiological signals and neuro imaging methods. Most of the researchers have developed user dependent emotion recognition system and achieved maximum classification rate. Very few researchers have tried to develop a user independent system and obtained lower classification rate. Efficient emotion stimulus method, larger data samples and intelligent signal processing techniques are essential for improving the classification rate of the user independent system. In this paper, we present a review on emotion recognition using physiological signals. The various theories on emotion, emotion recognition methodology and the current advancements in emotion research are discussed in subsequent topics. This would provide an insight on the current state of research and its challenges on emotion recognition using physiological signals, so that research can be advanced to obtain better recognition.",
"Little attention has been paid so far to physiological signals for emotion recognition compared to audiovisual emotion channels such as facial expression or speech. This paper investigates the potential of physiological signals as reliable channels for emotion recognition. All essential stages of an automatic recognition system are discussed, from the recording of a physiological data set to a feature-based multiclass classification. In order to collect a physiological data set from multiple subjects over many weeks, we used a musical induction method that spontaneously leads subjects to real emotional states, without any deliberate laboratory setting. Four-channel biosensors were used to measure electromyogram, electrocardiogram, skin conductivity, and respiration changes. A wide range of physiological features from various analysis domains, including time/frequency, entropy, geometric analysis, subband spectra, multiscale entropy, etc., is proposed in order to find the best emotion-relevant features and to correlate them with emotional states. The best features extracted are specified in detail and their effectiveness is proven by classification results. Classification of four musical emotions (positive high arousal, negative high arousal, negative low arousal, and positive low arousal) is performed by using an extended linear discriminant analysis (pLDA). Furthermore, by exploiting a dichotomic property of the 2D emotion model, we develop a novel scheme of emotion-specific multilevel dichotomous classification (EMDC) and compare its performance with direct multiclass classification using the pLDA. An improved recognition accuracy of 95 percent and 70 percent for subject-dependent and subject-independent classification, respectively, is achieved by using the EMDC scheme."
]
} |
1902.02907 | 2949542074 | This paper motivates and develops source traces for temporal difference (TD) learning in the tabular setting. Source traces are like eligibility traces, but model potential histories rather than immediate ones. This allows TD errors to be propagated to potential causal states and leads to faster generalization. Source traces can be thought of as the model-based, backward view of successor representations (SR), and share many of the same benefits. This view, however, suggests several new ideas. First, a TD( @math )-like source learning algorithm is proposed and its convergence is proven. Then, a novel algorithm for learning the source map (or SR matrix) is developed and shown to outperform the previous algorithm. Finally, various approaches to using the source SR model are explored, and it is shown that source traces can be effectively combined with other model-based methods like Dyna and experience replay. | Much like the backward and forward views of TD( @math ) offer different insights @cite_4 , so too do the backward view (source traces) and the forward view (SR) here. For instance, the forward view suggests the following useful characterization: we can think of @math as the solution @math of a derivative MRP with the same dynamics as the underlying MRP, but with zero reward everywhere state @math , where the reward is 1. This suggests that source traces may be found by TD learning as the solutions to a set of derivative MRPs (one for each state). This algorithm was proposed by dayan1993improving dayan1993improving . By contrast, the backward view prompts an alternative algorithm based on TD( @math ) that offers a faster initial learning curve at the price of increased variance. We combine the two algorithms in Section to get the best of both worlds. This shift in perspective produces several other novel ideas, which we explore below. | {
"cite_N": [
"@cite_4"
],
"mid": [
"2121863487"
],
"abstract": [
"Reinforcement learning, one of the most active research areas in artificial intelligence, is a computational approach to learning whereby an agent tries to maximize the total amount of reward it receives when interacting with a complex, uncertain environment. In Reinforcement Learning, Richard Sutton and Andrew Barto provide a clear and simple account of the key ideas and algorithms of reinforcement learning. Their discussion ranges from the history of the field's intellectual foundations to the most recent developments and applications. The only necessary mathematical background is familiarity with elementary concepts of probability. The book is divided into three parts. Part I defines the reinforcement learning problem in terms of Markov decision processes. Part II provides basic solution methods: dynamic programming, Monte Carlo methods, and temporal-difference learning. Part III presents a unified view of the solution methods and incorporates artificial neural networks, eligibility traces, and planning; the two final chapters present case studies and consider the future of reinforcement learning."
]
} |
1902.02907 | 2949542074 | This paper motivates and develops source traces for temporal difference (TD) learning in the tabular setting. Source traces are like eligibility traces, but model potential histories rather than immediate ones. This allows TD errors to be propagated to potential causal states and leads to faster generalization. Source traces can be thought of as the model-based, backward view of successor representations (SR), and share many of the same benefits. This view, however, suggests several new ideas. First, a TD( @math )-like source learning algorithm is proposed and its convergence is proven. Then, a novel algorithm for learning the source map (or SR matrix) is developed and shown to outperform the previous algorithm. Finally, various approaches to using the source SR model are explored, and it is shown that source traces can be effectively combined with other model-based methods like Dyna and experience replay. | Source learning is also related to Least-Squares TD (LSTD) methods @cite_0 . In the tabular case, LSTD models @math (i.e., @math ) and @math (or scalar multiples thereof), and computes @math as @math . Incremental LSTD (iLSTD) @cite_10 is similar, but learns @math incrementally, thus avoiding matrix inversion. Finally, recursive LSTD (RLS TD) @cite_0 computes @math as does LSTD, but does so by recursive least squares instead of matrix inversion. Of these algorithms, RLS TD---despite being an algorithmic trick---is closest in spirit to source learning; in the tabular case, equation 15 of bradtke1996linear and equation (below) have comparable structure. However, RLS TD maintains its estimate of @math precisely using @math time per step, whereas the TD Source algorithm presented learns @math and @math using @math time per step. As iLSTD has the same time complexity as source learning in the tabular case, we compare them empirically in Section . | {
"cite_N": [
"@cite_0",
"@cite_10"
],
"mid": [
"2072931156",
"86816279"
],
"abstract": [
"We introduce two new temporal difference (TD) algorithms based on the theory of linear least-squares function approximation. We define an algorithm we call Least-Squares TD (LS TD) for which we prove probability-one convergence when it is used with a function approximator linear in the adjustable parameters. We then define a recursive version of this algorithm, Recursive Least-Square TD (RLS TD). Although these new TD algorithms require more computation per time-step than do Sutton's TD(λ) algorithms, they are more efficient in a statistical sense because they extract more information from training experiences. We describe a simulation experiment showing the substantial improvement in learning rate achieved by RLS TD in an example Markov prediction problem. To quantify this improvement, we introduce the TD error variance of a Markov chain, σ_TD, and experimentally conclude that the convergence rate of a TD algorithm depends linearly on σ_TD. In addition to converging more rapidly, LS TD and RLS TD do not have control parameters, such as a learning rate parameter, thus eliminating the possibility of achieving poor performance by an unlucky choice of parameters.",
"Approximate policy evaluation with linear function approximation is a commonly arising problem in reinforcement learning, usually solved using temporal difference (TD) algorithms. In this paper we introduce a new variant of linear TD learning, called incremental least-squares TD learning, or iLSTD. This method is more data efficient than conventional TD algorithms such as TD(0) and is more computationally efficient than non-incremental least-squares TD methods such as LSTD (Bradtke & Barto 1996; Boyan 1999). In particular, we show that the per-time-step complexities of iLSTD and TD(0) are O(n), where n is the number of features, whereas that of LSTD is O(n2). This difference can be decisive in modern applications of reinforcement learning where the use of a large number features has proven to be an effective solution strategy. We present empirical comparisons, using the test problem introduced by Boyan (1999), in which iLSTD converges faster than TD(0) and almost as fast as LSTD."
]
} |
1902.02910 | 2919935710 | In vision-enabled autonomous systems such as robots and autonomous cars, video object detection plays a crucial role, and both its speed and accuracy are important factors to provide reliable operation. The key insight we show in this paper is that speed and accuracy are not necessarily a trade-off when it comes to image scaling. Our results show that re-scaling the image to a lower resolution will sometimes produce better accuracy. Based on this observation, we propose a novel approach, dubbed AdaScale, which adaptively selects the input image scale that improves both accuracy and speed for video object detection. To this end, our results on ImageNet VID and mini YouTube-BoundingBoxes datasets demonstrate 1.3 points and 2.7 points mAP improvement with 1.6x and 1.8x speedup, respectively. Additionally, we improve state-of-the-art video acceleration work by an extra 1.25x speedup with slightly better mAP on ImageNet VID dataset. | In this category, object detectors are designed to take an input image once and detect objects at various scales. That is, this category of prior work treats deep CNNs as scale-invariant. Prior work @cite_9 uses features from different layers in the CNN and merges them with normalization and scaling. A similar idea is also adopted by other work @cite_20 @cite_19 @cite_11 @cite_21 . From a different viewpoint, prior art @cite_10 proposes to use a recurrent network to approximate feature maps produced by images at different scales. Though single-shot approaches have shown great promise in better detecting various scales, the scale-invariant design philosophy generally requires a large model capacity @cite_4 @cite_10 . We note that, without perfect scale-invariance, different image scales will result in different accuracy, and prior art often uses a fixed single scale, 600 pixels on the smallest side of the image. Hence, this line of work could be further improved in terms of speed and accuracy when augmented with adaptive scaling. | {
"cite_N": [
"@cite_4",
"@cite_9",
"@cite_21",
"@cite_19",
"@cite_10",
"@cite_20",
"@cite_11"
],
"mid": [
"",
"2951829713",
"2768909086",
"2490270993",
"2964209717",
"2963387679",
"2949533892"
],
"abstract": [
"",
"It is well known that contextual and multi-scale representations are important for accurate visual recognition. In this paper we present the Inside-Outside Net (ION), an object detector that exploits information both inside and outside the region of interest. Contextual information outside the region of interest is integrated using spatial recurrent neural networks. Inside, we use skip pooling to extract information at multiple scales and levels of abstraction. Through extensive experiments we evaluate the design space and provide readers with an overview of what tricks of the trade are important. ION improves state-of-the-art on PASCAL VOC 2012 object detection from 73.9 to 76.4 mAP. On the new and more challenging MS COCO dataset, we improve the state-of-the-art from 19.7 to 33.1 mAP. In the 2015 MS COCO Detection Challenge, our ION model won the Best Student Entry and finished 3rd place overall. As intuition suggests, our detection results provide strong evidence that context and multi-scale representations improve small object detection.",
"Real-time detection frameworks that typically utilize end-to-end networks to scan the entire vision range, have shown potential effectiveness in object detection. However, compared to more accurate but time-consuming frameworks, detection accuracy of existing real-time networks are still left far behind. Towards this end, this work proposes a novel CAD framework to improve detection accuracy while preserving the real-time speed. Moreover, to enhance the generalization ability of the proposed framework, we introduce maxout [1] to approximate the correlation between image pixels and network predictions. In addition, the non-maximum weighted (NMW) [2] is employed to eliminate the redundant bounding boxes that are considered as repetitive detections for the same objects. Extensive experiments are conducted on two detection benchmarks to demonstrate that the proposed framework achieves state-of-the-art performance.",
"A unified deep neural network, denoted the multi-scale CNN (MS-CNN), is proposed for fast multi-scale object detection. The MS-CNN consists of a proposal sub-network and a detection sub-network. In the proposal sub-network, detection is performed at multiple output layers, so that receptive fields match objects of different scales. These complementary scale-specific detectors are combined to produce a strong multi-scale object detector. The unified network is learned end-to-end, by optimizing a multi-task loss. Feature upsampling by deconvolution is also explored, as an alternative to input upsampling, to reduce the memory and computation costs. State-of-the-art object detection performance, at up to 15 fps, is reported on datasets, such as KITTI and Caltech, containing a substantial number of small objects.",
"Since convolutional neural network (CNN) lacks an inherent mechanism to handle large scale variations, we always need to compute feature maps multiple times for multiscale object detection, which has the bottleneck of computational cost in practice. To address this, we devise a recurrent scale approximation (RSA) to compute feature map once only, and only through this map can we approximate the rest maps on other levels. At the core of RSA is the recursive rolling out mechanism: given an initial map on a particular scale, it generates the prediction on a smaller scale that is half the size of input. To further increase efficiency and accuracy, we (a): design a scale-forecast network to globally predict potential scales in the image since there is no need to compute maps on all levels of the pyramid. (b): propose a landmark retracing network (LRN) to retrace back locations of the regressed landmarks and generate a confidence score for each landmark; LRN can effectively alleviate false positives due to the accumulated error in RSA. The whole system could be trained end-to-end in a unified CNN framework. Experiments demonstrate that our proposed algorithm is superior against state-of-the-arts on face detection benchmarks and achieves comparable results for generic proposal generation. The source code of our system is available.",
"Hardware support for deep convolutional neural networks (CNNs) is critical to advanced computer vision in mobile and embedded devices. Current designs, however, accelerate generic CNNs; they do not exploit the unique characteristics of real-time vision. We propose to use the temporal redundancy in natural video to avoid unnecessary computation on most frames. A new algorithm, activation motion compensation, detects changes in the visual input and incrementally updates a previously-computed activation. The technique takes inspiration from video compression and applies well-known motion estimation techniques to adapt to visual changes. We use an adaptive key frame rate to control the trade-off between efficiency and vision quality as the input changes. We implement the technique in hardware as an extension to state-of-the-art CNN accelerator designs. The new unit reduces the average energy per frame by 54 , 62 , and 87 for three CNNs with less than 1 loss in vision accuracy.",
"Feature pyramids are a basic component in recognition systems for detecting objects at different scales. But recent deep learning object detectors have avoided pyramid representations, in part because they are compute and memory intensive. In this paper, we exploit the inherent multi-scale, pyramidal hierarchy of deep convolutional networks to construct feature pyramids with marginal extra cost. A top-down architecture with lateral connections is developed for building high-level semantic feature maps at all scales. This architecture, called a Feature Pyramid Network (FPN), shows significant improvement as a generic feature extractor in several applications. Using FPN in a basic Faster R-CNN system, our method achieves state-of-the-art single-model results on the COCO detection benchmark without bells and whistles, surpassing all existing single-model entries including those from the COCO 2016 challenge winners. In addition, our method can run at 5 FPS on a GPU and thus is a practical and accurate solution to multi-scale object detection. Code will be made publicly available."
]
} |
1902.02826 | 2918617607 | Image classification is vulnerable to adversarial attacks. This work investigates the robustness of Saak transform against adversarial attacks towards high performance image classification. We develop a complete image classification system based on multi-stage Saak transform. In the Saak transform domain, clean and adversarial images demonstrate different distributions at different spectral dimensions. Selection of the spectral dimensions at every stage can be viewed as an automatic denoising process. Motivated by this observation, we carefully design strategies of feature extraction, representation and classification that increase adversarial robustness. The performances with well-known datasets and attacks are demonstrated by extensive experimental evaluations. | One of the most interesting explorations in defending against adversarial attacks is adversarial training @cite_0 . It augments the clean training samples with adversarial samples for simultaneous training. While these methods help in defending against the particular adversarial attacks for which they are trained, they fail to generalize. Moreover, this type of training takes longer to converge and hence requires more training epochs. | {
"cite_N": [
"@cite_0"
],
"mid": [
"2903272733"
],
"abstract": [
"Standard adversarial attacks change the predicted class label of an image by adding specially tailored small perturbations to its pixels. In contrast, a universal perturbation is an update that can be added to any image in a broad class of images, while still changing the predicted class label. We study the efficient generation of universal adversarial perturbations, and also efficient methods for hardening networks to these attacks. We propose a simple optimization-based universal attack that reduces the top-1 accuracy of various network architectures on ImageNet to less than 20 , while learning the universal perturbation 13X faster than the standard method. To defend against these perturbations, we propose universal adversarial training, which models the problem of robust classifier generation as a two-player min-max game. This method is much faster and more scalable than conventional adversarial training with a strong adversary (PGD), and yet yields models that are extremely resistant to universal attacks, and comparably resistant to standard (per-instance) black box attacks. We also discover a rather fascinating side-effect of universal adversarial training: attacks built for universally robust models transfer better to other (black box) models than those built with conventional adversarial training."
]
} |
1902.02826 | 2918617607 | Image classification is vulnerable to adversarial attacks. This work investigates the robustness of Saak transform against adversarial attacks towards high performance image classification. We develop a complete image classification system based on multi-stage Saak transform. In the Saak transform domain, clean and adversarial images demonstrate different distributions at different spectral dimensions. Selection of the spectral dimensions at every stage can be viewed as an automatic denoising process. Motivated by this observation, we carefully design strategies of feature extraction, representation and classification that increase adversarial robustness. The performances with well-known datasets and attacks are demonstrated by extensive experimental evaluations. | Adversarial detection involves detecting an adversarial sample before it is passed through the network. Adversarial samples can be detected using statistical tests @cite_11 , by estimating Bayesian uncertainty @cite_21 , or by using noise reduction methods such as scalar quantization and spatial smoothing filters @cite_15 . Though these methods pave the way toward good adversarial sample detection, the detectors still run the risk of being fooled by the attacker. | {
"cite_N": [
"@cite_15",
"@cite_21",
"@cite_11"
],
"mid": [
"",
"2594867206",
"2590523583"
],
"abstract": [
"",
"Deep neural networks (DNNs) are powerful nonlinear architectures that are known to be robust to random perturbations of the input. However, these models are vulnerable to adversarial perturbations--small input changes crafted explicitly to fool the model. In this paper, we ask whether a DNN can distinguish adversarial samples from their normal and noisy counterparts. We investigate model confidence on adversarial samples by looking at Bayesian uncertainty estimates, available in dropout neural networks, and by performing density estimation in the subspace of deep features learned by the model. The result is a method for implicit adversarial detection that is oblivious to the attack algorithm. We evaluate this method on a variety of standard datasets including MNIST and CIFAR-10 and show that it generalizes well across different architectures and attacks. Our findings report that 85-93 ROC-AUC can be achieved on a number of standard classification tasks with a negative class that consists of both normal and noisy samples.",
"Machine Learning (ML) models are applied in a variety of tasks such as network intrusion detection or Malware classification. Yet, these models are vulnerable to a class of malicious inputs known as adversarial examples. These are slightly perturbed inputs that are classified incorrectly by the ML model. The mitigation of these adversarial inputs remains an open problem. As a step towards understanding adversarial examples, we show that they are not drawn from the same distribution as the original data, and can thus be detected using statistical tests. Using this knowledge, we introduce a complementary approach to identify specific inputs that are adversarial. Specifically, we augment our ML model with an additional output, in which the model is trained to classify all adversarial inputs. We evaluate our approach on multiple adversarial example crafting methods (including the fast gradient sign and saliency map methods) with several datasets. The statistical test flags sample sets containing adversarial inputs confidently at sample sizes between 10 and 100 data points. Furthermore, our augmented model either detects adversarial examples as outliers with high accuracy (> 80 ) or increases the adversary's cost - the perturbation added - by more than 150 . In this way, we show that statistical properties of adversarial examples are essential to their detection."
]
} |
1902.02826 | 2918617607 | Image classification is vulnerable to adversarial attacks. This work investigates the robustness of Saak transform against adversarial attacks towards high performance image classification. We develop a complete image classification system based on multi-stage Saak transform. In the Saak transform domain, clean and adversarial images demonstrate different distributions at different spectral dimensions. Selection of the spectral dimensions at every stage can be viewed as an automatic denoising process. Motivated by this observation, we carefully design strategies of feature extraction, representation and classification that increase adversarial robustness. The performances with well-known datasets and attacks are demonstrated by extensive experimental evaluations. | Knowledge distillation during network training can serve as a defense against adversarial samples @cite_10 . Reinforcing the network structure with bounded ReLU activations helps enhance stability against adversarial perturbations @cite_3 . PixelDefend, described in @cite_17 , is used as an image purification process. | {
"cite_N": [
"@cite_10",
"@cite_3",
"@cite_17"
],
"mid": [
"2174868984",
"2963626858",
""
],
"abstract": [
"Deep learning algorithms have been shown to perform extremely well on many classical machine learning problems. However, recent studies have shown that deep learning, like other machine learning techniques, is vulnerable to adversarial samples: inputs crafted to force a deep neural network (DNN) to provide adversary-selected outputs. Such attacks can seriously undermine the security of the system supported by the DNN, sometimes with devastating consequences. For example, autonomous vehicles can be crashed, illicit or illegal content can bypass content filters, or biometric authentication systems can be manipulated to allow improper access. In this work, we introduce a defensive mechanism called defensive distillation to reduce the effectiveness of adversarial samples on DNNs. We analytically investigate the generalizability and robustness properties granted by the use of defensive distillation when training DNNs. We also empirically study the effectiveness of our defense mechanisms on two DNNs placed in adversarial settings. The study shows that defensive distillation can reduce effectiveness of sample creation from 95 to less than 0.5 on a studied DNN. Such dramatic gains can be explained by the fact that distillation leads gradients used in adversarial sample creation to be reduced by a factor of 10^30. We also find that distillation increases the average minimum number of features that need to be modified to create adversarial samples by about 800 on one of the DNNs we tested.",
"Following the recent adoption of deep neural networks (DNN) across a wide range of applications, adversarial attacks against these models have proven to be an indisputable threat. Adversarial samples are crafted with a deliberate intention of undermining a system. In the case of DNNs, the lack of better understanding of their working has prevented the development of efficient defenses. In this paper, we propose a new defense method based on practical observations which is easy to integrate into models and performs better than state-of-the-art defenses. Our proposed solution is meant to reinforce the structure of a DNN, making its prediction more stable and less likely to be fooled by adversarial samples. We conduct an extensive experimental study proving the efficiency of our method against multiple attacks, comparing it to numerous defenses, both in white-box and black-box setups. Additionally, the implementation of our method brings almost no overhead to the training procedure, while maintaining the prediction performance of the original model on clean samples.",
""
]
} |
1902.02826 | 2918617607 | Image classification is vulnerable to adversarial attacks. This work investigates the robustness of Saak transform against adversarial attacks towards high performance image classification. We develop a complete image classification system based on multi-stage Saak transform. In the Saak transform domain, clean and adversarial images demonstrate different distributions at different spectral dimensions. Selection of the spectral dimensions at every stage can be viewed as an automatic denoising process. Motivated by this observation, we carefully design strategies of feature extraction, representation and classification that increase adversarial robustness. The performances with well-known datasets and attacks are demonstrated by extensive experimental evaluations. | @cite_14 applies the lossy Saak transform to adversarially perturbed images as a pre-processing tool to defend against adversarial attacks. The method is based on the observation that outputs of the Saak transform are very discriminative in differentiating adversarial examples from clean ones. Instead of using the Saak transform as a pre-processing tool, we apply the multi-stage Saak transform to build a complete image classification pipeline and design new strategies of feature selection, representation, and classification to defend against adversarial attacks. | {
"cite_N": [
"@cite_14"
],
"mid": [
"2885625512"
],
"abstract": [
"Deep neural networks (DNNs) are known to be vulnerable to adversarial perturbations, which imposes a serious threat to DNN-based decision systems. In this paper, we propose to apply the lossy Saak transform to adversarially perturbed images as a preprocessing tool to defend against adversarial attacks. Saak transform is a recently-proposed state-of-the-art for computing the spatial-spectral representations of input images. Empirically, we observe that outputs of the Saak transform are very discriminative in differentiating adversarial examples from clean ones. Therefore, we propose a Saak transform based preprocessing method with three steps: 1) transforming an input image to a joint spatial-spectral representation via the forward Saak transform, 2) apply filtering to its high-frequency components, and, 3) reconstructing the image via the inverse Saak transform. The processed image is found to be robust against adversarial perturbations. We conduct extensive experiments to investigate various settings of the Saak transform and filtering functions. Without harming the decision performance on clean images, our method outperforms state-of-the-art adversarial defense methods by a substantial margin on both the CIFAR-10 and ImageNet datasets. Importantly, our results suggest that adversarial perturbations can be effectively and efficiently defended using state-of-the-art frequency analysis."
]
} |
1902.02880 | 2918794972 | Can multilayer neural networks -- typically constructed as highly complex structures with many nonlinearly activated neurons across layers -- behave in a non-trivial way that yet simplifies away a major part of their complexities? In this work, we uncover a phenomenon in which the behavior of these complex networks -- under suitable scalings and stochastic gradient descent dynamics -- becomes independent of the number of neurons as this number grows sufficiently large. We develop a formalism in which this many-neurons limiting behavior is captured by a set of equations, thereby exposing a previously unknown operating regime of these networks. While the current pursuit is mathematically non-rigorous, it is complemented with several experiments that validate the existence of this behavior. | As mentioned, several recent works have studied the MF limit in the two-layers network case. The works @cite_44 @cite_18 @cite_45 @cite_35 establish the MF limit, and in particular, @cite_44 proves that this holds as soon as the number of neurons exceeds the data dimension. @cite_44 @cite_18 utilize this limit to prove that (noisy) SGD can converge to (near) global optimum under different assumptions. For a specific class of activations and data distribution, @cite_1 proves that this convergence is exponentially fast using the displacement convexity property of the MF limit. Taking the same viewpoint, @cite_48 proves a convergence result for a specifically chosen many-neurons limit. @cite_35 @cite_4 study the fluctuations around the MF limit. Our analysis of the multilayer case requires substantial extension and new ideas, uncovering certain properties that are not obvious from the two-layers analysis (see also Section ). | {
"cite_N": [
"@cite_35",
"@cite_18",
"@cite_4",
"@cite_48",
"@cite_1",
"@cite_44",
"@cite_45"
],
"mid": [
"2966530573",
"2952469083",
"2889237180",
"",
"2908088598",
"",
"2798826368"
],
"abstract": [
"Neural networks, a central tool in machine learning, have demonstrated remarkable, high fidelity performance on image recognition and classification tasks. These successes evince an ability to accurately represent high dimensional functions, but rigorous results about the approximation error of neural networks after training are few. Here we establish conditions for global convergence of the standard optimization algorithm used in machine learning applications, stochastic gradient descent (SGD), and quantify the scaling of its error with the size of the network. This is done by reinterpreting SGD as the evolution of a particle system with interactions governed by a potential related to the objective or \"loss\" function used to train the network. We show that, when the number @math of units is large, the empirical distribution of the particles descends on a convex landscape towards the global minimum at a rate independent of @math , with a resulting approximation error that universally scales as @math . These properties are established in the form of a Law of Large Numbers and a Central Limit Theorem for the empirical distribution. Our analysis also quantifies the scale and nature of the noise introduced by SGD and provides guidelines for the step size and batch size to use when training a neural network. We illustrate our findings on examples in which we train neural networks to learn the energy function of the continuous 3-spin model on the sphere. The approximation error scales as our analysis predicts in as high a dimension as @math .",
"Many tasks in machine learning and signal processing can be solved by minimizing a convex function of a measure. This includes sparse spikes deconvolution or training a neural network with a single hidden layer. For these problems, we study a simple minimization method: the unknown measure is discretized into a mixture of particles and a continuous-time gradient descent is performed on their weights and positions. This is an idealization of the usual way to train neural networks with a large hidden layer. We show that, when initialized correctly and in the many-particle limit, this gradient flow, although non-convex, converges to global minimizers. The proof involves Wasserstein gradient flows, a by-product of optimal transport theory. Numerical experiments show that this asymptotic behavior is already at play for a reasonable number of particles, even in high dimension.",
"Machine learning has revolutionized fields such as image, text, and speech recognition. There's also growing interest in applying machine and deep learning methods in science, engineering, medicine, and finance. Despite their immense success in practice, there is limited mathematical understanding of neural networks. We mathematically study neural networks in the asymptotic regime of simultaneously (A) large network sizes and (B) large numbers of stochastic gradient descent training iterations. We rigorously prove that the neural network satisfies a central limit theorem. Our result describes the neural network's fluctuations around its mean-field limit. The fluctuations have a Gaussian distribution and satisfy a stochastic partial differential equation.",
"",
"Fitting a function by using linear combinations of a large number @math of 'simple' components is one of the most fruitful ideas in statistical learning. This idea lies at the core of a variety of methods, from two-layer neural networks to kernel regression, to boosting. In general, the resulting risk minimization problem is non-convex and is solved by gradient descent or its variants. Unfortunately, little is known about global convergence properties of these approaches. Here we consider the problem of learning a concave function @math on a compact convex domain @math , using linear combinations of 'bump-like' components (neurons). The parameters to be fitted are the centers of @math bumps, and the resulting empirical risk minimization problem is highly non-convex. We prove that, in the limit in which the number of neurons diverges, the evolution of gradient descent converges to a Wasserstein gradient flow in the space of probability distributions over @math . Further, when the bump width @math tends to @math , this gradient flow has a limit which is a viscous porous medium equation. Remarkably, the cost function optimized by this gradient flow exhibits a special property known as displacement convexity, which implies exponential convergence rates for @math , @math . Surprisingly, this asymptotic theory appears to capture well the behavior for moderate values of @math . Explaining this phenomenon, and understanding the dependence on @math in a quantitative manner remains an outstanding challenge.",
"",
"Machine learning, and in particular neural network models, have revolutionized fields such as image, text, and speech recognition. Today, many important real-world applications in these areas are driven by neural networks. There are also growing applications in engineering, robotics, medicine, and finance. Despite their immense success in practice, there is limited mathematical understanding of neural networks. This paper illustrates how neural networks can be studied via stochastic analysis, and develops approaches for addressing some of the technical challenges which arise. We analyze one-layer neural networks in the asymptotic regime of simultaneously (A) large network sizes and (B) large numbers of stochastic gradient descent training iterations. We rigorously prove that the empirical distribution of the neural network parameters converges to the solution of a nonlinear partial differential equation. This result can be considered a law of large numbers for neural networks. In addition, a consequence of our analysis is that the trained parameters of the neural network asymptotically become independent, a property which is commonly called \"propagation of chaos\"."
]
} |
1902.02880 | 2918794972 | Can multilayer neural networks -- typically constructed as highly complex structures with many nonlinearly activated neurons across layers -- behave in a non-trivial way that yet simplifies away a major part of their complexities? In this work, we uncover a phenomenon in which the behavior of these complex networks -- under suitable scalings and stochastic gradient descent dynamics -- becomes independent of the number of neurons as this number grows sufficiently large. We develop a formalism in which this many-neurons limiting behavior is captured by a set of equations, thereby exposing a previously unknown operating regime of these networks. While the current pursuit is mathematically non-rigorous, it is complemented with several experiments that validate the existence of this behavior. | We take note of the work @cite_22, which shares a few similarities with our work in the forward pass description (for instance, Eq. (1) of @cite_22 as compared to the corresponding equation in our work). @cite_22 differs in that it takes a kernel method perspective and develops a Gaussian process formulation, which makes strong assumptions on the distribution of the weights. Its formulation does not extend beyond three layers. Meanwhile, our work focuses on the MF limit, points out explicitly the appropriate scalings, proposes new crucial ideas to address the backward pass and the learning dynamics, and is not limited to any specific number of layers. | {
"cite_N": [
"@cite_22"
],
"mid": [
"2212676342"
],
"abstract": [
"Contemporary deep neural networks exhibit impressive results on practical problems. These networks generalize well although their inherent capacity may extend significantly beyond the number of training examples. We analyze this behavior in the context of deep, infinite neural networks. We show that deep infinite layers are naturally aligned with Gaussian processes and kernel methods, and devise stochastic kernels that encode the information of these networks. We show that stability results apply despite the size, offering an explanation for their empirical success."
]
} |
1902.02697 | 2950573814 | The effect of signals on stability, throughput region, and delay in a two-user slotted ALOHA based random-access system with collisions is considered. This work gives rise to the development of random access G-networks, which can model virus attacks or other malfunctions and introduce load balancing in highly interacting networks. The users are equipped with infinite capacity buffers accepting external bursty arrivals. We consider both negative and triggering signals. Negative signals delete a packet from a user queue, while triggering signals cause the instantaneous transfer of packets among user queues. We obtain the exact stability region, and show that the stable throughput region is a subset of it. Moreover, we perform a compact mathematical analysis to obtain exact expressions for the queueing delay by solving a Riemann boundary value problem. A computationally efficient way to obtain explicit bounds for the queueing delay is also presented. The theoretical findings are numerically evaluated and insights regarding the system performance are derived. | Motivated by neural network modelling @cite_18 , a novel stochastic network, called a G-network or a queueing network with signals, was introduced as a unifying model for neural and queueing networks. In contrast to traditional queueing networks, where (positive) customers obey the specified service and routing disciplines determined by the network dynamics, there is another type of customer, the signal, which upon arrival at a queue interacts with the queue or with the backlogged customers. | {
"cite_N": [
"@cite_18"
],
"mid": [
"2010470943"
],
"abstract": [
"We introduce a new class of random neural networks in which signals are either negative or positive. A positive signal arriving at a neuron increases its total signal count or potential by one; a negative signal reduces it by one if the potential is positive, and has no effect if it is zero. When its potential is positive, a neuron fires, sending positive or negative signals at random intervals to neurons or to the outside. Positive signals represent excitatory signals and negative signals represent inhibition. We show that this model, with exponential signal emission intervals, Poisson external signal arrivals, and Markovian signal movements between neurons, has a product form leading to simple analytical expressions for the system state."
]
} |
1902.02697 | 2950573814 | The effect of signals on stability, throughput region, and delay in a two-user slotted ALOHA based random-access system with collisions is considered. This work gives rise to the development of random access G-networks, which can model virus attacks or other malfunctions and introduce load balancing in highly interacting networks. The users are equipped with infinite capacity buffers accepting external bursty arrivals. We consider both negative and triggering signals. Negative signals delete a packet from a user queue, while triggering signals cause the instantaneous transfer of packets among user queues. We obtain the exact stability region, and show that the stable throughput region is a subset of it. Moreover, we perform a compact mathematical analysis to obtain exact expressions for the queueing delay by solving a Riemann boundary value problem. A computationally efficient way to obtain explicit bounds for the queueing delay is also presented. The theoretical findings are numerically evaluated and insights regarding the system performance are derived. | G-networks @cite_38 establish a versatile class of queueing networks with a computationally efficient product form solution, whose existence was proved using new techniques from the theory of fixed point equations @cite_43 . In its simplest version, a signal arriving at a non-empty queue forces a positive customer to leave the network immediately @cite_38 . Since their introduction, G-networks have been extensively studied, covering several extensions such as triggered movement, which redirects customers among the queues @cite_31 ; catastrophes or batch service @cite_85 ; adders @cite_48 ; multiple classes of positive customers and signals @cite_25 ; state-dependent service disciplines @cite_83 @cite_14 @cite_74 ; tandem networks @cite_84 @cite_27 ; deletion of a random amount of work @cite_82 @cite_55 ; and retrials @cite_23 @cite_26 (non-exhaustive list).
For a complete bibliography see @cite_15 @cite_37 @cite_4 . | {
"cite_N": [
"@cite_38",
"@cite_37",
"@cite_14",
"@cite_26",
"@cite_4",
"@cite_15",
"@cite_48",
"@cite_55",
"@cite_85",
"@cite_84",
"@cite_43",
"@cite_27",
"@cite_83",
"@cite_74",
"@cite_23",
"@cite_31",
"@cite_25",
"@cite_82"
],
"mid": [
"2056377128",
"1966570856",
"2329313615",
"2026084050",
"2615452329",
"2921136923",
"2725399272",
"2120325725",
"2074419909",
"1995626820",
"2121777042",
"2328670288",
"2608188343",
"2155402103",
"1973764571",
"2056494836",
"2038297639",
"2127602924"
],
"abstract": [
"We introduce a new class of queueing networks in which customers are either 'negative' or 'positive'. A negative customer arriving to a queue reduces the total customer count in that queue by 1 if the queue length is positive; it has no effect at all if the queue length is empty. Negative customers do not receive service. Customers leaving a queue for another one can either become negative or remain positive. Positive customers behave as ordinary queueing network customers and receive service. We show that this model with exponential service times, Poisson external arrivals, with the usual independence assumptions for service times, and Markovian customer movements between queues, has product form. It is quasi-reversible in the usual sense, but not in a broader sense which includes all destructions of customers in the set of departures. The existence and uniqueness of the solutions to the (nonlinear) customer flow equations, and hence of the product form solution, is discussed.",
"In the last two decades, researchers have really been embracing the idea of G-networks with negative arrivals and the relevant product form solution including nonlinear traffic equations, which is the unified model for queueing networks and neural networks. This paper reports the initiative to collect and classify a bibliography on G-networks.",
"It has recently been shown that networks of queues with state-dependent movement of negative customers, and with state-independent triggering of customer movement have product-form equilibrium distributions. Triggers and negative customers are entities which, when arriving to a queue, force a single customer to be routed through the network or leave the network respectively. They are 'signals' which affect and control network behaviour. The provision of state-dependent intensities introduces queues other than single-server queues into the network. This paper considers networks with state-dependent intensities in which signals can be either a trigger or a batch of negative customers (the batch size being determined by an arbitrary probability distribution). It is shown that such networks still have a product-form equilibrium distribution. Natural methods for state space truncation and for the inclusion of multiple customer types in the network can be viewed as special cases of this state dependence. A further generalisation allows for the possibility of signals building up at nodes.",
"A stochastic clearing system is characterized by the existence of an output mechanism that instantaneously clears the system, i.e. removes all work currently present. In this paper we study the stochastic behavior of a single server clearing queue in which customers cannot be continuously in contact with the server, but can reinitiate the demand some time later. We develop a comprehensive analysis of the system including its limiting behavior, busy period, and waiting time",
"",
"",
"Queueing networks are used to model the performance of the Internet, of manufacturing and job-shop systems, supply chains, and other networked systems in transportation or emergency management. Composed of service stations where customers receive service, and then move to another service station till they leave the network, queueing networks are based on probabilistic assumptions concerning service times and customer movement that represent the variability of system workloads. Subject to restrictive assumptions regarding external arrivals, Markovian movement of customers, and service time distributions, such networks can be solved efficiently with “product form solutions” that reduce the need for software simulators requiring lengthy computations. G-networks generalise these models to include the effect of “signals” that re-route customer traffic, or negative customers that reject service requests, and also have a convenient product form solution. This paper extends G-networks by including a new type of signal, that we call an “Adder”, which probabilistically changes the queue length at the service center that it visits, acting as a load regulator. We show that this generalisation of G-networks has a product form solution.",
"Recently, a Pollaczek-Khintchine-like formulation for M/G/1 queues with disasters has been obtained. A disaster is said to occur if a negative arrival causes all the customers (and therefore work) to depart from the system immediately. This study generalizes this result further, as it is shown to hold even when negative arrivals cause only part of the work to be demolished. In other words, an arbitrary amount of work, following a known distribution, is allowed to be removed at a negative event. Under these circumstances, a general approach for obtaining the Pollaczek–Khintchine Formula is proposed, which is then illustrated via several examples. Typically, it is seen that the formula involves certain parameters that are not explicitly known. The formula itself is made possible due to the number in system being geometric under the preemptive last-in first-out discipline.",
"",
"An important class of queueing networks is characterized by the following feature: in contrast with ordinary units, a disaster may remove all work from the network. Applications of such networks include computer networks with virus infection, migration processes with mass exodus and serial production lines with catastrophes. In this paper, we deal with a two-stage tandem queue with blocking operating under the presence of a secondary flow of disasters. The arrival flows of units and disasters are general Markovian arrival processes. Using spectral analysis, we determine the stationary distribution at departure epochs. That distribution enables us to derive the distribution of the number of units which leave the network at a disaster epoch. We calculate the stationary distribution at an arbitrary time and, finally, we give numerical results and graphs for certain probabilistic descriptors of the network.",
"",
"The Laplace transform of the probability distribution of the end-to-end delay in tandem networks is obtained where the first and/or second queue are G-queues, i.e. they have negative arrivals. For the most general case the method is based on the solution of a boundary value problem on a closed contour in the complex plane, which itself reduces to the solution of a Fredholm integral equation of the second kind. We also consider the dependence or independence of the sojourn times at each queue in the two special cases where only one of the queues is a G-queue, the other having no negative arrivals.",
"",
"Consider a generalized queueing network model that is subject to two types of arrivals. The first type represents the regular customers; the second type represents signals. A signal induces a regular customer already present at a node to leave. Gelenbe [5] showed that such a network possesses a product form solution when each node consists of a single exponential server. In this paper we study a number of issues concerning this class of networks. First, we explain why such networks have a product form solution. Second, we generalize existing results to include different service disciplines, state-dependent service rates, multiple job classes, and batch servicing. Finally, we establish the relationship between these networks and networks of quasi-reversible queues. We show that the product form solution of the generalized networks is a consequence of a property of the individual nodes viewed in isolation. This property is similar to the quasi-reversibility property of the nodes of a Jackson network: if the arrivals of the regular customers and of the signals at a node in isolation are independent Poisson, the departure processes of the regular customers and the signals are also independent Poisson, and the current state of the system is independent of the past departure processes.",
"There is a growing interest in queueing systems with negative arrivals; i.e. where the arrival of a negative customer has the effect of deleting some customer in the queue. Recently, Harrison and Pitel (1996) investigated the queue length distribution of a single server queue of type M/G/1 with negative arrivals. In this paper we extend the analysis to the context of queueing systems with request repeated. We show that the limiting distribution of the system state can still be reduced to a Fredholm integral equation. We solve such an equation numerically by introducing an auxiliary 'truncated' system which can easily be evaluated with the help of a regenerative approach.",
"The generalized queueing networks (G-networks) which we introduce in this paper contain customers and signals. Both customers and signals can be exogenous, or can be obtained by a Markovian movement of a customer from one queue to another after service transforming itself into a signal or remaining a customer. A signal entering a queue forces a customer to move instantaneously to another queue according to a Markovian routing rule, or to leave the network, while customers request service. This synchronised or triggered motion is useful in representing the effect of tokens in Petri nets, in modelling systems in which customers and work can be instantaneously moved from one queue to the other upon certain events, and also for certain behaviours encountered in parallel computer system modelling. We show that this new class of network has product-form stationary solution, and establish the non-linear customer flow equations which govern it. Network stability is discussed in this new context.",
"G-networks are queueing models in which the types of customers one usually deals with in queues are enriched in several ways. In G-networks, positive customers are those that are ordinarily found in queueing systems; they queue up and wait for service, obtain service and then leave or go to some other queue. Negative customers have the specific function of destroying ordinary or positive customers. Finally triggers simply move an ordinary customer from one queue to the other. The term “signal” is used to cover negative customers and triggers. G-networks contain these three types of entities with certain restrictions; positive customers can move from one queue to another, and they can change into negative customers or into triggers when they leave a queue. On the other hand, signals (i.e. negative customers and triggers) do not queue up for service and simply disappear after having joined a queue and having destroyed or moved a positive customer. This paper considers this class of networks with multiple classes of positive customers and of signals. We show that with appropriate assumptions on service times, service disciplines, and triggering or destruction rules on the part of signals, these networks have a product form solution, extending earlier results.",
"We consider an M/G/1 queue with the special feature of additional negative customers, who arrive according to a Poisson process. Negative customers require no service, but at their arrival a stochastic amount of work is instantaneously removed from the system. We show that the workload distribution in this M/G/1 queue with negative customers equals the waiting time distribution in a GI/G/1 queue with ordinary customers only; the effect of the negative customers is incorporated in the new arrival process."
]
} |
1902.02697 | 2950573814 | The effect of signals on stability, throughput region, and delay in a two-user slotted ALOHA based random-access system with collisions is considered. This work gives rise to the development of random access G-networks, which can model virus attacks or other malfunctions and introduce load balancing in highly interacting networks. The users are equipped with infinite capacity buffers accepting external bursty arrivals. We consider both negative and triggering signals. Negative signals delete a packet from a user queue, while triggering signals cause the instantaneous transfer of packets among user queues. We obtain the exact stability region, and show that the stable throughput region is a subset of it. Moreover, we perform a compact mathematical analysis to obtain exact expressions for the queueing delay by solving a Riemann boundary value problem. A computationally efficient way to obtain explicit bounds for the queueing delay is also presented. The theoretical findings are numerically evaluated and insights regarding the system performance are derived. | In the field of computer network performance, the RNN has been used to build distributed controllers for quality of service routing in packet networks @cite_32 @cite_28 and in the design of Software Defined Network controllers @cite_0 . Real-time optimised task allocation algorithms in Cloud systems @cite_6 have also been built and tested. Recent applications have addressed the use of the RNN to detect attacks on Internet of Things (IoT) gateways @cite_30 . The simple and decentralized nature of the ALOHA protocol @cite_65 made it very popular in multiple access communication systems. The ever-increasing need for massive uncoordinated access has increased the interest in random access protocols @cite_63 @cite_79 , which remain an active research area with challenging open problems even for very simple networks @cite_44 @cite_9 . | {
"cite_N": [
"@cite_30",
"@cite_28",
"@cite_9",
"@cite_65",
"@cite_32",
"@cite_6",
"@cite_0",
"@cite_79",
"@cite_44",
"@cite_63"
],
"mid": [
"2895633135",
"2293168792",
"1489060051",
"2150847784",
"1980925810",
"2130095783",
"2963492437",
"",
"2158013409",
"2141804605"
],
"abstract": [
"In this paper, we analyze the network attacks that can be launched against IoT gateways, identify the relevant metrics to detect them, and explain how they can be computed from packet captures. We also present the principles and design of a deep learning-based approach using dense random neural networks (RNN) for the online detection of network attacks. Empirical validation results on packet captures in which attacks were inserted show that the Dense RNN correctly detects attacks.",
"This paper uses big data and machine learning for the real-time management of Internet scale quality-of-service (QoS) route optimisation with an overlay network. Based on the collection of data sampled every 2 min over a large number of source–destinations pairs, we show that intercontinental Internet protocol (IP) paths are far from optimal with respect to QoS metrics such as end-to-end round-trip delay. We, therefore, develop a machine learning-based scheme that exploits large scale data collected from communicating node pairs in a multihop overlay network that uses IP between the overlay nodes, and selects paths that provide substantially better QoS than IP. Inspired from cognitive packet network protocol, it uses random neural networks with reinforcement learning based on the massive data that is collected, to select intermediate overlay hops. The routing scheme is illustrated on a 20-node intercontinental overlay network that collects some @math measurements per week, and makes scalable distributed routing decisions. Experimental results show that this approach improves QoS significantly and efficiently.",
"In this paper, a cross-layer view for roles of signal processing in random access network and vice versa is presented. The two cases where cross-layer design has a quantifiable impact on system performance are discussed. The first case is a small network (such as wireless LAN) where a few nodes with bursty arrivals communicate with an access point. The design objective is to achieve the highest throughput among users with variable rate and delay constraints. The impact of PHY layer design on MAC protocol is examined and illustrates a tradeoff between allocating resources to the PHY layer and to MAC layer. The second case, in contrast, deals with large-scale sensor networks where each node carries little information but is severely constrained by its computation and communication complexity and most importantly, battery power. This paper emphasizes that the design of signal processing algorithms must take into account the role of MAC and the nature of random arrivals and bursty transmissions.",
"In September 1968 the University of Hawaii began work on a research program to investigate the use of radio communications for computer-computer and console-computer links. In this report we describe a remote-access computer system---THE ALOHA SYSTEM---under development as part of that research program and discuss some advantages of radio communications over conventional wire communications for interactive users of a large computer system. Although THE ALOHA SYSTEM research program is composed of a large number of research projects, in this report we shall be concerned primarily with a novel form of random-access radio communications developed for use within THE ALOHA SYSTEM.",
"Network software adapts to user needs and load variations and failures to provide reliable communications in largely unknown networks.",
"The increasingly wide application of Cloud Computing enables the consolidation of tens of thousands of applications in shared infrastructures. Thus, meeting the QoS requirements of so many diverse applications in such shared resource environments has become a real challenge, especially since the characteristics and workload of applications differ widely and may change over time. This paper presents an experimental system that can exploit a variety of online QoS aware adaptive task allocation schemes, and three such schemes are designed and compared. These are a measurement driven algorithm that uses reinforcement learning, secondly a “sensible” allocation algorithm that assigns tasks to sub-systems that are observed to provide a lower response time, and then an algorithm that splits the task arrival stream into sub-streams at rates computed from the hosts’ processing capabilities. All of these schemes are compared via measurements among themselves and with a simple round-robin scheduler, on two experimental test-beds with homogeneous and heterogeneous hosts having different processing capacities.",
"In Software Defined Networks (SDN), intensive traffic monitoring is used to optimize the Quality-of-Service (QoS) of the network paths which are selected. Thus, we introduce the use of the Cognitive Packet Network (CPN) algorithm to SDN in order to optimize the search for new high-QoS paths. We install the CPN algorithm in the Cognitive Routing Engine (CRE), a new application software for SDN, and show that with limited monitoring overhead we are able to determine the near-optimal paths for given QoS metrics that may be proposed by the end users. Measurements that we have conducted on an experimental replica of the GEANT network show that our approach uses close to 10 times less monitoring data than conventional SDN, but that we are able to approach the optimal paths within 2%.",
"",
"Information theory has not yet had a direct impact on networking, although there are similarities in concepts and methodologies that have consistently attracted the attention of researchers from both fields. In this paper, we review several topics that are related to communication networks and that have an information-theoretic flavor, including multiaccess protocols, timing channels, effective bandwidth of bursty data sources, deterministic constraints on datastreams, queuing theory, and switching networks.",
"The 3GPP has raised the need to revisit the design of next generations of cellular networks in order to make them capable and efficient to provide M2M services. One of the key challenges that has been identified is the need to enhance the operation of the random access channel of LTE and LTE-A. The current mechanism to request access to the system is known to suffer from congestion and overloading in the presence of a huge number of devices. For this reason, different research groups around the globe are working towards the design of more efficient ways of managing the access to these networks in such circumstances. This paper aims to provide a survey of the alternatives that have been proposed over the last years to improve the operation of the random access channel of LTE and LTE-A. A comprehensive discussion of the different alternatives is provided, identifying strengths and weaknesses of each one of them, while drawing future trends to steer the efforts over the same shooting line. In addition, while existing literature has been focused on the performance in terms of delay, the energy efficiency of the access mechanism of LTE will play a key role in the deployment of M2M networks. For this reason, a comprehensive performance evaluation of the energy efficiency of the random access mechanism of LTE is provided in this paper. The aim of this computer-based simulation study is to set a baseline performance upon which new and more energy-efficient mechanisms can be designed in the near future."
]
} |
1902.02678 | 2914044559 | In this work, we propose a single deep neural network for panoptic segmentation, for which the goal is to provide each individual pixel of an input image with a class label, as in semantic segmentation, as well as a unique identifier for specific objects in an image, following instance segmentation. Our network makes joint semantic and instance segmentation predictions and combines these to form an output in the panoptic format. This has two main benefits: firstly, the entire panoptic prediction is made in one pass, reducing the required computation time and resources; secondly, by learning the tasks jointly, information is shared between the two tasks, thereby improving performance. Our network is evaluated on two street scene datasets: Cityscapes and Mapillary Vistas. By leveraging information exchange and improving the merging heuristics, we increase the performance of the single network, and achieve a score of 23.9 on the Panoptic Quality (PQ) metric on Mapillary Vistas validation, with an input resolution of 640 x 900 pixels. On Cityscapes validation, our method achieves a PQ score of 45.9 with an input resolution of 512 x 1024 pixels. Moreover, our method decreases the prediction time by a factor of 2 with respect to separate networks. | In semantic segmentation, it is very important that spatial relations are preserved, since the output is directly spatially related to the input. For this reason, the application of convolutional layers is essential. The first semantic segmentation architecture consisting of a Fully Convolutional Network (FCN), i.e. one that applies only convolutional layers, was presented in @cite_18 . They apply an FCN to encode the image into feature maps, make class predictions on these feature maps, and apply bilinear upsampling to create the segmentation masks. The SegNet model @cite_14 is also an FCN, but it applies a decoding network instead of bilinear upsampling.
Recently, PSPNet has become the state-of-the-art model, improving performance by leveraging information from different levels of the feature map to introduce a sense of context @cite_3 . | {
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_3"
],
"mid": [
"2395611524",
"2963881378",
"2560023338"
],
"abstract": [
"Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, improve on the previous best result in semantic segmentation. Our key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet, the VGG net, and GoogLeNet) into fully convolutional networks and transfer their learned representations by fine-tuning to the segmentation task. We then define a skip architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional networks achieve improved segmentation of PASCAL VOC (30% relative improvement to 67.2% mean IU on 2012), NYUDv2, SIFT Flow, and PASCAL-Context, while inference takes one tenth of a second for a typical image.",
"We present a novel and practical deep fully convolutional neural network architecture for semantic pixel-wise segmentation termed SegNet. This core trainable segmentation engine consists of an encoder network, a corresponding decoder network followed by a pixel-wise classification layer. The architecture of the encoder network is topologically identical to the 13 convolutional layers in the VGG16 network [1] . The role of the decoder network is to map the low resolution encoder feature maps to full input resolution feature maps for pixel-wise classification. The novelty of SegNet lies is in the manner in which the decoder upsamples its lower resolution input feature map(s). Specifically, the decoder uses pooling indices computed in the max-pooling step of the corresponding encoder to perform non-linear upsampling. This eliminates the need for learning to upsample. The upsampled maps are sparse and are then convolved with trainable filters to produce dense feature maps. We compare our proposed architecture with the widely adopted FCN [2] and also with the well known DeepLab-LargeFOV [3] , DeconvNet [4] architectures. This comparison reveals the memory versus accuracy trade-off involved in achieving good segmentation performance. SegNet was primarily motivated by scene understanding applications. Hence, it is designed to be efficient both in terms of memory and computational time during inference. It is also significantly smaller in the number of trainable parameters than other competing architectures and can be trained end-to-end using stochastic gradient descent. We also performed a controlled benchmark of SegNet and other architectures on both road scenes and SUN RGB-D indoor scene segmentation tasks. These quantitative assessments show that SegNet provides good performance with competitive inference time and most efficient inference memory-wise as compared to other architectures. 
We also provide a Caffe implementation of SegNet and a web demo at http://mi.eng.cam.ac.uk/projects/segnet/.",
"Scene parsing is challenging for unrestricted open vocabulary and diverse scenes. In this paper, we exploit the capability of global context information by different-region-based context aggregation through our pyramid pooling module together with the proposed pyramid scene parsing network (PSPNet). Our global prior representation is effective to produce good quality results on the scene parsing task, while PSPNet provides a superior framework for pixel-level prediction. The proposed approach achieves state-of-the-art performance on various datasets. It came first in ImageNet scene parsing challenge 2016, PASCAL VOC 2012 benchmark and Cityscapes benchmark. A single PSPNet yields the new record of mIoU accuracy 85.4% on PASCAL VOC 2012 and accuracy 80.2% on Cityscapes."
]
} |
1902.02678 | 2914044559 | In this work, we propose a single deep neural network for panoptic segmentation, for which the goal is to provide each individual pixel of an input image with a class label, as in semantic segmentation, as well as a unique identifier for specific objects in an image, following instance segmentation. Our network makes joint semantic and instance segmentation predictions and combines these to form an output in the panoptic format. This has two main benefits: firstly, the entire panoptic prediction is made in one pass, reducing the required computation time and resources; secondly, by learning the tasks jointly, information is shared between the two tasks, thereby improving performance. Our network is evaluated on two street scene datasets: Cityscapes and Mapillary Vistas. By leveraging information exchange and improving the merging heuristics, we increase the performance of the single network, and achieve a score of 23.9 on the Panoptic Quality (PQ) metric on Mapillary Vistas validation, with an input resolution of 640 x 900 pixels. On Cityscapes validation, our method achieves a PQ score of 45.9 with an input resolution of 512 x 1024 pixels. Moreover, our method decreases the prediction time by a factor of 2 with respect to separate networks. | Instance segmentation, on the other hand, is closely related to bounding box object detection. Instance segmentation extends object detection by predicting per-pixel masks for the detected objects. Therefore, many methods approach instance segmentation by predicting instance masks for detected objects. A state-of-the-art instance segmentation method is Mask R-CNN @cite_12 . In this approach, the object detection method of Faster R-CNN @cite_6 is extended with per-pixel instance mask predictions for each bounding box that is likely to contain an object.
Recently, the Mask R-CNN architecture has been improved with the development of Feature Pyramid Networks @cite_16 and the Path Aggregation Network @cite_5 , leading to new state-of-the-art results. | {
"cite_N": [
"@cite_5",
"@cite_16",
"@cite_6",
"@cite_12"
],
"mid": [
"2793693263",
"2565639579",
"639708223",
""
],
"abstract": [
"The way that information propagates in neural networks is of great importance. In this paper, we propose Path Aggregation Network (PANet) aiming at boosting information flow in proposal-based instance segmentation framework. Specifically, we enhance the entire feature hierarchy with accurate localization signals in lower layers by bottom-up path augmentation, which shortens the information path between lower layers and topmost feature. We present adaptive feature pooling, which links feature grid and all feature levels to make useful information in each feature level propagate directly to following proposal subnetworks. A complementary branch capturing different views for each proposal is created to further improve mask prediction. These improvements are simple to implement, with subtle extra computational overhead. Our PANet reaches the 1st place in the COCO 2017 Challenge Instance Segmentation task and the 2nd place in Object Detection task without large-batch training. It is also state-of-the-art on MVD and Cityscapes. Code is available at this https URL",
"Feature pyramids are a basic component in recognition systems for detecting objects at different scales. But pyramid representations have been avoided in recent object detectors that are based on deep convolutional networks, partially because they are slow to compute and memory intensive. In this paper, we exploit the inherent multi-scale, pyramidal hierarchy of deep convolutional networks to construct feature pyramids with marginal extra cost. A top-down architecture with lateral connections is developed for building high-level semantic feature maps at all scales. This architecture, called a Feature Pyramid Network (FPN), shows significant improvement as a generic feature extractor in several applications. Using a basic Faster R-CNN system, our method achieves state-of-the-art single-model results on the COCO detection benchmark without bells and whistles, surpassing all existing single-model entries including those from the COCO 2016 challenge winners. In addition, our method can run at 5 FPS on a GPU and thus is a practical and accurate solution to multi-scale object detection. Code will be made publicly available.",
"State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [1] and Fast R-CNN [2] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features—using the recently popular terminology of neural networks with ’attention’ mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model [3] , our detection system has a frame rate of 5 fps ( including all steps ) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available.",
""
]
} |
1902.02678 | 2914044559 | In this work, we propose a single deep neural network for panoptic segmentation, for which the goal is to provide each individual pixel of an input image with a class label, as in semantic segmentation, as well as a unique identifier for specific objects in an image, following instance segmentation. Our network makes joint semantic and instance segmentation predictions and combines these to form an output in the panoptic format. This has two main benefits: firstly, the entire panoptic prediction is made in one pass, reducing the required computation time and resources; secondly, by learning the tasks jointly, information is shared between the two tasks, thereby improving performance. Our network is evaluated on two street scene datasets: Cityscapes and Mapillary Vistas. By leveraging information exchange and improving the merging heuristics, we increase the performance of the single network, and achieve a score of 23.9 on the Panoptic Quality (PQ) metric on Mapillary Vistas validation, with an input resolution of 640 x 900 pixels. On Cityscapes validation, our method achieves a PQ score of 45.9 with an input resolution of 512 x 1024 pixels. Moreover, our method decreases the prediction time by a factor of 2 with respect to separate networks. | We have seen that, so far, separate instance segmentation and semantic segmentation networks have been used for panoptic segmentation @cite_7 . As a result, it was possible to use networks that are optimized for these specific tasks. However, there are also downsides to this method. If the predictions were made using a single network, computation time and resources could be decreased, because fewer parameters would be required. This is the case since a significant part of the processing is spent on low-level feature extraction layers that can be shared between different branches in a network. 
Moreover, jointly learning multiple tasks has the potential of improving performance, because information can be shared between different parts of the network. Therefore, we propose to address the task of panoptic segmentation by using a single network that makes parallel semantic segmentation and instance segmentation predictions, and fuses these outputs using heuristics. | {
"cite_N": [
"@cite_7"
],
"mid": [
"2798040152"
],
"abstract": [
"We propose and study a task we name panoptic segmentation (PS). Panoptic segmentation unifies the typically distinct tasks of semantic segmentation (assign a class label to each pixel) and instance segmentation (detect and segment each object instance). The proposed task requires generating a coherent scene segmentation that is rich and complete, an important step toward real-world vision systems. While early work in computer vision addressed related image scene parsing tasks, these are not currently popular, possibly due to lack of appropriate metrics or associated recognition challenges. To address this, we propose a novel panoptic quality (PQ) metric that captures performance for all classes (stuff and things) in an interpretable and unified manner. Using the proposed metric, we perform a rigorous study of both human and machine performance for PS on three existing datasets, revealing interesting insights about the task. The aim of our work is to revive the interest of the community in a more unified view of image segmentation."
]
} |
1902.02678 | 2914044559 | In this work, we propose a single deep neural network for panoptic segmentation, for which the goal is to provide each individual pixel of an input image with a class label, as in semantic segmentation, as well as a unique identifier for specific objects in an image, following instance segmentation. Our network makes joint semantic and instance segmentation predictions and combines these to form an output in the panoptic format. This has two main benefits: firstly, the entire panoptic prediction is made in one pass, reducing the required computation time and resources; secondly, by learning the tasks jointly, information is shared between the two tasks, thereby improving performance. Our network is evaluated on two street scene datasets: Cityscapes and Mapillary Vistas. By leveraging information exchange and improving the merging heuristics, we increase the performance of the single network, and achieve a score of 23.9 on the Panoptic Quality (PQ) metric on Mapillary Vistas validation, with an input resolution of 640 x 900 pixels. On Cityscapes validation, our method achieves a PQ score of 45.9 with an input resolution of 512 x 1024 pixels. Moreover, our method decreases the prediction time by a factor of 2 with respect to separate networks. | Furthermore, we leverage the single network architecture by introducing additional information flow within the network, to enhance the overall performance of the model. In @cite_19 and @cite_2 , it has been shown that additional information flow between different tasks can improve the performance of the individual subtasks. In our network, it should improve the performance of the network as a whole. | {
"cite_N": [
"@cite_19",
"@cite_2"
],
"mid": [
"2613599172",
"2520951797"
],
"abstract": [
"Aggregating extra features has been considered as an effective approach to boost traditional pedestrian detection methods. However, there is still a lack of studies on whether and how CNN-based pedestrian detectors can benefit from these extra features. The first contribution of this paper is exploring this issue by aggregating extra features into CNN-based pedestrian detection framework. Through extensive experiments, we evaluate the effects of different kinds of extra features quantitatively. Moreover, we propose a novel network architecture, namely HyperLearner, to jointly learn pedestrian detection as well as the given extra feature. By multi-task training, HyperLearner is able to utilize the information of given features and improve detection performance without extra inputs in inference. The experimental results on multiple pedestrian benchmarks validate the effectiveness of the proposed HyperLearner.",
"The field of object detection has seen dramatic performance improvements in the last few years. Most of these gains are attributed to bottom-up, feedforward ConvNet frameworks. However, in case of humans, top-down information, context and feedback play an important role in doing object detection. This paper investigates how we can incorporate top-down information and feedback in the state-of-the-art Faster R-CNN framework. Specifically, we propose to: (a) augment Faster R-CNN with a semantic segmentation network; (b) use segmentation for top-down contextual priming; (c) use segmentation to provide top-down iterative feedback using two stage training. Our results indicate that all three contributions improve the performance on object detection, semantic segmentation and region proposal generation."
]
} |
1902.02678 | 2914044559 | In this work, we propose a single deep neural network for panoptic segmentation, for which the goal is to provide each individual pixel of an input image with a class label, as in semantic segmentation, as well as a unique identifier for specific objects in an image, following instance segmentation. Our network makes joint semantic and instance segmentation predictions and combines these to form an output in the panoptic format. This has two main benefits: firstly, the entire panoptic prediction is made in one pass, reducing the required computation time and resources; secondly, by learning the tasks jointly, information is shared between the two tasks, thereby improving performance. Our network is evaluated on two street scene datasets: Cityscapes and Mapillary Vistas. By leveraging information exchange and improving the merging heuristics, we increase the performance of the single network, and achieve a score of 23.9 on the Panoptic Quality (PQ) metric on Mapillary Vistas validation, with an input resolution of 640 x 900 pixels. On Cityscapes validation, our method achieves a PQ score of 45.9 with an input resolution of 512 x 1024 pixels. Moreover, our method decreases the prediction time by a factor of 2 with respect to separate networks. | Concurrent work also focusses on a unified single network for panoptic segmentation. In @cite_10 , the method consists of a unified network similar to ours, as well as a consistency loss to make the output more consistent, but there is no additional information flow to boost the performance. AUNet @cite_9 does leverage information exchange, but it requires complicated attention and masking operations. Our framework is designed to be simple and generally applicable, while leveraging the architecture by using additional information flow to improve the performance. The increase in related concurrent work highlights the relevance of creating a single unified network for panoptic segmentation. | {
"cite_N": [
"@cite_9",
"@cite_10"
],
"mid": [
"2904700592",
"2902499724"
],
"abstract": [
"This paper studies panoptic segmentation, a recently proposed task which segments foreground (FG) objects at the instance level as well as background (BG) contents at the semantic level. Existing methods mostly dealt with these two problems separately, but in this paper, we reveal the underlying relationship between them, in particular, FG objects provide complementary cues to assist BG understanding. Our approach, named the Attention-guided Unified Network (AUNet), is a unified framework with two branches for FG and BG segmentation simultaneously. Two sources of attentions are added to the BG branch, namely, RPN and FG segmentation mask to provide object-level and pixel-level attentions, respectively. Our approach is generalized to different backbones with consistent accuracy gain in both FG and BG segmentation, and also sets new state-of-the-arts both in the MS-COCO (46.5 PQ) and Cityscapes (59.0 PQ) benchmarks.",
"We propose an end-to-end learning approach for panoptic segmentation, a novel task unifying instance (things) and semantic (stuff) segmentation. Our model, TASCNet, uses feature maps from a shared backbone network to predict in a single feed-forward pass both things and stuff segmentations. We explicitly constrain these two output distributions through a global things and stuff binary mask to enforce cross-task consistency. Our proposed unified network is competitive with the state of the art on several benchmarks for panoptic segmentation as well as on the individual semantic and instance segmentation tasks."
]
} |
1902.02721 | 2944715696 | We address the problem of graph classification based only on structural information. Inspired by natural language processing techniques (NLP), our model sequentially embeds information to estimate class membership probabilities. Besides, we experiment with NLP-like variational regularization techniques, making the model predict the next node in the sequence as it reads it. We experimentally show that our model achieves state-of-the-art classification results on several standard molecular datasets. Finally, we perform a qualitative analysis and give some insights on whether the node prediction helps the model better classify graphs. | Some random walk models are used to address node classification or graph classification problems. The idea is to sequentially walk on a graph, one node at a time in a random fashion, and agglomerate information. The graph is represented by a discrete-time Markov chain where each node is associated with a state of the chain, and the transition probabilities are proportional to the corresponding entries of the adjacency matrix. More recently, @cite_0 and @cite_1 transform a graph into a sequence of fixed-size vectors. Each of these vectors is an embedding of one node of the graph. The sequence of embeddings is then fed to a recurrent neural network (RNN). The two main challenges in this kind of approach are the design of the embedding function for the nodes and the order in which the embeddings are given to the recurrent neural network. | {
"cite_N": [
"@cite_0",
"@cite_1"
],
"mid": [
"2804115458",
"2788962621"
],
"abstract": [
"Recently a variety of methods have been developed to encode graphs into low-dimensional vectors that can be easily exploited by machine learning algorithms. The majority of these methods start by embedding the graph nodes into a low-dimensional vector space, followed by using some scheme to aggregate the node embeddings. In this work, we develop a new approach to learn graph-level representations, which includes a combination of unsupervised and supervised learning components. We start by learning a set of node representations in an unsupervised fashion. Graph nodes are mapped into node sequences sampled from random walk approaches approximated by the Gumbel-Softmax distribution. Recurrent neural network (RNN) units are modified to accommodate both the node representations as well as their neighborhood information. Experiments on standard graph classification benchmarks demonstrate that our proposed approach achieves superior or comparable performance relative to the state-of-the-art algorithms in terms of convergence speed and classification accuracy. We further illustrate the effectiveness of the different components used by our approach.",
"Modeling and generating graphs is fundamental for studying networks in biology, engineering, and social sciences. However, modeling complex distributions over graphs and then efficiently sampling from these distributions is challenging due to the non-unique, high-dimensional nature of graphs and the complex, non-local dependencies that exist between edges in a given graph. Here we propose GraphRNN, a deep autoregressive model that addresses the above challenges and approximates any distribution of graphs with minimal assumptions about their structure. GraphRNN learns to generate graphs by training on a representative set of graphs and decomposes the graph generation process into a sequence of node and edge formations, conditioned on the graph structure generated so far. In order to quantitatively evaluate the performance of GraphRNN, we introduce a benchmark suite of datasets, baselines and novel evaluation metrics based on Maximum Mean Discrepancy, which measure distances between sets of graphs. Our experiments show that GraphRNN significantly outperforms all baselines, learning to generate diverse graphs that match the structural characteristics of a target set, while also scaling to graphs 50 times larger than previous deep models."
]
} |
1902.02610 | 2950673187 | Software testing is an important phase in the software development life-cycle because it helps in identifying bugs in a software system before it is shipped into the hands of its end users. There are numerous studies on how developers test general-purpose software applications. The idiosyncrasies of mobile software applications, however, set mobile apps apart from general-purpose systems (e.g., desktop, stand-alone applications, web services). This paper investigates working habits and challenges of mobile software developers with respect to testing. A key finding of our exhaustive study, using 1000 Android apps, demonstrates that mobile apps are still tested in a very ad hoc way, if tested at all. However, we show that, as in other types of software, testing increases the quality of apps (demonstrated in user ratings and number of code issues). Furthermore, we find evidence that tests are essential when it comes to engaging the community to contribute to mobile open source software. We discuss reasons and potential directions to address our findings. Yet another relevant finding of our study is that Continuous Integration and Continuous Deployment (CI/CD) pipelines are rare in the mobile apps world (only 26% of the apps are developed in projects employing CI/CD) --- we argue that one of the main reasons is due to the lack of exhaustive and automatic testing. | coppola2017scripted studied the fragility of GUI testing in Android apps @cite_0 . The authors collected 18,930 open source apps available on Github and analyzed the prevalence of five scripted GUI testing technologies. However, toy apps or forks of real apps were not factored out from the sample --- we understand that real apps were underrepresented. Thus, we restrict our study to apps that were published in F-droid. In addition, we extend our study to a broader set of testing technologies, while studying relationships between automated testing and other metrics of a project. | {
"cite_N": [
"@cite_0"
],
"mid": [
"2963620568"
],
"abstract": [
"Background. Evidence suggests that mobile applications are not tested as thoroughly as their desktop counterparts. In particular GUI testing is generally limited. Like web-based applications, mobile apps suffer from GUI test fragility, i.e. GUI test classes failing due to minor modifications in the GUI, without the application functionalities being altered. Aims. The objective of our study is to examine the diffusion of GUI testing on Android, and the amount of changes required to keep test classes up to date, and in particular the changes due to GUI test fragility. We define metrics to characterize the modifications and evolution of test classes and test methods, and proxies to estimate fragility-induced changes. Method. To perform our experiments, we selected six widely used open-source tools for scripted GUI testing of mobile applications previously described in the literature. We have mined the repositories on GitHub that used those tools, and computed our set of metrics. Results. We found that none of the considered GUI testing frameworks achieved a major diffusion among the open-source Android projects available on GitHub. For projects with GUI tests, we found that test suites have to be modified often, specifically 5%--10% of developers' modified LOCs belong to tests, and that a relevant portion (60% on average) of such modifications are induced by fragility. Conclusions. Fragility of GUI test classes constitutes a relevant concern, possibly being an obstacle for developers to adopt automated scripted GUI tests. This first evaluation and measure of fragility of Android scripted GUI testing can constitute a benchmark for developers, and the basis for the definition of a taxonomy of fragility causes, and actionable guidelines to mitigate the issue."
]
} |
1902.02544 | 2912858941 | With the dawn of the Big Data era, data sets are growing rapidly. Data is streaming from everywhere - from cameras, mobile phones, cars, and other electronic devices. Clustering streaming data is a very challenging problem. Unlike the traditional clustering algorithms where the dataset can be stored and scanned multiple times, clustering streaming data has to satisfy constraints such as limited memory size, real-time response, unknown data statistics and an unknown number of clusters. In this paper, we present a novel online clustering algorithm which can be used to cluster streaming data without knowing the number of clusters a priori. Results on both synthetic and real datasets show that the proposed algorithm produces partitions which are close to what you could get if you clustered the whole data at one time. | A hierarchical approach for clustering streaming data is proposed by the CluStream algorithm @cite_2 . The CluStream algorithm extends the notion of a feature vector of the BIRCH @cite_5 method to create micro-clusters in the online phase. In the offline phase, the micro-clusters are clustered into bigger clusters using the K-means algorithm. Like the above methods, the number of clusters should be provided to this algorithm. | {
"cite_N": [
"@cite_5",
"@cite_2"
],
"mid": [
"2095897464",
"2170936641"
],
"abstract": [
"Finding useful patterns in large datasets has attracted considerable interest recently, and one of the most widely studied problems in this area is the identification of clusters, or densely populated regions, in a multi-dimensional dataset. Prior work does not adequately address the problem of large datasets and minimization of I/O costs. This paper presents a data clustering method named BIRCH (Balanced Iterative Reducing and Clustering using Hierarchies), and demonstrates that it is especially suitable for very large databases. BIRCH incrementally and dynamically clusters incoming multi-dimensional metric data points to try to produce the best quality clustering with the available resources (i.e., available memory and time constraints). BIRCH can typically find a good clustering with a single scan of the data, and improve the quality further with a few additional scans. BIRCH is also the first clustering algorithm proposed in the database area to handle \"noise\" (data points that are not part of the underlying pattern) effectively. We evaluate BIRCH's time/space efficiency, data input order sensitivity, and clustering quality through several experiments. We also present a performance comparison of BIRCH versus CLARANS, a clustering method proposed recently for large datasets, and show that BIRCH is consistently superior.",
"The clustering problem is a difficult problem for the data stream domain. This is because the large volumes of data arriving in a stream render most traditional algorithms too inefficient. In recent years, a few one-pass clustering algorithms have been developed for the data stream problem. Although such methods address the scalability issues of the clustering problem, they are generally blind to the evolution of the data and do not address the following issues: (1) The quality of the clusters is poor when the data evolves considerably over time. (2) A data stream clustering algorithm requires much greater functionality in discovering and exploring clusters over different portions of the stream. The widely used practice of viewing data stream clustering algorithms as a class of one-pass clustering algorithms is not very useful from an application point of view. For example, a simple one-pass clustering algorithm over an entire data stream of a few years is dominated by the outdated history of the stream. The exploration of the stream over different time windows can provide the users with a much deeper understanding of the evolving behavior of the clusters. At the same time, it is not possible to simultaneously perform dynamic clustering over all possible time horizons for a data stream of even moderately large volume. This paper discusses a fundamentally different philosophy for data stream clustering which is guided by application-centered requirements. The idea is to divide the clustering process into an online component which periodically stores detailed summary statistics and an offline component which uses only this summary statistics. The offline component is utilized by the analyst who can use a wide variety of inputs (such as time horizon or number of clusters) in order to provide a quick understanding of the broad clusters in the data stream.
The problems of efficient choice, storage, and use of this statistical data for a fast data stream turn out to be quite tricky. For this purpose, we use the concepts of a pyramidal time frame in conjunction with a microclustering approach. Our performance experiments over a number of real and synthetic data sets illustrate the effectiveness, efficiency, and insights provided by our approach."
]
} |
1902.02544 | 2912858941 | With the dawn of the Big Data era, data sets are growing rapidly. Data is streaming from everywhere - from cameras, mobile phones, cars, and other electronic devices. Clustering streaming data is a very challenging problem. Unlike the traditional clustering algorithms where the dataset can be stored and scanned multiple times, clustering streaming data has to satisfy constraints such as limited memory size, real-time response, unknown data statistics and an unknown number of clusters. In this paper, we present a novel online clustering algorithm which can be used to cluster streaming data without knowing the number of clusters a priori. Results on both synthetic and real datasets show that the proposed algorithm produces partitions which are close to what you could get if you clustered the whole data at one time. | A density-based online clustering method is the DenStream @cite_8 . The DenStream algorithm extends the notion of core points of the DBSCAN algorithm to a new concept of micro-clusters. It also has two phases. In the online phase, the algorithm maintains micro-cluster structures which approximately capture the density of the data stream. In order to create the final data partition, in the offline phase, a variant of the DBSCAN algorithm is applied to those clusters. The DenStream algorithm should be provided with a cluster radius threshold and data fading rate. Providing the cluster radius is a main drawback of the algorithm. As stated, in streaming data, the data statistics may vary over time and therefore a single cluster radius may not fit all the data. | {
"cite_N": [
"@cite_8"
],
"mid": [
"182707955"
],
"abstract": [
"Clustering is an important task in mining evolving data streams. Beside the limited memory and one-pass constraints, the nature of evolving data streams implies the following requirements for stream clustering: no assumption on the number of clusters, discovery of clusters with arbitrary shape and ability to handle outliers. While a lot of clustering algorithms for data streams have been proposed, they offer no solution to the combination of these requirements. In this paper, we present DenStream, a new approach for discovering clusters in an evolving data stream. The “dense” micro-cluster (named core-micro-cluster) is introduced to summarize the clusters with arbitrary shape, while the potential core-micro-cluster and outlier micro-cluster structures are proposed to maintain and distinguish the potential clusters and outliers. A novel pruning strategy is designed based on these concepts, which guarantees the precision of the weights of the micro-clusters with limited memory. Our performance study over a number of real and synthetic data sets demonstrates the effectiveness and efficiency of our method."
]
} |
1902.02636 | 2964781572 | We present and characterize a simple and reliable method for detecting pointing gestures suitable for human-robot interaction applications using a commodity RGB-D camera. We exploit an existing Deep CNN model to robustly find hands and faces in RGB images, then examine the corresponding depth channel pixels to obtain full 3D pointing vectors. We test several methods of estimating the hand end-point of the pointing vector. The system runs at better than 30Hz on commodity hardware: exceeding the frame rate of typical RGB-D sensors. An estimate of the absolute pointing accuracy is found empirically by comparison with ground-truth data from a VICON motion-capture system, and the useful interaction volume established. Finally, we show an end-to-end test where a robot estimates where the pointing vector intersects the ground plane, and report the accuracy obtained. We provide source code as a ROS node, with the intention of contributing a commodity implementation of this common component in HRI systems. | There have been many studies using pointing gesture detection for human-robot interaction. In terms of capturing the human body and its postures, various approaches have been proposed. Pointing gesture detection began with the help of wearable devices, such as glove-based devices @cite_5 @cite_21 . With recent achievements in computer vision, a new era began: vision-based gesture recognition methods are reviewed in @cite_1 , and hand gesture recognition in particular in @cite_2 @cite_20 . In vision-based methods, the camera plays an important role: stereo cameras, multiple cameras, Time-of-Flight (TOF) cameras and depth cameras are different approaches for solving pointing gesture detection. @cite_4 @cite_6 proposed multi-camera approaches, which are promising but less convenient for mobile-robot HRI. A TOF camera was used in @cite_17 . | {
"cite_N": [
"@cite_4",
"@cite_21",
"@cite_1",
"@cite_6",
"@cite_2",
"@cite_5",
"@cite_20",
"@cite_17"
],
"mid": [
"78429647",
"",
"2168392347",
"2118988519",
"2094645604",
"2124120479",
"1977995219",
"2071008727"
],
"abstract": [
"We propose a multi-camera system that can detect omni-directional pointing gestures and estimate the direction of pointing. In general, when a human points at something, their target exists directly in front of the direction they are facing. Therefore, we regard the direction of pointing as the direction represented by the straight line that connects the face position with the hand position. First, the multiple cameras detect the face region by skin colors and estimate the face direction with the discrete face direction feature classes. Second, we estimate the precise direction that the subject is facing with the integrated information from multiple cameras and decide which camera captures the frontal view of the face the best. This camera is labeled the center camera. Third, we select a pair of cameras on both sides of the center camera as a stereo camera and detect the spatial positions of the face and hand. Finally, the target that the subject is pointing to is found on the straight line that connects the face position with the hand position. Experiments show that out system can achieve a mean error of 1.94\" with a variance of 4.37 throughout the pointing direction.",
"",
"Gesture recognition pertains to recognizing meaningful expressions of motion by a human, involving the hands, arms, face, head, and or body. It is of utmost importance in designing an intelligent and efficient human-computer interface. The applications of gesture recognition are manifold, ranging from sign language through medical rehabilitation to virtual reality. In this paper, we provide a survey on gesture recognition with particular emphasis on hand gestures and facial expressions. Applications involving hidden Markov models, particle filtering and condensation, finite-state machines, optical flow, skin color, and connectionist models are discussed in detail. Existing challenges and future research possibilities are also highlighted",
"We present an algorithm for the real-time detection and interpretation of pointing gestures, performed with one or both arms. The pointing gestures are used as an intuitive tracking interface for a user interacting with an immersive virtual environment. We have defined the pointing direction to correspond to the line of sight connecting the eyes and the pointing fingertip. If a pointing gesture is being performed, the algorithm detects and tracks the position of the user's eyes and fingertip and computes the origin and direction of that gesture with respect to a real-world coordinate system. The algorithm is based on the body silhouettes extracted from multiple views and uses point correspondences to reconstruct in 3D the points of interest. The system doesn't require initial poses, special clothing, or markers.",
"This paper presents a literature review on the use of depth for hand tracking and gesture recognition. The survey examines 37 papers describing depth-based gesture recognition systems in terms of (1) the hand localization and gesture classification methods developed and used, (2) the applications where gesture recognition has been tested, and (3) the effects of the low-cost Kinect and OpenNI software libraries on gesture recognition research. The survey is organized around a novel model of the hand gesture recognition process. In the reviewed literature, 13 methods were found for hand localization and 11 were found for gesture classification. 24 of the papers included real-world applications to test a gesture recognition system, but only 8 application categories were found (and three applications accounted for 18 of the papers). The papers that use the Kinect and the OpenNI libraries for hand tracking tend to focus more on applications than on localization and classification methods, and show that the OpenNI hand tracking method is good enough for the applications tested thus far. However, the limitations of the Kinect and other depth sensors for gesture recognition have yet to be tested in challenging applications and environments.",
"An experiment was conducted to investigate gesture recognition with a human hand manipulating the DataGlove, an electronically instrumented glove which provides information about finger and hand position. A total of 22 gestures in three classes were investigated. The first class contained gestures which only involved finger flexure. The second class contained gestures which required both finger flexure and hand orientation. The third class of gestures required finger motion in addition to flexure and orientation. Only four sensors were necessary to positively identify specific gestures from groups of up to 15 gestures. The results show the specific number of sensors required to positively identify a gesture from a group. This depends on the number of gestures in a group, as well as the class of gestures. >",
"As computers become more pervasive in society, facilitating natural human---computer interaction (HCI) will have a positive impact on their use. Hence, there has been growing interest in the development of new approaches and technologies for bridging the human---computer barrier. The ultimate aim is to bring HCI to a regime where interactions with computers will be as natural as an interaction between humans, and to this end, incorporating gestures in HCI is an important research area. Gestures have long been considered as an interaction technique that can potentially deliver more natural, creative and intuitive methods for communicating with our computers. This paper provides an analysis of comparative surveys done in this area. The use of hand gestures as a natural interface serves as a motivating force for research in gesture taxonomies, its representations and recognition techniques, software platforms and frameworks which is discussed briefly in this paper. It focuses on the three main phases of hand gesture recognition i.e. detection, tracking and recognition. Different application which employs hand gestures for efficient interaction has been discussed under core and advanced application domains. This paper also provides an analysis of existing literature related to gesture recognition systems for human computer interaction by categorizing it under different key parameters. It further discusses the advances that are needed to further improvise the present hand gesture recognition systems for future perspective that can be widely used for efficient human computer interaction. The main goal of this survey is to provide researchers in the field of gesture based HCI with a summary of progress achieved to date and to help identify areas where further research is needed.",
"Pointing gestures are a common and intuitive way to draw somebody's attention to a certain object. While humans can easily interpret robot gestures, the perception of human behavior using robot sensors is more difficult. In this work, we propose a method for perceiving pointing gestures using a Time-of-Flight (ToF) camera. To determine the intended pointing target, frequently the line between a person's eyes and hand is assumed to be the pointing direction. However, since people tend to keep the line-of-sight free while they are pointing, this simple approximation is inadequate. Moreover, depending on the distance and angle to the pointing target, the line between shoulder and hand or elbow and hand may yield better interpretations of the pointing direction. In order to achieve a better estimate, we extract a set of body features from depth and amplitude images of a ToF camera and train a model of pointing directions using Gaussian Process Regression. We evaluate the accuracy of the estimated pointing direction in a quantitative study. The results show that our learned model achieves far better accuracy than simple criteria like head-hand, shoulder-hand, or elbow-hand line."
]
} |
1902.02502 | 2949909594 | Humans perceive the seemingly chaotic world in a structured and compositional way with the prerequisite of being able to segregate conceptual entities from the complex visual scenes. The mechanism of grouping basic visual elements of scenes into conceptual entities is termed as perceptual grouping. In this work, we propose a new type of spatial mixture models with learnable priors for perceptual grouping. Different from existing methods, the proposed method disentangles the representation of an object into shape' and appearance' which are modeled separately by the mixture weights and the conditional probability distributions. More specifically, each object in the visual scene is modeled by one mixture component, whose mixture weights and the parameter of the conditional probability distribution are generated by two neural networks, respectively. The mixture weights focus on modeling spatial dependencies (i.e., shape) and the conditional probability distributions deal with intra-object variations (i.e., appearance). In addition, the background is separately modeled as a special component complementary to the foreground objects. Our extensive empirical tests on two perceptual grouping datasets demonstrate that the proposed method outperforms the state-of-the-art methods under most experimental configurations. The learned conceptual entities are generalizable to novel visual scenes and insensitive to the diversity of objects. | Several approaches have been proposed to solve the perceptual grouping problem and related tasks in recent years. Tagger @cite_11 combines the iterative amortized grouping (TAG) mechanism and the Ladder Network @cite_5 to learn perceptual grouping in an unsupervised manner. It utilizes multiple copies of the same neural network to model different groups in the visual scene and iteratively refine the reconstruction result. 
RTagger @cite_14 replaces the Ladder Network with the Recurrent Ladder Network and extends Tagger to sequential data. Neural Expectation Maximization (N-EM) @cite_15 tackles the problem based on the Expectation-Maximization (EM) framework @cite_22 , and achieves performance comparable to Tagger with far fewer parameters. Relational Neural Expectation Maximization (R-NEM) @cite_20 integrates N-EM with a type of Message Passing Neural Network @cite_3 to learn common-sense physical reasoning based on the compositional object representations extracted by N-EM. | {
"cite_N": [
"@cite_14",
"@cite_22",
"@cite_3",
"@cite_5",
"@cite_15",
"@cite_20",
"@cite_11"
],
"mid": [
"2963575501",
"2049633694",
"2606780347",
"830076066",
"2747264013",
"2963370555",
"2962889261"
],
"abstract": [
"We propose a recurrent extension of the Ladder networks whose structure is motivated by the inference required in hierarchical latent variable models. We demonstrate that the recurrent Ladder is able to handle a wide variety of complex learning tasks that benefit from iterative inference and temporal modeling. The architecture shows close-to-optimal results on temporal modeling of video data, competitive results on music modeling, and improved perceptual grouping based on higher order abstractions, such as stochastic textures and motion cues. We present results for fully supervised, semi-supervised, and unsupervised tasks. The results suggest that the proposed architecture and principles are powerful tools for learning a hierarchy of abstractions, learning iterative inference and handling temporal information.",
"",
"Supervised learning on molecules has incredible potential to be useful in chemistry, drug discovery, and materials science. Luckily, several promising and closely related neural network models invariant to molecular symmetries have already been described in the literature. These models learn a message passing algorithm and aggregation procedure to compute a function of their entire input graph. At this point, the next step is to find a particularly effective variant of this general approach and apply it to chemical prediction benchmarks until we either solve them or reach the limits of the approach. In this paper, we reformulate existing models into a single common framework we call Message Passing Neural Networks (MPNNs) and explore additional novel variations within this framework. Using MPNNs we demonstrate state of the art results on an important molecular property prediction benchmark; these results are strong enough that we believe future work should focus on datasets with larger molecules or more accurate ground truth labels.",
"We combine supervised learning with unsupervised learning in deep neural networks. The proposed model is trained to simultaneously minimize the sum of supervised and unsupervised cost functions by backpropagation, avoiding the need for layer-wise pre-training. Our work builds on top of the Ladder network proposed by Valpola [1] which we extend by combining the model with supervision. We show that the resulting model reaches state-of-the-art performance in semi-supervised MNIST and CIFAR-10 classification in addition to permutation-invariant MNIST classification with all labels.",
"Many real world tasks such as reasoning and physical interaction require identification and manipulation of conceptual entities. A first step towards solving these tasks is the automated discovery of distributed symbol-like representations. In this paper, we explicitly formalize this problem as inference in a spatial mixture model where each component is parametrized by a neural network. Based on the Expectation Maximization framework we then derive a differentiable clustering method that simultaneously learns how to group and represent individual entities. We evaluate our method on the (sequential) perceptual grouping task and find that it is able to accurately recover the constituent objects. We demonstrate that the learned representations are useful for next-step prediction.",
"Common-sense physical reasoning is an essential ingredient for any intelligent agent operating in the real-world. For example, it can be used to simulate the environment, or to infer the state of parts of the world that are currently unobserved. In order to match real-world conditions this causal knowledge must be learned without access to supervised data. To address this problem we present a novel method that learns to discover objects, and models their physical interactions from raw visual images in a purely unsupervised fashion. It incorporates prior knowledge about the compositional nature of human perception to factor interactions between object-pairs and learn efficiently. On videos of bouncing balls we show the superior modelling capabilities of our method compared to other unsupervised neural approaches that do not incorporate such prior knowledge. We demonstrate its ability to handle occlusion and show that it can extrapolate learned knowledge to environments with varying numbers of objects.",
"We present a framework for efficient perceptual inference that explicitly reasons about the segmentation of its inputs and features. Rather than being trained for any specific segmentation, our framework learns the grouping process in an unsupervised manner or alongside any supervised task. We enable a neural network to group the representations of different objects in an iterative manner through a differentiable mechanism. We achieve very fast convergence by allowing the system to amortize the joint iterative inference of the groupings and their representations. In contrast to many other recently proposed methods for addressing multi-object scenes, our system does not assume the inputs to be images and can therefore directly handle other modalities. We evaluate our method on multi-digit classification of very cluttered images that require texture segmentation. Remarkably our method achieves improved classification performance over convolutional networks despite being fully connected, by making use of the grouping mechanism. Furthermore, we observe that our system greatly improves upon the semi-supervised result of a baseline Ladder network on our dataset. These results are evidence that grouping is a powerful tool that can help to improve sample efficiency."
]
} |
1902.02497 | 2915058596 | With the widespread applications of deep convolutional neural networks (DCNNs), it becomes increasingly important for DCNNs not only to make accurate predictions but also to explain how they make their decisions. In this work, we propose a CHannel-wise disentangled InterPretation (CHIP) model to give the visual interpretation to the predictions of DCNNs. The proposed model distills the class-discriminative importance of channels in networks by utilizing the sparse regularization. Here, we first introduce the network perturbation technique to learn the model. The proposed model is able not only to distill the global perspective knowledge from networks but also to present the class-discriminative visual interpretation for specific predictions of networks. It is noteworthy that the proposed model is able to interpret different layers of networks without re-training. By combining the distilled interpretation knowledge in different layers, we further propose the Refined CHIP visual interpretation that is both high-resolution and class-discriminative. Experimental results on the standard dataset demonstrate that the proposed model provides promising visual interpretation for the predictions of networks in the image classification task compared with existing visual interpretation methods. Besides, the proposed method outperforms related approaches in the ILSVRC 2015 weakly-supervised localization task. | One approach to the task is to utilize the visual interpretation distilled from the network to localize the target object @cite_18 , @cite_7 . From this point of view, a visual interpretation model can be applied to the weakly-supervised localization task. In CAM, the network must be modified into a particular kind of architecture to learn the class activation map, which can be used to generate an object bounding box for weakly-supervised localization.
In the modified architectures, the convolutional layer is followed by the global average pooling layer and then the softmax layer. Compared with the original network, the modified architecture may achieve inferior classification accuracy. Therefore, when addressing the localization task based on class activation maps, the localization accuracy is also limited by the inferior accuracy of the modified network. | {
"cite_N": [
"@cite_18",
"@cite_7"
],
"mid": [
"2295107390",
"2962858109"
],
"abstract": [
"In this work, we revisit the global average pooling layer proposed in [13], and shed light on how it explicitly enables the convolutional neural network (CNN) to have remarkable localization ability despite being trained on imagelevel labels. While this technique was previously proposed as a means for regularizing training, we find that it actually builds a generic localizable deep representation that exposes the implicit attention of CNNs on an image. Despite the apparent simplicity of global average pooling, we are able to achieve 37.1 top-5 error for object localization on ILSVRC 2014 without training on any bounding box annotation. We demonstrate in a variety of experiments that our network is able to localize the discriminative image regions despite just being trained for solving classification task1.",
"We propose a technique for producing ‘visual explanations’ for decisions from a large class of Convolutional Neural Network (CNN)-based models, making them more transparent. Our approach – Gradient-weighted Class Activation Mapping (Grad-CAM), uses the gradients of any target concept (say logits for ‘dog’ or even a caption), flowing into the final convolutional layer to produce a coarse localization map highlighting the important regions in the image for predicting the concept. Unlike previous approaches, Grad- CAM is applicable to a wide variety of CNN model-families: (1) CNNs with fully-connected layers (e.g. VGG), (2) CNNs used for structured outputs (e.g. captioning), (3) CNNs used in tasks with multi-modal inputs (e.g. visual question answering) or reinforcement learning, without architectural changes or re-training. We combine Grad-CAM with existing fine-grained visualizations to create a high-resolution class-discriminative visualization, Guided Grad-CAM, and apply it to image classification, image captioning, and visual question answering (VQA) models, including ResNet-based architectures. In the context of image classification models, our visualizations (a) lend insights into failure modes of these models (showing that seemingly unreasonable predictions have reasonable explanations), (b) outperform previous methods on the ILSVRC-15 weakly-supervised localization task, (c) are more faithful to the underlying model, and (d) help achieve model generalization by identifying dataset bias. For image captioning and VQA, our visualizations show even non-attention based models can localize inputs. Finally, we design and conduct human studies to measure if Grad-CAM explanations help users establish appropriate trust in predictions from deep networks and show that Grad-CAM helps untrained users successfully discern a ‘stronger’ deep network from a ‘weaker’ one even when both make identical predictions. 
Our code is available at https: github.com ramprs grad-cam along with a demo on CloudCV [2] and video at youtu.be COjUB9Izk6E."
]
} |
1902.02380 | 2914294010 | Abstract Recurrent neural networks have proved to be an effective method for statistical language modeling. However, in practice their memory and run-time complexity are usually too large to be implemented in real-time offline mobile applications. In this paper we consider several compression techniques for recurrent neural networks including Long Short-Term Memory models. We pay particular attention to the high-dimensional output problem caused by the very large vocabulary size. We focus on effective compression methods in the context of their deployment on devices: pruning, quantization, and matrix decomposition approaches (low-rank factorization and tensor train decomposition, in particular). For each model we investigate the trade-off between its size, suitability for fast inference and perplexity. We propose a general pipeline for applying the most suitable methods to compress recurrent neural networks for language modeling. It has been shown in the experimental study with the Penn Treebank (PTB) dataset that the most efficient results in terms of speed and compression–perplexity balance are obtained by matrix decomposition techniques. | The first kind of techniques includes pruning and quantization and was originally applied in computer vision. In one of the first works on these methods @cite_27 , it was shown that pruning makes it possible to remove a lot of weights before doing quantization without loss of accuracy. It was verified for such neural networks as LeNet, AlexNet, and VGGNet that pruning can remove @math parameters. The complexity of this way of softmax computation is O(|V|); @cite_47 proposes a method for speeding up softmax computation by sampling a subset of words from the available vocabulary at each iteration during the training phase. | {
"cite_N": [
"@cite_27",
"@cite_47"
],
"mid": [
"2963674932",
"1558797106"
],
"abstract": [
"Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems. Also, conventional networks fix the architecture before training starts; as a result, training cannot improve the architecture. To address these limitations, we describe a method to reduce the storage and computation required by neural networks by an order of magnitude without affecting their accuracy by learning only the important connections. Our method prunes redundant connections using a three-step method. First, we train the network to learn which connections are important. Next, we prune the unimportant connections. Finally, we retrain the network to fine tune the weights of the remaining connections. On the ImageNet dataset, our method reduced the number of parameters of AlexNet by a factor of 9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar experiments with VGG-16 found that the total number of parameters can be reduced by 13x, from 138 million to 10.3 million, again with no loss of accuracy.",
"The device of this invention uses electrostatic recording to apply original titles, legends, or other indicia to discrete record receivers. The system includes changeable stencils which, for the purposes of this description, are constructed as counters. The desired information is set up, a voltage is applied between the stencil and an electrode behind a record receiver, ionized pigmented particles are attracted through the stencils to the record receiver, and the record receiver is moved to a fixing station. By rendering the individual items of information changeable, any desired information within the range of the apparatus can be applied to an appropriate record receiver."
]
} |
1902.02380 | 2914294010 | Abstract Recurrent neural networks have proved to be an effective method for statistical language modeling. However, in practice their memory and run-time complexity are usually too large to be implemented in real-time offline mobile applications. In this paper we consider several compression techniques for recurrent neural networks including Long Short-Term Memory models. We pay particular attention to the high-dimensional output problem caused by the very large vocabulary size. We focus on effective compression methods in the context of their deployment on devices: pruning, quantization, and matrix decomposition approaches (low-rank factorization and tensor train decomposition, in particular). For each model we investigate the trade-off between its size, suitability for fast inference and perplexity. We propose a general pipeline for applying the most suitable methods to compress recurrent neural networks for language modeling. It has been shown in the experimental study with the Penn Treebank (PTB) dataset that the most efficient results in terms of speed and compression–perplexity balance are obtained by matrix decomposition techniques. | Pruning with the variational dropout technique was applied to the compression of RNNs in natural language processing tasks @cite_49 @cite_31 . However, the results in language modeling @cite_49 are significantly worse in terms of achieved perplexity, even when compared with the classical results of @cite_51 . Moreover, the acute problem of high-dimensional output is completely ignored. | {
"cite_N": [
"@cite_31",
"@cite_51",
"@cite_49"
],
"mid": [
"",
"1591801644",
"2739963392"
],
"abstract": [
"",
"We present a simple regularization technique for Recurrent Neural Networks (RNNs) with Long Short-Term Memory (LSTM) units. Dropout, the most successful technique for regularizing neural networks, does not work well with RNNs and LSTMs. In this paper, we show how to correctly apply dropout to LSTMs, and show that it substantially reduces overfitting on a variety of tasks. These tasks include language modeling, speech recognition, image caption generation, and machine translation.",
"Recurrent neural networks show state-of-the-art results in many text analysis tasks but often require a lot of memory to store their weights. Recently proposed Sparse Variational Dropout eliminates the majority of the weights in a feed-forward neural network without significant loss of quality. We apply this technique to sparsify recurrent neural networks. To account for recurrent specifics we also rely on Binary Variational Dropout for RNN. We report 99.5 sparsity level on sentiment analysis task without a quality drop and up to 87 sparsity level on language modeling task with slight loss of accuracy."
]
} |
1902.02277 | 2914258540 | In this paper, we consider a queueing system with multiple channels (or servers) and multiple classes of users. We aim at allocating the available channels among the users in such a way as to minimize the expected total average queue length of the system. This known scheduling problem falls in the framework of Restless Bandit Problems (RBP), for which an optimal solution is known to be out of reach for the general case. The contributions of this paper are as follows. We rely on the Lagrangian relaxation method to characterize the Whittle index values and to develop an index-based heuristic for the original scheduling problem. The main difficulty lies in the fact that, for some queue states, deriving the Whittle index requires a new approach, which consists in introducing a new expected discounted cost function and deriving the Whittle index values with respect to the discount parameter @math . We then deduce the Whittle indices for the original problem (i.e. with total average queue length minimization) by taking the limit @math . The numerical results provided in this paper show that this policy performs very well and is very close to the optimal solution for a high number of users. | There are many works that study the problem of resource allocation in wireless networks. For instance, in @cite_12 @cite_4 @cite_10 @cite_6 , the authors give a throughput-optimal policy for single-channel, multi-channel, and multi-user MIMO contexts using the max-weight rule, which is known not to be delay-optimal. To overcome this issue, many works have been developed in the past to minimize the average delay of the users' traffic (e.g. see @cite_1 and the references therein). Most of them formulate the minimization problem as a Markov Decision Process (MDP) and develop resource allocation policies using the Bellman equation, such as the value iteration algorithm.
However, as already mentioned in the abstract, such MDP formulations and the Bellman equation are hard to solve. @cite_15 @cite_9 try to minimize the average delay of the users' queues using stochastic learning algorithms. Indeed, stochastic learning algorithms consume a lot of time and user memory. Besides, they have high computational complexity. | {
"cite_N": [
"@cite_4",
"@cite_9",
"@cite_1",
"@cite_6",
"@cite_15",
"@cite_10",
"@cite_12"
],
"mid": [
"2034073776",
"",
"2156006461",
"2138993731",
"2093223549",
"2105177639",
"2635604129"
],
"abstract": [
"In this paper, the problem of feedback and active user selection in multiple-input single-output (MISO) wireless systems such that the system’s stability region is as big as possible is examined. The focus is on a system in a Rayleigh fading environment where zero forcing precoding is used to serve all active users in every slot. Acquisition of the channel states is done via uplink training in time division duplexing mode by the active users. Clearly, only a subset of users can perform uplink training and the selection of this subset is a challenging and interesting problem especially in MISO systems. The stability regions of a baseline centralized scheme and two novel decentralized policies are examined analytically. In the decentralized schemes, the transmitter broadcasts periodically the queue state information and the users contend for the channel in a carrier sense multiple access-based manner with parameters based on the outdated queue state information and real-time channel state information. We show that, using infrequent signaling between the base station and the users, the decentralized policies outperform the centralized policy. In addition, a threshold-based user selection and training scheme for discrete-time contention is proposed. The results of this paper imply that, as far as stability is concerned, the users must be involved in the active user selection and feedback training decision. This should be leveraged in future communication systems.",
"",
"In this paper, a comprehensive survey is given on several major systematic approaches in dealing with delay-aware control problems, namely the equivalentrate constraint approach, the Lyapunov stability drift approach, and the approximate Markov decision process approach using stochastic learning. These approaches essentially embrace most of the existing literature regarding delay-aware resource control in wireless systems. They have their relative pros and cons in terms of performance, complexity, and implementation issues. For each of the approaches, the problem setup, the general solution, and the design methodology are discussed. Applications of these approaches to delay-aware resource allocation are illustrated with examples in single-hop wireless networks. Furthermore, recent results regarding delay-aware multihop routing designs in general multihop networks are elaborated. Finally, the delay performances of various approaches are compared through simulations using an example of the uplink OFDMA systems.",
"Information flow in a telecommunication network is accomplished through the interaction of mechanisms at various design layers with the end goal of supporting the information exchange needs of the applications. In wireless networks in particular, the different layers interact in a nontrivial manner in order to support information transfer. In this text we will present abstract models that capture the cross-layer interaction from the physical to transport layer in wireless network architectures including cellular, ad-hoc and sensor networks as well as hybrid wireless-wireline. The model allows for arbitrary network topologies as well as traffic forwarding modes, including datagrams and virtual circuits. Furthermore the time varying nature of a wireless network, due either to fading channels or to changing connectivity due to mobility, is adequately captured in our model to allow for state dependent network control policies. Quantitative performance measures that capture the quality of service requirements in these systems depending on the supported applications are discussed, including throughput maximization, energy consumption minimization, rate utility function maximization as well as general performance functionals. Cross-layer control algorithms with optimal or suboptimal performance with respect to the above measures are presented and analyzed. A detailed exposition of the related analysis and design techniques is provided.",
"In this paper, a low-complexity delay-aware cross-layer scheduling algorithm for two-hop relay communication systems is proposed. The complex interactions of the queues at the source node and the M relay nodes (RSs) are modeled as an infinite horizon average reward Markov decision process (MDP), whose state space involves the joint queue state information (QSI) of the queues at the source node and the M RSs as well as the joint channel state information (CSI) of all S-R and R-D links. To address the curse of dimensionality, an equivalent MDP formulation is first proposed, where the system state depends only on global QSI. Furthermore, using approximate MDP and stochastic learning, an auction-based distributed online learning algorithm is derived, where each node iteratively estimates a per-node value function based on real-time observations of the local CSI and local QSI as well as signaling between relays. The combined distributed learning converges almost surely to a global optimal solution for large arrivals. Finally, it is showed by simulations that the proposed scheme achieves significant gain compared with various baselines such as the conventional CSIT-only control and the throughput optimal control (in stability sense).",
"The stability of a queueing network with interdependent servers is considered. The dependency among the servers is described by the definition of their subsets that can be activated simultaneously. Multihop radio networks provide a motivation for the consideration of this system. The problem of scheduling the server activation under the constraints imposed by the dependency among servers is studied. The performance criterion of a scheduling policy is its throughput that is characterized by its stability region, that is, the set of vectors of arrival and service rates for which the system is stable. A policy is obtained which is optimal in the sense that its stability region is a superset of the stability region of every other scheduling policy, and this stability region is characterized. The behavior of the network is studied for arrival rates that lie outside the stability region. Implications of the results in certain types of concurrent database and parallel processing systems are discussed. >",
"This paper characterizes the performance in terms of queueing stability of a network composed of multiple MIMO transmitter-receiver pairs taking into account the dynamic traffic pattern and the probing feedback cost. We adopt a centralized scheduling scheme that selects a number of active pairs in each time-slot. We consider that the transmitters apply interference alignment (IA) technique if two or more pairs are active, whereas in the special case where one pair is active point-to-point MIMO singular value decomposition (SVD) is used. We consider a time-division duplex (TDD) system where transmitters acquire their channel state information (CSI) by decoding the pilot sequences sent by the receivers. Since global CSI knowledge is required for IA, the transmitters have also to exchange their estimated CSIs over a backhaul of limited capacity (i.e. imperfect case). Under this setting, we characterize in this paper the stability region of the system under both the imperfect and perfect (i.e. unlimited backhaul) cases, then we examine the gap between these two resulting regions. Further, under each case we provide a centralized probing policy that achieves the max stability region. These stability regions and scheduling policies are given for the symmetric system, where all the path loss coefficients are equal to each other, as well as for the general system. For the symmetric system, we provide the conditions under which IA yields a queueing stability gain compared to SVD. Under the general system, the adopted scheduling policy is of a high computational complexity for moderate numbers of pairs, consequently we propose an approximate policy that has a reduced complexity but that achieves only a fraction of the system stability region. A characterization of this fraction is provided."
]
} |
1902.02279 | 2912287680 | We define a Causal Decision Problem as a Decision Problem where the available actions, the family of uncertain events and the set of outcomes are related through the variables of a Causal Graphical Model @math . A solution criterion based on Pearl's Do-Calculus and the Expected Utility criterion for rational preferences is proposed. The implementation of this criterion leads to an on-line decision making procedure that has been shown to have similar performance to classic Reinforcement Learning algorithms while allowing for a causal model of an environment to be learned. Thus, we aim to provide the theoretical guarantees of the usefulness and optimality of a decision making procedure based on causal information. | From the Machine Learning point of view, @cite_1 consider a bandit problem where the actions available to an autonomous agent are interventions over a known causal model. Their work requires the causal model to be known, an assumption later relaxed by @cite_0 , who treats only a part of the causal model as unknown. | {
"cite_N": [
"@cite_0",
"@cite_1"
],
"mid": [
"2603093240",
"2964311196"
],
"abstract": [
"Motivated by applications in computational advertising and systems biology, we consider the problem of identifying the best out of several possible soft interventions at a source node V in an acyclic causal directed graph, to maximize the expected value of a target node Y (located downstream of V). Our setting imposes a fixed total budget for sampling under various interventions, along with cost constraints on different types of interventions. We pose this as a best arm identification bandit problem with K arms where each arm is a soft intervention at V, and leverage the information leakage among the arms to provide the first gap dependent error and simple regret bounds for this problem. Our results are a significant improvement over the traditional best arm identification results. We empirically show that our algorithms outperform the state of the art in the Flow Cytometry data-set, and also apply our algorithm for model interpretation of the Inception-v3 deep net that classifies images.",
"We study the problem of using causal models to improve the rate at which good interventions can be learned online in a stochastic environment. Our formalism combines multi-arm bandits and causal inference to model a novel type of bandit feedback that is not exploited by existing approaches. We propose a new algorithm that exploits the causal feedback and prove a bound on its simple regret that is strictly better (in all quantities) than algorithms that do not use the additional causal information."
]
} |
1902.02408 | 2971905802 | When data is partially missing at random, imputation and importance weighting are often used to estimate moments of the unobserved population. In this paper, we study 1-nearest neighbor (1NN) importance weighting, which estimates moments by replacing missing data with the complete data that is the nearest neighbor in the non-missing covariate space. We define an empirical measure, the 1NN measure, and show that it is weakly consistent for the measure of the missing data. The main idea behind this result is that the 1NN measure is performing inverse probability weighting in the limit. We study applications to missing data and mitigating the impact of covariate shift in prediction tasks. | @cite_21 outlines many of the common methods for dealing with missing data, including imputation, importance weighting, and likelihood-based methods (such as the EM algorithm). Importance weighting, or propensity scoring, is commonly used for correcting sampling selection bias @cite_13 , which arises in survey sampling and causal inference. Tree-based methods and SVMs have been used to obtain estimates of the ideal importance weights in @cite_0 @cite_15 . Hot-deck imputation has been studied empirically in @cite_2 @cite_26 @cite_17 but is not as commonly used as model-based approaches. | {
"cite_N": [
"@cite_26",
"@cite_21",
"@cite_0",
"@cite_2",
"@cite_15",
"@cite_13",
"@cite_17"
],
"mid": [
"1511044732",
"2044758663",
"2028040032",
"2293385408",
"1999822211",
"2150291618",
"2162313689"
],
"abstract": [
"Hot deck imputation is a procedure in which missing items are replaced with values from respondents. A model supporting such procedures is the model in which response probabilities are assumed equal within imputation cells. An efficient version of hot deck imputation is described for the cell response model and a computationally efficient variance estimator is given. An approximation to the fully efficient procedure in which a small number of values are imputed for each nonrespondent is described. Variance estimation procedures are illustrated in a Monte Carlo study.",
"Preface.PART I: OVERVIEW AND BASIC APPROACHES.Introduction.Missing Data in Experiments.Complete-Case and Available-Case Analysis, Including Weighting Methods.Single Imputation Methods.Estimation of Imputation Uncertainty.PART II: LIKELIHOOD-BASED APPROACHES TO THE ANALYSIS OF MISSING DATA.Theory of Inference Based on the Likelihood Function.Methods Based on Factoring the Likelihood, Ignoring the Missing-Data Mechanism.Maximum Likelihood for General Patterns of Missing Data: Introduction and Theory with Ignorable Nonresponse.Large-Sample Inference Based on Maximum Likelihood Estimates.Bayes and Multiple Imputation.PART III: LIKELIHOOD-BASED APPROACHES TO THE ANALYSIS OF MISSING DATA: APPLICATIONS TO SOME COMMON MODELS.Multivariate Normal Examples, Ignoring the Missing-Data Mechanism.Models for Robust Estimation.Models for Partially Classified Contingency Tables, Ignoring the Missing-Data Mechanism.Mixed Normal and Nonnormal Data with Missing Values, Ignoring the Missing-Data Mechanism.Nonignorable Missing-Data Models.References.Author Index.Subject Index.",
"Abstract Causal effect modeling with naturalistic rather than experimental data is challenging. In observational studies participants in different treatment conditions may also differ on pretreatment characteristics that influence outcomes. Propensity score methods can theoretically eliminate these confounds for all observed covariates, but accurate estimation of propensity scores is impeded by large numbers of covariates, uncertain functional forms for their associations with treatment selection, and other problems. This article demonstrates that boosting, a modern statistical technique, can overcome many of these obstacles. The authors illustrate this approach with a study of adolescent probationers in substance abuse treatment programs. Propensity score weights estimated using boosting eliminate most pretreatment group differences and substantially alter the apparent relative effects of adolescent substance abuse treatment.",
"Covariate data which are missing or measured with error form the subject of a growing body of statistical literature. Parametric methods have not been widely adopted, quite possibly due to the necessity of specifying the form of a 'nuisance function' not required for complete data analysis, and the non-robustness of the methods to mis-specification. A non-parametric counterpart of multiple imputation, known as 'hot deck', was proposed by Rubin (1987) and has been used by the Census Bureau to complete public-use databases. However, inference using this method has not been possible due to the distribution theory not being available. Recently, it has been shown that the hot deck estimator has the same asymptotic distribution as the 'mean score' estimator, so that inference using hot deck is now possible. The method is intuitively appealing and easily implemented. Furthermore, it accommodates missingness which depends on outcome, which is an important generalization of many currently available methods. In this paper, the hot deck multiple imputation method is explained, its asymptotic distribution presented and its application to data analysis demonstrated by an example.",
"Machine learning techniques such as classification and regression trees (CART) have been suggested as promising alternatives to logistic regression for the estimation of propensity scores. The authors examined the performance of various CART-based propensity score models using simulated data. Hypothetical studies of varying sample sizes (n=500, 1000, 2000) with a binary exposure, continuous outcome, and ten covariates were simulated under seven scenarios differing by degree of non-linear and non-additive associations between covariates and the exposure. Propensity score weights were estimated using logistic regression (all main effects), CART, pruned CART, and the ensemble methods of bagged CART, random forests, and boosted CART. Performance metrics included covariate balance, standard error, percent absolute bias, and 95 confidence interval coverage. All methods displayed generally acceptable performance under conditions of either non-linearity or non-additivity alone. However, under conditions of both moderate non-additivity and moderate non-linearity, logistic regression had subpar performance, while ensemble methods provided substantially better bias reduction and more consistent 95 CI coverage. The results suggest that ensemble methods, especially boosted CART, may be useful for propensity score weighting.",
"Abstract : The results of observational studies are often disputed because of nonrandom treatment assignment. For example, patients at greater risk may be overrepresented in some treatment group. This paper discusses the central role of propensity scores and balancing scores in the analysis of observational studies. The propensity score is the (estimated) conditional probability of assignment to a particular treatment given a vector of observed covariates. Both large and small sample theory show that adjustment for the scalar propensity score is sufficient to remove bias due to all observed covariates. Applications include: matched sampling on the univariate propensity score which is equal percent bias reducing under more general conditions than required for discriminant matching, multivariate adjustment by subclassification on balancing scores where the same subclasses are used to estimate treatment effects for all outcome variables and in all subpopulations, and visual representation of multivariate adjustment by a two-dimensional plot. (Author)",
"Hot deck imputation is a method for handling missing data in which each missing value is replaced with an observed response from a \"similar\" unit. Despite being used extensively in practice, the theory is not as well developed as that of other imputation methods. We have found that no consensus exists as to the best way to apply the hot deck and obtain inferences from the completed data set. Here we review different forms of the hot deck and existing research on its statistical properties. We describe applications of the hot deck currently in use, including the U.S. Census Bureau's hot deck for the Current Population Survey (CPS). We also provide an extended example of variations of the hot deck applied to the third National Health and Nutrition Examination Survey (NHANES III). Some potential areas for future research are highlighted. Copyright (c) 2010 The Authors. Journal compilation (c) 2010 International Statistical Institute."
]
} |
1902.02408 | 2971905802 | When data is partially missing at random, imputation and importance weighting are often used to estimate moments of the unobserved population. In this paper, we study 1-nearest neighbor (1NN) importance weighting, which estimates moments by replacing missing data with the complete data that is the nearest neighbor in the non-missing covariate space. We define an empirical measure, the 1NN measure, and show that it is weakly consistent for the measure of the missing data. The main idea behind this result is that the 1NN measure is performing inverse probability weighting in the limit. We study applications to missing data and mitigating the impact of covariate shift in prediction tasks. | Domain adaptive methods can be divided into two broad camps: specialized learning algorithms that are designed to target the test distribution, and weighting based modifications to existing learning algorithms. If the testing and training distributions are known, then it is natural to use importance weighted losses in training @cite_5 . @cite_6 demonstrated excess risk bounds in this setting for hypothesis classes that are finite or have bounded pseudo-dimension. That work also considers importance weighting with imperfect weights, but those bounds leave the quality of the weights unknown (which this paper addresses for 1NN). @cite_3 considers estimating the importance weights via kernel mean matching, which we can think of as a kernelized version of the 1NN method that we consider. @cite_23 considers a modified objective for discriminative learning that simultaneously learns the weights and classifier, but little is known about its theoretical performance. @cite_1 considers an active KNN algorithm for covariate shift problems, and proves a risk bound in this setting, but this result is not directly comparable to ours due to the active learning setting, and the fact that they are able to use standard theoretical tools for KNN because they let @math . | {
"cite_N": [
"@cite_1",
"@cite_3",
"@cite_6",
"@cite_23",
"@cite_5"
],
"mid": [
"1870462933",
"2153929442",
"2111355007",
"2170612786",
"189742998"
],
"abstract": [
"While classic machine learning paradigms assume training and test data are generated from the same process, domain adaptation addresses the more realistic setting in which the learner has large quantities of labeled data from some source task but limited or no labeled data from the target task it is attempting to learn. In this work, we give the first formal analysis showing that using active learning for domain adaptation yields a way to address the statistical challenges inherent in this setting. We propose a novel nonparametric algorithm, ANDA, that combines an active nearest neighbor querying strategy with nearest neighbor prediction. We provide analyses of its querying behavior and of finite sample convergence rates of the resulting classifier under covariate shift. Our experiments show that ANDA successfully corrects for dataset bias in multiclass image categorization.",
"Let X denote the feature and Y the target. We consider domain adaptation under three possible scenarios: (1) the marginal PY changes, while the conditional PX Y stays the same (target shift), (2) the marginal PY is fixed, while the conditional PX Y changes with certain constraints (conditional shift), and (3) the marginal PY changes, and the conditional PX Y changes with constraints (generalized target shift). Using background knowledge, causal interpretations allow us to determine the correct situation for a problem at hand. We exploit importance reweighting or sample transformation to find the learning machine that works well on test data, and propose to estimate the weights or transformations by reweighting or transforming training data to reproduce the covariate distribution on the test domain. Thanks to kernel embedding of conditional as well as marginal distributions, the proposed approaches avoid distribution estimation, and are applicable for high-dimensional problems. Numerical evaluations on synthetic and real-world data sets demonstrate the effectiveness of the proposed framework.",
"This paper presents an analysis of importance weighting for learning from finite samples and gives a series of theoretical and algorithmic results. We point out simple cases where importance weighting can fail, which suggests the need for an analysis of the properties of this technique. We then give both upper and lower bounds for generalization with bounded importance weights and, more significantly, give learning guarantees for the more common case of unbounded importance weights under the weak assumption that the second moment is bounded, a condition related to the Renyi divergence of the traning and test distributions. These results are based on a series of novel and general bounds we derive for unbounded loss functions, which are of independent interest. We use these bounds to guide the definition of an alternative reweighting algorithm and report the results of experiments demonstrating its benefits. Finally, we analyze the properties of normalized importance weights which are also commonly used.",
"We address classification problems for which the training instances are governed by an input distribution that is allowed to differ arbitrarily from the test distribution---problems also referred to as classification under covariate shift. We derive a solution that is purely discriminative: neither training nor test distribution are modeled explicitly. The problem of learning under covariate shift can be written as an integrated optimization problem. Instantiating the general optimization problem leads to a kernel logistic regression and an exponential model classifier for covariate shift. The optimization problem is convex under certain conditions; our findings also clarify the relationship to the known kernel mean matching procedure. We report on experiments on problems of spam filtering, text classification, and landmine detection.",
"A common assumption in supervised learning is that the input points in the training set follow the same probability distribution as the input points that will be given in the future test phase. However, this assumption is not satisfied, for example, when the outside of the training region is extrapolated. The situation where the training input points and test input points follow different distributions while the conditional distribution of output values given input points is unchanged is called the covariate shift. Under the covariate shift, standard model selection techniques such as cross validation do not work as desired since its unbiasedness is no longer maintained. In this paper, we propose a new method called importance weighted cross validation (IWCV), for which we prove its unbiasedness even under the covariate shift. The IWCV procedure is the only one that can be applied for unbiased classification under covariate shift, whereas alternatives to IWCV exist for regression. The usefulness of our proposed method is illustrated by simulations, and furthermore demonstrated in the brain-computer interface, where strong non-stationarity effects can be seen between training and test sessions."
]
} |
1902.02388 | 2913758774 | In this paper, we use Proximal Cubic regularized Newton Methods (PCNM) to optimize the sum of a smooth convex function and a non-smooth convex function, where we use inexact gradient and Hessian, and an inexact subsolver for the cubic regularized second-order subproblem. We propose inexact variants of PCNM and accelerated PCNM respectively, and show that both variants can achieve the same convergence rate as in the exact case, provided that the errors in the inexact gradient, Hessian and subsolver decrease at appropriate rates. Meanwhile, in the online stochastic setting where data comes endlessly, we give the overall complexity of the proposed algorithms and show that they are as competitive as stochastic gradient descent. Moreover, we give the overall complexity of the proposed algorithms in the finite-sum setting and show that it is as competitive as state-of-the-art variance reduced algorithms. Finally, we propose an efficient algorithm for the cubic regularized second-order subproblem, which can converge to a sufficiently small neighborhood of the optimal solution at a superlinear rate. | When the nonsmooth term @math is present, @math has an @math -Lipschitz Hessian, and @math is convex, it follows from @cite_6 that APCNM can find an @math -accurate solution within @math iterations. Meanwhile, by a straightforward extension of the result for the smooth setting @cite_15 , PCNM can find an @math -accurate solution within @math iterations. Apart from @cite_6 , existing research on inexact variants of CNM mainly focuses on the smooth setting, where the nonsmooth term @math does not exist. | {
"cite_N": [
"@cite_15",
"@cite_6"
],
"mid": [
"2009941369",
"2792215433"
],
"abstract": [
"In this paper, we provide theoretical analysis for a cubic regularization of Newton method as applied to unconstrained minimization problem. For this scheme, we prove general local convergence results. However, the main contribution of the paper is related to global worst-case complexity bounds for different problem classes including some nonconvex cases. It is shown that the search direction can be computed by standard linear algebra technique.",
"In this paper, we study accelerated regularized Newton methods for minimizing objectives formed as a sum of two functions: one is convex and twice differentiable with Holder-continuous Hessian, and..."
]
} |
1902.02388 | 2913758774 | In this paper, we use Proximal Cubic regularized Newton Methods (PCNM) to optimize the sum of a smooth convex function and a non-smooth convex function, where we use inexact gradient and Hessian, and an inexact subsolver for the cubic regularized second-order subproblem. We propose inexact variants of PCNM and accelerated PCNM respectively, and show that both variants can achieve the same convergence rate as in the exact case, provided that the errors in the inexact gradient, Hessian and subsolver decrease at appropriate rates. Meanwhile, in the online stochastic setting where data comes endlessly, we give the overall complexity of the proposed algorithms and show that they are as competitive as stochastic gradient descent. Moreover, we give the overall complexity of the proposed algorithms in the finite-sum setting and show that it is as competitive as state-of-the-art variance reduced algorithms. Finally, we propose an efficient algorithm for the cubic regularized second-order subproblem, which can converge to a sufficiently small neighborhood of the optimal solution at a superlinear rate. | In the nonconvex setting, in order to reduce the high computational cost of optimizing the subproblem while maintaining the convergence rate of the exact case, @cite_9 @cite_7 @cite_19 considered a subsampling strategy to obtain an inexact gradient and Hessian, together with a termination condition for optimizing the subproblem, where the conditions of subsampling depend on the iterates and are thus implementable, and the termination condition is specific to the Lanczos method @cite_22 . @cite_11 used a variance reduction strategy to reduce the complexity of computing the gradient and Hessian, but the complexity of updating the variance-reduced gradient and Hessian is @math , and thus the SVRC method in @cite_11 is only suitable for problems with small dimension @math .
@cite_16 made considerable progress: the stochastic cubic regularization method in @cite_16 needs @math stochastic gradient and stochastic Hessian-vector product evaluations to find an approximate local minimum for general smooth, nonconvex functions, which matches the best known result, though it did not give an analysis of the convex setting. | {
"cite_N": [
"@cite_22",
"@cite_7",
"@cite_9",
"@cite_19",
"@cite_16",
"@cite_11"
],
"mid": [
"2950418232",
"1994974865",
"2156005216",
"2614389101",
"2962876518",
"2787156640"
],
"abstract": [
"We provide convergence rates for Krylov subspace solutions to the trust-region and cubic-regularized (nonconvex) quadratic problems. Such solutions may be efficiently computed by the Lanczos method and have long been used in practice. We prove error bounds of the form @math and @math , where @math is a condition number for the problem, and @math is the Krylov subspace order (number of Lanczos iterations). We also provide lower bounds showing that our analysis is sharp.",
"An Adaptive Regularisation framework using Cubics (ARC) was proposed for unconstrained optimization and analysed in Cartis, Gould and Toint (Part I, Math Program, doi: 10.1007 s10107-009-0286-5, 2009), generalizing at the same time an unpublished method due to Griewank (Technical Report NA 12, 1981, DAMTP, University of Cambridge), an algorithm by Nesterov and Polyak (Math Program 108(1):177–205, 2006) and a proposal by Weiser, Deuflhard and Erdmann (Optim Methods Softw 22(3):413–431, 2007). In this companion paper, we further the analysis by providing worst-case global iteration complexity bounds for ARC and a second-order variant to achieve approximate first-order, and for the latter second-order, criticality of the iterates. In particular, the second-order ARC algorithm requires at most @math iterations, or equivalently, function- and gradient-evaluations, to drive the norm of the gradient of the objective below the desired accuracy @math , and @math iterations, to reach approximate nonnegative curvature in a subspace. The orders of these bounds match those proved for Algorithm 3.3 of Nesterov and Polyak which minimizes the cubic model globally on each iteration. Our approach is more general in that it allows the cubic model to be solved only approximately and may employ approximate Hessians.",
"An Adaptive Regularisation algorithm using Cubics (ARC) is proposed for unconstrained optimization, generalizing at the same time an unpublished method due to Griewank (Technical Report NA 12, 1981, DAMTP, University of Cambridge), an algorithm by Nesterov and Polyak (Math Program 108(1):177–205, 2006) and a proposal by (Optim Methods Softw 22(3):413–431, 2007). At each iteration of our approach, an approximate global minimizer of a local cubic regularisation of the objective function is determined, and this ensures a significant improvement in the objective so long as the Hessian of the objective is locally Lipschitz continuous. The new method uses an adaptive estimation of the local Lipschitz constant and approximations to the global model-minimizer which remain computationally-viable even for large-scale problems. We show that the excellent global and local convergence properties obtained by Nesterov and Polyak are retained, and sometimes extended to a wider class of problems, by our ARC approach. Preliminary numerical experiments with small-scale test problems from the CUTEr set show encouraging performance of the ARC algorithm when compared to a basic trust-region implementation.",
"We consider the minimization of non-convex functions that typically arise in machine learning. Specifically, we focus our attention on a variant of trust region methods known as cubic regularization. This approach is particularly attractive because it escapes strict saddle points and it provides stronger convergence guarantees than first- and second-order as well as classical trust region methods. However, it suffers from a high computational complexity that makes it impractical for large-scale learning. Here, we propose a novel method that uses sub-sampling to lower this computational cost. By the use of concentration inequalities we provide a sampling scheme that gives sufficiently accurate gradient and Hessian approximations to retain the strong global and local convergence guarantees of cubically regularized methods. To the best of our knowledge this is the first work that gives global convergence guarantees for a sub-sampled variant of cubic regularization on non-convex functions. Furthermore, we provide experimental results supporting our theory.",
"This paper proposes a stochastic variant of a classic algorithm---the cubic-regularized Newton method [Nesterov and Polyak]. The proposed algorithm efficiently escapes saddle points and finds approximate local minima for general smooth, nonconvex functions in only ˜ O (ϵ−3.5) stochastic gradient and stochastic Hessian-vector product evaluations. The latter can be computed as efficiently as stochastic gradients. This improves upon the ˜ O (ϵ−4) rate of stochastic gradient descent. Our rate matches the best-known result for finding local minima without requiring any delicate acceleration or variance-reduction techniques.",
"We propose a stochastic variance-reduced cubic regularized Newton method for non-convex optimization. At the core of our algorithm is a novel semi-stochastic gradient along with a semi-stochastic Hessian, which are specifically designed for cubic regularization method. We show that our algorithm is guaranteed to converge to an @math -approximately local minimum within @math second-order oracle calls, which outperforms the state-of-the-art cubic regularization algorithms including subsampled cubic regularization. Our work also sheds light on the application of variance reduction technique to high-order non-convex optimization methods. Thorough experiments on various non-convex optimization problems support our theory."
]
} |
1902.02388 | 2913758774 | In this paper, we use Proximal Cubic regularized Newton Methods (PCNM) to optimize the sum of a smooth convex function and a non-smooth convex function, where we use an inexact gradient and Hessian, and an inexact subsolver for the cubic regularized second-order subproblem. We propose inexact variants of PCNM and accelerated PCNM respectively, and show that both variants can achieve the same convergence rate as in the exact case, provided that the errors in the inexact gradient, Hessian and subsolver decrease at appropriate rates. Meanwhile, in the online stochastic setting where data comes endlessly, we give the overall complexity of the proposed algorithms and show that they are as competitive as stochastic gradient descent. Moreover, we give the overall complexity of the proposed algorithms in the finite-sum setting and show that it is as competitive as the state-of-the-art variance-reduced algorithms. Finally, we propose an efficient algorithm for the cubic regularized second-order subproblem, which can converge to a sufficiently small neighborhood of the optimal solution at a superlinear rate. | Compared with finding an @math -second-order stationary point in the nonconvex setting, in the convex setting the goal of finding an @math -accurate solution in terms of the objective function introduces extra difficulty. In this paper, we propose the inexact PCNM (IPCNM) and accelerated IPCNM (AIPCNM) as inexact variants of PCNM and APCNM respectively. Table gives an overview of the research on inexact variants of CNM in the convex setting. 
As shown in Table , only the algorithms in @cite_0 and this paper can maintain the same convergence rate as in the exact case; all of these works consider using an inexact Hessian, while only this paper also uses an inexact gradient; @cite_9 @cite_0 and this paper use an inexact subsolver, while only in this paper is the termination condition for the subsolver not specific to the Lanczos method; finally, only the results in this paper are applicable to the case where the nonsmooth regularizer @math exists. | {
"cite_N": [
"@cite_0",
"@cite_9"
],
"mid": [
"2786184162",
"2156005216"
],
"abstract": [
"In this paper, we consider an unconstrained optimization model where the objective is a sum of a large number of possibly nonconvex functions, though overall the objective is assumed to be smooth and convex. Our bid to solving such model uses the framework of cubic regularization of Newton's method.As well known, the crux in cubic regularization is its utilization of the Hessian information, which may be computationally expensive for large-scale problems. To tackle this, we resort to approximating the Hessian matrix via sub-sampling. In particular, we propose to compute an approximated Hessian matrix by either uniformly or non-uniformly sub-sampling the components of the objective. Based upon sub-sampling, we develop both standard and accelerated adaptive cubic regularization approaches and provide theoretical guarantees on global iteration complexity. We show that the standard and accelerated sub-sampled cubic regularization methods achieve iteration complexity in the order of @math and @math respectively, which match those of the original standard and accelerated cubic regularization methods Cartis-2012-Evaluation, Jiang-2017-Unified using the full Hessian information. The performances of the proposed methods on regularized logistic regression problems show a clear effect of acceleration in terms of epochs on several real data sets.",
"An Adaptive Regularisation algorithm using Cubics (ARC) is proposed for unconstrained optimization, generalizing at the same time an unpublished method due to Griewank (Technical Report NA 12, 1981, DAMTP, University of Cambridge), an algorithm by Nesterov and Polyak (Math Program 108(1):177–205, 2006) and a proposal by (Optim Methods Softw 22(3):413–431, 2007). At each iteration of our approach, an approximate global minimizer of a local cubic regularisation of the objective function is determined, and this ensures a significant improvement in the objective so long as the Hessian of the objective is locally Lipschitz continuous. The new method uses an adaptive estimation of the local Lipschitz constant and approximations to the global model-minimizer which remain computationally-viable even for large-scale problems. We show that the excellent global and local convergence properties obtained by Nesterov and Polyak are retained, and sometimes extended to a wider class of problems, by our ARC approach. Preliminary numerical experiments with small-scale test problems from the CUTEr set show encouraging performance of the ARC algorithm when compared to a basic trust-region implementation."
]
} |
1902.02388 | 2913758774 | In this paper, we use Proximal Cubic regularized Newton Methods (PCNM) to optimize the sum of a smooth convex function and a non-smooth convex function, where we use an inexact gradient and Hessian, and an inexact subsolver for the cubic regularized second-order subproblem. We propose inexact variants of PCNM and accelerated PCNM respectively, and show that both variants can achieve the same convergence rate as in the exact case, provided that the errors in the inexact gradient, Hessian and subsolver decrease at appropriate rates. Meanwhile, in the online stochastic setting where data comes endlessly, we give the overall complexity of the proposed algorithms and show that they are as competitive as stochastic gradient descent. Moreover, we give the overall complexity of the proposed algorithms in the finite-sum setting and show that it is as competitive as the state-of-the-art variance-reduced algorithms. Finally, we propose an efficient algorithm for the cubic regularized second-order subproblem, which can converge to a sufficiently small neighborhood of the optimal solution at a superlinear rate. | From the convergence analysis in @cite_22 , it is known that the Lanczos method can satisfy Assumption when the nonsmooth term @math does not exist. In Section , we propose the Cubic-Prox-SVRG method and show that it can converge to a sufficiently small neighborhood of @math at a superlinear rate. | {
"cite_N": [
"@cite_22"
],
"mid": [
"2950418232"
],
"abstract": [
"We provide convergence rates for Krylov subspace solutions to the trust-region and cubic-regularized (nonconvex) quadratic problems. Such solutions may be efficiently computed by the Lanczos method and have long been used in practice. We prove error bounds of the form @math and @math , where @math is a condition number for the problem, and @math is the Krylov subspace order (number of Lanczos iterations). We also provide lower bounds showing that our analysis is sharp."
]
} |
1902.02388 | 2913758774 | In this paper, we use Proximal Cubic regularized Newton Methods (PCNM) to optimize the sum of a smooth convex function and a non-smooth convex function, where we use an inexact gradient and Hessian, and an inexact subsolver for the cubic regularized second-order subproblem. We propose inexact variants of PCNM and accelerated PCNM respectively, and show that both variants can achieve the same convergence rate as in the exact case, provided that the errors in the inexact gradient, Hessian and subsolver decrease at appropriate rates. Meanwhile, in the online stochastic setting where data comes endlessly, we give the overall complexity of the proposed algorithms and show that they are as competitive as stochastic gradient descent. Moreover, we give the overall complexity of the proposed algorithms in the finite-sum setting and show that it is as competitive as the state-of-the-art variance-reduced algorithms. Finally, we propose an efficient algorithm for the cubic regularized second-order subproblem, which can converge to a sufficiently small neighborhood of the optimal solution at a superlinear rate. | Table gives the overall complexity of representative algorithms in the online stochastic setting. For simplicity, in Table , we neglect the poly-logarithmic factor and use the @math notation. The existing algorithms in this setting are mainly first-order algorithms @cite_4 , which can be divided into methods that process one sample or a fixed mini-batch of samples in each iteration @cite_21 @cite_3 , and methods that use an increased sample size in each iteration @cite_20 @cite_18 @cite_5 . If we do not consider the poly-logarithmic factor, COMID @cite_21 obtains the optimal convergence rate ( @math regret) of @math in the convex setting and @math in the @math -strongly convex setting. 
However, as shown in Table , the Inexact Proximal Gradient Method (IPGM) and Accelerated IPGM (AIPGM) @cite_5 , which belong to the methods with an increased sample size, cannot obtain the optimal rate in the convex setting (although @cite_5 do not give the overall complexity in the online stochastic setting, we can derive the complexity results in Table using the same analysis as in this paper). The proposed methods IPCNM and AIPCNM also belong to the methods with an increased sample size, but additionally exploit second-order information. | {
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_21",
"@cite_3",
"@cite_5",
"@cite_20"
],
"mid": [
"1751687266",
"",
"131378802",
"2205628031",
"2952975724",
"2061570747"
],
"abstract": [
"Many structured data-fitting applications require the solution of an optimization problem involving a sum over a potentially large number of measurements. Incremental gradient algorithms offer inexpensive iterations by sampling a subset of the terms in the sum; these methods can make great progress initially, but often slow as they approach a solution. In contrast, full-gradient methods achieve steady convergence at the expense of evaluating the full objective and gradient on each iteration. We explore hybrid methods that exhibit the benefits of both approaches. Rate-of-convergence analysis shows that by controlling the sample size in an incremental-gradient algorithm, it is possible to maintain the steady convergence rates of full-gradient methods. We detail a practical quasi-Newton implementation based on this approach. Numerical experiments illustrate its potential benefits.",
"",
"We present a new method for regularized convex optimization and analyze it under both online and stochastic optimization settings. In addition to unifying previously known firstorder algorithms, such as the projected gradient method, mirror descent, and forwardbackward splitting, our method yields new analysis and algorithms. We also derive specific instantiations of our method for commonly used regularization functions, such as l1, mixed norm, and trace-norm.",
"We consider regularized stochastic learning and online optimization problems, where the objective function is the sum of two convex terms: one is the loss function of the learning task, and the other is a simple regularization term such as l1-norm for promoting sparsity. We develop extensions of Nesterov's dual averaging method, that can exploit the regularization structure in an online setting. At each iteration of these methods, the learning variables are adjusted by solving a simple minimization problem that involves the running average of all past subgradients of the loss function and the whole regularization term, not just its subgradient. In the case of l1-regularization, our method is particularly effective in obtaining sparse solutions. We show that these methods achieve the optimal convergence rates or regret bounds that are standard in the literature on stochastic and online convex optimization. For stochastic learning problems in which the loss functions have Lipschitz continuous gradients, we also present an accelerated version of the dual averaging method.",
"We consider the problem of optimizing the sum of a smooth convex function and a non-smooth convex function using proximal-gradient methods, where an error is present in the calculation of the gradient of the smooth term or in the proximity operator with respect to the non-smooth term. We show that both the basic proximal-gradient method and the accelerated proximal-gradient method achieve the same convergence rate as in the error-free case, provided that the errors decrease at appropriate rates.Using these rates, we perform as well as or better than a carefully chosen fixed error level on a set of structured sparsity problems.",
"This paper presents a methodology for using varying sample sizes in batch-type optimization methods for large-scale machine learning problems. The first part of the paper deals with the delicate issue of dynamic sample selection in the evaluation of the function and gradient. We propose a criterion for increasing the sample size based on variance estimates obtained during the computation of a batch gradient. We establish an @math complexity bound on the total cost of a gradient method. The second part of the paper describes a practical Newton method that uses a smaller sample to compute Hessian vector-products than to evaluate the function and the gradient, and that also employs a dynamic sampling technique. The focus of the paper shifts in the third part of the paper to L 1-regularized problems designed to produce sparse solutions. We propose a Newton-like method that consists of two phases: a (minimalistic) gradient projection phase that identifies zero variables, and subspace phase that applies a subsampled Hessian Newton iteration in the free variables. Numerical tests on speech recognition problems illustrate the performance of the algorithms."
]
} |
1902.02388 | 2913758774 | In this paper, we use Proximal Cubic regularized Newton Methods (PCNM) to optimize the sum of a smooth convex function and a non-smooth convex function, where we use an inexact gradient and Hessian, and an inexact subsolver for the cubic regularized second-order subproblem. We propose inexact variants of PCNM and accelerated PCNM respectively, and show that both variants can achieve the same convergence rate as in the exact case, provided that the errors in the inexact gradient, Hessian and subsolver decrease at appropriate rates. Meanwhile, in the online stochastic setting where data comes endlessly, we give the overall complexity of the proposed algorithms and show that they are as competitive as stochastic gradient descent. Moreover, we give the overall complexity of the proposed algorithms in the finite-sum setting and show that it is as competitive as the state-of-the-art variance-reduced algorithms. Finally, we propose an efficient algorithm for the cubic regularized second-order subproblem, which can converge to a sufficiently small neighborhood of the optimal solution at a superlinear rate. | To solve , the state-of-the-art algorithms are based on the well-known variance reduction technique @cite_12 @cite_8 @cite_17 . In Table , we use SVRG and Katyusha as the representative algorithms for non-accelerated and accelerated variance-reduced methods, respectively. | {
"cite_N": [
"@cite_17",
"@cite_12",
"@cite_8"
],
"mid": [
"2963607709",
"2107438106",
"2047152541"
],
"abstract": [
"Nesterov's momentum trick is famously known for accelerating gradient descent, and has been proven useful in building fast iterative algorithms. However, in the stochastic setting, counterexamples exist and prevent Nesterov's momentum from providing similar acceleration, even if the underlying problem is convex. We introduce Katyusha, a direct, primal-only stochastic gradient method to fix this issue. It has a provably accelerated convergence rate in convex (off-line) stochastic optimization. The main ingredient is Katyusha momentum, a novel \"negative momentum\" on top of Nesterov's momentum that can be incorporated into a variance-reduction based algorithm and speed it up. Since variance reduction has been successfully applied to a growing list of practical problems, our paper suggests that in each of such cases, one could potentially give Katyusha a hug.",
"Stochastic gradient descent is popular for large scale optimization but has slow convergence asymptotically due to the inherent variance. To remedy this problem, we introduce an explicit variance reduction method for stochastic gradient descent which we call stochastic variance reduced gradient (SVRG). For smooth and strongly convex functions, we prove that this method enjoys the same fast convergence rate as those of stochastic dual coordinate ascent (SDCA) and Stochastic Average Gradient (SAG). However, our analysis is significantly simpler and more intuitive. Moreover, unlike SDCA or SAG, our method does not require the storage of gradients, and thus is more easily applicable to complex problems such as some structured prediction problems and neural network learning.",
"We consider the problem of minimizing the sum of two convex functions: one is the average of a large number of smooth component functions, and the other is a general convex function that admits a simple proximal mapping. We assume the whole objective function is strongly convex. Such problems often arise in machine learning, known as regularized empirical risk minimization. We propose and analyze a new proximal stochastic gradient method, which uses a multistage scheme to progressively reduce the variance of the stochastic gradient. While each iteration of this algorithm has similar cost as the classical stochastic gradient method (or incremental gradient method), we show that the expected objective value converges to the optimum at a geometric rate. The overall complexity of this method is much lower than both the proximal full gradient method and the standard proximal stochastic gradient method."
]
} |
1902.02388 | 2913758774 | In this paper, we use Proximal Cubic regularized Newton Methods (PCNM) to optimize the sum of a smooth convex function and a non-smooth convex function, where we use an inexact gradient and Hessian, and an inexact subsolver for the cubic regularized second-order subproblem. We propose inexact variants of PCNM and accelerated PCNM respectively, and show that both variants can achieve the same convergence rate as in the exact case, provided that the errors in the inexact gradient, Hessian and subsolver decrease at appropriate rates. Meanwhile, in the online stochastic setting where data comes endlessly, we give the overall complexity of the proposed algorithms and show that they are as competitive as stochastic gradient descent. Moreover, we give the overall complexity of the proposed algorithms in the finite-sum setting and show that it is as competitive as the state-of-the-art variance-reduced algorithms. Finally, we propose an efficient algorithm for the cubic regularized second-order subproblem, which can converge to a sufficiently small neighborhood of the optimal solution at a superlinear rate. | In Section , we propose the Cubic-Prox-SVRG method to solve the subproblem @math by exploiting the finite-sum structure in the inexact Hessian and the uniform convexity of the cubic regularizer @math . Because it can converge to a sufficiently small neighborhood of the optimal solution at a superlinear rate ("sufficiently small" means that the approximate solution satisfies the requirements of IPCNM and AIPCNM), in the convex setting it is a good alternative to the well-known Lanczos method, which has only a linear rate @cite_22 . | {
"cite_N": [
"@cite_22"
],
"mid": [
"2950418232"
],
"abstract": [
"We provide convergence rates for Krylov subspace solutions to the trust-region and cubic-regularized (nonconvex) quadratic problems. Such solutions may be efficiently computed by the Lanczos method and have long been used in practice. We prove error bounds of the form @math and @math , where @math is a condition number for the problem, and @math is the Krylov subspace order (number of Lanczos iterations). We also provide lower bounds showing that our analysis is sharp."
]
} |
1902.02104 | 2913171059 | Matrix multiplication @math appears as intermediate operation during the solution of a wide set of problems. In this paper, we propose a new cache-oblivious algorithm for the @math multiplication. Our algorithm, A @math A, calls classical Strassen's algorithm as sub-routine, decreasing the computational cost (expressed in number of performed products) of the conventional @math multiplication to @math . It works for generic rectangular matrices and exploits the peculiar symmetry of the resulting product matrix for sparing memory. We used the MPI paradigm to implement A @math A in parallel, and we tested its performances on a small subset of nodes of the Galileo cluster. Experiments highlight good scalability and speed-up, also thanks to minimal number of exchanged messages in the designed communication system. Parallel overhead and inherently sequential time fraction are negligible in the tested configurations. | In @cite_21 , the authors extend Strassen's algorithm to deal with rectangular and arbitrary-size matrices. They consider the performance effects of Strassen's algorithm applied directly to rectangular matrices or, after a cache-oblivious problem division, to (almost) square matrices, thus exploiting data locality. They also exploit the state-of-the-art adaptive software package ATLAS and hand-tuned packages such as GotoBLAS. Moreover, they show that, by choosing a suitable combination of Strassen's algorithm with ATLAS/GotoBLAS, their approach achieves up to a 30% speed-up. | {
"cite_N": [
"@cite_21"
],
"mid": [
"1990568697"
],
"abstract": [
"Strassen's matrix multiplication (MM) has benefits with respect to any (highly tuned) implementations of MM because Strassen's reduces the total number of operations. Strassen achieved this operation reduction by replacing computationally expensive MMs with matrix additions (MAs). For architectures with simple memory hierarchies, having fewer operations directly translates into an efficient utilization of the CPU and, thus, faster execution. However, for modern architectures with complex memory hierarchies, the operations introduced by the MAs have a limited in-cache data reuse and thus poor memory-hierarchy utilization, thereby overshadowing the (improved) CPU utilization, and making Strassen's algorithm (largely) useless on its own. In this paper, we investigate the interaction between Strassen's effective performance and the memory-hierarchy organization. We show how to exploit Strassen's full potential across different architectures. We present an easy-to-use adaptive algorithm that combines a novel implementation of Strassen's idea with the MM from automatically tuned linear algebra software (ATLAS) or GotoBLAS. An additional advantage of our algorithm is that it applies to any size and shape matrices and works equally well with row or column major layout. Our implementation consists of introducing a final step in the ATLAS GotoBLAS-installation process that estimates whether or not we can achieve any additional speedup using our Strassen's adaptation algorithm. Then we install our codes, validate our estimates, and determine the specific performance. We show that, by the right combination of Strassen's with ATLAS GotoBLAS, our approach achieves up to 30 22 speed-up versus ATLAS GotoBLAS alone on modern high-performance single processors. We consider and present the complexity and the numerical analysis of our algorithm, and, finally, we show performance for 17 (uniprocessor) systems."
]
} |
1902.02104 | 2913171059 | Matrix multiplication @math appears as intermediate operation during the solution of a wide set of problems. In this paper, we propose a new cache-oblivious algorithm for the @math multiplication. Our algorithm, A @math A, calls classical Strassen's algorithm as sub-routine, decreasing the computational cost (expressed in number of performed products) of the conventional @math multiplication to @math . It works for generic rectangular matrices and exploits the peculiar symmetry of the resulting product matrix for sparing memory. We used the MPI paradigm to implement A @math A in parallel, and we tested its performances on a small subset of nodes of the Galileo cluster. Experiments highlight good scalability and speed-up, also thanks to minimal number of exchanged messages in the designed communication system. Parallel overhead and inherently sequential time fraction are negligible in the tested configurations. | In @cite_14 , a parallel algorithm based on Strassen's fast matrix multiplication, Communication-Avoiding Parallel Strassen (CAPS), is described. Authors present the computational and communication cost analyses of the algorithm, and show that it matches the communication lower bounds described in @cite_3 . | {
"cite_N": [
"@cite_14",
"@cite_3"
],
"mid": [
"2032839380",
"2164946224"
],
"abstract": [
"Parallel matrix multiplication is one of the most studied fundamental problems in distributed and high performance computing. We obtain a new parallel algorithm that is based on Strassen's fast matrix multiplication and minimizes communication. The algorithm outperforms all known parallel matrix multiplication algorithms, classical and Strassen-based, both asymptotically and in practice. A critical bottleneck in parallelizing Strassen's algorithm is the communication between the processors. Ballard, Demmel, Holtz, and Schwartz (SPAA '11) prove lower bounds on these communication costs, using expansion properties of the underlying computation graph. Our algorithm matches these lower bounds, and so is communication-optimal. It exhibits perfect strong scaling within the maximum possible range. Benchmarking our implementation on a Cray XT4, we obtain speedups over classical and Strassen-based algorithms ranging from 24 to 184 for a fixed matrix dimension n=94080, where the number of processors ranges from 49 to 7203. Our parallelization approach generalizes to other fast matrix multiplication algorithms.",
"The communication cost of algorithms (also known as I O-complexity) is shown to be closely related to the expansion properties of the corresponding computation graphs. We demonstrate this on Strassen's and other fast matrix multiplication algorithms, and obtain the first lower bounds on their communication costs. In the sequential case, where the processor has a fast memory of size M, too small to store three n-by-n matrices, the lower bound on the number of words moved between fast and slow memory is, for a large class of matrix multiplication algorithms, Ω( (n √M)ω0 ·M), where ω0 is the exponent in the arithmetic count (e.g., ω0 = lg 7 for Strassen, and ω0 = 3 for conventional matrix multiplication). With p parallel processors, each with fast memory of size M, the lower bound is asymptotically lower by a factor of p. These bounds are attainable both for sequential and for parallel algorithms and hence optimal."
]
} |
1902.02104 | 2913171059 | Matrix multiplication @math appears as intermediate operation during the solution of a wide set of problems. In this paper, we propose a new cache-oblivious algorithm for the @math multiplication. Our algorithm, A @math A, calls classical Strassen's algorithm as sub-routine, decreasing the computational cost (expressed in number of performed products) of the conventional @math multiplication to @math . It works for generic rectangular matrices and exploits the peculiar symmetry of the resulting product matrix for sparing memory. We used the MPI paradigm to implement A @math A in parallel, and we tested its performances on a small subset of nodes of the Galileo cluster. Experiments highlight good scalability and speed-up, also thanks to minimal number of exchanged messages in the designed communication system. Parallel overhead and inherently sequential time fraction are negligible in the tested configurations. | In this work, we consider a particular matrix multiplication, that is, the multiplication between @math and @math , where @math may have any size and shape. We exploit Strassen's algorithm, which is recursively applied to possibly rectangular matrices, exploiting the idea described in @cite_21 . | {
"cite_N": [
"@cite_21"
],
"mid": [
"1990568697"
],
"abstract": [
"Strassen's matrix multiplication (MM) has benefits with respect to any (highly tuned) implementations of MM because Strassen's reduces the total number of operations. Strassen achieved this operation reduction by replacing computationally expensive MMs with matrix additions (MAs). For architectures with simple memory hierarchies, having fewer operations directly translates into an efficient utilization of the CPU and, thus, faster execution. However, for modern architectures with complex memory hierarchies, the operations introduced by the MAs have a limited in-cache data reuse and thus poor memory-hierarchy utilization, thereby overshadowing the (improved) CPU utilization, and making Strassen's algorithm (largely) useless on its own. In this paper, we investigate the interaction between Strassen's effective performance and the memory-hierarchy organization. We show how to exploit Strassen's full potential across different architectures. We present an easy-to-use adaptive algorithm that combines a novel implementation of Strassen's idea with the MM from automatically tuned linear algebra software (ATLAS) or GotoBLAS. An additional advantage of our algorithm is that it applies to any size and shape matrices and works equally well with row or column major layout. Our implementation consists of introducing a final step in the ATLAS GotoBLAS-installation process that estimates whether or not we can achieve any additional speedup using our Strassen's adaptation algorithm. Then we install our codes, validate our estimates, and determine the specific performance. We show that, by the right combination of Strassen's with ATLAS GotoBLAS, our approach achieves up to 30 22 speed-up versus ATLAS GotoBLAS alone on modern high-performance single processors. We consider and present the complexity and the numerical analysis of our algorithm, and, finally, we show performance for 17 (uniprocessor) systems."
]
} |
1902.02202 | 2913946636 | In the group testing problem we aim to identify a small number of infected individuals within a large population. We avail ourselves to a procedure that can test a group of multiple individuals, with the test result coming out positive iff at least one individual in the group is infected. With all tests conducted in parallel, what is the least number of tests required to identify the status of all individuals? In a recent test design [ 2016] the individuals are assigned to test groups randomly, with every individual joining an equal number of groups. We pinpoint the sharp threshold for the number of tests required in this randomised design so that it is information-theoretically possible to infer the infection status of every individual. Moreover, we analyse two efficient inference algorithms. These results settle conjectures from [ 2014, 2019]. | Dorfman's original group testing scheme, intended to test the American army for syphilis, was adaptive. In a first round of tests each soldier would be allocated to precisely one test group. If the test result came out negative, none of the soldiers in the group were infected. In a second round the soldiers whose group tested positive would then be tested individually. Of course, Dorfman's scheme was not information-theoretically optimal. An optimal adaptive scheme that involves several test stages, with the tests conducted in the present stage governed by the results from the previous stages, is known @cite_38 . In the adaptive scenario the information-theoretic threshold works out to be @math . The lower bound, i.e., that no adaptive design gets by with @math tests, follows from a very simple information-theoretic consideration. Namely, with a total of @math tests at our disposal there are merely @math possible test outcomes, and we need this number to exceed the count @math of possible vectors @math . | {
"cite_N": [
"@cite_38"
],
"mid": [
"2226661047"
],
"abstract": [
"In the (d,n) group testing problem n items have to be identified as either good or defective and the number of defective items is known to be d. A test on an arbitrary group (subset) of items reveals either that all items in the group are good or that at least one of the items is defective, but not how many or which items are defective. We present a new algorithm which in the worst case needs less than @math tests more than the information lower bound @math for n d≥2. For n d≥38, the difference decreases to less than @math tests. For d≥10, this is a considerable improvement over the d−1 additional tests given for the best previously known algorithm by Hwang. We conjecture that the behaviour for large n and d of the difference is optimal for @math . This implies that the @math tests per defective given in the bound above are the best possible."
]
} |
1902.02202 | 2913946636 | In the group testing problem we aim to identify a small number of infected individuals within a large population. We avail ourselves to a procedure that can test a group of multiple individuals, with the test result coming out positive iff at least one individual in the group is infected. With all tests conducted in parallel, what is the least number of tests required to identify the status of all individuals? In a recent test design [ 2016] the individuals are assigned to test groups randomly, with every individual joining an equal number of groups. We pinpoint the sharp threshold for the number of tests required in this randomised design so that it is information-theoretically possible to infer the infection status of every individual. Moreover, we analyse two efficient inference algorithms. These results settle conjectures from [ 2014, 2019]. | The most interesting regime for the group testing problem is when the number @math of infected individuals scales as a power @math of the entire population. Mathematically this is because in the linear regime @math the optimal strategy is to perform @math individual tests @cite_6 . Thus, for @math linear in @math there is nothing interesting to do. But the sublinear case is also of practical relevance, as witnessed by Heap's law in epidemiology @cite_42 or biological applications @cite_23 . | {
"cite_N": [
"@cite_42",
"@cite_23",
"@cite_6"
],
"mid": [
"2085816609",
"",
"2784892334"
],
"abstract": [
"Power-law distributions have been observed in a wide variety of areas. To our knowledge however, there has been no systematic observation of power-law distributions in chemoinformatics. Here, we present several examples of power-law distributions arising from the features of small, organic molecules. The distributions of rigid segments and ring systems, the distributions of molecular paths and circular substructures, and the sizes of molecular similarity clusters all show linear trends on log-log rank frequency plots, suggesting underlying power-law distributions. The number of unique features also follow Heaps'-like laws. The characteristic exponents of the power-laws lie in the 1.5-3 range, consistently with the exponents observed in other power-law phenomena. The power-law nature of these distributions leads to several applications including the prediction of the growth of available data through Heaps' law and the optimal allocation of experimental or computational resources via the 80 20 rule. More importantly, we also show how the power-laws can be leveraged to efficiently compress chemical fingerprints in a lossless manner, useful for the improved storage and retrieval of molecules in large chemical databases.",
"",
"We consider nonadaptive probabilistic group testing in the linear regime, where each of n items is defective independently with probability p in (0,1), where p is a constant independent of n. We show that testing each item individually is optimal, in the sense that with fewer than n tests the error probability is bounded away from zero."
]
} |
1902.02067 | 2914709529 | We demonstrated that Non-Maximum Suppression (NMS), which is commonly used in object detection tasks to filter redundant detection results, is no longer secure. NMS has always been an integral part of object detection algorithms. Currently, Fully Convolutional Network (FCN) is widely used as the backbone architecture of object detection models. Given an input instance, since FCN generates end-to-end detection results in a single stage, it outputs a large number of raw detection boxes. These bounding boxes are then filtered by NMS to make the final detection results. In this paper, we propose an adversarial example attack which triggers malfunctioning of NMS in the end-to-end object detection models. Our attack, namely Daedalus, manipulates the detection box regression values to compress the dimensions of detection boxes. Henceforth, NMS will no longer be able to filter redundant detection boxes correctly. And as a result, the final detection output contains extremely dense false positives. This can be fatal for many object detection applications such as autonomous vehicle and smart manufacturing industry. Our attack can be applied to different end-to-end object detection models. Furthermore, we suggest crafting robust adversarial examples by using an ensemble of popular detection models as the substitutes. Considering that model reusing is commonly seen in real-world object detection scenarios, Daedalus examples crafted based on an ensemble of substitutes can launch attacks without knowing the details of the victim models. Our experiments demonstrate that our attack effectively stops NMS from filtering redundant bounding boxes. As the evaluation results suggest, Daedalus increases the false positive rate in detection results to 99.9 and reduces the mean average precision scores to 0, while maintaining a low cost of distortion on the original inputs. 
| Methods for crafting adversarial examples against classification tasks have been extensively studied. The basic algorithms for generating examples can be divided into gradient-based attacks and forward-derivative-based attacks. Gradient-based attacks find adversarial examples by minimising the cost of the adversarial objective set by the attacker, based on gradient descent. For example, L-BFGS was adopted to optimise the adversarial objective functions for generating adversarial examples @cite_37 @cite_0 . The fast gradient sign method was proposed to rapidly find first-order Taylor polynomial approximations of adversarial examples based on gradient descent or gradient sign descent @cite_57 . The basic iterative method relies on multiple steps of gradient descent to generate adversarial examples @cite_51 . Deepfool computes adversarial gradients based on the local linearity of neural networks @cite_12 . Forward-derivative-based attacks perturb salient features based on the Jacobian between the model inputs and outputs @cite_5 . Building on these algorithms, evolved attacks use different distortion metrics to make adversarial examples more imperceptible to human beings @cite_39 @cite_41 . Furthermore, methods have been proposed to craft adversarial examples for data with discrete features (e.g. text) @cite_24 @cite_26 @cite_56 . | {
"cite_N": [
"@cite_37",
"@cite_26",
"@cite_41",
"@cite_39",
"@cite_0",
"@cite_57",
"@cite_24",
"@cite_56",
"@cite_5",
"@cite_51",
"@cite_12"
],
"mid": [
"1673923490",
"",
"",
"2964077693",
"",
"2963207607",
"2963969878",
"",
"2180612164",
"2460937040",
"2243397390"
],
"abstract": [
"Deep neural networks are highly expressive models that have recently achieved state of the art performance on speech and visual recognition tasks. While their expressiveness is the reason they succeed, it also causes them to learn uninterpretable solutions that could have counter-intuitive properties. In this paper we report two such properties. First, we find that there is no distinction between individual high level units and random linear combinations of high level units, according to various methods of unit analysis. It suggests that it is the space, rather than the individual units, that contains of the semantic information in the high layers of neural networks. Second, we find that deep neural networks learn input-output mappings that are fairly discontinuous to a significant extend. We can cause the network to misclassify an image by applying a certain imperceptible perturbation, which is found by maximizing the network's prediction error. In addition, the specific nature of these perturbations is not a random artifact of learning: the same perturbation can cause a different network, that was trained on a different subset of the dataset, to misclassify the same input.",
"",
"",
"Deep neural networks are known to be vulnerable to adversarial examples, i.e., images that are maliciously perturbed to fool the model. Generating adversarial examples has been mostly limited to finding small perturbations that maximize the model prediction error. Such images, however, contain artificial perturbations that make them somewhat distinguishable from natural images. This property is used by several defense methods to counter adversarial examples by applying denoising filters or training the model to be robust to small perturbations. In this paper, we introduce a new class of adversarial examples, namely \"Semantic Adversarial Examples,\" as images that are arbitrarily perturbed to fool the model, but in such a way that the modified image semantically represents the same object as the original image. We formulate the problem of generating such images as a constrained optimization problem and develop an adversarial transformation based on the shape bias property of human cognitive system. In our method, we generate adversarial images by first converting the RGB image into the HSV (Hue, Saturation and Value) color space and then randomly shifting the Hue and Saturation components, while keeping the Value component the same. Our experimental results on CIFAR10 dataset show that the accuracy of VGG16 network on adversarial color-shifted images is 5.7 .",
"",
"Abstract: Several machine learning models, including neural networks, consistently misclassify adversarial examples---inputs formed by applying small but intentionally worst-case perturbations to examples from the dataset, such that the perturbed input results in the model outputting an incorrect answer with high confidence. Early attempts at explaining this phenomenon focused on nonlinearity and overfitting. We argue instead that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature. This explanation is supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets. Moreover, this view yields a simple and fast method of generating adversarial examples. Using this approach to provide examples for adversarial training, we reduce the test set error of a maxout network on the MNIST dataset.",
"",
"",
"Deep learning takes advantage of large datasets and computationally efficient training algorithms to outperform other approaches at various machine learning tasks. However, imperfections in the training phase of deep neural networks make them vulnerable to adversarial samples: inputs crafted by adversaries with the intent of causing deep neural networks to misclassify. In this work, we formalize the space of adversaries against deep neural networks (DNNs) and introduce a novel class of algorithms to craft adversarial samples based on a precise understanding of the mapping between inputs and outputs of DNNs. In an application to computer vision, we show that our algorithms can reliably produce samples correctly classified by human subjects but misclassified in specific targets by a DNN with a 97 adversarial success rate while only modifying on average 4.02 of the input features per sample. We then evaluate the vulnerability of different sample classes to adversarial perturbations by defining a hardness measure. Finally, we describe preliminary work outlining defenses against adversarial samples by defining a predictive measure of distance between a benign input and a target classification.",
"Most existing machine learning classifiers are highly vulnerable to adversarial examples. An adversarial example is a sample of input data which has been modified very slightly in a way that is intended to cause a machine learning classifier to misclassify it. In many cases, these modifications can be so subtle that a human observer does not even notice the modification at all, yet the classifier still makes a mistake. Adversarial examples pose security concerns because they could be used to perform an attack on machine learning systems, even if the adversary has no access to the underlying model. Up to now, all previous work have assumed a threat model in which the adversary can feed data directly into the machine learning classifier. This is not always the case for systems operating in the physical world, for example those which are using signals from cameras and other sensors as an input. This paper shows that even in such physical world scenarios, machine learning systems are vulnerable to adversarial examples. We demonstrate this by feeding adversarial images obtained from cell-phone camera to an ImageNet Inception classifier and measuring the classification accuracy of the system. We find that a large fraction of adversarial examples are classified incorrectly even when perceived through the camera.",
"State-of-the-art deep neural networks have achieved impressive results on many image classification tasks. However, these same architectures have been shown to be unstable to small, well sought, perturbations of the images. Despite the importance of this phenomenon, no effective methods have been proposed to accurately compute the robustness of state-of-the-art deep classifiers to such perturbations on large-scale datasets. In this paper, we fill this gap and propose the DeepFool algorithm to efficiently compute perturbations that fool deep networks, and thus reliably quantify the robustness of these classifiers. Extensive experimental results show that our approach outperforms recent methods in the task of computing adversarial perturbations and making classifiers more robust.1"
]
} |
1902.02067 | 2914709529 | We demonstrated that Non-Maximum Suppression (NMS), which is commonly used in object detection tasks to filter redundant detection results, is no longer secure. NMS has always been an integral part of object detection algorithms. Currently, Fully Convolutional Network (FCN) is widely used as the backbone architecture of object detection models. Given an input instance, since FCN generates end-to-end detection results in a single stage, it outputs a large number of raw detection boxes. These bounding boxes are then filtered by NMS to make the final detection results. In this paper, we propose an adversarial example attack which triggers malfunctioning of NMS in the end-to-end object detection models. Our attack, namely Daedalus, manipulates the detection box regression values to compress the dimensions of detection boxes. Henceforth, NMS will no longer be able to filter redundant detection boxes correctly. And as a result, the final detection output contains extremely dense false positives. This can be fatal for many object detection applications such as autonomous vehicle and smart manufacturing industry. Our attack can be applied to different end-to-end object detection models. Furthermore, we suggest crafting robust adversarial examples by using an ensemble of popular detection models as the substitutes. Considering that model reusing is commonly seen in real-world object detection scenarios, Daedalus examples crafted based on an ensemble of substitutes can launch attacks without knowing the details of the victim models. Our experiments demonstrate that our attack effectively stops NMS from filtering redundant bounding boxes. As the evaluation results suggest, Daedalus increases the false positive rate in detection results to 99.9 and reduces the mean average precision scores to 0, while maintaining a low cost of distortion on the original inputs. 
| Beyond the above attacks, some methods can generate robust adversarial examples which fool real-world classifiers and detectors. For example, the dense adversarial generation (DAG) algorithm was proposed to craft adversarial examples for object detection and segmentation @cite_13 . Robust adversarial examples that cause misclassification in object detectors have also been crafted @cite_7 . @math was proposed to craft adversarial examples of real-world road signs @cite_8 . Subsequently, @math was extended to attack YOLO-v2 @cite_53 . Recently, the expectation over transformation (EoT) algorithm was proposed to synthesise robust adversarial examples @cite_49 @cite_2 . However, current adversarial examples only focus on triggering misclassification in machine learning classifiers and object detectors. Our Daedalus attack creates adversarial examples that lead to malfunctioning of NMS, which is different from all the previous attacks. | {
"cite_N": [
"@cite_7",
"@cite_8",
"@cite_53",
"@cite_49",
"@cite_2",
"@cite_13"
],
"mid": [
"2775467454",
"2741933435",
"2884519271",
"2736899637",
"",
"2604505099"
],
"abstract": [
"An adversarial example is an example that has been adjusted to produce a wrong label when presented to a system at test time. To date, adversarial example constructions have been demonstrated for classifiers, but not for detectors. If adversarial examples that could fool a detector exist, they could be used to (for example) maliciously create security hazards on roads populated with smart vehicles. In this paper, we demonstrate a construction that successfully fools two standard detectors, Faster RCNN and YOLO. The existence of such examples is surprising, as attacking a classifier is very different from attacking a detector, and that the structure of detectors - which must search for their own bounding box, and which cannot estimate that box very accurately - makes it quite likely that adversarial patterns are strongly disrupted. We show that our construction produces adversarial examples that generalize well across sequences digitally, even though large perturbations are needed. We also show that our construction yields physical objects that are adversarial.",
"Deep neural network-based classifiers are known to be vulnerable to adversarial examples that can fool them into misclassifying their input through the addition of small-magnitude perturbations. However, recent studies have demonstrated that such adversarial examples are not very effective in the physical world--they either completely fail to cause misclassification or only work in restricted cases where a relatively complex image is perturbed and printed on paper. In this paper we propose a new attack algorithm--Robust Physical Perturbations (RP2)-- that generates perturbations by taking images under different conditions into account. Our algorithm can create spatially-constrained perturbations that mimic vandalism or art to reduce the likelihood of detection by a casual observer. We show that adversarial examples generated by RP2 achieve high success rates under various conditions for real road sign recognition by using an evaluation methodology that captures physical world conditions. We physically realized and evaluated two attacks, one that causes a Stop sign to be misclassified as a Speed Limit sign in 100 of the testing conditions, and one that causes a Right Turn sign to be misclassified as either a Stop or Added Lane sign in 100 of the testing conditions.",
"Deep neural networks (DNNs) are vulnerable to adversarial examples-maliciously crafted inputs that cause DNNs to make incorrect predictions. Recent work has shown that these attacks generalize to the physical domain, to create perturbations on physical objects that fool image classifiers under a variety of real-world conditions. Such attacks pose a risk to deep learning models used in safety-critical cyber-physical systems. In this work, we extend physical attacks to more challenging object detection models, a broader class of deep learning algorithms widely used to detect and label multiple objects within a scene. Improving upon a previous physical attack on image classifiers, we create perturbed physical objects that are either ignored or mislabeled by object detection models. We implement a Disappearance Attack, in which we cause a Stop sign to \"disappear\" according to the detector-either by covering thesign with an adversarial Stop sign poster, or by adding adversarial stickers onto the sign. In a video recorded in a controlled lab environment, the state-of-the-art YOLOv2 detector failed to recognize these adversarial Stop signs in over 85 of the video frames. In an outdoor experiment, YOLO was fooled by the poster and sticker attacks in 72.5 and 63.5 of the video frames respectively. We also use Faster R-CNN, a different object detection model, to demonstrate the transferability of our adversarial perturbations. The created poster perturbation is able to fool Faster R-CNN in 85.9 of the video frames in a controlled lab environment, and 40.2 of the video frames in an outdoor environment. Finally, we present preliminary results with a new Creation Attack, where in innocuous physical stickers fool a model into detecting nonexistent objects.",
"Standard methods for generating adversarial examples for neural networks do not consistently fool neural network classifiers in the physical world due to a combination of viewpoint shifts, camera noise, and other natural transformations, limiting their relevance to real-world systems. We demonstrate the existence of robust 3D adversarial objects, and we present the first algorithm for synthesizing examples that are adversarial over a chosen distribution of transformations. We synthesize two-dimensional adversarial images that are robust to noise, distortion, and affine transformation. We apply our algorithm to complex three-dimensional objects, using 3D-printing to manufacture the first physical adversarial objects. Our results demonstrate the existence of 3D adversarial objects in the physical world.",
"",
"It has been well demonstrated that adversarial examples, i.e., natural images with visually imperceptible perturbations added, cause deep networks to fail on image classification. In this paper, we extend adversarial examples to semantic segmentation and object detection which are much more difficult. Our observation is that both segmentation and detection are based on classifying multiple targets on an image (e.g., the target is a pixel or a receptive field in segmentation, and an object proposal in detection). This inspires us to optimize a loss function over a set of targets for generating adversarial perturbations. Based on this, we propose a novel algorithm named Dense Adversary Generation (DAG), which applies to the state-of-the-art networks for segmentation and detection. We find that the adversarial perturbations can be transferred across networks with different training data, based on different architectures, and even for different recognition tasks. In particular, the transfer ability across networks with the same architecture is more significant than in other cases. Besides, we show that summing up heterogeneous perturbations often leads to better transfer performance, which provides an effective method of black-box adversarial attack."
]
} |
1902.02205 | 2911331204 | Robust localisation and identification of vertebrae, jointly termed vertebrae labelling, in computed tomography (CT) images is an essential component of automated spine analysis. Current approaches for this task mostly work with 3D scans and are comprised of a sequence of multiple networks. Contrarily, our approach relies only on 2D reformations, enabling us to design an end-to-end trainable, standalone network. Our contribution includes: (1) Inspired by the workflow of human experts, a novel butterfly-shaped network architecture (termed Btrfly net) that efficiently combines information across sufficiently-informative sagittal and coronal reformations. (2) Two adversarial training regimes that encode an anatomical prior of the spine's shape into the Btrfly net, each enforcing the prior in a distinct manner. We evaluate our approach on a public benchmarking dataset of 302 CT scans achieving a performance comparable to state-of-art methods (identification rate of @math 88 ) without any post-processing stages. Addressing its translation to clinical settings, an in-house dataset of 65 CT scans with a higher data variability is introduced, where we discuss refinements that render our approach robust to such scenarios. | @PARASPLIT Recent work in @cite_4 proposes encoding (anatomical) segmentation priors into an FCN by learning the shape representation using an auto-encoder (AE) alongside the primary segmentation network. The AE, once trained, projects a new prediction onto the space of the learnt, true segmentations, thus 'repeating' the shape. Different from this, in @cite_12 , the encoder of a similarly pre-trained AE is used to provide a projection loss (euclidean distance in latent space) to train the primary network. | {
"cite_N": [
"@cite_4",
"@cite_12"
],
"mid": [
"2753924563",
"2620296437"
],
"abstract": [
"Semantic segmentation has been popularly addressed using Fully convolutional networks (FCN) (e.g. U-Net) with impressive results and has been the forerunner in recent segmentation challenges. However, FCN approaches do not necessarily incorporate local geometry such as smoothness and shape, whereas traditional image analysis techniques have benefitted greatly by them in solving segmentation and tracking problems. In this work, we address the problem of incorporating shape priors within the FCN segmentation framework. We demonstrate the utility of such a shape prior in robust handling of scenarios such as loss of contrast and artifacts. Our experiments show ( 5 ) improvement over U-Net for the challenging problem of ultrasound kidney segmentation.",
"Incorporation of prior knowledge about organ shape and location is key to improve performance of image analysis approaches. In particular, priors can be useful in cases where images are corrupted and contain artefacts due to limitations in image acquisition. The highly constrained nature of anatomical objects can be well captured with learning-based techniques. However, in most recent and promising techniques such as CNN-based segmentation it is not obvious how to incorporate such prior knowledge. State-of-the-art methods operate as pixel-wise classifiers where the training objectives do not incorporate the structure and inter-dependencies of the output. To overcome this limitation, we propose a generic training strategy that incorporates anatomical prior knowledge into CNNs through a new regularisation model, which is trained end-to-end. The new framework encourages models to follow the global anatomical properties of the underlying anatomy ( e.g. shape, label structure) via learnt non-linear representations of the shape. We show that the proposed approach can be easily adapted to different analysis tasks ( e.g. image enhancement, segmentation) and improve the prediction accuracy of the state-of-the-art models. The applicability of our approach is shown on multi-modal cardiac data sets and public benchmarks. In addition, we demonstrate how the learnt deep models of 3-D shapes can be interpreted and used as biomarkers for classification of cardiac pathologies."
]
} |
1902.02205 | 2911331204 | Robust localisation and identification of vertebrae, jointly termed vertebrae labelling, in computed tomography (CT) images is an essential component of automated spine analysis. Current approaches for this task mostly work with 3D scans and are comprised of a sequence of multiple networks. Contrarily, our approach relies only on 2D reformations, enabling us to design an end-to-end trainable, standalone network. Our contribution includes: (1) Inspired by the workflow of human experts, a novel butterfly-shaped network architecture (termed Btrfly net) that efficiently combines information across sufficiently-informative sagittal and coronal reformations. (2) Two adversarial training regimes that encode an anatomical prior of the spine's shape into the Btrfly net, each enforcing the prior in a distinct manner. We evaluate our approach on a public benchmarking dataset of 302 CT scans achieving a performance comparable to state-of-art methods (identification rate of @math 88 ) without any post-processing stages. Addressing its translation to clinical settings, an in-house dataset of 65 CT scans with a higher data variability is introduced, where we discuss refinements that render our approach robust to such scenarios. | Notice the parallels that can be drawn between these prior-encoding approaches and generative adversarial networks (GAN) @cite_10 , @cite_0 . Both of them have two networks, a primary network ( @math generator) generating a prediction and an auxiliary auto-encoding network ( @math discriminator) working on this prediction. Architecturally closer to the AE-like secondary network is the energy-based GAN proposed by @cite_14 , which uses an AE as a discriminator and its reconstruction energy as the adversarial signal. | {
"cite_N": [
"@cite_0",
"@cite_14",
"@cite_10"
],
"mid": [
"",
"2962775818",
"2099471712"
],
"abstract": [
"",
"We introduce the \"Energy-based Generative Adversarial Network\" model (EBGAN) which views the discriminator as an energy function that attributes low energies to the regions near the data manifold and higher energies to other regions. Similar to the probabilistic GANs, a generator is seen as being trained to produce contrastive samples with minimal energies, while the discriminator is trained to assign high energies to these generated samples. Viewing the discriminator as an energy function allows to use a wide variety of architectures and loss functionals in addition to the usual binary classifier with logistic output. Among them, we show one instantiation of EBGAN framework as using an auto-encoder architecture, with the energy being the reconstruction error, in place of the discriminator. We show that this form of EBGAN exhibits more stable behavior than regular GANs during training. We also show that a single-scale architecture can be trained to generate high-resolution images.",
"We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples."
]
} |
1902.02308 | 2914463281 | Predicting flood for any location at times of extreme storms is a longstanding problem that has utmost importance in emergency management. Conventional methods that aim to predict water levels in streams use advanced hydrological models still lack of giving accurate forecasts everywhere. This study aims to explore artificial deep neural networks' performance on flood prediction. While providing models that can be used in forecasting stream stage, this paper presents a dataset that focuses on the connectivity of data points on river networks. It also shows that neural networks can be very helpful in time-series forecasting as in flood events, and support improving existing models through data assimilation. | Studies that use deep neural networks for time-series data show strong results and motivate the broader use of neural network architectures for time-series tasks. Researchers show that neural networks can be used for traffic speed prediction @cite_4 @cite_0 @cite_26 , taxi demand prediction @cite_10 , and financial time-series prediction @cite_21 @cite_25 . Traffic speed prediction is a task very similar to flood prediction, since both rely on changes at connected points in a network. | {
"cite_N": [
"@cite_26",
"@cite_4",
"@cite_21",
"@cite_0",
"@cite_10",
"@cite_25"
],
"mid": [
"",
"2583466634",
"2296438605",
"",
"2788134583",
"2734777338"
],
"abstract": [
"",
"Traffic speed prediction is a long-standing and critically important topic in the area of Intelligent Transportation Systems (ITS). Recent years have witnessed the encouraging potentials of deep neural networks for real-life applications of various domains. Traffic speed prediction, however, is still in its initial stage without making full use of spatio-temporal traffic information. In light of this, in this paper, we propose a deep learning method with an Error-feedback Recurrent Convolutional Neural Network structure (eRCNN) for continuous traffic speed prediction. By integrating the spatio-temporal traffic speeds of contiguous road segments as an input matrix, eRCNN explicitly leverages the implicit correlations among nearby segments to improve the predictive accuracy. By further introducing separate error feedback neurons to the recurrent layer, eRCNN learns from prediction errors so as to meet predictive challenges rising from abrupt traffic events such as morning peaks and traffic accidents. Extensive experiments on real-life speed data of taxis running on the 2nd and 3rd ring roads of Beijing city demonstrate the strong predictive power of eRCNN in comparison to some state-of-the-art competitors. The necessity of weight pre-training using a transfer learning notion has also been testified. More interestingly, we design a novel influence function based on the deep learning model, and showcase how to leverage it to recognize the congestion sources of the ring roads in Beijing.",
"We propose a deep learning method for event-driven stock market prediction. First, events are extracted from news text, and represented as dense vectors, trained using a novel neural tensor network. Second, a deep convolutional neural network is used to model both short-term and long-term influences of events on stock price movements. Experimental results show that our model can achieve nearly 6 improvements on S&P 500 index prediction and individual stock prediction, respectively, compared to state-of-the-art baseline methods. In addition, market simulation results show that our system is more capable of making profits than previously reported systems trained on S&P 500 stock historical data.",
"",
"Taxi demand prediction is an important building block to enabling intelligent transportation systems in a smart city. An accurate prediction model can help the city pre-allocate resources to meet travel demand and to reduce empty taxis on streets which waste energy and worsen the traffic congestion. With the increasing popularity of taxi requesting services such as Uber and Didi Chuxing (in China), we are able to collect large-scale taxi demand data continuously. How to utilize such big data to improve the demand prediction is an interesting and critical real-world problem. Traditional demand prediction methods mostly rely on time series forecasting techniques, which fail to model the complex non-linear spatial and temporal relations. Recent advances in deep learning have shown superior performance on traditionally challenging tasks such as image classification by learning the complex features and correlations from large-scale data. This breakthrough has inspired researchers to explore deep learning techniques on traffic prediction problems. However, existing methods on traffic prediction have only considered spatial relation (e.g., using CNN) or temporal relation (e.g., using LSTM) independently. We propose a Deep Multi-View Spatial-Temporal Network (DMVST-Net) framework to model both spatial and temporal relations. Specifically, our proposed model consists of three views: temporal view (modeling correlations between future demand values with near time points via LSTM), spatial view (modeling local spatial correlation via local CNN), and semantic view (modeling correlations among regions sharing similar temporal patterns). Experiments on large-scale real taxi demand data demonstrate effectiveness of our approach over state-of-the-art methods.",
"The application of deep learning approaches to finance has received a great deal of attention from both investors and researchers. This study presents a novel deep learning framework where wavelet transforms (WT), stacked autoencoders (SAEs) and long-short term memory (LSTM) are combined for stock price forecasting. The SAEs for hierarchically extracted deep features is introduced into stock price forecasting for the first time. The deep learning framework comprises three stages. First, the stock price time series is decomposed by WT to eliminate noise. Second, SAEs is applied to generate deep high-level features for predicting the stock price. Third, high-level denoising features are fed into LSTM to forecast the next day’s closing price. Six market indices and their corresponding index futures are chosen to examine the performance of the proposed model. Results show that the proposed model outperforms other similar models in both predictive accuracy and profitability performance."
]
} |
1902.02308 | 2914463281 | Predicting flood for any location at times of extreme storms is a longstanding problem that has utmost importance in emergency management. Conventional methods that aim to predict water levels in streams use advanced hydrological models still lack of giving accurate forecasts everywhere. This study aims to explore artificial deep neural networks' performance on flood prediction. While providing models that can be used in forecasting stream stage, this paper presents a dataset that focuses on the connectivity of data points on river networks. It also shows that neural networks can be very helpful in time-series forecasting as in flood events, and support improving existing models through data assimilation. | In this paper, we propose a flood prediction benchmark dataset for future applications in machine learning as well as a scalable approach to forecasting river stage for individual survey points on rivers. The approach takes into account the historical stage data of survey points on selected upstream locations, as well as precipitation data. This approach is both decentralized and doesn't need any historical data from unrelated survey points and can be used in real-time. Recurrent Neural Networks (RNNs), in particular, Gated recurrent unit (GRU) Networks are utilized throughout this study. We also show that this approach presents satisfiable results when it's applied for the state of Iowa as a proof of concept. The data are gathered from the Iowa Flood Center (IFC) and United States Geological Survey (USGS) sensors on the rivers within the state of Iowa. Findings of this project and the deep neural network model will benefit operational cyber platforms @cite_6 , intelligent knowledge systems @cite_13 , and watershed information and visualization tools @cite_27 @cite_14 @cite_15 with enhanced river stage forecasting. | {
"cite_N": [
"@cite_14",
"@cite_6",
"@cite_27",
"@cite_15",
"@cite_13"
],
"mid": [
"2166277006",
"2773719488",
"2001193916",
"65683889",
"2811064840"
],
"abstract": [
"This paper addresses the challenge of social legitimacy issues for the technical solutions to environmental problems, and the role of Information Systems to resolve such issues. The paper outlines the Georgia Watershed Information System (GWIS), a comprehensive environmental information system, and one of its scientific visualization interfaces. This paper presents a novel scientific visualization tool based on unique components and features of GWIS. The visualization tool uses data and mapping services of GWIS to create dynamic visualizations and animation of water quality observations. A case study is demonstrated for visualizing water quality observations for dry and wet weather conditions on urban Weracoba Creek (Colombus) and its BMP (Best Management Practice), which might help to deal with issues of storm water (storm sewage) pollution control and management. The results show that the scientific visualization interface might support the prospective role of Information Systems in trying to resolve issues of \"social legitimacy\" surrounding the technical proposals with respect to re-engineering the city's infrastructure.",
"ABSTRACTThis article presents the vision, implementation, and case studies of the Iowa Flood Information System (IFIS) towards the vision for next-generation decision support systems for flooding. ...",
"AbstractIn the spring of 2013, NASA conducted a field campaign known as Iowa Flood Studies (IFloodS) as part of the Ground Validation (GV) program for the Global Precipitation Measurement (GPM) mission. The purpose of IFloodS was to enhance the understanding of flood-related, space-based observations of precipitation processes in events that transpire worldwide. NASA used a number of scientific instruments such as ground-based weather radars, rain and soil moisture gauges, stream gauges, and disdrometers to monitor rainfall events in Iowa. This article presents the cyberinfrastructure tools and systems that supported the planning, reporting, and management of the field campaign and that allow these data and models to be accessed, evaluated, and shared for research. The authors describe the collaborative informatics tools, which are suitable for the network design, that were used to select the locations in which to place the instruments. How the authors used information technology tools for instrument moni...",
"Abstract. In principle, water quality can be managed through an integrative perspective on watersheds, facilitated by a web-based information system. This paper describes a prototype of a web-based Information System for Georgia watersheds called ‘GWIS: Georgia Watershed Information System’. The principal functionality of the system is to provide a platform for integrating state-wide efforts in environmental information collection, collation, storage, analysis, retrieval, and dissemination to all potential stakeholders. Several data management, modeling, visualization, mapping and resource management tools for watersheds, as well as interfaces for integration across diverse and dispersed data sources, are included in the system. As its initial point of departure — to provide substantial and specific content — GWIS has been populated with the high-volume high-quality (HVHQ; near continuous) water quality data acquired during field monitoring campaigns over the past 11 years with the Environmental Process Control Laboratory (EPCL) of the University of Georgia. Information Systems (EIS) modeling with artificial in",
"Abstract Communities are at risk from extreme events and natural disasters that can lead to dangerous situations for residents. Improving resilience by helping people learn how to better prepare for, recover from, and adapt to disasters is critical to reduce the impacts of these extreme events. This project presents an intelligent system, Flood AI, designed to improve societ al preparedness for flooding by providing a knowledge engine that uses voice recognition, artificial intelligence, and natural language processing based on a generalized ontology for disasters with a primary focus on flooding. The knowledge engine uses flood ontology to connect user input to relevant knowledge discovery channels on flooding by developing a data acquisition and processing framework using environmental observations, forecast models, and knowledge bases. The framework’s communication channels include web-based systems, agent-based chatbots, smartphone applications, automated web workflows, and smart home devices, opening the knowledge discovery for flooding to many unique use cases."
]
} |
1902.02263 | 2963964591 | We present a TTS neural network that is able to produce speech in multiple languages. The proposed network is able to transfer a voice, which was presented as a sample in a source language, into one of several target languages. Training is done without using matching or parallel data, i.e., without samples of the same speaker in multiple languages, making the method much more applicable. The conversion is based on learning a polyglot network that has multiple per-language sub-networks and adding loss terms that preserve the speaker’s identity in multiple languages. We evaluate the proposed polyglot neural network for three languages with a total of more than 400 speakers and demonstrate convincing conversion capabilities. | The recent neural TTS systems are sequence to sequence methods, where the underlying methods differ. Wavenet @cite_15 employs CNNs with dilated convolutions. Char2Wav @cite_23 employs RNNs, the original Tacotron method contains multiple RNNs, convolutions and a highway network @cite_17 . The subsequent Tacotron2 @cite_18 method replaced the highway networks with RNNs and directly predicts the residuals. Deep Voice (DV) @cite_9 and DV2 @cite_26 employ bidirectional RNNs, multilayer fully connected networks and residual connections. DV3 @cite_2 switched the architecture to a gated convolutional sequence to sequence architecture @cite_19 and also incorporated the key-value attention mechanism of @cite_22 . The VoiceLoop @cite_13 method is based on a specific type of RNN, in which a shifting buffer is used to maintain the context. | {
"cite_N": [
"@cite_18",
"@cite_26",
"@cite_22",
"@cite_9",
"@cite_19",
"@cite_23",
"@cite_2",
"@cite_15",
"@cite_13",
"@cite_17"
],
"mid": [
"2964243274",
"",
"2963403868",
"",
"2964265128",
"2901997113",
"",
"2519091744",
"2963534259",
"1026270304"
],
"abstract": [
"This paper describes Tacotron 2, a neural network architecture for speech synthesis directly from text. The system is composed of a recurrent sequence-to-sequence feature prediction network that maps character embeddings to mel-scale spectrograms, followed by a modified WaveNet model acting as a vocoder to synthesize time-domain waveforms from those spectrograms. Our model achieves a mean opinion score (MOS) of 4.53 comparable to a MOS of 4.58 for professionally recorded speech. To validate our design choices, we present ablation studies of key components of our system and evaluate the impact of using mel spectrograms as the conditioning input to WaveNet instead of linguistic, duration, and @math features. We further show that using this compact acoustic intermediate representation allows for a significant reduction in the size of the WaveNet architecture.",
"",
"The dominant sequence transduction models are based on complex recurrent or convolutional neural networks that include an encoder and a decoder. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles, by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.0 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature.",
"",
"The prevalent approach to sequence to sequence learning maps an input sequence to a variable length output sequence via recurrent neural networks. We introduce an architecture based entirely on convolutional neural networks. Compared to recurrent models, computations over all elements can be fully parallelized during training to better exploit the GPU hardware and optimization is easier since the number of non-linearities is fixed and independent of the input length. Our use of gated linear units eases gradient propagation and we equip each decoder layer with a separate attention module. We outperform the accuracy of the deep LSTM setup of (2016) on both WMT'14 English-German and WMT'14 English-French translation at an order of magnitude faster speed, both on GPU and CPU.*",
"",
"",
"",
"We present a new neural text to speech (TTS) method that is able to transform text to speech in voices that are sampled in the wild. Unlike other systems, our solution is able to deal with unconstrained voice samples and without requiring aligned phonemes or linguistic features. The network architecture is simpler than those in the existing literature and is based on a novel shifting buffer working memory. The same buffer is used for estimating the attention, computing the output audio, and for updating the buffer itself. The input sentence is encoded using a context-free lookup table that contains one entry per character or phoneme. The speakers are similarly represented by a short vector that can also be fitted to new identities, even with only a few samples. Variability in the generated speech is achieved by priming the buffer prior to generating the audio. Experimental results on several datasets demonstrate convincing capabilities, making TTS accessible to a wider range of applications. In order to promote reproducibility, we release our source code and models.",
"Theoretical and empirical evidence indicates that the depth of neural networks is crucial for their success. However, training becomes more difficult as depth increases, and training of very deep networks remains an open problem. Here we introduce a new architecture designed to overcome this. Our so-called highway networks allow unimpeded information flow across many layers on information highways. They are inspired by Long Short-Term Memory recurrent networks and use adaptive gating units to regulate the information flow. Even with hundreds of layers, highway networks can be trained directly through simple gradient descent. This enables the study of extremely deep and efficient architectures."
]
} |
1902.02263 | 2963964591 | We present a TTS neural network that is able to produce speech in multiple languages. The proposed network is able to transfer a voice, which was presented as a sample in a source language, into one of several target languages. Training is done without using matching or parallel data, i.e., without samples of the same speaker in multiple languages, making the method much more applicable. The conversion is based on learning a polyglot network that has multiple per-language sub-networks and adding loss terms that preserve the speaker’s identity in multiple languages. We evaluate the proposed polyglot neural network for three languages with a total of more than 400 speakers and demonstrate convincing conversion capabilities. | @cite_5 , a neural multilingual TTS is proposed. A deep neural network is trained on samples from a pair of languages -- English and Mandarin. Three speakers are recorded, each speaking both languages. In the last section of that paper, a neural polyglot TTS is discussed and demonstrated in a specific setting: the Mandarin samples of one of the speakers were removed from the training set, and a polyglot synthesis in Mandarin of this speaker was generated. Our work does not employ shared speakers among the corpora used for the different languages. | {
"cite_N": [
"@cite_5"
],
"mid": [
"2401698713"
],
"abstract": [
"We have successfully proposed to use multi-speaker modelling in DNN-based TTS synthesis for improved voice quality with limited available data from a speaker. In this paper, we propose a new speaker and language factorized DNN, where speaker-specific layers are used for multi-speaker modelling, and shared layers and language-specific layers are employed for multi-language, linguistic feature transformation. Experimental results on a speech corpus of multiple speakers in both Mandarin and English show that the proposed factorized DNN can not only achieve a similar voice quality as that of a multi-speaker DNN, but also perform polyglot synthesis with a monolingual speaker's voice."
]
} |
1902.01780 | 2912936128 | When deep neural networks optimize highly complex functions, it is not always obvious how they reach the final decision. Providing explanations would make this decision process more transparent and improve a user's trust towards the machine as they help develop a better understanding of the rationale behind the network's predictions. Here, we present an explainable observer-classifier framework that exposes the steps taken through the model's decision-making process. Instead of assigning a label to an image in a single step, our model makes iterative binary sub-decisions, which reveal a decision tree as a thought process. In addition, our model allows to hierarchically cluster the data and give each binary decision a semantic meaning. The sequence of binary decisions learned by our model imitates human-annotated attributes. On six benchmark datasets with increasing size and granularity, our model outperforms the decision-tree baseline and generates easy-to-understand binary decision sequences explaining the network's predictions. | Decision Trees with Neural Networks. Adaptive Neural Trees @cite_12 directly model the neural network as a decision tree, where each node and edge correspond to one or more modules of the network. Our model is self-adapting through the use of a recurrent network in the classifier that makes a prediction at every node and can be easily rolled out to a greater depth without changing the architecture or number of weights. The prior work closest to ours is the Deep Neural Decision Forest @cite_13 , which first uses a CNN to determine the routing probabilities on each node and then combines nodes to an ensemble of decision trees that jointly make the prediction. Similarly, in our Explainable Observer, we compute the binary decisions, i.e., router nodes of the tree, once before using them ad hoc. 
Our method differs in that we focus on explainability by explicitly only considering a hard binary decision at each node while other methods use soft decisions, making a large portion of the tree responsible for the predictions, and, thus, are harder to interpret. | {
"cite_N": [
"@cite_13",
"@cite_12"
],
"mid": [
"2220384803",
"2950149565"
],
"abstract": [
"We present Deep Neural Decision Forests - a novel approach that unifies classification trees with the representation learning functionality known from deep convolutional networks, by training them in an end-to-end manner. To combine these two worlds, we introduce a stochastic and differentiable decision tree model, which steers the representation learning usually conducted in the initial layers of a (deep) convolutional network. Our model differs from conventional deep networks because a decision forest provides the final predictions and it differs from conventional decision forests since we propose a principled, joint and global optimization of split and leaf node parameters. We show experimental results on benchmark machine learning datasets like MNIST and ImageNet and find on-par or superior results when compared to state-of-the-art deep models. Most remarkably, we obtain Top5-Errors of only 7.84 6.38 on ImageNet validation data when integrating our forests in a single-crop, single seven model GoogLeNet architecture, respectively. Thus, even without any form of training data set augmentation we are improving on the 6.67 error obtained by the best GoogLeNet architecture (7 models, 144 crops).",
"Deep neural networks and decision trees operate on largely separate paradigms; typically, the former performs representation learning with pre-specified architectures, while the latter is characterised by learning hierarchies over pre-specified features with data-driven architectures. We unite the two via adaptive neural trees (ANTs) that incorporates representation learning into edges, routing functions and leaf nodes of a decision tree, along with a backpropagation-based training algorithm that adaptively grows the architecture from primitive modules (e.g., convolutional layers). We demonstrate that, whilst achieving competitive performance on classification and regression datasets, ANTs benefit from (i) lightweight inference via conditional computation, (ii) hierarchical separation of features useful to the task e.g. learning meaningful class associations, such as separating natural vs. man-made objects, and (iii) a mechanism to adapt the architecture to the size and complexity of the training dataset."
]
} |
1902.01780 | 2912936128 | When deep neural networks optimize highly complex functions, it is not always obvious how they reach the final decision. Providing explanations would make this decision process more transparent and improve a user's trust towards the machine as they help develop a better understanding of the rationale behind the network's predictions. Here, we present an explainable observer-classifier framework that exposes the steps taken through the model's decision-making process. Instead of assigning a label to an image in a single step, our model makes iterative binary sub-decisions, which reveal a decision tree as a thought process. In addition, our model allows to hierarchically cluster the data and give each binary decision a semantic meaning. The sequence of binary decisions learned by our model imitates human-annotated attributes. On six benchmark datasets with increasing size and granularity, our model outperforms the decision-tree baseline and generates easy-to-understand binary decision sequences explaining the network's predictions. | Explainability. The importance of explanations for an end-user has been studied from the psychological perspective @cite_29 , showing that humans use explanations as a guidance for learning and understanding by building inferences and seeking propositions or judgments that enrich their prior knowledge. They usually seek explanations to fill the requested gap depending on prior knowledge and the goal in question. | {
"cite_N": [
"@cite_29"
],
"mid": [
"2952165242"
],
"abstract": [
"The reparameterization trick enables optimizing large scale stochastic computation graphs via gradient descent. The essence of the trick is to refactor each stochastic node into a differentiable function of its parameters and a random variable with fixed distribution. After refactoring, the gradients of the loss propagated by the chain rule through the graph are low variance unbiased estimators of the gradients of the expected loss. While many continuous random variables have such reparameterizations, discrete random variables lack useful reparameterizations due to the discontinuous nature of discrete states. In this work we introduce Concrete random variables---continuous relaxations of discrete random variables. The Concrete distribution is a new family of distributions with closed form densities and a simple reparameterization. Whenever a discrete stochastic node of a computation graph can be refactored into a one-hot bit representation that is treated continuously, Concrete stochastic nodes can be used with automatic differentiation to produce low-variance biased gradients of objectives (including objectives that depend on the log-probability of latent stochastic nodes) on the corresponding discrete graph. We demonstrate the effectiveness of Concrete relaxations on density estimation and structured prediction tasks using neural networks."
]
} |
1902.01780 | 2912936128 | When deep neural networks optimize highly complex functions, it is not always obvious how they reach the final decision. Providing explanations would make this decision process more transparent and improve a user's trust towards the machine as they help develop a better understanding of the rationale behind the network's predictions. Here, we present an explainable observer-classifier framework that exposes the steps taken through the model's decision-making process. Instead of assigning a label to an image in a single step, our model makes iterative binary sub-decisions, which reveal a decision tree as a thought process. In addition, our model allows to hierarchically cluster the data and give each binary decision a semantic meaning. The sequence of binary decisions learned by our model imitates human-annotated attributes. On six benchmark datasets with increasing size and granularity, our model outperforms the decision-tree baseline and generates easy-to-understand binary decision sequences explaining the network's predictions. | As for visual explanations, propose to apply a prediction-difference analysis to a specific input. utilize a visual-attention module that justifies the predictions of deep networks for visual question answering and activity recognition. Grad-CAM @cite_31 uses the gradients of any target concept, e.g., a predicted action, flowing into a convolutional layer to produce a localization map highlighting the important regions in the image that lead to predicting the target concept. Interpretable CNNs @cite_18 modify the convolutional layer, such that each filter map corresponds to an object part in the image, and a follow-up work @cite_24 uses a classical decision tree to explain the predictions based on the learned object-part filters. | {
"cite_N": [
"@cite_24",
"@cite_31",
"@cite_18"
],
"mid": [
"2787382079",
"",
"2951308125"
],
"abstract": [
"This paper presents a method to learn a decision tree to quantitatively explain the logic of each prediction of a pre-trained convolutional neural networks (CNNs). Our method boosts the following two aspects of network interpretability. 1) In the CNN, each filter in a high conv-layer must represent a specific object part, instead of describing mixed patterns without clear meanings. 2) People can explain each specific prediction made by the CNN at the semantic level using a decision tree, i.e., which filters (or object parts) are used for prediction and how much they contribute in the prediction. To conduct such a quantitative explanation of a CNN, our method learns explicit representations of object parts in high conv-layers of the CNN and mines potential decision modes memorized in fully-connected layers. The decision tree organizes these potential decision modes in a coarse-to-fine manner. Experiments have demonstrated the effectiveness of the proposed method.",
"",
"This paper proposes a method to modify traditional convolutional neural networks (CNNs) into interpretable CNNs, in order to clarify knowledge representations in high conv-layers of CNNs. In an interpretable CNN, each filter in a high conv-layer represents a certain object part. We do not need any annotations of object parts or textures to supervise the learning process. Instead, the interpretable CNN automatically assigns each filter in a high conv-layer with an object part during the learning process. Our method can be applied to different types of CNNs with different structures. The clear knowledge representation in an interpretable CNN can help people understand the logics inside a CNN, i.e., based on which patterns the CNN makes the decision. Experiments showed that filters in an interpretable CNN were more semantically meaningful than those in traditional CNNs."
]
} |
1902.01990 | 2912917389 | Abstract Spectral clustering algorithms typically require a priori selection of input parameters such as the number of clusters, a scaling parameter for the affinity measure, or ranges of these values for parameter tuning. Despite efforts for automating the process of spectral clustering, the task of grouping data in multi-scale and higher dimensional spaces is yet to be explored. This study presents a spectral clustering heuristic algorithm that obviates the need for any input by estimating the parameters from the data itself. Specifically, it introduces the heuristic of iterative eigengap search with (1) global scaling and (2) local scaling. These approaches estimate the scaling parameter and implement iterative eigengap quantification along a search tree to reveal dissimilarities at different scales of a feature space and identify clusters. The performance of these approaches has been tested on various real-world datasets of power variation with multi-scale nature and gene expression. Our findings show that iterative eigengap search with a PCA-based global scaling scheme can discover different patterns with an accuracy of higher than 90 in most cases without asking for a priori input information. | Spectral clustering has gained popularity due to their ease of implementation and efficiency in clustering @cite_42 @cite_33 . Therefore, in recent decades, several clustering algorithms have been proposed and used for different applications. The focus in these algorithms has been on the application of the similarity matrix spectrum for dimensionality reduction and feature space transformation to introduce convexity. One of the well-known algorithms in this field is the one proposed by Ng, Jordan, and Weiss (referred to as NJW) @cite_2 . 
In addition to these efforts to formalize spectral clustering algorithms, a number of studies have focused on extending the algorithms into variants capable of self-tuning or automated identification of natural partitions (or groups) in the data. "Natural" in this context refers to clusters (or groups) that represent the actual physical separation in the data. | {
"cite_N": [
"@cite_42",
"@cite_33",
"@cite_2"
],
"mid": [
"2132914434",
"68894772",
"2165874743"
],
"abstract": [
"In recent years, spectral clustering has become one of the most popular modern clustering algorithms. It is simple to implement, can be solved efficiently by standard linear algebra software, and very often outperforms traditional clustering algorithms such as the k-means algorithm. At first glance, spectral clustering appears slightly mysterious, and it is not obvious why it works at all or what it really does. The goal of this tutorial is to give some intuition on those questions. We describe different graph Laplacians and their basic properties, present the most common spectral clustering algorithms, and derive those algorithms from scratch by several different approaches. Advantages and disadvantages of the different spectral clustering algorithms are discussed.",
"",
"Despite many empirical successes of spectral clustering methods—algorithms that cluster points using eigenvectors of matrices derived from the data—there are several unresolved issues. First, there is a wide variety of algorithms that use the eigenvectors in slightly different ways. Second, many of these algorithms have no proof that they will actually compute a reasonable clustering. In this paper, we present a simple spectral clustering algorithm that can be implemented using a few lines of Matlab. Using tools from matrix perturbation theory, we analyze the algorithm, and give conditions under which it can be expected to do well. We also show surprisingly good experimental results on a number of challenging clustering problems."
]
} |
1902.01990 | 2912917389 | Abstract Spectral clustering algorithms typically require a priori selection of input parameters such as the number of clusters, a scaling parameter for the affinity measure, or ranges of these values for parameter tuning. Despite efforts for automating the process of spectral clustering, the task of grouping data in multi-scale and higher dimensional spaces is yet to be explored. This study presents a spectral clustering heuristic algorithm that obviates the need for any input by estimating the parameters from the data itself. Specifically, it introduces the heuristic of iterative eigengap search with (1) global scaling and (2) local scaling. These approaches estimate the scaling parameter and implement iterative eigengap quantification along a search tree to reveal dissimilarities at different scales of a feature space and identify clusters. The performance of these approaches has been tested on various real-world datasets of power variation with multi-scale nature and gene expression. Our findings show that iterative eigengap search with a PCA-based global scaling scheme can discover different patterns with an accuracy higher than 90% in most cases without asking for a priori input information. | Unlike the aforementioned efforts, which proposed solutions for challenging 2D datasets and image segmentation, @cite_46 proposed a kernel spectral clustering method for large-scale networks without parameter input. To this end, entropy was used to detect the block-diagonal structure of the affinity matrix created by the projections in the eigenspace. The efficacy of the proposed approach was studied on synthetic data and real-world network datasets.
While these existing approaches @cite_30 @cite_23 @cite_12 @cite_46 were developed to tackle spectral clustering in an automated manner, they are designed either for multi-scale 2D data and image segmentation or for network data, both of which differ in nature from the multi-scale, higher-dimensional data sought here. | {
"cite_N": [
"@cite_30",
"@cite_46",
"@cite_12",
"@cite_23"
],
"mid": [
"",
"1975594892",
"2119158865",
"2129610796"
],
"abstract": [
"",
"We propose a parameter-free kernel spectral clustering model for large scale complex networks. The kernel spectral clustering (KSC) method works by creating a model on a subgraph of the complex network. The model requires a kernel function, which can have parameters, and the number of communities k has to be detected in the large scale network. We exploit the structure of the projections in the eigenspace to automatically identify the number of clusters. We use the concept of entropy and balanced clusters for this purpose. We show the effectiveness of the proposed approach by comparing the cluster memberships w.r.t. several large scale community detection techniques like Louvain, Infomap and Bigclam methods. We conducted experiments on several synthetic networks of varying size and mixing parameter along with large scale real world experiments to show the efficiency of the proposed approach.",
"Spectral clustering has become an increasingly adopted tool and an active area of research in the machine learning community over the last decade. A common challenge with image segmentation methods based on spectral clustering is scalability, since the computation can become intractable for large images. Down-sizing the image, however, will cause a loss of finer details and can lead to less accurate segmentation results. A combination of blockwise processing and stochastic ensemble consensus is used to address this challenge. Experimental results indicate that this approach can preserve details with higher accuracy than comparable spectral clustering image segmentation methods and without significant computational demands.",
"We introduce a novel spectral clustering algorithm that allows us to automatically determine the number of clusters in a dataset. The algorithm is based on a theoretical analysis of the spectral properties of block diagonal affinity matrices; in contrast to established methods, we do not normalise the rows of the matrix of eigenvectors, and argue that the non-normalised data contains key information that allows the automatic determination of the number of clusters present. We present several examples of datasets successfully clustered by our algorithm, both artificial and real, obtaining good results even without employing refined feature extraction techniques"
]
} |
1902.01843 | 2914902801 | Neural networks with a large number of parameters admit a mean-field description, which has recently served as a theoretical explanation for the favorable training properties of "overparameterized" models. In this regime, gradient descent obeys a deterministic partial differential equation (PDE) that converges to a globally optimal solution for networks with a single hidden layer under appropriate assumptions. In this work, we propose a non-local mass transport dynamics that leads to a modified PDE with the same minimizer. We implement this non-local dynamics as a stochastic neuronal birth-death process and we prove that it accelerates the rate of convergence in the mean-field limit. We subsequently realize this PDE with two classes of numerical schemes that converge to the mean-field equation, each of which can easily be implemented for neural networks with finite numbers of parameters. We illustrate our algorithms with two models to provide intuition for the mechanism through which convergence is accelerated. | Non-local update rules appear in various areas of machine learning and optimization. Derivative-free optimization @cite_12 offers a general framework for optimizing complex non-convex functions using non-local search heuristics. Some notable examples include Particle Swarm Optimization @cite_17 and Evolutionary Strategies, such as the Covariance Matrix Adaptation method @cite_16 . These approaches have found some renewed interest in the optimization of neural networks in the context of Reinforcement Learning @cite_7 @cite_1 and hyperparameter optimization @cite_24 . | {
"cite_N": [
"@cite_7",
"@cite_1",
"@cite_24",
"@cite_16",
"@cite_12",
"@cite_17"
],
"mid": [
"2596367596",
"2778749116",
"2770298516",
"102487131",
"2160960847",
"2152195021"
],
"abstract": [
"We explore the use of Evolution Strategies (ES), a class of black box optimization algorithms, as an alternative to popular MDP-based RL techniques such as Q-learning and Policy Gradients. Experiments on MuJoCo and Atari show that ES is a viable solution strategy that scales extremely well with the number of CPUs available: By using a novel communication strategy based on common random numbers, our ES implementation only needs to communicate scalars, making it possible to scale to over a thousand parallel workers. This allows us to solve 3D humanoid walking in 10 minutes and obtain competitive results on most Atari games after one hour of training. In addition, we highlight several advantages of ES as a black box optimization technique: it is invariant to action frequency and delayed rewards, tolerant of extremely long horizons, and does not need temporal discounting or value function approximation.",
"Deep artificial neural networks (DNNs) are typically trained via gradient-based learning algorithms, namely backpropagation. Evolution strategies (ES) can rival backprop-based algorithms such as Q-learning and policy gradients on challenging deep reinforcement learning (RL) problems. However, ES can be considered a gradient-based algorithm because it performs stochastic gradient descent via an operation similar to a finite-difference approximation of the gradient. That raises the question of whether non-gradient-based evolutionary algorithms can work at DNN scales. Here we demonstrate they can: we evolve the weights of a DNN with a simple, gradient-free, population-based genetic algorithm (GA) and it performs well on hard deep RL problems, including Atari and humanoid locomotion. The Deep GA successfully evolves networks with over four million free parameters, the largest neural networks ever evolved with a traditional evolutionary algorithm. These results (1) expand our sense of the scale at which GAs can operate, (2) suggest intriguingly that in some cases following the gradient is not the best choice for optimizing performance, and (3) make immediately available the multitude of techniques that have been developed in the neuroevolution community to improve performance on RL problems. To demonstrate the latter, we show that combining DNNs with novelty search, which was designed to encourage exploration on tasks with deceptive or sparse reward functions, can solve a high-dimensional problem on which reward-maximizing algorithms (e.g. DQN, A3C, ES, and the GA) fail. Additionally, the Deep GA parallelizes better than ES, A3C, and DQN, and enables a state-of-the-art compact encoding technique that can represent million-parameter DNNs in thousands of bytes.",
"Neural networks dominate the modern machine learning landscape, but their training and success still suffer from sensitivity to empirical choices of hyperparameters such as model architecture, loss function, and optimisation algorithm. In this work we present Population Based Training (PBT), a simple asynchronous optimisation algorithm which effectively utilises a fixed computational budget to jointly optimise a population of models and their hyperparameters to maximise performance. Importantly, PBT discovers a schedule of hyperparameter settings rather than following the generally sub-optimal strategy of trying to find a single fixed set to use for the whole course of training. With just a small modification to a typical distributed hyperparameter training framework, our method allows robust and reliable training of models. We demonstrate the effectiveness of PBT on deep reinforcement learning problems, showing faster wall-clock convergence and higher final performance of agents by optimising over a suite of hyperparameters. In addition, we show the same method can be applied to supervised learning for machine translation, where PBT is used to maximise the BLEU score directly, and also to training of Generative Adversarial Networks to maximise the Inception score of generated images. In all cases PBT results in the automatic discovery of hyperparameter schedules and model selection which results in stable training and better final performance.",
"Derived from the concept of self-adaptation in evolution strategies, the CMA (Covariance Matrix Adaptation) adapts the covariance matrix of a multi-variate normal search distribution. The CMA was originally designed to perform well with small populations. In this review, the argument starts out with large population sizes, reflecting recent extensions of the CMA algorithm. Commonalities and differences to continuous Estimation of Distribution Algorithms are analyzed. The aspects of reliability of the estimation, overall step size control, and independence from the coordinate system (invariance) become particularly important in small populations sizes. Consequently, performing the adaptation task with small populations is more intricate.",
"This paper addresses the solution of bound-constrained optimization problems using algorithms that require only the availability of objective function values but no derivative information. We refer to these algorithms as derivative-free algorithms. Fueled by a growing number of applications in science and engineering, the development of derivative-free optimization algorithms has long been studied, and it has found renewed interest in recent time. Along with many derivative-free algorithms, many software implementations have also appeared. The paper presents a review of derivative-free algorithms, followed by a systematic comparison of 22 related implementations using a test set of 502 problems. The test bed includes convex and nonconvex problems, smooth as well as nonsmooth problems. The algorithms were tested under the same conditions and ranked under several criteria, including their ability to find near-global solutions for nonconvex problems, improve a given starting point, and refine a near-optimal solution. A total of 112,448 problem instances were solved. We find that the ability of all these solvers to obtain good solutions diminishes with increasing problem size. For the problems used in this study, TOMLAB MULTIMIN, TOMLAB GLCCLUSTER, MCS and TOMLAB LGO are better, on average, than other derivative-free solvers in terms of solution quality within 2,500 function evaluations. These global solvers outperform local solvers even for convex problems. Finally, TOMLAB OQNLP, NEWUOA, and TOMLAB MULTIMIN show superior performance in terms of refining a near-optimal solution.",
"A concept for the optimization of nonlinear functions using particle swarm methodology is introduced. The evolution of several paradigms is outlined, and an implementation of one of the paradigms is discussed. Benchmark testing of the paradigm is described, and applications, including nonlinear function optimization and neural network training, are proposed. The relationships between particle swarm optimization and both artificial life and genetic algorithms are described."
]
} |
1902.01843 | 2914902801 | Neural networks with a large number of parameters admit a mean-field description, which has recently served as a theoretical explanation for the favorable training properties of "overparameterized" models. In this regime, gradient descent obeys a deterministic partial differential equation (PDE) that converges to a globally optimal solution for networks with a single hidden layer under appropriate assumptions. In this work, we propose a non-local mass transport dynamics that leads to a modified PDE with the same minimizer. We implement this non-local dynamics as a stochastic neuronal birth-death process and we prove that it accelerates the rate of convergence in the mean-field limit. We subsequently realize this PDE with two classes of numerical schemes that converge to the mean-field equation, each of which can easily be implemented for neural networks with finite numbers of parameters. We illustrate our algorithms with two models to provide intuition for the mechanism through which convergence is accelerated. | Our setup of non-interacting potentials is closely related to the so-called Estimation of Distribution Algorithms @cite_10 @cite_18 , which define update rules for a probability distribution over a search space by querying the values of a given function to be optimized. In particular, Information Geometric Optimization Algorithms @cite_6 study the dynamics of parametric densities using ordinary differential equations, focusing on invariance properties. In contrast, our focus is on the combination of transport (gradient-based) and birth-death dynamics. | {
"cite_N": [
"@cite_18",
"@cite_10",
"@cite_6"
],
"mid": [
"",
"2162036626",
"1480347379"
],
"abstract": [
"",
"We present an abstraction of the genetic algorithm (GA), termed population-based incremental learning (PBIL), that explicitly maintains the statistics contained in a GA's population, but which abstracts away the crossover operator and redefines the role of the population. This results in PBIL being simpler, both computationally and theoretically, than the GA. Empirical results reported elsewhere show that PBIL is faster and more effective than the GA on a large set of commonly used benchmark problems. Here we present results on a problem custom designed to benefit both from the GA's crossover operator and from its use of a population. The results show that PBIL performs as well as, or better than, GAs carefully tuned to do well on this problem. This suggests that even on problems custom designed for GAs, much of the power of the GA may derive from the statistics maintained implicitly in its population, and not from the population itself nor from the crossover operator.",
"We present a canonical way to turn any smooth parametric family of probability distributions on an arbitrary search space X into a continuous-time black-box optimization method on X, the information-geometric optimization (IGO) method. Invariance as a major design principle keeps the number of arbitrary choices to a minimum. The resulting IGO flow is the flow of an ordinary differential equation conducting the natural gradient ascent of an adaptive, time-dependent transformation of the objective function. It makes no particular assumptions on the objective function to be optimized. The IGO method produces explicit IGO algorithms through time discretization. It naturally recovers versions of known algorithms and offers a systematic way to derive new ones. In continuous search spaces, IGO algorithms take a form related to natural evolution strategies (NES). The cross-entropy method is recovered in a particular case with a large time step, and can be extended into a smoothed, parametrization-independent maximum likelihood update (IGO-ML). When applied to the family of Gaussian distributions on R^d, the IGO framework recovers a version of the well-known CMA-ES algorithm and of xNES. For the family of Bernoulli distributions on {0, 1}^d, we recover the seminal PBIL algorithm and cGA. For the distributions of restricted Boltzmann machines, we naturally obtain a novel algorithm for discrete optimization on {0, 1}^d. All these algorithms are natural instances of, and unified under, the single information-geometric optimization framework. The IGO method achieves, thanks to its intrinsic formulation, maximal invariance properties: invariance under reparametrization of the search space X, under a change of parameters of the probability distribution, and under increasing transformation of the function to be optimized. The latter is achieved through an adaptive, quantile-based formulation of the objective.
Theoretical considerations strongly suggest that IGO algorithms are essentially characterized by a minimal change of the distribution over time. Therefore they have minimal loss in diversity through the course of optimization, provided the initial diversity is high. First experiments using restricted Boltzmann machines confirm this insight. As a simple consequence, IGO seems to provide, from information theory, an elegant way to simultaneously explore several valleys of a fitness landscape in a single run."
]
} |
1902.01843 | 2914902801 | Neural networks with a large number of parameters admit a mean-field description, which has recently served as a theoretical explanation for the favorable training properties of "overparameterized" models. In this regime, gradient descent obeys a deterministic partial differential equation (PDE) that converges to a globally optimal solution for networks with a single hidden layer under appropriate assumptions. In this work, we propose a non-local mass transport dynamics that leads to a modified PDE with the same minimizer. We implement this non-local dynamics as a stochastic neuronal birth-death process and we prove that it accelerates the rate of convergence in the mean-field limit. We subsequently realize this PDE with two classes of numerical schemes that converge to the mean-field equation, each of which can easily be implemented for neural networks with finite numbers of parameters. We illustrate our algorithms with two models to provide intuition for the mechanism through which convergence is accelerated. | Dropout @cite_25 , a regularization technique popularized by the AlexNet CNN @cite_0 , is reminiscent of a birth-death process, but we note that its mechanism is very different: rather than killing a neuron and replacing it with a new one at some rate, Dropout momentarily masks neurons, which become active again at the same position; in other words, Dropout implements a purely local transport scheme, as opposed to our non-local dynamics. | {
"cite_N": [
"@cite_0",
"@cite_25"
],
"mid": [
"2163605009",
"2095705004"
],
"abstract": [
"We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0%, which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called \"dropout\" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.",
"Deep neural nets with a large number of parameters are very powerful machine learning systems. However, overfitting is a serious problem in such networks. Large networks are also slow to use, making it difficult to deal with overfitting by combining the predictions of many different large neural nets at test time. Dropout is a technique for addressing this problem. The key idea is to randomly drop units (along with their connections) from the neural network during training. This prevents units from co-adapting too much. During training, dropout samples from an exponential number of different \"thinned\" networks. At test time, it is easy to approximate the effect of averaging the predictions of all these thinned networks by simply using a single unthinned network that has smaller weights. This significantly reduces overfitting and gives major improvements over other regularization methods. We show that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets."
]
} |
1902.01843 | 2914902801 | Neural networks with a large number of parameters admit a mean-field description, which has recently served as a theoretical explanation for the favorable training properties of "overparameterized" models. In this regime, gradient descent obeys a deterministic partial differential equation (PDE) that converges to a globally optimal solution for networks with a single hidden layer under appropriate assumptions. In this work, we propose a non-local mass transport dynamics that leads to a modified PDE with the same minimizer. We implement this non-local dynamics as a stochastic neuronal birth-death process and we prove that it accelerates the rate of convergence in the mean-field limit. We subsequently realize this PDE with two classes of numerical schemes that converge to the mean-field equation, each of which can easily be implemented for neural networks with finite numbers of parameters. We illustrate our algorithms with two models to provide intuition for the mechanism through which convergence is accelerated. | Finally, closest to our motivation is @cite_21 , who, building on the recent body of work that leverages optimal-transport techniques to study optimization in the large-parameter limit @cite_26 @cite_9 @cite_20 @cite_23 , proposed a modification of the dynamics that replaces traditional stochastic noise with a resampling of a fraction of neurons from a fixed base measure. Our model differs significantly from this scheme: notably, we show that our dynamics preserves the same global minimizers and accelerates the rate of convergence. | {
"cite_N": [
"@cite_26",
"@cite_9",
"@cite_21",
"@cite_23",
"@cite_20"
],
"mid": [
"",
"2962742373",
"2897080984",
"2966530573",
"2071048859"
],
"abstract": [
"",
"We introduce a new optimal transport distance between nonnegative finite Radon measures with possibly different masses. The construction is based on non-conservative continuity equations and a corresponding modified Benamou-Brenier formula. We establish various topological and geometrical properties of the resulting metric space, derive some formal Riemannian structure, and develop differential calculus following F. Otto’s approach. Finally, we apply these ideas to identify an ideal free distribution model of population dynamics as a gradient flow and obtain new long-time convergence results.",
"Past works have shown that, somewhat surprisingly, over-parametrization can help generalization in neural networks. Towards explaining this phenomenon, we adopt a margin-based perspective. We establish: 1) for multi-layer feedforward relu networks, the global minimizer of a weakly-regularized cross-entropy loss has the maximum normalized margin among all networks, 2) as a result, increasing the over-parametrization improves the normalized margin and generalization error bounds for two-layer networks. In particular, an infinite-size neural network enjoys the best generalization guarantees. The typical infinite feature methods are kernel methods; we compare the neural net margin with that of kernel methods and construct natural instances where kernel methods have much weaker generalization guarantees. We validate this gap between the two approaches empirically. Finally, this infinite-neuron viewpoint is also fruitful for analyzing optimization. We show that a perturbed gradient flow on infinite-size networks finds a global optimizer in polynomial time.",
"Neural networks, a central tool in machine learning, have demonstrated remarkable, high fidelity performance on image recognition and classification tasks. These successes evince an ability to accurately represent high dimensional functions, but rigorous results about the approximation error of neural networks after training are few. Here we establish conditions for global convergence of the standard optimization algorithm used in machine learning applications, stochastic gradient descent (SGD), and quantify the scaling of its error with the size of the network. This is done by reinterpreting SGD as the evolution of a particle system with interactions governed by a potential related to the objective or \"loss\" function used to train the network. We show that, when the number @math of units is large, the empirical distribution of the particles descends on a convex landscape towards the global minimum at a rate independent of @math , with a resulting approximation error that universally scales as @math . These properties are established in the form of a Law of Large Numbers and a Central Limit Theorem for the empirical distribution. Our analysis also quantifies the scale and nature of the noise introduced by SGD and provides guidelines for the step size and batch size to use when training a neural network. We illustrate our findings on examples in which we train neural networks to learn the energy function of the continuous 3-spin model on the sphere. The approximation error scales as our analysis predicts in as high a dimension as @math .",
"The Fokker-Planck equation, or forward Kolmogorov equation, describes the evolution of the probability density for a stochastic process associated with an Ito stochastic differential equation. It pertains to a wide variety of time-dependent systems in which randomness plays a role. In this paper, we are concerned with Fokker-Planck equations for which the drift term is given by the gradient of a potential. For a broad class of potentials, we construct a time discrete, iterative variational scheme whose solutions converge to the solution of the Fokker-Planck equation. The major novelty of this iterative scheme is that the time-step is governed by the Wasserstein metric on probability measures. This formulation enables us to reveal an appealing, and previously unexplored, relationship between the Fokker-Planck equation and the associated free energy functional. Namely, we demonstrate that the dynamics may be regarded as a gradient flux, or a steepest descent, for the free energy with respect to the Wass..."
]
} |