Columns: aid (string, 9-15 chars) · mid (string, 7-10 chars) · abstract (string, 78-2.56k chars) · related_work (string, 92-1.77k chars) · ref_abstract (dict)
1905.02265
2947619145
In order to train a computer agent to play a text-based computer game, we must represent each hidden state of the game. A Long Short-Term Memory (LSTM) model running over observed texts is a common choice for state construction. However, a normal Deep Q-learning Network (DQN) for such an agent requires millions of steps of training or more to converge. As such, an LSTM-based DQN can take tens of days to finish the training process. Though we can use a Convolutional Neural Network (CNN) as a text-encoder to construct states much faster than the LSTM, doing so without an understanding of the syntactic context of the words being analyzed can slow convergence. In this paper, we use a fast CNN to encode position- and syntax-oriented structures extracted from observed texts as states. We additionally augment the reward signal in a universal and practical manner. Together, we show that our improvements can not only speed up the process by one order of magnitude but also learn a superior agent.
However, previous works that attempt to play Zork complete only a very small portion of the game, far short of what human players achieve. @cite_5 use a max-pooling CNN-DQN, but without position embeddings. Our Zork evaluation stabilizes at a score of 40 within one million steps: compared to @cite_5 , we reach a score of 40 without using the action-elimination DQN framework, and compared to @cite_1 , who use the LSTM-DQN framework without the action-elimination method, we obtain a large performance gain. The generalized reward-shaping method is important for games with multiple sub-quests. @cite_2 use random network distillation to modify the instant reward and obtain improved results on several hard Atari games that require extra exploration.
{ "cite_N": [ "@cite_5", "@cite_1", "@cite_2" ], "mid": [ "2963771109", "2614188047", "2964067469" ], "abstract": [ "Learning how to act when there are many available actions in each state is a challenging task for Reinforcement Learning (RL) agents, especially when many of the actions are redundant or irrelevant. In such cases, it is easier to learn which actions not to take. In this work, we propose the Action-Elimination Deep Q-Network (AE-DQN) architecture that combines a Deep RL algorithm with an Action Elimination Network (AEN) that eliminates sub-optimal actions. The AEN is trained to predict invalid actions, supervised by an external elimination signal provided by the environment. Simulations demonstrate a considerable speedup and added robustness over vanilla DQN in text-based games with over a thousand discrete actions.", "The domain of text-based adventure games has been recently established as a new challenge of creating the agent that is both able to understand natural language, and acts intelligently in text-described environments. In this paper, we present our approach to tackle the problem. Our agent, named Golovin, takes advantage of the limited game domain. We use genre-related corpora (including fantasy books and decompiled games) to create language models suitable to this domain. Moreover, we embed mechanisms that allow us to specify, and separately handle, important tasks as fighting opponents, managing inventory, and navigating on the game map. We validated usefulness of these mechanisms, measuring agent's performance on the set of 50 interactive fiction games. Finally, we show that our agent plays on a level comparable to the winner of the last year Text-Based Adventure AI Competition.", "" ] }
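The state construction the record above describes (a fast max-pooling CNN over word plus position embeddings) can be sketched as follows; all sizes, the random initialization, and the single convolution layer are illustrative assumptions, not the paper's actual hyper-parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes; the paper does not report these hyper-parameters here.
VOCAB, MAX_LEN, EMB, FILTERS, WIDTH = 100, 20, 16, 32, 3

word_emb = rng.normal(size=(VOCAB, EMB))
pos_emb = rng.normal(size=(MAX_LEN, EMB))         # position embeddings
conv_w = rng.normal(size=(FILTERS, WIDTH * EMB))  # width-3 conv filters

def encode_state(token_ids):
    """Encode an observed text into a fixed-size state vector."""
    x = word_emb[token_ids] + pos_emb[: len(token_ids)]  # add position info
    # Slide a width-WIDTH window over the sequence (a 1-D convolution).
    windows = np.stack([x[i : i + WIDTH].ravel()
                        for i in range(len(token_ids) - WIDTH + 1)])
    feats = np.maximum(windows @ conv_w.T, 0.0)          # ReLU
    return feats.max(axis=0)                             # max-pool over time

state = encode_state(rng.integers(0, VOCAB, size=12))
```

Unlike an LSTM, every window here is computed independently, which is what makes the CNN encoder fast; the position embeddings restore the word-order information that max-pooling alone would discard.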
1905.02066
2943483156
In configurable software systems, stakeholders are often interested in knowing how configuration options influence the performance of a system to facilitate, for example, the debugging and optimization processes of these systems. There are several black-box approaches to obtain this information, but they usually require a large number of samples to make accurate predictions, whereas the few existing white-box approaches impose limitations on the systems that they can analyze. This paper proposes ConfigCrusher, a white-box performance analysis that exploits several insights of configurable systems. ConfigCrusher employs a static data-flow analysis to identify how configuration options may influence control-flow decisions and instruments code regions corresponding to these decisions to dynamically analyze the influence of configuration options on the regions' performance. Our evaluation using @math real-world configurable systems shows that ConfigCrusher is more efficient at building performance models that are similar to or more accurate than current state-of-the-art black-box and white-box approaches. Overall, this paper showcases the benefits and potential of white-box performance analyses to outperform black-box approaches and provide additional information for analyzing configurable systems.
Similar to our approach, @cite_64 used taint analysis to identify, for each code fragment, the configurations in which it may be executed. However, they do not track information about individual options. By contrast, our taint analysis tracks how options influence code fragments through control-flow and data-flow interactions, and thus how options influence the performance of the system.
{ "cite_N": [ "@cite_64" ], "mid": [ "1963971515" ], "abstract": [ "Today's smartphone operating systems frequently fail to provide users with adequate control over and visibility into how third-party applications use their privacy-sensitive data. We address these shortcomings with TaintDroid, an efficient, systemwide dynamic taint tracking and analysis system capable of simultaneously tracking multiple sources of sensitive data. TaintDroid provides real-time analysis by leveraging Android's virtualized execution environment. Using TaintDroid to monitor the behavior of 30 popular third-party Android applications, we found 68 instances of misappropriation of users' location and device identification information across 20 applications. Monitoring sensitive data with TaintDroid provides informed use of third-party applications for phone users and valuable input for smartphone security service firms seeking to identify misbehaving applications." ] }
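The option-level tracking described above, in contrast to configuration-level taints, can be illustrated with a toy taint wrapper (the wrapper, the option names, and the two-option program are hypothetical, not ConfigCrusher's implementation):

```python
# Toy option-level taint tracking: each value remembers which configuration
# options influenced it, and executed regions accumulate those taints.
class Tainted:
    def __init__(self, value, taints=frozenset()):
        self.value, self.taints = value, frozenset(taints)

    def __add__(self, other):
        # Data flow: the result depends on every option either operand saw.
        return Tainted(self.value + other.value, self.taints | other.taints)

region_taints = {}  # region name -> options that may influence its performance

def run(options):
    a = Tainted(10 if options["OPT_A"] else 0, {"OPT_A"})
    b = Tainted(1 if options["OPT_B"] else 0, {"OPT_B"})
    c = a + b                  # data-flow interaction between OPT_A and OPT_B
    if c.value > 5:            # control-flow decision tainted by c
        region_taints["region1"] = (
            region_taints.get("region1", frozenset()) | c.taints)

run({"OPT_A": True, "OPT_B": False})
```

After the run, `region1` is associated with both options, even though only `OPT_A` was enabled: the data-flow union is what lets a white-box analysis attribute the region's cost to individual options rather than to whole configurations.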
1905.02066
2943483156
@cite_52 used dynamic influence tracing to convert static parameters into dynamic control variables that adapt properties of an application. However, they do not consider interactions among parameters. By contrast, our approach traces control-flow and data-flow interactions among options and how they influence the performance of the system.
{ "cite_N": [ "@cite_52" ], "mid": [ "2111444234" ], "abstract": [ "We present PowerDial, a system for dynamically adapting application behavior to execute successfully in the face of load and power fluctuations. PowerDial transforms static configuration parameters into dynamic knobs that the PowerDial control system can manipulate to dynamically trade off the accuracy of the computation in return for reductions in the computational resources that the application requires to produce its results. These reductions translate directly into performance improvements and power savings. Our experimental results show that PowerDial can enable our benchmark applications to execute responsively in the face of power caps that would otherwise significantly impair responsiveness. They also show that PowerDial can significantly reduce the number of machines required to service intermittent load spikes, enabling reductions in power and capital costs." ] }
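The "dynamic knob" idea above, turning a static accuracy parameter into a runtime control variable that trades accuracy for computation, can be sketched like this (the knob levels, load thresholds, and Leibniz-series workload are all made-up stand-ins, not PowerDial's actual benchmarks):

```python
# A static parameter (series length) exposed as a dynamic knob: under high
# load the controller picks a cheaper, less accurate setting.
KNOB_LEVELS = [2000, 500, 100]  # hypothetical accuracy settings

def estimate_pi(terms):
    # Leibniz series for pi: more terms -> slower but more accurate.
    return 4 * sum((-1) ** k / (2 * k + 1) for k in range(terms))

def controlled_run(load):
    # Hypothetical control policy: degrade accuracy as load grows.
    level = 0 if load < 0.5 else (1 if load < 0.8 else 2)
    return estimate_pi(KNOB_LEVELS[level])
```

Each knob level yields a proportional reduction in work, which is exactly the accuracy-for-resources trade the cited system manipulates at runtime.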
1905.02066
2943483156
@cite_53 and @cite_16 used symbolic execution and variational execution, respectively, to analyze the behavior of interactions in configurable systems, and identified the insights that we build on in this work (Sec. ). We leverage those insights to create a novel white-box performance analysis that efficiently generates accurate performance models.
{ "cite_N": [ "@cite_53", "@cite_16" ], "mid": [ "2098345837", "2511206070" ], "abstract": [ "Many modern software systems are designed to be highly configurable, which increases flexibility but can make programs hard to test, analyze, and understand. We present an initial empirical study of how configuration options affect program behavior. We conjecture that, at certain levels of abstraction, configuration spaces are far smaller than the worst case, in which every configuration is distinct. We evaluated our conjecture by studying three configurable software systems: vsftpd, ngIRCd, and grep. We used symbolic evaluation to discover how the settings of run-time configuration options affect line, basic block, edge, and condition coverage for our subjects under a given test suite. Our results strongly suggest that for these subject programs, test suites, and configuration options, when abstracted in terms of the four coverage criteria above, configuration spaces are in fact much smaller than combinatorics would suggest and are effectively the composition of many small, self-contained groupings of options.", "Quality assurance for highly-configurable systems is challenging due to the exponentially growing configuration space. Interactions among multiple options can lead to surprising behaviors, bugs, and security vulnerabilities. Analyzing all configurations systematically might be possible though if most options do not interact or interactions follow specific patterns that can be exploited by analysis tools. To better understand interactions in practice, we analyze program traces to characterize and identify where interactions occur on control flow and data. To this end, we developed a dynamic analysis for Java based on variability-aware execution and monitor executions of multiple small to medium-sized programs. 
We find that the essential configuration complexity of these programs is indeed much lower than the combinatorial explosion of the configuration space indicates. However, we also discover that the interaction characteristics that allow scalable and complete analyses are more nuanced than what is exploited by existing state-of-the-art quality assurance strategies." ] }
1905.02066
2943483156
Combinatorial Testing @cite_54 @cite_65 @cite_79 @cite_4 @cite_89 @cite_66 is an approach that reduces the number of samples needed to test a program by satisfying a certain coverage criterion. Similarly, @cite_91 improved SPLat @cite_39 with sampling heuristics @cite_43 to select which configurations to sample. While both of these approaches scale to large systems, they make assumptions about how options interact in the program and can miss relevant interactions. Instead, our sampling is guided by white-box information on how options are used and how they interact in the system.
{ "cite_N": [ "@cite_4", "@cite_91", "@cite_54", "@cite_65", "@cite_89", "@cite_39", "@cite_79", "@cite_43", "@cite_66" ], "mid": [ "2187798471", "2618360667", "2464549268", "", "2793051425", "2004248182", "2114525558", "2949679769", "" ], "abstract": [ "Context: Testing highly-configurable software systems is challenging due to a large number of test configurations that have to be carefully selected in order to reduce the testing effort as much as possible, while maintaining high software quality. Finding the smallest set of valid test configurations that ensure sufficient coverage of the system's feature interactions is thus the objective of validation engineers, especially when the execution of test configurations is costly or time-consuming. However, this problem is NP-hard in general and approximation algorithms have often been used to address it in practice.Objective: In this paper, we explore an alternative exact approach based on constraint programming that will allow engineers to increase the effectiveness of configuration testing while keeping the number of configurations as low as possible.Method: Our approach consists in using a (time-aware) minimization algorithm based on constraint programming. Given the amount of time, our solution generates a minimized set of valid test configurations that ensure coverage of all pairs of feature values (a.k.a. pairwise coverage). The approach has been implemented in a tool called PACOGEN.Results: PACOGEN was evaluated on 224 feature models in comparison with the two existing tools that are based on a greedy algorithm. For 79 of 224 feature models, PACOGEN generated up to 60 fewer test configurations than the competitor tools. We further evaluated PACOGEN in the case study of an industrial video conferencing product line with a feature model of 169 features, and found 60 fewer configurations compared with the manual approach followed by test engineers. 
The set of test configurations generated by PACOGEN decreased the time required by test engineers in manual test configuration by 85 , increasing the feature-pairs coverage at the same time.Conclusion: Our experimental evaluation concluded that optimal time-aware minimization of pairwise-covering test configurations is efficiently addressed using constraint programming techniques.", "Testing configurable systems is important and challenging due to the enormous space of configurations where errors can hide. Existing approaches to test these systems are often costly or unreliable. This paper proposes S-SPLat, a technique that combines heuristic sampling with symbolic search to obtain both breadth and depth in the exploration of the configuration space. S-SPLat builds on SPLat, our previously developed technique, that explores all reachable configurations from tests. In contrast to its predecessor, S-SPLat sacrifices soundness in favor of efficiency. We evaluated our technique on eight software product lines of various sizes and on a large configurable system - GCC. Considering the results for GCC, S-SPLat was able to reproduce all five bugs that we previously found in a previous study with SPLat but much faster and it was able to find two new bugs in a recent release of GCC. Results suggest that it is preferable to use a combination of simple heuristics to drive the symbolic search as opposed to a single heuristic. S-SPLat and our experimental infrastructure are publicly available.", "Finding optimal configurations for Stream Processing Systems (SPS) is a challenging problem due to the large number of parameters that can influence their performance and the lack of analytical models to anticipate the effect of a change. To tackle this issue, we consider tuning methods where an experimenter is given a limited budget of experiments and needs to carefully allocate this budget to find optimal configurations. 
We propose in this setting Bayesian Optimization for Configuration Optimization (BO4CO), an auto-tuning algorithm that leverages Gaussian Processes (GPs) to iteratively capture posterior distributions of the configuration spaces and sequentially drive the experimentation. Validation based on Apache Storm demonstrates that our approach locates optimal configurations within a limited experimental budget, with an improvement of SPS performance typically of at least an order of magnitude compared to existing configuration algorithms.", "", "Many approaches for testing configurable software systems start from the same assumption: it is impossible to test all configurations. This motivated the definition of variability-aware abstractions and sampling techniques to cope with large configuration spaces. Yet, there is no theoretical barrier that prevents the exhaustive testing of all configurations by simply enumerating them if the effort required to do so remains acceptable. Not only this: we believe there is a lot to be learned by systematically and exhaustively testing a configurable system. In this case study, we report on the first ever endeavour to test all possible configurations of the industry-strength, open source configurable software system JHipster, a popular code generator for web applications. We built a testing scaffold for the 26,000+ configurations of JHipster using a cluster of 80 machines during 4 nights for a total of 4,376 hours (182 days) CPU time. We find that 35.70 configurations fail and we identify the feature interactions that cause the errors. We show that sampling strategies (like dissimilarity and 2-wise): (1) are more effective to find faults than the 12 default configurations used in the JHipster continuous integration; (2) can be too costly and exceed the available testing budget. 
We cross this quantitative analysis with the qualitative assessment of JHipster’s lead developers.", "Many programs can be configured through dynamic and or static selection of configuration variables. A software product line (SPL), for example, specifies a family of programs where each program is defined by a unique combination of features. Systematically testing SPL programs is expensive as it can require running each test against a combinatorial number of configurations. Fortunately, a test is often independent of many configuration variables and need not be run against every combination. Configurations that are not required for a test can be pruned from execution. This paper presents SPLat, a new way to dynamically prune irrelevant configurations: the configurations to run for a test can be determined during test execution by monitoring accesses to configuration variables. SPLat achieves an optimal reduction in the number of configurations and is lightweight compared to prior work that used static analysis and heavyweight dynamic execution. Experimental results on 10 SPLs written in Java show that SPLat substantially reduces the total test execution time in many cases. Moreover, we demonstrate the scalability of SPLat by applying it to a large industrial code base written in Ruby on Rails.", "Many industrial systems are highly-configurable, complicating the testing and debugging process. While researchers have developed techniques to statically extract, quantify and manipulate the valid system configurations, we conjecture that many of these techniques will fail in practice. In this paper we analyze a highly-configurable industrial application and two open source applications in order to quantify the true challenges that configurability creates for software testing and debugging. 
We find that (1) all three applications consist of multiple programming languages, hence static analyses need to cross programming language barriers to work, (2) there are many access points and methods to modify configurations, implying that practitioners need configuration traceability and should gather and merge metadata from more than one source and (3) the configuration state of an application on failure cannot be reliably determined by reading persistent data; a runtime memory dump or other heuristics must be used for accurate debugging. We conclude with a roadmap and lessons learned to help practitioners better handle configurability now, and that may lead to new configuration-aware testing and debugging techniques in the future.", "Almost every software system provides configuration options to tailor the system to the target platform and application scenario. Often, this configurability renders the analysis of every individual system configuration infeasible. To address this problem, researchers have proposed a diverse set of sampling algorithms. We present a comparative study of 10 state-of-the-art sampling algorithms regarding their fault-detection capability and size of sample sets. The former is important to improve software quality and the latter to reduce the time of analysis. In a nutshell, we found that sampling algorithms with larger sample sets are able to detect higher numbers of faults, but simple algorithms with small sample sets, such as most-enabled-disabled, are the most efficient in most contexts. Furthermore, we observed that the limiting assumptions made in previous work influence the number of detected faults, the size of sample sets, and the ranking of algorithms. Finally, we have identified a number of technical challenges when trying to avoid the limiting assumptions, which questions the practicality of certain sampling algorithms.", "" ] }
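As a concrete illustration of the 2-wise (pairwise) coverage criterion that several of the cited sampling approaches target, here is a small greedy sampler for boolean options (a textbook-style greedy heuristic, not the algorithm of any cited tool):

```python
from itertools import combinations, product

def pairwise_sample(n_options):
    """Greedily pick boolean configurations until every pair of option
    values (2-wise coverage) appears in some chosen configuration."""
    needed = {(i, j, vi, vj)
              for i, j in combinations(range(n_options), 2)
              for vi, vj in product([0, 1], repeat=2)}
    chosen = []
    while needed:
        # Pick the configuration covering the most still-uncovered pairs.
        best = max(product([0, 1], repeat=n_options),
                   key=lambda c: sum((i, j, c[i], c[j]) in needed
                                     for i, j in combinations(range(n_options), 2)))
        chosen.append(best)
        needed -= {(i, j, best[i], best[j])
                   for i, j in combinations(range(n_options), 2)}
    return chosen

configs = pairwise_sample(4)
```

For 4 boolean options the exhaustive space has 16 configurations, while a pairwise-covering set needs only a handful; this gap is the sampling saving the cited work exploits, and also the source of its risk, since higher-order interactions fall outside the criterion.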
1905.02133
2947461019
We consider the online problem of scheduling jobs on identical machines, where jobs have precedence constraints. We are interested in the demanding setting where the job sizes are not known up-front, but are revealed only upon completion (the non-clairvoyant setting). Such precedence-constrained scheduling problems routinely arise in map-reduce and large-scale optimization. In this paper, we make progress on this problem. For the objective of total weighted completion time, we give a constant-competitive algorithm. And for total weighted flow-time, we give an @math -competitive algorithm under @math -speed augmentation and a natural "no-surprises" assumption on release dates of jobs (which we show is necessary in this context). Our algorithm proceeds by assigning virtual rates to all the waiting jobs, including the ones which are dependent on other uncompleted jobs, and then uses these virtual rates to decide on the actual rates of minimal jobs (i.e., jobs which do not have dependencies and hence are eligible to run). Interestingly, the virtual rates are obtained by allocating time in a fair manner, using an Eisenberg-Gale-type convex program (which we can also solve optimally using a primal-dual scheme). The optimality condition of this convex program allows us to show dual-fitting proofs more easily, without having to guess and hand-craft the duals. We feel that this idea of using fair virtual rates should have broader applicability in scheduling problems.
Much less is known for flow-time problems with precedence constraints. For the offline setting on identical machines, @cite_15 give @math -approximations with @math -speedup, even for general delay functions. In the current paper, we achieve a @math -approximation with @math -speedup for flow-time. Interestingly, @cite_15 show that beating a @math -approximation for any constant @math requires a speedup of at least the optimal approximation factor of makespan minimization in the same machine environment. However, this lower bound requires different jobs with a precedence relationship to have different release dates, which is something our model disallows. (Appendix gives another lower bound showing why we disallow such precedences in the online setting.)
{ "cite_N": [ "@cite_15" ], "mid": [ "2949165180" ], "abstract": [ "Scheduling a set of jobs over a collection of machines is a fundamental problem that needs to be solved millions of times a day in various computing platforms: in operating systems, in large data clusters, and in data centers. Along with makespan, flow-time, which measures the length of time a job spends in a system before it completes, is arguably the most important metric to measure the performance of a scheduling algorithm. In recent years, there has been a remarkable progress in understanding flow-time based objective functions in diverse settings such as unrelated machines scheduling, broadcast scheduling, multi-dimensional scheduling, to name a few. Yet, our understanding of the flow-time objective is limited mostly to the scenarios where jobs have simple structures; in particular, each job is a single self contained entity. On the other hand, in almost all real world applications, think of MapReduce settings for example, jobs have more complex structures. In this paper, we consider two classical scheduling models that capture complex job structures: 1) concurrent open-shop scheduling and 2) precedence constrained scheduling. Our main motivation to study these problems specifically comes from their relevance to two scheduling problems that have gained importance in the context of data centers: co-flow scheduling and DAG scheduling. We design almost optimal approximation algorithms for open-shop scheduling and precedence constrained scheduling, and show hardness results." ] }
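The fair-allocation step the abstract describes can be made concrete in its simplest special case: on a single unit-speed machine, the Eisenberg-Gale-type program maximize Σ_i w_i log r_i subject to Σ_i r_i ≤ 1 has the closed-form proportional-fair solution r_i = w_i / Σ_j w_j. The paper solves the general multi-machine program with a primal-dual scheme; this one-machine sketch is only an illustration:

```python
# Single-machine special case of the Eisenberg-Gale-type program:
#   maximize  sum_i w_i * log(r_i)   subject to   sum_i r_i <= 1, r_i >= 0.
# Stationarity gives w_i / r_i = lambda for all i, so r_i = w_i / sum_j w_j.
def fair_rates(weights):
    total = sum(weights)
    return [w / total for w in weights]

rates = fair_rates([3.0, 1.0, 1.0])  # virtual rates for three waiting jobs
```

A job with three times the weight receives three times the rate, and the machine is fully used; in the paper these virtual rates are computed for all waiting jobs, including ones still blocked by precedence constraints, before being folded into actual rates for the minimal jobs.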
1905.01718
2946299167
Recent success in deep reinforcement learning for continuous control has been dominated by model-free approaches which, unlike model-based approaches, do not suffer from representational limitations in making assumptions about the world dynamics and model errors inevitable in complex domains. However, they require a lot of experiences compared to model-based approaches that are typically more sample-efficient. We propose to combine the benefits of the two approaches by presenting an integrated approach called Curious Meta-Controller. Our approach alternates adaptively between model-based and model-free control using a curiosity feedback based on the learning progress of a neural model of the dynamics in a learned latent space. We demonstrate that our approach can significantly improve the sample efficiency and achieve near-optimal performance on learning robotic reaching and grasping tasks from raw-pixel input in both dense and sparse reward settings.
In a very different study, @cite_30 find that learning a state representation and a dynamics model to improve gradient-descent planning on a set of training demonstrations, rather than to optimize auxiliary objectives, leads to more successful action plans. They show that the distance to a target image, encoded with the learned representation, can be used effectively as a reward for a model-free RL agent in visuomotor control tasks. The approach, however, requires expert demonstrations to be available for training.
{ "cite_N": [ "@cite_30" ], "mid": [ "2795756076" ], "abstract": [ "A key challenge in complex visuomotor control is learning abstract representations that are effective for specifying goals, planning, and generalization. To this end, we introduce universal planning networks (UPN). UPNs embed differentiable planning within a goal-directed policy. This planning computation unrolls a forward model in a latent space and infers an optimal action plan through gradient descent trajectory optimization. The plan-by-gradient-descent process and its underlying representations are learned end-to-end to directly optimize a supervised imitation learning objective. We find that the representations learned are not only effective for goal-directed visual imitation via gradient-based trajectory optimization, but can also provide a metric for specifying goals using images. The learned representations can be leveraged to specify distance-based rewards to reach new target states for model-free reinforcement learning, resulting in substantially more effective learning when solving new tasks described via image-based goals. We were able to achieve successful transfer of visuomotor planning strategies across robots with significantly different morphologies and actuation capabilities." ] }
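The reward construction described above, negative distance to the goal image in the learned latent space, can be sketched as follows (the random linear-tanh encoder is a stand-in for the learned representation, and all sizes are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(8, 64))  # hypothetical weights of a learned encoder

def phi(obs):
    # Stand-in for the representation learned end-to-end from demonstrations.
    return np.tanh(W @ obs)

def reward(obs, goal_obs):
    # Negative latent distance to the goal observation, usable as a
    # dense reward for a model-free RL agent.
    return -np.linalg.norm(phi(obs) - phi(goal_obs))

goal = rng.normal(size=64)
```

The reward is maximal (zero) exactly at the goal and strictly negative elsewhere, which is what makes a distance in a well-shaped latent space serve as a dense reward signal where raw pixel distance would not.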
1905.01718
2946299167
More recently, a control architecture was proposed that includes an arbitrator to switch between habitual and planning systems, choosing between the action predicted by the actor of an actor-critic model and the action predicted by an inverse dynamics model @cite_6 . The arbitration is driven by the reward prediction error and favors the actor's prediction when the error at the previous timestep is below a predefined threshold. The approach does not consider imperfect model predictions and is applied only to a very low-dimensional state space.
{ "cite_N": [ "@cite_6" ], "mid": [ "2898050260" ], "abstract": [ "Internal models are important when agents make decisions based on predictions of future states and their utilities. However, using internal models for planning can be time consuming. Therefore, it can be useful to use a habitual system for repetitive tasks that can be executed faster and with reduced algorithmic resources. Current evidence suggests that the brain uses both control systems, planning and habitual systems for behavioural control, which then requires an arbitration between these two systems. In our previous work [1], we proposed an Arbitrated Predictive Actor-Critic (APAC), which is a neural architecture demonstrating cooperative mechanisms of planning and habitual control systems for one step mapping. The present study tests the ability of such a model to control a simulated twojoints robotic arm during multiple reaching tasks with movement limitations that require multiple steps to solve the task. Our results show that APAC can learn the multi-step learning under various conditions. Interestingly, the APAC tends to shift from planning to habits by taking actions predicted by a habitual controller over the training time." ] }
1905.01718
2946299167
Recent success in deep reinforcement learning for continuous control has been dominated by model-free approaches which, unlike model-based approaches, do not suffer from representational limitations in making assumptions about the world dynamics and model errors inevitable in complex domains. However, they require a lot of experiences compared to model-based approaches that are typically more sample-efficient. We propose to combine the benefits of the two approaches by presenting an integrated approach called Curious Meta-Controller. Our approach alternates adaptively between model-based and model-free control using a curiosity feedback based on the learning progress of a neural model of the dynamics in a learned latent space. We demonstrate that our approach can significantly improve the sample efficiency and achieve near-optimal performance on learning robotic reaching and grasping tasks from raw-pixel input in both dense and sparse reward settings.
As opposed to explicitly learning a dynamics model, @cite_4 propose a type of goal-conditioned value function called Temporal Difference Model (TDM) that implicitly learns a dynamics model and uses it for optimal control. In their approach, transitions collected off-policy are sampled from a replay buffer and relabeled with new, randomly sampled goal states and time horizons which the TDM uses as input along with the state-action pair. The TDM is learned model-free and updated to be the negative distance between the newly visited and goal states if the horizon is zero or, otherwise, to be the approximate TDM value after decrementing the horizon and advancing the state. The information the TDM provides on the closeness to the goal after a given number of actions makes it resemble a model. Despite achieving high sample efficiency by relabeling collected transitions with several goals and horizons, the approach has not been applied to learning from raw-pixel input but only to learning from simple low-dimensional observations.
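The TDM update described above can be sketched as follows; `tdm_value_fn` stands in for the learned goal-conditioned value function, and the Euclidean distance is one concrete choice of state distance — both are assumptions of this sketch rather than details fixed by @cite_4 :

```python
import numpy as np

def tdm_target(tdm_value_fn, next_state, goal, horizon):
    """Training target for a relabeled transition in a Temporal
    Difference Model (TDM).

    If the remaining horizon is zero, the target is the negative
    distance between the newly visited state and the (relabeled) goal;
    otherwise it bootstraps from the TDM value at the advanced state
    with the horizon decremented.
    """
    if horizon == 0:
        return -np.linalg.norm(next_state - goal)
    return tdm_value_fn(next_state, goal, horizon - 1)
```

Because each stored transition can be relabeled with many sampled goals and horizons, a single environment step yields many such targets, which is the source of the approach's sample efficiency.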
{ "cite_N": [ "@cite_4" ], "mid": [ "2787757704" ], "abstract": [ "Model-free reinforcement learning (RL) is a powerful, general tool for learning complex behaviors. However, its sample efficiency is often impractically large for solving challenging real-world problems, even with off-policy algorithms such as Q-learning. A limiting factor in classic model-free RL is that the learning signal consists only of scalar rewards, ignoring much of the rich information contained in state transition tuples. Model-based RL uses this information, by training a predictive model, but often does not achieve the same asymptotic performance as model-free RL due to model bias. We introduce temporal difference models (TDMs), a family of goal-conditioned value functions that can be trained with model-free learning and used for model-based control. TDMs combine the benefits of model-free and model-based RL: they leverage the rich information in state transitions to learn very efficiently, while still attaining asymptotic performance that exceeds that of direct model-based RL methods. Our experimental results show that, on a range of continuous control tasks, TDMs provide a substantial improvement in efficiency compared to state-of-the-art model-based and model-free methods." ] }
1905.01718
2946299167
Recent success in deep reinforcement learning for continuous control has been dominated by model-free approaches which, unlike model-based approaches, do not suffer from representational limitations in making assumptions about the world dynamics and model errors inevitable in complex domains. However, they require a lot of experiences compared to model-based approaches that are typically more sample-efficient. We propose to combine the benefits of the two approaches by presenting an integrated approach called Curious Meta-Controller. Our approach alternates adaptively between model-based and model-free control using a curiosity feedback based on the learning progress of a neural model of the dynamics in a learned latent space. We demonstrate that our approach can significantly improve the sample efficiency and achieve near-optimal performance on learning robotic reaching and grasping tasks from raw-pixel input in both dense and sparse reward settings.
Neuroscience studies on choice behavior have presented different hypotheses on how habitual (model-free) and planning (model-based) systems control human sequential decision-making. A study by Daw @cite_8 argues for a deliberative planning system in which a learned model of the task is used to exhaustively search the decision tree until the goal is reached, while the habits are formed based on the expected long-term reward of an action, obtained on completion of the tree search. In contrast, Cushman and Morris @cite_13 argue for a different hybrid control model where (sub-) goals are first chosen with model-free learning and then aimed at with model-based planning.
{ "cite_N": [ "@cite_13", "@cite_8" ], "mid": [ "2228005960", "2170651797" ], "abstract": [ "Humans choose actions based on both habit and planning. Habitual control is computationally frugal but adapts slowly to novel circumstances, whereas planning is computationally expensive but can adapt swiftly. Current research emphasizes the competition between habits and plans for behavioral control, yet many complex tasks instead favor their integration. We consider a hierarchical architecture that exploits the computational efficiency of habitual control to select goals while preserving the flexibility of planning to achieve those goals. We formalize this mechanism in a reinforcement learning setting, illustrate its costs and benefits, and experimentally demonstrate its spontaneous application in a sequential decision-making task.", "In the television series The Wire, addicts Bubbles and Johnny regularly engage in bizarre and elaborate schemes to obtain drugs, ranging from robbing an ambulance to stealing drugs by lowering a fishing line from a rooftop. That these fictional crimes can be both so meticulously planned and yet focused on such narrow, shortsighted goals highlights a gap in our understanding of how the real brain deploys deliberative vs. automatic mechanisms to make decisions. On a standard account, people can deliberatively evaluate the consequences of candidate actions, which gives us our flexibility to dream up novel plans. Alternatively, the brain can crystallize repeatedly successful behaviors into habits: learned reflexes that free up resources by executing the behaviors automatically (although at the expense of inflexibility and, it is believed, underpinning pathological compulsions). As with most dichotomies, the problem with this view is that the world is not so black and white. 
Much as the drug-seeking behavior of addicts seems not to fit into either category, for healthy behaviors also, neither of these two sorts of decision making is very practical on its own. In PNAS, Cushman and Morris suggest a hybrid of these mechanisms, and show behavioral evidence that humans use it to plan actions (1)." ] }
1905.01718
2946299167
Recent success in deep reinforcement learning for continuous control has been dominated by model-free approaches which, unlike model-based approaches, do not suffer from representational limitations in making assumptions about the world dynamics and model errors inevitable in complex domains. However, they require a lot of experiences compared to model-based approaches that are typically more sample-efficient. We propose to combine the benefits of the two approaches by presenting an integrated approach called Curious Meta-Controller. Our approach alternates adaptively between model-based and model-free control using a curiosity feedback based on the learning progress of a neural model of the dynamics in a learned latent space. We demonstrate that our approach can significantly improve the sample efficiency and achieve near-optimal performance on learning robotic reaching and grasping tasks from raw-pixel input in both dense and sparse reward settings.
In contrast, @cite_12 propose that the arbitration between model-free and model-based control is driven by a cost-benefit trade-off rather than by the cognitive ability to plan. They hypothesize that the brain estimates the expected value of using each of the two control systems during choice but then discounts that of the model-based system in proportion to its cognitive cost. This was supported by the observation that even participants with an accurate internal model of a decision-making task and extended response times used less model-based control when its estimated reward advantage was low.
{ "cite_N": [ "@cite_12" ], "mid": [ "2800631177" ], "abstract": [ "Decision-making algorithms face a basic tradeoff between accuracy and effort (i.e., computational demands). It is widely agreed that humans can choose between multiple decision-making processes tha..." ] }
1905.01833
2944464415
While CUDA has become a major parallel computing platform and programming model for general-purpose GPU computing, CUDA-induced bug patterns have not yet been well explored. In this paper, we conduct the first empirical study to reveal important categories of CUDA program bug patterns based on 319 bugs identified within 5 popular CUDA projects in GitHub. Our findings demonstrate that CUDA-specific characteristics may cause program bugs such as synchronization bugs that are rather difficult to detect. To efficiently detect such synchronization bugs, we establish the first lightweight general CUDA bug detection framework, namely Simulee, to simulate CUDA program execution by interpreting the corresponding llvm bytecode and collecting the memory-access information to automatically detect CUDA synchronization bugs. To evaluate the effectiveness and efficiency of Simulee, we conduct a set of experiments and the experimental results suggest that Simulee can detect 20 out of the 27 studied synchronization bugs and successfully detects 26 previously unknown synchronization bugs, 10 of which have been confirmed by the developers.
Several existing works study bugs and other characteristics of CUDA programs. For instance, @cite_9 delivered an empirical study on the features of performance bugs in CUDA programs, while @cite_44 studied control-flow irregularity and memory-access irregularity and found that the two irregularities are largely independent of each other and that most kernels exhibit some irregularity. @cite_3 examined the effectiveness of CUDA for expressing applications with different sets of performance characteristics. Some researchers focus on comparisons between CUDA and OpenCL. For instance, @cite_35 compared several C++ programs running on top of CUDA and OpenCL and found that the two work equally well for problems of large size. @cite_28 , on the other hand, studied the discrepancies in the OpenCL and CUDA compilers' optimizations that affect the associated GPU computing performance.
{ "cite_N": [ "@cite_35", "@cite_28", "@cite_9", "@cite_3", "@cite_44" ], "mid": [ "2112632437", "2107911628", "2025692558", "2128022558", "1997162567" ], "abstract": [ "We present a comparison of several modern C++ libraries providing high-level interfaces for programming multi- and many-core architectures on top of CUDA or OpenCL. The comparison focuses on the solution of ordinary differential equations (ODEs) and is based on odeint, a framework for the solution of systems of ODEs. Odeint is designed in a very flexible way and may be easily adapted for effective use of libraries such as MTL4, VexCL, or ViennaCL, using CUDA or OpenCL technologies. We found that CUDA and OpenCL work equally well for problems of large sizes, while OpenCL has higher overhead for smaller problems. Furthermore, we show that modern high-level libraries allow us to effectively use the computational resources of many-core GPUs or multicore CPUs without much knowledge of the underlying technologies.", "In this work, we evaluate OpenCL as a programming tool for developing performance-portable applications for GPGPU. While the Khronos group developed OpenCL with programming portability in mind, performance is not necessarily portable. OpenCL has required performance-impacting initializations that do not exist in other languages such as CUDA. Understanding these implications allows us to provide a single library with decent performance on a variety of platforms. We choose triangular solver (TRSM) and matrix multiplication (GEMM) as representative level 3 BLAS routines to implement in OpenCL. We profile TRSM to get the time distribution of the OpenCL runtime system. We then provide tuned GEMM kernels for both the NVIDIA Tesla C2050 and ATI Radeon 5870, the latest GPUs offered by both companies. 
We explore the benefits of using the texture cache, the performance ramifications of copying data into images, discrepancies in the OpenCL and CUDA compilers' optimizations, and other issues that affect the performance. Experimental results show that nearly 50 of peak performance can be obtained in GEMM on both GPUs in OpenCL. We also show that the performance of these kernels is not highly portable. Finally, we propose the use of auto-tuning to better explore these kernels' parameter space using search harness.", "Given the extraordinary computational power of modern graphics processing units (GPUs), general purpose computation on GPUs (GPGPU) has become an increasingly important platform for high performance computing. To better understand how well the GPU resource has been utilized by application developers and then to facilitate them to develop high performance GPGPU code, we conduct an empirical study on GPGPU programs from ten open-source projects. These projects span a wide range of disciplines and many are designed as high performance libraries. Among these projects, we found various performance 'bugs', i.e., code segments leading to inefficient use of GPU hardware. We characterize these performance bugs, and propose the bug fixes. Our experiments confirm both significant performance gains and energy savings from our fixes and reveal interesting insights on different GPUs.", "Graphics processors (GPUs) provide a vast number of simple, data-parallel, deeply multithreaded cores and high memory bandwidths. GPU architectures are becoming increasingly programmable, offering the potential for dramatic speedups for a variety of general-purpose applications compared to contemporary general-purpose processors (CPUs). 
This paper uses NVIDIA's C-like CUDA language and an engineering sample of their recently introduced GTX 260 GPU to explore the effectiveness of GPUs for a variety of application types, and describes some specific coding idioms that improve their performance on the GPU. GPU performance is compared to both single-core and multicore CPU performance, with multicore CPU implementations written using OpenMP. The paper also discusses advantages and inefficiencies of the CUDA programming model and some desirable features that might allow for greater ease of use and also more readily support a larger body of applications.", "GPUs have been used to accelerate many regular applications and, more recently, irregular applications in which the control flow and memory access patterns are data-dependent and statically unpredictable. This paper defines two measures of irregularity called control-flow irregularity and memory-access irregularity, and investigates, using performance-counter measurements, how irregular GPU kernels differ from regular kernels with respect to these measures. For a suite of 13 benchmarks, we find that (i) irregularity at the warp level varies widely, (ii) control-flow irregularity and memory-access irregularity are largely independent of each other, and (iii) most kernels, including regular ones, exhibit some irregularity. A program's irregularity can change between different inputs, systems, and arithmetic precision but generally stays in a specific region of the irregularity space. Whereas some highly tuned implementations of irregular algorithms exhibit little irregularity, trading off extra irregularity for better locality or less work can improve overall performance." ] }
1905.01833
2944464415
While CUDA has become a major parallel computing platform and programming model for general-purpose GPU computing, CUDA-induced bug patterns have not yet been well explored. In this paper, we conduct the first empirical study to reveal important categories of CUDA program bug patterns based on 319 bugs identified within 5 popular CUDA projects in GitHub. Our findings demonstrate that CUDA-specific characteristics may cause program bugs such as synchronization bugs that are rather difficult to detect. To efficiently detect such synchronization bugs, we establish the first lightweight general CUDA bug detection framework, namely Simulee, to simulate CUDA program execution by interpreting the corresponding llvm bytecode and collecting the memory-access information to automatically detect CUDA synchronization bugs. To evaluate the effectiveness and efficiency of Simulee, we conduct a set of experiments and the experimental results suggest that Simulee can detect 20 out of the 27 studied synchronization bugs and successfully detects 26 previously unknown synchronization bugs, 10 of which have been confirmed by the developers.
Several approaches that detect CUDA bugs are based on static or dynamic analysis @cite_29 @cite_20 @cite_13 @cite_31 @cite_5 @cite_10 @cite_17 @cite_47 . Though they can be effective, they are also argued to be time-costly @cite_34 . Much of this research concentrates on detecting data race bugs specifically. In addition to many of the aforementioned works, LDetector @cite_6 instrumented the compiler to detect races using diffs between memory snapshots, and @cite_18 detected data races on GPU emulators instead of on real GPU hardware. Many tools have been developed to inspect CUDA programs. For instance, GKLEE @cite_29 employed concolic-execution-based verification together with its test-case reduction heuristics for CUDA bug detection; it was later scaled using the technique of Parameterized Flows @cite_19 .
{ "cite_N": [ "@cite_47", "@cite_18", "@cite_10", "@cite_29", "@cite_6", "@cite_19", "@cite_5", "@cite_31", "@cite_34", "@cite_13", "@cite_20", "@cite_17" ], "mid": [ "1866093455", "", "2121717408", "2076960126", "2597739964", "2150248374", "2168272209", "2043201977", "", "2731956051", "2258083119", "1971463841" ], "abstract": [ "We report on practical experiences over the last 2.5 years related to the engineering of GPUVerify, a static verification tool for OpenCL and CUDA GPU kernels, plotting the progress of GPUVerify from a prototype to a fully functional and relatively efficient analysis tool. Our hope is that this experience report will serve the verification community by helping to inform future tooling efforts.", "", "We present a technique for verifying race- and divergence-freedom of GPU kernels that are written in mainstream kernel programming languages such as OpenCL and CUDA. Our approach is founded on a novel formal operational semantics for GPU programming termed synchronous, delayed visibility (SDV) semantics. The SDV semantics provides a precise definition of barrier divergence in GPU kernels and allows kernel verification to be reduced to analysis of a sequential program, thereby completely avoiding the need to reason about thread interleavings, and allowing existing modular techniques for program verification to be leveraged. We describe an efficient encoding for data race detection and propose a method for automatically inferring loop invariants required for verification. We have implemented these techniques as a practical verification tool, GPUVerify, which can be applied directly to OpenCL and CUDA source code. We evaluate GPUVerify with respect to a set of 163 kernels drawn from public and commercial sources. 
Our evaluation demonstrates that GPUVerify is capable of efficient, automatic verification of a large number of real-world kernels.", "Programs written for GPUs often contain correctness errors such as races, deadlocks, or may compute the wrong result. Existing debugging tools often miss these errors because of their limited input-space and execution-space exploration. Existing tools based on conservative static analysis or conservative modeling of SIMD concurrency generate false alarms resulting in wasted bug-hunting. They also often do not target performance bugs (non-coalesced memory accesses, memory bank conflicts, and divergent warps). We provide a new framework called GKLEE that can analyze C++ GPU programs, locating the aforesaid correctness and performance bugs. For these programs, GKLEE can also automatically generate tests that provide high coverage. These tests serve as concrete witnesses for every reported bug. They can also be used for downstream debugging, for example to test the kernel on the actual hardware. We describe the architecture of GKLEE, its symbolic virtual machine model, and describe previously unknown bugs and performance issues that it detected on commercial SDK kernels. We describe GKLEE's test-case reduction heuristics, and the resulting scalability improvement for a given coverage target.", "Data race detection has become an important problem in GPU programming. Previous designs of CPU race-checking tools are mainly task parallel and incur high overhead on GPUs due to access instrumentation, especially when monitoring many thousands of threads routinely used by GPU programs. This article presents a novel data-parallel solution designed and optimized for the GPU architecture. It includes compiler support and a set of runtime techniques. It uses value-based checking, which detects the races reported in previous work, finds new races, and supports race-free deterministic GPU execution. 
More important, race checking is massively data parallel and does not introduce divergent branching or atomic synchronization. Its slowdown is less than 5 × for over half of the tests and 10 × on average, which is orders of magnitude more efficient than the cuda-memcheck tool by Nvidia and the methods that use fine-grained access instrumentation.", "The growing scale of concurrency requires automated abstraction techniques to cut down the effort in concurrent system analysis. In this paper, we show that the high degree of behavioral symmetry present in GPU programs allows CUDA race detection to be dramatically simplified through abstraction. Our abstraction techniques is one of automatically creating parametric flows -- control-flow equivalence classes of threads that diverge in the same manner -- and checking for data races only across a pair of threads per parametric flow. We have implemented this approach as an extension of our recently proposed GKLEE symbolic analysis framework and show that all our previous results are dramatically improved in that (i) the parametric flow-based analysis takes far less time, and (ii) because of the much higher scalability of the analysis, we can detect even more data race situations that were previously missed by GKLEE because it was forced to downscale examples to limit analysis complexity. Moreover, the parametric flow-based analysis is applicable to other programs with SPMD models.", "Interest in Graphical Processing Units (GPUs) is skyrocketing due to their potential to yield spectacular performance on many important computing applications. Unfortunately, writing such efficient GPU kernels requires painstaking manual optimization effort which is very error prone. We contribute the first comprehensive symbolic verifier for kernels written in CUDA C. 
Called the 'Prover of User GPU programs (PUG),' our tool efficiently and automatically analyzes real-world kernels using Satisfiability Modulo Theories (SMT) tools, detecting bugs such as data races, incorrectly synchronized barriers, bank conflicts, and wrong results. PUG's innovative ideas include a novel approach to symbolically encode thread interleavings, exact analysis for correct barrier placement, special methods for avoiding interleaving generation, dividing up the analysis over barrier intervals, and handling loops through three approaches: loop normalization, overapproximation, and invariant finding. PUG has analyzed over a hundred CUDA kernels from public distributions and in-house projects, finding bugs as well as subtle undocumented assumptions.", "Even the careful GPU programmer can inadvertently introduce data races while writing and optimizing code. Currently available GPU race checking methods fall short either in terms of their formal guarantees, ease of use, or practicality. Existing symbolic methods: (1) do not fully support existing CUDA kernels, (2) may require user-specified assertions or invariants, (3) often require users to guess which inputs may be safely made concrete, (4) tend to explode in complexity when the number of threads is increased, and (5) explode in the face of thread-ID based decisions, especially in a loop. We present SESA, a new tool combining Symbolic Execution and Static Analysis to analyze C++ CUDA programs that overcomes all these limitations. SESA also scales well to handle non-trivial benchmarks such as Parboil and Lonestar, and is the only tool of its class that handles such practical examples. This paper presents SESA's methodological innovations and practical results.", "", "GPU programming models enable and encourage massively parallel programming with over a million threads, requiring extreme parallelism to achieve good performance. 
Massive parallelism brings significant correctness challenges by increasing the possibility for bugs as the number of thread interleavings balloons. Conventional dynamic safety analyses struggle to run at this scale. We present BARRACUDA, a concurrency bug detector for GPU programs written in Nvidia’s CUDA language. BARRACUDA handles a wider range of parallelism constructs than previous work, including branch operations, low-level atomics and memory fences, which allows BARRACUDA to detect new classes of concurrency bugs. BARRACUDA operates at the binary level for increased compatibility with existing code, leveraging a new binary instrumentation framework that is extensible to other dynamic analyses. BARRACUDA incorporates a number of novel optimizations that are crucial for scaling concurrency bug detection to over a million threads.", "We present ESBMC-GPU, an extension to the ESBMC model checker that is aimed at verifying GPU programs written for the CUDA framework. ESBMC-GPU uses an operational model for the verification, i.e., an abstract representation of the standard CUDA libraries that conservatively approximates their semantics. ESBMC-GPU verifies CUDA programs, by explicitly exploring the possible interleavings (up to the given context bound), while treating each interleaving itself symbolically. Experimental results show that ESBMC-GPU is able to detect more properties violations, while keeping lower rates of false results.", "Data-dependent GPU kernels, whose data or control flow are dependent on the input of the program, are difficult to verify because they require reasoning about shared state manipulated by many parallel threads. Existing verification techniques for GPU kernels achieve soundness and scalability by using a two-thread reduction and making the contents of the shared state nondeterministic each time threads synchronise at a barrier, to account for all possible thread interactions. 
This coarse abstraction prohibits verification of data-dependent kernels. We present barrier invariants, a novel abstraction technique which allows key properties about the shared state of a kernel to be preserved across barriers during formal reasoning. We have integrated barrier invariants with the GPUVerify tool, and present a detailed case study showing how they can be used to verify three prefix sum algorithms, allowing efficient modular verification of a stream compaction kernel, a key building block for GPU programming. This analysis goes significantly beyond what is possible using existing verification techniques for GPU kernels." ] }
1905.01833
2944464415
While CUDA has become a major parallel computing platform and programming model for general-purpose GPU computing, CUDA-induced bug patterns have not yet been well explored. In this paper, we conduct the first empirical study to reveal important categories of CUDA program bug patterns based on 319 bugs identified within 5 popular CUDA projects in GitHub. Our findings demonstrate that CUDA-specific characteristics may cause program bugs such as synchronization bugs that are rather difficult to detect. To efficiently detect such synchronization bugs, we establish the first lightweight general CUDA bug detection framework, namely Simulee, to simulate CUDA program execution by interpreting the corresponding llvm bytecode and collecting the memory-access information to automatically detect CUDA synchronization bugs. To evaluate the effectiveness and efficiency of Simulee, we conduct a set of experiments and the experimental results suggest that Simulee can detect 20 out of the 27 studied synchronization bugs and successfully detects 26 previously unknown synchronization bugs, 10 of which have been confirmed by the developers.
One closely related work to Simulee is a test-amplification-based bug detection approach @cite_4 that amplified the result of a single running test, combining it with static analysis so that the set of all inputs and interleavings could be verified. Though the idea of injecting a testing philosophy into CUDA programs is similar, Simulee advances in that (1) it is a general-purpose and fully automated bug detection framework that can detect various types of synchronization bugs, while @cite_4 only handles data races and requires manual inputs; (2) it only needs to run the code relevant to kernel function execution, while @cite_4 needs to run the whole program life cycle, which leads to much larger overhead; and (3) @cite_4 suffers from incorrect inputs regarding synchronization and a resulting loss of effectiveness, while Simulee does not have these limitations.
{ "cite_N": [ "@cite_4" ], "mid": [ "2167995056" ], "abstract": [ "We present a novel technique for verifying properties of data parallel GPU programs via test amplification. The key insight behind our work is that we can use the technique of static information flow to amplify the result of a single test execution over the set of all inputs and interleavings that affect the property being verified. We empirically demonstrate the effectiveness of test amplification for verifying race-freedom and determinism over a large number of standard GPU kernels, by showing that the result of verifying a single dynamic execution can be amplified over the massive space of possible data inputs and thread interleavings." ] }
1905.01941
2942809297
Inter-personal anatomical differences limit the accuracy of person-independent gaze estimation networks. Yet there is a need to lower gaze errors further to enable applications requiring higher quality. Further gains can be achieved by personalizing gaze networks, ideally with few calibration samples. However, over-parameterized neural networks are not amenable to learning from few examples as they can quickly over-fit. We embrace these challenges and propose a novel framework for Few-shot Adaptive GaZE Estimation (FAZE) for learning person-specific gaze networks with very few (less than 9) calibration samples. FAZE learns a rotation-aware latent representation of gaze via a disentangling encoder-decoder architecture along with a highly adaptable gaze estimator trained using meta-learning. It is capable of adapting to any new person to yield significant performance gains with as few as 3 samples, yielding state-of-the-art performance of 3.18-deg on GazeCapture, a 19% improvement over prior art.
Gaze Estimation. Appearance-based gaze estimation @cite_0 methods that map images directly to gaze have recently surpassed classical model-based approaches @cite_31 for in-the-wild settings. Earlier approaches in this direction assume images captured in restricted laboratory settings and use direct regression methods @cite_50 @cite_53 or learning-by-synthesis approaches combined with random forests to separate head-pose clusters @cite_4 .
{ "cite_N": [ "@cite_4", "@cite_53", "@cite_0", "@cite_50", "@cite_31" ], "mid": [ "", "2160495187", "1946259682", "2139196511", "2167020116" ], "abstract": [ "", "To infer human gaze from eye appearance, various methods have been proposed. However, most of them assume a fixed head pose because allowing free head motion adds 6 degrees of freedom to the problem and requires a prohibitively large number of training samples. In this paper, we aim at solving the appearance-based gaze estimation problem under free head motion without significantly increasing the cost of training. The idea is to decompose the problem into subproblems, including initial estimation under fixed head pose and subsequent compensations for estimation biases caused by head rotation and eye appearance distortion. Then each subproblem is solved by either learning-based method or geometric-based calculation. Specifically, the gaze estimation bias caused by eye appearance distortion is learnt effectively from a 5-second video clip. Extensive experiments were conducted to verify the effectiveness of the proposed approach.", "We present a method for estimating eye gaze direction, which represents a departure from conventional eye gaze estimation methods, the majority of which are based on tracking specific optical phenomena like corneal reflection and the Purkinje images. We employ an appearance manifold model, but instead of using a densely sampled spline to perform the nearest manifold point query, we retain the original set of sparse appearance samples and use linear interpolation among a small subset of samples to approximate the nearest manifold point. The advantage of this approach is that since we are only storing a sparse set of samples, each sample can be a high dimensional vector that retains more representational accuracy than short vectors produced with dimensionality reduction methods. 
The algorithm was tested with a set of eye images labelled with ground truth point-of-regard coordinates. We have found that the algorithm is capable of estimating eye gaze with a mean angular error of 0.38 degrees, which is comparable to that obtained by commercially available eye trackers.", "The problem of estimating human gaze from eye appearance is regarded as mapping high-dimensional features to low-dimensional target space. Conventional methods require densely obtained training samples on the eye appearance manifold, which results in a tedious calibration stage. In this paper, we introduce an adaptive linear regression (ALR) method for accurate mapping via sparsely collected training samples. The key idea is to adaptively find the subset of training samples where the test sample is most linearly representable. We solve the problem via l1-optimization and thoroughly study the key issues to seek for the best solution for regression. The proposed gaze estimation approach based on ALR is naturally sparse and low-dimensional, giving the ability to infer human gaze from variant resolution eye images using much fewer training samples than existing methods. Especially, the optimization procedure in ALR is extended to solve the subpixel alignment problem simultaneously for low resolution test eye images. Performance of the proposed method is evaluated by extensive experiments against various factors such as number of training samples, feature dimensionality and eye image resolution to verify its effectiveness.", "Despite active research and significant progress in the last 30 years, eye detection and tracking remains challenging due to the individuality of eyes, occlusion, variability in scale, location, and light conditions. Data on eye location and details of eye movements have numerous applications and are essential in face detection, biometric identification, and particular human-computer interaction tasks. 
This paper reviews current progress and state of the art in video-based eye detection and tracking in order to identify promising techniques as well as issues to be further addressed. We present a detailed review of recent eye models and techniques for eye detection and tracking. We also survey methods for gaze estimation and compare them based on their geometric properties and reported accuracies. This review shows that, despite their apparent simplicity, the development of a general eye detection technique involves addressing many challenges, requires further theoretical developments, and is consequently of interest to many other domains problems in computer vision and beyond." ] }
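The sparse appearance-manifold idea quoted above (interpolating gaze among a few nearest stored samples instead of fitting a dense model) can be sketched as follows. The toy feature vectors, the choice of k, and the inverse-distance weighting are illustrative assumptions, not the cited method's exact formulation.

```python
# Minimal sketch of appearance-based gaze estimation by interpolating
# among the nearest stored appearance samples. Purely illustrative.
import numpy as np

def estimate_gaze(query, appearances, gazes, k=3):
    """Interpolate gaze from the k nearest appearance samples,
    weighting each by inverse distance to the query."""
    d = np.linalg.norm(appearances - query, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + 1e-8)   # avoid division by zero on exact matches
    w /= w.sum()
    return w @ gazes[idx]        # weighted average of (yaw, pitch) labels

rng = np.random.default_rng(0)
appearances = rng.normal(size=(50, 16))  # toy eye-image feature vectors
gazes = rng.normal(size=(50, 2))         # (yaw, pitch) labels in radians
query = appearances[7] + 0.01 * rng.normal(size=16)
print(estimate_gaze(query, appearances, gazes))  # dominated by gazes[7]
```

Because the query sits very close to stored sample 7, the inverse-distance weights concentrate there, which is the behavior such sparse-sample methods rely on.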
1905.01941
2942809297
Inter-personal anatomical differences limit the accuracy of person-independent gaze estimation networks. Yet there is a need to lower gaze errors further to enable applications requiring higher quality. Further gains can be achieved by personalizing gaze networks, ideally with few calibration samples. However, over-parameterized neural networks are not amenable to learning from few examples as they can quickly over-fit. We embrace these challenges and propose a novel framework for Few-shot Adaptive GaZE Estimation (FAZE) for learning person-specific gaze networks with very few (less than 9) calibration samples. FAZE learns a rotation-aware latent representation of gaze via a disentangling encoder-decoder architecture along with a highly adaptable gaze estimator trained using meta-learning. It is capable of adapting to any new person to yield significant performance gains with as few as 3 samples, yielding state-of-the-art performance of 3.18-deg on GazeCapture, a 19% improvement over prior art.
More recently, the availability of large scale datasets such as MPIIGaze @cite_48 and GazeCapture @cite_12 , and progress in CNNs have rapidly moved the field forward. MPIIGaze has become a benchmark dataset for in-the-wild gaze estimation. For the most competitive person-independent within-MPIIGaze leave-one-person-out evaluation, gaze errors have progressively decreased from @math for naively applying a LeNet-5 architecture to eye-input @cite_48 to the current best of @math with an ensemble of multi-modal networks based on VGG-16 @cite_33 . Proposed advancements include the use of more complex CNNs @cite_38 ; more meaningful use of face @cite_46 @cite_9 and multi-modal input @cite_12 @cite_33 @cite_55 ; explicit handling of differences in the two eyes @cite_36 ; greater robustness to head pose @cite_5 @cite_13 ; improvements in data normalization @cite_34 ; learning more informed intermediate representations @cite_1 ; using ensembles of networks @cite_33 ; and using synthetic data @cite_14 @cite_63 @cite_6 @cite_56 @cite_13 .
{ "cite_N": [ "@cite_38", "@cite_14", "@cite_33", "@cite_36", "@cite_48", "@cite_9", "@cite_55", "@cite_1", "@cite_6", "@cite_56", "@cite_63", "@cite_5", "@cite_46", "@cite_34", "@cite_13", "@cite_12" ], "mid": [ "2953126257", "2567101557", "2799164693", "2897890508", "", "", "2902615807", "2884915206", "2785967764", "", "", "2778268008", "2950361968", "2805941539", "", "2952055246" ], "abstract": [ "Learning-based methods are believed to work well for unconstrained gaze estimation, i.e. gaze estimation from a monocular RGB camera without assumptions regarding user, environment, or camera. However, current gaze datasets were collected under laboratory conditions and methods were not evaluated across multiple datasets. Our work makes three contributions towards addressing these limitations. First, we present the MPIIGaze that contains 213,659 full face images and corresponding ground-truth gaze positions collected from 15 users during everyday laptop use over several months. An experience sampling approach ensured continuous gaze and head poses and realistic variation in eye appearance and illumination. To facilitate cross-dataset evaluations, 37,667 images were manually annotated with eye corners, mouth corners, and pupil centres. Second, we present an extensive evaluation of state-of-the-art gaze estimation methods on three current datasets, including MPIIGaze. We study key challenges including target gaze range, illumination conditions, and facial appearance variation. We show that image resolution and the use of both eyes affect gaze estimation performance while head pose and pupil centre information are less informative. Finally, we propose GazeNet, the first deep appearance-based gaze estimation method. 
GazeNet improves the state of the art by 22 percent (from a mean error of 13.9 degrees to 10.8 degrees) for the most challenging cross-dataset evaluation.", "With recent progress in graphics, it has become more tractable to train models on synthetic images, potentially avoiding the need for expensive annotations. However, learning from synthetic images may not achieve the desired performance due to a gap between synthetic and real image distributions. To reduce this gap, we propose Simulated+Unsupervised (S+U) learning, where the task is to learn a model to improve the realism of a simulator's output using unlabeled real data, while preserving the annotation information from the simulator. We develop a method for S+U learning that uses an adversarial network similar to Generative Adversarial Networks (GANs), but with synthetic images as inputs instead of random vectors. We make several key modifications to the standard GAN algorithm to preserve annotations, avoid artifacts, and stabilize training: (i) a 'self-regularization' term, (ii) a local adversarial loss, and (iii) updating the discriminator using a history of refined images. We show that this enables generation of highly realistic images, which we demonstrate both qualitatively and with a user study. We quantitatively evaluate the generated images by training models for gaze estimation and hand pose estimation. We show a significant improvement over using synthetic images, and achieve state-of-the-art results on the MPIIGaze dataset without any labeled real data.", "In this work, we introduce a Hierarchical Generative Model (HGM) to enable realistic forward eye image synthesis, as well as effective backward eye gaze estimation. The proposed HGM consists of a hierarchical generative shape model (HGSM), and a conditional bidirectional generative adversarial network (c-BiGAN). 
The HGSM encodes eye geometry knowledge and relates eye gaze with eye shape, while c-BiGAN leverages on big data and captures the dependency between eye shape and eye appearance. As an intermediate component, eye shape connects knowledge-based model (HGSM) with data-driven model (c-BiGAN) and enables bidirectional inference. Through a top-down inference, the HGM can synthesize eye images consistent with the given eye gaze. Through a bottom-up inference, HGM can infer eye gaze effectively from a given eye image. Qualitative and quantitative evaluations on benchmark datasets demonstrate our model's effectiveness on both eye image synthesis and eye gaze estimation. In addition, the proposed model is not restricted to eye images only. It can be adapted to face images and any shape-appearance related fields.", "Eye gaze estimation has been increasingly demanded by recent intelligent systems to accomplish a range of interaction-related tasks, by using simple eye images as input. However, learning the highly complex regression between eye images and gaze directions is nontrivial, and thus the problem is yet to be solved efficiently. In this paper, we propose the Asymmetric Regression-Evaluation Network (ARE-Net), and try to improve the gaze estimation performance to its full extent. At the core of our method is the notion of “two eye asymmetry” observed during gaze estimation for the left and right eyes. Inspired by this, we design the multi-stream ARE-Net; one asymmetric regression network (AR-Net) predicts 3D gaze directions for both eyes with a novel asymmetric strategy, and the evaluation network (E-Net) adaptively adjusts the strategy by evaluating the two eyes in terms of their performance during optimization. By training the whole network, our method achieves promising results and surpasses the state-of-the-art methods on multiple public datasets.", "", "", "As an indicator of attention, gaze is an important cue for human behavior and social interaction analysis. 
Recent deep learning methods for gaze estimation rely on plain regression of the gaze from images without accounting for potential mismatches in eye image cropping and normalization. This may impact the estimation of the implicit relation between visual cues and the gaze direction when dealing with low resolution images or when training with a limited amount of data. In this paper, we propose a deep multitask framework for gaze estimation, with the following contributions. (i) we proposed a multitask framework which relies on both synthetic data and real data for end-to-end training. During training, each dataset provides the label of only one task but the two tasks are combined in a constrained way. (ii) we introduce a Constrained Landmark-Gaze Model (CLGM) modeling the joint variation of eye landmark locations (including the iris center) and gaze directions. By relating explicitly visual information (landmarks) to the more abstract gaze values, we demonstrate that the estimator is more accurate and easier to learn. (iii) by decomposing our deep network into a network inferring jointly the parameters of the CLGM model and the scale and translation parameters of eye regions on one hand, and a CLGM based decoder deterministically inferring landmark positions and gaze from these parameters and head pose on the other hand, our framework decouples gaze estimation from irrelevant geometric variations in the eye image (scale, translation), resulting in a more robust model. Thorough experiments on public datasets demonstrate that our method achieves competitive results, improving over state-of-the-art results in challenging free head pose gaze estimation tasks and on eye landmark localization (iris location) ones.", "Estimating human gaze from natural eye images only is a challenging task. Gaze direction can be defined by the pupil- and the eyeball center where the latter is unobservable in 2D images. 
Hence, achieving highly accurate gaze estimates is an ill-posed problem. In this paper, we introduce a novel deep neural network architecture specifically designed for the task of gaze estimation from single eye input. Instead of directly regressing two angles for the pitch and yaw of the eyeball, we regress to an intermediate pictorial representation which in turn simplifies the task of 3D gaze direction estimation. Our quantitative and qualitative results show that our approach achieves higher accuracies than the state-of-the-art and is robust to variation in gaze, head pose and image quality.", "Collecting a large dataset with high quality annotations is expensive and time-consuming. Recently, (2017) propose Simulated+Unsupervised (S+U) learning: It first learns a mapping from synthetic data to real data, translates a large amount of labeled synthetic data to the ones that resemble real data, and then trains a learning model on the translated data. (2017) propose a similar framework that jointly trains a translation mapping and a learning model. While these algorithms are shown to achieve the state-of-the-art performances on various tasks, it may have a room for improvement, as they do not fully leverage flexibility of data simulation process and consider only the forward (synthetic to real) mapping. While these algorithms are shown to achieve the state-of-the-art performances on various tasks, it may have a room for improvement, as it does not fully leverage flexibility of data simulation process and consider only the forward (synthetic to real) mapping. Inspired by this limitation, we propose a new S+U learning algorithm, which fully leverage the flexibility of data simulators and bidirectional mappings between synthetic data and real data. 
We show that our approach achieves the improved performance on the gaze estimation task, outperforming (, 2017).", "", "", "Free-head 3D gaze tracking outputs both the eye location and the gaze vector in 3D space, and it has wide applications in scenarios such as driver monitoring, advertisement analysis and surveillance. A reliable and low-cost monocular solution is critical for pervasive usage in these areas. Noticing that a gaze vector is a composition of head pose and eyeball movement in a geometrically deterministic way, we propose a novel gaze transform layer to connect separate head pose and eyeball movement models. The proposed decomposition does not suffer from head-gaze correlation overfitting and makes it possible to use datasets existing for other tasks. To add stronger supervision for better network training, we propose a two-step training strategy, which first trains sub-tasks with rough labels and then jointly trains with accurate gaze labels. To enable good cross-subject performance under various conditions, we collect a large dataset which has full coverage of head poses and eyeball movements, contains 200 subjects, and has diverse illumination conditions. Our deep solution achieves state-of-the-art gaze tracking accuracy, reaching 5.6° cross-subject prediction error using a small network running at 1000 fps on a single CPU (excluding face alignment time) and 4.3° cross-subject error with a deeper network.", "Eye gaze is an important non-verbal cue for human affect analysis. Recent gaze estimation work indicated that information from the full face region can benefit performance. Pushing this idea further, we propose an appearance-based method that, in contrast to a long-standing line of work in computer vision, only takes the full face image as input. Our method encodes the face image using a convolutional neural network with spatial weights applied on the feature maps to flexibly suppress or enhance information in different facial regions. 
Through extensive evaluation, we show that our full-face method significantly outperforms the state of the art for both 2D and 3D gaze estimation, achieving improvements of up to 14.3% on MPIIGaze and 27.7% on EYEDIAP for person-independent 3D gaze estimation. We further show that this improvement is consistent across different illumination conditions and gaze directions and particularly pronounced for the most challenging extreme head poses.", "Appearance-based gaze estimation is promising for unconstrained real-world settings, but the significant variability in head pose and user-camera distance poses significant challenges for training generic gaze estimators. Data normalization was proposed to cancel out this geometric variability by mapping input images and gaze labels to a normalized space. Although used successfully in prior works, the role and importance of data normalization remains unclear. To fill this gap, we study data normalization for the first time using principled evaluations on both simulated and real data. We propose a modification to the current data normalization formulation by removing the scaling factor and show that our new formulation performs significantly better (between 9.5% and 32.7%) in the different evaluation settings. Using images synthesized from a 3D face model, we demonstrate the benefit of data normalization for the efficiency of the model training. Experiments on real-world images confirm the advantages of data normalization in terms of gaze estimation performance.", "", "From scientific research to commercial applications, eye tracking is an important tool across many domains. Despite its range of applications, eye tracking has yet to become a pervasive technology. We believe that we can put the power of eye tracking in everyone's palm by building eye tracking software that works on commodity hardware such as mobile phones and tablets, without the need for additional sensors or devices. 
We tackle this problem by introducing GazeCapture, the first large-scale dataset for eye tracking, containing data from over 1450 people consisting of almost 2.5M frames. Using GazeCapture, we train iTracker, a convolutional neural network for eye tracking, which achieves a significant reduction in error over previous approaches while running in real time (10-15fps) on a modern mobile device. Our model achieves a prediction error of 1.71cm and 2.53cm without calibration on mobile phones and tablets respectively. With calibration, this is reduced to 1.34cm and 2.12cm. Further, we demonstrate that the features learned by iTracker generalize well to other datasets, achieving state-of-the-art results. The code, data, and models are available at this http URL" ] }
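The leave-one-person-out protocol behind the within-MPIIGaze numbers quoted above can be sketched generically: for each subject, train on everyone else and report mean angular error on the held-out subject, then average over subjects. The linear stand-in model and synthetic data below are assumptions for illustration only, not any cited network.

```python
# Hedged sketch of leave-one-person-out (LOPO) gaze evaluation.
import numpy as np

def angular_error_deg(pred, true):
    """Angle between predicted and true 3D gaze vectors, in degrees."""
    p = pred / np.linalg.norm(pred, axis=1, keepdims=True)
    t = true / np.linalg.norm(true, axis=1, keepdims=True)
    cos = np.clip(np.sum(p * t, axis=1), -1.0, 1.0)
    return np.degrees(np.arccos(cos))

def leave_one_person_out(features, gazes, person_ids):
    errors = []
    for pid in np.unique(person_ids):
        train, test = person_ids != pid, person_ids == pid
        # Stand-in model: least-squares linear regressor per fold.
        W, *_ = np.linalg.lstsq(features[train], gazes[train], rcond=None)
        errors.append(angular_error_deg(features[test] @ W, gazes[test]).mean())
    return float(np.mean(errors))

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 8))          # toy appearance features
W_true = rng.normal(size=(8, 3))
y = X @ W_true + 0.01 * rng.normal(size=(300, 3))  # near-linear gaze labels
pids = np.repeat(np.arange(15), 20)    # 15 subjects, 20 samples each
print(leave_one_person_out(X, y, pids))
```

Swapping the least-squares line for a trained CNN gives the evaluation loop used for the person-independent benchmarks above.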
1905.01941
2942809297
Inter-personal anatomical differences limit the accuracy of person-independent gaze estimation networks. Yet there is a need to lower gaze errors further to enable applications requiring higher quality. Further gains can be achieved by personalizing gaze networks, ideally with few calibration samples. However, over-parameterized neural networks are not amenable to learning from few examples as they can quickly over-fit. We embrace these challenges and propose a novel framework for Few-shot Adaptive GaZE Estimation (FAZE) for learning person-specific gaze networks with very few (less than 9) calibration samples. FAZE learns a rotation-aware latent representation of gaze via a disentangling encoder-decoder architecture along with a highly adaptable gaze estimator trained using meta-learning. It is capable of adapting to any new person to yield significant performance gains with as few as 3 samples, yielding state-of-the-art performance of 3.18-deg on GazeCapture, a 19% improvement over prior art.
However, person-independent gaze errors are still insufficient for many applications @cite_7 @cite_24 @cite_23 @cite_25 . While significant gains can be obtained by training person-specific models, doing so requires many thousands of training images per subject @cite_38 . On the other hand, CNNs are prone to over-fitting when trained with very few ( @math ) samples. To address this issue, existing approaches try to adapt person-independent CNN image-based features @cite_12 @cite_56 or points-of-regard (PoR) @cite_37 to person-specific ones via heuristic functions. Some methods also train a Siamese network on pairs of images of the same subject @cite_16 .
{ "cite_N": [ "@cite_38", "@cite_37", "@cite_7", "@cite_56", "@cite_24", "@cite_23", "@cite_16", "@cite_25", "@cite_12" ], "mid": [ "2953126257", "2972260410", "", "", "2012895915", "2523950919", "2887595520", "2150856527", "2952055246" ], "abstract": [ "Learning-based methods are believed to work well for unconstrained gaze estimation, i.e. gaze estimation from a monocular RGB camera without assumptions regarding user, environment, or camera. However, current gaze datasets were collected under laboratory conditions and methods were not evaluated across multiple datasets. Our work makes three contributions towards addressing these limitations. First, we present the MPIIGaze that contains 213,659 full face images and corresponding ground-truth gaze positions collected from 15 users during everyday laptop use over several months. An experience sampling approach ensured continuous gaze and head poses and realistic variation in eye appearance and illumination. To facilitate cross-dataset evaluations, 37,667 images were manually annotated with eye corners, mouth corners, and pupil centres. Second, we present an extensive evaluation of state-of-the-art gaze estimation methods on three current datasets, including MPIIGaze. We study key challenges including target gaze range, illumination conditions, and facial appearance variation. We show that image resolution and the use of both eyes affect gaze estimation performance while head pose and pupil centre information are less informative. Finally, we propose GazeNet, the first deep appearance-based gaze estimation method. GazeNet improves the state of the art by 22 percent (from a mean error of 13.9 degrees to 10.8 degrees) for the most challenging cross-dataset evaluation.", "Appearance-based gaze estimation methods that only require an off-the-shelf camera have significantly improved but they are still not yet widely used in the human-computer interaction (HCI) community. 
This is partly because it remains unclear how they perform compared to model-based approaches as well as dominant, special-purpose eye tracking equipment. To address this limitation, we evaluate the performance of state-of-the-art appearance-based gaze estimation for interaction scenarios with and without personal calibration, indoors and outdoors, for different sensing distances, as well as for users with and without glasses. We discuss the obtained findings and their implications for the most important gaze-based applications, namely explicit eye input, attentive user interfaces, gaze-based user modelling, and passive eye monitoring. To democratise the use of appearance-based gaze estimation and interaction in HCI, we finally present OpenGaze (this http URL), the first software toolkit for appearance-based gaze estimation and interaction.", "", "", "We have developed a system for remedial reading instruction that uses visually controlled auditory prompting to help the user with recognition and pronunciation of words. Our underlying hypothesis is that the relatively unobtrusive assistance rendered by such a system will be more effective than previous computer aided approaches. We present a description of the design and implementation of our system and discuss a controlled study that we undertook to evaluate the usability of the Reading Assistant.", "Stress sensing is valuable in many applications, including online learning crowdsourcing and other daily human-computer interactions. Traditional affective computing techniques investigate affect inference based on different individual modalities, such as facial expression, vocal tones, and physiological signals or the aggregation of signals of these independent modalities, without explicitly exploiting their inter-connections. In contrast, this paper focuses on exploring the impact of mental stress on the coordination between two human nervous systems, the somatic and autonomic nervous systems. 
Specifically, we present the analysis of the subtle but indicative pattern of human gaze behaviors surrounding a mouse-click event, i.e. the gaze-click pattern. Our evaluation shows that mental stress affects the gaze-click pattern, and this influence has largely been ignored in previous work. This paper, therefore, further proposes a non-intrusive approach to inferring human stress level based on the gaze-click pattern, using only data collected from the common computer webcam and mouse. We conducted a human study on solving math questions under different stress levels to explore the validity of stress recognition based on this coordination pattern. Experimental results show the effectiveness of our technique and the generalizability of the proposed features for user-independent modeling. Our results suggest that it may be possible to detect stress non-intrusively in the wild, without the need for specialized equipment.", "", "The \"Camera Mouse\" system has been developed to provide computer access for people with severe disabilities. The system tracks the computer user's movements with a video camera and translates them into the movements of the mouse pointer on the screen. Body features such as the tip of the user's nose or finger can be tracked. The visual tracking algorithm is based on cropping an online template of the tracked feature from the current image frame and testing where this template correlates in the subsequent frame. The location of the highest correlation is interpreted as the new location of the feature in the subsequent frame. Various body features are examined for tracking robustness and user convenience. A group of 20 people without disabilities tested the Camera Mouse and quickly learned how to use it to spell out messages or play games. Twelve people with severe cerebral palsy or traumatic brain injury have tried the system, nine of whom have shown success. 
They interacted with their environment by spelling out messages and exploring the Internet.", "From scientific research to commercial applications, eye tracking is an important tool across many domains. Despite its range of applications, eye tracking has yet to become a pervasive technology. We believe that we can put the power of eye tracking in everyone's palm by building eye tracking software that works on commodity hardware such as mobile phones and tablets, without the need for additional sensors or devices. We tackle this problem by introducing GazeCapture, the first large-scale dataset for eye tracking, containing data from over 1450 people consisting of almost 2.5M frames. Using GazeCapture, we train iTracker, a convolutional neural network for eye tracking, which achieves a significant reduction in error over previous approaches while running in real time (10-15fps) on a modern mobile device. Our model achieves a prediction error of 1.71cm and 2.53cm without calibration on mobile phones and tablets respectively. With calibration, this is reduced to 1.34cm and 2.12cm. Further, we demonstrate that the features learned by iTracker generalize well to other datasets, achieving state-of-the-art results. The code, data, and models are available at this http URL" ] }
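A minimal version of the calibration-based personalization discussed above keeps a person-independent predictor fixed and fits only a per-person bias from a handful of samples, which avoids over-fitting a full network. This is a deliberately simple stand-in for the heuristic adaptation functions in the cited works; all names and data are illustrative.

```python
# Hedged sketch: few-shot personalization as bias correction on top of
# a frozen generic gaze predictor.
import numpy as np

def personalize_bias(generic_predict, calib_x, calib_y):
    """Estimate a constant per-person offset from calibration pairs and
    return an adapted predictor."""
    bias = np.mean(calib_y - generic_predict(calib_x), axis=0)
    return lambda x: generic_predict(x) + bias

def generic(x):
    # Toy person-independent predictor (identity mapping to 2D gaze).
    return x @ np.eye(2)

rng = np.random.default_rng(2)
person_offset = np.array([0.05, -0.03])   # unknown anatomical offset
X_cal = rng.normal(size=(3, 2))           # only 3 calibration samples
y_cal = generic(X_cal) + person_offset
adapted = personalize_bias(generic, X_cal, y_cal)
X_test = rng.normal(size=(5, 2))
err = np.abs(adapted(X_test) - (generic(X_test) + person_offset)).max()
print(err)  # ~0: with noise-free labels the offset is recovered exactly
```

Estimating only a 2-parameter offset is what makes 3-sample calibration well-posed here, in contrast to fine-tuning an over-parameterized network.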
1905.01941
2942809297
Inter-personal anatomical differences limit the accuracy of person-independent gaze estimation networks. Yet there is a need to lower gaze errors further to enable applications requiring higher quality. Further gains can be achieved by personalizing gaze networks, ideally with few calibration samples. However, over-parameterized neural networks are not amenable to learning from few examples as they can quickly over-fit. We embrace these challenges and propose a novel framework for Few-shot Adaptive GaZE Estimation (FAZE) for learning person-specific gaze networks with very few (less than 9) calibration samples. FAZE learns a rotation-aware latent representation of gaze via a disentangling encoder-decoder architecture along with a highly adaptable gaze estimator trained using meta-learning. It is capable of adapting to any new person to yield significant performance gains with as few as 3 samples, yielding state-of-the-art performance of 3.18-deg on GazeCapture, a 19% improvement over prior art.
Few-shot Learning. Few-shot learning aims to learn a new task from very few examples @cite_15 . This is a non-trivial problem for highly over-parameterized deep networks, as it leads to over-fitting. Recently, several promising meta-learning @cite_57 @cite_51 @cite_32 @cite_52 @cite_40 @cite_29 @cite_43 techniques have been proposed that learn unique but similar tasks in a few-shot manner using CNNs. They have been shown to be successful for various few-shot visual learning tasks including object recognition @cite_62 , segmentation @cite_61 and online adaptation @cite_59 . Inspired by this success, we use meta-learning to learn how to learn person-specific gaze networks from few examples. To the best of our knowledge, we are the first to cast person-specific gaze estimation as a multi-task problem in the context of meta-learning, where each subject is seen as a new task for the meta-learner. We observe that person-specific factors amount to only slight, but important, variations across people. Our insight is that meta-learning lends itself well to few-shot gaze personalization and indeed leads to performance improvements.
{ "cite_N": [ "@cite_61", "@cite_62", "@cite_29", "@cite_52", "@cite_32", "@cite_57", "@cite_43", "@cite_40", "@cite_59", "@cite_15", "@cite_51" ], "mid": [ "2808912095", "", "", "2472819217", "", "2950537964", "2753160622", "", "2783173047", "", "2432717477" ], "abstract": [ "Learning-based methods for visual segmentation have made progress on particular types of segmentation tasks, but are limited by the necessary supervision, the narrow definitions of fixed tasks, and the lack of control during inference for correcting errors. To remedy the rigidity and annotation burden of standard approaches, we address the problem of few-shot segmentation: given few image and few pixel supervision, segment any images accordingly. We propose guided networks, which extract a latent task representation from any amount of supervision, and optimize our architecture end-to-end for fast, accurate few-shot segmentation. Our method can switch tasks without further optimization and quickly update when given more guidance. We report the first results for segmentation from one pixel per concept and show real-time interactive video segmentation. Our unified approach propagates pixel annotations across space for interactive segmentation, across time for video segmentation, and across scenes for semantic segmentation. Our guided segmentor is state-of-the-art in accuracy for the amount of annotation and time. See this http URL for code, models, and more details.", "", "", "Despite recent breakthroughs in the applications of deep neural networks, one setting that presents a persistent challenge is that of \"one-shot learning.\" Traditional gradient-based networks require a lot of data to learn, often through extensive iterative training. When new data is encountered, the models must inefficiently relearn their parameters to adequately incorporate the new information without catastrophic interference. 
Architectures with augmented memory capacities, such as Neural Turing Machines (NTMs), offer the ability to quickly encode and retrieve new information, and hence can potentially obviate the downsides of conventional models. Here, we demonstrate the ability of a memory-augmented neural network to rapidly assimilate new data, and leverage this data to make accurate predictions after only a few samples. We also introduce a new method for accessing an external memory that focuses on memory content, unlike previous methods that additionally use memory location-based focusing mechanisms.", "", "We propose prototypical networks for the problem of few-shot classification, where a classifier must generalize to new classes not seen in the training set, given only a small number of examples of each new class. Prototypical networks learn a metric space in which classification can be performed by computing distances to prototype representations of each class. Compared to recent approaches for few-shot learning, they reflect a simpler inductive bias that is beneficial in this limited-data regime, and achieve excellent results. We provide an analysis showing that some simple design decisions can yield substantial improvements over recent approaches involving complicated architectural choices and meta-learning. We further extend prototypical networks to zero-shot learning and achieve state-of-the-art results on the CU-Birds dataset.", "Though deep neural networks have shown great success in the large data domain, they generally perform poorly on few-shot learning tasks, where a model has to quickly generalize after seeing very few examples from each class. The general belief is that gradient-based optimization in high capacity models requires many iterative steps over many examples to perform well. Here, we propose an LSTM-based meta-learner model to learn the exact optimization algorithm used to train another learner neural network in the few-shot regime. 
The parametrization of our model allows it to learn appropriate parameter updates specifically for the scenario where a set amount of updates will be made, while also learning a general initialization of the learner network that allows for quick convergence of training. We demonstrate that this meta-learning model is competitive with deep metric-learning techniques for few-shot learning.", "", "This paper improves state-of-the-art on-line trackers that use deep learning. Such trackers train a deep network to pick a specified object out from the background in an initial frame (initialization) and then keep training the model as tracking proceeds (updates). Our core contribution is a meta-learning-based method to adjust deep networks for tracking using off-line training. First, we learn initial parameters and per-parameter coefficients for fast online adaptation. Second, we use training signal from future frames for robustness to target appearance variations and environment changes. The resulting networks train significantly faster during the initialization, while improving robustness and accuracy. We demonstrate this approach on top of the current highest accuracy tracking approach, tracking-by-detection based MDNet and close competitor, the correlation-based CREST. Experimental results on both standard benchmarks, OTB and VOT2016, show improvements in speed, accuracy, and robustness on both trackers.", "", "Learning from a few examples remains a key challenge in machine learning. Despite recent advances in important domains such as vision and language, the standard supervised deep learning paradigm does not offer a satisfactory solution for learning new concepts rapidly from little data. In this work, we employ ideas from metric learning based on deep neural features and from recent advances that augment neural networks with external memories. 
Our framework learns a network that maps a small labelled support set and an unlabelled example to its label, obviating the need for fine-tuning to adapt to new class types. We then define one-shot learning problems on vision (using Omniglot, ImageNet) and language tasks. Our algorithm improves one-shot accuracy on ImageNet from 87.6% to 93.2% and from 88.0% to 93.8% on Omniglot compared to competing approaches. We also demonstrate the usefulness of the same model on language modeling by introducing a one-shot task on the Penn Treebank." ] }
1905.01790
2969163779
Nowadays, skeleton information in videos plays an important role in human-centric video analysis, but effectively coding such massive skeleton information has never been addressed in previous work. In this paper, we make the first attempt to solve this problem by proposing a multi-modal skeleton coding tool containing three different coding schemes, namely, a spatial differential-coding scheme, a motion-vector-based differential-coding scheme and an inter prediction scheme, thus utilizing both spatial and temporal redundancy to losslessly compress skeleton data. More importantly, these schemes are switched properly for different types of skeletons in video frames, hence achieving further improvement of the compression rate. Experimental results show that our approach leads to 74.4% and 54.7% size reduction on our surveillance sequences and overall test sequences respectively, which demonstrates the effectiveness of our skeleton coding tool.
Skeleton information in videos has recently become increasingly important in many applications such as event detection and video recognition. For example, previous works have shown how action recognition can benefit from skeleton-based video modeling @cite_8 @cite_7 @cite_4 @cite_11 . A person's pose is described by multiple skeleton key joints, and the skeleton information in videos represents the dynamic characteristics of body postures, which makes it widely used in human action recognition and other video analysis tasks.
{ "cite_N": [ "@cite_11", "@cite_4", "@cite_7", "@cite_8" ], "mid": [ "2799409595", "2798644314", "2604321021", "2606294640" ], "abstract": [ "Some of the main challenges in skeleton-based action recognition systems are redundant and noisy pose transformations. Earlier works in skeleton-based action recognition explored different approaches for filtering linear noise transformations, but neglect to address potential nonlinear transformations. In this paper, we present an unsupervised learning approach for estimating nonlinear noise transformations in pose estimates. Our approach starts by decoupling linear and nonlinear noise transformations. While the linear transformations are modelled explicitly the nonlinear transformations are learned from data. Subsequently, we use an autoencoder with L2-norm reconstruction error and show that it indeed does capture nonlinear noise transformations, and recover a denoised pose estimate which in turn improves performance significantly. We validate our approach on a publicly available dataset, NW-UCLA.", "In this paper, we propose a deep progressive reinforcement learning (DPRL) method for action recognition in skeleton-based videos, which aims to distil the most informative frames and discard ambiguous frames in sequences for recognizing actions. Since the choices of selecting representative frames are multitudinous for each video, we model the frame selection as a progressive process through deep reinforcement learning, during which we progressively adjust the chosen frames by taking two important factors into account: (1) the quality of the selected frames and (2) the relationship between the selected frames to the whole video. Moreover, considering the topology of human body inherently lies in a graph-based structure, where the vertices and edges represent the hinged joints and rigid bones respectively, we employ the graph-based convolutional neural network to capture the dependency between the joints for action recognition. 
Our approach achieves very competitive performance on three widely used benchmarks.", "This paper presents a new method for 3D action recognition with skeleton sequences (i.e., 3D trajectories of human skeleton joints). The proposed method first transforms each skeleton sequence into three clips each consisting of several frames for spatial temporal feature learning using deep neural networks. Each clip is generated from one channel of the cylindrical coordinates of the skeleton sequence. Each frame of the generated clips represents the temporal information of the entire skeleton sequence, and incorporates one particular spatial relationship between the joints. The entire clips include multiple frames with different spatial relationships, which provide useful spatial structural information of the human skeleton. We propose to use deep convolutional neural networks to learn long-term temporal information of the skeleton sequence from the frames of the generated clips, and then use a Multi-Task Learning Network (MTLN) to jointly process all frames of the clips in parallel to incorporate spatial structural information for action recognition. Experimental results clearly show the effectiveness of the proposed new representation and feature learning method for 3D action recognition.", "Recently, skeleton based action recognition gains more popularity due to cost-effective depth sensors coupled with real-time skeleton estimation algorithms. Traditional approaches based on handcrafted features are limited to represent the complexity of motion patterns. Recent methods that use Recurrent Neural Networks (RNN) to handle raw skeletons only focus on the contextual dependency in the temporal domain and neglect the spatial configurations of articulated skeletons. In this paper, we propose a novel two-stream RNN architecture to model both temporal dynamics and spatial configurations for skeleton based action recognition. 
We explore two different structures for the temporal stream: stacked RNN and hierarchical RNN. Hierarchical RNN is designed according to human body kinematics. We also propose two effective methods to model the spatial structure by converting the spatial graph into a sequence of joints. To improve generalization of our model, we further exploit 3D transformation based data augmentation techniques including rotation and scaling transformation to transform the 3D coordinates of skeletons during training. Experiments on 3D action recognition benchmark datasets show that our method brings a considerable improvement for a variety of actions, i.e., generic actions, interaction activities and gestures." ] }
1905.01790
2969163779
Nowadays, skeleton information in videos plays an important role in human-centric video analysis, but effectively coding such massive skeleton information has never been addressed in previous work. In this paper, we make the first attempt to solve this problem by proposing a multi-modal skeleton coding tool containing three different coding schemes, namely, a spatial differential-coding scheme, a motion-vector-based differential-coding scheme and an inter prediction scheme, thus utilizing both spatial and temporal redundancy to losslessly compress skeleton data. More importantly, these schemes are switched properly for different types of skeletons in video frames, hence achieving further improvement of the compression rate. Experimental results show that our approach leads to 74.4% and 54.7% size reduction on our surveillance sequences and overall test sequences respectively, which demonstrates the effectiveness of our skeleton coding tool.
Since video analysis is performed directly on extracted features, shifting feature extraction into a camera-integrated module can reduce the load on the analysis server and is highly desirable. Therefore, several feature coding methods that aim to compress and transmit different kinds of extracted video features have been proposed recently. Duan et al. @cite_12 describe compact descriptors for video analysis, where handcrafted and deep features are compressed and transmitted in a standardized bitstream. Chen et al. @cite_3 introduce a Region-of-Interest (ROI) location coding tool in which the ROI location information itself is coded in the video bitstream.
{ "cite_N": [ "@cite_3", "@cite_12" ], "mid": [ "2138342918", "2964191079" ], "abstract": [ "Region-of-Interest (ROI) location information in videos has many practical usages in video coding field, such as video content analysis and user experience improvement. Although ROI-based coding has been studied widely by many researchers to improve coding efficiency for video contents, the ROI location information itself is seldom coded in video bitstream. In this paper, we will introduce our proposed ROI location coding tool which has been adopted in surveillance profile of AVS2 video coding standard (surveillance profile). Our tool includes three schemes: direct-coding scheme, differential- coding scheme, and reconstructed-coding scheme. We will illustrate the details of these schemes, and perform analysis of their advantages and disadvantages, respectively.", "This paper provides an overview of the on-going compact descriptors for video analysis standard (CDVA) from the ISO IEC moving pictures experts group (MPEG). MPEG-CDVA targets at defining a standardized bitstream syntax to enable interoperability in the context of video analysis applications. During the developments of MPEG-CDVA, a series of techniques aiming to reduce the descriptor size and improve the video representation ability have been proposed. This paper describes the new standard that is being developed and reports the performance of these key technical contributions." ] }
1905.01695
2952452366
The increasing demands for computing performance have been a reality regardless of the requirements for smaller and more energy efficient devices. Throughout the years, the strategy adopted by industry was to increase the robustness of a single processor by increasing its clock frequency and mounting more transistors so more calculations could be executed. However, it is known that the physical limits of such processors are being reached, and one way to fulfill such increasing computing demands has been to adopt a strategy based on heterogeneous computing, i.e., using a heterogeneous platform containing more than one type of processor. This way, different types of tasks can be executed by processors that are specialized in them. Heterogeneous computing, however, poses a number of challenges to software engineering, especially in the architecture and deployment phases. In this paper, we conduct an empirical study that aims at discovering the state-of-the-art in software architecture for heterogeneous computing, with focus on deployment. We conduct a systematic mapping study that retrieved 28 studies, which were critically assessed to obtain an overview of the research field. We identified gaps and trends that can be used by both researchers and practitioners as guides to further investigate the topic.
In @cite_1 , the authors thoroughly investigated heterogeneous computing techniques through a survey covering both software and hardware aspects. Their work includes approaches for workload partitioning and their use in meeting system performance and energy consumption requirements. The study reports an in-depth categorization of techniques used throughout the development of heterogeneous computing systems, such as programming languages, development frameworks and tools. However, their survey is limited to CPU-GPU environments. As the findings of our study show, CPU-GPU platforms represent the majority of heterogeneous computing platforms today, and a variety of approaches can be used when developing systems to be deployed on them. On the other hand, we believe that other types of processors, such as FPGAs and DSPs, are also gaining importance in industry and will soon become more common in heterogeneous computing. FPGAs, for instance, are capable of high computing power despite the present difficulties in developing software to be executed on them. In the future, we believe that more tools and approaches will become available to decrease the upfront cost of implementing systems for this type of processor.
{ "cite_N": [ "@cite_1" ], "mid": [ "1864199185" ], "abstract": [ "As both CPUs and GPUs become employed in a wide range of applications, it has been acknowledged that both of these Processing Units (PUs) have their unique features and strengths and hence, CPU-GPU collaboration is inevitable to achieve high-performance computing. This has motivated a significant amount of research on heterogeneous computing techniques, along with the design of CPU-GPU fused chips and petascale heterogeneous supercomputers. In this article, we survey Heterogeneous Computing Techniques (HCTs) such as workload partitioning that enable utilizing both CPUs and GPUs to improve performance and or energy efficiency. We review heterogeneous computing approaches at runtime, algorithm, programming, compiler, and application levels. Further, we review both discrete and fused CPU-GPU systems and discuss benchmark suites designed for evaluating Heterogeneous Computing Systems (HCSs). We believe that this article will provide insights into the workings and scope of applications of HCTs to researchers and motivate them to further harness the computational powers of CPUs and GPUs to achieve the goal of exascale performance." ] }
1905.01695
2952452366
The increasing demands for computing performance have been a reality regardless of the requirements for smaller and more energy efficient devices. Throughout the years, the strategy adopted by industry was to increase the robustness of a single processor by increasing its clock frequency and mounting more transistors so more calculations could be executed. However, it is known that the physical limits of such processors are being reached, and one way to fulfill such increasing computing demands has been to adopt a strategy based on heterogeneous computing, i.e., using a heterogeneous platform containing more than one type of processor. This way, different types of tasks can be executed by processors that are specialized in them. Heterogeneous computing, however, poses a number of challenges to software engineering, especially in the architecture and deployment phases. In this paper, we conduct an empirical study that aims at discovering the state-of-the-art in software architecture for heterogeneous computing, with focus on deployment. We conduct a systematic mapping study that retrieved 28 studies, which were critically assessed to obtain an overview of the research field. We identified gaps and trends that can be used by both researchers and practitioners as guides to further investigate the topic.
Further, in @cite_13 , the authors conducted a study that aimed at describing and analyzing the state-of-the-art in heterogeneous computing. They investigated the hardware, software tools and algorithms used to develop systems that include processors of different types, such as CPUs, GPUs and FPGAs. The authors extensively describe the concerns related to developing systems for heterogeneous platforms, including programming languages for CPUs, GPUs and FPGAs. However, in their work the term mostly referred to the hardware characteristics of each processor type and their impact on developing systems. Our work differs from theirs in that we focus on software architectures and their implications for deployment on heterogeneous platforms. We restrict our scope to the software engineering process and how software architecture design supports deployment on heterogeneous platforms.
{ "cite_N": [ "@cite_13" ], "mid": [ "2148802605" ], "abstract": [ "Node level heterogeneous architectures have become attractive during the last decade for several reasons: compared to traditional symmetric CPUs, they offer high peak performance and are energy and or cost efficient. With the increase of fine-grained parallelism in high-performance computing, as well as the introduction of parallelism in workstations, there is an acute need for a good overview and understanding of these architectures. We give an overview of the state-of-the-art in heterogeneous computing, focusing on three commonly found architectures: the Cell Broadband Engine Architecture, graphics processing units (GPUs), and field programmable gate arrays (FPGAs). We present a review of hardware, available software tools, and an overview of state-of-the-art techniques and algorithms. Furthermore, we present a qualitative and quantitative comparison of the architectures, and give our view on the future of heterogeneous computing." ] }
1905.01639
2943142229
Video inpainting aims to fill spatio-temporal holes with plausible content in a video. Despite tremendous progress of deep neural networks for image inpainting, it is challenging to extend these methods to the video domain due to the additional time dimension. In this work, we propose a novel deep network architecture for fast video inpainting. Built upon an image-based encoder-decoder model, our framework is designed to collect and refine information from neighbor frames and synthesize still-unknown regions. At the same time, the output is enforced to be temporally consistent by a recurrent feedback and a temporal memory module. Compared with the state-of-the-art image inpainting algorithm, our method produces videos that are much more semantically correct and temporally smooth. In contrast to the prior video completion method which relies on time-consuming optimization, our method runs in near real-time while generating competitive video results. Finally, we apply our framework to the video retargeting task and obtain visually pleasing results.
Significant progress has been made on image inpainting @cite_15 @cite_30 @cite_17 @cite_28 @cite_8 @cite_14 @cite_31 @cite_32 @cite_19 @cite_1 , to the point where commercial solutions are now available @cite_10 . However, video inpainting algorithms remain under-investigated. This is due to the additional time dimension, which introduces major challenges such as severe viewpoint changes, preserving temporal consistency, and high computational complexity. Most recent methods in the literature address these issues using either object-based or patch-based approaches.
{ "cite_N": [ "@cite_30", "@cite_31", "@cite_14", "@cite_8", "@cite_28", "@cite_1", "@cite_32", "@cite_19", "@cite_15", "@cite_10", "@cite_17" ], "mid": [ "", "2738588019", "2735970878", "2557414982", "2342877626", "2807633959", "2784790939", "2950820654", "", "1993120651", "" ], "abstract": [ "", "We present a novel approach for image completion that results in images that are both locally and globally consistent. With a fully-convolutional neural network, we can complete images of arbitrary resolutions by filling-in missing regions of any shape. To train this image completion network to be consistent, we use global and local context discriminators that are trained to distinguish real images from completed ones. The global discriminator looks at the entire image to assess if it is coherent as a whole, while the local discriminator looks only at a small area centered at the completed region to ensure the local consistency of the generated patches. The image completion network is then trained to fool the both context discriminator networks, which requires it to generate images that are indistinguishable from real ones with regard to overall consistency as well as in details. We show that our approach can be used to complete a wide variety of scenes. Furthermore, in contrast with the patch-based approaches such as PatchMatch, our approach can generate fragments that do not appear elsewhere in the image, which allows us to naturally complete the images of objects with familiar and highly specific structures, such as faces.", "Semantic image inpainting is a challenging task where large missing regions have to be filled based on the available visual data. Existing methods which extract information from only a single image generally produce unsatisfactory results due to the lack of high level context. In this paper, we propose a novel method for semantic image inpainting, which generates the missing content by conditioning on the available data. 
Given a trained generative model, we search for the closest encoding of the corrupted image in the latent image manifold using our context and prior losses. This encoding is then passed through the generative model to infer the missing content. In our method, inference is possible irrespective of how the missing content is structured, while the state-of-the-art learning based method requires specific information about the holes in the training phase. Experiments on three datasets show that our method successfully predicts information in large missing regions and achieves pixel-level photorealism, significantly outperforming the state-of-the-art methods.", "Recent advances in deep learning have shown exciting promise in filling large holes in natural images with semantically plausible and context aware details, impacting fundamental image manipulation tasks such as object removal. While these learning-based methods are significantly more effective in capturing high-level features than prior techniques, they can only handle very low-resolution inputs due to memory limitations and difficulty in training. Even for slightly larger images, the inpainted regions would appear blurry and unpleasant boundaries become visible. We propose a multi-scale neural patch synthesis approach based on joint optimization of image content and texture constraints, which not only preserves contextual structures but also produces high-frequency details by matching and adapting patches with the most similar mid-layer feature correlations of a deep classification network. We evaluate our method on the ImageNet and Paris Streetview datasets and achieved state-of-the-art inpainting accuracy. We show our approach produces sharper and more coherent results than prior methods, especially for high-resolution images.", "We present an unsupervised visual feature learning algorithm driven by context-based pixel prediction. 
By analogy with auto-encoders, we propose Context Encoders -- a convolutional neural network trained to generate the contents of an arbitrary image region conditioned on its surroundings. In order to succeed at this task, context encoders need to both understand the content of the entire image, as well as produce a plausible hypothesis for the missing part(s). When training context encoders, we have experimented with both a standard pixel-wise reconstruction loss, as well as a reconstruction plus an adversarial loss. The latter produces much sharper results because it can better handle multiple modes in the output. We found that a context encoder learns a representation that captures not just appearance but also the semantics of visual structures. We quantitatively demonstrate the effectiveness of our learned features for CNN pre-training on classification, detection, and segmentation tasks. Furthermore, context encoders can be used for semantic inpainting tasks, either stand-alone or as initialization for non-parametric methods.", "We present a novel deep learning based image inpainting system to complete images with free-form masks and inputs. The system is based on gated convolutions learned from millions of images without additional labelling efforts. The proposed gated convolution solves the issue of vanilla convolution that treats all input pixels as valid ones, generalizes partial convolution by providing a learnable dynamic feature selection mechanism for each channel at each spatial location across all layers. Moreover, as free-form masks may appear anywhere in images with any shapes, global and local GANs designed for a single rectangular mask are not suitable. To this end, we also present a novel GAN loss, named SN-PatchGAN, by applying spectral-normalized discriminators on dense image patches. It is simple in formulation, fast and stable in training. 
Results on automatic image inpainting and user-guided extension demonstrate that our system generates higher-quality and more flexible results than previous methods. We show that our system helps users quickly remove distracting objects, modify image layouts, clear watermarks, edit faces and interactively create novel objects in images. Furthermore, visualization of learned feature representations reveals the effectiveness of gated convolution and provides an interpretation of how the proposed neural network fills in missing regions. More high-resolution results and video materials are available at this http URL", "Recent deep learning based approaches have shown promising results on image inpainting for the challenging task of filling in large missing regions in an image. These methods can generate visually plausible image structures and textures, but often create distorted structures or blurry textures inconsistent with surrounding areas. This is mainly due to ineffectiveness of convolutional neural networks in explicitly borrowing or copying information from distant spatial locations. On the other hand, traditional texture and patch synthesis approaches are particularly suitable when it needs to borrow textures from the surrounding regions. Motivated by these observations, we propose a new deep generative model-based approach which can not only synthesize novel image structures but also explicitly utilize surrounding image features as references during network training to make better predictions. The model is a feed-forward, fully convolutional neural network which can process images with multiple holes at arbitrary locations and with variable sizes during the test time. Experiments on multiple datasets including faces, textures and natural images demonstrate that the proposed approach generates higher-quality inpainting results than existing ones. 
Code and trained models will be released.", "Existing deep learning based image inpainting methods use a standard convolutional network over the corrupted image, using convolutional filter responses conditioned on both valid pixels as well as the substitute values in the masked holes (typically the mean value). This often leads to artifacts such as color discrepancy and blurriness. Post-processing is usually used to reduce such artifacts, but are expensive and may fail. We propose the use of partial convolutions, where the convolution is masked and renormalized to be conditioned on only valid pixels. We further include a mechanism to automatically generate an updated mask for the next layer as part of the forward pass. Our model outperforms other methods for irregular masks. We show qualitative and quantitative comparisons with other methods to validate our approach.", "", "This paper presents interactive image editing tools using a new randomized algorithm for quickly finding approximate nearest-neighbor matches between image patches. Previous research in graphics and vision has leveraged such nearest-neighbor searches to provide a variety of high-level digital image editing tools. However, the cost of computing a field of such matches for an entire image has eluded previous efforts to provide interactive performance. Our algorithm offers substantial performance improvements over the previous state of the art (20-100x), enabling its use in interactive editing tools. The key insights driving the algorithm are that some good patch matches can be found via random sampling, and that natural coherence in the imagery allows us to propagate such matches quickly to surrounding areas. We offer theoretical analysis of the convergence properties of the algorithm, as well as empirical and practical evidence for its high quality and performance. 
This one simple algorithm forms the basis for a variety of tools -- image retargeting, completion and reshuffling -- that can be used together in the context of a high-level image editing application. Finally, we propose additional intuitive constraints on the synthesis process that offer the user a level of control unavailable in previous methods.", "" ] }
1905.01639
2943142229
Video inpainting aims to fill spatio-temporal holes with plausible content in a video. Despite tremendous progress of deep neural networks for image inpainting, it is challenging to extend these methods to the video domain due to the additional time dimension. In this work, we propose a novel deep network architecture for fast video inpainting. Built upon an image-based encoder-decoder model, our framework is designed to collect and refine information from neighbor frames and synthesize still-unknown regions. At the same time, the output is enforced to be temporally consistent by a recurrent feedback and a temporal memory module. Compared with the state-of-the-art image inpainting algorithm, our method produces videos that are much more semantically correct and temporally smooth. In contrast to the prior video completion method, which relies on time-consuming optimization, our method runs in near real-time while generating competitive video results. Finally, we apply our framework to the video retargeting task and obtain visually pleasing results.
In patch-based methods, patches from known regions are used to fill in a mask region. For example, Patwardhan et al. @cite_3 @cite_6 extend the well-known texture synthesis technique @cite_17 to video inpainting. However, these methods either assume static cameras @cite_3 or constrained camera motion @cite_6 , and are based on a greedy patch-filling process where early errors are inevitably propagated, yielding globally inconsistent outputs.
{ "cite_N": [ "@cite_6", "@cite_3", "@cite_17" ], "mid": [ "2131565286", "2084227286", "" ], "abstract": [ "A framework for inpainting missing parts of a video sequence recorded with a moving or stationary camera is presented in this work. The region to be inpainted is general: It may be still or moving, in the background or in the foreground, it may occlude one object and be occluded by some other object. The algorithm consists of a simple preprocessing stage and two steps of video inpainting. In the preprocessing stage, we roughly segment each frame into foreground and background. We use this segmentation to build three image mosaics that help to produce time consistent results and also improve the performance of the algorithm by reducing the search space. In the first video inpainting step, we reconstruct moving objects in the foreground that are \"occluded\" by the region to be inpainted. To this end, we fill the gap as much as possible by copying information from the moving foreground in other frames, using a priority-based scheme. In the second step, we inpaint the remaining hole with the background. To accomplish this, we first align the frames and directly copy when possible. The remaining pixels are filled in by extending spatial texture synthesis techniques to the spatiotemporal domain. The proposed framework has several advantages over state-of-the-art algorithms that deal with similar types of data and constraints. It permits some camera motion, is simple to implement, fast, does not require statistical models of background nor foreground, works well in the presence of rich and cluttered backgrounds, and the results show that there is no visible blurring or motion artifacts. A number of real examples taken with a consumer hand-held camera are shown supporting these findings", "We present a basic technique to fill-in missing parts of a video sequence taken from a static camera. Two important cases are considered. 
The first case is concerned with the removal of non-stationary objects that occlude stationary background. We use a priority based spatio-temporal synthesis scheme for inpainting the stationary background. The second and more difficult case involves filling-in moving objects when they are partially occluded. For this, we propose a priority scheme to first inpaint the occluded moving objects and then fill-in the remaining area with stationary background using the method proposed for the first case. We use as input an optical-flow based mask, which tells if an undamaged pixel is moving or is stationary. The moving object is inpainted by copying patches from undamaged frames, and this copying is independent of the background of the moving object in either frame. This work has applications in a variety of different areas, including video special effects and restoration and enhancement of damaged videos. The examples shown in the paper illustrate these ideas.", "" ] }
1905.01639
2943142229
Video inpainting aims to fill spatio-temporal holes with plausible content in a video. Despite tremendous progress of deep neural networks for image inpainting, it is challenging to extend these methods to the video domain due to the additional time dimension. In this work, we propose a novel deep network architecture for fast video inpainting. Built upon an image-based encoder-decoder model, our framework is designed to collect and refine information from neighbor frames and synthesize still-unknown regions. At the same time, the output is enforced to be temporally consistent by a recurrent feedback and a temporal memory module. Compared with the state-of-the-art image inpainting algorithm, our method produces videos that are much more semantically correct and temporally smooth. In contrast to the prior video completion method, which relies on time-consuming optimization, our method runs in near real-time while generating competitive video results. Finally, we apply our framework to the video retargeting task and obtain visually pleasing results.
To ensure global consistency, patch-based algorithms have been cast as a global optimization problem. Wexler et al. @cite_7 present a method that optimizes a global energy minimization problem for 3D spatio-temporal patches by alternating between patch search and reconstruction steps. Newson et al. @cite_33 extend this by developing a spatio-temporal version of PatchMatch @cite_10 to strengthen the temporal coherence and speed up the patch matching. Recently, Huang et al. @cite_25 modify the energy term of @cite_7 by adding an optical flow term to enforce temporal consistency. Although these methods are effective, their biggest limitations are high computational complexity and the absolute dependence upon pre-computed optical flow, which cannot be guaranteed to be accurate in complex sequences.
{ "cite_N": [ "@cite_10", "@cite_33", "@cite_7", "@cite_25" ], "mid": [ "1993120651", "2069237980", "", "2551763541" ], "abstract": [ "This paper presents interactive image editing tools using a new randomized algorithm for quickly finding approximate nearest-neighbor matches between image patches. Previous research in graphics and vision has leveraged such nearest-neighbor searches to provide a variety of high-level digital image editing tools. However, the cost of computing a field of such matches for an entire image has eluded previous efforts to provide interactive performance. Our algorithm offers substantial performance improvements over the previous state of the art (20-100x), enabling its use in interactive editing tools. The key insights driving the algorithm are that some good patch matches can be found via random sampling, and that natural coherence in the imagery allows us to propagate such matches quickly to surrounding areas. We offer theoretical analysis of the convergence properties of the algorithm, as well as empirical and practical evidence for its high quality and performance. This one simple algorithm forms the basis for a variety of tools -- image retargeting, completion and reshuffling -- that can be used together in the context of a high-level image editing application. Finally, we propose additional intuitive constraints on the synthesis process that offer the user a level of control unavailable in previous methods.", "We propose an automatic video inpainting algorithm which relies on the optimization of a global, patch-based functional. Our algorithm is able to deal with a variety of challenging situations which naturally arise in video inpainting, such as the correct reconstruction of dynamic textures, multiple moving objects, and moving background. Furthermore, we achieve this in an order of magnitude less execution time with respect to the state-of-the-art. We are also able to achieve good quality results on high-definition videos. 
Finally, we provide specific algorithmic details to make implementation of our algorithm as easy as possible. The resulting algorithm requires no segmentation or manual input other than the definition of the inpainting mask and can deal with a wider variety of situations than is handled by previous work.", "", "We present an automatic video completion algorithm that synthesizes missing regions in videos in a temporally coherent fashion. Our algorithm can handle dynamic scenes captured using a moving camera. State-of-the-art approaches have difficulties handling such videos because viewpoint changes cause image-space motion vectors in the missing and known regions to be inconsistent. We address this problem by jointly estimating optical flow and color in the missing regions. Using pixel-wise forward backward flow fields enables us to synthesize temporally coherent colors. We formulate the problem as a non-parametric patch-based optimization. We demonstrate our technique on numerous challenging videos." ] }
1905.01969
2969574947
The use of deep pre-trained bidirectional transformers has led to remarkable progress in a number of applications (Devlin et al., 2018). For tasks that make pairwise comparisons between sequences, matching a given input with a corresponding label, two approaches are common: Cross-encoders performing full self-attention over the pair and Bi-encoders encoding the pair separately. The former often performs better, but is too slow for practical use. In this work, we develop a new transformer architecture, the Poly-encoder, that learns global rather than token level self-attention features. We perform a detailed comparison of all three approaches, including what pre-training and fine-tuning strategies work best. We show our models achieve state-of-the-art results on three existing tasks; that Poly-encoders are faster than Cross-encoders and more accurate than Bi-encoders; and that the best results are obtained by pre-training on large datasets similar to the downstream tasks.
There is a broad class of models that map the input and a candidate label into a feature space wherein typically a dot product, cosine or (parameterized) non-linearity is used to measure their similarity. We refer to these models as Bi-encoders. Such methods include vector space models @cite_1 , LSI @cite_27 , supervised embeddings @cite_18 and classical siamese networks @cite_13 . For the next utterance prediction tasks we consider in this work, several Bi-encoder neural approaches have been considered, in particular Memory Networks @cite_2 and Transformer Memory networks @cite_11 as well as LSTMs @cite_3 and CNNs @cite_30 which encode input and label separately. A major advantage of Bi-encoder methods is their ability to cache the representations of a large, fixed candidate set. Since the candidate encodings are independent of the input, Bi-encoders are very efficient during evaluation.
{ "cite_N": [ "@cite_30", "@cite_18", "@cite_1", "@cite_3", "@cite_27", "@cite_2", "@cite_13", "@cite_11" ], "mid": [ "2197546379", "2129921015", "2165612380", "2962854379", "2147152072", "", "2127589108", "2963475460" ], "abstract": [ "This paper presents results of our experiments for the next utterance ranking on the Ubuntu Dialog Corpus -- the largest publicly available multi-turn dialog corpus. First, we use an in-house implementation of previously reported models to do an independent evaluation using the same data. Second, we evaluate the performances of various LSTMs, Bi-LSTMs and CNNs on the dataset. Third, we create an ensemble by averaging predictions of multiple models. The ensemble further improves the performance and it achieves a state-of-the-art result for the next utterance ranking on this dataset. Finally, we discuss our future plans using this corpus.", "In this article we propose Supervised Semantic Indexing (SSI), an algorithm that is trained on (query, document) pairs of text documents to predict the quality of their match. Like Latent Semantic Indexing (LSI), our models take account of correlations between words (synonymy, polysemy). However, unlike LSI our models are trained with a supervised signal directly on the ranking task of interest, which we argue is the reason for our superior results. As the query and target texts are modeled separately, our approach is easily generalized to different retrieval tasks, such as online advertising placement. Dealing with models on all pairs of words features is computationally challenging. We propose several improvements to our basic model for addressing this issue, including low rank (but diagonal preserving) representations, and correlated feature hashing (CFH). We provide an empirical study of all these methods on retrieval tasks based on Wikipedia documents as well as an Internet advertisement task. 
We obtain state-of-the-art performance while providing realistically scalable methods.", "In a document retrieval, or other pattern matching environment where stored entities (documents) are compared with each other or with incoming patterns (search requests), it appears that the best indexing (property) space is one where each entity lies as far away from the others as possible; in these circumstances the value of an indexing system may be expressible as a function of the density of the object space; in particular, retrieval performance may correlate inversely with space density. An approach based on space density computations is used to choose an optimum indexing vocabulary for a collection of documents. Typical evaluation results are shown, demonstrating the usefulness of the model.", "This paper introduces the Ubuntu Dialogue Corpus, a dataset containing almost 1 million multi-turn dialogues, with a total of over 7 million utterances and 100 million words. This provides a unique resource for research into building dialogue managers based on neural language models that can make use of large amounts of unlabeled data. The dataset has both the multi-turn property of conversations in the Dialog State Tracking Challenge datasets, and the unstructured nature of interactions from microblog services such as Twitter. We also describe two neural learning architectures suitable for analyzing this dataset, and provide benchmark performance on the task of selecting the best next response.", "A new method for automatic indexing and retrieval is described. The approach is to take advantage of implicit higher-order structure in the association of terms with documents (“semantic structure”) in order to improve the detection of relevant documents on the basis of terms found in queries. The particular technique used is singular-value decomposition, in which a large term by document matrix is decomposed into a set of ca.
100 orthogonal factors from which the original matrix can be approximated by linear combination. Documents are represented by ca. 100 item vectors of factor weights. Queries are represented as pseudo-document vectors formed from weighted combinations of terms, and documents with supra-threshold cosine values are returned. Initial tests find this completely automatic method for retrieval to be promising.", "", "This paper describes an algorithm for verification of signatures written on a pen-input tablet. The algorithm is based on a novel, artificial neural network, called a \"Siamese\" neural network. This network consists of two identical sub-networks joined at their outputs. During training the two sub-networks extract features from two signatures, while the joining neuron measures the distance between the two feature vectors. Verification consists of comparing an extracted feature vector with a stored feature vector for the signer. Signatures closer to this stored representation than a chosen threshold are accepted, all other signatures are rejected as forgeries.", "" ] }
1905.01928
2943652119
Regression testing is an important part of quality control in both software and embedded products, where hardware is involved. It is also one of the most expensive and time-consuming parts of the product cycle. To improve the cost effectiveness of the development cycle and the regression testing, we use test case prioritisation and selection techniques to run more important test cases earlier in the testing process. In this paper, we consider functional test case prioritisation with access only to the version control of the codebase and the regression history. Prioritisation is used to aid our test case selection, where we have chosen 5-25 (0.4%-2.0% of 1254) test cases to validate our method. The selection technique together with other prioritisation methods allows us to shape the current static, retest-all regression testing into a more resource-managed regression testing framework. This framework will serve the agile way of working better and will allow us to allocate testing resources more wisely. This is joint work with a large international Finnish company in an embedded industrial domain.
In agile software development one wants to select test cases based on code changes and regression testing @cite_0 to catch the flipping test cases and to decrease the feedback loop time. These methods alone are not enough, because they do not catch test cases that keep on failing. Therefore, we need, e.g., a history-based test case prioritisation (HBTP) technique as well, where test cases are prioritised based on their failure rate in the regression history. Basically, if a given test case has failed then it will most likely fail again. One technique @cite_6 gives weight to test cases depending on how many builds there have been since their last failure.
{ "cite_N": [ "@cite_0", "@cite_6" ], "mid": [ "2098807207", "2889037615" ], "abstract": [ "Due to changes in the development practices at Axis Communications, towards continuous integration, faster regression testing feedback is needed. The current automated regression test suite takes approximately seven hours to run which prevents developers from integrating code changes several times a day as preferred. Therefore we want to implement a highly selective yet accurate regression testing strategy. Traditional code coverage based techniques are not applicable due to the size and complexity of the software under test. Instead we decided to select tests based on regression test history. We developed a tool, the Difference Engine, which parses and analyzes results from previous test runs and outputs regression test recommendations. The Difference Engine correlates code and test cases at package level and recommends test cases that are strongly correlated to recently changed packages. We evaluated the technique with respect to correctness, precision, recall and efficiency. Our results are promising. On average the tool manages to identify 80 of the relevant tests while recommending only 4 of the test cases in the full regression test suite.", "Abstract Two heuristics namely diversity-based (DBTP) and history-based test prioritization (HBTP) have been separately proposed in the literature. Yet, their combination has not been widely studied in continuous integration (CI) environments. The objective of this study is to catch regression faults earlier, allowing developers to integrate and verify their changes more frequently and continuously. To achieve this, we investigated six open-source projects, each of which included several builds over a large time period. Findings indicate that previous failure knowledge seems to have strong predictive power in CI environments and can be used to effectively prioritize tests. 
HBTP does not necessarily need to have large data, and its effectiveness improves to a certain degree with larger history interval. DBTP can be used effectively during the early stages, when no historical data is available, and also combined with HBTP to improve its effectiveness. Among the investigated techniques, we found that history-based diversity using NCD Multiset is superior in terms of effectiveness but comes with relatively higher overhead in terms of method execution time. Test prioritization in CI environments can be effectively performed with negligible investment using previous failure knowledge, and its effectiveness can be further improved by considering dissimilarities among the tests." ] }
1905.01928
2943652119
Regression testing is an important part of quality control in both software and embedded products, where hardware is involved. It is also one of the most expensive and time-consuming parts of the product cycle. To improve the cost effectiveness of the development cycle and the regression testing, we use test case prioritisation and selection techniques to run more important test cases earlier in the testing process. In this paper, we consider functional test case prioritisation with access only to the version control of the codebase and the regression history. Prioritisation is used to aid our test case selection, where we have chosen 5-25 (0.4%-2.0% of 1254) test cases to validate our method. The selection technique together with other prioritisation methods allows us to shape the current static, retest-all regression testing into a more resource-managed regression testing framework. This framework will serve the agile way of working better and will allow us to allocate testing resources more wisely. This is joint work with a large international Finnish company in an embedded industrial domain.
Another history-based method is to cluster test cases based on regression history alone or on codebase changes. For example, one can build a co-change matrix of the modified files and compute a singular value decomposition to cluster the files. Then, combining the clusters with information on the test cases, the method yields a list of prioritised test cases @cite_9 . Using clustering methods, one may gain in-depth knowledge of how test cases behave, e.g., which test cases have passed and failed as a group even when that is not apparent by looking at the data alone. Downsides of clustering methods are, e.g., the required prior data and running the clustering algorithm at regular intervals to keep clusters up to date, which may become expensive in the long run.
{ "cite_N": [ "@cite_9" ], "mid": [ "2126905619" ], "abstract": [ "During development and testing, changes made to a system to repair a detected fault can often inject a new fault into the code base. These injected faults may not be in the same files that were just changed, since the effects of a change in the code base can have ramifications in other parts of the system. We propose a methodology for determining the effect of a change and then prioritizing regression test cases by gathering software change records and analyzing them through singular value decomposition. This methodology generates clusters of files that historically tend to change together. Combining these clusters with test case information yields a matrix that can be multiplied by a vector representing a new system modification to create a prioritized list of test cases. We performed a post hoc case study using this technique with three minor releases of a software product at IBM. We found that our methodology suggested additional regression tests in 50 of test runs and that the highest-priority suggested test found an additional fault 60 of the time." ] }
1905.01665
2962706131
We present models for utilizing blockchain and smart contract technology with the widely used OAuth 2.0 open authorization framework to provide delegated authorization for constrained IoT devices. The models involve different tradeoffs in terms of privacy, delay, and cost, while exploiting key advantages of blockchains and smart contracts. These include linking payments to authorization grants, immutably recording authorization information and policies in smart contracts, and offering resilience through the execution of smart contract code on all blockchain nodes.
The work in @cite_12 presents a blockchain-based authorization system where authorization proofs can be efficiently verified. The work in @cite_8 presents a blockchain-based decentralized access control system where IoT devices interact directly with the blockchain and are always connected, while @cite_13 presents a system where policies and access control events are directly recorded on Bitcoin's blockchain. @cite_0 presents a smart contract-based system for providing access control to IoT devices while satisfying access policies in terms of the minimum time interval between consecutive accesses. The above works all assume that the IoT device can directly access the blockchain, which is not possible in constrained IoT environments.
{ "cite_N": [ "@cite_0", "@cite_13", "@cite_12", "@cite_8" ], "mid": [ "2785503678", "2620085947", "", "2799128769" ], "abstract": [ "This paper investigates a critical access control issue in the Internet of Things (IoT). In particular, we propose a smart contract-based framework, which consists of multiple access control contracts (ACCs), one judge contract (JC) and one register contract (RC), to achieve distributed and trustworthy access control for IoT systems. Each ACC provides one access control method for a subject-object pair, and implements both static access right validation based on predefined policies and dynamic access right validation by checking the behavior of the subject. The JC implements a misbehavior-judging method to facilitate the dynamic validation of the ACCs by receiving misbehavior reports from the ACCs, judging the misbehavior and returning the corresponding penalty. The RC registers the information of the access control and misbehavior-judging methods as well as their smart contracts, and also provides functions (e.g., register, update and delete) to manage these methods. To demonstrate the application of the framework, we provide a case study in an IoT system with one desktop computer, one laptop and two Raspberry Pi single-board computers, where the ACCs, JC and RC are implemented based on the Ethereum smart contract platform to achieve the access control.", "Access Control systems are used in computer security to regulate the access to critical or valuable resources. The rights of subjects to access such resources are typically expressed through access control policies, which are evaluated at access request time against the current access context. This paper proposes a new approach based on blockchain technology to publish the policies expressing the right to access a resource and to allow the distributed transfer of such right among users. 
In our proposed protocol the policies and the rights exchanges are publicly visible on the blockchain, consequently any user can know at any time the policy paired with a resource and the subjects who currently have the rights to access the resource. This solution allows distributed auditability, preventing a party from fraudulently denying the rights granted by an enforceable policy. We also show a possible working implementation based on XACML policies, deployed on the Bitcoin blockchain.", "", "The prevalence of Internet of Things (IoTs) allows heterogeneous embedded smart devices to collaboratively provide smart services with or without human intervention. While leveraging the large scale IoT based applications like Smart Gird or Smart Cities, IoTs also incur more concerns on privacy and security. Among the top security challenges that IoTs face, access authorization is critical in resource sharing and information protection. One of the weaknesses in today's access control (AC) is the centralized authorization server, which can be the performance bottleneck or the single point of failure. In this paper, BlendCAC, a blockchain enabled decentralized capability based AC is proposed for the security of IoTs. The BlendCAC aims at an effective access control processes to devices, services and information in large scale IoT systems. Based on the blockchain network, a capability delegation mechanism is suggested for access permission propagation. A robust identity based capability token management strategy is proposed, which takes advantage of smart contract for registering, propagation and revocation of the access authorization. In the proposed BlendCAC scheme, IoT devices are their own master to control their resources instead of being supervised by a centralized authority. 
Implemented and tested on a Raspberry Pi device and on a local private blockchain network, our experimental results demonstrate the feasibility of the proposed BlendCAC approach to offer a decentralized, scalable, lightweight and fine grained AC solution to IoT systems." ] }
1905.01665
2962706131
We present models for utilizing blockchain and smart contract technology with the widely used OAuth 2.0 open authorization framework to provide delegated authorization for constrained IoT devices. The models involve different tradeoffs in terms of privacy, delay, and cost, while exploiting key advantages of blockchains and smart contracts. These include linking payments to authorization grants, immutably recording authorization information and policies in smart contracts, and offering resilience through the execution of smart contract code on all blockchain nodes.
The work in @cite_14 presents a system based on OAuth 2.0 where a smart contract generates authorization tokens, which a key server obtains in order to provide private keys that allow clients to access a protected resource. The work of @cite_17 contains a high-level description of using smart contracts with OAuth 2.0 to allow users to freely select the server that provides authorization to their protected resource. In contrast, this paper presents two different models, with different tradeoffs, for integrating OAuth 2.0 with blockchains, utilizing hash and time-lock mechanisms.
{ "cite_N": [ "@cite_14", "@cite_17" ], "mid": [ "2788822833", "2766740708" ], "abstract": [ "In this paper, we propose IoTChain, a combination of the OSCAR architecture [1] and the ACE authorization framework [2] to provide an E2E solution for the secure authorized access to IoT resources. IoTChain consists of two components, an authorization blockchain based on the ACE framework and the OSCAR object security model, extended with a group key scheme. The blockchain provides a flexible and trustless way to handle authorization while OSCAR uses the public ledger to set up multicast groups for authorized clients. To evaluate the feasibility of our architecture, we have implemented the authorization blockchain on top of a private Ethereum network. We report on several experiments that assess the performance of different architecture components.", "This document proposes an alternative service architecture for user- centric control of the sharing of resources, such as personal data, using the decentralized peer-to-peer computing paradigm. The term 'control' is used here to denote the full capacity of the user to freely select (i) the entities with whom to share resources (e.g. data), and (ii) the entities which provide services implementing user- controlled resource sharing. The peer-to-peer service architecture uses a set of computing nodes called OAuth2.0 Nodes (ON) that are part of a peer-to-peer network as the basis for the decentralized service architecture. Each OAuth2.0 Nodes is assumed to have the capability to provide AS-services, RS-services and Client-services." ] }
1905.01671
2943864996
We present models that utilize smart contracts and interledger mechanisms to provide decentralized authorization for constrained IoT devices. The models involve different tradeoffs in terms of cost, delay, complexity, and privacy, while exploiting key advantages of smart contracts and multiple blockchains that communicate with interledger mechanisms. These include immutably recording hashes of authorization information and policies in smart contracts, resilience through the execution of smart contract code on all blockchain nodes, and cryptographically linking transactions and IoT events recorded on different blockchains using hash and time-lock mechanisms. The proposed models are evaluated on the public Ethereum testnets Rinkeby and Ropsten, in terms of execution cost (gas), delay, and reduction of data that needs to be sent to the constrained IoT devices.
The work in @cite_2 presents a blockchain-based decentralized authorization system where authorization proofs can be efficiently verified. The work in @cite_11 presents a decentralized access control system where IoT devices are required to interact directly with the blockchain and are assumed to be always connected, while @cite_16 @cite_22 present solutions where policies and access control decisions are directly recorded on Bitcoin's blockchain. The work in @cite_14 presents a system based on OAuth 2.0 where a smart contract generates authorization tokens, which a key server verifies in order to provide private keys that allow clients to access a protected resource. The work in @cite_15 contains a high-level description of using smart contracts with OAuth 2.0 to provide an architecture where a user can freely select the server that provides authorization for the user's protected resource. Finally, threshold signatures can be used to achieve authorization from a subset of parties that possess a share of a private signing key @cite_6 .
{ "cite_N": [ "@cite_14", "@cite_22", "@cite_6", "@cite_2", "@cite_15", "@cite_16", "@cite_11" ], "mid": [ "2788822833", "2785503678", "", "", "2766740708", "2620085947", "2799128769" ], "abstract": [ "In this paper, we propose IoTChain, a combination of the OSCAR architecture [1] and the ACE authorization framework [2] to provide an E2E solution for the secure authorized access to IoT resources. IoTChain consists of two components, an authorization blockchain based on the ACE framework and the OSCAR object security model, extended with a group key scheme. The blockchain provides a flexible and trustless way to handle authorization while OSCAR uses the public ledger to set up multicast groups for authorized clients. To evaluate the feasibility of our architecture, we have implemented the authorization blockchain on top of a private Ethereum network. We report on several experiments that assess the performance of different architecture components.", "This paper investigates a critical access control issue in the Internet of Things (IoT). In particular, we propose a smart contract-based framework, which consists of multiple access control contracts (ACCs), one judge contract (JC) and one register contract (RC), to achieve distributed and trustworthy access control for IoT systems. Each ACC provides one access control method for a subject-object pair, and implements both static access right validation based on predefined policies and dynamic access right validation by checking the behavior of the subject. The JC implements a misbehavior-judging method to facilitate the dynamic validation of the ACCs by receiving misbehavior reports from the ACCs, judging the misbehavior and returning the corresponding penalty. The RC registers the information of the access control and misbehavior-judging methods as well as their smart contracts, and also provides functions (e.g., register, update and delete) to manage these methods. 
To demonstrate the application of the framework, we provide a case study in an IoT system with one desktop computer, one laptop and two Raspberry Pi single-board computers, where the ACCs, JC and RC are implemented based on the Ethereum smart contract platform to achieve the access control.", "", "", "This document proposes an alternative service architecture for user- centric control of the sharing of resources, such as personal data, using the decentralized peer-to-peer computing paradigm. The term 'control' is used here to denote the full capacity of the user to freely select (i) the entities with whom to share resources (e.g. data), and (ii) the entities which provide services implementing user- controlled resource sharing. The peer-to-peer service architecture uses a set of computing nodes called OAuth2.0 Nodes (ON) that are part of a peer-to-peer network as the basis for the decentralized service architecture. Each OAuth2.0 Nodes is assumed to have the capability to provide AS-services, RS-services and Client-services.", "Access Control systems are used in computer security to regulate the access to critical or valuable resources. The rights of subjects to access such resources are typically expressed through access control policies, which are evaluated at access request time against the current access context. This paper proposes a new approach based on blockchain technology to publish the policies expressing the right to access a resource and to allow the distributed transfer of such right among users. In our proposed protocol the policies and the rights exchanges are publicly visible on the blockchain, consequently any user can know at any time the policy paired with a resource and the subjects who currently have the rights to access the resource. This solution allows distributed auditability, preventing a party from fraudulently denying the rights granted by an enforceable policy. 
We also show a possible working implementation based on XACML policies, deployed on the Bitcoin blockchain.", "The prevalence of Internet of Things (IoTs) allows heterogeneous embedded smart devices to collaboratively provide smart services with or without human intervention. While leveraging the large scale IoT based applications like Smart Gird or Smart Cities, IoTs also incur more concerns on privacy and security. Among the top security challenges that IoTs face, access authorization is critical in resource sharing and information protection. One of the weaknesses in today's access control (AC) is the centralized authorization server, which can be the performance bottleneck or the single point of failure. In this paper, BlendCAC, a blockchain enabled decentralized capability based AC is proposed for the security of IoTs. The BlendCAC aims at an effective access control processes to devices, services and information in large scale IoT systems. Based on the blockchain network, a capability delegation mechanism is suggested for access permission propagation. A robust identity based capability token management strategy is proposed, which takes advantage of smart contract for registering, propagation and revocation of the access authorization. In the proposed BlendCAC scheme, IoT devices are their own master to control their resources instead of being supervised by a centralized authority. Implemented and tested on a Raspberry Pi device and on a local private blockchain network, our experimental results demonstrate the feasibility of the proposed BlendCAC approach to offer a decentralized, scalable, lightweight and fine grained AC solution to IoT systems." ] }
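The threshold-signature idea cited last, authorization by any t of n parties holding shares of a signing key, rests on secret sharing. Below is a toy Shamir sketch over a prime field; real threshold signature schemes sign without ever reconstructing the key in one place, so this only demonstrates the t-of-n property, and all names and parameters are illustrative.

```python
import random

P = 2**61 - 1  # Mersenne prime used as the field modulus

def make_shares(secret, t, n, rng=random.Random(0)):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [rng.randrange(P) for _ in range(t - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):   # Horner evaluation of the polynomial
            acc = (acc * x + c) % P
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

key = 123456789
shares = make_shares(key, t=3, n=5)
assert reconstruct(shares[:3]) == key   # any 3 shares suffice
assert reconstruct(shares[1:4]) == key  # a different subset also works
```

Any two shares alone determine only a line, which says nothing useful about the degree-2 polynomial's value at zero; that is the threshold guarantee.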
1905.01671
2943864996
We present models that utilize smart contracts and interledger mechanisms to provide decentralized authorization for constrained IoT devices. The models involve different tradeoffs in terms of cost, delay, complexity, and privacy, while exploiting key advantages of smart contracts and multiple blockchains that communicate with interledger mechanisms. These include immutably recording hashes of authorization information and policies in smart contracts, resilience through the execution of smart contract code on all blockchain nodes, and cryptographically linking transactions and IoT events recorded on different blockchains using hash and time-lock mechanisms. The proposed models are evaluated on the public Ethereum testnets Rinkeby and Ropsten, in terms of execution cost (gas), delay, and reduction of data that needs to be sent to the constrained IoT devices.
All the above works assume that the IoT devices either interact directly with the blockchain or are capable devices, i.e., they are always connected to the Internet and can implement public/private key cryptographic functions. We do not make these assumptions, and instead propose a scheme where the authorization function is distributed across multiple servers. Our previous work @cite_19 considered the case of a single AS and a single chain, and proposed an approach to verify that the IoT device and the AS share a common secret.
{ "cite_N": [ "@cite_19" ], "mid": [ "2905445635" ], "abstract": [ "Despite technological advances, most smart objects in the Internet of Things (IoT) cannot be accessed using technologies designed and developed for interacting with powerful Internet servers. IoT use cases involve devices that not only have limited resources, but also they are not always connected to the Internet and are physically exposed to tampering. In this paper, we describe the design, development, and evaluation of a smart contract-based solution that allows end-users to securely interact with smart devices. Our approach enables access control, Thing authentication, and payments in a fully decentralized setting, taking at the same time into consideration the limitations and constraints imposed by both blockchain technologies and the IoT paradigm. Our prototype implementation is based on existing technologies, i.e., Ethereum smart contracts, which makes it realistic and fundamentally secure." ] }
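One standard way to verify that a constrained device and an authorization server share a common secret, without sending the secret itself, is an HMAC challenge-response. The message layout below is an assumption for illustration, not the protocol of the cited work.

```python
import hmac, hashlib, os

def device_respond(shared_key, challenge):
    # The device proves knowledge of the key by MACing a fresh challenge.
    return hmac.new(shared_key, challenge, hashlib.sha256).digest()

def as_verify(shared_key, challenge, response):
    # The AS recomputes the MAC and compares in constant time.
    expected = hmac.new(shared_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

key = b"pre-shared-device-secret"
challenge = os.urandom(16)               # fresh nonce chosen by the AS
resp = device_respond(key, challenge)
assert as_verify(key, challenge, resp)                        # same secret
assert not as_verify(b"other-key-entirely", challenge, resp)  # mismatch fails
```

The fresh nonce prevents replay: a response recorded for one challenge is useless for any other.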
1905.01920
2944496541
Existing methods for face image manipulation generally focus on editing the expression, changing some predefined attributes, or applying different filters. However, users lack the flexibility of controlling the shapes of different semantic facial parts in the generated face. In this paper, we propose an approach to compute a disentangled shape representation for a face image, namely the FaceShapeGene. The proposed FaceShapeGene encodes the shape information of each semantic facial part separately into a 1D latent vector. On the basis of the FaceShapeGene, a novel part-wise face image editing system is developed, which contains a shape-remix network and a conditional label-to-face transformer. The shape-remix network can freely recombine the part-wise latent vectors from different individuals, producing a remixed face shape in the form of a label map, which contains the facial characteristics of multiple subjects. The conditional label-to-face transformer, which is trained in an unsupervised cyclic manner, performs part-wise face editing while preserving the original identity of the subject. Experimental results on several tasks demonstrate that the proposed FaceShapeGene representation correctly disentangles the shape features of different semantic parts. In addition, we test our system on several novel part-wise face editing tasks. Comparisons to existing methods demonstrate the superiority of the proposed method on accomplishing novel face editing tasks.
Image-to-image translation is the problem of translating one possible representation of an image into another. Isola et al. @cite_30 proposed Pix2Pix as a supervised solution to general image-to-image translation based on conditional adversarial networks. Afterwards, unsupervised methods @cite_41 @cite_32 were proposed by introducing cycle-consistency constraints. These methods make the simplifying assumption that image-to-image translation is a problem of learning a deterministic one-to-one mapping. However, one-to-many or many-to-many mappings exist in most image-to-image translation tasks. Recently, methods such as MUNIT @cite_4 and DRIT @cite_43 tackled the multimodal image-to-image translation problem by decomposing the latent representation of an image into a domain-invariant content code and a domain-specific style code, greatly reducing mode collapse and producing diverse multimodal translation results. There are also approaches that perform image-to-image translation at higher resolution, under either a supervised setting @cite_25 or an unsupervised setting @cite_36 .
{ "cite_N": [ "@cite_30", "@cite_4", "@cite_41", "@cite_36", "@cite_32", "@cite_43", "@cite_25" ], "mid": [ "2552465644", "2797650215", "2962793481", "2883376126", "", "2952056941", "2963800363" ], "abstract": [ "We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Indeed, since the release of the pix2pix software associated with this paper, a large number of internet users (many of them artists) have posted their own experiments with our system, further demonstrating its wide applicability and ease of adoption without the need for parameter tweaking. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without hand-engineering our loss functions either.", "Unsupervised image-to-image translation is an important and challenging problem in computer vision. Given an image in the source domain, the goal is to learn the conditional distribution of corresponding images in the target domain, without seeing any pairs of corresponding images. While this conditional distribution is inherently multimodal, existing approaches make an overly simplified assumption, modeling it as a deterministic one-to-one mapping. As a result, they fail to generate diverse outputs from a given source domain image. To address this limitation, we propose a Multimodal Unsupervised Image-to-image Translation (MUNIT) framework. 
We assume that the image representation can be decomposed into a content code that is domain-invariant, and a style code that captures domain-specific properties. To translate an image to another domain, we recombine its content code with a random style code sampled from the style space of the target domain. We analyze the proposed framework and establish several theoretical results. Extensive experiments with comparisons to the state-of-the-art approaches further demonstrates the advantage of the proposed framework. Moreover, our framework allows users to control the style of translation outputs by providing an example style image. Code and pretrained models are available at this https URL", "Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. We present an approach for learning to translate an image from a source domain X to a target domain Y in the absence of paired examples. Our goal is to learn a mapping G : X → Y such that the distribution of images from G(X) is indistinguishable from the distribution Y using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping F : Y → X and introduce a cycle consistency loss to push F(G(X)) ≈ X (and vice versa). Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. Quantitative comparisons against several prior methods demonstrate the superiority of our approach.", "Recent studies on unsupervised image-to-image translation have made remarkable progress by training a pair of generative adversarial networks with a cycle-consistent loss. 
However, such unsupervised methods may generate inferior results when the image resolution is high or the two image domains are of significant appearance differences, such as the translations between semantic layouts and natural images in the Cityscapes dataset. In this paper, we propose novel Stacked Cycle-Consistent Adversarial Networks (SCANs) by decomposing a single translation into multi-stage transformations, which not only boost the image translation quality but also enable higher resolution image-to-image translation in a coarse-to-fine fashion. Moreover, to properly exploit the information from the previous stage, an adaptive fusion block is devised to learn a dynamic integration of the current stage’s output and the previous stage’s output. Experiments on multiple datasets demonstrate that our proposed approach can improve the translation quality compared with previous single-stage unsupervised methods.", "", "Image-to-image translation aims to learn the mapping between two visual domains. There are two main challenges for many applications: 1) the lack of aligned training pairs and 2) multiple possible outputs from a single input image. In this work, we present an approach based on disentangled representation for producing diverse outputs without paired training images. To achieve diversity, we propose to embed images onto two spaces: a domain-invariant content space capturing shared information across domains and a domain-specific attribute space. Our model takes the encoded content features extracted from a given input and the attribute vectors sampled from the attribute space to produce diverse outputs at test time. To handle unpaired training data, we introduce a novel cross-cycle consistency loss based on disentangled representations. Qualitative results show that our model can generate diverse and realistic images on a wide range of tasks without paired training data. 
For quantitative comparisons, we measure realism with user study and diversity with a perceptual distance metric. We apply the proposed model to domain adaptation and show competitive performance when compared to the state-of-the-art on the MNIST-M and the LineMod datasets.", "We present a new method for synthesizing high-resolution photo-realistic images from semantic label maps using conditional generative adversarial networks (conditional GANs). Conditional GANs have enabled a variety of applications, but the results are often limited to low-resolution and still far from realistic. In this work, we generate 2048 A— 1024 visually appealing results with a novel adversarial loss, as well as new multi-scale generator and discriminator architectures. Furthermore, we extend our framework to interactive visual manipulation with two additional features. First, we incorporate object instance segmentation information, which enables object manipulations such as removing adding objects and changing the object category. Second, we propose a method to generate diverse results given the same input, allowing users to edit the object appearance interactively. Human opinion studies demonstrate that our method significantly outperforms existing methods, advancing both the quality and the resolution of deep image synthesis and editing." ] }
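The cycle-consistency constraint used by the unsupervised methods above can be written down directly: for mappings G: X -> Y and F: Y -> X, the loss penalizes |F(G(x)) - x| and |G(F(y)) - y|. In this toy sketch the "images" are plain vectors and the mappings are fixed affine functions standing in for generator networks, purely to show how the loss is computed.

```python
def G(x):   # forward mapping X -> Y (stand-in for a generator network)
    return [2.0 * v + 1.0 for v in x]

def F(y):   # backward mapping Y -> X
    return [(v - 1.0) / 2.0 for v in y]

def l1(a, b):
    """Mean absolute difference between two equal-length vectors."""
    return sum(abs(u - v) for u, v in zip(a, b)) / len(a)

def cycle_loss(xs, ys):
    """Average reconstruction error over both translation cycles."""
    forward = sum(l1(F(G(x)), x) for x in xs) / len(xs)
    backward = sum(l1(G(F(y)), y) for y in ys) / len(ys)
    return forward + backward

xs = [[0.0, 1.0], [2.0, -1.0]]
ys = [[1.0, 3.0]]
assert cycle_loss(xs, ys) == 0.0   # F inverts G exactly, so the loss vanishes
```

In training, this term is added to the adversarial losses; it is what keeps the unpaired mappings from collapsing to arbitrary permutations of the target domain.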
1905.01436
2953451070
In this paper, we propose a novel edge-labeling graph neural network (EGNN), which adapts a deep neural network on the edge-labeling graph, for few-shot learning. The previous graph neural network (GNN) approaches in few-shot learning have been based on the node-labeling framework, which implicitly models the intra-cluster similarity and the inter-cluster dissimilarity. In contrast, the proposed EGNN learns to predict the edge-labels rather than the node-labels on the graph, which enables the evolution of an explicit clustering by iteratively updating the edge-labels with direct exploitation of both the intra-cluster similarity and the inter-cluster dissimilarity. It is also well suited for operating on various numbers of classes without retraining, and can be easily extended to perform transductive inference. The parameters of the EGNN are learned by episodic training with an edge-labeling loss to obtain a well-generalizable model for unseen low-data problems. On both the supervised and semi-supervised few-shot image classification tasks with two benchmark datasets, the proposed EGNN significantly improves performance over the existing GNNs.
One mainstream approach for few-shot image classification is based on representation learning and makes predictions via nearest-neighbor search according to the similarity between representations. The similarity can be a simple distance function such as cosine or Euclidean distance. A Siamese network @cite_41 works in a pairwise manner using a trainable weighted @math distance. A matching network @cite_44 further uses an attention mechanism to derive a differentiable nearest-neighbor classifier, and a prototypical network @cite_52 extends this by defining prototypes as the mean of the embedded support examples for each class. DEML @cite_40 introduced a concept learner that extracts high-level concepts using a large-scale auxiliary labeled dataset, showing that a good representation is an important component for improving the performance of few-shot image classification.
{ "cite_N": [ "@cite_41", "@cite_44", "@cite_40", "@cite_52" ], "mid": [ "", "2432717477", "2194321275", "2950537964" ], "abstract": [ "", "Learning from a few examples remains a key challenge in machine learning. Despite recent advances in important domains such as vision and language, the standard supervised deep learning paradigm does not offer a satisfactory solution for learning new concepts rapidly from little data. In this work, we employ ideas from metric learning based on deep neural features and from recent advances that augment neural networks with external memories. Our framework learns a network that maps a small labelled support set and an unlabelled example to its label, obviating the need for fine-tuning to adapt to new class types. We then define one-shot learning problems on vision (using Omniglot, ImageNet) and language tasks. Our algorithm improves one-shot accuracy on ImageNet from 87.6 to 93.2 and from 88.0 to 93.8 on Omniglot compared to competing approaches. We also demonstrate the usefulness of the same model on language modeling by introducing a one-shot task on the Penn Treebank.", "People learning new concepts can often generalize successfully from just a single example, yet machine learning algorithms typically require tens or hundreds of examples to perform with similar accuracy. People can also use learned concepts in richer ways than conventional algorithms—for action, imagination, and explanation. We present a computational model that captures these human learning abilities for a large class of simple visual concepts: handwritten characters from the world’s alphabets. The model represents concepts as simple programs that best explain observed examples under a Bayesian criterion. On a challenging one-shot classification task, the model achieves human-level performance while outperforming recent deep learning approaches. 
We also present several “visual Turing tests” probing the model’s creative generalization abilities, which in many cases are indistinguishable from human behavior.", "We propose prototypical networks for the problem of few-shot classification, where a classifier must generalize to new classes not seen in the training set, given only a small number of examples of each new class. Prototypical networks learn a metric space in which classification can be performed by computing distances to prototype representations of each class. Compared to recent approaches for few-shot learning, they reflect a simpler inductive bias that is beneficial in this limited-data regime, and achieve excellent results. We provide an analysis showing that some simple design decisions can yield substantial improvements over recent approaches involving complicated architectural choices and meta-learning. We further extend prototypical networks to zero-shot learning and achieve state-of-the-art results on the CU-Birds dataset." ] }
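The prototypical-network decision rule described above is simple enough to sketch: each class prototype is the mean of its embedded support examples, and a query is assigned to the nearest prototype in Euclidean distance. The embeddings here are given directly instead of being produced by a trained encoder, so this only shows the classification step.

```python
import math

def prototype(vectors):
    """Class prototype: per-dimension mean of the support embeddings."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def euclidean(a, b):
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))

def classify(query, support):
    """Assign the query to the class with the nearest prototype."""
    protos = {c: prototype(vs) for c, vs in support.items()}
    return min(protos, key=lambda c: euclidean(query, protos[c]))

support = {
    "cat": [[0.9, 0.1], [1.1, -0.1]],   # 2-shot support set per class
    "dog": [[-1.0, 0.0], [-0.8, 0.2]],
}
assert classify([1.0, 0.0], support) == "cat"
assert classify([-0.9, 0.1], support) == "dog"
```

In the full method the distances are turned into a softmax over classes and the encoder is trained episodically so that this nearest-prototype rule works on unseen classes.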
1905.01436
2953451070
In this paper, we propose a novel edge-labeling graph neural network (EGNN), which adapts a deep neural network on the edge-labeling graph, for few-shot learning. The previous graph neural network (GNN) approaches in few-shot learning have been based on the node-labeling framework, which implicitly models the intra-cluster similarity and the inter-cluster dissimilarity. In contrast, the proposed EGNN learns to predict the edge-labels rather than the node-labels on the graph, which enables the evolution of an explicit clustering by iteratively updating the edge-labels with direct exploitation of both the intra-cluster similarity and the inter-cluster dissimilarity. It is also well suited for operating on various numbers of classes without retraining, and can be easily extended to perform transductive inference. The parameters of the EGNN are learned by episodic training with an edge-labeling loss to obtain a well-generalizable model for unseen low-data problems. On both the supervised and semi-supervised few-shot image classification tasks with two benchmark datasets, the proposed EGNN significantly improves performance over the existing GNNs.
A meta-learner that learns to optimize model parameters extracts transferable knowledge across tasks that can be leveraged for few-shot learning. Meta-LSTM @cite_45 uses an LSTM as a model updater and treats the model parameters as its hidden states, which allows it to learn both the initial parameter values and how to update the parameters from few-shot examples. MAML @cite_23 learns only the initial parameter values and simply uses SGD for the updates. It is a model-agnostic approach, applicable to both supervised and reinforcement learning tasks. Reptile @cite_21 is similar to MAML but uses only first-order gradients. Another generic meta-learner, SNAIL @cite_3 , combines temporal convolutions with soft attention to learn an optimal learning strategy.
{ "cite_N": [ "@cite_45", "@cite_3", "@cite_21", "@cite_23" ], "mid": [ "2753160622", "2951881474", "2962767366", "" ], "abstract": [ "Though deep neural networks have shown great success in the large data domain, they generally perform poorly on few-shot learning tasks, where a model has to quickly generalize after seeing very few examples from each class. The general belief is that gradient-based optimization in high capacity models requires many iterative steps over many examples to perform well. Here, we propose an LSTM-based meta-learner model to learn the exact optimization algorithm used to train another learner neural network in the few-shot regime. The parametrization of our model allows it to learn appropriate parameter updates specifically for the scenario where a set amount of updates will be made, while also learning a general initialization of the learner network that allows for quick convergence of training. We demonstrate that this meta-learning model is competitive with deep metric-learning techniques for few-shot learning.", "Deep neural networks excel in regimes with large amounts of data, but tend to struggle when data is scarce or when they need to adapt quickly to changes in the task. In response, recent work in meta-learning proposes training a meta-learner on a distribution of similar tasks, in the hopes of generalization to novel but related tasks by learning a high-level strategy that captures the essence of the problem it is asked to solve. However, many recent meta-learning approaches are extensively hand-designed, either using architectures specialized to a particular application, or hard-coding algorithmic components that constrain how the meta-learner solves the task. We propose a class of simple and generic meta-learner architectures that use a novel combination of temporal convolutions and soft attention; the former to aggregate information from past experience and the latter to pinpoint specific pieces of information. 
In the most extensive set of meta-learning experiments to date, we evaluate the resulting Simple Neural AttentIve Learner (or SNAIL) on several heavily-benchmarked tasks. On all tasks, in both supervised and reinforcement learning, SNAIL attains state-of-the-art performance by significant margins.", "Low-dimensional embeddings of nodes in large graphs have proved extremely useful in a variety of prediction tasks, from content recommendation to identifying protein functions. However, most existing approaches require that all nodes in the graph are present during training of the embeddings; these previous approaches are inherently transductive and do not naturally generalize to unseen nodes. Here we present GraphSAGE, a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings. Instead of training individual embeddings for each node, we learn a function that generates embeddings by sampling and aggregating features from a node's local neighborhood. Our algorithm outperforms strong baselines on three inductive node-classification benchmarks: we classify the category of unseen nodes in evolving information graphs based on citation and Reddit post data, and we show that our algorithm generalizes to completely unseen graphs using a multi-graph dataset of protein-protein interactions.", "" ] }
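The first-order Reptile update mentioned above can be sketched on a toy problem: run a few SGD steps on a sampled task, then move the initialization toward the adapted parameters. Here each task is a scalar quadratic (theta - a)^2 with task-specific optimum a; the scalar parameter, step counts, and learning rates are toy stand-ins for real networks.

```python
import random

def inner_sgd(theta, a, steps=5, lr=0.2):
    """A few SGD steps on one task's loss (theta - a)^2."""
    for _ in range(steps):
        grad = 2.0 * (theta - a)      # d/dtheta of (theta - a)^2
        theta -= lr * grad
    return theta

def reptile(theta, task_optima, meta_lr=0.5, iters=200,
            rng=random.Random(0)):
    """Reptile outer loop: nudge the initialization toward the
    task-adapted parameters after each sampled task."""
    for _ in range(iters):
        a = rng.choice(task_optima)
        adapted = inner_sgd(theta, a)
        theta += meta_lr * (adapted - theta)
    return theta

theta = reptile(theta=10.0, task_optima=[-1.0, 1.0])
# The learned initialization settles near the center of the task
# distribution, from which each task is reachable in a few inner steps.
assert abs(theta) < 1.5
```

MAML would instead backpropagate through the inner steps to update the initialization; Reptile's update above needs only the first-order difference (adapted - theta), which is what makes it cheap.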
1905.01595
2943715591
In visual relationship detection, human-annotated relationships can be regarded as determinate relationships. However, there is still a large amount of unlabeled data, such as object pairs with less significant relationships or even no relationship at all. We refer to these unlabeled but potentially useful data as undetermined relationships. Although a vast body of literature exists, few methods exploit these undetermined relationships for visual relationship detection. In this paper, we explore the beneficial effect of undetermined relationships on visual relationship detection. We propose a novel multi-modal feature based undetermined relationship learning network (MF-URLN) and achieve great improvements in relationship detection. In detail, our MF-URLN automatically generates undetermined relationships by comparing object pairs with human-annotated data according to a designed criterion. Then, the MF-URLN extracts and fuses features of object pairs from three complementary modals: visual, spatial, and linguistic modals. Further, the MF-URLN proposes two correlated subnetworks: one subnetwork decides the determinate confidence, and the other predicts the relationships. We evaluate the MF-URLN on two datasets: the Visual Relationship Detection (VRD) and the Visual Genome (VG) datasets. The experimental results compared with state-of-the-art methods verify the significant improvements made by the undetermined relationships, e.g., the top-50 relation detection recall improves from 19.5 to 23.9 on the VRD dataset.
Positive Unlabeled Learning. Utilization of undetermined relationships is related to positive unlabeled (PU) learning. PU learning refers to the task of learning a binary classifier from only positive and unlabeled data @cite_8 . It has been used in a variety of tasks, such as matrix completion @cite_22 , multi-view learning @cite_0 , and data mining @cite_7 . Most PU learning methods address only binary classification @cite_35 ; for example, @cite_9 proposed an unlabeled data in sequential minimal optimization (USMO) algorithm to learn a binary classifier from unlabeled data. However, visual relationship detection is a multi-label classification task. Therefore, this paper is one of the few works applying PU learning to multi-label tasks, similar to @cite_23 @cite_38 @cite_3 . Following @cite_3 , we exploit the beneficial effect of unlabeled relationships to improve visual relationship detection.
{ "cite_N": [ "@cite_35", "@cite_38", "@cite_22", "@cite_7", "@cite_8", "@cite_9", "@cite_3", "@cite_0", "@cite_23" ], "mid": [ "2618311116", "2422823951", "2951999827", "14576171", "1825821140", "", "2237505347", "2166452903", "2891196798" ], "abstract": [ "We propose an efficient method to estimate the accuracy of classifiers using only unlabeled data. We consider a setting with multiple classification problems where the target classes may be tied together through logical constraints. For example, a set of classes may be mutually exclusive, meaning that a data instance can belong to at most one of them. The proposed method is based on the intuition that: (i) when classifiers agree, they are more likely to be correct, and (ii) when the classifiers make a prediction that violates the constraints, at least one classifier must be making an error. Experiments on four real-world data sets produce accuracy estimates within a few percent of the true accuracy, using solely unlabeled data. Our models also outperform existing state-of-the-art solutions in both estimating accuracies, and combining multiple classifier outputs. The results emphasize the utility of logical constraints in estimating accuracy, thus validating our intuition.", "In this paper, we specifically examine the training of a multi-label classifier from data with incompletely assigned labels. This problem is fundamentally important in many multi-label applications because it is almost impossible for human annotators to assign a complete set of labels, although their judgments are reliable. In other words, a multilabel dataset usually has properties by which (1) assigned labels are definitely positive and (2) some labels are absent but are still considered positive. Such a setting has been studied as a positive and unlabeled (PU) classification problem in a binary setting. 
We treat incomplete label assignment problems as a multi-label PU ranking, which is an extension of classical binary PU problems to the wellstudied rank-based multi-label classification. We derive the conditions that should be satisfied to cancel the negative effects of label incompleteness. Our experimentally obtained results demonstrate the effectiveness of these conditions.", "In this paper, we consider the matrix completion problem when the observations are one-bit measurements of some underlying matrix M, and in particular the observed samples consist only of ones and no zeros. This problem is motivated by modern applications such as recommender systems and social networks where only \"likes\" or \"friendships\" are observed. The problem of learning from only positive and unlabeled examples, called PU (positive-unlabeled) learning, has been studied in the context of binary classification. We consider the PU matrix completion problem, where an underlying real-valued matrix M is first quantized to generate one-bit observations and then a subset of positive entries is revealed. Under the assumption that M has bounded nuclear norm, we provide recovery guarantees for two different observation models: 1) M parameterizes a distribution that generates a binary matrix, 2) M is thresholded to obtain a binary matrix. For the first case, we propose a \"shifted matrix completion\" method that recovers M using only a subset of indices corresponding to ones, while for the second case, we propose a \"biased matrix completion\" method that recovers the (thresholded) binary matrix. Both methods yield strong error bounds --- if M is n by n, the Frobenius error is bounded as O(1 ((1-rho)n), where 1-rho denotes the fraction of ones observed. This implies a sample complexity of O(n n) ones to achieve a small error, when M is dense and n is large. We extend our methods and guarantees to the inductive matrix completion problem, where rows and columns of M have associated features. 
We provide efficient and scalable optimization procedures for both the methods and demonstrate the effectiveness of the proposed methods for link prediction (on real-world networks consisting of over 2 million nodes and 90 million links) and semi-supervised clustering tasks.", "Learning from positive and unlabeled examples (PU learning) has been investigated in recent years as an alternative learning model for dealing with situations where negative training examples are not available. It has many real world applications, but it has yet to be applied in the data stream environment where it is highly possible that only a small set of positive data and no negative data is available. An important challenge is to address the issue of concept drift in the data stream environment, which is not easily handled by the traditional PU learning techniques. This paper studies how to devise PU learning techniques for the data stream environment. Unlike existing data stream classification methods that assume both positive and negative training data are available for learning, we propose a novel PU learning technique LELC (PU Learning by Extracting Likely positive and negative micro-Clusters) for document classification. LELC only requires a small set of positive examples and a set of unlabeled examples which is easily obtainable in the data stream environment to build accurate classifiers. Experimental results show that LELC is a PU learning method that can effectively address the issues in the data stream environment with significantly better speed and accuracy on capturing concept drift than the existing state-of-the-art PU learning techniques.", "We discuss binary classification from only positive and unlabeled data (PU classification), which is conceivable in various real-world machine learning problems. Since unlabeled data consists of both positive and negative data, simply separating positive and unlabeled data yields a biased solution. 
Recently, it was shown that the bias can be canceled by using a particular non-convex loss such as the ramp loss. However, classifier training with a non-convex loss is not straightforward in practice. In this paper, we discuss a convex formulation for PU classification that can still cancel the bias. The key idea is to use different loss functions for positive and unlabeled samples. However, in this setup, the hinge loss is not permissible. As an alternative, we propose the double hinge loss. Theoretically, we prove that the estimators converge to the optimal solutions at the optimal parametric rate. Experimentally, we demonstrate that PU classification with the double hinge loss performs as accurate as the non-convex method, with a much lower computational cost.", "", "This paper shows that simply prescribing \"none of the above\" labels to unlabeled data has a beneficial regularization effect to supervised learning. We call it universum prescription by the fact that the prescribed labels cannot be one of the supervised labels. In spite of its simplicity, universum prescription obtained competitive results in training deep convolutional networks for CIFAR-10, CIFAR-100, STL-10 and ImageNet datasets. A qualitative justification of these approaches using Rademacher complexity is presented. The effect of a regularization parameter -- probability of sampling from unlabeled data -- is also studied empirically.", "Learning with Positive and Unlabeled instances (PU learning) arises widely in information retrieval applications. To address the unavailability issue of negative instances, most existing PU learning approaches require to either identify a reliable set of negative instances from the unlabeled data or estimate probability densities as an intermediate step. However, inaccurate negative-instance identification or poor density estimation may severely degrade overall performance of the final predictive model.
To this end, we propose a novel PU learning method based on density ratio estimation without constructing any sets of negative instances or estimating any intermediate densities. To further boost PU learning performance, we extend our proposed learning method in a multi-view manner by utilizing multiple heterogeneous sources. Extensive experimental studies demonstrate the effectiveness of our proposed methods, especially when positive labeled data are limited.", "In real-world machine learning applications, we are often faced with a situation where only a small number of training samples is available due to high sampling costs. For instance, prediction of mental states such as drowsiness from physiological information is a typical example. To cope with this problem, classifier training methods only from positive and unlabeled data and multi-task learning methods for improving the classification performance by solving multiple related tasks simultaneously have been actively investigated recently. In this paper, we combine these methods and propose a multitask learning method that can handle positive-unlabeled tasks and positive-negative tasks in a unified manner. Through experiments on drivers' drowsiness prediction, we demonstrate the effectiveness of the proposed method." ] }
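As a minimal, self-contained illustration of the PU setting summarized in the related-work passage above, the sketch below applies the classic two-step calibration of Elkan and Noto (2008) to synthetic 1-D data: first train an ordinary classifier to separate labeled positives from unlabeled points, then rescale its scores by the estimated labeling frequency c. This is not the method of any paper cited here; the data, the plain-numpy logistic regression, and all constants are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1-D data: positives centred at +2, negatives at -2.
pos = rng.normal(+2.0, 1.0, size=500)
neg = rng.normal(-2.0, 1.0, size=500)

# PU setting: only some positives carry a label; everything else is unlabeled.
labeled_pos = pos[:200]
unlabeled = np.concatenate([pos[200:], neg])  # mixed, no labels

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logreg(x, y, lr=0.1, steps=2000):
    """Plain logistic regression (one weight + bias) via gradient descent."""
    w, b = 0.0, 0.0
    for _ in range(steps):
        p = sigmoid(w * x + b)
        w -= lr * np.mean((p - y) * x)
        b -= lr * np.mean(p - y)
    return w, b

# Step 1 ("non-traditional" classifier): labeled-positive vs. unlabeled.
x = np.concatenate([labeled_pos, unlabeled])
s = np.concatenate([np.ones_like(labeled_pos), np.zeros_like(unlabeled)])
w, b = fit_logreg(x, s)

# Step 2: estimate c = P(labeled | positive) on the labeled positives,
# then rescale scores: P(y=1 | x) ~ s(x) / c  (Elkan & Noto, 2008).
c = sigmoid(w * labeled_pos + b).mean()

def posterior(x_):
    return np.clip(sigmoid(w * x_ + b) / c, 0.0, 1.0)

# Balanced accuracy of the calibrated classifier on the true classes.
acc = 0.5 * np.mean(posterior(pos) > 0.5) + 0.5 * np.mean(posterior(neg) <= 0.5)
```

With well-separated clusters the calibrated threshold recovers most of the true positive/negative split even though no negative labels were ever seen.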
1905.01386
2943017418
Query Auto-Completion (QAC) is a widely used feature in many domains, including web and eCommerce search, suggesting full queries based on a prefix typed by the user. QAC has been extensively studied in the literature in the recent years, and it has been consistently shown that adding personalization features can significantly improve the performance of QAC. In this work we propose a novel method for personalized QAC that uses lightweight embeddings learnt through fastText. We construct an embedding for the user context queries, which are the last few queries issued by the user. We also use the same model to get the embedding for the candidate queries to be ranked. We introduce ranking features that compute the distance between the candidate queries and the context queries in the embedding space. These features are then combined with other commonly used QAC ranking features to learn a ranking model. We apply our method to a large eCommerce search engine (eBay) and show that the ranker with our proposed feature significantly outperforms the baselines on all of the offline metrics measured, which includes Mean Reciprocal Rank (MRR), Success Rate (SR), Mean Average Precision (MAP), and Normalized Discounted Cumulative Gain (NDCG). Our baselines include the Most Popular Completion (MPC) model as well as a ranking model without our proposed features. The ranking model with the proposed features results in a @math improvement over the MPC model on all metrics. We obtain up to a @math improvement over the baseline ranking model for all the sessions, which goes up to about @math when we restrict to sessions that contain the user context. Moreover, our proposed features also significantly outperform text based personalization features studied in the literature before, and adding text based features on top of our proposed embedding based features results only in minor improvements.
The user's previously entered text is used for personalized QAC by Bar-Yossef and Kraus @cite_19 . The method, called NearestCompletion, computes the similarity of query completion candidates to the context queries (user's previously entered queries), using term-weighted vectors for queries and contexts and applying cosine similarity. This method results in significant improvements in MRR. In addition, the authors proposed the MPC approach, which is based on the overall popularity of the queries matching the given prefix. MPC is a straightforward heuristic approach with good performance and is typically used as a baseline for more complex approaches. We use MPC as one of the baselines in this work as well.
{ "cite_N": [ "@cite_19" ], "mid": [ "8870360" ], "abstract": [ "For thousands of years people have realized the importance of archiving and finding information. With the advent of computers, it became possible to store large amounts of information; and finding useful information from such collections became a necessity. The field of Information Retrieval (IR) was born in the 1950s out of this necessity. Over the last forty years, the field has matured considerably. Several IR systems are used on an everyday basis by a wide variety of users. This article is a brief overview of the key advances in the field of Information Retrieval, and a description of where the state-of-the-art is at in the field." ] }
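The NearestCompletion idea described above (rank completion candidates by cosine similarity to the user's recent context queries) can be sketched in a few lines. The plain term-frequency vectors below are a stand-in for the term-weighted vectors of Bar-Yossef and Kraus; the example queries are made up.

```python
import math
from collections import Counter

def tf_vector(query):
    """Simple term-frequency vector for a query string."""
    return Counter(query.lower().split())

def cosine(u, v):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(u[t] * v.get(t, 0) for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def rank_completions(context_queries, candidates):
    """Order candidates by similarity to the user's previous queries."""
    ctx = tf_vector(" ".join(context_queries))
    return sorted(candidates,
                  key=lambda q: cosine(ctx, tf_vector(q)),
                  reverse=True)

context = ["iphone 12 case", "iphone charger"]
candidates = ["iphone 12 screen protector", "ipad mini", "garden hose"]
best = rank_completions(context, candidates)[0]
print(best)  # the candidate sharing the most context terms ranks first
```

A production QAC system would combine such a context-similarity score with popularity signals (as in MPC) rather than rank by it alone.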
1905.01386
2943017418
Query Auto-Completion (QAC) is a widely used feature in many domains, including web and eCommerce search, suggesting full queries based on a prefix typed by the user. QAC has been extensively studied in the literature in the recent years, and it has been consistently shown that adding personalization features can significantly improve the performance of QAC. In this work we propose a novel method for personalized QAC that uses lightweight embeddings learnt through fastText. We construct an embedding for the user context queries, which are the last few queries issued by the user. We also use the same model to get the embedding for the candidate queries to be ranked. We introduce ranking features that compute the distance between the candidate queries and the context queries in the embedding space. These features are then combined with other commonly used QAC ranking features to learn a ranking model. We apply our method to a large eCommerce search engine (eBay) and show that the ranker with our proposed feature significantly outperforms the baselines on all of the offline metrics measured, which includes Mean Reciprocal Rank (MRR), Success Rate (SR), Mean Average Precision (MAP), and Normalized Discounted Cumulative Gain (NDCG). Our baselines include the Most Popular Completion (MPC) model as well as a ranking model without our proposed features. The ranking model with the proposed features results in a @math improvement over the MPC model on all metrics. We obtain up to a @math improvement over the baseline ranking model for all the sessions, which goes up to about @math when we restrict to sessions that contain the user context. Moreover, our proposed features also significantly outperform text based personalization features studied in the literature before, and adding text based features on top of our proposed embedding based features results only in minor improvements.
Word embeddings, such as @cite_21 , @cite_25 , and @cite_23 @cite_17 , have become increasingly popular in recent years for a large variety of tasks, including computing similarity between words. Embeddings have also been studied in the context of QAC. Specifically, Mitra @cite_10 studies a Convolutional Latent Semantic Model for distributed representations of queries. Query similarity based on embeddings is studied in @cite_13 , where the features are combined with the MPC model. In Section , we explain our approach of learning embeddings for the user context in a simple and scalable fashion and the usage of these embeddings and text based features to personalize QAC.
{ "cite_N": [ "@cite_13", "@cite_21", "@cite_23", "@cite_10", "@cite_25", "@cite_17" ], "mid": [ "2800585554", "", "", "2143196462", "2250539671", "2468328197" ], "abstract": [ "Query auto-completion (QAC) is the first step of information retrieval, which helps users formulate the entire query after inputting only a few prefixes. Regarding the models of QAC, the traditional method ignores the contribution from the semantic relevance between queries. However, similar queries always express extremely similar search intention. In this paper, we propose a hybrid model FS-QAC based on query semantic similarity as well as the query frequency. We choose word2vec method to measure the semantic similarity between intended queries and pre-submitted queries. By combining both features, our experiments show that FS-QAC model improves the performance when predicting the user's query intention and helping formulate the right query. Our experimental results show that the optimal hybrid model contributes to a 7.54 improvement in terms of MRR against a state-of-the-art baseline using the public AOL query logs.", "", "", "Search logs contain examples of frequently occurring patterns of user reformulations of queries. Intuitively, the reformulation \"San Francisco\" -- \"San Francisco 49ers\" is semantically similar to \"Detroit\" -- \"Detroit Lions\". Likewise, \"London\" -- \"things to do in London\" and \"New York\" -- \"New York tourist attractions\" can also be considered similar transitions in intent. The reformulation \"movies\" -- \"new movies\" and \"york\" -- \"New York\", however, are clearly different despite the lexical similarities in the two reformulations. In this paper, we study the distributed representation of queries learnt by deep neural network models, such as the Convolutional Latent Semantic Model, and show that they can be used to represent query reformulations as vectors. 
These reformulation vectors exhibit favourable properties such as mapping semantically and syntactically similar query changes closer in the embedding space. Our work is motivated by the success of continuous space language models in capturing relationships between words and their meanings using offset vectors. We demonstrate a way to extend the same intuition to represent query reformulations. Furthermore, we show that the distributed representations of queries and reformulations are both useful for modelling session context for query prediction tasks, such as for query auto-completion (QAC) ranking. Our empirical study demonstrates that short-term (session) history context features based on these two representations improves the mean reciprocal rank (MRR) for the QAC ranking task by more than 10 over a supervised ranker baseline. Our results also show that by using features based on both these representations together we achieve a better performance, than either of them individually.", "Recent methods for learning vector space representations of words have succeeded in capturing fine-grained semantic and syntactic regularities using vector arithmetic, but the origin of these regularities has remained opaque. We analyze and make explicit the model properties needed for such regularities to emerge in word vectors. The result is a new global logbilinear regression model that combines the advantages of the two major model families in the literature: global matrix factorization and local context window methods. Our model efficiently leverages statistical information by training only on the nonzero elements in a word-word cooccurrence matrix, rather than on the entire sparse matrix or on individual context windows in a large corpus. The model produces a vector space with meaningful substructure, as evidenced by its performance of 75 on a recent word analogy task. 
It also outperforms related models on similarity tasks and named entity recognition.", "This paper explores a simple and efficient baseline for text classification. Our experiments show that our fast text classifier fastText is often on par with deep learning classifiers in terms of accuracy, and many orders of magnitude faster for training and evaluation. We can train fastText on more than one billion words in less than ten minutes using a standard multicore CPU, and classify half a million sentences among 312K classes in less than a minute." ] }
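The embedding-based personalization feature discussed in the record above (distance in embedding space between a candidate query and the user's context queries) can be sketched as follows. A real system would use vectors from a trained fastText model; here a tiny random word-vector table stands in for it, so only the shape of the computation, not the numbers, is meaningful.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy stand-in for a trained embedding model: a small word -> vector table.
vocab = ["iphone", "12", "case", "charger", "ipad", "mini", "screen"]
emb = {w: rng.normal(size=16) for w in vocab}

def query_vec(query):
    """Embed a query as the mean of its word vectors (OOV words skipped)."""
    vecs = [emb[w] for w in query.lower().split() if w in emb]
    return np.mean(vecs, axis=0) if vecs else np.zeros(16)

def context_feature(context_queries, candidate):
    """Ranking feature: cosine similarity between the candidate embedding
    and the mean embedding of the user's last few queries."""
    ctx = np.mean([query_vec(q) for q in context_queries], axis=0)
    cand = query_vec(candidate)
    denom = np.linalg.norm(ctx) * np.linalg.norm(cand)
    return float(ctx @ cand / denom) if denom else 0.0

feat = context_feature(["iphone 12 case", "iphone charger"], "iphone 12 screen")
```

The resulting scalar would be fed, alongside popularity and text-overlap features, into a learning-to-rank model.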
1905.01545
2952169544
This paper presents a logic framework for modeling the interaction among deductive databases in a P2P (Peer to Peer) environment. Each peer joining a P2P system provides or imports data from its neighbors by using a set of mapping rules, i.e. a set of semantic correspondences to a set of peers belonging to the same environment. Two different types of mapping rules are defined: mapping rules allowing to import a maximal set of atoms not leading to inconsistency (called maximal mapping rules) and mapping rules allowing to import a minimal set of atoms needed to restore consistency (called minimal mapping rules). Implicitly, the use of maximal mapping rules states it is preferable to import as long as no inconsistencies arise; whereas the use of minimal mapping rules states that it is preferable not to import unless an inconsistency exists. The paper presents three different declarative semantics of a P2P system: (i) the Max Weak Model Semantics, in which mapping rules are used to import as much knowledge as possible from a peer's neighborhood without violating local integrity constraints; (ii) the Min Weak Model Semantics, in which the P2P system can be locally inconsistent and the information provided by the neighbors is used to restore consistency, that is to only integrate the missing portion of a correct, but incomplete database; (iii) the Max-Min Weak Model Semantics that unifies the previous two different perspectives captured by the Max Weak Model Semantics and Min Weak Model Semantics. This last semantics allows to characterize each peer in the neighborhood as a resource used either to enrich (integrate) or to fix (repair) the knowledge, so as to define a kind of integrate-repair strategy for each peer. Under consideration in Theory and Practice of Logic Programming (TPLP).
In @cite_6 several techniques for optimizing the reformulation of queries in a PDMS are presented. In particular the paper presents techniques for pruning semantic paths of mappings in the reformulation process and for minimizing the reformulated queries. The design of optimization methods for query processing over a network of semantically related data is investigated in @cite_40 .
{ "cite_N": [ "@cite_40", "@cite_6" ], "mid": [ "1560604137", "2172059872" ], "abstract": [ "Semantic mappings between data sources play a key role in several data sharing architectures. Mappings provide the relationships between data stored in different sources, and therefore enable answering queries that require data from other nodes in a data sharing network. Composing mappings is one of the core problems that lies at the heart of several optimization methods in data sharing networks, such as caching frequently traversed paths and redundancy analysis. This paper investigates the theoretical underpinnings of mapping composition. We study the problem for a rich mapping language, GLAV, that combines the advantages of the known mapping formalisms globalas-view and local-as-view. We first show that even when composing two simple GLAV mappings, the full composition may be an infinite set of GLAV formulas. Second, we show that if we restrict the set of queries to be in CQk (a common restriction in practice), then we can always encode the infinite set of GLAV formulas using a finite representation. Furthermore, we describe an algorithm that given a query and a finite encoding of an infinite set of GLAV formulas, finds all the certain answers to the query. Consequently, we show that for a commonly occuring class of queries it is possible to pre-compose mappings, thereby potentially offering significant savings in query processing.", "Peer data management systems (PDMS) offer a flexible architecture for decentralized data sharing. In a PDMS, every peer is associated with a schema that represents the peer's domain of interest, and semantic relationships between peers are provided locally between pairs (or small sets) of peers. By traversing semantic paths of mappings, a query over one peer can obtain relevant data from any reachable peer in the network. 
Semantic paths are traversed by reformulating queries at a peer into queries on its neighbors.Naively following semantic paths is highly inefficient in practice. We describe several techniques for optimizing the reformulation process in a PDMS and validate their effectiveness using real-life data sets. In particular, we develop techniques for pruning paths in the reformulation process and for minimizing the reformulated queries as they are created. In addition, we consider the effect of the strategy we use to search through the space of reformulations. Finally, we show that pre-computing semantic paths in a PDMS can greatly improve the efficiency of the reformulation process. Together, all of these techniques form a basis for scalable query reformulation in PDMS.To enable our optimizations, we developed practical algorithms, of independent interest, for checking containment and minimization of XML queries, and for composing XML mappings." ] }
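The path traversal with pruning described in the record above can be illustrated with a toy sketch: peers are nodes, each mapping rewrites a query's relation name, and already-visited (peer, relation) pairs are pruned so cyclic mappings terminate. The peer names and rewrite tables are invented for illustration; real PDMS mappings are far richer (e.g. GLAV formulas), and this captures only the pruning idea.

```python
from collections import deque

# Toy PDMS: each peer maps some of its relations onto a neighbor's relations.
mappings = {
    "P1": [("P2", {"flight": "trip"})],
    "P2": [("P3", {"trip": "journey"}), ("P1", {"trip": "flight"})],
    "P3": [],
}

def reformulate(start_peer, relation):
    """Traverse semantic paths breadth-first, pruning already-visited
    (peer, relation) pairs so that cyclic mappings do not loop forever."""
    seen = {(start_peer, relation)}
    queue = deque(seen)
    results = []
    while queue:
        peer, rel = queue.popleft()
        results.append((peer, rel))          # a reformulation of the query
        for neighbor, rewrite in mappings[peer]:
            new_rel = rewrite.get(rel)
            if new_rel and (neighbor, new_rel) not in seen:
                seen.add((neighbor, new_rel))
                queue.append((neighbor, new_rel))
    return results

paths = reformulate("P1", "flight")
print(paths)
```

Note how the back-mapping from P2 to P1 is pruned rather than revisited, which is the essence of keeping reformulation tractable on cyclic mapping networks.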
1905.01545
2952169544
This paper presents a logic framework for modeling the interaction among deductive databases in a P2P (Peer to Peer) environment. Each peer joining a P2P system provides or imports data from its neighbors by using a set of mapping rules, i.e. a set of semantic correspondences to a set of peers belonging to the same environment. Two different types of mapping rules are defined: mapping rules allowing to import a maximal set of atoms not leading to inconsistency (called maximal mapping rules) and mapping rules allowing to import a minimal set of atoms needed to restore consistency (called minimal mapping rules). Implicitly, the use of maximal mapping rules states it is preferable to import as long as no inconsistencies arise; whereas the use of minimal mapping rules states that it is preferable not to import unless an inconsistency exists. The paper presents three different declarative semantics of a P2P system: (i) the Max Weak Model Semantics, in which mapping rules are used to import as much knowledge as possible from a peer's neighborhood without violating local integrity constraints; (ii) the Min Weak Model Semantics, in which the P2P system can be locally inconsistent and the information provided by the neighbors is used to restore consistency, that is to only integrate the missing portion of a correct, but incomplete database; (iii) the Max-Min Weak Model Semantics that unifies the previous two different perspectives captured by the Max Weak Model Semantics and Min Weak Model Semantics. This last semantics allows to characterize each peer in the neighborhood as a resource used either to enrich (integrate) or to fix (repair) the knowledge, so as to define a kind of integrate-repair strategy for each peer. Under consideration in Theory and Practice of Logic Programming (TPLP).
In a more general perspective, interesting semantics for data exchange systems that offer the possibility of explicitly modeling some preference criteria while performing the data integration process have been proposed in @cite_32 @cite_39 @cite_55 @cite_20 @cite_7 . In @cite_32 @cite_39 @cite_55 a semantics is proposed that allows for cooperation among pairwise peers that are related to each other by means of data exchange constraints (i.e. mapping rules) and trust relationships. The decision by a peer on what other data to consider (besides its local data) does not depend only on its data exchange constraints, but also on the trust relationships that it has with other peers. Given a peer @math in a P2P system, a solution for @math is a database instance that respects the exchange constraints and trust relationships @math has with its immediate neighbors. Trust relationships are of the form: @math , stating that @math trusts itself less than @math ; @math , stating that @math trusts itself more than @math ; and @math , stating that @math trusts itself the same as @math . These trust relationships are static and are used in the process of collecting data in order to establish preferences in the case of conflicting information.
{ "cite_N": [ "@cite_7", "@cite_55", "@cite_32", "@cite_39", "@cite_20" ], "mid": [ "1991380207", "1534941349", "2950126731", "", "1582380638" ], "abstract": [ "This paper investigates the data exchange problem among distributed independent sources. It is based on previous works of the authors [11, 12, 14] in which a declarative semantics for P2P systems has been presented and a mechanism to set different degrees of reliability for neighbor peers has been provided. The basic semantics for P2P systems defines the concept of Maximal Weak Models (in [11, 12, 14] these models have been called Preferred Weak Models. In this paper we rename them and use the term Preferred for the subclass of Weak Model defined here) that represent scenarios in which maximal sets of facts not violating integrity constraints are imported into the peers [11, 12]. Previous priority mechanism defined in [14] is rigid in the sense that the preference between conflicting sets of atoms that a peer can import only depends on the priorities associated to the source peers at design time. In this paper we present a different framework that allows to select among different scenarios looking at the properties of data provided by the peers. The framework presented here allows to model concepts like \"in the case of conflicting information, it is preferable to import data from the neighbor peer that can provide the maximum number of tuples\" or \"in the case of conflicting information, it is preferable to import data from the neighbor peer such that the sum of the values of an attribute is minimum\" without selecting a-priori preferred peers. 
To enforce this preference mechanism we enrich the previous P2P framework with aggregate functions and present significant examples showing the flexibility of the new framework.", "We propose and investigate a semantics for peer data exchange systems (or peer data management systems) where different peers are pairwise related to each other by means of data exchange constraints and trust relationships. These two elements plus the data at the peers' sites and the local integrity constraints for a peer are made compatible via the proposed semantics by determining a set of solution instances, which are the intended virtual instances for the peer. The semantically correct answers from a peer to a query, called its peer consistent answers, are defined as those answers that are invariant under all its different solution instances. We show that solution instances can be specified as the models of logic programs with a stable model semantics.", "The problem of answering queries posed to a peer who is a member of a peer-to-peer data exchange system is studied. The answers have to be consistent wrt to both the local semantic constraints and the data exchange constraints with other peers; and must also respect certain trust relationships between peers. A semantics for peer consistent answers under exchange constraints and trust relationships is introduced and some techniques for obtaining those answers are presented.", "", "This paper investigates the data exchange problem among distributed independent sources. It is based on previous works in [9,10] in which a (declarative) semantics for P2P systems. In this semantics only facts not making the local databases inconsistent are imported Weak Models, and the Preferred Weak Models are those in which peers import maximal sets of facts not violating integrity constraints. The framework proposed in [9,10] does not provide any mechanism to set priorities among mapping rules. 
Anyhow, while collecting data it is quite natural for a source peer to associate different degrees of reliability to the portion of data provided by its neighbor peers. Starting from this observation, this paper enhances previous semantics by using priority levels among mapping rules in order to select the weak models containing a maximum number of mapping atoms according to their importance. We will call these weak models, Trusted Weak Models and we will show they can be computed as stable models of a logic program with weak constraints." ] }
1905.01078
2943386716
Domain generation algorithms (DGAs) are commonly leveraged by malware to create lists of domain names which can be used for command and control (C&C) purposes. Approaches based on machine learning have recently been developed to automatically detect generated domain names in real-time. In this work, we present a novel DGA called CharBot which is capable of producing large numbers of unregistered domain names that are not detected by state-of-the-art classifiers for real-time detection of DGAs, including the recently published methods FANCI (a random forest based on human-engineered features) and LSTM.MI (a deep learning approach). CharBot is very simple, effective and requires no knowledge of the targeted DGA classifiers. We show that retraining the classifiers on CharBot samples is not a viable defense strategy. We believe these findings show that DGA classifiers are inherently vulnerable to adversarial attacks if they rely only on the domain name string to make a decision. Designing a robust DGA classifier may, therefore, necessitate the use of additional information besides the domain name alone. To the best of our knowledge, CharBot is the simplest and most efficient black-box adversarial attack against DGA classifiers proposed to date.
Machine learning approaches that leverage the domain name string for DGA detection can be categorized into two groups: featureful approaches and featureless approaches. Popular kinds of classifiers used in the featureful approach for DGA detection are logistic regression and tree ensemble methods, while the featureless approach relies on the use of deep neural networks, namely Long Short-Term Memory (LSTM) networks and Convolutional Neural Networks (CNNs). Most papers about the featureless approach include a featureful approach as a baseline method @cite_20 @cite_32 @cite_12 @cite_16 @cite_8 @cite_4 @cite_38 @cite_37 , and the featureless approach is typically reported to yield better, more accurate results.
{ "cite_N": [ "@cite_38", "@cite_37", "@cite_4", "@cite_8", "@cite_32", "@cite_16", "@cite_20", "@cite_12" ], "mid": [ "2768793959", "2592440977", "2912464539", "2912761498", "2786906486", "2759618680", "2546910111", "2773671123" ], "abstract": [ "In recent years, botnets have become a major threat on the Internet. Most sophisticated bots use Domain Generation Algorithms (DGA) to pseudo-randomly generate a large number of domains and select a subset in order to communicate with Command and Control (C&C) server. The basic aim is to avoid blacklisting, sinkholing and evade the security systems. Long Short-Term Memory network (LSTM) provides a mean to combat this botnet type. It operates on raw domains and is amenable to immediate applications. LSTM is however prone to multiclass imbalance problem, which becomes even more significant in DGA malware detection. This is due the fact that many DGA classes have a very little support in the training dataset. This paper presents a novel LSTM.MI algorithm to combine both binary and multiclass classification models, where the original LSTM is adapted to be cost-sensitive. The cost items are introduced into backpropagation learning procedure to take into account the identification importance among classes. Experiments are carried out on a real-world collected dataset. They demonstrate that LSTM.MI provides an improvement of at least 7% in terms of macro-averaging recall and precision as compared to the original LSTM and other state-of-the-art cost-sensitive methods. It is also able to preserve the high accuracy on non-DGA generated class (0.9849 F1-score), while helping recognize 5 additional bot families.", "For years security machine learning research has promised to obviate the need for signature based detection by automatically learning to detect indicators of attack. 
Unfortunately, this vision hasn't come to fruition: in fact, developing and maintaining today's security machine learning systems can require engineering resources that are comparable to that of signature-based detection systems, due in part to the need to develop and continuously tune the \"features\" these machine learning systems look at as attacks evolve. Deep learning, a subfield of machine learning, promises to change this by operating on raw input signals and automating the process of feature design and extraction. In this paper we propose the eXpose neural network, which uses a deep learning approach we have developed to take generic, raw short character strings as input (a common case for security inputs, which include artifacts like potentially malicious URLs, file paths, named pipes, named mutexes, and registry keys), and learns to simultaneously extract features and classify using character-level embeddings and convolutional neural network. In addition to completely automating the feature design and extraction process, eXpose outperforms manual feature extraction based baselines on all of the intrusion detection problems we tested it on, yielding a 5%-10% detection rate gain at a 0.1% false positive rate compared to these baselines.", "Domain Generation Algorithms (DGAs) are a popular technique used by contemporary malware for command-and-control (C&C) purposes. Such malware utilizes DGAs to create a set of domain names that, when resolved, provide information necessary to establish a link to a C&C server. Automated discovery of such domain names in real-time DNS traffic is critical for network security as it allows to detect infection, and, in some cases, take countermeasures to disrupt the communication and identify infected machines. Detection of the specific DGA malware family provides the administrator valuable information about the kind of infection and steps that need to be taken. 
In this paper we compare and evaluate machine learning methods that classify domain names as benign or DGA, and label the latter according to their malware family. Unlike previous work, we select data for test and training sets according to observation time and known seeds. This allows us to assess the robustness of the trained classifiers for detecting domains generated by the same families at a different time or when seeds change. Our study includes tree ensemble models based on human-engineered features and deep neural networks that learn features automatically from domain names. We find that all state-of-the-art classifiers are significantly better at catching domain names from malware families with a time-dependent seed compared to time-invariant DGAs. In addition, when applying the trained classifiers on a day of real traffic, we find that many domain names unjustifiably are flagged as malicious, thereby revealing the shortcomings of relying on a standard whitelist for training a production grade DGA detection system.", "In this paper, we compare the performance of several machine learning based approaches for the tasks of detecting algorithmically generated malicious domains and the categorization of domains according to their malware family. The datasets used for model comparison were provided by the shared task on Detecting Malicious Domain names (DMD 2018). Our models ranked first for two out of the four test datasets provided in the competition.", "Recently several different deep learning architectures have been proposed that take a string of characters as the raw input signal and automatically derive features for text classification. Little studies are available that compare the effectiveness of these approaches for character based text classification with each other. In this paper we perform such an empirical comparison for the important cybersecurity problem of DGA detection: classifying domain names as either benign vs. 
produced by malware (i.e., by a Domain Generation Algorithm). Training and evaluating on a dataset with 2M domain names shows that there is surprisingly little difference between various convolutional neural network (CNN) and recurrent neural network (RNN) based architectures in terms of accuracy, prompting a preference for the simpler architectures, since they are faster to train and less prone to overfitting.", "Modern malware families often rely on domain-generation algorithms (DGAs) to determine rendezvous points to their command-and-control server. Traditional defence strategies (such as blacklisting domains or IP addresses) are inadequate against such techniques due to the large and continuously changing list of domains produced by these algorithms. This paper demonstrates that a machine learning approach based on recurrent neural networks is able to detect domain names generated by DGAs with high precision. The neural models are estimated on a large training set of domains generated by various malwares. Experimental results show that this data-driven approach can detect malware-generated domain names with a F_1 score of 0.971. To put it differently, the model can automatically detect 93% of malware-generated domain names for a false positive rate of 1:100.", "Various families of malware use domain generation algorithms (DGAs) to generate a large number of pseudo-random domain names to connect to a command and control (C&C) server. In order to block DGA C&C traffic, security organizations must first discover the algorithm by reverse engineering malware samples, then generating a list of domains for a given seed. The domains are then either preregistered or published in a DNS blacklist. This process is not only tedious, but can be readily circumvented by malware authors using a large number of seeds in algorithms with multivariate recurrence properties (e.g., banjori) or by using a dynamic list of seeds (e.g., bedep). 
Another technique to stop malware from using DGAs is to intercept DNS queries on a network and predict whether domains are DGA generated. Such a technique will alert network administrators to the presence of malware on their networks. In addition, if the predictor can also accurately predict the family of DGAs, then network administrators can also be alerted to the type of malware that is on their networks. This paper presents a DGA classifier that leverages long short-term memory (LSTM) networks to predict DGAs and their respective families without the need for a priori feature extraction. Results are significantly better than state-of-the-art techniques, providing 0.9993 area under the receiver operating characteristic curve for binary classification and a micro-averaged F1 score of 0.9906. In other terms, the LSTM technique can provide a 90% detection rate with a 1:10000 false positive (FP) rate---a twenty times FP improvement over comparable methods. Experiments in this paper are run on open datasets and code snippets are provided to reproduce the results.", "Domain generation algorithms (DGAs) automatically generate large numbers of domain names in DNS domain fluxing for the purpose of command-and-control (C&C) communication. DGAs are immune to static prevention methods like blacklisting and sinkholing. Detection of DGAs in a live stream of queries in a DNS server is referred to as inline detection. Most of the previous approaches in the literature on DGA detection either: (i) are based on small synthetic data sets for training, rather than data collected from real traffic or (ii) require contextual information and therefore cannot be used for inline detection. In this work, we overcome these limitations by proposing a novel way to label a large volume of data collected from real traffic as DGA/non-DGA and by using deep learning techniques. 
Our classifiers can be trained with large amounts of real traffic, rather than small synthetic data sets, and therefore have better performance." ] }
1905.01078
2943386716
Domain generation algorithms (DGAs) are commonly leveraged by malware to create lists of domain names which can be used for command and control (C&C) purposes. Approaches based on machine learning have recently been developed to automatically detect generated domain names in real-time. In this work, we present a novel DGA called CharBot which is capable of producing large numbers of unregistered domain names that are not detected by state-of-the-art classifiers for real-time detection of DGAs, including the recently published methods FANCI (a random forest based on human-engineered features) and LSTM.MI (a deep learning approach). CharBot is very simple, effective and requires no knowledge of the targeted DGA classifiers. We show that retraining the classifiers on CharBot samples is not a viable defense strategy. We believe these findings show that DGA classifiers are inherently vulnerable to adversarial attacks if they rely only on the domain name string to make a decision. Designing a robust DGA classifier may, therefore, necessitate the use of additional information besides the domain name alone. To the best of our knowledge, CharBot is the simplest and most efficient black-box adversarial attack against DGA classifiers proposed to date.
A recent innovation in the area of deep learning and generative modeling is the generative adversarial network, or GAN, first proposed by @cite_11 . In the GAN framework, a generative model is trained by pitting it against an adversary. The adversary is a discriminative model whose goal is to discern whether a given sample came from the data generating distribution or from the generative model. The generator is trained to maximize the loss of the discriminator, so the GAN training procedure corresponds to a two-player minimax game. Ideally, when the training converges, the generator should recover the data generating distribution and the discriminator should not be able to do any better than random guessing.
{ "cite_N": [ "@cite_11" ], "mid": [ "2099471712" ], "abstract": [ "We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples." ] }
1905.01078
2943386716
Domain generation algorithms (DGAs) are commonly leveraged by malware to create lists of domain names which can be used for command and control (C&C) purposes. Approaches based on machine learning have recently been developed to automatically detect generated domain names in real-time. In this work, we present a novel DGA called CharBot which is capable of producing large numbers of unregistered domain names that are not detected by state-of-the-art classifiers for real-time detection of DGAs, including the recently published methods FANCI (a random forest based on human-engineered features) and LSTM.MI (a deep learning approach). CharBot is very simple, effective and requires no knowledge of the targeted DGA classifiers. We show that retraining the classifiers on CharBot samples is not a viable defense strategy. We believe these findings show that DGA classifiers are inherently vulnerable to adversarial attacks if they rely only on the domain name string to make a decision. Designing a robust DGA classifier may, therefore, necessitate the use of additional information besides the domain name alone. To the best of our knowledge, CharBot is the simplest and most efficient black-box adversarial attack against DGA classifiers proposed to date.
URLs intended for phishing are quite different in nature from DGA domains for C&C purposes. Indeed, to be successful, phishing URLs need to deceive humans, which requires them to be as indistinguishable as possible from benign URLs to the human observer. DGA domain names used for C&C purposes are not intended to be read by human users at all. DGA domain names are successful if they can evade DGA classifiers and have not been previously registered, i.e., they should be available for the botmaster to register. To the best of our knowledge, the authors of DeepDGA @cite_18 are so far the only ones who have looked into generative modeling of DGA domain names. Although their results are significant, we show in this work that classifiers which have been adversarially trained using DeepDGA remain vulnerable to simple attacks such as the CharBot algorithm we propose in sec:charbot .
{ "cite_N": [ "@cite_18" ], "mid": [ "2528572867" ], "abstract": [ "Many malware families utilize domain generation algorithms (DGAs) to establish command and control (C&C) connections. While there are many methods to pseudorandomly generate domains, we focus in this paper on detecting (and generating) domains on a per-domain basis which provides a simple and flexible means to detect known DGA families. Recent machine learning approaches to DGA detection have been successful on fairly simplistic DGAs, many of which produce names of fixed length. However, models trained on limited datasets are somewhat blind to new DGA variants. In this paper, we leverage the concept of generative adversarial networks to construct a deep learning based DGA that is designed to intentionally bypass a deep learning based detector. In a series of adversarial rounds, the generator learns to generate domain names that are increasingly more difficult to detect. In turn, a detector model updates its parameters to compensate for the adversarially generated domains. We test the hypothesis of whether adversarially generated domains may be used to augment training sets in order to harden other machine learning models against yet-to-be-observed DGAs. We detail solutions to several challenges in training this character-based generative adversarial network. In particular, our deep learning architecture begins as a domain name auto-encoder (encoder + decoder) trained on domains in the Alexa one million. Then the encoder and decoder are reassembled competitively in a generative adversarial network (detector + generator), with novel neural architectures and training strategies to improve convergence." ] }
1905.01078
2943386716
Domain generation algorithms (DGAs) are commonly leveraged by malware to create lists of domain names which can be used for command and control (C&C) purposes. Approaches based on machine learning have recently been developed to automatically detect generated domain names in real-time. In this work, we present a novel DGA called CharBot which is capable of producing large numbers of unregistered domain names that are not detected by state-of-the-art classifiers for real-time detection of DGAs, including the recently published methods FANCI (a random forest based on human-engineered features) and LSTM.MI (a deep learning approach). CharBot is very simple, effective and requires no knowledge of the targeted DGA classifiers. We show that retraining the classifiers on CharBot samples is not a viable defense strategy. We believe these findings show that DGA classifiers are inherently vulnerable to adversarial attacks if they rely only on the domain name string to make a decision. Designing a robust DGA classifier may, therefore, necessitate the use of additional information besides the domain name alone. To the best of our knowledge, CharBot is the simplest and most efficient black-box adversarial attack against DGA classifiers proposed to date.
Similarly to our work here, @cite_0 developed DeceptionDGA, a novel DGA which incorporates knowledge of the features used by a DGA classifier in order to attack it. They report significant reductions in predictive accuracy for the FANCI model as well as the Endgame LSTM by @cite_20 . The DeceptionDGA algorithm is more complicated than CharBot, requiring knowledge of the underlying model in order to deploy it. Despite this difference in complexity, the detection rates we observe for CharBot in our experiments are comparable to those of DeceptionDGA.
{ "cite_N": [ "@cite_0", "@cite_20" ], "mid": [ "2942650110", "2546910111" ], "abstract": [ "Malware typically uses Domain Generation Algorithms (DGAs) as a mechanism to contact their Command and Control server. In recent years, different approaches to automatically detect generated domain names have been proposed, based on machine learning. The first problem that we address is the difficulty to systematically compare these DGA detection algorithms due to the lack of an independent benchmark. The second problem that we investigate is the difficulty for an adversary to circumvent these classifiers when the machine learning models backing these DGA-detectors are known. In this paper we compare two different approaches on the same set of DGAs: classical machine learning using manually engineered features and a 'deep learning' recurrent neural network. We show that the deep learning approach performs consistently better on all of the tested DGAs, with an average classification accuracy of 98.7% versus 93.8% for the manually engineered features. We also show that one of the dangers of manual feature engineering is that DGAs can adapt their strategy, based on knowledge of the features used to detect them. To demonstrate this, we use the knowledge of the used feature set to design a new DGA which makes the random forest classifier powerless with a classification accuracy of 59.9%. The deep learning classifier is also (albeit less) affected, reducing its accuracy to 85.5%.", "Various families of malware use domain generation algorithms (DGAs) to generate a large number of pseudo-random domain names to connect to a command and control (C&C) server. In order to block DGA C&C traffic, security organizations must first discover the algorithm by reverse engineering malware samples, then generating a list of domains for a given seed. The domains are then either preregistered or published in a DNS blacklist. 
This process is not only tedious, but can be readily circumvented by malware authors using a large number of seeds in algorithms with multivariate recurrence properties (e.g., banjori) or by using a dynamic list of seeds (e.g., bedep). Another technique to stop malware from using DGAs is to intercept DNS queries on a network and predict whether domains are DGA generated. Such a technique will alert network administrators to the presence of malware on their networks. In addition, if the predictor can also accurately predict the family of DGAs, then network administrators can also be alerted to the type of malware that is on their networks. This paper presents a DGA classifier that leverages long short-term memory (LSTM) networks to predict DGAs and their respective families without the need for a priori feature extraction. Results are significantly better than state-of-the-art techniques, providing 0.9993 area under the receiver operating characteristic curve for binary classification and a micro-averaged F1 score of 0.9906. In other terms, the LSTM technique can provide a 90% detection rate with a 1:10000 false positive (FP) rate---a twenty times FP improvement over comparable methods. Experiments in this paper are run on open datasets and code snippets are provided to reproduce the results." ] }
1905.01078
2943386716
Domain generation algorithms (DGAs) are commonly leveraged by malware to create lists of domain names which can be used for command and control (C&C) purposes. Approaches based on machine learning have recently been developed to automatically detect generated domain names in real-time. In this work, we present a novel DGA called CharBot which is capable of producing large numbers of unregistered domain names that are not detected by state-of-the-art classifiers for real-time detection of DGAs, including the recently published methods FANCI (a random forest based on human-engineered features) and LSTM.MI (a deep learning approach). CharBot is very simple, effective and requires no knowledge of the targeted DGA classifiers. We show that retraining the classifiers on CharBot samples is not a viable defense strategy. We believe these findings show that DGA classifiers are inherently vulnerable to adversarial attacks if they rely only on the domain name string to make a decision. Designing a robust DGA classifier may, therefore, necessitate the use of additional information besides the domain name alone. To the best of our knowledge, CharBot is the simplest and most efficient black-box adversarial attack against DGA classifiers proposed to date.
We also wish to acknowledge the concurrent work of @cite_33 who describe MaskDGA, a black-box technique for evading DGA classifiers that is similar to CharBot. MaskDGA makes use of a surrogate model as well as a list of DGA domains. It uses these data to craft character-level perturbations of the malicious domains such that they are no longer recognized by the surrogate model. Similarly to our own results, the authors find that such techniques are highly effective at reducing the accuracy of state-of-the-art DGA classifiers. They also recommend that DGA classifiers should rely on additional side-information whenever possible in order to mitigate adversarial attacks.
{ "cite_N": [ "@cite_33" ], "mid": [ "2917948814" ], "abstract": [ "Domain generation algorithms (DGAs) are commonly used by botnets to generate domain names through which bots can establish a resilient communication channel with their command and control servers. Recent publications presented deep learning, character-level classifiers that are able to detect algorithmically generated domain (AGD) names with high accuracy, and correspondingly, significantly reduce the effectiveness of DGAs for botnet communication. In this paper we present MaskDGA, a practical adversarial learning technique that adds perturbation to the character-level representation of algorithmically generated domain names in order to evade DGA classifiers, without the attacker having any knowledge about the DGA classifier's architecture and parameters. MaskDGA was evaluated using the DMD-2018 dataset of AGD names and four recently published DGA classifiers, in which the average F1-score of the classifiers degrades from 0.977 to 0.495 when applying the evasion technique. An additional evaluation was conducted using the same classifiers but with adversarial defenses implemented: adversarial re-training and distillation. The results of this evaluation show that MaskDGA can be used for improving the robustness of the character-level DGA classifiers against adversarial attacks, but that ideally DGA classifiers should incorporate additional features alongside character-level features that are demonstrated in this study to be vulnerable to adversarial attacks." ] }
1905.01072
2943121983
We revisit residual algorithms in both model-free and model-based reinforcement learning settings. We propose the bidirectional target network technique to stabilize residual algorithms, yielding a residual version of DDPG that significantly outperforms vanilla DDPG in the DeepMind Control Suite benchmark. Moreover, we find the residual algorithm an effective approach to the distribution mismatch problem in model-based planning. Compared with the existing TD( @math ) method, our residual-based method makes weaker assumptions about the model and yields a greater performance boost.
There are also other studies on Bellman residual methods. @cite_17 show that for policy-based methods, maximizing the average reward is better than minimizing the Bellman residual. @cite_9 show that RG converges with a problem-dependent constant learning rate when combined with certain function approximators. @cite_22 extend RG with natural gradients. However, this paper appears to be the first to contrast residual gradients and semi-gradients in deep RL problems and to demonstrate the efficacy of RA with new algorithms.
{ "cite_N": [ "@cite_9", "@cite_22", "@cite_17" ], "mid": [ "2155143322", "341540352", "2963325394" ], "abstract": [ "Convergence for iterative reinforcement learning algorithms like TD(0) depends on the sampling strategy for the transitions. However, in practical applications it is convenient to take transition data from arbitrary sources without losing convergence. In this paper we investigate the problem of repeated synchronous updates based on a fixed set of transitions. Our main theorem yields sufficient conditions of convergence for combinations of reinforcement learning algorithms and linear function approximation. This allows to analyse if a certain reinforcement learning algorithm and a certain function approximator are compatible. For the combination of the residual gradient algorithm with grid-based linear' interpolation we show that there exists a universal constant learning rate such that the iteration converges independently of the concrete transition data.", "In this paper we investigate the application of natural gradient descent to Bellman error based reinforcement learning algorithms. This combination is interesting because natural gradient descent is invariant to the parameterization of the value function. This invariance property means that natural gradient descent adapts its update directions to correct for poorly conditioned representations. We present and analyze quadratic and linear time natural temporal difference learning algorithms, and prove that they are covariant. We conclude with experiments which suggest that the natural algorithms can match or outperform their non-natural counterparts using linear function approximation, and drastically improve upon their non-natural counterparts when using non-linear function approximation.", "This paper aims at theoretically and empirically comparing two standard optimization criteria for Reinforcement Learning: i) maximization of the mean value and ii) minimization of the Bellman residual. 
For that purpose, we place ourselves in the framework of policy search algorithms, that are usually designed to maximize the mean value, and derive a method that minimizes the residual @math over policies. A theoretical analysis shows how good this proxy is to policy optimization, and notably that it is better than its value-based counterpart. We also propose experiments on randomly generated generic Markov decision processes, specifically designed for studying the influence of the involved concentrability coefficient. They show that the Bellman residual is generally a bad proxy to policy optimization and that directly maximizing the mean value is much better, despite the current lack of deep theoretical analysis. This might seem obvious, as directly addressing the problem of interest is usually better, but given the prevalence of (projected) Bellman residual minimization in value-based reinforcement learning, we believe that this question is worth to be considered." ] }
1905.01072
2943121983
We revisit residual algorithms in both model-free and model-based reinforcement learning settings. We propose the bidirectional target network technique to stabilize residual algorithms, yielding a residual version of DDPG that significantly outperforms vanilla DDPG in the DeepMind Control Suite benchmark. Moreover, we find the residual algorithm an effective approach to the distribution mismatch problem in model-based planning. Compared with the existing TD( @math ) method, our residual-based method makes weaker assumptions about the model and yields a greater performance boost.
Dyna-style planning in RL has been widely used. @cite_20 learn a local linear model for planning. @cite_16 learn a model ensemble to avoid overfitting to an imperfect model, which is also achieved by meta-learning. @cite_8 use a value function ensemble to decide when to use a model. Besides Dyna-style planning, learned models are also used for a lookahead tree-search to improve value estimation at decision time. This tree-search is also used as an effective inductive bias in value function parameterization. Trajectories from a learned model are also used as extra inputs for a value function, which reduces the negative influence of the model prediction error. In this paper, we focus on the simplest Dyna-style planning and leave the combination of RA and more advanced planning techniques for future work.
{ "cite_N": [ "@cite_16", "@cite_20", "@cite_8" ], "mid": [ "2785389871", "2950471160", "2774354230" ], "abstract": [ "Model-free reinforcement learning (RL) methods are succeeding in a growing number of tasks, aided by recent advances in deep learning. However, they tend to suffer from high sample complexity, which hinders their use in real-world domains. Alternatively, model-based reinforcement learning promises to reduce sample complexity, but tends to require careful tuning and to date have succeeded mainly in restrictive domains where simple models are sufficient for learning. In this paper, we analyze the behavior of vanilla model-based reinforcement learning methods when deep neural networks are used to learn both the model and the policy, and show that the learned policy tends to exploit regions where insufficient data is available for the model to be learned, causing instability in training. To overcome this issue, we propose to use an ensemble of models to maintain the model uncertainty and regularize the learning process. We further show that the use of likelihood ratio derivatives yields much more stable learning than backpropagation through time. Altogether, our approach Model-Ensemble Trust-Region Policy Optimization (ME-TRPO) significantly reduces the sample complexity compared to model-free deep RL methods on challenging continuous control benchmark tasks.", "Model-free reinforcement learning has been successfully applied to a range of challenging problems, and has recently been extended to handle large neural network policies and value functions. However, the sample complexity of model-free algorithms, particularly when using high-dimensional function approximators, tends to limit their applicability to physical systems. In this paper, we explore algorithms and representations to reduce the sample complexity of deep reinforcement learning for continuous control tasks. 
We propose two complementary techniques for improving the efficiency of such algorithms. First, we derive a continuous variant of the Q-learning algorithm, which we call normalized advantage functions (NAF), as an alternative to the more commonly used policy gradient and actor-critic methods. NAF representation allows us to apply Q-learning with experience replay to continuous tasks, and substantially improves performance on a set of simulated robotic control tasks. To further improve the efficiency of our approach, we explore the use of learned models for accelerating model-free reinforcement learning. We show that iteratively refitted local linear models are especially effective for this, and demonstrate substantially faster learning on domains where such models are applicable.", "" ] }
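The Dyna-style planning loop surveyed in the related-work passage above can be sketched as a minimal tabular Dyna-Q: after each real transition, the agent performs extra value updates from a learned deterministic model. The 5-state chain task and all constants below are illustrative assumptions, not taken from the cited papers.

```python
import random

N_STATES, ACTIONS = 5, (-1, +1)          # chain of states 0..4, move left/right
ALPHA, GAMMA, PLAN_STEPS = 0.5, 0.95, 10

def step(s, a):
    """True environment: reward 1 only on reaching the right end."""
    s2 = min(max(s + a, 0), N_STATES - 1)
    return s2, float(s2 == N_STATES - 1)

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
model = {}                               # learned deterministic model: (s, a) -> (s', r)

def q_update(s, a, r, s2):
    best = max(Q[(s2, b)] for b in ACTIONS)
    Q[(s, a)] += ALPHA * (r + GAMMA * best - Q[(s, a)])

random.seed(0)
for _ in range(50):
    s = 0
    while s != N_STATES - 1:
        a = random.choice(ACTIONS)       # exploratory behavior policy
        s2, r = step(s, a)
        q_update(s, a, r, s2)            # direct RL update
        model[(s, a)] = (s2, r)          # model learning
        for _ in range(PLAN_STEPS):      # Dyna-style planning on replayed pairs
            ps, pa = random.choice(list(model))
            ps2, pr = model[(ps, pa)]
            q_update(ps, pa, pr, ps2)
        s = s2

# The greedy policy should move right in every non-terminal state.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)}
```

The planning loop is what distinguishes Dyna from plain Q-learning: the same `q_update` is reused on transitions replayed from the model, trading model calls for environment samples.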
1905.01072
2943121983
We revisit residual algorithms in both model-free and model-based reinforcement learning settings. We propose the bidirectional target network technique to stabilize residual algorithms, yielding a residual version of DDPG that significantly outperforms vanilla DDPG in the DeepMind Control Suite benchmark. Moreover, we find the residual algorithm an effective approach to the distribution mismatch problem in model-based planning. Compared with the existing TD( @math ) method, our residual-based method makes weaker assumptions about the model and yields a greater performance boost.
Besides RL, learned models are also used in other control methods, e.g., model predictive control (MPC, garcia1989model). @cite_1 learn deterministic models via neural networks for MPC. @cite_0 conduct a thorough comparison between deterministic models and stochastic models and use particle filters when unrolling a model. Besides modeling the observation transition, @cite_11 @cite_19 propose to model the abstract state transition and use MPC on the abstract state space. In this paper, we focus on the simplest deterministic model and leave the combination of RA and more advanced models for future work.
{ "cite_N": [ "@cite_0", "@cite_19", "@cite_1", "@cite_11" ], "mid": [ "2963960193", "2950004691", "2962872206", "2795843265" ], "abstract": [ "Model-based reinforcement learning (RL) algorithms can attain excellent sample efficiency, but often lag behind the best model-free algorithms in terms of asymptotic performance. This is especially true with high-capacity parametric function approximators, such as deep networks. In this paper, we study how to bridge this gap, by employing uncertainty-aware dynamics models. We propose a new algorithm called probabilistic ensembles with trajectory sampling (PETS) that combines uncertainty-aware deep network dynamics models with sampling-based uncertainty propagation. Our comparison to state-of-the-art model-based and model-free deep RL algorithms shows that our approach matches the asymptotic performance of model-free algorithms on several challenging benchmark tasks, while requiring significantly fewer samples (e.g. 8 and 125 times fewer samples than Soft Actor Critic and Proximal Policy Optimization respectively on the half-cheetah task).", "Planning has been very successful for control tasks with known environment dynamics. To leverage planning in unknown environments, the agent needs to learn the dynamics from interactions with the world. However, learning dynamics models that are accurate enough for planning has been a long-standing challenge, especially in image-based domains. We propose the Deep Planning Network (PlaNet), a purely model-based agent that learns the environment dynamics from images and chooses actions through fast online planning in latent space. To achieve high performance, the dynamics model must accurately predict the rewards ahead for multiple time steps. We approach this using a latent dynamics model with both deterministic and stochastic transition components. Moreover, we propose a multi-step variational inference objective that we name latent overshooting. 
Using only pixel observations, our agent solves continuous control tasks with contact dynamics, partial observability, and sparse rewards, which exceed the difficulty of tasks that were previously solved by planning with learned models. PlaNet uses substantially fewer episodes and reaches final performance close to and sometimes higher than strong model-free algorithms.", "Model-free deep reinforcement learning algorithms have been shown to be capable of learning a wide range of robotic skills, but typically require a very large number of samples to achieve good performance. Model-based algorithms, in principle, can provide for much more efficient learning, but have proven difficult to extend to expressive, high-capacity models such as deep neural networks. In this work, we demonstrate that neural network dynamics models can in fact be combined with model predictive control (MPC) to achieve excellent sample complexity in a model-based reinforcement learning algorithm, producing stable and plausible gaits that accomplish various complex locomotion tasks. We further propose using deep neural network dynamics models to initialize a model-free learner, in order to combine the sample efficiency of model-based approaches with the high task-specific performance of model-free methods. We empirically demonstrate on MuJoCo locomotion tasks that our pure model-based approach trained on just random action data can follow arbitrary trajectories with excellent sample efficiency, and that our hybrid algorithm can accelerate model-free learning on high-speed benchmark tasks, achieving sample efficiency gains of @math on swimmer, cheetah, hopper, and ant agents. Videos can be found at https: sites.google.com view mbmf", "We explore building generative neural network models of popular reinforcement learning environments. Our world model can be trained quickly in an unsupervised manner to learn a compressed spatial and temporal representation of the environment. 
By using features extracted from the world model as inputs to an agent, we can train a very compact and simple policy that can solve the required task. We can even train our agent entirely inside of its own hallucinated dream generated by its world model, and transfer this policy back into the actual environment. An interactive version of this paper is available at https: worldmodels.github.io" ] }
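A minimal random-shooting MPC loop of the kind the works above build on can be sketched as follows. Here the "learned" model `f` is a hand-coded 1-D double integrator standing in for a trained network, and the cost, horizon, and sample counts are illustrative assumptions rather than anything from the cited papers.

```python
import numpy as np

def f(state, a):
    """Stand-in deterministic "learned" model: 1-D double integrator."""
    x, v = state
    v = v + 0.1 * a
    return np.array([x + 0.1 * v, v])

def mpc_action(state, horizon=10, n_samples=300, rng=None):
    """Random-shooting MPC: sample action sequences, roll each through the
    model, and return the first action of the lowest-cost sequence."""
    if rng is None:
        rng = np.random.default_rng(0)
    seqs = rng.uniform(-1.0, 1.0, size=(n_samples, horizon))
    costs = np.zeros(n_samples)
    for i, seq in enumerate(seqs):
        s = state
        for a in seq:
            s = f(s, a)
            costs[i] += s[0] ** 2 + 0.1 * s[1] ** 2   # drive x and v to zero
    return seqs[np.argmin(costs)][0]

# Closed loop: replan at every step -- the defining trait of MPC.
s = np.array([1.0, 0.0])
for _ in range(60):
    s = f(s, mpc_action(s))
```

Only the first action of the best sampled sequence is executed before replanning, which is what lets MPC absorb model error at every step.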
1905.01234
2942700274
After Amdahl's trailblazing work, many other authors proposed analytical speedup models but none have considered the limiting effect of the memory wall. These models exploited aspects such as problem-size variation, memory size, communication overhead, and synchronization overhead, but data-access delays are assumed to be constant. Nevertheless, such delays can vary, for example, according to the number of cores used and the ratio between processor and memory frequencies. Given the large number of possible configurations of operating frequency and number of cores that current architectures can offer, suitable speedup models to describe such variations among these configurations are quite desirable for off-line or on-line scheduling decisions. This work proposes new parallel speedup models that account for variations of the average data-access delay to describe the limiting effect of the memory wall on parallel speedups. Analytical results indicate that the proposed modeling can capture the desired behavior while experimental hardware results validate the former. Additionally, we show that when accounting for parameters that reflect the intrinsic characteristics of the applications, such as degree of parallelism and susceptibility to the memory wall, our proposal has significant advantages over machine-learning-based modeling. Moreover, besides being black-box modeling, our experiments show that conventional machine-learning modeling needs about one order of magnitude more measurements to reach the same level of accuracy achieved in our modeling.
Analytical speedup models for multi-core processors were devised to describe communication @cite_19 and synchronization @cite_13 overheads separately. Communication and synchronization overheads were modeled together in @cite_8, providing a more general description of both behaviors. Apart from not considering the effect of the memory wall on the modeled speedups, these works presented no hardware or simulation validation to confirm their results.
{ "cite_N": [ "@cite_19", "@cite_13", "@cite_8" ], "mid": [ "1987772327", "2146921303", "1996916491" ], "abstract": [ "Multicore chips are emerging as the mainstream solution for high performance computing. Generally, communication overheads cause large performance degradation in multi-core collaboration. Interconnects in large scale are needed to deal with these overheads. Amdahl's and Gustafson's law have been applied to multi-core chips but inter-core communication has not been taken into account. In this paper, we introduce interconnection into Amdahl's and Gustafson's law so that these laws work more precisely in the multi-core era. We further propose an area cost model and analyse our speedup models under area constraints. We find optimized parameters according to our speedup model. These parameters provide useful feedbacks to architects at an initial phase of their designs. We also present a case study to show the necessity of incorporating interconnection into Amdahl's and Gustafson's law.", "This paper presents a fundamental law for parallel performance: it shows that parallel performance is not only limited by sequential code (as suggested by Amdahl's law) but is also fundamentally limited by synchronization through critical sections. Extending Amdahl's software model to include critical sections, we derive the surprising result that the impact of critical sections on parallel performance can be modeled as a completely sequential part and a completely parallel part. The sequential part is determined by the probability for entering a critical section and the contention probability (i.e., multiple threads wanting to enter the same critical section). This fundamental result reveals at least three important insights for multicore design. (i) Asymmetric multicore processors deliver less performance benefits relative to symmetric processors than suggested by Amdahl's law, and in some cases even worse performance. 
(ii) Amdahl's law suggests many tiny cores for optimum performance in asymmetric processors, however, we find that fewer but larger small cores can yield substantially better performance. (iii) Executing critical sections on the big core can yield substantial speedups, however, performance is sensitive to the accuracy of the critical section contention predictor.", "This work analyses the effects of sequential-to-parallel synchronization and inter-core communication on multicore performance, speedup and scaling from Amdahl's law perspective. Analytical modeling supported by simulation leads to a modification of Amdahl's law, reflecting lower than originally predicted speedup, due to these effects. In applications with high degree of data sharing, leading to intense inter-core connectivity requirements, the workload should be executed on a smaller number of larger cores. Applications requiring intense sequential-to-parallel synchronization, even highly parallelizable ones, may better be executed by the sequential core. To improve the scalability and performance speedup of a multicore, it is as important to address the synchronization and connectivity intensities of parallel algorithms as their parallelization factor." ] }
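The overhead-extended Amdahl models surveyed above can be illustrated with a small sketch. The functional form below (a linear-in-p overhead term added to the parallel runtime) and all constants are assumptions for illustration, not the cited papers' exact expressions.

```python
CORE_COUNTS = (1, 2, 4, 8, 16, 32, 64, 128)

def amdahl(p, serial):
    """Classic Amdahl speedup with serial fraction `serial`."""
    return 1.0 / (serial + (1.0 - serial) / p)

def amdahl_overhead(p, serial, k):
    """Amdahl plus a communication/synchronization overhead term k*p
    that grows with the core count p."""
    return 1.0 / (serial + (1.0 - serial) / p + k * p)

speedups = [amdahl_overhead(p, 0.05, 1e-3) for p in CORE_COUNTS]
# Unlike plain Amdahl (which saturates at 1/serial), the overhead model
# peaks at a finite core count and then degrades.
best_p = CORE_COUNTS[max(range(len(CORE_COUNTS)), key=lambda i: speedups[i])]
```

With a 5% serial fraction and k = 1e-3, the peak lands near p = sqrt((1 - serial)/k) ≈ 31, so 32 is the best of the sampled core counts.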
1905.01234
2942700274
After Amdahl's trailblazing work, many other authors proposed analytical speedup models but none have considered the limiting effect of the memory wall. These models exploited aspects such as problem-size variation, memory size, communication overhead, and synchronization overhead, but data-access delays are assumed to be constant. Nevertheless, such delays can vary, for example, according to the number of cores used and the ratio between processor and memory frequencies. Given the large number of possible configurations of operating frequency and number of cores that current architectures can offer, suitable speedup models to describe such variations among these configurations are quite desirable for off-line or on-line scheduling decisions. This work proposes new parallel speedup models that account for variations of the average data-access delay to describe the limiting effect of the memory wall on parallel speedups. Analytical results indicate that the proposed modeling can capture the desired behavior while experimental hardware results validate the former. Additionally, we show that when accounting for parameters that reflect the intrinsic characteristics of the applications, such as degree of parallelism and susceptibility to the memory wall, our proposal has significant advantages over machine-learning-based modeling. Moreover, besides being black-box modeling, our experiments show that conventional machine-learning modeling needs about one order of magnitude more measurements to reach the same level of accuracy achieved in our modeling.
Other analytical models for multi-core architectures consider the variations in parallel speedups caused by variations in the problem or input size, either modeling the parallel overhead @cite_20 or not @cite_27. The parallel overhead was also modeled together with the parallel speedup for distributed parallelism in @cite_18. Similar to our work, these studies also validated the models using execution-time measurements, but none of their models included a feature associated with the effect of the memory wall.
{ "cite_N": [ "@cite_27", "@cite_18", "@cite_20" ], "mid": [ "2005636587", "2603952027", "" ], "abstract": [ "Estimating the potential performance of parallel applications on the yet-to-be-designed future many cores is very speculative. The simple models proposed by Amdahl's law (fixed input problem size) or Gustafson's law (fixed number of cores) do not completely capture the scaling behaviour of a multi-threaded (MT) application leading to over estimation of performance in the many-core era. On the other hand, modeling many-core by simulation is too slow to study the applications performance. In this paper, we propose a more refined but still tractable, high level empirical performance model for multi-threaded applications, the Serial Parallel Scaling (SPS) Model to study the scalability and performance of application in many-core era. SPS model learns the application behavior on a given architecture and provides realistic estimates of the performance in future many-cores. Considering both input problem size and the number of cores in modeling, SPS model can help in making high level decisions on the design choice of future many-core applications and architecture. We validate the model on the Many-Integrated Cores (MIC) xeon-phi with 240 logical cores.", "A number of scientific applications run on current HPC systems would benefit from an approximate assessment of parallel overhead. In many instances a quick and simple method to obtain a general overview on the subject is regarded useful auxiliary information by the routine HPC user. Here we present such a method using just execution times for increasing numbers of parallel processing cores. We start out with several common scientific applications and measure the fraction of time spent in MPI communication. Forming the ratio of MPI time to overall execution time we obtain a smooth curve that can be parameterized by only two constants. 
We then use this two-parameter expression and extend Amdahl's theorem with a new term representing parallel overhead in general. Fitting the original data set with this extended Amdahl expression yields an estimate for the parallel overhead closely matching the MPI time determined previously.", "" ] }
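As a baseline for the problem-size-aware models discussed above, the two classic scaling regimes can be put side by side: fixed-size (Amdahl) versus fixed-time (Gustafson) speedup. The serial fraction 0.05 is an illustrative value.

```python
def amdahl(p, serial):
    """Fixed problem size: speedup saturates at 1/serial."""
    return 1.0 / (serial + (1.0 - serial) / p)

def gustafson(p, serial):
    """Fixed execution time, problem size scaled with p: near-linear speedup."""
    return serial + (1.0 - serial) * p

fixed_size = amdahl(64, 0.05)     # ~15.4: bounded by the serial fraction
fixed_time = gustafson(64, 0.05)  # ~60.9: grows with the problem size
```

The gap between the two numbers at p = 64 is exactly the problem-size effect the models above set out to capture.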
1905.01234
2942700274
After Amdahl's trailblazing work, many other authors proposed analytical speedup models but none have considered the limiting effect of the memory wall. These models exploited aspects such as problem-size variation, memory size, communication overhead, and synchronization overhead, but data-access delays are assumed to be constant. Nevertheless, such delays can vary, for example, according to the number of cores used and the ratio between processor and memory frequencies. Given the large number of possible configurations of operating frequency and number of cores that current architectures can offer, suitable speedup models to describe such variations among these configurations are quite desirable for off-line or on-line scheduling decisions. This work proposes new parallel speedup models that account for variations of the average data-access delay to describe the limiting effect of the memory wall on parallel speedups. Analytical results indicate that the proposed modeling can capture the desired behavior while experimental hardware results validate the former. Additionally, we show that when accounting for parameters that reflect the intrinsic characteristics of the applications, such as degree of parallelism and susceptibility to the memory wall, our proposal has significant advantages over machine-learning-based modeling. Moreover, besides being black-box modeling, our experiments show that conventional machine-learning modeling needs about one order of magnitude more measurements to reach the same level of accuracy achieved in our modeling.
The work of Liu and Sun @cite_14 combines the limitations related to the finite size of the memory @cite_17 with memory-access concurrency @cite_6 to provide a speedup model that can be used for multi-core design-space exploration. Although this model contains elements that relate to our data-access delay speedup model, the authors focus on chip design and, perhaps for this reason, do not explore the effects of frequency variations on speedups.
{ "cite_N": [ "@cite_14", "@cite_6", "@cite_17" ], "mid": [ "", "2126274747", "2027972911" ], "abstract": [ "", "Traditional memory performance metrics, such as average memory access time (AMAT), are designed for sequential data accesses and can prove misleading for contemporary cache technologies that increasingly rely on access concurrency. C-AMAT, a new performance metric, accounts for concurrency at both the component and system levels for modern memory design.", "Abstract In this paper three models of parallel speedup are studied. They are fixed-size speedup, fixed-time speedup, and memory-bounded speedup. The latter two consider the relationship between speedup and problem scalability. Two sets of speedup formulations are derived for these three models. One set considers uneven workload allocation and communication overhead and gives more accurate estimation. Another set considers a simplified case and provides a clear picture on the impact of the sequential portion of an application on the possible performance gain from parallel processing. The simplified fixed-size speedup is Amdahl′s law. The simplified fixed-time speedup is Gustafson′s scaled speedup. The simplified memory-bounded speedup contains both Amdahl′s law and Gustafson′s scaled speedup as special cases. This study leads to a better understanding of parallel processing." ] }
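The memory-wall effect contrasted with Liu and Sun's model above can be sketched with a toy speedup formula in which the average data-access delay grows with core contention. The functional form and constants are assumptions for illustration, not the cited model.

```python
def speedup_memory_wall(p, serial, mem_frac, contention):
    """`mem_frac` of the parallel part is memory-bound; its delay is
    inflated by (1 + contention * (p - 1)) to model the pressure that
    p cores put on the shared memory system."""
    parallel = 1.0 - serial
    compute = (1.0 - mem_frac) * parallel / p
    memory = mem_frac * parallel / p * (1.0 + contention * (p - 1))
    return 1.0 / (serial + compute + memory)

no_wall = speedup_memory_wall(64, 0.05, 0.3, 0.0)     # zero contention: reduces to Amdahl
with_wall = speedup_memory_wall(64, 0.05, 0.3, 0.05)  # contention shrinks the speedup
```

With contention set to zero the memory term collapses back into the ordinary parallel runtime, so the model degenerates to plain Amdahl, which is a useful sanity check.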
1905.01187
2942910886
This work addresses the problem of coupling vision-based navigation systems for Unmanned Aerial Vehicles (UAVs) with robust obstacle avoidance capabilities. The former is formulated by a maximization of the point of interest visibility, while the latter is modeled by ellipsoidal repulsive areas. The whole problem is transcribed into an Optimal Control Problem (OCP), and solved in a few milliseconds by leveraging state-of-the-art numerical optimization. The resulting trajectories are then well suited to achieve the specified goal location while avoiding obstacles by a safety margin and minimizing the probability to lose track with the target of interest. Combining this technique with a proper ellipsoid shaping (e.g. augmenting the shape with the obstacle velocity, or the obstacle detection uncertainties) results in a robust obstacle avoidance behaviour. We validate our approach within extensive simulated experiments demonstrating (i) capability to satisfy all the constraints, and (ii) the avoidance reactivity even in challenging situations. We release with this paper the open source implementation
The collision-free trajectory generation (requirement 2) is usually categorized into three main strategies: search-based approaches @cite_20 @cite_23, optimization-based approaches @cite_1 @cite_25, and path sampling and motion primitives @cite_4 @cite_6.
{ "cite_N": [ "@cite_4", "@cite_1", "@cite_6", "@cite_23", "@cite_25", "@cite_20" ], "mid": [ "2093187605", "", "2064456373", "2482392012", "2564322318", "2091744661" ], "abstract": [ "An algorithm is proposed allowing for the rapid generation and evaluation of quadrocopter state interception trajectories. These trajectories are from arbitrary initial states to final states defined by the vehicle position, velocity and acceleration with a specified end of time. Sufficient criteria are then derived allowing trajectories to be tested for feasibility with respect to thrust and body rates. It is also shown that the range of a linear combination of the vehicle state can be solved for in closed form, useful e.g. for testing that the position remains within a box. The algorithm is applied by revisiting the problem of finding a trajectory to hit a ball towards a target with a racket attached to a quadrocopter. The trajectory generator is used in a model predictive control like strategy, where thousands of trajectories are generated and evaluated at every controller update step, with the first input of the optimal trajectory being sent to the vehicle. It is shown that the method can generate and evaluate on the order of one million trajectories per second on a standard laptop computer.", "", "This paper addresses the problem of motion planning for fast, agile flight through a dense obstacle field. A key contribution is the design of two families of motion primitives for aerial robots flying in dense obstacle fields, along with rules to stitch them together. The primitives are obtained by solving for the flight dynamics of the aerial robot, and explicitly account for limited agility using time delays. The first family of primitives consists of turning maneuvers to link any two points in space. 
The locations of the terminal points are used to obtain closed-form expressions for the control inputs required to fly between them, while accounting for the finite time required to switch between consecutive sets of control inputs. The second family consists of aggressive turn-around maneuvers wherein the time delay between the angle of attack and roll angle commands is used to optimize the maneuver for the spatial constraints. A 3-D motion planning algorithm based on these primitives is presented for aircraft flying through a dense forest.", "We explore the challenges of planning trajectories for quadrotors through cluttered indoor environments. We extend the existing work on polynomial trajectory generation by presenting a method of jointly optimizing polynomial path segments in an unconstrained quadratic program that is numerically stable for high-order polynomials and large numbers of segments, and is easily formulated for efficient sparse computation. We also present a technique for automatically selecting the amount of time allocated to each segment, and hence the quadrotor speeds along the path, as a function of a single parameter determining aggressiveness, subject to actuator constraints. The use of polynomial trajectories, coupled with the differentially flat representation of the quadrotor, eliminates the need for computationally intensive sampling and simulation in the high dimensional state space of the vehicle during motion planning. Our approach generates high-quality trajectories much faster than purely sampling-based optimal kinodynamic planning methods, but sacrifices the guarantee of asymptotic convergence to the global optimum that those methods provide. We demonstrate the performance of our algorithm by efficiently generating trajectories through challenging indoor spaces and successfully traversing them at speeds up to 8 m/s. 
A demonstration of our algorithm and flight performance is available at: http: groups.csail.mit.edu rrg quad_polynomial_trajectory_planning.", "Multirotor unmanned aerial vehicles (UAVs) are rapidly gaining popularity for many applications. However, safe operation in partially unknown, unstructured environments remains an open question. In this paper, we present a continuous-time trajectory optimization method for real-time collision avoidance on multirotor UAVs. We then propose a system where this motion planning method is used as a local replanner, that runs at a high rate to continuously recompute safe trajectories as the robot gains information about its environment. We validate our approach by comparing against existing methods and demonstrate the complete system avoiding obstacles on a multirotor UAV platform.", "The problem of generating a smooth reference path, given a finite family of discrete, locally optimal paths, is investigated. A finite discretization of the environment results in a sequence of obstacle-free square cells. The generated path must lie inside the channel generated by these obstacle-free cells, while minimizing certain performance criteria. Two constrained optimization problems are formulated and solved subject to the given geometric (linear) constraints and boundary conditions in order to generate a library of B-spline path templates offline. These templates are recalled during implementation and are merged together on the fly in order to construct a smooth and feasible reference path to be followed by a closed-loop tracking controller. Combined with a discrete path planner, the proposed algorithm provides a complete solution to the obstacle-free path-generation problem for an unmanned aerial vehicle in a computationally efficient manner, which is suitable for real-time implementation." ] }
1905.01187
2942910886
This work addresses the problem of coupling vision-based navigation systems for Unmanned Aerial Vehicles (UAVs) with robust obstacle avoidance capabilities. The former is formulated by a maximization of the point of interest visibility, while the latter is modeled by ellipsoidal repulsive areas. The whole problem is transcribed into an Optimal Control Problem (OCP), and solved in a few milliseconds by leveraging state-of-the-art numerical optimization. The resulting trajectories are then well suited to achieve the specified goal location while avoiding obstacles by a safety margin and minimizing the probability to lose track with the target of interest. Combining this technique with a proper ellipsoid shaping (e.g. augmenting the shape with the obstacle velocity, or the obstacle detection uncertainties) results in a robust obstacle avoidance behaviour. We validate our approach within extensive simulated experiments demonstrating (i) capability to satisfy all the constraints, and (ii) the avoidance reactivity even in challenging situations. We release with this paper the open source implementation
In @cite_25 the authors propose a motion planning approach capable of running in real time and of continuously recomputing safe trajectories as the robot senses the surrounding environment. Although the proposed method allows replanning at a high rate and reacting to previously unknown obstacles, it might be vulnerable to the limitations of vision-based perception.
{ "cite_N": [ "@cite_25" ], "mid": [ "2564322318" ], "abstract": [ "Multirotor unmanned aerial vehicles (UAVs) are rapidly gaining popularity for many applications. However, safe operation in partially unknown, unstructured environments remains an open question. In this paper, we present a continuous-time trajectory optimization method for real-time collision avoidance on multirotor UAVs. We then propose a system where this motion planning method is used as a local replanner, that runs at a high rate to continuously recompute safe trajectories as the robot gains information about its environment. We validate our approach by comparing against existing methods and demonstrate the complete system avoiding obstacles on a multirotor UAV platform." ] }
1905.01187
2942910886
This work addresses the problem of coupling vision-based navigation systems for Unmanned Aerial Vehicles (UAVs) with robust obstacle avoidance capabilities. The former is formulated by a maximization of the point of interest visibility, while the latter is modeled by ellipsoidal repulsive areas. The whole problem is transcribed into an Optimal Control Problem (OCP), and solved in a few milliseconds by leveraging state-of-the-art numerical optimization. The resulting trajectories are then well suited to achieve the specified goal location while avoiding obstacles by a safety margin and minimizing the probability to lose track with the target of interest. Combining this technique with a proper ellipsoid shaping (e.g. augmenting the shape with the obstacle velocity, or the obstacle detection uncertainties) results in a robust obstacle avoidance behaviour. We validate our approach within extensive simulated experiments demonstrating (i) capability to satisfy all the constraints, and (ii) the avoidance reactivity even in challenging situations. We release with this paper the open source implementation
Steering a robot to its desired state by using visual feedback obtained from one or more cameras (requirement 3) is formally defined as Visual Servoing (VS), with several applications within the UAV domain @cite_15 @cite_24 @cite_5 @cite_22. Among others, Falanga et al. @cite_22 address flight through narrow gaps by proposing an active-vision approach that relies only on onboard sensing and computing. The system provides an accurate trajectory while simultaneously estimating the UAV's position by detecting the gap in the camera images. Nevertheless, it might fail in the presence of unmodeled obstacles along the path.
{ "cite_N": [ "@cite_24", "@cite_5", "@cite_15", "@cite_22" ], "mid": [ "2323652230", "2047001857", "2139241879", "2614122538" ], "abstract": [ "In this paper we propose a new control method for quadrotor autonomous landing on a visual target without linear velocity measurements. Only onboard sensing is exploited, such that only the images of the landing pad from a down-looking camera, along with data from an Inertial Measurement Unit's gyro, are used. The control system consists of an image-based nonlinear observer that estimates online the linear velocity of the vehicle and a backstepping image-based controller that generates attitude, and thrust setpoints to the quadrotor autopilot. Both observer and controller share the same feedback information: spherical visual features. Therefore no further image elaboration is needed for the estimation. This, along with the fact that only simple computations on low- and constant-dimension arrays are involved, makes the proposed solution computationally cheap. Real-hardware experiments on a quadrotor are carried out to verify the validity of the proposed control system.", "This paper presents an adaptive image-based visual servoing (IBVS) integrated with adaptive sliding mode control for a vision-based operation of a quadrotor unmanned aerial vehicle (UAV). For a seamless integration with underactuated quadrotor dynamics, roll and pitch channels are decoupled from the other channels using virtual features. This allows a simple and accurate algorithm for estimating depth information and successful application of the proposed guidance and control algorithm. By employing an adaptive gain in the IBVS control method, the chance of image feature loss is reduced, and performance and stability of the vision-guided UAV control system are improved. The overall setup allows image features to be placed at the desired position in the image plane of a camera mounted on the quadrotor UAV. Stability of the IBVS system with the controller is proved using Lyapunov stability analysis. Performance of the overall approach is validated by numerical simulation, vision integrated hardware-in-the-loop simulation, and experiments. The results confirm that the target image is successfully placed at the desired position of the image plane and the quadrotor state variables are properly regulated, showing robustness in the presence of sensor noise, parametric uncertainty, and vibration from motors.", "The motivation of this research is to show that visual based object tracking and following is reliable using a cheap GPS-denied multirotor platform such as the AR Drone 2.0. Our architecture allows the user to specify an object in the image that the robot has to follow from an approximate constant distance. At the current stage of our development, in the event of image tracking loss the system starts to hover and waits for the image tracking recovery or second detection, which requires the usage of odometry measurements for self stabilization. During the following task, our software utilizes the forward-facing camera images and part of the IMU data to calculate the references for the four on-board low-level control loops. To obtain a stronger wind disturbance rejection and an improved navigation performance, a yaw heading reference based on the IMU data is internally kept and updated by our control algorithm. We validate the architecture using an AR Drone 2.0 and the OpenTLD tracker in outdoor suburban areas. The experimental tests have shown robustness against wind perturbations, target occlusion and illumination changes, and the system's capability to track a great variety of objects present on suburban areas, for instance: walking or running people, windows, AC machines, static and moving cars and plants.", "We address one of the main challenges towards autonomous quadrotor flight in complex environments, which is flight through narrow gaps. While previous works relied on off-board localization systems or on accurate prior knowledge of the gap position and orientation in the world reference frame, we rely solely on onboard sensing and computing and estimate the full state by fusing gap detection from a single onboard camera with an IMU. This problem is challenging for two reasons: (i) the quadrotor pose uncertainty with respect to the gap increases quadratically with the distance from the gap; (ii) the quadrotor has to actively control its orientation towards the gap to enable state estimation (i.e., active vision). We solve this problem by generating a trajectory that considers geometric, dynamic, and perception constraints: during the approach maneuver, the quadrotor always faces the gap to allow state estimation, while respecting the vehicle dynamics; during the traverse through the gap, the distance of the quadrotor to the edges of the gap is maximized. Furthermore, we replan the trajectory during its execution to cope with the varying uncertainty of the state estimate. We successfully evaluate and demonstrate the proposed approach in many real experiments, achieving a success rate of 80% and gap orientations up to 45°. To the best of our knowledge, this is the first work that addresses and achieves autonomous, aggressive flight through narrow gaps using only onboard sensing and computing and without prior knowledge of the pose of the gap." ] }
1905.01187
2942910886
This work addresses the problem of coupling vision-based navigation systems for Unmanned Aerial Vehicles (UAVs) with robust obstacle avoidance capabilities. The former is formulated by a maximization of the point of interest visibility, while the latter is modeled by ellipsoidal repulsive areas. The whole problem is transcribed into an Optimal Control Problem (OCP), and solved in a few milliseconds by leveraging state-of-the-art numerical optimization. The resulting trajectories are then well suited to achieve the specified goal location while avoiding obstacles by a safety margin and minimizing the probability to lose track of the target of interest. Combining this technique with a proper ellipsoid shaping (e.g. augmenting the shape with the obstacle velocity, or the obstacle detection uncertainties) results in a robust obstacle avoidance behaviour. We validate our approach within extensive simulated experiments demonstrating (i) capability to satisfy all the constraints, and (ii) the avoidance reactivity even in challenging situations. We release with this paper the open source implementation
In @cite_10 , the authors propose an NMPC which incorporates obstacles in the cost function. To increase the robustness in avoiding obstacles, the UAV trajectories are computed taking into account the uncertainties of the vehicle state. Kamel et al. @cite_19 deal with the problem of multi-UAV reactive collision avoidance. They employ a model-based controller to simultaneously track a reference trajectory and avoid collisions. The proposed method also takes into account the uncertainty of the state estimator and of the position and velocity of the other agents, achieving a higher degree of robustness. Both methods offer a reactive control strategy, but might not allow the vehicle to perform vision-based navigation.
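Obstacle-aware NMPC formulations of this kind typically add a soft-constraint term to the cost that penalizes penetration of inflated ellipsoidal safety regions around obstacles. The sketch below is a generic illustration of such a penalty, not code from any cited work; the weight and inflation margin are hypothetical, and uncertainty-dependent inflation is only hinted at in a comment.

```python
import numpy as np

def ellipsoid_penetration(p, center, semi_axes):
    """Normalized ellipsoidal distance; values < 1 mean the point is inside."""
    d = (np.asarray(p) - np.asarray(center)) / np.asarray(semi_axes)
    return float(np.sqrt(d @ d))

def obstacle_cost(trajectory, obstacles, weight=100.0, margin=1.2):
    """Soft-constraint term: penalize penetration of inflated ellipsoids.

    trajectory : iterable of 3-D positions along the planned path
    obstacles  : iterable of (center, semi_axes) pairs
    """
    cost = 0.0
    for p in trajectory:
        for center, semi_axes in obstacles:
            # Inflate the ellipsoid by a safety margin; robust variants
            # could instead grow the axes with state-estimate covariance.
            dist = ellipsoid_penetration(p, center,
                                         np.asarray(semi_axes) * margin)
            cost += weight * max(0.0, 1.0 - dist) ** 2
    return cost
```

In an OCP, a term like this is simply summed with the tracking and control-effort costs, so trajectories that graze an obstacle are admissible but expensive.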
{ "cite_N": [ "@cite_19", "@cite_10" ], "mid": [ "2771809894", "2738812057" ], "abstract": [ "When several Multirotor Micro Aerial Vehicles (MAVs) share the same airspace, reliable and robust collision avoidance is required. In this paper we address the problem of multi-MAV reactive collision avoidance. We employ a model-based controller to simultaneously track a reference trajectory and avoid collisions. Moreover, to achieve a higher degree of robustness, our method also accounts for the uncertainty of the state estimator and of the position and velocity of the other agents. The proposed approach is decentralized, does not require a collision-free reference trajectory and accounts for the full MAV dynamics. We validated our approach in simulation and experimentally with two MAV.", "This work addresses the problem of motion planning among obstacles for quadrotor platforms under external disturbances and with model uncertainty. A novel Nonlinear Model Predictive Control (NMPC) optimization technique is proposed which incorporates specified uncertainties into the planned trajectories. At the core of the procedure lies the propagation of model parameter uncertainty and initial state uncertainty as high-confidence ellipsoids in pose space. The quadrotor trajectories are then computed to avoid obstacles by a required safety margin, expressed as ellipsoid penetration while minimizing control effort and achieving a user-specified goal location. Combining this technique with online model identification results in robust obstacle avoidance behavior. Experiments in outdoor scenarios with virtual obstacles show that the quadrotor can avoid obstacles robustly, even under the influence of external disturbances." ] }
1905.01173
2943714223
In this paper, we present a novel method for analysis and segmentation of laminar structure of the cortex based on tissue characteristics whose change across the gray matter facilitates distinction between cortical layers. We develop and analyze features of individual neurons to investigate changes in architectonic differentiation and present a novel high-performance, automated tree-ensemble method trained on data manually labeled by three human investigators. From the location and basic measures of neurons, more complex features are developed and used in machine learning models for automatic segmentation of cortical layers. Tree ensembles are used on data manually labeled by three human experts. The most accurate classification results were obtained by training three models separately and creating another ensemble by combining probability outputs for final neuron layer classification. Measurement of importances of developed neuron features on both global model level and individual prediction level are obtained.
Since the first methods for automated or semi-automated analysis of cortical layers were developed, the central idea in almost all of them has been sampling along transverse lines, drawn either manually or semi-automatically @cite_17 , across the cortex, perpendicular to the laminar structure and spanning its full width @cite_15 , @cite_24 , @cite_28 . At first, these profiles measured changes in optical density, or the gray-level index (GLI), which is a crude estimate of changes in neuronal density.
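Such a GLI profile amounts to sampling a cell mask along a transverse line with a small averaging window. The function below is an illustrative reconstruction of that idea, not code from any cited work; the window size and sample count are arbitrary.

```python
import numpy as np

def gli_profile(cell_mask, start, end, n_samples=50, half_width=2):
    """Gray-level index along a transverse line across the cortex.

    cell_mask : 2-D binary array (1 = cell-body pixel)
    start/end : (row, col) endpoints, e.g. pial surface -> white matter
    Returns the fraction of cell pixels in a small window around each
    sample point, i.e. a crude estimate of local neuronal density.
    """
    rows = np.linspace(start[0], end[0], n_samples)
    cols = np.linspace(start[1], end[1], n_samples)
    profile = np.empty(n_samples)
    for i, (r, c) in enumerate(zip(rows.round().astype(int),
                                   cols.round().astype(int))):
        r0, r1 = max(r - half_width, 0), r + half_width + 1
        c0, c1 = max(c - half_width, 0), c + half_width + 1
        profile[i] = cell_mask[r0:r1, c0:c1].mean()
    return profile
```

Peaks and troughs of this profile then reflect denser and sparser laminae along the cortical depth.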
{ "cite_N": [ "@cite_24", "@cite_15", "@cite_28", "@cite_17" ], "mid": [ "2011794397", "2061348462", "77251642", "1973000876" ], "abstract": [ "", "Functional differences among various portions of the cerebral cortex are often correlated with differing cortical layering patterns. Convenient, accurate techniques for scoring layering should therefore prove useful in electrophysiological as well as anatomical investigations. We report the application of a computer-controlled scanning microdensitometer as a means of rapid measurement of optical densities in histological sections of monkey visual cortices, areas 17 and 18. The technique readily permits recognition of the previously defined cortical layers and suggests that still finer consistent layering patterns exist; it provides objective \"fingerprints\" of cortical regions which facilitate comparisons of structure from area to area and from animal to animal. The procedure should serve also to score the positions of autoradiographic grains, degenerating axonal terminals, and other labeled structures, and to allow the comparison of preparations stained by various techniques.", "A toy which enables a child to draw pictures by remote manipulation of a writing instrument, including a frame for supporting a sheet of paper, a marker holder for holding a writing instrument over the paper, and a pair of knobs which can be turned by a child to move the holder in each of two perpendicular directions over the paper. The marker holder is mounted for sliding in the \"X\" direction on a carriage and the carriage is mounted to move in the \"Y\" direction on the frame. When a \"Y\" knob is turned by a child, it rotates a pulley which moves a cord that is attached to the carriage to move it in the \"Y\" position. When an \"X\" knob is turned, it rotates a square shaft, and a pinion rotatably mounted on the carriage and coupled to the shaft then moves a rack on the marker holder to slide it in the \"X\" direction. A program in the form of a strip with rows of letters and numbers defining \"X\" and \"Y\" locations to which the holder can be moved, defines a picture that can be drawn by a child.", "We describe a new, observer-independent procedure for identifying boundaries between cortical areas. The method is useful for images obtained from sections which provide microstructural information on the cortical laminar pattern, e.g., Nissl-, myelin-, or immunohistochemically stained sections or receptor autoradiographs. The laminar pattern is represented by profile curves extending from the cortical surface to the white matter boundary. These profiles are constructed from digitized images. Digitization is based on the grey level index (Nissl) or densitometry (myelin, immunohistochemistry, receptor autoradiography). The shapes of neighboring profiles are compared by calculating their distances according to feature vectors extracted from the profiles. Profiles derived from a homogeneous area can be expected to be similar in shape and hence show low distance values between each other. Maximum distances can be found between profiles which lie on opposite sides of a structural boundary. The Mahalanobis distance was found to be more sensitive and to yield greater spatial resolution than other distance measures such as the Euclidean distance. Cell-stained sections of the human neocortex were analyzed. The method not only verified boundaries which had been defined by visual inspection, it also revealed new ones which had not been detected visually. The procedure offers an important supplement to the traditional methods based on visual inspection which, for the first time, is based on quantitative data and therefore offers a new level of reproducibility and observer independence. Anatomical atlases based on this procedure thus provide a new tool for the interpretation of structural data obtained from functional imaging techniques." ] }
1905.01173
2943714223
In this paper, we present a novel method for analysis and segmentation of laminar structure of the cortex based on tissue characteristics whose change across the gray matter facilitates distinction between cortical layers. We develop and analyze features of individual neurons to investigate changes in architectonic differentiation and present a novel high-performance, automated tree-ensemble method trained on data manually labeled by three human investigators. From the location and basic measures of neurons, more complex features are developed and used in machine learning models for automatic segmentation of cortical layers. Tree ensembles are used on data manually labeled by three human experts. The most accurate classification results were obtained by training three models separately and creating another ensemble by combining probability outputs for final neuron layer classification. Measurement of importances of developed neuron features on both global model level and individual prediction level are obtained.
Recent methods for automated laminar analysis developed after the year 2000 have moved beyond using only the GLI in the analysis of cortical profiles. In 2002 @cite_2 , the authors developed statistical features characterizing each GLI profile and its discrete derivative, such as the first four moments about the mean (treating the profile as a frequency distribution), for a total of 10 features per profile. The authors also relate these features to cytoarchitectonics and discuss different distance measures for comparing profile vectors derived from different brain areas. An important step in the development of profile features was made eight years later in @cite_13 with the introduction of automatic methods for estimating cell counts, thus providing realistic information about neuron density. The authors report the number and distribution of neurons within several projection columns in rat cortex, giving an estimate of neuron number per layer. These data are further used to derive the layer-specific action potential output of projection columns, clearly showing a direct link between analysis of cytoarchitecture and analysis of brain function.
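One plausible reading of such a 10-feature vector, with the profile treated as a frequency distribution over normalized cortical depth, is sketched below; the exact feature definitions in @cite_2 may differ, so this is an illustration of the idea rather than a reproduction.

```python
import numpy as np

def moment_features(profile):
    """Five features of a profile treated as a frequency distribution
    over normalized cortical depth: mean amplitude plus the first four
    depth moments (centroid, spread, skewness, kurtosis)."""
    y = np.asarray(profile, dtype=float)
    x = np.linspace(0.0, 1.0, len(y))   # normalized cortical depth
    w = y / y.sum()                     # profile as a frequency distribution
    mu = float(w @ x)                   # centre of gravity
    sd = np.sqrt(float(w @ (x - mu) ** 2))
    skew = float(w @ (x - mu) ** 3) / sd ** 3
    kurt = float(w @ (x - mu) ** 4) / sd ** 4
    return [float(y.mean()), mu, sd, skew, kurt]

def profile_feature_vector(profile):
    """10-feature vector: moments of the profile and of the magnitude
    of its discrete derivative."""
    deriv = np.abs(np.diff(profile))
    return moment_features(profile) + moment_features(deriv)
```

A profile that is symmetric about mid-depth yields a centroid of 0.5 and near-zero skewness, which is what makes these moments useful for discriminating laminar patterns.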
{ "cite_N": [ "@cite_13", "@cite_2" ], "mid": [ "2117144089", "180394468" ], "abstract": [ "This is the second article in a series of three studies that investigate the anatomical determinants of thalamocortical (TC) input to excitatory neurons in a cortical column of rat primary somatosensory cortex (S1). Here, we report the number and distribution of NeuN-positive neurons within the C2, D2, and D3 TC projection columns in P27 rat somatosensory barrel cortex based on an exhaustive identification of 89834 somata in a 1.15 mm", "This chapter discusses the principles of classical architectonic mapping in the context of recent imaging techniques. It presents an observer-independent approach for a quantitative analysis of cortical areas and their borders, which is based on a multivariate statistical analysis of the cytoarchitecture and illustrates the application of this approach for the cytoarchitectonic mapping of the human visual cortex. Important criteria for cytoarchitectonic mapping are the absolute thickness of cortical layers, the proportionate thickness of a layer relative to the other cortical layers and to the total cortical depth, the presence of clearly recognizable laminar borders and vertical columns, the packing density and size of neuronal cell bodies, the homogeneous or clustered distribution of cell bodies throughout the layers, and the presence of special cell types such as Betz cells. Understanding the regional distributions of neurotransmitter receptors is likely to provide a crucial intermediary level of description between function and structure, since different cytoarchitectonic and functional areas have different mean receptor densities as well as distinct laminar distribution patterns. Another way for a better understanding of brain function and the underlying anatomy is to compare architectonic maps obtained in postmortem brains with activation maps obtained in functional imaging studies in a common spatial reference system. Since these two kinds of maps stem from different subsets of brains, such a comparison must be performed on a probabilistic basis." ] }
1905.01173
2943714223
In this paper, we present a novel method for analysis and segmentation of laminar structure of the cortex based on tissue characteristics whose change across the gray matter facilitates distinction between cortical layers. We develop and analyze features of individual neurons to investigate changes in architectonic differentiation and present a novel high-performance, automated tree-ensemble method trained on data manually labeled by three human investigators. From the location and basic measures of neurons, more complex features are developed and used in machine learning models for automatic segmentation of cortical layers. Tree ensembles are used on data manually labeled by three human experts. The most accurate classification results were obtained by training three models separately and creating another ensemble by combining probability outputs for final neuron layer classification. Measurement of importances of developed neuron features on both global model level and individual prediction level are obtained.
The first article that uses features and statistics of individual neurons appeared in 2017. In @cite_10 , the authors use automatic segmentation of cells in @math m thick Nissl-stained sections of the mouse brain and develop @math shape descriptor features for each cell. After manually setting the upper and lower bounds for thresholding, a binary image is produced, and blobs are classified as glial cells or neurons, which are further subdivided into pyramidal and non-pyramidal. Using profiles across the mouse cortex for each cell type, the authors distinguish five cortical layers, merging layers II and III into a single layer. However, only a few hundred cells are found on a small patch of tissue labeled by one expert. The method nevertheless offers a novel approach by using shape descriptors of individual cells for cell classification. A year later, the use of GLI profiling on a large 3D dataset was presented @cite_23 . The authors develop an automated analysis of the laminar structure of the BigBrain @cite_6 , distinguishing layers by analyzing GLI peaks. A relation between cytoarchitectonics and in vivo MRI imaging is discussed.
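Moment-based shape descriptors of a segmented cell blob, of the kind such pipelines compute before classifying cells, can be sketched as follows. This is a generic illustration (the specific descriptors of @cite_10 are not reproduced here), using only image moments so no segmentation library is needed.

```python
import numpy as np

def blob_shape_descriptors(mask):
    """A few moment-based shape descriptors of one binary blob:
    area, centroid, and elongation (ratio of principal-axis lengths
    derived from the second central moments)."""
    rs, cs = np.nonzero(mask)
    area = rs.size
    cr, cc = rs.mean(), cs.mean()
    # covariance of pixel coordinates -> principal-axis lengths
    cov = np.cov(np.vstack([rs - cr, cs - cc]))
    eig = np.sort(np.linalg.eigvalsh(cov))
    elongation = float(np.sqrt(eig[1] / max(eig[0], 1e-12)))
    return {"area": int(area),
            "centroid": (float(cr), float(cc)),
            "elongation": elongation}
```

A compact, roughly round soma gives an elongation near 1, while stretched blobs score much higher, which is the kind of cue a pyramidal/non-pyramidal classifier could use.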
{ "cite_N": [ "@cite_10", "@cite_6", "@cite_23" ], "mid": [ "2753351808", "2037192432", "2801394998" ], "abstract": [ "We present an image processing algorithm for automatic detection of cortex layers and cells from optical microscopy images for Nissl-stained mouse brain. For every layer of cortex we automatically detect a shape and localization of following cortex cells type: neurons of molecular layer, pyramidal neurons, stellate neurons, and astrocytes. The algorithm includes the steps of: preprocessing, neurons and astrocytes localization, neurons classification, refined cortex layer detection, neurons reclassification. For preprocessing we use converting to gray image, Gaussian blurring, converting to black-white image after background removing, and rough estimate of layers. We use morphological operations with variation radius of structure element for neurons localization and neurons classification.", "Reference brains are indispensable tools in human brain mapping, enabling integration of multimodal data into an anatomically realistic standard space. Available reference brains, however, are restricted to the macroscopic scale and do not provide information on the functionally important microscopic dimension. We created an ultrahigh-resolution three-dimensional (3D) model of a human brain at nearly cellular resolution of 20 micrometers, based on the reconstruction of 7404 histological sections. “BigBrain” is a free, publicly available tool that provides considerable neuroanatomical insight into the human brain, thereby allowing the extraction of microscopic data for modeling and simulation. BigBrain enables testing of hypotheses on optimal path lengths between interconnected cortical regions or on spatial organization of genetic patterning, redefining the traditional neuroanatomy maps such as those of Brodmann and von Economo.", "" ] }
1905.01173
2943714223
In this paper, we present a novel method for analysis and segmentation of laminar structure of the cortex based on tissue characteristics whose change across the gray matter facilitates distinction between cortical layers. We develop and analyze features of individual neurons to investigate changes in architectonic differentiation and present a novel high-performance, automated tree-ensemble method trained on data manually labeled by three human investigators. From the location and basic measures of neurons, more complex features are developed and used in machine learning models for automatic segmentation of cortical layers. Tree ensembles are used on data manually labeled by three human experts. The most accurate classification results were obtained by training three models separately and creating another ensemble by combining probability outputs for final neuron layer classification. Measurement of importances of developed neuron features on both global model level and individual prediction level are obtained.
Most recently, in 2018, the first approach that does not use profiles across the cortex was proposed @cite_38 . A combination of unsupervised and supervised machine learning was applied to an extensive dataset of 2-photon microscopy images of the mouse cortex. Neurons are automatically segmented, and neurobiologically sound statistics are developed and combined with texture descriptors to produce several feature sets used in the learning methods. The importance of neuron density is especially emphasized; it alone leads the proposed unsupervised method to reveal laminar hierarchies. The authors also address the issue of human bias in manual segmentation of cortical layers and use an unsupervised clustering approach to identify and represent the laminar structure. Supervised learning is then used to transfer the resulting layer segmentation to different brain regions.
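The density-driven unsupervised step can be illustrated with a toy 1-D k-means over per-depth-bin neuron densities. This is a deliberately simplified stand-in for the clustering actually used in @cite_38, with quantile initialization chosen here purely for determinism.

```python
import numpy as np

def kmeans_1d(values, k, iters=50):
    """Tiny 1-D k-means (Lloyd's algorithm) grouping depth bins by
    local neuron density; centers start at evenly spaced quantiles."""
    v = np.asarray(values, dtype=float)
    centers = np.quantile(v, np.linspace(0.0, 1.0, k))
    labels = np.zeros(len(v), dtype=int)
    for _ in range(iters):
        # assign each bin to its nearest center, then update the centers
        labels = np.argmin(np.abs(v[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = v[labels == j].mean()
    return labels, centers
```

Contiguous runs of equal labels along the depth axis then correspond to candidate laminae, which a supervised stage could relabel consistently across regions.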
{ "cite_N": [ "@cite_38" ], "mid": [ "2953338657" ], "abstract": [ "The laminar organization of the cerebral cortex is a fundamental characteristic of the brain, with essential implications for cortical function. Due to the rapidly growing amount of high-resolution brain imaging data, a great demand arises for automated and flexible methods for discriminating the laminar texture of the cortex. Here, we propose a combined approach of unsupervised and supervised machine learning to discriminate the hierarchical cortical laminar organization in high-resolution 2-photon microscopic neural image data of mouse brain without observer bias, that is, without the prerequisite of manually labeled training data. For local cortical foci, we modify an unsupervised clustering approach to identify and represent the laminar cortical structure. Subsequently, supervised machine learning is applied to transfer the resulting layer labels across different locations and image data, to ensure the existence of a consistent layer label system. By using neurobiologically meaningful features, the discrimination results are shown to be consistent with the layer classification of the classical Brodmann scheme, and provide additional insight into the structure of the cerebral cortex and its hierarchical organization. Thus, our work paves a new way for studying the anatomical organization of the cerebral cortex, and potentially its functional organization." ] }
1905.01248
2942857324
Coordinated dual-arm manipulation tasks can be broadly characterized as possessing absolute and relative motion components. Relative motion tasks, in particular, are inherently redundant in the way they can be distributed between end-effectors. In this work, we analyse cooperative manipulation in terms of the asymmetric resolution of relative motion tasks. We discuss how existing approaches enable the asymmetric execution of a relative motion task, and show how an asymmetric relative motion space can be defined. We leverage this result to propose an extended relative Jacobian to model the cooperative system, which allows a user to set a concrete degree of asymmetry in the task execution. This is achieved without the need for prescribing an absolute motion target. Instead, the absolute motion remains available as a functional redundancy to the system. We illustrate the properties of our proposed Jacobian through numerical simulations of a novel differential Inverse Kinematics algorithm.
A common solution to the problem of bimanual cooperative manipulation is to employ a completely asymmetrical leader-follower (or master-slave) approach @cite_19 @cite_10 @cite_9 @cite_28 @cite_1 . Alternatively, by considering the external and internal forces on a jointly held rigid object, the task can be modelled in terms of absolute and relative motion components @cite_13 @cite_24 . The Cooperative Task Space (CTS) definition results from a modelling approach which is independent of the statics of the dual-armed system @cite_30 @cite_26 . The Extended CTS (ECTS) @cite_32 @cite_27 is obtained by redefining the absolute motion space of the coordinated system. CTS-based approaches have been used to describe coordinated tasks in, e.g., human-robot interaction settings @cite_21 , the cooperative manipulation of a mechanism @cite_5 , or the execution of a bimanual dexterous manipulation task @cite_20 .
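The CTS/ECTS distinction can be sketched with a blended absolute-motion row: an asymmetry parameter alpha weights the two arm Jacobians, with alpha = 0.5 recovering the symmetric CTS, while the relative row is the usual difference. This is a simplified illustration that ignores the frame transforms between end-effectors, not the exact formulation of the cited works.

```python
import numpy as np

def ects_jacobian(J1, J2, alpha=0.5):
    """Stacked cooperative Jacobian sketch: a blended absolute-motion
    block (asymmetry parameter alpha in [0, 1]) over the relative-motion
    block. alpha = 0.5 recovers the symmetric CTS definition."""
    top = np.hstack([(1.0 - alpha) * J1, alpha * J2])   # absolute motion
    bottom = np.hstack([-J1, J2])                        # relative motion
    return np.vstack([top, bottom])
```

With identical joint velocities on both arms the relative rows produce zero, i.e. pure absolute motion; pushing alpha toward 0 or 1 shifts the absolute task onto one arm, which is the asymmetric behaviour the ECTS formalizes.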
{ "cite_N": [ "@cite_13", "@cite_30", "@cite_26", "@cite_28", "@cite_9", "@cite_21", "@cite_1", "@cite_32", "@cite_24", "@cite_19", "@cite_27", "@cite_5", "@cite_10", "@cite_20" ], "mid": [ "2128671915", "1506600535", "", "", "", "2567946639", "", "1533220671", "", "2092453501", "", "2799879332", "", "2963348180" ], "abstract": [ "In this paper we discuss the control of cooperating tasks being done by two robotic arms. In order to control those tasks, we extend hybrid position force control scheme presented thus far by various researchers for a single-arm robot. The point of the extension is formulation of kinematics and statics for a two-arm robot which is new in this paper. We define a unique system of workspace coordinates and, corresponding to the unique workspace, introduce an unique jointspace vector consisting of joint-vectors of the two arms. Using these work and joint spaces, we formulate kinematics and statics. Based upon this formulation, we successfully apply the hybrid scheme to the two-arm robot. A demonstration of the theory working on a real two-arm industrial robot and experimental data of simultaneous control of position and force proves the effectiveness of our method.", "Three schemes are developed which are aimed at achieving cooperative control of multiple arm systems manipulating a common object. The first scheme operates wholly on the object task space variables. The second scheme operates on the joint space variables that can be derived via a kinematic inversion from the cooperative task space variables. The third scheme combines the features of the two by solving the cooperation at the inverse kinematic level and acting the control at the object level. Simulation results are provided for a two-arm planar system to investigate the behavior of the controlled system in the case of inaccurate object modeling. >", "", "", "", "We propose a human-robot cooperation scheme for bimanual robots. After the initial task demonstration, the human co-worker can modify both the spatial course of motion as well as the speed of execution in an intuitive way. To achieve this goal, speed-scaled dynamic motion primitives are applied for the underlying task representation. The proposed adaptation scheme adjusts the robot's stiffness in path operational space, i. e. along the trajectory. It allows a human co-worker to be less precise in the parts of the task that require high precision, as the precision aspect can be provided by the robot. The required dynamic capabilities of the robot were obtained by decoupling the bimanual robot dynamics in operational space, which is attached to the desired trajectory. The proposed scheme was validated in a task where two Kuka LWR-4 robot arms cooperate with a human to carry an object.", "", "A humanoid robot can be viewed as a constrained dynamic system with constraints imposed by manipulation tasks, locomotion tasks, and the environment. This paper focuses on dealing with constraints in the upper-body of humanoid robots for manipulation tasks that involve coordinated motion of two arms. Inspired by research on human bimanual actions in the biomechanics area, we have developed the Extended-Cooperative-Task-Space (ECTS) representation that efficiently describes various coordinated motion tasks performed by a humanoid robot. Furthermore, we present a general whole-body control framework as an optimal controller based on Gauss's principle of least constraint. We show that all the constraints imposed on a humanoid system can be handled in a unified manner. The proposed framework is verified by numerical simulations on a Hubo II+ humanoid robot model.", "", "Tasks for two coordinated industrial robots always bring the robots in contact with the same object. Physically the three form a closed kinematic chain mechanism. When the chain is in motion, the positions and orientations of the two robots must satisfy a set of holonomic equality constraints for every time instant. To eliminate motion errors between them, we assign one of them to carry the major part of the task. Its motion is planned accordingly. The motion of the second robot is to follow that of the first robot, as specified by the relations of the joint velocities derived from the constraint conditions. Thus if any modification of the motion is needed in real time, only the motion of the first robot is modified. The modification for the second robot is done implicitly through the constraint conditions. Specifically, when the joint displacements, velocities, and accelerations of the first robot are known for the planned or modified motion, the corresponding variables for the second robot and the for...", "", "In this work, we address the dual-arm manipulation of a two degrees-of-freedom articulated object that consists of two rigid links. This can include a linkage constrained along two motion directions, or two objects in contact, where the contact imposes motion constraints. We formulate the problem as a cooperative task, which allows the employment of coordinated task space frameworks, thus enabling redundancy exploitation by adjusting how the task is shared by the robot arms. In addition, we propose a method that can estimate the joint location and the direction of the degrees-of-freedom, based on the contact forces and the motion constraints imposed by the object. Experimental results demonstrate the performance of the system in its ability to estimate the two degrees of freedom independently or simultaneously.", "", "We propose the Dexterous Manipulation Graph as a tool to address in-hand manipulation and reposition an object inside a robot's end-effector. This graph is used to plan a sequence of manipulation primitives so to bring the object to the desired end pose. This sequence of primitives is translated into motions of the robot to move the object held by the end-effector. We use a dual arm robot with parallel grippers to test our method on a real system and show successful planning and execution of in-hand manipulation." ] }
1905.01248
2942857324
Coordinated dual-arm manipulation tasks can be broadly characterized as possessing absolute and relative motion components. Relative motion tasks, in particular, are inherently redundant in the way they can be distributed between end-effectors. In this work, we analyse cooperative manipulation in terms of the asymmetric resolution of relative motion tasks. We discuss how existing approaches enable the asymmetric execution of a relative motion task, and show how an asymmetric relative motion space can be defined. We leverage this result to propose an extended relative Jacobian to model the cooperative system, which allows a user to set a concrete degree of asymmetry in the task execution. This is achieved without the need for prescribing an absolute motion target. Instead, the absolute motion remains available as a functional redundancy to the system. We illustrate the properties of our proposed Jacobian through numerical simulations of a novel differential Inverse Kinematics algorithm.
Relative Jacobian methods model solely the relative motion task @cite_34 @cite_25 @cite_29 , making it a common choice when addressing tasks that require just the relative motion space, such as machining @cite_2 @cite_33 @cite_14 , assembly @cite_18 or drawing @cite_15 . The absolute motion is part of the relative Jacobian's redundant space. This can be exploited, e.g., to enhance self-collision and obstacle avoidance as secondary tasks @cite_22 , or for joint-limit avoidance @cite_16 @cite_7 @cite_0 .
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_33", "@cite_22", "@cite_7", "@cite_29", "@cite_16", "@cite_0", "@cite_2", "@cite_15", "@cite_34", "@cite_25" ], "mid": [ "1988257509", "", "", "2108538808", "", "", "2766050731", "", "2110110938", "1496308346", "2156039572", "" ], "abstract": [ "", "", "", "This paper proposes a method of planning collision free joints paths for two cooperative manipulators. When two manipulators hold a common object and execute specified tasks, there are problems related to handling and collision. To move two manipulators effectively, all of them must be considered together. There are redundancy aspects peculiar to the cooperative two manipulators. We use this redundancy and the potential function method to avoid collisions between the object and the link, the link and the link, the link and the obstacle and so on. Simulation results show the effectiveness of the proposed method.", "", "", "Abstract Cooperative manipulation of a rigid object is challenging and represents an interesting and active research area, especially when these robots are subject to joint and task prioritization constraints. In cooperative manipulation, a primary task is to maintain the coordination of motions, to avoid severe damage caused by the violation of kinematic constraints imposed by the closed chain mechanism. This paper proposes a kinematic controller for dual-arm cooperative manipulation that ensures safety by providing relative coordinated motion as highest priority task and joint limit avoidance and world-space trajectory following at a lower priority. The coordination of motions is based on modular relative Jacobian formulation. The approach is applicable to systems composed of redundant or non-redundant manipulators. Experiments in simulation demonstrate the behavior of the approach under different redundancy configurations. 
Experiments on two robots with different number of redundant motions show the applicability of the proposed approach to cooperative manipulation under joint limit constraints.", "", "Recent research has considered robotic machining as an alternative to traditional computer numerical control machining, particularly for prototyping applications. However, unlike traditional machine tools, robots are subject to relatively larger dynamic disturbances and operate closer to their torque limits. These factors, combined with inaccurate manipulator and machining process models, can cause joint actuator saturation during operation. This paper presents a trajectory planner that will reduce torques that are near saturation by generating trajectories with a weighted pseudoinverse. Using a relative Jacobian, the tool path is resolved into joint trajectories at the acceleration level. This paper presents a new method for selecting the weighting matrix based on the proximity of the joint torques to saturation limits. This weighting reduces the joint accelerations contributing the most to the torques near saturation, thereby reducing the joint torques. The accelerations of other joints increase to satisfy the increased demand. The effectiveness of the acceleration and torque redistribution algorithm has been demonstrated via extensive simulations.", "This paper proposes an optimization method for dual-arm robots, inspired by a study on human asymmetric bimanual action called Guiard's principle which states that when humans perform asymmetric bimanual tasks, the right hand (as the lateral preference) performs a fine motion (and force) resolution, while the left hand performs a coarse resolution. To effectively transfer the human bimanual-task knowledge to dual-arm robots, we proposed a cost function, based on task-compatibility index, which is used to set the desired motion and force resolution for each end-effector of the dual-arm according to its role in the bimanual action. 
Thus the right-arm posture is optimized such that the tool attached to its end-effector can exert fine motion and force. And the left-arm posture is optimized to assume a strong and dynamic structural support for the right arm action. We experimentally compare the proposed cost function against previous methods for dual-arm robots using two six degrees-of-freedom torque-controlled manipulators. The control performance, when mimicking Guiard's principle shows considerably better results.", "A formulation for online trajectory generation for two robots cooperating to perform an assembly task is derived. The two robots are treated as a single redundant system. A Jacobian is formulated that relates the joint rates of the entire system to the relative motion of one of the hands with respect to the other. The minimum norm solution of this relative Jacobian equation results in a set of joint rates which perform the cooperative task. In addition to the cooperative task, secondary goals, which include obstacle and joint limit avoidance, are specified using velocities in the null space of the relative Jacobian. This formulation also allows the robots to be controlled in parallel on independent tasks", "" ] }
1905.01248
2942857324
Coordinated dual-arm manipulation tasks can be broadly characterized as possessing absolute and relative motion components. Relative motion tasks, in particular, are inherently redundant in the way they can be distributed between end-effectors. In this work, we analyse cooperative manipulation in terms of the asymmetric resolution of relative motion tasks. We discuss how existing approaches enable the asymmetric execution of a relative motion task, and show how an asymmetric relative motion space can be defined. We leverage this result to propose an extended relative Jacobian to model the cooperative system, which allows a user to set a concrete degree of asymmetry in the task execution. This is achieved without the need for prescribing an absolute motion target. Instead, the absolute motion remains available as a functional redundancy to the system. We illustrate the properties of our proposed Jacobian through numerical simulations of a novel differential Inverse Kinematics algorithm.
Hierarchical quadratic programs (HQP) @cite_17 are an alternative approach to obtain solutions to the problem of inverse differential kinematics. While in this article we focus on pseudo-inverse solutions to the differential IK problem, all the discussed Jacobian formulations can be employed in the context of HQP, as shown in, e.g., @cite_4 and @cite_11 for the relative Jacobian.
{ "cite_N": [ "@cite_11", "@cite_4", "@cite_17" ], "mid": [ "2800788658", "2406214672", "2019606703" ], "abstract": [ "To make production lines more flexible, dual-arm robots are good candidates to be deployed in autonomous assembly units. In this paper, we propose a sparse kinematic control strategy, that minimizes the number of joints actuated for a coordinated task between two arms. The control strategy is based on a hierarchical sparse QP architecture. We present experimental results that highlight the capability of this architecture to produce sparser motions (for an assembly task) than those obtained with standard controllers.", "Human-robot interaction (HRI) is a key element for diffusion of robotised production. Clear advantages in flexibility and productivity are possible, when the two operators are free to interact, as they are endowed with complementary skills. To achieve such a goal, safety systems capable of coping with task and robot constraints have to be designed. In this paper, a collision avoidance strategy, tackling consistency with task constraints and robot kinematic limitations, is proposed. Robot joint velocities are selected with a QP optimisation problem, minimising the difference from evasive velocities, while respecting task constraints. Integration with an industrial controller is discussed as well, while the strategy is experimentally validated on a dual arm industrial robot prototype, working in close interaction with a human.", "Hierarchical least-square optimization is often used in robotics to inverse a direct function when multiple incompatible objectives are involved. Typical examples are inverse kinematics or dynamics. The objectives can be given as equalities to be satisfied e.g. point-to-point task or as areas of satisfaction e.g. the joint range. This paper proposes a complete solution to solve multiple least-square quadratic problems of both equality and inequality constraints ordered into a strict hierarchy. 
Our method is able to solve a hierarchy of only equalities 10 times faster than the iterative-projection hierarchical solvers and can consider inequalities at any level while running at the typical control frequency on whole-body size problems. This generic solver is used to resolve the redundancy of humanoid robots while generating complex movements in constrained environments." ] }
1905.01077
2904022486
It is well believed that video captioning is a fundamental but challenging task in both computer vision and artificial intelligence fields. The prevalent approach is to map an input video to a variable-length output sentence in a sequence to sequence manner via Recurrent Neural Network (RNN). Nevertheless, the training of RNN still suffers to some degree from vanishing/exploding gradient problem, making the optimization difficult. Moreover, the inherently recurrent dependency in RNN prevents parallelization within a sequence during training and therefore limits the computations. In this paper, we present a novel design --- Temporal Deformable Convolutional Encoder-Decoder Networks (dubbed as TDConvED) that fully employ convolutions in both encoder and decoder networks for video captioning. Technically, we exploit convolutional block structures that compute intermediate states of a fixed number of inputs and stack several blocks to capture long-term relationships. The structure in encoder is further equipped with temporal deformable convolution to enable free-form deformation of temporal sampling. Our model also capitalizes on temporal attention mechanism for sentence generation. Extensive experiments are conducted on both MSVD and MSR-VTT video captioning datasets, and superior results are reported when comparing to conventional RNN-based encoder-decoder techniques. More remarkably, TDConvED increases CIDEr-D performance from 58.8 to 67.2 on MSVD.
The dominant direction in modern video captioning is sequence learning approaches @cite_9 @cite_14 @cite_36 @cite_28 @cite_5 @cite_29 @cite_20 @cite_4 which utilize RNN-based architecture to generate novel sentences with flexible syntactical structures. For instance, Venugopalan present an LSTM-based model to generate video descriptions with the mean pooled representation over all frames in @cite_5 . The framework is then extended by inputting both frames and optical flow images into an encoder-decoder LSTM in @cite_28 . Compared to mean pooling, Yao propose to utilize the temporal attention mechanism to exploit temporal structure for video captioning @cite_29 . Later in @cite_22 , Zhao design an object-aware tube feature for video captioning to enable attention on salient objects. Most recently, Hao develop several deep fusion strategies to effectively integrate both visual and audio cues into sentence generation in @cite_7 .
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_22", "@cite_7", "@cite_36", "@cite_28", "@cite_9", "@cite_29", "@cite_5", "@cite_20" ], "mid": [ "2951159095", "2799042952", "2807834696", "2770520232", "2752191396", "2950019618", "1573040851", "2950307714", "2136036867", "1957740064" ], "abstract": [ "Automatically generating natural language descriptions of videos plays a fundamental challenge for computer vision community. Most recent progress in this problem has been achieved through employing 2-D and or 3-D Convolutional Neural Networks (CNN) to encode video content and Recurrent Neural Networks (RNN) to decode a sentence. In this paper, we present Long Short-Term Memory with Transferred Semantic Attributes (LSTM-TSA)---a novel deep architecture that incorporates the transferred semantic attributes learnt from images and videos into the CNN plus RNN framework, by training them in an end-to-end manner. The design of LSTM-TSA is highly inspired by the facts that 1) semantic attributes play a significant contribution to captioning, and 2) images and videos carry complementary semantics and thus can reinforce each other for captioning. To boost video captioning, we propose a novel transfer unit to model the mutually correlated attributes learnt from images and videos. Extensive experiments are conducted on three public datasets, i.e., MSVD, M-VAD and MPII-MD. Our proposed LSTM-TSA achieves to-date the best published performance in sentence generation on MSVD: 52.8 and 74.0 in terms of BLEU@4 and CIDEr-D. Superior results when compared to state-of-the-art methods are also reported on M-VAD and MPII-MD.", "Automatically describing a video with natural language is regarded as a fundamental challenge in computer vision. The problem nevertheless is not trivial especially when a video contains multiple events to be worthy of mention, which often happens in real videos. 
A valid question is how to temporally localize and then describe events, which is known as \"dense video captioning.\" In this paper, we present a novel framework for dense video captioning that unifies the localization of temporal event proposals and sentence generation of each proposal, by jointly training them in an end-to-end manner. To combine these two worlds, we integrate a new design, namely descriptiveness regression, into a single shot detection structure to infer the descriptive complexity of each detected proposal via sentence generation. This in turn adjusts the temporal locations of each event proposal. Our model differs from existing dense video captioning methods since we propose a joint and global optimization of detection and captioning, and the framework uniquely capitalizes on an attribute-augmented video captioning architecture. Extensive experiments are conducted on ActivityNet Captions dataset and our framework shows clear improvements when compared to the state-of-the-art techniques. More remarkably, we obtain a new record: METEOR of 12.96 on ActivityNet Captions official test set.", "", "Video caption refers to generating a descriptive sentence for a specific short video clip automatically, which has achieved remarkable success recently. However, most of the existing methods focus more on visual information while ignoring the synchronized audio cues. We propose three multimodal deep fusion strategies to maximize the benefits of visual-audio resonance information. The first one explores the impact on cross-modalities feature fusion from low to high order. The second establishes the visual-audio short-term dependency by sharing weights of corresponding front-end networks. The third extends the temporal dependency to long-term through sharing multimodal memory across visual and audio modalities. 
Extensive experiments have validated the effectiveness of our three cross-modalities fusion strategies on two benchmark datasets, including Microsoft Research Video to Text (MSRVTT) and Microsoft Video Description (MSVD). It is worth mentioning that sharing weight can coordinate visual-audio feature fusion effectively and achieve the state-of-art performance on both BELU and METEOR metrics. Furthermore, we first propose a dynamic multimodal feature fusion framework to deal with the part modalities missing case. Experimental results demonstrate that even in the audio absence mode, we can still obtain comparable results with the aid of the additional audio modality inference module.", "The topic diversity of open-domain videos leads to various vocabularies and linguistic expressions in describing video contents, and therefore, makes the video captioning task even more challenging. In this paper, we propose an unified caption framework, M&M TGM, which mines multimodal topics in unsupervised fashion from data and guides the caption decoder with these topics. Compared to pre-defined topics, the mined multimodal topics are more semantically and visually coherent and can reflect the topic distribution of videos better. We formulate the topic-aware caption generation as a multi-task learning problem, in which we add a parallel task, topic prediction, in addition to the caption task. For the topic prediction task, we use the mined topics as the teacher to train a student topic prediction model, which learns to predict the latent topics from multimodal contents of videos. The topic prediction provides intermediate supervision to the learning process. As for the caption task, we propose a novel topic-aware decoder to generate more accurate and detailed video descriptions with the guidance from latent topics. The entire learning procedure is end-to-end and it optimizes both tasks simultaneously. 
The results from extensive experiments conducted on the MSR-VTT and Youtube2Text datasets demonstrate the effectiveness of our proposed model. M&M TGM not only outperforms prior state-of-the-art methods on multiple evaluation metrics and on both benchmark datasets, but also achieves better generalization ability.", "Real-world videos often have complex dynamics; and methods for generating open-domain video descriptions should be sensitive to temporal structure and allow both input (sequence of frames) and output (sequence of words) of variable length. To approach this problem, we propose a novel end-to-end sequence-to-sequence model to generate captions for videos. For this we exploit recurrent neural networks, specifically LSTMs, which have demonstrated state-of-the-art performance in image caption generation. Our LSTM model is trained on video-sentence pairs and learns to associate a sequence of video frames to a sequence of words in order to generate a description of the event in the video clip. Our model naturally is able to learn the temporal structure of the sequence of frames as well as the sequence model of the generated sentences, i.e. a language model. We evaluate several variants of our model that exploit different visual features on a standard set of YouTube videos and two movie description datasets (M-VAD and MPII-MD).", "Automatically describing video content with natural language is a fundamental challenge of computer vision. Recurrent Neural Networks (RNNs), which models sequence dynamics, has attracted increasing attention on visual interpretation. However, most existing approaches generate a word locally with the given previous words and the visual content, while the relationship between sentence semantics and visual content is not holistically exploited. As a result, the generated sentences may be contextually correct but the semantics (e.g., subjects, verbs or objects) are not true.
This paper presents a novel unified framework, named Long Short-Term Memory with visual-semantic Embedding (LSTM-E), which can simultaneously explore the learning of LSTM and visual-semantic embedding. The former aims to locally maximize the probability of generating the next word given previous words and visual content, while the latter is to create a visual-semantic embedding space for enforcing the relationship between the semantics of the entire sentence and visual content. The experiments on YouTube2Text dataset show that our proposed LSTM-E achieves to-date the best published performance in generating natural sentences: 45.3 and 31.0 in terms of BLEU@4 and METEOR, respectively. Superior performances are also reported on two movie description datasets (M-VAD and MPII-MD). In addition, we demonstrate that LSTM-E outperforms several state-of-the-art techniques in predicting Subject-Verb-Object (SVO) triplets.", "Recent progress in using recurrent neural networks (RNNs) for image description has motivated the exploration of their application for video description. However, while images are static, working with videos requires modeling their dynamic temporal structure and then properly integrating that information into a natural language description. In this context, we propose an approach that successfully takes into account both the local and global temporal structure of videos to produce descriptions. First, our approach incorporates a spatial temporal 3-D convolutional neural network (3-D CNN) representation of the short temporal dynamics. The 3-D CNN representation is trained on video action recognition tasks, so as to produce a representation that is tuned to human motion and behavior. Second we propose a temporal attention mechanism that allows to go beyond local temporal modeling and learns to automatically select the most relevant temporal segments given the text-generating RNN. 
Our approach exceeds the current state-of-art for both BLEU and METEOR metrics on the Youtube2Text dataset. We also present results on a new, larger and more challenging dataset of paired video and natural language descriptions.", "Solving the visual symbol grounding problem has long been a goal of artificial intelligence. The field appears to be advancing closer to this goal with recent breakthroughs in deep learning for natural language grounding in static images. In this paper, we propose to translate videos directly to sentences using a unified deep neural network with both convolutional and recurrent structure. Described video datasets are scarce, and most existing methods have been applied to toy domains with a small vocabulary of possible words. By transferring knowledge from 1.2M+ images with category labels and 100,000+ images with captions, our method is able to create sentence descriptions of open-domain videos with large vocabularies. We compare our approach with recent work using language generation metrics, subject, verb, and object prediction accuracy, and a human evaluation.", "We present an approach that exploits hierarchical Recurrent Neural Networks (RNNs) to tackle the video captioning problem, i.e., generating one or multiple sentences to describe a realistic video. Our hierarchical framework contains a sentence generator and a paragraph generator. The sentence generator produces one simple short sentence that describes a specific short video interval. It exploits both temporal- and spatial-attention mechanisms to selectively focus on visual elements during generation. The paragraph generator captures the inter-sentence dependency by taking as input the sentential embedding produced by the sentence generator, combining it with the paragraph history, and outputting the new initial state for the sentence generator. We evaluate our approach on two large-scale benchmark datasets: YouTubeClips and TACoS-MultiLevel. 
The experiments demonstrate that our approach significantly outperforms the current state-of-the-art methods with BLEU@4 scores 0.499 and 0.305 respectively." ] }
1905.01077
2904022486
It is well believed that video captioning is a fundamental but challenging task in both computer vision and artificial intelligence fields. The prevalent approach is to map an input video to a variable-length output sentence in a sequence to sequence manner via Recurrent Neural Network (RNN). Nevertheless, the training of RNN still suffers to some degree from vanishing/exploding gradient problem, making the optimization difficult. Moreover, the inherently recurrent dependency in RNN prevents parallelization within a sequence during training and therefore limits the computations. In this paper, we present a novel design --- Temporal Deformable Convolutional Encoder-Decoder Networks (dubbed as TDConvED) that fully employ convolutions in both encoder and decoder networks for video captioning. Technically, we exploit convolutional block structures that compute intermediate states of a fixed number of inputs and stack several blocks to capture long-term relationships. The structure in encoder is further equipped with temporal deformable convolution to enable free-form deformation of temporal sampling. Our model also capitalizes on temporal attention mechanism for sentence generation. Extensive experiments are conducted on both MSVD and MSR-VTT video captioning datasets, and superior results are reported when comparing to conventional RNN-based encoder-decoder techniques. More remarkably, TDConvED increases CIDEr-D performance from 58.8 to 67.2 on MSVD.
The most typical paradigm in sequence learning is the RNN-based encoder-decoder structure, which mainly capitalizes on RNN to model the probability of decoding an output given the previous outputs and all the inputs. Although remarkable results have been observed on a number of sequential tasks (e.g., image/video captioning, machine translation), the inherent recursive characteristic inevitably limits parallelization and even raises vanishing/exploding gradient problems in the training stage. To tackle these barriers, there is an emerging trend of leveraging Convolutional Neural Network (CNN) for sequence learning in language modeling for NLP tasks @cite_17 @cite_25 @cite_37 @cite_26 . A convolutional encoder with a gated architecture has been designed in @cite_17 , which could pinpoint the parts of a source sentence that are relevant to the target word for machine translation. Recently, the first fully convolutional model for sequence learning is proposed in @cite_26 to design both encoder and decoder in the form of convolutions with CNN, which even outperforms strong recurrent models on the machine translation task.
{ "cite_N": [ "@cite_37", "@cite_26", "@cite_25", "@cite_17" ], "mid": [ "2540404261", "2613904329", "2952436057", "2132043663" ], "abstract": [ "We present a novel neural network for processing sequences. The ByteNet is a one-dimensional convolutional neural network that is composed of two parts, one to encode the source sequence and the other to decode the target sequence. The two network parts are connected by stacking the decoder on top of the encoder and preserving the temporal resolution of the sequences. To address the differing lengths of the source and the target, we introduce an efficient mechanism by which the decoder is dynamically unfolded over the representation of the encoder. The ByteNet uses dilation in the convolutional layers to increase its receptive field. The resulting network has two core properties: it runs in time that is linear in the length of the sequences and it sidesteps the need for excessive memorization. The ByteNet decoder attains state-of-the-art performance on character-level language modelling and outperforms the previous best results obtained with recurrent networks. The ByteNet also achieves state-of-the-art performance on character-to-character machine translation on the English-to-German WMT translation task, surpassing comparable neural translation models that are based on recurrent networks with attentional pooling and run in quadratic time. We find that the latent alignment structure contained in the representations reflects the expected alignment between the tokens.", "The prevalent approach to sequence to sequence learning maps an input sequence to a variable length output sequence via recurrent neural networks. We introduce an architecture based entirely on convolutional neural networks. Compared to recurrent models, computations over all elements can be fully parallelized during training and optimization is easier since the number of non-linearities is fixed and independent of the input length. 
Our use of gated linear units eases gradient propagation and we equip each decoder layer with a separate attention module. We outperform the accuracy of the deep LSTM setup of (2016) on both WMT'14 English-German and WMT'14 English-French translation at an order of magnitude faster speed, both on GPU and CPU.", "Recurrent neural networks are a powerful tool for modeling sequential data, but the dependence of each timestep's computation on the previous timestep's output limits parallelism and makes RNNs unwieldy for very long sequences. We introduce quasi-recurrent neural networks (QRNNs), an approach to neural sequence modeling that alternates convolutional layers, which apply in parallel across timesteps, and a minimalist recurrent pooling function that applies in parallel across channels. Despite lacking trainable recurrent layers, stacked QRNNs have better predictive accuracy than stacked LSTMs of the same hidden size. Due to their increased parallelism, they are up to 16 times faster at train and test time. Experiments on language modeling, sentiment classification, and character-level neural machine translation demonstrate these advantages and underline the viability of QRNNs as a basic building block for a variety of sequence tasks.", "The recently proposed neural network joint model (NNJM) (, 2014) augments the n-gram target language model with a heuristically chosen source context window, achieving state-of-the-art performance in SMT. In this paper, we give a more systematic treatment by summarizing the relevant source information through a convolutional architecture guided by the target information. With different guiding signals during decoding, our specifically designed convolution+gating architectures can pinpoint the parts of a source sentence that are relevant to predicting a target word, and fuse them with the context of entire source sentence to form a unified representation. 
This representation, together with target language words, are fed to a deep neural network (DNN) to form a stronger NNJM. Experiments on two NIST Chinese-English translation tasks show that the proposed model can achieve significant improvements over the previous NNJM by up to +1.08 BLEU points on average" ] }
1905.00996
2942932288
In this work, we propose a novel framework named Region-Aware Network (RANet), which learns the ability of anti-confusing in case of heavy occlusion, nearby person and symmetric appearance, for human pose estimation. Specifically, the proposed method addresses three key aspects, i.e., data augmentation, feature learning and prediction fusion, respectively. First, we propose Parsing-based Data Augmentation (PDA) to generate abundant data that synthesizes confusing textures. Second, we not only propose a Feature Pyramid Stem (FPS) to learn stronger low-level features in lower stage; but also incorporate an Effective Region Extraction (ERE) module to excavate better target-specific features. Third, we introduce Cascade Voting Fusion (CVF) to explicitly exclude the inferior predictions and fuse the rest effective predictions for the final pose estimation. Extensive experimental results on two popular benchmarks, i.e. MPII and LSP, demonstrate the effectiveness of our method against the state-of-the-art competitors. Especially on easily-confusable joints, our method makes significant improvement.
Previous DCNN-based human pose estimation works can be roughly divided into two categories. The first group directly regresses the location coordinates of joints @cite_21 @cite_31 , called regression-based methods. The second group predicts heatmaps and then estimates joint locations according to the peak or integrated response of the heatmap @cite_32 @cite_30 , termed heatmap-based methods. Our work is closely related to the second group while differing in three perspectives.
{ "cite_N": [ "@cite_30", "@cite_31", "@cite_21", "@cite_32" ], "mid": [ "2307770531", "2952074561", "2113325037", "2964304707" ], "abstract": [ "This work introduces a novel convolutional network architecture for the task of human pose estimation. Features are processed across all scales and consolidated to best capture the various spatial relationships associated with the body. We show how repeated bottom-up, top-down processing used in conjunction with intermediate supervision is critical to improving the performance of the network. We refer to the architecture as a “stacked hourglass” network based on the successive steps of pooling and upsampling that are done to produce a final set of predictions. State-of-the-art results are achieved on the FLIC and MPII benchmarks outcompeting all recent methods.", "We propose a novel cascaded framework, namely deep deformation network (DDN), for localizing landmarks in non-rigid objects. The hallmarks of DDN are its incorporation of geometric constraints within a convolutional neural network (CNN) framework, ease and efficiency of training, as well as generality of application. A novel shape basis network (SBN) forms the first stage of the cascade, whereby landmarks are initialized by combining the benefits of CNN features and a learned shape basis to reduce the complexity of the highly nonlinear pose manifold. In the second stage, a point transformer network (PTN) estimates local deformation parameterized as thin-plate spline transformation for a finer refinement. Our framework does not incorporate either handcrafted features or part connectivity, which enables an end-to-end shape prediction pipeline during both training and testing. In contrast to prior cascaded networks for landmark localization that learn a mapping from feature space to landmark locations, we demonstrate that the regularization induced through geometric priors in the DDN makes it easier to train, yet produces superior results. 
The efficacy and generality of the architecture is demonstrated through state-of-the-art performances on several benchmarks for multiple tasks such as facial landmark localization, human body pose estimation and bird part localization.", "We propose a method for human pose estimation based on Deep Neural Networks (DNNs). The pose estimation is formulated as a DNN-based regression problem towards body joints. We present a cascade of such DNN regres- sors which results in high precision pose estimates. The approach has the advantage of reasoning about pose in a holistic fashion and has a simple but yet powerful formula- tion which capitalizes on recent advances in Deep Learn- ing. We present a detailed empirical analysis with state-of- art or better performance on four academic benchmarks of diverse real-world images.", "Pose Machines provide a sequential prediction framework for learning rich implicit spatial models. In this work we show a systematic design for how convolutional networks can be incorporated into the pose machine framework for learning image features and image-dependent spatial models for the task of pose estimation. The contribution of this paper is to implicitly model long-range dependencies between variables in structured prediction tasks such as articulated pose estimation. We achieve this by designing a sequential architecture composed of convolutional networks that directly operate on belief maps from previous stages, producing increasingly refined estimates for part locations, without the need for explicit graphical model-style inference. Our approach addresses the characteristic difficulty of vanishing gradients during training by providing a natural learning objective function that enforces intermediate supervision, thereby replenishing back-propagated gradients and conditioning the learning procedure. 
We demonstrate state-of-the-art performance and outperform competing methods on standard benchmarks including the MPII, LSP, and FLIC datasets." ] }
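The heatmap-based decoding mentioned in the related-work paragraph above, estimating a joint location from the peak response of a predicted heatmap, amounts to an argmax over the map. A minimal illustrative sketch in pure Python (not any particular paper's implementation):

```python
def heatmap_peak(heatmap):
    # heatmap: 2D list of scores for one joint; returns the (row, col)
    # of the peak response, the simplest heatmap-based decoding.
    best, best_rc = float("-inf"), (0, 0)
    for r, row in enumerate(heatmap):
        for c, v in enumerate(row):
            if v > best:
                best, best_rc = v, (r, c)
    return best_rc
```

Real systems refine this peak (e.g., with a sub-pixel offset or an integral over the map), but the argmax is the core operation.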
1905.00996
2942932288
In this work, we propose a novel framework named Region-Aware Network (RANet), which learns the ability of anti-confusing in case of heavy occlusion, nearby person and symmetric appearance, for human pose estimation. Specifically, the proposed method addresses three key aspects, i.e., data augmentation, feature learning and prediction fusion, respectively. First, we propose Parsing-based Data Augmentation (PDA) to generate abundant data that synthesizes confusing textures. Second, we not only propose a Feature Pyramid Stem (FPS) to learn stronger low-level features in lower stage; but also incorporate an Effective Region Extraction (ERE) module to excavate better target-specific features. Third, we introduce Cascade Voting Fusion (CVF) to explicitly exclude the inferior predictions and fuse the rest effective predictions for the final pose estimation. Extensive experimental results on two popular benchmarks, i.e. MPII and LSP, demonstrate the effectiveness of our method against the state-of-the-art competitors. Especially on easily-confusable joints, our method makes significant improvement.
Conventional data augmentation methods for the human pose estimation task @cite_30 @cite_14 @cite_13 mainly perform scaling, rotating and flipping on the training images. Recently, PoseRefiner @cite_25 mimics incorrect pose joints and refines them with a cascaded network. MSR-net @cite_19 introduces keypoint-masking to simulate hard training samples. Different from the previous data augmentation strategies, we propose a novel parsing-based data augmentation scheme that takes advantage of semantic segmentation to synthesize various confusing situations.
{ "cite_N": [ "@cite_30", "@cite_14", "@cite_19", "@cite_13", "@cite_25" ], "mid": [ "2307770531", "2611013256", "2963197583", "2907137919", "2798721181" ], "abstract": [ "This work introduces a novel convolutional network architecture for the task of human pose estimation. Features are processed across all scales and consolidated to best capture the various spatial relationships associated with the body. We show how repeated bottom-up, top-down processing used in conjunction with intermediate supervision is critical to improving the performance of the network. We refer to the architecture as a “stacked hourglass” network based on the successive steps of pooling and upsampling that are done to produce a final set of predictions. State-of-the-art results are achieved on the FLIC and MPII benchmarks outcompeting all recent methods.", "For human pose estimation in monocular images, joint occlusions and overlapping upon human bodies often result in deviated pose predictions. Under these circumstances, biologically implausible pose predictions may be produced. In contrast, human vision is able to predict poses by exploiting geometric constraints of joint inter-connectivity. To address the problem by incorporating priors about the structure of human bodies, we propose a novel structure-aware convolutional network to implicitly take such priors into account during training of the deep network. Explicit learning of such constraints is typically challenging. Instead, we design discriminators to distinguish the real poses from the fake ones (such as biologically implausible ones). If the pose generator (G) generates results that the discriminator fails to distinguish from real ones, the network successfully learns the priors.", "We develop a robust multi-scale structure-aware neural network for human pose estimation. 
This method improves the recent deep conv-deconv hourglass models with four key improvements: (1) multi-scale supervision to strengthen contextual feature learning in matching body keypoints by combining feature heatmaps across scales, (2) multi-scale regression network at the end to globally optimize the structural matching of the multi-scale features, (3) structure-aware loss used in the intermediate supervision and at the regression to improve the matching of keypoints and respective neighbors to infer a higher-order matching configurations, and (4) a keypoint masking training scheme that can effectively fine-tune our network to robustly localize occluded keypoints via adjacent matches. Our method can effectively improve state-of-the-art pose estimation methods that suffer from difficulties in scale varieties, occlusions, and complex multi-person scenarios. This multi-scale supervision tightly integrates with the regression network to effectively (i) localize keypoints using the ensemble of multi-scale features, and (ii) infer global pose configuration by maximizing structural consistencies across multiple keypoints and scales. The keypoint masking training enhances these advantages to focus learning on hard occlusion samples. Our method achieves the leading position in the MPII challenge leaderboard among the state-of-the-art methods.", "We explore the importance of spatial contextual information in human pose estimation. Most state-of-the-art pose networks are trained in a multi-stage manner and produce several auxiliary predictions for deep supervision. With this principle, we present two conceptually simple and yet computational efficient modules, namely Cascade Prediction Fusion (CPF) and Pose Graph Neural Network (PGNN), to exploit underlying contextual information. Cascade prediction fusion accumulates prediction maps from previous stages to extract informative signals. The resulting maps also function as a prior to guide prediction at following stages. 
To promote spatial correlation among joints, our PGNN learns a structured representation of human pose as a graph. Direct message passing between different joints is enabled and spatial relation is captured. These two modules require very limited computational complexity. Experimental results demonstrate that our method consistently outperforms previous methods on MPII and LSP benchmark.", "Multi-person pose estimation in images and videos is an important yet challenging task with many applications. Despite the large improvements in human pose estimation enabled by the development of convolutional neural networks, there still exist a lot of difficult cases where even the state-of-the-art models fail to correctly localize all body joints. This motivates the need for an additional refinement step that addresses these challenging cases and can be easily applied on top of any existing method. In this work, we introduce a pose refinement network (PoseRefiner) which takes as input both the image and a given pose estimate and learns to directly predict a refined pose by jointly reasoning about the input-output space. In order for the network to learn to refine incorrect body joint predictions, we employ a novel data augmentation scheme for training, where we model \"hard\" human pose cases. We evaluate our approach on four popular large-scale pose estimation benchmarks such as MPII Single- and Multi-Person Pose Estimation, PoseTrack Pose Estimation, and PoseTrack Pose Tracking, and report systematic improvement over the state of the art." ] }
1905.00996
2942932288
In this work, we propose a novel framework named Region-Aware Network (RANet), which learns the ability of anti-confusing in case of heavy occlusion, nearby person and symmetric appearance, for human pose estimation. Specifically, the proposed method addresses three key aspects, i.e., data augmentation, feature learning and prediction fusion, respectively. First, we propose Parsing-based Data Augmentation (PDA) to generate abundant data that synthesizes confusing textures. Second, we not only propose a Feature Pyramid Stem (FPS) to learn stronger low-level features in lower stage; but also incorporate an Effective Region Extraction (ERE) module to excavate better target-specific features. Third, we introduce Cascade Voting Fusion (CVF) to explicitly exclude the inferior predictions and fuse the rest effective predictions for the final pose estimation. Extensive experimental results on two popular benchmarks, i.e. MPII and LSP, demonstrate the effectiveness of our method against the state-of-the-art competitors. Especially on easily-confusable joints, our method makes significant improvement.
Most previous networks @cite_30 @cite_16 @cite_26 @cite_13 @cite_14 @cite_22 focus on learning effective high-level features for heatmap prediction, incorporating an hourglass-like structure with down-sampling for feature encoding and up-sampling for heatmap decoding. Typically, before entering the heatmap prediction sub-network, a rough stem module that converts the input image into smaller feature maps is adopted to reduce complexity. For instance, the stacked hourglass @cite_30 accepts input at a resolution of @math while generating feature maps of @math for heatmap prediction. Simple-baseline @cite_16 and HRNet @cite_26 extract a @math low-resolution feature map for heatmap prediction from a @math high-resolution input image. However, the rough stem module may not make full use of the effective pixel-level information in the raw input images. In contrast, we propose, on the one hand, a Feature Pyramid Stem module for learning stronger low-level features and, on the other hand, an Effective Region Extraction module for obtaining better target-specific features.
{ "cite_N": [ "@cite_30", "@cite_14", "@cite_26", "@cite_22", "@cite_16", "@cite_13" ], "mid": [ "2307770531", "2611013256", "2949962589", "2547884650", "", "2907137919" ], "abstract": [ "This work introduces a novel convolutional network architecture for the task of human pose estimation. Features are processed across all scales and consolidated to best capture the various spatial relationships associated with the body. We show how repeated bottom-up, top-down processing used in conjunction with intermediate supervision is critical to improving the performance of the network. We refer to the architecture as a “stacked hourglass” network based on the successive steps of pooling and upsampling that are done to produce a final set of predictions. State-of-the-art results are achieved on the FLIC and MPII benchmarks outcompeting all recent methods.", "For human pose estimation in monocular images, joint occlusions and overlapping upon human bodies often result in deviated pose predictions. Under these circumstances, biologically implausible pose predictions may be produced. In contrast, human vision is able to predict poses by exploiting geometric constraints of joint inter-connectivity. To address the problem by incorporating priors about the structure of human bodies, we propose a novel structure-aware convolutional network to implicitly take such priors into account during training of the deep network. Explicit learning of such constraints is typically challenging. Instead, we design discriminators to distinguish the real poses from the fake ones (such as biologically implausible ones). If the pose generator (G) generates results that the discriminator fails to distinguish from real ones, the network successfully learns the priors.", "This is an official pytorch implementation of Deep High-Resolution Representation Learning for Human Pose Estimation. 
In this work, we are interested in the human pose estimation problem with a focus on learning reliable high-resolution representations. Most existing methods recover high-resolution representations from low-resolution representations produced by a high-to-low resolution network. Instead, our proposed network maintains high-resolution representations through the whole process. We start from a high-resolution subnetwork as the first stage, gradually add high-to-low resolution subnetworks one by one to form more stages, and connect the mutli-resolution subnetworks in parallel. We conduct repeated multi-scale fusions such that each of the high-to-low resolution representations receives information from other parallel representations over and over, leading to rich high-resolution representations. As a result, the predicted keypoint heatmap is potentially more accurate and spatially more precise. We empirically demonstrate the effectiveness of our network through the superior pose estimation results over two benchmark datasets: the COCO keypoint detection dataset and the MPII Human Pose dataset. The code and models have been publicly available at this https URL .", "Deep convolutional neural networks (CNN) have achieved great success. On the other hand, modeling structural information has been proved critical in many vision problems. It is of great interest to integrate them effectively. In a classical neural network, there is no message passing between neurons in the same layer. In this paper, we propose a CRF-CNN framework which can simultaneously model structural information in both output and hidden feature layers in a probabilistic way, and it is applied to human pose estimation. A message passing scheme is proposed, so that in various layers each body joint receives messages from all the others in an efficient way. 
Such message passing can be implemented with convolution between features maps in the same layer, and it is also integrated with feedforward propagation in neural networks. Finally, a neural network implementation of end-to-end learning CRF-CNN is provided. Its effectiveness is demonstrated through experiments on two benchmark datasets.", "", "We explore the importance of spatial contextual information in human pose estimation. Most state-of-the-art pose networks are trained in a multi-stage manner and produce several auxiliary predictions for deep supervision. With this principle, we present two conceptually simple and yet computational efficient modules, namely Cascade Prediction Fusion (CPF) and Pose Graph Neural Network (PGNN), to exploit underlying contextual information. Cascade prediction fusion accumulates prediction maps from previous stages to extract informative signals. The resulting maps also function as a prior to guide prediction at following stages. To promote spatial correlation among joints, our PGNN learns a structured representation of human pose as a graph. Direct message passing between different joints is enabled and spatial relation is captured. These two modules require very limited computational complexity. Experimental results demonstrate that our method consistently outperforms previous methods on MPII and LSP benchmark." ] }
1905.00996
2942932288
In this work, we propose a novel framework named Region-Aware Network (RANet), which learns the ability of anti-confusing in case of heavy occlusion, nearby person and symmetric appearance, for human pose estimation. Specifically, the proposed method addresses three key aspects, i.e., data augmentation, feature learning and prediction fusion, respectively. First, we propose Parsing-based Data Augmentation (PDA) to generate abundant data that synthesizes confusing textures. Second, we not only propose a Feature Pyramid Stem (FPS) to learn stronger low-level features in lower stage; but also incorporate an Effective Region Extraction (ERE) module to excavate better target-specific features. Third, we introduce Cascade Voting Fusion (CVF) to explicitly exclude the inferior predictions and fuse the rest effective predictions for the final pose estimation. Extensive experimental results on two popular benchmarks, i.e. MPII and LSP, demonstrate the effectiveness of our method against the state-of-the-art competitors. Especially on easily-confusable joints, our method makes significant improvement.
Prediction fusion is a common strategy for improving hard joints in challenging cases. Zhang @cite_13 designs a Cascade Prediction Fusion (CPF) network that takes all heatmaps from different stages into consideration for the final prediction. Yang @cite_9 concatenates coarse output heatmaps with the raw input for further keypoint refinement. Compared with these methods, our method explicitly excludes the inferior candidate predictions by voting and obtains more accurate results by merging the remaining superior predictions.
{ "cite_N": [ "@cite_9", "@cite_13" ], "mid": [ "2742737904", "2907137919" ], "abstract": [ "Articulated human pose estimation is a fundamental yet challenging task in computer vision. The difficulty is particularly pronounced in scale variations of human body parts when camera view changes or severe foreshortening happens. Although pyramid methods are widely used to handle scale changes at inference time, learning feature pyramids in deep convolutional neural networks (DCNNs) is still not well explored. In this work, we design a Pyramid Residual Module (PRMs) to enhance the invariance in scales of DCNNs. Given input features, the PRMs learn convolutional filters on various scales of input features, which are obtained with different subsampling ratios in a multibranch network. Moreover, we observe that it is inappropriate to adopt existing methods to initialize the weights of multi-branch networks, which achieve superior performance than plain networks in many tasks recently. Therefore, we provide theoretic derivation to extend the current weight initialization scheme to multi-branch network structures. We investigate our method on two standard benchmarks for human pose estimation. Our approach obtains state-of-the-art results on both benchmarks. Code is available at https: github.com bearpaw PyraNet.", "We explore the importance of spatial contextual information in human pose estimation. Most state-of-the-art pose networks are trained in a multi-stage manner and produce several auxiliary predictions for deep supervision. With this principle, we present two conceptually simple and yet computational efficient modules, namely Cascade Prediction Fusion (CPF) and Pose Graph Neural Network (PGNN), to exploit underlying contextual information. Cascade prediction fusion accumulates prediction maps from previous stages to extract informative signals. The resulting maps also function as a prior to guide prediction at following stages. 
To promote spatial correlation among joints, our PGNN learns a structured representation of human pose as a graph. Direct message passing between different joints is enabled and spatial relation is captured. These two modules require very limited computational complexity. Experimental results demonstrate that our method consistently outperforms previous methods on MPII and LSP benchmark." ] }
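The voting-then-fusion idea described in the related-work paragraph above (exclude inferior candidate predictions, then merge the rest) can be illustrated with a toy sketch. The median-distance voting rule and the `tol` threshold below are illustrative assumptions, not the paper's Cascade Voting Fusion:

```python
def vote_and_fuse(preds, tol=2.0):
    # preds: list of (x, y) candidate locations for one joint from
    # multiple predictors. Keep candidates near the coordinate-wise
    # median (the "voting"), then average the survivors.
    xs = sorted(p[0] for p in preds)
    ys = sorted(p[1] for p in preds)
    mx, my = xs[len(xs) // 2], ys[len(ys) // 2]
    kept = [p for p in preds
            if abs(p[0] - mx) <= tol and abs(p[1] - my) <= tol]
    if not kept:          # degenerate case: fall back to all candidates
        kept = preds
    n = len(kept)
    return (sum(p[0] for p in kept) / n, sum(p[1] for p in kept) / n)
```

An outlier such as a symmetric-limb confusion far from the consensus is voted out before averaging, which is the intuition behind excluding inferior predictions.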
1905.00773
2943665611
We propose a novel agglomerative clustering method based on unmasking, a technique that was previously used for authorship verification of text documents and for abnormal event detection in videos. In order to join two clusters, we alternate between (i) training a binary classifier to distinguish between the samples from one cluster and the samples from the other cluster, and (ii) removing at each step the most discriminant features. The faster-decreasing accuracy rates of the intermediately-obtained classifiers indicate that the two clusters should be joined. To the best of our knowledge, this is the first work to apply unmasking in order to cluster images. We compare our method with k-means as well as a recent state-of-the-art clustering method. The empirical results indicate that our approach is able to improve performance for various (deep and shallow) feature representations and different tasks, such as handwritten digit recognition, texture classification and fine-grained object recognition.
More closely related to our work are recent methods focused particularly on clustering images @cite_17 @cite_22 @cite_11 @cite_10 @cite_19 . While some researchers have focused strictly on the clustering task @cite_22 @cite_11 , others have considered learning unsupervised image embeddings using neural networks @cite_19 @cite_3 or auto-encoders @cite_17 @cite_10 @cite_4 . Different from the recent approaches focused on learning deep image embeddings, our work is particularly focused on the clustering task. As shown in the experiments, various features can be plugged into our clustering framework, including features trained in an unsupervised manner @cite_3 .
{ "cite_N": [ "@cite_4", "@cite_22", "@cite_17", "@cite_3", "@cite_19", "@cite_10", "@cite_11" ], "mid": [ "2963365397", "2129793592", "2741943936", "2883725317", "2533545350", "2964074409", "2146561754" ], "abstract": [ "We present a novel deep neural network architecture for unsupervised subspace clustering. This architecture is built upon deep auto-encoders, which non-linearly map the input data into a latent space. Our key idea is to introduce a novel self-expressive layer between the encoder and the decoder to mimic the \"self-expressiveness\" property that has proven effective in traditional subspace clustering. Being differentiable, our new self-expressive layer provides a simple but effective way to learn pairwise affinities between all data points through a standard back-propagation procedure. Being nonlinear, our neural-network based method is able to cluster data points having complex (often nonlinear) structures. We further propose pre-training and fine-tuning strategies that let us effectively learn the parameters of our subspace clustering networks. Our experiments show that the proposed method significantly outperforms the state-of-the-art unsupervised subspace clustering methods.", "In this paper, we propose a new image clustering algorithm, referred to as clustering using local discriminant models and global integration (LDMGI). To deal with the data points sampled from a nonlinear manifold, for each data point, we construct a local clique comprising this data point and its neighboring data points. Inspired by the Fisher criterion, we use a local discriminant model for each local clique to evaluate the clustering performance of samples within the local clique. To obtain the clustering result, we further propose a unified objective function to globally integrate the local models of all the local cliques. 
With the unified objective function, spectral relaxation and spectral rotation are used to obtain the binary cluster indicator matrix for all the samples. We show that LDMGI shares a similar objective function with the spectral clustering (SC) algorithms, e.g., normalized cut (NCut). In contrast to NCut in which the Laplacian matrix is directly calculated based upon a Gaussian function, a new Laplacian matrix is learnt in LDMGI by exploiting both manifold structure and local discriminant information. We also prove that K-means and discriminative K-means (DisKmeans) are both special cases of LDMGI. Extensive experiments on several benchmark image datasets demonstrate the effectiveness of LDMGI. We observe in the experiments that LDMGI is more robust to algorithmic parameter, when compared with NCut. Thus, LDMGI is more appealing for the real image clustering applications in which the ground truth is generally not available for tuning algorithmic parameters.", "", "Clustering is a class of unsupervised learning methods that has been extensively applied and studied in computer vision. Little work has been done to adapt it to the end-to-end training of visual features on large-scale datasets. In this work, we present DeepCluster, a clustering method that jointly learns the parameters of a neural network and the cluster assignments of the resulting features. DeepCluster iteratively groups the features with a standard clustering algorithm, k-means, and uses the subsequent assignments as supervision to update the weights of the network. We apply DeepCluster to the unsupervised training of convolutional neural networks on large datasets like ImageNet and YFCC100M. 
The resulting model outperforms the current state of the art by a significant margin on all the standard benchmarks.", "Most learning approaches treat dimensionality reduction (DR) and clustering separately (i.e., sequentially), but recent research has shown that optimizing the two tasks jointly can substantially improve the performance of both. The premise behind the latter genre is that the data samples are obtained via linear transformation of latent representations that are easy to cluster; but in practice, the transformation from the latent space to the data can be more complicated. In this work, we assume that this transformation is an unknown and possibly nonlinear function. To recover the 'clustering-friendly' latent representations and to better cluster the data, we propose a joint DR and K-means clustering approach in which DR is accomplished via learning a deep neural network (DNN). The motivation is to keep the advantages of jointly optimizing the two tasks, while exploiting the deep neural network's ability to approximate any nonlinear function. This way, the proposed approach can work well for a broad class of generative models. Towards this end, we carefully design the DNN structure and the associated joint optimization criterion, and propose an effective and scalable algorithm to handle the formulated optimization problem. Experiments using different real datasets are employed to showcase the effectiveness of the proposed approach.", "Clustering is central to many data-driven application domains and has been studied extensively in terms of distance functions and grouping algorithms. Relatively little work has focused on learning representations for clustering. In this paper, we propose Deep Embedded Clustering (DEC), a method that simultaneously learns feature representations and cluster assignments using deep neural networks. 
DEC learns a mapping from the data space to a lower-dimensional feature space in which it iteratively optimizes a clustering objective. Our experimental evaluations on image and text corpora show significant improvement over state-of-the-art methods.", "We define a “good image cluster” as one in which images can be easily composed (like a puzzle) using pieces from each other, while are difficult to compose from images outside the cluster. The larger and more statistically significant the pieces are, the stronger the affinity between the images. This gives rise to unsupervised discovery of very challenging image categories. We further show how multiple images can be composed from each other simultaneously and efficiently using a collaborative randomized search algorithm. This collaborative process exploits the “wisdom of crowds of images”, to obtain a sparse yet meaningful set of image affinities, and in time which is almost linear in the size of the image collection. “Clustering-by-Composition” yields state-of-the-art results on current benchmark data sets. It further yields promising results on new challenging data sets, such as data sets with very few images (where a ‘cluster model’ cannot be ‘learned’ by current methods), and a subset of the PASCAL VOC data set (with huge variability in scale and appearance)." ] }
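The unmasking procedure summarized in the abstract above (alternately training a binary classifier to separate two clusters and removing the most discriminant features, then joining the pair whose accuracy curve falls fastest) can be sketched in pure Python. The nearest-centroid classifier and the centroid-gap feature score here are simplifying assumptions standing in for whatever classifier and feature weighting an actual implementation would use:

```python
def unmasking_curve(A, B, n_rounds=3, k_drop=1):
    # A, B: lists of equal-length feature vectors from two candidate clusters.
    # Each round: (i) measure how well a nearest-centroid classifier separates
    # the two clusters on the surviving features, (ii) drop the k_drop most
    # discriminant features (largest centroid gap). A fast-falling accuracy
    # curve indicates the clusters are hard to keep apart -> join them.
    kept = list(range(len(A[0])))
    accs = []
    for _ in range(n_rounds):
        if not kept:
            break
        cA = [sum(x[f] for x in A) / len(A) for f in kept]
        cB = [sum(x[f] for x in B) / len(B) for f in kept]

        def predict(x):
            dA = sum((x[f] - cA[i]) ** 2 for i, f in enumerate(kept))
            dB = sum((x[f] - cB[i]) ** 2 for i, f in enumerate(kept))
            return 0 if dA <= dB else 1

        correct = (sum(predict(x) == 0 for x in A)
                   + sum(predict(x) == 1 for x in B))
        accs.append(correct / (len(A) + len(B)))
        # remove the currently most discriminant features
        order = sorted(range(len(kept)), key=lambda i: -abs(cA[i] - cB[i]))
        for i in sorted(order[:k_drop], reverse=True):
            del kept[i]
    return accs
```

Two clusters that differ only in a few discriminant features yield a curve that collapses toward chance once those features are removed, signalling that the pair should be merged; genuinely distinct clusters stay separable for more rounds.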
1905.00794
2943251453
In this paper, we propose a speed-up approach for subclass discriminant analysis and formulate a novel efficient multi-view solution to it. The speed-up approach is developed based on graph embedding and spectral regression approaches that involve eigendecomposition of the corresponding Laplacian matrix and regression to its eigenvectors. We show that by exploiting the structure of the between-class Laplacian matrix, the eigendecomposition step can be substituted with a much faster process. Furthermore, we formulate a novel criterion for multi-view subclass discriminant analysis and show that an efficient solution for it can be obtained in a similar to the single-view manner. We evaluate the proposed methods on nine single-view and nine multi-view datasets and compare them with related existing approaches. Experimental results show that the proposed solutions achieve competitive performance, often outperforming the existing methods. At the same time, they significantly decrease the training time.
Many extensions to LDA have been proposed over recent years. Methods relaxing LDA's assumption of normally distributed classes, as well as its limit on the dimensionality of the learned subspace in binary problems, have recently been proposed in @cite_3 @cite_2 @cite_4 . CDA @cite_18 relaxes the assumption of unimodal classes by applying clustering techniques to incorporate the subclass structure of the data into the training process. SMFA relies on the Subclass Graph Embedding framework @cite_26 , where the dimensionality reduction problem is described from a graph embedding perspective. The problem is defined by intrinsic and penalty graph matrices, which are built from the label information of the @math nearest neighbors of each data point, as determined by the Euclidean distance or some other distance metric. The intrinsic graph matrix captures the compactness within each subclass, while the penalty graph matrix imposes penalties that ensure inter-class separability.
{ "cite_N": [ "@cite_18", "@cite_26", "@cite_4", "@cite_3", "@cite_2" ], "mid": [ "2092395618", "603167732", "1994997914", "2019127072", "2035667327" ], "abstract": [ "This paper describes a new clustering based feature extraction method for facial expression recognition. We demonstrate the effectiveness of this method and compare it with commonly used principal component analysis method and linear discriminant analysis method.", "Subspace learning techniques have been extensively used for dimensionality reduction (DR) in many pattern classification problem domains. Recently, methods like Subclass Discriminant Analysis (SDA) and Clustering-based Discriminant Analysis (CDA), which use subclass information for the discrimination between the data classes, have attracted much attention. In parallel, important work has been accomplished on Graph Embedding (GE), which is a general framework unifying several subspace learning techniques. In this paper, GE has been extended in order to integrate subclass discriminant information resulting to the novel Subclass Graph Embedding (SGE) framework, which is the main contribution of our work. It is proven that SGE encapsulates a diversity of both supervised and unsupervised unimodal methods like Locality Preserving Projections (LPP), Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA). The theoretical link of SDA and CDA methods with SGE is also established. Along these lines, it is shown that SGE comprises a generalization of the typical GE framework including subclass DR methods. Moreover, it allows for an easy utilization of kernels for confronting non-linearly separable data. Employing SGE, in this paper a novel DR algorithm, which uses subclass discriminant information, called Subclass Marginal Fisher Analysis (SMFA) has been proposed. 
Through a series of experiments on various real-world datasets, it is shown that SMFA outperforms in most of the cases the state-of-the-art demonstrating the efficacy and power of SGE as a platform to develop new methods. Highlights: Graph Embedding is extended in order to integrate subclass information. The novel Subclass Graph Embedding framework is proposed. The kernelized version of the new framework is presented. Subclass Graph Embedding encapsulates various subspace learning methods. A novel Subclass Marginal Fisher Analysis method is proposed.", "In this paper, a novel nonlinear subspace learning technique for class-specific data representation is proposed. A novel data representation is obtained by applying nonlinear class-specific data projection to a discriminant feature space, where the data belonging to the class under consideration are enforced to be close to their class representation, while the data belonging to the remaining classes are enforced to be as far as possible from it. A class is represented by an optimized class vector, enhancing class discrimination in the resulting feature space. An iterative optimization scheme is proposed to this end, where both the optimal nonlinear data projection and the optimal class representation are determined in each optimization step. The proposed approach is tested on three problems relating to human behavior analysis: Face recognition, facial expression recognition, and human action recognition. Experimental results denote the effectiveness of the proposed approach, since the proposed class-specific reference discriminant analysis outperforms kernel discriminant analysis, kernel spectral regression, and class-specific kernel discriminant analysis, as well as support vector machine-based classification, in most cases.", "Linear discriminant analysis (LDA) is a widely used technique for supervised feature extraction and dimensionality reduction.
LDA determines an optimal discriminant space for linear data projection based on certain assumptions, e.g., on using normal distributions for each class and employing class representation by the mean class vectors. However, there might be other vectors that can represent each class, to increase class discrimination. In this brief, we propose an optimization scheme aiming at the optimal class representation, in terms of Fisher ratio maximization, for LDA-based data projection. Compared with the standard LDA approach, the proposed optimization scheme increases class discrimination in the reduced dimensionality space and achieves higher classification rates in publicly available data sets.", "Linear Discriminant Analysis (LDA) and its nonlinear version Kernel Discriminant Analysis (KDA) are well-known and widely used techniques for supervised feature extraction and dimensionality reduction. They determine an optimal discriminant space for (non)linear data projection based on certain assumptions, e.g. on using normal distributions (either on the input or in the kernel space) for each class and employing class representation by the corresponding class mean vectors. However, there might be other vectors that can be used for classes representation, in order to increase class discrimination in the resulted feature space. In this paper, we propose an optimization scheme aiming at the optimal class representation, in terms of Fisher ratio maximization, for nonlinear data projection. Compared to the standard approach, the proposed optimization scheme increases class discrimination in the reduced-dimensionality feature space and achieves higher classification rates in publicly available data sets." ] }
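The intrinsic/penalty graph construction described in the related-work passage above (edges to same-subclass versus different-class nearest neighbors under the Euclidean distance) can be sketched as follows. This is an illustrative toy implementation in numpy, not the SGE/SMFA code: the function name, the binary edge weights, and the choice of @math are all simplifying assumptions.

```python
import numpy as np

def subclass_graphs(X, labels, k=2):
    """Toy intrinsic (same-label) and penalty (different-label) k-NN graphs
    and their graph Laplacians, using plain Euclidean distances."""
    n = X.shape[0]
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    W_int = np.zeros((n, n))
    W_pen = np.zeros((n, n))
    for i in range(n):
        order = [j for j in np.argsort(D[i]) if j != i]
        same = [j for j in order if labels[j] == labels[i]][:k]
        diff = [j for j in order if labels[j] != labels[i]][:k]
        for j in same:                      # compactness within the subclass
            W_int[i, j] = W_int[j, i] = 1.0
        for j in diff:                      # penalize closeness across classes
            W_pen[i, j] = W_pen[j, i] = 1.0
    L_int = np.diag(W_int.sum(axis=1)) - W_int
    L_pen = np.diag(W_pen.sum(axis=1)) - W_pen
    return L_int, L_pen

X = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 0.0], [5.0, 1.0]])
labels = [0, 0, 1, 1]
L_int, L_pen = subclass_graphs(X, labels, k=1)
```

As with any graph Laplacian, both matrices are symmetric and their rows sum to zero, which is a quick sanity check on the construction.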
1905.00794
2943251453
In this paper, we propose a speed-up approach for subclass discriminant analysis and formulate a novel efficient multi-view solution to it. The speed-up approach is developed based on graph embedding and spectral regression approaches that involve eigendecomposition of the corresponding Laplacian matrix and regression to its eigenvectors. We show that by exploiting the structure of the between-class Laplacian matrix, the eigendecomposition step can be substituted with a much faster process. Furthermore, we formulate a novel criterion for multi-view subclass discriminant analysis and show that an efficient solution for it can be obtained in a manner similar to the single-view case. We evaluate the proposed methods on nine single-view and nine multi-view datasets and compare them with related existing approaches. Experimental results show that the proposed solutions achieve competitive performance, often outperforming the existing methods. At the same time, they significantly decrease the training time.
The conventional approach to solving nonlinear problems involves exploiting a kernel function, defined over a pair of data points in @math , that maps them to the dot product of their projections in @math , and formulating the problem accordingly. By exploiting the dot-product representation, the explicit mapping of each data point @math in @math to its image @math can be omitted, thereby avoiding the issues related to the arbitrary dimensionality of @math . The @math kernel matrix @math is defined as @math . It is easy to note that since @math , where @math . According to the Representer Theorem @cite_33 , @math can be represented as a linear combination of the data in @math . Therefore, @math .
{ "cite_N": [ "@cite_33" ], "mid": [ "2088032561" ], "abstract": [ "This paper collects some ideas targeted at advancing our understanding of the feature spaces associated with support vector (SV) kernel functions. We first discuss the geometry of feature space. In particular, we review what is known about the shape of the image of input space under the feature space map, and how this influences the capacity of SV methods. Following this, we describe how the metric governing the intrinsic geometry of the mapped surface can be computed in terms of the kernel, using the example of the class of inhomogeneous polynomial kernels, which are often used in SV pattern recognition. We then discuss the connection between feature space and input space by dealing with the question of how one can, given some vector in feature space, find a preimage (exact or approximate) in input space. We describe algorithms to tackle this issue, and show their utility in two applications of kernel methods. First, we use it to reduce the computational complexity of SV decision functions; second, we combine it with the kernel PCA algorithm, thereby constructing a nonlinear statistical denoising technique which is shown to perform well on real-world data." ] }
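The kernel-trick identity discussed above (the kernel value equals the dot product of the images in the feature space, so the kernel matrix is the Gram matrix of the mapped data) can be checked numerically for a kernel whose feature map is known in closed form. The degree-2 homogeneous polynomial kernel on R^2 is used here purely as an example; numpy is assumed.

```python
import numpy as np

def phi(x):
    # explicit feature map for the degree-2 homogeneous polynomial kernel on R^2:
    # phi(x) . phi(y) = x1^2 y1^2 + x2^2 y2^2 + 2 x1 x2 y1 y2 = (x . y)^2
    return np.array([x[0] ** 2, x[1] ** 2, np.sqrt(2.0) * x[0] * x[1]])

def k_poly2(x, y):
    return (x @ y) ** 2

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 2))
K = np.array([[k_poly2(a, b) for b in X] for a in X])   # kernel matrix
Phi = np.stack([phi(x) for x in X])                     # explicit images in F
assert np.allclose(K, Phi @ Phi.T)                      # K is the Gram matrix of Phi
```

The point of the trick is that the right-hand side never needs to be formed: for kernels such as the RBF kernel the feature space is infinite-dimensional, yet the left-hand side remains cheap to evaluate.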
1905.00794
2943251453
In this paper, we propose a speed-up approach for subclass discriminant analysis and formulate a novel efficient multi-view solution to it. The speed-up approach is developed based on graph embedding and spectral regression approaches that involve eigendecomposition of the corresponding Laplacian matrix and regression to its eigenvectors. We show that by exploiting the structure of the between-class Laplacian matrix, the eigendecomposition step can be substituted with a much faster process. Furthermore, we formulate a novel criterion for multi-view subclass discriminant analysis and show that an efficient solution for it can be obtained in a manner similar to the single-view case. We evaluate the proposed methods on nine single-view and nine multi-view datasets and compare them with related existing approaches. Experimental results show that the proposed solutions achieve competitive performance, often outperforming the existing methods. At the same time, they significantly decrease the training time.
The kernelization of SDA can be easily obtained by exploiting the modified representations of @math and @math (7) @cite_17 . Here, we can assume that the data is centered in @math . The kernel matrix of the centered data can be obtained as in (12) @cite_48 , where @math is a vector of ones.
{ "cite_N": [ "@cite_48", "@cite_17" ], "mid": [ "2140095548", "2047737850" ], "abstract": [ "A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map—for instance, the space of all possible five-pixel products in 16 × 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.", "In order to overcome the restricts of linear discriminant analysis (LDA), such as multivariate Normal distributed classes with equal covariance matrix but different means and the single-cluster structure in each class, subclass discriminant analysis (SDA) is proposed recently. In this paper the kernel SDA is presented, called KSDA. Moreover, we reformulate SDA so as to avoid the complicated derivation in the feature space. The encouraging experimental results on eight UCI data sets demonstrate the efficiency of our method." ] }
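The kernel-centering step referred to above (Eq. (12) of the cited work) subtracts the feature-space mean implicitly, using only the kernel matrix and a vector of ones. A toy numpy sketch follows; with a linear kernel the result can be verified against explicitly centered data, which is the assumption behind the final check.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(6, 3))            # 6 samples; linear kernel, so F = R^3 here
K = X @ X.T                            # uncentered kernel (Gram) matrix
n = K.shape[0]
O = np.ones((n, n)) / n                # (1/N) * ones * ones^T
Kc = K - O @ K - K @ O + O @ K @ O     # centered kernel matrix

# for a linear kernel, centering in F equals centering the data itself
Xc = X - X.mean(axis=0)
assert np.allclose(Kc, Xc @ Xc.T)
```

A useful side effect: the rows (and columns) of a centered kernel matrix sum to zero, since the mapped, centered data sum to the zero vector.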
1905.00794
2943251453
In this paper, we propose a speed-up approach for subclass discriminant analysis and formulate a novel efficient multi-view solution to it. The speed-up approach is developed based on graph embedding and spectral regression approaches that involve eigendecomposition of the corresponding Laplacian matrix and regression to its eigenvectors. We show that by exploiting the structure of the between-class Laplacian matrix, the eigendecomposition step can be substituted with a much faster process. Furthermore, we formulate a novel criterion for multi-view subclass discriminant analysis and show that an efficient solution for it can be obtained in a manner similar to the single-view case. We evaluate the proposed methods on nine single-view and nine multi-view datasets and compare them with related existing approaches. Experimental results show that the proposed solutions achieve competitive performance, often outperforming the existing methods. At the same time, they significantly decrease the training time.
In multi-view learning, the data @math is described from @math views, and we seek to find @math matrices @math that project the data @math from all views @math to a common (latent) space in which the separability between the classes is the highest. A generalized framework for multi-view subspace learning, which includes many of the existing methods as special cases, was proposed in @cite_0 . Here, the optimization problem is defined as follows, where @math and @math are the inter-view and intra-view covariance matrices. The solution is obtained by solving the generalized eigendecomposition problem, where @math is the projection matrix of the view @math . The feature vectors in the latent space are obtained as @math , where @math is the data representation in the view @math . Here, in the corresponding definitions, @math is either @math or @math , as defined below; @math and @math are the view labels, and @math is the number of views.
{ "cite_N": [ "@cite_0" ], "mid": [ "2414522539" ], "abstract": [ "In this paper, the problem of multi-view embedding from different visual cues and modalities is considered. We propose a unified solution for subspace learning methods using the Rayleigh quotient, which is extensible for multiple views, supervised learning, and nonlinear embeddings. Numerous methods including canonical correlation analysis, partial least square regression, and linear discriminant analysis are studied using specific intrinsic and penalty graphs within the same framework. Nonlinear extensions based on kernels and (deep) neural networks are derived, achieving better performance than the linear ones. Moreover, a novel multi-view modular discriminant analysis is proposed by taking the view difference into consideration. We demonstrate the effectiveness of the proposed multi-view embedding methods on visual object recognition and cross-modal image retrieval, and obtain superior results in both applications compared to related methods." ] }
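The generalized eigendecomposition at the core of the multi-view criterion (A w = lambda B w for two covariance-like matrices) can be solved with a standard Cholesky-whitening reduction to an ordinary symmetric eigenproblem. A minimal numpy sketch, assuming a symmetric A and a positive-definite B (both names are mine, not the paper's):

```python
import numpy as np

def generalized_eigh(A, B):
    """Solve A w = lambda B w for symmetric A and SPD B via Cholesky whitening."""
    L = np.linalg.cholesky(B)          # B = L L^T
    Li = np.linalg.inv(L)
    vals, U = np.linalg.eigh(Li @ A @ Li.T)   # ordinary symmetric eigenproblem
    W = Li.T @ U                       # back-transform the eigenvectors
    return vals, W

rng = np.random.default_rng(2)
M = rng.normal(size=(4, 4))
A = M + M.T                            # symmetric
B = M @ M.T + 4 * np.eye(4)            # symmetric positive definite
vals, W = generalized_eigh(A, B)
```

Each column of W then satisfies the generalized eigenvalue equation, which is easy to verify directly.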
1905.00794
2943251453
In this paper, we propose a speed-up approach for subclass discriminant analysis and formulate a novel efficient multi-view solution to it. The speed-up approach is developed based on graph embedding and spectral regression approaches that involve eigendecomposition of the corresponding Laplacian matrix and regression to its eigenvectors. We show that by exploiting the structure of the between-class Laplacian matrix, the eigendecomposition step can be substituted with a much faster process. Furthermore, we formulate a novel criterion for multi-view subclass discriminant analysis and show that an efficient solution for it can be obtained in a manner similar to the single-view case. We evaluate the proposed methods on nine single-view and nine multi-view datasets and compare them with related existing approaches. Experimental results show that the proposed solutions achieve competitive performance, often outperforming the existing methods. At the same time, they significantly decrease the training time.
In this section, we focus on the spectral regression approach that was introduced as a way of speeding up the eigendecomposition step of LDA @cite_35 . It has been shown that the solution of the generalized eigendecomposition problem (10) is equivalent to the problem @math with the same eigenpairs, for @math and @math = @math . Exploiting this fact, the solution of (10) can be obtained by solving an eigenvalue decomposition problem @math and finding a @math such that @math . In practice, such @math may not always exist, but it can be approximated with the closest value in the least-squares sense, where @math is a regularization parameter and @math = @math .
{ "cite_N": [ "@cite_35" ], "mid": [ "2106253207" ], "abstract": [ "Linear Discriminant Analysis (LDA) has been a popular method for extracting features that preserves class separability. The projection functions of LDA are commonly obtained by maximizing the between-class covariance and simultaneously minimizing the within-class covariance. It has been widely used in many fields of information processing, such as machine learning, data mining, information retrieval, and pattern recognition. However, the computation of LDA involves dense matrices eigendecomposition, which can be computationally expensive in both time and memory. Specifically, LDA has O(mnt + t3) time complexity and requires O(mn + mt + nt) memory, where m is the number of samples, n is the number of features, and t = min(m,n). When both m and n are large, it is infeasible to apply LDA. In this paper, we propose a novel algorithm for discriminant analysis, called Spectral Regression Discriminant Analysis (SRDA). By using spectral graph analysis, SRDA casts discriminant analysis into a regression framework that facilitates both efficient computation and the use of regularization techniques. Specifically, SRDA only needs to solve a set of regularized least squares problems, and there is no eigenvector computation involved, which is a huge save of both time and memory. Our theoretical analysis shows that SRDA can be computed with O(mn) time and O(ms) memory, where .s(les n) is the average number of nonzero features in each sample. Extensive experimental results on four real-world data sets demonstrate the effectiveness and efficiency of our algorithm." ] }
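The regression step of spectral regression described above (a regularized least-squares fit of a projection vector to the target eigenvector) can be sketched as below. The function name, the data layout (features in rows), and the toy dimensions are my assumptions; with a vanishing regularizer the result coincides with the plain least-squares solution.

```python
import numpy as np

def regression_step(X, t, alpha=1e-8):
    # ridge-regularized least squares: w = (X X^T + alpha I)^{-1} X t,
    # chosen so that X^T w approximates the target eigenvector t
    d = X.shape[0]
    return np.linalg.solve(X @ X.T + alpha * np.eye(d), X @ t)

rng = np.random.default_rng(3)
X = rng.normal(size=(3, 6))            # d = 3 features, n = 6 samples
t = rng.normal(size=6)                 # a target embedding (eigenvector)
w = regression_step(X, t)
w_ref = np.linalg.lstsq(X.T, t, rcond=None)[0]   # unregularized reference
```

For a tiny alpha the two solutions agree to numerical precision, which illustrates why the regularizer can be treated as a free parameter rather than a change of objective.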
1905.00794
2943251453
In this paper, we propose a speed-up approach for subclass discriminant analysis and formulate a novel efficient multi-view solution to it. The speed-up approach is developed based on graph embedding and spectral regression approaches that involve eigendecomposition of the corresponding Laplacian matrix and regression to its eigenvectors. We show that by exploiting the structure of the between-class Laplacian matrix, the eigendecomposition step can be substituted with a much faster process. Furthermore, we formulate a novel criterion for multi-view subclass discriminant analysis and show that an efficient solution for it can be obtained in a manner similar to the single-view case. We evaluate the proposed methods on nine single-view and nine multi-view datasets and compare them with related existing approaches. Experimental results show that the proposed solutions achieve competitive performance, often outperforming the existing methods. At the same time, they significantly decrease the training time.
Spectral Regression Discriminant Analysis (SRDA) was proposed as an extension to LDA based on spectral regression @cite_35 . It has been shown that in the case of LDA the matrix @math (33) has @math eigenvectors corresponding to nonzero eigenvalues, all of which correspond to the eigenvalue 1 and have the form given in (37), where @math is the class label, @math is the number of samples in class @math , and @math is the number of classes. Therefore, the solution can be obtained by selecting the vector of ones as the first eigenvector and obtaining the rest by orthogonalizing vectors of the structure given in (37). A tensor extension to SRDA has recently been proposed in @cite_46 , where the eigendecomposition problem of Higher Order Discriminant Analysis is transformed into a regression problem.
{ "cite_N": [ "@cite_35", "@cite_46" ], "mid": [ "2106253207", "2613951549" ], "abstract": [ "Linear Discriminant Analysis (LDA) has been a popular method for extracting features that preserves class separability. The projection functions of LDA are commonly obtained by maximizing the between-class covariance and simultaneously minimizing the within-class covariance. It has been widely used in many fields of information processing, such as machine learning, data mining, information retrieval, and pattern recognition. However, the computation of LDA involves dense matrices eigendecomposition, which can be computationally expensive in both time and memory. Specifically, LDA has O(mnt + t3) time complexity and requires O(mn + mt + nt) memory, where m is the number of samples, n is the number of features, and t = min(m,n). When both m and n are large, it is infeasible to apply LDA. In this paper, we propose a novel algorithm for discriminant analysis, called Spectral Regression Discriminant Analysis (SRDA). By using spectral graph analysis, SRDA casts discriminant analysis into a regression framework that facilitates both efficient computation and the use of regularization techniques. Specifically, SRDA only needs to solve a set of regularized least squares problems, and there is no eigenvector computation involved, which is a huge save of both time and memory. Our theoretical analysis shows that SRDA can be computed with O(mn) time and O(ms) memory, where .s(les n) is the average number of nonzero features in each sample. Extensive experimental results on four real-world data sets demonstrate the effectiveness and efficiency of our algorithm.", "Abstract Tensors are valuable tools to represent Electroencephalogram (EEG) data. 
Tucker decomposition is the most used tensor decomposition in multidimensional discriminant analysis and tensor extension of Linear Discriminant Analysis (LDA), called Higher Order Discriminant Analysis (HODA), is a popular tensor discriminant method used for analyzing Event Related Potentials (ERP). In this paper, we introduce a new tensor-based feature reduction technique, named Higher Order Spectral Regression Discriminant Analysis (HOSRDA), for use in a classification framework for ERP detection. The proposed method (HOSRDA) is a tensor extension of Spectral Regression Discriminant Analysis (SRDA) and casts the eigenproblem of HODA to a regression problem. The formulation of HOSRDA can open a new framework for adding different regularization constraints in higher order feature reduction problem. Additionally, when the dimension and number of samples is very large, the regression problem can be solved via efficient iterative algorithms. We applied HOSRDA on data of a P300 speller from BCI competition III and reached average character detection accuracy of 96.5 for the two subjects. HOSRDA outperforms almost all of other reported methods on this dataset. Additionally, the results of our method are fairly comparable with those of other methods when 5 and 10 repetitions are used in the P300 speller paradigm." ] }
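The structure of the SRDA target vectors sketched above (class-indicator vectors orthogonalized against the all-ones vector) can be illustrated as follows. This is a toy Gram-Schmidt construction under my own naming, not the cited implementation; only C-1 informative targets survive after the trivial ones vector is discarded.

```python
import numpy as np

def srda_targets(labels):
    """Class-indicator targets orthogonalized against the all-ones vector,
    mirroring the eigenvector structure used by spectral regression for LDA."""
    labels = np.asarray(labels)
    n = len(labels)
    basis = [np.ones(n)]                    # the trivial first eigenvector
    for c in np.unique(labels)[:-1]:        # C - 1 informative targets
        y = (labels == c).astype(float)     # indicator of class c
        for q in basis:                     # classical Gram-Schmidt step
            y = y - (y @ q) / (q @ q) * q
        basis.append(y)
    return np.stack(basis[1:])              # drop the all-ones vector

T = srda_targets([0, 0, 1, 1, 2, 2])
```

The resulting targets are orthogonal to the ones vector and to each other, which is exactly what allows the eigendecomposition to be skipped in favor of direct construction.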
1905.00794
2943251453
In this paper, we propose a speed-up approach for subclass discriminant analysis and formulate a novel efficient multi-view solution to it. The speed-up approach is developed based on graph embedding and spectral regression approaches that involve eigendecomposition of the corresponding Laplacian matrix and regression to its eigenvectors. We show that by exploiting the structure of the between-class Laplacian matrix, the eigendecomposition step can be substituted with a much faster process. Furthermore, we formulate a novel criterion for multi-view subclass discriminant analysis and show that an efficient solution for it can be obtained in a manner similar to the single-view case. We evaluate the proposed methods on nine single-view and nine multi-view datasets and compare them with related existing approaches. Experimental results show that the proposed solutions achieve competitive performance, often outperforming the existing methods. At the same time, they significantly decrease the training time.
A kernelized version of the spectral regression was proposed in @cite_28 . In this case, the objective is to solve the eigendecomposition problem @math , which is equivalent to solving the eigendecomposition problem of @math given @math . Then, kernel regression is applied to obtain the solution, where @math is the regularization parameter.
{ "cite_N": [ "@cite_28" ], "mid": [ "2085044339" ], "abstract": [ "Linear discriminant analysis (LDA) has been a popular method for dimensionality reduction, which preserves class separability. The projection vectors are commonly obtained by maximizing the between-class covariance and simultaneously minimizing the within-class covariance. LDA can be performed either in the original input space or in the reproducing kernel Hilbert space (RKHS) into which data points are mapped, which leads to kernel discriminant analysis (KDA). When the data are highly nonlinear distributed, KDA can achieve better performance than LDA. However, computing the projective functions in KDA involves eigen-decomposition of kernel matrix, which is very expensive when a large number of training samples exist. In this paper, we present a new algorithm for kernel discriminant analysis, called Spectral Regression Kernel Discriminant Analysis (SRKDA). By using spectral graph analysis, SRKDA casts discriminant analysis into a regression framework, which facilitates both efficient computation and the use of regularization techniques. Specifically, SRKDA only needs to solve a set of regularized regression problems, and there is no eigenvector computation involved, which is a huge save of computational cost. The new formulation makes it very easy to develop incremental version of the algorithm, which can fully utilize the computational results of the existing training samples. Moreover, it is easy to produce sparse projections (Sparse KDA) with a L 1-norm regularizer. Extensive experiments on spoken letter, handwritten digit image and face image data demonstrate the effectiveness and efficiency of the proposed algorithm." ] }
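The kernel regression step above reduces to solving a regularized linear system in the kernel matrix for the expansion coefficients. A toy sketch (RBF kernel, numpy; the function names and the tiny dimensions are illustrative assumptions):

```python
import numpy as np

def rbf_kernel(X, gamma=0.5):
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * D2)

def kernel_regression_step(K, t, delta=1e-8):
    """Solve (K + delta I) a = t; a holds the kernel expansion coefficients."""
    return np.linalg.solve(K + delta * np.eye(K.shape[0]), t)

rng = np.random.default_rng(4)
X = rng.normal(size=(5, 2))
K = rbf_kernel(X)                      # positive definite for distinct points
t = rng.normal(size=5)                 # target eigenvector
a = kernel_regression_step(K, t)
```

Since the residual equals -delta * a, for a small regularizer the fitted values K a reproduce the target eigenvector almost exactly.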
1905.00794
2943251453
In this paper, we propose a speed-up approach for subclass discriminant analysis and formulate a novel efficient multi-view solution to it. The speed-up approach is developed based on graph embedding and spectral regression approaches that involve eigendecomposition of the corresponding Laplacian matrix and regression to its eigenvectors. We show that by exploiting the structure of the between-class Laplacian matrix, the eigendecomposition step can be substituted with a much faster process. Furthermore, we formulate a novel criterion for multi-view subclass discriminant analysis and show that an efficient solution for it can be obtained in a manner similar to the single-view case. We evaluate the proposed methods on nine single-view and nine multi-view datasets and compare them with related existing approaches. Experimental results show that the proposed solutions achieve competitive performance, often outperforming the existing methods. At the same time, they significantly decrease the training time.
For large-scale datasets, the kernel regression method can be substituted with an approximate kernel regression, where @math is expressed as a linear combination of @math reference vectors @math @cite_19 . We define @math , where @math is a set of reference vectors in @math . The reference vectors in @math correspond to @math prototype vectors from @math that can be randomly selected training vectors from @math , random data following the same distribution as the data in @math , subclass centers obtained by clustering all data, or subclass centers obtained by clustering the data in each subclass separately.
{ "cite_N": [ "@cite_19" ], "mid": [ "2472954743" ], "abstract": [ "In this paper, a novel approximate solution of the criterion used in non-linear class-specific discriminant subspace learning is proposed. We build on the class-specific kernel spectral regression method, which is a two-step process formed by an eigenanalysis step and a kernel regression step. Based on the structure of the intra-class and out-of-class scatter matrices, we provide a fast solution for the first step. For the second step, we propose the use of approximate kernel space definitions. We analytically show that the adoption of randomized and class-specific kernels has the effect of regularization and Nystrom-based approximation, respectively. We evaluate the proposed approach in face verification problems and compare it with the existing approaches. Experimental results show the effectiveness and efficiency of the proposed approximate class-specific kernel spectral regression method, since it can provide satisfactory performance and scale well with the size of the data." ] }
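One possible reading of the reference-vector approximation is a Nystrom-style reduced kernel regression: the n x m kernel matrix between the data and the reference vectors replaces the full n x n one. The sketch below is a hypothetical illustration (random reference vectors, RBF kernel, my own variable names), not the cited method's exact formulation.

```python
import numpy as np

def rbf(A, B, gamma=0.5):
    D2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * D2)

rng = np.random.default_rng(5)
X = rng.normal(size=(50, 2))                 # training data
t = rng.normal(size=50)                      # target eigenvector
m = 10                                       # number of reference vectors
Z = X[rng.choice(len(X), size=m, replace=False)]   # randomly selected prototypes
Psi = rbf(X, Z)                              # n x m reduced kernel matrix
delta = 1e-3
# regularized least squares in the reduced m-dimensional expansion
a = np.linalg.solve(Psi.T @ Psi + delta * np.eye(m), Psi.T @ t)
pred = Psi @ a                               # approximate projection of the data
```

The linear system here is only m x m, which is the source of the scalability: m stays fixed while n grows.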
1905.00794
2943251453
In this paper, we propose a speed-up approach for subclass discriminant analysis and formulate a novel efficient multi-view solution to it. The speed-up approach is developed based on graph embedding and spectral regression approaches that involve eigendecomposition of the corresponding Laplacian matrix and regression to its eigenvectors. We show that by exploiting the structure of the between-class Laplacian matrix, the eigendecomposition step can be substituted with a much faster process. Furthermore, we formulate a novel criterion for multi-view subclass discriminant analysis and show that an efficient solution for it can be obtained in a manner similar to the single-view case. We evaluate the proposed methods on nine single-view and nine multi-view datasets and compare them with related existing approaches. Experimental results show that the proposed solutions achieve competitive performance, often outperforming the existing methods. At the same time, they significantly decrease the training time.
The above-described process for solving the SDA optimization problem provides several advantages. Firstly, as we will show in the next section, the eigendecomposition step (33) can be substituted with a much faster process. Secondly, the eigendecomposition step (10) or (17) is avoided and substituted with the least squares regression, for which several efficient solutions exist @cite_44 .
{ "cite_N": [ "@cite_44" ], "mid": [ "2097897435" ], "abstract": [ "An iterative method is given for solving Ax ffi b and minU Ax b 112, where the matrix A is large and sparse. The method is based on the bidiagonalization procedure of Golub and Kahan. It is analytically equivalent to the standard method of conjugate gradients, but possesses more favorable numerical properties. Reliable stopping criteria are derived, along with estimates of standard errors for x and the condition number of A. These are used in the FORTRAN implementation of the method, subroutine LSQR. Numerical tests are described comparing I QR with several other conjugate-gradient algorithms, indicating that I QR is the most reliable algorithm when A is ill-conditioned." ] }
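As an example of the efficient least-squares solvers alluded to above (the cited work describes LSQR), a short conjugate-gradient least-squares (CGLS) routine can be sketched; like LSQR it avoids forming the normal-equations matrix explicitly and is therefore suited to large sparse problems. This is a generic textbook-style sketch, not the LSQR algorithm itself.

```python
import numpy as np

def cgls(A, b, iters=100, tol=1e-12):
    """Conjugate-gradient least squares: minimize ||A x - b||_2 using only
    products with A and A^T (never forming A^T A)."""
    x = np.zeros(A.shape[1])
    r = b - A @ x                      # residual in the data space
    s = A.T @ r                        # gradient direction
    p = s.copy()
    gamma = s @ s
    for _ in range(iters):
        q = A @ p
        alpha = gamma / (q @ q)
        x = x + alpha * p
        r = r - alpha * q
        s = A.T @ r
        gamma_new = s @ s
        if gamma_new < tol:            # normal-equations residual is tiny: done
            break
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x

rng = np.random.default_rng(6)
A = rng.normal(size=(30, 4))           # overdetermined system
b = rng.normal(size=30)
x_ref = np.linalg.lstsq(A, b, rcond=None)[0]
```

For a full-rank A with few columns the iteration converges to the lstsq solution in at most as many steps as there are columns.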
1905.00741
2943584704
Training agents with reinforcement learning-based techniques requires thousands of steps, which translates to long training periods when applied to robots. By training the policy in a simulated environment we avoid this limitation. Typically, the action spaces in a simulation and a real robot are kept as similar as possible, but if we want to use a generic simulation environment, this strategy will not work. Video games, such as Doom (1993), offer crude but multi-purpose environments that can be used for learning various tasks. However, the original Doom has four discrete actions for movement, while the robot in our case has two continuous actions. In this work, we study the transfer between these two different action spaces. We begin with experiments in a simulated environment, after which we validate the results with experiments on a real robot. Results show that fine-tuning the initially learned network parameters leads to unreliable results, but by keeping most of the neural network frozen we obtain a success rate above @math in both simulation and real robot experiments.
Our work is closely related to the experiments conducted by Rusu in @cite_6 , where the authors presented a neural network architecture that was able to quickly learn to play a new Atari game once it had been trained on another game. While the methods defined in their work are similar to ours, we focus on the same task under different action spaces, rather than on transferring skills between different tasks. Gupta @cite_20 presented a method to learn a feature extraction using multiple skills, by finding which skill in one domain is the closest match to a skill in another domain, and demonstrated the effectiveness of the method by transferring learned skills between two morphologically different robots with different numbers of joints.
{ "cite_N": [ "@cite_20", "@cite_6" ], "mid": [ "2605368761", "2426267443" ], "abstract": [ "People can learn a wide range of tasks from their own experience, but can also learn from observing other creatures. This can accelerate acquisition of new skills even when the observed agent differs substantially from the learning agent in terms of morphology. In this paper, we examine how reinforcement learning algorithms can transfer knowledge between morphologically different agents (e.g., different robots). We introduce a problem formulation where twp agents are tasked with learning multiple skills by sharing information. Our method uses the skills that were learned by both agents to train invariant feature spaces that can then be used to transfer other skills from one agent to another. The process of learning these invariant feature spaces can be viewed as a kind of analogy making,'' or implicit learning of partial correspondences between two distinct domains. We evaluate our transfer learning algorithm in two simulated robotic manipulation skills, and illustrate that we can transfer knowledge between simulated robotic arms with different numbers of links, as well as simulated arms with different actuation mechanisms, where one robot is torque-driven while the other is tendon-driven.", "Methods and systems for performing a sequence of machine learning tasks. 
One system includes a sequence of deep neural networks (DNNs), including: a first DNN corresponding to a first machine learning task, wherein the first DNN comprises a first plurality of indexed layers, and each layer in the first plurality of indexed layers is configured to receive a respective layer input and process the layer input to generate a respective layer output; and one or more subsequent DNNs corresponding to one or more respective machine learning tasks, wherein each subsequent DNN comprises a respective plurality of indexed layers, and each layer in a respective plurality of indexed layers with index greater than one receives input from a preceding layer of the respective subsequent DNN, and one or more preceding layers of respective preceding DNNs, wherein a preceding layer is a layer whose index is one less than the current index." ] }
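The "keep most of the network frozen" strategy from the abstract can be illustrated with a toy example: a tiny network whose feature layer stands in for pre-trained weights and is never updated, while only the output head is trained. A minimal numpy sketch; the network, data, and sizes are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# W1 plays the role of a pre-trained feature layer (frozen); only the
# head w2 is adapted to the new task / action space.
W1 = rng.standard_normal((8, 4))            # frozen feature layer
w2 = np.zeros(8)                            # trainable output head

X = rng.standard_normal((200, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)   # toy binary target

def features(Z):
    return np.tanh(Z @ W1.T)                # W1 never receives an update

lr = 0.5
for _ in range(300):
    H = features(X)
    p = 1.0 / (1.0 + np.exp(-(H @ w2)))     # sigmoid head
    w2 -= lr * H.T @ (p - y) / len(y)       # gradient step on the head only

acc = np.mean(((features(X) @ w2) > 0) == (y > 0.5))
print(acc)
```

In an autograd framework the same effect is obtained by setting `requires_grad = False` (or an equivalent flag) on the frozen layers, so the optimizer only touches the head.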
1905.00741
2943584704
Training agents with reinforcement learning based techniques requires thousands of steps, which translates to long training periods when applied to robots. By training the policy in a simulated environment, we avoid this limitation. Typically, the action spaces in simulation and on the real robot are kept as similar as possible, but if we want to use a generic simulation environment, this strategy will not work. Video games, such as Doom (1993), offer crude but multi-purpose environments that can be used for learning various tasks. However, the original Doom has four discrete actions for movement, while the robot in our case has two continuous actions. In this work, we study the transfer between these two different action spaces. We begin with experiments in a simulated environment, after which we validate the results with experiments on a real robot. Results show that fine-tuning the initially learned network parameters leads to unreliable results, but by keeping most of the neural network frozen we obtain a success rate above @math in both simulation and real-robot experiments.
This work was motivated by the popularity of reinforcement learning in robotics, despite RL being known to require a large number of training samples, which makes it difficult to apply to robots @cite_17 . Part of this work focuses on training policies in simulation and then transferring them to a real robot, with or without further training on the robot @cite_18 . Such work focuses, e.g., on learning models that predict real-world dynamics @cite_23 or on the use of high-fidelity simulations that are tuned to match the real world @cite_8 @cite_17 .
{ "cite_N": [ "@cite_8", "@cite_18", "@cite_23", "@cite_17" ], "mid": [ "2963184939", "2097381042", "2911087563", "2963428623" ], "abstract": [ "", "The reinforcement learning paradigm is a popular way to address problems that have only limited environmental feedback, rather than correctly labeled examples, as is common in other machine learning contexts. While significant progress has been made to improve learning in a single task, the idea of transfer learning has only recently been applied to reinforcement learning tasks. The core idea of transfer is that experience gained in learning to perform one task can help improve learning performance in a related, but different, task. In this article we present a framework that classifies transfer learning methods in terms of their capabilities and goals, and then use it to survey the existing literature, as well as to suggest future directions for transfer learning work.", "Legged robots pose one of the greatest challenges in robotics. Dynamic and agile maneuvers of animals cannot be imitated by existing methods that are crafted by humans. A compelling alternative is reinforcement learning, which requires minimal craftsmanship and promotes the natural evolution of a control policy. However, so far, reinforcement learning research for legged robots is mainly limited to simulation, and only few and comparably simple examples have been deployed on real systems. The primary reason is that training with real robots, particularly with dynamically balancing systems, is complicated and expensive. In the present work, we introduce a method for training a neural network policy in simulation and transferring it to a state-of-the-art legged system, thereby leveraging fast, automated, and cost-effective data generation schemes. The approach is applied to the ANYmal robot, a sophisticated medium-dog–sized quadrupedal system. 
Using policies trained in simulation, the quadrupedal machine achieves locomotion skills that go beyond what had been achieved with prior methods: ANYmal is capable of precisely and energy-efficiently following high-level body velocity commands, running faster than before, and recovering from falling even in complex configurations.", "We present a learning-based mapless motion planner by taking the sparse 10-dimensional range findings and the target position with respect to the mobile robot coordinate frame as input and the continuous steering commands as output. Traditional motion planners for mobile ground robots with a laser range sensor mostly depend on the obstacle map of the navigation environment where both the highly precise laser sensor and the obstacle map building work of the environment are indispensable. We show that, through an asynchronous deep reinforcement learning method, a mapless motion planner can be trained end-to-end without any manually designed features and prior demonstrations. The trained planner can be directly applied in unseen virtual and real environments. The experiments show that the proposed mapless motion planner can navigate the nonholonomic mobile robot to the desired targets without colliding with any obstacles." ] }
1905.00741
2943584704
Training agents with reinforcement learning based techniques requires thousands of steps, which translates to long training periods when applied to robots. By training the policy in a simulated environment, we avoid this limitation. Typically, the action spaces in simulation and on the real robot are kept as similar as possible, but if we want to use a generic simulation environment, this strategy will not work. Video games, such as Doom (1993), offer crude but multi-purpose environments that can be used for learning various tasks. However, the original Doom has four discrete actions for movement, while the robot in our case has two continuous actions. In this work, we study the transfer between these two different action spaces. We begin with experiments in a simulated environment, after which we validate the results with experiments on a real robot. Results show that fine-tuning the initially learned network parameters leads to unreliable results, but by keeping most of the neural network frozen we obtain a success rate above @math in both simulation and real-robot experiments.
Domain randomization is one such technique for learning real-world policies in simulation. It entails randomizing the simulation in different ways, so that the learned policy has to generalize over different system dynamics. This includes randomizing the visual appearance of the simulation @cite_10 , randomizing the dynamics such as friction @cite_22 , and/or including limitations of the robot such as delays between decided actions and actuated actions @cite_2 .
{ "cite_N": [ "@cite_10", "@cite_22", "@cite_2" ], "mid": [ "2605102758", "", "2591697182" ], "abstract": [ "Bridging the ‘reality gap’ that separates simulated robotics from experiments on hardware could accelerate robotic research through improved data availability. This paper explores domain randomization, a simple technique for training models on simulated images that transfer to real images by randomizing rendering in the simulator. With enough variability in the simulator, the real world may appear to the model as just another variation. We focus on the task of object localization, which is a stepping stone to general robotic manipulation skills. We find that it is possible to train a real-world object detector that is accurate to 1.5 cm and robust to distractors and partial occlusions using only data from a simulator with non-realistic random textures. To demonstrate the capabilities of our detectors, we show they can be used to perform grasping in a cluttered environment. To our knowledge, this is the first successful transfer of a deep neural network trained only on simulated RGB images (without pre-training on real images) to the real world for the purpose of robotic control.", "", "In this work we propose an approach to learn a robust policy for solving the pivoting task. Recently, several model-free continuous control algorithms were shown to learn successful policies without prior knowledge of the dynamics of the task. However, obtaining successful policies required thousands to millions of training episodes, limiting the applicability of these approaches to real hardware. We developed a training procedure that allows us to use a simple custom simulator to learn policies robust to the mismatch of simulation vs robot. In our experiments, we demonstrate that the policy learned in the simulator is able to pivot the object to the desired target angle on the real robot. 
We also show generalization to an object with different inertia, shape, mass and friction properties than those used during training. This result is a step towards making model-free reinforcement learning available for solving robotics tasks via pre-training in simulators that offer only an imprecise match to the real-world dynamics." ] }
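The randomization of friction and actuation delay described above can be sketched with a toy 1-D point-mass task; the controller, gains, and parameter ranges below are invented for illustration only:

```python
import numpy as np

rng = np.random.default_rng(2)

def episode(kp, friction, delay, steps=400, dt=0.05):
    """Toy 1-D point mass driven toward the origin by a P-controller.
    Friction and the actuation delay are the randomized quantities."""
    x, v = 1.0, 0.0
    buf = [0.0] * (delay + 1)       # decided actions arrive `delay` steps late
    for _ in range(steps):
        buf.append(-kp * x)         # decided action
        a = buf.pop(0)              # actuated action
        v += (a - friction * v) * dt
        x += v * dt
    return abs(x) < 0.05            # success: settled near the target

# One fixed controller evaluated under randomized dynamics
# (domain randomization over friction and delay):
successes = [
    episode(kp=2.0,
            friction=rng.uniform(0.5, 2.0),   # randomized friction
            delay=int(rng.integers(0, 4)))    # randomized action delay
    for _ in range(100)
]
print(np.mean(successes))
```

A policy trained against such randomized episodes has to work for every draw of the dynamics, which is the mechanism by which domain randomization aims to cover the real system as "just another variation".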
1905.00741
2943584704
Training agents with reinforcement learning based techniques requires thousands of steps, which translates to long training periods when applied to robots. By training the policy in a simulated environment, we avoid this limitation. Typically, the action spaces in simulation and on the real robot are kept as similar as possible, but if we want to use a generic simulation environment, this strategy will not work. Video games, such as Doom (1993), offer crude but multi-purpose environments that can be used for learning various tasks. However, the original Doom has four discrete actions for movement, while the robot in our case has two continuous actions. In this work, we study the transfer between these two different action spaces. We begin with experiments in a simulated environment, after which we validate the results with experiments on a real robot. Results show that fine-tuning the initially learned network parameters leads to unreliable results, but by keeping most of the neural network frozen we obtain a success rate above @math in both simulation and real-robot experiments.
Video games can be, and have been, used as benchmark environments for different learning techniques (e.g., Atari games @cite_19 , Doom @cite_9 , StarCraft @cite_16 , and Toribash @cite_5 ). They provide a wide range of complex tasks and environments that were originally designed for human players. While they lack the fidelity of accurate physics simulations, they emulate the real world to an extent that human players are comfortable with.
{ "cite_N": [ "@cite_19", "@cite_9", "@cite_16", "@cite_5" ], "mid": [ "2150468603", "2963871073", "2547416798", "2883192382" ], "abstract": [ "In this article we introduce the Arcade Learning Environment (ALE): both a challenge problem and a platform and methodology for evaluating the development of general, domain-independent AI technology. ALE provides an interface to hundreds of Atari 2600 game environments, each one different, interesting, and designed to be a challenge for human players. ALE presents significant research challenges for reinforcement learning, model learning, model-based planning, imitation learning, transfer learning, and intrinsic motivation. Most importantly, it provides a rigorous testbed for evaluating and comparing approaches to these problems. We illustrate the promise of ALE by developing and benchmarking domain-independent agents designed using well-established AI techniques for both reinforcement learning and planning. In doing so, we also propose an evaluation methodology made possible by ALE, reporting empirical results on over 55 different games. All of the software, including the benchmark agents, is publicly available.", "The recent advances in deep neural networks have led to effective vision-based reinforcement learning methods that have been employed to obtain human-level controllers in Atari 2600 games from pixel data. Atari 2600 games, however, do not resemble real-world tasks since they involve non-realistic 2D environments and the third-person perspective. Here, we propose a novel test-bed platform for reinforcement learning research from raw visual information which employs the first-person perspective in a semi-realistic 3D world. The software, called ViZDoom, is based on the classical first-person shooter video game, Doom. It allows developing bots that play the game using the screen buffer. ViZDoom is lightweight, fast, and highly customizable via a convenient mechanism of user scenarios. 
In the experimental part, we test the environment by trying to learn bots for two scenarios: a basic move-and-shoot task and a more complex maze-navigation problem. Using convolutional deep neural networks with Q-learning and experience replay, for both scenarios, we were able to train competent bots, which exhibit human-like behaviors. The results confirm the utility of ViZDoom as an AI research platform and imply that visual reinforcement learning in 3D realistic first-person perspective environments is feasible.", "We present TorchCraft, a library that enables deep learning research on Real-Time Strategy (RTS) games such as StarCraft: Brood War, by making it easier to control these games from a machine learning framework, here Torch [9]. This white paper argues for using RTS games as a benchmark for AI research, and describes the design and components of TorchCraft.", "We present Toribash Learning Environment (ToriLLE), a learning environment for machine learning agents based on the video game Toribash. Toribash is a MuJoCo-like environment of two humanoid character fighting each other hand-to-hand, controlled by changing actuation modes of the joints. Competitive nature of Toribash as well its focused domain provide a platform for evaluating self-play methods, and evaluating machine learning agents against human players. In this paper we describe the environment with ToriLLE's capabilities and limitations, and experimentally show its applicability as a learning environment. The source code of the environment and conducted experiments can be found at this https URL." ] }
1905.00724
2943064116
We introduce KnowBias, a system for detecting the degree of political bias in textual content such as social media posts and news articles. In the space of scalable text classification, a common problem is domain mismatch, where easily accessible training data (i.e., tweets) does not correspond in format to the desired testing domain (i.e., longer-form article content). While universal text encoders such as word or sentence embeddings could be leveraged to train target-agnostic classifiers, such schemes result in poor performance on long-form articles. Our key insight is that long-form articles are a mix of neutral and political sentences, while tweets are concentrated with opinion. We propose a two-step classification system that first automatically filters out neutral sentences from the input text document at evaluation time, and then inputs the resulting text into a polarity classifier. We evaluate our two-step approach using a variety of test suites, including a set of tweets and long-form articles where annotations were crowd-sourced to decrease label noise, measuring accuracy and Spearman-rho rank correlation. In practice, KnowBias achieves a high accuracy of 86% (rho = 0.65) on these tweets and 75% (rho = 0.69) on long-form articles.
There is extensive work in the social science literature on polarization @cite_21 that deals with mathematical models and supervised learning techniques to accurately identify polarity in written content. While the survey recognizes the need for scalability, its authors are pessimistic that automatic content analysis can fully replace humans. Moreover, the methods described in the survey hinge on the availability of reliable, clean annotated data, which we do not have access to in our problem.
{ "cite_N": [ "@cite_21" ], "mid": [ "2095655043" ], "abstract": [ "Politics and political conflict often occur in the written and spoken word. Scholars have long recognized this, but the massive costs of analyzing even moderately sized collections of texts have hindered their use in political science research. Here lies the promise of automated text analysis: it substantially reduces the costs of analyzing large collections of text. We provide a guide to this exciting new area of research and show how, in many instances, the methods have already obtained part of their promise. But there are pitfalls to using automated methods—they are no substitute for careful thought and close reading and require extensive and problem-specific validation. We survey a wide range of new methods, provide guidance on how to validate the output of the models, and clarify misconceptions and errors in the literature. To conclude, we argue that for automated text methods to become a standard tool for political scientists, methodologists must contribute new methods and new methods of validation. Language is the medium for politics and political conflict. Candidates debate and state policy positions during a campaign. Once elected, representatives write and debate legislation. After laws are passed, bureaucrats solicit comments before they issue regulations. Nations regularly negotiate and then sign peace treaties, with language that signals the motivations and relative power of the countries involved. News reports document the day-to-day affairs of international relations that provide a detailed picture of conflict and cooperation. Individual candidates and political parties articulate their views through party platforms and manifestos. Terrorist groups even reveal their preferences and goals through recruiting materials, magazines, and public statements. 
These examples, and many others throughout political science, show that to understand what politics is about we need to know what political actors are saying and writing. Recognizing that language is central to the study of politics is not new. To the contrary, scholars of politics have long recognized that much of politics is expressed in words. But scholars have struggled when using texts to make inferences about politics. The primary problem is volume: there are simply too many political texts. Rarely are scholars able to manually read all the texts in even moderately sized corpora. And hiring coders to manually read all documents is still very expensive. The result is that" ] }
1905.00724
2943064116
We introduce KnowBias, a system for detecting the degree of political bias in textual content such as social media posts and news articles. In the space of scalable text classification, a common problem is domain mismatch, where easily accessible training data (i.e., tweets) does not correspond in format to the desired testing domain (i.e., longer-form article content). While universal text encoders such as word or sentence embeddings could be leveraged to train target-agnostic classifiers, such schemes result in poor performance on long-form articles. Our key insight is that long-form articles are a mix of neutral and political sentences, while tweets are concentrated with opinion. We propose a two-step classification system that first automatically filters out neutral sentences from the input text document at evaluation time, and then inputs the resulting text into a polarity classifier. We evaluate our two-step approach using a variety of test suites, including a set of tweets and long-form articles where annotations were crowd-sourced to decrease label noise, measuring accuracy and Spearman-rho rank correlation. In practice, KnowBias achieves a high accuracy of 86% (rho = 0.65) on these tweets and 75% (rho = 0.69) on long-form articles.
Nevertheless, the emphasis in existing work is on short texts; in particular, the domain adaptation scenarios described in @cite_12 @cite_9 deal with similar tasks and contexts during training and testing. For instance, the experiments described in @cite_6 predict the sentiment of kitchen reviews by transferring knowledge from furniture reviews. No prior work in the domain-adaptation space trains on short-form content such as tweets and tests on long-form article data.
{ "cite_N": [ "@cite_9", "@cite_6", "@cite_12" ], "mid": [ "", "71795751", "22861983" ], "abstract": [ "", "We introduce a novel machine learning framework based on recursive autoencoders for sentence-level prediction of sentiment label distributions. Our method learns vector space representations for multi-word phrases. In sentiment prediction tasks these representations outperform other state-of-the-art approaches on commonly used datasets, such as movie reviews, without using any pre-defined sentiment lexica or polarity shifting rules. We also evaluate the model's ability to predict sentiment distributions on a new dataset based on confessions from the experience project. The dataset consists of personal user stories annotated with multiple labels which, when aggregated, form a multinomial distribution that captures emotional reactions. Our algorithm can more accurately predict distributions over such labels compared to several competitive baselines.", "The exponential increase in the availability of online reviews and recommendations makes sentiment classification an interesting topic in academic and industrial research. Reviews can span so many different domains that it is difficult to gather annotated training data for all of them. Hence, this paper studies the problem of domain adaptation for sentiment classifiers, hereby a system is trained on labeled reviews from one source domain but is meant to be deployed on another. We propose a deep learning approach which learns to extract a meaningful representation for each review in an unsupervised fashion. Sentiment classifiers trained with this high-level feature representation clearly outperform state-of-the-art methods on a benchmark composed of reviews of 4 types of Amazon products. Furthermore, this method scales well and allowed us to successfully perform domain adaptation on a larger industrial-strength dataset of 22 domains." ] }
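The two-step filter-then-classify pipeline described in the KnowBias abstract can be sketched as follows; the keyword lists and scoring rules are toy stand-ins for the trained neutrality filter and polarity model:

```python
# Step 1 drops neutral sentences; step 2 scores what remains. The keyword
# sets below are invented placeholders for the learned classifiers.

POLITICAL = {"tax", "immigration", "healthcare", "election"}
LEFT, RIGHT = {"progressive", "union"}, {"deregulate", "tariff"}

def is_political(sentence):
    words = set(sentence.lower().split())
    return bool(words & (POLITICAL | LEFT | RIGHT))   # step 1: filter

def polarity(sentence):
    words = set(sentence.lower().split())
    return len(words & RIGHT) - len(words & LEFT)     # step 2: score

def document_score(text):
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    kept = [s for s in sentences if is_political(s)]  # neutral ones removed
    if not kept:
        return 0.0
    return sum(polarity(s) for s in kept) / len(kept)

doc = ("The weather was pleasant. We should deregulate the healthcare market. "
       "The progressive wing opposed the tax plan.")
print(document_score(doc))   # the two opposing sentences cancel: 0.0
```

The point of the filter is that long-form articles dilute opinion with neutral sentences; averaging polarity only over the kept sentences keeps the score comparable to that of opinion-dense tweets.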
1905.00854
2943013792
Stochastic simulation is a widely used method for estimating quantities in models of chemical reaction networks where uncertainty plays a crucial role. However, reducing the statistical uncertainty of the corresponding estimators requires the generation of a large number of simulation runs, which is computationally expensive. To reduce the number of necessary runs, we propose a variance reduction technique based on control variates. We exploit constraints on the statistical moments of the stochastic process to reduce the estimators' variances. We develop an algorithm that selects appropriate control variates in an on-line fashion and demonstrate the efficiency of our approach on several case studies.
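The control-variate construction described in this abstract is generic and can be illustrated outside the reaction-network setting. A minimal sketch estimating E[exp(U)] for U ~ Uniform(0, 1), using g(U) = U with known mean 1/2 as the control variate:

```python
import numpy as np

rng = np.random.default_rng(3)

# Estimate E[exp(U)] (true value e - 1) with and without a control variate.
n = 100_000
u = rng.uniform(size=n)
f, g = np.exp(u), u

theta = np.cov(f, g)[0, 1] / np.var(g)    # (near-)optimal coefficient
plain = f.mean()                          # crude Monte Carlo estimator
corrected = f - theta * (g - 0.5)         # subtract the centered control variate
cv = corrected.mean()

var_plain, var_cv = f.var(), corrected.var()
print(plain, cv, var_cv / var_plain)      # the variance ratio is far below 1
```

In the paper's setting, the known mean of the control variate comes from constraints on the statistical moments of the reaction network rather than from a closed-form expectation, but the correction has the same shape.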
If the state space is finite and small enough, one can deal with the underlying Markov chain directly. But there are also cases where the transient distribution has infinite support and one can still work with explicit state probabilities. To this end, one can fix a finite state space that should contain most of the probability mass @cite_6 . Refinements of this method work dynamically and adjust the state space according to the transient distributions @cite_9 @cite_7 @cite_23 .
{ "cite_N": [ "@cite_9", "@cite_23", "@cite_7", "@cite_6" ], "mid": [ "1849560532", "2039331233", "1535517661", "2066594542" ], "abstract": [ "We propose a numerical technique for parameter inference in Markov models of biological processes. Based on time-series data of a process we estimate the kinetic rate constants by maximizing the likelihood of the data. The computation of the likelihood relies on a dynamic abstraction of the discrete state space of the Markov model which successfully mitigates the problem of state space largeness. We compare two variants of our method to state-of-the-art, recently published methods and demonstrate their usefulness and efficiency on several case studies from systems biology.", "Within systems biology there is an increasing interest in the stochastic behaviour of biochemical reaction networks. An appropriate stochastic description is provided by the chemical master equation, which represents a continuous-time Markov chain (CTMC). The uniformisation technique is an efficient method to compute probability distributions of a CTMC if the number of states is manageable. However, the size of a CTMC that represents a biochemical reaction network is usually far beyond what is feasible. In this study, the authors present an on-the-fly variant of uniformisation, where they improve the original algorithm at the cost of a small approximation error. By means of several examples, the authors show that their approach is particularly well-suited for biochemical reaction networks.", "We present an on-the-fly abstraction technique for infinite-state continuous -time Markov chains. We consider Markov chains that are specified by a finite set of transition classes. Such models naturally represent biochemical reactions and therefore play an important role in the stochastic modeling of biological systems. 
We approximate the transient probability distributions at various time instances by solving a sequence of dynamically constructed abstract models, each depending on the previous one. Each abstract model is a finite Markov chain that represents the behavior of the original, infinite chain during a specific time interval. Our approach provides complete information about probability distributions, not just about individual parameters like the mean. The error of each abstraction can be computed, and the precision of the abstraction refined when desired. We implemented the algorithm and demonstrate its usefulness and efficiency on several case studies from systems biology.", "This article introduces the finite state projection (FSP) method for use in the stochastic analysis of chemically reacting systems. One can describe the chemical populations of such systems with probability density vectors that evolve according to a set of linear ordinary differential equations known as the chemical master equation (CME). Unlike Monte Carlo methods such as the stochastic simulation algorithm (SSA) or τ leaping, the FSP directly solves or approximates the solution of the CME. If the CME describes a system that has a finite number of distinct population vectors, the FSP method provides an exact analytical solution. When an infinite or extremely large number of population variations is possible, the state space can be truncated, and the FSP method provides a certificate of accuracy for how closely the truncated space approximation matches the true solution. The proposed FSP algorithm systematically increases the projection space in order to meet prespecified tolerance in the total probabilit..." ] }
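Fixing a finite state space and solving the master equation on it can be sketched for a simple birth-death process; the rates and the truncation bound below are illustrative, and SciPy's sparse matrix exponential is used in place of a dedicated solver:

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import expm_multiply

# Birth-death chain: birth at constant rate k, degradation at rate g*x,
# truncated to N states (illustrative parameters).
k, g, N = 10.0, 1.0, 60

Q = lil_matrix((N, N))               # CTMC generator on the truncated space
for x in range(N):
    if x + 1 < N:
        Q[x, x + 1] = k              # birth
    if x > 0:
        Q[x, x - 1] = g * x          # degradation
    Q[x, x] = -Q[x].sum()            # rows of a generator sum to zero

p0 = np.zeros(N)
p0[0] = 1.0                          # start in the empty state
pt = expm_multiply((Q.T * 5.0).tocsr(), p0)   # p(t=5) = exp(5 Q^T) p(0)

print(pt.sum(), pt @ np.arange(N))   # retained mass and transient mean
```

The probability mass retained inside the truncation (`pt.sum()`) is exactly the kind of certificate the finite state projection method uses to bound the truncation error.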
1905.00854
2943013792
Stochastic simulation is a widely used method for estimating quantities in models of chemical reaction networks where uncertainty plays a crucial role. However, reducing the statistical uncertainty of the corresponding estimators requires the generation of a large number of simulation runs, which is computationally expensive. To reduce the number of necessary runs, we propose a variance reduction technique based on control variates. We exploit constraints on the statistical moments of the stochastic process to reduce the estimators' variances. We develop an algorithm that selects appropriate control variates in an on-line fashion and demonstrate the efficiency of our approach on several case studies.
On the other end of the spectrum are mean-field approximations, which model the mean densities faithfully in the system-size limit @cite_15 . In between are techniques such as moment closure @cite_14 , which consider not only the mean but also the variance and other higher-order moments. These methods rely on ad-hoc approximations of higher-order moments to close the ODE system given by the moment equations. Yet another class of methods approximates molecular counts continuously and approximates the dynamics in that continuous space, e.g., the system size expansion @cite_0 and the chemical Langevin equation @cite_26 .
{ "cite_N": [ "@cite_0", "@cite_15", "@cite_14", "@cite_26" ], "mid": [ "2102787760", "2053513378", "2150055571", "2046451925" ], "abstract": [ "Preface to the first edition. Preface to the second edition. Abbreviated references. I. Stochastic variables. II. Random events. III. Stochastic processes. IV. Markov processes. V. The master equation. VI. One-step processes. VII. Chemical reactions. VIII. The Fokker-Planck equation. IX. The Langevin approach. X. The expansion of the master equation. XI. The diffusion type. XII. First-passage problems. XIII. Unstable systems. XIV. Fluctuations in continuous systems. XV. The statistics of jump events. XVI. Stochastic differential equations. XVII. Stochastic behavior of quantum systems.", "In this paper we present an overview of the field of deterministic approximation of Markov processes, both in discrete and continuous times. We will discuss mean field approximation of discrete time Markov chains and fluid approximation of continuous time Markov chains, considering the cases in which the deterministic limit process lives in continuous time or discrete time. We also consider some more advanced results, especially those relating to the limit stationary behaviour. We assume a knowledge of modelling with Markov chains, but not of more advanced topics in stochastic processes.", "In the stochastic formulation of chemical reactions, the dynamics of the first M-order moments of the species populations generally do not form a closed system of differential equations, in the sense that the time-derivatives of first M-order moments generally depend on moments of order higher than M. However, for analysis purposes, these dynamics are often made to be closed by approximating the needed derivatives of the first M-order moments by nonlinear functions of the same moments. These functions are called the moment closure functions. 
Recent results have introduced the technique of derivative-matching, where the moment closure functions are obtained by first assuming that they exhibit a certain separable form, and then matching time derivatives of the exact (not closed) moment equations with those of the approximate (closed) equations for some initial time and set of initial conditions. However, for multi-species reactions these results have been restricted to second order truncations, i.e., M = 2. This paper extends these results by providing explicit formulas to construct moment closure functions for any arbitrary order of truncation M. We show that with increasing M the closed moment equations provide more accurate approximations to the exact moment equations. Striking features of these moment closure functions are that they are independent of the reaction parameters (reaction rates and stoichiometry) and moreover the dependence of higher-order moments on lower-order ones is consistent with the population being jointly lognormally distributed. To illustrate the applicability of our results we consider a simple bi-molecular reaction. Moment estimates from a third order truncation are compared with estimates obtained from a large number of Monte Carlo simulations", "The stochastic dynamical behavior of a well-stirred mixture of N molecular species that chemically interact through M reaction channels is accurately described by the chemical master equation. It is shown here that, whenever two explicit dynamical conditions are satisfied, the microphysical premise from which the chemical master equation is derived leads directly to an approximate time-evolution equation of the Langevin type. This chemical Langevin equation is the same as one studied earlier by Kurtz, in contradistinction to some other earlier proposed forms that assume a deterministic macroscopic evolution law. 
The novel aspect of the present analysis is that it shows that the accuracy of the equation depends on the satisfaction of certain specific conditions that can change from moment to moment, rather than on a static system size parameter. The derivation affords a new perspective on the origin and magnitude of noise in a chemically reacting system. It also clarifies the connection between the stochas..." ] }
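The chemical Langevin approximation mentioned above can be illustrated on the simplest possible network. The following sketch is ours, not code from any cited work: it integrates the chemical Langevin equation for a birth-death system (0 → X at rate k, X → 0 at rate γ·x) with the Euler-Maruyama scheme; all rates and step sizes are arbitrary illustrative choices. The stationary mean should land near the mean-field fixed point k/γ.

```python
import math
import random

def cle_birth_death(k=10.0, gamma=1.0, x0=0.0, t_end=10.0, dt=0.005, seed=0):
    """Euler-Maruyama integration of the chemical Langevin equation for
    the birth-death network  0 -> X (rate k),  X -> 0 (rate gamma * x)."""
    rng = random.Random(seed)
    x, t = x0, 0.0
    while t < t_end:
        a_birth = k                     # propensity of the birth channel
        a_death = gamma * max(x, 0.0)   # propensity of the death channel
        drift = a_birth - a_death
        # one independent Brownian increment per reaction channel
        dw1 = rng.gauss(0.0, math.sqrt(dt))
        dw2 = rng.gauss(0.0, math.sqrt(dt))
        x += drift * dt + math.sqrt(a_birth) * dw1 - math.sqrt(a_death) * dw2
        x = max(x, 0.0)                 # keep the count non-negative
        t += dt
    return x

# The mean-field ODE dx/dt = k - gamma*x has the fixed point k/gamma = 10;
# the stationary mean of the Langevin approximation should be close to it.
endpoints = [cle_birth_death(seed=s) for s in range(200)]
mean = sum(endpoints) / len(endpoints)
print(round(mean, 1))
```

The diffusion term uses one Brownian increment per reaction channel with magnitude equal to the square root of that channel's propensity, which is the structure of the chemical Langevin equation; reflecting at zero is a pragmatic fix for the approximation's ability to go negative.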
1905.00854
2943013792
Stochastic simulation is a widely used method for estimating quantities in models of chemical reaction networks where uncertainty plays a crucial role. However, reducing the statistical uncertainty of the corresponding estimators requires the generation of a large number of simulation runs, which is computationally expensive. To reduce the number of necessary runs, we propose a variance reduction technique based on control variates. We exploit constraints on the statistical moments of the stochastic process to reduce the estimators' variances. We develop an algorithm that selects appropriate control variates in an on-line fashion and demonstrate the efficiency of our approach on several case studies.
While moment closure methods use ad-hoc approximations of higher-order moments to facilitate numerical integration, such approximations can be avoided in some contexts. At the equilibrium distribution, for example, the time-derivative of every moment is equal to zero. This directly yields constraints that have been used for parameter estimation at steady state @cite_13 and for bounding moments of the equilibrium distribution using semi-definite programming @cite_25 @cite_22 @cite_8. The latter technique of bounding moments has been successfully adapted in the context of transient analysis @cite_1 @cite_30 @cite_24. We adapt the constraints proposed in these works to improve statistical estimates obtained via stochastic simulation (cf. section ).
{ "cite_N": [ "@cite_30", "@cite_22", "@cite_8", "@cite_1", "@cite_24", "@cite_13", "@cite_25" ], "mid": [ "2607727173", "2727980082", "2745064379", "2786311729", "2804272026", "2768527088", "2964286051" ], "abstract": [ "Model-based prediction of stochastic noise in biomolecular reactions often resorts to approximation with unknown precision. As a result, unexpected stochastic fluctuation causes a headache for the designers of biomolecular circuits. This paper proposes a convex optimization approach to quantifying the steady state moments of molecular copy counts with theoretical rigor. We show that the stochastic moments lie in a convex semi-algebraic set specified by linear matrix inequalities. Thus, the upper and the lower bounds of some moments can be computed by a semidefinite program. Using a protein dimerization process as an example, we demonstrate that the proposed method can precisely predict the mean and the variance of the copy number of the monomer protein.", "", "The stochastic dynamics of biochemical networks are usually modelled with the chemical master equation (CME). The stationary distributions of CMEs are seldom solvable analytically, and numerical methods typically produce estimates with uncontrolled errors. Here, we introduce mathematical programming approaches that yield approximations of these distributions with computable error bounds which enable the verification of their accuracy. First, we use semidefinite programming to compute increasingly tighter upper and lower bounds on the moments of the stationary distributions for networks with rational propensities. Second, we use these moment bounds to formulate linear programs that yield convergent upper and lower bounds on the stationary distributions themselves, their marginals and stationary averages. The bounds obtained also provide a computational test for the uniqueness of the distribution. 
In the unique case, the bounds form an approximation of the stationary distribution with a computable bound on its error. In the non-unique case, our approach yields converging approximations of the ergodic distributions. We illustrate our methodology through several biochemical examples taken from the literature: Schlögl's model for a chemical bifurcation, a two-dimensional toggle switch, a model for bursty gene expression, and a dimerisation model with multiple stationary distributions.", "Applying the method of moments to the chemical master equation appearing in stochastic chemical kinetics often leads to the so-called closure problem. Recently, several authors showed that this problem can be partially overcome using moment-based semidefinite programs (SDPs). In particular, they showed that moment-based SDPs can be used to calculate rigorous bounds on various descriptions of the stochastic chemical kinetic system’s stationary distribution(s)—for example, mean molecular counts, variances in these counts, and so on. In this paper, we show that these ideas can be extended to the corresponding dynamic problem, calculating time-varying bounds on the same descriptions.",
The proposed method is demonstrated with illustrative numerical examples and is compared with related works to discuss advantages and limitations.", "Calibrating parameters is a crucial problem within quantitative modeling approaches to reaction networks. Existing methods for stochastic models rely either on statistical sampling or can only be applied to small systems. Here, we present an inference procedure for stochastic models in equilibrium that is based on a moment matching scheme with optimal weighting and that can be used with high-throughput data like the one collected by flow cytometry. Our method does not require an approximation of the underlying equilibrium probability distribution and, if reaction rate constants have to be learned, the optimal values can be computed by solving a linear system of equations. We discuss important practical issues such as the selection of the moments and evaluate the effectiveness of the proposed approach on three case studies.", "This paper proposes a methodology to estimate characteristic functions of stochastic differential equations that are defined over polynomials. For such systems, the time evolution of the characteristic function is governed by a partial differential equation; consequently, the stationary characteristic function can be obtained by solving an ordinary differential equation (ODE). However, except for a few special cases, the solution to the ODE consists of unknown coefficients. These coefficients are closely related with the stationary moments of the process, which could be estimated by utilizing the fact that the characteristic function is positive definite. The method is illustrated via examples." ] }
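How a stationary moment constraint can act as a control variate admits a compact illustration. The sketch below is ours, not code from the cited works, and it cheats in one respect for brevity: instead of simulating trajectories it draws stationary samples directly, using the fact that the birth-death network 0 → X (rate k), X → 0 (rate γ·x) has a Poisson(k/γ) stationary law. The constraint E[k − γX] = 0 then supplies a zero-mean control variate for estimating the second moment.

```python
import math
import random

def poisson(lam, rng):
    """Knuth's Poisson sampler (adequate for the moderate mean used here)."""
    limit, count, prod = math.exp(-lam), 0, rng.random()
    while prod > limit:
        count += 1
        prod *= rng.random()
    return count

k_prod, gamma = 10.0, 1.0       # birth-death rates; stationary law: Poisson(10)
rng = random.Random(0)
xs = [poisson(k_prod / gamma, rng) for _ in range(20000)]

# Plain Monte Carlo estimate of the second moment E[X^2] (= 110 here).
plain = [x * x for x in xs]
plain_mean = sum(plain) / len(plain)

# Control variate from the stationary moment constraint E[k - gamma*X] = 0.
zs = [k_prod - gamma * x for x in xs]
z_mean = sum(zs) / len(zs)
cov = sum((p - plain_mean) * (z - z_mean) for p, z in zip(plain, zs)) / len(xs)
var_z = sum((z - z_mean) ** 2 for z in zs) / len(xs)
beta = cov / var_z              # optimal coefficient, here estimated in batch
cv = [p - beta * z for p, z in zip(plain, zs)]  # E[z] = 0, so no bias term
cv_mean = sum(cv) / len(cv)
print(plain_mean, cv_mean)      # both near 110; the second with far less noise
```

Estimating β from the same samples introduces a small bias, which is why the paper's on-line selection of control variates matters in practice; the variance reduction, however, is already visible in this batch version.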
1905.00854
2943013792
Stochastic simulation is a widely used method for estimating quantities in models of chemical reaction networks where uncertainty plays a crucial role. However, reducing the statistical uncertainty of the corresponding estimators requires the generation of a large number of simulation runs, which is computationally expensive. To reduce the number of necessary runs, we propose a variance reduction technique based on control variates. We exploit constraints on the statistical moments of the stochastic process to reduce the estimators' variances. We develop an algorithm that selects appropriate control variates in an on-line fashion and demonstrate the efficiency of our approach on several case studies.
While the above techniques produce a deterministic output, stochastic simulation generates single executions of the stochastic process @cite_16. Estimating quantities of interest therefore requires accumulating a large number of simulation runs, which adds a significant computational burden. Consequently, some effort has been directed at lowering this cost. A prominent technique is @math -leaping @cite_4, which performs multiple reactions in one step instead of only a single one. Another approach is to find approximations that are specific to the problem at hand, such as approximations based on time-scale separations @cite_32 @cite_28.
{ "cite_N": [ "@cite_28", "@cite_16", "@cite_4", "@cite_32" ], "mid": [ "2234972264", "2155418451", "2084924279", "2018670096" ], "abstract": [ "Stiffness in chemical reaction systems is a frequently encountered computational problem, arising when different reactions in the system take place at different time-scales. Computational savings can be obtained under time-scale separation. Assuming that the system can be partitioned into slow- and fast- equilibrating subsystems, it is then possible to efficiently simulate the slow subsystem only, provided that the corresponding kinetic laws have been modified so that they reflect their dependency on the fast system. We show that the rate expectation with respect to the fast subsystem’s steady-state is a continuous function of the state of the slow system. We exploit this result to construct an analytic representation of the modified rate functions via statistical modelling, which can be used to simulate the slow system in isolation. The computational savings of our approach are demonstrated in a number of non-trivial examples of stiff systems.", "", "The stochastic simulation algorithm (SSA) is an essentially exact procedure for numerically simulating the time evolution of a well-stirred chemically reacting system. Despite recent major improvements in the efficiency of the SSA, its drawback remains the great amount of computer time that is often required to simulate a desired amount of system time. Presented here is the “τ-leap” method, an approximate procedure that in some circumstances can produce significant gains in simulation speed with acceptable losses in accuracy. Some primitive strategies for control parameter selection and error mitigation for the τ-leap method are described, and simulation results for two simple model systems are exhibited. 
With further refinement, the τ-leap method should provide a viable way of segueing from the exact SSA to the approximate chemical Langevin equation, and thence to the conventional deterministic reaction rate equation, as the system size becomes larger.", "Reactions in real chemical systems often take place on vastly different time scales, with “fast” reaction channels firing very much more frequently than “slow” ones. These firings will be interdependent if, as is usually the case, the fast and slow reactions involve some of the same species. An exact stochastic simulation of such a system will necessarily spend most of its time simulating the more numerous fast reaction events. This is a frustratingly inefficient allocation of computational effort when dynamical stiffness is present, since in that case a fast reaction event will be of much less importance to the system’s evolution than will a slow reaction event. For such situations, this paper develops a systematic approximate theory that allows one to stochastically advance the system in time by simulating the firings of only the slow reaction events. Developing an effective strategy to implement this theory poses some challenges, but as is illustrated here for two simple systems, when those challenges ..." ] }
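For concreteness, here is a minimal, self-written sketch of both the exact stochastic simulation algorithm and τ-leaping on the same birth-death network (0 → X at rate k, X → 0 at rate γ·x). None of this code is from the cited works, and the step size τ is an arbitrary illustrative choice; τ-leaping trades a small bias for firing many reactions per step.

```python
import math
import random

def ssa(k, gamma, x0, t_end, rng):
    """Gillespie's exact SSA for  0 -> X (rate k),  X -> 0 (rate gamma*x)."""
    x, t = x0, 0.0
    while True:
        a_birth, a_death = k, gamma * x
        t += rng.expovariate(a_birth + a_death)   # time to the next reaction
        if t > t_end:
            return x
        if rng.random() * (a_birth + a_death) < a_birth:
            x += 1                                # birth channel fired
        else:
            x -= 1                                # death channel fired

def poisson(lam, rng):
    """Knuth's Poisson sampler (adequate for the small means used here)."""
    limit, count, prod = math.exp(-lam), 0, rng.random()
    while prod > limit:
        count += 1
        prod *= rng.random()
    return count

def tau_leap(k, gamma, x0, t_end, tau, rng):
    """tau-leaping: fire a Poisson number of reactions per step of length tau."""
    x, t = x0, 0.0
    while t < t_end:
        x += poisson(k * tau, rng) - poisson(gamma * x * tau, rng)
        x = max(x, 0)
        t += tau
    return x

rng = random.Random(0)
exact = [ssa(10.0, 1.0, 0, 10.0, rng) for _ in range(200)]
leap = [tau_leap(10.0, 1.0, 0, 10.0, 0.05, rng) for _ in range(200)]
m_exact = sum(exact) / len(exact)
m_leap = sum(leap) / len(leap)
print(m_exact, m_leap)   # both means near the stationary mean k/gamma = 10
```

The SSA spends one exponential draw per reaction event, which is exactly the cost that τ-leaping amortises: here each leap covers on the order of one event, but for stiff systems a single leap can replace thousands of fast-reaction firings.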
1905.00854
2943013792
Stochastic simulation is a widely used method for estimating quantities in models of chemical reaction networks where uncertainty plays a crucial role. However, reducing the statistical uncertainty of the corresponding estimators requires the generation of a large number of simulation runs, which is computationally expensive. To reduce the number of necessary runs, we propose a variance reduction technique based on control variates. We exploit constraints on the statistical moments of the stochastic process to reduce the estimators' variances. We develop an algorithm that selects appropriate control variates in an on-line fashion and demonstrate the efficiency of our approach on several case studies.
The most prominent application of a variance reduction technique in the context of stochastic reaction networks is importance sampling @cite_2. This technique relies on altering the process and then weighting the resulting samples by the likelihood ratio between the original and the altered process.
{ "cite_N": [ "@cite_2" ], "mid": [ "1997982531" ], "abstract": [ "In robust biological systems, wide deviations from highly controlled normal behavior may be rare, yet they may result in catastrophic complications. While in silico analysis has gained an appreciation as a tool to offer insights into system-level properties of biological systems, analysis of such rare events provides a particularly challenging computational problem. This paper proposes an efficient stochastic simulation method to analyze rare events in biochemical systems. Our new approach can substantially increase the frequency of the rare events of interest by appropriately manipulating the underlying probability measure of the system, allowing high-precision results to be obtained with substantially fewer simulation runs than the conventional direct Monte Carlo simulation. Here, we show the algorithm of our new approach, and we apply it to the analysis of rare deviant transitions of two systems, resulting in several orders of magnitude speedup in generating high-precision estimates compared with the c..." ] }
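The likelihood-ratio weighting at the heart of importance sampling can be shown on a deliberately simple, generic example of our own (not a reaction network, and not code from the cited work): estimating the tail probability P(X > 10) for X ~ Exp(1), a rare event that plain Monte Carlo at this sample size essentially never hits.

```python
import math
import random

rng = random.Random(0)
n, c = 20000, 10.0

# Plain Monte Carlo: P(X > 10) for X ~ Exp(1) is exp(-10) ~ 4.5e-5, so
# 20000 unweighted samples will typically contain no hits at all.
direct = sum(rng.expovariate(1.0) > c for _ in range(n)) / n

# Importance sampling: draw from the tilted proposal Exp(0.1) (mean 10),
# which hits the rare region often, and weight each hit by the
# likelihood ratio f(x) / g(x) between original and altered densities.
def likelihood_ratio(x):
    return math.exp(-x) / (0.1 * math.exp(-0.1 * x))

acc = 0.0
for _ in range(n):
    x = rng.expovariate(0.1)
    if x > c:
        acc += likelihood_ratio(x)
est = acc / n
print(direct, est)   # est lands close to exp(-10) with small relative error
```

The same recipe applies to reaction networks by tilting the reaction propensities and accumulating the likelihood ratio along each simulated trajectory, which is considerably more involved than this one-dimensional case.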
1905.00966
2943248951
State of the art visual relation detection methods have been relying on features extracted from RGB images including objects' 2D positions. In this paper, we argue that the 3D positions of objects in space can provide additional valuable information about object relations. This information helps not only to detect spatial relations, such as "standing behind", but also non-spatial relations, such as "holding". Since 3D information of a scene is not easily accessible, we propose incorporating a pre-trained RGB-to-Depth model within visual relation detection frameworks. We discuss different feature extraction strategies from depth maps and show their critical role in relation detection. Our experiments confirm that the performance of state-of-the-art visual relation detection approaches can significantly be improved by utilizing depth map information.
While several works have leveraged depth maps to improve object detection @cite_20 @cite_12 @cite_22 , to the best of our knowledge this is the first time that depth maps are used in the relation detection task.
{ "cite_N": [ "@cite_22", "@cite_12", "@cite_20" ], "mid": [ "1565402342", "2963956866", "1573897183" ], "abstract": [ "In this paper we study the problem of object detection for RGB-D images using semantically rich image and depth features. We propose a new geocentric embedding for depth images that encodes height above ground and angle with gravity for each pixel in addition to the horizontal disparity. We demonstrate that this geocentric embedding works better than using raw depth images for learning feature representations with convolutional neural networks. Our final object detection system achieves an average precision of 37.3%, which is a 56% relative improvement over existing methods. We then focus on the task of instance segmentation where we label pixels belonging to object instances found by our detector. For this task, we propose a decision forest approach that classifies pixels in the detection window as foreground or background using a family of unary and binary tests that query shape and geocentric pose features. Finally, we use the output from our object detectors in an existing superpixel classification framework for semantic scene segmentation and achieve a 24% relative improvement over current state-of-the-art for the object categories that we study. We believe advances such as those represented in this paper will facilitate the use of perception in fields like robotics.", "Robust object recognition is a crucial ingredient of many, if not all, real-world robotics applications. This paper leverages recent progress on Convolutional Neural Networks (CNNs) and proposes a novel RGB-D architecture for object recognition. Our architecture is composed of two separate CNN processing streams - one for each modality - which are consecutively combined with a late fusion network. We focus on learning with imperfect sensor data, a typical problem in real-world robotics tasks. 
For accurate learning, we introduce a multi-stage training methodology and two crucial ingredients for handling depth data with CNNs. The first, an effective encoding of depth information for CNNs that enables learning without the need for large depth datasets. The second, a data augmentation scheme for robust learning with depth images by corrupting them with realistic noise patterns. We present state-of-the-art results on the RGB-D object dataset [15] and show recognition in challenging RGB-D real-world noisy settings.", "Recently introduced RGB-D cameras are capable of providing high quality synchronized videos of both color and depth. With its advanced sensing capabilities, this technology represents an opportunity to dramatically increase the capabilities of object recognition. It also raises the problem of developing expressive features for the color and depth channels of these sensors. In this paper we introduce hierarchical matching pursuit (HMP) for RGB-D data. HMP uses sparse coding to learn hierarchical feature representations from raw RGB-D data in an unsupervised way. Extensive experiments on various datasets indicate that the features learned with our approach enable superior object recognition results using linear support vector machines." ] }
1905.00774
2943825848
Predicting the execution time of queries is an important problem with applications in scheduling, service level agreements and error detection. During query planning, a cost is associated with the chosen execution plan and used to rank competing plans. It would be convenient to use that cost to predict execution time, but it has been claimed in the literature that this is not possible. In this paper, we thoroughly investigate this claim considering both linear and non-linear models. We find that the accuracy using more complex models with only the optimizer cost is comparable to the reported accuracy in the literature. The most accurate method in the literature is nearest-neighbour regression which does not produce a model. The published results used a large feature set to identify nearest neighbours. We show that it is possible to achieve the same level of accuracy using only the cost to identify nearest neighbours. Using a smaller feature set brings the advantages of reduced overhead in terms of both storage space for the training data and the time to produce a prediction.
The first work on predicting execution time was by Gupta, who aimed to provide upper and lower bounds on the execution time of a query @cite_15. They used historical data to construct a binary decision tree with a distinct classifier at each internal node to direct searches. The leaf nodes contained time ranges of the form @math, where @math and @math are the lower and upper bounds on the execution times of the queries falling in that leaf node.
{ "cite_N": [ "@cite_15" ], "mid": [ "2150304332" ], "abstract": [ "Modern enterprise data warehouses have complex workloads that are notoriously difficult to manage. One of the key pieces to managing workloads is an estimate of how long a query will take to execute. An accurate estimate of this query execution time is critical to self managing Enterprise Class Data Warehouses. In this paper we study the problem of predicting the execution time of a query on a loaded data warehouse with a dynamically changing workload. We use a machine learning approach that takes the query plan, combines it with the observed load vector of the system and uses the new vector to predict the execution time of the query. The predictions are made as time ranges. We validate our solution using real databases and real workloads. We show experimentally that our machine learning approach works well. This technology is slated for incorporation into a commercial, enterprise class DBMS." ] }
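Gupta's range-predicting tree can be caricatured in a few lines. The sketch below is a toy of our own making: it splits on a single hypothetical feature (the optimizer cost) at the median and stores the observed (lower, upper) execution-time range at each leaf, whereas the original work trains a distinct classifier at every internal node over a richer feature set.

```python
import random

def build(points, depth=0, max_depth=3, min_leaf=4):
    """Grow a binary tree over (cost, time) pairs, splitting at the median
    cost; each leaf stores the (lower, upper) range of observed times."""
    if depth == max_depth or len(points) <= min_leaf:
        times = [t for _, t in points]
        return ("leaf", min(times), max(times))
    points = sorted(points)
    mid = len(points) // 2
    return ("node", points[mid][0],
            build(points[:mid], depth + 1, max_depth, min_leaf),
            build(points[mid:], depth + 1, max_depth, min_leaf))

def predict_range(tree, cost):
    """Descend to a leaf and return its (lower, upper) execution-time bound."""
    while tree[0] == "node":
        _, split, left, right = tree
        tree = left if cost < split else right
    return tree[1], tree[2]

# Synthetic history: execution time grows roughly linearly in optimizer cost.
rng = random.Random(0)
history = [(c, 0.5 * c + rng.uniform(0, 5)) for c in range(1, 65)]
lo, hi = predict_range(build(history), cost=40)
print(lo, hi)   # a range bracketing the times observed for similar costs
```

Returning a range rather than a point estimate is the distinguishing design choice here: for workload management it is often enough to know that a query cannot finish sooner than the lower bound or later than the upper one.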
1905.00774
2943825848
Predicting the execution time of queries is an important problem with applications in scheduling, service level agreements and error detection. During query planning, a cost is associated with the chosen execution plan and used to rank competing plans. It would be convenient to use that cost to predict execution time, but it has been claimed in the literature that this is not possible. In this paper, we thoroughly investigate this claim considering both linear and non-linear models. We find that the accuracy using more complex models with only the optimizer cost is comparable to the reported accuracy in the literature. The most accurate method in the literature is nearest-neighbour regression which does not produce a model. The published results used a large feature set to identify nearest neighbours. We show that it is possible to achieve the same level of accuracy using only the cost to identify nearest neighbours. Using a smaller feature set brings the advantages of reduced overhead in terms of both storage space for the training data and the time to produce a prediction.
Following that work, Ganapathi proposed using nearest-neighbour regression to provide predictions of resource usage, not just execution time @cite_17 . Nearest-neighbour regression is a powerful, non-parametric method that requires no training and performs all its work when a prediction is required @cite_7 . Instead of proposing a model and tuning its parameters, nearest-neighbour regression works by identifying the @math most similar data points in the training data and assuming that the target values associated with them will be similar to the value associated with the new data point.
{ "cite_N": [ "@cite_7", "@cite_17" ], "mid": [ "1985258161", "2100773341" ], "abstract": [ "Abstract Nonparametric regression is a set of techniques for estimating a regression curve without making strong assumptions about the shape of the true regression function. These techniques are therefore useful for building and checking parametric models, as well as for data description. Kernel and nearest-neighbor regression estimators are local versions of univariate location estimators, and so they can readily be introduced to beginning students and consulting clients who are familiar with such summaries as the sample mean and median.", "One of the most challenging aspects of managing a very large data warehouse is identifying how queries will behave before they start executing. Yet knowing their performance characteristics --- their runtimes and resource usage --- can solve two important problems. First, every database vendor struggles with managing unexpectedly long-running queries. When these long-running queries can be identified before they start, they can be rejected or scheduled when they will not cause extreme resource contention for the other queries in the system. Second, deciding whether a system can complete a given workload in a given time period (or a bigger system is necessary) depends on knowing the resource requirements of the queries in that workload. We have developed a system that uses machine learning to accurately predict the performance metrics of database queries whose execution times range from milliseconds to hours. For training and testing our system, we used both real customer queries and queries generated from an extended set of TPC-DS templates. The extensions mimic queries that caused customer problems. We used these queries to compare how accurately different techniques predict metrics such as elapsed time, records used, disk I Os, and message bytes. 
The most promising technique was not only the most accurate, but also predicted these metrics simultaneously and using only information available prior to query execution. We validated the accuracy of this machine learning technique on a number of HP Neoview configurations. We were able to predict individual query elapsed time within 20% of its actual time for 85% of the test queries. Most importantly, we were able to correctly identify both the short and long-running (up to two hour) queries to inform workload management and capacity planning." ] }
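Nearest-neighbour regression with the optimizer cost as the only feature, the setting this paper argues is sufficient, fits in a few lines. The sketch below is illustrative only: the data are synthetic, with a hypothetical linear cost-to-time relationship, and the window-based neighbour search simply exploits the fact that the single feature can be kept sorted.

```python
import bisect
import random

def knn_predict(history, cost, k=3):
    """Predict execution time as the mean over the k training queries whose
    optimizer cost is closest to the new query's cost.
    history must be sorted by cost: [(cost, seconds), ...]."""
    costs = [c for c, _ in history]
    i = bisect.bisect_left(costs, cost)
    # widen a window around the insertion point, then keep the k closest
    lo, hi = max(0, i - k), min(len(history), i + k)
    window = sorted(history[lo:hi], key=lambda p: abs(p[0] - cost))
    neighbours = window[:k]
    return sum(t for _, t in neighbours) / len(neighbours)

# Synthetic training workload: time ~ 0.01 * cost plus a little noise.
rng = random.Random(0)
history = sorted((c, 0.01 * c + rng.uniform(-0.2, 0.2))
                 for c in range(100, 2100, 20))
pred = knn_predict(history, cost=1010)
print(pred)   # near 0.01 * 1010 = 10.1
```

Because the training data can be stored sorted by the single cost feature, each prediction costs a binary search plus a constant-size sort, which is the storage and lookup advantage the paper claims over large-feature-set nearest-neighbour methods.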
1905.00774
2943825848
Predicting the execution time of queries is an important problem with applications in scheduling, service level agreements and error detection. During query planning, a cost is associated with the chosen execution plan and used to rank competing plans. It would be convenient to use that cost to predict execution time, but it has been claimed in the literature that this is not possible. In this paper, we thoroughly investigate this claim considering both linear and non-linear models. We find that the accuracy using more complex models with only the optimizer cost is comparable to the reported accuracy in the literature. The most accurate method in the literature is nearest-neighbour regression which does not produce a model. The published results used a large feature set to identify nearest neighbours. We show that it is possible to achieve the same level of accuracy using only the cost to identify nearest neighbours. Using a smaller feature set brings the advantages of reduced overhead in terms of both storage space for the training data and the time to produce a prediction.
Because the feature set is large, they also applied dimensionality reduction, which they reported as taking "minutes to hours". They found that, using this method, they could predict the execution time of 85% of queries. Akdere considered the use of Support Vector Regression @cite_8. Support Vector Regression is the regression version of Support Vector Machines @cite_16 and is able to model non-linear relationships. Support Vector Machines work by finding a hyperplane that best divides the data points of different categories. The hyperplane becomes a decision boundary, and predictions are made based on the location of a new data point relative to the boundary. Using kernels to project the points into higher dimensions before finding the hyperplane allows Support Vector Machines to generate non-linear decision boundaries.
{ "cite_N": [ "@cite_16", "@cite_8" ], "mid": [ "1964357740", "2081728040" ], "abstract": [ "In this tutorial we give an overview of the basic ideas underlying Support Vector (SV) machines for function estimation. Furthermore, we include a summary of currently used algorithms for training SV machines, covering both the quadratic (or convex) programming part and advanced methods for dealing with large datasets. Finally, we mention some modifications and extensions that have been applied to the standard SV algorithm, and discuss the aspect of regularization from a SV perspective.", "Accurate query performance prediction (QPP) is central to effective resource management, query optimization and query scheduling. Analytical cost models, used in current generation of query optimizers, have been successful in comparing the costs of alternative query plans, but they are poor predictors of execution latency. As a more promising approach to QPP, this paper studies the practicality and utility of sophisticated learning-based models, which have recently been applied to a variety of predictive tasks with great success, in both static (i.e., fixed) and dynamic query workloads. We propose and evaluate predictive modeling techniques that learn query execution behavior at different granularities, ranging from coarse-grained plan-level models to fine-grained operator-level models. We demonstrate that these two extremes offer a tradeoff between high accuracy for static workload queries and generality to unforeseen queries in dynamic workloads, respectively, and introduce a hybrid approach that combines their respective strengths by selectively composing them in the process of QPP. We discuss how we can use a training workload to (i) pre-build and materialize such models offline, so that they are readily available for future predictions, and (ii) build new models online as new predictions are needed. 
All prediction models are built using only static features (available prior to query execution) and the performance values obtained from the offline execution of the training workload. We fully implemented all these techniques and extensions on top of Postgre SQL and evaluated them experimentally by quantifying their effectiveness over analytical workloads, represented by well-established TPC-H data and queries. The results provide quantitative evidence that learning-based modeling for QPP is both feasible and effective for both static and dynamic workload scenarios." ] }
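The geometric point about kernels, namely that projecting points into a higher-dimensional space can make a non-separable set linearly separable, can be checked mechanically on a toy one-dimensional example of our own (no SVM library involved; the explicit map x → (x, x²) plays the role of the kernel).

```python
inner = [-0.9, -0.3, 0.4, 0.8]    # class A: small |x|
outer = [-3.0, -2.4, 2.2, 2.9]    # class B: large |x|

def phi(x):
    """Explicit feature map standing in for a kernel: x -> (x, x^2)."""
    return (x, x * x)

def separable_1d(a, b):
    """True iff a single threshold on the raw line separates the classes."""
    pts = sorted([(x, 0) for x in a] + [(x, 1) for x in b])
    labels = [l for _, l in pts]
    return labels == sorted(labels) or labels == sorted(labels, reverse=True)

# On the raw line the classes interleave, so no threshold separates them;
# after the map x -> (x, x^2) the plane  x2 = 2  separates them perfectly.
print(separable_1d(inner, outer))
print(all(phi(x)[1] < 2 for x in inner) and all(phi(x)[1] > 2 for x in outer))
```

Actual kernel methods never materialise φ explicitly; they evaluate inner products in the lifted space directly, but the separating geometry is exactly the one this toy makes visible.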
1905.00774
2943825848
Predicting the execution time of queries is an important problem with applications in scheduling, service level agreements and error detection. During query planning, a cost is associated with the chosen execution plan and used to rank competing plans. It would be convenient to use that cost to predict execution time, but it has been claimed in the literature that this is not possible. In this paper, we thoroughly investigate this claim considering both linear and non-linear models. We find that the accuracy using more complex models with only the optimizer cost is comparable to the reported accuracy in the literature. The most accurate method in the literature is nearest-neighbour regression which does not produce a model. The published results used a large feature set to identify nearest neighbours. We show that it is possible to achieve the same level of accuracy using only the cost to identify nearest neighbours. Using a smaller feature set brings the advantages of reduced overhead in terms of both storage space for the training data and the time to produce a prediction.
Overall, they found that they could predict the execution time with a mean relative error of 6.75%. Li considered the problem of applying machine learning solutions to queries that are either running on larger datasets or derived from different templates than those in the training data @cite_6 . They used regression trees to predict execution times at the operator level and then designed and trained "scaling functions" that allow the predictions derived from the regression tree to be scaled for queries with cardinalities not previously seen in the training data.
{ "cite_N": [ "@cite_6" ], "mid": [ "2167978511" ], "abstract": [ "The ability to estimate resource consumption of SQL queries is crucial for a number of tasks in a database system such as admission control, query scheduling and costing during query optimization. Recent work has explored the use of statistical techniques for resource estimation in place of the manually constructed cost models used in query optimization. Such techniques, which require as training data examples of resource usage in queries, offer the promise of superior estimation accuracy since they can account for factors such as hardware characteristics of the system or bias in cardinality estimates. However, the proposed approaches lack robustness in that they do not generalize well to queries that are different from the training examples, resulting in significant estimation errors. Our approach aims to address this problem by combining knowledge of database query processing with statistical models. We model resource-usage at the level of individual operators, with different models and features for each operator type, and explicitly model the asymptotic behavior of each operator. This results in significantly better estimation accuracy and the ability to estimate resource usage of arbitrary plans, even when they are very different from the training instances. We validate our approach using various large scale real-life and benchmark workloads on Microsoft SQL Server." ] }
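The nearest-neighbour idea discussed in the record above (identifying neighbours by the optimizer cost alone and averaging their observed runtimes) can be sketched roughly as follows. This is an illustrative assumption-laden sketch, not the paper's implementation: the training pairs, the value of k, and the function name `knn_predict` are all made up for the example.

```python
# Minimal sketch (not the paper's code): k-nearest-neighbour regression that
# predicts query execution time using only the optimizer's plan cost as the
# single feature, as the record above suggests is sufficient.

def knn_predict(train, cost, k=3):
    """train: list of (optimizer_cost, observed_runtime_seconds) pairs.
    Returns the mean runtime of the k training queries whose plan cost
    is closest to the new query's cost."""
    nearest = sorted(train, key=lambda pair: abs(pair[0] - cost))[:k]
    return sum(runtime for _, runtime in nearest) / len(nearest)

# Hypothetical training workload: (plan cost, measured runtime in seconds).
history = [(120.0, 0.8), (450.0, 2.1), (480.0, 2.4), (900.0, 5.0), (2000.0, 11.5)]

# Predict runtime for an unseen query whose plan cost is 470:
estimate = knn_predict(history, 470.0)
```

Because only a scalar cost is stored per training query, the training data and lookup are tiny compared with a large multi-feature neighbour search, which is the storage/overhead advantage the record above claims.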
1905.00702
2942843559
Recent years have witnessed the world-wide emergence of mega-metropolises with incredibly huge populations. Understanding residents' mobility patterns, or urban dynamics, thus becomes crucial for building modern smart cities. In this paper, we propose a Neighbor-Regularized and context-aware Non-negative Tensor Factorization model (NR-cNTF) to discover interpretable urban dynamics from urban heterogeneous data. Different from many existing studies concerned with prediction tasks via tensor completion, NR-cNTF focuses on gaining urban managerial insights from spatial, temporal, and spatio-temporal patterns. This is enabled by high-quality Tucker factorizations regularized by both POI-based urban contexts and geographically neighboring relations. NR-cNTF is also capable of unveiling long-term evolutions of urban dynamics via a pipeline initialization approach. We apply NR-cNTF to a real-life data set containing rich taxi GPS trajectories and POI records of Beijing. The results indicate: 1) NR-cNTF accurately captures four kinds of city rhythms and seventeen spatial communities; 2) the rapid development of Beijing, epitomized by the CBD area, indeed intensifies the job-housing imbalance; 3) the southern areas with recent government investments have shown a healthier development tendency. Finally, NR-cNTF is compared with some baselines on traffic prediction, which further justifies the importance of urban contexts awareness and neighboring regulations.
Mining knowledge from human mobility data generated in urban areas has attracted many researchers' interests in recent years @cite_11 @cite_26 . Various types of "social sensors", such as cell phones @cite_37 , GPS terminals @cite_26 , and smart bus/metro cards @cite_13 , have been adopted to record mobility information of urban residents, based on which many successful applications have emerged for intelligent transportation @cite_1 @cite_38 , environmental protection @cite_33 , urban planning @cite_10 , urban emergency @cite_29 , etc. An excellent survey from an urban computing perspective can be found in @cite_11 , while @cite_26 provides a survey from a social and community dynamics perspective.
{ "cite_N": [ "@cite_38", "@cite_37", "@cite_26", "@cite_33", "@cite_10", "@cite_29", "@cite_1", "@cite_13", "@cite_11" ], "mid": [ "2558098160", "2100378013", "1995151908", "1971402834", "", "2020934359", "2075364600", "2139153982", "2112738128" ], "abstract": [ "Bike sharing systems are booming globally as a green and flexible transportationmode, but the flexibility also brings difficulties in keeping the bike stations balanced with enough bikes and docks. Understanding the spatio-temporal bike trip patterns in a bike sharing system, such as the popular trip origins and destinations during rush hours, is important for researchers to design models for bike scheduling and station management. However, due to privacy and operational concerns, bike trip data are usually not publicly available in many cities. Instead, the station feeds about real-time bike and dock number in stations are usually public, which we refer to as bike sharing system open data. In this paper, we propose an approach to infer the spatio-temporal bike trip patterns from the public station feeds. Since the number of possible trips (i.e., origin-destination station pairs) is much larger than the number of stations, we define the trip inference as an ill-posed inverse problem. To solve this problem, we identify the sparsity and locality properties of bike trip patterns, and propose a sparse and weighted regularization model to impose both properties in the solution. We evaluate our method using real-world data fromWashington, D.C. and New York City. Results show that our method can effectively infer the spatio-temporal bike trip patterns and outperform the baselines in both cities.", "This paper describes a new real-time urban monitoring system. The system uses the Localizing and Handling Network Event Systems (LocHNESs) platform developed by Telecom Italia for the real-time evaluation of urban dynamics based on the anonymous monitoring of mobile cellular networks. 
In addition, data are supplemented based on the instantaneous positioning of buses and taxis to provide information about urban mobility in real time, ranging from traffic conditions to the movements of pedestrians throughout the city. This system was exhibited at the Tenth International Architecture Exhibition of the Venice Biennale. It marks the unprecedented monitoring of a large urban area, which covered most of the city of Rome, in real time using a variety of sensing systems and will hopefully open the way to a new paradigm of understanding and optimizing urban dynamics.", "Vehicles equipped with GPS localizers are an important sensory device for examining people’s movements and activities. Taxis equipped with GPS localizers serve the transportation needs of a large number of people driven by diverse needs; their traces can tell us where passengers were picked up and dropped off, which route was taken, and what steps the driver took to find a new passenger. In this article, we provide an exhaustive survey of the work on mining these traces. We first provide a formalization of the data sets, along with an overview of different mechanisms for preprocessing the data. We then classify the existing work into three main categories: social dynamics, traffic dynamics and operational dynamics. Social dynamics refers to the study of the collective behaviour of a city’s population, based on their observed movements; Traffic dynamics studies the resulting flow of the movement through the road network; Operational dynamics refers to the study and analysis of taxi driver’s modus operandi. We discuss the different problems currently being researched, the various approaches proposed, and suggest new avenues of research. 
Finally, we present a historical overview of the research work in this field and discuss which areas hold most promise for future research.", "Information about urban air quality, e.g., the concentration of PM2.5, is of great importance to protect human health and control air pollution. While there are limited air-quality-monitor-stations in a city, air quality varies in urban spaces non-linearly and depends on multiple factors, such as meteorology, traffic volume, and land uses. In this paper, we infer the real-time and fine-grained air quality information throughout a city, based on the (historical and real-time) air quality data reported by existing monitor stations and a variety of data sources we observed in the city, such as meteorology, traffic flow, human mobility, structure of road networks, and point of interests (POIs). We propose a semi-supervised learning approach based on a co-training framework that consists of two separated classifiers. One is a spatial classifier based on an artificial neural network (ANN), which takes spatially-related features (e.g., the density of POIs and length of highways) as input to model the spatial correlation between air qualities of different locations. The other is a temporal classifier based on a linear-chain conditional random field (CRF), involving temporally-related features (e.g., traffic and meteorology) to model the temporal dependency of air quality in a location. We evaluated our approach with extensive experiments based on five real data sources obtained in Beijing and Shanghai. The results show the advantages of our method over four categories of baselines, including linear Gaussian interpolations, classical dispersion models, well-known classification models like decision tree and CRF, and ANN.", "", "The Great East Japan Earthquake and the Fukushima nuclear accident cause large human population movements and evacuations. 
Understanding and predicting these movements is critical for planning effective humanitarian relief, disaster management, and long-term societ al reconstruction. In this paper, we construct a large human mobility database that stores and manages GPS records from mobile devices used by approximately 1.6 million people throughout Japan from 1 August 2010 to 31 July 2011. By mining this enormous set of Auto-GPS mobile sensor data, the short-term and long-term evacuation behaviors for individuals throughout Japan during this disaster are able to be automatically discovered. To better understand and simulate human mobility during the disasters, we develop a probabilistic model that is able to be effectively trained by the discovered evacuations via machine learning technique. Based on our training model, population mobility in various cities impacted by the disasters throughout the country is able to be automatically simulated or predicted. On the basis of the whole database, developed model, and experimental results, it is easy for us to find some new features or population mobility patterns after the recent severe earthquake, tsunami and release of radioactivity in Japan, which are likely to play a vital role in future disaster relief and management worldwide.", "This paper presents a smart driving direction system leveraging the intelligence of experienced drivers. In this system, GPS-equipped taxis are employed as mobile sensors probing the traffic rhythm of a city and taxi drivers' intelligence in choosing driving directions in the physical world. We propose a time-dependent landmark graph to model the dynamic traffic pattern as well as the intelligence of experienced drivers so as to provide a user with the practically fastest route to a given destination at a given departure time. Then, a Variance-Entropy-Based Clustering approach is devised to estimate the distribution of travel time between two landmarks in different time slots. 
Based on this graph, we design a two-stage routing algorithm to compute the practically fastest and customized route for end users. We build our system based on a real-world trajectory data set generated by over 33,000 taxis in a period of three months, and evaluate the system by conducting both synthetic experiments and in-the-field evaluations. As a result, 60-70 percent of the routes suggested by our method are faster than the competing methods, and 20 percent of the routes share the same results. On average, 50 percent of our routes are at least 20 percent faster than the competing approaches.", "Understanding of the mechanisms driving our daily face-to-face encounters is still limited; the field lacks large-scale datasets describing both individual behaviors and their collective interactions. However, here, with the help of travel smart card data, we uncover such encounter mechanisms and structures by constructing a time-resolved in-vehicle social encounter network on public buses in a city (about 5 million residents). Using a population scale dataset, we find physical encounters display reproducible temporal patterns, indicating that repeated encounters are regular and identical. On an individual scale, we find that collective regularities dominate distinct encounters’ bounded nature. An individual’s encounter capability is rooted in his her daily behavioral regularity, explaining the emergence of “familiar strangers” in daily life. Strikingly, we find individuals with repeated encounters are not grouped into small communities, but become strongly connected over time, resulting in a large, but imperceptible, small-world contact network or “structure of co-presence” across the whole metropolitan area. 
Revealing the encounter pattern and identifying this large-scale contact network are crucial to understanding the dynamics in patterns of social acquaintances, collective human behaviors, and—particularly—disclosing the impact of human behavior on various diffusion spreading processes.", "Urbanization's rapid progress has modernized many people's lives but also engendered big issues, such as traffic congestion, energy consumption, and pollution. Urban computing aims to tackle these issues by using the data that has been generated in cities (e.g., traffic flow, human mobility, and geographical data). Urban computing connects urban sensing, data management, data analytics, and service providing into a recurrent process for an unobtrusive and continuous improvement of people's lives, city operation systems, and the environment. Urban computing is an interdisciplinary field where computer sciences meet conventional city-related fields, like transportation, civil engineering, environment, economy, ecology, and sociology in the context of urban spaces. This article first introduces the concept of urban computing, discussing its general framework and key challenges from the perspective of computer sciences. Second, we classify the applications of urban computing into seven categories, consisting of urban planning, transportation, the environment, energy, social, economy, and public safety and security, presenting representative scenarios in each category. Third, we summarize the typical technologies that are needed in urban computing into four folds, which are about urban sensing, urban data management, knowledge fusion across heterogeneous data, and urban data visualization. Finally, we give an outlook on the future of urban computing, suggesting a few research topics that are somehow missing in the community." ] }
1905.00702
2942843559
Recent years have witnessed the world-wide emergence of mega-metropolises with incredibly huge populations. Understanding residents' mobility patterns, or urban dynamics, thus becomes crucial for building modern smart cities. In this paper, we propose a Neighbor-Regularized and context-aware Non-negative Tensor Factorization model (NR-cNTF) to discover interpretable urban dynamics from urban heterogeneous data. Different from many existing studies concerned with prediction tasks via tensor completion, NR-cNTF focuses on gaining urban managerial insights from spatial, temporal, and spatio-temporal patterns. This is enabled by high-quality Tucker factorizations regularized by both POI-based urban contexts and geographically neighboring relations. NR-cNTF is also capable of unveiling long-term evolutions of urban dynamics via a pipeline initialization approach. We apply NR-cNTF to a real-life data set containing rich taxi GPS trajectories and POI records of Beijing. The results indicate: 1) NR-cNTF accurately captures four kinds of city rhythms and seventeen spatial communities; 2) the rapid development of Beijing, epitomized by the CBD area, indeed intensifies the job-housing imbalance; 3) the southern areas with recent government investments have shown a healthier development tendency. Finally, NR-cNTF is compared with some baselines on traffic prediction, which further justifies the importance of urban contexts awareness and neighboring regulations.
Among the abundant methods for human mobility data mining, tensor factorization (decomposition), like CANDECOMP/PARAFAC (CP) @cite_9 and Tucker factorizations @cite_2 , gains particular interest for its distinct ability in modeling multi-aspect heterogeneous big data. Indeed, in city scenarios data samples always involve many aspects, such as time, space, humans, and urban contexts, and therefore are very suitable for tensor factorization based data mining methods @cite_11 . Typical applications of tensor factorization can be classified into two categories. The first category is to reconstruct tensors for predicting unknown values in multi-aspect data sets, such as completing missing traffic data @cite_23 , inferring urban gas consumption @cite_31 , predicting travel time @cite_25 , recommending social tags @cite_21 , movies @cite_30 and sightseeing locations @cite_35 @cite_34 , and so on.
{ "cite_N": [ "@cite_30", "@cite_35", "@cite_9", "@cite_21", "@cite_23", "@cite_2", "@cite_31", "@cite_34", "@cite_25", "@cite_11" ], "mid": [ "1998932002", "2117587045", "1986326495", "2160118270", "", "1963826206", "2106403424", "2122516730", "2144475703", "2112738128" ], "abstract": [ "In this paper, we propose a novel cross-space affinity learning algorithm over different spaces with heterogeneous structures. Unlike most of affinity learning algorithms on the homogeneous space, we construct a cross-space tensor model to learn the affinity measures on heterogeneous spaces subject to a set of order constraints from the training pool. We further enhance the model with a factorization form which greatly reduces the number of parameters of the model with a controlled complexity. Moreover, from the practical perspective, we show the proposed factorized cross-space tensor model can be efficiently optimized by a series of simple quadratic optimization problems in an iterative manner. The proposed cross-space affinity learning algorithm can be applied to many real-world problems, which involve multiple heterogeneous data objects defined over different spaces. In this paper, we apply it into the recommendation system to measure the affinity between users and the product items, where a higher affinity means a higher rating of the user on the product. For an empirical evaluation, a widely used benchmark movie recommendation data set-MovieLens-is used to compare the proposed algorithm with other state-of-the-art recommendation algorithms and we show that very competitive results can be obtained.", "With the increasing popularity of location tracking services such as GPS, more and more mobile data are being accumulated. Based on such data, a potentially useful service is to make timely and targeted recommendations for users on places where they might be interested to go and activities that they are likely to conduct. 
For example, a user arriving in Beijing might wonder where to visit and what she can do around the Forbidden City. A key challenge for such recommendation problems is that the data we have on each individual user might be very limited, while to make useful and accurate recommendations, we need extensive annotated location and activity information from user trace data. In this paper, we present a new approach, known as user-centered collaborative location and activity filtering (UCLAF), to pull many users' data together and apply collaborative filtering to find like-minded users and like-patterned activities at different locations. We model the user-location-activity relations with a tensor representation, and propose a regularized tensor and matrix decomposition solution which can better address the sparse data problem in mobile information retrieval. We empirically evaluate UCLAF using a real-world GPS dataset collected from 164 users over 2.5 years, and showed that our system can outperform several state-of-the-art solutions to the problem.", "This paper presents a standardized notation and terminology to be used for three- and multiway analyses, especially when these involve (variants of) the CANDECOMP PARAFAC model and the Tucker model. The notation also deals with basic aspects such as symbols for different kinds of products, and terminology for three- and higher-way data. The choices for terminology and symbols to be used have to some extent been based on earlier (informal) conventions. Simplicity and reduction of the possibility of confusion have also played a role in the choices made. Copyright (C) 2000 John Wiley & Sons, Ltd.", "Social tagging is the process by which many users add metadata in the form of keywords, to annotate and categorize items (songs, pictures, Web links, products, etc.). 
Social tagging systems (STSs) can provide three different types of recommendations: They can recommend 1) tags to users, based on what tags other users have used for the same items, 2) items to users, based on tags they have in common with other similar users, and 3) users with common social interest, based on common tags on similar items. However, users may have different interests for an item, and items may have multiple facets. In contrast to the current recommendation algorithms, our approach develops a unified framework to model the three types of entities that exist in a social tagging system: users, items, and tags. These data are modeled by a 3-order tensor, on which multiway latent semantic analysis and dimensionality reduction is performed using both the higher order singular value decomposition (HOSVD) method and the kernel-SVD smoothing technique. We perform experimental comparison of the proposed method against state-of-the-art recommendation algorithms with two real data sets (Last.fm and BibSonomy). Our results show significant improvements in terms of effectiveness measured through recall precision.", "", "The model for three-mode factor analysis is discussed in terms of newer applications of mathematical processes including a type of matrix process termed the Kronecker product and the definition of combination variables. Three methods of analysis to a type of extension of principal components analysis are discussed. Methods II and III are applicable to analysis of data collected for a large sample of individuals. An extension of the model is described in which allowance is made for unique variance for each combination variable when the data are collected for a large sample of individuals.", "Urban transportation is increasingly studied due to its complexity and economic importance. It is also a major component of urban energy use and pollution. The importance of this topic will only increase as urbanization continues around the world. 
A less researched aspect of transportation is the refueling behavior of drivers. In this paper, we propose a step toward real-time sensing of refueling behavior and citywide petrol consumption. We use reported trajectories from a fleet of GPS-equipped taxicabs to detect gas station visits, measure the time spent, and estimate overall demand. For times and stations with sparse data, we use collaborative filtering to estimate conditions. Our system provides real-time estimates of gas stations' waiting times, from which recommendations could be made, an indicator of overall gas usage, from which macro-scale economic decisions could be made, and a geographic view of the efficiency of gas station placement.", "With the increasing popularity of location-based services, we have accumulated a lot of location data on the Web. In this paper, we are interested in answering two popular location-related queries in our daily life: (1) if we want to do something such as sightseeing or dining in a large city like Beijing, where should we go? (2) If we want to visit a place such as the Bird@?s Nest in Beijing Olympic park, what can we do there? We develop a mobile recommendation system to answer these queries. In our system, we first model the users@? location and activity histories as a user-location-activity rating tensor. Because each user has limited data, the resulting rating tensor is essentially very sparse. This makes our recommendation task difficult. In order to address this data sparsity problem, we propose three algorithms based on collaborative filtering. The first algorithm merges all the users@? data together, and uses a collective matrix factorization model to provide general recommendation (, 2010 [3]). The second algorithm treats each user differently and uses a collective tensor and matrix factorization model to provide personalized recommendation (, 2010 [4]). 
The third algorithm is a new algorithm which further improves our previous two algorithms by using a ranking-based collective tensor and matrix factorization model. Instead of trying to predict the missing entry values as accurately as possible, it focuses on directly optimizing the ranking loss w.r.t. user preferences on the locations and activities. Therefore, it is more consistent with our ultimate goal of ranking locations activities for recommendations. For these three algorithms, we also exploit some additional information, such as user-user similarities, location features, activity-activity correlations and user-location preferences, to help the CF tasks. We extensively evaluate our algorithms using a real-world GPS dataset collected by 119 users over 2.5 years. We show that all our three algorithms can consistently outperform the competing baselines, and our newly proposed third algorithm can also outperform our other two previous algorithms.", "In this paper, we propose a citywide and real-time model for estimating the travel time of any path (represented as a sequence of connected road segments) in real time in a city, based on the GPS trajectories of vehicles received in current time slots and over a period of history as well as map data sources. Though this is a strategically important task in many traffic monitoring and routing systems, the problem has not been well solved yet given the following three challenges. The first is the data sparsity problem, i.e., many road segments may not be traveled by any GPS-equipped vehicles in present time slot. In most cases, we cannot find a trajectory exactly traversing a query path either. Second, for the fragment of a path with trajectories, they are multiple ways of using (or combining) the trajectories to estimate the corresponding travel time. 
Finding an optimal combination is a challenging problem, subject to a tradeoff between the length of a path and the number of trajectories traversing the path (i.e., support). Third, we need to instantly answer users' queries which may occur in any part of a given city. This calls for an efficient, scalable and effective solution that can enable a citywide and real-time travel time estimation. To address these challenges, we model different drivers' travel times on different road segments in different time slots with a three dimension tensor. Combined with geospatial, temporal and historical contexts learned from trajectories and map data, we fill in the tensor's missing values through a context-aware tensor decomposition approach. We then devise and prove an object function to model the aforementioned tradeoff, with which we find the most optimal concatenation of trajectories for an estimate through a dynamic programming solution. In addition, we propose using frequent trajectory patterns (mined from historical trajectories) to scale down the candidates of concatenation and a suffix-tree-based index to manage the trajectories received in the present time slot. We evaluate our method based on extensive experiments, using GPS trajectories generated by more than 32,000 taxis over a period of two months. The results demonstrate the effectiveness, efficiency and scalability of our method beyond baseline approaches.", "Urbanization's rapid progress has modernized many people's lives but also engendered big issues, such as traffic congestion, energy consumption, and pollution. Urban computing aims to tackle these issues by using the data that has been generated in cities (e.g., traffic flow, human mobility, and geographical data). Urban computing connects urban sensing, data management, data analytics, and service providing into a recurrent process for an unobtrusive and continuous improvement of people's lives, city operation systems, and the environment. 
Urban computing is an interdisciplinary field where computer sciences meet conventional city-related fields, like transportation, civil engineering, environment, economy, ecology, and sociology in the context of urban spaces. This article first introduces the concept of urban computing, discussing its general framework and key challenges from the perspective of computer sciences. Second, we classify the applications of urban computing into seven categories, consisting of urban planning, transportation, the environment, energy, social, economy, and public safety and security, presenting representative scenarios in each category. Third, we summarize the typical technologies that are needed in urban computing into four folds, which are about urban sensing, urban data management, knowledge fusion across heterogeneous data, and urban data visualization. Finally, we give an outlook on the future of urban computing, suggesting a few research topics that are somehow missing in the community." ] }
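The CP reconstruction that underlies the tensor-completion applications listed in the record above can be sketched as follows: a 3-way tensor entry (e.g. time x location x activity) is approximated as a sum over rank components, X[i][j][k] = sum_r A[i][r] * B[j][r] * C[k][r]. The factor matrices below are made-up rank-2 numbers purely for illustration; they are not learned from any data set mentioned here.

```python
# Illustrative sketch of CANDECOMP/PARAFAC (CP) reconstruction: a missing
# entry of a 3-way tensor is estimated from the factor matrices A, B, C.

def cp_entry(A, B, C, i, j, k):
    """Reconstruct entry X[i,j,k] from rank-R CP factor matrices,
    where R is the number of columns in each factor matrix."""
    rank = len(A[0])
    return sum(A[i][r] * B[j][r] * C[k][r] for r in range(rank))

# Hypothetical rank-2 factors for a tiny 2 x 2 x 2 tensor.
A = [[1.0, 0.5], [0.2, 1.0]]   # e.g. time-slot factors
B = [[0.3, 0.7], [0.9, 0.1]]   # e.g. location factors
C = [[1.0, 0.0], [0.4, 0.6]]   # e.g. activity/context factors

x_000 = cp_entry(A, B, C, 0, 0, 0)  # 1.0*0.3*1.0 + 0.5*0.7*0.0 = 0.3
```

In the completion setting, A, B, and C are fitted to the observed entries (e.g. by alternating least squares), and the same formula then fills in the unobserved ones.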
1905.00702
2942843559
Recent years have witnessed the world-wide emergence of mega-metropolises with incredibly huge populations. Understanding residents mobility patterns, or urban dynamics, thus becomes crucial for building modern smart cities. In this paper, we propose a Neighbor-Regularized and context-aware Non-negative Tensor Factorization model (NR-cNTF) to discover interpretable urban dynamics from urban heterogeneous data. Different from many existing studies concerned with prediction tasks via tensor completion, NR-cNTF focuses on gaining urban managerial insights from spatial, temporal, and spatio-temporal patterns. This is enabled by high-quality Tucker factorizations regularized by both POI-based urban contexts and geographically neighboring relations. NR-cNTF is also capable of unveiling long-term evolutions of urban dynamics via a pipeline initialization approach. We apply NR-cNTF to a real-life data set containing rich taxi GPS trajectories and POI records of Beijing. The results indicate: 1) NR-cNTF accurately captures four kinds of city rhythms and seventeen spatial communities; 2) the rapid development of Beijing, epitomized by the CBD area, indeed intensifies the job-housing imbalance; 3) the southern areas with recent government investments have shown more healthy development tendency. Finally, NR-cNTF is compared with some baselines on traffic prediction, which further justifies the importance of urban contexts awareness and neighboring regulations.
In recent years, more and more works have focused on mining explainable latent factors from multi-aspect urban data sets, which form the second category of applications. The focal point here is to use tensor factorization to discover latent lower-dimensional factors from higher-dimensional multi-aspect data sets. For instance, Metafac @cite_20 used CP factorizations to extract latent community structures from various social networks, and @cite_27 proposed a multi-view data clustering and partitioning method based on Tucker factorization. Our study in this paper also falls in this category, with the most related works discussed as follows.
{ "cite_N": [ "@cite_27", "@cite_20" ], "mid": [ "2085265340", "2130852547" ], "abstract": [ "Clustering by integrating multiview representations has become a crucial issue for knowledge discovery in heterogeneous environments. However, most prior approaches assume that the multiple representations share the same dimension, limiting their applicability to homogeneous environments. In this paper, we present a novel tensor-based framework for integrating heterogeneous multiview data in the context of spectral clustering. Our framework includes two novel formulations; that is multiview clustering based on the integration of the Frobenius-norm objective function (MC-FR-OI) and that based on matrix integration in the Frobenius-norm objective function (MC-FR-MI). We show that the solutions for both formulations can be computed by tensor decompositions. We evaluated our methods on synthetic data and two real-world data sets in comparison with baseline methods. Experimental results demonstrate that the proposed formulations are effective in integrating multiview data in heterogeneous environments.", "This paper aims at discovering community structure in rich media social networks, through analysis of time-varying, multi-relational data. Community structure represents the latent social context of user actions. It has important applications in information tasks such as search and recommendation. Social media has several unique challenges. (a) In social media, the context of user actions is constantly changing and co-evolving; hence the social context contains time-evolving multi-dimensional relations. (b) The social context is determined by the available system features and is unique in each social media website. In this paper we propose MetaFac (MetaGraph Factorization), a framework that extracts community structures from various social contexts and interactions. 
Our work has three key contributions: (1) metagraph, a novel relational hypergraph representation for modeling multi-relational and multi-dimensional social data; (2) an efficient factorization method for community extraction on a given metagraph; (3) an on-line method to handle time-varying relations through incremental metagraph factorization. Extensive experiments on real-world social data collected from the Digg social media website suggest that our technique is scalable and is able to extract meaningful communities based on the social media contexts. We illustrate the usefulness of our framework through prediction tasks. We outperform baseline methods (including aspect model and tensor analysis) by an order of magnitude." ] }
1905.00702
2942843559
Recent years have witnessed the world-wide emergence of mega-metropolises with enormous populations. Understanding residents' mobility patterns, or urban dynamics, thus becomes crucial for building modern smart cities. In this paper, we propose a Neighbor-Regularized and context-aware Non-negative Tensor Factorization model (NR-cNTF) to discover interpretable urban dynamics from heterogeneous urban data. Different from many existing studies concerned with prediction tasks via tensor completion, NR-cNTF focuses on gaining urban managerial insights from spatial, temporal, and spatio-temporal patterns. This is enabled by high-quality Tucker factorizations regularized by both POI-based urban contexts and geographically neighboring relations. NR-cNTF is also capable of unveiling long-term evolutions of urban dynamics via a pipeline initialization approach. We apply NR-cNTF to a real-life data set containing rich taxi GPS trajectories and POI records of Beijing. The results indicate: 1) NR-cNTF accurately captures four kinds of city rhythms and seventeen spatial communities; 2) the rapid development of Beijing, epitomized by the CBD area, indeed intensifies the job-housing imbalance; 3) the southern areas with recent government investments have shown a healthier development tendency. Finally, NR-cNTF is compared with some baselines on traffic prediction, which further justifies the importance of urban context awareness and neighboring regularization.
Despite the wide range of related works mentioned above, our study in this paper has its own uniqueness. Unlike previous works, we focus on understanding urban dynamics from multiple aspects, including spatial, temporal, and spatio-temporal interactions, while also pursuing long-term evolution patterns. The results indeed bring some important managerial insights and suggestions for the development of Beijing. The proposed NR-cNTF model takes Tucker factorization as its basic framework, which, compared with CP- and matrix-factorization-based models @cite_14 @cite_0 @cite_36 @cite_40 , has better interpretability because it adopts a core tensor to model the relations among latent factors. Compared with existing Tucker-factorization-based methods @cite_4 @cite_23 @cite_11 , NR-cNTF incorporates urban contexts and neighboring regularization, which greatly improve both the accuracy and the interpretability of Tucker factorization. Moreover, we propose a pipeline initialization approach, simple yet practical, to analyze the evolution of urban dynamics across several years.
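As a minimal illustration of why the Tucker form aids interpretability, the sketch below reconstructs a (time-of-day, origin, destination)-style flow tensor from a non-negative core and factor matrices via mode-n products; the core entry G[a, b, c] directly weights how strongly temporal pattern a couples origin community b with destination community c. This is a generic Tucker sketch with made-up dimensions, not the authors' NR-cNTF implementation, which additionally applies POI-context and neighbor regularizers during factorization.

```python
import numpy as np

def mode_n_product(tensor, matrix, mode):
    # Contract the tensor's `mode` axis with the matrix's second axis,
    # then move the resulting axis back into position `mode`.
    return np.moveaxis(np.tensordot(tensor, matrix, axes=(mode, 1)), -1, mode)

def tucker_reconstruct(core, factors):
    # X ~= G x1 U1 x2 U2 x3 U3; the core G couples the latent factors.
    x = core
    for mode, u in enumerate(factors):
        x = mode_n_product(x, u, mode)
    return x

rng = np.random.default_rng(0)
core = rng.random((4, 5, 5))              # e.g. 4 rhythms, 5x5 community links
factors = [rng.random((24, 4)),           # hours    -> temporal patterns
           rng.random((17, 5)),           # regions  -> origin communities
           rng.random((17, 5))]           # regions  -> destination communities
flow = tucker_reconstruct(core, factors)  # shape (24, 17, 17), non-negative
```

Because every part is non-negative, each reconstructed flow entry is a purely additive combination of interpretable patterns, which is the property the paragraph above credits to the core tensor.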
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_36", "@cite_0", "@cite_40", "@cite_23", "@cite_11" ], "mid": [ "2082729958", "2465297350", "2027047167", "2509145720", "2347172331", "", "2112738128" ], "abstract": [ "We analyze the passengers' traffic pattern for 1.58 million taxi trips of Shanghai, China. By employing the non-negative matrix factorization and optimization methods, we find that, people travel on workdays mainly for three purposes: commuting between home and workplace, traveling from workplace to workplace, and others such as leisure activities. Therefore, traffic flow in one area or between any pair of locations can be approximated by a linear combination of three basis flows, corresponding to the three purposes respectively. We name the coefficients in the linear combination as traffic powers, each of which indicates the strength of each basis flow. The traffic powers on different days are typically different even for the same location, due to the uncertainty of the human motion. Therefore, we provide a probability distribution function for the relative deviation of the traffic power. This distribution function is in terms of a series of functions for normalized binomial distributions. It can be well explained by statistical theories and is verified by empirical data. These findings are applicable in predicting the road traffic, tracing the traffic pattern and diagnosing the traffic related abnormal events. These results can also be used to infer land uses of urban area quite parsimoniously.", "The rapid developments of ubiquitous mobile computing provide planners and researchers with new opportunities to understand and build smart cities by mining the massive spatial-temporal mobility data. However, given the increasing complexity and volume of the emerging mobility datasets, it also becomes challenging to build novel analytical framework that is capable of understanding the structural properties and critical features. 
In this paper, we introduce an analytical framework to deal with high-dimensional human mobility data. To this end, we formulate mobility data in a probabilistic setting and consider each record a multivariate observation sampled from an underlying distribution. In order to characterize this distribution, we use a multi-way probabilistic factorization model based on the concept of tensor decomposition and probabilistic latent semantic analysis (PLSA). The model provides us with a flexible approach to understand multi-way mobility involving higher-order interactions—which are difficult to characterize with conventional approaches—using simple latent structures. The model can be efficiently estimated using the expectation maximization (EM) algorithm. As a numerical example, this model is applied on a four-way dataset recording 14 million public transport journeys extracted from smart card transactions in Singapore. This framework can shed light on the modeling of urban structure by understanding mobility flows in both spatial and temporal dimensions.", "People flow at a citywide level is in a mixed state with several basic patterns (e.g. commuting, working, commercial), and it is therefore difficult to extract useful information from such a mixture of patterns directly. In this paper, we proposed a novel tensor factorization approach to modeling city dynamics in a basic life pattern space (CitySpectral Space). To obtain the CitySpectrum, we utilized Non-negative Tensor Factorization (NTF) to decompose a people flow tensor into basic life pattern tensors, described by three bases i.e. the intensity variation among different regions, the time-of-day and the sample days. We apply our approach to a big mobile phone GPS log dataset (containing 1.6 million users) to model the fluctuation in people flow before and after the Great East Japan Earthquake from a CitySpectral perspective. In addition, our framework is extensible to a variety of auxiliary spatial-temporal data. 
We parametrize a people flow with a spatial distribution of the Points of Interest (POIs) to quantitatively analyze the relationship between human mobility and POI distribution. Based on the parametric people flow, we propose a spectral approach for a site-selection recommendation and people flow simulation in another similar area using POI distribution.", "Abstract Taxicabs play significant roles in public transport systems in large cities. To meet repetitive demands of daily intra-urban travels, cabdrivers acquire self-organized habitual operation behaviors in space and time, largely with assistance of their longitudinal operating experience. Recognizing those collective operation behavior patterns of taxicabs will enable us to better design and implement public transport services and urban development plan. In this paper, we systematically study patterns of the spatial supply of 6000 + taxicabs in Wuhan, China based on a monthly collection of their digital traces and the non-negative matrix factorization method. We successfully identify a set of high-level statistical features of the spatial operation behaviors of taxicabs in Wuhan, providing valuable insights to our knowledge of the demand and supply of taxicabs in (similar) large cities. First, we decouple several spatially cohesive regions with intensive internal taxicab travels (termed as demand regions ), which intuitively reveal the well-known multi-sectored urban configuration of Wuhan. Second, by applying the non-negative matrix factorization to taxicab's longitudinal traces, we uncover remarkably self-organized operation patterns of cabdrivers in space (termed as supply regions ) as reactions to the sectored distribution of daily travel behaviors. We find that a large proportion of cabdrivers frequently operate within single specific service area and a small proportion of taxicabs works as shifting tools between different service areas. 
Last, we focus on performances of taxicabs with distinct spatial operation behaviors and unveil their statistical characteristics in terms of frequency, duration and distance with passenger on board. Our work demonstrates the great potential to understand and improve urban mobility and public transport system from cabdrivers' collective intelligence.", "A traffic tensor or simply origin ? destination ? time is a new data model for conventional origin destination (O D) matrices. Tensor models are traffic data analysis techniques which use this new data model to improve performance. Tensors outperform other models because both temporal and spatial fluctuations of traffic patterns are simultaneously taken into account, obtaining results that follow a more natural pattern. Three major types of fluctuations can occur in traffic tensors: mutations to the overall traffic flows, alterations to the network topology and chaotic behaviors. How can we detect events in a system that is faced with all types of fluctuations during its life cycle? Our initial studies reveal that the current design of tensor models face some difficulties in dealing with such a realistic scenario. We propose a new hybrid tensor model called HTM that enhances the detection ability of tensor models by using a parallel tracking technique on the traffic's topology. However, tensor decomposition techniques such as Tucker, a key step for tensor models, require a complicated parameter that not only is difficult to choose but also affects the model's quality. We address this problem examining a recent technique called adjustable core size Tucker decomposition (ACS-Tucker). Experiments on simulated and real-world data sets from different domains versus several techniques indicate that the proposed model is effective and robust, therefore it constitutes a viable alternative for analysis of the traffic tensors. 
HighlightsA new problem in tensor-based traffic models is introduced and formulated.Separation of topology makes tensor models more sensitive to topological changes.Our hybrid tensor model has 10 better detection power than naive tensor models.Tensor-based models are more robust than matrix residual methods such as DTA.Adjustable core size Tucker decomposition is a potential method for tensor models.", "", "Urbanization's rapid progress has modernized many people's lives but also engendered big issues, such as traffic congestion, energy consumption, and pollution. Urban computing aims to tackle these issues by using the data that has been generated in cities (e.g., traffic flow, human mobility, and geographical data). Urban computing connects urban sensing, data management, data analytics, and service providing into a recurrent process for an unobtrusive and continuous improvement of people's lives, city operation systems, and the environment. Urban computing is an interdisciplinary field where computer sciences meet conventional city-related fields, like transportation, civil engineering, environment, economy, ecology, and sociology in the context of urban spaces. This article first introduces the concept of urban computing, discussing its general framework and key challenges from the perspective of computer sciences. Second, we classify the applications of urban computing into seven categories, consisting of urban planning, transportation, the environment, energy, social, economy, and public safety and security, presenting representative scenarios in each category. Third, we summarize the typical technologies that are needed in urban computing into four folds, which are about urban sensing, urban data management, knowledge fusion across heterogeneous data, and urban data visualization. Finally, we give an outlook on the future of urban computing, suggesting a few research topics that are somehow missing in the community." ] }
1905.00777
2942772114
In this paper, we consider the combination of two promising techniques: space-shift keying (SSK) and non-orthogonal multiple access (NOMA) for future radio-access networks. We analyze the performance of SSK-NOMA networks and provide a comprehensive analytical framework for SSK-NOMA regarding bit error probability (BEP), ergodic capacity, and outage probability. It is worth pointing out that all analyses also hold for conventional SIMO-NOMA networks. We derive closed-form exact average BEP (ABEP) expressions when the number of users in a resource block is equal to three (i.e., @math ). Nevertheless, we analyze the ABEP of users when the number of users is more than three, i.e., @math , and derive a bit-error-rate (BER) union bound, since the error propagation due to the iterative successive interference canceler (SIC) makes the exact analysis intractable. Then, we analyze the achievable rate of the users and derive the exact ergodic capacity of the users, and hence the ergodic sum rate of the system, in closed form. Moreover, we provide the average outage probability of the users exactly in closed form. All derived expressions are validated via Monte Carlo simulations, and it is shown that SSK-NOMA outperforms conventional NOMA networks in terms of all performance metrics (i.e., BER, sum rate, outage). Finally, the effect of power allocation (PA) on the performance of SSK-NOMA networks is investigated, and the optimum PA is discussed under BER and outage constraints.
Furthermore, not only in SSK-NOMA but also in conventional NOMA networks, error probability analyses are very limited in the literature. To the best of the authors' knowledge, a BER analysis of two-user NOMA is given in @cite_34 and a union BER bound is provided for multi-user NOMA in @cite_8 . However, these works only consider SISO networks, and there has been no work that considers multiple-antenna models yet. Since the intra-cell users are multiplexed by NOMA in the considered SSK-NOMA network, the analysis in this paper is the first study of multiple-antenna NOMA networks in terms of error probability.
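As a self-contained illustration of the SIC error-propagation effect discussed above, the Monte Carlo sketch below estimates the BER of a generic two-user downlink power-domain NOMA link with BPSK over Rayleigh fading: the far user decodes directly, while the near user first detects and cancels the far user's symbol. This is not the SSK-NOMA scheme of the paper (no spatial modulation, no closed-form ABEP); the power split a_far/a_near and the SNR points are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def noma_ber(snr_db, n=200_000, a_far=0.8, a_near=0.2):
    """Monte Carlo BER of a 2-user downlink NOMA link (BPSK, Rayleigh)."""
    b_far = rng.integers(0, 2, n)
    b_near = rng.integers(0, 2, n)
    s = (np.sqrt(a_far) * (2 * b_far - 1)
         + np.sqrt(a_near) * (2 * b_near - 1))      # superposed signal
    snr = 10 ** (snr_db / 10)
    h_f = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)
    h_n = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)
    noise = lambda: (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2 * snr)
    y_f = h_f * s + noise()
    y_n = h_n * s + noise()
    # Far user: decode its own bit, treating the near user's signal as noise.
    far_hat = (np.real(np.conj(h_f) * y_f) > 0).astype(int)
    # Near user: detect the far symbol, subtract it (SIC), then decode.
    # A wrong far decision here propagates into the near user's decision.
    z = np.real(np.conj(h_n) * y_n)
    far_at_near = np.sign(z)
    residual = z - np.abs(h_n) ** 2 * np.sqrt(a_far) * far_at_near
    near_hat = (residual > 0).astype(int)
    return np.mean(far_hat != b_far), np.mean(near_hat != b_near)

for snr_db in (5, 20):
    ber_far, ber_near = noma_ber(snr_db)
    print(f"{snr_db:>2} dB: far BER {ber_far:.4f}, near BER {ber_near:.4f}")
```

Because the residual after an erroneous far-symbol decision is biased by twice the far component, the near user's simulated BER sits above the perfect-SIC prediction, which is exactly why the exact multi-user analysis becomes intractable and a union bound is used instead.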
{ "cite_N": [ "@cite_34", "@cite_8" ], "mid": [ "2810802021", "2896566269" ], "abstract": [ "Non-orthogonal multiple access (NOMA) is a strong candidate for next generation radio access networks due to its ability of serving multiple users using the same time and frequency resources. Therefore, researchers in academia and industry have been recently investigating the error performances and capacity of NOMA schemes. The main drawback of NOMA techniques is the interference among users due to the its non-orthogonal access nature, that is usually solved by interference cancellation techniques such as successive interference cancellation (SIC) at the receivers. On the other hand, the interference among users may not be completely eliminated in the SIC process due to the erroneous decisions in the receivers usually caused by channels. In this study, for the first time in the literature, the authors derive an exact closed-form bit error rate (BER) expressions under SIC error for downlink NOMA over Rayleigh fading channels. Besides, they derive one-degree integral form exact BER expressions and closed-form approximate expressions for uplink NOMA. Then, the derived expressions are validated by simulations. The numerical results are depicted to reveal the effects of error during SIC process on the performance for various cases such as power allocation for downlink and channel quality difference for uplink.", "Non-orthogonal multiple access (NOMA) is currently considered as a promising technology for the next-generation wireless networks. In this paper, the error rate performance of NOMA systems is investigated over Nakagami- @math fading channels, while considering imperfect successive interference cancelation. In particular, this paper focuses on the pairwise error probability (PEP) analysis, where exact PEP expressions are derived to characterize the performance of all users under different fading conditions. 
The obtained PEP expressions are then used to derive an exact union bound on the bit error rate (BER). Through the derived PEP expressions, the asymptotic PEP analysis is presented to investigate the maximum achievable diversity gain of NOMA users. Moreover, using the derived BER bound, the power allocation problem for all users in NOMA systems is considered under average power and users BER constraints, which allows realizing the full potential of NOMA. Monte Carlo simulation and numerical results are presented to corroborate the derived analytical expressions and give valuable insights into the error rate performance of each user and the achievable diversity gain." ] }
1905.01013
2949520894
In real applications, it is common to see people walking in arbitrary directions, holding items, or wearing heavy coats. These factors are challenges for gait-based methods because they significantly change a person's appearance. This paper proposes a novel method for classifying human gender in real time using gait information. The use of an average gait image (AGI), rather than a gait energy image (GEI), allows this method to be computationally efficient and robust against view changes. A viewpoint (VP) model is created for automatically determining the viewing angle during the testing phase. A distance signal (DS) model is constructed to remove any areas with an attachment (carried items, worn coats) from a silhouette to reduce interference in the resulting classification. Finally, the human gender is classified using multiple view-dependent classifiers trained with a support vector machine. Experimental results confirm that the proposed method achieves a high accuracy of 98.8% on the CASIA Dataset B and outperforms recent state-of-the-art methods.
Gait-based recognition techniques can be divided into two categories: marker-based and markerless methods. In an early marker-based work, Kozlowski and Cutting @cite_0 @cite_42 attached a point-light display to a human body to extract gait information. With this system, a human observer can determine a subject's gender from the obtained signals with an acceptable level of accuracy (63%). Today, owing to technical innovations in camera and sensor development, the human gait can be easily captured without a point-light display, leading to the development of markerless methods. Markerless methods for gait recognition can be classified into model-based and appearance-based approaches. The same categorization can also be used for gender classification.
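For concreteness, the average gait image (AGI) mentioned in the abstract above is, at its core, the per-pixel mean of size-normalized, aligned binary silhouettes over one gait cycle. The sketch below shows only that averaging step; alignment, cycle detection, and the paper's VP/DS models are omitted, and the toy frames are made up for illustration.

```python
import numpy as np

def average_gait_image(silhouettes):
    """Per-pixel mean of an aligned (T, H, W) stack of binary
    silhouettes over one gait cycle; output values lie in [0, 1]."""
    stack = np.asarray(silhouettes, dtype=float)
    if stack.ndim != 3:
        raise ValueError("expected a (T, H, W) stack of silhouettes")
    return stack.mean(axis=0)

# Toy example: a torso region present in every frame, a limb pixel in one.
frames = np.zeros((3, 4, 4))
frames[:, 1:3, 1:3] = 1.0        # static body: averages to 1.0
frames[0, 3, 0] = 1.0            # moving limb: averages to 1/3
agi = average_gait_image(frames)
```

Static regions (the torso) keep intensity near 1 while moving limbs yield intermediate values, so the AGI encodes motion statistics in a single image that is cheap to compute, which is what makes it attractive for the real-time setting described above.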
{ "cite_N": [ "@cite_0", "@cite_42" ], "mid": [ "1986874010", "1996579457" ], "abstract": [ "The sex of human walkers can be recognized without familiarity cues from displays of pointlight sources mounted on major joints. Static versions of these abstract displays do not permit accurate recognition of sex. Variation in the degree of armswing or in walking speed generally interferes with recognition, except that faster speeds are associated somewhat with improved recognition of females. Lights on upper-body joints permit more accurate guesses than do Lights on lower-body joints, but identification is possible even from minimal displays, with lights placed only on the ankles. No feedback was given to observers. Confidence judgments of sex relate to the accuracy of responses in a manner that suggests that viewers know what they are doing.", "Several temporal and spatial factors affect gender recognition of a walker when portrayed, without familiarity cues, as a dynamic point-light display. We demonstrate that, among temporal parameters, the duration of the dynamic stimulus must be longer than 1.6 sec, but that 2.7 sec is fully adequate. Given the speed of our walkers, the recognition threshold appears to be roughly two step cycles. In addition, presentation rate of the stimulus must be near to normal, perhaps because nonnormal rates alter apparent gravity and obscure the normal relationship between output and conservation of energy. We demonstrate that, among spatial factors, the discreteness of the joint information must be maintained for accurate recognition. We go on to argue that it is the information about the shoulder and the hip of a walker that is of primary importance. Finally, inversion of the stimulus display produces the unexpected effect of reversing the apparent sex of most walkers. That is, when presented upside down, male walkers appear female and female walkers appear male." ] }