| aid | mid | abstract | related_work | ref_abstract |
|---|---|---|---|---|
1903.02173 | 2920063196 | Consider the lifelong learning paradigm, whose objective is to learn a sequence of tasks depending on previous experiences, e.g., a knowledge library or deep network weights. However, the knowledge libraries or deep networks of most recent lifelong learning models have a prescribed size, which can degrade performance on both learned tasks and coming ones when facing a new task environment (cluster). To address this challenge, we propose a novel incremental clustered lifelong learning framework with two knowledge libraries, a feature learning library and a model knowledge library, called Flexible Clustered Lifelong Learning (FCL3). Specifically, the feature learning library, modeled by an autoencoder architecture, maintains a set of representations common across all the observed tasks, and the model knowledge library is self-selected by identifying and adding new representative models (clusters). When a new task arrives, our proposed FCL3 model first transfers knowledge from these libraries to encode the new task, i.e., effectively and selectively soft-assigning the new task to multiple representative models over the feature learning library. Then, 1) a new task with a higher outlier probability is judged to be a new representative and is used to redefine both the feature learning library and the representative models over time; or 2) a new task with a lower outlier probability only refines the feature learning library. For model optimization, we cast this lifelong learning problem as an alternating direction minimization problem solved as each new task arrives. Finally, we evaluate the proposed framework on several multi-task datasets, and the experimental results demonstrate that our FCL3 model achieves better performance than most lifelong learning frameworks, and even batch clustered multi-task learning models. 
| For the task-clustering based MTL models @cite_3 @cite_33 @cite_52 @cite_34 , the main idea is that all tasks can be partitioned into several clusters, and that the task parameters within each cluster either share a common probabilistic prior or are close to each other in a distance metric. A benefit of such models is their robustness against outlier tasks, since these reside in independent clusters that do not affect other tasks. However, these models may fail to benefit from negatively correlated tasks, because they can only place them in different clusters. Furthermore, @cite_12 clusters multiple tasks by identifying a set of representative tasks, such that an arbitrary task can be described by multiple representative tasks. The objective function of this method is: where @math denotes the assignment of representative tasks for all tasks. However, (i) this method, which selects a subset of representative tasks in an offline regime, cannot be transferred to new task environments; (ii) discriminative features among multiple tasks are not learned during the training phase, which leads to high computational cost due to redundant features. These two challenges are what we address in our flexible clustered lifelong learning framework. | {
"cite_N": [
"@cite_33",
"@cite_52",
"@cite_3",
"@cite_34",
"@cite_12"
],
"mid": [
"2949664970",
"2119187866",
"2118099552",
"1647603301",
""
],
"abstract": [
"In multi-task learning several related tasks are considered simultaneously, with the hope that by an appropriate sharing of information across tasks, each task may benefit from the others. In the context of learning linear functions for supervised classification or regression, this can be achieved by including a priori information about the weight vectors associated with the tasks, and how they are expected to be related to each other. In this paper, we assume that tasks are clustered into groups, which are unknown beforehand, and that tasks within a group have similar weight vectors. We design a new spectral norm that encodes this a priori assumption, without the prior knowledge of the partition of tasks into groups, resulting in a new convex optimization formulation for multi-task learning. We show in simulations on synthetic examples and on the IEDB MHC-I binding dataset, that our approach outperforms well-known convex methods for multi-task learning, as well as related non convex methods dedicated to the same problem.",
"Modeling a collection of similar regression or classification tasks can be improved by making the tasks 'learn from each other'. In machine learning, this subject is approached through 'multitask learning', where parallel tasks are modeled as multiple outputs of the same network. In multilevel analysis this is generally implemented through the mixed-effects linear model where a distinction is made between 'fixed effects', which are the same for all tasks, and 'random effects', which may vary between tasks. In the present article we will adopt a Bayesian approach in which some of the model parameters are shared (the same for all tasks) and others more loosely connected through a joint prior distribution that can be learned from the data. We seek in this way to combine the best parts of both the statistical multilevel approach and the neural network machinery. The standard assumption expressed in both approaches is that each task can learn equally well from any other task. In this article we extend the model by allowing more differentiation in the similarities between tasks. One such extension is to make the prior mean depend on higher-level task characteristics. More unsupervised clustering of tasks is obtained if we go from a single Gaussian prior to a mixture of Gaussians. This can be further generalized to a mixture of experts architecture with the gates depending on task characteristics. All three extensions are demonstrated through application both on an artificial data set and on two real-world problems, one a school problem and the other involving single-copy newspaper sales.",
"Multi-task learning (MTL) learns multiple related tasks simultaneously to improve generalization performance. Alternating structure optimization (ASO) is a popular MTL method that learns a shared low-dimensional predictive structure on hypothesis spaces from multiple related tasks. It has been applied successfully in many real world applications. As an alternative MTL approach, clustered multi-task learning (CMTL) assumes that multiple tasks follow a clustered structure, i.e., tasks are partitioned into a set of groups where tasks in the same group are similar to each other, and that such a clustered structure is unknown a priori. The objectives in ASO and CMTL differ in how multiple tasks are related. Interestingly, we show in this paper the equivalence relationship between ASO and CMTL, providing significant new insights into ASO and CMTL as well as their inherent relationship. The CMTL formulation is non-convex, and we adopt a convex relaxation to the CMTL formulation. We further establish the equivalence relationship between the proposed convex relaxation of CMTL and an existing convex relaxation of ASO, and show that the proposed convex CMTL formulation is significantly more efficient especially for high-dimensional data. In addition, we present three algorithms for solving the convex CMTL formulation. We report experimental results on benchmark datasets to demonstrate the efficiency of the proposed algorithms.",
"In multi-task learning, multiple related tasks are considered simultaneously, with the goal to improve the generalization performance by utilizing the intrinsic sharing of information across tasks. This paper presents a multitask learning approach by modeling the task-feature relationships. Specifically, instead of assuming that similar tasks have similar weights on all the features, we start with the motivation that the tasks should be related in terms of subsets of features, which implies a co-cluster structure. We design a novel regularization term to capture this task-feature co-cluster structure. A proximal algorithm is adopted to solve the optimization problem. Convincing experimental results demonstrate the effectiveness of the proposed algorithm and justify the idea of exploiting the task-feature relationships.",
""
]
} |
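The representative-task encoding idea discussed in the related-work paragraph above can be sketched in a few lines: a new task's parameter vector is approximated as a regularized combination of representative task models, and the resulting coefficients play the role of a soft assignment. This is an illustrative toy under simplifying assumptions (ridge-regularized least squares instead of the cited objective); all names are made up.

```python
import numpy as np

def soft_assign(new_task_w, representatives, l2=1e-3):
    """Encode a new task's weight vector as a combination of
    representative task models via ridge-regularized least squares.
    Returns one assignment coefficient per representative."""
    R = np.column_stack(representatives)  # d x k matrix of representatives
    # Closed form of  min_c ||R c - w||^2 + l2 ||c||^2
    c = np.linalg.solve(R.T @ R + l2 * np.eye(R.shape[1]), R.T @ new_task_w)
    return c

# Two representative models in a 3-D parameter space
reps = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
w_new = np.array([0.6, 0.4, 0.0])  # new task related to both representatives
coeffs = soft_assign(w_new, reps)
residual = np.linalg.norm(np.column_stack(reps) @ coeffs - w_new)
```

With a small regularizer the coefficients recover the mixing weights almost exactly; a task far from all representatives would leave a large residual, which mirrors the "outlier probability" test used to decide whether a new representative is needed.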
1903.02313 | 2922495297 | Driven by the goal of enabling sleep apnea monitoring and machine learning-based detection at home with small mobile devices, we investigate whether interpretation-based indirect knowledge transfer can be used to create classifiers with acceptable performance. Interpretation-based indirect knowledge transfer means that a classifier (student) learns from a synthetic dataset based on the knowledge representation of an already trained Deep Network (teacher). We use activation maximization to generate visualizations and create a synthetic dataset to train the student classifier. This approach has the advantage that student classifiers can be trained without access to the original training data. With experiments we investigate the feasibility of interpretation-based indirect knowledge transfer and its limitations. The student achieves an accuracy of 97.8% on MNIST (teacher accuracy: 99.3%) with a similar but smaller architecture than that of the teacher. The student classifier achieves accuracies of 86.1% and 89.5% for a subset of the Apnea-ECG dataset (teacher: 89.5% and 91.1%, respectively). | Recently, many new techniques for transferring knowledge have been proposed, especially with the goal of reducing the size of a DNN to decrease execution time and memory consumption. Existing model compression techniques, e.g., via pruning or parameter sharing @cite_2 @cite_5 @cite_6 @cite_8 , can be considered a form of knowledge transfer from a trained teacher to a student. Other methods transfer knowledge from a smaller to a larger DNN to make it learn faster @cite_24 , or even between different task domains @cite_3 @cite_18 . | {
"cite_N": [
"@cite_18",
"@cite_8",
"@cite_6",
"@cite_3",
"@cite_24",
"@cite_2",
"@cite_5"
],
"mid": [
"2739879705",
"992687842",
"2963674932",
"2165698076",
"2178031510",
"2766839578",
"2125389748"
],
"abstract": [
"We introduce a novel technique for knowledge transfer, where knowledge from a pretrained deep neural network (DNN) is distilled and transferred to another DNN. As the DNN performs a mapping from the input space to the output space through many layers sequentially, we define the distilled knowledge to be transferred in terms of flow between layers, which is calculated by computing the inner product between features from two layers. When we compare the student DNN and the original network with the same size as the student DNN but trained without a teacher network, the proposed method of transferring the distilled knowledge as the flow between two layers exhibits three important phenomena: (1) the student DNN that learns the distilled knowledge is optimized much faster than the original model, (2) the student DNN outperforms the original DNN, and (3) the student DNN can learn the distilled knowledge from a teacher DNN that is trained at a different task, and the student DNN outperforms the original DNN that is trained from scratch.",
"Deep Neural nets (NNs) with millions of parameters are at the heart of many state-of-the-art computer vision systems today. However, recent works have shown that much smaller models can achieve similar levels of performance. In this work, we address the problem of pruning parameters in a trained NN model. Instead of removing individual weights one at a time as done in previous works, we remove one neuron at a time. We show how similar neurons are redundant, and propose a systematic way to remove them. Our experiments in pruning the densely connected layers show that we can remove upto 85 of the total parameters in an MNIST-trained network, and about 35 for AlexNet without significantly affecting performance. Our method can be applied on top of most networks with a fully connected layer to give a smaller network.",
"Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems. Also, conventional networks fix the architecture before training starts; as a result, training cannot improve the architecture. To address these limitations, we describe a method to reduce the storage and computation required by neural networks by an order of magnitude without affecting their accuracy by learning only the important connections. Our method prunes redundant connections using a three-step method. First, we train the network to learn which connections are important. Next, we prune the unimportant connections. Finally, we retrain the network to fine tune the weights of the remaining connections. On the ImageNet dataset, our method reduced the number of parameters of AlexNet by a factor of 9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar experiments with VGG-16 found that the total number of parameters can be reduced by 13x, from 138 million to 10.3 million, again with no loss of accuracy.",
"A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research.",
"We introduce techniques for rapidly transferring the information stored in one neural net into another neural net. The main purpose is to accelerate the training of a significantly larger neural net. During real-world workflows, one often trains very many different neural networks during the experimentation and design process. This is a wasteful process in which each new model is trained from scratch. Our Net2Net technique accelerates the experimentation process by instantaneously transferring the knowledge from a previous network to each new deeper or wider network. Our techniques are based on the concept of function-preserving transformations between neural network specifications. This differs from previous approaches to pre-training that altered the function represented by a neural net when adding layers to it. Using our knowledge transfer mechanism to add depth to Inception modules, we demonstrate a new state of the art accuracy rating on the ImageNet dataset.",
"Deep convolutional neural networks (CNNs) have recently achieved great success in many visual recognition tasks. However, existing deep neural network models are computationally expensive and memory intensive, hindering their deployment in devices with low memory resources or in applications with strict latency requirements. Therefore, a natural thought is to perform model compression and acceleration in deep networks without significantly decreasing the model performance. During the past few years, tremendous progress has been made in this area. In this paper, we survey the recent advanced techniques for compacting and accelerating CNNs model developed. These techniques are roughly categorized into four schemes: parameter pruning and sharing, low-rank factorization, transferred compact convolutional filters, and knowledge distillation. Methods of parameter pruning and sharing will be described at the beginning, after that the other techniques will be introduced. For each scheme, we provide insightful analysis regarding the performance, related applications, advantages, and drawbacks etc. Then we will go through a few very recent additional successful methods, for example, dynamic capacity networks and stochastic depths networks. After that, we survey the evaluation matrix, the main datasets used for evaluating the model performance and recent benchmarking efforts. Finally, we conclude this paper, discuss remaining challenges and possible directions on this topic.",
"We investigate the use of information from all second order derivatives of the error function to perform network pruning (i.e., removing unimportant weights from a trained network) in order to improve generalization, simplify networks, reduce hardware or storage requirements, increase the speed of further training, and in some cases enable rule extraction. Our method, Optimal Brain Surgeon (OBS), is Significantly better than magnitude-based methods and Optimal Brain Damage [Le Cun, Denker and Solla, 1990], which often remove the wrong weights. OBS permits the pruning of more weights than other methods (for the same error on the training set), and thus yields better generalization on test data. Crucial to OBS is a recursion relation for calculating the inverse Hessian matrix H-1 from training data and structural information of the net. OBS permits a 90 , a 76 , and a 62 reduction in weights over backpropagation with weight decay on three benchmark MONK's problems [, 1991]. Of OBS, Optimal Brain Damage, and magnitude-based methods, only OBS deletes the correct weights from a trained XOR network in every case. Finally, whereas Sejnowski and Rosenberg [1987] used 18,000 weights in their NETtalk network, we used OBS to prune a network to just 1560 weights, yielding better generalization."
]
} |
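As a toy illustration of the pruning-style compression surveyed in the row above, a single magnitude-based pruning pass (the "prune" step of the three-step train/prune/retrain pipeline) can be sketched as follows. The function name and threshold scheme are illustrative, not taken from any cited paper.

```python
import numpy as np

def magnitude_prune(weights, fraction):
    """Zero out the smallest-magnitude `fraction` of the weights,
    mimicking the prune step of train -> prune -> retrain at toy scale."""
    flat = np.abs(weights).ravel()
    k = int(fraction * flat.size)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))          # stand-in for one trained layer
W_pruned = magnitude_prune(W, 0.5)   # drop the weakest half of the connections
sparsity = np.mean(W_pruned == 0.0)
```

In the cited pipelines the surviving weights would then be fine-tuned to recover accuracy; here the point is only that the surviving weights are untouched and the weakest connections are removed.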
1903.02313 | 2922495297 | Driven by the goal of enabling sleep apnea monitoring and machine learning-based detection at home with small mobile devices, we investigate whether interpretation-based indirect knowledge transfer can be used to create classifiers with acceptable performance. Interpretation-based indirect knowledge transfer means that a classifier (student) learns from a synthetic dataset based on the knowledge representation of an already trained Deep Network (teacher). We use activation maximization to generate visualizations and create a synthetic dataset to train the student classifier. This approach has the advantage that student classifiers can be trained without access to the original training data. With experiments we investigate the feasibility of interpretation-based indirect knowledge transfer and its limitations. The student achieves an accuracy of 97.8% on MNIST (teacher accuracy: 99.3%) with a similar but smaller architecture than that of the teacher. The student classifier achieves accuracies of 86.1% and 89.5% for a subset of the Apnea-ECG dataset (teacher: 89.5% and 91.1%, respectively). | In the knowledge distillation method @cite_19 , the student network is trained to match both the softmax output-layer logits of the trained teacher network and the classes of the original data. @cite_4 introduce FitNets, an extension of knowledge distillation for training thinner, deeper networks (student) from wider, shallower ones (teacher). @cite_1 investigate the compression of large ensembles (random forests, bagged decision trees, etc.) via a very small artificial neural network (ANN). As a universal approximator, the ANN is able to generalize to mimic the learned function of the ensemble, given sufficient data. To train the ANN, they create a larger synthetic dataset, based on the real dataset, that is labeled by the ensemble. @cite_19 use knowledge distillation on a selection of informative neurons of the top hidden layers to train the student network. The selection is done by minimizing an energy function that penalizes high correlation and low discriminativeness. | {
"cite_N": [
"@cite_1",
"@cite_19",
"@cite_4"
],
"mid": [
"",
"2543539599",
"1690739335"
],
"abstract": [
"",
"The recent advanced face recognition systems were built on large Deep Neural Networks (DNNs) or their ensembles, which have millions of parameters. However, the expensive computation of DNNs make their deployment difficult on mobile and embedded devices. This work addresses model compression for face recognition, where the learned knowledge of a large teacher network or its ensemble is utilized as supervision to train a compact student network. Unlike previous works that represent the knowledge by the soften label probabilities, which are difficult to fit, we represent the knowledge by using the neurons at the higher hidden layer, which preserve as much information as the label probabilities, but are more compact. By leveraging the essential characteristics (domain knowledge) of the learned face representation, a neuron selection method is proposed to choose neurons that are most relevant to face recognition. Using the selected neurons as supervision to mimic the single networks of DeepID2+ and DeepID3, which are the state-of-the-art face recognition systems, a compact student with simple network structure achieves better verification accuracy on LFW than its teachers, respectively. When using an ensemble of DeepID2+ as teacher, a mimicked student is able to outperform it and achieves 51.6× compression ratio and 90× speed-up in inference, making this cumbersome model applicable on portable devices.",
"While depth tends to improve network performances, it also makes gradient-based training more difficult since deeper networks tend to be more non-linear. The recently proposed knowledge distillation approach is aimed at obtaining small and fast-to-execute models, and it has shown that a student network could imitate the soft output of a larger teacher network or ensemble of networks. In this paper, we extend this idea to allow the training of a student that is deeper and thinner than the teacher, using not only the outputs but also the intermediate representations learned by the teacher as hints to improve the training process and final performance of the student. Because the student intermediate hidden layer will generally be smaller than the teacher's intermediate hidden layer, additional parameters are introduced to map the student hidden layer to the prediction of the teacher hidden layer. This allows one to train deeper students that can generalize better or run faster, a trade-off that is controlled by the chosen student capacity. For example, on CIFAR-10, a deep student network with almost 10.4 times less parameters outperforms a larger, state-of-the-art teacher network."
]
} |
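The soft-target part of the distillation objective described in the related-work paragraph above can be sketched directly: the student is pushed to match the teacher's temperature-softened output distribution. This is a minimal sketch of that one loss term (KL divergence between softened distributions), not a full training loop; the temperature value is an illustrative choice.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened softmax; higher T spreads the distribution."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """KL divergence between the teacher's soft targets and the
    student's softened predictions -- the soft-label distillation term."""
    p = softmax(teacher_logits, T)  # teacher's soft targets
    q = softmax(student_logits, T)
    return float(np.sum(p * (np.log(p) - np.log(q))))

teacher = np.array([5.0, 2.0, 0.1])
loss_matched = distillation_loss(np.array([5.0, 2.0, 0.1]), teacher)  # identical logits
loss_off = distillation_loss(np.array([0.1, 2.0, 5.0]), teacher)      # reversed preference
```

A full distillation objective would typically add a cross-entropy term on the true labels, weighted against this soft-target term.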
1903.02114 | 2921974118 | During the past few years, probabilistic approaches to imitation learning have earned a relevant place in the literature. One of their most prominent features, in addition to extracting a mean trajectory from task demonstrations, is that they provide a variance estimation. The intuitive meaning of this variance, however, changes across different techniques, indicating either variability or uncertainty. In this paper we leverage kernelized movement primitives (KMP) to provide a new perspective on imitation learning by predicting variability, correlations and uncertainty about robot actions. This rich set of information is used in combination with optimal controller fusion to learn actions from data, with two main advantages: i) robots become safe when uncertain about their actions and ii) they are able to leverage partial demonstrations, given as elementary sub-tasks, to optimally perform a higher level, more complex task. We showcase our approach in a painting task, where a human user and a KUKA robot collaborate to paint a wooden board. The task is divided into two sub-tasks and we show that using our approach the robot becomes compliant (hence safe) outside the training regions and executes the two sub-tasks with optimal gains. | In terms of estimating optimal controllers from demonstrations, previous works have exploited either full covariance matrices encoding variability and correlations @cite_5 @cite_19 @cite_12 @cite_14 , or diagonal uncertainty matrices @cite_25 . While the former aim at control efficiency, by having the robot apply higher control efforts where required depending on variability, the latter target safety, with the robot becoming more compliant when uncertain about its actions. The LQR we propose in Section is identical to the one in @cite_5 @cite_25 @cite_12 ; however, by benefiting from the KMP predictions, it unifies the best of the two approaches. | {
"cite_N": [
"@cite_14",
"@cite_19",
"@cite_5",
"@cite_25",
"@cite_12"
],
"mid": [
"",
"2118291463",
"2053324916",
"2909940853",
"2103831616"
],
"abstract": [
"",
"While human behavior prediction can increase the capability of a robotic partner to generate anticipatory behavior during physical human robot interaction (pHRI), predictions in uncertain situations can lead to large disturbances for the human if they do not match the human intentions. In this paper we present a novel control concept in which the assistive control parameters are adapted to the uncertainty in the sense that a the robot takes a more or less active role depending on its confidence in the human behavior prediction. The approach is based on risk-sensitive optimal feedback control. The human behavior is modeled using probabilistic learning methods and any unexpected disturbance is considered as a source of noise. The proposed approach is validated in situations with different uncertainties, process noise and risk-sensitivities in a tow- Degree-of-Freedom virtual reality experiment.",
"We present a task-parameterized probabilistic model encoding movements in the form of virtual spring-damper systems acting in multiple frames of reference. Each candidate coordinate system observes a set of demonstrations from its own perspective, by extracting an attractor path whose variations depend on the relevance of the frame at each step of the task. This information is exploited to generate new attractor paths in new situations (new position and orientation of the frames), with the predicted covariances used to estimate the varying stiffness and damping of the spring-damper systems, resulting in a minimal intervention control strategy. The approach is tested with a 7-DOFs Barrett WAM manipulator whose movement and impedance behavior need to be modulated in regard to the position and orientation of two external objects varying during demonstration and reproduction.",
"Motivated by the desire to have robots physically present in human environments, in recent years we have witnessed an emergence of different approaches for learning active compliance. Some of the most compelling solutions exploit a minimal intervention control principle, correcting deviations from a goal only when necessary, and among those who follow this concept, several probabilistic techniques have stood out from the rest. However, these approaches are prone to requiring several task demonstrations for proper gain estimation and to generating unpredictable robot motions in the face of uncertainty. Here we present a Programming by Demonstration approach for uncertainty-aware impedance regulation, aimed at making the robot compliant - and safe to interact with - when the uncertainty about its predicted actions is high. Moreover, we propose a data-efficient strategy, based on the energy observed during demonstrations, to achieve minimal intervention control, when the uncertainty is low. The approach is validated in an experimental scenario, where a human collaboratively moves an object with a 7-DoF torque-controlled robot.",
"Human-robot collaboration seeks to have humans and robots closely interacting in everyday situations. For some tasks, physical contact between the user and the robot may occur, originating significant challenges at safety, cognition, perception and control levels, among others. This paper focuses on robot motion adaptation to parameters of a collaborative task, extraction of the desired robot behavior, and variable impedance control for human-safe interaction. We propose to teach a robot cooperative behaviors from demonstrations, which are probabilistically encoded by a task-parametrized formulation of a Gaussian mixture model. Such encoding is later used for specifying both the desired state of the robot, and an optimal feedback control law that exploits the variability in position, velocity and force spaces observed during the demonstrations. The whole framework allows the robot to modify its movements as a function of parameters of the task, while showing different impedance behaviors. Tests were successfully carried out in a scenario where a 7 DOF backdrivable manipulator learns to cooperate with a human to transport an object."
]
} |
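The variability-to-gains mapping described in the row above (high demonstrated variability means low control effort is needed, low variability means high stiffness) can be sketched as a simple inverse-covariance rule. This is the minimal-intervention intuition only, under assumed clipping bounds, and not the exact LQR formulation of any cited work.

```python
import numpy as np

def stiffness_from_covariance(cov, k_min=10.0, k_max=500.0):
    """Map a demonstration covariance matrix to a stiffness matrix:
    K proportional to inv(cov), with eigenvalues clipped to [k_min, k_max]
    so that gains stay bounded in both very precise and very loose
    directions."""
    eigvals, eigvecs = np.linalg.eigh(cov)
    # Higher variability -> lower gain (inverse relationship)
    gains = np.clip(1.0 / eigvals, k_min, k_max)
    return eigvecs @ np.diag(gains) @ eigvecs.T

# Demonstrations were consistent along x (low variance) but varied along y
cov = np.diag([1e-4, 1e-1])
K = stiffness_from_covariance(cov)
```

The robot would then track the demonstrated mean with stiffness `K`: stiff where the demonstrations agree, compliant where they vary.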
1903.02114 | 2921974118 | During the past few years, probabilistic approaches to imitation learning have earned a relevant place in the literature. One of their most prominent features, in addition to extracting a mean trajectory from task demonstrations, is that they provide a variance estimation. The intuitive meaning of this variance, however, changes across different techniques, indicating either variability or uncertainty. In this paper we leverage kernelized movement primitives (KMP) to provide a new perspective on imitation learning by predicting variability, correlations and uncertainty about robot actions. This rich set of information is used in combination with optimal controller fusion to learn actions from data, with two main advantages: i) robots become safe when uncertain about their actions and ii) they are able to leverage partial demonstrations, given as elementary sub-tasks, to optimally perform a higher level, more complex task. We showcase our approach in a painting task, where a human user and a KUKA robot collaborate to paint a wooden board. The task is divided into two sub-tasks and we show that using our approach the robot becomes compliant (hence safe) outside the training regions and executes the two sub-tasks with optimal gains. | Finally, inspired by previous work on learning kinematic constraints @cite_4 , we proposed an approach in @cite_15 that allows robots to smoothly switch between sub-tasks, based on the uncertainty of each sub-task's controller, when performing a more complex high-level task. Here we go one step further and incorporate controllers learned from demonstrations into the fusion, instead of manually defining the control gains. | {
"cite_N": [
"@cite_15",
"@cite_4"
],
"mid": [
"2885325819",
"2063978650"
],
"abstract": [
"When learning skills from demonstrations, one is often required to think in advance about the appropriate task representation (usually in either operational or configuration space). We here propose a probabilistic approach for simultaneously learning and synthesizing torque control commands which take into account task space, joint space and force constraints. We treat the problem by considering different torque controllers acting on the robot, whose relevance is learned probabilistically from demonstrations. This information is used to combine the controllers by exploiting the properties of Gaussian distributions, generating new torque commands that satisfy the important features of the task. We validate the approach in two experimental scenarios using 7- DoF torque-controlled manipulators, with tasks that require the consideration of different controllers to be properly executed.",
"We present a probabilistic architecture for solving generically the problem of extracting the task constraints through a Programming by Demonstration (PbD) framework and for generalizing the acquired knowledge to various situations. In previous work, we proposed an approach based on Gaussian Mixture Regression (GMR) to find a controller for the robot reproducing the statistical characteristics of a movement in joint space and in task space through Lagrange optimization. In this paper, we develop an alternative procedure to handle simultaneously constraints in joint space and in task space by combining directly the probabilistic representation of the task constraints with a solution to Jacobian-based inverse kinematics. The method is validated in manipulation tasks with two 5 DOFs Katana robotic arms displacing a set of objects."
]
} |
1903.02013 | 2964172352 | We present PROPS, a lightweight transfer learning mechanism for sequential data. PROPS learns probabilistic perturbations around the predictions of one or more arbitrarily complex, pre-trained black box models (such as recurrent neural networks). The technique pins the black-box prediction functions to "source states" of a hidden Markov model (HMM), and uses the remaining states as "perturbation states" for learning customized perturbations around those predictions. In this paper, we describe the PROPS model, provide an algorithm for online learning of its parameters, and demonstrate the consistency of this estimation. We also explore the utility of PROPS in the context of personalized language modeling. In particular, we construct a baseline language model by training a LSTM on the entire Wikipedia corpus of 2.5 million articles (around 6.6 billion words), and then use PROPS to provide lightweight customization into a personalized language model of President Donald J. Trump’s tweeting. We achieved good customization after only 2,000 additional words, and find that the PROPS model, being fully probabilistic, provides insight into when President Trump’s speech departs from generic patterns in the Wikipedia corpus. All code (both the PROPS training algorithm as well as reproducible experiments) are available as a pip-installable Python package.1 | The (RNN-based) language model personalization scheme of @cite_6 provides many of the desiderata outlined in the introduction. By training a source RNN on a large dataset and then retraining only the last layer on the endpoint, they obtain a lightweight, fast transfer learning scheme for sequential data that respects user privacy. The primary drawback of @cite_6 relative to PROPS is the loss of interpretability in the personalized model. Because the hidden variables of that model are not stochastic, one loses any insight into the respective contributions of the original source model vs.
the personalization model towards predictions about upcoming behavior. A subsidiary drawback is the loss of reliability, as the more expressive model will also have higher variance. | {
"cite_N": [
"@cite_6"
],
"mid": [
"2574917025"
],
"abstract": [
"In this paper, we propose efficient transfer learning methods for training a personalized language model using a recurrent neural network with long short-term memory architecture. With our proposed fast transfer learning schemes, a general language model is updated to a personalized language model with a small amount of user data and a limited computing resource. These methods are especially useful in a mobile device environment, where the data is prevented from transferring out of the device for privacy purposes. Through experiments on dialogue data in a drama, it is verified that our transfer learning methods have successfully generated the personalized language model, whose output is more similar to the personal language style in both qualitative and quantitative aspects."
]
} |
1903.02013 | 2964172352 | We present PROPS, a lightweight transfer learning mechanism for sequential data. PROPS learns probabilistic perturbations around the predictions of one or more arbitrarily complex, pre-trained black box models (such as recurrent neural networks). The technique pins the black-box prediction functions to "source states" of a hidden Markov model (HMM), and uses the remaining states as "perturbation states" for learning customized perturbations around those predictions. In this paper, we describe the PROPS model, provide an algorithm for online learning of its parameters, and demonstrate the consistency of this estimation. We also explore the utility of PROPS in the context of personalized language modeling. In particular, we construct a baseline language model by training a LSTM on the entire Wikipedia corpus of 2.5 million articles (around 6.6 billion words), and then use PROPS to provide lightweight customization into a personalized language model of President Donald J. Trump’s tweeting. We achieved good customization after only 2,000 additional words, and find that the PROPS model, being fully probabilistic, provides insight into when President Trump’s speech departs from generic patterns in the Wikipedia corpus. All code (both the PROPS training algorithm as well as reproducible experiments) are available as a pip-installable Python package.1 | A stochastic recurrent neural network (SRNN) @cite_2 addresses a shortcoming of RNN's relative to state-space models such as HMM's by allowing for stochastic hidden variables. We surmise that the SRNN framework will eventually generate a state-of-the-art transfer learning mechanism for sequential data that satisfies the interpretability desideratum from the introduction. However, to our knowledge, such a mechanism has not yet been developed.
Moreover, training an SRNN is substantially more complex than training a standard RNN, let alone an HMM, and one would expect that computational complexity to spill over into the transference algorithm. If so, PROPS would provide a lightweight alternative. | {
"cite_N": [
"@cite_2"
],
"mid": [
"2963279312"
],
"abstract": [
"How can we efficiently propagate uncertainty in a latent state representation with recurrent neural networks? This paper introduces stochastic recurrent neural networks which glue a deterministic recurrent neural network and a state space model together to form a stochastic and sequential neural generative model. The clear separation of deterministic and stochastic layers allows a structured variational inference network to track the factorization of the model’s posterior distribution. By retaining both the nonlinear recursive structure of a recurrent neural network and averaging over the uncertainty in a latent path, like a state space model, we improve the state of the art results on the Blizzard and TIMIT speech modeling data sets by a large margin, while achieving comparable performances to competing methods on polyphonic music modeling."
]
} |
1903.01855 | 2920536219 | TensorFlow Eager is a multi-stage, Python-embedded domain-specific language for hardware-accelerated machine learning, suitable for both interactive research and production. TensorFlow, which TensorFlow Eager extends, requires users to represent computations as dataflow graphs; this permits compiler optimizations and simplifies deployment but hinders rapid prototyping and run-time dynamism. TensorFlow Eager eliminates these usability costs without sacrificing the benefits furnished by graphs: It provides an imperative front-end to TensorFlow that executes operations immediately and a JIT tracer that translates Python functions composed of TensorFlow operations into executable dataflow graphs. TensorFlow Eager thus offers a multi-stage programming model that makes it easy to interpolate between imperative and staged execution in a single package. | An alternative to staging computations as graphs for performance is to implement fused kernels. For example, NVIDIA provides fused CuDNN kernels for popular recurrent neural network operations that are dramatically faster than non-fused implementations @cite_23 . This approach, while useful, is difficult to scale, as it requires substantial programmer intervention. | {
"cite_N": [
"@cite_23"
],
"mid": [
"1667652561"
],
"abstract": [
"We present a library that provides optimized implementations for deep learning primitives. Deep learning workloads are computationally intensive, and optimizing the kernels of deep learning workloads is difficult and time-consuming. As parallel architectures evolve, kernels must be reoptimized for new processors, which makes maintaining codebases difficult over time. Similar issues have long been addressed in the HPC community by libraries such as the Basic Linear Algebra Subroutines (BLAS) [2]. However, there is no analogous library for deep learning. Without such a library, researchers implementing deep learning workloads on parallel processors must create and optimize their own implementations of the main computational kernels, and this work must be repeated as new parallel processors emerge. To address this problem, we have created a library similar in intent to BLAS, with optimized routines for deep learning workloads. Our implementation contains routines for GPUs, and similarly to the BLAS library, could be implemented for other platforms. The library is easy to integrate into existing frameworks, and provides optimized performance and memory usage. For example, integrating cuDNN into Caffe, a popular framework for convolutional networks, improves performance by 36% on a standard model while also reducing memory consumption."
]
} |
1903.01855 | 2920536219 | TensorFlow Eager is a multi-stage, Python-embedded domain-specific language for hardware-accelerated machine learning, suitable for both interactive research and production. TensorFlow, which TensorFlow Eager extends, requires users to represent computations as dataflow graphs; this permits compiler optimizations and simplifies deployment but hinders rapid prototyping and run-time dynamism. TensorFlow Eager eliminates these usability costs without sacrificing the benefits furnished by graphs: It provides an imperative front-end to TensorFlow that executes operations immediately and a JIT tracer that translates Python functions composed of TensorFlow operations into executable dataflow graphs. TensorFlow Eager thus offers a multi-stage programming model that makes it easy to interpolate between imperative and staged execution in a single package. | TensorFlow Eager is not the first Python library to offer a multi-stage programming model. JAX @cite_26 , a tracing-JIT compiler that generates code for heterogeneous devices via XLA @cite_4 , provides a similar programming paradigm; MXNet and Gluon also let users interpolate between imperative and staged computations, but at a level of abstraction that is higher than ours @cite_25 @cite_19 ; and PyTorch is implementing a staging tracer that is similar to ours @cite_9 . Outside of differentiable programming, Terra is a Lua-embedded DSL that supports code generation, and the paper in which it was introduced presents a thorough treatment of multi-stage programming that is more formal than ours @cite_31 ; as another example, OptiML is a Scala-embedded DSL for machine learning with support for staging and code generation but without support for automatic differentiation @cite_10 . Outside of DSLs, there are several projects that provide just-in-time (JIT) compilation for Python, of which Numba @cite_20 and PyPy @cite_7 are two examples. | {
"cite_N": [
"@cite_26",
"@cite_4",
"@cite_7",
"@cite_9",
"@cite_19",
"@cite_31",
"@cite_10",
"@cite_25",
"@cite_20"
],
"mid": [
"2899971035",
"",
"1993335798",
"",
"",
"2132598718",
"2240938131",
"2186615578",
"2152175008"
],
"abstract": [
"",
"",
"We attempt to apply the technique of Tracing JIT Compilers in the context of the PyPy project, i.e., to programs that are interpreters for some dynamic languages, including Python. Tracing JIT compilers can greatly speed up programs that spend most of their time in loops in which they take similar code paths. However, applying an unmodified tracing JIT to a program that is itself a bytecode interpreter results in very limited or no speedup. In this paper we show how to guide tracing JIT compilers to greatly improve the speed of bytecode interpreters. One crucial point is to unroll the bytecode dispatch loop, based on two kinds of hints provided by the implementer of the bytecode interpreter. We evaluate our technique by applying it to two PyPy interpreters: one is a small example, and the other one is the full Python interpreter.",
"",
"",
"High-performance computing applications, such as auto-tuners and domain-specific languages, rely on generative programming techniques to achieve high performance and portability. However, these systems are often implemented in multiple disparate languages and perform code generation in a separate process from program execution, making certain optimizations difficult to engineer. We leverage a popular scripting language, Lua, to stage the execution of a novel low-level language, Terra. Users can implement optimizations in the high-level language, and use built-in constructs to generate and execute high-performance Terra code. To simplify meta-programming, Lua and Terra share the same lexical environment, but, to ensure performance, Terra code can execute independently of Lua's runtime. We evaluate our design by reimplementing existing multi-language systems entirely in Terra. Our Terra-based auto-tuner for BLAS routines performs within 20% of ATLAS, and our DSL for stencil computations runs 2.3x faster than hand-written C.",
"As the size of datasets continues to grow, machine learning applications are becoming increasingly limited by the amount of available computational power. Taking advantage of modern hardware requires using multiple parallel programming models targeted at different devices (e.g. CPUs and GPUs). However, programming these devices to run efficiently and correctly is difficult, error-prone, and results in software that is harder to read and maintain. We present OptiML, a domain-specific language (DSL) for machine learning. OptiML is an implicitly parallel, expressive and high performance alternative to MATLAB and C++. OptiML performs domain-specific analyses and optimizations and automatically generates CUDA code for GPUs. We show that OptiML outperforms explicitly parallelized MATLAB code in nearly all cases.",
"MXNet is a multi-language machine learning (ML) library to ease the development of ML algorithms, especially for deep neural networks. Embedded in the host language, it blends declarative symbolic expression with imperative tensor computation. It offers auto differentiation to derive gradients. MXNet is computation and memory efficient and runs on various heterogeneous systems, ranging from mobile devices to distributed GPU clusters. This paper describes both the API design and the system implementation of MXNet, and explains how embedding of both symbolic expression and tensor operation is handled in a unified fashion. Our preliminary experiments reveal promising results on large scale deep neural network applications using multiple GPU machines.",
"Theano is a compiler for mathematical expressions in Python that combines the convenience of NumPy's syntax with the speed of optimized native machine language. The user composes mathematical expressions in a high-level description that mimics NumPy's syntax and semantics, while being statically typed and functional (as opposed to imperative). These expressions allow Theano to provide symbolic differentiation. Before performing computation, Theano optimizes the choice of expressions, translates them into C++ (or CUDA for GPU), compiles them into dynamically loaded Python modules, all automatically. Common machine learning algorithms implemented with Theano are from 1.6x to 7.5x faster than competitive alternatives (including those implemented with C/C++, NumPy/SciPy and MATLAB) when compiled for the CPU and between 6.5x and 44x faster when compiled for the GPU. This paper illustrates how to use Theano, outlines the scope of the compiler, provides benchmarks on both CPU and GPU processors, and explains its overall design."
]
} |
1903.01855 | 2920536219 | TensorFlow Eager is a multi-stage, Python-embedded domain-specific language for hardware-accelerated machine learning, suitable for both interactive research and production. TensorFlow, which TensorFlow Eager extends, requires users to represent computations as dataflow graphs; this permits compiler optimizations and simplifies deployment but hinders rapid prototyping and run-time dynamism. TensorFlow Eager eliminates these usability costs without sacrificing the benefits furnished by graphs: It provides an imperative front-end to TensorFlow that executes operations immediately and a JIT tracer that translates Python functions composed of TensorFlow operations into executable dataflow graphs. TensorFlow Eager thus offers a multi-stage programming model that makes it easy to interpolate between imperative and staged execution in a single package. | Multi-stage programming is a well-studied topic in programming languages; a good reference is @cite_1 , and a modern design from which we drew inspiration is Scala's lightweight modular staging @cite_29 . Multi-stage programming is related to staging transformations in compilers and partial evaluation in programming languages, for which @cite_11 and @cite_30 are classic references, respectively. | {
"cite_N": [
"@cite_30",
"@cite_29",
"@cite_1",
"@cite_11"
],
"mid": [
"1556604985",
"2154697693",
"1650987719",
"1990347915"
],
"abstract": [
"Functions, types and expressions programming languages and their operational semantics compilation partial evaluation of a flow chart languages partial evaluation of a first-order functional languages the view from Olympus partial evaluation of the Lambda calculus partial evaluation of prolog aspects of Similix - a partial evaluator for a subset of scheme partial evaluation of C applications of partial evaluation termination of partial evaluation program analysis more general program transformation guide to the literature the self-applicable scheme specializer.",
"Software engineering demands generality and abstraction, performance demands specialization and concretization. Generative programming can provide both, but the effort required to develop high-quality program generators likely offsets their benefits, even if a multi-stage programming language is used. We present lightweight modular staging, a library-based multi-stage programming approach that breaks with the tradition of syntactic quasi-quotation and instead uses only types to distinguish between binding times. Through extensive use of component technology, lightweight modular staging makes an optimizing compiler framework available at the library level, allowing programmers to tightly integrate domain-specific abstractions and optimizations into the generation process. We argue that lightweight modular staging enables a form of language virtualization, i.e. allows to go from a pure-library embedded language to one that is practically equivalent to a stand-alone implementation with only modest effort.",
"Multi-stage programming (MSP) is a paradigm for developing generic software that does not pay a runtime penalty for this generality. This is achieved through concise, carefully-designed language extensions that support runtime code generation and program execution. Additionally, type systems for MSP languages are designed to statically ensure that dynamically generated programs are type-safe, and therefore require no type checking after they are generated.",
"Computations can generally be separated into stages, which are distinguished from one another by either frequency of execution or availability of data. Precomputation and frequency reduction involve moving computation among a collection of stages so that work is done as early as possible (so less time is required in later steps) and as infrequently as possible (to reduce overall time).We present, by means of examples, several general transformation techniques for carrying out precomputation transformations. We illustrate the techniques by deriving fragments of simple compilers from interpreters, including an example of Prolog compilation, but the techniques are applicable in a broad range of circumstances. Our aim is to demonstrate how perspicuous accounts of precomputation and frequency reduction can be given for a wide range of applications using a small number of relatively straightforward techniques.Related work in partial evaluation, semantically directed compilation, and compiler optimization is discussed."
]
} |
1903.01756 | 2920165356 | Given a directed graph @math with arbitrary real-valued weights, the single source shortest-path problem (SSSP) asks for, given a source @math in @math , finding a shortest path from @math to each vertex @math in @math . A classical SSSP algorithm detects a negative cycle of @math or constructs a shortest-path tree (SPT) rooted at @math in @math time, where @math are the numbers of edges and vertices in @math respectively. In many practical applications, new constraints come from time to time and we need to update the SPT frequently. Given an SPT @math of @math , suppose the weight on a certain edge is modified. We show by rigorous proof that the well-known Ball-String algorithm for positively weighted graphs can be adapted to solve the dynamic SPT problem for directed graphs with arbitrary weights. Let @math be the number of vertices that are affected (i.e., vertices that have different distances from @math or different parents in the input and output SPTs) and @math the number of edges incident to an affected vertex. The adapted algorithms terminate in @math time, either detecting a negative cycle (only in the decremental case) or constructing a new SPT @math for the updated graph. We show by an example that the output SPT @math may have more than necessary edge changes to @math . To remedy this, we give a general method for transforming @math into an SPT with minimal edge changes in time @math provided that @math has no cycles with zero length. | Let @math be a weighted directed graph. For any edge @math in @math , we call @math , respectively, the tail and the head of @math . The weight of an edge @math in @math is written as @math . In applications, these weights may denote distances between two cities or costs between two routers and thus are usually non-negative.
The simple temporal problem @cite_16 can also be represented as a weighted directed graph, where each vertex @math represents a time point @math , and a weight @math on the edge from vertex @math to @math specifies that the time difference of @math and @math is upper bounded by @math , i.e., @math . Clearly, weights in this case may be negative. | {
"cite_N": [
"@cite_16"
],
"mid": [
"2130182605"
],
"abstract": [
"Abstract This paper extends network-based methods of constraint satisfaction to include continuous variables, thus providing a framework for processing temporal constraints. In this framework, called temporal constraint satisfaction problem (TCSP), variables represent time points and temporal information is represented by a set of unary and binary constraints, each specifying a set of permitted intervals. The unique feature of this framework lies in permitting the processing of metric information, namely, assessments of time differences between events. We present algorithms for performing the following reasoning tasks: finding all feasible times that a given event can occur, finding all possible relationships between two given events, and generating one or more scenarios consistent with the information provided. We distinguish between simple temporal problems (STPs) and general temporal problems, the former admitting at most one interval constraint on any pair of time points. We show that the STP, which subsumes the major part of Vilain and Kautz's point algebra, can be solved in polynomial time. For general TCSPs, we present a decomposition scheme that performs the three reasoning tasks considered, and introduce a variety of techniques for improving its efficiency. We also study the applicability of path consistency algorithms as preprocessing of temporal problems, demonstrate their termination and bound their complexities."
]
} |
1903.01756 | 2920165356 | Given a directed graph @math with arbitrary real-valued weights, the single source shortest-path problem (SSSP) asks for, given a source @math in @math , finding a shortest path from @math to each vertex @math in @math . A classical SSSP algorithm detects a negative cycle of @math or constructs a shortest-path tree (SPT) rooted at @math in @math time, where @math are the numbers of edges and vertices in @math respectively. In many practical applications, new constraints come from time to time and we need to update the SPT frequently. Given an SPT @math of @math , suppose the weight on a certain edge is modified. We show by rigorous proof that the well-known Ball-String algorithm for positively weighted graphs can be adapted to solve the dynamic SPT problem for directed graphs with arbitrary weights. Let @math be the number of vertices that are affected (i.e., vertices that have different distances from @math or different parents in the input and output SPTs) and @math the number of edges incident to an affected vertex. The adapted algorithms terminate in @math time, either detecting a negative cycle (only in the decremental case) or constructing a new SPT @math for the updated graph. We show by an example that the output SPT @math may have more than necessary edge changes to @math . To remedy this, we give a general method for transforming @math into an SPT with minimal edge changes in time @math provided that @math has no cycles with zero length. | The Ball-String algorithm was first described in @cite_33 for dynamic graphs with only positive weights. When negative weights are allowed and the weight of an edge is decreased, the algorithm cannot be directly applied as there may exist negative cycles in the updated graph. We thus need to introduce procedures for checking negative cycles. Moreover, as we will show in section 3, the algorithm does not always output an SPT with minimal edge changes even in the incremental case. | {
"cite_N": [
"@cite_33"
],
"mid": [
"2132523953"
],
"abstract": [
"A key functionality in today's widely used interior gateway routing protocols such as OSPF and IS-IS involves the computation of a shortest path tree (SPT). In many existing commercial routers, the computation of an SPT is done from scratch following changes in the link states of the network. As there may coexist multiple SPTs in a network with a set of given link states, such recomputation of an entire SPT not only is inefficient but also causes frequent unnecessary changes in the topology of an existing SPT and creates routing instability. This paper presents a new dynamic SPT algorithm that makes use of the structure of the previously computed SPT. Our algorithm is derived by recasting the SPT problem into an optimization problem in a dual linear programming framework, which can also be interpreted using a ball-and-string model. In this model, the increase (or decrease) of an edge weight in the tree corresponds to the lengthening (or shortening) of a string. By stretching the strings until each node is attached to a tight string, the resulting topology of the model defines an (or multiple) SPT(s). By emulating the dynamics of the ball-and-string model, we can derive an efficient algorithm that propagates changes in distances to all affected nodes in a natural order and in a most economical way. Compared with existing results, our algorithm has the best-known performance in terms of computational complexity as well as minimum changes made to the topology of an SPT. Rigorous proofs for correctness of our algorithm and simulation results illustrating its complexity are also presented."
]
} |
1903.01695 | 2920362488 | We propose novel real-time algorithm to localize hands and find their associations with multiple people in the cluttered 4D volumetric data (dynamic 3D volumes). Different from the traditional multiple view approaches, which find key points in 2D and then triangulate to recover the 3D locations, our method directly processes the dynamic 3D data that involve both clutter and crowd. The volumetric representation is more desirable than the partial observations from different view points and enables more robust and accurate results. However, due to the large amount of data in the volumetric representation brute force 3D schemes are slow. In this paper, we propose novel real-time methods to tackle the problem to achieve both higher accuracy and faster speed than previous approaches. Our method detects the 3D bounding box of each subject and localizes the hands of each person. We develop new 2D features for fast candidate proposals and optimize the trajectory linking using a new max-covering bipartite matching formulation, which is critical for robust performance. We propose a novel decomposition method to reduce the key point localization in each person 3D volume to a sequence of efficient 2D problems. Our experiments show that the proposed method is faster than different competing methods and it gives almost half the localization error. | Hand point localization can be tackled as a deep regression problem: a person volume is sent to a deep neural network which directly outputs the coordinates of the hands of the person centered in the volume. Such a method is an extension of DeepPose @cite_19 from 2D color images. With proper design, 3D deep regression gives decent results, but it often fails when human poses are drastically different from the training examples. | {
"cite_N": [
"@cite_19"
],
"mid": [
"2113325037"
],
"abstract": [
"We propose a method for human pose estimation based on Deep Neural Networks (DNNs). The pose estimation is formulated as a DNN-based regression problem towards body joints. We present a cascade of such DNN regressors which results in high precision pose estimates. The approach has the advantage of reasoning about pose in a holistic fashion and has a simple but yet powerful formulation which capitalizes on recent advances in Deep Learning. We present a detailed empirical analysis with state-of-the-art or better performance on four academic benchmarks of diverse real-world images."
]
} |
1903.01695 | 2920362488 | We propose novel real-time algorithm to localize hands and find their associations with multiple people in the cluttered 4D volumetric data (dynamic 3D volumes). Different from the traditional multiple view approaches, which find key points in 2D and then triangulate to recover the 3D locations, our method directly processes the dynamic 3D data that involve both clutter and crowd. The volumetric representation is more desirable than the partial observations from different view points and enables more robust and accurate results. However, due to the large amount of data in the volumetric representation brute force 3D schemes are slow. In this paper, we propose novel real-time methods to tackle the problem to achieve both higher accuracy and faster speed than previous approaches. Our method detects the 3D bounding box of each subject and localizes the hands of each person. We develop new 2D features for fast candidate proposals and optimize the trajectory linking using a new max-covering bipartite matching formulation, which is critical for robust performance. We propose a novel decomposition method to reduce the key point localization in each person 3D volume to a sequence of efficient 2D problems. Our experiments show that the proposed method is faster than different competing methods and it gives almost half the localization error. | Our method is also related to the point cloud or volumetric data semantic segmentation @cite_11 @cite_8 @cite_7 @cite_18 @cite_31 @cite_27 @cite_16 . 3D semantic segmentation labels points or voxels in a volume with specific classes. Potentially it can be used to find the left and right hand voxels in the volume data and then the locations of the left and right hand of the person can be further extracted. However, 3D semantic segmentation methods are not sufficient for hand detection and association because they are instance agnostic. 
When there are multiple people in close proximity, we in fact want hand labels specific to each individual subject. Traditional 3D semantic segmentation methods thus have difficulty associating hands with subjects in a scene. These methods also have high complexity, even when taking advantage of the sparseness of the volumetric data. Our proposed method addresses these problems. | {
"cite_N": [
"@cite_18",
"@cite_7",
"@cite_8",
"@cite_27",
"@cite_31",
"@cite_16",
"@cite_11"
],
"mid": [
"2765754958",
"2559902616",
"2950642167",
"2603429625",
"2737234477",
"2962928871",
"2963182550"
],
"abstract": [
"3D semantic scene labeling is fundamental to agents operating in the real world. In particular, labeling raw 3D point sets from sensors provides fine-grained semantics. Recent works leverage the capabilities of Neural Networks (NNs), but are limited to coarse voxel predictions and do not explicitly enforce global consistency. We present SEGCloud, an end-to-end framework to obtain 3D point-level segmentation that combines the advantages of NNs, trilinear interpolation(TI) and fully connected Conditional Random Fields (FC-CRF). Coarse voxel predictions from a 3D Fully Convolutional NN are transferred back to the raw 3D points via trilinear interpolation. Then the FC-CRF enforces global consistency and provides fine-grained semantics on the points. We implement the latter as a differentiable Recurrent NN to allow joint optimization. We evaluate the framework on two indoor and two outdoor 3D datasets (NYU V2, S3DIS, KITTI, Semantic3D.net), and show performance comparable or superior to the state-of-the-art on all datasets.",
"In this paper, we study the problem of semantic annotation on 3D models that are represented as shape graphs. A functional view is taken to represent localized information on graphs, so that annotations such as part segments or keypoints are nothing but 0-1 indicator vertex functions. Compared with images, which are 2D grids, shape graphs are irregular and non-isomorphic data structures. To enable the prediction of vertex functions on them by convolutional neural networks, we resort to the spectral CNN method that enables weight sharing by parameterizing kernels in the spectral domain spanned by graph Laplacian eigenbases. Under this setting, our network, named SyncSpecCNN, strives to overcome two key challenges: how to share coefficients and conduct multi-scale analysis in different parts of the graph for a single shape, and how to share information across related but different shapes that may be represented by very different graphs. Towards these goals, we introduce a spectral parameterization of dilated convolutional kernels and a spectral transformer network. Experimentally, we tested our SyncSpecCNN on various tasks, including 3D shape part segmentation and 3D keypoint prediction. State-of-the-art performance has been achieved on all benchmark datasets.",
"Point cloud is an important type of geometric data structure. Due to its irregular format, most researchers transform such data to regular 3D voxel grids or collections of images. This, however, renders data unnecessarily voluminous and causes issues. In this paper, we design a novel type of neural network that directly consumes point clouds and well respects the permutation invariance of points in the input. Our network, named PointNet, provides a unified architecture for applications ranging from object classification, part segmentation, to scene semantic parsing. Though simple, PointNet is highly efficient and effective. Empirically, it shows strong performance on par or even better than state of the art. Theoretically, we provide analysis towards understanding of what the network has learnt and why the network is robust with respect to input perturbation and corruption.",
"We present a deep convolutional decoder architecture that can generate volumetric 3D outputs in a compute- and memory-efficient manner by using an octree representation. The network learns to predict both the structure of the octree, and the occupancy values of individual cells. This makes it a particularly valuable technique for generating 3D shapes. In contrast to standard decoders acting on regular voxel grids, the architecture does not have cubic complexity. This allows representing much higher resolution outputs with a limited memory budget. We demonstrate this in several application domains, including 3D convolutional autoencoders, generation of objects and whole scenes from high-level representations, and shape from a single image.",
"We present O-CNN, an Octree-based Convolutional Neural Network (CNN) for 3D shape analysis. Built upon the octree representation of 3D shapes, our method takes the average normal vectors of a 3D model sampled in the finest leaf octants as input and performs 3D CNN operations on the octants occupied by the 3D shape surface. We design a novel octree data structure to efficiently store the octant information and CNN features into the graphics memory and execute the entire O-CNN training and evaluation on the GPU. O-CNN supports various CNN structures and works for 3D shapes in different representations. By restraining the computations on the octants occupied by 3D surfaces, the memory and computational costs of the O-CNN grow quadratically as the depth of the octree increases, which makes the 3D CNN feasible for high-resolution 3D models. We compare the performance of the O-CNN with other existing 3D CNN solutions and demonstrate the efficiency and efficacy of O-CNN in three shape analysis tasks, including object classification, shape retrieval, and shape segmentation.",
"We present a new deep learning architecture (called Kd-network) that is designed for 3D model recognition tasks and works with unstructured point clouds. The new architecture performs multiplicative transformations and shares parameters of these transformations according to the subdivisions of the point clouds imposed onto them by kd-trees. Unlike the currently dominant convolutional architectures that usually require rasterization on uniform two-dimensional or three-dimensional grids, Kd-networks do not rely on such grids in any way and therefore avoid poor scaling behavior. In a series of experiments with popular shape recognition benchmarks, Kd-networks demonstrate competitive performance in a number of shape recognition tasks such as shape classification, shape retrieval and shape part segmentation.",
"Convolutional networks are the de-facto standard for analyzing spatio-temporal data such as images, videos, and 3D shapes. Whilst some of this data is naturally dense (e.g., photos), many other data sources are inherently sparse. Examples include 3D point clouds that were obtained using a LiDAR scanner or RGB-D camera. Standard \"dense\" implementations of convolutional networks are very inefficient when applied on such sparse data. We introduce new sparse convolutional operations that are designed to process spatially-sparse data more efficiently, and use them to develop spatially-sparse convolutional networks. We demonstrate the strong performance of the resulting models, called submanifold sparse convolutional networks (SS-CNs), on two tasks involving semantic segmentation of 3D point clouds. In particular, our models outperform all prior state-of-the-art on the test set of a recent semantic segmentation competition."
]
} |
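The trajectory linking in the row above is cast as a max-covering bipartite matching between detections in consecutive frames. As an illustrative sketch only (a hypothetical brute-force simplification, not the paper's real-time formulation), the objective can be made concrete: choose the pairing that maximizes the number of links within a distance gate, breaking ties by total distance.

```python
from itertools import permutations

def link_detections(prev, curr, max_dist=1.0):
    """Brute-force max-covering assignment between two frames of 3D
    detections: maximize the number of links whose Euclidean distance
    is at most max_dist, breaking ties by smaller total distance.
    Exponential in frame size; a real system needs a polynomial-time
    matcher, but the objective is the same."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    n, m = len(prev), len(curr)
    best = (-1, float("inf"), [])  # (covered links, total cost, links)
    for perm in permutations(range(m), min(n, m)):
        links = [(i, j) for i, j in enumerate(perm)
                 if dist(prev[i], curr[j]) <= max_dist]
        covered = len(links)
        cost = sum(dist(prev[i], curr[j]) for i, j in links)
        if (covered, -cost) > (best[0], -best[1]):
            best = (covered, cost, links)
    return best[2]
```

For two people whose positions swap order between frames, the matcher still links each previous detection to its nearest current one rather than matching by index.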
1903.02044 | 2971157333 | This paper introduces a method to compute a sparse lattice planner control set that is suited to a particular task by learning from a representative dataset of vehicle paths. To do this, we use a scoring measure similar to the Frechet distance and propose an algorithm for evaluating a given control set according to the scoring measure. Control actions are then selected from a dense control set according to an objective function that rewards improvements in matching the dataset while also encouraging sparsity. This method is evaluated across several experiments involving real and synthetic datasets, and it is shown to generate smaller control sets when compared to the previous state-of-the-art lattice control set computation technique, with these smaller control sets maintaining a high degree of manoeuvrability in the required task. This results in a planning time speedup of up to 4.31x when using the learned control set over the state-of-the-art computed control set. In addition, we show the learned control sets are better able to capture the driving style of the dataset in terms of path curvature. | In previous work, data-driven motion planning has often focused on learning search heuristics or policies for the motion planner rather than learning the underlying structure of the planner itself. developed a method for learning a sampling distribution for RRT* motion planning @cite_15 . Imitation learning can also be used to learn a search heuristic based on previously planned optimal paths @cite_3 @cite_7 . have developed a method for optimizing search heuristics for a given kinodynamic planning problem @cite_24 . used reinforcement learning to learn a control policy for quadcopters by training on MPC outputs @cite_12 . | {
"cite_N": [
"@cite_7",
"@cite_3",
"@cite_24",
"@cite_15",
"@cite_12"
],
"mid": [
"",
"2963409832",
"2568324136",
"2963439114",
"1923344279"
],
"abstract": [
"",
"Robot planning is the process of selecting a sequence of actions that optimize for a task-specific objective. For instance, the objective for a navigation task would be to find collision-free paths, whereas the objective for an exploration task would be to map unknown areas. The optimal solutions to such tasks are heavily influenced by the implicit structure in the environment, i.e. the configuration of objects in the world. State-of-the-art planning approaches, however, do not exploit this structure, thereby expending valuable effort searching the action space instead of focusing on potentially good actions. In this paper, we address the problem of enabling planners to adapt their search strategies by inferring such good actions in an efficient manner using only the information uncovered by the search up until that time. We formulate this as a problem of sequential decision making under uncertainty where at a given iteration a planning policy must map the state of the search to a planning action. Unfortu...",
"How does one obtain an admissible heuristic for a kinodynamic motion planning problem? This letter develops the analytical tools and techniques to answer this question. A sufficient condition for the admissibility of a heuristic is presented, which can be checked directly from problem data. This condition is also used to formulate an infinite-dimensional linear program to optimize an admissible heuristic. We then investigate the use of sum-of-squares programming techniques to obtain an approximate solution to this linear program. A number of examples are provided to demonstrate these new concepts.",
"A defining feature of sampling-based motion planning is the reliance on an implicit representation of the state space, which is enabled by a set of probing samples. Traditionally, these samples are drawn either probabilistically or deterministically to uniformly cover the state space. Yet, the motion of many robotic systems is often restricted to “small” regions of the state space, due to e.g. differential constraints or collision-avoidance constraints. To accelerate the planning process, it is thus desirable to devise non-uniform sampling strategies that favor sampling in those regions where an optimal solution might lie. This paper proposes a methodology for nonuniform sampling, whereby a sampling distribution is learned from demonstrations, and then used to bias sampling. The sampling distribution is computed through a conditional variational autoencoder, allowing sample generation from the latent space conditioned on the specific planning problem. This methodology is general, can be used in combination with any sampling-based planner, and can effectively exploit the underlying structure of a planning problem while maintaining the theoretical guarantees of sampling-based approaches. Specifically, on several planning problems, the proposed methodology is shown to effectively learn representations for the relevant regions of the state space, resulting in an order of magnitude improvement in terms of success rate and convergence to the optimal cost.",
"Model predictive control (MPC) is an effective method for controlling robotic systems, particularly autonomous aerial vehicles such as quadcopters. However, application of MPC can be computationally demanding, and typically requires estimating the state of the system, which can be challenging in complex, unstructured environments. Reinforcement learning can in principle forego the need for explicit state estimation and acquire a policy that directly maps sensor readings to actions, but is difficult to apply to unstable systems that are liable to fail catastrophically during training before an effective policy has been found. We propose to combine MPC with reinforcement learning in the framework of guided policy search, where MPC is used to generate data at training time, under full state observations provided by an instrumented training environment. This data is used to train a deep neural network policy, which is allowed to access only the raw observations from the vehicle's onboard sensors. After training, the neural network policy can successfully control the robot without knowledge of the full state, and at a fraction of the computational cost of MPC. We evaluate our method by learning obstacle avoidance policies for a simulated quadrotor, using simulated onboard sensors and no explicit state estimation at test time."
]
} |
1903.02044 | 2971157333 | This paper introduces a method to compute a sparse lattice planner control set that is suited to a particular task by learning from a representative dataset of vehicle paths. To do this, we use a scoring measure similar to the Frechet distance and propose an algorithm for evaluating a given control set according to the scoring measure. Control actions are then selected from a dense control set according to an objective function that rewards improvements in matching the dataset while also encouraging sparsity. This method is evaluated across several experiments involving real and synthetic datasets, and it is shown to generate smaller control sets when compared to the previous state-of-the-art lattice control set computation technique, with these smaller control sets maintaining a high degree of manoeuvrability in the required task. This results in a planning time speedup of up to 4.31x when using the learned control set over the state-of-the-art computed control set. In addition, we show the learned control sets are better able to capture the driving style of the dataset in terms of path curvature. | For work involving lattice planner control set optimization, have developed a D*-like (DL) algorithm for finding a subset of a lattice control set that spans the same reachability of the original control set, but does so within a multiplicative factor of each original control action's arc length @cite_9 . This algorithm does not rely on data, but instead relies on the structure of the original control set to find redundancy. In contrast, our method attempts to leverage data for a particular application to optimize the control set. This paper uses the DL algorithm as the state-of-the-art comparison for the quality of the presented learning algorithm. | {
"cite_N": [
"@cite_9"
],
"mid": [
"2142298978"
],
"abstract": [
"This paper presents a type of motion primitives that can be used for building efficient kinodynamic motion planners. The primitives are pre-computed to meet two objectives: to capture the mobility constraints of the robot as well as possible and to establish a state sampling policy that is conducive to efficient search. The first objective allows encoding mobility constraints into primitives, thereby enabling fast unconstrained search to produce feasible solutions. The second objective enables high-quality (lattice) sampling of the state space, further speeding up exploration during search. We further discuss several novel results enabled by using such motion primitives for kinodynamic planning, including incremental search, efficient bi-directional search and incremental sampling."
]
} |
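The selection scheme in this row's abstract — picking control actions from a dense set under an objective that rewards dataset matching while encouraging sparsity — can be sketched as a greedy forward selection. This is a hypothetical simplification for illustration, not the paper's algorithm; `score` is an assumed higher-is-better dataset-matching function, and the sparsity penalty is a per-action cost.

```python
def select_control_set(dense_set, dataset_paths, score, penalty=0.05):
    """Greedily build a sparse control set: repeatedly add the dense-set
    action whose marginal improvement in the matching score exceeds the
    sparsity penalty by the largest amount; stop when no action helps.
    score(control_set, paths) -> float, higher is better."""
    chosen = []
    current = score(chosen, dataset_paths)
    while True:
        best_gain, best_a = 0.0, None
        for a in dense_set:
            if a in chosen:
                continue
            gain = score(chosen + [a], dataset_paths) - current - penalty
            if gain > best_gain:
                best_gain, best_a = gain, a
        if best_a is None:
            return chosen
        chosen.append(best_a)
        current = score(chosen, dataset_paths)
```

With a toy score that counts how many dataset path curvatures are covered by a nearby control action, redundant actions never clear the penalty and are left out of the learned set.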
1903.02044 | 2971157333 | This paper introduces a method to compute a sparse lattice planner control set that is suited to a particular task by learning from a representative dataset of vehicle paths. To do this, we use a scoring measure similar to the Frechet distance and propose an algorithm for evaluating a given control set according to the scoring measure. Control actions are then selected from a dense control set according to an objective function that rewards improvements in matching the dataset while also encouraging sparsity. This method is evaluated across several experiments involving real and synthetic datasets, and it is shown to generate smaller control sets when compared to the previous state-of-the-art lattice control set computation technique, with these smaller control sets maintaining a high degree of manoeuvrability in the required task. This results in a planning time speedup of up to 4.31x when using the learned control set over the state-of-the-art computed control set. In addition, we show the learned control sets are better able to capture the driving style of the dataset in terms of path curvature. | To optimize a planner, we require a measure of similarity between two paths. This has been discussed in the field of path clustering @cite_4 , where measures such as the pointwise Euclidean distance, Hausdorff distance @cite_25 , the Longest Common Sub-Sequence, and the Fréchet distance @cite_16 are commonly used. | {
"cite_N": [
"@cite_16",
"@cite_4",
"@cite_25"
],
"mid": [
"2562184125",
"2322990480",
"2076943964"
],
"abstract": [
"We propose a method for generating a configuration space path that closely follows a desired task space path despite the presence of obstacles. We formalize closeness via two path metrics based on the discrete Hausdorff and Frechet distances. Armed with these metrics, we can cast our problem as a trajectory optimization problem. We also present two techniques to assist our optimizer in the case of local minima by further constraining the trajectory. Finally, we leverage shape matching analysis, the Procrustes metric, to compare paths with respect to only their shape.",
"Clustering is an efficient way to group data into different classes on the basis of the internal and previously unknown schemes inherent in the data. With the development of location-based positioning devices, more and more moving objects are traced and their trajectories are recorded. Therefore, moving object trajectory clustering undoubtedly becomes the focus of study in moving object data mining. To provide an overview, we survey and summarize the development and trends of moving object clustering and analyze typical moving object clustering algorithms presented in recent years. In this paper, we firstly summarize the strategies and implementation processes of classical moving object clustering algorithms. Secondly, the measures which can determine the similarity/dissimilarity between two trajectories are discussed. Thirdly, the validation criteria are analyzed for evaluating the performance and efficiency of clustering algorithms. Finally, some application scenarios are pointed out for potential application in the future. It is hoped that this research will serve as a stepping stone for those interested in advancing moving object mining.",
"Spatio-temporal and geo-referenced datasets are growing rapidly with the rapid development of technologies such as GPS and satellite systems. At present, many scholars are very interested in the clustering of trajectories. Existing trajectory clustering algorithms group similar trajectories as a whole and cannot distinguish the direction of a trajectory. Our key finding is that clustering trajectories as a whole could miss common sub-trajectories, and that trajectories carry direction information. In many applications, discovering common sub-trajectories is very useful. In this paper, we present a trajectory clustering algorithm, CTHD (clustering of trajectories based on Hausdorff distance). In CTHD, a trajectory is first described by a sequence of flow vectors and partitioned into a set of sub-trajectories. Next, the similarity between trajectories is measured by their respective Hausdorff distances. Finally, the trajectories are clustered by the DBSCAN clustering algorithm. The proposed algorithm differs from other schemes using the Hausdorff distance in that its flow vectors include both position and direction, so it can distinguish trajectories in different directions. The experimental results demonstrate this behavior."
]
} |
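Of the similarity measures listed in the row above, the discrete Fréchet distance admits a compact dynamic program (the classic Eiter–Mannila recurrence). The sketch below is illustrative background only; the paper's own scoring measure is described merely as "similar to" the Fréchet distance and is not reproduced here.

```python
from functools import lru_cache

def discrete_frechet(p, q):
    """Discrete Frechet distance between 2D polylines p and q.
    c(i, j) is the cheapest 'leash length' needed for two walkers to
    reach p[i] and q[j] while only stepping forward along their paths."""
    def d(i, j):
        return ((p[i][0] - q[j][0]) ** 2 + (p[i][1] - q[j][1]) ** 2) ** 0.5

    @lru_cache(maxsize=None)
    def c(i, j):
        if i == 0 and j == 0:
            return d(0, 0)
        if i == 0:
            return max(c(0, j - 1), d(0, j))
        if j == 0:
            return max(c(i - 1, 0), d(i, 0))
        return max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d(i, j))

    return c(len(p) - 1, len(q) - 1)
```

Two parallel straight paths one unit apart have discrete Fréchet distance exactly 1, while a path compared with itself scores 0.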
1903.01698 | 2919793688 | Cross-domain Chinese Word Segmentation (CWS) remains a challenge despite recent progress in neural-based CWS. The limited amount of annotated data in the target domain has been the key obstacle to a satisfactory performance. In this paper, we propose a semi-supervised word-based approach to improving cross-domain CWS given a baseline segmenter. Particularly, our model only deploys word embeddings trained on raw text in the target domain, discarding complex hand-crafted features and domain-specific dictionaries. Innovative subsampling and negative sampling methods are proposed to derive word embeddings optimized for CWS. We conduct experiments on five datasets in special domains, covering domains in novels, medicine, and patent. Results show that our model can obviously improve cross-domain CWS, especially in the segmentation of domain-specific noun entities. The word F-measure increases by over 3.0 on four datasets, outperforming state-of-the-art semi-supervised and unsupervised cross-domain CWS approaches with a large margin. We make our code and data available on Github. | Instead of labeling a sequence character-wise, word-based CWS tries to pick the most probable segmentation of a sequence. zhang2007chinese design a statistical method for word-based CWS, extracting word-level features directly from segmented text. The perceptron algorithm @cite_15 is used for training and beam-search is used for decoding. cai2016neural use Gated Combination Neural Networks and LSTM to represent both character sequences and partially segmented word sequences, combining word scores and link scores for segmentation. Our work is in line with their work in directly using word information for CWS. In contrast, our method is conceptually simpler by directly using word embeddings. In addition, our work aims at domain-adaptation, rather than training from scratch. | {
"cite_N": [
"@cite_15"
],
"mid": [
"2008652694"
],
"abstract": [
"We describe new algorithms for training tagging models, as an alternative to maximum-entropy models or conditional random fields (CRFs). The algorithms rely on Viterbi decoding of training examples, combined with simple additive updates. We describe theory justifying the algorithms through a modification of the proof of convergence of the perceptron algorithm for classification problems. We give experimental results on part-of-speech tagging and base noun phrase chunking, in both cases showing improvements over results for a maximum-entropy tagger."
]
} |
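As background on the subsampling step mentioned in this row's abstract: the standard word2vec heuristic (Mikolov et al., 2013) keeps a token with probability roughly sqrt(t / f(w)), where f(w) is its relative frequency. The paper proposes a CWS-specific variant, which is not reproduced here; this is only the baseline heuristic it builds on, with illustrative counts.

```python
import random

def keep_prob(count, total, t=1e-5):
    """Word2vec-style subsampling: a token with relative frequency
    f = count/total is kept with probability min(1, sqrt(t/f)), so very
    frequent words are aggressively dropped while rare (often
    domain-specific) words are always kept."""
    f = count / total
    return min(1.0, (t / f) ** 0.5)

def subsample(tokens, counts, total, t=1e-5, seed=0):
    """Stochastically thin a token stream before training embeddings."""
    rng = random.Random(seed)
    return [w for w in tokens if rng.random() < keep_prob(counts[w], total, t)]
```

In a toy corpus a high-frequency function word is discarded almost every time, while a rare domain term (e.g. a medical noun) always survives, which is exactly the bias one wants when embeddings must capture domain-specific entities.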
1903.01698 | 2919793688 | Cross-domain Chinese Word Segmentation (CWS) remains a challenge despite recent progress in neural-based CWS. The limited amount of annotated data in the target domain has been the key obstacle to a satisfactory performance. In this paper, we propose a semi-supervised word-based approach to improving cross-domain CWS given a baseline segmenter. Particularly, our model only deploys word embeddings trained on raw text in the target domain, discarding complex hand-crafted features and domain-specific dictionaries. Innovative subsampling and negative sampling methods are proposed to derive word embeddings optimized for CWS. We conduct experiments on five datasets in special domains, covering domains in novels, medicine, and patent. Results show that our model can obviously improve cross-domain CWS, especially in the segmentation of domain-specific noun entities. The word F-measure increases by over 3.0 on four datasets, outperforming state-of-the-art semi-supervised and unsupervised cross-domain CWS approaches with a large margin. We make our code and data available on Github. | However, one challenge for cross-domain CWS is the lack of such annotated data. liu2012unsupervised propose an unsupervised model, in which they use features derived from character clustering, together with a self-training algorithm to jointly model CWS and POS-tagging. This approach is highly time-consuming @cite_3 . Another challenge is the segmentation of domain-specific noun entities. In a task of segmenting Chinese novels, qiu2015word design a double-propagation algorithm with complex feature templates to iteratively extract noun entities and their context, to improve segmentation performance. This approach still relies heavily on feature templates. Similarly, our model does not require any annotated target data. In contrast to their work, our model is efficient and feature-free. | {
"cite_N": [
"@cite_3"
],
"mid": [
"2219922309"
],
"abstract": [
"Word segmentation is a necessary first step for automatic syntactic analysis of Chinese text. Chinese segmentation is highly accurate on news data, but the accuracies drop significantly on other domains, such as science and literature. For scientific domains, a significant portion of out-of-vocabulary words are domain-specific terms, and therefore lexicons can be used to improve segmentation significantly. For the literature domain, however, there is not a fixed set of domain terms. For example, each novel can contain a specific set of person, organization and location names. We investigate a method for automatically mining common noun entities for each novel using information extraction techniques, and use the resulting entities to improve a state-of-the-art segmentation model for the novel. In particular, we design a novel double-propagation algorithm that mines noun entities together with common contextual patterns, and use them as plug-in features to a model trained on the source domain. An advantage of our method is that no retraining for the segmentation model is needed for each novel, and hence it can be applied efficiently given the huge number of novels on the web. Results on five different novels show significantly improved accuracies, in particular for OOV words."
]
} |
1903.01780 | 2920517764 | The prevalence of misinformation on online social media has tangible empirical connections to increasing political polarization and partisan antipathy in the United States. Ranking algorithms for social recommendation often encode broad assumptions about network structure (like homophily) and group cognition (like, social action is largely imitative). Assumptions like these can be naïve and exclusionary in the era of fake news and ideological uniformity towards the political poles. We examine these assumptions with aid from the user-centric framework of trustworthiness in social recommendation. The constituent dimensions of trustworthiness (diversity, transparency, explainability, disruption) highlight new opportunities for discouraging dogmatization and building decision-aware, transparent news recommender systems. | Trust-based recommender systems are concerned with learning the preferences of trustworthy social neighbors, or 'friends' of an individual user, as well as the mistrusted 'foes' @cite_5 . These preferences inform the latent features inferred in CF, such that features for an individual are ranked closer to his or her friends' features rather than to the foes'. In @cite_7 , learning-to-rank models minimize a loss function on a personalized ranking function. Using trust and mistrust relationships proves effective in combating the sparsity inherent in users' preferences data. | {
"cite_N": [
"@cite_5",
"@cite_7"
],
"mid": [
"2730085903",
"2133266261"
],
"abstract": [
"The sparsity of users' preferences can significantly degrade the quality of recommendations in the collaborative filtering strategy. To account for the fact that the selections of social friends and foes may improve the recommendation accuracy, we propose a learning to rank model that exploits users' trust and distrust relationships. Our learning to rank model focusses on the performance at the top of the list, with the recommended items that end-users will actually see. In our model, we try to push the relevant items of users and their friends at the top of the list, while ranking low those of their foes. Furthermore, we propose a weighting strategy to capture the correlations of users' preferences with friends' trust and foes' distrust degrees in two intermediate trust- and distrust-preference user latent spaces, respectively. Our experiments on the Epinions dataset show that the proposed learning to rank model significantly outperforms other state-of-the-art methods in the presence of sparsity in users' preferences and when a part of trust and distrust relationships is not available. Furthermore, we demonstrate the crucial role of our weighting strategy in our model, to balance well the influences of friends and foes on users' preferences.",
"Recommender Systems based on Collaborative Filtering suggest to users items they might like. However, due to data sparsity of the input ratings matrix, the step of finding similar users often fails. We propose to replace this step with the use of a trust metric, an algorithm able to propagate trust over the trust network and to estimate a trust weight that can be used in place of the similarity weight. An empirical evaluation on the Epinions.com dataset shows that Recommender Systems that make use of trust information are the most effective in terms of accuracy while preserving good coverage. This is especially evident for users who provided few ratings."
]
} |
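How trusted neighbors' preferences can shape the latent features described above can be sketched with a SocialMF-style regularizer: during SGD on a matrix factorization, each user's factor vector is additionally pulled toward the mean of trusted friends' vectors. This is a simplified stand-in for illustration, not the cited learning-to-rank trust/distrust model; all names and hyperparameters here are assumptions.

```python
def trust_mf_step(U, V, ratings, trust, lr=0.01, lam=0.1, beta=0.1):
    """One SGD epoch of trust-regularized matrix factorization.
    U: user -> latent vector, V: item -> latent vector,
    ratings: (user, item) -> rating, trust: user -> list of friends.
    The last loop nudges each user's factors toward the mean of the
    trusted friends' factors (the trust regularizer)."""
    k = len(next(iter(U.values())))
    for (u, i), r in ratings.items():
        pred = sum(U[u][f] * V[i][f] for f in range(k))
        err = r - pred
        for f in range(k):
            uf, vf = U[u][f], V[i][f]
            U[u][f] += lr * (err * vf - lam * uf)
            V[i][f] += lr * (err * uf - lam * vf)
    for u, friends in trust.items():
        if not friends:
            continue
        mean = [sum(U[v][f] for v in friends) / len(friends) for f in range(k)]
        for f in range(k):
            U[u][f] += lr * beta * (mean[f] - U[u][f])
```

Repeated steps shrink the rating prediction error while keeping a user's latent vector anchored near those of trusted friends, which is what combats sparsity when the user has few ratings of their own.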
1903.01780 | 2920517764 | The prevalence of misinformation on online social media has tangible empirical connections to increasing political polarization and partisan antipathy in the United States. Ranking algorithms for social recommendation often encode broad assumptions about network structure (like homophily) and group cognition (like, social action is largely imitative). Assumptions like these can be naïve and exclusionary in the era of fake news and ideological uniformity towards the political poles. We examine these assumptions with aid from the user-centric framework of trustworthiness in social recommendation. The constituent dimensions of trustworthiness (diversity, transparency, explainability, disruption) highlight new opportunities for discouraging dogmatization and building decision-aware, transparent news recommender systems. | User-perceived quality of recommender systems' output is a broad way to evaluate anything from aggregate emotional impact to perceived relevance and variety. User-perceived variety or diversity in the recommender output is a related, important notion. Authors in @cite_14 explore an organization interface (ORG) as opposed to a list interface for a top-N style recommender. This interface clusters and annotates subgroups of the recommender output. These annotations go beyond category labels and express tradeoffs in product quality and price. ORG ranked better than lists in perceived ease of use and diversity. Studies like @cite_13 , however, note that statistical accuracy of such recommendations might be lower, even as they are rated better in quality by field trial participants. | {
"cite_N": [
"@cite_14",
"@cite_13"
],
"mid": [
"2017155001",
"2155912844"
],
"abstract": [
"Research increasingly indicates that accuracy cannot be the sole criterion in creating a satisfying recommender from the users' point of view. Other criteria, such as diversity, are emerging as important characteristics for consideration as well. In this paper, we try to address the problem of augmenting users' perception of recommendation diversity by applying an organization interface design method to the commonly used list interface. An in-depth user study was conducted to compare an organization interface with a standard list interface. Our results show that the organization interface indeed effectively increased users' perceived diversity of recommendations, especially perceived categorical diversity. Furthermore, 65% of users preferred the organization interface, versus 20% for the list interface. 70% of users thought the organization interface is better at helping them perceive recommendation diversity, versus only 15% for the list interface.",
"In this work we present topic diversification, a novel method designed to balance and diversify personalized recommendation lists in order to reflect the user's complete spectrum of interests. Though being detrimental to average accuracy, we show that our method improves user satisfaction with recommendation lists, in particular for lists generated using the common item-based collaborative filtering algorithm. Our work builds upon prior research on recommender systems, looking at properties of recommendation lists as entities in their own right rather than specifically focusing on the accuracy of individual recommendations. We introduce the intra-list similarity metric to assess the topical diversity of recommendation lists and the topic diversification approach for decreasing the intra-list similarity. We evaluate our method using book recommendation data, including offline analysis on 361,349 ratings and an online study involving more than 2,100 subjects."
]
} |
1903.01977 | 2918852205 | Microtask programming is a form of crowdsourcing for programming in which implementation work is decomposed into short, self-contained microtasks. Each microtask offers a specific goal (e.g., write a unit test) as well as all of the required context and environment support necessary to accomplish this goal. Key to microtasking is the choice of workflow, which delineates the microtasks developers may complete and how contributions from each are aggregated to generate the final software product. Existing approaches either rely on a single developer to manually generate all microtasks, limiting their potential scalability, or impose coordination requirements which limit their effectiveness. Inspired by behavior-driven development, we describe a novel workflow for decomposing programming into microtasks in which each microtask involves identifying, testing, implementing, and debugging an individual behavior within a single function. We apply this approach to the implementation of microservices, demonstrating the first approach for implementing a microservice through microtasks. To evaluate our approach, we conducted a user study in which a small crowd worked to implement a simple microservice and test suite. We found that the crowd was able to use a behavior-driven microtask workflow to successfully complete 350 microtasks and implement 13 functions, quickly onboard and submit their first microtask in less than 24 minutes, contribute new behaviors in less than 5 minutes, and together implement a functioning microservice with only four defects. We discuss these findings and their implications for incorporating microtask work into open source projects. | Building on work in crowdsourcing in other domains, a number of approaches have been proposed for applying crowdsourcing to software engineering @cite_32 . 
One category of approaches is microtask crowdsourcing, in which a large task is decomposed into several smaller, self-contained microtasks and then aggregated to create a finished product @cite_36 . This approach was first popularized by Amazon's Mechanical Turk (https://www.mturk.com) and then used broadly in a number of systems @cite_6 . In the following sections, we survey how microtasking has been applied to software engineering work, focusing on issues arising in decomposition, parallelism, fast onboarding, and achieving quality. | {
"cite_N": [
"@cite_36",
"@cite_32",
"@cite_6"
],
"mid": [
"2020740057",
"2522186013",
"2146286563"
],
"abstract": [
"",
"The term ‘crowdsourcing’ was initially introduced in 2006 to describe an emerging distributed problem-solving model by online workers. Since then it has been widely studied and practiced to support software engineering. In this paper we provide a comprehensive survey of the use of crowdsourcing in software engineering, seeking to cover all literature on this topic. We first review the definitions of crowdsourcing and derive our definition of Crowdsourcing Software Engineering together with its taxonomy. Then we summarise industrial crowdsourcing practice in software engineering and corresponding case studies. We further analyse the software engineering domains, tasks and applications for crowdsourcing and the platforms and stakeholders involved in realising Crowdsourced Software Engineering solutions. We conclude by exposing trends, open issues and opportunities for future research on Crowdsourced Software Engineering.",
"Paid crowd work offers remarkable opportunities for improving productivity, social mobility, and the global economy by engaging a geographically distributed workforce to complete complex tasks on demand and at scale. But it is also possible that crowd work will fail to achieve its potential, focusing on assembly-line piecework. Can we foresee a future crowd workplace in which we would want our children to participate? This paper frames the major challenges that stand in the way of this goal. Drawing on theory from organizational behavior and distributed computing, as well as direct feedback from workers, we outline a framework that will enable crowd work that is complex, collaborative, and sustainable. The framework lays out research challenges in twelve major areas: workflow, task assignment, hierarchy, real-time response, synchronous collaboration, quality control, crowds guiding AIs, AIs guiding crowds, platforms, job design, reputation, and motivation."
]
} |
1903.01977 | 2918852205 | Microtask programming is a form of crowdsourcing for programming in which implementation work is decomposed into short, self-contained microtasks. Each microtask offers a specific goal (e.g., write a unit test) as well as all of the required context and environment support necessary to accomplish this goal. Key to microtasking is the choice of workflow, which delineates the microtasks developers may complete and how contributions from each are aggregated to generate the final software product. Existing approaches either rely on a single developer to manually generate all microtasks, limiting their potential scalability, or impose coordination requirements which limit their effectiveness. Inspired by behavior-driven development, we describe a novel workflow for decomposing programming into microtasks in which each microtask involves identifying, testing, implementing, and debugging an individual behavior within a single function. We apply this approach to the implementation of microservices, demonstrating the first approach for implementing a microservice through microtasks. To evaluate our approach, we conducted a user study in which a small crowd worked to implement a simple microservice and test suite. We found that the crowd was able to use a behavior-driven microtask workflow to successfully complete 350 microtasks and implement 13 functions, quickly onboard and submit their first microtask in less than 24 minutes, contribute new behaviors in less than 5 minutes, and together implement a functioning microservice with only four defects. We discuss these findings and their implications for incorporating microtask work into open source projects. | Decomposing work is a key challenge in microtasking in all crowdsourcing domains, as the choice of decomposition creates a workflow and the associated individual steps, the context and information required, and the types of contributions which can be made @cite_23 @cite_20 @cite_0 @cite_14 . 
Depending on the choice of microtask boundaries, contributions may be easier or harder, may vary in quality, and may impose differing levels of overhead. | {
"cite_N": [
"@cite_0",
"@cite_14",
"@cite_20",
"@cite_23"
],
"mid": [
"2127008633",
"",
"2058179030",
"2772461498"
],
"abstract": [
"This paper introduces architectural and interaction patterns for integrating crowdsourced human contributions directly into user interfaces. We focus on writing and editing, complex endeavors that span many levels of conceptual and pragmatic activity. Authoring tools offer help with pragmatics, but for higher-level help, writers commonly turn to other people. We thus present Soylent, a word processing interface that enables writers to call on Mechanical Turk workers to shorten, proofread, and otherwise edit parts of their documents on demand. To improve worker quality, we introduce the Find-Fix-Verify crowd programming pattern, which splits tasks into a series of generation and review stages. Evaluation studies demonstrate the feasibility of crowdsourced editing and investigate questions of reliability, cost, wait time, and work time for edits.",
"",
"Micro-task markets such as Amazon's Mechanical Turk represent a new paradigm for accomplishing work, in which employers can tap into a large population of workers around the globe to accomplish tasks in a fraction of the time and money of more traditional methods. However, such markets typically support only simple, independent tasks, such as labeling an image or judging the relevance of a search result. Here we present a general purpose framework for micro-task markets that provides a scaffolding for more complex human computation tasks which require coordination among many individuals, such as writing an article.",
"The dominant crowdsourcing infrastructure today is the workflow, which decomposes goals into small independent tasks. However, complex goals such as design and engineering have remained stubbornly difficult to achieve with crowdsourcing workflows. Is this due to a lack of imagination, or a more fundamental limit? This paper explores this question through in-depth case studies of 22 workers across six workflow-based crowd teams, each pursuing a complex and interdependent web development goal. We used an inductive mixed method approach to analyze behavior trace data, chat logs, survey responses and work artifacts to understand how workers enacted and adapted the crowdsourcing workflows. Our results indicate that workflows served as useful coordination artifacts, but in many cases critically inhibited crowd workers from pursuing real-time adaptations to their work plans. However, the CSCW and organizational behavior literature argues that all sufficiently complex goals require open-ended adaptation. If complex work requires adaptation but traditional static crowdsourcing workflows can't support it, our results suggest that complex work may remain a fundamental limitation of workflow-based crowdsourcing infrastructures."
]
} |
1903.01977 | 2918852205 | Microtask programming is a form of crowdsourcing for programming in which implementation work is decomposed into short, self-contained microtasks. Each microtask offers a specific goal (e.g., write a unit test) as well as all of the required context and environment support necessary to accomplish this goal. Key to microtasking is the choice of workflow, which delineates the microtasks developers may complete and how contributions from each are aggregated to generate the final software product. Existing approaches either rely on a single developer to manually generate all microtasks, limiting their potential scalability, or impose coordination requirements which limit their effectiveness. Inspired by behavior-driven development, we describe a novel workflow for decomposing programming into microtasks in which each microtask involves identifying, testing, implementing, and debugging an individual behavior within a single function. We apply this approach to the implementation of microservices, demonstrating the first approach for implementing a microservice through microtasks. To evaluate our approach, we conducted a user study in which a small crowd worked to implement a simple microservice and test suite. We found that the crowd was able to use a behavior-driven microtask workflow to successfully complete 350 microtasks and implement 13 functions, quickly onboard and submit their first microtask in less than 24 minutes, contribute new behaviors in less than 5 minutes, and together implement a functioning microservice with only four defects. We discuss these findings and their implications for incorporating microtask work into open source projects. | In traditional open source development, developers onboarding onto a new software project must first complete an extensive setup process, installing necessary tools, downloading code from a server, identifying and downloading dependencies, and configuring their build environment @cite_21 @cite_7 @cite_29 . 
Each of these steps serves as a barrier that dissuades casual contributors from contributing. Researchers have explored designing environments to alleviate these barriers, often through dedicated, preconfigured, online environments. In the Collabode IDE, multiple developers can use an online editor to synchronously edit code at the same time, enabling new forms of collaborative programming @cite_37 @cite_15 . Apparition offers an online environment for building UI mockups, offering an integrated environment for authoring, viewing, and collaborating on the visual look and feel and behavior of UI elements @cite_13 . CrowdCode offers an online preconfigured environment for implementing libraries, enabling developers to onboard quickly onto programming tasks @cite_31 . | {
"cite_N": [
"@cite_37",
"@cite_31",
"@cite_7",
"@cite_29",
"@cite_21",
"@cite_15",
"@cite_13"
],
"mid": [
"2143602702",
"",
"2098879844",
"1972052832",
"2082539579",
"2184500637",
"2151785234"
],
"abstract": [
"This paper describes Collabode, a web-based Java integrated development environment designed to support close, synchronous collaboration between programmers. We examine the problem of collaborative coding in the face of program compilation errors introduced by other users which make collaboration more difficult, and describe an algorithm for error-mediated integration of program code. Concurrent editors see the text of changes made by collaborators, but the errors reported in their view are based only on their own changes. Editors may run the program at any time, using only error-free edits supplied so far, and ignoring incomplete or otherwise error-generating changes. We evaluate this algorithm and interface on recorded data from previous pilot experiments with Collabode, and via a user study with student and professional programmers. We conclude that it offers appreciable benefits over naive continuous synchronization without regard to errors and over manual version control.",
"",
"This paper develops an inductive theory of the open source software (OSS) innovation process by focussing on the creation of Freenet, a project aimed at developing a decentralized and anonymous peer-to-peer electronic file sharing network. We are particularly interested in the strategies and processes by which new people join the existing community of software developers, and how they initially contribute code. Analyzing data from multiple sources on the Freenet software development process, we generate the constructs of \"joining script\", \"specialization\", \"contribution barriers\", and \"feature gifts\", and propose relationships among these. Implications for theory and research are discussed.",
"Past research established that individuals joining an Open Source community typically follow a socialization process called \"the onion model\": newcomers join a project by first contributing at the periphery through mailing list discussions and bug trackers and as they develop skill and reputation within the community they advance to central roles of contributing code and making design decisions. However, the modern Open Source landscape has fewer projects that operate independently and many projects under the umbrella of software ecosystems that bring together projects with common underlying components, technology, and social norms. Participants in such an ecosystems may be able to utilize a significant amount of transferrable knowledge when moving between projects in the ecosystem and, thereby, skip steps in the onion model. In this paper, we examine whether the onion model of joining and progressing in a standalone Open Source project still holds true in large project ecosystems and how the model might change in such settings.",
"Context: Numerous open source software projects are based on volunteers' collaboration and require a continuous influx of newcomers for their continuity. Newcomers face barriers that can lead them to give up. These barriers hinder both developers willing to make a single contribution and those willing to become a project member. Objective: This study aims to identify and classify the barriers that newcomers face when contributing to open source software projects. Method: We conducted a systematic literature review of papers reporting empirical evidence regarding the barriers that newcomers face when contributing to open source software (OSS) projects. We retrieved 291 studies by querying 4 digital libraries. Twenty studies were identified as primary. We performed a backward snowballing approach, and searched for other papers published by the authors of the selected papers to identify potential studies. Then, we used a coding approach inspired by open coding and axial coding procedures from Grounded Theory to categorize the barriers reported by the selected studies. Results: We identified 20 studies providing empirical evidence of barriers faced by newcomers to OSS projects while making a contribution. From the analysis, we identified 15 different barriers, which we grouped into five categories: social interaction, newcomers' previous knowledge, finding a way to start, documentation, and technical hurdles. We also classified the problems with regard to their origin: newcomers, community, or product. Conclusion: The results are useful to researchers and OSS practitioners willing to investigate or to implement tools to support newcomers. We mapped technical and non-technical barriers that hinder newcomers' first contributions. The most evidenced barriers are related to socialization, appearing in 75% (15 out of 20) of the studies analyzed, with a high focus on interactions in mailing lists (receiving answers and socialization with other members). 
There is a lack of in-depth studies on technical issues, such as code issues. We also noticed that the majority of the studies relied on historical data gathered from software repositories and that there was a lack of experiments and qualitative studies in this area.",
"This thesis presents Collabode, a web-based integrated development environment for Java. With real-time collaborative editing, multiple programmers can use Collabode to edit the same source code at the same time. Collabode introduces error-mediated integration , where multiple editors see the text of one another's changes while being isolated from errors and in-progress work, and error-free changes are integrated automatically. Three models of collaborative programming are presented and evaluated using Collabode. Classroom programming brings zero-setup web-based programming to computer science students working in a classroom or lab. Test-driven pair programming combines two existing software development strategies to create a model with clear roles and explicit tool support. And micro-outsourcing enables one programmer to easily request and integrate very small contributions from many distributed assistants, demonstrating how a system for highly-collaborative programming enables a development model infeasible with current tools. To show that highly-collaborative programming, using real-time collaborative editing of source code, is practical, useful, and enables new models of software development, this thesis presents a series of user studies. A study with pairs of both student and professional programmers shows that error-mediated integration allows them to work productively in parallel. In a semester-long deployment of Collabode, students in an MIT software engineering course used the system for classroom programming. In a lab study of a Collabode prototype, professional programmers used test-driven pair programming. Finally, a study involving both in-lab participants and contractors hired online demonstrated how micro-outsourcing allowed participants to approach programming in a new way, one enabled by collaborative editing, automatic error-mediated integration, and a web-based environment requiring no local setup. 
",
"Prototyping allows designers to quickly iterate and gather feedback, but the time it takes to create even a Wizard-of-Oz prototype reduces the utility of the process. In this paper, we introduce crowdsourcing techniques and tools for prototyping interactive systems in the time it takes to describe the idea. Our Apparition system uses paid microtask crowds to make even hard-to-automate functions work immediately, allowing more fluid prototyping of interfaces that contain interactive elements and complex behaviors. As users sketch their interface and describe it aloud in natural language, crowd workers and sketch recognition algorithms translate the input into user interface elements, add animations, and provide Wizard-of-Oz functionality. We discuss how design teams can use our approach to reflect on prototypes or begin user studies within seconds, and how, over time, Apparition prototypes can become fully-implemented versions of the systems they simulate. Powering Apparition is the first self-coordinated, real-time crowdsourcing infrastructure. We anchor this infrastructure on a new, lightweight write-locking mechanism that workers can use to signal their intentions to each other."
]
} |
1903.01977 | 2918852205 | Microtask programming is a form of crowdsourcing for programming in which implementation work is decomposed into short, self-contained microtasks. Each microtask offers a specific goal (e.g., write a unit test) as well as all of the required context and environment support necessary to accomplish this goal. Key to microtasking is the choice of workflow, which delineates the microtasks developers may complete and how contributions from each are aggregated to generate the final software product. Existing approaches either rely on a single developer to manually generate all microtasks, limiting their potential scalability, or impose coordination requirements which limit their effectiveness. Inspired by behavior-driven development, we describe a novel workflow for decomposing programming into microtasks in which each microtask involves identifying, testing, implementing, and debugging an individual behavior within a single function. We apply this approach to the implementation of microservices, demonstrating the first approach for implementing a microservice through microtasks. To evaluate our approach, we conducted a user study in which a small crowd worked to implement a simple microservice and test suite. We found that the crowd was able to use a behavior-driven microtask workflow to successfully complete 350 microtasks and implement 13 functions, quickly onboard and submit their first microtask in less than 24 minutes, contribute new behaviors in less than 5 minutes, and together implement a functioning microservice with only four defects. We discuss these findings and their implications for incorporating microtask work into open source projects. | Other work has explored preconfigured environments which enable teachers to manage a crowd of programming students. Codeopticon @cite_34 enables instructors to continuously monitor multiple students and help them code. 
OverCode @cite_28 and Foobaz @cite_19 help mentors to cluster student code submissions, enabling teachers to give feedback on clusters of submissions rather than individual submissions. CodePilot @cite_25 reduces the complexity of programming environments for novice programmers by integrating a preconfigured environment for real-time collaborative programming, testing, bug reporting, and version control into a single, simplified system. | {
"cite_N": [
"@cite_28",
"@cite_19",
"@cite_34",
"@cite_25"
],
"mid": [
"2076771354",
"2179578783",
"2241837413",
"2611714150"
],
"abstract": [
"In MOOCs, a single programming exercise may produce thousands of solutions from learners. Understanding solution variation is important for providing appropriate feedback to students at scale. The wide variation among these solutions can be a source of pedagogically valuable examples and can be used to refine the autograder for the exercise by exposing corner cases. We present OverCode, a system for visualizing and exploring thousands of programming solutions. OverCode uses both static and dynamic analysis to cluster similar solutions, and lets teachers further filter and cluster solutions based on different criteria. We evaluated OverCode against a nonclustering baseline in a within-subjects study with 24 teaching assistants and found that the OverCode interface allows teachers to more quickly develop a high-level view of students' understanding and misconceptions, and to provide feedback that is relevant to more students' solutions.",
"Current traditional feedback methods, such as hand-grading student code for substance and style, are labor intensive and do not scale. We created a user interface that addresses feedback at scale for a particular and important aspect of code quality: variable names. We built this user interface on top of an existing back-end that distinguishes variables by their behavior in the program. Therefore our interface not only allows teachers to comment on poor variable names, they can comment on names that mislead the reader about the variable's role in the program. We ran two user studies in which 10 teachers and 6 students created and received feedback, respectively. The interface helped teachers give personalized variable name feedback on thousands of student solutions from an edX introductory programming MOOC. In the second study, students composed solutions to the same programming assignments and immediately received personalized quizzes composed by teachers in the previous user study.",
"One-on-one tutoring from a human expert is an effective way for novices to overcome learning barriers in complex domains such as computer programming. But there are usually far fewer experts than learners. To enable a single expert to help more learners at once, we built Codeopticon, an interface that enables a programming tutor to monitor and chat with dozens of learners in real time. Each learner codes in a workspace that consists of an editor, compiler, and visual debugger. The tutor sees a real-time view of each learner's actions on a dashboard, with each learner's workspace summarized in a tile. At a glance, the tutor can see how learners are editing and debugging their code, and what errors they are encountering. The dashboard automatically reshuffles tiles so that the most active learners are always in the tutor's main field of view. When the tutor sees that a particular learner needs help, they can open an embedded chat window to start a one-on-one conversation. A user study showed that 8 first-time Codeopticon users successfully tutored anonymous learners from 54 countries in a naturalistic online setting. On average, in a 30-minute session, each tutor monitored 226 learners, started 12 conversations, exchanged 47 chats, and helped 2.4 learners.",
"Novice programmers often have trouble installing, configuring, and managing disparate tools (e.g., version control systems, testing infrastructure, bug trackers) that are required to become productive in a modern collaborative software development environment. To lower the barriers to entry into software development, we created a prototype IDE for novices called CodePilot, which is, to our knowledge, the first attempt to integrate coding, testing, bug reporting, and version control management into a real-time collaborative system. CodePilot enables multiple users to connect to a web-based programming session and work together on several major phases of software development. An eight-subject exploratory user study found that first-time users of CodePilot spontaneously used it to assume roles such as developer tester and developer assistant when creating a web application together in pairs. Users felt that CodePilot could aid in scaffolding for novices, situational awareness, and lowering barriers to impromptu collaboration."
]
} |
1903.01804 | 2918466037 | Indoor localization is one of the crucial enablers for deployment of service robots. Although several successful techniques for indoor localization have been proposed in the past, the majority of them relies on maps generated based on data gathered with the same sensor modality that is used for localization. Typically, tedious labor by experts is needed to acquire this data, thus limiting the readiness of the system as well as its ease of installation for inexperienced operators. In this paper, we propose a memory and computationally efficient monocular camera-based localization system that allows a robot to estimate its pose given an architectural floor plan. Our method employs a convolutional neural network to predict room layout edges from a single camera image and estimates the robot pose using a particle filter that matches the extracted edges to the given floor plan. We evaluate our localization system with multiple real-world experiments and demonstrate that it has the robustness and accuracy required for reliable indoor navigation. | Most of the CNN-based approaches for estimating room layout edges employ an encoder-decoder topology with a standard classification network for the encoder and utilize a series of deconvolutional layers for upsampling the feature maps @cite_10 @cite_13 @cite_6 @cite_21 . Ren et al. @cite_6 propose an architecture that employs the VGG-16 network for the encoder, followed by fully-connected layers and deconvolutional layers that upsample the encoder features to one quarter of the input resolution. The use of fully-connected layers enables their network to have a large receptive field, but at the cost of losing the feature localization ability. Lin et al. @cite_10 introduce a similar approach with the stronger ResNet-101 backbone and model the network in a fully-convolutional manner. 
Most recently, Zhang et al. @cite_13 propose an architecture based on the VGG-16 backbone for simultaneously estimating the layout edges as well as predicting the semantic segmentation of the walls, floors and ceiling. In contrast to these networks, we employ a more parameter-efficient encoder with dilated convolutions and incorporate the novel eASPP for capturing long-range context, complemented with an iterative training strategy that enables our network to predict thin layout edges without discontinuities. | {
"cite_N": [
"@cite_13",
"@cite_21",
"@cite_10",
"@cite_6"
],
"mid": [
"2906952329",
"2899224788",
"2902864727",
"2964030239"
],
"abstract": [
"Visual cognition of the indoor environment can benefit from the spatial layout estimation, which is to represent an indoor scene with a 2D box on a monocular image. In this paper, we propose to fully exploit the edge and semantic information of a room image for layout estimation. More specifically, we present an encoder-decoder network with shared encoder and two separate decoders, which are composed of multiple deconvolution (transposed convolution) layers, to jointly learn the edge maps and semantic labels of a room image. We combine these two network predictions in a scoring function to evaluate the quality of the layouts, which are generated by ray sampling and from a predefined layout pool. Guided by the scoring function, we apply a novel refinement strategy to further optimize the layout hypotheses. Experimental results show that the proposed network can yield accurate estimates of edge maps and semantic labels. By fully utilizing the two different types of labels, the proposed method achieves state-of-the-art layout estimation performance on benchmark datasets.",
"The main contribution of this paper is an extended Kalman filter (EKF)-based algorithm for estimating the 6 DOF pose of a camera using monocular images of an indoor environment. In contrast to popular visual simultaneous localisation and mapping algorithms, the technique proposed relies on a pre-built map represented as an unsigned distance function of the ground plane edges. Images from the camera are processed using a Convolutional Neural Network (CNN) to extract a ground plane edge image. Pixels that belong to these edges are used in the observation equation of the EKF to estimate the camera location. Use of the CNN makes it possible to extract ground plane edges under significant changes to scene illumination. The EKF framework lends itself to use of a suitable motion model, fusing information from any other sensors such as wheel encoders or inertial measurement units, if available, and rejecting spurious observations. A series of experiments are presented to demonstrate the effectiveness of the proposed technique.",
"With the popularity of the hand devices and intelligent agents, many aimed to explore machine's potential in interacting with reality. Scene understanding, among the many facets of reality interaction, has gained much attention for its relevance in applications such as augmented reality (AR). Scene understanding can be partitioned into several sub tasks (i.e., layout estimation, scene classification, saliency prediction, etc). In this paper, we propose a deep learning-based approach for estimating the layout of a given indoor image in real-time. Our method consists of a deep fully convolutional network, a novel layout-degeneration augmentation method, and a new training pipeline which integrate an adaptive edge penalty and smoothness terms into the training process. Unlike previous deep learning-based methods that depend on post-processing refinement (e.g., proposal ranking and optimization), our method motivates the generalization ability of the network and the smoothness of estimated layout edges without deploying postprocessing techniques. Moreover, the proposed approach is time-efficient since it only takes the model one forward pass to render accurate layouts. We evaluate our method on LSUN Room Layout and Hedau dataset and obtain estimation results comparable with the state-of-the-art methods.",
"The task of estimating the spatial layout of cluttered indoor scenes from a single RGB image is addressed in this work. Existing solutions to this problem largely rely on hand-crafted features and vanishing lines, and they often fail in highly cluttered indoor scenes. The proposed coarse-to-fine indoor layout estimation (CFILE) method consists of two stages: (1) coarse layout estimation; and (2) fine layout localization. In the first stage, we adopt a fully convolutional neural network (FCN) to obtain a coarse-scale room layout estimate that is close to the ground truth globally. The proposed FCN combines the layout contour property and the surface property so as to provide a robust estimation in the presence of cluttered objects. In the second stage, we formulate an optimization framework that enforces several constraints such as layout contour straightness, surface smoothness and geometric constraints for layout detail refinement. Our proposed system offers the state-of-the-art performance on two commonly used benchmark datasets."
]
} |
1903.01865 | 2562216120 | Argumentation theory is a powerful paradigm that formalizes a type of commonsense reasoning that aims to simulate the human ability to resolve a specific problem in an intelligent manner. A classical argumentation process takes into account only the properties related to the intrinsic logical soundness of an argument in order to determine its acceptability status. However, these properties are not always the only ones that matter to establish the argument's acceptability: there exist other qualities, such as strength, weight, social votes, trust degree, relevance level, and certainty degree, among others. In this work, we redefine the argumentative process to improve the analysis of arguments by considering their special features in order to obtain more refined results. Towards this end, we propose adding meta-level information to the arguments in the form of labels representing quantifiable data ranking over a range of fuzzy valuations. These labels are propagated through an argumentative graph according to the relations of support, conflict, and aggregation between arguments. Through this process we obtain final labels that are useful in determining argument acceptability. We redefine the argumentative process, improving the argumentation analysis by considering special features of the arguments. We add meta-level information to arguments as labels representing quantifiable data ranking over a fuzzy valuations range. Labels are propagated through an argumentative graph according to the relations of support, conflict, and aggregation. Label information is used to establish the arguments' acceptability according to different acceptance levels. We define an acceptability threshold to determine whether an argument satisfies certain conditions to be accepted.
| In @cite_16 @cite_59 @cite_43 @cite_23 @cite_22 the authors proposed the use of special valuations associated with the arguments to specify their strengths, and studied how these valuations affect the argumentative process. Next, we analyze each of them, distinguished by their representation capability. | {
"cite_N": [
"@cite_22",
"@cite_43",
"@cite_23",
"@cite_59",
"@cite_16"
],
"mid": [
"36665044",
"2950390265",
"1979170182",
"2109955529",
"1583628292"
],
"abstract": [
"In this paper we take a step towards using Argumentation in Social Networks and introduce Social Abstract Argumentation Frameworks, an extension of Dung's Abstract Argumentation Frameworks that incorporates social voting. We propose a class of semantics for these new Social Abstract Argumentation Frameworks and prove some important non-trivial properties which are crucial for their applicability in Social Networks.",
"Argumentation is a promising model for reasoning with uncertain knowledge. The key concept of acceptability enables to differentiate arguments and counterarguments: The certainty of a proposition can then be evaluated through the most acceptable arguments for that proposition. In this paper, we investigate different complementary points of view: - an acceptability based on the existence of direct counterarguments, - an acceptability based on the existence of defenders. Pursuing previous work on preference-based argumentation principles, we enforce both points of view by taking into account preference orderings for comparing arguments. Our approach is illustrated in the context of reasoning with stratified knowledge bases.",
"In preference-based argumentation theory, an argument may be preferred to another one when, for example, it is more specific, its beliefs have a higher probability or certainty, or it promotes a higher value. In this paper we generalize Bench-Capon's value-based argumentation theory such that arguments can promote multiple values, and preferences among values or arguments can be specified in various ways. We assume in addition that there is default knowledge about the preferences over the arguments, and we use an algorithm to derive the most likely preference order. In particular, we show how to use non-monotonic preference reasoning to compute preferences among arguments, and subsequently the acceptable arguments, from preferences among values. We show also how the preference ordering can be used to optimize the algorithm to construct the grounded extension by proceeding from most to least preferred arguments.",
"Argumentation is based on the exchange and valuation of interacting arguments, followed by the selection of the most acceptable of them (for example, in order to take a decision, to make a choice). Starting from the framework proposed by Dung in 1995, our purpose is to introduce \"graduality\" in the selection of the best arguments, i.e. to be able to partition the set of the arguments in more than the two usual subsets of \"selected\" and \"non-selected\" arguments in order to represent different levels of selection. Our basic idea is that an argument is all the more acceptable if it can be preferred to its attackers. First, we discuss general principles underlying a \"gradual\" valuation of arguments based on their interactions. Following these principles, we define several valuation models for an abstract argumentation system. Then, we introduce \"graduality\" in the concept of acceptability of arguments. We propose new acceptability classes and a refinement of existing classes taking advantage of an available \"gradual\" valuation.",
"From an inconsistent database non-trivial arguments may be constructed both for a proposition, and for the contrary of that proposition. Therefore, inconsistency in a logical database causes uncertainty about which conclusions to accept. This kind of uncertainty is called logical uncertainty. We define a concept of \"acceptability\", which induces a means for differentiating arguments. The more acceptable an argument, the more confident we are in it. A specific interest is to use the acceptability classes to assign linguistic qualifiers to propositions, such that the qualifier assigned to a propositions reflects its logical uncertainty. A more general interest is to understand how classes of acceptability can be defined for arguments constructed from an inconsistent database, and how this notion of acceptability can be devised to reflect different criteria. Whilst concentrating on the aspects of assigning linguistic qualifiers to propositions, we also indicate the more general significance of the notion of acceptability."
]
} |
1903.01865 | 2562216120 | Argumentation theory is a powerful paradigm that formalizes a type of commonsense reasoning that aims to simulate the human ability to resolve a specific problem in an intelligent manner. A classical argumentation process takes into account only the properties related to the intrinsic logical soundness of an argument in order to determine its acceptability status. However, these properties are not always the only ones that matter to establish the argument's acceptability: there exist other qualities, such as strength, weight, social votes, trust degree, relevance level, and certainty degree, among others. In this work, we redefine the argumentative process to improve the analysis of arguments by considering their special features in order to obtain more refined results. Towards this end, we propose adding meta-level information to the arguments in the form of labels representing quantifiable data ranking over a range of fuzzy valuations. These labels are propagated through an argumentative graph according to the relations of support, conflict, and aggregation between arguments. Through this process we obtain final labels that are useful in determining argument acceptability. We redefine the argumentative process, improving the argumentation analysis by considering special features of the arguments. We add meta-level information to arguments as labels representing quantifiable data ranking over a fuzzy valuations range. Labels are propagated through an argumentative graph according to the relations of support, conflict, and aggregation. Label information is used to establish the arguments' acceptability according to different acceptance levels. We define an acceptability threshold to determine whether an argument satisfies certain conditions to be accepted.
| In a similar fashion, probabilistic argumentation frameworks can be divided into those that arise from extending abstract models @cite_17 @cite_57 @cite_31 and others based on incorporating probabilities in structured frameworks---the earliest such works appeared almost two decades ago @cite_2 @cite_18 , followed by a period of inactivity and a recent resurgence of interest in the topic @cite_6 @cite_19 @cite_76 . Clearly, our work is closest in spirit to the latter, since labels can be seen as a generalization of probability values associated with items in the knowledge base---the generalization is not, however, a complete subsumption since the algebra in this case would simply model a probabilistic space; the modeling of the corresponding probability distribution needs to reside elsewhere, as done in the works mentioned above. | {
"cite_N": [
"@cite_18",
"@cite_6",
"@cite_57",
"@cite_19",
"@cite_2",
"@cite_31",
"@cite_76",
"@cite_17"
],
"mid": [
"2169160043",
"2078438690",
"2400192453",
"1168380817",
"2105953471",
"2207251955",
"2175917793",
"2405720308"
],
"abstract": [
"Different formalisms for solving problems of inference under uncertainty have been developed so far. The most popular numerical approach is the theory of Bayesian inference [Lauritzen and Spiegelhalter, 1988]. More general approaches are the Dempster-Shafer theory of evidence [Shafer, 1976], and possibility theory [Dubois and Prade, 1990], which is closely related to fuzzy systems. For these systems computer implementations are available. In competition with these numerical methods are different symbolic approaches. Many of them are based on different types of non-monotonic logic.",
"Argumentation can be modelled at an abstract level using a directed graph where each node denotes an argument and each arc denotes an attack by one argument on another. Since arguments are often uncertain, it can be useful to quantify the uncertainty associated with each argument. Recently, there have been proposals to extend abstract argumentation to take this uncertainty into account. This assigns a probability value for each argument that represents the degree to which the argument is believed to hold, and this is then used to generate a probability distribution over the full subgraphs of the argument graph, which in turn can be used to determine the probability that a set of arguments is admissible or an extension. In order to more fully understand uncertainty in argumentation, in this paper, we extend this idea by considering logic-based argumentation with uncertain arguments. This is based on a probability distribution over models of the language, which can then be used to give a probability distribution over arguments that are constructed using classical logic. We show how this formalization of uncertainty of logical arguments relates to uncertainty of abstract arguments, and we consider a number of interesting classes of probability assignments.",
"Recently, there has been a proposal by Dung and Thang and by to extend abstract argumentation to take uncertainty of arguments into account by assigning a probability value to each argument, and then use this assignment to determine the probability that a set of arguments is an extension. In this paper, we explore some of the assumptions behind the definitions, and some of the resulting properties, of the proposal for probabilistic argument graphs.",
"Attributing a cyber-operation through the use of multiple pieces of technical evidence (i.e., malware reverse-engineering and source tracking) and conventional intelligence sources (i.e., human or signals intelligence) is a difficult problem not only due to the effort required to obtain evidence, but the ease with which an adversary can plant false evidence. In this paper, we introduce a formal reasoning system called the InCA (Intelligent Cyber Attribution) framework that is designed to aid an analyst in the attribution of a cyber-operation even when the available information is conflicting and or uncertain. Our approach combines argumentation-based reasoning, logic programming, and probabilistic models to not only attribute an operation but also explain to the analyst why the system reaches its conclusions.",
"We present the syntax and proof theory of a logic of argumentation, LA. We also outline the development of a category theoretic semantics for LA. LA is the core of a proof theoretic model for reasoning under uncertainty. In this logic, propositions are labelled with a representation of the arguments which support their validity. Arguments may then be aggregated to collect more information about the potential validity of the propositions of interest. We make the notion of aggregation primitive to the logic, and then define strength mappings from sets of arguments to one of a number of possible dictionaries. This provides a uniform framework which incorporates a number of numerical and symbolic techniques for assigning subjective confidences to propositions on the basis of their supporting arguments. These aggregation techniques are also described, with examples.",
"Probabilistic abstract argumentation combines Dung’s abstract argumentation framework with probability theory in order to model uncertainty in argumentation. In this setting, we address the fundamental problem of computing the probability that a set of arguments is an extension according to a given semantics. We focus on the most popular semantics (i.e., admissible, stable, complete, grounded, preferred, ideal-set, ideal, stage, and semistable) and show the following dichotomy result: computing the probability that a set of arguments is an extension is either in FP or FP^#P-complete depending on the semantics adopted. Our polynomial-time results are particularly interesting, as they hold for some semantics for which no polynomial-time technique was known so far.",
"Many real-world knowledge-based systems must deal with information coming from different sources that invariably leads to incompleteness, overspecification, or inherently uncertain content. The presence of these varying levels of uncertainty doesn't mean that the information is worthless --- rather, these are hurdles that the knowledge engineer must learn to work with. In this paper, we continue work on an argumentation-based framework that extends the well-known Defeasible Logic Programming (DeLP) language with probabilistic uncertainty, giving rise to the Defeasible Logic Programming with Presumptions and Probabilistic Environments (DeLP3E) model. Our prior work focused on the problem of belief revision in DeLP3E, where we proposed a non-prioritized class of revision operators called AFO (Annotation Function-based Operators) to solve this problem. In this paper, we further study this class and argue that in some cases it may be desirable to define revision operators that take quantitative aspects into account, such as how the probabilities of certain literals or formulas of interest change after the revision takes place. To the best of our knowledge, this problem has not been addressed in the argumentation literature to date. We propose the QAFO (Quantitative Annotation Function-based Operators) class of operators, a subclass of AFO, and then go on to study the complexity of several problems related to their specification and application in revising knowledge bases. Finally, we present an algorithm for computing the probability that a literal is warranted in a DeLP3E knowledge base, and discuss how it could be applied towards implementing QAFO-style operators that compute approximations rather than exact operations.",
"Classical semantics for abstract argumentation frameworks are usually defined in terms of extensions or, more recently, labelings. That is, an argument is either regarded as accepted with respect to a labeling or not. In order to reason with a specific semantics one takes either a credulous or skeptical approach, i. e. an argument is ultimately accepted, if it is accepted in one or all labelings, respectively. In this paper, we propose a more general approach for a semantics that allows for a more fine-grained differentiation between those two extreme views on reasoning. In particular, we propose a probabilistic semantics for abstract argumentation that assigns probabilities or degrees of belief to individual arguments. We show that our semantics generalizes the classical notions of semantics and we point out interesting relationships between concepts from argumentation and probabilistic reasoning. We illustrate the usefulness of our semantics on an example from the medical domain."
]
} |
1903.01433 | 2949700122 | In pervasive Internet of Things (IoT) applications, the use of short packets is expected to meet the stringent latency requirement in ultra-reliable low-latency communications; however, the incurred security issues and the impact of finite blocklength coding on the physical-layer security have not been well understood. This paper comprehensively investigates the performance of secure short-packet communications in a mission-critical IoT system with an external multi-antenna eavesdropper. An analytical framework is proposed to approximate the average achievable secrecy throughput of the system with finite blocklength coding. To gain more insight, a simple case with a single-antenna access point (AP) is considered first, in which the secrecy throughput is approximated in a closed form. Based on that result, the optimal blocklengths to maximize the secrecy throughput with and without the reliability and latency constraints, respectively, are derived. For the case with a multi-antenna AP, following the proposed analytical framework, closed-form approximations for the secrecy throughput are obtained under both beamforming and artificial-noise-aided transmission schemes. Numerical results verify the accuracy of the proposed approximations and illustrate the impact of the system parameters on the tradeoff between transmission latency and reliability under the secrecy constraint. | So far, there is only limited work devoted to investigating the secrecy rate for the case of finite blocklength @cite_3 @cite_16 @cite_32 @cite_24 @cite_8 . In @cite_3 , general achievability bounds for wiretap channels with finite blocklength coding were obtained. Subsequently, much effort has been put into improving the bounds @cite_16 @cite_32 .
For given reliability and secrecy constraints under the finite blocklength case, the tightest bounds and the second-order coding rate for discrete memoryless and Gaussian wiretap channels were obtained in @cite_24 and @cite_8 . However, to the best of the authors' knowledge, no results are available on analyzing practical system performance with these information-theoretic results, nor has there been a comprehensive study of the system secrecy throughput with finite blocklength coding. Furthermore, how to design the blocklength in order to balance the latency-reliability tradeoff under the secrecy constraint remains unclear. The above issues motivate our work. | {
"cite_N": [
"@cite_8",
"@cite_32",
"@cite_3",
"@cite_24",
"@cite_16"
],
"mid": [
"2626218764",
"1964733087",
"2103674915",
"2963154570",
"1963552431"
],
"abstract": [
"This paper investigates the maximal secret communication rate over a wiretap channel subject to reliability and secrecy constraints at a given blocklength. New achievability and converse bounds are derived, which are uniformly tighter than existing bounds, and lead to the tightest bounds on the second-order coding rate for discrete memoryless and Gaussian wiretap channels. The exact second-order coding rate is established for semi-deterministic wiretap channels, which characterizes the optimal tradeoff between reliability and secrecy in the finite-blocklength regime. Underlying our achievability bounds are two new privacy amplification results, which not only refine the existing results, but also achieve stronger notions of secrecy.",
"In this paper we develop a finite blocklength version of the Output Statistics of Random Binning (OSRB) framework. This framework is shown to be optimal in the point-to-point case. New second order regions for broadcast channel and wiretap channel with strong secrecy criterion are derived.",
"Several nonasymptotic formulas are established in channel resolvability and identification capacity, and they are applied to the wiretap channel. By using these formulas, the ε capacities of the above three problems are considered in the most general setting, where no structural assumptions such as the stationary memoryless property are made on a channel. As a result, we solve an open problem proposed by Han and Verdú. Moreover, we obtain lower bounds of the exponents of error probability and the wiretapper's information in the wiretap channel.",
"This paper investigates the maximal secrecy rate over a wiretap channel subject to reliability and secrecy constraints at a given blocklength. New achievability and converse bounds are derived, which are shown to be tighter than existing bounds. The bounds also lead to the tightest second-order coding rate for discrete memoryless and Gaussian wiretap channels.",
"We derive lower bounds to the second-order coding rates for the wiretap channel. The decoding error probability and the information leakage measured in terms of the variational distance secrecy metric are fixed at some constants ε_r and ε_s, respectively. We leverage on the connection between wiretap channel coding and channel resolvability to derive tighter secrecy bounds than those available in the literature. We then use central limit theorem-style analysis to evaluate these bounds for the discrete memoryless wiretap channel with cost constraints and the Gaussian wiretap channel."
]
} |
1903.01620 | 2920790122 | While discriminative classifiers often yield strong predictive performance, missing feature values at prediction time can still be a challenge. Classifiers may not behave as expected under certain ways of substituting the missing values, since they inherently make assumptions about the data distribution they were trained on. In this paper, we propose a novel framework that classifies examples with missing features by computing the expected prediction with respect to a feature distribution. Moreover, we use geometric programming to learn a naive Bayes distribution that embeds a given logistic regression classifier and can efficiently take its expected predictions. Empirical evaluations show that our model achieves the same performance as the logistic regression with all features observed, and outperforms standard imputation techniques when features go missing during prediction time. Furthermore, we demonstrate that our method can be used to generate "sufficient explanations" of logistic regression classifications, by removing features that do not affect the classification. | There have been many approaches developed to classify with missing values, which can broadly be grouped into two different types. The first one focuses on increasing classifiers' inherent robustness to feature corruption, which includes missingness. A common way to achieve such robustness is to spread the importance weights more evenly among features @cite_21 @cite_2 @cite_28 . One downside of this approach is that the trained classifier may not achieve its best possible performance if no features go missing. | {
"cite_N": [
"@cite_28",
"@cite_21",
"@cite_2"
],
"mid": [
"2604928999",
"2114296159",
"2198288948"
],
"abstract": [
"A novel algorithm based on random forests with surrogate splits is proposed to address the classification problem of incomplete data without imputation. The algorithm allows each tree to cast a vote even when the voting process is interrupted by missing attributes. Experimental results on various acknowledged datasets show that the proposed method is robust and efficient. Random forests (RF) is known as an efficient algorithm in classification, however it depends on the integrity of datasets. Conventional methods in dealing with missing values usually employ estimation and imputation approaches whose efficiency is tied to the assumptions of data features. Recently, the algorithm of surrogate decisions in RF was developed and this paper proposes a random forests algorithm with modified surrogate splits (Adjusted Weight Voting Random Forest, AWVRF) which is able to address the incomplete data without imputation. Differing from the present surrogate method, in the AWVRF algorithm, when the primary splitting attribute and the surrogate attributes of an internal node are all missing, the undergoing instance is allowed to exit at the current node with a vote. Then the weight of the vote is adjusted by the strength of the involved attributes and the final decision is made by weighted voting. AWVRF does not comprise an imputation step, thus it is independent of data features. AWVRF is compared with the methods of mean imputation, LeoFill, knnimpute, BPCAfill and conventional RF with surrogate decisions (surrRF) using 50 times repeated 5-fold cross validation on 10 acknowledged datasets. In a total of 22 experiment settings, the method of AWVRF harvests the highest accuracy in 14 settings and the largest AUC in 7 settings, exhibiting its superiority over other methods. Compared with surrRF, AWVRF is significantly more efficient and retains good discrimination of prediction. Experimental results show that the present AWVRF algorithm can successfully handle the classification task for incomplete data.",
"When constructing a classifier from labeled data, it is important not to assign too much weight to any single input feature, in order to increase the robustness of the classifier. This is particularly important in domains with nonstationary feature distributions or with input sensor failures. A common approach to achieving such robustness is to introduce regularization which spreads the weight more evenly between the features. However, this strategy is very generic, and cannot induce robustness specifically tailored to the classification task at hand. In this work, we introduce a new algorithm for avoiding single feature over-weighting by analyzing robustness using a game theoretic formalization. We develop classifiers which are optimally resilient to deletion of features in a minimax sense, and show how to construct such classifiers using quadratic programming. We illustrate the applicability of our methods on spam filtering and handwritten digit recognition tasks, where feature deletion is indeed a realistic noise model.",
"After a classifier is trained using a machine learning algorithm and put to use in a real world system, it often faces noise which did not appear in the training data. Particularly, some subset of features may be missing or may become corrupted. We present two novel machine learning techniques that are robust to this type of classification-time noise. First, we solve an approximation to the learning problem using linear programming. We analyze the tightness of our approximation and prove statistical risk bounds for this approach. Second, we define the online-learning variant of our problem, address this variant using a modified Perceptron, and obtain a statistical learning algorithm using an online-to-batch technique. We conclude with a set of experiments that demonstrate the effectiveness of our algorithms."
]
} |
1903.01620 | 2920790122 | While discriminative classifiers often yield strong predictive performance, missing feature values at prediction time can still be a challenge. Classifiers may not behave as expected under certain ways of substituting the missing values, since they inherently make assumptions about the data distribution they were trained on. In this paper, we propose a novel framework that classifies examples with missing features by computing the expected prediction with respect to a feature distribution. Moreover, we use geometric programming to learn a naive Bayes distribution that embeds a given logistic regression classifier and can efficiently take its expected predictions. Empirical evaluations show that our model achieves the same performance as the logistic regression with all features observed, and outperforms standard imputation techniques when features go missing during prediction time. Furthermore, we demonstrate that our method can be used to generate "sufficient explanations" of logistic regression classifications, by removing features that do not affect the classification. | The second one investigates how to impute the missing values. In essence, imputation is a form of reasoning about missing values from observed ones @cite_0 @cite_20 @cite_27 . An iterative process is commonly used during this reasoning process @cite_24 . Some recent works also adapt auto-encoders and GANs for the task @cite_10 @cite_19 . Some of these works can be incorporated into a framework called multiple imputations to reflect and better bound one's uncertainty @cite_26 @cite_9 . These existing methods focus on substituting missing values with those closer to the ground truth, but do not model how the imputed values interact with the trained classifier. On the other hand, our proposed method explicitly reasons about what the classifier is expected to return. | {
"cite_N": [
"@cite_26",
"@cite_9",
"@cite_0",
"@cite_24",
"@cite_27",
"@cite_19",
"@cite_10",
"@cite_20"
],
"mid": [
"2127841934",
"1919216911",
"1985419027",
"2115098571",
"321726205",
"2885878606",
"2951397438",
"1533198000"
],
"abstract": [
"In recent years, multiple imputation has emerged as a convenient and flexible paradigm for analysing data with missing values. Essential features of multiple imputation are reviewed, with answers to frequently asked questions about using the method in practice.",
"Multivariate imputation by chained equations (MICE) has emerged as a principled method of dealing with missing data. Despite properties that make MICE particularly useful for large imputation procedures and advances in software development that now make it accessible to many researchers, many psychiatric researchers have not been trained in these methods and few practical resources exist to guide researchers in the implementation of this technique. This paper provides an introduction to the MICE method with a focus on practical aspects and challenges in using this method. A brief review of software programs available to implement MICE and then analyze multiply imputed data is also provided.",
"Backpropagation neural networks have been applied to prediction and classification problems in many real world situations. However, a drawback of this type of neural network is that it requires a full set of input data, and real world data is seldom complete. We have investigated two ways of dealing with incomplete data — network reduction using multiple neural network classifiers, and value substitution using estimated values from predictor networks — and compared their performance with an induction method. On a thyroid disease database collected in a clinical situation, we found that the network reduction method was superior. We conclude that network reduction can be a useful method for dealing with missing values in diagnostic systems based on backpropagation neural networks.",
"The R package mice imputes incomplete multivariate data by chained equations. The software mice 1.0 appeared in the year 2000 as an S-PLUS library, and in 2001 as an R package. mice 1.0 introduced predictor selection, passive imputation and automatic pooling. This article documents mice, which extends the functionality of mice 1.0 in several ways. In mice, the analysis of imputed data is made completely general, whereas the range of models under which pooling works is substantially extended. mice adds new functionality for imputing multilevel data, automatic predictor selection, data handling, post-processing imputed values, specialized pooling routines, model selection tools, and diagnostic graphs. Imputation of categorical data is improved in order to bypass problems caused by perfect prediction. Special attention is paid to transformations, sum scores, indices and interactions using passive imputation, and to the proper setup of the predictor matrix. mice can be downloaded from the Comprehensive R Archive Network. This article provides a hands-on, stepwise approach to solve applied incomplete data problems.",
"Part 1. A Gentle Introduction to Missing Data. The Concept of Missing Data. The Prevalence of Missing Data. Why Data Might Be Missing. The Impact of Missing Data. What's Missing in the Missing Data Literature? A Cost-Benefit Approach to Missing Data. Missing Data - Not Just for Statisticians Anymore. Part 2. Consequences of Missing Data. Three General Consequences of Missing Data. Consequences of Missing Data on Construct Validity. Consequences of Missing Data on Internal Validity. Consequences on Causal Generalization. Summary. Part 3. Classifying Missing Data. \"The Silence That Betokens\". The Current Classification System: Mechanisms of Missing Data. Expanding the Classification System. Summary. Part 4. Preventing Missing Data by Design. Overall Study Design. Characteristics of the Target Population and the Sample. Data Collection and Measurement. Treatment Implementation. Data Entry Process. Summary. Part 5. Diagnostic Procedures. Traditional Diagnostics. Dummy Coding Missing Data. Numerical Diagnostic Procedures. Graphical Diagnostic Procedures. Summary. Part 6. The Selection of Data Analytic Procedures. Preliminary Steps. Decision Making. Summary. Part 7. Data Deletion Methods for Handling Missing Data. Data Sets. Complete Case Method. Available Case Method. Available Item Method. Individual Growth Curve Analysis. Multisample Analyses. Summary. Part 8. Data Augmentation Procedures. Model-Based Procedures. Markov Chain Monte Carlo. Adjustment Methods. Summary. Part 9. Single Imputation Procedures. Constant Replacement Methods. Random Value Imputation. Nonrandom Value Imputation: Single Condition. Nonrandom Value Imputation: Multiple Conditions. Summary. Part 10. Multiple Imputation. The MI Process. Summary. Part 11. Reporting Missing Data and Results. APA Task Force Recommendations. Missing Data and Study Stages. TFSI Recommendations and Missing Data. Reporting Format. Summary. Part 12. Epilogue.",
"Missing values widely exist in many real-world datasets, which hinders the performing of advanced data analytics. Properly filling these missing values is crucial but challenging, especially when the missing rate is high. Many approaches have been proposed for missing value imputation (MVI), but they are mostly heuristics-based, lacking a principled foundation and do not perform satisfactorily in practice. In this paper, we propose a probabilistic framework based on deep generative models for MVI. Under this framework, imputing the missing entries amounts to seeking a fixed-point solution between two conditional distributions defined on the missing entries and latent variables respectively. These distributions are parameterized by deep neural networks (DNNs) which possess high approximation power and can capture the nonlinear relationships between missing entries and the observed values. The learning of weight parameters of DNNs is performed by maximizing an approximation of the log-likelihood of observed values. We conducted extensive evaluation on 13 datasets and compared with 11 baselines methods, where our methods largely outperforms the baselines.",
"Missing data is a significant problem impacting all domains. State-of-the-art framework for minimizing missing data bias is multiple imputation, for which the choice of an imputation model remains nontrivial. We propose a multiple imputation model based on overcomplete deep denoising autoencoders. Our proposed model is capable of handling different data types, missingness patterns, missingness proportions and distributions. Evaluation on several real life datasets show our proposed model significantly outperforms current state-of-the-art methods under varying conditions while simultaneously improving end of the line analytics.",
"Data quality is a major concern in Machine Learning and other correlated areas such as Knowledge Discovery from Databases (KDD). As most Machine Learning algorithms induce knowledge strictly from data, the quality of the knowledge extracted is largely determined by the quality of the underlying data. One relevant problem in data quality is the presence of missing data. Despite the frequent occurrence of missing data, many Machine Learning algorithms handle missing data in a rather naive way. Missing data treatment should be carefully thought, otherwise bias might be introduced into the knowledge induced. In this work, we analyse the use of the k-nearest neighbour as an imputation method. Imputation is a term that denotes a procedure that replaces the missing values in a data set by some plausible values. Our analysis indicates that missing data imputation based on the k-nearest neighbour algorithm can outperform the internal methods used by C4.5 and CN2 to treat missing data."
]
} |
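The contrast between substituting missing values and reasoning about the classifier's expected output can be illustrated with a small sketch. This is not the paper's geometric-programming method (which computes the expectation exactly under a learned naive Bayes distribution); it is a hypothetical Monte Carlo approximation of the same quantity, with all names invented for illustration: missing features are repeatedly drawn from their empirical training distribution and the logistic outputs are averaged.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def expected_prediction(w, b, x, missing, X_train, n_samples=1000, rng=None):
    """Monte Carlo estimate of E[p(y=1 | x)] for a logistic model when
    some features of x are missing: each missing entry is resampled from
    its empirical (training) distribution and the outputs are averaged."""
    rng = np.random.default_rng(rng)
    xs = np.tile(x, (n_samples, 1))
    for j in missing:
        xs[:, j] = rng.choice(X_train[:, j], size=n_samples)
    return sigmoid(xs @ w + b).mean()

# toy example: feature 1 is missing but carries zero weight, so the
# expected prediction collapses to sigmoid(2.0) regardless of sampling
w = np.array([2.0, 0.0])
b = 0.0
X_train = np.array([[1.0, 5.0], [1.0, -5.0], [1.0, 0.0]])
p = expected_prediction(w, b, np.array([1.0, 0.0]), missing=[1], X_train=X_train)
```

Unlike single imputation, this averages the classifier itself over the feature distribution, which is the quantity the paper argues should be modeled.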
1903.01456 | 2920755795 | This paper presents a two-level protein folding optimization on a three-dimensional AB off-lattice model. The first level is responsible for forming conformations with a good hydrophobic core or a set of compact hydrophobic amino acid positions. These conformations are forwarded to the second level, where an accurate search is performed with the aim of locating conformations with the best energy value. The optimization process switches between these two levels until the stopping condition is satisfied. An auxiliary fitness function was designed for the first level, while the original fitness function is used in the second level. The auxiliary fitness function includes expression about the quality of the hydrophobic core. This expression is crucial for leading the search process to the promising solutions that have a good hydrophobic core and, consequently, improves the efficiency of the whole optimization process. Our differential evolution algorithm was used for demonstrating the efficiency of the two-level optimization. It was analyzed on well-known amino acid sequences that are used frequently in the literature. The obtained experimental results show that the employed two-level optimization improves the efficiency of our algorithm significantly, and that the proposed algorithm is superior to other state-of-the-art algorithms. | Evolutionary algorithms have been quite successful in solving PFO. An ecology-inspired algorithm for PFO is presented in @cite_8 . A key concept of this algorithm is the definition of habitats. These habitats, or clusters, are determined using a hierarchical clustering algorithm. For example, in a multimodal optimization problem, each peak can become a promising habitat for some populations.
Two categories of ecological relationships can be defined according to these habitats: intra-habitat relationships, which occur between populations inside each habitat, and inter-habitat relationships, which occur between habitats. The intra-habitat relationships are responsible for intensifying the search, while the inter-habitat relationships are responsible for diversifying it. | {
"cite_N": [
"@cite_8"
],
"mid": [
"2060594550"
],
"abstract": [
"This paper applies an ecology-inspired algorithm (ECO) to solve a complex problem from bioinformatics. The ecological-inspired algorithm represents a new perspective to develop cooperative evolutionary algorithms. Different algorithms are applied to compose the computational ecosystem, both homogeneously and heterogeneously. The aim is to search low energy conformations for the Protein Structure Prediction problem, concerning the 2D-AB off-lattice model. From the results, the heterogeneous configuration obtained the best conformations for almost all cases, possibly due to the use of different intensification and diversification strategies provided by different search algorithms."
]
} |
1903.01456 | 2920755795 | This paper presents a two-level protein folding optimization on a three-dimensional AB off-lattice model. The first level is responsible for forming conformations with a good hydrophobic core or a set of compact hydrophobic amino acid positions. These conformations are forwarded to the second level, where an accurate search is performed with the aim of locating conformations with the best energy value. The optimization process switches between these two levels until the stopping condition is satisfied. An auxiliary fitness function was designed for the first level, while the original fitness function is used in the second level. The auxiliary fitness function includes expression about the quality of the hydrophobic core. This expression is crucial for leading the search process to the promising solutions that have a good hydrophobic core and, consequently, improves the efficiency of the whole optimization process. Our differential evolution algorithm was used for demonstrating the efficiency of the two-level optimization. It was analyzed on well-known amino acid sequences that are used frequently in the literature. The obtained experimental results show that the employed two-level optimization improves the efficiency of our algorithm significantly, and that the proposed algorithm is superior to other state-of-the-art algorithms. | The paper @cite_34 presents basic and adaptive versions of the DE algorithm with a parallel master-slave architecture. With this architecture, the computational load is divided and the overall performance is improved. Explosion and mirror mutation operators were also included in DE. The explosion is a mechanism that reinitializes the population when stagnation occurs, and is thus responsible for preventing premature convergence. The second mechanism, the mirror mutation, was designed to perform a local search by using mirror angles within the sequence. | {
"cite_N": [
"@cite_34"
],
"mid": [
"2085602712"
],
"abstract": [
"Protein structure prediction (PSP) is a well-known problem in bioinformatics. Identifying protein native conformations makes it possible to predict its function within the organism. Knowing this also helps the development of new drugs and the comprehension of some biological mechanisms. During years some techniques have been developed for this purpose but, due to their high cost, it is necessary to use simplified models of protein structures. However, even the simplest models, with low biological plausibility, are excessively complex from the computational point of view. This paper reports the application of Differential Evolution (DE) to solve the PSP problem using a Toy Model (also known as the AB Model) in both 2D and 3D to represent the protein structure. This work presents two different versions of the DE algorithm (basic and adaptive) using a parallel architecture (master-slave) based on Message Passing Interface in a cluster. Some special operators for DE were developed: explosion and mirror mutation. All tests executed in this work used four benchmark sequences, ranging from 13 to 55 amino acids. The results for both parallel DE algorithms using both 2D and 3D models were compared with other works in the literature. The DE algorithm achieved excellent results. Overall results encourage further research towards the use of knowledge-based operators to improve further the performance of DE."
]
} |
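A minimal sketch may clarify how a stagnation-triggered explosion grafts onto a standard DE/rand/1/bin generation. This is an illustrative reconstruction, not the cited parallel master-slave implementation; the function names, bounds, and stagnation threshold are assumptions.

```python
import numpy as np

def de_step_with_explosion(pop, fitness_fn, F=0.5, CR=0.9, stagnation=0,
                           max_stagnation=20, bounds=(-1.0, 1.0), rng=None):
    """One DE/rand/1/bin generation.  If the best fitness has not improved
    for `max_stagnation` generations (tracked by the caller as `stagnation`),
    an 'explosion' reinitializes the population within the bounds."""
    if rng is None:
        rng = np.random.default_rng(0)
    n, d = pop.shape
    if stagnation >= max_stagnation:            # explosion: restart the search
        return rng.uniform(bounds[0], bounds[1], size=(n, d))
    fit = np.array([fitness_fn(x) for x in pop])
    new_pop = pop.copy()
    for i in range(n):
        a, b, c = rng.choice([j for j in range(n) if j != i], 3, replace=False)
        mutant = pop[a] + F * (pop[b] - pop[c])         # rand/1 mutation
        cross = rng.random(d) < CR                      # binomial crossover
        cross[rng.integers(d)] = True                   # at least one gene
        trial = np.where(cross, mutant, pop[i])
        if fitness_fn(trial) <= fit[i]:                 # greedy selection
            new_pop[i] = trial
    return new_pop

# demo: a few generations on the sphere function
rng = np.random.default_rng(1)
pop = rng.uniform(-1, 1, size=(10, 3))
sphere = lambda x: float(np.sum(x ** 2))
best_before = min(sphere(x) for x in pop)
for _ in range(20):
    pop = de_step_with_explosion(pop, sphere, rng=rng)
best_after = min(sphere(x) for x in pop)
```

Because selection is greedy, the best fitness can only improve between generations unless an explosion deliberately discards the population.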
1903.01456 | 2920755795 | This paper presents a two-level protein folding optimization on a three-dimensional AB off-lattice model. The first level is responsible for forming conformations with a good hydrophobic core or a set of compact hydrophobic amino acid positions. These conformations are forwarded to the second level, where an accurate search is performed with the aim of locating conformations with the best energy value. The optimization process switches between these two levels until the stopping condition is satisfied. An auxiliary fitness function was designed for the first level, while the original fitness function is used in the second level. The auxiliary fitness function includes expression about the quality of the hydrophobic core. This expression is crucial for leading the search process to the promising solutions that have a good hydrophobic core and, consequently, improves the efficiency of the whole optimization process. Our differential evolution algorithm was used for demonstrating the efficiency of the two-level optimization. It was analyzed on well-known amino acid sequences that are used frequently in the literature. The obtained experimental results show that the employed two-level optimization improves the efficiency of our algorithm significantly, and that the proposed algorithm is superior to other state-of-the-art algorithms. | Biogeography-Based Optimization (BBO) has also been applied to PFO @cite_1 . This algorithm is based on the definition of habitats. Each habitat hosts a number of species, and different habitats usually host different numbers of species. Within the algorithm, the Habitat Suitability Index (HSI) is used to measure the quality of a habitat. Habitats with a high HSI are suitable for survival; thus, these habitats have low immigration rates and high emigration rates. On the contrary, habitats with a low HSI have high immigration rates and low emigration rates.
Additionally, BBO includes a mutation operator to avoid premature convergence, and elitism to avoid the degeneration phenomenon. The improved BBO contains an improved migration process. In the standard migration process, a feature of a habitat is replaced by a feature from a different habitat. In the improved version, several features are selected from different habitats according to their emigration rates, and their weighted values determine the features of the target habitat. This algorithm was compared with the standard BBO and DE. The results show that the improved BBO outperforms both competitors. | {
"cite_N": [
"@cite_1"
],
"mid": [
"2031468664"
],
"abstract": [
"In recent years, many bio-inspired computation algorithms have been proposed to solve constraint problems. Biogeography-Based Optimization (BBO) is one of these newly proposed optimization algorithms. As a new way to solve complicated optimization problems, BBO has a quick convergence. In this paper, we proposed an improved BBO for solving protein structure prediction problems. Comparative experiments with standard BBO and differential evolution algorithm (DE) are also conducted, and the results demonstrate this improved BBO approach performs better in solving these complicated protein prediction problems."
]
} |
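The rank-dependent immigration and emigration rates can be sketched with the common linear species model; whether the cited improved BBO uses exactly these linear curves is an assumption, and `I` and `E` denote the assumed maximum immigration and emigration rates.

```python
def bbo_migration_rates(n_habitats, I=1.0, E=1.0):
    """Linear BBO migration model for n_habitats >= 2.  Habitats are
    ranked by HSI (k = 0 worst, k = n-1 best); a habitat at rank k gets
    immigration rate lambda_k = I * (1 - k/(n-1)) and emigration rate
    mu_k = E * k/(n-1): high-HSI habitats emigrate features, low-HSI
    habitats immigrate them."""
    rates = []
    for k in range(n_habitats):
        s = k / (n_habitats - 1)
        rates.append((I * (1.0 - s), E * s))   # (lambda_k, mu_k)
    return rates
```

For five habitats this yields lambda falling from `I` to 0 and mu rising from 0 to `E`, which is exactly the asymmetry described above: quality attracts emigration, not immigration.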
1903.01456 | 2920755795 | This paper presents a two-level protein folding optimization on a three-dimensional AB off-lattice model. The first level is responsible for forming conformations with a good hydrophobic core or a set of compact hydrophobic amino acid positions. These conformations are forwarded to the second level, where an accurate search is performed with the aim of locating conformations with the best energy value. The optimization process switches between these two levels until the stopping condition is satisfied. An auxiliary fitness function was designed for the first level, while the original fitness function is used in the second level. The auxiliary fitness function includes expression about the quality of the hydrophobic core. This expression is crucial for leading the search process to the promising solutions that have a good hydrophobic core and, consequently, improves the efficiency of the whole optimization process. Our differential evolution algorithm was used for demonstrating the efficiency of the two-level optimization. It was analyzed on well-known amino acid sequences that are used frequently in the literature. The obtained experimental results show that the employed two-level optimization improves the efficiency of our algorithm significantly, and that the proposed algorithm is superior to other state-of-the-art algorithms. | To improve the convergence performance of the Artificial Bee Colony (ABC) algorithm, an ABC variant based on an internal feedback strategy is proposed in @cite_3 . In this strategy, internal states are fully used in each iteration to guide the subsequent search process. In @cite_7 , a chaotic ABC algorithm was introduced. This algorithm combines the artificial bee colony and chaotic search algorithms to avoid premature convergence. If the algorithm becomes trapped in a local optimum, it uses chaotic search to escape stagnation.
A balance-evolution artificial bee colony algorithm was presented in @cite_5 @cite_13 . During the optimization process, this algorithm uses convergence information to balance adaptively between local and global search. | {
"cite_N": [
"@cite_5",
"@cite_13",
"@cite_7",
"@cite_3"
],
"mid": [
"2056882961",
"1861417953",
"2016902353",
"2283559742"
],
"abstract": [
"Abstract Protein structure prediction is a fundamental issue in the field of computational molecular biology. In this paper, the AB off-lattice model is adopted to transform the original protein structure prediction scheme into a numerical optimization problem. We present a balance-evolution artificial bee colony (BE-ABC) algorithm to address the problem, with the aim of finding the structure for a given protein sequence with the minimal free-energy value. This is achieved through the use of convergence information during the optimization process to adaptively manipulate the search intensity. Besides that, an overall degradation procedure is introduced as part of the BE-ABC algorithm to prevent premature convergence. Comprehensive simulation experiments based on the well-known artificial Fibonacci sequence set and several real sequences from the database of Protein Data Bank have been carried out to compare the performance of BE-ABC against other algorithms. Our numerical results show that the BE-ABC algorithm is able to outperform many state-of-the-art approaches and can be effectively employed for protein structure optimization.",
"Protein folding is a fundamental topic in molecular biology. Conventional experimental techniques for protein structure identification or protein folding recognition require strict laboratory requirements and heavy operating burdens, which have largely limited their applications. Alternatively, computer-aided techniques have been developed to optimize protein structures or to predict the protein folding process. In this paper, we utilize a 3D off-lattice model to describe the original protein folding scheme as a simplified energy-optimal numerical problem, where all types of amino acid residues are binarized into hydrophobic and hydrophilic ones. We apply a balance-evolution artificial bee colony (BE-ABC) algorithm as the minimization solver, which is featured by the adaptive adjustment of search intensity to cater for the varying needs during the entire optimization process. In this work, we establish a benchmark case set with 13 real protein sequences from the Protein Data Bank database and evaluate the convergence performance of BE-ABC algorithm through strict comparisons with several state-of-the-art ABC variants in short-term numerical experiments. Besides that, our obtained best-so-far protein structures are compared to the ones in comprehensive previous literature. This study also provides preliminary insights into how artificial intelligence techniques can be applied to reveal the dynamics of protein folding.",
"Prediction of the three-dimensional structure of a protein from its amino acid sequence can be considered as a global optimization problem. In this paper, the Chaotic Artificial Bee Colony (CABC) algorithm was introduced and applied to 3D protein structure prediction. Based on the 3D off-lattice AB model, the CABC algorithm combines global search and local search of the Artificial Bee Colony (ABC) algorithm with the chaotic search algorithm to avoid the problem of premature convergence and easily trapping the local optimum solution. The experiments carried out with the popular Fibonacci sequences demonstrate that the proposed algorithm provides an effective and high-performance method for protein structure prediction.",
"The biological function of the protein is folded by their spatial structure decisions, and therefore the process of protein folding is one of the most challenging problems in the field of bioinformatics. Although many heuristic algorithms have been proposed to solve the protein structure prediction (PSP) problem. The existing algorithms are far from perfect since PSP is an NP-problem. In this paper, we proposed artificial bee colony algorithm on 3D AB off-lattice model to PSP problem. In order to improve the global convergence ability and convergence speed of ABC algorithm, we adopt the new search strategy by combining the global solution into the search equation. Experimental results illustrate that the suggested algorithm is effective when the algorithm is applied to the Fibonacci sequences and four real protein sequences in the Protein Data Bank."
]
} |
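The chaotic escape move can be sketched with the logistic map, the chaos generator most commonly used in chaotic ABC variants; whether the cited paper uses exactly this map and perturbation scheme is an assumption. Chaotic values in (0, 1) are mapped into a small neighborhood of the current best solution, and the best probed point is kept.

```python
def chaotic_search(x_best, fitness_fn, radius=0.1, n_iter=50, c0=0.7):
    """Escape move driven by the logistic map c <- 4c(1-c), which is
    fully chaotic at r = 4.  Each chaotic value in (0, 1) is rescaled
    to [-radius, radius] and added to a coordinate of the current best
    solution; any improvement replaces the incumbent."""
    c = c0                                  # c0 must avoid fixed points (e.g. 0.5, 0.75)
    best_x, best_f = list(x_best), fitness_fn(x_best)
    for _ in range(n_iter):
        cand = []
        for xi in x_best:
            c = 4.0 * c * (1.0 - c)         # logistic map iteration
            cand.append(xi + radius * (2.0 * c - 1.0))
        f = fitness_fn(cand)
        if f < best_f:
            best_x, best_f = cand, f
    return best_x, best_f

# demo: probe around a point near the optimum of the sphere function
sphere = lambda x: sum(v * v for v in x)
x0 = [0.05, -0.05]
x_new, f_new = chaotic_search(x0, sphere)
```

The sequence is deterministic but non-repeating, so the probe covers the neighborhood more evenly than a short uniform-random sample of the same length.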
1903.01456 | 2920755795 | This paper presents a two-level protein folding optimization on a three-dimensional AB off-lattice model. The first level is responsible for forming conformations with a good hydrophobic core or a set of compact hydrophobic amino acid positions. These conformations are forwarded to the second level, where an accurate search is performed with the aim of locating conformations with the best energy value. The optimization process switches between these two levels until the stopping condition is satisfied. An auxiliary fitness function was designed for the first level, while the original fitness function is used in the second level. The auxiliary fitness function includes expression about the quality of the hydrophobic core. This expression is crucial for leading the search process to the promising solutions that have a good hydrophobic core and, consequently, improves the efficiency of the whole optimization process. Our differential evolution algorithm was used for demonstrating the efficiency of the two-level optimization. It was analyzed on well-known amino acid sequences that are used frequently in the literature. The obtained experimental results show that the employed two-level optimization improves the efficiency of our algorithm significantly, and that the proposed algorithm is superior to other state-of-the-art algorithms. | Researchers have combined two or more algorithms to develop hybrids that obtain better results than the original algorithms. In @cite_35 , the authors combined simulated annealing and the tabu search algorithm. This algorithm was further improved with a local adjustment strategy that increases the accuracy and speed of the search. | {
"cite_N": [
"@cite_35"
],
"mid": [
"2072519649"
],
"abstract": [
"Background Protein folding structure prediction is one of the most challenging problems in the bioinformatics domain. Because of the complexity of the realistic protein structure, the simplified structure model and the computational method should be adopted in the research. The AB off-lattice model is one of the simplification models, which only considers two classes of amino acids, hydrophobic (A) residues and hydrophilic (B) residues."
]
} |
1903.01456 | 2920755795 | This paper presents a two-level protein folding optimization on a three-dimensional AB off-lattice model. The first level is responsible for forming conformations with a good hydrophobic core or a set of compact hydrophobic amino acid positions. These conformations are forwarded to the second level, where an accurate search is performed with the aim of locating conformations with the best energy value. The optimization process switches between these two levels until the stopping condition is satisfied. An auxiliary fitness function was designed for the first level, while the original fitness function is used in the second level. The auxiliary fitness function includes expression about the quality of the hydrophobic core. This expression is crucial for leading the search process to the promising solutions that have a good hydrophobic core and, consequently, improves the efficiency of the whole optimization process. Our differential evolution algorithm was used for demonstrating the efficiency of the two-level optimization. It was analyzed on well-known amino acid sequences that are used frequently in the literature. The obtained experimental results show that the employed two-level optimization improves the efficiency of our algorithm significantly, and that the proposed algorithm is superior to other state-of-the-art algorithms. | An algorithm that combines particle swarm optimization, a genetic algorithm, and tabu search was presented in @cite_30 . Within this algorithm, particle swarm optimization is used to generate an initial solution that is not entirely random, and a stochastic disturbance factor is adopted to improve the global search ability. The genetic algorithm is used to locate local optima quickly in order to speed up convergence, while the tabu search, extended with a mutation operator, is used to locate the global optimum. | {
"cite_N": [
"@cite_30"
],
"mid": [
"2087760458"
],
"abstract": [
"A new improved hybrid optimization algorithm - PGATS algorithm, which is based on toy off-lattice model, is presented for dealing with three-dimensional protein structure prediction problems. The algorithm combines the particle swarm optimization (PSO), genetic algorithm (GA), and tabu search (TS) algorithms. Otherwise, we also take some different improved strategies. The factor of stochastic disturbance is joined in the particle swarm optimization to improve the search ability; the operations of crossover and mutation that are in the genetic algorithm are changed to a kind of random liner method; at last tabu search algorithm is improved by appending a mutation operator. Through the combination of a variety of strategies and algorithms, the protein structure prediction (PSP) in a 3D off-lattice model is achieved. The PSP problem is an NP-hard problem, but the problem can be attributed to a global optimization problem of multi-extremum and multi-parameters. This is the theoretical principle of the hybrid optimization algorithm that is proposed in this paper. The algorithm combines local search and global search, which overcomes the shortcoming of a single algorithm, giving full play to the advantage of each algorithm. In the current universal standard sequences, Fibonacci sequences and real protein sequences are certified. Experiments show that the proposed new method outperforms single algorithms on the accuracy of calculating the protein sequence energy value, which is proved to be an effective way to predict the structure of proteins."
]
} |
1903.01456 | 2920755795 | This paper presents a two-level protein folding optimization on a three-dimensional AB off-lattice model. The first level is responsible for forming conformations with a good hydrophobic core or a set of compact hydrophobic amino acid positions. These conformations are forwarded to the second level, where an accurate search is performed with the aim of locating conformations with the best energy value. The optimization process switches between these two levels until the stopping condition is satisfied. An auxiliary fitness function was designed for the first level, while the original fitness function is used in the second level. The auxiliary fitness function includes expression about the quality of the hydrophobic core. This expression is crucial for leading the search process to the promising solutions that have a good hydrophobic core and, consequently, improves the efficiency of the whole optimization process. Our differential evolution algorithm was used for demonstrating the efficiency of the two-level optimization. It was analyzed on well-known amino acid sequences that are used frequently in the literature. The obtained experimental results show that the employed two-level optimization improves the efficiency of our algorithm significantly, and that the proposed algorithm is superior to other state-of-the-art algorithms. | An improved stochastic fractal search algorithm was applied to the AB off-lattice model in @cite_17 . In order to avoid the algorithm becoming trapped in a local optimum, Lévy flight and internal feedback information are incorporated into it. The algorithm consists of a diffusion process and an update process. The Lévy flight is used in the diffusion process to generate new particles around each population particle. In the update process, the best particle produced by the diffusion process is used to generate new particles.
To prevent stagnation in a local optimum, internal feedback information is incorporated into the algorithm. This information is used to trigger a mechanism that generates new particles from two randomly selected particles of the population. | {
"cite_N": [
"@cite_17"
],
"mid": [
"2801547197"
],
"abstract": [
"Protein structure prediction (PSP) is a significant area for biological information research, disease treatment, and drug development and so on. In this paper, three-dimensional structures of proteins are predicted based on the known amino acid sequences, and the structure prediction problem is transformed into a typical NP problem by an AB off-lattice model. This work applies a novel improved Stochastic Fractal Search algorithm (ISFS) to solve the problem. The Stochastic Fractal Search algorithm (SFS) is an effective evolutionary algorithm that performs well in exploring the search space but falls into local minimums sometimes. In order to avoid the weakness, Lévy flight and internal feedback information are introduced in ISFS. In the experimental process, simulations are conducted by ISFS algorithm on Fibonacci sequences and real peptide sequences. Experimental results prove that the ISFS performs more efficiently and robust in terms of finding the global minimum and avoiding getting stuck in local minimums."
]
} |
1903.01456 | 2920755795 | This paper presents a two-level protein folding optimization on a three-dimensional AB off-lattice model. The first level is responsible for forming conformations with a good hydrophobic core or a set of compact hydrophobic amino acid positions. These conformations are forwarded to the second level, where an accurate search is performed with the aim of locating conformations with the best energy value. The optimization process switches between these two levels until the stopping condition is satisfied. An auxiliary fitness function was designed for the first level, while the original fitness function is used in the second level. The auxiliary fitness function includes expression about the quality of the hydrophobic core. This expression is crucial for leading the search process to the promising solutions that have a good hydrophobic core and, consequently, improves the efficiency of the whole optimization process. Our differential evolution algorithm was used for demonstrating the efficiency of the two-level optimization. It was analyzed on well-known amino acid sequences that are used frequently in the literature. The obtained experimental results show that the employed two-level optimization improves the efficiency of our algorithm significantly, and that the proposed algorithm is superior to other state-of-the-art algorithms. | The authors in @cite_2 have shown that the differential evolution algorithm converges to better solutions when the initial population is created by using trained neural networks. The neural networks were trained successfully using the reinforcement learning method, by knowing only the fitness function of the class of optimization problems. | {
"cite_N": [
"@cite_2"
],
"mid": [
"2799508920"
],
"abstract": [
"Deep neural networks are constructed that are able to partially solve a protein structure optimization problem. The networks are trained using reinforcement learning approach so that free energy of predicted protein structure is minimized. Free energy of a protein structure is calculated using generalized three-dimensional AB off-lattice protein model. This methodology can be applied to other classes of optimization problems and represents a step toward automatic heuristic construction using deep neural networks. Trained networks can be used to construct better initial populations for optimization. It is shown that differential evolution applied to protein structure optimization problem converges to better solutions when initial population is constructed in this way."
]
} |
1903.01456 | 2920755795 | This paper presents a two-level protein folding optimization on a three-dimensional AB off-lattice model. The first level is responsible for forming conformations with a good hydrophobic core or a set of compact hydrophobic amino acid positions. These conformations are forwarded to the second level, where an accurate search is performed with the aim of locating conformations with the best energy value. The optimization process switches between these two levels until the stopping condition is satisfied. An auxiliary fitness function was designed for the first level, while the original fitness function is used in the second level. The auxiliary fitness function includes expression about the quality of the hydrophobic core. This expression is crucial for leading the search process to the promising solutions that have a good hydrophobic core and, consequently, improves the efficiency of the whole optimization process. Our differential evolution algorithm was used for demonstrating the efficiency of the two-level optimization. It was analyzed on well-known amino acid sequences that are used frequently in the literature. The obtained experimental results show that the employed two-level optimization improves the efficiency of our algorithm significantly, and that the proposed algorithm is superior to other state-of-the-art algorithms. | A multi-agent simulated annealing algorithm with parallel adaptive multiple sampling was proposed in @cite_32 . A parallel elitist sampling strategy was used to overcome the inherent serialization of the original simulated annealing algorithm. This strategy additionally provides beneficial information that is helpful for convergence. An adaptive neighborhood search and a parallel multiple move mechanism were also used inside the algorithm to improve its efficiency.
In this work, the following methods were analyzed for generating candidate solutions: simulated annealing, the mutation operator from the differential evolution algorithm, and the velocity and position updates from particle swarm optimization. | {
"cite_N": [
"@cite_32"
],
"mid": [
"2762084119"
],
"abstract": [
"Abstract Protein structure prediction (PSP) with ab initio model keeps a challenge in bioinformatics on account of high computational complexity. To solve the problem within a limited time and resource, the parallel capacity and search efficiency are of significance for a successful algorithm. Traditional simulated annealing (SA) algorithm is extremely slow in convergence, and the implementation and efficiency of parallel SA algorithms are typically problem-dependent. To overcome such intrinsic limitation, in this paper a multi-agent simulated annealing (MASA) algorithm with parallel adaptive multiple sampling (MASA-PAMS) that features better search ability is proposed. The MASA-PAMS contains two main issues. First, a parallel elitist sampling strategy overcomes the inherent serialization of the original SA, provides benefit information for the iteration which is helpful for the convergence. Then an adaptive neighborhood search and a parallel multiple move mechanism displace the random sampling scheme which balance the intensification and diversification iteratively. Conducted experiments with 2D and 3D AB off-lattice models indicate that the MASA-PAMS performs better than, or at least comparable to other MASAs with different sampling schemes and several state-of-the-art algorithms for PSP."
]
} |
1903.01456 | 2920755795 | This paper presents a two-level protein folding optimization on a three-dimensional AB off-lattice model. The first level is responsible for forming conformations with a good hydrophobic core or a set of compact hydrophobic amino acid positions. These conformations are forwarded to the second level, where an accurate search is performed with the aim of locating conformations with the best energy value. The optimization process switches between these two levels until the stopping condition is satisfied. An auxiliary fitness function was designed for the first level, while the original fitness function is used in the second level. The auxiliary fitness function includes expression about the quality of the hydrophobic core. This expression is crucial for leading the search process to the promising solutions that have a good hydrophobic core and, consequently, improves the efficiency of the whole optimization process. Our differential evolution algorithm was used for demonstrating the efficiency of the two-level optimization. It was analyzed on well-known amino acid sequences that are used frequently in the literature. The obtained experimental results show that the employed two-level optimization improves the efficiency of our algorithm significantly, and that the proposed algorithm is superior to other state-of-the-art algorithms. | Although powerful optimization algorithms have been introduced for PFO, researchers also face time-consuming optimization problems. To address this issue in PFO, the authors in @cite_12 introduced a new version of DE which uses computationally cheap surrogate models and gene expression programming. The purpose of the incorporated gene expression programming is to generate a diversified set of configurations, while the purpose of the surrogate model is to help DE find the best set of configurations.
Additionally, a covariance matrix adaptation evolution strategy was adopted to explore the search space more efficiently. The resulting algorithm, called SGDE, outperforms state-of-the-art algorithms in terms of the number of function evaluations. Its efficiency was also demonstrated in terms of runtime on the adopted all-atom model, which represents a time-consuming PFO. | {
"cite_N": [
"@cite_12"
],
"mid": [
"2913188636"
],
"abstract": [
"Abstract Protein structure prediction (PSP) plays an important role in the field of computational molecular biology. Although powerful optimization algorithms have been proven effective to tackle the PSP, researchers are faced with the challenge of time consuming simulations. This paper introduces a new modification of differential evolution (DE) which makes use of the computationally cheap surrogate models and gene expression programming (GEP) in order to address the aforementioned issue. The incorporated GEP is used to generate a diversified set of configurations, while radial basis function (RBF) surrogate model helps DE to find the best set of configurations. In addition to this, covariance matrix adaptation evolution strategy (CMAES) is also adopted to explore the search space more efficiently. The introduced algorithm, called SGDE, is tested on real-world proteins from the Protein data bank (PDB) using both a simplified and an all-atom model. The experiments show that SGDE performs better than the state-of-the-art algorithms on the PSP problems in both terms of the convergence rate and accuracy. In the case of run time complexity, SGDE significantly outperforms the other competitive algorithms for the adopted all-atom model."
]
} |
1903.01456 | 2920755795 | This paper presents a two-level protein folding optimization on a three-dimensional AB off-lattice model. The first level is responsible for forming conformations with a good hydrophobic core or a set of compact hydrophobic amino acid positions. These conformations are forwarded to the second level, where an accurate search is performed with the aim of locating conformations with the best energy value. The optimization process switches between these two levels until the stopping condition is satisfied. An auxiliary fitness function was designed for the first level, while the original fitness function is used in the second level. The auxiliary fitness function includes expression about the quality of the hydrophobic core. This expression is crucial for leading the search process to the promising solutions that have a good hydrophobic core and, consequently, improves the efficiency of the whole optimization process. Our differential evolution algorithm was used for demonstrating the efficiency of the two-level optimization. It was analyzed on well-known amino acid sequences that are used frequently in the literature. The obtained experimental results show that the employed two-level optimization improves the efficiency of our algorithm significantly, and that the proposed algorithm is superior to other state-of-the-art algorithms. | Information about a hydrophobic core, or a set of positions of the hydrophobic amino acids, is very useful for structure prediction in various methods. The authors in @cite_11 presented a constraint-based method whose key concept is the ability to compute maximally compact hydrophobic cores. Information about the hydrophobic core was also used within stochastic algorithms for PFO. In @cite_27 @cite_31 , a macro-mutation operator is incorporated into the genetic algorithm and applied to the three-dimensional face-centered cubic lattice.
This operator compresses the conformation and quickly forms the hydrophobic core. The obtained results show that the macro-mutation operator improves the efficiency of the algorithm significantly. | {
"cite_N": [
"@cite_27",
"@cite_31",
"@cite_11"
],
"mid": [
"2344850686",
"2257563257",
""
],
"abstract": [
"In-vitro methods for protein structure determination are time-consuming, cost-intensive, and failure-prone. Because of these expenses, alternative computer-based predictive methods have emerged. Predicting a protein’s 3-D structure from only its amino acid sequence—also known as ab initio protein structure prediction (PSP)—is computationally demanding because the search space is astronomically large and energy models are extremely complex. Some successes have been achieved in predictive methods but these are limited to small sized proteins (around 100 amino acids); thus, developing efficient algorithms, reducing the search space, and designing effective search guidance heuristics are necessary to study large sized proteins. An on-lattice model can be a better ground for rapidly developing and measuring the performance of a new algorithm, and hence we consider this model for larger proteins (>150 amino acids) to enhance the genetic algorithms (GAs) framework. In this paper, we formulate PSP as a combinatorial optimization problem that uses 3-D face-centered-cubic lattice coordinates to reduce the search space and hydrophobic-polar energy model to guide the search. The whole optimization process is controlled by an enhanced GA framework with four enhanced features: 1) an exhaustive generation approach to diversify the search; 2) a novel hydrophobic core-directed macro-mutation operator to intensify the search; 3) a per-generation duplication elimination strategy to prevent early convergence; and 4) a random-walk technique to recover from stagnation. On a set of standard benchmark proteins, our algorithm significantly outperforms state-of-the-art algorithms. We also experimentally show that our algorithm is robust enough to produce very similar results regardless of different parameter settings.",
"Graphical abstractDisplay Omitted HighlightsGraded energy-model strategically mixes the 20×20 MJ potential matrix with 2×2 HP energy model.HP guided macro-mutation operator within GA provides efficient sampling.Proposed Algorithm outperformed other state-of-the-art approaches.Splits the energy function related complexities into two less complex functions through macro-mutation operator. Protein structure prediction is considered as one of the most challenging and computationally intractable combinatorial problem. Thus, the efficient modeling of convoluted search space, the clever use of energy functions, and more importantly, the use of effective sampling algorithms become crucial to address this problem. For protein structure modeling, an off-lattice model provides limited scopes to exercise and evaluate the algorithmic developments due to its astronomically large set of data-points. In contrast, an on-lattice model widens the scopes and permits studying the relatively larger proteins because of its finite set of data-points. In this work, we took the full advantage of an on-lattice model by using a face-centered-cube lattice that has the highest packing density with the maximum degree of freedom. We proposed a graded energy-strategically mixes the Miyazawa-Jernigan (MJ) energy with the hydrophobic-polar (HP) energy-based genetic algorithm (GA) for conformational search. In our application, we introduced a 2×2 HP energy guided macro-mutation operator within the GA to explore the best possible local changes exhaustively. Conversely, the 20×20 MJ energy model-the ultimate objective function of our GA that needs to be minimized-considers the impacts amongst the 20 different amino acids and allow searching the globally acceptable conformations. On a set of benchmark proteins, our proposed approach outperformed state-of-the-art approaches in terms of the free energy levels and the root-mean-square deviations.",
""
]
} |
1903.01373 | 2919720931 | We introduce α-Rank, a principled evolutionary dynamics methodology for the evaluation and ranking of agents in large-scale multi-agent interactions, grounded in a novel dynamical game-theoretic solution concept called Markov-Conley chains (MCCs). The approach leverages continuous- and discrete-time evolutionary dynamical systems applied to empirical games, and scales tractably in the number of agents, the type of interactions, and the type of empirical games (symmetric and asymmetric). Current models are fundamentally limited in one or more of these dimensions and are not guaranteed to converge to the desired game-theoretic solution concept (typically the Nash equilibrium). α-Rank provides a ranking over the set of agents under evaluation and provides insights into their strengths, weaknesses, and long-term dynamics. This is a consequence of the links we establish to the MCC solution concept when the underlying evolutionary model's ranking-intensity parameter, α, is chosen to be large, which exactly forms the basis of α-Rank. In contrast to the Nash equilibrium, which is a static concept based on fixed points, MCCs are a dynamical solution concept based on the Markov chain formalism, Conley's Fundamental Theorem of Dynamical Systems, and the core ingredients of dynamical systems: fixed points, recurrent sets, periodic orbits, and limit cycles. α-Rank runs in polynomial time with respect to the total number of pure strategy profiles, whereas computing a Nash equilibrium for a general-sum game is known to be intractable. We introduce proofs that not only provide a unifying perspective of existing continuous- and discrete-time evolutionary evaluation models, but also reveal the formal underpinnings of the α-Rank methodology. We empirically validate the method in several domains including AlphaGo, AlphaZero, MuJoCo Soccer, and Poker.
| The purpose of the first applications of EGTA was to reduce the complexity of large economic problems in electronic commerce, such as continuous double auctions, supply chain management, market games, and automated trading @cite_53 @cite_28 @cite_70 @cite_5 . While these complex economic problems continue to be a primary application area of these methods @cite_32 @cite_30 @cite_2 @cite_44 , the general techniques have been applied in many different settings. These include analysis of interactions among heuristic meta-strategies in poker @cite_40 , network protocol compliance @cite_59 , collision avoidance in robotics @cite_14 , and security games @cite_79 @cite_46 @cite_8 . | {
"cite_N": [
"@cite_30",
"@cite_14",
"@cite_8",
"@cite_28",
"@cite_70",
"@cite_53",
"@cite_32",
"@cite_44",
"@cite_79",
"@cite_40",
"@cite_59",
"@cite_2",
"@cite_5",
"@cite_46"
],
"mid": [
"",
"",
"2765659133",
"2104771847",
"2149086825",
"2152897361",
"2320435886",
"2808100210",
"2016668099",
"2076911041",
"16124843",
"2750480045",
"102212266",
"2535017274"
],
"abstract": [
"",
"",
"We study the problem of allocating limited security countermeasures to protect network data from cyber-attacks, for scenarios modeled by Bayesian attack graphs. We consider multi-stage interactions between a network administrator and cybercriminals, formulated as a security game. This formulation is capable of representing security environments with significant dynamics and uncertainty, and very large strategy spaces. For the game model, we propose parameterized heuristic strategies for both players. Our heuristics exploit the topological structure of the attack graphs and employ different sampling methodologies to overcome the computational complexity in determining players' actions. Given the complexity of the game, we employ a simulation-based methodology, and perform empirical game analysis over an enumerated set of these heuristic strategies. Finally, we conduct experiments based on a variety of game settings to demonstrate the advantages of our heuristics in obtaining effective defense strategies which are robust to the uncertainty of the security environment.",
"Auctions define games of incomplete information for which it is often too hard to compute the exact Bayesian-Nash equilibrium. Instead, the infinite strategy space is often populated with heuristic strategies, such as myopic best-response to prices. Given these heuristic strategies, it can be useful to evaluate the strategies and the auction design by computing a Nash equilibrium across the restricted strategy space. First, it is necessary to compute the expected payoff for each heuristic strategy profile. This step involves sampling the auction and averaging over multiple simulations, and its cost can dominate the cost of computing the equilibrium given a payoff matrix. In this paper, we propose two information theoretic approaches to determine the next sample through an interleaving of equilibrium calculations and payoff refinement. Initial experiments demonstrate that both methods reduce error in the computed Nash equilibrium as samples are performed at faster rates than naive uniform sampling. The second, faster method, has a lower metadeliberation cost and better scaling properties. We discuss how our sampling methodology could be used within experimental mechanism design.",
"We consider a class of games with real-valued strategies and payoff information available only in the form of data from a given sample of strategy profiles. Solving such games with respect to the underlying strategy space requires generalizing from the data to a complete payoff-function representation. We address payoff-function learning as a standard regression problem, with provision for capturing known structure (e.g., symmetry) in the multiagent environment. To measure learning performance, we consider the relative utility of prescribed strategies, rather than the accuracy of payoff functions per se. We demonstrate our approach and evaluate its effectiveness on two examples: a two-player version of the first-price sealed-bid auction (with known analytical form), and a five-player market-based scheduling game (with no known solution). Additionally, we explore the efficacy of using relative utility of strategies as a target of supervised learning and as a learning model selector. Our experiments demonstrate its effectiveness in the former case, though not in the latter.",
"We develop a model for analyzing complex games with repeated interactions, for which a full game-theoretic analysis is intractable. Our approach treats exogenously specified, heuristic strategies, rather than the atomic actions, as primitive, and computes a heuristic-payoff table specifying the expected payoffs of the joint heuristic strategy space. We analyze two games based on (i) automated dynamic pricing and (ii) continuous double auction. For each game we compute Nash equilibria of previously published heuristic strategies. To determine the most plausible equilibria, we study the replicator dynamics of a large population playing the strategies. In order to account for errors in estimation of payoffs or improvements in strategies, we also analyze the dynamics and equilibria based on perturbed payoffs.",
"Frequent call markets have been proposed as a market design solution to the latency arms race perpetuated by high-frequency traders in continuous markets, but the path to widespread adoption of such markets is unclear. If such trading mechanisms were available, would anyone want to use them? This is ultimately a question of market choice, thus we model it as a game of strategic market selection, where agents choose to participate in either a frequent call market or a continuous double auction. Our market environment is populated by fast and slow traders, who reenter to trade at different rates. We employ empirical game-theoretic methods to determine the market types and trading strategies selected in equilibrium. We also analyze best-response patterns to characterize the frequent call market’s basin of attraction. Our findings show that in equilibrium, welfare of slow traders is generally higher in the call market. We also find strong evidence of a predator-prey relation between fast and slow traders: the fast traders chase agents into either market, and slow traders under pursuit seek the protection of the frequent call market.",
"",
"The effectiveness of a moving target defense depends on how it is deployed through specific system operations over time, and how attackers may respond to this deployment. We define a generic cyber-defense scenario, and examine the interplay between attack and defense strategies using empirical game-theoretic techniques. In this approach, the scenario is defined procedurally by a simulator, and data derived from systematic simulation is used to induce a game model. We explore a space of 72 game instances, defined by differences in agent objectives, attack cost, and ability of the defender to detect attack actions. We observe a range of qualitative strategic behaviors, which vary in clear patterns across environmental conditions. In particular, we find that the efficacy of deterrent defense is critically sensitive to detection capability, and in the absence of perfect detection the defender is often driven to proactive moving-target actions.",
"Abstract In this paper we investigate the evolutionary dynamics of strategic behavior in the game of poker by means of data gathered from a large number of real world poker games. We perform this study from an evolutionary game theoretic perspective using two Replicator Dynamics models. First we consider the basic selection model on this data, secondly we use a model which includes both selection and mutation. We investigate the dynamic properties by studying how rational players switch between different strategies under different circumstances, what the basins of attraction of the equilibria look like, and what the stability properties of the attractors are. We illustrate the dynamics using a simplex analysis. Our experimental results confirm existing domain knowledge of the game, namely that certain strategies are clearly inferior while others can be successful given certain game conditions.",
"Formal analyses of incentives for compliance with network protocols often appeal to gametheoretic models and concepts. Applications of game-theoretic analysis to network security have generally been limited to highly stylized models, where simplified environments enable tractable study of key strategic variables. We propose a simulation-based approach to gametheoretic analysis of protocol compliance, for scenarios with large populations of agents and large policy spaces. We define a general procedure for systematically exploring a structured policy space, directed expressly to resolve the qualitative classification of equilibrium behavior as compliant or non-compliant. The techniques are illustrated and exercised through an extensive case study analyzing compliance incentives for introduction-based routing. We find that the benefits of complying with the protocol are particularly strong for nodes subject to attack, and the overall compliance level achieved in equilibrium, while not universal, is sufficient to support the desired security goals of the protocol.",
"We investigate the effects of market making on market performance, focusing on allocative efficiency as well as gains from trade accrued by background traders. We employ empirical simulation-based methods to evaluate heuristic strategies for market makers as well as background investors in a variety of complex trading environments. Our market model incorporates private and common valuation elements, with dynamic fundamental value and asymmetric information. In this context, we compare the surplus achieved by background traders in strategic equilibrium, with and without a market maker. Our findings indicate that the presence of the market maker strongly tends to increase total welfare across various environments. Market-maker profit may or may not exceed the welfare gain, thus the effect on background-investor surplus is ambiguous. We find that market making tends to benefit investors in relatively thin markets, and situations where background traders are impatient, due to limited trading opportunities. The presence of additional market makers increases these benefits, as competition drives the market makers to provide liquidity at lower price spreads. A thorough sensitivity analysis indicates that these results are robust to reasonable changes in model parameters.",
"An emerging empirical methodology bridges the gap between game theory and simulation for practical strategic reasoning.",
"Distributed denial-of-service attacks are an increasing problem facing web applications, for which many defense techniques have been proposed, including several moving-target strategies. These strategies typically work by relocating targeted services over time, increasing uncertainty for the attacker, while trying not to disrupt legitimate users or incur excessive costs. Prior work has not shown, however, whether and how a rational defender would choose a moving-target method against an adaptive attacker, and under what conditions. We formulate a denial-of-service scenario as a two-player game, and solve a restricted-strategy version of the game using the methods of empirical game-theoretic analysis. Using agent-based simulation, we evaluate the performance of strategies from prior literature under a variety of attacks and environmental conditions. We find evidence for the strategic stability of various proposed strategies, such as proactive server movement, delayed attack timing, and suspected insider blocking, along with guidelines for when each is likely to be most effective."
]
} |
1903.01373 | 2919720931 | We introduce α-Rank, a principled evolutionary dynamics methodology for the evaluation and ranking of agents in large-scale multi-agent interactions, grounded in a novel dynamical game-theoretic solution concept called Markov-Conley chains (MCCs). The approach leverages continuous- and discrete-time evolutionary dynamical systems applied to empirical games, and scales tractably in the number of agents, the type of interactions, and the type of empirical games (symmetric and asymmetric). Current models are fundamentally limited in one or more of these dimensions and are not guaranteed to converge to the desired game-theoretic solution concept (typically the Nash equilibrium). α-Rank provides a ranking over the set of agents under evaluation and provides insights into their strengths, weaknesses, and long-term dynamics. This is a consequence of the links we establish to the MCC solution concept when the underlying evolutionary model's ranking-intensity parameter, α, is chosen to be large, which exactly forms the basis of α-Rank. In contrast to the Nash equilibrium, which is a static concept based on fixed points, MCCs are a dynamical solution concept based on the Markov chain formalism, Conley's Fundamental Theorem of Dynamical Systems, and the core ingredients of dynamical systems: fixed points, recurrent sets, periodic orbits, and limit cycles. α-Rank runs in polynomial time with respect to the total number of pure strategy profiles, whereas computing a Nash equilibrium for a general-sum game is known to be intractable. We introduce proofs that not only provide a unifying perspective of existing continuous- and discrete-time evolutionary evaluation models, but also reveal the formal underpinnings of the α-Rank methodology. We empirically validate the method in several domains including AlphaGo, AlphaZero, MuJoCo Soccer, and Poker.
| Evolutionary dynamics have often been presented as a practical tool for analyzing interactions among meta-strategies found in EGTA @cite_53 @cite_14 @cite_41, and for studying the change in policies of multiple learning agents @cite_41, as the EGTA approach is largely based on the same assumptions as evolutionary game theory, viz. repeated interactions among sub-groups sampled independently at random from an arbitrarily-large population of agents. | {
"cite_N": [
"@cite_41",
"@cite_53",
"@cite_14"
],
"mid": [
"1192553058",
"2152897361",
""
],
"abstract": [
"The interaction of multiple autonomous agents gives rise to highly dynamic and nondeterministic environments, contributing to the complexity in applications such as automated financial markets, smart grids, or robotics. Due to the sheer number of situations that may arise, it is not possible to foresee and program the optimal behaviour for all agents beforehand. Consequently, it becomes essential for the success of the system that the agents can learn their optimal behaviour and adapt to new situations or circumstances. The past two decades have seen the emergence of reinforcement learning, both in single and multi-agent settings, as a strong, robust and adaptive learning paradigm. Progress has been substantial, and a wide range of algorithms are now available. An important challenge in the domain of multi-agent learning is to gain qualitative insights into the resulting system dynamics. In the past decade, tools and methods from evolutionary game theory have been successfully employed to study multi-agent learning dynamics formally in strategic interactions. This article surveys the dynamical models that have been derived for various multi-agent reinforcement learning algorithms, making it possible to study and compare them qualitatively. Furthermore, new learning algorithms that have been introduced using these evolutionary game theoretic tools are reviewed. The evolutionary models can be used to study complex strategic interactions. Examples of such analysis are given for the domains of automated trading in stock markets and collision avoidance in multi-robot systems. The paper provides a roadmap on the progress that has been achieved in analysing the evolutionary dynamics of multi-agent learning by highlighting the main results and accomplishments.",
"We develop a model for analyzing complex games with repeated interactions, for which a full game-theoretic analysis is intractable. Our approach treats exogenously specified, heuristic strategies, rather than the atomic actions, as primitive, and computes a heuristic-payoff table specifying the expected payoffs of the joint heuristic strategy space. We analyze two games based on (i) automated dynamic pricing and (ii) continuous double auction. For each game we compute Nash equilibria of previously published heuristic strategies. To determine the most plausible equilibria, we study the replicator dynamics of a large population playing the strategies. In order to account for errors in estimation of payoffs or improvements in strategies, we also analyze the dynamics and equilibria based on perturbed payoffs.",
""
]
} |
1903.01373 | 2919720931 | We introduce α-Rank, a principled evolutionary dynamics methodology for the evaluation and ranking of agents in large-scale multi-agent interactions, grounded in a novel dynamical game-theoretic solution concept called Markov-Conley chains (MCCs). The approach leverages continuous- and discrete-time evolutionary dynamical systems applied to empirical games, and scales tractably in the number of agents, the type of interactions, and the type of empirical games (symmetric and asymmetric). Current models are fundamentally limited in one or more of these dimensions and are not guaranteed to converge to the desired game-theoretic solution concept (typically the Nash equilibrium). α-Rank provides a ranking over the set of agents under evaluation and provides insights into their strengths, weaknesses, and long-term dynamics. This is a consequence of the links we establish to the MCC solution concept when the underlying evolutionary model's ranking-intensity parameter, α, is chosen to be large, which exactly forms the basis of α-Rank. In contrast to the Nash equilibrium, which is a static concept based on fixed points, MCCs are a dynamical solution concept based on the Markov chain formalism, Conley's Fundamental Theorem of Dynamical Systems, and the core ingredients of dynamical systems: fixed points, recurrent sets, periodic orbits, and limit cycles. α-Rank runs in polynomial time with respect to the total number of pure strategy profiles, whereas computing a Nash equilibrium for a general-sum game is known to be intractable. We introduce proofs that not only provide a unifying perspective of existing continuous- and discrete-time evolutionary evaluation models, but also reveal the formal underpinnings of the α-Rank methodology. We empirically validate the method in several domains including AlphaGo, AlphaZero, MuJoCo Soccer, and Poker. 
| From the theoretical biology perspective, researchers have additionally deployed discrete-time evolutionary dynamics models @cite_49. These models typically provide insights into the macro-dynamics of the overall behavior of agents in strategy space, corresponding to flow rates at the edges of a manifold @cite_65 @cite_52 @cite_33 @cite_73 @cite_63. These studies usually focus on biological games and the evolution of cooperation and fairness in social dilemmas such as the iterated prisoner's dilemma or signalling games, deploying, amongst others, imitation dynamics with low mutation rates @cite_56 @cite_1. Similar efforts investigating evolutionary dynamics inspired by statistical physics models have been undertaken as well @cite_81 @cite_82. | {
"cite_N": [
"@cite_33",
"@cite_65",
"@cite_52",
"@cite_1",
"@cite_56",
"@cite_81",
"@cite_49",
"@cite_63",
"@cite_73",
"@cite_82"
],
"mid": [
"2031564575",
"2097141283",
"2156173537",
"220156613",
"2119086496",
"2897212219",
"604170243",
"2042040675",
"2051166571",
"2771187719"
],
"abstract": [
"In stochastic dynamical systems, different concepts of stability can be obtained in different limits. A particularly interesting example is evolutionary game theory, which is traditionally based on infinite populations, where strict Nash equilibria correspond to stable fixed points that are always evolutionarily stable. However, in finite populations stochastic effects can drive the system away from strict Nash equilibria, which gives rise to a new concept for evolutionary stability. The conventional and the new stability concepts may apparently contradict each other leading to conflicting predictions in large yet finite populations. We show that the two concepts can be derived from the frequency dependent Moran process in different limits. Our results help to determine the appropriate stability concept in large finite populations. The general validity of our findings is demonstrated showing that the same results are valid employing vastly different co-evolutionary processes.",
"Darwinian dynamics based on mutation and selection form the core of mathematical models for adaptation and coevolution of biological populations. The evolutionary outcome is often not a fitness-maximizing equilibrium but can include oscillations and chaos. For studying frequency-dependent selection, game-theoretic arguments are more appropriate than optimization algorithms. Replicator and adaptive dynamics describe short- and long-term evolution in phenotype space and have found applications ranging from animal behavior and ecology to speciation, macroevolution, and human language. Evolutionary game theory is an essential component of a mathematical and computational approach to biology.",
"We study evolutionary game dynamics in finite populations. We analyze an evolutionary process, which we call pairwise comparison, for which we adopt the ubiquitous Fermi distribution function from statistical mechanics. The inverse temperature in this process controls the intensity of selection, leading to a unified framework for evolutionary dynamics at all intensities of selection, from random drift to imitation dynamics. We derive a simple closed formula that determines the feasibility of cooperation in finite populations, whenever cooperation is modeled in terms of any symmetric two-person game. In contrast with previous results, the present formula is valid at all intensities of selection and for any initial condition. We investigate the evolutionary dynamics of cooperators in finite populations, and study the interplay between intensity of selection and the remnants of interior fixed points in infinite populations, as a function of a given initial number of cooperators, showing how this interplay strongly affects the approach to fixation of a given trait in finite populations, leading to counterintuitive results at different intensities of selection.",
"We model evolution according to an asymmetric game as occurring in multiple finite populations, one for each role in the game, and study the effect of subjecting individuals to stochastic strategy mutations. We show that, when these mutations occur sufficiently infrequently, the dynamics over all population states simplify to an ergodic Markov chain over just the pure population states (where each population is monomorphic). This makes calculation of the stationary distribution computationally feasible. The transition probabilities of this embedded Markov chain involve fixation probabilities of mutants in single populations. The asymmetry of the underlying game leads to fixation probabilities that are derived from frequency-independent selection, in contrast to the analogous single-population symmetric-game case (Fudenberg and Imhof, 2006). This frequency independence is useful in that it allows us to employ results from the population genetics literature to calculate the stationary distribution of the evolutionary process, giving sharper, and sometimes even analytic, results. We demonstrate the utility of this approach by applying it to a battle-of-the-sexes game, a Crawford–Sobel signalling game, and the beer-quiche game of Cho and Kreps (1987).",
"This note characterizes the impact of adding rare stochastic mutations to an “imitation dynamic,†meaning a process with the properties that absent strategies remain absent, and non-homogeneous states are transient. The resulting system will spend almost all of its time at the absorbing states of the no-mutation process. The work of Freidlin and Wentzell [Random Perturbations of Dynamical Systems, Springer, New York, 1984] and its extensions provide a general algorithm for calculating the limit distribution, but this algorithm can be complicated to apply. This note provides a simpler and more intuitive algorithm. Loosely speaking, in a process with K strategies, it is sufficient to find the invariant distribution of a KA—K Markov matrix on the K homogeneous states, where the probability of a transit from “all play i†to “all play j†is the probability of a transition from the state “all agents but 1 play i, 1 plays j†to the state “all play j†.",
"Pro-social punishment and exclusion are common means to elevate the level of cooperation among unrelated individuals. Indeed, it is worth pointing out that the combined use of these two strategies is quite common across human societies. However, it is still not known how a combined strategy where punishment and exclusion are switched can promote cooperation from the theoretical perspective. In this paper, we thus propose two different switching strategies, namely, peer switching that is based on peer punishment and peer exclusion, and pool switching that is based on pool punishment and pool exclusion. Individuals adopting the switching strategy will punish defectors when their numbers are below a threshold and exclude them otherwise. We study how the two switching strategies influence the evolutionary dynamics in the public goods game. We show that an intermediate value of the threshold leads to a stable coexistence of cooperators, defectors, and players adopting the switching strategy in a well-mixed population, and this regardless of whether the pool-based or the peer-based switching strategy is introduced. Moreover, we show that the pure exclusion strategy alone is able to evoke a limit cycle attractor in the evolutionary dynamics, such that cooperation can coexist with other strategies.Pro-social punishment and exclusion are common means to elevate the level of cooperation among unrelated individuals. Indeed, it is worth pointing out that the combined use of these two strategies is quite common across human societies. However, it is still not known how a combined strategy where punishment and exclusion are switched can promote cooperation from the theoretical perspective. In this paper, we thus propose two different switching strategies, namely, peer switching that is based on peer punishment and peer exclusion, and pool switching that is based on pool punishment and pool exclusion. 
Individuals adopting the switching strategy will punish defectors when their numbers are below a threshold and exclude them otherwise. We study how the two switching strategies influence the evolutionary dynamics in the public goods game. We show that an intermediate value of the threshold leads to a stable coexistence of cooperators, defectors, and players adopting the switching strategy in a well-mixed pop...",
"Preface 1. Introduction 2. What Evolution Is 3. Fitness Landscapes and Sequence Spaces 4. Evolutionary Games 5. Prisoners of the Dilemma 6. Finite Populations 7. Games in Finite Populations 8. Evolutionary Graph Theory 9. Spatial Games 10. HIV Infection 11. The Evolution of Virulence 12. The Evolutionary Dynamics of Cancer 13. Language Evolution 14. Conclusion Further Reading References Index",
"Often groups need to meet repeatedly before a decision is reached. Hence, most individual decisions will be contingent on decisions taken previously by others. In particular, the decision to cooperate or not will depend on one’s own assessment of what constitutes a fair group outcome. Making use of a repeated N-person prisoner’s dilemma, we show that reciprocation towards groups opens a window of opportunity for cooperation to thrive, leading populations to engage in dynamics involving both coordination and coexistence, and characterized by cycles of cooperation and defection. Furthermore, we show that this process leads to the emergence of fairness, whose level will depend on the dilemma at stake.",
"A finite-population dynamic evolutionary model is presented, which shows that increasing the individual capacity of sending pre-play signals (without any pre-defined meaning), opens a route for cooperation. The population dynamics leads individuals to discriminate between different signals and react accordingly to the signals received. The proportion of time that the population spends in different states can be calculated analytically. We show that increasing the number of different signals benefits cooperative strategies, illustrating how cooperators may take profit from a diverse signaling portfolio to forecast future behaviors and avoid being cheated by defectors.",
"Cooperation is a difficult proposition in the face of Darwinian selection. Those that defect have an evolutionary advantage over cooperators who should therefore die out. However, spatial structure enables cooperators to survive through the formation of homogeneous clusters, which is the hallmark of network reciprocity. Here we go beyond this traditional setup and study the spatiotemporal dynamics of cooperation in a population of populations. We use the prisoner's dilemma game as the mathematical model and show that considering several populations simultaneously gives rise to fascinating spatiotemporal dynamics and pattern formation. Even the simplest assumption that strategies between different populations are payoff-neutral with one another results in the spontaneous emergence of cyclic dominance, where defectors of one population become prey of cooperators in the other population, and vice versa. Moreover, if social interactions within different populations are characterized by significantly different temptations to defect, we observe that defectors in the population with the largest temptation counterintuitively vanish the fastest, while cooperators that hang on eventually take over the whole available space. Our results reveal that considering the simultaneous presence of different populations significantly expands the complexity of evolutionary dynamics in structured populations, and it allows us to understand the stability of cooperation under adverse conditions that could never be bridged by network reciprocity alone."
]
} |
1903.01489 | 2905654560 | Current movie captioning architectures are not capable of mentioning characters with their proper name, replacing them with a generic “someone” tag. The lack of movie description datasets with characters’ visual annotations surely plays a relevant role in this shortage. Recently, we proposed to extend the M-VAD dataset by introducing such information. In this paper, we present an improved version of the dataset, namely M-VAD Names, and its semi-automatic annotation procedure. The resulting dataset contains 63 k visual tracks and 34 k textual mentions, all associated with character identities. To showcase the features of the dataset and quantify the complexity of the naming task, we investigate multimodal architectures to replace the “someone” tags with proper character names in existing video captions. The evaluation is further extended by testing this application on videos outside of the M-VAD Names dataset. | The problem of identifying characters in movies or TV series has been widely addressed by computer vision researchers, who principally focus on linking people with their names by tracking faces in the video and assigning names to them @cite_29 @cite_28 @cite_26 @cite_2 @cite_19. For example, @cite_28 @cite_2 tackled this problem by automatically aligning subtitles and script texts of movies and TV series. In particular, Everingham et al. @cite_28 aimed to associate speaker names present in the movie scripts with the correct faces appearing in the movie clips by detecting face tracks with lip motion. Sivic et al. @cite_2 extended the previous work, limited to classifying frontal faces, by adding the detection and recognition of characters in profile views, improving the overall performance. | {
"cite_N": [
"@cite_26",
"@cite_28",
"@cite_29",
"@cite_19",
"@cite_2"
],
"mid": [
"38568571",
"2168996682",
"2119031011",
"2055251102",
"2121027212"
],
"abstract": [
"Natural language descriptions of videos provide a potentially rich and vast source of supervision. However, the highly-varied nature of language presents a major barrier to its effective use. What is needed are models that can reason over uncertainty over both videos and text. In this paper, we tackle the core task of person naming: assigning names of people in the cast to human tracks in TV videos. Screenplay scripts accompanying the video provide some crude supervision about who’s in the video. However, even the basic problem of knowing who is mentioned in the script is often difficult, since language often refers to people using pronouns (e.g., “he”) and nominals (e.g., “man”) rather than actual names (e.g., “Susan”). Resolving the identity of these mentions is the task of coreference resolution, which is an active area of research in natural language processing. We develop a joint model for person naming and coreference resolution, and in the process, infer a latent alignment between tracks and mentions. We evaluate our model on both vision and NLP tasks on a new dataset of 19 TV episodes. On both tasks, we significantly outperform the independent baselines.",
"We investigate the problem of automatically labelling appearances of characters in TV or film material. This is tremendously challenging due to the huge variation in imaged appearance of each character and the weakness and ambiguity of available annotation. However, we demonstrate that high precision can be achieved by combining multiple sources of information, both visual and textual. The principal novelties that we introduce are: (i) automatic generation of time stamped character annotation by aligning subtitles and transcripts; (ii) strengthening the supervisory information by identifying when characters are speaking; (iii) using complementary cues of face matching and clothing matching to propose common annotations for face tracks. Results are presented on episodes of the TV series “Buffy the Vampire Slayer”.",
"We address the problem of learning a joint model of actors and actions in movies using weak supervision provided by scripts. Specifically, we extract actor action pairs from the script and use them as constraints in a discriminative clustering framework. The corresponding optimization problem is formulated as a quadratic program under linear constraints. People in video are represented by automatically extracted and tracked faces together with corresponding motion features. First, we apply the proposed framework to the task of learning names of characters in the movie and demonstrate significant improvements over previous methods used for this task. Second, we explore the joint actor action constraint and show its advantage for weakly supervised action learning. We validate our method in the challenging setting of localizing and recognizing characters and their actions in feature length movies Casablanca and American Beauty.",
"We describe a probabilistic method for identifying characters in TV series or movies. We aim at labeling every character appearance, and not only those where a face can be detected. Consequently, our basic unit of appearance is a person track (as opposed to a face track). We model each TV series episode as a Markov Random Field, integrating face recognition, clothing appearance, speaker recognition and contextual constraints in a probabilistic manner. The identification task is then formulated as an energy minimization problem. In order to identify tracks without faces, we learn clothing models by adapting available face recognition results. Within a scene, as indicated by prior analysis of the temporal structure of the TV series, clothing features are combined by agglomerative clustering. We evaluate our approach on the first 6 episodes of The Big Bang Theory and achieve an absolute improvement of 20 for person identification and 12 for face recognition.",
"We investigate the problem of automatically labelling faces of characters in TV or movie material with their names, using only weak supervision from automatically-aligned subtitle and script text. Our previous work ( [8]) demonstrated promising results on the task, but the coverage of the method (proportion of video labelled) and generalization was limited by a restriction to frontal faces and nearest neighbour classification. In this paper we build on that method, extending the coverage greatly by the detection and recognition of characters in profile views. In addition, we make the following contributions: (i) seamless tracking, integration and recognition of profile and frontal detections, and (ii) a character specific multiple kernel classifier which is able to learn the features best able to discriminate between the characters. We report results on seven episodes of the TV series \"Buffy the Vampire Slayer\", demonstrating significantly increased coverage and performance with respect to previous methods on this material."
]
} |
1903.01489 | 2905654560 | Current movie captioning architectures are not capable of mentioning characters with their proper name, replacing them with a generic “someone” tag. The lack of movie description datasets with characters’ visual annotations surely plays a relevant role in this shortage. Recently, we proposed to extend the M-VAD dataset by introducing such information. In this paper, we present an improved version of the dataset, namely M-VAD Names, and its semi-automatic annotation procedure. The resulting dataset contains 63 k visual tracks and 34 k textual mentions, all associated with character identities. To showcase the features of the dataset and quantify the complexity of the naming task, we investigate multimodal architectures to replace the “someone” tags with proper character names in existing video captions. The evaluation is further extended by testing this application on videos outside of the M-VAD Names dataset. | In @cite_19, each TV series episode is instead modelled as a Markov Random Field, integrating cues from face, speech, and clothing. Bojanowski et al. @cite_29 proposed a method to extract actor action pairs from movie scripts and used them as constraints in a discriminative clustering framework. In @cite_26, the authors introduced a joint model for person naming and co-reference resolution, which consists in resolving the identity of ambiguous mentions of people such as pronouns (e.g., “he” or “she”) and nominals (e.g., “man”). | {
"cite_N": [
"@cite_19",
"@cite_29",
"@cite_26"
],
"mid": [
"2055251102",
"2119031011",
"38568571"
],
"abstract": [
"We describe a probabilistic method for identifying characters in TV series or movies. We aim at labeling every character appearance, and not only those where a face can be detected. Consequently, our basic unit of appearance is a person track (as opposed to a face track). We model each TV series episode as a Markov Random Field, integrating face recognition, clothing appearance, speaker recognition and contextual constraints in a probabilistic manner. The identification task is then formulated as an energy minimization problem. In order to identify tracks without faces, we learn clothing models by adapting available face recognition results. Within a scene, as indicated by prior analysis of the temporal structure of the TV series, clothing features are combined by agglomerative clustering. We evaluate our approach on the first 6 episodes of The Big Bang Theory and achieve an absolute improvement of 20 for person identification and 12 for face recognition.",
"We address the problem of learning a joint model of actors and actions in movies using weak supervision provided by scripts. Specifically, we extract actor action pairs from the script and use them as constraints in a discriminative clustering framework. The corresponding optimization problem is formulated as a quadratic program under linear constraints. People in video are represented by automatically extracted and tracked faces together with corresponding motion features. First, we apply the proposed framework to the task of learning names of characters in the movie and demonstrate significant improvements over previous methods used for this task. Second, we explore the joint actor action constraint and show its advantage for weakly supervised action learning. We validate our method in the challenging setting of localizing and recognizing characters and their actions in feature length movies Casablanca and American Beauty.",
"Natural language descriptions of videos provide a potentially rich and vast source of supervision. However, the highly-varied nature of language presents a major barrier to its effective use. What is needed are models that can reason over uncertainty over both videos and text. In this paper, we tackle the core task of person naming: assigning names of people in the cast to human tracks in TV videos. Screenplay scripts accompanying the video provide some crude supervision about who’s in the video. However, even the basic problem of knowing who is mentioned in the script is often difficult, since language often refers to people using pronouns (e.g., “he”) and nominals (e.g., “man”) rather than actual names (e.g., “Susan”). Resolving the identity of these mentions is the task of coreference resolution, which is an active area of research in natural language processing. We develop a joint model for person naming and coreference resolution, and in the process, infer a latent alignment between tracks and mentions. We evaluate our model on both vision and NLP tasks on a new dataset of 19 TV episodes. On both tasks, we significantly outperform the independent baselines."
]
} |
1903.01489 | 2905654560 | Current movie captioning architectures are not capable of mentioning characters with their proper name, replacing them with a generic “someone” tag. The lack of movie description datasets with characters’ visual annotations surely plays a relevant role in this shortage. Recently, we proposed to extend the M-VAD dataset by introducing such information. In this paper, we present an improved version of the dataset, namely M-VAD Names, and its semi-automatic annotation procedure. The resulting dataset contains 63 k visual tracks and 34 k textual mentions, all associated with character identities. To showcase the features of the dataset and quantify the complexity of the naming task, we investigate multimodal architectures to replace the “someone” tags with proper character names in existing video captions. The evaluation is further extended by testing this application on videos outside of the M-VAD Names dataset. | Recently, Rohrbach et al. @cite_41 addressed the problem of generating video descriptions with grounded and co-referenced people by proposing a deeply-learned model. This task significantly differs from the one tackled in this paper, as it aims at predicting the spatial location in which a given character appears, and at producing captions with proper names in the correct place. | {
"cite_N": [
"@cite_41"
],
"mid": [
"2605585413"
],
"abstract": [
"Learning how to generate descriptions of images or videos received major interest both in the Computer Vision and Natural Language Processing communities. While a few works have proposed to learn a grounding during the generation process in an unsupervised way (via an attention mechanism), it remains unclear how good the quality of the grounding is and whether it benefits the description quality. In this work we propose a movie description model which learns to generate description and jointly ground (localize) the mentioned characters as well as do visual co-reference resolution between pairs of consecutive sentences clips. We also propose to use weak localization supervision through character mentions provided in movie descriptions to learn the character grounding. At training time, we first learn how to localize characters by relating their visual appearance to mentions in the descriptions via a semi-supervised approach. We then provide this (noisy) supervision into our description model which greatly improves its performance. Our proposed description model improves over prior work w.r.t. generated description quality and additionally provides grounding and local co-reference resolution. We evaluate it on the MPII Movie Description dataset using automatic and human evaluation measures and using our newly collected grounding and co-reference data for characters."
]
} |
1903.01489 | 2905654560 | Current movie captioning architectures are not capable of mentioning characters with their proper name, replacing them with a generic “someone” tag. The lack of movie description datasets with characters’ visual annotations surely plays a relevant role in this shortage. Recently, we proposed to extend the M-VAD dataset by introducing such information. In this paper, we present an improved version of the dataset, namely M-VAD Names, and its semi-automatic annotation procedure. The resulting dataset contains 63 k visual tracks and 34 k textual mentions, all associated with character identities. To showcase the features of the dataset and quantify the complexity of the naming task, we investigate multimodal architectures to replace the “someone” tags with proper character names in existing video captions. The evaluation is further extended by testing this application on videos outside of the M-VAD Names dataset. | Miech et al. @cite_12, instead, addressed the problem of weakly supervised learning of actions and actors from movies by applying an online optimization algorithm based on the Block-Coordinate Frank-Wolfe method. Finally, in @cite_40 an end-to-end system for detecting and clustering faces by identity in full-length movies is proposed. However, this approach differs from the aforementioned works, as it only aims at clustering face tracks without naming the corresponding movie characters. | {
"cite_N": [
"@cite_40",
"@cite_12"
],
"mid": [
"2962698660",
"2964102650"
],
"abstract": [
"We present an end-to-end system for detecting and clustering faces by identity in full-length movies. Unlike works that start with a predefined set of detected faces, we consider the end-to-end problem of detection and clustering together. We make three separate contributions. First, we combine a state-of-the-art face detector with a generic tracker to extract high quality face tracklets. We then introduce a novel clustering method, motivated by the classic graph theory results of Erdos and Renyi. It is based on the observations that large clusters can be fully connected by joining just a small fraction of their point pairs, while just a single connection between two different people can lead to poor clustering results. This suggests clustering using a verification system with very few false positives but perhaps moderate recall. We introduce a novel verification method, rank-1 counts verification, that has this property, and use it in a link-based clustering scheme. Finally, we define a novel end-to-end detection and clustering evaluation metric allowing us to assess the accuracy of the entire end-to-end system. We present state-of-the-art results on multiple video data sets and also on standard face databases.",
"Discriminative clustering has been successfully applied to a number of weakly supervised learning tasks. Such applications include person and action recognition, text-to-video alignment, object co-segmentation and co-localization in videos and images. One drawback of discriminative clustering, however, is its limited scalability. We address this issue and propose an online optimization algorithm based on the Block-Coordinate Frank-Wolfe algorithm. We apply the proposed method to the problem of weakly supervised learning of actions and actors from movies together with corresponding movie scripts. The scaling up of the learning problem to 66 feature-length movies enables us to significantly improve weakly supervised action recognition."
]
} |
1903.01489 | 2905654560 | Current movie captioning architectures are not capable of mentioning characters with their proper name, replacing them with a generic “someone” tag. The lack of movie description datasets with characters’ visual annotations surely plays a relevant role in this shortage. Recently, we proposed to extend the M-VAD dataset by introducing such information. In this paper, we present an improved version of the dataset, namely M-VAD Names, and its semi-automatic annotation procedure. The resulting dataset contains 63 k visual tracks and 34 k textual mentions, all associated with character identities. To showcase the features of the dataset and quantify the complexity of the naming task, we investigate multimodal architectures to replace the “someone” tags with proper character names in existing video captions. The evaluation is further extended by testing this application on videos outside of the M-VAD Names dataset. | Several other methods have been proposed towards understanding social aspects of movies and TV series scenes for either classifying different types of interactions @cite_39 or predicting whether people are looking at each other @cite_30 @cite_27 . As an example, Vicol et al. @cite_0 introduced a novel dataset which provides graph-based annotations of social situations appearing in movie clips to capture who is present in the clip, their emotional and physical attributes, their relationships, and the interactions between them. | {
"cite_N": [
"@cite_30",
"@cite_27",
"@cite_0",
"@cite_39"
],
"mid": [
"1484918855",
"1971029019",
"2963542293",
"1979545636"
],
"abstract": [
"If you have ever watched movies or television shows, you know how easy it is to tell the good characters from the bad ones. Little, however, is known \"whether\" or \"how\" computers can achieve such high-level understanding of movies. In this paper, we take the first step towards learning the relations among movie characters using visual and auditory cues. Specifically, we use support vector regression to estimate local characterization of adverseness at the scene level. Such local properties are then synthesized via statistical learning based on Gaussian processes to derive the affinity between the movie characters. Once the affinity is learned, we perform social network analysis to find communities of characters and identify the leader of each community. We experimentally demonstrate that the relations among characters can be determined with reasonable accuracy from the movie content.",
"The objective of this work is to determine if people are interacting in TV video by detecting whether they are looking at each other or not. We determine both the temporal period of the interaction and also spatially localize the relevant people. We make the following four contributions: (i) head detection with implicit coarse pose information (front, profile, back); (ii) continuous head pose estimation in unconstrained scenarios (TV video) using Gaussian process regression; (iii) propose and evaluate several methods for assessing whether and when pairs of people are looking at each other in a video shot; and (iv) introduce new ground truth annotation for this task, extending the TV human interactions dataset (Patron- 2010) The performance of the methods is evaluated on this dataset, which consists of 300 video clips extracted from TV shows. Despite the variety and difficulty of this video material, our best method obtains an average precision of 87.6 in a fully automatic manner.",
"There is growing interest in artificial intelligence to build socially intelligent robots. This requires machines to have the ability to \"read\" people's emotions, motivations, and other factors that affect behavior. Towards this goal, we introduce a novel dataset called MovieGraphs which provides detailed, graph-based annotations of social situations depicted in movie clips. Each graph consists of several types of nodes, to capture who is present in the clip, their emotional and physical attributes, their relationships (i.e., parent child), and the interactions between them. Most interactions are associated with topics that provide additional details, and reasons that give motivations for actions. In addition, most interactions and many attributes are grounded in the video with time stamps. We provide a thorough analysis of our dataset, showing interesting common-sense correlations between different social aspects of scenes, as well as across scenes over time. We propose a method for querying videos and text with graphs, and show that: 1) our graphs contain rich and sufficient information to summarize and localize each scene; and 2) subgraphs allow us to describe situations at an abstract level and retrieve multiple semantically relevant situations. We also propose methods for interaction understanding via ordering, and reason understanding. MovieGraphs is the first benchmark to focus on inferred properties of human-centric situations, and opens up an exciting avenue towards socially-intelligent AI agents.",
"The objective of this work is recognition and spatiotemporal localization of two-person interactions in video. Our approach is person-centric. As a first stage we track all upper bodies and heads in a video using a tracking-by-detection approach that combines detections with KLT tracking and clique partitioning, together with occlusion detection, to yield robust person tracks. We develop local descriptors of activity based on the head orientation (estimated using a set of pose-specific classifiers) and the local spatiotemporal region around them, together with global descriptors that encode the relative positions of people as a function of interaction type. Learning and inference on the model uses a structured output SVM which combines the local and global descriptors in a principled manner. Inference using the model yields information about which pairs of people are interacting, their interaction class, and their head orientation (which is also treated as a variable, enabling mistakes in the classifier to be corrected using global context). We show that inference can be carried out with polynomial complexity in the number of people, and describe an efficient algorithm for this. The method is evaluated on a new dataset comprising 300 video clips acquired from 23 different TV shows and on the benchmark UT--Interaction dataset."
]
} |
1903.01489 | 2905654560 | Current movie captioning architectures are not capable of mentioning characters with their proper name, replacing them with a generic “someone” tag. The lack of movie description datasets with characters’ visual annotations surely plays a relevant role in this shortage. Recently, we proposed to extend the M-VAD dataset by introducing such information. In this paper, we present an improved version of the dataset, namely M-VAD Names, and its semi-automatic annotation procedure. The resulting dataset contains 63 k visual tracks and 34 k textual mentions, all associated with character identities. To showcase the features of the dataset and quantify the complexity of the naming task, we investigate multimodal architectures to replace the “someone” tags with proper character names in existing video captions. The evaluation is further extended by testing this application on videos outside of the M-VAD Names dataset. | The generation of natural language descriptions of visual content has received large interest since the emergence of recurrent networks, either for single images @cite_42 , user-generated videos @cite_20 , or movie clips @cite_14 @cite_8 . First approaches described the input video through mean-pooled CNN features @cite_13 or sequentially encoded by a recurrent layer @cite_20 @cite_8 . This strategy was then followed by the majority of video captioning approaches, either by incorporating attentive mechanisms @cite_21 in the sentence decoder, by building a common visual-semantic embedding @cite_37 , or by adding external knowledge with language models @cite_6 or visual classifiers @cite_14 . | {
"cite_N": [
"@cite_37",
"@cite_14",
"@cite_8",
"@cite_42",
"@cite_21",
"@cite_6",
"@cite_13",
"@cite_20"
],
"mid": [
"1573040851",
"1893116441",
"2139501017",
"2173180041",
"1586939924",
"2963410018",
"2964241990",
"2951183276"
],
"abstract": [
"Automatically describing video content with natural language is a fundamental challenge of computer vision. Re-current Neural Networks (RNNs), which models sequence dynamics, has attracted increasing attention on visual interpretation. However, most existing approaches generate a word locally with the given previous words and the visual content, while the relationship between sentence semantics and visual content is not holistically exploited. As a result, the generated sentences may be contextually correct but the semantics (e.g., subjects, verbs or objects) are not true. This paper presents a novel unified framework, named Long Short-Term Memory with visual-semantic Embedding (LSTM-E), which can simultaneously explore the learning of LSTM and visual-semantic embedding. The former aims to locally maximize the probability of generating the next word given previous words and visual content, while the latter is to create a visual-semantic embedding space for enforcing the relationship between the semantics of the entire sentence and visual content. The experiments on YouTube2Text dataset show that our proposed LSTM-E achieves to-date the best published performance in generating natural sentences: 45.3 and 31.0 in terms of BLEU@4 and METEOR, respectively. Superior performances are also reported on two movie description datasets (M-VAD and MPII-MD). In addition, we demonstrate that LSTM-E outperforms several state-of-the-art techniques in predicting Subject-Verb-Object (SVO) triplets.",
"Generating descriptions for videos has many applications including assisting blind people and human-robot interaction. The recent advances in image captioning as well as the release of large-scale movie description datasets such as MPII-MD [28] and M-VAD [31] allow to study this task in more depth. Many of the proposed methods for image captioning rely on pre-trained object classifier CNNs and Long Short-Term Memory recurrent networks (LSTMs) for generating descriptions. While image description focuses on objects, we argue that it is important to distinguish verbs, objects, and places in the setting of movie description. In this work we show how to learn robust visual classifiers from the weak annotations of the sentence descriptions. Based on these classifiers we generate a description using an LSTM. We explore different design choices to build and train the LSTM and achieve the best performance to date on the challenging MPII-MD and M-VAD datasets. We compare and analyze our approach and prior work along various dimensions to better understand the key challenges of the movie description task.",
"Real-world videos often have complex dynamics, methods for generating open-domain video descriptions should be senstive to temporal structure and allow both input (sequence of frames) and output (sequence of words) of variable length. To approach this problem we propose a novel end-to-end sequence-to-sequence model to generate captions for videos. For this we exploit recurrent neural networks, specifically LSTMs, which have demonstrated state-of-the-art performance in image caption generation. Our LSTM model is trained on video-sentence pairs and learns to associate a sequence of video frames to a sequence of words in order to generate a description of the event in the video clip. Our model naturally is able to learn the temporal structure of the sequence of frames as well as the sequence model of the generated sentences, i.e. a language model. We evaluate several variants of our model that exploit different visual features on a standard set of YouTube videos and two movie description datasets (M-VAD and MPII-MD).",
"While recent deep neural network models have achieved promising results on the image captioning task, they rely largely on the availability of corpora with paired image and sentence captions to describe objects in context. In this work, we propose the Deep Compositional Captioner (DCC) to address the task of generating descriptions of novel objects which are not present in paired imagesentence datasets. Our method achieves this by leveraging large object recognition datasets and external text corpora and by transferring knowledge between semantically similar concepts. Current deep caption models can only describe objects contained in paired image-sentence corpora, despite the fact that they are pre-trained with large object recognition datasets, namely ImageNet. In contrast, our model can compose sentences that describe novel objects and their interactions with other objects. We demonstrate our model's ability to describe novel concepts by empirically evaluating its performance on MSCOCO and show qualitative results on ImageNet images of objects for which no paired image-sentence data exist. Further, we extend our approach to generate descriptions of objects in video clips. Our results show that DCC has distinct advantages over existing image and video captioning approaches for generating descriptions of new objects in context.",
"Recent progress in using recurrent neural networks (RNNs) for image description has motivated the exploration of their application for video description. However, while images are static, working with videos requires modeling their dynamic temporal structure and then properly integrating that information into a natural language description model. In this context, we propose an approach that successfully takes into account both the local and global temporal structure of videos to produce descriptions. First, our approach incorporates a spatial temporal 3-D convolutional neural network (3-D CNN) representation of the short temporal dynamics. The 3-D CNN representation is trained on video action recognition tasks, so as to produce a representation that is tuned to human motion and behavior. Second we propose a temporal attention mechanism that allows to go beyond local temporal modeling and learns to automatically select the most relevant temporal segments given the text-generating RNN. Our approach exceeds the current state-of-art for both BLEU and METEOR metrics on the Youtube2Text dataset. We also present results on a new, larger and more challenging dataset of paired video and natural language descriptions.",
"",
"Solving the visual symbol grounding problem has long been a goal of artificial intelligence. The field appears to be advancing closer to this goal with recent breakthroughs in deep learning for natural language grounding in static images. In this paper, we propose to translate videos directly to sentences using a unified deep neural network with both convolutional and recurrent structure. Described video datasets are scarce, and most existing methods have been applied to toy domains with a small vocabulary of possible words. By transferring knowledge from 1.2M+ images with category labels and 100,000+ images with captions, our method is able to create sentence descriptions of open-domain videos with large vocabularies. We compare our approach with recent work using language generation metrics, subject, verb, and object prediction accuracy, and a human evaluation.",
"Models based on deep convolutional networks have dominated recent image interpretation tasks; we investigate whether models which are also recurrent, or \"temporally deep\", are effective for tasks involving sequences, visual and otherwise. We develop a novel recurrent convolutional architecture suitable for large-scale visual learning which is end-to-end trainable, and demonstrate the value of these models on benchmark video recognition tasks, image description and retrieval problems, and video narration challenges. In contrast to current models which assume a fixed spatio-temporal receptive field or simple temporal averaging for sequential processing, recurrent convolutional models are \"doubly deep\"' in that they can be compositional in spatial and temporal \"layers\". Such models may have advantages when target concepts are complex and or training data are limited. Learning long-term dependencies is possible when nonlinearities are incorporated into the network state updates. Long-term RNN models are appealing in that they directly can map variable-length inputs (e.g., video frames) to variable length outputs (e.g., natural language text) and can model complex temporal dynamics; yet they can be optimized with backpropagation. Our recurrent long-term models are directly connected to modern visual convnet models and can be jointly trained to simultaneously learn temporal dynamics and convolutional perceptual representations. Our results show such models have distinct advantages over state-of-the-art models for recognition or generation which are separately defined and or optimized."
]
} |
1903.01489 | 2905654560 | Current movie captioning architectures are not capable of mentioning characters with their proper name, replacing them with a generic “someone” tag. The lack of movie description datasets with characters’ visual annotations surely plays a relevant role in this shortage. Recently, we proposed to extend the M-VAD dataset by introducing such information. In this paper, we present an improved version of the dataset, namely M-VAD Names, and its semi-automatic annotation procedure. The resulting dataset contains 63 k visual tracks and 34 k textual mentions, all associated with character identities. To showcase the features of the dataset and quantify the complexity of the naming task, we investigate multimodal architectures to replace the “someone” tags with proper character names in existing video captions. The evaluation is further extended by testing this application on videos outside of the M-VAD Names dataset. | Recent video captioning models have improved both components of the encoder-decoder approach by significantly changing their structure. Yu et al. @cite_25 focused on the sentence decoder and proposed a hierarchical model containing a sentence and a paragraph generator. In particular, the sentence generator produces one simple short sentence that describes a specific short video interval by exploiting both temporal and spatial attention mechanisms. In contrast, Pan et al. @cite_5 concentrated on the video encoding stage and introduced a hierarchical recurrent encoder to exploit temporal information of videos. In @cite_3 , instead, the authors proposed a modification of the LSTM cell able to identify discontinuity points between frames or segments and to modify the temporal connections of the encoding layer accordingly. | {
"cite_N": [
"@cite_5",
"@cite_25",
"@cite_3"
],
"mid": [
"2963843052",
"2963576560",
"2557264465"
],
"abstract": [
"Recently, deep learning approach, especially deep Convolutional Neural Networks (ConvNets), have achieved overwhelming accuracy with fast processing speed for image classification. Incorporating temporal structure with deep ConvNets for video representation becomes a fundamental problem for video content analysis. In this paper, we propose a new approach, namely Hierarchical Recurrent Neural Encoder (HRNE), to exploit temporal information of videos. Compared to recent video representation inference approaches, this paper makes the following three contributions. First, our HRNE is able to efficiently exploit video temporal structure in a longer range by reducing the length of input information flow, and compositing multiple consecutive inputs at a higher level. Second, computation operations are significantly lessened while attaining more non-linearity. Third, HRNE is able to uncover temporal tran-sitions between frame chunks with different granularities, i.e. it can model the temporal transitions between frames as well as the transitions between segments. We apply the new method to video captioning where temporal information plays a crucial role. Experiments demonstrate that our method outperforms the state-of-the-art on video captioning benchmarks.",
"We present an approach that exploits hierarchical Recurrent Neural Networks (RNNs) to tackle the video captioning problem, i.e., generating one or multiple sentences to describe a realistic video. Our hierarchical framework contains a sentence generator and a paragraph generator. The sentence generator produces one simple short sentence that describes a specific short video interval. It exploits both temporal-and spatial-attention mechanisms to selectively focus on visual elements during generation. The paragraph generator captures the inter-sentence dependency by taking as input the sentential embedding produced by the sentence generator, combining it with the paragraph history, and outputting the new initial state for the sentence generator. We evaluate our approach on two large-scale benchmark datasets: YouTubeClips and TACoS-MultiLevel. The experiments demonstrate that our approach significantly outperforms the current state-of-the-art methods with BLEU@4 scores 0.499 and 0.305 respectively.",
"The use of Recurrent Neural Networks for video captioning has recently gained a lot of attention, since they can be used both to encode the input video and to generate the corresponding description. In this paper, we present a recurrent video encoding scheme which can discover and leverage the hierarchical structure of the video. Unlike the classical encoder-decoder approach, in which a video is encoded continuously by a recurrent layer, we propose a novel LSTM cell which can identify discontinuity points between frames or segments and modify the temporal connections of the encoding layer accordingly. We evaluate our approach on three large-scale datasets: the Montreal Video Annotation dataset, the MPII Movie Description dataset and the Microsoft Video Description Corpus. Experiments show that our approach can discover appropriate hierarchical representations of input videos and improve the state of the art results on movie description datasets."
]
} |
1903.01489 | 2905654560 | Current movie captioning architectures are not capable of mentioning characters with their proper name, replacing them with a generic “someone” tag. The lack of movie description datasets with characters’ visual annotations surely plays a relevant role in this shortage. Recently, we proposed to extend the M-VAD dataset by introducing such information. In this paper, we present an improved version of the dataset, namely M-VAD Names, and its semi-automatic annotation procedure. The resulting dataset contains 63 k visual tracks and 34 k textual mentions, all associated with character identities. To showcase the features of the dataset and quantify the complexity of the naming task, we investigate multimodal architectures to replace the “someone” tags with proper character names in existing video captions. The evaluation is further extended by testing this application on videos outside of the M-VAD Names dataset. | On a different note, Krishna et al. @cite_36 introduced the task of dense-captioning events, which involves both detecting and describing events in a video, and proposed a new model able to identify events of a video while simultaneously describing the detected events in natural language. | {
"cite_N": [
"@cite_36"
],
"mid": [
"2963916161"
],
"abstract": [
"Most natural videos contain numerous events. For example, in a video of a “man playing a piano”, the video might also contain “another man dancing” or “a crowd clapping”. We introduce the task of dense-captioning events, which involves both detecting and describing events in a video. We propose a new model that is able to identify all events in a single pass of the video while simultaneously describing the detected events with natural language. Our model introduces a variant of an existing proposal module that is designed to capture both short as well as long events that span minutes. To capture the dependencies between the events in a video, our model introduces a new captioning module that uses contextual information from past and future events to jointly describe all events. We also introduce ActivityNet Captions, a large-scale benchmark for dense-captioning events. ActivityNet Captions contains 20k videos amounting to 849 video hours with 100k total descriptions, each with its unique start and end time. Finally, we report performances of our model for dense-captioning events, video retrieval and localization."
]
} |
1903.01632 | 2919006973 | The implementation of connected and automated vehicle (CAV) technologies enables a novel computational framework to deliver real-time control actions that optimize travel time, energy, and safety. Hardware is an integral part of any practical implementation of CAVs, and as such, it should be incorporated in any validation method. However, high costs associated with full scale, field testing of CAVs have proven to be a significant barrier. In this paper, we present the implementation of a decentralized control framework, which was developed previously, in a scaled-city using robotic CAVs, and discuss the implications of CAVs on travel time. Supplemental information and videos can be found at this https URL. | The first approach utilizes connectivity and automation to form closely-coupled vehicular platoons to effectively reduce aerodynamic drag, especially at a high cruising speed. The concept of forming platoons of vehicles traveling at high speed was a popular system-level approach to address traffic congestion that gained momentum in the 1980s and 1990s @cite_43 @cite_36 . An automated transportation system can alleviate congestion, reduce energy use and emissions, and improve safety while increasing throughput significantly. The Japan ITS Energy Project @cite_10 , the Safe Road Trains for the Environment program @cite_1 , and the California Partner for Advanced Transportation Technology @cite_3 , are among the mostly-reported efforts in this area. | {
"cite_N": [
"@cite_36",
"@cite_1",
"@cite_3",
"@cite_43",
"@cite_10"
],
"mid": [
"2169683785",
"1579598958",
"2069135366",
"2163057321",
"2060129668"
],
"abstract": [
"This paper presents the design and experimental implementation of an integrated longitudinal and lateral control system for the operation of automated vehicles in platoons. The design of the longitudinal control system include nonlinear vehicle dynamics, string-stable operation with very small inter-vehicle spacing, operation at all speeds from a complete stop to high-speed cruising, and the execution of longitudinal split and join maneuvers in the presence of communication constraints. The design of the lateral control system include high-speed operation using a purely \"look down\" sensor system and lane changing without transitional lateral position measurements. We also describes the design of an on-board supervisor that utilizes inter-vehicle communication and coordinates the operation of the lateral and longitudinal controllers in order to execute entry and exit maneuvers. Experimental results are included from the NAHSC demonstration of automated highways at San Diego, CA.",
"Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering; and, (S.M. in Technology and Policy)--Massachusetts Institute of Technology, Engineering Systems Division, Technology and Policy Program, 2010.",
"The California PATH Program was founded in 1986, as the first research program in North America focused on the subject now known as intelligent transportation systems (ITS). This paper reviews the history of the founding of PATH and of the national ITS program in the U.S., providing perspective on the changes that have occurred during the past twenty years",
"The accomplishments to date on the development of automatic vehicle control technology in the Program on Advanced Technology for the Highway (PATH) at the University of California, Berkeley, are summarized. The basic principles and assumptions underlying the PATH work are identified, and the work on automating vehicle lateral (steering) and longitudinal (spacing and speed) control is explained. For both lateral and longitudinal control, the modeling of plant dynamics is described, and the development of the additional subsystems needed (communications, reference sensor systems) and the derivation of the control laws are presented. Plans for testing on vehicles in both near and long term are discussed. >",
"Abstract This paper presents an overview on an automated truck platoon that has been developed within a national ITS project named “Energy ITS.” The project, started in 2008, aims at energy saving and global warming prevention with ITS technologies. A platoon of three fully-automated heavy trucks and also a fully-automated light truck currently drives at 80 km h with the gap of up to 4 m on a test truck and along an expressway before public use, under not only steady state driving but also lane changing. The lateral control is based on the lane marker detection by the computer vision, and the longitudinal control is based on the gap measurement by 76 GHz radar and lidar in addition to the inter-vehicle communications of 5.8 GHz DSRC and infrared. The radar and lidar also work as the obstacle detection. The feature of the technologies is the high reliability, aiming at the near future introduction. Fuel consumption measurement on a test track and along an expressway shows that the fuel can be saved by about 13 when the gap was 10 m. The evaluation simulation shows that the effectiveness of the platooning with the gap of 10 m when the 40 penetration in heavy trucks is 2.1 reduction of CO 2 along an expressway. The introduction scenario is also discussed."
]
} |
1903.01632 | 2919006973 | The implementation of connected and automated vehicle (CAV) technologies enables a novel computational framework to deliver real-time control actions that optimize travel time, energy, and safety. Hardware is an integral part of any practical implementation of CAVs, and as such, it should be incorporated in any validation method. However, high costs associated with full scale, field testing of CAVs have proven to be a significant barrier. In this paper, we present the implementation of a decentralized control framework, which was developed previously, in a scaled-city using robotic CAVs, and discuss the implications of CAVs on travel time. Supplemental information and videos can be found at this https URL. | Although previous work has shown promising results emphasizing the potential benefits of coordination of CAVs, validation has been primarily in simulation. In previous work, we presented the experimental validation of the solution of the unconstrained merging roadway problem in UDSSC using @math robotic CAVs @cite_18 . In this paper, we demonstrate the impact of an optimal decentralized framework, developed in earlier work @cite_26 , for coordinating CAVs in a transportation network with multiple conflict zones where potential lateral collision may occur. | {
"cite_N": [
"@cite_18",
"@cite_26"
],
"mid": [
"2767199060",
"2904365072"
],
"abstract": [
"The common thread that characterizes energy efficient mobility systems for smart cities is their interconnectivity which enables the exchange of massive amounts of data; this, in turn, provides the opportunity to develop a decentralized framework to process this information and deliver real-time control actions that optimize energy consumption and other associated benefits. To seize these opportunities, this paper describes the development of a scaled smart city providing a glimpse that bridges the gap between simulation and full scale implementation of energy efficient mobility systems. Using this testbed, we can quickly, safely, and affordably experimentally validate control concepts aimed at enhancing our understanding of the implications of next generation mobility systems.",
"In earlier work, we established a decentralized optimal control framework for coordinating online connected and automated vehicles (CAVs) in specific transportation segments, e.g., urban intersections, merging roadways, roundabouts, and speed reduction zones. In this paper, we address coordination of CAVs in a corridor with multiple such scenarios and derive a closed-form analytical solution that includes interior boundary conditions. We evaluate the effectiveness of the solution through simulation in VISSIM. The proposed approach reduces significantly both fuel consumption and travel time for the CAVs compared to the baseline scenario where traditional human-driven vehicles without control are considered."
]
} |
1903.01522 | 2919166888 | Deep neural network based methods have proven to achieve outstanding performance on object detection and classification tasks. Despite significant performance improvements, due to their deep structures, they still require prohibitive runtime to process images while maintaining the highest possible performance for real-time applications. Observing that the human vision system (HVS) relies heavily on temporal dependencies among frames from the visual input to conduct recognition efficiently, we propose a novel framework dubbed TKD: temporal knowledge distillation. This framework distills the temporal knowledge from a heavy neural network based model over selected video frames (the perception of the moments) to a light-weight model. To enable the distillation, we put forward two novel procedures: 1) a Long Short-Term Memory (LSTM) based key frame selection method; and 2) a novel teacher-bounded loss design. To validate, we conduct comprehensive empirical evaluations using different object detection methods over multiple datasets including Youtube-Objects and the Hollywood scene dataset. Our results show consistent improvement in accuracy-speed trade-offs for object detection over the frames of dynamic scenes, compared to other modern object recognition methods. | The last few years in the field of deep learning have laid the foundation for major advancements in visual recognition systems, ranging from object recognition @cite_37 and action recognition @cite_3 to scene recognition @cite_12 . Significant improvements in recognition accuracy have allowed a wide range of science-fiction ideas to materialize, resulting in economic and societal benefits, with AI applications such as autonomous vehicles @cite_0 , intelligent IoT systems @cite_16 , industrial robots, service robots, and intelligent health care systems @cite_36 @cite_42 . | {
"cite_N": [
"@cite_37",
"@cite_36",
"@cite_42",
"@cite_3",
"@cite_0",
"@cite_16",
"@cite_12"
],
"mid": [
"2884561390",
"2561981131",
"2756477004",
"2413983136",
"2119112357",
"2763068163",
"2134670479"
],
"abstract": [
"The highest accuracy object detectors to date are based on a two-stage approach popularized by R-CNN, where a classifier is applied to a sparse set of candidate object locations. In contrast, one-stage detectors that are applied over a regular, dense sampling of possible object locations have the potential to be faster and simpler, but have trailed the accuracy of two-stage detectors thus far. In this paper, we investigate why this is the case. We discover that the extreme foreground-background class imbalance encountered during training of dense detectors is the central cause. We propose to address this class imbalance by reshaping the standard cross entropy loss such that it down-weights the loss assigned to well-classified examples. Our novel Focal Loss focuses training on a sparse set of hard examples and prevents the vast number of easy negatives from overwhelming the detector during training. To evaluate the effectiveness of our loss, we design and train a simple dense detector we call RetinaNet. Our results show that when trained with the focal loss, RetinaNet is able to match the speed of previous one-stage detectors while surpassing the accuracy of all existing state-of-the-art two-stage detectors. Code is at: https: github.com facebookresearch Detectron.",
"With a massive influx of multimodality data, the role of data analytics in health informatics has grown rapidly in the last decade. This has also prompted increasing interests in the generation of analytical, data driven models based on machine learning in health informatics. Deep learning, a technique with its foundation in artificial neural networks, is emerging in recent years as a powerful tool for machine learning, promising to reshape the future of artificial intelligence. Rapid improvements in computational power, fast data storage, and parallelization have also contributed to the rapid uptake of the technology in addition to its predictive power and ability to generate automatically optimized high-level features and semantic interpretation from the input data. This article presents a comprehensive up-to-date review of research employing deep learning in health informatics, providing a critical analysis of the relative merit, and potential pitfalls of the technique as well as its future outlook. The paper mainly focuses on key applications of deep learning in the fields of translational bioinformatics, medical imaging, pervasive sensing, medical informatics, and public health.",
"Confocal laser endomicroscopy (CLE) is an advanced optical fluorescence technology undergoing assessment for applications in brain tumor surgery. Despite its promising potential, interpreting the unfamiliar gray tone images of fluorescent stains can be difficult. Many of the CLE images can be distorted by motion, extremely low or high fluorescence signal, or obscured by red blood cell accumulation, and these can be interpreted as nondiagnostic. However, just one neat CLE image might suffice for intraoperative diagnosis of the tumor. While manual examination of thousands of nondiagnostic images during surgery would be impractical, this creates an opportunity for a model to select diagnostic images for the pathologists or surgeon's review. In this study, we sought to develop a deep learning model to automatically detect the diagnostic images using a manually annotated dataset, and we employed a patient-based nested cross-validation approach to explore generalizability of the model. We explored various training regimes: deep training, shallow fine-tuning, and deep fine-tuning. Further, we investigated the effect of ensemble modeling by combining the top-5 single models crafted in the development phase. We localized histological features from diagnostic CLE images by visualization of shallow and deep neural activations. Our inter-rater experiment results confirmed that our ensemble of deeply fine-tuned models achieved higher agreement with the ground truth than the other observers. With the speed and precision of the proposed method (110 images second; 85 on the gold standard test subset), it has potential to be integrated into the operative workflow in the brain tumor surgery.",
"Fine-grained action recognition is important for many applications of human-robot interaction, automated skill assessment, and surveillance. The goal is to segment and classify all actions occurring in a time series sequence. While recent recognition methods have shown strong performance in robotics applications, they often require hand-crafted features, use large amounts of domain knowledge, or employ overly simplistic representations of how objects change throughout an action. In this paper we present the Latent Convolutional Skip Chain Conditional Random Field (LC-SC-CRF). This time series model learns a set of interpretable and composable action primitives from sensor data. We apply our model to cooking tasks using accelerometer data from the University of Dundee 50 Salads dataset and to robotic surgery training tasks using robot kinematic data from the JHU-ISI Gesture and Skill Assessment Working Set (JIGSAWS). Our performance on 50 Salads and JIGSAWS are 18.0 and 5.3 higher than the state of the art, respectively. This model performs well without requiring hand-crafted features or intricate domain knowledge. The code and features have been made public.",
"Today, there are two major paradigms for vision-based autonomous driving systems: mediated perception approaches that parse an entire scene to make a driving decision, and behavior reflex approaches that directly map an input image to a driving action by a regressor. In this paper, we propose a third paradigm: a direct perception approach to estimate the affordance for driving. We propose to map an input image to a small number of key perception indicators that directly relate to the affordance of a road traffic state for driving. Our representation provides a set of compact yet complete descriptions of the scene to enable a simple controller to drive autonomously. Falling in between the two extremes of mediated perception and behavior reflex, we argue that our direct perception representation provides the right level of abstraction. To demonstrate this, we train a deep Convolutional Neural Network using recording from 12 hours of human driving in a video game and show that our model can work well to drive a car in a very diverse set of virtual environments. We also train a model for car distance estimation on the KITTI dataset. Results show that our direct perception approach can generalize well to real driving images. Source code and data are available on our project website.",
"Deep learning can enable Internet of Things (IoT) devices to interpret unstructured multimedia data and intelligently react to both user and environmental events but has demanding performance and power requirements. The authors explore two ways to successfully integrate deep learning with low-power IoT products.",
"Scene recognition is one of the hallmark tasks of computer vision, allowing definition of a context for object recognition. Whereas the tremendous recent progress in object recognition tasks is due to the availability of large datasets like ImageNet and the rise of Convolutional Neural Networks (CNNs) for learning high-level features, performance at scene recognition has not attained the same level of success. This may be because current deep features trained from ImageNet are not competitive enough for such tasks. Here, we introduce a new scene-centric database called Places with over 7 million labeled pictures of scenes. We propose new methods to compare the density and diversity of image datasets and show that Places is as dense as other scene datasets and has more diversity. Using CNN, we learn deep features for scene recognition tasks, and establish new state-of-the-art results on several scene-centric datasets. A visualization of the CNN layers' responses allows us to show differences in the internal representations of object-centric and scene-centric networks."
]
} |
1903.01522 | 2919166888 | Deep neural network based methods have proven to achieve outstanding performance on object detection and classification tasks. Despite significant performance improvements, due to their deep structures, they still require prohibitive runtime to process images while maintaining the highest possible performance for real-time applications. Observing that the human vision system (HVS) relies heavily on temporal dependencies among frames from the visual input to conduct recognition efficiently, we propose a novel framework dubbed TKD: temporal knowledge distillation. This framework distills the temporal knowledge from a heavy neural network based model over selected video frames (the perception of the moments) to a light-weight model. To enable the distillation, we put forward two novel procedures: 1) a Long Short-Term Memory (LSTM) based key frame selection method; and 2) a novel teacher-bounded loss design. To validate, we conduct comprehensive empirical evaluations using different object detection methods over multiple datasets including Youtube-Objects and the Hollywood scene dataset. Our results show consistent improvement in accuracy-speed trade-offs for object detection over the frames of dynamic scenes, compared to other modern object recognition methods. | Object detection methods based on Convolutional Neural Networks (CNNs) have shown promising results over the past years. There are two main types of CNN-based object recognition systems: one-stage and two-stage. In one-stage methods, classification and localization are performed in a single stage: an image forwarded through the network produces a single output, which is then used to classify and localize objects. Some examples of one-stage methods are YOLO @cite_18 , SSD @cite_29 , RetinaNet @cite_37 and DSSD @cite_21 . These models are faster than other methods because they run in a single stage.
The second type comprises two-stage methods, in which classification and localization happen as two separate stages, using classification networks and region proposal networks respectively. Some two-stage models are Faster R-CNN @cite_1 and R-FCN @cite_11 . These models reach higher performance at high intersection over union (IoU). However, @cite_18 showed that at a lower IoU (IoU=0.5), one-stage models can achieve the same accuracy as two-stage models. | {
"cite_N": [
"@cite_18",
"@cite_37",
"@cite_29",
"@cite_21",
"@cite_1",
"@cite_11"
],
"mid": [
"2796347433",
"2884561390",
"2193145675",
"2579985080",
"2613718673",
"2407521645"
],
"abstract": [
"We present some updates to YOLO! We made a bunch of little design changes to make it better. We also trained this new network that's pretty swell. It's a little bigger than last time but more accurate. It's still fast though, don't worry. At 320x320 YOLOv3 runs in 22 ms at 28.2 mAP, as accurate as SSD but three times faster. When we look at the old .5 IOU mAP detection metric YOLOv3 is quite good. It achieves 57.9 mAP@50 in 51 ms on a Titan X, compared to 57.5 mAP@50 in 198 ms by RetinaNet, similar performance but 3.8x faster. As always, all the code is online at this https URL",
"The highest accuracy object detectors to date are based on a two-stage approach popularized by R-CNN, where a classifier is applied to a sparse set of candidate object locations. In contrast, one-stage detectors that are applied over a regular, dense sampling of possible object locations have the potential to be faster and simpler, but have trailed the accuracy of two-stage detectors thus far. In this paper, we investigate why this is the case. We discover that the extreme foreground-background class imbalance encountered during training of dense detectors is the central cause. We propose to address this class imbalance by reshaping the standard cross entropy loss such that it down-weights the loss assigned to well-classified examples. Our novel Focal Loss focuses training on a sparse set of hard examples and prevents the vast number of easy negatives from overwhelming the detector during training. To evaluate the effectiveness of our loss, we design and train a simple dense detector we call RetinaNet. Our results show that when trained with the focal loss, RetinaNet is able to match the speed of previous one-stage detectors while surpassing the accuracy of all existing state-of-the-art two-stage detectors. Code is at: https: github.com facebookresearch Detectron.",
"We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. SSD is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stages and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Experimental results on the PASCAL VOC, COCO, and ILSVRC datasets confirm that SSD has competitive accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. For (300 300 ) input, SSD achieves 74.3 mAP on VOC2007 test at 59 FPS on a Nvidia Titan X and for (512 512 ) input, SSD achieves 76.9 mAP, outperforming a comparable state of the art Faster R-CNN model. Compared to other single stage methods, SSD has much better accuracy even with a smaller input image size. Code is available at https: github.com weiliu89 caffe tree ssd.",
"The main contribution of this paper is an approach for introducing additional context into state-of-the-art general object detection. To achieve this we first combine a state-of-the-art classifier (Residual-101[14]) with a fast detection framework (SSD[18]). We then augment SSD+Residual-101 with deconvolution layers to introduce additional large-scale context in object detection and improve accuracy, especially for small objects, calling our resulting system DSSD for deconvolutional single shot detector. While these two contributions are easily described at a high-level, a naive implementation does not succeed. Instead we show that carefully adding additional stages of learned transformations, specifically a module for feed-forward connections in deconvolution and a new output module, enables this new approach and forms a potential way forward for further detection research. Results are shown on both PASCAL VOC and COCO detection. Our DSSD with @math input achieves 81.5 mAP on VOC2007 test, 80.0 mAP on VOC2012 test, and 33.2 mAP on COCO, outperforming a state-of-the-art method R-FCN[3] on each dataset.",
"State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [7] and Fast R-CNN [5] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully-convolutional network that simultaneously predicts object bounds and objectness scores at each position. RPNs are trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. With a simple alternating optimization, RPN and Fast R-CNN can be trained to share convolutional features. For the very deep VGG-16 model [19], our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007 (73.2 mAP) and 2012 (70.4 mAP) using 300 proposals per image. Code is available at https: github.com ShaoqingRen faster_rcnn.",
"We present region-based, fully convolutional networks for accurate and efficient object detection. In contrast to previous region-based detectors such as Fast Faster R-CNN [7, 19] that apply a costly per-region subnetwork hundreds of times, our region-based detector is fully convolutional with almost all computation shared on the entire image. To achieve this goal, we propose position-sensitive score maps to address a dilemma between translation-invariance in image classification and translation-variance in object detection. Our method can thus naturally adopt fully convolutional image classifier backbones, such as the latest Residual Networks (ResNets) [10], for object detection. We show competitive results on the PASCAL VOC datasets (e.g., 83.6 mAP on the 2007 set) with the 101-layer ResNet. Meanwhile, our result is achieved at a test-time speed of 170ms per image, 2.5-20x faster than the Faster R-CNN counterpart. Code is made publicly available at: https: github.com daijifeng001 r-fcn."
]
} |
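The IoU threshold mentioned in the related-work text above can be made concrete with a short sketch. This is a hypothetical helper for illustration only (the function name and box format `(x1, y1, x2, y2)` are assumptions, not from the cited papers):

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# At the IoU=0.5 operating point discussed above, a detection counts as
# correct when it overlaps the ground truth by at least half of their union.
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # 50 / (100 + 100 - 50) = 1/3
```

Evaluating detectors at a lower threshold such as IoU=0.5 tolerates looser localization, which is why one-stage models can match two-stage accuracy there while trailing at stricter thresholds.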
1903.01522 | 2919166888 | Deep neural network based methods have proven to achieve outstanding performance on object detection and classification tasks. Despite significant performance improvements, due to their deep structures, they still require prohibitive runtime to process images while maintaining the highest possible performance for real-time applications. Observing that the human vision system (HVS) relies heavily on temporal dependencies among frames from the visual input to conduct recognition efficiently, we propose a novel framework dubbed TKD: temporal knowledge distillation. This framework distills the temporal knowledge from a heavy neural network based model over selected video frames (the perception of the moments) to a light-weight model. To enable the distillation, we put forward two novel procedures: 1) a Long Short-Term Memory (LSTM) based key frame selection method; and 2) a novel teacher-bounded loss design. To validate, we conduct comprehensive empirical evaluations using different object detection methods over multiple datasets including Youtube-Objects and the Hollywood scene dataset. Our results show consistent improvement in accuracy-speed trade-offs for object detection over the frames of dynamic scenes, compared to other modern object recognition methods. | Another thrust of work has focused on reducing the resource consumption of CNNs (due to expensive computation and memory usage) by compressing the network structures @cite_31 @cite_38 @cite_7 @cite_22 . Network pruning is one well-studied approach, which removes unnecessary connections from a CNN model to gain inference speedup @cite_38 @cite_10 @cite_23 . Quantization @cite_44 and binarization @cite_22 @cite_8 are two other methods that have been used to reduce network size and computation load. These methods improve performance at the hardware level by reducing the weights to low-bit or binary codes.
However, achieving runtime speedup with standard GPU implementations remains challenging for these methods @cite_7 . Also, the advantage of these methods over one-stage methods without fully connected layers (which are the target layers for network pruning @cite_44 ) is not clear. | {
"cite_N": [
"@cite_38",
"@cite_22",
"@cite_7",
"@cite_8",
"@cite_44",
"@cite_23",
"@cite_31",
"@cite_10"
],
"mid": [
"",
"2300242332",
"2963674932",
"2319920447",
"2964299589",
"2279098554",
"2134797427",
"2963000224"
],
"abstract": [
"",
"We propose two efficient approximations to standard convolutional neural networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks, the filters are approximated with binary values resulting in 32 ( ) memory saving. In XNOR-Networks, both the filters and the input to convolutional layers are binary. XNOR-Networks approximate convolutions using primarily binary operations. This results in 58 ( ) faster convolutional operations (in terms of number of the high precision operations) and 32 ( ) memory savings. XNOR-Nets offer the possibility of running state-of-the-art networks on CPUs (rather than GPUs) in real-time. Our binary networks are simple, accurate, efficient, and work on challenging visual tasks. We evaluate our approach on the ImageNet classification task. The classification accuracy with a Binary-Weight-Network version of AlexNet is the same as the full-precision AlexNet. We compare our method with recent network binarization methods, BinaryConnect and BinaryNets, and outperform these methods by large margins on ImageNet, more than (16 , ) in top-1 accuracy. Our code is available at: http: allenai.org plato xnornet.",
"Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems. Also, conventional networks fix the architecture before training starts; as a result, training cannot improve the architecture. To address these limitations, we describe a method to reduce the storage and computation required by neural networks by an order of magnitude without affecting their accuracy by learning only the important connections. Our method prunes redundant connections using a three-step method. First, we train the network to learn which connections are important. Next, we prune the unimportant connections. Finally, we retrain the network to fine tune the weights of the remaining connections. On the ImageNet dataset, our method reduced the number of parameters of AlexNet by a factor of 9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar experiments with VGG-16 found that the total number of parameters can be reduced by 13x, from 138 million to 10.3 million, again with no loss of accuracy.",
"We introduce a method to train Binarized Neural Networks (BNNs) - neural networks with binary weights and activations at run-time. At training-time the binary weights and activations are used for computing the parameters gradients. During the forward pass, BNNs drastically reduce memory size and accesses, and replace most arithmetic operations with bit-wise operations, which is expected to substantially improve power-efficiency. To validate the effectiveness of BNNs we conduct two sets of experiments on the Torch7 and Theano frameworks. On both, BNNs achieved nearly state-of-the-art results over the MNIST, CIFAR-10 and SVHN datasets. Last but not least, we wrote a binary matrix multiplication GPU kernel with which it is possible to run our MNIST BNN 7 times faster than with an unoptimized GPU kernel, without suffering any loss in classification accuracy. The code for training and running our BNNs is available on-line.",
"Abstract: Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems with limited hardware resources. To address this limitation, we introduce \"deep compression\", a three stage pipeline: pruning, trained quantization and Huffman coding, that work together to reduce the storage requirement of neural networks by 35x to 49x without affecting their accuracy. Our method first prunes the network by learning only the important connections. Next, we quantize the weights to enforce weight sharing, finally, we apply Huffman coding. After the first two steps we retrain the network to fine tune the remaining connections and the quantized centroids. Pruning, reduces the number of connections by 9x to 13x; Quantization then reduces the number of bits that represent each connection from 32 to 5. On the ImageNet dataset, our method reduced the storage required by AlexNet by 35x, from 240MB to 6.9MB, without loss of accuracy. Our method reduced the size of VGG-16 by 49x from 552MB to 11.3MB, again with no loss of accuracy. This allows fitting the model into on-chip SRAM cache rather than off-chip DRAM memory. Our compression method also facilitates the use of complex neural networks in mobile applications where application size and download bandwidth are constrained. Benchmarked on CPU, GPU and mobile GPU, compressed network has 3x to 4x layerwise speedup and 3x to 7x better energy efficiency.",
"Recent research on deep neural networks has focused primarily on improving accuracy. For a given accuracy level, it is typically possible to identify multiple DNN architectures that achieve that accuracy level. With equivalent accuracy, smaller DNN architectures offer at least three advantages: (1) Smaller DNNs require less communication across servers during distributed training. (2) Smaller DNNs require less bandwidth to export a new model from the cloud to an autonomous car. (3) Smaller DNNs are more feasible to deploy on FPGAs and other hardware with limited memory. To provide all of these advantages, we propose a small DNN architecture called SqueezeNet. SqueezeNet achieves AlexNet-level accuracy on ImageNet with 50x fewer parameters. Additionally, with model compression techniques we are able to compress SqueezeNet to less than 0.5MB (510x smaller than AlexNet).",
"Currently, deep neural networks are the state of the art on problems such as speech recognition and computer vision. In this paper we empirically demonstrate that shallow feed-forward nets can learn the complex functions previously learned by deep nets and achieve accuracies previously only achievable with deep models. Moreover, in some cases the shallow nets can learn these deep functions using the same number of parameters as the original deep models. On the TIMIT phoneme recognition and CIFAR-10 image recognition tasks, shallow nets can be trained that perform similarly to complex, well-engineered, deeper convolutional models.",
"High demand for computation resources severely hinders deployment of large-scale Deep Neural Networks (DNN) in resource constrained devices. In this work, we propose a Structured Sparsity Learning (SSL) method to regularize the structures (i.e., filters, channels, filter shapes, and layer depth) of DNNs. SSL can: (1) learn a compact structure from a bigger DNN to reduce computation cost; (2) obtain a hardware-friendly structured sparsity of DNN to efficiently accelerate the DNN's evaluation. Experimental results show that SSL achieves on average 5.1 × and 3.1 × speedups of convolutional layer computation of AlexNet against CPU and GPU, respectively, with off-the-shelf libraries. These speedups are about twice speedups of non-structured sparsity; (3) regularize the DNN structure to improve classification accuracy. The results show that for CIFAR-10, regularization on layer depth reduces a 20-layer Deep Residual Network (ResNet) to 18 layers while improves the accuracy from 91.25 to 92.60 , which is still higher than that of original ResNet with 32 layers. For AlexNet, SSL reduces the error by 1 ."
]
} |
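The pruning idea described in the related-work text above can be sketched in a few lines. This is a simplified, hypothetical illustration of magnitude-based pruning (not the cited papers' exact three-step procedure, which also retrains the surviving connections):

```python
def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude entries of a flat weight list.

    sparsity: fraction of connections to remove (0.0-1.0).
    Returns (pruned_weights, mask); the mask can be applied during
    retraining to keep pruned connections at zero.
    """
    k = int(len(weights) * sparsity)
    # Indices of the k smallest-magnitude weights
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    dropped = set(order[:k])
    mask = [i not in dropped for i in range(len(weights))]
    pruned = [w if keep else 0.0 for w, keep in zip(weights, mask)]
    return pruned, mask

w = [0.9, -0.05, 0.4, -0.01, 0.7, 0.02]
pruned, mask = magnitude_prune(w, 0.5)
print(pruned)  # [0.9, 0.0, 0.4, 0.0, 0.7, 0.0]
```

As the text notes, removing individual connections like this yields unstructured sparsity, which standard dense GPU kernels cannot easily exploit; that is the gap structured-sparsity and quantization methods aim to close.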
1903.01522 | 2919166888 | Deep neural network based methods have proven to achieve outstanding performance on object detection and classification tasks. Despite significant performance improvements, due to their deep structures, they still require prohibitive runtime to process images while maintaining the highest possible performance for real-time applications. Observing that the human vision system (HVS) relies heavily on temporal dependencies among frames from the visual input to conduct recognition efficiently, we propose a novel framework dubbed TKD: temporal knowledge distillation. This framework distills the temporal knowledge from a heavy neural network based model over selected video frames (the perception of the moments) to a light-weight model. To enable the distillation, we put forward two novel procedures: 1) a Long Short-Term Memory (LSTM) based key frame selection method; and 2) a novel teacher-bounded loss design. To validate, we conduct comprehensive empirical evaluations using different object detection methods over multiple datasets including Youtube-Objects and the Hollywood scene dataset. Our results show consistent improvement in accuracy-speed trade-offs for object detection over the frames of dynamic scenes, compared to other modern object recognition methods. | Object detection in the real world still needs to address challenges such as low image quality, large variance in backgrounds, illumination variation, etc. These can lead to a significant domain shift between the training, validation, and test data. Consequently, the field of domain adaptation has been widely studied in image classification @cite_13 @cite_9 and object detection @cite_17 @cite_39 tasks. These methods improve accuracy on well-known benchmark datasets. Nevertheless, they typically adopt an offline domain adaptation procedure and are not concerned with domain change during the inference stage. | {
"cite_N": [
"@cite_9",
"@cite_13",
"@cite_39",
"@cite_17"
],
"mid": [
"2964099118",
"1565327149",
"2952735550",
"2964115968"
],
"abstract": [
"Domain adaption (DA) allows machine learning methods trained on data sampled from one distribution to be applied to data sampled from another. It is thus of great practical importance to the application of such methods. Despite the fact that tensor representations are widely used in Computer Vision to capture multi-linear relationships that affect the data, most existing DA methods are applicable to vectors only. This renders them incapable of reflecting and preserving important structure in many problems. We thus propose here a learning-based method to adapt the source and target tensor representations directly, without vectorization. In particular, a set of alignment matrices is introduced to align the tensor representations from both domains into the invariant tensor subspace. These alignment matrices and the tensor subspace are modeled as a joint optimization problem and can be learned adaptively from the data using the proposed alternative minimization scheme. Extensive experiments show that our approach is capable of preserving the discriminative power of the source domain, of resisting the effects of label noise, and works effectively for small sample sizes, and even one-shot DA. We show that our method outperforms the state-of-the-art on the task of cross-domain visual recognition in both efficacy and efficiency, and particularly that it outperforms all comparators when applied to DA of the convolutional activations of deep convolutional networks.",
"Recent reports suggest that a generic supervised deep CNN model trained on a large-scale dataset reduces, but does not remove, dataset bias on a standard benchmark. Fine-tuning deep models in a new domain can require a significant amount of data, which for many applications is simply not available. We propose a new CNN architecture which introduces an adaptation layer and an additional domain confusion loss, to learn a representation that is both semantically meaningful and domain invariant. We additionally show that a domain confusion metric can be used for model selection to determine the dimension of an adaptation layer and the best position for the layer in the CNN architecture. Our proposed adaptation method offers empirical performance which exceeds previously published results on a standard benchmark visual domain adaptation task.",
"This work addresses the problem of semantic image segmentation of nighttime scenes. Although considerable progress has been made in semantic image segmentation, it is mainly related to daytime scenarios. This paper proposes a novel method to progressive adapt the semantic models trained on daytime scenes, along with large-scale annotations therein, to nighttime scenes via the bridge of twilight time -- the time between dawn and sunrise, or between sunset and dusk. The goal of the method is to alleviate the cost of human annotation for nighttime images by transferring knowledge from standard daytime conditions. In addition to the method, a new dataset of road scenes is compiled; it consists of 35,000 images ranging from daytime to twilight time and to nighttime. Also, a subset of the nighttime images are densely annotated for method evaluation. Our experiments show that our method is effective for model adaptation from daytime scenes to nighttime scenes, without using extra human annotation.",
"Object detection typically assumes that training and test data are drawn from an identical distribution, which, however, does not always hold in practice. Such a distribution mismatch will lead to a significant performance drop. In this work, we aim to improve the cross-domain robustness of object detection. We tackle the domain shift on two levels: 1) the image-level shift, such as image style, illumination, etc., and 2) the instance-level shift, such as object appearance, size, etc. We build our approach based on the recent state-of-the-art Faster R-CNN model, and design two domain adaptation components, on image level and instance level, to reduce the domain discrepancy. The two domain adaptation components are based on @math -divergence theory, and are implemented by learning a domain classifier in adversarial training manner. The domain classifiers on different levels are further reinforced with a consistency regularization to learn a domain-invariant region proposal network (RPN) in the Faster R-CNN model. We evaluate our newly proposed approach using multiple datasets including Cityscapes, KITTI, SIM10K, etc. The results demonstrate the effectiveness of our proposed approach for robust object detection in various domain shift scenarios."
]
} |
1903.01522 | 2919166888 | Deep neural networks based methods have been proven to achieve outstanding performance on object detection and classification tasks. Despite significant performance improvement, due to the deep structures, they still require prohibitive runtime to process images and maintain the highest possible performance for real-time applications. Observing the phenomenon that the human vision system (HVS) relies heavily on the temporal dependencies among frames from the visual input to conduct recognition efficiently, we propose a novel framework dubbed TKD: temporal knowledge distillation. This framework distills the temporal knowledge from a heavy neural networks based model over selected video frames (the perception of the moments) to a light-weight model. To enable the distillation, we put forward two novel procedures: 1) a Long Short-Term Memory (LSTM) based key frame selection method; and 2) a novel teacher-bounded loss design. To validate, we conduct comprehensive empirical evaluations using different object detection methods over multiple datasets including Youtube-Objects and the Hollywood scene dataset. Our results show consistent improvement in accuracy-speed trade-offs for object detection over the frames of the dynamic scene, compared to other modern object recognition methods. | Knowledge distillation is another approach to boost accuracy in CNNs. Under the knowledge distillation setting, an ensemble of CNN models or a very deep model serves as the teacher model, which transfers its knowledge to the student model (a shallow model). @cite_2 proposed a method that applies the teacher's predictions as "soft labels" and distills the teacher classifier's knowledge to the student. Moreover, they proposed a temperature cross entropy instead of the @math distance as the loss function. @cite_5 proposed a so-called "hint" procedure to guide the training of the student model.
There are also other approaches to distill knowledge between different domains, such as from RGB to depth images @cite_43 @cite_6. | {
"cite_N": [
"@cite_5",
"@cite_6",
"@cite_43",
"@cite_2"
],
"mid": [
"1690739335",
"2767012653",
"753847829",
"1821462560"
],
"abstract": [
"While depth tends to improve network performances, it also makes gradient-based training more difficult since deeper networks tend to be more non-linear. The recently proposed knowledge distillation approach is aimed at obtaining small and fast-to-execute models, and it has shown that a student network could imitate the soft output of a larger teacher network or ensemble of networks. In this paper, we extend this idea to allow the training of a student that is deeper and thinner than the teacher, using not only the outputs but also the intermediate representations learned by the teacher as hints to improve the training process and final performance of the student. Because the student intermediate hidden layer will generally be smaller than the teacher's intermediate hidden layer, additional parameters are introduced to map the student hidden layer to the prediction of the teacher hidden layer. This allows one to train deeper students that can generalize better or run faster, a trade-off that is controlled by the chosen student capacity. For example, on CIFAR-10, a deep student network with almost 10.4 times less parameters outperforms a larger, state-of-the-art teacher network.",
"Model compression and knowledge distillation have been successfully applied for cross-architecture and cross-domain transfer learning. However, a key requirement is that training examples are in correspondence across the domains. We show that in many scenarios of practical importance such aligned data can be synthetically generated using computer graphics pipelines allowing domain adaptation through distillation. We apply this technique to learn models for recognizing low-resolution images using labeled high-resolution images, non-localized objects using labeled localized objects, line-drawings using labeled color images, etc. Experiments on various fine-grained recognition datasets demonstrate that the technique improves recognition performance on the low-quality data and beats strong baselines for domain adaptation. Finally, we present insights into workings of the technique through visualizations and relating it to existing literature.",
"In this work we propose a technique that transfers supervision between images from different modalities. We use learned representations from a large labeled modality as supervisory signal for training representations for a new unlabeled paired modality. Our method enables learning of rich representations for unlabeled modalities and can be used as a pre-training procedure for new modalities with limited labeled data. We transfer supervision from labeled RGB images to unlabeled depth and optical flow images and demonstrate large improvements for both these cross modal supervision transfers.",
"A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel."
]
} |
1903.01522 | 2919166888 | Deep neural networks based methods have been proven to achieve outstanding performance on object detection and classification tasks. Despite significant performance improvement, due to the deep structures, they still require prohibitive runtime to process images and maintain the highest possible performance for real-time applications. Observing the phenomenon that the human vision system (HVS) relies heavily on the temporal dependencies among frames from the visual input to conduct recognition efficiently, we propose a novel framework dubbed TKD: temporal knowledge distillation. This framework distills the temporal knowledge from a heavy neural networks based model over selected video frames (the perception of the moments) to a light-weight model. To enable the distillation, we put forward two novel procedures: 1) a Long Short-Term Memory (LSTM) based key frame selection method; and 2) a novel teacher-bounded loss design. To validate, we conduct comprehensive empirical evaluations using different object detection methods over multiple datasets including Youtube-Objects and the Hollywood scene dataset. Our results show consistent improvement in accuracy-speed trade-offs for object detection over the frames of the dynamic scene, compared to other modern object recognition methods. | Knowledge distillation has also been applied to the object detection task. @cite_26 proposed a method which adopts all of the soft labeling (labels generated by the teacher), the hard labeling (the ground truth) and the hint procedure to transfer knowledge from a teacher with a deep feature extractor to a student with a shallow feature extractor. They adopt a two-stage method (Faster R-CNN @cite_1 ) in their system. @cite_14 applied the same procedure to a one-stage method (Tiny-Yolo v2). Here, we focus on the role of knowledge distillation as an online adaptation process over the temporal domain for the object detection task. | {
"cite_N": [
"@cite_14",
"@cite_26",
"@cite_1"
],
"mid": [
"2803113663",
"2750784772",
"2613718673"
],
"abstract": [
"In this paper, we propose an efficient and fast object detector which can process hundreds of frames per second. To achieve this goal we investigate three main aspects of the object detection framework: network architecture, loss function and training data (labeled and unlabeled). In order to obtain compact network architecture, we introduce various improvements, based on recent work, to develop an architecture which is computationally light-weight and achieves a reasonable performance. To further improve the performance, while keeping the complexity same, we utilize distillation loss function. Using distillation loss we transfer the knowledge of a more accurate teacher network to proposed light-weight student network. We propose various innovations to make distillation efficient for the proposed one stage detector pipeline: objectness scaled distillation loss, feature map non-maximal suppression and a single unified distillation loss function for detection. Finally, building upon the distillation loss, we explore how much can we push the performance by utilizing the unlabeled data. We train our model with unlabeled data using the soft labels of the teacher network. Our final network consists of 10x fewer parameters than the VGG based object detection network and it achieves a speed of more than 200 FPS and proposed changes improve the detection accuracy by 14 mAP over the baseline on Pascal dataset.",
"Despite significant accuracy improvement in convolutional neural networks (CNN) based object detectors, they often require prohibitive runtimes to process an image for real-time applications. State-of-the-art models often use very deep networks with a large number of floating point operations. Efforts such as model compression learn compact models with fewer number of parameters, but with much reduced accuracy. In this work, we propose a new framework to learn compact and fast object detection networks with improved accuracy using knowledge distillation [20] and hint learning [34]. Although knowledge distillation has demonstrated excellent improvements for simpler classification setups, the complexity of detection poses new challenges in the form of regression, region proposals and less voluminous labels. We address this through several innovations such as a weighted cross-entropy loss to address class imbalance, a teacher bounded loss to handle the regression component and adaptation layers to better learn from intermediate teacher distributions. We conduct comprehensive empirical evaluation with different distillation configurations over multiple datasets including PASCAL, KITTI, ILSVRC and MS-COCO. Our results show consistent improvement in accuracy-speed trade-offs for modern multi-class detection models.",
"State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [7] and Fast R-CNN [5] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully-convolutional network that simultaneously predicts object bounds and objectness scores at each position. RPNs are trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. With a simple alternating optimization, RPN and Fast R-CNN can be trained to share convolutional features. For the very deep VGG-16 model [19], our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007 (73.2 mAP) and 2012 (70.4 mAP) using 300 proposals per image. Code is available at https: github.com ShaoqingRen faster_rcnn."
]
} |
1903.01417 | 2949403005 | Let @math be a polygonal domain of @math holes and @math vertices. We study the problem of constructing a data structure that can compute a shortest path between @math and @math in @math under the @math metric for any two query points @math and @math . To do so, a standard approach is to first find a set of @math "gateways" for @math and a set of @math "gateways" for @math such that there exists a shortest @math - @math path containing a gateway of @math and a gateway of @math , and then compute a shortest @math - @math path using these gateways. Previous algorithms all take quadratic @math time to solve this problem. In this paper, we propose a divide-and-conquer technique that solves the problem in @math time. As a consequence, we construct a data structure of @math size in @math time such that each query can be answered in @math time. | Better results exist for certain special cases of the problem. If @math is a simple polygon, then a shortest path in @math with minimum Euclidean length is also an @math shortest path @cite_24 , and thus by using the data structure in @cite_15 @cite_2 for the Euclidean metric, one can build a data structure in @math time and space that can answer each query in @math time; recently Bae and Wang @cite_16 proposed a simpler approach that achieves the same performance. If @math and all of its holes are rectangles whose edges are all axis-parallel, then ElGindy and Mitra @cite_4 constructed a data structure of @math size in @math time that supports @math time queries. | {
"cite_N": [
"@cite_4",
"@cite_24",
"@cite_2",
"@cite_15",
"@cite_16"
],
"mid": [
"2114726496",
"2081751185",
"2004884735",
"2038699153",
"2949937035"
],
"abstract": [
"Given a set ℬ of n barriers, the shortest route query SRQ problem asks for a preprocessing of ℬ such that a description of the shortest route between two points (origin and destination) can be reported efficiently. In this manuscript we present efficient sequential and parallel algorithms for the SRQ problem where the barriers in ℬ are disjoint planar rectangles whose sides are parallel to the coordinate axes, and subsequent queries ask for the shortest L1 route between two arbitrary points which avoids the barriers in ℬ. The segments forming such route are also restricted to be parallel to the coordinate axes. For this problem we we present sequential and parallel preprocessing algorithms which allow for reporting the shortest distance between two arbitrary query points in O(log n) time with a single processor. The route itself can also be constructed in time proportional to its number of segments. Our method is based on constructing three planar graphs, called carrier graphs, that contain the shortest route information in a succinct form. Each graph can then be searched using graph theoretic techniques. Using the same techniques we also present a parallel algorithm for computing the orthogonal shortest distance between two points among rectangular obstacles which runs in poly-logarithmic time using sub-quadratic number of processors on the CREW PRAM model of computation.",
"Abstract In this paper, we show that the universal covering space of a surface can be used to unify previous results on computing paths in a simple polygon. We optimize a given path among obstacles in the plane under the Euclidean and link metrics and under polygonal convex distance functions. Besides revealing connections between the minimum paths under these three distance functions, the framework provided by the universal cover leads to simplified linear-time algorithms for shortest path trees, for minimum-link paths in simple polygons, and for paths restricted to c given orientations.",
"Abstract This note describes a new data structure for answering shortest path queries inside a simple polygon. The new data structure has the same asymptotic performance as the previously known data structure (linear preprocessing after triangulation, logarithmic query time), but it is significantly less complicated.",
"Abstract Let P be a simple polygon with n sides. This paper shows how to preprocess the polygon so that, given two query points p and q inside P , the length of the shortest path inside the polygon from p to q can be found in time O (log n ). The path itself must be polygonal and can be extracted in additional time proportional to the number of turns it makes. The preprocessing consists of triangulation plus a linear amount of additional work.",
"Let @math be a simple polygon of @math vertices. We consider two-point @math shortest path queries in @math . We build a data structure of @math size in @math time such that given any two query points @math and @math , the length of an @math shortest path from @math to @math in @math can be computed in @math time, or in @math time if both @math and @math are vertices of @math , and an actual shortest path can be output in additional linear time in the number of edges of the path. To achieve the result, we propose a mountain decomposition of simple polygons, which may be interesting in its own right. Most importantly, our approach is much simpler than the previous work on this problem."
]
} |
1903.01417 | 2949403005 | Let @math be a polygonal domain of @math holes and @math vertices. We study the problem of constructing a data structure that can compute a shortest path between @math and @math in @math under the @math metric for any two query points @math and @math . To do so, a standard approach is to first find a set of @math "gateways" for @math and a set of @math "gateways" for @math such that there exists a shortest @math - @math path containing a gateway of @math and a gateway of @math , and then compute a shortest @math - @math path using these gateways. Previous algorithms all take quadratic @math time to solve this problem. In this paper, we propose a divide-and-conquer technique that solves the problem in @math time. As a consequence, we construct a data structure of @math size in @math time such that each query can be answered in @math time. | Better results are also known for one-point queries in the @math metric @cite_13 @cite_9 @cite_18 @cite_25 @cite_6 @cite_17 , i.e., @math is fixed in the input and only @math is a query point. In particular, Mitchell @cite_6 @cite_17 built a data structure of @math size in @math time that can answer each such query in @math time. Later Chen and Wang @cite_13 reduced the preprocessing time to @math if @math is already triangulated (which can be done in @math or @math time for any @math @cite_0 @cite_5 ), while the query time is still @math . | {
"cite_N": [
"@cite_18",
"@cite_9",
"@cite_6",
"@cite_0",
"@cite_5",
"@cite_13",
"@cite_25",
"@cite_17"
],
"mid": [
"1979832172",
"",
"160239005",
"2171580054",
"2106413026",
"2963551506",
"2021380525",
"2063372577"
],
"abstract": [
"The problem of finding a rectilinear shortest path amongst obstacles may be stated as follows: Given a set of obstacles in the plane find a shortest rectilinear ( L 1 ) path from a point s to a point t which avoids all obstacles. The path may touch an obstacle but may not cross an obstacle. We study the rectilinear shortest path problem for the case where the obstacles are non-intersecting simple polygons, and present an O ( n (log n ) 2 ) algorithm for finding such a path, where n is the number of vertices of the obstacles. We also study the case of rectilinear obstacles in three dimensions, and show that L 1 shortest paths can be found in O ( n 2 (log n ) 3 ) time.",
"",
"",
"Recent advances on polygon triangulation have yielded efficient algorithms for a large number of problems dealing with a single simple polygon. If the input consists of several disjoint polygons, however, it is often desirable to merge them in preprocessing so as to produce a single polygon that retains the geometric characteristics of its individual components. We give an efficient method for doing so, which combines a generalized form of Jordan sorting with the efficient use of point location and interval trees. As a corollary, we are able to triangulate a collection of p disjoint Jordan polygonal chains in time O (n + p (log p)1+e), for any fixed e > 0, where n is the total number of vertices. A variant of the algorithm gives a running time of O ((n + p log p) log log p). The performance of these solutions approaches the lower bound of Ω (n + p log p).",
"We give a deterministic algorithm for triangulating a simple polygon in linear time. The basic strategy is to build a coarse approximation of a triangulation in a bottom-up phase and then use the information computed along the way to refine the triangulation in a top-down phase. The main tools used are the polygon-cutting theorem, which provides us with a balancing scheme, and the planar separator theorem, whose role is essential in the discovery of new diagonals. Only elementary data structures are required by the algorithm. In particular, no dynamic search trees, of our algorithm.",
"Given a point s and a set of h pairwise disjoint polygonal obstacles with a total of n vertices in the plane, suppose a triangulation of the space outside the obstacles is given; we present an (O(n+h h) ) time and O(n) space algorithm for building a data structure (called shortest path map) of size O(n) such that for any query point t, the length of an (L_1 ) shortest obstacle-avoiding path from s to t can be computed in (O( n) ) time and the actual path can be reported in additional time proportional to the number of edges of the path. The previously best algorithm computes such a shortest path map in (O(n n) ) time and O(n) space. So our algorithm is faster when h is relatively small. Further, our techniques can be extended to obtain improved results for other related problems, e.g., computing the (L_1 ) geodesic Voronoi diagram for a set of point sites among the obstacles.",
"The rectilinear shortest path problem can be stated as follows: given a set of m non-intersecting simple polygonal obstacles in the plane, find a shortest L\"1-metric (rectilinear) path from a point s to a point t that avoids all the obstacles. The path can touch an obstacle but does not cross it. This paper presents an algorithm with time complexity O(n+m(lgn)^3^ ^2), which is close to the known lower bound of @W(n+mlgm) for finding such a path. Here, n is the number of vertices of all the obstacles together.",
"We present an algorithm for computingL1 shortest paths among polygonal obstacles in the plane. Our algorithm employs the \"continuous Dijkstra\" technique of propagating a \"wavefront\" and runs in timeO(E logn) and spaceO(E), wheren is the number of vertices of the obstacles andE is the number of \"events.\" By using bounds on the density of certain sparse binary matrices, we show thatE =O(n logn), implying that our algorithm is nearly optimal. We conjecture thatE =O(n), which would imply our algorithm to be optimal. Previous bounds for our problem were quadratic in time and space. Our algorithm generalizes to the case of fixed orientation metrics, yielding anO(nźź1 2 log2n) time andO(nźź1 2) space approximation algorithm for finding Euclidean shortest paths among obstacles. The algorithm further generalizes to the case of many sources, allowing us to compute anL1 Voronoi diagram for source points that lie among a collection of polygonal obstacles."
]
} |
1903.01417 | 2949403005 | Let @math be a polygonal domain of @math holes and @math vertices. We study the problem of constructing a data structure that can compute a shortest path between @math and @math in @math under the @math metric for any two query points @math and @math . To do so, a standard approach is to first find a set of @math "gateways" for @math and a set of @math "gateways" for @math such that there exists a shortest @math - @math path containing a gateway of @math and a gateway of @math , and then compute a shortest @math - @math path using these gateways. Previous algorithms all take quadratic @math time to solve this problem. In this paper, we propose a divide-and-conquer technique that solves the problem in @math time. As a consequence, we construct a data structure of @math size in @math time such that each query can be answered in @math time. | The Euclidean counterparts have also been studied. For one-point queries, Hershberger and Suri @cite_21 built a shortest path map of @math size with @math query time, and the map can be built in @math time and space. For two-point queries, Chiang and Mitchell @cite_14 built a data structure of @math size that can support @math time queries, and they also built a data structure of @math size with @math query time. Other results with tradeoffs between preprocessing and query time were also proposed in @cite_14 . Also, @cite_20 showed that with @math space one can answer each two-point query in @math time, where @math (resp., @math ) is the set of vertices of @math visible to @math (resp., @math ). @cite_22 gave a data structure of @math size that can support @math time two-point queries. | {
"cite_N": [
"@cite_14",
"@cite_21",
"@cite_22",
"@cite_20"
],
"mid": [
"2031247977",
"2058510050",
"1504893535",
"2107704676"
],
"abstract": [
"We consider the two-point query version of the fundamental geometric shortest path problem: Given a set h of polygonal obstacles iu the plane, having a total of n vertices, build a data structure such that for any two query points s and t we can efficiently determine the length, d(s,t), of an Euclidean shortest obstacle-avoiding path, *(s,t), from s to t. Additionally, our data structure should allow one to report the path x(s, t), in time proportional to its (combinatorial) size. We present various methods for solving this two-point query problem, including algorithms with o(n), O(log n+h), 0( h log n), O(log’ n) or optimal O(log n) query times, using polynomial-space data structures, with various tradeoffs between space and query time. While severa results have been known for approtimate twepoint Euclidean shortest path queries, it has been a well-publicized open problem to obtain sublinear query time for the exact version of the problem. Our methods also yield data structures for twc+point shortest path queries on nonconvex polyhedral",
"We propose an optimal-time algorithm for a classical problem in plane computational geometry: computing a shortest path between two points in the presence of polygonal obstacles. Our algorithm runs in worst-case time O(n log n) and requires O(n log n) space, where n is the total number of vertices in the obstacle polygons. The algorithm is based on an efficient implementation of wavefront propagation among polygonal obstacles, and it actually computes a planar map encoding shortest paths from a fixed source point to all other points of the plane; the map can be used to answer single-source shortest path queries in O(log n) time. The time complexity of our algorithm is a significant improvement over all previously published results on the shortest path problem. Finally, we also discuss extensions to more general shortest path problems, involving nonpoint and multiple sources.",
"We consider shortest path queries in a polygonal domain Phaving nvertices and hholes. A skeleton graph is a subgraph of a Voronoi diagram of P. Our novel algorithm utilizes a reduced skeleton graph of Pto compute a tessellation of P. It builds a data structure of size O(n2) in O(n2logn) time to support distance queries for any pair of query points in Pin O(hlogn) time.",
"In this paper, we study several geometric path query problems. Given a scene of disjoint polygonal obstacles with totally n vertices in the plane, we construct efficient data structures that enable fast reporting of an \"optimal\" obstacle-avoiding path (or its length, cost, directions, etc.) between two arbitrary query points s and t that are given in an on-line fashion. We consider geometric paths under several optimality criteria: L1 length, number of edges (called links), monotonicity with respect to a certain direction, and some combinations of length and links. Our methods are centered around the notion of gateways, a small number of easily identified points in the plane that control the paths we seek. We give efficient solutions for several special cases based upon new geometric observations. We also present solutions for the general cases based upon the computation of the minimum size visibility polygon for query points."
]
} |
1903.01503 | 2920062964 | We present a framework for creating navigable space from sparse and noisy map points generated by sparse visual SLAM methods. Our method incrementally seeds and creates local convex regions free of obstacle points along a robot's trajectory. Then a dense version of the point cloud is reconstructed through a map point regulation process where the original noisy map points are first projected onto a series of local convex hull surfaces, after which those points falling inside the convex hulls are culled. The regulated and refined map points allow human users to quickly recognize and abstract the environmental information. We have validated our proposed framework using both a public dataset and a real environmental structure, and our results reveal that the reconstructed navigable free space has small volume loss (error) compared with the ground truth, and the method is highly efficient, allowing real-time computation and online planning. | Point cloud processing methods have been well studied in the past years. Existing point cloud methods can be categorized as surface reconstruction, volume reconstruction, model fitting, and kernel-based regression frameworks. Specifically, a large body of early work aims to build surfaces from point clouds. For instance, moving least squares (MLS) based methods @cite_13 @cite_32 were developed to reconstruct point clouds produced by laser scans; projection-based greedy strategies @cite_24 were used to achieve incremental surface growing and building. Signed distance functions @cite_10 and Bayesian methods @cite_30 have also been investigated for surface reconstruction. Some other works, like @cite_0 , @cite_20 , and @cite_11 , adopted a volume-carving mechanism to obtain free space given a set of points. Typically, these methods first decompose the space into cells using 3D triangulation techniques, and then visibility constraints are used to carve out those cells traversed by visibility lines.
Different from the surface-based and volume-based reconstruction schemes, RANSAC-based model fitting methods @cite_23 @cite_36 have been used to capture the spatial structure of a given set of points. Online kernel-based learning methods @cite_27 have also been proposed to perform terrain estimation from the point cloud of a LIDAR scanner. | {
"cite_N": [
"@cite_30",
"@cite_13",
"@cite_36",
"@cite_32",
"@cite_24",
"@cite_0",
"@cite_27",
"@cite_23",
"@cite_10",
"@cite_20",
"@cite_11"
],
"mid": [
"2019032869",
"2005794784",
"2024276620",
"2058524213",
"2132563333",
"",
"28531074",
"2776645264",
"2100816864",
"2211977492",
"2771345721"
],
"abstract": [
"We present a Bayesian technique for the reconstruction and subsequent decimation of 3D surface models from noisy sensor data. The method uses oriented probabilistic models of the measurement noise and combines them with feature-enhancing prior probabilities over 3D surfaces. When applied to surface reconstruction, the method simultaneously smooths noisy regions while enhancing features such as corners. When applied to surface decimation, it finds models that closely approximate the original mesh when rendered. The method is applied in the context of computer animation where it finds decimations that minimize the visual error even under nonrigid deformations.",
"We introduce an algorithm for constructing a high-quality triangulation directly from Point Set Surfaces. Our algorithm requires no intermediate representation and no post-processing of the output, and naturally handles noisy input data, typically in the form of a set of registered range scans. It creates a triangulation where triangle size respects the geometry of the surface rather than the sampling density of the range scans. Our technique does not require normal information, but still produces a consistent orientation of the triangles, assuming the sampled surface is an orientable two-manifold. Our work is based on using Moving Least-Squares (MLS) surfaces as the underlying representation. Our technique is a novel advancing front algorithm, that bounds the Hausdorff distance to within a user-specified limit. Specifically, we introduce a way of augmenting advancing front algorithms with global information, so that triangle size adapts gracefully even when there are large changes in surface curvature. Our results show that our technique generates high-quality triangulations where other techniques fail to reconstruct the correct surface due to irregular sampling on the point cloud, noise, registration artifacts, and underlying geometric features, such as regions with high curvature gradients.",
"Geometric model fitting is a typical chicken-&-egg problem: data points should be clustered based on geometric proximity to models whose unknown parameters must be estimated at the same time. Most existing methods, including generalizations of RANSAC, greedily search for models with the most inliers (within a threshold) ignoring overall classification of points. We formulate geometric multi-model fitting as an optimal labeling problem with a global energy function balancing geometric errors and regularity of inlier clusters. Regularization based on spatial coherence (on some near-neighbor graph) and/or label costs is NP-hard. Standard combinatorial algorithms with guaranteed approximation bounds (e.g. α-expansion) can minimize such regularization energies over a finite set of labels, but they are not directly applicable to a continuum of labels, e.g. @math in line fitting. Our proposed approach (PEaRL) combines model sampling from data points as in RANSAC with iterative re-estimation of inliers and models' parameters based on a global regularization functional. This technique efficiently explores the continuum of labels in the context of energy minimization. In practice, PEaRL converges to a good quality local minimum of the energy automatically selecting a small number of models that best explain the whole data set. Our tests demonstrate that our energy-based approach significantly improves the current state of the art in geometric model fitting currently dominated by various greedy generalizations of RANSAC.",
"We introduce a robust moving least-squares technique for reconstructing a piecewise smooth surface from a potentially noisy point cloud. We use techniques from robust statistics to guide the creation of the neighborhoods used by the moving least squares (MLS) computation. This leads to a conceptually simple approach that provides a unified framework for not only dealing with noise, but also for enabling the modeling of surfaces with sharp features.Our technique is based on a new robust statistics method for outlier detection: the forward-search paradigm. Using this powerful technique, we locally classify regions of a point-set to multiple outlier-free smooth regions. This classification allows us to project points on a locally smooth region rather than a surface that is smooth everywhere, thus defining a piecewise smooth surface and increasing the numerical stability of the projection operator. Furthermore, by treating the points across the discontinuities as outliers, we are able to define sharp features. One of the nice features of our approach is that it automatically disregards outliers during the surface-fitting phase.",
"In this paper we present a method for fast surface reconstruction from large noisy datasets. Given an unorganized 3D point cloud, our algorithm recreates the underlying surface's geometrical properties using data resampling and a robust triangulation algorithm in near realtime. For resulting smooth surfaces, the data is resampled with variable densities according to previously estimated surface curvatures. Incremental scans are easily incorporated into an existing surface mesh, by determining the respective overlapping area and reconstructing only the updated part of the surface mesh. The proposed framework is flexible enough to be integrated with additional point label information, where groups of points sharing the same label are clustered together and can be reconstructed separately, thus allowing fast updates via triangular mesh decoupling. To validate our approach, we present results obtained from laser scans acquired in both indoor and outdoor environments.",
"",
"Accurate terrain estimation is critical for autonomous offroad navigation. Reconstruction of a 3D surface allows rough and hilly ground to be represented, yielding faster driving and better planning and control. However, data from a 3D sensor samples the terrain unevenly, quickly becoming sparse at longer ranges and containing large voids because of occlusions and inclines. The proposed approach uses online kernel-based learning to estimate a continuous surface over the area of interest while providing upper and lower bounds on that surface. Unlike other approaches, visibility information is exploited to constrain the terrain surface and increase precision, and an efficient gradient-based optimization allows for realtime implementation.",
"We propose a novel framework for reconstructing lightweight polygonal surfaces from point clouds. Unlike traditional methods that focus on either extracting good geometric primitives or obtaining proper arrangements of primitives, the emphasis of this work lies in intersecting the primitives (planes only) and seeking for an appropriate combination of them to obtain a manifold polygonal surface model without boundary. We show that reconstruction from point clouds can be cast as a binary labeling problem. Our method is based on a hypothesizing and selection strategy. We first generate a reasonably large set of face candidates by intersecting the extracted planar primitives. Then an optimal subset of the candidate faces is selected through optimization. Our optimization is based on a binary linear programming formulation under hard constraints that enforce the final polygonal surface model to be manifold and watertight. Experiments on point clouds from various sources demonstrate that our method can generate lightweight polygonal surface models of arbitrary piecewise planar objects. Besides, our method is capable of recovering sharp features and is robust to noise, outliers, and missing data.",
"This thesis describes a general method for automatic reconstruction of accurate, concise, piecewise smooth surfaces from unorganized 3D points. Instances of surface reconstruction arise in numerous scientific and engineering applications, including reverse-engineering--the automatic generation of CAD models from physical objects. Previous surface reconstruction methods have typically required additional knowledge, such as structure in the data, known surface genus, or orientation information. In contrast, the method outlined in this thesis requires only the 3D coordinates of the data points. From the data, the method is able to automatically infer the topological type of the surface, its geometry, and the presence and location of features such as boundaries, creases, and corners. The reconstruction method has three major phases: (1) initial surface estimation, (2) mesh optimization, and (3) piecewise smooth surface optimization. A key ingredient in phase 3, and another principal contribution of this thesis, is the introduction of a new class of piecewise smooth representations based on subdivision. The effectiveness of the three-phase reconstruction method is demonstrated on a number of examples using both simulated and real data. Phases 2 and 3 of the surface reconstruction method can also be used to approximate existing surface models. By casting surface approximation as a global optimization problem with an energy function that directly measures deviation of the approximation from the original surface, models are obtained that exhibit excellent accuracy to conciseness trade-offs. Examples of piecewise linear and piecewise smooth approximations are generated for various surfaces, including meshes, NURBS surfaces, CSG models, and implicit surfaces.",
"Urban reconstruction from a video captured by a surveying vehicle constitutes a core module of automated mapping. When computational power represents a limited resource and, a detailed map is not the primary goal, the reconstruction can be performed incrementally, from a monocular video, carving a 3D Delaunay triangulation of sparse points; this allows online incremental mapping for tasks such as traversability analysis or obstacle avoidance. To exploit the sharp edges of urban landscape, we propose to use a Delaunay triangulation of Edge-Points, which are the 3D points corresponding to image edges. These points constrain the edges of the 3D Delaunay triangulation to real-world edges. Besides the use of the Edge-Points, a second contribution of this paper is the Inverse Cone Heuristic that preemptively avoids the creation of artifacts in the reconstructed manifold surface. We force the reconstruction of a manifold surface since it makes it possible to apply computer graphics or photometric refinement algorithms to the output mesh. We evaluated our approach on four real sequences of the public available KITTI dataset by comparing the incremental reconstruction against Velodyne measurements.",
"Autonomous navigation, which consists of a systematic integration of localization, mapping, motion planning and control, is the core capability of mobile robotic systems. However, most research considers only isolated technical modules. There exist significant gaps between maps generated by SLAM algorithms and maps required for motion planning. This paper presents a complete online system that consists in three modules: incremental SLAM, real-time dense mapping, and free space extraction. The obtained free-space volume (i.e. a tessellation of tetrahedra) can be served as regular geometric constraints for motion planning. Our system runs in real-time thanks to the engineering decisions proposed to increase the system efficiency. We conduct extensive experiments on the KITTI dataset to demonstrate the run-time performance. Qualitative and quantitative results on mapping accuracy are also shown. For the benefit of the community, we make the source code public."
]
} |
1903.01503 | 2920062964 | We present a framework for creating navigable space from sparse and noisy map points generated by sparse visual SLAM methods. Our method incrementally seeds and creates local convex regions free of obstacle points along a robot's trajectory. Then a dense version of the point cloud is reconstructed through a map point regulation process where the original noisy map points are first projected onto a series of local convex hull surfaces, after which those points falling inside the convex hulls are culled. The regulated and refined map points allow human users to quickly recognize and abstract the environmental information. We have validated our proposed framework using both a public dataset and a real environmental structure, and our results reveal that the reconstructed navigable free space has small volume loss (error) compared with the ground truth, and the method is highly efficient, allowing real-time computation and online planning. | Related work also includes various mapping approaches, as our work utilizes the 3D map points generated from existing mapping methods. Existing map forms for robot navigation include, for example, the occupancy grid map @cite_37 , 3D OctoMap @cite_35 , signed distance map @cite_31 @cite_34 , topological map @cite_6 @cite_14 , and convex region growing map @cite_9 @cite_33 @cite_18 , etc. In this work we are particularly interested in the map points generated from sparse visual features such as those from ORB-SLAM @cite_3 and SVO @cite_15 . | {
"cite_N": [
"@cite_35",
"@cite_37",
"@cite_14",
"@cite_18",
"@cite_33",
"@cite_15",
"@cite_9",
"@cite_6",
"@cite_3",
"@cite_31",
"@cite_34"
],
"mid": [
"2133844819",
"1999050017",
"2962794880",
"2587415290",
"1605218591",
"1970504153",
"",
"2963872397",
"1612997784",
"2585698528",
"2561394090"
],
"abstract": [
"Three-dimensional models provide a volumetric representation of space which is important for a variety of robotic applications including flying robots and robots that are equipped with manipulators. In this paper, we present an open-source framework to generate volumetric 3D environment models. Our mapping approach is based on octrees and uses probabilistic occupancy estimation. It explicitly represents not only occupied space, but also free and unknown areas. Furthermore, we propose an octree map compression method that keeps the 3D models compact. Our framework is available as an open-source C++ library and has already been successfully applied in several robotics projects. We present a series of experimental results carried out with real robots and on publicly available real-world datasets. The results demonstrate that our approach is able to update the representation efficiently and models the data consistently while keeping the memory requirement at a minimum.",
"An approach to robot perception and world modeling that uses a probabilistic tesselated representation of spatial information called the occupancy grid is reviewed. The occupancy grid is a multidimensional random field that maintains stochastic estimates of the occupancy state of the cells in a spatial lattice. To construct a sensor-derived map of the robot's world, the cell state estimates are obtained by interpreting the incoming range readings using probabilistic sensor models. Bayesian estimation procedures allow the incremental updating of the occupancy grid, using readings taken from several sensors over multiple points of view. The use of occupancy grids from mapping and for navigation is examined. Operations on occupancy grids and extensions of the occupancy grid framework are briefly considered.",
"Micro-Aerial Vehicles (MAVs) have the advantage of moving freely in 3D space. However, creating compact and sparse map representations that can be efficiently used for planning for such robots is still an open problem. In this paper, we take maps built from noisy sensor data and construct a sparse graph containing topological information that can be used for 3D planning. We use a Euclidean Signed Distance Field, extract a 3D Generalized Voronoi Diagram (GVD), and obtain a thin skeleton diagram representing the topological structure of the environment. We then convert this skeleton diagram into a sparse graph, which we show is resistant to noise and changes in resolution. We demonstrate global planning over this graph, and the orders of magnitude speed-up it offers over other common planning methods. We validate our planning algorithm in real maps built onboard an MAV, using RGB-D sensing.",
"There is extensive literature on using convex optimization to derive piece-wise polynomial trajectories for controlling differential flat systems with applications to three-dimensional flight for Micro Aerial Vehicles. In this work, we propose a method to formulate trajectory generation as a quadratic program (QP) using the concept of a Safe Flight Corridor (SFC). The SFC is a collection of convex overlapping polyhedra that models free space and provides a connected path from the robot to the goal position. We derive an efficient convex decomposition method that builds the SFC from a piece-wise linear skeleton obtained using a fast graph search technique. The SFC provides a set of linear inequality constraints in the QP allowing real-time motion planning. Because the range and field of view of the robot's sensors are limited, we develop a framework of Receding Horizon Planning, which plans trajectories within a finite footprint in the local map, continuously updating the trajectory through a re-planning process. The re-planning process takes between 50 and 300 ms for a large and cluttered map. We show the feasibility of our approach, its completeness and performance, with applications to high-speed flight in both simulated and physical experiments using quadrotors.",
"We present a new approach to the design of smooth trajectories for quadrotor unmanned aerial vehicles (UAVs), which are free of collisions with obstacles along their entire length. To avoid the non-convex constraints normally required for obstacle-avoidance, we perform a mixed-integer optimization in which polynomial trajectories are assigned to convex regions which are known to be obstacle-free. Prior approaches have used the faces of the obstacles themselves to define these convex regions. We instead use IRIS, a recently developed technique for greedy convex segmentation [1], to pre-compute convex regions of safe space. This results in a substantially reduced number of integer variables, which improves the speed with which the optimization can be solved to its global optimum, even for tens or hundreds of obstacle faces. In addition, prior approaches have typically enforced obstacle avoidance at a finite set of sample or knot points. We introduce a technique based on sums-of-squares (SOS) programming that allows us to ensure that the entire piecewise polynomial trajectory is free of collisions using convex constraints. We demonstrate this technique in 2D and in 3D using a dynamical model in the Drake toolbox for Matlab [2].",
"We propose a semi-direct monocular visual odometry algorithm that is precise, robust, and faster than current state-of-the-art methods. The semi-direct approach eliminates the need of costly feature extraction and robust matching techniques for motion estimation. Our algorithm operates directly on pixel intensities, which results in subpixel precision at high frame-rates. A probabilistic mapping method that explicitly models outlier measurements is used to estimate 3D points, which results in fewer outliers and more reliable points. Precise and high frame-rate motion estimation brings increased robustness in scenes of little, repetitive, and high-frequency texture. The algorithm is applied to micro-aerial-vehicle state-estimation in GPS-denied environments and runs at 55 frames per second on the onboard embedded computer and at more than 300 frames per second on a consumer laptop. We call our approach SVO (Semi-direct Visual Odometry) and release our implementation as open-source software.",
"",
"Visual robot navigation within large-scale, semistructured environments deals with various challenges such as computation intensive path planning algorithms or insufficient knowledge about traversable spaces. Moreover, many state-of-the-art navigation approaches only operate locally instead of gaining a more conceptual understanding of the planning objective. This limits the complexity of tasks a robot can accomplish and makes it harder to deal with uncertainties that are present in the context of real-time robotics applications. In this work, we present Topomap, a framework which simplifies the navigation task by providing a map to the robot which is tailored for path planning use. This novel approach transforms a sparse feature-based map from a visual Simultaneous Localization And Mapping (SLAM) system into a three-dimensional topological map. This is done in two steps. First, we extract occupancy information directly from the noisy sparse point cloud. Then, we create a set of convex free-space clusters, which are the vertices of the topological map. We show that this representation improves the efficiency of global planning, and we provide a complete derivation of our algorithm. Planning experiments on real world datasets demonstrate that we achieve similar performance as RRT* with significantly lower computation times and storage requirements. Finally, we test our algorithm on a mobile robotic platform to prove its advantages.",
"This paper presents ORB-SLAM, a feature-based monocular simultaneous localization and mapping (SLAM) system that operates in real time, in small and large indoor and outdoor environments. The system is robust to severe motion clutter, allows wide baseline loop closing and relocalization, and includes full automatic initialization. Building on excellent algorithms of recent years, we designed from scratch a novel system that uses the same features for all SLAM tasks: tracking, mapping, relocalization, and loop closing. A survival of the fittest strategy that selects the points and keyframes of the reconstruction leads to excellent robustness and generates a compact and trackable map that only grows if the scene content changes, allowing lifelong operation. We present an exhaustive evaluation in 27 sequences from the most popular datasets. ORB-SLAM achieves unprecedented performance with respect to other state-of-the-art monocular SLAM approaches. For the benefit of the community, we make the source code public.",
"",
"Truncated Signed Distance Fields (TSDFs) have become a popular tool in 3D reconstruction, as they allow building very high-resolution models of the environment in real-time on GPU. However, they have rarely been used for planning on robotic platforms, mostly due to high computational and memory requirements. We propose to reduce these requirements by using large voxel sizes, and extend the standard TSDF representation to be faster and better model the environment at these scales. We also propose a method to build Euclidean Signed Distance Fields (ESDFs), which are a common representation for planning, incrementally out of our TSDF representation. ESDFs provide Euclidean distance to the nearest obstacle at any point in the map, and also provide collision gradient information for use with optimization-based planners. We validate the reconstruction accuracy and real-time performance of our combined system on both new and standard datasets from stereo and RGB-D imagery. The complete system will be made available as an open-source library called voxblox."
]
} |
1903.01503 | 2920062964 | We present a framework for creating navigable space from sparse and noisy map points generated by sparse visual SLAM methods. Our method incrementally seeds and creates local convex regions free of obstacle points along a robot's trajectory. Then a dense version of the point cloud is reconstructed through a map point regulation process where the original noisy map points are first projected onto a series of local convex hull surfaces, after which those points falling inside the convex hulls are culled. The regulated and refined map points allow human users to quickly recognize and abstract the environmental information. We have validated our proposed framework using both a public dataset and a real environmental structure, and our results reveal that the reconstructed navigable free space has small volume loss (error) compared with the ground truth, and the method is highly efficient, allowing real-time computation and online planning. | However, in our problem where only sparse map points are provided as inputs, existing point cloud processing methods exhibit limitations. First, the majority of existing methods @cite_13 @cite_32 @cite_24 @cite_10 @cite_30 assume that the points are captured by ranging sensors such as LiDARs, sonars, or depth cameras, and therefore that the points are dense and evenly (uniformly) distributed like a mesh surface. We cannot use those approaches because we are unable to obtain well-estimated normals and curvatures of the point set with those techniques. Volume reconstruction methods @cite_0 @cite_17 @cite_11 need to perform 3D triangulation on all the points. This, however, is not necessary if building a navigable space is the final goal: we want to build a map with only a minimal set of points defining the free space, instead of all of the (possibly noisy) points. In addition, the computational requirement of the 3D triangulation and the post-processing can be prohibitive.
Plane fitting methods @cite_23 @cite_29 also fail in our case due to the highly ambiguous structure of the points. | {
"cite_N": [
"@cite_30",
"@cite_13",
"@cite_11",
"@cite_29",
"@cite_32",
"@cite_24",
"@cite_0",
"@cite_23",
"@cite_10",
"@cite_17"
],
"mid": [
"2019032869",
"2005794784",
"2771345721",
"",
"2058524213",
"2132563333",
"",
"2776645264",
"2100816864",
""
],
"abstract": [
"We present a Bayesian technique for the reconstruction and subsequent decimation of 3D surface models from noisy sensor data. The method uses oriented probabilistic models of the measurement noise and combines them with feature-enhancing prior probabilities over 3D surfaces. When applied to surface reconstruction, the method simultaneously smooths noisy regions while enhancing features such as corners. When applied to surface decimation, it finds models that closely approximate the original mesh when rendered. The method is applied in the context of computer animation where it finds decimations that minimize the visual error even under nonrigid deformations.",
"We introduce an algorithm for constructing a high-quality triangulation directly from Point Set Surfaces. Our algorithm requires no intermediate representation and no post-processing of the output, and naturally handles noisy input data, typically in the form of a set of registered range scans. It creates a triangulation where triangle size respects the geometry of the surface rather than the sampling density of the range scans. Our technique does not require normal information, but still produces a consistent orientation of the triangles, assuming the sampled surface is an orientable two-manifold. Our work is based on using Moving Least-Squares (MLS) surfaces as the underlying representation. Our technique is a novel advancing front algorithm, that bounds the Hausdorff distance to within a user-specified limit. Specifically, we introduce a way of augmenting advancing front algorithms with global information, so that triangle size adapts gracefully even when there are large changes in surface curvature. Our results show that our technique generates high-quality triangulations where other techniques fail to reconstruct the correct surface due to irregular sampling on the point cloud, noise, registration artifacts, and underlying geometric features, such as regions with high curvature gradients.",
"Autonomous navigation, which consists of a systematic integration of localization, mapping, motion planning and control, is the core capability of mobile robotic systems. However, most research considers only isolated technical modules. There exist significant gaps between maps generated by SLAM algorithms and maps required for motion planning. This paper presents a complete online system that consists in three modules: incremental SLAM, real-time dense mapping, and free space extraction. The obtained free-space volume (i.e. a tessellation of tetrahedra) can be served as regular geometric constraints for motion planning. Our system runs in real-time thanks to the engineering decisions proposed to increase the system efficiency. We conduct extensive experiments on the KITTI dataset to demonstrate the run-time performance. Qualitative and quantitative results on mapping accuracy are also shown. For the benefit of the community, we make the source code public.",
"",
"We introduce a robust moving least-squares technique for reconstructing a piecewise smooth surface from a potentially noisy point cloud. We use techniques from robust statistics to guide the creation of the neighborhoods used by the moving least squares (MLS) computation. This leads to a conceptually simple approach that provides a unified framework for not only dealing with noise, but also for enabling the modeling of surfaces with sharp features. Our technique is based on a new robust statistics method for outlier detection: the forward-search paradigm. Using this powerful technique, we locally classify regions of a point-set to multiple outlier-free smooth regions. This classification allows us to project points on a locally smooth region rather than a surface that is smooth everywhere, thus defining a piecewise smooth surface and increasing the numerical stability of the projection operator. Furthermore, by treating the points across the discontinuities as outliers, we are able to define sharp features. One of the nice features of our approach is that it automatically disregards outliers during the surface-fitting phase.",
"In this paper we present a method for fast surface reconstruction from large noisy datasets. Given an unorganized 3D point cloud, our algorithm recreates the underlying surface's geometrical properties using data resampling and a robust triangulation algorithm in near realtime. For resulting smooth surfaces, the data is resampled with variable densities according to previously estimated surface curvatures. Incremental scans are easily incorporated into an existing surface mesh, by determining the respective overlapping area and reconstructing only the updated part of the surface mesh. The proposed framework is flexible enough to be integrated with additional point label information, where groups of points sharing the same label are clustered together and can be reconstructed separately, thus allowing fast updates via triangular mesh decoupling. To validate our approach, we present results obtained from laser scans acquired in both indoor and outdoor environments.",
"",
"We propose a novel framework for reconstructing lightweight polygonal surfaces from point clouds. Unlike traditional methods that focus on either extracting good geometric primitives or obtaining proper arrangements of primitives, the emphasis of this work lies in intersecting the primitives (planes only) and seeking for an appropriate combination of them to obtain a manifold polygonal surface model without boundary. We show that reconstruction from point clouds can be cast as a binary labeling problem. Our method is based on a hypothesizing and selection strategy. We first generate a reasonably large set of face candidates by intersecting the extracted planar primitives. Then an optimal subset of the candidate faces is selected through optimization. Our optimization is based on a binary linear programming formulation under hard constraints that enforce the final polygonal surface model to be manifold and watertight. Experiments on point clouds from various sources demonstrate that our method can generate lightweight polygonal surface models of arbitrary piecewise planar objects. Besides, our method is capable of recovering sharp features and is robust to noise, outliers, and missing data.",
"This thesis describes a general method for automatic reconstruction of accurate, concise, piecewise smooth surfaces from unorganized 3D points. Instances of surface reconstruction arise in numerous scientific and engineering applications, including reverse-engineering--the automatic generation of CAD models from physical objects. Previous surface reconstruction methods have typically required additional knowledge, such as structure in the data, known surface genus, or orientation information. In contrast, the method outlined in this thesis requires only the 3D coordinates of the data points. From the data, the method is able to automatically infer the topological type of the surface, its geometry, and the presence and location of features such as boundaries, creases, and corners. The reconstruction method has three major phases: (1) initial surface estimation, (2) mesh optimization, and (3) piecewise smooth surface optimization. A key ingredient in phase 3, and another principal contribution of this thesis, is the introduction of a new class of piecewise smooth representations based on subdivision. The effectiveness of the three-phase reconstruction method is demonstrated on a number of examples using both simulated and real data. Phases 2 and 3 of the surface reconstruction method can also be used to approximate existing surface models. By casting surface approximation as a global optimization problem with an energy function that directly measures deviation of the approximation from the original surface, models are obtained that exhibit excellent accuracy to conciseness trade-offs. Examples of piecewise linear and piecewise smooth approximations are generated for various surfaces, including meshes, NURBS surfaces, CSG models, and implicit surfaces.",
""
]
} |
1903.01344 | 2920074546 | In this paper we propose a hybrid architecture of actor-critic algorithms for reinforcement learning in parameterized action space, which consists of multiple parallel sub-actor networks to decompose the structured action space into simpler action spaces along with a critic network to guide the training of all sub-actor networks. While this paper is mainly focused on parameterized action space, the proposed architecture, which we call hybrid actor-critic, can be extended for more general action spaces which have a hierarchical structure. We present an instance of the hybrid actor-critic architecture based on proximal policy optimization (PPO), which we refer to as hybrid proximal policy optimization (H-PPO). Our experiments test H-PPO on a collection of tasks with parameterized action space, where H-PPO demonstrates superior performance over previous methods of parameterized action reinforcement learning. | Policy gradient @cite_12 is another class of RL algorithms which optimizes a stochastic policy @math parameterized by @math to maximize the expected policy value @math. The gradient of the stochastic policy is given by the policy gradient theorem @cite_12. As an alternative, the policy gradient could also be computed with the advantage function @math. | {
"cite_N": [
"@cite_12"
],
"mid": [
"2155027007"
],
"abstract": [
"Function approximation is essential to reinforcement learning, but the standard approach of approximating a value function and determining a policy from it has so far proven theoretically intractable. In this paper we explore an alternative approach in which the policy is explicitly represented by its own function approximator, independent of the value function, and is updated according to the gradient of expected reward with respect to the policy parameters. Williams's REINFORCE method and actor-critic methods are examples of this approach. Our main new result is to show that the gradient can be written in a form suitable for estimation from experience aided by an approximate action-value or advantage function. Using this result, we prove for the first time that a version of policy iteration with arbitrary differentiable function approximation is convergent to a locally optimal policy."
]
} |
1903.01344 | 2920074546 | In this paper we propose a hybrid architecture of actor-critic algorithms for reinforcement learning in parameterized action space, which consists of multiple parallel sub-actor networks to decompose the structured action space into simpler action spaces along with a critic network to guide the training of all sub-actor networks. While this paper is mainly focused on parameterized action space, the proposed architecture, which we call hybrid actor-critic, can be extended for more general action spaces which have a hierarchical structure. We present an instance of the hybrid actor-critic architecture based on proximal policy optimization (PPO), which we refer to as hybrid proximal policy optimization (H-PPO). Our experiments test H-PPO on a collection of tasks with parameterized action space, where H-PPO demonstrates superior performance over previous methods of parameterized action reinforcement learning. | To deal with the fact that a parameterized action space contains both discrete actions and continuous parameters, one straightforward approach is to directly discretize the continuous part of the action space and turn it into a large discrete set (for example with the tile coding approach @cite_19). This trivial method loses the advantages of continuous action space for fine-grained control, and often ends up with an extremely large discrete action space. | {
"cite_N": [
"@cite_19"
],
"mid": [
"2144366468"
],
"abstract": [
"Reinforcement learning (RL) is a powerful abstraction of sequential decision making that has an established theoretical foundation and has proven effective in a variety of small, simulated domains. The success of RL on real-world problems with large, often continuous state and action spaces hinges on effective function approximation. Of the many function approximation schemes proposed, tile coding strikes an empirically successful balance among representational power, computational cost, and ease of use and has been widely adopted in recent RL work. This paper demonstrates that the performance of tile coding is quite sensitive to parameterization. We present detailed experiments that isolate the effects of parameter choices and provide guidance to their setting. We further illustrate that no single parameterization achieves the best performance throughout the learning curve, and contribute an automated technique for adjusting tile-coding parameters online. Our experimental findings confirm the superiority of adaptive parameterization to fixed settings. This work aims to automate the choice of approximation scheme not only on a problem basis but also throughout the learning process, eliminating the need for a substantial tuning effort."
]
} |
1903.01344 | 2920074546 | In this paper we propose a hybrid architecture of actor-critic algorithms for reinforcement learning in parameterized action space, which consists of multiple parallel sub-actor networks to decompose the structured action space into simpler action spaces along with a critic network to guide the training of all sub-actor networks. While this paper is mainly focused on parameterized action space, the proposed architecture, which we call hybrid actor-critic, can be extended for more general action spaces which have a hierarchical structure. We present an instance of the hybrid actor-critic architecture based on proximal policy optimization (PPO), which we refer to as hybrid proximal policy optimization (H-PPO). Our experiments test H-PPO on a collection of tasks with parameterized action space, where H-PPO demonstrates superior performance over previous methods of parameterized action reinforcement learning. | proposed a hierarchical approach for RL in parameterized action space where the parameter policy is conditioned on the discrete action policy, and used TRPO and Stochastic Value Gradient @cite_0 to train such an architecture. However, they also found that this method could be unstable due to the joint learning between the discrete action policy and the parameter policy. | {
"cite_N": [
"@cite_0"
],
"mid": [
"1906772730"
],
"abstract": [
"We present a unified framework for learning continuous control policies using backpropagation. It supports stochastic control by treating stochasticity in the Bellman equation as a deterministic function of exogenous noise. The product is a spectrum of general policy gradient algorithms that range from model-free methods with value functions to model-based methods without value functions. We use learned models but only require observations from the environment in- stead of observations from model-predicted trajectories, minimizing the impact of compounded model errors. We apply these algorithms first to a toy stochastic control problem and then to several physics-based control problems in simulation. One of these variants, SVG(1), shows the effectiveness of learning models, value functions, and policies simultaneously in continuous domains."
]
} |
1903.01072 | 2950212751 | Recent works in image captioning have shown very promising raw performance. However, we realize that most of these encoder-decoder style networks with attention do not scale naturally to large vocabulary size, making them difficult to be deployed on embedded systems with limited hardware resources. This is because the size of word and output embedding matrices grow proportionally with the size of vocabulary, adversely affecting the compactness of these networks. To address this limitation, this paper introduces a brand new idea in the domain of image captioning. That is, we tackle the problem of compactness of image captioning models which is hitherto unexplored. We showed that our proposed model, named COMIC for COMpact Image Captioning, achieves comparable results in five common evaluation metrics with state-of-the-art approaches on both MS-COCO and InstaPIC-1.1M datasets despite having an embedding vocabulary size that is 39x - 99x smaller. The source code and models are available at: https://github.com/jiahuei/COMIC-Compact-Image-Captioning-with-Attention | Summary. Compared to regular image captioning models, COMIC has vastly fewer learnable parameters, leading to reduced requirement on GPU memory and storage. A closely related work to ours is LightRNN @cite_26 but with a few differences - i) COMIC requires only a single word embedding matrix (as opposed to two in LightRNN); ii) COMIC does not necessitate any changes in the model architecture (LightRNN requires a word embedding table); and iii) LightRNN is applied for language modelling only. On the other hand, our proposed method is orthogonal to compression and pruning based methods such as @cite_51 @cite_22. Compression methods encode the trained weights of a full CNN into a smaller representation, while pruning methods are applied only after the full dense model has started the training process.
In contrast, our method directly reduces the number of learnable parameters in the first place, thus producing a compact model. Moreover, @cite_51 @cite_22 are applied for image classification instead of image captioning. We believe that the aforementioned methods can be applied on top of COMIC to achieve even higher savings in terms of storage and parameters. | {
"cite_N": [
"@cite_26",
"@cite_51",
"@cite_22"
],
"mid": [
"2546915671",
"2119144962",
"2300242332"
],
"abstract": [
"Recurrent neural networks (RNNs) have achieved state-of-the-art performances in many natural language processing tasks, such as language modeling and machine translation. However, when the vocabulary is large, the RNN model will become very big (e.g., possibly beyond the memory capacity of a GPU device) and its training will become very inefficient. In this work, we propose a novel technique to tackle this challenge. The key idea is to use 2-Component (2C) shared embedding for word representations. We allocate every word in the vocabulary into a table, each row of which is associated with a vector, and each column associated with another vector. Depending on its position in the table, a word is jointly represented by two components: a row vector and a column vector. Since the words in the same row share the row vector and the words in the same column share the column vector, we only need @math vectors to represent a vocabulary of @math unique words, which are far less than the @math vectors required by existing approaches. Based on the 2-Component shared embedding, we design a new RNN algorithm and evaluate it using the language modeling task on several benchmark datasets. The results show that our algorithm significantly reduces the model size and speeds up the training process, without sacrifice of accuracy (it achieves similar, if not better, perplexity as compared to state-of-the-art language models). Remarkably, on the One-Billion-Word benchmark Dataset, our algorithm achieves comparable perplexity to previous language models, whilst reducing the model size by a factor of 40-100, and speeding up the training process by a factor of 2. We name our proposed algorithm LightRNN to reflect its very small model size and very high training speed.",
"Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems with limited hardware resources. To address this limitation, we introduce \"deep compression\", a three stage pipeline: pruning, trained quantization and Huffman coding, that work together to reduce the storage requirement of neural networks by 35x to 49x without affecting their accuracy. Our method first prunes the network by learning only the important connections. Next, we quantize the weights to enforce weight sharing, finally, we apply Huffman coding. After the first two steps we retrain the network to fine tune the remaining connections and the quantized centroids. Pruning, reduces the number of connections by 9x to 13x; Quantization then reduces the number of bits that represent each connection from 32 to 5. On the ImageNet dataset, our method reduced the storage required by AlexNet by 35x, from 240MB to 6.9MB, without loss of accuracy. Our method reduced the size of VGG-16 by 49x from 552MB to 11.3MB, again with no loss of accuracy. This allows fitting the model into on-chip SRAM cache rather than off-chip DRAM memory. Our compression method also facilitates the use of complex neural networks in mobile applications where application size and download bandwidth are constrained. Benchmarked on CPU, GPU and mobile GPU, compressed network has 3x to 4x layerwise speedup and 3x to 7x better energy efficiency.",
"We propose two efficient approximations to standard convolutional neural networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks, the filters are approximated with binary values resulting in 32x memory saving. In XNOR-Networks, both the filters and the input to convolutional layers are binary. XNOR-Networks approximate convolutions using primarily binary operations. This results in 58x faster convolutional operations (in terms of number of the high precision operations) and 32x memory savings. XNOR-Nets offer the possibility of running state-of-the-art networks on CPUs (rather than GPUs) in real-time. Our binary networks are simple, accurate, efficient, and work on challenging visual tasks. We evaluate our approach on the ImageNet classification task. The classification accuracy with a Binary-Weight-Network version of AlexNet is the same as the full-precision AlexNet. We compare our method with recent network binarization methods, BinaryConnect and BinaryNets, and outperform these methods by large margins on ImageNet, more than 16% in top-1 accuracy. Our code is available at: http://allenai.org/plato/xnornet."
]
} |
1903.01069 | 2920450151 | One characteristic of human visual perception is the presence of 'Gestalt phenomena,' that is, that the whole is something other than the sum of its parts. A natural question is whether image-recognition networks show similar effects. Our paper investigates one particular type of Gestalt phenomenon, the law of closure, in the context of a feedforward image classification neural network (NN). This is a robust effect in human perception, but experiments typically rely on measurements (e.g., reaction time) that are not available for artificial neural nets. We describe a protocol for identifying closure effect in NNs, and report on the results of experiments with simple visual stimuli. Our findings suggest that NNs trained with natural images do exhibit closure, in contrast to networks with randomized weights or networks that have been trained on visually random data. Furthermore, the closure effect reflects something beyond good feature extraction; it is correlated with the network's higher layer features and ability to generalize. | The key idea of the Gestalt psychology school is that we perceive individual sensory stimuli as meaningful wholes @cite_19. Further, the Gestalt psychologists maintained that when sensory elements are combined, the elements form a new pattern or configuration. For example, when you hear musical notes, a melody emerges from their combinations, something that did not exist in individual elements. In other words, the whole is different from the sum of its parts @cite_42. This overarching idea explains many phenomena of human perception @cite_19, one of which is illusory contours (set Illusory in Fig. ); the brain has a need to see familiar simple objects and has a tendency to create a 'whole' image from individual elements. | {
"cite_N": [
"@cite_19",
"@cite_42"
],
"mid": [
"1748744376",
"2060565253"
],
"abstract": [
"Theoretically I might say there were 327 brightnesses and nuances of colour. Do I have \"327\"? No. I have sky, house, and trees. It is impossible to achieve \"327 \" as such. And yet even though such droll calculation were possible and implied, say, for the house 120, the trees 90, the sky 117 -I should at least have this arrangement and division of the total, and not, say, 127 and 100 and 100; or 150 and 177.",
"Routledge is now re-issuing this prestigious series of 204 volumes originally published between 1910 and 1965. The titles include works by key figures such as C.G. Jung, Sigmund Freud, Jean Piaget, Otto Rank, James Hillman, Erich Fromm, Karen Horney and Susan Isaacs. Each volume is available on its own, as part of a themed mini-set, or as part of a specially-priced 204-volume set. A brochure listing each title in the \"International Library of Psychology\" series is available upon request."
]
} |
1903.01069 | 2920450151 | One characteristic of human visual perception is the presence of 'Gestalt phenomena,' that is, that the whole is something other than the sum of its parts. A natural question is whether image-recognition networks show similar effects. Our paper investigates one particular type of Gestalt phenomenon, the law of closure, in the context of a feedforward image classification neural network (NN). This is a robust effect in human perception, but experiments typically rely on measurements (e.g., reaction time) that are not available for artificial neural nets. We describe a protocol for identifying closure effect in NNs, and report on the results of experiments with simple visual stimuli. Our findings suggest that NNs trained with natural images do exhibit closure, in contrast to networks with randomized weights or networks that have been trained on visually random data. Furthermore, the closure effect reflects something beyond good feature extraction; it is correlated with the network's higher layer features and ability to generalize. | Leveraging the experimental framework of classical psychology to study NNs is an important yet under-explored area of research. Recent work in this direction includes investigating shape biases in NNs @cite_28 or measuring abstract reasoning ability using tests designed for humans @cite_48. Our work aims to continue this effort. | {
"cite_N": [
"@cite_28",
"@cite_48"
],
"mid": [
"2729557715",
"2952828155"
],
"abstract": [
"Deep neural networks (DNNs) have achieved unprecedented performance on a wide range of complex tasks, rapidly outpacing our understanding of the nature of their solutions. This has caused a recent surge of interest in methods for rendering modern neural systems more interpretable. In this work, we propose to address the interpretability problem in modern DNNs using the rich history of problem descriptions, theories and experimental methods developed by cognitive psychologists to study the human mind. To explore the potential value of these tools, we chose a well-established analysis from developmental psychology that explains how children learn word labels for objects, and applied that analysis to DNNs. Using datasets of stimuli inspired by the original cognitive psychology experiments, we find that state-of-the-art one shot learning models trained on ImageNet exhibit a similar bias to that observed in humans: they prefer to categorize objects according to shape rather than color. The magnitude of this shape bias varies greatly among architecturally identical, but differently seeded models, and even fluctuates within seeds throughout training, despite nearly equivalent classification performance. These results demonstrate the capability of tools from cognitive psychology for exposing hidden computational properties of DNNs, while concurrently providing us with a computational model for human word learning.",
"Whether neural networks can learn abstract reasoning or whether they merely rely on superficial statistics is a topic of recent debate. Here, we propose a dataset and challenge designed to probe abstract reasoning, inspired by a well-known human IQ test. To succeed at this challenge, models must cope with various 'generalisation regimes' in which the training and test data differ in clearly-defined ways. We show that popular models such as ResNets perform poorly, even when the training and test sets differ only minimally, and we present a novel architecture, with a structure designed to encourage reasoning, that does significantly better. When we vary the way in which the test questions and training data differ, we find that our model is notably proficient at certain forms of generalisation, but notably weak at others. We further show that the model's ability to generalise improves markedly if it is trained to predict symbolic explanations for its answers. Altogether, we introduce and explore ways to both measure and induce stronger abstract reasoning in neural networks. Our freely-available dataset should motivate further progress in this direction."
]
} |
1903.01113 | 2919479130 | Increasing adoption of scientific workflows in the community has urged for the development of multi-tenant platforms that provide these workflows' execution as a service. The Workflow as a Service (WaaS) concept has been brought up by researchers to address the future design of Workflow Management Systems (WMS) that can serve a large number of users from a single point of service. This platform differs from a traditional WMS in handling the workload of workflows at runtime. A traditional WMS is usually designed to execute a single workflow in a dedicated process while the WaaS platforms enhance the process by exploiting multiple workflows execution in a resource-sharing environment model. In this paper, we explore a novel resource-sharing policy to improve system utilization and to fulfill various Quality of Service (QoS) requirements from multiple users. We propose an Elastic Budget-constrained resource Provisioning and Scheduling algorithm for Multiple workflows designed for WaaS platforms that is able to reduce the computational overhead by encouraging resource-sharing policy to minimize workflows' makespan while meeting user-defined budget. Our experiments show that the EBPSM algorithm is able to utilize the resource-sharing policy to achieve higher performance in terms of minimizing the makespan compared to the state-of-the-art budget-constraint scheduling algorithm. | The majority of works in multiple workflows scheduling have pointed out the necessity of reusing already provisioned VMs instead of acquiring new ones to reduce the idle gaps and increase system utilization. Examples of the works include the CWSA @cite_2 algorithm that uses a depth-first search technique to find potential schedule gaps between tasks' execution. Another work is the CERSA @cite_27 algorithm that dynamically adjusts the VM allocation for tasks in a reactive fashion whenever a new workflow job is submitted to the system.
These works share our idea of filling the schedule gaps between tasks of one workflow with tasks from another workflow. However, they merely assumed that different workflow applications could be deployed onto any existing VMs available, without considering the possible complexity arising from software dependency conflicts. Our work differs in that we model the software configurations into a container image before deploying it to the VMs for execution. | {
"cite_N": [
"@cite_27",
"@cite_2"
],
"mid": [
"2889870315",
"2339254261"
],
"abstract": [
"Workflow comprising of many tasks and data dependencies among tasks is an attractive programming paradigm for processing big data in clouds, and workflow scheduling plays essential roles in improving the cost and resource efficiency for cloud platforms. Up to now, large numbers of scheduling approaches have been proposed and improved. However, the majority of them focused on scheduling a single workflow and have not adequately exploited the idle time slots on resources to reduce the cost for executing workflow applications. To cover the above issue, we suggest to schedule tasks from different workflows in a hybrid way to take full advantage of idle time slots to improve the cost and resource efficiency, while guaranteeing the deadlines of workflows. To achieve the above idea, we first introduce a reactive scheduling architecture for real-time workflows. Then, a novel cost-efficient reactive scheduling algorithm (CERSA) is proposed to deploy multiple workflows with deadlines to cloud platforms. Finally, on the basis of real-world workflow traces, extensive experiments are conducted to compare CERSA with five existing algorithms. The experimental results demonstrate that CERSA is better than those algorithms with respect to monetary cost and resource efficiency.",
"Multi-tenancy is one of the key features of cloud computing, which provides scalability and economic benefits to the end-users and service providers by sharing the same cloud platform and its underlying infrastructure with the isolation of shared network and compute resources. However, resource management in the context of multi-tenant cloud computing is becoming one of the most complex task due to the inherent heterogeneity and resource isolation. This paper proposes a novel cloud-based workflow scheduling (CWSA) policy for compute-intensive workflow applications in multi-tenant cloud computing environments, which helps minimize the overall workflow completion time, tardiness, cost of execution of the workflows, and utilize idle resources of cloud effectively. The proposed algorithm is compared with the state-of-the-art algorithms, i.e., First Come First Served (FCFS), EASY Backfilling, and Minimum Completion Time (MCT) scheduling policies to evaluate the performance. Further, a proof-of-concept experiment of real-world scientific workflow applications is performed to demonstrate the scalability of the CWSA, which verifies the effectiveness of the proposed solution. The simulation results show that the proposed scheduling policy improves the workflow performance and outperforms the aforementioned alternative scheduling policies under typical deployment scenarios."
]
} |
1903.01113 | 2919479130 | Increasing adoption of scientific workflows in the community has urged for the development of multi-tenant platforms that provide these workflows' execution as a service. The Workflow as a Service (WaaS) concept has been brought up by researchers to address the future design of Workflow Management Systems (WMS) that can serve a large number of users from a single point of service. This platform differs from a traditional WMS in handling the workload of workflows at runtime. A traditional WMS is usually designed to execute a single workflow in a dedicated process while the WaaS platforms enhance the process by exploiting multiple workflows execution in a resource-sharing environment model. In this paper, we explore a novel resource-sharing policy to improve system utilization and to fulfill various Quality of Service (QoS) requirements from multiple users. We propose an Elastic Budget-constrained resource Provisioning and Scheduling algorithm for Multiple workflows designed for WaaS platforms that is able to reduce the computational overhead by encouraging resource-sharing policy to minimize workflows' makespan while meeting user-defined budget. Our experiments show that the EBPSM algorithm is able to utilize the resource-sharing policy to achieve higher performance in terms of minimizing the makespan compared to the state-of-the-art budget-constraint scheduling algorithm. | The use of containers for deploying scientific workflows has been intensively researched. Examples include the work by @cite_36 that deployed a TOSCA-based workflow (https://github.com/ditrit/workflows) using a Docker (https://www.docker.com) container on the e-Science Central platform (https://www.esciencecentral.org). Although their work is done on a single host VM, the result shows promising prospects for scientific workflow reproducibility made possible by container technology.
A similar result is presented by @cite_13 , which demonstrates the near-native performance and high flexibility of deploying scientific workflows in Docker containers. Finally, the adCFS @cite_33 algorithm is designed to schedule containerized scientific workflows under a CPU-sharing policy, using a Markov-chain model to assign appropriate CPU weights to containers. These solutions represent early work on containerized scientific workflows for single-workflow execution. Their results show that container technology is highly feasible for efficiently bundling the software configurations of workflows, as proposed for WaaS platforms. | {
"cite_N": [
"@cite_36",
"@cite_13",
"@cite_33"
],
"mid": [
"2530682269",
"",
"2774502204"
],
"abstract": [
"Scientific workflows are increasingly being migrated to the Cloud. However, workflow developers face the problem of which Cloud to choose and, more importantly, how to avoid vendor lock-in. This is because there are a range of Cloud platforms, each with different functionality and interfaces. In this paper we propose a solution - a system that allows workflows to be portable across a range of Clouds. This portability is achieved through a new framework for building, dynamically deploying and enacting workflows. It combines the TOSCA specification language and container-based virtualization. TOSCA is used to build a reusable and portable description of a workflow which can be automatically deployed and enacted using Docker containers. We describe a working implementation of our framework and evaluate it using a set of existing scientific workflows that illustrate the flexibility of the proposed approach.",
"",
"Scientific workflows are increasingly containerised, which requires rethinking central processing unit (CPU) sharing policies to accommodate different workload types. However, container engines running scientific workflows struggle to share the CPU fairly, as workload characteristics are not taken into account. This paper proposes a sharing policy called the Adaptive Completely Fair Scheduling policy (adCFS), which considers the future state of CPU usage and proactively shares CPU cycles between various containers based on their corresponding workload metrics (e.g., CPU usage, task runtime, #tasks). adCFS estimates the weight of workload characteristics and redistributes the CPU based on the corresponding weights. The Markov chain model is used to predict CPU state use, and the adCFS policy is triggered to dynamically allocate containers to the proper CPU portions. Experimental results show enhanced container CPU response time for those containers that run heavy and large jobs: these display 12 faster response time compared with the default CFS (Completely Fair Scheduler). adCFS therefore enhances CFS by considering workload metrics, which leads to the CPU being shared fairly when it is fully used."
]
} |
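The adCFS-style policy described above can be illustrated with a minimal sketch: predict the next CPU state from a Markov transition matrix and redistribute CPU shares among containers in proportion to workload weights. The state space, transition matrix, and weighting scheme here are hypothetical assumptions, not the actual adCFS implementation, which derives weights from metrics such as CPU usage, task runtime, and task counts.

```python
# Illustrative sketch of a Markov-chain-driven CPU-sharing policy in the
# spirit of adCFS. All names and values are hypothetical.

def most_likely_next_state(transition, state):
    """Pick the most probable next CPU state from a row-stochastic matrix."""
    row = transition[state]
    return max(range(len(row)), key=row.__getitem__)

def cpu_shares(workload_weights):
    """Normalize per-container workload weights into fractional CPU shares."""
    total = sum(workload_weights.values())
    return {c: w / total for c, w in workload_weights.items()}
```

For example, with a two-state CPU model where state 0 ("idle") most likely transitions to state 1 ("busy"), the policy would proactively reweight containers before the busy state arrives; a container with three times the workload weight receives three times the CPU share.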
1903.01113 | 2919479130 | The increasing adoption of scientific workflows in the community has urged the development of multi-tenant platforms that provide the execution of these workflows as a service. The Workflow as a Service (WaaS) concept has been brought up by researchers to address the future design of Workflow Management Systems (WMS) that can serve a large number of users from a single point of service. This platform differs from a traditional WMS in handling the workload of workflows at runtime. A traditional WMS is usually designed to execute a single workflow in a dedicated process, while WaaS platforms enhance the process by exploiting the execution of multiple workflows in a resource-sharing environment model. In this paper, we explore a novel resource-sharing policy to improve system utilization and to fulfill various Quality of Service (QoS) requirements from multiple users. We propose the Elastic Budget-constrained resource Provisioning and Scheduling algorithm for Multiple workflows (EBPSM), designed for WaaS platforms, which is able to reduce the computational overhead by encouraging a resource-sharing policy to minimize workflows' makespan while meeting user-defined budgets. Our experiments show that the EBPSM algorithm is able to utilize the resource-sharing policy to achieve higher performance in terms of minimizing the makespan compared to the state-of-the-art budget-constrained scheduling algorithm. | One of the challenges of executing scientific workflows in clouds relates to data locality. Scientific workflows are data-intensive applications that involve processing a vast amount of data. Therefore, the communication overhead of transferring data between task executions may take a considerable amount of time and impact the overall makespan. Work by Stavrinides and Karatza @cite_28 shows that using distributed in-memory storage to store datasets locally for task execution can reduce the communication overhead.
Our work is similar with regard to the data-locality policy that minimizes data transfer between task executions. However, we use datasets cached in VMs' local storage from previous task executions to enforce data locality. We enhance this policy so that the algorithm can intelligently decide which task to schedule on a particular VM to obtain the minimum execution time given the available cached datasets. | {
"cite_N": [
"@cite_28"
],
"mid": [
"2546495709"
],
"abstract": [
"Abstract As large-scale distributed systems gain momentum, the scheduling of workflow applications with multiple requirements in such computing platforms has become a crucial area of research. In this paper, we investigate the workflow scheduling problem in large-scale distributed systems, from the Quality of Service (QoS) and data locality perspectives. We present a scheduling approach, considering two models of synchronization for the tasks in a workflow application: (a) communication through the network and (b) communication through temporary files. Specifically, we investigate via simulation the performance of a heterogeneous distributed system, where multiple soft real-time workflow applications arrive dynamically. The applications are scheduled under various tardiness bounds, taking into account the communication cost in the first case study and the I O cost and data locality in the second. The simulation results provide useful insights into the impact of tardiness bound and data locality on the system performance."
]
} |
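The cached-dataset scheduling rule described above can be sketched as follows: estimate each candidate VM's completion time as compute time plus transfer time for only the inputs it has not already cached, then pick the minimizer. The task/VM model (sizes in abstract work units, a single shared bandwidth, a per-VM cache set) is an illustrative assumption, not the exact EBPSM cost model.

```python
# Hypothetical sketch of data-locality-aware VM selection: prefer the VM
# whose cached inputs minimize estimated completion time for the task.

def estimated_runtime(task_size, vm_speed, inputs, cached, bandwidth):
    """Compute time plus transfer time for inputs not cached on the VM.

    inputs: dict mapping dataset name -> size; cached: set of dataset names."""
    compute = task_size / vm_speed
    transfer = sum(size for name, size in inputs.items()
                   if name not in cached) / bandwidth
    return compute + transfer

def pick_vm(task_size, inputs, vms, bandwidth=100.0):
    """Return the index of the VM with minimum estimated completion time.

    vms: list of (speed, cached_dataset_names) pairs."""
    times = [estimated_runtime(task_size, speed, inputs, cached, bandwidth)
             for speed, cached in vms]
    return min(range(len(vms)), key=times.__getitem__)
```

With two equally fast VMs, the one holding a large input in its cache wins because it avoids the transfer term entirely, which is precisely the locality effect the policy exploits.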
1903.01113 | 2919479130 | The increasing adoption of scientific workflows in the community has urged the development of multi-tenant platforms that provide the execution of these workflows as a service. The Workflow as a Service (WaaS) concept has been brought up by researchers to address the future design of Workflow Management Systems (WMS) that can serve a large number of users from a single point of service. This platform differs from a traditional WMS in handling the workload of workflows at runtime. A traditional WMS is usually designed to execute a single workflow in a dedicated process, while WaaS platforms enhance the process by exploiting the execution of multiple workflows in a resource-sharing environment model. In this paper, we explore a novel resource-sharing policy to improve system utilization and to fulfill various Quality of Service (QoS) requirements from multiple users. We propose the Elastic Budget-constrained resource Provisioning and Scheduling algorithm for Multiple workflows (EBPSM), designed for WaaS platforms, which is able to reduce the computational overhead by encouraging a resource-sharing policy to minimize workflows' makespan while meeting user-defined budgets. Our experiments show that the EBPSM algorithm is able to utilize the resource-sharing policy to achieve higher performance in terms of minimizing the makespan compared to the state-of-the-art budget-constrained scheduling algorithm. | Two conflicting QoS requirements in scheduling (e.g., time and cost) have been a significant concern when deploying scientific workflows in clouds. Several works relax this trade-off by scheduling workflows within both deadline and budget constraints: rather than optimizing one or both QoS requirements, they maximize the success rate of workflow executions within the constraints.
Examples of these works include the PAPS @cite_22 , MW-DBS @cite_3 , and MW-HBDCS @cite_24 algorithms. Another similar work is the MQ-PAS @cite_23 algorithm, which emphasizes increasing the provider's profit by exploiting the budget constraint as long as the deadline is not violated. Our work also considers a user-defined budget constraint in scheduling, but it differs in that the algorithm aims to optimize the overall makespan of the workflows while meeting their budget. | {
"cite_N": [
"@cite_24",
"@cite_23",
"@cite_22",
"@cite_3"
],
"mid": [
"2806513498",
"",
"2045287414",
"2532733630"
],
"abstract": [
"In heterogeneous distributed environment, it is a great challenge to schedule multiple workflows submitted at different times. Particularly, scheduling of concurrent workflows with deadline and budget constraints makes the problem become more complex. Recent studies have proposed dynamic scheduling strategies for concurrent workflows which have limitations in inconsistent environments. Therefore, this paper presents a new dynamic scheduling algorithm for concurrent workflows. This algorithm proposes a uniform ranking that considers the time and costs for both workflows and workgroups to assign priorities for tasks. In the resource selection phase, it controls the resource selection range for each task based on an optimistic budget for the current task and selects resources for the current task according to a defined bi-factor. The experimental results show that our algorithm outperforms the existing algorithms in both consistent and inconsistent environments.",
"",
"Cloud computing is a recent advancement wherein IT infrastructure and applications are provided as ‘services’ to end-users under a usage-based payment model. It can leverage virtualized services even on the fly based on requirements (workload patterns and QoS) varying with time. The application services hosted under Cloud computing model have complex provisioning, composition, configuration, and deployment requirements. Evaluating the performance of Cloud provisioning policies, application workload models, and resources performance models in a repeatable manner under varying system and user configurations and requirements is difficult to achieve. To overcome this challenge, we propose CloudSim: an extensible simulation toolkit that enables modeling and simulation of Cloud computing systems and application provisioning environments. The CloudSim toolkit supports both system and behavior modeling of Cloud system components such as data centers, virtual machines (VMs) and resource provisioning policies. It implements generic application provisioning techniques that can be extended with ease and limited effort. Currently, it supports modeling and simulation of Cloud computing environments consisting of both single and inter-networked clouds (federation of clouds). Moreover, it exposes custom interfaces for implementing policies and provisioning techniques for allocation of VMs under inter-networked Cloud computing scenarios. Several researchers from organizations, such as HP Labs in U.S.A., are using CloudSim in their investigation on Cloud resource provisioning and energy-efficient management of data center resources. The usefulness of CloudSim is demonstrated by a case study involving dynamic provisioning of application services in the hybrid federated clouds environment. The result of this case study proves that the federated Cloud computing model significantly improves the application QoS requirements under fluctuating resource and service demand patterns. 
",
"Abstract In many domains of science, scientific applications are represented by workflows. In this paper, we introduce a resource management strategy to maximize the success rate of concurrent workflow applications constrained by individual deadline and budget values. The Multi-Workflow Deadline-Budget Scheduling (MW-DBS) algorithm can schedule multiple workflows that can arrive in the system at any time, with the aim of satisfying individual job requirements. MW-DBS produces schedules without performing optimizations but guarantees that the deadline and budget defined for each job are not exceeded. Experimental results show that our strategy increases the scheduling success rate of finding valid solutions."
]
} |
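The general flavor of the budget-constrained heuristics above can be sketched in two steps: distribute the workflow budget over tasks (here, proportionally to estimated task size), then for each task choose the fastest VM type whose estimated cost fits the per-task budget. Both the distribution rule and the fallback to the cheapest type are illustrative assumptions, not the actual MQ-PAS or EBPSM logic.

```python
# Hypothetical sketch of budget-constrained VM selection for workflow tasks.

def distribute_budget(budget, task_sizes):
    """Split a workflow budget across tasks proportionally to estimated size."""
    total = sum(task_sizes)
    return [budget * s / total for s in task_sizes]

def fastest_affordable(task_size, task_budget, vm_types):
    """vm_types: list of (speed, price_per_second) pairs. Return the fastest
    type whose estimated cost (runtime * price) fits the task budget,
    falling back to the cheapest type if none does."""
    affordable = [(s, p) for s, p in vm_types
                  if (task_size / s) * p <= task_budget]
    if affordable:
        return max(affordable, key=lambda sp: sp[0])
    return min(vm_types, key=lambda sp: sp[1])
```

This captures the key tension in the cited algorithms: a faster VM shortens the makespan but may price the task out of its budget share, in which case the scheduler degrades to a cheaper resource rather than violating the constraint.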
1903.01113 | 2919479130 | The increasing adoption of scientific workflows in the community has urged the development of multi-tenant platforms that provide the execution of these workflows as a service. The Workflow as a Service (WaaS) concept has been brought up by researchers to address the future design of Workflow Management Systems (WMS) that can serve a large number of users from a single point of service. This platform differs from a traditional WMS in handling the workload of workflows at runtime. A traditional WMS is usually designed to execute a single workflow in a dedicated process, while WaaS platforms enhance the process by exploiting the execution of multiple workflows in a resource-sharing environment model. In this paper, we explore a novel resource-sharing policy to improve system utilization and to fulfill various Quality of Service (QoS) requirements from multiple users. We propose the Elastic Budget-constrained resource Provisioning and Scheduling algorithm for Multiple workflows (EBPSM), designed for WaaS platforms, which is able to reduce the computational overhead by encouraging a resource-sharing policy to minimize workflows' makespan while meeting user-defined budgets. Our experiments show that the EBPSM algorithm is able to utilize the resource-sharing policy to achieve higher performance in terms of minimizing the makespan compared to the state-of-the-art budget-constrained scheduling algorithm. | Several works focus specifically on handling real-time workloads of workflows in WaaS platforms. This type of workload raises the issue of uncertainty, as the platform has no prior knowledge of the arriving workflows. The EDPRS @cite_4 algorithm adopts a dynamic scheduling approach using event-driven and periodic rolling strategies to handle the uncertainties in real-time workloads.
Another work, the ROSA @cite_32 algorithm, controls the queuing of jobs in WaaS platforms--which would otherwise increase uncertainty on top of the performance variation of cloud resources--to reduce the waiting time and prevent uncertainty from propagating. Both algorithms dynamically schedule multiple workflows to minimize operational cost while meeting deadlines. Our EBPSM similarly makes fast dynamic scheduling decisions to handle real-time workloads and reduce the effect of uncertainties in WaaS environments. However, it differs in that its scheduling objective is to minimize the workflows' makespan while meeting the user-defined budget. | {
"cite_N": [
"@cite_4",
"@cite_32"
],
"mid": [
"2611469001",
"2889171665"
],
"abstract": [
"Workflow scheduling has become one of the hottest topics in cloud environments, and efficient scheduling approaches show promising ways to maximize the profit of cloud providers via minimizing their cost, while guaranteeing the QoS for users' applications. However, existing scheduling approaches are inadequate for dynamic workflows with uncertain task execution times running in cloud environments, because those approaches assume that cloud computing environments are deterministic and pre-computed schedule decisions will be statically followed during schedule execution. To cover the above issue, we introduce an uncertainty-aware scheduling architecture to mitigate the impact of uncertain factors on the workflow scheduling quality. Based on this architecture, we present a scheduling algorithm, incorporating both event-driven and periodic rolling strategies (EDPRS), for scheduling dynamic workflows. Lastly, we conduct extensive experiments to compare EDPRS with two typical baseline algorithms using real-world workflow traces. The experimental results show that EDPRS performs better than those algorithms.",
"Scheduling workflows in cloud service environment has attracted great enthusiasm, and various approaches have been reported up to now. However, these approaches often ignored the uncertainties in the scheduling environment. Ignoring these uncertain factors often leads to the violation of workflow deadlines and increases service renting costs of executing workflows. This study devotes to improving the performance for cloud service platforms by minimizing uncertainty propagation in scheduling workflow applications that have both uncertain task execution time and data transfer time. To be specific, a novel scheduling architecture is designed to control the count of workflow tasks directly waiting on each service instance (e.g., virtual machine). Based on this architecture, we develop an unceRtainty-aware Online Scheduling Algorithm (ROSA) to schedule dynamic and multiple workflows with deadlines. The proposed ROSA skillfully integrates both the proactive and reactive strategies. Then, on the basis of real-world workflow traces, five groups of simulation experiments are carried out to compare ROSA with five typical algorithms. The comparison results reveal that ROSA performs better than the five compared algorithms with respect to costs (up to 56 ), deviation (up to 70 ), resource utilization (up to 37 ) and fairness (up to 37)."
]
} |
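The event-driven plus periodic-rolling idea behind EDPRS can be caricatured by a tiny simulation loop that reacts to task events as they occur and additionally invokes a re-planning handler every fixed period. The event format and handler signatures are hypothetical; this only shows how the two triggers interleave on one timeline.

```python
# Minimal sketch of an event-driven + periodic-rolling scheduling loop.
import heapq

def simulate(events, period, horizon, on_event, on_period):
    """events: list of (time, payload) task events. Every `period` time
    units up to `horizon`, a periodic re-planning tick also fires.
    Returns the (time, kind) log in chronological order."""
    heap = [(t, "event", e) for t, e in events]
    heap += [(t, "tick", None) for t in range(period, horizon + 1, period)]
    heapq.heapify(heap)
    log = []
    while heap:
        t, kind, payload = heapq.heappop(heap)
        log.append((t, kind))
        (on_event if kind == "event" else on_period)(t, payload)
    return log
```

Pure event-driven schedulers only react to the `"event"` entries; the rolling strategy adds the `"tick"` entries so the plan is also revised when no event has fired for a while, which is how uncertainty in task runtimes gets absorbed.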
1903.01287 | 2920498407 | Analyzing the robustness of neural networks against norm-bounded uncertainties and adversarial attacks has found many applications ranging from safety verification to robust training. In this paper, we propose a semidefinite programming (SDP) framework for safety verification and robustness analysis of neural networks with general activation functions. Our main idea is to abstract various properties of activation functions (e.g., monotonicity, bounded slope, bounded values, and repetition across layers) with the formalism of quadratic constraints. We then analyze the safety properties of the abstracted network via the S-procedure and semidefinite programming. Compared to other semidefinite relaxations proposed in the literature, our method is less conservative, especially for deep networks, with an order of magnitude reduction in computational complexity. Furthermore, our approach is applicable to any activation functions. | The performance of certification algorithms for neural networks can be measured along three axes. The first axis is the tightness of the certification bounds; the second is the computational complexity; and the third is applicability across various models (e.g., different activation functions). These axes conflict: for instance, the conservatism of an algorithm is typically at odds with its computational complexity, and generalizable algorithms tend to be more conservative. The relative advantage of any of these algorithms is application-specific. For example, reachability analysis and safety verification applications call for less conservative algorithms, whereas in robust training, computationally fast algorithms are desirable @cite_3 . | {
"cite_N": [
"@cite_3"
],
"mid": [
"2799107510"
],
"abstract": [
"Verifying the robustness property of a general Rectified Linear Unit (ReLU) network is an NP-complete problem [Katz, Barrett, Dill, Julian and Kochenderfer CAV17]. Although finding the exact minimum adversarial distortion is hard, giving a certified lower bound of the minimum distortion is possible. Current available methods of computing such a bound are either time-consuming or delivering low quality bounds that are too loose to be useful. In this paper, we exploit the special structure of ReLU networks and provide two computationally efficient algorithms Fast-Lin and Fast-Lip that are able to certify non-trivial lower bounds of minimum distortions, by bounding the ReLU units with appropriate linear functions Fast-Lin, or by bounding the local Lipschitz constant Fast-Lip. Experiments show that (1) our proposed methods deliver bounds close to (the gap is 2-3X) exact minimum distortion found by Reluplex in small MNIST networks while our algorithms are more than 10,000 times faster; (2) our methods deliver similar quality of bounds (the gap is within 35 and usually around 10 ; sometimes our bounds are even better) for larger networks compared to the methods based on solving linear programming problems but our algorithms are 33-14,000 times faster; (3) our method is capable of solving large MNIST and CIFAR networks up to 7 layers with more than 10,000 neurons within tens of seconds on a single CPU core. In addition, we show that, in fact, there is no polynomial time algorithm that can approximately find the minimum @math adversarial distortion of a ReLU network with a @math approximation ratio unless @math = @math , where @math is the number of neurons in the network."
]
} |
1903.01287 | 2920498407 | Analyzing the robustness of neural networks against norm-bounded uncertainties and adversarial attacks has found many applications ranging from safety verification to robust training. In this paper, we propose a semidefinite programming (SDP) framework for safety verification and robustness analysis of neural networks with general activation functions. Our main idea is to abstract various properties of activation functions (e.g., monotonicity, bounded slope, bounded values, and repetition across layers) with the formalism of quadratic constraints. We then analyze the safety properties of the abstracted network via the S-procedure and semidefinite programming. Compared to other semidefinite relaxations proposed in the literature, our method is less conservative, especially for deep networks, with an order of magnitude reduction in computational complexity. Furthermore, our approach is applicable to any activation functions. | On the one hand, formal verification techniques such as Satisfiability Modulo Theories (SMT) solvers @cite_24 @cite_31 @cite_11 , or integer programming approaches @cite_1 @cite_14 , rely on combinatorial optimization to provide tight certification bounds for piecewise-linear networks, but their complexity scales exponentially with the size of the network in the worst case. A notable work on improving scalability is @cite_14 , where the authors perform exact verification of piecewise-linear networks using mixed-integer programming, achieving an order of magnitude reduction in computational cost via tight formulations of the nonlinearities and careful preprocessing. | {
"cite_N": [
"@cite_14",
"@cite_1",
"@cite_24",
"@cite_31",
"@cite_11"
],
"mid": [
"2950499086",
"2721006554",
"2963054787",
"2543296129",
"2594877703"
],
"abstract": [
"Neural networks have demonstrated considerable success on a wide variety of real-world problems. However, networks trained only to optimize for training accuracy can often be fooled by adversarial examples - slightly perturbed inputs that are misclassified with high confidence. Verification of networks enables us to gauge their vulnerability to such adversarial examples. We formulate verification of piecewise-linear neural networks as a mixed integer program. On a representative task of finding minimum adversarial distortions, our verifier is two to three orders of magnitude quicker than the state-of-the-art. We achieve this computational speedup via tight formulations for non-linearities, as well as a novel presolve algorithm that makes full use of all information available. The computational speedup allows us to verify properties on convolutional networks with an order of magnitude more ReLUs than networks previously verified by any complete verifier. In particular, we determine for the first time the exact adversarial accuracy of an MNIST classifier to perturbations with bounded @math norm @math : for this classifier, we find an adversarial example for 4.38 of samples, and a certificate of robustness (to perturbations with bounded norm) for the remainder. Across all robust training procedures and network architectures considered, we are able to certify more samples than the state-of-the-art and find more adversarial examples than a strong first-order attack.",
"We study the reachability problem for systems implemented as feed-forward neural networks whose activation function is implemented via ReLU functions. We draw a correspondence between establishing whether some arbitrary output can ever be outputed by a neural system and linear problems characterising a neural system of interest. We present a methodology to solve cases of practical interest by means of a state-of-the-art linear programs solver. We evaluate the technique presented by discussing the experimental results obtained by analysing reachability properties for a number of benchmarks in the literature.",
"We present an approach for the verification of feed-forward neural networks in which all nodes have a piece-wise linear activation function. Such networks are often used in deep learning and have been shown to be hard to verify for modern satisfiability modulo theory (SMT) and integer linear programming (ILP) solvers.",
"Deep neural networks have achieved impressive experimental results in image classification, but can surprisingly be unstable with respect to adversarial perturbations, that is, minimal changes to the input image that cause the network to misclassify it. With potential applications including perception modules and end-to-end controllers for self-driving cars, this raises concerns about their safety. We develop a novel automated verification framework for feed-forward multi-layer neural networks based on Satisfiability Modulo Theory (SMT). We focus on safety of image classification decisions with respect to image manipulations, such as scratches or changes to camera angle or lighting conditions that would result in the same class being assigned by a human, and define safety for an individual decision in terms of invariance of the classification within a small neighbourhood of the original image. We enable exhaustive search of the region by employing discretisation, and propagate the analysis layer by layer. Our method works directly with the network code and, in contrast to existing methods, can guarantee that adversarial examples, if they exist, are found for the given region and family of manipulations. If found, adversarial examples can be shown to human testers and or used to fine-tune the network. We implement the techniques using Z3 and evaluate them on state-of-the-art networks, including regularised and deep learning networks. We also compare against existing techniques to search for adversarial examples and estimate network robustness.",
"Deep neural networks have emerged as a widely used and effective means for tackling complex, real-world problems. However, a major obstacle in applying them to safety-critical systems is the great difficulty in providing formal guarantees about their behavior. We present a novel, scalable, and efficient technique for verifying properties of deep neural networks (or providing counter-examples). The technique is based on the simplex method, extended to handle the non-convex Rectified Linear Unit (ReLU) activation function, which is a crucial ingredient in many modern neural networks. The verification procedure tackles neural networks as a whole, without making any simplifying assumptions. We evaluated our technique on a prototype deep neural network implementation of the next-generation airborne collision avoidance system for unmanned aircraft (ACAS Xu). Results show that our technique can successfully prove properties of networks that are an order of magnitude larger than the largest networks verified using existing methods."
]
} |
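For context on the mixed-integer formulations cited above: a ReLU unit $y = \max(x, 0)$ with finite pre-activation bounds $l \le x \le u$ (with $l < 0 < u$) admits an exact encoding using a single binary variable. This is the standard big-M formulation from the MILP-verification literature, shown as a sketch rather than the exact constraint set of @cite_14 :

```latex
% Exact MILP encoding of y = \max(x, 0) given bounds l \le x \le u, l < 0 < u:
y \ge x, \qquad y \ge 0, \qquad
y \le x - l(1 - a), \qquad y \le u\,a, \qquad a \in \{0, 1\}.
% a = 1 forces y = x (active phase); a = 0 forces y = 0 (inactive phase).
```

Setting $a = 1$ yields $x \le y \le x$ and $y \le u$, so $y = x$; setting $a = 0$ yields $0 \le y \le 0$, so $y = 0$. Tightening the bounds $l, u$ via preprocessing is exactly what shrinks the big-M constants and speeds up the solver.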
1903.01287 | 2920498407 | Analyzing the robustness of neural networks against norm-bounded uncertainties and adversarial attacks has found many applications ranging from safety verification to robust training. In this paper, we propose a semidefinite programming (SDP) framework for safety verification and robustness analysis of neural networks with general activation functions. Our main idea is to abstract various properties of activation functions (e.g., monotonicity, bounded slope, bounded values, and repetition across layers) with the formalism of quadratic constraints. We then analyze the safety properties of the abstracted network via the S-procedure and semidefinite programming. Compared to other semidefinite relaxations proposed in the literature, our method is less conservative, especially for deep networks, with an order of magnitude reduction in computational complexity. Furthermore, our approach is applicable to any activation functions. | On the other hand, certification algorithms based on continuous optimization are more scalable but less accurate. A notable work in this category is reported in @cite_19 , where the authors propose a linear-programming (LP) relaxation of piece-wise linear networks and provide upper bounds on the worst-case loss using weak duality. The main advantage of this work is that the proposed algorithm solely relies on forward- and back-propagation operations on a modified network, and thus is easily integrable into existing learning algorithms. In @cite_12 , the authors propose an SDP relaxation of one-layer sigmoid-based neural networks based on bounding the worst-case loss with a first-order Taylor expansion. Finally, the closest work to the present work is @cite_20 , in which the authors propose a semidefinite relaxation (SDR) for certifying robustness of piece-wise linear multi-layer neural networks. This technique provides tighter bounds than that of @cite_19 , although it is less scalable. | {
"cite_N": [
"@cite_19",
"@cite_20",
"@cite_12"
],
"mid": [
"2766462876",
"2892354372",
"2786163515"
],
"abstract": [
"We propose a method to learn deep ReLU-based classifiers that are provably robust against norm-bounded adversarial perturbations (on the training data; for previously unseen examples, the approach will be guaranteed to detect all adversarial examples, though it may flag some non-adversarial examples as well). The basic idea of the approach is to consider a convex outer approximation of the set of activations reachable through a norm-bounded perturbation, and we develop a robust optimization procedure that minimizes the worst case loss over this outer region (via a linear program). Crucially, we show that the dual problem to this linear program can be represented itself as a deep network similar to the backpropagation network, leading to very efficient optimization approaches that produce guaranteed bounds on the robust loss. The end result is that by executing a few more forward and backward passes through a slightly modified version of the original network (though possibly with much larger batch sizes), we can learn a classifier that is provably robust to any norm-bounded adversarial attack. We illustrate the approach on a toy 2D robust classification task, and on a simple convolutional architecture applied to MNIST, where we produce a classifier that provably has less than 8.4 test error for any adversarial attack with bounded @math norm less than @math . This represents the largest verified network that we are aware of, and we discuss future challenges in scaling the approach to much larger domains.",
"Research on adversarial examples are evolved in arms race between defenders who attempt to train robust networks and attackers who try to prove them wrong. This has spurred interest in methods for certifying the robustness of a network. Methods based on combinatorial optimization compute the true robustness but do not yet scale. Methods based on convex relaxations scale better but can only yield non-vacuous bounds on networks trained with those relaxations. In this paper, we propose a new semidefinite relaxation that applies to ReLU networks with any number of layers. We show that it produces meaningful robustness guarantees across a spectrum of networks that were trained against other objectives, something previous convex relaxations are not able to achieve.",
"While neural networks have achieved high accuracy on standard image classification benchmarks, their accuracy drops to nearly zero in the presence of small adversarial perturbations to test inputs. Defenses based on regularization and adversarial training have been proposed, but often followed by new, stronger attacks that defeat these defenses. Can we somehow end this arms race? In this work, we study this problem for neural networks with one hidden layer. We first propose a method based on a semidefinite relaxation that outputs a certificate that for a given network and test input, no attack can force the error to exceed a certain value. Second, as this certificate is differentiable, we jointly optimize it with the network parameters, providing an adaptive regularizer that encourages robustness against all attacks. On MNIST, our approach produces a network and a certificate that no attack that perturbs each pixel by at most 0.1 can cause more than 35% test error."
]
} |
1903.01042 | 2919853922 | This work proposes the first strategy to make distributed training of neural networks resilient to computing errors, a problem that has remained unsolved despite being first posed in 1956 by von Neumann. He also speculated that the efficiency and reliability of the human brain are obtained by allowing for low power but error-prone components with redundancy for error-resilience. It is surprising that this problem remains open, even as massive artificial neural networks are being trained on increasingly low-cost and unreliable processing units. Our coding-theory-inspired strategy, "CodeNet," solves this problem by addressing three challenges in the science of reliable computing: (i) Providing the first strategy for error-resilient neural network training by encoding each layer separately; (ii) Keeping the overheads of coding (encoding/error-detection/decoding) low by obviating the need to re-encode the updated parameter matrices after each iteration from scratch. (iii) Providing a completely decentralized implementation with no central node (which is a single point of failure), allowing all primary computational steps to be error-prone. We theoretically demonstrate that CodeNet has higher error tolerance than replication, which we leverage to speed up computation time. Simultaneously, CodeNet requires lower redundancy than replication, and equal computational and communication costs in a scaling sense. We first demonstrate the benefits of CodeNet in reducing expected computation time over replication when accounting for checkpointing. Our experiments show that CodeNet achieves the best accuracy-runtime tradeoff compared to both replication and uncoded strategies. CodeNet is a significant step towards biologically plausible neural network training that could hold the key to orders of magnitude efficiency improvements. 
| Because of their potential to go undetected, soft-errors are receiving increasing attention (sometimes even regarded as the scariest aspect of ``supercomputing's monster in the closet'' @cite_33 ). Common causes for soft-errors include: (i) Exposure of chips to cosmic rays from outer space causing unpredictable bit-flips; (ii) Manufacturing and soldering defects; and (iii) Memory and storage faults, etc. @cite_33 @cite_10 . Even for specialized nanoscale circuits, scaling projections for semiconductors predict that as devices become smaller, thermal noise itself will cause systems to fail catastrophically during normal operation even without supply voltage scaling @cite_61 , thus increasing the need for fault-tolerant training. | {
"cite_N": [
"@cite_61",
"@cite_10",
"@cite_33"
],
"mid": [
"2087042114",
"2033346530",
"2296204683"
],
"abstract": [
"Abstract Noise abatement is the key problem of small-scaled circuit design. New computational paradigms are needed -- as these circuits shrink, they become very vulnerable to noise and soft errors. In this lecture, we present a probabilistic computation framework for improving the resiliency of logic gates and circuits under random conditions induced by voltage or current fluctuation. Among many probabilistic techniques for modeling such devices, only a few models satisfy the requirements of efficient hardware implementation -- specifically, Boltzmann machines and Markov Random Field (MRF) models. These models have similar built-in noise-immunity characteristics based on feedback mechanisms. In probabilistic models, the values 0 and 1 of logic functions are replaced by degrees of beliefs that these values occur. An appropriate metric for degree of belief is probability. We discuss various approaches for noise-resilient logic gate design, and propose a novel design taxonomy based on implementation of the MR...",
"This paper reviews the basic physics of those cosmic rays which can affect terrestrial electronics. Cosmic rays at sea level consist mostly of neutrons, protons, pions, muons, electrons, and photons. The particles which cause significant soft fails in electronics are those particles with the strong interaction: neutrons, protons, and pions. At sea level, about 95 of these particles are neutrons. The quantitative flux of neutrons can be estimated to within a factor of 3, and the relative variation in neutron flux with latitude, altitude, diurnal time, earth's sidereal position, and solar cycle is known with even higher accuracy. The possibility of two particles of a cascade interacting with a single circuit to cause two simultaneous errors is discussed. The terrestrial flux of nucleons can be attenuated by shielding, making a significant reduction in the electronic system soft-error rate. Estimates of such attenuation are made.",
"As a child, were you ever afraid that a monster lurking in your bedroom would leap out of the dark and get you? My job at Oak Ridge National Laboratory is to worry about a similar monster, hiding in the steel cabinets of the supercomputers and threatening to crash the largest computing machines on the planet. The monster is something supercomputer specialists call resilience- or rather the lack of resilience. It has bitten several supercomputers in the past. A high-profile example affected what was the second fastest supercomputer in the world in 2002, a machine called ASCI Q at Los Alamos National Laboratory. When it was first installed at the New Mexico lab, this computer couldn’t run more than an hour or so without crashing."
]
} |
1903.01042 | 2919853922 | This work proposes the first strategy to make distributed training of neural networks resilient to computing errors, a problem that has remained unsolved despite being first posed in 1956 by von Neumann. He also speculated that the efficiency and reliability of the human brain are obtained by allowing for low power but error-prone components with redundancy for error-resilience. It is surprising that this problem remains open, even as massive artificial neural networks are being trained on increasingly low-cost and unreliable processing units. Our coding-theory-inspired strategy, "CodeNet," solves this problem by addressing three challenges in the science of reliable computing: (i) Providing the first strategy for error-resilient neural network training by encoding each layer separately; (ii) Keeping the overheads of coding (encoding/error-detection/decoding) low by obviating the need to re-encode the updated parameter matrices after each iteration from scratch. (iii) Providing a completely decentralized implementation with no central node (which is a single point of failure), allowing all primary computational steps to be error-prone. We theoretically demonstrate that CodeNet has higher error tolerance than replication, which we leverage to speed up computation time. Simultaneously, CodeNet requires lower redundancy than replication, and equal computational and communication costs in a scaling sense. We first demonstrate the benefits of CodeNet in reducing expected computation time over replication when accounting for checkpointing. Our experiments show that CodeNet achieves the best accuracy-runtime tradeoff compared to both replication and uncoded strategies. CodeNet is a significant step towards biologically plausible neural network training that could hold the key to orders of magnitude efficiency improvements. 
| Fault tolerance has been actively studied since von Neumann's work @cite_71 (see @cite_56 @cite_5 @cite_91 @cite_19 @cite_8 ). Existing techniques fall into two categories: roll-backward and roll-forward error correction. Roll-backward error correction refers to different forms of checkpointing @cite_67 , where the computation-state is transmitted to, and stored in, a disk at regular programmer-defined intervals. When errors are detected, the last stored state is retrieved from the disk, and the computation is resumed from the previous checkpoint. However, checkpointing comes with immense communication costs @cite_67 . Retrieving the state of the system from the disk is extremely time-intensive, and can significantly slow down the computation if the errors are frequent. | {
"cite_N": [
"@cite_67",
"@cite_91",
"@cite_8",
"@cite_56",
"@cite_19",
"@cite_71",
"@cite_5"
],
"mid": [
"2466095195",
"2116767161",
"",
"2017795258",
"2162449851",
"2497735908",
"2142663831"
],
"abstract": [
"This timely text presents a comprehensive overview of fault tolerance techniques for high-performance computing (HPC). The text opens with a detailed introduction to the concepts of checkpoint protocols and scheduling algorithms, prediction, replication, silent error detection and correction, together with some application-specific techniques such as ABFT. Emphasis is placed on analytical performance models. This is then followed by a review of general-purpose techniques, including several checkpoint and rollback recovery protocols. Relevant execution scenarios are also evaluated and compared through quantitative models. Features: provides a survey of resilience methods and performance models; examines the various sources for errors and faults in large-scale systems; reviews the spectrum of techniques that can be applied to design a fault-tolerant MPI; investigates different approaches to replication; discusses the challenge of energy consumption of fault-tolerance methods in extreme-scale systems.",
"A proof is provided that a logarithmic redundancy factor is necessary for the reliable computation of the parity function by means of a network with noisy gates. This result was first stated by R.L. Dobrushin and S.I. Ortyukov (1977). However, the authors believe that the analysis given by Dobrushin and Ortyukov is not entirely correct. The authors establish the result by following the same steps and by replacing the questionable part of their analysis with entirely new arguments.",
"",
"This is the first of two papers which consider the theoretical capabilities of computing systems designed from unreliable components. This paper discusses the capabilities of memories; the second paper discusses the capabilities of entire computing systems. Both present existence theorems analogous to the existence theorems of information theory. The fundamental result of information theory is that communication channels have a capacity, C, such that for all information rates less than C, arbitrarily reliable communication can be achieved. In analogy with this result, it is shown that each type of memory has an information storage capacity, C, such that for all memory redundancies greater than 1 C arbitrarily reliable information storage can be achieved. Since memory components malfunction in many different ways, two representative models for component malfunctions are considered. The first is based on the assumption that malfunctions of a particular component are statistically independent from one use to another. The second is based on the assumption that components fail permanently but that bad components are periodically replaced with good ones. In both cases, malfunctions in different components are assumed to be independent. For both models it is shown that there exist memories, constructed entirely from unreliable components of the assumed type, which have nonzero information storage capacities.",
"We re-introduce the coded model of fault-tolerant computation in which the input and output of a computational device are treated as words in an error-correcting code. A computational device correctly computes a function in the coded model if its input and output, once decoded, are a valid input and output of the function. In the coded model, it is reasonable to hope to simulate all computational devices by devices whose size is greater by a constant factor but which are exponentially reliable even if each of their components can fail with some constant probability. We consider fine-grained parallel computations in which each processor has a constant probability of producing the wrong output at each time step. We show that any parallel computation that runs for time t on w processors can be performed reliably on a faulty machine in the coded model using w log^{O(1)} w processors and time t log^{O(1)} w. The failure probability of the computation will be at most t · exp(-w^{1/4}). The codes used to communicate with our fault-tolerant machines are generalized Reed-Solomon codes and can thus be encoded and decoded in O(n log^{O(1)} n) sequential time and are independent of the machine they are used to communicate with. We also show how coded computation can be used to self-correct many linear functions in parallel with arbitrarily small overhead.",
"",
"This paper discusses fault tolerance in discrete-time dynamic systems, such as finite-state controllers or computer simulations, with focus on the use of coding techniques to efficiently provide fault tolerance to linear finite-state machines (LFSMs). Unlike traditional fault tolerance schemes, which rely heavily-particularly for dynamic systems operating over extended time horizons-on the assumption that the error-correcting mechanism is fault free, we are interested in the case when all components of the implementation are fault prone. The paper starts with a paradigmatic fault tolerance scheme that systematically adds redundancy into a discrete-time dynamic system in a way that achieves tolerance to transient faults in both the state transition and the error-correcting mechanisms. By combining this methodology with low-complexity error-correcting coding, we then obtain an efficient way of providing fault tolerance to k identical unreliable LFSMs that operate in parallel on distinct input sequences. The overall construction requires only a constant amount of redundant hardware per machine (but sufficiently large k) to achieve an arbitrarily small probability of overall failure for any prespecified (finite) time interval, leading in this way to a lower bound on the computational capacity of unreliable LFSMs."
]
} |
1903.01042 | 2919853922 | This work proposes the first strategy to make distributed training of neural networks resilient to computing errors, a problem that has remained unsolved despite being first posed in 1956 by von Neumann. He also speculated that the efficiency and reliability of the human brain are obtained by allowing for low power but error-prone components with redundancy for error-resilience. It is surprising that this problem remains open, even as massive artificial neural networks are being trained on increasingly low-cost and unreliable processing units. Our coding-theory-inspired strategy, "CodeNet," solves this problem by addressing three challenges in the science of reliable computing: (i) Providing the first strategy for error-resilient neural network training by encoding each layer separately; (ii) Keeping the overheads of coding (encoding/error-detection/decoding) low by obviating the need to re-encode the updated parameter matrices after each iteration from scratch. (iii) Providing a completely decentralized implementation with no central node (which is a single point of failure), allowing all primary computational steps to be error-prone. We theoretically demonstrate that CodeNet has higher error tolerance than replication, which we leverage to speed up computation time. Simultaneously, CodeNet requires lower redundancy than replication, and equal computational and communication costs in a scaling sense. We first demonstrate the benefits of CodeNet in reducing expected computation time over replication when accounting for checkpointing. Our experiments show that CodeNet achieves the best accuracy-runtime tradeoff compared to both replication and uncoded strategies. CodeNet is a significant step towards biologically plausible neural network training that could hold the key to orders of magnitude efficiency improvements. 
| An alternative (and often complementary) approach is roll-forward error correction, where redundancy is introduced into the computation itself, and detected errors are corrected prior to proceeding. Use of sophisticated (i.e., non-replication-based) error-correcting codes in roll-forward error correction dates at least as far back as 1984, when Algorithm-Based Fault Tolerance (ABFT) was proposed by Huang and Abraham @cite_83 for certain linear algebraic operations. ABFT techniques @cite_83 @cite_67 mainly use parity checks to detect and sometimes correct faults, so that computation can proceed without the need to roll back, when the number of errors is limited. Here, we are interested in soft-errors in a completely decentralized setup, which imposes further difficulties because there is no reliable central node and all computational steps are error-prone. | {
"cite_N": [
"@cite_67",
"@cite_83"
],
"mid": [
"2466095195",
"2083613288"
],
"abstract": [
"This timely text presents a comprehensive overview of fault tolerance techniques for high-performance computing (HPC). The text opens with a detailed introduction to the concepts of checkpoint protocols and scheduling algorithms, prediction, replication, silent error detection and correction, together with some application-specific techniques such as ABFT. Emphasis is placed on analytical performance models. This is then followed by a review of general-purpose techniques, including several checkpoint and rollback recovery protocols. Relevant execution scenarios are also evaluated and compared through quantitative models. Features: provides a survey of resilience methods and performance models; examines the various sources for errors and faults in large-scale systems; reviews the spectrum of techniques that can be applied to design a fault-tolerant MPI; investigates different approaches to replication; discusses the challenge of energy consumption of fault-tolerance methods in extreme-scale systems.",
"The rapid progress in VLSI technology has reduced the cost of hardware, allowing multiple copies of low-cost processors to provide a large amount of computational capability for a small cost. In addition to achieving high performance, high reliability is also important to ensure that the results of long computations are valid. This paper proposes a novel system-level method of achieving high reliability, called algorithm-based fault tolerance. The technique encodes data at a high level, and algorithms are designed to operate on encoded data and produce encoded output data. The computation tasks within an algorithm are appropriately distributed among multiple computation units for fault tolerance. The technique is applied to matrix compomations which form the heart of many computation-intensive tasks. Algorithm-based fault tolerance schemes are proposed to detect and correct errors when matrix operations such as addition, multiplication, scalar product, LU-decomposition, and transposition are performed using multiple processor systems. The method proposed can detect and correct any failure within a single processor in a multiple processor system. The number of processors needed to just detect errors in matrix multiplication is also studied."
]
} |
1903.01165 | 2949143616 | We investigate the manipulation of power indices in TU-cooperative games by stimulating (subject to a budget constraint) changes in the propensity of other players to participate in the game. We display several algorithms that show that the problem is often tractable for so-called network centrality games and influence attribution games, as well as an example when optimal manipulation is intractable, even though computing power indices is feasible. | First of all, network interdiction (see e.g. @cite_25 @cite_36 ) is a well-established theme in combinatorial optimization. Our removal model can be seen as a special case of node interdiction. | {
"cite_N": [
"@cite_36",
"@cite_25"
],
"mid": [
"2229536649",
"1538437077"
],
"abstract": [
"A network interdiction problem usually involves two players who compete in a min‐max or max‐min game. One player, the network owner, tries to optimize its objective over the network, for example, as measured by a shortest path, maximum flow, or minimum cost flow. The opposing player, called the interdictor, alters the owner’s network to maximally impair the owner’s objective (e.g., by destroying arcs that maximize the owner’s shortest path). This chapter",
"The field of network interdiction explores techniques for optimally impeding network operations using limited disruption actions. These disruptions may serve to remove network components, decrease arc capacities, or increase cost flows. Network interdiction models and algorithms help to identify vulnerabilities in a complex network without resorting to exhaustively enumerating worst-case scenarios, and can be coupled with fortification models to assess benefits of protecting a network. This article is presented at an introductory level to illustrate the use and breadth of interdiction models, along with a discussion of fortification and design problems. Keywords: interdiction; network; mathematical programming; mathematical models; game theory"
]
} |
1903.01165 | 2949143616 | We investigate the manipulation of power indices in TU-cooperative games by stimulating (subject to a budget constraint) changes in the propensity of other players to participate in the game. We display several algorithms that show that the problem is often tractable for so-called network centrality games and influence attribution games, as well as an example when optimal manipulation is intractable, even though computing power indices is feasible. | Results on the reliability extension of a cooperative game @cite_17 @cite_43 @cite_4 @cite_41 @cite_5 are naturally related. So is the rich literature on manipulation, both in non-cooperative and coalitional settings @cite_6 @cite_9 @cite_10 @cite_0 @cite_35 @cite_20 @cite_45 and bribery @cite_42 in voting. Our framework covers both scenarios, that in which an external perpetrator bribes agents to change their reliabilities, and that in which this is done by a coalition of agents. | {
"cite_N": [
"@cite_35",
"@cite_4",
"@cite_41",
"@cite_9",
"@cite_42",
"@cite_6",
"@cite_0",
"@cite_43",
"@cite_45",
"@cite_5",
"@cite_10",
"@cite_20",
"@cite_17"
],
"mid": [
"2492695942",
"1593893973",
"2163422411",
"2123942784",
"14982402",
"2102171270",
"3422056",
"",
"2277897815",
"2130293309",
"2269108872",
"",
"1839131291"
],
"abstract": [
"The Internet and social media have fuelled enormous interest in social network analysis. New tools continue to be developed and used to analyse our personal connections, with particular emphasis on detecting communities or identifying key individuals in a social network. This raises privacy concerns that are likely to exacerbate in the future. With this in mind, we ask the question ‘Can individuals or groups actively manage their connections to evade social network analysis tools?’ By addressing this question, the general public may better protect their privacy, oppressed activist groups may better conceal their existence and security agencies may better understand how terrorists escape detection. We first study how an individual can evade ‘node centrality’ analysis while minimizing the negative impact that this may have on his or her influence. We prove that an optimal solution to this problem is difficult to compute. Despite this hardness, we demonstrate how even a simple heuristic, whereby attention is restricted to the individual’s immediate neighbourhood, can be surprisingly effective in practice; for example, it could easily disguise Mohamed Atta’s leading position within the World Trade Center terrorist network. We also study how a community can increase the likelihood of being overlooked by community-detection algorithms. We propose a measure of concealment—expressing how well a community is hidden—and use it to demonstrate the effectiveness of a simple heuristic, whereby members of the community either ‘unfriend’ certain other members or ‘befriend’ some non-members in a coordinated effort to camouflage their community.",
"We examine the impact of independent agents failures on the solutions of cooperative games, focusing on totally balanced games and the more specific subclass of convex games. We follow the reliability extension model, recently proposed in [1] and show that a (approximately) totally balanced (or convex) game remains (approximately) totally balanced (or convex) when independent agent failures are introduced or when the failure probabilities increase. One implication of these results is that any reliability extension of a totally balanced game has a non-empty core. We propose an algorithm to compute such a core imputation with high probability. We conclude by outlining the effect of failures on non-emptiness of the core in cooperative games, especially in totally balanced games and simple games, thereby extending observations in [1].",
"We examine agent failures in weighted voting games. In our cooperative game model, R-WVG, each agent has a weight and a survival probability, and the value of an agent coalition is the probability that its surviving members would have a total weight exceeding a threshold. We propose algorithms for computing the value of a coalition, finding stable payoff allocations, and estimating the power of agents. We provide simulation results showing that on average the stability level of a game increases as the failure probabilities of the agents increase. This conforms to several recent results showing that failures increase stability in cooperative games.",
"In 1992, Bartholdi, Tovey, and Trick opened the study of control attacks on elections -- attempts to improve the election outcome by such actions as adding/deleting candidates or voters. That work has led to many results on how algorithms can be used to find attacks on elections and how complexity-theoretic hardness results can be used as shields against attacks. However, all the work in this line has assumed that the attacker employs just a single type of attack. In this paper, we model and study the case in which the attacker launches a multipronged (i.e., multimode) attack. We do so to more realistically capture the richness of real-life settings. For example, an attacker might simultaneously try to suppress some voters, attract new voters into the election, and introduce a spoiler candidate. Our model provides a unified framework for such varied attacks. By constructing polynomial-time multiprong attack algorithms we prove that for various election systems even such concerted, flexible attacks can be perfectly planned in deterministic polynomial time.",
"We study the complexity of influencing elections through bribery: How computationally complex is it for an external actor to determine whether by a certain amount of bribing voters a specified candidate can be made the election's winner? We study this problem for election systems as varied as scoring protocols and Dodgson voting, and in a variety of settings regarding homogeneous-vs.-nonhomogeneous electorate bribability, bounded-size-vs.-arbitrary-sized candidate sets, weighted-vs.-unweighted voters, and succinct-vs.-nonsuccinct input specification. We obtain both polynomial-time bribery algorithms and proofs of the intractability of bribery, and indeed our results show that the complexity of bribery is extremely sensitive to the setting. For example, we find settings in which bribery is NP-complete but manipulation (by voters) is in P, and we find settings in which bribing weighted voters is NP-complete but bribing voters with individual bribe thresholds is in P. For the broad class of elections (including plurality, Borda, k-approval, and veto) known as scoring protocols, we prove a dichotomy result for bribery of weighted voters: We find a simple-to-evaluate condition that classifies every case as either NP-complete or in P.",
"Weighted voting is a classic model of cooperation among agents in decision-making domains. In such games, each player has a weight, and a coalition of players wins the game if its total weight meets or exceeds a given quota. A player's power in such games is usually not directly proportional to his weight, and is measured by a power index, the most prominent among which are the Shapley-Shubik index and the Banzhaf index. In this paper, we investigate by how much a player can change his power, as measured by the Shapley-Shubik index or the Banzhaf index, by means of a false-name manipulation, i.e., splitting his weight among two or more identities. For both indices, we provide upper and lower bounds on the effect of weight-splitting. We then show that checking whether a beneficial split exists is NP-hard, and discuss efficient algorithms for restricted cases of this problem, as well as randomized algorithms for the general case. We also provide an experimental evaluation of these algorithms. Finally, we examine related forms of manipulative behavior, such as annexation, where a player subsumes other players, or merging, where several players unite into one. We characterize the computational complexity of such manipulations and provide limits on their effects. For the Banzhaf index, we describe a new paradox, which we term the Annexation Non-monotonicity Paradox.",
"We study the effects of bidder collaboration in all-pay auctions. We analyse both mergers, where the remaining players are aware of the agreement between the cooperating participants, and collusion, where the remaining players are unaware of this agreement. We examine two scenarios: the sum-profit model where the auctioneer obtains the sum of all submitted bids, and the max-profit model of crowdsourcing contests where the auctioneer can only use the best submissions and thus obtains only the winning bid. We show that while mergers do not change the expected utility of the participants, or the principal's utility in the sum-profit model, collusion transfers the utility from the non-colluders to the colluders. Surprisingly, we find that in some cases such collaboration can increase the social welfare. Moreover, mergers and, curiously, also collusion can even be beneficial to the auctioneer under certain conditions.",
"",
"Hedonic games model agents that decide which other agents they will join, given some preferences on other agents. We study Sybil attacks on such games, by a malicious agent which introduces multiple false identities, so that the outcome of the game is more interesting for itself. First taking Nash stability as the solution concept, we consider two simple manipulations, and show that they are essentially the only possible Sybil manipulations. Moreover, small experiments show that they are seldom possible in random games. We exhibit another simple manipulation on the concepts of (contractual) individual stability afterwards. Then we show that such hedonic games are very sensitive to Sybil manipulations, which contrasts sharply with the Nash case.",
"Procuring multiple agents with different ability levels to independently solve the same task is common in labor markets, crowdsourcing environments and research and development projects due to two reasons: some agents may fail to provide a satisfactory solution, and the redundancy increases the quality of the best solution found. However, incentivizing large number of agents to compete for one task is difficult; agents need fair ex-ante guaranteed payoffs that consider their ability levels and failure rates to exert efforts. We model such domains as a cooperative game called the Max-Game, where each agent has a weight representing its ability level, and the value of an agent coalition is the maximal weight of the agents in the coalition. When agents may fail, we redefine the value of a coalition as the expected maximal weight of its surviving members. We analyze the core, the Shapley value, and the Banzhaf index as methods of payoff division. Surprisingly, the latter two, which are usually computationally hard, can be computed in polynomial time. Finally, we initiate the study of a new form of sabotage where agents may be incentivized to influence the failure probabilities of their peers, and show that no such incentive is present in a restricted case of Max-Games.",
"Weighted voting games provide a simple model of decision-making in human societies and multi-agent systems. Such games are described by a set of players, a list of players' weights, and a quota; a coalition of the players is said to be winning if the total weight of its members meets or exceeds the quota. The power of a player in a weighted voting game is traditionally identified with her Shapley-Shubik index or her Banzhaf index, two classic power measures that reflect the player's marginal contribution under different coalition formation scenarios. In this paper, we investigate by how much one can change a player's power, as measured by these indices, by modifying the quota. We give tight bounds on the changes in the individual players' power that can result from a change in quota. We then describe an efficient algorithm for determining whether there is a value of the quota that makes a given player a dummy, i.e., reduces her power (as measured by both indices) to 0. We also study how the choice of quota can affect the relative power of the players. Finally, we investigate scenarios where the center's choice in setting the quota is constrained. We show that optimally choosing between two values of the quota is complete for the complexity class PP, which is believed to be significantly more powerful than NP. On the other hand, we empirically demonstrate that even small changes in quota can have a significant effect on a player's power.",
"",
"We propose a natural model for agent failures in congestion games. In our model, each of the agents may fail to participate in the game, introducing uncertainty regarding the set of active agents. We examine how such uncertainty may change the Nash equilibria (NE) of the game. We prove that although the perturbed game induced by the failure model is not always a congestion game, it still admits at least one pure Nash equilibrium. Then, we turn to examine the effect of failures on the maximal social cost in any NE of the perturbed game. We show that in the limit case where failure probability is negligible new equilibria never emerge, and that the social cost may decrease but it never increases. For the case of nonnegligible failure probabilities, we provide a full characterization of the maximal impact of failures on the social cost under worst-case equilibrium outcomes."
]
} |
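The quota effects described in the weighted voting game abstract above can be illustrated in a few lines. The sketch below is a toy (brute-force enumeration, exponential in the number of players; the weights and quotas are invented for the example, not taken from any cited paper): it computes raw Banzhaf indices and shows that changing only the quota shifts a player's power.

```python
# Toy weighted voting game: Banzhaf power under two different quotas.
from itertools import combinations

def banzhaf(weights, quota):
    """Raw Banzhaf index of each player: the fraction of coalitions of
    the other players for which the player is critical, i.e. swings the
    coalition from losing to winning."""
    n = len(weights)
    swings = [0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for r in range(len(others) + 1):
            for coalition in combinations(others, r):
                w = sum(weights[j] for j in coalition)
                if w < quota <= w + weights[i]:
                    swings[i] += 1
    return [s / 2 ** (n - 1) for s in swings]

weights = [4, 3, 2, 1]            # invented voting weights
print(banzhaf(weights, quota=6))  # [0.625, 0.375, 0.375, 0.125]
print(banzhaf(weights, quota=4))  # [0.625, 0.375, 0.125, 0.125]
```

Lowering the quota from 6 to 4 leaves the weights untouched yet cuts the third player's Banzhaf index from 0.375 to 0.125, which is the kind of quota manipulation the abstract above analyzes.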
1903.01165 | 2949143616 | We investigate the manipulation of power indices in TU-cooperative games by stimulating (subject to a budget constraint) changes in the propensity of other players to participate in the game. We present several algorithms showing that the problem is often tractable for so-called network centrality games and influence attribution games, as well as an example where optimal manipulation is intractable, even though computing power indices is feasible. | A lot of work has been devoted recently to measuring and characterizing synergies in multi-agent settings @cite_29 @cite_22 @cite_19 . Synergies between players in cooperative games are obviously relevant to the theme of this paper: a synergistic agent's participation in coalitions increases the Shapley value of the given agent. The nature of some of our results (Theorems , and ), which target nodes in a fixed order, provides a concrete way of ranking synergies between these nodes and the attacked one. | {
"cite_N": [
"@cite_19",
"@cite_29",
"@cite_22"
],
"mid": [
"2091149117",
"2406209768",
"8289063"
],
"abstract": [
"Previous approaches to select agents to form a team rely on single-agent capabilities, and team performance is treated as a sum of such known capabilities. Motivated by complex team formation situations, we address the problem where both single-agent capabilities may not be known upfront, e.g., as in ad hoc teams, and where team performance goes beyond single-agent capabilities and depends on the specific synergy among agents. We formally introduce a novel weighted synergy graph model to capture new interactions among agents. Agents are represented as vertices in the graph, and their capabilities are represented as Normally-distributed variables. The edges of the weighted graph represent how well the agents work together, i.e., their synergy in a team. We contribute a learning algorithm that learns the weighted synergy graph using observations of performance of teams of only two and three agents. Further, we contribute two team formation algorithms, one that finds the optimal team in exponential time, and one that approximates the optimal team in polynomial time. We extensively evaluate our learning algorithm, and demonstrate the expressiveness of the weighted synergy graph in a variety of problems. We show our approach in a rich ad hoc team formation problem capturing a rescue domain, namely the RoboCup Rescue domain, where simulated robots rescue civilians and put out fires in a simulated urban disaster. We show that the weighted synergy graph outperforms a competing algorithm, thus illustrating the efficacy of our model and algorithms.",
"We investigate synergy, or lack thereof, between agents in co-operative games, building on the popular notion of Shapley value. We think of a pair of agents as synergistic (resp., antagonistic) if the Shapley value of one agent when the other agent participates in a joint effort is higher (resp. lower) than when the other agent does not participate. Our main theoretical result is that any graph specifying synergistic and antagonistic pairs can arise even from a restricted class of cooperative games. We also study the computational complexity of determining whether a given pair of agents is synergistic. Finally, we use the concepts developed in the paper to uncover the structure of synergies in two real-world organizations, the European Union and the International Monetary Fund.",
"The performance of a team at a task depends critically on the composition of its members. There is a notion of synergy in human teams that represents how well teams work together, and we are interested in modeling synergy in multi-agent teams. We focus on the problem of team formation, i.e., selecting a subset of a group of agents in order to perform a task, where each agent has its own capabilities, and the performance of a team of agents depends on the individual agent capabilities as well as the synergistic effects among the agents. We formally define synergy and how it can be computed using a synergy graph, where the distance between two agents in the graph correlates with how well they work together. We contribute a learning algorithm that learns a synergy graph from observations of the performance of subsets of the agents, and show that our learning algorithm is capable of learning good synergy graphs without prior knowledge of the interactions of the agents or their capabilities. We also contribute an algorithm to solve the team formation problem using the learned synergy graph, and experimentally show that the team formed by our algorithm outperforms a competing algorithm."
]
} |
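The synergy notion in the related-work passage above, an agent's Shapley value rising when a synergistic peer also participates, can be made concrete with a small sketch. The characteristic function below is an invented toy, not one of the cited games, and the exact enumeration is exponential in the number of players, so this is purely illustrative.

```python
# Exact Shapley values by averaging marginal contributions over all
# player orderings; then a synergy test: does 'a' gain when 'b' joins?
from itertools import permutations

def shapley(players, v):
    phi = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = set()
        for p in order:
            before = v(coalition)
            coalition.add(p)
            phi[p] += v(coalition) - before
    return {p: total / len(orders) for p, total in phi.items()}

def v(S):
    """Toy characteristic function: 'a' and 'b' are complementary."""
    return 10.0 if {'a', 'b'} <= S else float(len(S))

with_b = shapley(['a', 'b', 'c'], v)['a']  # a's value when b participates
without_b = shapley(['a', 'c'], v)['a']    # a's value when b stays out
print(with_b > without_b)                  # True: ('a', 'b') is a synergistic pair
```

This is exactly the comparison used to call a pair synergistic in the second cited abstract: one agent's Shapley value is higher when the other participates than when it does not.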
1903.00846 | 2919800272 | Internet of Things (IoT) is a novel paradigm, which not only facilitates a large number of devices to be ubiquitously connected over the Internet but also provides a mechanism to remotely control these devices. The IoT is pervasive and is almost an integral part of our daily life. As devices are becoming increasingly connected, privacy and security issues become more and more critical and these need to be addressed on an urgent basis. IoT implementations and devices are eminently prone to threats that could compromise the security and privacy of the consumers, which, in turn, could influence its practical deployment. In the recent past, some research has been carried out to secure IoT devices with an intention to alleviate the security concerns of users. The purpose of this paper is to highlight the security and privacy issues in IoT systems. To this effect, the paper examines the security issues at each layer in the IoT protocol stack, identifies the underlying challenges and key security requirements and provides a brief overview of existing security solutions to safeguard the IoT from the layered context. | The term Internet of Things (IoT) was first introduced by Kevin Ashton of MIT’s Auto-ID lab in 1998 @cite_60 . IoT is believed to be the most influential technology since the Internet itself. The number of interconnected physical devices is increasing rapidly, and it already surpassed the human population in 2010. There has been significant work on the development of IoT-enabled devices in recent years. Advances in resource-constrained, energy-efficient devices have extended the reach of the Internet even to remote locations, and the number of interconnected physical devices has exceeded all expectations. | {
"cite_N": [
"@cite_60"
],
"mid": [
"2022678714"
],
"abstract": [
"The Internet of Things as an emerging global, Internet-based information service architecture facilitating the exchange of goods in global supply chain networks is developing on the technical basis of the present Domain Name System; drivers are private actors. Learning from the experiences with the “traditional” Internet governance it is important to tackle the relevant issues of a regulatory framework from the beginning; in particular, the implementation of an independently managed decentralized multiple-root system and the establishment of basic governance principles (such as transparency and accountability, legitimacy of institutional bodies, inclusion of civil society) are to be envisaged."
]
} |
1903.00984 | 2918469223 | Advances in sensor technologies, object detection algorithms, planning frameworks and hardware designs have motivated the deployment of robots in warehouse automation. A variety of such applications, like order fulfillment or packing tasks, require picking objects from unstructured piles and carefully arranging them in bins or containers. Desirable solutions need to be low-cost, easily deployable and controllable, making minimalistic hardware choices desirable. The challenge in designing an effective solution to this problem relates to appropriately integrating multiple components, so as to achieve a robust pipeline that minimizes failure conditions. The current work proposes a complete pipeline for solving such packing tasks, given access only to RGB-D data and a single robot arm with a vacuum-based end-effector, which is also used as a pushing finger. To achieve the desired level of robustness, three key manipulation primitives are identified, which take advantage of the environment and simple operations to successfully pack multiple cubic objects. The overall approach is demonstrated to be robust to execution and perception errors. The impact of each manipulation primitive is evaluated by considering different versions of the proposed pipeline, which incrementally introduce reasoning about object poses and corrective manipulation actions. | Most efforts in robot picking relate to grasping with form- or force-closure using fingered hands @cite_37 . While early works focused on standalone objects @cite_15 , recent efforts are oriented toward objects in clutter @cite_27 @cite_25 @cite_8 . Whether analytical or empirical @cite_27 , grasping techniques typically identify points on object surfaces and hand configurations that allow grasping. Analytical methods rely on mechanical and geometric object models to identify stable grasps.
Empirical techniques rely on examples of successful or failed grasps to train a classifier to predict the success probabilities of grasps on novel objects @cite_16 . Analytical methods can be applied in many setups but are prone to modeling errors. Empirical methods are model-free and efficient @cite_30 but require large amounts of data. The heuristic picking strategy presented here inherits the properties of analytical methods while reducing the computational burden. | {
"cite_N": [
"@cite_30",
"@cite_37",
"@cite_8",
"@cite_27",
"@cite_15",
"@cite_16",
"@cite_25"
],
"mid": [
"2773721443",
"2005824379",
"2405660904",
"2950303304",
"1978580730",
"2963033241",
"2128613422"
],
"abstract": [
"",
"This overview presents computational algorithms for generating 3D object grasps with autonomous multi-fingered robotic hands. Robotic grasping has been an active research subject for decades, and a great deal of effort has been spent on grasp synthesis algorithms. Existing papers focus on reviewing the mechanics of grasping and the finger-object contact interactions Bicchi and Kumar (2000) [12] or robot hand design and their control Al- (1993) [70]. Robot grasp synthesis algorithms have been reviewed in Shimoga (1996) [71], but since then important progress has been made toward applying learning techniques to the grasping problem. This overview focuses on analytical as well as empirical grasp synthesis approaches.",
"We present a fully autonomous robotic system for grasping objects in dense clutter. The objects are unknown and have arbitrary shapes. Therefore, we cannot rely on prior models. Instead, the robot learns online, from scratch, to manipulate the objects by trial and error. Grasping objects in clutter is significantly harder than grasping isolated objects, because the robot needs to push and move objects around in order to create sufficient space for the fingers. These pre-grasping actions do not have an immediate utility, and may result in unnecessary delays. The utility of a pre-grasping action can be measured only by looking at the complete chain of consecutive actions and effects. This is a sequential decision-making problem that can be cast in the reinforcement learning framework. We solve this problem by learning the stochastic transitions between the observed states, using nonparametric density estimation. The learned transition function is used only for re-calculating the values of the executed actions in the observed states, with different policies. Values of new state-actions are obtained by regressing the values of the executed actions. The state of the system at a given time is a depth (3D) image of the scene. We use spectral clustering for detecting the different objects in the image. The performance of our system is assessed on a robot with real-world objects.",
"",
"This article presents a survey of the existing computational algorithms meant for achieving four important properties in autonomous multifingered robotic hands. The four properties are: dexterity, equilibrium, stability, and dynamic behavior. The multifingered robotic hands must be controlled so as to possess these properties and hence be able to autonomously perform complex tasks in a way similar to human hands. Existing algorithms to achieve dexterity primarily involve solving an unconstrained linear programming problem where an objective function can be chosen to represent one or more of the currently known dexterity measures. Algorithms to achieve equilibrium also constitute solving a linear programming problem wherein the positivity, friction, and joint torque constraints of all fingers are accounted for while optimizing the internal grasping forces. Stability algorithms aim at achieving positive definite grasp impedance matrices by solving for the required fingertip impedances. This problem reduces ...",
"This paper presents a real-time, object-independent grasp synthesis method which can be used for closed-loop grasping. Our proposed Generative Grasping Convolutional Neural Network (GG-CNN) predicts the quality and pose of grasps at every pixel. This one-to-one mapping from a depth image overcomes limitations of current deep-learning grasping techniques by avoiding discrete sampling of grasp candidates and long computation times. Additionally, our GG-CNN is orders of magnitude smaller while detecting stable grasps with equivalent performance to current state-of-the-art techniques. The lightweight and single-pass generative nature of our GG-CNN allows for closed-loop control at up to 50Hz, enabling accurate grasping in non-static environments where objects move and in the presence of robot control inaccuracies. In our real-world tests, we achieve an 83% grasp success rate on a set of previously unseen objects with adversarial geometry and 88% on a set of household objects that are moved during the grasp attempt. We also achieve 81% accuracy when grasping in dynamic clutter.",
"Manipulating natural objects of irregular shapes, such as rocks, is an essential capability of robots operating in outdoor environments. Physics-based simulators are commonly used to plan stable grasps for man-made objects. However, planning is an expensive process that is based on simulating hand and object trajectories in different configurations, and evaluating the outcome of each trajectory. This problem is particularly concerning when the objects are irregular or cluttered, because the space of feasible grasps is significantly smaller, and more configurations need to be evaluated before finding a good one. In this paper, we first present a learning technique for fast detection of an initial set of potentially stable grasps in a cluttered scene. The best detected grasps are further optimized by fine-tuning the configuration of the hand in simulation. To reduce the computational burden of this last operation, we model the outcomes of the grasps as a Gaussian Process, and use an entropy-search method in order to focus the optimization on regions where the best grasp is most likely to be. This approach is tested on the task of clearing piles of real, unknown, rock debris with an autonomous robot. Empirical results show a clear advantage of the proposed approach when the time window for decision is short."
]
} |
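As a rough illustration of the "empirical" grasp-synthesis idea summarized above (training a predictor on successful and failed grasps), here is a minimal nearest-neighbour sketch. The two-dimensional features and every data point are invented for the example; real systems learn from large datasets with far richer representations and learned models.

```python
# Predict the success probability of a novel grasp from labelled past
# grasps with a k-nearest-neighbour vote (purely illustrative).
import math

# (feature vector, success label); the features might stand for, e.g.,
# approach angle and contact offset from the centre of mass (hypothetical).
past_grasps = [
    ((0.10, 0.20), 1), ((0.20, 0.10), 1), ((0.15, 0.25), 1),
    ((0.90, 0.80), 0), ((0.80, 0.90), 0), ((0.85, 0.75), 0),
]

def success_probability(grasp, data=past_grasps, k=3):
    """Fraction of successes among the k nearest labelled grasps."""
    nearest = sorted(data, key=lambda d: math.dist(grasp, d[0]))[:k]
    return sum(label for _, label in nearest) / k

print(success_probability((0.12, 0.18)))  # 1.0 — near the successful cluster
print(success_probability((0.88, 0.82)))  # 0.0 — near the failed cluster
```

Analytical methods would instead score these candidates from object geometry and contact mechanics; the trade-off between the two families is exactly the one discussed in the related-work passage above.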
1903.00984 | 2918469223 | Advances in sensor technologies, object detection algorithms, planning frameworks and hardware designs have motivated the deployment of robots in warehouse automation. A variety of such applications, like order fulfillment or packing tasks, require picking objects from unstructured piles and carefully arranging them in bins or containers. Desirable solutions need to be low-cost, easily deployable and controllable, making minimalistic hardware choices desirable. The challenge in designing an effective solution to this problem relates to appropriately integrating multiple components, so as to achieve a robust pipeline that minimizes failure conditions. The current work proposes a complete pipeline for solving such packing tasks, given access only to RGB-D data and a single robot arm with a vacuum-based end-effector, which is also used as a pushing finger. To achieve the desired level of robustness, three key manipulation primitives are identified, which take advantage of the environment and simple operations to successfully pack multiple cubic objects. The overall approach is demonstrated to be robust to execution and perception errors. The impact of each manipulation primitive is evaluated by considering different versions of the proposed pipeline, which incrementally introduce reasoning about object poses and corrective manipulation actions. | The 3D bin packing problem, which is NP-hard @cite_5 @cite_3 , requires objects of different or similar volumes to be packed into a cubical bin. Most strategies search for @math -optimal solutions via greedy algorithms @cite_20 . While bin packing is studied extensively, to the best of the authors' knowledge there are few attempts to deploy bin packing solutions on real robots, where inaccuracies in vision and control are taken into account. 
Such inaccuracies have been considered in the context of efforts relating to the Amazon Robotics Challenge @cite_13 @cite_12 @cite_21 @cite_11 @cite_0 @cite_6 but most of these systems do not deal with bin packing. Most deployments of automatic packing use mechanical components, such as conveyor trays, that are specifically designed for certain products @cite_10 , rendering them difficult to customize and deploy. Industrial packing systems also assume that the objects are already mechanically sorted before packing. While this work makes the assumption that the objects have the same dimensions, the setup is more challenging as the objects are randomly thrown in the initial bin. | {
"cite_N": [
"@cite_10",
"@cite_21",
"@cite_3",
"@cite_6",
"@cite_0",
"@cite_5",
"@cite_13",
"@cite_12",
"@cite_20",
"@cite_11"
],
"mid": [
"1548119923",
"2963394817",
"2101057470",
"2221752211",
"2551030640",
"",
"2529170537",
"2890982024",
"2068863978",
"2766201361"
],
"abstract": [
"This paper describes a strawberry-harvesting robot, a packing robot, and a movable bench system. The harvesting and packing operations in strawberry production require harder, more time-consuming work compared to other operations such as transplanting and chemical spraying, making automation of these tasks desirable. Since harvesting and packing operations account for half of total working hours, their automation is strongly desired. First of all, based on the findings of many studies on strawberry-harvesting robots for soil culture and elevated substrate culture, our institute of the Bio-oriented Technology Research Advancement Institution and Shibuya Seiki developed a commercial model of a strawberry-harvesting robot, which is chiefly composed of a cylindrical manipulator, machine vision, an end-effector, and a traveling platform. The results showed an average 54.9% harvesting success rate, an 8.6 s cycle time of the picking operation, and 102.5 m h work efficiency in hanging-type growing beds in an experimental greenhouse. Secondly, a prototype automatic packing robot consisting of a supply unit and a packing unit was developed. The supply unit picks up strawberries from a harvesting container, and the packing unit sucks each fruit from the calyx side and orients it into a tray. Performance testing showed that automatic packing had a task success rate of 97.3%, with a process time per fruit of 7.3 s. Thirdly, a movable bench system was developed, which makes planting beds rotate in longitudinal and lateral ways. This system brought high-density production and labour-saving operation at a fixed position for tasks such as crop maintenance and harvesting. By setting up the main body of a strawberry-harvesting robot in the working space, an unmanned operation technique was developed and tested in an experimental greenhouse. Field experiments of these new automation technologies were conducted and demonstrated their potential for practical use.",
"The Amazon Robotics Challenge enlisted sixteen teams to each design a pick-and-place robot for autonomous warehousing, addressing development in robotic vision and manipulation. This paper presents the design of our custom-built, cost-effective, Cartesian robot system Cartman, which won first place in the competition finals by stowing 14 (out of 16) and picking all 9 items in 27 minutes, scoring a total of 272 points. We highlight our experience-centred design methodology and key aspects of our system that contributed to our competitiveness. We believe these aspects are crucial to building robust and effective robotic systems.",
"The problem addressed in this paper is that of orthogonally packing a given set of rectangular-shaped items into the minimum number of three-dimensional rectangular bins. The problem is strongly NP-hard and extremely difficult to solve in practice. Lower bounds are discussed, and it is proved that the asymptotic worst-case performance ratio of the continuous lower bound is 1/8. An exact algorithm for filling a single bin is developed, leading to the definition of an exact branch-and-bound algorithm for the three-dimensional bin packing problem, which also incorporates original approximation algorithms. Extensive computational results, involving instances with up to 90 items, are presented: It is shown that many instances can be solved to optimality within a reasonable time limit.",
"An important logistics application of robotics involves manipulators that pick-and-place objects placed in warehouse shelves. A critical aspect of this task corresponds to detecting the pose of a known object in the shelf using visual data. Solving this problem can be assisted by the use of an RGBD sensor, which also provides depth information beyond visual data. Nevertheless, it remains a challenging problem since multiple issues need to be addressed, such as low illumination inside shelves, clutter, texture-less and reflective objects as well as the limitations of depth sensors. This letter provides a new rich dataset for advancing the state-of-the-art in RGBD-based 3D object pose estimation, which is focused on the challenges that arise when solving warehouse pick-and-place tasks. The publicly available dataset includes thousands of images and corresponding ground truth data for the objects used during the first Amazon Picking Challenge at different poses and clutter conditions. Each image is accompanied with ground truth information to assist in the evaluation of algorithms for object detection. To show the utility of the dataset, a recent algorithm for RGBD-based pose estimation is evaluated in this letter. Given the measured performance of the algorithm on the dataset, this letter shows how it is possible to devise modifications and improvements to increase the accuracy of pose estimation algorithms. This process can be easily applied to a variety of different methodologies for object pose detection and improve performance in the domain of warehouse pick-and-place.",
"This paper studies two end-effector modalities for warehouse picking: (i) a recently developed, underactuated three-finger hand and (ii) a custom built, vacuum-based gripper. The two systems differ on how they pick objects. The first tool provides increased flexibility, while the vacuum alternative is simpler and smaller. The aim is to show how the end-effector influences the success rate and speed of robotic picking. For the study, the same planning process is followed for known poses of multiple objects with different geometries and characteristics. The resulting trajectories are executed on a real system showing that, under different conditions, different types of end-effectors can be beneficial. This motivates the development of hybrid solutions.",
"",
"This paper presents an overview of the inaugural Amazon Picking Challenge along with a summary of a survey conducted among the 26 participating teams. The challenge goal was to design an autonomous robot to pick items from a warehouse shelf. This task is currently performed by human workers, and there is hope that robots can someday help increase efficiency and throughput while lowering cost. We report on a 28-question survey posed to the teams to learn about each team’s background, mechanism design, perception apparatus, planning, and control approach. We identify trends in this data, correlate it with each team’s success in the competition, and discuss observations and lessons learned based on survey results and the authors’ personal experiences during the challenge. Note to Practitioners —Perception, motion planning, grasping, and robotic system engineering have reached a level of maturity that makes it possible to explore automating simple warehouse tasks in semistructured environments that involve high-mix, low-volume picking applications. This survey summarizes lessons learned from the first Amazon Picking Challenge, highlighting mechanism design, perception, and motion planning algorithms, as well as software engineering practices that were most successful in solving a simplified order fulfillment task. While the choice of mechanism mostly affects execution speed, the competition demonstrated the systems challenges of robotics and illustrated the importance of combining reactive control with deliberative planning.",
"Robotic picking from cluttered bins is a demanding task, for which Amazon Robotics holds challenges. The 2017 Amazon Robotics Challenge (ARC) required stowing items into a storage system, picking specific items, and packing them into boxes. In this paper, we describe the entry of team NimbRo Picking. Our deep object perception pipeline can be quickly and efficiently adapted to new items using a custom turntable capture system and transfer learning. It produces high-quality item segments, on which grasp poses are found. A planning component coordinates manipulation actions between two robot arms, minimizing execution time. The system has been demonstrated successfully at ARC, where our team reached second places in both the picking task and the final stow-and-pick task. We also evaluate individual components.",
"We prove that the First Fit bin packing algorithm is stable under the input distribution U{k-2, k} for all k ≥ 3, settling an open question from the recent survey by Coffman, Garey, and Johnson [“Approximation algorithms for bin packing: A survey,” Approximation algorithms for NP-hard problems, D. Hochbaum (Editor), PWS, Boston, 1996]. Our proof generalizes the multidimensional Markov chain analysis used by Kenyon, Sinclair, and Rabani to prove that Best Fit is also stable under these distributions [Proc Seventh Annual ACM-SIAM Symposium on Discrete Algorithms, 1995, pp. 351–358]. Our proof is motivated by an analysis of Random Fit, a new simple packing algorithm related to First Fit, that is interesting in its own right. We show that Random Fit is stable under the input distributions U{k-2, k}, as well as present worst case bounds and some results on distributions U{k-1, k} and U{k, k} for Random Fit. © 2000 John Wiley & Sons, Inc. Random Struct. Alg., 16, 240–259, 2000",
"This work proposes a process for efficiently searching over combinations of individual object 6D pose hypotheses in cluttered scenes, especially in cases involving occlusions and objects resting on each other. The initial set of candidate object poses is generated from state-of-the-art object detection and global point cloud registration techniques. The best-scored pose per object by using these techniques may not be accurate due to overlaps and occlusions. Nevertheless, experimental indications provided in this work show that object poses with lower ranks may be closer to the real poses than ones with high ranks according to registration techniques. This motivates a global optimization process for improving these poses by taking into account scene-level physical interactions between objects. It also implies that the Cartesian product of candidate poses for interacting objects must be searched so as to identify the best scene-level hypothesis. To perform the search efficiently, the candidate poses for each object are clustered so as to reduce their number but still keep a sufficient diversity. Then, searching over the combinations of candidate object poses is performed through a Monte Carlo Tree Search (MCTS) process that uses the similarity between the observed depth image of the scene and a rendering of the scene given the hypothesized pose as a score that guides the search procedure. MCTS handles in a principled way the tradeoff between fine-tuning the most promising poses and exploring new ones, by using the Upper Confidence Bound (UCB) technique. Experimental results indicate that this process is able to quickly identify in cluttered scenes physically-consistent object poses that are significantly closer to ground truth compared to poses found by point cloud registration methods."
]
} |
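The MCTS pose search described in the abstract above balances fine-tuning promising scene hypotheses against exploring new ones via the Upper Confidence Bound (UCB) rule. A minimal UCB1 selection sketch — the `visits`/`total_score` node fields are an assumed representation for illustration, not the authors' implementation:

```python
import math

def ucb1_select(children, c=1.4):
    """Pick the child hypothesis with the highest UCB1 score.

    Each child is a dict with 'visits' and 'total_score'; unvisited
    children are expanded first. `c` trades exploitation for exploration.
    """
    parent_visits = sum(ch["visits"] for ch in children)
    best, best_score = None, float("-inf")
    for ch in children:
        if ch["visits"] == 0:
            return ch  # always try an unvisited pose hypothesis first
        mean = ch["total_score"] / ch["visits"]
        bonus = c * math.sqrt(math.log(parent_visits) / ch["visits"])
        if mean + bonus > best_score:
            best, best_score = ch, mean + bonus
    return best
```

Note that a less-visited child can win even with a lower mean score, which is what lets the search revisit lower-ranked candidate poses.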
1903.00984 | 2918469223 | Advances in sensor technologies, object detection algorithms, planning frameworks and hardware designs have motivated the deployment of robots in warehouse automation. A variety of such applications, like order fulfillment or packing tasks, require picking objects from unstructured piles and carefully arranging them in bins or containers. Desirable solutions need to be low-cost, easily deployable and controllable, making minimalistic hardware choices desirable. The challenge in designing an effective solution to this problem relates to appropriately integrating multiple components, so as to achieve a robust pipeline that minimizes failure conditions. The current work proposes a complete pipeline for solving such packing tasks, given access only to RGB-D data and a single robot arm with a vacuum-based end-effector, which is also used as a pushing finger. To achieve the desired level of robustness, three key manipulation primitives are identified, which take advantage of the environment and simple operations to successfully pack multiple cubic objects. The overall approach is demonstrated to be robust to execution and perception errors. The impact of each manipulation primitive is evaluated by considering different versions of the proposed pipeline, which incrementally introduce reasoning about object poses and corrective manipulation actions. | Non-prehensile manipulation, such as pushing, has been shown to help grasping objects in clutter @cite_1 @cite_33 . In these works, pushing is used to reduce the uncertainty of a target object's pose. Through pushing, target objects move into graspable regions. This work follows the same principle and relies on pushing actions to counter the effects of inaccurate localization and point cloud registration. The problem is different because pushing is used for placing instead of grasping objects. 
The proposed method also uses pushing actions for toppling picked objects to change their orientations as well as to re-arrange misaligned objects. Other efforts have also considered pushing as a primitive in rearrangement tasks @cite_7 @cite_14 . The proposed system takes advantage of the compliance properties of the end-effector and leverages collisions with the environment to accurately place objects or topple them. A closely related approach @cite_19 performs within-hand manipulation of a grasped object by pushing it against its environment. | {
"cite_N": [
"@cite_14",
"@cite_33",
"@cite_7",
"@cite_1",
"@cite_19"
],
"mid": [
"",
"1989021449",
"1522963174",
"143499627",
"1198461769"
],
"abstract": [
"",
"Robotic manipulation systems suffer from two main problems in unstructured human environments: uncertainty and clutter. We introduce a planning framework addressing these two issues. The framework plans rearrangement of clutter using non-prehensile actions, such as pushing. Pushing actions are also used to manipulate object pose uncertainty. The framework uses an action library that is derived analytically from the mechanics of pushing and is provably conservative. The framework reduces the problem to one of combinatorial search, and demonstrates planning times on the order of seconds. With the extra functionality, our planner succeeds where traditional grasp planners fail, and works under high uncertainty by utilizing the funneling effect of pushing. We demonstrate our results with experiments in simulation and on HERB, a robotic platform developed at the Personal Robotics Lab at Carnegie Mellon University.",
"We present a randomized kinodynamic planner that solves rearrangement planning problems. We embed a physics model into the planner to allow reasoning about interaction with objects in the environment. By carefully selecting this model, we are able to reduce our state and action space, gaining tractability in the search. The result is a planner capable of generating trajectories for full arm manipulation and simultaneous object interaction. We demonstrate the ability to solve more rearrangement by pushing tasks than existing primitive based solutions. Finally, we show the plans we generate are feasible for execution on a real robot.",
"Humans use a remarkable set of strategies to manipulate objects in clutter. We pick up, push, slide, and sweep with our hands and arms to rearrange clutter surrounding our primary task. But our robots treat the world like the Tower of Hanoi — moving with pick-and-place actions and fearful to interact with it with anything but rigid grasps. This produces inefficient plans and is often inapplicable with heavy, large, or otherwise ungraspable objects. We introduce a framework for planning in clutter that uses a library of actions inspired by human strategies. The action library is derived analytically from the mechanics of pushing and is provably conservative. The framework reduces the problem to one of combinatorial search, and demonstrates planning times on the order of seconds. With the extra functionality, our planner succeeds where traditional grasp planners fail, and works under high uncertainty by utilizing the funneling effect of pushing. We demonstrate our results with experiments in simulation and on HERB, a robotic platform developed at the Personal Robotics Lab at Carnegie Mellon University.",
"This paper explores the manipulation of a grasped object by pushing it against its environment. Relying on precise arm motions and detailed models of frictional contact, prehensile pushing enables dexterous manipulation with simple manipulators, such as those currently available in industrial settings, and those likely affordable by service and field robots."
]
} |
1903.00862 | 2920248155 | Network motifs are patterns of over-represented node interactions in a network which have been previously used as building blocks to understand various aspects of the social networks. In this paper, we use motif patterns to characterize the information diffusion process in social networks. We study the lifecycle of information cascades to understand what leads to saturation of growth in terms of cascade reshares, thereby resulting in expiration, an event we call "diffusion inhibition". In an attempt to understand what causes inhibition, we use motifs to dissect the network obtained from information cascades coupled with traces of historical diffusion or social network links. Our main results follow from experiments on a dataset of cascades from the Weibo platform and the Flixster movie ratings. We observe the temporal counts of 5-node undirected motifs from the cascade temporal networks leading to the inhibition stage. Empirical evidence from the analysis leads us to conclude the following about stages preceding inhibition: (1) individuals tend to adopt information more from users they have known in the past through social networks or previous interactions, thereby creating patterns containing triads more frequently than acyclic patterns with linear chains and (2) users need multiple exposures or rounds of social reinforcement for them to adopt a piece of information, and as a result information starts spreading slowly, thereby leading to the death of the cascade. Following these observations, we use motif based features to predict the edge cardinality of the network exhibited at the time of inhibition. We test features of motif patterns by using regression models for both individual patterns and their combination, and we find that motifs as features are better predictors of the future network organization than individual node centralities. 
| In this paper, we use network features in the stages preceding inhibition to predict the attributes of the inhibition network structure of cascades. Such problems of structure prediction have been a subject of research mainly from a diffusion network inference perspective @cite_39 . Topological features of the network structure have been used in @cite_4 to predict the future links of the network. In this paper, we adopt motif based network features to observe whether the motifs' appearances are indicators of how the nodes would organize themselves to form the network during inhibition. | {
"cite_N": [
"@cite_4",
"@cite_39"
],
"mid": [
"2542727820",
"2164900957"
],
"abstract": [
"Online social networking sites have become increasingly popular over the last few years. As a result, new interdisciplinary research directions have emerged in which social network analysis methods are applied to networks containing hundreds of millions of users. Unfortunately, links between individuals may be missing due to imperfect acquirement processes or because they are not yet reflected in the online network (i.e., friends in the real world did not form a virtual connection.) Existing link prediction techniques lack the scalability required for full application on a continuously growing social network which may be adding everyday users with thousands of connections. The primary bottleneck in link prediction techniques is extracting structural features required for classifying links. In this paper we propose a set of simple, easy-to-compute structural features that can be analyzed to identify missing links. We show that a machine learning classifier trained using the proposed simple structural features can successfully identify missing links even when applied to a hard problem of classifying links between individuals who have at least one common friend. A new friends measure that we developed is shown to be a good predictor for missing links and an evaluation experiment was performed on five large social networks datasets: Facebook, Flickr, YouTube, Academia and TheMarker. Our methods can provide social network site operators with the capability of helping users to find known, offline contacts and to discover new friends online. They may also be used for exposing hidden links in an online social network.",
"Time plays an essential role in the diffusion of information, influence and disease over networks. In many cases we only observe when a node copies information, makes a decision or becomes infected – but the connectivity, transmission rates between nodes and transmission sources are unknown. Inferring the underlying dynamics is of outstanding interest since it enables forecasting, influencing and retarding infections, broadly construed. To this end, we model diffusion processes as discrete networks of continuous temporal processes occurring at different rates. Given cascade data – observed infection times of nodes – we infer the edges of the global diffusion network and estimate the transmission rates of each edge that best explain the observed data. The optimization problem is convex. The model naturally (without heuristics) imposes sparse solutions and requires no parameter tuning. The problem decouples into a collection of independent smaller problems, thus scaling easily to networks on the order of hundreds of thousands of nodes. Experiments on real and synthetic data show that our algorithm both recovers the edges of diffusion networks and accurately estimates their transmission rates from cascade data."
]
} |
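The "simple, easy-to-compute structural features" for link prediction mentioned in the @cite_4 abstract above typically include neighborhood-overlap scores. A sketch of two classic ones (common neighbors and the Jaccard coefficient), assuming the graph is stored as a dict mapping each node to its set of neighbors; this illustrates the class of features, not that paper's exact feature set:

```python
def common_neighbors(adj, u, v):
    """Number of neighbors shared by nodes u and v."""
    return len(adj[u] & adj[v])

def jaccard_coefficient(adj, u, v):
    """Shared neighbors normalized by the size of the combined neighborhood."""
    union = adj[u] | adj[v]
    return len(adj[u] & adj[v]) / len(union) if union else 0.0
```

A candidate (u, v) pair scoring high on such features is predicted to be a missing or future link.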
1903.00847 | 2969040309 | In this paper, we present an online two-level vehicle trajectory prediction framework for urban autonomous driving where there are complex contextual factors, such as lane geometries, road constructions, traffic regulations and moving agents. Our method combines high-level policy anticipation with low-level context reasoning. We leverage a long short-term memory (LSTM) network to anticipate the vehicle’s driving policy (e.g., forward, yield, turn left, turn right, etc.) using its sequential history observations. The policy is then used to guide a low-level optimization-based context reasoning process. We show that it is essential to incorporate the prior policy anticipation due to the multimodal nature of the future trajectory. Moreover, contrary to existing regression-based trajectory prediction methods, our optimization-based reasoning process can cope with complex contextual factors. The final output of the two-level reasoning process is a continuous trajectory that automatically adapts to different traffic configurations and accurately predicts future vehicle motions. The performance of the proposed framework is analyzed and validated in an emerging autonomous driving simulation platform (CARLA). | The problem of vehicle trajectory prediction has been actively studied in the literature. As concluded in @cite_11 , there are three levels of prediction models, namely, physics-based, maneuver-based and interaction-aware motion models. Physics-based motion models use dynamic and kinematic vehicle models to propagate future states @cite_4 @cite_27 . However, the prediction results only hold for the very short-term (less than one second). Maneuver-based motion models are more advanced in the sense that the model may forecast relatively complex maneuvers, such as lane change and turns at intersections, by revealing the maneuver pattern. 
Many of the works on this level present a probabilistic framework to account for the uncertainty and variation of the motion patterns, such as Gaussian processes (GPs) @cite_26 @cite_31 , Monte Carlo sampling @cite_7 , Gaussian mixture models (GMMs) @cite_5 and hidden Markov models @cite_28 . However, they typically assume vehicles are independent entities and fail to model interactions within the context and with other agents. | {
"cite_N": [
"@cite_26",
"@cite_4",
"@cite_7",
"@cite_28",
"@cite_27",
"@cite_5",
"@cite_31",
"@cite_11"
],
"mid": [
"2105242877",
"2048093873",
"2123665586",
"2084246939",
"2123050817",
"2099040926",
"",
"2097545165"
],
"abstract": [
"Maneuver recognition and trajectory prediction of moving vehicles are two important and challenging tasks of advanced driver assistance systems (ADAS) at urban intersections. This paper presents a continuing work to handle these two problems in a consistent framework using non-parametric regression models. We provide a feature normalization scheme and present a strategy for constructing three-dimensional Gaussian process regression models from two-dimensional trajectory patterns. These models can capture spatio-temporal characteristics of traffic situations. Given a new, partially observed and unlabeled trajectory, the maneuver can be recognized online by comparing the likelihoods of the observation data for each individual regression model. Furthermore, we take advantage of our representation for trajectory prediction. Because predicting possible trajectories at urban intersection involves obvious multimodalities and non-linearities, we employ the Monte Carlo method to handle these difficulties. This approach allows the incremental prediction of possible trajectories in situations where unimodal estimators such as Kalman Filters would not work well. The proposed framework is evaluated experimentally in urban intersection scenarios using real-world data.",
"In this paper, we present our approach for collision risk estimation between vehicles. The vehicles are equipped with GPS receivers and communication devices. Our approach consists of using the knowledge shared through these communication devices to predict the trajectories of the surrounding vehicles. Based on these trajectories, we identify the configurations of the collisions between vehicles. The risk is calculated using several indicators that reflect not only the possible collisions but also the dangerousness of these collisions. Our algorithm is tested at crossroads using scenarios involving real prototypes, producing realistic scenarios.",
"This paper presents a threat-assessment algorithm for general road scenes. A road scene consists of a number of objects that are known, and the threat level of the scene is based on their current positions and velocities. The future driver inputs of the surrounding objects are unknown and are modeled as random variables. In order to capture realistic driver behavior, a dynamic driver model is implemented as a probabilistic prior, which computes the likelihood of a potential maneuver. A distribution of possible future scenarios can then be approximated using a Monte Carlo sampling. Based on this distribution, different threat measures can be computed, e.g., probability of collision or time to collision. Since the algorithm is based on the Monte Carlo sampling, it is computationally demanding, and several techniques are presented to increase performance without increasing computational load. The algorithm is intended both for online safety applications in a vehicle and for offline data analysis.",
"The ability to classify driver behavior lays the foundation for more advanced driver assistance systems. In particular, improving safety at intersections has been identified as a high priority due to the large number of intersection-related fatalities. This paper focuses on developing algorithms for estimating driver behavior at road intersections and validating them on real traffic data. It introduces two classes of algorithms that can classify drivers as compliant or violating. They are based on (1) support vector machines and (2) hidden Markov models, which are two very popular machine learning approaches that have been used successfully for classification in multiple disciplines. However, existing work has not explored the benefits of applying these techniques to the problem of driver behavior classification at intersections. The developed algorithms are successfully validated using naturalistic intersection data collected in Christiansburg, VA, through the U.S. Department of Transportation Cooperative Intersection Collision Avoidance System for Violations initiative. Their performances are also compared with those of three traditional methods, and the results show significant improvements with the new algorithms.",
"This paper presents a model-based algorithm that estimates how the driver of a vehicle can either steer, brake, or accelerate to avoid colliding with an arbitrary object. In this algorithm, the motion of the vehicle is described by a linear bicycle model, and the perimeter of the vehicle is represented by a rectangle. The estimated perimeter of the object is described by a polygon that is allowed to change size, shape, position, and orientation at sampled time instances. Potential evasive maneuvers are modeled, parameterized, and approximated such that an analytical expression can be derived to estimate the set of maneuvers that the driver can use to avoid a collision. This set of maneuvers is then assessed to determine if the driver needs immediate assistance to avoid or mitigate an accident. The proposed threat-assessment algorithm is evaluated using authentic data from both real traffic conditions and collision situations on a test track and by using simulations with a detailed vehicle model. The evaluations show that the algorithm outperforms conventional threat-assessment algorithms at rear-end collisions in terms of the timing of autonomous brake activation. This is crucial for increasing the performance of collision-avoidance systems and for decreasing the risk of unnecessary braking. Moreover, the algorithm is computationally efficient and can be used to assist the driver in avoiding or mitigating collisions with all types of road users in all kinds of traffic scenarios.",
"This paper develops a probabilistic anticipation algorithm for dynamic objects observed by an autonomous robot in an urban environment. Predictive Gaussian mixture models are used due to their ability to probabilistically capture continuous and discrete obstacle decisions and behaviors; the predictive system uses the probabilistic output (state estimate and covariance) of a tracking system and map of the environment to compute the probability distribution over future obstacle states for a specified anticipation horizon. A Gaussian splitting method is proposed based on the sigma-point transform and the nonlinear dynamics function, which enables increased accuracy as the number of mixands grows. An approach to caching elements of this optimal splitting method is proposed, in order to enable real-time implementation. Simulation results and evaluations on data from the research community demonstrate that the proposed algorithm can accurately anticipate the probability distributions over future states of nonlinear systems.",
"",
"With the objective to improve road safety, the automotive industry is moving toward more “intelligent” vehicles. One of the major challenges is to detect dangerous situations and react accordingly in order to avoid or mitigate accidents. This requires predicting the likely evolution of the current traffic situation, and assessing how dangerous that future situation might be. This paper is a survey of existing methods for motion prediction and risk assessment for intelligent vehicles. The proposed classification is based on the semantics used to define motion and risk. We point out the tradeoff between model completeness and real-time constraints, and the fact that the choice of a risk assessment method is influenced by the selected motion model."
]
} |
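The physics-based prediction level discussed in the related-work text above propagates a kinematic vehicle state forward and is only trusted for very short horizons. A minimal constant-velocity propagation sketch (an illustrative baseline, not any cited paper's model):

```python
def constant_velocity_predict(x, y, vx, vy, dt, n_steps):
    """Propagate a point-mass vehicle state forward in time.

    Returns the predicted (x, y) positions at each future step; with
    dt = 0.1 s, horizons beyond roughly ten steps exceed the range where
    such models are considered reliable (about one second).
    """
    return [(x + vx * k * dt, y + vy * k * dt) for k in range(1, n_steps + 1)]
```

Maneuver- and interaction-aware models exist precisely because this propagation ignores lane geometry, traffic rules and other agents.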
1903.00847 | 2969040309 | In this paper, we present an online two-level vehicle trajectory prediction framework for urban autonomous driving where there are complex contextual factors, such as lane geometries, road constructions, traffic regulations and moving agents. Our method combines high-level policy anticipation with low-level context reasoning. We leverage a long short-term memory (LSTM) network to anticipate the vehicle’s driving policy (e.g., forward, yield, turn left, turn right, etc.) using its sequential history observations. The policy is then used to guide a low-level optimization-based context reasoning process. We show that it is essential to incorporate the prior policy anticipation due to the multimodal nature of the future trajectory. Moreover, contrary to existing regression-based trajectory prediction methods, our optimization-based reasoning process can cope with complex contextual factors. The final output of the two-level reasoning process is a continuous trajectory that automatically adapts to different traffic configurations and accurately predicts future vehicle motions. The performance of the proposed framework is analyzed and validated in an emerging autonomous driving simulation platform (CARLA). | Interaction-aware models, on the other hand, take the driving context and vehicle interactions into account, and most of them, such as @cite_16 @cite_12 and @cite_29 , are based on dynamic Bayesian networks (DBNs). Though these methods are context-aware, they require refactoring the models when considering a new contextual factor. Our method belongs to the interaction-aware level. Compared to the DBN-based prediction methods, our method is more flexible and can be easily adapted to different traffic configurations. | {
"cite_N": [
"@cite_29",
"@cite_16",
"@cite_12"
],
"mid": [
"2054161738",
"2083651391",
"2065696058"
],
"abstract": [
"Human drivers are endowed with an inborn ability to put themselves in the position of other drivers and reason about their behavior and intended actions. State-of-the-art driving-assistance systems, on the other hand, are generally limited to physical models and ad hoc safety rules. In order to drive safely amongst humans, autonomous vehicles need to develop an understanding of the situation in the form of a high-level description of the state of traffic participants. This paper presents a probabilistic model to estimate the state of vehicles by considering interactions between drivers immersed in traffic. The model is defined within a probabilistic filtering framework; estimation and prediction are carried out with statistical inference techniques. Memory requirements increase linearly with the number of vehicles, and thus, it is possible to scale the model to complex scenarios involving many participants. The approach is validated using real-world data collected by a group of interacting ground vehicles.",
"Estimating and predicting traffic situations over time is an essential capability for sophisticated driver assistance systems and autonomous driving. When longer prediction horizons are needed, e.g., in decision making or motion planning, the uncertainty induced by incomplete environment perception and stochastic situation development over time cannot be neglected without sacrificing robustness and safety. Building consistent probabilistic models of drivers' interactions with the environment, the road network and other traffic participants poses a complex problem. In this paper, we model the decision making process of drivers by building a hierarchical Dynamic Bayesian Model that describes physical relationships as well as the driver's behaviors and plans. This way, the uncertainties in the process on all abstraction levels can be handled in a mathematically consistent way. As drivers' behaviors are difficult to model, we present an approach for learning continuous, non-linear, context-dependent models for the behavior of traffic participants. We propose an Expectation Maximization (EM) approach for learning the models integrated in the DBN from unlabeled observations. Experiments show a significant improvement in estimation and prediction accuracy over standard models which only consider vehicle dynamics. Finally, a novel approach to tactical decision making for autonomous driving is outlined. It is based on a continuous Partially Observable Markov Decision Process (POMDP) that uses the presented model for prediction.",
"This paper proposes a novel approach to risk assessment at road intersections. Unlike most approaches in the literature, it does not rely on trajectory prediction. Instead, dangerous situations are identified by comparing what drivers intend to do with what they are expected to do. Driver intentions and expectations are estimated from the joint motion of the vehicles, taking into account the layout of the intersection and the traffic rules at the intersection. The proposed approach was evaluated in simulation with two vehicles involved in typical collision scenarios. An analysis of the collision prediction horizon allows to characterize the efficiency of the approach in different situations, as well as the potential of different strategies to avoid an accident after a dangerous situation is detected."
]
} |
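The DBN-based interaction-aware models discussed above maintain a belief over discrete driver intentions that is updated as observations arrive. A one-step discrete Bayesian measurement update sketch — the intention labels and likelihood values are illustrative assumptions, not any cited model's state space:

```python
def bayes_intent_update(prior, likelihoods):
    """One measurement update of a discrete belief over driver intentions.

    `prior` and `likelihoods` map each intention label to a probability /
    observation likelihood; the returned posterior is renormalized.
    """
    unnormalized = {i: prior[i] * likelihoods[i] for i in prior}
    z = sum(unnormalized.values())
    if z == 0:
        raise ValueError("observation has zero likelihood under all intentions")
    return {i: p / z for i, p in unnormalized.items()}
```

Repeating this update as new observations come in is the core filtering loop that such probabilistic frameworks build on.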
1903.00733 | 2951047514 | Advertising is a primary means for revenue generation for millions of websites and smartphone apps (publishers). Naturally, a fraction of publishers abuse the ad-network to systematically defraud advertisers of their money. Defenses have matured to overcome some forms of click fraud but are inadequate against the threat of organic click fraud attacks. Malware detection systems including honeypots fail to stop click fraud apps; ad-network filters are better but measurement studies have reported that a third of the clicks supplied by ad-networks are fake; collaborations between ad-networks and app stores that bad-list malicious apps work better still, but fail to prevent criminals from writing fraudulent apps which they monetise until they get banned and start over again. This work develops novel inference techniques that can isolate click fraud attacks using their fundamental properties. In the mimicry defence, we leverage the observation that organic click fraud involves the re-use of legitimate clicks. Thus we can isolate fake-clicks by detecting patterns of click-reuse within ad-network clickstreams, with historical behaviour serving as a baseline. Second, in the bait-click defence, we leverage the vantage point of an ad-network to inject a pattern of bait clicks into the user's device, to trigger click fraud apps that are gated on user-behaviour. Our experiments show that the mimicry defence detects around 81% of fake-clicks in stealthy (low rate) attacks with a false-positive rate of 110110 per hundred thousand clicks. Bait-click defence enables further improvements in detection rates of 95% and a reduction in false-positive rates of between 0 and 30 clicks per million, a substantial improvement over current approaches. | Threshold-based approaches detect hotspots of activity between click-malware and publishers. One set of techniques detects traffic hotspots @cite_57 @cite_4 @cite_44 . 
Another technique is to examine publisher-user pairs with above-average click rates @cite_1 . All the techniques in this approach develop a normative baseline of activity and detect malicious behaviour beyond a threshold distance from the baseline. The idea is that fraudsters need to scale their activity to a level where their turnover (from a click fraud campaign) covers their costs and generates a profit. | {
"cite_N": [
"@cite_57",
"@cite_44",
"@cite_1",
"@cite_4"
],
"mid": [
"2132133059",
"2150010379",
"2096697338",
"2099194763"
],
"abstract": [
"Click fraud is jeopardizing the industry of Internet advertising. Internet advertising is crucial for the thriving of the entire Internet, since it allows producers to advertise their products, and hence contributes to the well being of e-commerce. Moreover, advertising supports the intellectual value of the Internet by covering the running expenses of the content publishers' sites. Some publishers are dishonest, and use automation to generate traffic to defraud the advertisers. Similarly, some advertisers automate clicks on the advertisements of their competitors to deplete their competitors ' advertising budgets. In this paper, we describe the advertising network model, and discuss the issue of fraud that is an integral problem in such setting. We propose using online algorithms on aggregate data to accurately and proactively detect automated traffic, preserve surfers' privacy, while not altering the industry model. We provide a complete classification of the hit inflation techniques; and devise stream analysis techniques that detect a variety of fraud attacks. We abstract detecting the fraud attacks of some classes as theoretical stream analysis problems that we bring to the data management research community as open problems. A framework is outlined for deploying the proposed detection algorithms on a generic architecture. We conclude by some successful preliminary findings of our attempt to detect fraud on a real network.",
"Several data management challenges arise in the context of Internet advertising networks, where Internet advertisers pay Internet publishers to display advertisements on their Web sites and drive traffic to the advertisers from surfers' clicks. Although advertisers can target appropriate market segments, the model allows dishonest publishers to defraud the advertisers by simulating fake traffic to their own sites to claim more revenue. This paper addresses the case of publishers launching fraud attacks from numerous machines, which is the most widespread scenario. The difficulty of uncovering these attacks is proportional to the number of machines and resources exploited by the fraudsters. In general, detecting this class of fraud entails solving a new data mining problem, which is finding correlations in multidimensional data. Since the dimensions have large cardinalities, the search space is huge, which has long allowed dishonest publishers to inflate their traffic, and deplete the advertisers' advertising budgets. We devise the approximate SLEUTH algorithms to solve the problem efficiently, and uncover single-publisher frauds. We demonstrate the effectiveness of SLEUTH both analytically and by reporting some of its results on the Fastclick network, where numerous fraudsters were discovered.",
"Click-spam in online advertising, where unethical publishers use malware or trick users into clicking ads, siphons off hundreds of millions of advertiser dollars meant to support free websites and apps. Ad networks today, sadly, rely primarily on security through obscurity to defend against click-spam. In this paper, we present Viceroi, a principled approach to catching click-spam in search ad networks. It is designed based on the intuition that click-spam is a profit-making business that needs to deliver higher return on investment (ROI) for click-spammers than other (ethical) business models to offset the risk of getting caught. Viceroi operates at the ad network where it has visibility into all ad clicks. Working with a large real-world ad network, we find that the simple-yet-general Viceroi approach catches over six very different classes of click-spam attacks (e.g., malware-driven, search-hijacking, arbitrage) without any tuning knobs.",
"Click fraud is jeopardizing the industry of Internet advertising. Internet advertising is crucial for the thriving of the entire Internet, since it allows producers to advertise their products, and hence contributes to the well being of e-commerce. Moreover, advertising supports the intellectual value of the Internet by covering the running expenses of publishing content. Some content publishers are dishonest, and use automation to generate traffic to defraud the advertisers. Similarly, some advertisers automate clicks on the advertisements of their competitors to deplete their competitors' advertising budgets. This paper describes the advertising network model, and focuses on the most sophisticated type of fraud, which involves coalitions among fraudsters. We build on several published theoretical results to devise the Similarity-Seeker algorithm that discovers coalitions made by pairs of fraudsters. We then generalize the solution to coalitions of arbitrary sizes. Before deploying our system on a real network, we conducted comprehensive experiments on data samples for proof of concept. The results were very accurate. We detected several coalitions, formed using various techniques, and spanning numerous sites. This reveals the generality of our model and approach."
]
} |
1903.00733 | 2951047514 | Advertising is a primary means for revenue generation for millions of websites and smartphone apps (publishers). Naturally, a fraction of publishers abuse the ad-network to systematically defraud advertisers of their money. Defenses have matured to overcome some forms of click fraud but are inadequate against the threat of organic click fraud attacks. Malware detection systems including honeypots fail to stop click fraud apps; ad-network filters are better, but measurement studies have reported that a third of the clicks supplied by ad-networks are fake; collaborations between ad-networks and app stores that blacklist malicious apps work better still, but fail to prevent criminals from writing fraudulent apps which they monetise until they get banned and start over again. This work develops novel inference techniques that can isolate click fraud attacks using their fundamental properties. In the mimicry defence, we leverage the observation that organic click fraud involves the re-use of legitimate clicks. Thus we can isolate fake-clicks by detecting patterns of click-reuse within ad-network clickstreams, with historical behaviour serving as a baseline. Second, in the bait-click defence, we leverage the vantage point of an ad-network to inject a pattern of bait clicks into the user's device, to trigger click-fraud apps that are gated on user behaviour. Our experiments show that the mimicry defence detects around 81% of fake-clicks in stealthy (low-rate) attacks with a false-positive rate of 110 per hundred thousand clicks. The bait-click defence enables further improvements in detection rates of 95% and a reduction in false-positive rates to between 0 and 30 clicks per million, a substantial improvement over current approaches. | Several papers have highlighted the importance of the online advertising ecosystem and the threat posed by click fraud @cite_6 @cite_2 .
The importance of keeping ad networks free of fraud is highlighted by the work of @cite_33 , who show that an ad network free of click fraud gains a competitive advantage over rival ad networks and thus attracts more advertisers. Research shows that even the largest advertising platforms are affected by click fraud @cite_29 and are tackling the problem, primarily by employing data mining techniques to distinguish legitimate, fraudulent and bot-generated click events. Clickbots are a leading attack vector for carrying out click fraud; fake clicks have been reported to account for roughly a third of traffic across major ad networks @cite_35 @cite_58 and often originate from malware networks (rent-a-botnet services). | {
"cite_N": [
"@cite_35",
"@cite_33",
"@cite_29",
"@cite_6",
"@cite_2",
"@cite_58"
],
"mid": [
"2099937944",
"",
"1097054",
"2165613200",
"2012286502",
"2025827699"
],
"abstract": [
"Advertising plays a vital role in supporting free websites and smartphone apps. Click-spam, i.e., fraudulent or invalid clicks on online ads where the user has no actual interest in the advertiser's site, results in advertising revenue being misappropriated by click-spammers. While ad networks take active measures to block click-spam today, the effectiveness of these measures is largely unknown. Moreover, advertisers and third parties have no way of independently estimating or defending against click-spam. In this paper, we take the first systematic look at click-spam. We propose the first methodology for advertisers to independently measure click-spam rates on their ads. We also develop an automated methodology for ad networks to proactively detect different simultaneous click-spam attacks. We validate both methodologies using data from major ad networks. We then conduct a large-scale measurement study of click-spam across ten major ad networks and four types of ads. In the process, we identify and perform in-depth analysis on seven ongoing click-spam attacks not blocked by major ad networks at the time of this writing. Our findings highlight the severity of the click-spam problem, especially for mobile ads.",
"",
"Microsoft adCenter is the third largest Search advertising platform in the United States behind Google and Yahoo, and services about 10 of US traffic. At this scale of traffic approximately 1 billion events per hour, amounting to 2.3 billion ad dollars annually, need to be scored to determine if it is fraudulent or bot-generated [32, 37, 41]. In order to accomplish this, adCenter has developed arguably one of the largest data mining systems in the world to score traffic quality, and has employed them successfully over 5 years. The current paper describes the unique challenges posed by data mining at massive scale, the design choices and rationale behind the technologies to address the problem, and shows some examples and some quantitative results on the effectiveness of the system in combating click fraud.",
"Online advertisements (ads) provide a powerful mechanism for advertisers to effectively target Web users. Ads can be customized based on a user's browsing behavior, geographic location, and personal interests. There is currently a multi-billion dollar market for online advertising, which generates the primary revenue for some of the most popular websites on the Internet. In order to meet the immense market demand, and to manage the complex relationships between advertisers and publishers (i.e., the websites hosting the ads), marketplaces known as \"ad exchanges\" are employed. These exchanges allow publishers (sellers of ad space) and advertisers(buyers of this ad space) to dynamically broker traffic through ad networks to efficiently maximize profits for all parties. Unfortunately, the complexities of these systems invite a considerable amount of abuse from cybercriminals, who profit at the expense of the advertisers. In this paper, we present a detailed view of how one of the largest ad exchanges operates and the associated security issues from the vantage point of a member ad network. More specifically, we analyzed a dataset containing transactions for ingress and egress ad traffic from this ad network. In addition, we examined information collected from a command-and-control server used to operate a botnet that is leveraged to perpetrate ad fraud against the same ad exchange.",
"Online advertising drives the economy of the World Wide Web. Modern websites of any size and popularity include advertisements to monetize visits from their users. To this end, they assign an area of their web page to an advertising company (so called ad exchange) that will use it to display promotional content. By doing this, the website owner implicitly trusts that the advertising company will offer legitimate content and it will not put the site's visitors at risk of falling victims of malware campaigns and other scams. In this paper, we perform the first large-scale study of the safety of the advertisements that are encountered by the users on the Web. In particular, we analyze to what extent users are exposed to malicious content through advertisements, and investigate what are the sources of this malicious content. Additionally, we show that some ad exchanges are more prone to serving malicious advertisements than others, probably due to their deficient filtering mechanisms. The observations that we make in this paper shed light on a little studied, yet important, aspect of advertisement networks, and can help both advertisement networks and website owners in securing their web pages and in keeping their visitors safe.",
"Online advertising is currently the richest source of revenue for many Internet giants. The increased number of online businesses, specialized websites and modern profiling techniques have all contributed to an explosion of the income of ad brokers from online advertising. The single biggest threat to this growth, is however, click-fraud. Trained botnets and individuals are hired by click-fraud specialists in order to maximize the revenue of certain users from the ads they publish on their websites, or to launch an attack between competing businesses. In this note we wish to raise the awareness of the networking research community on potential research areas within the online advertising field. As an example strategy, we present Bluff ads; a class of ads that join forces in order to increase the effort level for click-fraud spammers. Bluff ads are either targeted ads, with irrelevant display text, or highly relevant display text, with irrelevant targeting information. They act as a litmus test for the legitimacy of the individual clicking on the ads. Together with standard threshold-based methods, fake ads help to decrease click-fraud levels."
]
} |
1903.00950 | 2949073003 | Games with continuous strategy sets arise in several machine learning problems (e.g. adversarial learning). For such games, simple no-regret learning algorithms exist in several cases and ensure convergence to coarse correlated equilibria (CCE). The efficiency of such equilibria with respect to a social function, however, is not well understood. In this paper, we define the class of valid utility games with continuous strategies and provide efficiency bounds for their CCEs. Our bounds rely on the social function being a monotone DR-submodular function. We further refine our bounds based on the curvature of the social function. Furthermore, we extend our efficiency bounds to a class of non-submodular functions that satisfy approximate submodularity properties. Finally, we show that valid utility games with continuous strategies can be designed to maximize monotone DR-submodular functions subject to disjoint constraints with approximation guarantees. The approximation guarantees we derive are based on the efficiency of the equilibria of such games and can improve the existing ones in the literature. We illustrate and validate our results on a budget allocation game and a sensor coverage problem. | Although continuous games are finding increasing applicability, from a theoretical viewpoint they are less well understood than games with finitely many strategies. Recently, no-regret learning algorithms @cite_20 have been proposed for continuous games under different set-ups @cite_5 @cite_2 @cite_8 . As in finite games @cite_20 , these no-regret dynamics converge to coarse correlated equilibria (CCEs) @cite_2 @cite_19 , the weakest class of equilibria, which includes pure Nash equilibria, mixed Nash equilibria and correlated equilibria. However, CCEs may be highly suboptimal for the social function. A central open question is to understand the (in)efficiency of such equilibria.
Unlike the finite case, where bounds on this inefficiency are known for a large variety of games @cite_13 , for continuous games this question is not well understood. | {
"cite_N": [
"@cite_8",
"@cite_19",
"@cite_2",
"@cite_5",
"@cite_13",
"@cite_20"
],
"mid": [
"2783096200",
"2962979365",
"2102794752",
"",
"2294025081",
"1570963478"
],
"abstract": [
"This paper examines the convergence of no-regret learning in games with continuous action sets. For concreteness, we focus on learning via “dual averaging”, a widely used class of no-regret learning schemes where players take small steps along their individual payoff gradients and then “mirror” the output back to their action sets. In terms of feedback, we assume that players can only estimate their payoff gradients up to a zero-mean error with bounded variance. To study the convergence of the induced sequence of play, we introduce the notion of variational stability, and we show that stable equilibria are locally attracting with high probability whereas globally stable equilibria are globally attracting with probability 1. We also discuss some applications to mixed-strategy learning in finite games, and we provide explicit estimates of the method’s convergence speed.",
"The cornerstone underpinning deep learning is the guarantee that gradient descent on an objective converges to local minima. Unfortunately, this guarantee fails in settings, such as generative adversarial nets, where there are multiple interacting losses. The behavior of gradient-based methods in games is not well understood – and is becoming increasingly important as adversarial and multiobjective architectures proliferate. In this paper, we develop new techniques to understand and control the dynamics in general games. The key result is to decompose the second-order dynamics into two components. The first is related to potential games, which reduce to gradient descent on an implicit function; the second relates to Hamiltonian games, a new class of games that obey a conservation law, akin to conservation laws in classical mechanical systems. The decomposition motivates Symplectic Gradient Adjustment (SGA), a new algorithm for finding stable fixed points in general games. Basic experiments show SGA is competitive with recently proposed algorithms for finding stable fixed points in GANs – whilst at the same time being applicable to – and having guarantees in – much more general games.",
"Hart and Schmeidler's extension of correlated equilibrium to games with infinite sets of strategies is studied. General properties of the set of correlated equilibria are described. It is shown that, just like for finite games, if all players play according to an appropriate regret-minimizing strategy then the empirical frequencies of play converge to the set of correlated equilibria whenever the strategy sets are convex and compact.",
"",
"The price of anarchy, defined as the ratio of the worst-case objective function value of a Nash equilibrium of a game and that of an optimal outcome, quantifies the inefficiency of selfish behavior. Remarkably good bounds on this measure are known for a wide range of application domains. However, such bounds are meaningful only if a game's participants successfully reach a Nash equilibrium. This drawback motivates inefficiency bounds that apply more generally to weaker notions of equilibria, such as mixed Nash equilibria and correlated equilibria, and to sequences of outcomes generated by natural experimentation strategies, such as successive best responses and simultaneous regret-minimization. We establish a general and fundamental connection between the price of anarchy and its seemingly more general relatives. First, we identify a “canonical sufficient condition” for an upper bound on the price of anarchy of pure Nash equilibria, which we call a smoothness argument. Second, we prove an “extension theorem”: every bound on the price of anarchy that is derived via a smoothness argument extends automatically, with no quantitative degradation in the bound, to mixed Nash equilibria, correlated equilibria, and the average objective function value of every outcome sequence generated by no-regret learners. Smoothness arguments also have automatic implications for the inefficiency of approximate equilibria, for bicriteria bounds, and, under additional assumptions, for polynomial-length best-response sequences. Third, we prove that in congestion games, smoothness arguments are “complete” in a proof-theoretic sense: despite their automatic generality, they are guaranteed to produce optimal worst-case upper bounds on the price of anarchy.",
"1. Introduction 2. Prediction with expert advice 3. Tight bounds for specific losses 4. Randomized prediction 5. Efficient forecasters for large classes of experts 6. Prediction with limited feedback 7. Prediction and playing games 8. Absolute loss 9. Logarithmic loss 10. Sequential investment 11. Linear pattern recognition 12. Linear classification 13. Appendix."
]
} |
1903.00950 | 2949073003 | Games with continuous strategy sets arise in several machine learning problems (e.g. adversarial learning). For such games, simple no-regret learning algorithms exist in several cases and ensure convergence to coarse correlated equilibria (CCE). The efficiency of such equilibria with respect to a social function, however, is not well understood. In this paper, we define the class of valid utility games with continuous strategies and provide efficiency bounds for their CCEs. Our bounds rely on the social function being a monotone DR-submodular function. We further refine our bounds based on the curvature of the social function. Furthermore, we extend our efficiency bounds to a class of non-submodular functions that satisfy approximate submodularity properties. Finally, we show that valid utility games with continuous strategies can be designed to maximize monotone DR-submodular functions subject to disjoint constraints with approximation guarantees. The approximation guarantees we derive are based on the efficiency of the equilibria of such games and can improve the existing ones in the literature. We illustrate and validate our results on a budget allocation game and a sensor coverage problem. | To measure the inefficiency of CCEs arising from no-regret dynamics, @cite_16 introduces the price of total anarchy. This notion generalizes the well-established price of anarchy (PoA) of @cite_23 , which instead measures the inefficiency of the worst pure Nash equilibrium of the game. There are numerous reasons why players may not reach a pure Nash equilibrium @cite_16 @cite_15 @cite_13 . In contrast, regret minimization can be done by each player via simple and efficient algorithms @cite_16 . Recently, @cite_13 generalizes the price of total anarchy by defining the robust PoA, which measures the inefficiency of any CCE (including the ones arising from regret minimization), and provides examples of games for which it can be bounded. | {
"cite_N": [
"@cite_15",
"@cite_16",
"@cite_13",
"@cite_23"
],
"mid": [
"2155114889",
"2034184818",
"2294025081",
""
],
"abstract": [
"We present several new characterizations of correlated equilibria in games with continuous utility functions. These have the advantage of being more computationally and analytically tractable than the standard definition in terms of departure functions. We use these characterizations to construct effective algorithms for approximating a single correlated equilibrium or the entire set of correlated equilibria of a game with polynomial utility functions.",
"We propose weakening the assumption made when studying the price of anarchy: Rather than assume that self-interested players will play according to a Nash equilibrium (which may even be computationally hard to find), we assume only that selfish players play so as to minimize their own regret. Regret minimization can be done via simple, efficient algorithms even in many settings where the number of action choices for each player is exponential in the natural parameters of the problem. We prove that despite our weakened assumptions, in several broad classes of games, this \"price of total anarchy\" matches the Nash price of anarchy, even though play may never converge to Nash equilibrium. In contrast to the price of anarchy and the recently introduced price of sinking, which require all players to behave in a prescribed manner, we show that the price of total anarchy is in many cases resilient to the presence of Byzantine players, about whom we make no assumptions. Finally, because the price of total anarchy is an upper bound on the price of anarchy even in mixed strategies, for some games our results yield as corollaries previously unknown bounds on the price of anarchy in mixed strategies.",
"The price of anarchy, defined as the ratio of the worst-case objective function value of a Nash equilibrium of a game and that of an optimal outcome, quantifies the inefficiency of selfish behavior. Remarkably good bounds on this measure are known for a wide range of application domains. However, such bounds are meaningful only if a game's participants successfully reach a Nash equilibrium. This drawback motivates inefficiency bounds that apply more generally to weaker notions of equilibria, such as mixed Nash equilibria and correlated equilibria, and to sequences of outcomes generated by natural experimentation strategies, such as successive best responses and simultaneous regret-minimization. We establish a general and fundamental connection between the price of anarchy and its seemingly more general relatives. First, we identify a “canonical sufficient condition” for an upper bound on the price of anarchy of pure Nash equilibria, which we call a smoothness argument. Second, we prove an “extension theorem”: every bound on the price of anarchy that is derived via a smoothness argument extends automatically, with no quantitative degradation in the bound, to mixed Nash equilibria, correlated equilibria, and the average objective function value of every outcome sequence generated by no-regret learners. Smoothness arguments also have automatic implications for the inefficiency of approximate equilibria, for bicriteria bounds, and, under additional assumptions, for polynomial-length best-response sequences. Third, we prove that in congestion games, smoothness arguments are “complete” in a proof-theoretic sense: despite their automatic generality, they are guaranteed to produce optimal worst-case upper bounds on the price of anarchy.",
""
]
} |
1903.00950 | 2949073003 | Games with continuous strategy sets arise in several machine learning problems (e.g. adversarial learning). For such games, simple no-regret learning algorithms exist in several cases and ensure convergence to coarse correlated equilibria (CCE). The efficiency of such equilibria with respect to a social function, however, is not well understood. In this paper, we define the class of valid utility games with continuous strategies and provide efficiency bounds for their CCEs. Our bounds rely on the social function being a monotone DR-submodular function. We further refine our bounds based on the curvature of the social function. Furthermore, we extend our efficiency bounds to a class of non-submodular functions that satisfy approximate submodularity properties. Finally, we show that valid utility games with continuous strategies can be designed to maximize monotone DR-submodular functions subject to disjoint constraints with approximation guarantees. The approximation guarantees we derive are based on the efficiency of the equilibria of such games and can improve the existing ones in the literature. We illustrate and validate our results on a budget allocation game and a sensor coverage problem. | In the context of distributed optimization, where a game is designed to optimize a given objective @cite_12 , bounds on the robust price of anarchy find a similar importance. In this setting, a distributed scheme to optimize the social function is to let each player implement a no-regret learning algorithm based only on its payoff information. A bound on the robust PoA provides an approximation guarantee to such optimization scheme. | {
"cite_N": [
"@cite_12"
],
"mid": [
"2111140566"
],
"abstract": [
"Game-theoretic tools are becoming a popular design choice for distributed resource allocation algorithms. A central component of this design choice is the assignment of utility functions to the individual agents. The goal is to assign each agent an admissible utility function such that the resulting game possesses a host of desirable properties, including scalability, tractability, and existence and efficiency of pure Nash equilibria. In this paper we formally study this question of utility design on a class of games termed distributed welfare games. We identify several utility design methodologies that guarantee desirable game properties irrespective of the specific application domain. Lastly, we illustrate the results in this paper on two commonly studied classes of resource allocation problems: “coverage” problems and “coloring” problems."
]
} |
1903.00950 | 2949073003 | Games with continuous strategy sets arise in several machine learning problems (e.g. adversarial learning). For such games, simple no-regret learning algorithms exist in several cases and ensure convergence to coarse correlated equilibria (CCE). The efficiency of such equilibria with respect to a social function, however, is not well understood. In this paper, we define the class of valid utility games with continuous strategies and provide efficiency bounds for their CCEs. Our bounds rely on the social function being a monotone DR-submodular function. We further refine our bounds based on the curvature of the social function. Furthermore, we extend our efficiency bounds to a class of non-submodular functions that satisfy approximate submodularity properties. Finally, we show that valid utility games with continuous strategies can be designed to maximize monotone DR-submodular functions subject to disjoint constraints with approximation guarantees. The approximation guarantees we derive are based on the efficiency of the equilibria of such games and can improve the existing ones in the literature. We illustrate and validate our results on a budget allocation game and a sensor coverage problem. | Bounds on the robust PoA provided by @cite_13 mostly concern games with finitely many actions. A class of such games are the valid utility games introduced by @cite_0 . In such games, the social function is a set function and, using this property, @cite_13 showed that the PoA bound derived in @cite_0 indeed extends to all CCEs of the game. This class of games covers numerous applications including market sharing, facility location, and routing problems, and was used by @cite_12 for distributed optimization. Strategies consist of selecting subsets of a ground set, and can be equivalently represented as binary decisions. Recently, the authors of @cite_11 extend the notion of valid utility games to integer domains.
By leveraging properties of submodular functions over integer lattices, they show that the robust PoA bound of @cite_13 extends to the integer case. The notion of submodularity has recently been extended to continuous domains, mainly in order to design efficient optimization algorithms @cite_21 @cite_27 @cite_7 . To the best of the authors' knowledge, however, this notion has not been utilized for analyzing the efficiency of equilibria of games over continuous domains. | {
"cite_N": [
"@cite_7",
"@cite_21",
"@cite_0",
"@cite_27",
"@cite_13",
"@cite_12",
"@cite_11"
],
"mid": [
"2963116072",
"2963719189",
"2159759087",
"2962866337",
"2294025081",
"2111140566",
"1926415857"
],
"abstract": [
"In this paper, we study the problem of maximizing continuous submodular functions that naturally arise in many learning applications such as those involving utility functions in active learning and sensing, matrix approximations and network inference. Despite the apparent lack of convexity in such functions, we prove that stochastic projected gradient methods can provide strong approximation guarantees for maximizing continuous submodular functions with convex constraints. More specifically, we prove that for monotone continuous DR-submodular functions, all fixed points of projected gradient ascent provide a factor @math approximation to the global maxima. We also study stochastic gradient methods and show that after @math iterations these methods reach solutions which achieve in expectation objective values exceeding @math . An immediate application of our results is to maximize submodular functions that are defined stochastically, i.e. the submodular function is defined as an expectation over a family of submodular functions with an unknown distribution. We will show how stochastic gradient methods are naturally well-suited for this setting, leading to a factor @math approximation when the function is monotone. In particular, it allows us to approximately maximize discrete, monotone submodular optimization problems via projected gradient ascent on a continuous relaxation, directly connecting the discrete and continuous domains. Finally, experiments on real data demonstrate that our projected gradient methods consistently achieve the best utility compared to other continuous baselines while remaining competitive in terms of computational effort.",
"Submodular set-functions have many applications in combinatorial optimization, as they can be minimized and approximately maximized in polynomial time. A key element in many of the algorithms and analyses is the possibility of extending the submodular set-function to a convex function, which opens up tools from convex optimization. Submodularity goes beyond set-functions and has naturally been considered for problems with multiple labels or for functions defined on continuous domains, where it corresponds essentially to cross second-derivatives being nonpositive. In this paper, we show that most results relating submodularity and convexity for set-functions can be extended to all submodular functions. In particular, (a) we naturally define a continuous extension in a set of probability measures, (b) show that the extension is convex if and only if the original function is submodular, (c) prove that the problem of minimizing a submodular function is equivalent to a typically non-smooth convex optimization problem, and (d) propose another convex optimization problem with better computational properties (e.g., a smooth dual problem). Most of these extensions from the set-function situation are obtained by drawing links with the theory of multi-marginal optimal transport, which provides also a new interpretation of existing results for set-functions. We then provide practical algorithms to minimize generic submodular functions on discrete domains, with associated convergence rates.",
"We consider the following class of problems. The value of an outcome to a society is measured via a submodular utility function (submodularity has a natural economic interpretation: decreasing marginal utility). Decisions, however, are controlled by non-cooperative agents who seek to maximise their own private utility. We present, under basic assumptions, guarantees on the social performance of Nash equilibria. For submodular utility functions, any Nash equilibrium gives an expected social utility within a factor 2 of optimal, subject to a function-dependent additive term. For non-decreasing, submodular utility functions, any Nash equilibrium gives an expected social utility within a factor 1+ spl delta of optimal, where 0 spl les spl delta spl les 1 is a number based upon discrete curvature of the function. A condition under which all sets of social and private utility functions induce pure strategy Nash equilibria is presented. The case in which agents themselves make use of approximation algorithms in decision making is discussed and performance guarantees given. Finally we present specific problems that fall into our framework. These include competitive versions of the facility location problem and k-median problem, a maximisation version of the traffic routing problem studied by Roughgarden and Tardos (2000), and multiple-item auctions.",
"",
"The price of anarchy, defined as the ratio of the worst-case objective function value of a Nash equilibrium of a game and that of an optimal outcome, quantifies the inefficiency of selfish behavior. Remarkably good bounds on this measure are known for a wide range of application domains. However, such bounds are meaningful only if a game's participants successfully reach a Nash equilibrium. This drawback motivates inefficiency bounds that apply more generally to weaker notions of equilibria, such as mixed Nash equilibria and correlated equilibria, and to sequences of outcomes generated by natural experimentation strategies, such as successive best responses and simultaneous regret-minimization. We establish a general and fundamental connection between the price of anarchy and its seemingly more general relatives. First, we identify a “canonical sufficient condition” for an upper bound on the price of anarchy of pure Nash equilibria, which we call a smoothness argument. Second, we prove an “extension theorem”: every bound on the price of anarchy that is derived via a smoothness argument extends automatically, with no quantitative degradation in the bound, to mixed Nash equilibria, correlated equilibria, and the average objective function value of every outcome sequence generated by no-regret learners. Smoothness arguments also have automatic implications for the inefficiency of approximate equilibria, for bicriteria bounds, and, under additional assumptions, for polynomial-length best-response sequences. Third, we prove that in congestion games, smoothness arguments are “complete” in a proof-theoretic sense: despite their automatic generality, they are guaranteed to produce optimal worst-case upper bounds on the price of anarchy.",
"Game-theoretic tools are becoming a popular design choice for distributed resource allocation algorithms. A central component of this design choice is the assignment of utility functions to the individual agents. The goal is to assign each agent an admissible utility function such that the resulting game possesses a host of desirable properties, including scalability, tractability, and existence and efficiency of pure Nash equilibria. In this paper we formally study this question of utility design on a class of games termed distributed welfare games. We identify several utility design methodologies that guarantee desirable game properties irrespective of the specific application domain. Lastly, we illustrate the results in this paper on two commonly studied classes of resource allocation problems: “coverage” problems and “coloring” problems.",
"In marketing planning, advertisers seek to maximize the number of customers by allocating given budgets to each media channel effectively. The budget allocation problem with a bipartite influence model captures this scenario; however, the model is problematic because it assumes there is only one advertiser in the market. In reality, there are many advertisers which are in conflict of advertisement; thus we must extend the model for such a case. By extending the budget allocation problem with a bipartite influence model, we propose a game-theoretic model problem that considers many advertisers. By simulating our model, we can analyze the behavior of a media channel market, e.g., we can estimate which media channels are allocated by an advertiser, and which customers are influenced by an advertiser. Our model has many attractive features. First, our model is a potential game; therefore, it has a pure Nash equilibrium. Second, any Nash equilibrium of our game has 2-optimal social utility, i.e., the price of anarchy is 2. Finally, the proposed model can be simulated very efficiently; thus it can be used to analyze large markets."
]
} |