aid: string, length 9 to 15
mid: string, length 7 to 10
abstract: string, length 78 to 2.56k
related_work: string, length 92 to 1.77k
ref_abstract: dict
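The field summary above can be checked mechanically. Below is a minimal sketch assuming each record is available as a Python dict with these keys; the `validate_record` helper, the sample record, and the numeric bounds (reading 2.56k as 2560 and 1.77k as 1770) are illustrative assumptions, not part of any published loader:

```python
# Sketch: validate one record of this dump against the field summary above.
# BOUNDS, validate_record, and the sample record are illustrative assumptions.

BOUNDS = {
    "aid": (9, 15),          # arXiv id, e.g. "1903.06814"
    "mid": (7, 10),          # MAG id, e.g. "2922139758"
    "abstract": (78, 2560),  # "2.56k" read as 2560 characters
    "related_work": (92, 1770),
}

def validate_record(rec):
    """Return the names of string fields whose length violates the bounds,
    plus "ref_abstract" if that field is not a dict."""
    bad = [f for f, (lo, hi) in BOUNDS.items()
           if not (lo <= len(rec.get(f, "")) <= hi)]
    if not isinstance(rec.get("ref_abstract"), dict):
        bad.append("ref_abstract")
    return bad

sample = {
    "aid": "1903.06814",
    "mid": "2922139758",
    "abstract": "x" * 100,
    "related_work": "y" * 200,
    "ref_abstract": {"cite_N": ["@cite_5"], "mid": ["2565078525"], "abstract": ["..."]},
}
print(validate_record(sample))  # an empty list means the record conforms
```

An empty result means every field satisfies the summary; a non-empty list names the offending fields, which is handy when filtering a freshly parsed dump.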
1903.06814
2922139758
In order to operate autonomously, a robot should explore the environment and build a model of each of the surrounding objects. A common approach is to carefully scan the whole workspace. This is time-consuming. It is also often impossible to reach all the viewpoints required to acquire full knowledge about the environment. Humans can perform shape completion of occluded objects by relying on past experience. Therefore, we propose a method that generates images of an object from various viewpoints using a single input RGB image. A deep neural network is trained to imagine the object appearance from many viewpoints. We present the whole pipeline, which takes a single RGB image as input and returns a sequence of RGB and depth images of the object. The method utilizes a CNN-based object detector to extract the object from the natural scene. Then, the proposed network generates a set of RGB and depth images. We show the results both on a synthetic dataset and on real images.
Many objects encountered in manipulation tasks are symmetric. The complete shape of a partially observed object can be recovered by finding its symmetry planes and taking the scene context into account @cite_5 . A similar approach to object shape prediction, based on the symmetry plane, is proposed by @cite_8 . In contrast, @cite_7 uses a CNN to complete partial 3D shapes. The network operates on a 3D voxel map and generates a high-resolution voxel grid.
{ "cite_N": [ "@cite_5", "@cite_7", "@cite_8" ], "mid": [ "2565078525", "", "2156583822" ], "abstract": [ "Object shape information is essential for robot manipulation tasks, in particular for grasp planning and collision-free motion planning. But in general a complete object model is not available, in particular when dealing with unknown objects. We propose a method for completing shapes that are only partially known, which is a common situation when a robot perceives a new object only from one direction. Our approach is based on the assumption that most objects used in service robotic setups have symmetries. We determine and rate symmetry plane candidates to estimate the hidden parts of the object. By finding possible supporting planes based on its immediate neighborhood, the search space for symmetry planes is restricted, and the bottom part of the object is added. Gaps along the sides in the direction of the view axis are closed by linear interpolation. We evaluate our approach with real-world experiments using the YCB object and model set [1].", "", "We consider the problem of grasp and manipulation planning when the state of the world is only partially observable. Specifically, we address the task of picking up unknown objects from a table top. The proposed approach to object shape prediction aims at closing the knowledge gaps in the robot's understanding of the world. A completed state estimate of the environment can then be provided to a simulator in which stable grasps and collision-free movements are planned." ] }
1903.06814
2922139758
In order to operate autonomously, a robot should explore the environment and build a model of each of the surrounding objects. A common approach is to carefully scan the whole workspace. This is time-consuming. It is also often impossible to reach all the viewpoints required to acquire full knowledge about the environment. Humans can perform shape completion of occluded objects by relying on past experience. Therefore, we propose a method that generates images of an object from various viewpoints using a single input RGB image. A deep neural network is trained to imagine the object appearance from many viewpoints. We present the whole pipeline, which takes a single RGB image as input and returns a sequence of RGB and depth images of the object. The method utilizes a CNN-based object detector to extract the object from the natural scene. Then, the proposed network generates a set of RGB and depth images. We show the results both on a synthetic dataset and on real images.
Recently, CNNs have proven effective at rendering a whole 3D scene from a few images @cite_4 , image synthesis from text @cite_1 , semantic image synthesis @cite_2 , new-view image synthesis from sets of real-world, natural imagery @cite_6 , and image completion @cite_10 . However, we are the first to show that a sequence of 2D images of an object from a given set of viewpoints can be generated from a single image using only a CNN.
{ "cite_N": [ "@cite_4", "@cite_1", "@cite_6", "@cite_2", "@cite_10" ], "mid": [ "2808492412", "2405756170", "1776042733", "2963522749", "2953318193" ], "abstract": [ "Scene representation—the process of converting visual sensory data into concise descriptions—is a requirement for intelligent behavior. Recent work has shown that neural networks excel at this task when provided with large, labeled datasets. However, removing the reliance on human labeling remains an important open problem. To this end, we introduce the Generative Query Network (GQN), a framework within which machines learn to represent scenes using only their own sensors. The GQN takes as input images of a scene taken from different viewpoints, constructs an internal representation, and uses this representation to predict the appearance of that scene from previously unobserved viewpoints. The GQN demonstrates representation learning without human labels or domain knowledge, paving the way toward machines that autonomously learn to understand the world around them.", "Automatic synthesis of realistic images from text would be interesting and useful, but current AI systems are still far from this goal. However, in recent years generic and powerful recurrent neural network architectures have been developed to learn discriminative text feature representations. Meanwhile, deep convolutional generative adversarial networks (GANs) have begun to generate highly compelling images of specific categories, such as faces, album covers, and room interiors. In this work, we develop a novel deep architecture and GAN formulation to effectively bridge these advances in text and image modeling, translating visual concepts from characters to pixels. 
We demonstrate the capability of our model to generate plausible images of birds and flowers from detailed text descriptions.", "Deep networks have recently enjoyed enormous success when applied to recognition and classification problems in computer vision [22, 33], but their use in graphics problems has been limited ([23, 7] are notable recent exceptions). In this work, we present a novel deep architecture that performs new view synthesis directly from pixels, trained from a large number of posed image sets. In contrast to traditional approaches, which consist of multiple complex stages of processing, each of which requires careful tuning and can fail in unexpected ways, our system is trained end-to-end. The pixels from neighboring views of a scene are presented to the network, which then directly produces the pixels of the unseen view. The benefits of our approach include generality (we only require posed image sets and can easily apply our method to different domains), and high quality results on traditionally difficult scenes. We believe this is due to the end-to-end nature of our system, which is able to plausibly generate pixels according to color, depth, and texture priors learnt automatically from the training data. We show view interpolation results on imagery from the KITTI dataset [12], from data from [1] as well as on Google Street View images. To our knowledge, our work is the first to apply deep learning to the problem of new view synthesis from sets of real-world, natural imagery.", "We present an approach to synthesizing photographic images conditioned on semantic layouts. Given a semantic label map, our approach produces an image with photographic appearance that conforms to the input layout. The approach thus functions as a rendering engine that takes a two-dimensional semantic specification of the scene and produces a corresponding photographic image. Unlike recent and contemporaneous work, our approach does not rely on adversarial training. 
We show that photographic images can be synthesized from semantic layouts by a single feedforward network with appropriate structure, trained end-to-end with a direct regression objective. The presented approach scales seamlessly to high resolutions; we demonstrate this by synthesizing photographic images at 2-megapixel resolution, the full resolution of our training data. Extensive perceptual experiments on datasets of outdoor and indoor scenes demonstrate that images synthesized by the presented approach are considerably more realistic than alternative approaches.", "Modeling the distribution of natural images is a landmark problem in unsupervised learning. This task requires an image model that is at once expressive, tractable and scalable. We present a deep neural network that sequentially predicts the pixels in an image along the two spatial dimensions. Our method models the discrete probability of the raw pixel values and encodes the complete set of dependencies in the image. Architectural novelties include fast two-dimensional recurrent layers and an effective use of residual connections in deep recurrent networks. We achieve log-likelihood scores on natural images that are considerably better than the previous state of the art. Our main results also provide benchmarks on the diverse ImageNet dataset. Samples generated from the model appear crisp, varied and globally coherent." ] }
1903.06811
2921758275
Camera network and multi-camera calibration for external parameters is a necessary step in a variety of contexts in computer vision and robotics, ranging from three-dimensional reconstruction to human activity tracking. This paper describes a method for camera network and/or multi-camera calibration suitable for specific contexts: the cameras may not all have a common field of view, or if they do, some views may be 180 degrees from one another, and the network may be asynchronous. The required calibration object is one or more planar calibration patterns, such as ArUco or ChArUco patterns, rigidly attached to one another and distinguishable from each other. We formulate the camera network and/or multi-camera calibration problem in this context using rigidity constraints, represented as a system of equations, and find an approximate solution through a two-step process. Synthetic and real experiments, including scenarios of an asynchronous camera network and a rotating imaging system, demonstrate the method in a variety of settings. Reconstruction error was less than 0.5 mm for all datasets. The method is suitable for new users calibrating a camera network, and the modularity of the calibration object allows for disassembly, shipping, and use in a variety of large and small spaces.
Multi-camera calibration or asynchronous camera networks. Liu @cite_7 use a two-step approach to calibrate a variety of configurations, including multi-camera setups: hand-eye calibration generates an initial solution, and reprojection error is then minimized. Joo @cite_18 , working with an asynchronous camera network, used patterns projected onto white cloth to calibrate via bundle adjustment.
{ "cite_N": [ "@cite_18", "@cite_7" ], "mid": [ "2771965516", "2191878551" ], "abstract": [ "We present an approach to capture the 3D motion of a group of people engaged in a social interaction. The core challenges in capturing social interactions are: (1) occlusion is functional and frequent; (2) subtle motion needs to be measured over a space large enough to host a social group; (3) human appearance and configuration variation is immense; and (4) attaching markers to the body may prime the nature of interactions. The Panoptic Studio is a system organized around the thesis that social interactions should be measured through the integration of perceptual analyses over a large variety of view points. We present a modularized system designed around this principle, consisting of integrated structural, hardware, and software innovations. The system takes, as input, 480 synchronized video streams of multiple people engaged in social activities, and produces, as output, the labeled time-varying 3D structure of anatomical landmarks on individuals in the space. Our algorithm is designed to fuse the “weak” perceptual processes in the large number of views by progressively generating skeletal proposals from low-level appearance cues, and a framework for temporal refinement is also presented by associating body parts to reconstructed dense 3D trajectory stream. Our system and method are the first in reconstructing full body motion of more than five people engaged in social interactions without using markers. We also empirically demonstrate the impact of the number of views in achieving this goal.", "This article presents a camera calibration system, Caliber, and the underlying pose estimation problem it solves, which we call sensor localization with rigidity (SL-R). SL-R is a constraint-satisfaction-like problem that finds a set of poses satisfying certain constraints. 
These constraints include not only relative pose constraints such as those found in SLAM and motion estimation problems, but also rigidity constraints: the notion of objects that are rigidly attached to each other so that their relative pose is fixed over time even if that pose is not known a priori. We show that SL-R is NP-hard, but give an inference-based algorithm that works well in practice. SL-R enables Caliber, a tool to calibrate systems of cameras connected by rigid or actuated links, using image observations and information about known motions of the system. The user provides a model of the system in the form of a kinematic tree, and Caliber uses our SL-R algorithm to generate an estimate for the rigidity constraints, then performs nonlinear optimization to produce a solution that is locally least-squares optimal in terms of reprojection error. In this way, Caliber is able to calibrate a variety of setups that would have previously required special-purpose code to calibrate. We demonstrate Caliber in a number of different scenarios using both synthetic and experimental data." ] }
1903.06811
2921758275
Camera network and multi-camera calibration for external parameters is a necessary step in a variety of contexts in computer vision and robotics, ranging from three-dimensional reconstruction to human activity tracking. This paper describes a method for camera network and/or multi-camera calibration suitable for specific contexts: the cameras may not all have a common field of view, or if they do, some views may be 180 degrees from one another, and the network may be asynchronous. The required calibration object is one or more planar calibration patterns, such as ArUco or ChArUco patterns, rigidly attached to one another and distinguishable from each other. We formulate the camera network and/or multi-camera calibration problem in this context using rigidity constraints, represented as a system of equations, and find an approximate solution through a two-step process. Synthetic and real experiments, including scenarios of an asynchronous camera network and a rotating imaging system, demonstrate the method in a variety of settings. Reconstruction error was less than 0.5 mm for all datasets. The method is suitable for new users calibrating a camera network, and the modularity of the calibration object allows for disassembly, shipping, and use in a variety of large and small spaces.
CNNs and deep learning. Convolutional neural networks (CNNs) and deep learning have recently been employed in multiple contexts to predict camera pose. For instance, @cite_9 designed CNNs to predict relative pose from stereo images. Peretroukhin and Kelly @cite_5 , in a visual odometry context, use classical geometric and probabilistic approaches, with a deep network acting as a corrector. Other work has focused on appropriate loss functions for camera pose localization in the context of monocular cameras @cite_13 .
{ "cite_N": [ "@cite_5", "@cite_9", "@cite_13" ], "mid": [ "2754329383", "2592936284", "2605111497" ], "abstract": [ "We present a novel method to fuse the power of deep networks with the computational efficiency of geometric and probabilistic localization algorithms. In contrast to other methods that completely replace a classical visual estimator with a deep network, we propose an approach that uses a convolutional neural network to learn difficult-to-model corrections to the estimator from ground-truth training data. To this end, we derive a novel loss function for learning SE(3) corrections based on a matrix Lie groups approach, with a natural formulation for balancing translation and rotation errors. We use this loss to train a deep pose correction network (DPC-Net) that predicts corrections for a particular estimator, sensor and environment. Using the KITTI odometry dataset, we demonstrate significant improvements to the accuracy of a computationally-efficient sparse stereo visual odometry pipeline, that render it as accurate as a modern computationally-intensive dense estimator. Further, we show how DPC-Net can be used to mitigate the effect of poorly calibrated lens distortion parameters.", "This paper presents a convolutional neural network based approach for estimating the relative pose between two cameras. The proposed network takes RGB images from both cameras as input and directly produces the relative rotation and translation as output. The system is trained in an end-to-end manner utilising transfer learning from a large scale classification dataset. The introduced approach is compared with widely used local feature based methods (SURF, ORB) and the results indicate a clear improvement over the baseline. 
In addition, a variant of the proposed architecture containing a spatial pyramid pooling (SPP) layer is evaluated and shown to further improve the performance.", "Deep learning has shown to be effective for robust and real-time monocular image relocalisation. In particular, PoseNet [22] is a deep convolutional neural network which learns to regress the 6-DOF camera pose from a single image. It learns to localize using high level features and is robust to difficult lighting, motion blur and unknown camera intrinsics, where point based SIFT registration fails. However, it was trained using a naive loss function, with hyper-parameters which require expensive tuning. In this paper, we give the problem a more fundamental theoretical treatment. We explore a number of novel loss functions for learning camera pose which are based on geometry and scene reprojection error. Additionally we show how to automatically learn an optimal weighting to simultaneously regress position and orientation. By leveraging geometry, we demonstrate that our technique significantly improves PoseNets performance across datasets ranging from indoor rooms to a small city." ] }
1903.06939
2921596091
Lemmatization of standard languages is concerned with (i) abstracting over morphological differences and (ii) resolving token-lemma ambiguities of inflected words in order to map them to a dictionary headword. In the present paper we aim to improve lemmatization performance on a set of non-standard historical languages in which the difficulty is increased by an additional aspect (iii): spelling variation due to lacking orthographic standards. We approach lemmatization as a string-transduction task with an encoder-decoder architecture which we enrich with sentence context information using a hierarchical sentence encoder. We show significant improvements over the state-of-the-art when training the sentence encoder jointly for lemmatization and language modeling. Crucially, our architecture does not require POS or morphological annotations, which are not always available for historical corpora. Additionally, we also test the proposed model on a set of typologically diverse standard languages showing results on par or better than a model without enhanced sentence representations and previous state-of-the-art systems. Finally, to encourage future work on processing of non-standard varieties, we release the dataset of non-standard languages underlying the present study, based on openly accessible sources.
With the advent of current encoder-decoder architectures, lemmatization as a string-transduction task has gained interest, partly inspired by the success of such architectures in Neural Machine Translation (NMT). For instance, apply a state-of-the-art NMT system with the lemma as target and, as source, the focus token with a fixed window over neighboring tokens. Most similar to our work is the approach by , which conditions the decoder on sentence-level distributional features extracted from a sentence-level bidirectional RNN and on morphological tags. Recently, work on non-standard historical varieties has focused on spelling normalization using rule-based, statistical, and neural string-transduction models @cite_15 @cite_28 @cite_4 . Previous studies on lemmatization of historical variants have focused on evaluating off-the-shelf systems. For instance, evaluates different pre-existing models on a dataset of German and Medieval Latin, and focuses on Early Irish. Closest to the present paper in this area is work by , which tackled lemmatization of Middle Dutch with a neural encoder that extracts character- and word-level features from a fixed-length token window and predicts the target lemma from a closed set of true lemmas.
{ "cite_N": [ "@cite_28", "@cite_15", "@cite_4" ], "mid": [ "2111283389", "2251463342", "2963715460" ], "abstract": [ "We present a novel approach to the task of word lemmatisation. We formalise lemmatisation as a category tagging task, by describing how a word-to-lemma transformation rule can be encoded in a single label and how a set of such labels can be inferred for a specific language. In this way, a lemmatisation system can be trained and tested using any supervised tagging model. In contrast to previous approaches, the proposed technique allows us to easily integrate relevant contextual information. We test our approach on eight languages reaching a new state-of-the-art level for the lemmatisation task.", "We present a multilingual evaluation of approaches for spelling normalisation of historical text based on data from five languages: English, German, Hungarian, Icelandic, and Swedish. Three different normalisation methods are evaluated: a simplistic filtering model, a Levenshtein-based approach, and a character-based statistical machine translation approach. The evaluation shows that the machine translation approach often gives the best results, but also that all approaches improve over the baseline and that no single method works best for all languages.", "" ] }
1903.06901
2922035987
Many Natural Language Processing works on emotion analysis focus only on simple emotion classification, without exploring the potential of putting emotion into "event context", and ignore the analysis of emotion-related events. One main reason is the lack of such a corpus. Here we present the Cause-Emotion-Action Corpus, which manually annotates not only emotion but also cause events and action events. We propose two new tasks based on the dataset: emotion causality and emotion inference. The first task is to extract a triple (cause, emotion, action). The second task is to infer the probable emotion. We are releasing the dataset with 10,603 samples and 15,892 events, basic statistical analysis, and baselines on both the emotion causality and emotion inference tasks. Baseline performance demonstrates that there is much room for improvement on both tasks.
@cite_13 first defined emotion-provoking events and constructed a Japanese corpus from a massive number of examples extracted from the web, then performed sentiment polarity classification and emotion classification. @cite_5 worked on creating prevalence-ranked dictionaries of emotion-provoking events through both manual labor and automatic information extraction.
{ "cite_N": [ "@cite_5", "@cite_13" ], "mid": [ "2161431802", "2090987251" ], "abstract": [ "This paper is concerned with the discovery and aggregation of events that provoke a particular emotion in the person who experiences them, or emotion-provoking events. We first describe the creation of a small manually-constructed dictionary of events through a survey of 30 subjects. Next, we describe first attempts at automatically acquiring and aggregating these events from web data, with a baseline from previous work and some simple extensions using seed expansion and clustering. Finally, we propose several evaluation measures for evaluating the automatically acquired events, and perform an evaluation of the effectiveness of automatic event extraction.", "In this paper, we propose a data-oriented method for inferring the emotion of a speaker conversing with a dialog system from the semantic content of an utterance. We first fully automatically obtain a huge collection of emotion-provoking event instances from the Web. With Japanese chosen as a target language, about 1.3 million emotion provoking event instances are extracted using an emotion lexicon and lexical patterns. We then decompose the emotion classification task into two sub-steps: sentiment polarity classification (coarsegrained emotion classification), and emotion classification (fine-grained emotion classification). For each subtask, the collection of emotion-proviking event instances is used as labelled examples to train a classifier. The results of our experiments indicate that our method significantly outperforms the baseline method. We also find that compared with the single-step model, which applies the emotion classifier directly to inputs, our two-step model significantly reduces sentiment polarity errors, which are considered fatal errors in real dialog applications." ] }
1903.06901
2922035987
Many Natural Language Processing works on emotion analysis focus only on simple emotion classification, without exploring the potential of putting emotion into "event context", and ignore the analysis of emotion-related events. One main reason is the lack of such a corpus. Here we present the Cause-Emotion-Action Corpus, which manually annotates not only emotion but also cause events and action events. We propose two new tasks based on the dataset: emotion causality and emotion inference. The first task is to extract a triple (cause, emotion, action). The second task is to infer the probable emotion. We are releasing the dataset with 10,603 samples and 15,892 events, basic statistical analysis, and baselines on both the emotion causality and emotion inference tasks. Baseline performance demonstrates that there is much room for improvement on both tasks.
@cite_4 first proposed the task of emotion cause detection and manually constructed a corpus from the Academia Sinica Balanced Chinese Corpus. @cite_20 built a dataset from SINA city news and then proposed an event-driven emotion cause extraction method using multi-kernel SVMs. @cite_15 directly selected the emotions-directed frames in FrameNet to build an English emotion cause (or stimulus) corpus, then used CRFs to detect emotion causes. One study @cite_1 designed a corpus by annotating emotion cause expressions in Chinese Weibo posts and extended a rule-based method to this informal Weibo text. @cite_21 focused on current- and original-subtweet-based emotion cause detection and annotated a corpus with a multiple-user structure. @cite_10 organized the NTCIR-13 ECA (emotion cause analysis) task, which comprised two subtasks: emotion cause detection and emotion cause extraction.
{ "cite_N": [ "@cite_4", "@cite_21", "@cite_1", "@cite_15", "@cite_10", "@cite_20" ], "mid": [ "1604245705", "2766095568", "2161624371", "2182096631", "", "2562539671" ], "abstract": [ "Emotion processing has always been a great challenge. Given the fact that an emotion is triggered by cause events and that cause events are an integral part of emotion, this paper constructs a Chinese emotion cause corpus as a first step towards automatic inference of cause-emotion correlation. The corpus focuses on five primary emotions, namely happiness, sadness, fear, anger, and surprise. It is annotated with emotion cause events based on our proposed annotation scheme. Corpus data shows that most emotions are expressed with causes, and that causes mostly occur before the corresponding emotion verbs. We also examine the correlations between emotions and cause events in terms of linguistic cues: causative verbs, perception verbs, epistemic markers, conjunctions, prepositions, and others. Results show that each group of linguistic cues serves as an indicator marking the cause events in different structures of emotional constructions. We believe that the emotion cause corpus will be the useful resource for automatic emotion cause detection as well as emotion detection and classification.", "A notably challenging problem in emotion analysis is recognizing the cause of an emotion. Although there have been a few studies on emotion cause detection, most of them work on news reports or a few of them focus on microblogs using a single-user structure (i.e., all texts in a microblog are written by the same user). In this article, we focus on emotion cause detection for Chinese microblogs using a multiple-user structure (i.e., texts in a microblog are successively written by several users). 
First, based on the fact that the causes of an emotion of a focused user may be provided by other users in a microblog with the multiple-user structure, we design an emotion cause annotation scheme which can deal with such a complicated case, and then provide an emotion cause corpus using the annotation scheme. Second, based on the analysis of the emotion cause corpus, we formalize two emotion cause detection tasks for microblogs (current-subtweet-based emotion cause detection and original-subtweet-based emotion cause detection). Furthermore, in order to examine the difficulty of the two emotion cause detection tasks and the contributions of texts written by different users in a microblog with the multiple-user structure, we choose two popular classification methods (SVM and LSTM) to do emotion cause detection. Our experiments show that the current-subtweet-based emotion cause detection is much more difficult than the original-subtweet-based emotion cause detection, and texts written by different users are very helpful for both emotion cause detection tasks. This study presents a pilot study of emotion cause detection which deals with Chinese microblogs using a complicated structure.", "To identify the cause of emotion is a new challenge for researchers in natural language processing. Currently, there is no existing works on emotion cause detection from Chinese micro-blogging (Weibo) text. In this study, an emotion cause annotated corpus is firstly designed and developed through annotating the emotion cause expressions in Chinese Weibo Text. Up to now, an emotion cause annotated corpus which consists of the annotations for 1,333 Chinese Weibo is constructed. Based on the observations on this corpus, the characteristics of emotion cause expression are identified. Accordingly, a rule-based emotion cause detection method is developed which uses 25 manually compiled rules. 
Furthermore, two machine learning based cause detection methods are developed including a classification-based method using support vector machines and a sequence labeling based method using conditional random fields model. It is the largest available resources in this research area. The experimental results show that the rule-based method achieves 68.30 accuracy rate. Furthermore, the method based on conditional random fields model achieved 77.57 accuracy which is 37.45 higher than the reference baseline method. These results show the effectiveness of our proposed emotion cause detection method.", "Emotion, a pervasive aspect of human experience, has long been of interest to social and behavioural sciences. It is now the subject of multi-disciplinary research also in computational linguistics. Emotion recognition, studied in the area of sentiment analysis, has focused on detecting the expressed emotion. A related challenging question, why the experiencer feels that emotion, has, to date, received very little attention. The task is difficult and there are no annotated English resources. FrameNet refers to the person, event or state of affairs which evokes the emotional response in the experiencer as emotion stimulus. We automatically build a dataset annotated with both the emotion and the stimulus using FrameNet’s emotions-directed frame. We address the problem as information extraction: we build a CRF learner, a sequential learning model to detect the emotion stimulus spans in emotion-bearing sentences. We show that our model significantly outperforms all the baselines.", "", "" ] }
1903.06901
2922035987
Many Natural Language Processing works on emotion analysis only focus on simple emotion classification without exploring the potentials of putting emotion into "event context", and ignore the analysis of emotion-related events. One main reason is the lack of this kind of corpus. Here we present the Cause-Emotion-Action Corpus, which manually annotates not only emotion, but also cause events and action events. We propose two new tasks based on the dataset: emotion causality and emotion inference. The first task is to extract a triple (cause, emotion, action). The second task is to infer the probable emotion. We are currently releasing the dataset with 10,603 samples and 15,892 events, together with basic statistical analysis and baselines on both the emotion causality and emotion inference tasks. Baseline performance demonstrates that there is much room for improvement on both tasks.
@cite_12 presented an annotation scheme for events that negatively or positively affect entities (benefactive/malefactive events). Then @cite_0 constructed two sense-level lexicons of benefactive and malefactive events for opinion inference.
{ "cite_N": [ "@cite_0", "@cite_12" ], "mid": [ "2251721284", "2110234338" ], "abstract": [ "Opinion inference arises when opinions are expressed toward states and events which positively or negatively affect entities, i.e., benefactive and malefactive events. This paper addresses creating a lexicon of such events, which would be helpful to infer opinions. Verbs may be ambiguous, in that some meanings may be benefactive and others may be malefactive or neither. Thus, we use WordNet to create a sense-level lexicon. We begin with seed senses culled from FrameNet and expand the lexicon using WordNet relationships. The evaluations show that the accuracy of the approach is well above baseline accuracy.", "This paper presents an annotation scheme for events that negatively or positively affect entities (benefactive/malefactive events) and for the attitude of the writer toward their agents and objects. Work on opinion and sentiment tends to focus on explicit expressions of opinions. However, many attitudes are conveyed implicitly, and benefactive/malefactive events are important for inferring implicit attitudes. We describe an annotation scheme and give the results of an inter-annotator agreement study. The annotated corpus is available online." ] }
1903.06901
2922035987
Many Natural Language Processing works on emotion analysis only focus on simple emotion classification without exploring the potentials of putting emotion into "event context", and ignore the analysis of emotion-related events. One main reason is the lack of this kind of corpus. Here we present the Cause-Emotion-Action Corpus, which manually annotates not only emotion, but also cause events and action events. We propose two new tasks based on the dataset: emotion causality and emotion inference. The first task is to extract a triple (cause, emotion, action). The second task is to infer the probable emotion. We are currently releasing the dataset with 10,603 samples and 15,892 events, together with basic statistical analysis and baselines on both the emotion causality and emotion inference tasks. Baseline performance demonstrates that there is much room for improvement on both tasks.
@cite_18 proposed Event2Mind to support commonsense inference on events described in short free-form text, with a specific focus on modeling stereotypical intents and reactions of the people involved.
{ "cite_N": [ "@cite_18" ], "mid": [ "2896555051" ], "abstract": [ "We investigate a new commonsense inference task: given an event described in a short free-form text (\"X drinks coffee in the morning\"), a system reasons about the likely intents (\"X wants to stay awake\") and reactions (\"X feels alert\") of the event's participants. To support this study, we construct a new crowdsourced corpus of 25,000 event phrases covering a diverse range of everyday events and situations. We report baseline performance on this task, demonstrating that neural encoder-decoder models can successfully compose embedding representations of previously unseen events and reason about the likely intents and reactions of the event participants. In addition, we demonstrate how commonsense inference on people's intents and reactions can help unveil the implicit gender inequality prevalent in modern movie scripts." ] }
1903.06473
2924060277
We propose DeepHuman, an image-guided volume-to-volume translation CNN for 3D human reconstruction from a single RGB image. To reduce the ambiguities associated with the surface geometry reconstruction, even for the reconstruction of invisible areas, we propose and leverage a dense semantic representation generated from SMPL model as an additional input. One key feature of our network is that it fuses different scales of image features into the 3D space through volumetric feature transformation, which helps to recover accurate surface geometry. The visible surface details are further refined through a normal refinement network, which can be concatenated with the volume generation network using our proposed volumetric normal projection layer. We also contribute THuman, a 3D real-world human model dataset containing about 7000 models. The network is trained using training data generated from the dataset. Overall, due to the specific design of our network and the diversity in our dataset, our method enables 3D human model estimation given only a single image and outperforms state-of-the-art approaches.
Previous studies focused on using multiview images for human model reconstruction @cite_43 @cite_48 @cite_73 . Shape cues such as silhouettes @cite_54 @cite_36 , as well as stereo and shading cues, have been integrated in both passive @cite_48 @cite_73 @cite_2 and active illumination @cite_24 @cite_49 modes to improve the reconstruction performance. State-of-the-art real-time @cite_52 @cite_33 and extremely high-quality @cite_74 reconstruction results have also been demonstrated with tens or even hundreds of cameras using binocular @cite_59 or multiview stereo matching @cite_38 algorithms. To capture detailed motions of multiple interacting characters, more than six hundred cameras have been used to overcome the occlusion challenges @cite_66 @cite_45 . However, all these multi-camera systems require complicated environment setups, including camera calibration, synchronization and lighting control.
{ "cite_N": [ "@cite_38", "@cite_33", "@cite_36", "@cite_48", "@cite_54", "@cite_52", "@cite_24", "@cite_43", "@cite_45", "@cite_49", "@cite_2", "@cite_74", "@cite_59", "@cite_73", "@cite_66" ], "mid": [ "2129404737", "2768345177", "", "2117888987", "2142540472", "2461005315", "2058676365", "2042418341", "2963515833", "2040436296", "2113507517", "2044618760", "", "", "2215643317" ], "abstract": [ "This paper proposes a novel algorithm for multiview stereopsis that outputs a dense set of small rectangular patches covering the surfaces visible in the images. Stereopsis is implemented as a match, expand, and filter procedure, starting from a sparse set of matched keypoints, and repeatedly expanding these before using visibility constraints to filter away false matches. The keys to the performance of the proposed algorithm are effective techniques for enforcing local photometric consistency and global visibility constraints. Simple but effective methods are also proposed to turn the resulting patch model into a mesh which can be further refined by an algorithm that enforces both photometric consistency and regularization constraints. The proposed approach automatically detects and discards outliers and obstacles and does not require any initialization in the form of a visual hull, a bounding box, or valid depth ranges. We have tested our algorithm on various data sets including objects with fine surface details, deep concavities, and thin structures, outdoor scenes observed from a restricted set of viewpoints, and \"crowded\" scenes where moving obstacles appear in front of a static structure of interest. A quantitative evaluation on the Middlebury benchmark [1] shows that the proposed method outperforms all others submitted so far for four out of the six data sets.", "We present Motion2Fusion, a state-of-the-art 360 performance capture system that enables *real-time* reconstruction of arbitrary non-rigid scenes. 
We provide three major contributions over prior work: 1) a new non-rigid fusion pipeline allowing for far more faithful reconstruction of high frequency geometric details, avoiding the over-smoothing and visual artifacts observed previously. 2) a high speed pipeline coupled with a machine learning technique for 3D correspondence field estimation reducing tracking errors and artifacts that are attributed to fast motions. 3) a backward and forward non-rigid alignment strategy that more robustly deals with topology changes but is still free from scene priors. Our novel performance capture system demonstrates real-time results nearing 3x speed-up from previous state-of-the-art work on the exact same GPU hardware. Extensive quantitative and qualitative comparisons show more precise geometric and texturing results with fewer artifacts due to fast motions or topology changes than prior art.", "", "Creating realistic animated models of people is a central task in digital content production. Traditionally, highly skilled artists and animators construct shape and appearance models for digital characters. They then define the character's motion at each time frame or specific key-frames in a motion sequence to create a digital performance. Increasingly, producers are using motion capture technology to record animations from an actor's performance. This technology reduces animation production time and captures natural movements to create a more believable production. However, motion capture requires the use of specialist suits and markers and only records skeletal motion. It lacks the detailed secondary surface dynamics of cloth and hair that provide the visual realism of a live performance. Over the last decade, we have investigated studio capture technology with the objective of creating models of real people that accurately reflect the time-varying shape and appearance of the whole body with clothing. 
Surface capture is a fully automated system for capturing a human's shape and appearance as well as motion from multiple video cameras to create highly realistic animated content from an actor's performance in full wardrobe. Our system solves two key problems in performance capture: scene capture from a limited number of camera views and efficient scene representation for visualization", "In this paper, we describe an efficient image-based approach to computing and shading visual hulls from silhouette image data. Our algorithm takes advantage of epipolar geometry and incremental computation to achieve a constant rendering cost per rendered pixel. It does not suffer from the computation complexity, limited resolution, or quantization artifacts of previous volumetric approaches. We demonstrate the use of this algorithm in a real-time virtualized reality application running off a small number of video streams.", "We contribute a new pipeline for live multi-view performance capture, generating temporally coherent high-quality reconstructions in real-time. Our algorithm supports both incremental reconstruction, improving the surface estimation over time, as well as parameterizing the nonrigid scene motion. Our approach is highly robust to both large frame-to-frame motion and topology changes, allowing us to reconstruct extremely challenging scenes. We demonstrate advantages over related real-time techniques that either deform an online generated template or continually fuse depth data nonrigidly into a single reference model. Finally, we show geometric reconstruction results on par with offline methods which require orders of magnitude more processing time and many more RGBD cameras.", "In this paper we present a scalable 3D video framework for capturing and rendering dynamic scenes. The acquisition system is based on multiple sparsely placed 3D video bricks, each comprising a projector, two grayscale cameras, and a color camera. 
Relying on structured light with complementary patterns, texture images and pattern-augmented views of the scene are acquired simultaneously by time-multiplexed projections and synchronized camera exposures. Using space–time stereo on the acquired pattern images, high-quality depth maps are extracted, whose corresponding surface samples are merged into a view-independent, point-based 3D data structure. This representation allows for effective photo-consistency enforcement and outlier removal, leading to a significant decrease of visual artifacts and a high resulting rendering quality using EWA volume splatting. Our framework and its view-independent representation allow for simple and straightforward editing of 3D video. In order to demonstrate its flexibility, we show compositing techniques and spatiotemporal effects.", "A new visual medium, Virtualized Reality, immerses viewers in a virtual reconstruction of real-world events. The Virtualized Reality world model consists of real images and depth information computed from these images. Stereoscopic reconstructions provide a sense of complete immersion, and users can select their own viewpoints at view time, independent of the actual camera positions used to capture the event.", "We present a unified deformation model for the markerless capture of human movement at multiple scales, including facial expressions, body motion, and hand gestures. An initial model is generated by locally stitching together models of the individual parts of the human body, which we refer to as \"Frank\". This model enables the full expression of part movements, including face and hands, by a single seamless model. We capture a dataset of people wearing everyday clothes and optimize the Frank model to create \"Adam\": a calibrated model that shares the same skeleton hierarchy as the initial model with a simpler parameterization. 
Finally, we demonstrate the use of these models for total motion tracking in a multiview setup, simultaneously capturing the large-scale body movements and the subtle face and hand motion of a social group of people.", "We describe a system for high-resolution capture of moving 3D geometry, beginning with dynamic normal maps from multiple views. The normal maps are captured using active shape-from-shading (photometric stereo), with a large lighting dome providing a series of novel spherical lighting configurations. To compensate for low-frequency deformation, we perform multi-view matching and thin-plate spline deformation on the initial surfaces obtained by integrating the normal maps. Next, the corrected meshes are merged into a single mesh using a volumetric method. The final output is a set of meshes, which were impossible to produce with previous methods. The meshes exhibit details on the order of a few millimeters, and represent the performance over human-size working volumes at a temporal resolution of 60Hz.", "We present an approach to add true fine-scale spatio-temporal shape detail to dynamic scene geometry captured from multi-view video footage. Our approach exploits shading information to recover the millimeter-scale surface structure, but in contrast to related approaches succeeds under general unconstrained lighting conditions. Our method starts off from a set of multi-view video frames and an initial series of reconstructed coarse 3D meshes that lack any surface detail. In a spatio-temporal maximum a posteriori probability (MAP) inference framework, our approach first estimates the incident illumination and the spatially-varying albedo map on the mesh surface for every time instant. Thereafter, albedo and illumination are used to estimate the true geometric detail visible in the images and add it to the coarse reconstructions. 
The MAP framework uses weak temporal priors on lighting, albedo and geometry which improve reconstruction quality yet allow for temporal variations in the data.", "We present the first end-to-end solution to create high-quality free-viewpoint video encoded as a compact data stream. Our system records performances using a dense set of RGB and IR video cameras, generates dynamic textured surfaces, and compresses these to a streamable 3D video format. Four technical advances contribute to high fidelity and robustness: multimodal multi-view stereo fusing RGB, IR, and silhouette information; adaptive meshing guided by automatic detection of perceptually salient areas; mesh tracking to create temporally coherent subsequences; and encoding of tracked textured meshes as an MPEG video stream. Quantitative experiments demonstrate geometric accuracy, texture fidelity, and encoding efficiency. We release several datasets with calibrated inputs and processed results to foster future research.", "", "", "We present an approach to capture the 3D structure and motion of a group of people engaged in a social interaction. The core challenges in capturing social interactions are: (1) occlusion is functional and frequent, (2) subtle motion needs to be measured over a space large enough to host a social group, and (3) human appearance and configuration variation is immense. The Panoptic Studio is a system organized around the thesis that social interactions should be measured through the perceptual integration of a large variety of view points. We present a modularized system designed around this principle, consisting of integrated structural, hardware, and software innovations. The system takes, as input, 480 synchronized video streams of multiple people engaged in social activities, and produces, as output, the labeled time-varying 3D structure of anatomical landmarks on individuals in the space. 
The algorithmic contributions include a hierarchical approach for generating skeletal trajectory proposals, and an optimization framework for skeletal reconstruction with trajectory re-association." ] }
1903.06473
2924060277
We propose DeepHuman, an image-guided volume-to-volume translation CNN for 3D human reconstruction from a single RGB image. To reduce the ambiguities associated with the surface geometry reconstruction, even for the reconstruction of invisible areas, we propose and leverage a dense semantic representation generated from SMPL model as an additional input. One key feature of our network is that it fuses different scales of image features into the 3D space through volumetric feature transformation, which helps to recover accurate surface geometry. The visible surface details are further refined through a normal refinement network, which can be concatenated with the volume generation network using our proposed volumetric normal projection layer. We also contribute THuman, a 3D real-world human model dataset containing about 7000 models. The network is trained using training data generated from the dataset. Overall, due to the specific design of our network and the diversity in our dataset, our method enables 3D human model estimation given only a single image and outperforms state-of-the-art approaches.
To reduce the difficulty of system setup, human model reconstruction from extremely sparse camera views has recently been investigated by using CNNs for learning silhouette cues @cite_27 and stereo cues @cite_67 . These systems require about 4 camera views for a coarse-level surface detail capture. Note also that although temporal deformation systems using lightweight camera setups (usually with about eight cameras) @cite_70 @cite_18 @cite_55 have been developed for dynamic human model reconstruction using skeleton tracking @cite_70 @cite_69 or human mesh-based template deformation @cite_18 , these systems assume a pre-scanned subject-specific human template as a key model for deformation.
{ "cite_N": [ "@cite_67", "@cite_18", "@cite_69", "@cite_70", "@cite_55", "@cite_27" ], "mid": [ "2894865236", "2109752307", "2089384364", "2122578066", "2110434318", "2811169695" ], "abstract": [ "We present a deep learning based volumetric approach for performance capture using a passive and highly sparse multi-view capture system. State-of-the-art performance capture systems require either pre-scanned actors, large number of cameras or active sensors. In this work, we focus on the task of template-free, per-frame 3D surface reconstruction from as few as three RGB sensors, for which conventional visual hull or multi-view stereo methods fail to generate plausible results. We introduce a novel multi-view Convolutional Neural Network (CNN) that maps 2D images to a 3D volumetric field and we use this field to encode the probabilistic distribution of surface points of the captured subject. By querying the resulting field, we can instantiate the clothed human body at arbitrary resolutions. Our approach scales to different numbers of input images, which yield increased reconstruction quality when more views are used. Although only trained on synthetic data, our network can generalize to handle real footage from body performance capture. Our method is suitable for high-quality low-cost full body volumetric capture solutions, which are gaining popularity for VR and AR content creation. Experimental results demonstrate that our method is significantly more robust and accurate than existing techniques when only very sparse views are available.", "This paper proposes a new marker-less approach to capturing human performances from multi-view video. Our algorithm can jointly reconstruct spatio-temporally coherent geometry, motion and textural surface appearance of actors that perform complex and rapid moves. 
Furthermore, since our algorithm is purely mesh-based and makes as few prior assumptions as possible about the type of subject being tracked, it can even capture performances of people wearing wide apparel, such as a dancer wearing a skirt. To serve this purpose our method efficiently and effectively combines the power of surface- and volume-based shape deformation techniques with a new mesh-based analysis-through-synthesis framework. This framework extracts motion constraints from video and makes the laser-scan of the tracked subject mimic the recorded performance. Also small-scale time-varying shape detail is recovered by applying model-guided multi-view stereo to refine the model surface. Our method delivers captured performance data at a high level of detail, is highly versatile, and is applicable to many complex types of scenes that could not be handled by alternative marker-based or marker-free recording techniques.", "Capturing the skeleton motion and detailed time-varying surface geometry of multiple, closely interacting people is a very challenging task, even in a multicamera setup, due to frequent occlusions and ambiguities in feature-to-person assignments. To address this task, we propose a framework that exploits multiview image segmentation. To this end, a probabilistic shape and appearance model is employed to segment the input images and to assign each pixel uniquely to one person. Given the articulated template models of each person and the labeled pixels, a combined optimization scheme, which splits the skeleton pose optimization problem into a local one and a lower dimensional global one, is applied one by one to each individual, followed by surface estimation to capture detailed nonrigid deformations. 
We show on various sequences that our approach can capture the 3D motion of humans accurately even if they move rapidly, if they wear wide apparel, and if they are engaged in challenging multiperson motions, including dancing, wrestling, and hugging.", "Details in mesh animations are difficult to generate but they have great impact on visual quality. In this work, we demonstrate a practical software system for capturing such details from multi-view video recordings. Given a stream of synchronized video images that record a human performance from multiple viewpoints and an articulated template of the performer, our system captures the motion of both the skeleton and the shape. The output mesh animation is enhanced with the details observed in the image silhouettes. For example, a performance in casual loose-fitting clothes will generate mesh animations with flowing garment motions. We accomplish this with a fast pose tracking method followed by nonrigid deformation of the template to fit the silhouettes. The entire process takes less than sixteen seconds per frame and requires no markers or texture cues. Captured meshes are in full correspondence making them readily usable for editing operations including texturing, deformation transfer, and deformation model learning.", "This paper proposes a method for capturing the performance of a human or an animal from a multi-view video sequence. Given an articulated template model and silhouettes from a multi-view image sequence, our approach recovers not only the movement of the skeleton, but also the possibly non-rigid temporal deformation of the 3D surface. While large scale deformations or fast movements are captured by the skeleton pose and approximate surface skinning, true small scale deformations or non-rigid garment motion are captured by fitting the surface to the silhouette. 
We further propose a novel optimization scheme for skeleton-based pose estimation that exploits the skeleton's tree structure to split the optimization problem into a local one and a lower dimensional global one. We show on various sequences that our approach can capture the 3D motion of animals and humans accurately even in the case of rapid movements and wide apparel like skirts.", "We present a convolutional autoencoder that enables high fidelity volumetric reconstructions of human performance to be captured from multi-view video comprising only a small set of camera views. Our method yields similar end-to-end reconstruction error to that of a probabilistic visual hull computed using significantly more (double or more) viewpoints. We use a deep prior implicitly learned by the autoencoder trained over a dataset of view-ablated multi-view video footage of a wide range of subjects and actions. This opens up the possibility of high-end volumetric performance capture in on-set and prosumer scenarios where time or cost prohibit a high witness camera count." ] }
1903.06473
2924060277
We propose DeepHuman, an image-guided volume-to-volume translation CNN for 3D human reconstruction from a single RGB image. To reduce the ambiguities associated with the surface geometry reconstruction, even for the reconstruction of invisible areas, we propose and leverage a dense semantic representation generated from SMPL model as an additional input. One key feature of our network is that it fuses different scales of image features into the 3D space through volumetric feature transformation, which helps to recover accurate surface geometry. The visible surface details are further refined through a normal refinement network, which can be concatenated with the volume generation network using our proposed volumetric normal projection layer. We also contribute THuman, a 3D real-world human model dataset containing about 7000 models. The network is trained using training data generated from the dataset. Overall, due to the specific design of our network and the diversity in our dataset, our method enables 3D human model estimation given only a single image and outperforms state-of-the-art approaches.
Regarding single-view human model reconstruction, there are only two recent works by @cite_10 and @cite_1 . In the former study, the 3D human datasets used for the training process are essentially synthesized human imagery textured over SMPL models (lacking geometry details), leading to SMPL-like voxel geometries in their outputs. The latter study shows the ability to output high-quality details, but their training set is highly constrained, leading to difficulty in generalization, e.g., to different human poses.
{ "cite_N": [ "@cite_1", "@cite_10" ], "mid": [ "2949155024", "2797515701" ], "abstract": [ "This paper proposes the use of an end-to-end Convolutional Neural Network for direct reconstruction of the 3D geometry of humans via volumetric regression. The proposed method does not require the fitting of a shape model and can be trained to work from a variety of input types, whether it be landmarks, images or segmentation masks. Additionally, non-visible parts, either self-occluded or otherwise, are still reconstructed, which is not the case with depth map regression. We present results that show that our method can handle both pose variation and detailed reconstruction given appropriate datasets for training.", "Human shape estimation is an important task for video editing, animation and the fashion industry. Predicting 3D human body shape from natural images, however, is highly challenging due to factors such as variation in human bodies, clothing and viewpoint. Prior methods addressing this problem typically attempt to fit parametric body models with certain priors on pose and shape. In this work we argue for an alternative representation and propose BodyNet, a neural network for direct inference of volumetric body shape from a single image. BodyNet is an end-to-end trainable network that benefits from (i) a volumetric 3D loss, (ii) a multi-view re-projection loss, and (iii) intermediate supervision of 2D pose, 2D body part segmentation, and 3D pose. Each of them results in performance improvement as demonstrated by our experiments. To evaluate the method, we fit the SMPL model to our network output and show state-of-the-art results on the SURREAL and Unite the People datasets, outperforming recent approaches. Besides achieving state-of-the-art performance, our method also enables volumetric body-part segmentation." ] }
1903.06473
2924060277
We propose DeepHuman, an image-guided volume-to-volume translation CNN for 3D human reconstruction from a single RGB image. To reduce the ambiguities associated with the surface geometry reconstruction, even for the reconstruction of invisible areas, we propose and leverage a dense semantic representation generated from SMPL model as an additional input. One key feature of our network is that it fuses different scales of image features into the 3D space through volumetric feature transformation, which helps to recover accurate surface geometry. The visible surface details are further refined through a normal refinement network, which can be concatenated with the volume generation network using our proposed volumetric normal projection layer. We also contribute THuman, a 3D real-world human model dataset containing about 7000 models. The network is trained using training data generated from the dataset. Overall, due to the specific design of our network and the diversity in our dataset, our method enables 3D human model estimation given only a single image and outperforms state-of-the-art approaches.
Most of the available 3D human datasets are used for 3D pose and skeleton detection. Both the HumanEva @cite_32 and Human3.6M @cite_71 datasets contain multiview human video sequences with ground-truth 3D skeleton motion obtained from a marker-based motion capture system. Because of the need to wear markers or special suits, both of these datasets have limited apparel divergence. The MPI-INF-3DHP @cite_23 dataset enriches the cloth appearance by using a multiview markerless mocap system. However, all the above datasets lack a 3D model of each temporal frame. Recently, with the emerging need for pose and shape reconstruction from a single image, the synthesized SURREAL @cite_21 dataset has been created for this task by rendering SMPL models with different shape and pose parameters under different clothing textures. The "Unite the People" dataset @cite_57 provides real-world human images annotated semi-automatically with 3D SMPL models. These two datasets, in contrast to our dataset, do not contain surface geometry details.
{ "cite_N": [ "@cite_21", "@cite_32", "@cite_57", "@cite_23", "@cite_71" ], "mid": [ "2576289912", "2099333815", "2573098616", "2797184202", "2101032778" ], "abstract": [ "Estimating human pose, shape, and motion from images and videos are fundamental challenges with many applications. Recent advances in 2D human pose estimation use large amounts of manually-labeled training data for learning convolutional neural networks (CNNs). Such data is time consuming to acquire and difficult to extend. Moreover, manual labeling of 3D pose, depth and motion is impractical. In this work we present SURREAL (Synthetic hUmans foR REAL tasks): a new large-scale dataset with synthetically-generated but realistic images of people rendered from 3D sequences of human motion capture data. We generate more than 6 million frames together with ground truth pose, depth maps, and segmentation masks. We show that CNNs trained on our synthetic dataset allow for accurate human depth estimation and human part segmentation in real RGB images. Our results and the new dataset open up new possibilities for advancing person analysis using cheap and large-scale synthetic data.", "While research on articulated human motion and pose estimation has progressed rapidly in the last few years, there has been no systematic quantitative evaluation of competing methods to establish the current state of the art. We present data obtained using a hardware system that is able to capture synchronized video and ground-truth 3D motion. The resulting HumanEva datasets contain multiple subjects performing a set of predefined actions with a number of repetitions. On the order of 40,000 frames of synchronized motion capture and multi-view video (resulting in over one quarter million image frames in total) were collected at 60 Hz with an additional 37,000 time instants of pure motion capture data. A standard set of error measures is defined for evaluating both 2D and 3D pose estimation and tracking algorithms. 
We also describe a baseline algorithm for 3D articulated tracking that uses a relatively standard Bayesian framework with optimization in the form of Sequential Importance Resampling and Annealed Particle Filtering. In the context of this baseline algorithm we explore a variety of likelihood functions, prior models of human motion and the effects of algorithm parameters. Our experiments suggest that image observation models and motion priors play important roles in performance, and that in a multi-view laboratory environment, where initialization is available, Bayesian filtering tends to perform well. The datasets and the software are made available to the research community. This infrastructure will support the development of new articulated motion and pose estimation algorithms, will provide a baseline for the evaluation and comparison of new methods, and will help establish the current state of the art in human pose estimation and tracking.", "3D models provide a common ground for different representations of human bodies. In turn, robust 2D estimation has proven to be a powerful tool to obtain 3D fits in-the-wild. However, depending on the level of detail, it can be hard to impossible to acquire labeled data for training 2D estimators on large scale. We propose a hybrid approach to this problem: with an extended version of the recently introduced SMPLify method, we obtain high quality 3D body model fits for multiple human pose datasets. Human annotators solely sort good and bad fits. This procedure leads to an initial dataset, UP-3D, with rich annotations. With a comprehensive set of experiments, we show how this data can be used to train discriminative models that produce results with an unprecedented level of detail: our models predict 31 segments and 91 landmark locations on the body. 
Using the 91 landmark pose estimator, we present state-of-the art results for 3D human pose and shape estimation using an order of magnitude less training data and without assumptions about gender or pose in the fitting procedure. We show that UP-3D can be enhanced with these improved fits to grow in quantity and quality, which makes the system deployable on large scale. The data, code and models are available for research purposes.", "We propose a CNN-based approach for 3D human body pose estimation from single RGB images that addresses the issue of limited generalizability of models trained solely on the starkly limited publicly available 3D pose data. Using only the existing 3D pose data and 2D pose data, we show state-of-the-art performance on established benchmarks through transfer of learned features, while also generalizing to in-the-wild scenes. We further introduce a new training set for human body pose estimation from monocular images of real humans that has the ground truth captured with a multi-camera marker-less motion capture system. It complements existing corpora with greater diversity in pose, human appearance, clothing, occlusion, and viewpoints, and enables an increased scope of augmentation. We also contribute a new benchmark that covers outdoor and indoor scenes, and demonstrate that our 3D pose dataset shows better in-the-wild performance than existing annotated data, which is further improved in conjunction with transfer learning from 2D pose data. All in all, we argue that the use of transfer learning of representations in tandem with algorithmic and data contributions is crucial for general 3D body pose estimation.", "We introduce a new dataset, Human3.6M, of 3.6 Million accurate 3D Human poses, acquired by recording the performance of 5 female and 6 male subjects, under 4 different viewpoints, for training realistic human sensing systems and for evaluating the next generation of human pose estimation models and algorithms. 
Besides increasing the size of the datasets in the current state-of-the-art by several orders of magnitude, we also aim to complement such datasets with a diverse set of motions and poses encountered as part of typical human activities (taking photos, talking on the phone, posing, greeting, eating, etc.), with additional synchronized image, human motion capture, and time of flight (depth) data, and with accurate 3D body scans of all the subject actors involved. We also provide controlled mixed reality evaluation scenarios where 3D human models are animated using motion capture and inserted using correct 3D geometry, in complex real environments, viewed with moving cameras, and under occlusion. Finally, we provide a set of large-scale statistical models and detailed evaluation baselines for the dataset illustrating its diversity and the scope for improvement by future work in the research community. Our experiments show that our best large-scale model can leverage our full training set to obtain a 20 improvement in performance compared to a training set of the scale of the largest existing public dataset for this problem. Yet the potential for improvement by leveraging higher capacity, more complex models with our large dataset, is substantially vaster and should stimulate future research. The dataset together with code for the associated large-scale learning models, features, visualization tools, as well as the evaluation server, is available online at http: vision.imar.ro human3.6m ." ] }
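The DeepHuman record above supervises a volumetric prediction through an image-space projection layer. As a rough, hedged intuition for such a step, here is a toy orthographic depth rendering from a binary voxel grid; the function name and the no-hit sentinel are assumptions, and this is not the paper's actual differentiable projection layer:

```python
import numpy as np

def voxel_to_depth(vox, axis=2):
    """Orthographic depth map from a binary voxel grid.

    For each ray along `axis`, record the index of the first occupied
    voxel (np.inf where the ray hits nothing). A toy analogue of the
    projection that links volumetric and image-space representations.
    """
    vox = np.asarray(vox, dtype=bool)
    n = vox.shape[axis]
    shape = [1, 1, 1]
    shape[axis] = n
    # occupied voxels carry their index along the ray; empty ones a sentinel
    depth = np.where(vox, np.arange(n).reshape(shape), n)
    d = depth.min(axis=axis).astype(float)
    d[d == n] = np.inf  # ray hit nothing
    return d
```

A differentiable variant of this min-along-ray operation is what allows surface losses in image space to propagate back into the volume.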
1903.06440
2921460706
A unified mathematical model for synchronisation and swarming has recently been proposed. Each system entity, called a "swarmalator", coordinates its internal phase and location with the other entities in a way that these two attributes are mutually coupled. This paper realises and studies, for the first time, the concept of swarmalators in a technical system. We adapt and extend the original model for its use with mobile robots and implement it in the Robot Operating System 2 (ROS 2). Simulations and experiments with small robots demonstrate the feasibility of the model and show its potential to be applied to real-world systems. All types of space-time patterns achieved in theory can be reproduced in practice. Applications can be found in monitoring, exploration, entertainment and art, among other domains.
There are different methods for pattern formation and control (see @cite_12 @cite_24 @cite_18 ). Self-organised flocking of mobile robots was studied and evaluated, both in simulation and on real robots, in @cite_21 and @cite_11 . Robotic swarm navigation with obstacle avoidance was analysed in @cite_3 .
{ "cite_N": [ "@cite_18", "@cite_21", "@cite_3", "@cite_24", "@cite_12", "@cite_11" ], "mid": [ "2032251775", "2127683394", "2460265649", "2052142439", "2044484214", "2087370230" ], "abstract": [ "In the field of formation control, researchers generally control multiple robots in only one team, and little research focuses on multi-team formation control. In this paper, we propose an architecture, called Virtual Operator MultiAgent System (VOMAS), to perform formation control for multiple teams of mobile robots with the capabilities and advantages of scalability and autonomy. VOMAS is a hybrid architecture with two main agents. The virtual operator agent handles high level missions and team control, and the robot agent deals with low level formation control. The virtual operator uses four basic services including join, remove, split, and merge requests to perform multi-team control. A new robot can be easily added to a team by cloning a new virtual operator to control it. The robot agent uses a simple formation representation method to show formation to a large number of robots, and it uses the concept of potential field and behavior-based control to perform kinematic control to keep formation both in holonomic and nonholonomic mobile robots. In addition, we also test the stability, robustness, and uncertainty in the simulation.", "In this paper, we study self-organized flocking in a swarm of mobile robots. We present Kobot, a mobile robot platform developed specifically for swarm robotic studies. We describe its infrared-based short range sensing system, capable of measuring the distance from obstacles and detecting kin robots, and a novel sensing system called the virtual heading system (VHS) which uses a digital compass and a wireless communication module for sensing the relative headings of neighboring robots.", "In swarm robotics, it is necessary to develop methods and strategies that guide the collective execution of tasks by the robots. 
The design of such tasks can be done considering it as a collection of simpler behaviors, called subtasks. In this paper, the Wave Swarm is presented as a general strategy to manage the sequence of subtasks that compose the collective navigation, which is an important task in swarm robotics. The proposed strategy is based mainly on the execution of wave algorithms. The swarm is viewed as a distributed system, wherein the communication is achieved by message passing among robot's neighborhood. Message propagation delimits the start and end of each subtask. Simulations are performed to demonstrate that controlled navigation of robot swarms clusters is achieved with three subtasks, which are recruitment, alignment and movement.", "A novel technique is presented for organizing swarms of robots into formation utilizing artificial potential fields generated from normal and sigmoid functions. These functions construct the surface swarm members travel on, controlling the overall swarm geometry and the individual member spacing. Limiting functions are defined to provide tighter swarm control by modifying and adjusting a set of control variables forcing the swarm to behave according to set constraints, formation and member spacing. The swarm function and limiting functions are combined to control swarm formation, orientation, and swarm movement as a whole. Parameters are chosen based on desired formation as well as user defined constraints. This approach compared to others, is simple, computationally efficient, scales well to different swarm sizes, to heterogeneous systems, and to both centralized and decentralized swarm models. Simulation results are presented for a swarm of four and ten particles following circle, ellipse and wedge formations. Experimental results are also included with four unmanned ground vehicles (UGV).", "In this note we make a minor correction to a scheme for robots to broadcast their private information. All major results of the paper [I. 
Suzuki and M. Yamashita, SIAM J. Comput., 28 (1999), pp. 1347-1363] hold with this correction.", "In flocking, a swarm of robots moves cohesively in a common direction. Traditionally, flocking is realized using two main control rules: proximal control, which controls the cohesion of the swarm using local range-and bearing information about neighboring robots; and alignment control, which allows the robots to align in a common direction and uses more elaborate sensing mechanisms to obtain the orientation of neighboring robots. So far, limited attention has been given to motion control, used to translate the output of these two control rules into robot motion. In this paper, we propose a novel motion control method: magnitude-dependent motion control (MDMC). Through simulations and real robot experiments, we show that, with MDMC, flocking in a random direction is possible without the need for alignment control and for robots having a preferred direction of travel. MDMC has the advantage to be implementable on very simple robots that lack the capability to detect the orientation of their neighbors. In addition, we introduce a small proportion of robots informed about a desired direction of travel. We compare MDMC with a motion control method used in previous robotics literature, which we call magnitude-independent motion control (MIMC), and we show that the swarms can travel longer distances in the desired direction when using MDMC instead of MIMC. Finally, we systematically study flocking under various conditions: with or without alignment control, with or without informed robots, with MDMC or with MIMC." ] }
1903.06440
2921460706
A unified mathematical model for synchronisation and swarming has recently been proposed. Each system entity, called a "swarmalator", coordinates its internal phase and location with the other entities in a way that these two attributes are mutually coupled. This paper realises and studies, for the first time, the concept of swarmalators in a technical system. We adapt and extend the original model for its use with mobile robots and implement it in the Robot Operating System 2 (ROS 2). Simulations and experiments with small robots demonstrate the feasibility of the model and show its potential to be applied to real-world systems. All types of space-time patterns achieved in theory can be reproduced in practice. Applications can be found in monitoring, exploration, entertainment and art, among other domains.
Synchronisation is often applied in technology, e.g. for slotted communication or in robotic swarms to allocate and execute tasks together. One popular algorithm in these scenarios is firefly synchronisation, since entities only have to exchange simple pulses. It can be used in wireless networks @cite_9 @cite_8 , but other, simpler communication interfaces can also be used to emit and receive pulses (e.g. sound @cite_17 and light @cite_4 ).
{ "cite_N": [ "@cite_9", "@cite_4", "@cite_17", "@cite_8" ], "mid": [ "2234682834", "2156029527", "2169337728", "2098897082" ], "abstract": [ "The convergence and precision of synchronization algorithms based on the theory of pulse-coupled oscillators is evaluated on programmable radios. Measurements in different wireless topologies show that such algorithms reach precisions in the low microsecond range. Based on the observation that phase rate deviation among radios is a limiting factor for the achievable precision, we propose a distributed algorithm for automatic phase rate equalization and show by experiments that an improved precision below one microsecond is possible in the given setups. It is also experimentally demonstrated that the stochastic nature of coupling is a key ingredient for convergence to synchrony. The proposed scheme can be applied in wireless systems for distributed synchronization of transmission slots, or sleep cycles.", "Recently, there has been growing interest in the synchronization of mobile pulse-coupled oscillators. We build on the work by (Phys. Rev. Lett. 110, 114101) and show that agents that interact exclusively with others in their cone of vision can exhibit different synchronization regimes. Depending on their speed, synchronization emerges as a slow process through spreading of the local coherence, as a fast process where global synchronization dominates, or it is inhibited for a range of intermediate speeds. In addition, we show that, not only the speed of the agents, but also their angle and range of interaction can tune the appearance of this intermediate regime.", "Self-organized synchronization is a common phenomenon observed in many natural and artificial systems: simple coupling rules at the level of the individual components of the system result in an overall coherent behavior. 
Owing to these properties, synchronization appears particularly interesting for swarm robotics systems, as it allows robust temporal coordination of the group while minimizing the complexity of the individual controllers. The goal of the experiments presented in this paper is the study of self-organizing synchronization for robots that present an individual periodic behavior. In order to design the robot controllers, we make use of artificial evolution, which proves to be capable of synthesizing minimal synchronization strategies based on the dynamical coupling between robots and environment. The obtained results are analyzed under a dynamical system perspective, which allows us to uncover the evolved mechanisms and to predict the scalability properties of the self-organizing synchronization with respect to varying group size.", "Synchronization is considered a particularly difficult task in wireless sensor networks due to its decentralized structure. Interestingly, synchrony has often been observed in networks of biological agents (e.g., synchronously flashing fireflies, or spiking of neurons). In this paper, we propose a bio-inspired network synchronization protocol for large scale sensor networks that emulates the simple strategies adopted by the biological agents. The strategy synchronizes pulsing devices that are led to emit their pulses periodically and simultaneously. The convergence to synchrony of our strategy follows from the theory of Mirollo and Strogatz, 1990, while the scalability is evident from the many examples existing in the natural world. When the nodes are within a single broadcast range, our key observation is that the dependence of the synchronization time on the number of nodes N is subject to a phase transition: for values of N beyond a specific threshold, the synchronization is nearly immediate; while for smaller N, the synchronization time decreases smoothly with respect to N. 
Interestingly, a tradeoff is observed between the total energy consumption and the time necessary to reach synchrony. We obtain an optimum operating point at the local minimum of the energy consumption curve that is associated to the phase transition phenomenon mentioned before. The proposed synchronization protocol is directly applied to the cooperative reach-back communications problem. The main advantages of the proposed method are its scalability and low complexity." ] }
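Firefly synchronisation as discussed above relies only on simple pulse exchange. Here is a minimal pulse-coupled oscillator simulation; the multiplicative phase jump and the absorption-at-threshold rule are illustrative choices in the spirit of Mirollo-Strogatz, not the exact models from the cited papers:

```python
import numpy as np

def firefly_sync(phases, eps=0.05, steps=500, dt=0.01):
    """Simulate pulse-coupled oscillators and return the final phase spread.

    Each phase ramps linearly from 0 to 1; on reaching 1 an oscillator
    fires and resets, and every other oscillator advances its phase by
    eps * phase (capped at 1). Oscillators pushed to the threshold fire
    in the same instant, so close phases get absorbed into one firing time.
    """
    phase = np.array(phases, dtype=float)
    for _ in range(steps):
        phase += dt
        # resolve chain reactions: a pulse can push others over the threshold
        while True:
            fired = phase >= 1.0
            if not fired.any():
                break
            phase[fired] = 0.0
            phase[~fired] = np.minimum(phase[~fired] * (1.0 + eps), 1.0)
    return float(phase.max() - phase.min())
```

A tight initial cluster is absorbed into a single firing event after the first pulse, which is the mechanism behind the rapid convergence reported for such schemes.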
1903.06440
2921460706
A unified mathematical model for synchronisation and swarming has recently been proposed. Each system entity, called a "swarmalator", coordinates its internal phase and location with the other entities in a way that these two attributes are mutually coupled. This paper realises and studies, for the first time, the concept of swarmalators in a technical system. We adapt and extend the original model for its use with mobile robots and implement it in the Robot Operating System 2 (ROS 2). Simulations and experiments with small robots demonstrate the feasibility of the model and show its potential to be applied to real-world systems. All types of space-time patterns achieved in theory can be reproduced in practice. Applications can be found in monitoring, exploration, entertainment and art, among other domains.
There are a few attempts to use synchronisation pulses for swarm coordination as well. In @cite_20 , a method is proposed in which robots detect faulty agents that have stopped blinking and either repair them or take over their tasks. In @cite_16 , light emitted by robots is used as a guiding signal.
{ "cite_N": [ "@cite_16", "@cite_20" ], "mid": [ "2047987986", "2111952942" ], "abstract": [ "The miniaturization of microrobots is accompanied by limitations of signaling, sensing and agility. Control of a swarm of simple microrobots has to cope with such constraints in a way which still guarantees the accomplishment of a task. A recently proposed communication method, which is based on the coupling of signal oscillators of individual agents [13], may provide a basis for a distributed control of a simulated swarm of simple microrobots (similar to I-Swarm microrobots) engaged in a cleaning scenario. This self-organized communication method was biologically inspired from males of chorusing insects which are known for the rapid synchronization of their acoustic signals in a chorus. Signal oscillator properties were used to generate waves of synchronized signaling (s-waves) among a swarm of agents. In a simulation of a cleaning scenario, agents on the dump initiated concentrically spreading s-waves by shortening their intrinsic signal period. Dirt-carrying agents localized the dump by heading against the wave front. After optimization of certain control parameters the properties of this distributed control strategy were investigated in different variants of a cleaning scenario. These include a second dump, obstacles, different agent densities, agent drop-out and a second signal oscillator.", "One of the essential benefits of swarm robotic systems is redundancy. In case one robot breaks down, another robot can take steps to repair the failed robot or take over the failed robot's task. Although fault tolerance and robustness to individual failures have often been central arguments in favor of swarm robotic systems, few studies have been dedicated to the subject. In this paper, we take inspiration from the synchronized flashing behavior observed in some species of fireflies. 
We derive a completely decentralized algorithm to detect non-operational robots in a swarm robotic system. Each robot flashes by lighting up its on-board light-emitting diodes (LEDs), and neighboring robots are driven to flash in synchrony. Since robots that are suffering catastrophic failures do not flash periodically, they can be detected by operational robots. We explore the performance of the proposed algorithm both on a real-world swarm robotic system and in simulation. We show that failed robots are detected correctly and in a timely manner, and we show that a system composed of robots with simulated self-repair capabilities can survive relatively high failure rates." ] }
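The swarmalator dynamics summarised in the abstract above (mutually coupled phase and position) can be sketched with a short Euler integration. The force and coupling terms below follow the commonly cited O'Keeffe-Hong-Strogatz formulation; the parameter values and function name are illustrative assumptions, not the paper's ROS 2 implementation:

```python
import numpy as np

def swarmalator_step(x, theta, J=1.0, K=0.5, dt=0.01):
    """One Euler step of the swarmalator model for N agents.

    x: (N, 2) positions, theta: (N,) phases. Agents attract each other
    more strongly when their phases are similar (J > 0), repel at short
    range, and couple their phases Kuramoto-style, weighted by distance.
    """
    N = len(theta)
    dx = x[None, :, :] - x[:, None, :]        # pairwise displacements (N, N, 2)
    dist = np.linalg.norm(dx, axis=-1)
    np.fill_diagonal(dist, np.inf)            # remove self-interaction terms
    dtheta = theta[None, :] - theta[:, None]

    # spatial dynamics: phase-modulated attraction minus hard-core repulsion
    attract = (1.0 + J * np.cos(dtheta))[..., None] * dx / dist[..., None]
    repel = dx / (dist ** 2)[..., None]
    v = (attract - repel).sum(axis=1) / N

    # phase dynamics: sine coupling weighted by inverse distance
    w = (K / N) * (np.sin(dtheta) / dist).sum(axis=1)
    return x + dt * v, theta + dt * w
```

Iterating this step from random initial conditions reproduces the static and splintered space-time patterns described in the swarmalator literature, depending on the signs and magnitudes of J and K.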
1903.06571
2952277062
In this paper, we introduce a new problem of manipulating a given video by inserting other videos into it. Our main task is, given an object video and a scene video, to insert the object video at a user-specified location in the scene video so that the resulting video looks realistic. We aim to handle different object motions and complex backgrounds without expensive segmentation annotations. As it is difficult to collect training pairs for this problem, we synthesize fake training pairs that can provide helpful supervisory signals when training a neural network with unpaired real data. The proposed network architecture can take both real and fake pairs as input and perform both supervised and unsupervised training in an adversarial learning scheme. To synthesize a realistic video, the network renders each frame based on the current input and previous frames. Within this framework, we observe that injecting noise into previous frames while generating the current frame stabilizes training. We conduct experiments on real-world videos in object tracking and person re-identification benchmark datasets. Experimental results demonstrate that the proposed algorithm is able to synthesize long sequences of realistic videos with a given object video inserted.
Given a pair consisting of an object image and a scene image, the ST-GAN approach @cite_7 learns a warping of the object conditioned on the scene. Based on the warping, the object is transformed to a new location without changing its appearance. As the method focuses on geometric realism, it requires a carefully segmented object as input.
{ "cite_N": [ "@cite_7" ], "mid": [ "2794080999" ], "abstract": [ "We address the problem of finding realistic geometric corrections to a foreground object such that it appears natural when composited into a background image. To achieve this, we propose a novel Generative Adversarial Network (GAN) architecture that utilizes Spatial Transformer Networks (STNs) as the generator, which we call Spatial Transformer GANs (ST-GANs). ST-GANs seek image realism by operating in the geometric warp parameter space. In particular, we exploit an iterative STN warping scheme and propose a sequential training strategy that achieves better results compared to naive training of a single generator. One of the key advantages of ST-GAN is its applicability to high-resolution images indirectly since the predicted warp parameters are transferable between reference frames. We demonstrate our approach in two applications: (1) visualizing how indoor furniture (e.g. from product images) might be perceived in a room, (2) hallucinating how accessories like glasses would look when matched with real portraits." ] }
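The ST-GAN pipeline above predicts warp parameters rather than pixels. Below is a toy nearest-neighbour version of the downstream compositing step for grayscale images and a given 2x3 affine matrix `A`; the real method learns `A` and uses a differentiable bilinear sampler:

```python
import numpy as np

def affine_composite(scene, obj, alpha, A):
    """Paste `obj` into `scene` (both 2D grayscale) under affine warp A.

    A is a 2x3 matrix mapping object coordinates (x, y, 1) to scene
    coordinates. We inverse-warp with nearest-neighbour sampling and
    alpha-blend with the object's alpha mask.
    """
    Hs, Ws = scene.shape
    out = scene.astype(float).copy()
    # invert the affine map: scene coordinates -> object coordinates
    M = np.vstack([A, [0.0, 0.0, 1.0]])
    Minv = np.linalg.inv(M)
    ys, xs = np.mgrid[0:Hs, 0:Ws]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(Hs * Ws)])
    ox, oy, _ = Minv @ pts
    ox = np.rint(ox).astype(int)
    oy = np.rint(oy).astype(int)
    inside = (ox >= 0) & (ox < obj.shape[1]) & (oy >= 0) & (oy < obj.shape[0])
    si = ys.ravel()[inside]
    sj = xs.ravel()[inside]
    a = alpha[oy[inside], ox[inside]]
    out[si, sj] = (1 - a) * out[si, sj] + a * obj[oy[inside], ox[inside]]
    return out
```

Because only the 6 affine parameters change, the same predicted warp can be re-applied at a higher resolution, which is one of the advantages the ST-GAN paper points out.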
1903.06571
2952277062
In this paper, we introduce a new problem of manipulating a given video by inserting other videos into it. Our main task is, given an object video and a scene video, to insert the object video at a user-specified location in the scene video so that the resulting video looks realistic. We aim to handle different object motions and complex backgrounds without expensive segmentation annotations. As it is difficult to collect training pairs for this problem, we synthesize fake training pairs that can provide helpful supervisory signals when training a neural network with unpaired real data. The proposed network architecture can take both real and fake pairs as input and perform both supervised and unsupervised training in an adversarial learning scheme. To synthesize a realistic video, the network renders each frame based on the current input and previous frames. Within this framework, we observe that injecting noise into previous frames while generating the current frame stabilizes training. We conduct experiments on real-world videos in object tracking and person re-identification benchmark datasets. Experimental results demonstrate that the proposed algorithm is able to synthesize long sequences of realistic videos with a given object video inserted.
Other approaches aim to insert an object by rendering its appearance. In @cite_13 , an object of a target category is inserted into a scene given the location and size of a bounding box. The method first predicts the shape of the object in the semantic space, after which an output image is generated from the predicted semantic label map and the input image. A similar approach is proposed in @cite_3 without using a semantic label map: the bounding box of a pedestrian is replaced by random noise and then infilled with a new pedestrian based on the surrounding context.
{ "cite_N": [ "@cite_13", "@cite_3" ], "mid": [ "2949808870", "2796322794" ], "abstract": [ "Understanding, reasoning, and manipulating semantic concepts of images have been a fundamental research problem for decades. Previous work mainly focused on direct manipulation on natural image manifold through color strokes, key-points, textures, and holes-to-fill. In this work, we present a novel hierarchical framework for semantic image manipulation. Key to our hierarchical framework is that we employ a structured semantic layout as our intermediate representation for manipulation. Initialized with coarse-level bounding boxes, our structure generator first creates pixel-wise semantic layout capturing the object shape, object-object interactions, and object-scene relations. Then our image generator fills in the pixel-level textures guided by the semantic layout. Such framework allows a user to manipulate images at object-level by adding, removing, and moving one bounding box at a time. Experimental evaluations demonstrate the advantages of the hierarchical manipulation framework over existing image generation and context hole-filing models, both qualitatively and quantitatively. Benefits of the hierarchical framework are further demonstrated in applications such as semantic object manipulation, interactive image editing, and data-driven image manipulation.", "State-of-the-art pedestrian detection models have achieved great success in many benchmarks. However, these models require lots of annotation information and the labeling process usually takes much time and efforts. In this paper, we propose a method to generate labeled pedestrian data and adapt them to support the training of pedestrian detectors. The proposed framework is built on the Generative Adversarial Network (GAN) with multiple discriminators, trying to synthesize realistic pedestrians and learn the background context simultaneously. 
To handle the pedestrians of different sizes, we adopt the Spatial Pyramid Pooling (SPP) layer in the discriminator. We conduct experiments on two benchmarks. The results show that our framework can smoothly synthesize pedestrians on background images of variations and different levels of details. To quantitatively evaluate our approach, we add the generated samples into training data of the baseline pedestrian detectors and show the synthetic images are able to improve the detectors' performance." ] }
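The pedestrian-synthesis setup above first destroys the box region before the network infills it from context. A minimal sketch of that preprocessing step follows; the uniform-noise choice and function name are assumptions, not details from the cited paper:

```python
import numpy as np

def mask_box_with_noise(img, box, rng=None):
    """Replace a bounding-box region of a 2D image with uniform noise.

    box is (y0, x0, y1, x1) with exclusive upper bounds. The surrounding
    pixels are left untouched, so an infilling network sees only context.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    y0, x0, y1, x1 = box
    out = img.astype(float).copy()
    out[y0:y1, x0:x1] = rng.uniform(0.0, 1.0, (y1 - y0, x1 - x0))
    return out
```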
1903.06571
2952277062
In this paper, we introduce a new problem of manipulating a given video by inserting other videos into it. Our main task is, given an object video and a scene video, to insert the object video at a user-specified location in the scene video so that the resulting video looks realistic. We aim to handle different object motions and complex backgrounds without expensive segmentation annotations. As it is difficult to collect training pairs for this problem, we synthesize fake training pairs that can provide helpful supervisory signals when training a neural network with unpaired real data. The proposed network architecture can take both real and fake pairs as input and perform both supervised and unsupervised training in an adversarial learning scheme. To synthesize a realistic video, the network renders each frame based on the current input and previous frames. Within this framework, we observe that injecting noise into previous frames while generating the current frame stabilizes training. We conduct experiments on real-world videos in object tracking and person re-identification benchmark datasets. Experimental results demonstrate that the proposed algorithm is able to synthesize long sequences of realistic videos with a given object video inserted.
To learn both the placement and shape of a new object, the method in @cite_27 removes existing objects from the scene using an image in-painting algorithm. Then, a network is trained to recover the removed objects. The results of this method rely significantly on whether the adopted image in-painting algorithm performs well without generating noisy pixels. This issue is alleviated in @cite_17 by learning the joint distribution of the location and shape of an object conditioned on the semantic label map. This method finds plausible locations and sizes of a bounding box by learning diverse affine transforms that warp a unit bounding box into the scene. Then, objects of different shapes are synthesized conditioned on the predicted location and its surrounding context.
{ "cite_N": [ "@cite_27", "@cite_17" ], "mid": [ "2768124800", "2950905664" ], "abstract": [ "We explore beyond object detection and semantic segmentation, and propose to address the problem of estimating the presence probabilities of nonexistent pedestrians in a street scene. Our method builds upon a combination of generative and discriminative procedures to achieve the perceptual capability of figuring out missing visual information. We adopt state-of-the-art inpainting techniques to generate the training data for nonexistent pedestrian detection. The learned detector can predict the probability of observing a pedestrian at some location in image, even if that location exhibits only the background. We evaluate our method by inserting pedestrians into images according to the presence probabilities and conducting user study to determine the 'realisticness' of synthetic images. The empirical results show that our method can capture the idea of where the reasonable places are for pedestrians to walk or stand in a street scene.", "Learning to insert an object instance into an image in a semantically coherent manner is a challenging and interesting problem. Solving it requires (a) determining a location to place an object in the scene and (b) determining its appearance at the location. Such an object insertion model can potentially facilitate numerous image editing and scene parsing applications. In this paper, we propose an end-to-end trainable neural network for the task of inserting an object instance mask of a specified class into the semantic label map of an image. Our network consists of two generative modules where one determines where the inserted object mask should be (i.e., location and scale) and the other determines what the object mask shape (and pose) should look like. The two modules are connected together via a spatial transformation network and jointly trained. We devise a learning procedure that leverage both supervised and unsupervised data and show our model can insert an object at diverse locations with various appearances. We conduct extensive experimental validations with comparisons to strong baselines to verify the effectiveness of the proposed network." ] }
1903.06571
2952277062
In this paper, we introduce a new problem of manipulating a given video by inserting other videos into it. Our main task is, given an object video and a scene video, to insert the object video at a user-specified location in the scene video so that the resulting video looks realistic. We aim to handle different object motions and complex backgrounds without expensive segmentation annotations. As it is difficult to collect training pairs for this problem, we synthesize fake training pairs that can provide helpful supervisory signals when training a neural network with unpaired real data. The proposed network architecture can take both real and fake pairs as input and perform both supervised and unsupervised training in an adversarial learning scheme. To synthesize a realistic video, the network renders each frame based on the current input and previous frames. Within this framework, we observe that injecting noise into previous frames while generating the current frame stabilizes training. We conduct experiments on real-world videos in object tracking and person re-identification benchmark datasets. Experimental results demonstrate that the proposed algorithm is able to synthesize long sequences of realistic videos with a given object video inserted.
The future frame prediction task conditions on previous frames to synthesize image content @cite_12 @cite_34 @cite_1 @cite_0 @cite_20 @cite_25 @cite_18 . Due to future uncertainty and accumulated errors in the prediction process, such methods can typically generate only short video sequences. In contrast, we synthesize long video sequences by inserting one video into another.
{ "cite_N": [ "@cite_18", "@cite_25", "@cite_1", "@cite_0", "@cite_34", "@cite_12", "@cite_20" ], "mid": [ "2963253230", "2615413256", "2952390294", "2619034550", "2400532028", "2248556341", "" ], "abstract": [ "We propose a hierarchical approach for making long-term predictions of future frames. To avoid inherent compounding errors in recursive pixel-level prediction, we propose to first estimate high-level structure in the input frames, then predict how that structure evolves in the future, and finally by observing a single frame from the past and the predicted high-level structure, we construct the future frames without having to observe any of the pixel-level predictions. Long-term video prediction is difficult to perform by recurrently observing the predicted frames because the small errors in pixel space exponentially amplify as predictions are made deeper into the future. Our approach prevents pixel-level error propagation from happening by removing the need to observe the predicted frames. Our model is built with a combination of LSTM and analogy-based encoder-decoder convolutional neural networks, which independently predict the video structure and generate the future frames, respectively. In experiments, our model is evaluated on the Human 3.6M and Penn Action datasets on the task of long-term pixel-level video prediction of humans performing actions and demonstrate significantly better results than the state-of-the-art.", "We propose a deep neural network for the prediction of future frames in natural video sequences. To effectively handle complex evolution of pixels in videos, we propose to decompose the motion and content, two key components generating dynamics in videos. Our model is built upon the Encoder-Decoder Convolutional Neural Network and Convolutional LSTM for pixel-level prediction, which independently capture the spatial layout of an image and the corresponding temporal dynamics. By independently modeling motion and content, predicting the next frame reduces to converting the extracted content features into the next frame content by the identified motion features, which simplifies the task of prediction. Our model is end-to-end trainable over multiple time steps, and naturally learns to decompose motion and content without separate training. We evaluate the proposed network architecture on human activity videos using KTH, Weizmann action, and UCF-101 datasets. We show state-of-the-art performance in comparison to recent approaches. To the best of our knowledge, this is the first end-to-end trainable network architecture with motion and content separation to model the spatiotemporal dynamics for pixel-level future prediction in natural videos.", "In a given scene, humans can often easily predict a set of immediate future events that might happen. However, generalized pixel-level anticipation in computer vision systems is difficult because machine learning struggles with the ambiguity inherent in predicting the future. In this paper, we focus on predicting the dense trajectory of pixels in a scene, specifically what will move in the scene, where it will travel, and how it will deform over the course of one second. We propose a conditional variational autoencoder as a solution to this problem. In this framework, direct inference from the image shapes the distribution of possible trajectories, while latent variables encode any necessary information that is not available in the image. We show that our method is able to successfully predict events in a wide variety of scenes and can produce multiple different predictions when the future is ambiguous. Our algorithm is trained on thousands of diverse, realistic videos and requires absolutely no human labeling. In addition to non-semantic action prediction, we find that our method learns a representation that is applicable to semantic vision tasks.", "We present a new model DrNET that learns disentangled image representations from video. Our approach leverages the temporal coherence of video and a novel adversarial loss to learn a representation that factorizes each frame into a stationary part and a temporally varying component. The disentangled representation can be used for a range of tasks. For example, applying a standard LSTM to the time-vary components enables prediction of future frames. We evaluate our approach on a range of synthetic and real videos, demonstrating the ability to coherently generate hundreds of steps into the future.", "A core challenge for an agent learning to interact with the world is to predict how its actions affect objects in its environment. Many existing methods for learning the dynamics of physical interactions require labeled object information. However, to scale real-world interaction learning to a variety of scenes and objects, acquiring labeled data becomes increasingly impractical. To learn about physical object motion without labels, we develop an action-conditioned video prediction model that explicitly models pixel motion, by predicting a distribution over pixel motion from previous frames. Because our model explicitly predicts motion, it is partially invariant to object appearance, enabling it to generalize to previously unseen objects. To explore video prediction for real-world interactive agents, we also introduce a dataset of 59,000 robot interactions involving pushing motions, including a test set with novel objects. In this dataset, accurate prediction of videos conditioned on the robot's future actions amounts to learning a \"visual imagination\" of different futures based on different courses of action. Our experiments show that our proposed method produces more accurate video predictions both quantitatively and qualitatively, when compared to prior methods.", "Learning to predict future images from a video sequence involves the construction of an internal representation that models the image evolution accurately, and therefore, to some degree, its content and dynamics. This is why pixel-space video prediction may be viewed as a promising avenue for unsupervised feature learning. In addition, while optical flow has been a very studied problem in computer vision for a long time, future frame prediction is rarely approached. Still, many vision applications could benefit from the knowledge of the next frames of videos, that does not require the complexity of tracking every pixel trajectories. In this work, we train a convolutional network to generate future frames given an input sequence. To deal with the inherently blurry predictions obtained from the standard Mean Squared Error (MSE) loss function, we propose three different and complementary feature learning strategies: a multi-scale architecture, an adversarial training method, and an image gradient difference loss function. We compare our predictions to different published results based on recurrent neural networks on the UCF101 dataset", "" ] }
1903.06571
2952277062
In this paper, we introduce a new problem of manipulating a given video by inserting other videos into it. Our main task is, given an object video and a scene video, to insert the object video at a user-specified location in the scene video so that the resulting video looks realistic. We aim to handle different object motions and complex backgrounds without expensive segmentation annotations. As it is difficult to collect training pairs for this problem, we synthesize fake training pairs that can provide helpful supervisory signals when training a neural network with unpaired real data. The proposed network architecture can take both real and fake pairs as input and perform both supervised and unsupervised training in an adversarial learning scheme. To synthesize a realistic video, the network renders each frame based on the current input and previous frames. Within this framework, we observe that injecting noise into previous frames while generating the current frame stabilizes training. We conduct experiments on real-world videos in object tracking and person re-identification benchmark datasets. Experimental results demonstrate that the proposed algorithm is able to synthesize long sequences of realistic videos with a given object video inserted.
The dynamic content and textures in a video can also be used for conditional video synthesis. In @cite_15 , dynamic textures in a video, such as water flow or fire flames, are captured by learning a two-stream network. The work then animates an input image into a video with realistic dynamic motions. The artistic style of a video can also be transferred to edit a target video while preserving its content @cite_19 @cite_8 .
{ "cite_N": [ "@cite_19", "@cite_15", "@cite_8" ], "mid": [ "2740546229", "", "2748043525" ], "abstract": [ "Recent research endeavors have shown the potential of using feed-forward convolutional neural networks to accomplish fast style transfer for images. In this work, we take one step further to explore the possibility of exploiting a feed-forward network to perform style transfer for videos and simultaneously maintain temporal consistency among stylized video frames. Our feed-forward network is trained by enforcing the outputs of consecutive frames to be both well stylized and temporally consistent. More specifically, a hybrid loss is proposed to capitalize on the content information of input frames, the style information of a given style image, and the temporal information of consecutive frames. To calculate the temporal loss during the training stage, a novel two-frame synergic training mechanism is proposed. Compared with directly applying an existing image style transfer method to videos, our proposed method employs the trained network to yield temporally consistent stylized videos which are much more visually pleasant. In contrast to the prior video style transfer method which relies on time-consuming optimization on the fly, our method runs in real time while generating competitive visual results.", "", "Manually re-drawing an image in a certain artistic style takes a professional artist a long time. Doing this for a video sequence single-handedly is beyond imagination. We present two computational approaches that transfer the style from one image (for example, a painting) to a whole video sequence. In our first approach, we adapt to videos the original image style transfer technique based on energy minimization. We introduce new ways of initialization and new loss functions to generate consistent and stable stylized video sequences even in cases with large motion and strong occlusion. Our second approach formulates video stylization as a learning problem. We propose a deep network architecture and training procedures that allow us to stylize arbitrary-length videos in a consistent and stable way, and nearly in real time. We show that the proposed methods clearly outperform simpler baselines both qualitatively and quantitatively. Finally, we propose a way to adapt these approaches also to 360° images and videos as they emerge with recent virtual reality hardware." ] }
1903.06571
2952277062
In this paper, we introduce a new problem of manipulating a given video by inserting other videos into it. Our main task is, given an object video and a scene video, to insert the object video at a user-specified location in the scene video so that the resulting video looks realistic. We aim to handle different object motions and complex backgrounds without expensive segmentation annotations. As it is difficult to collect training pairs for this problem, we synthesize fake training pairs that can provide helpful supervisory signals when training a neural network with unpaired real data. The proposed network architecture can take both real and fake pairs as input and perform both supervised and unsupervised training in an adversarial learning scheme. To synthesize a realistic video, the network renders each frame based on the current input and previous frames. Within this framework, we observe that injecting noise into previous frames while generating the current frame stabilizes training. We conduct experiments on real-world videos in object tracking and person re-identification benchmark datasets. Experimental results demonstrate that the proposed algorithm is able to synthesize long sequences of realistic videos with a given object video inserted.
For more generic video-to-video translation, the scheme in @cite_23 formulates conditional generative adversarial networks (GANs) to synthesize photorealistic videos given a sequence of semantic label maps, sketches, or human poses as input. During training, the network takes paired data as input: sequences of semantic label maps and the corresponding RGB image sequences. The network is constrained to preserve the content of the input sequence in the output video.
{ "cite_N": [ "@cite_23" ], "mid": [ "2886748926" ], "abstract": [ "We study the problem of video-to-video synthesis, whose goal is to learn a mapping function from an input source video (e.g., a sequence of semantic segmentation masks) to an output photorealistic video that precisely depicts the content of the source video. While its image counterpart, the image-to-image synthesis problem, is a popular topic, the video-to-video synthesis problem is less explored in the literature. Without understanding temporal dynamics, directly applying existing image synthesis approaches to an input video often results in temporally incoherent videos of low visual quality. In this paper, we propose a novel video-to-video synthesis approach under the generative adversarial learning framework. Through carefully-designed generator and discriminator architectures, coupled with a spatio-temporal adversarial objective, we achieve high-resolution, photorealistic, temporally coherent video results on a diverse set of input formats including segmentation masks, sketches, and poses. Experiments on multiple benchmarks show the advantage of our method compared to strong baselines. In particular, our model is capable of synthesizing 2K resolution videos of street scenes up to 30 seconds long, which significantly advances the state-of-the-art of video synthesis. Finally, we apply our approach to future video prediction, outperforming several state-of-the-art competing systems." ] }
1903.06348
2963295068
When lambda expressions were introduced to the Java programming language as part of the release of Java 8 in 2014, they were the language's first step into functional programming. Since lambda expressions are still relatively new, not all developers use or understand them. In this paper, we first present the results of an empirical study to determine how frequently developers of GitHub repositories make use of lambda expressions and how they are documented. We find that 11% of Java GitHub repositories use lambda expressions, and that only 6% of the lambda expressions are accompanied by source code comments. We then present a tool called LambdaDoc which can automatically detect lambda expressions in a Java repository and generate natural language documentation for them. Our evaluation of LambdaDoc with 23 professional developers shows that they perceive the generated documentation to be complete, concise, and expressive, while the majority of the documentation produced by our participants without tool support was inadequate. Our contribution builds an important step towards automatically generating documentation for functional programming constructs in an object-oriented language.
After the introduction of lambda expressions, prior work started to investigate their use and impact. One study found that using lambda expressions in C++ has a negative impact on the programming speed of inexperienced users @cite_16 . To encourage developers to adopt new language features, Khatchadourian and Masuhara submitted pull requests introducing language features to open source projects @cite_0 . Another study investigated the adoption of lambda expressions in 241 Java open source projects and found that projects migrate to lambda expressions by converting classes to lambda expressions, replacing loops and conditionals with streams, and enhancing functionality by wrapping existing code in lambda expressions @cite_2 . Complementing the prior work, in this work, we found that exception handling is a common purpose of using lambda expressions in Java and that lambda expressions are usually implicit, single-line, and have one parameter.
{ "cite_N": [ "@cite_0", "@cite_16", "@cite_2" ], "mid": [ "2785588997", "2358692116", "2764146461" ], "abstract": [ "Programming languages and platforms improve over time, sometimes resulting in new language features that offer many benefits. However, despite these benefits, developers may not always be willing to adopt them in their projects for various reasons. In this paper, we describe an empirical study where we assess the adoption of a particular new language feature. Studying how developers use (or do not use) new language features is important in programming language research and engineering because it gives designers insight into the usability of the language to create meaning programs in that language. This knowledge, in turn, can drive future innovations in the area. Here, we explore Java 8 default methods, which allow interfaces to contain (instance) method implementations. Default methods can ease interface evolution, make certain ubiquitous design patterns redundant, and improve both modularity and maintainability. A focus of this work is to discover, through a scientific approach and a novel technique, situations where developers found these constructs useful and where they did not, and the reasons for each. Although several studies center around assessing new language features, to the best of our knowledge, this kind of construct has not been previously considered. Despite their benefits, we found that developers did not adopt default methods in all situations. Our study consisted of submitting pull requests introducing the language feature to 19 real-world, open source Java projects without altering original program semantics. This novel assessment technique is proactive in that the adoption was driven by an automatic refactoring approach rather than waiting for developers to discover and integrate the feature themselves. In this way, we set forth best practices and patterns of using the language feature effectively earlier rather than later and are able to possibly guide (near) future language evolution. We foresee this technique to be useful in assessing other new language features, design patterns, and other programming idioms.", "Lambdas have seen increasing use in mainstream programming languages, notably in Java 8 and C++ 11. While the technical aspects of lambdas are known, we conducted the first randomized controlled trial on the human factors impact of C++ 11 lambdas compared to iterators. Because there has been recent debate on having students or professionals in experiments, we recruited undergraduates across the academic pipeline and professional programmers to evaluate these findings in a broader context. Results afford some doubt that lambdas benefit developers and show evidence that students are negatively impacted in regard to how quickly they can write correct programs to a test specification and whether they can complete a task. Analysis from log data shows that participants spent more time with compiler errors, and have more errors, when using lambdas as compared to iterators, suggesting difficulty with the syntax chosen for C++. Finally, experienced users were more likely to complete tasks, with or without lambdas, and could do so more quickly, with experience as a factor explaining 45.7% of the variance in our sample in regard to completion time.", "Java 8 retrofitted lambda expressions, a core feature of functional programming, into a mainstream object-oriented language with an imperative paradigm. However, we do not know how Java developers have adapted to the functional style of thinking, and more importantly, what are the reasons motivating Java developers to adopt functional programming. Without such knowledge, researchers miss opportunities to improve the state of the art, tool builders use unrealistic assumptions, language designers fail to improve upon their designs, and developers are unable to explore efficient and effective use of lambdas. We present the first large-scale, quantitative and qualitative empirical study to shed light on how imperative programmers use lambda expressions as a gateway into functional thinking. Particularly, we statically scrutinize the source code of 241 open-source projects with 19,770 contributors, to study the characteristics of 100,540 lambda expressions. Moreover, we investigate the historical trends and adoption rates of lambdas in the studied projects. To get a complementary perspective, we seek the underlying reasons on why developers introduce lambda expressions, by surveying 97 developers who are introducing lambdas in their projects, using the firehouse interview method. Among others, our findings revealed an increasing trend in the adoption of lambdas in Java: in 2016, the ratio of lambdas introduced per added line of code increased by 54% compared to 2015. Lambdas were used for various reasons, including but not limited to (i) making existing code more succinct and readable, (ii) avoiding code duplication, and (iii) simulating lazy evaluation of functions. Interestingly, we found out that developers are using Java's built-in functional interfaces inefficiently, i.e., they prefer to use general functional interfaces over the specialized ones, overlooking the performance overheads that might be imposed. Furthermore, developers are not adopting techniques from functional programming, e.g., currying. Finally, we present the implications of our findings for researchers, tool builders, language designers, and developers." ] }
1903.06348
2963295068
When lambda expressions were introduced to the Java programming language as part of the release of Java 8 in 2014, they were the language's first step into functional programming. Since lambda expressions are still relatively new, not all developers use or understand them. In this paper, we first present the results of an empirical study to determine how frequently developers of GitHub repositories make use of lambda expressions and how they are documented. We find that 11% of Java GitHub repositories use lambda expressions, and that only 6% of the lambda expressions are accompanied by source code comments. We then present a tool called LambdaDoc which can automatically detect lambda expressions in a Java repository and generate natural language documentation for them. Our evaluation of LambdaDoc with 23 professional developers shows that they perceive the generated documentation to be complete, concise, and expressive, while the majority of the documentation produced by our participants without tool support was inadequate. Our contribution builds an important step towards automatically generating documentation for functional programming constructs in an object-oriented language.
While software documentation makes it easier for developers to comprehend software artefacts, manually writing documentation is a tedious task for developers. Several studies empirically investigate the essentials of software documentation. One survey showed that developers perceive source code comments to be the second most important software artefact in Agile software development @cite_9 . Another study found that developers spend effort on maintaining API documentation, e.g., on literal polishes @cite_5 . A further study found that newly added code was rarely documented and that a source code comment was often changed along with the associated source code @cite_8 . Prior work also reported that while developers need documentation to understand unit test cases, a large proportion of C# projects on GitHub lacked comments for unit test cases @cite_4 . Moreover, it has been found that neglecting to update a comment increases the probability of having future defects in a software system @cite_10 . In this paper, we empirically investigated the source code comments accompanying lambda expressions and found that only 6% of lambda expressions are accompanied by source code comments.
{ "cite_N": [ "@cite_4", "@cite_8", "@cite_9", "@cite_5", "@cite_10" ], "mid": [ "2504197999", "1967995512", "2133890582", "2148469619", "2151228786" ], "abstract": [ "Maintaining unit test cases is important during the maintenance and evolution of a software system. In particular, automatically documenting these unit test cases can ameliorate the burden on developers maintaining them. For instance, by relying on up-to-date documentation, developers can more easily identify test cases that relate to some new or modified functionality of the system. We surveyed 212 developers (both industrial and open-source) to understand their perspective towards writing, maintaining, and documenting unit test cases. In addition, we mined change histories of C# software systems and empirically found that unit test methods seldom had preceding comments and infrequently had inner comments, and both were rarely modified as those methods were modified. In order to support developers in maintaining unit test cases, we propose a novel approach -- UnitTestScribe -- that combines static analysis, natural language processing, backward slicing, and code summarization techniques to automatically generate natural language documentation of unit test cases. We evaluated UnitTestScribe on four subject systems by means of an online survey with industrial developers and graduate students. In general, participants indicated that UnitTestScribe descriptions are complete, concise, and easy to read.", "Comments are valuable especially for program understanding and maintenance, but do developers comment their code? To which extent do they add comments or adapt them when they evolve the code? We examine the question whether source code and associated comments are really changed together along the evolutionary history of a software system. In this paper, we describe an approach to map code and comments to observe their co-evolution over multiple versions. We investigated three open source systems (i.e., ArgoUML, Azureus, and JDT core) and describe how comments and code co-evolved over time. Some of our findings show that: 1) newly added code - despite its growth rate - barely gets commented; 2) class and method declarations are commented most frequently but far less, for example, method calls; and 3) that 97% of comment changes are done in the same revision as the associated source code change.", "Software engineering has been striving for years to improve the practice of software development and maintenance. Documentation has long been prominent on the list of recommended practices to improve development and help maintenance. Recently however, agile methods started to shake this view, arguing that the goal of the game is to produce software and that documentation is only useful as long as it helps to reach this goal. On the other hand, in the re-engineering field, people wish they could re-document useful legacy software so that they may continue maintain them or migrate them to new platform. In these two case, a crucial question arises: \"How much documentation is enough?\" In this article, we present the results of a survey of software maintainers to try to establish what documentation artifacts are the most useful to them.", "With the evolution of an API library, its documentation also evolves. The evolution of API documentation is common knowledge for programmers and library developers, but not in a quantitative form. Without such quantitative knowledge, programmers may neglect important revisions of API documentation, and library developers may not effectively improve API documentation based on its revision histories. There is a strong need to conduct a quantitative study on API documentation evolution. However, as API documentation is large in size and revisions can be complicated, it is quite challenging to conduct such a study. In this paper, we present an analysis methodology to analyze the evolution of API documentation. Based on the methodology, we conduct a quantitative study on API documentation evolution of five widely used real-world libraries. The results reveal various valuable findings, and these findings allow programmers and library developers to better understand API documentation evolution.", "When changing source code, developers sometimes update the associated comments of the code (a consistent update), while at other times they do not (an inconsistent update). Similarly, developers sometimes only update a comment without its associated code (an inconsistent update). The relationship of such comment update practices and software bugs has never been explored empirically. While some (in)consistent updates might be harmless, software engineering folklore warns of the risks of inconsistent updates between code and comments, because these updates are likely to lead to out-of-date comments, which in turn might mislead developers and cause the introduction of bugs in the future. In this paper, we study comment update practices in three large open-source systems written in C (FreeBSD and PostgreSQL) and Java (Eclipse). We find that these practices can better explain and predict future bugs than other indicators like the number of prior bugs or changes. Our findings suggest that inconsistent changes are not necessarily correlated with more bugs. Instead, a change in which a function and its comment are suddenly updated inconsistently, whereas they are usually updated consistently (or vice versa), is risky (high probability of introducing a bug) and should be reviewed carefully by practitioners." ] }
1903.06500
2921468572
We propose a novel ranking model that combines the Bradley-Terry-Luce probability model with a nonnegative matrix factorization framework to model and uncover the presence of latent variables that influence the performance of top tennis players. We derive an efficient, provably convergent, and numerically stable majorization-minimization-based algorithm to maximize the likelihood of datasets under the proposed statistical model. The model is tested on datasets involving the outcomes of matches between 20 top male and female tennis players over 14 major tournaments for men (including the Grand Slams and the ATP Masters 1000) and 16 major tournaments for women over the past 10 years. Our model automatically infers that the surface of the court (e.g., clay or hard court) is a key determinant of the performances of male players, but less so for females. Top players on various surfaces over this longitudinal period are also identified in an objective manner.
The paper most closely related to the present one is @cite_5 in which a topic modelling approach was used for ranking. However, unlike our work in which continuous-valued skill levels in @math are inferred, permutations (i.e., discrete objects) and their corresponding mixture weights were learned. We opine that our model and results provide a more nuanced and quantitative view of the relative skill levels between players under different latent conditions.
{ "cite_N": [ "@cite_5" ], "mid": [ "2963284053" ], "abstract": [ "We propose a topic modeling approach to the prediction of preferences in pairwise comparisons. We develop a new generative model for pairwise comparisons that accounts for multiple shared latent rankings that are prevalent in a population of users. This new model also captures inconsistent user behavior in a natural way. We show how the estimation of latent rankings in the new generative model can be formally reduced to the estimation of topics in a statistically equivalent topic modeling problem. We leverage recent advances in the topic modeling literature to develop an algorithm that can learn shared latent rankings with provable consistency as well as sample and computational complexity guarantees. We demonstrate that the new approach is empirically competitive with the current state-of-the-art approaches in predicting preferences on some semi-synthetic and real world datasets." ] }
1903.06593
2921375082
We propose a new bottom-up method for multi-person 2D human pose estimation that is particularly well suited for urban mobility such as self-driving cars and delivery robots. The new method, PifPaf, uses a Part Intensity Field (PIF) to localize body parts and a Part Association Field (PAF) to associate body parts with each other to form full human poses. Our method outperforms previous methods at low resolution and in crowded, cluttered and occluded scenes thanks to (i) our new composite field PAF encoding fine-grained information and (ii) the choice of Laplace loss for regressions which incorporates a notion of uncertainty. Our architecture is based on a fully convolutional, single-shot, box-free design. We perform on par with the existing state-of-the-art bottom-up method on the standard COCO keypoint task and produce state-of-the-art results on a modified COCO keypoint task for the transportation domain.
Over the past years, state-of-the-art methods for pose estimation are based on Convolutional Neural Networks @cite_1 @cite_37 @cite_3 @cite_25 . They outperform traditional methods based on pictorial structures @cite_30 @cite_24 @cite_36 and deformable part models @cite_13 . The deep learning tsunami started with DeepPose @cite_31 that uses a cascade of convolutional networks for full-body pose estimation. Then, instead of predicting absolute human joint locations, some works refine pose estimates by predicting error feedback (i.e., corrections) at each iteration @cite_40 @cite_6 or using a human pose refinement network to exploit dependencies between input and output spaces @cite_27 . There is now an arms race towards proposing alternative neural network architectures: from convolutional pose machines @cite_42 , stacked hourglass networks @cite_41 @cite_35 , to recurrent networks @cite_0 , and voting schemes such as @cite_2 .
{ "cite_N": [ "@cite_30", "@cite_35", "@cite_37", "@cite_36", "@cite_41", "@cite_42", "@cite_1", "@cite_3", "@cite_6", "@cite_24", "@cite_0", "@cite_40", "@cite_27", "@cite_2", "@cite_31", "@cite_13", "@cite_25" ], "mid": [ "2030536784", "", "2897455116", "2074587583", "2950762923", "2255781698", "", "2952819818", "2303693074", "2143478373", "2363162442", "1537698211", "", "2950735449", "2113325037", "2168356304", "" ], "abstract": [ "In this paper we present a computationally efficient framework for part-based modeling and recognition of objects. Our work is motivated by the pictorial structure models introduced by Fischler and Elschlager. The basic idea is to represent an object by a collection of parts arranged in a deformable configuration. The appearance of each part is modeled separately, and the deformable configuration is represented by spring-like connections between pairs of parts. These models allow for qualitative descriptions of visual appearance, and are suitable for generic recognition problems. We address the problem of using pictorial structure models to find instances of an object in an image as well as the problem of learning an object model from training examples, presenting efficient algorithms in both cases. We demonstrate the techniques by learning models that represent faces and human bodies and using the resulting models to locate the corresponding objects in novel images.", "", "We propose a novel method for multi-person 2D pose estimation. Our model zooms in the image gradually, which we refer to as the Magnify-Net, to solve the bottleneck problem of mean average precision (mAP) versus pixel error. Moreover, we squeeze the network efficiently by an inspired design that increases the mAP while saving the processing time. It is a simple, yet robust, bottom-up approach consisting of one stage. 
The architecture is designed to detect the part position and their association jointly via two branches of the same sequential prediction process, resulting in a remarkable performance and efficiency rise. Our method outcompetes the previous state-of-the-art results on the challenging COCO key-points task and MPII Multi-Person Dataset.", "We present a technique for estimating the spatial layout of humans in still images--the position of the head, torso and arms. The theme we explore is that once a person is localized using an upper body detector, the search for their body parts can be considerably simplified using weak constraints on position and appearance arising from that detection. Our approach is capable of estimating upper body pose in highly challenging uncontrolled images, without prior knowledge of background, clothing, lighting, or the location and scale of the person in the image. People are only required to be upright and seen from the front or the back (not side). We evaluate the stages of our approach experimentally using ground truth layout annotation on a variety of challenging material, such as images from the PASCAL VOC 2008 challenge and video frames from TV shows and feature films. We also propose and evaluate techniques for searching a video dataset for people in a specific pose. To this end, we develop three new pose descriptors and compare their classification and retrieval performance to two baselines built on state-of-the-art object detection models.", "This work introduces a novel convolutional network architecture for the task of human pose estimation. Features are processed across all scales and consolidated to best capture the various spatial relationships associated with the body. We show how repeated bottom-up, top-down processing used in conjunction with intermediate supervision is critical to improving the performance of the network. 
We refer to the architecture as a \"stacked hourglass\" network based on the successive steps of pooling and upsampling that are done to produce a final set of predictions. State-of-the-art results are achieved on the FLIC and MPII benchmarks outcompeting all recent methods.", "Pose Machines provide a sequential prediction framework for learning rich implicit spatial models. In this work we show a systematic design for how convolutional networks can be incorporated into the pose machine framework for learning image features and image-dependent spatial models for the task of pose estimation. The contribution of this paper is to implicitly model long-range dependencies between variables in structured prediction tasks such as articulated pose estimation. We achieve this by designing a sequential architecture composed of convolutional networks that directly operate on belief maps from previous stages, producing increasingly refined estimates for part locations, without the need for explicit graphical model-style inference. Our approach addresses the characteristic difficulty of vanishing gradients during training by providing a natural learning objective function that enforces intermediate supervision, thereby replenishing back-propagated gradients and conditioning the learning procedure. We demonstrate state-of-the-art performance and outperform competing methods on standard benchmarks including the MPII, LSP, and FLIC datasets.", "", "We introduce associative embedding, a novel method for supervising convolutional neural networks for the task of detection and grouping. A number of computer vision problems can be framed in this manner including multi-person pose estimation, instance segmentation, and multi-object tracking. Usually the grouping of detections is achieved with multi-stage pipelines, instead we propose an approach that teaches a network to simultaneously output detections and group assignments. 
This technique can be easily integrated into any state-of-the-art network architecture that produces pixel-wise predictions. We show how to apply this method to both multi-person pose estimation and instance segmentation and report state-of-the-art performance for multi-person pose on the MPII and MS-COCO datasets.", "We propose a viewpoint invariant model for 3D human pose estimation from a single depth image. To achieve this, our discriminative model embeds local regions into a learned viewpoint invariant feature space. Formulated as a multi-task learning problem, our model is able to selectively predict partial poses in the presence of noise and occlusion. Our approach leverages a convolutional and recurrent network architecture with a top-down error feedback mechanism to self-correct previous pose estimates in an end-to-end manner. We evaluate our model on a previously published depth dataset and a newly collected human pose dataset containing 100 K annotated depth images from extreme viewpoints. Experiments show that our model achieves competitive performance on frontal views while achieving state-of-the-art performance on alternate viewpoints.", "In this work, we address the problem of estimating 2d human pose from still images. Recent methods that rely on discriminatively trained deformable parts organized in a tree model have shown to be very successful in solving this task. Within such a pictorial structure framework, we address the problem of obtaining good part templates by proposing novel, non-linear joint regressors. In particular, we employ two-layered random forests as joint regressors. The first layer acts as a discriminative, independent body part classifier. The second layer takes the estimated class distributions of the first one into account and is thereby able to predict joint locations by modeling the interdependence and co-occurrence of the parts. 
This results in a pose estimation framework that takes dependencies between body parts already for joint localization into account and is thus able to circumvent typical ambiguities of tree structures, such as for legs and arms. In the experiments, we demonstrate that our body parts dependent joint regressors achieve a higher joint localization accuracy than tree-based state-of-the-art methods.", "We propose a novel ConvNet model for predicting 2D human body poses in an image. The model regresses a heatmap representation for each body keypoint, and is able to learn and represent both the part appearances and the context of the part configuration. We make the following three contributions: (i) an architecture combining a feed forward module with a recurrent module, where the recurrent module can be run iteratively to improve the performance; (ii) the model can be trained end-to-end and from scratch, with auxiliary losses incorporated to improve performance; (iii) we investigate whether keypoint visibility can also be predicted. The model is evaluated on two benchmark datasets. The result is a simple architecture that achieves performance on par with the state of the art, but without the complexity of a graphical model stage (or layers).", "Hierarchical feature extractors such as Convolutional Networks (ConvNets) have achieved impressive performance on a variety of classification tasks using purely feedforward processing. Feedforward architectures can learn rich representations of the input space but do not explicitly model dependencies in the output spaces, that are quite structured for tasks such as articulated human pose estimation or object segmentation. Here we propose a framework that expands the expressive power of hierarchical feature extractors to encompass both input and output spaces, by introducing top-down feedback. 
Instead of directly predicting the outputs in one go, we use a self-correcting model that progressively changes an initial solution by feeding back error predictions, in a process we call Iterative Error Feedback (IEF). IEF shows excellent performance on the task of articulated pose estimation in the challenging MPII and LSP benchmarks, matching the state-of-the-art without requiring ground truth scale annotation.", "", "In this paper we consider the problem of human pose estimation from a single still image. We propose a novel approach where each location in the image votes for the position of each keypoint using a convolutional neural net. The voting scheme allows us to utilize information from the whole image, rather than rely on a sparse set of keypoint locations. Using dense, multi-target votes, not only produces good keypoint predictions, but also enables us to compute image-dependent joint keypoint probabilities by looking at consensus voting. This differs from most previous methods where joint probabilities are learned from relative keypoint locations and are independent of the image. We finally combine the keypoints votes and joint probabilities in order to identify the optimal pose configuration. We show our competitive performance on the MPII Human Pose and Leeds Sports Pose datasets.", "We propose a method for human pose estimation based on Deep Neural Networks (DNNs). The pose estimation is formulated as a DNN-based regression problem towards body joints. We present a cascade of such DNN regres- sors which results in high precision pose estimates. The approach has the advantage of reasoning about pose in a holistic fashion and has a simple but yet powerful formula- tion which capitalizes on recent advances in Deep Learn- ing. 
We present a detailed empirical analysis with state-of-art or better performance on four academic benchmarks of diverse real-world images.", "We describe an object detection system based on mixtures of multiscale deformable part models. Our system is able to represent highly variable object classes and achieves state-of-the-art results in the PASCAL object detection challenges. While deformable part models have become quite popular, their value had not been demonstrated on difficult benchmarks such as the PASCAL data sets. Our system relies on new methods for discriminative training with partially labeled data. We combine a margin-sensitive approach for data-mining hard negative examples with a formalism we call latent SVM. A latent SVM is a reformulation of MI-SVM in terms of latent variables. A latent SVM is semiconvex, and the training problem becomes convex once latent information is specified for the positive examples. This leads to an iterative training algorithm that alternates between fixing latent values for positive examples and optimizing the latent SVM objective function.", "" ] }
1903.06593
2921375082
We propose a new bottom-up method for multi-person 2D human pose estimation that is particularly well suited for urban mobility such as self-driving cars and delivery robots. The new method, PifPaf, uses a Part Intensity Field (PIF) to localize body parts and a Part Association Field (PAF) to associate body parts with each other to form full human poses. Our method outperforms previous methods at low resolution and in crowded, cluttered and occluded scenes thanks to (i) our new composite field PAF encoding fine-grained information and (ii) the choice of Laplace loss for regressions which incorporates a notion of uncertainty. Our architecture is based on a fully convolutional, single-shot, box-free design. We perform on par with the existing state-of-the-art bottom-up method on the standard COCO keypoint task and produce state-of-the-art results on a modified COCO keypoint task for the transportation domain.
Bottom-up methods include the pioneering work by Pishchulin with DeepCut @cite_16 and Insafutdinov with DeeperCut @cite_39 . They solve the part association with an integer linear program which results in processing times for a single image of the order of hours. Later works accelerate the inference time @cite_7 and broaden the applications to track animal behavior @cite_44 . Other methods drastically reduce inference time by using greedy decoders in combination with additional tools as in Part Affinity Fields @cite_37 , Associative Embedding @cite_3 and PersonLab @cite_25 . Recently, MultiPoseNet @cite_17 develops a multi-task learning architecture combining detection, segmentation and pose estimation for people.
{ "cite_N": [ "@cite_37", "@cite_7", "@cite_3", "@cite_39", "@cite_44", "@cite_16", "@cite_25", "@cite_17" ], "mid": [ "2897455116", "", "2952819818", "2951478140", "", "2951256101", "", "2819476901" ], "abstract": [ "We propose a novel method for multi-person 2D pose estimation. Our model zooms in the image gradually, which we refer to as the Magnify-Net, to solve the bottleneck problem of mean average precision (mAP) versus pixel error. Moreover, we squeeze the network efficiently by an inspired design that increases the mAP while saving the processing time. It is a simple, yet robust, bottom-up approach consisting of one stage. The architecture is designed to detect the part position and their association jointly via two branches of the same sequential prediction process, resulting in a remarkable performance and efficiency rise. Our method outcompetes the previous state-of-the-art results on the challenging COCO key-points task and MPII Multi-Person Dataset.", "", "We introduce associative embedding, a novel method for supervising convolutional neural networks for the task of detection and grouping. A number of computer vision problems can be framed in this manner including multi-person pose estimation, instance segmentation, and multi-object tracking. Usually the grouping of detections is achieved with multi-stage pipelines, instead we propose an approach that teaches a network to simultaneously output detections and group assignments. This technique can be easily integrated into any state-of-the-art network architecture that produces pixel-wise predictions. We show how to apply this method to both multi-person pose estimation and instance segmentation and report state-of-the-art performance for multi-person pose on the MPII and MS-COCO datasets.", "The goal of this paper is to advance the state-of-the-art of articulated pose estimation in scenes with multiple people. To that end we contribute on three fronts. 
We propose (1) improved body part detectors that generate effective bottom-up proposals for body parts; (2) novel image-conditioned pairwise terms that allow to assemble the proposals into a variable number of consistent body part configurations; and (3) an incremental optimization strategy that explores the search space more efficiently thus leading both to better performance and significant speed-up factors. Evaluation is done on two single-person and two multi-person pose estimation benchmarks. The proposed approach significantly outperforms best known multi-person pose estimation results while demonstrating competitive performance on the task of single person pose estimation. Models and code available at this http URL", "", "This paper considers the task of articulated human pose estimation of multiple people in real world images. We propose an approach that jointly solves the tasks of detection and pose estimation: it infers the number of persons in a scene, identifies occluded body parts, and disambiguates body parts between people in close proximity of each other. This joint formulation is in contrast to previous strategies, that address the problem by first detecting people and subsequently estimating their body pose. We propose a partitioning and labeling formulation of a set of body-part hypotheses generated with CNN-based part detectors. Our formulation, an instance of an integer linear program, implicitly performs non-maximum suppression on the set of part candidates and groups them to form configurations of body parts respecting geometric and appearance constraints. Experiments on four different datasets demonstrate state-of-the-art results for both single person and multi person pose estimation. Models and code available at this http URL.", "", "In this paper, we present MultiPoseNet, a novel bottom-up multi-person pose estimation architecture that combines a multi-task model with a novel assignment method. 
MultiPoseNet can jointly handle person detection, person segmentation and pose estimation problems. The novel assignment method is implemented by the Pose Residual Network (PRN) which receives keypoint and person detections, and produces accurate poses by assigning keypoints to person instances. On the COCO keypoints dataset, our pose estimation method outperforms all previous bottom-up methods both in accuracy (+4-point mAP over previous best result) and speed; it also performs on par with the best top-down methods while being at least 4x faster. Our method is the fastest real time system with ~23 frames/sec." ] }
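The PifPaf abstracts in this section credit part of the method's robustness to "the choice of Laplace loss for regressions which incorporates a notion of uncertainty". A minimal sketch of that idea, not PifPaf's actual code, is the negative log-likelihood of a Laplace distribution whose scale b is also predicted (here via a log-scale parameter for positivity); the numeric inputs are hypothetical.

```python
import math

# Minimal sketch of a Laplace regression loss: the negative
# log-likelihood of Laplace(target | pred, b) with a predicted scale
# b = exp(log_b), so the regressor can express its own uncertainty.
# Illustration of the general idea only, not PifPaf's implementation.

def laplace_nll(pred, target, log_b):
    """NLL of a Laplace distribution: |target - pred| / b + log(2 * b)."""
    b = math.exp(log_b)
    return abs(target - pred) / b + math.log(2.0 * b)

# A confident (small-b) but wrong prediction is penalised more than an
# equally wrong prediction that admits high uncertainty (large b)...
confident_wrong = laplace_nll(0.0, 3.0, log_b=-1.0)
uncertain_wrong = laplace_nll(0.0, 3.0, log_b=1.0)
# ...while for an accurate prediction, low uncertainty gives lower loss.
confident_right = laplace_nll(0.0, 0.0, log_b=-1.0)
uncertain_right = laplace_nll(0.0, 0.0, log_b=1.0)
```

The log(2b) term prevents the trivial solution of always predicting huge uncertainty, so the loss trades off accuracy against calibrated confidence.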
1903.06593
2921375082
We propose a new bottom-up method for multi-person 2D human pose estimation that is particularly well suited for urban mobility such as self-driving cars and delivery robots. The new method, PifPaf, uses a Part Intensity Field (PIF) to localize body parts and a Part Association Field (PAF) to associate body parts with each other to form full human poses. Our method outperforms previous methods at low resolution and in crowded, cluttered and occluded scenes thanks to (i) our new composite field PAF encoding fine-grained information and (ii) the choice of Laplace loss for regressions which incorporates a notion of uncertainty. Our architecture is based on a fully convolutional, single-shot, box-free design. We perform on par with the existing state-of-the-art bottom-up method on the standard COCO keypoint task and produce state-of-the-art results on a modified COCO keypoint task for the transportation domain.
Other intermediate representations have been built on top of 2D pose estimates in the image plane, including 3D pose estimates @cite_45 , human pose estimation in videos @cite_15 and dense pose estimation @cite_34 , all of which would profit from improved 2D pose estimates.
{ "cite_N": [ "@cite_15", "@cite_45", "@cite_34" ], "mid": [ "602397586", "", "2785694322" ], "abstract": [ "The objective of this work is human pose estimation in videos, where multiple frames are available. We investigate a ConvNet architecture that is able to benefit from temporal context by combining information across the multiple frames using optical flow. To this end we propose a network architecture with the following novelties: (i) a deeper network than previously investigated for regressing heatmaps, (ii) spatial fusion layers that learn an implicit spatial model, (iii) optical flow is used to align heatmap predictions from neighbouring frames, and (iv) a final parametric pooling layer which learns to combine the aligned heatmaps into a pooled confidence map. We show that this architecture outperforms a number of others, including one that uses optical flow solely at the input layers, one that regresses joint coordinates directly, and one that predicts heatmaps without spatial fusion. The new architecture outperforms the state of the art by a large margin on three video pose estimation datasets, including the very challenging Poses in the Wild dataset, and outperforms other deep methods that don't use a graphical model on the single-image FLIC benchmark (and also [5, 35] in the high precision region).", "", "In this work, we establish dense correspondences between RGB image and a surface-based representation of the human body, a task we refer to as dense human pose estimation. We first gather dense correspondences for 50K persons appearing in the COCO dataset by introducing an efficient annotation pipeline. We then use our dataset to train CNN-based systems that deliver dense correspondence 'in the wild', namely in the presence of background, occlusions and scale variations. 
We improve our training set's effectiveness by training an 'inpainting' network that can fill in missing groundtruth values and report clear improvements with respect to the best results that would be achievable in the past. We experiment with fully-convolutional networks and region-based models and observe a superiority of the latter; we further improve accuracy through cascading, obtaining a system that delivers highly accurate results in real time. Supplementary materials and videos are provided on the project page this http URL" ] }
1903.06580
2922230028
Learning data representations that reflect the customers' creditworthiness can improve marketing campaigns, customer relationship management, data and process management or the credit risk assessment in retail banks. In this research, we adopt the Variational Autoencoder (VAE), which has the ability to learn latent representations that contain useful information. We show that it is possible to steer the latent representations in the latent space of the VAE using the Weight of Evidence and forming a specific grouping of the data that reflects the customers' creditworthiness. Our proposed method learns a latent representation of the data, which shows a well-defined clustering structure capturing the customers' creditworthiness. These clusters are well suited for the aforementioned banks' activities. Further, our methodology generalizes to new customers, captures high-dimensional and complex financial data, and scales to large data sets.
Latent representations in the VAE have also been used for predictions in a semi-supervised context. In @cite_26 , latent representations for pre-treatment and post-treatment gene expression are used to predict drug response. Their proposed model achieves higher performance relative to Ridge logistic regression @cite_3 using the original input data. In addition, PCA transformations are used in three different classifiers to predict drug responses, but their performance, in most of the experiments, is not better than that of Ridge regression and the VAE model.
{ "cite_N": [ "@cite_26", "@cite_3" ], "mid": [ "2670063815", "2047028564" ], "abstract": [ "We present two deep generative models based on Variational Autoencoders to improve the accuracy of drug response prediction. Our models, Perturbation Variational Autoencoder and its semi-supervised extension, Drug Response Variational Autoencoder (Dr.VAE), learn latent representation of the underlying gene states before and after drug application that depend on: (i) drug-induced biological change of each gene and (ii) overall treatment response outcome. Our VAE-based models outperform the current published benchmarks in the field by anywhere from 3 to 11 AUROC and 2 to 30 AUPR. In addition, we found that better reconstruction accuracy does not necessarily lead to improvement in classification accuracy and that jointly trained models perform better than models that minimize reconstruction error independently.", "In multiple regression it is shown that parameter estimates based on minimum residual sum of squares have a high probability of being unsatisfactory, if not incorrect, if the prediction vectors are not orthogonal. Proposed is an estimation procedure based on adding small positive quantities to the diagonal of X′X. Introduced is the ridge trace, a method for showing in two dimensions the effects of nonorthogonality. It is then shown how to augment X′X to obtain biased estimates with smaller mean square error." ] }
1903.06580
2922230028
Learning data representations that reflect the customers' creditworthiness can improve marketing campaigns, customer relationship management, data and process management or the credit risk assessment in retail banks. In this research, we adopt the Variational Autoencoder (VAE), which has the ability to learn latent representations that contain useful information. We show that it is possible to steer the latent representations in the latent space of the VAE using the Weight of Evidence and forming a specific grouping of the data that reflects the customers' creditworthiness. Our proposed method learns a latent representation of the data, which shows a well-defined clustering structure capturing the customers' creditworthiness. These clusters are well suited for the aforementioned banks' activities. Further, our methodology generalizes to new customers, captures high-dimensional and complex financial data, and scales to large data sets.
Speech emotion classification is another example where latent representations of the input data have been used successfully. Using Long Short-Term Memory (LSTM) networks to classify emotion, @cite_2 compare the predictive power of data transformations obtained with the VAE and with regular autoencoders. Speech emotion prediction is more accurate when the latent representations of the VAE are used as predictors. The classification results are further improved by using latent representations obtained with a conditional VAE @cite_9 .
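The latent features these classifiers consume come from the VAE's reparameterized latent code. A minimal sketch in plain Python (the linear "encoder", weight matrices, and function names are illustrative inventions, not taken from the cited works; real VAE encoders are deep networks):

```python
import math
import random

def encode(x, W_mu, W_logvar):
    """Toy linear 'encoder': maps an input vector to the mean and
    log-variance of a diagonal Gaussian over the latent code."""
    mu = [sum(w * xi for w, xi in zip(row, x)) for row in W_mu]
    logvar = [sum(w * xi for w, xi in zip(row, x)) for row in W_logvar]
    return mu, logvar

def reparameterize(mu, logvar, rng=random):
    """Sample z = mu + sigma * eps with eps ~ N(0, 1), so gradients can
    flow through mu and logvar during training."""
    return [m + math.exp(0.5 * lv) * rng.gauss(0.0, 1.0)
            for m, lv in zip(mu, logvar)]

# At classification time, the deterministic mean mu (rather than a random
# sample z) is typically used as the latent feature vector.
x = [0.5, -1.0, 2.0]
W_mu = [[0.1, 0.2, 0.0], [0.0, -0.3, 0.4]]
W_logvar = [[0.0, 0.0, 0.1], [0.2, 0.0, 0.0]]
mu, logvar = encode(x, W_mu, W_logvar)
z = reparameterize(mu, logvar)
```

A downstream classifier (LSTM, logistic regression, etc.) is then trained on the `mu` vectors.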
{ "cite_N": [ "@cite_9", "@cite_2" ], "mid": [ "2188365844", "2795642794" ], "abstract": [ "Supervised deep learning has been successfully applied to many recognition problems. Although it can approximate a complex many-to-one function well when a large amount of training data is provided, it is still challenging to model complex structured output representations that effectively perform probabilistic inference and make diverse predictions. In this work, we develop a deep conditional generative model for structured output prediction using Gaussian latent variables. The model is trained efficiently in the framework of stochastic gradient variational Bayes, and allows for fast prediction using stochastic feed-forward inference. In addition, we provide novel strategies to build robust structured prediction algorithms, such as input noise-injection and multi-scale prediction objective at training. In experiments, we demonstrate the effectiveness of our proposed algorithm in comparison to the deterministic deep neural network counterparts in generating diverse but realistic structured output predictions using stochastic inference. Furthermore, the proposed training methods are complimentary, which leads to strong pixel-level object segmentation and semantic labeling performance on Caltech-UCSD Birds 200 and the subset of Labeled Faces in the Wild dataset.", "Learning the latent representation of data in unsupervised fashion is a very interesting process that provides relevant features for enhancing the performance of a classifier. For speech emotion recognition tasks, generating effective features is crucial. Currently, handcrafted features are mostly used for speech emotion recognition, however, features learned automatically using deep learning have shown strong success in many problems, especially in image processing. In particular, deep generative models such as Variational Autoencoders (VAEs) have gained enormous success for generating features for natural images. 
Inspired by this, we propose VAEs for deriving the latent representation of speech signals and use this representation to classify emotions. To the best of our knowledge, we are the first to propose VAEs for speech emotion classification. Evaluations on the IEMOCAP dataset demonstrate that features learned by VAEs can produce state-of-the-art results for speech emotion classification." ] }
1903.06580
2922230028
Learning data representations that reflect the customers' creditworthiness can improve marketing campaigns, customer relationship management, data and process management or the credit risk assessment in retail banks. In this research, we adopt the Variational Autoencoder (VAE), which has the ability to learn latent representations that contain useful information. We show that it is possible to steer the latent representations in the latent space of the VAE using the Weight of Evidence and forming a specific grouping of the data that reflects the customers' creditworthiness. Our proposed method learns a latent representation of the data, which shows a well-defined clustering structure capturing the customers' creditworthiness. These clusters are well suited for the aforementioned banks' activities. Further, our methodology generalizes to new customers, captures high-dimensional and complex financial data, and scales to large data sets.
In another classification study, @cite_17 train logistic regression models on t-SNE @cite_15 embeddings of high-dimensional VAE latent variables to classify tumours. Their results show that the latent embeddings of the VAE capture biologically relevant information and successfully classify disease sub-types. Both @cite_2 and @cite_17 build upon the Tybalt model @cite_32 , which exploits the data transformation capabilities of the VAE to generate latent representations of gene expression data.
{ "cite_N": [ "@cite_15", "@cite_2", "@cite_32", "@cite_17" ], "mid": [ "2157444450", "2795642794", "2745167257", "2789868997" ], "abstract": [ "We describe a probabilistic approach to the task of placing objects, described by high-dimensional vectors or by pairwise dissimilarities, in a low-dimensional space in a way that preserves neighbor identities. A Gaussian is centered on each object in the high-dimensional space and the densities under this Gaussian (or the given dissimilarities) are used to define a probability distribution over all the potential neighbors of the object. The aim of the embedding is to approximate this distribution as well as possible when the same operation is performed on the low-dimensional \"images\" of the objects. A natural cost function is a sum of Kullback-Leibler divergences, one per object, which leads to a simple gradient for adjusting the positions of the low-dimensional images. Unlike other dimensionality reduction methods, this probabilistic framework makes it easy to represent each object by a mixture of widely separated low-dimensional images. This allows ambiguous objects, like the document count vector for the word \"bank\", to have versions close to the images of both \"river\" and \"finance\" without forcing the images of outdoor concepts to be located close to those of corporate concepts.", "Learning the latent representation of data in unsupervised fashion is a very interesting process that provides relevant features for enhancing the performance of a classifier. For speech emotion recognition tasks, generating effective features is crucial. Currently, handcrafted features are mostly used for speech emotion recognition, however, features learned automatically using deep learning have shown strong success in many problems, especially in image processing. In particular, deep generative models such as Variational Autoencoders (VAEs) have gained enormous success for generating features for natural images. 
Inspired by this, we propose VAEs for deriving the latent representation of speech signals and use this representation to classify emotions. To the best of our knowledge, we are the first to propose VAEs for speech emotion classification. Evaluations on the IEMOCAP dataset demonstrate that features learned by VAEs can produce state-of-the-art results for speech emotion classification.", "The Cancer Genome Atlas (TCGA) has profiled over 10,000 tumors across 33 different cancer-types for many genomic features, including gene expression levels. Gene expression measurements capture substantial information about the state of each tumor. Certain classes of deep neural network models are capable of learning a meaningful latent space. Such a latent space could be used to explore and generate hypothetical gene expression profiles under various types of molecular and genetic perturbation. For example, one might wish to use such a model to predict a tumor9s response to specific therapies or to characterize complex gene expression activations existing in differential proportions in different tumors. Variational autoencoders (VAEs) are a deep neural network approach capable of generating meaningful latent spaces for image and text data. In this work, we sought to determine the extent to which a VAE can be trained to model cancer gene expression, and whether or not such a VAE would capture biologically-relevant features. In the following report, we introduce a VAE trained on TCGA pan-cancer RNA-seq data, identify specific patterns in the VAE encoded features, and discuss potential merits of the approach. We name our method \"Tybalt\" after an instigative, cat-like character who sets a cascading chain of events in motion in Shakespeare9s \"Romeo and Juliet.\" From a systems biology perspective, Tybalt could one day aid in cancer stratification or predict specific activated expression patterns that would result from genetic changes or treatment effects.", "" ] }
1903.06580
2922230028
Learning data representations that reflect the customers' creditworthiness can improve marketing campaigns, customer relationship management, data and process management or the credit risk assessment in retail banks. In this research, we adopt the Variational Autoencoder (VAE), which has the ability to learn latent representations that contain useful information. We show that it is possible to steer the latent representations in the latent space of the VAE using the Weight of Evidence and forming a specific grouping of the data that reflects the customers' creditworthiness. Our proposed method learns a latent representation of the data, which shows a well-defined clustering structure capturing the customers' creditworthiness. These clusters are well suited for the aforementioned banks' activities. Further, our methodology generalizes to new customers, captures high-dimensional and complex financial data, and scales to large data sets.
The VAE has also been used in natural language processing. Studying bilingual word embeddings, @cite_14 use the VAE to generate latent representations that explicitly induce the underlying semantics of bilingual text. Their model is able to learn a hidden representation of paired bag-of-words sentences. Furthermore, in @cite_24 recurrent neural networks are combined with the VAE to model text data. The latent transformations are able to generate coherent sentences and, in addition, the proposed model is able to impute missing words in a text corpus.
{ "cite_N": [ "@cite_24", "@cite_14" ], "mid": [ "2210838531", "2762058688" ], "abstract": [ "The standard recurrent neural network language model (RNNLM) generates sentences one word at a time and does not work from an explicit global sentence representation. In this work, we introduce and study an RNN-based variational autoencoder generative model that incorporates distributed latent representations of entire sentences. This factorization allows it to explicitly model holistic properties of sentences such as style, topic, and high-level syntactic features. Samples from the prior over these sentence representations remarkably produce diverse and well-formed sentences through simple deterministic decoding. By examining paths through this latent space, we are able to generate coherent novel sentences that interpolate between known sentences. We present techniques for solving the difficult learning problem presented by this model, demonstrate its effectiveness in imputing missing words, explore many interesting properties of the model's latent sentence space, and present negative results on the use of the model in language modeling.", "Abstract Bilingual word embeddings (BWEs) have been shown to be useful in various cross-lingual natural language processing tasks. To accurately learn BWEs, previous studies often resort to discriminative approaches which explore semantic proximities between translation equivalents of different languages. Instead, in this paper, we propose a neural generative bilingual autoencoder (NGBAE) which introduces a latent variable to explicitly induce the underlying semantics of bilingual text. In this way, NGBAE is able to obtain better BWEs from more robust bilingual semantics by modeling the semantic distributions of bilingual text. 
In order to facilitate scalable inference and learning, we utilize deep neural networks to perform the recognition and generation procedures, and then employ stochastic gradient variational Bayes algorithm to optimize them jointly. We validate the proposed model via both extrinsic (cross-lingual document classification and translation probability modeling) and intrinsic (word embedding analysis) evaluations. Experimental results demonstrate the effectiveness of NGBAE on learning BWEs." ] }
1903.06580
2922230028
Learning data representations that reflect the customers' creditworthiness can improve marketing campaigns, customer relationship management, data and process management or the credit risk assessment in retail banks. In this research, we adopt the Variational Autoencoder (VAE), which has the ability to learn latent representations that contain useful information. We show that it is possible to steer the latent representations in the latent space of the VAE using the Weight of Evidence and forming a specific grouping of the data that reflects the customers' creditworthiness. Our proposed method learns a latent representation of the data, which shows a well-defined clustering structure capturing the customers' creditworthiness. These clusters are well suited for the aforementioned banks' activities. Further, our methodology generalizes to new customers, captures high-dimensional and complex financial data, and scales to large data sets.
Research has also been conducted on modifying the original VAE to improve the quality of the learned latent representations. In @cite_25 , for example, the authors add a hyperparameter @math to the VAE, which limits the capacity of the latent information channel and places an emphasis on learning statistically independent latent factors. Hence, the model is able to learn disentangled factors of variation.
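The modified objective described here can be written down compactly. A minimal sketch, assuming a diagonal Gaussian posterior and a standard normal prior (function names are illustrative; a real implementation would average over a minibatch):

```python
import math

def kl_to_standard_normal(mu, logvar):
    """Closed-form KL divergence KL( N(mu, diag(exp(logvar))) || N(0, I) )
    for a diagonal Gaussian posterior."""
    return -0.5 * sum(1.0 + lv - m * m - math.exp(lv)
                      for m, lv in zip(mu, logvar))

def beta_vae_loss(recon_loss, mu, logvar, beta=4.0):
    """beta-VAE objective: reconstruction term plus a beta-weighted KL
    penalty. beta > 1 constrains the latent information channel and
    encourages independent latent factors; beta = 1 recovers the
    standard VAE."""
    return recon_loss + beta * kl_to_standard_normal(mu, logvar)
```

With `beta` as the only extra knob, the trade-off between reconstruction accuracy and disentanglement can be tuned directly.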
{ "cite_N": [ "@cite_25" ], "mid": [ "2753738274" ], "abstract": [ "Learning an interpretable factorised representation of the independent data generative factors of the world without supervision is an important precursor for the development of artificial intelligence that is able to learn and reason in the same way that humans do. We introduce beta-VAE, a new state-of-the-art framework for automated discovery of interpretable factorised latent representations from raw image data in a completely unsupervised manner. Our approach is a modification of the variational autoencoder (VAE) framework. We introduce an adjustable hyperparameter beta that balances latent channel capacity and independence constraints with reconstruction accuracy. We demonstrate that beta-VAE with appropriately tuned beta > 1 qualitatively outperforms VAE (beta = 1), as well as state of the art unsupervised (InfoGAN) and semi-supervised (DC-IGN) approaches to disentangled factor learning on a variety of datasets (celebA, faces and chairs). Furthermore, we devise a protocol to quantitatively compare the degree of disentanglement learnt by different models, and show that our approach also significantly outperforms all baselines quantitatively. Unlike InfoGAN, beta-VAE is stable to train, makes few assumptions about the data and relies on tuning a single hyperparameter, which can be directly optimised through a hyper parameter search using weakly labelled data or through heuristic visual inspection for purely unsupervised data." ] }
1903.06580
2922230028
Learning data representations that reflect the customers' creditworthiness can improve marketing campaigns, customer relationship management, data and process management or the credit risk assessment in retail banks. In this research, we adopt the Variational Autoencoder (VAE), which has the ability to learn latent representations that contain useful information. We show that it is possible to steer the latent representations in the latent space of the VAE using the Weight of Evidence and forming a specific grouping of the data that reflects the customers' creditworthiness. Our proposed method learns a latent representation of the data, which shows a well-defined clustering structure capturing the customers' creditworthiness. These clusters are well suited for the aforementioned banks' activities. Further, our methodology generalizes to new customers, captures high-dimensional and complex financial data, and scales to large data sets.
In @cite_31 the concept of supervision in the VAE is introduced. The authors group the input data, aiming to learn representations that reflect the semantics behind a specific grouping of the data. In other words, the grouping makes it possible to learn a semantically useful data transformation. Similarly, @cite_21 @cite_14 use supervision, but in the latent space: both works manipulate the latent representations arithmetically to decompose them into different attributes.
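Latent-space arithmetic of this kind can be illustrated with plain vector operations: an attribute direction is estimated as the difference between the mean latent codes of two groups, and then added to a sample's code. This is only a hedged sketch of the general idea (names and numbers are invented, not the cited papers' operations):

```python
def mean_vec(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def attribute_vector(group_with, group_without):
    """Direction in latent space associated with an attribute: mean code
    of samples having the attribute minus mean code of samples lacking it."""
    mw, mo = mean_vec(group_with), mean_vec(group_without)
    return [a - b for a, b in zip(mw, mo)]

def apply_attribute(z, attr, scale=1.0):
    """Move a latent code along the attribute direction; the modified code
    is then passed through the decoder to change that attribute only."""
    return [zi + scale * ai for zi, ai in zip(z, attr)]
```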
{ "cite_N": [ "@cite_14", "@cite_31", "@cite_21" ], "mid": [ "2762058688", "2620364083", "2605762339" ], "abstract": [ "Abstract Bilingual word embeddings (BWEs) have been shown to be useful in various cross-lingual natural language processing tasks. To accurately learn BWEs, previous studies often resort to discriminative approaches which explore semantic proximities between translation equivalents of different languages. Instead, in this paper, we propose a neural generative bilingual autoencoder (NGBAE) which introduces a latent variable to explicitly induce the underlying semantics of bilingual text. In this way, NGBAE is able to obtain better BWEs from more robust bilingual semantics by modeling the semantic distributions of bilingual text. In order to facilitate scalable inference and learning, we utilize deep neural networks to perform the recognition and generation procedures, and then employ stochastic gradient variational Bayes algorithm to optimize them jointly. We validate the proposed model via both extrinsic (cross-lingual document classification and translation probability modeling) and intrinsic (word embedding analysis) evaluations. Experimental results demonstrate the effectiveness of NGBAE on learning BWEs.", "We would like to learn a representation of the data which decomposes an observation into factors of variation which we can independently control. Specifically, we want to use minimal supervision to learn a latent representation that reflects the semantics behind a specific grouping of the data, where within a group the samples share a common factor of variation. For example, consider a collection of face images grouped by identity. We wish to anchor the semantics of the grouping into a relevant and disentangled representation that we can easily exploit. However, existing deep probabilistic models often assume that the observations are independent and identically distributed. 
We present the Multi-Level Variational Autoencoder (ML-VAE), a new deep probabilistic model for learning a disentangled representation of a set of grouped observations. The ML-VAE separates the latent representation into semantically meaningful parts by working both at the group level and the observation level, while retaining efficient test-time inference. Quantitative and qualitative evaluations show that the ML-VAE model (i) learns a semantically meaningful disentanglement of grouped data, (ii) enables manipulation of the latent representation, and (iii) generalises to unseen groups.", "An ability to model a generative process and learn a latent representation for speech in an unsupervised fashion will be crucial to process vast quantities of unlabelled speech data. Recently, deep probabilistic generative models such as Variational Autoencoders (VAEs) have achieved tremendous success in modeling natural images. In this paper, we apply a convolutional VAE to model the generative process of natural speech. We derive latent space arithmetic operations to disentangle learned latent representations. We demonstrate the capability of our model to modify the phonetic content or the speaker identity for speech segments using the derived operations, without the need for parallel supervisory data." ] }
1903.06580
2922230028
Learning data representations that reflect the customers' creditworthiness can improve marketing campaigns, customer relationship management, data and process management or the credit risk assessment in retail banks. In this research, we adopt the Variational Autoencoder (VAE), which has the ability to learn latent representations that contain useful information. We show that it is possible to steer the latent representations in the latent space of the VAE using the Weight of Evidence and forming a specific grouping of the data that reflects the customers' creditworthiness. Our proposed method learns a latent representation of the data, which shows a well-defined clustering structure capturing the customers' creditworthiness. These clusters are well suited for the aforementioned banks' activities. Further, our methodology generalizes to new customers, captures high-dimensional and complex financial data, and scales to large data sets.
In this research, as in @cite_31 @cite_21 @cite_14 , we introduce a supervision stage in the VAE. In this stage, we form groups that share a common factor of variation. The difference in our method is that the grouping is derived from the class label, see Section . This means that our proposed method is a semi-supervised representation learning model in which we indirectly steer the data transformation using a specific grouping of the input data. Finally, we focus only on learning a representation of bank customers' data that captures the customers' creditworthiness in the latent space of the VAE, and not on the predictive power of such representations.
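The Weight of Evidence used to steer the grouping can be computed per attribute bin in the standard credit-scoring way. A minimal sketch (the binning itself and smoothing of empty bins are omitted; the function name is illustrative):

```python
import math

def weight_of_evidence(n_good, n_bad, total_good, total_bad):
    """WoE for one attribute bin: log of the bin's share of 'good'
    customers over its share of 'bad' customers. Positive WoE means the
    bin is dominated by good customers, negative WoE by bad ones."""
    return math.log((n_good / total_good) / (n_bad / total_bad))
```

Bins with similar WoE values can then be grouped together, yielding the label-derived grouping that shares a common factor of variation.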
{ "cite_N": [ "@cite_14", "@cite_31", "@cite_21" ], "mid": [ "2762058688", "2620364083", "2605762339" ], "abstract": [ "Abstract Bilingual word embeddings (BWEs) have been shown to be useful in various cross-lingual natural language processing tasks. To accurately learn BWEs, previous studies often resort to discriminative approaches which explore semantic proximities between translation equivalents of different languages. Instead, in this paper, we propose a neural generative bilingual autoencoder (NGBAE) which introduces a latent variable to explicitly induce the underlying semantics of bilingual text. In this way, NGBAE is able to obtain better BWEs from more robust bilingual semantics by modeling the semantic distributions of bilingual text. In order to facilitate scalable inference and learning, we utilize deep neural networks to perform the recognition and generation procedures, and then employ stochastic gradient variational Bayes algorithm to optimize them jointly. We validate the proposed model via both extrinsic (cross-lingual document classification and translation probability modeling) and intrinsic (word embedding analysis) evaluations. Experimental results demonstrate the effectiveness of NGBAE on learning BWEs.", "We would like to learn a representation of the data which decomposes an observation into factors of variation which we can independently control. Specifically, we want to use minimal supervision to learn a latent representation that reflects the semantics behind a specific grouping of the data, where within a group the samples share a common factor of variation. For example, consider a collection of face images grouped by identity. We wish to anchor the semantics of the grouping into a relevant and disentangled representation that we can easily exploit. However, existing deep probabilistic models often assume that the observations are independent and identically distributed. 
We present the Multi-Level Variational Autoencoder (ML-VAE), a new deep probabilistic model for learning a disentangled representation of a set of grouped observations. The ML-VAE separates the latent representation into semantically meaningful parts by working both at the group level and the observation level, while retaining efficient test-time inference. Quantitative and qualitative evaluations show that the ML-VAE model (i) learns a semantically meaningful disentanglement of grouped data, (ii) enables manipulation of the latent representation, and (iii) generalises to unseen groups.", "An ability to model a generative process and learn a latent representation for speech in an unsupervised fashion will be crucial to process vast quantities of unlabelled speech data. Recently, deep probabilistic generative models such as Variational Autoencoders (VAEs) have achieved tremendous success in modeling natural images. In this paper, we apply a convolutional VAE to model the generative process of natural speech. We derive latent space arithmetic operations to disentangle learned latent representations. We demonstrate the capability of our model to modify the phonetic content or the speaker identity for speech segments using the derived operations, without the need for parallel supervisory data." ] }
1903.06336
2951132905
Discrepancy between training and testing domains is a fundamental problem in the generalization of machine learning techniques. Recently, several approaches have been proposed to learn domain invariant feature representations through adversarial deep learning. However, label shift, where the percentage of data in each class is different between domains, has received less attention. Label shift naturally arises in many contexts, especially in behavioral studies where the behaviors are freely chosen. In this work, we propose a method called Domain Adversarial nets for Target Shift (DATS) to address label shift while learning a domain invariant representation. This is accomplished by using distribution matching to estimate label proportions in a blind test set. We extend this framework to handle multiple domains by developing a scheme to upweight source domains most similar to the target domain. Empirical results show that this framework performs well under large label shift in synthetic and real experiments, demonstrating the practical importance.
Recently, optimal transport has been used to analyze the problem of label shift in domain adaptation @cite_23 , but that work did not consider learning a feature extractor in conjunction with its framework. Notably, estimating the terms in optimal transport is computationally expensive, and the accuracy of fast neural-network-based approximations is not guaranteed @cite_17 . The target shift problem has also been addressed by exploiting conditional properties via confusion matrix consistency @cite_32 . That approach has not been extended to multiple domains or adapted to learn domain-invariant features. To the best of our knowledge, this is the first work that learns domain-invariant features while adjusting for target shift.
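The confusion-matrix-consistency idea behind @cite_32 (black-box shift estimation) can be sketched for the two-class case: estimate the label weights w = q(y)/p(y) by solving C w = mu, where C[i][j] = p_source(y_hat = i, y = j) and mu[i] is the fraction of predictions y_hat = i on the unlabeled target set. A hedged illustration using Cramer's rule with invented numbers (real implementations solve the general k-class linear system):

```python
def estimate_target_priors(confusion, target_pred_dist):
    """Solve C w = mu for the 2x2 case by Cramer's rule. `confusion` is
    the source joint distribution of (predicted, true) labels; the
    returned w rescales source label proportions to the target domain.
    Assumes C is invertible (the black-box predictor is informative)."""
    (a, b), (c, d) = confusion
    det = a * d - b * c
    m0, m1 = target_pred_dist
    w0 = (m0 * d - b * m1) / det
    w1 = (a * m1 - m0 * c) / det
    return [w0, w1]
```

For a perfect classifier on a balanced source (C = [[0.5, 0], [0, 0.5]]), target predictions of 70%/30% yield weights [1.4, 0.6], i.e. class 0 is 1.4 times more frequent in the target than in the source.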
{ "cite_N": [ "@cite_32", "@cite_23", "@cite_17" ], "mid": [ "2785542264", "2790341021", "" ], "abstract": [ "Faced with distribution shift between training and test set, we wish to detect and quantify the shift, and to correct our classifiers without test set labels. Motivated by medical diagnosis, where diseases (targets), cause symptoms (observations), we focus on label shift, where the label marginal @math changes but the conditional @math does not. We propose Black Box Shift Estimation (BBSE) to estimate the test distribution @math . BBSE exploits arbitrary black box predictors to reduce dimensionality prior to shift correction. While better predictors give tighter estimates, BBSE works even when predictors are biased, inaccurate, or uncalibrated, so long as their confusion matrices are invertible. We prove BBSE's consistency, bound its error, and introduce a statistical test that uses BBSE to detect shift. We also leverage BBSE to correct classifiers. Experiments demonstrate accurate estimates and improved prediction, even on high-dimensional datasets of natural images", "In this paper, we propose to tackle the problem of reducing discrepancies between multiple domains referred to as multi-source domain adaptation and consider it under the target shift assumption: in all domains we aim to solve a classification problem with the same output classes, but with labels' proportions differing across them. We design a method based on optimal transport, a theory that is gaining momentum to tackle adaptation problems in machine learning due to its efficiency in aligning probability distributions. Our method performs multi-source adaptation and target shift correction simultaneously by learning the class probabilities of the unlabeled target sample and the coupling allowing to align two (or more) probability distributions. 
Experiments on both synthetic and real-world data related to satellite image segmentation task show the superiority of the proposed method over the state-of-the-art.", "" ] }
1903.05889
2968815933
Detection and tracking of dynamic objects is a key feature for autonomous behavior in a continuously changing environment. With the increasing popularity and capability of micro aerial vehicles (MAVs) efficient algorithms have to be utilized to enable multi object tracking on limited hardware and data provided by lightweight sensors. We present a novel segmentation approach based on a combination of median filters and an efficient pipeline for detection and tracking of small objects within sparse point clouds generated by a Velodyne VLP-16 sensor. We achieve real-time performance on a single core of our MAV hardware by exploiting the inherent structure of the data. Our approach is evaluated on simulated and real scans of in- and outdoor environments, obtaining results comparable to the state of the art. Additionally, we provide an application for filtering the dynamic and mapping the static part of the data, generating further insights into the performance of the pipeline on unlabeled data.
Model-free tracking, on the contrary, is independent of a predefined model. Objects are detected either by searching for similar regions in consecutive scans, implicitly building and updating a model @cite_23 , or, as usually applied for multi-object tracking (MOT), by extracting the background and tracking the remaining measurement groups @cite_11 @cite_19 . Such methods rely on the dynamics of objects, as static or temporarily static objects are not tracked.
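The median-filter-based background extraction mentioned in the abstract can be sketched on a single beam: a return is flagged as dynamic when it deviates from the median of recent range measurements for that direction. This is a toy illustration only (the window, threshold, and function names are invented; the actual pipeline operates on the structured VLP-16 scan):

```python
def median(values):
    """Median of a non-empty list, without external dependencies."""
    s = sorted(values)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else 0.5 * (s[mid - 1] + s[mid])

def is_dynamic(range_history, current_range, threshold=0.5):
    """Flag a new range reading (meters) as belonging to a dynamic object
    when it deviates from the per-beam median of recent readings by more
    than `threshold`. The median is robust to occasional outliers, so
    static background survives sporadic noise."""
    return abs(current_range - median(range_history)) > threshold
```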
{ "cite_N": [ "@cite_19", "@cite_23", "@cite_11" ], "mid": [ "", "1904671147", "2027452306" ], "abstract": [ "", "In this paper, we address the problem of model-free online object tracking based on color representations. According to the findings of recent benchmark evaluations, such trackers often tend to drift towards regions which exhibit a similar appearance compared to the object of interest. To overcome this limitation, we propose an efficient discriminative object model which allows us to identify potentially distracting regions in advance. Furthermore, we exploit this knowledge to adapt the object representation beforehand so that distractors are suppressed and the risk of drifting is significantly reduced. We evaluate our approach on recent online tracking benchmark datasets demonstrating state-of-the-art results. In particular, our approach performs favorably both in terms of accuracy and robustness compared to recent tracking algorithms. Moreover, the proposed approach allows for an efficient implementation to enable online object tracking in real-time.", "Both, the estimation of the trajectory of a sensor and the detection and tracking of moving objects are essential tasks for autonomous robots. This work proposes a new algorithm that treats both problems jointly. The sole input is a sequence of dense 3D measurements as returned by multi-layer laser scanners or time-of-flight cameras. A major characteristic of the proposed approach is its applicability to any type of environment since specific object models are not used at any algorithm stage. More specifically, precise localization in non-flat environments is possible as well as the detection and tracking of e.g. trams or recumbent bicycles. Moreover, 3D shape estimation of moving objects is inherent to the proposed method. Thorough evaluation is conducted on a vehicular platform with a mounted Velodyne HDL-64E laser scanner." ] }
1903.05981
2952470056
Reputation systems concern soft security dynamics in diverse areas. Trust dynamics in a reputation system should be simultaneously stable and adaptable to serve its purpose. Many reputation mechanisms have been proposed and tested over time. However, the main drawback of reputation management is that users need to share private information, such as phone numbers, reviews, and ratings, to gain trust in a system. Recently, a novel model that tries to overcome this issue was presented: the Dynamic Interaction-based Reputation Model (DIBRM). This approach to trust considers only implicit information automatically deduced from the interactions of users within an online community. In this primary research study, the Reddit and MathOverflow online social communities have been selected for testing DIBRM. Results show how this novel approach to trust can mimic behaviors of the selected reputation systems, namely Reddit and MathOverflow, using temporal information only.
The work by @cite_23 tested the hypothesis that temporal factors, such as the activity, frequency, and regularity of an entity's interactions and its presence in a community, can be used as evidence of its trustworthiness. This hypothesis was tested on Wikipedia and its content (12000 users, 94000 articles). The authors successfully demonstrated how their algorithm could identify top Wikipedia contributors similarly to the explicit barnstars mechanism (https://en.wikipedia.org/wiki/Wikipedia:Barnstars) employed by Wikipedia. The main drawback of this approach is the computational time required to quantify each temporal factor for each user, which increases exponentially with the number of users and their interactions. To overcome this limitation, a simpler mechanism was introduced by @cite_24 . Here, only the frequency of interactions among users is considered to compute their reputation. This approach was tested using information collected from StackOverflow (https://stackoverflow.com). Findings show that this mechanism can closely approximate the reputation scores inferred by the StackOverflow mechanism.
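As an aside, the frequency-based mechanism described above can be sketched as a decayed interaction count. This is a minimal illustration under assumed parameters: the function name `dib_reputation`, the exponential forgetting term, and the `forgetting` and `cumulative_weight` parameters are simplifications chosen for illustration, not the exact DIB-RM formulas.

```python
import math

def dib_reputation(interaction_times, now, forgetting=0.1, cumulative_weight=1.0):
    """Illustrative interaction-based reputation score.

    Each past interaction contributes one unit of reputation that decays
    exponentially with its age, so frequent *and* recent activity scores
    highest. A simplified sketch, not the published DIB-RM model.
    """
    return cumulative_weight * sum(
        math.exp(-forgetting * (now - t)) for t in interaction_times
    )
```

Under this sketch, a recent burst of interactions outscores the same number of interactions far in the past, which captures the forgetting and activity-period intuition.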
{ "cite_N": [ "@cite_24", "@cite_23" ], "mid": [ "2887478640", "2146481439" ], "abstract": [ "In this paper, we investigate how dynamic properties of reputation can influence the quality of users' ranking. Reputation systems should be based on rules that can guarantee high level of trust and help identify unreliable units. To understand the effectiveness of dynamic properties in the evaluation of reputation, we propose our own model (DIB-RM) that utilizes three factors: forgetting, cumulative, and activity period. In order to evaluate the model, we use data from StackOverflow which also has its own reputation model. We estimate similarity of ratings between DIB-RM and the StackOverflow reputation model to test our hypothesis. We use two values to calculate our metrics: DIB-RM reputation and historical reputation. We found out that historical reputation gives better metric values. Our preliminary results are presented for different sets of values of the aforementioned factors in order to analyze how effectively the model can be used for modeling reputation systems.", "In this paper we investigate how temporal factors (i.e. factors computed by considering only the time-distribution of interactions) can be used as an evidence of an entity’s trustworthiness. While reputation and direct experience are the two most widely used sources of trust in applications, we believe that new sources of evidence and new applications should be investigated [1]. Moreover, while these two classical techniques are based on evaluating the outcomes of interactions (direct or indirect), temporal factors are based on quantitative analysis, representing an alternative way of assessing trust. Our presumption is that, even with this limited information, temporal factors could be a plausible evidence of trust that might be aggregated with more traditional sources. After defining our formal model of four main temporal factors - activity, presence, regularity, frequency, we performed an evaluation over the Wikipedia project, considering more than 12000 users and 94000 articles. Our encouraging results show how, based solely on temporal factors, plausible trust decisions can be achieved." ] }
1903.06259
2922327564
We examined the use of modern Generative Adversarial Nets to generate novel images of oil paintings using the Painter By Numbers dataset. We implemented Spectral Normalization GAN (SN-GAN) and Spectral Normalization GAN with Gradient Penalty, and compared their outputs to a Deep Convolutional GAN. Visually, and quantitatively according to the Sliced Wasserstein Distance metric, we determined that the SN-GAN produced paintings that were most comparable to our training dataset. We then performed a series of experiments to add supervised conditioning to SN-GAN, the culmination of which is what we believe to be a novel architecture that can generate face paintings with user-specified characteristics.
where @math is the real data distribution, @math is the generator's learned distribution, and @math is the classification by the discriminator. At the optimal value for D, this loss function can be rewritten to be in terms of the distance measure known as Jensen-Shannon (JS) divergence between the real and generated distributions @cite_2 .
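As a brief aside (standard GAN theory, not specific to the cited work), the rewriting at the optimal discriminator can be made explicit; here @math p_r @math and @math p_g @math denote the real and generated distributions, matching the placeholders above:

```latex
% Optimal discriminator for a fixed generator distribution p_g:
D^{*}(x) = \frac{p_r(x)}{p_r(x) + p_g(x)}
% Substituting D^* into the minimax value function gives
V(G, D^{*}) = -\log 4 + 2\,\mathrm{JS}\!\left(p_r \,\|\, p_g\right),
% where JS is the Jensen--Shannon divergence
\mathrm{JS}(p_r \| p_g) = \tfrac{1}{2}\,\mathrm{KL}\!\left(p_r \,\middle\|\, \tfrac{p_r + p_g}{2}\right)
  + \tfrac{1}{2}\,\mathrm{KL}\!\left(p_g \,\middle\|\, \tfrac{p_r + p_g}{2}\right).
% Minimizing V(G, D^*) over G therefore minimizes the JS divergence,
% with global minimum -log 4 attained when p_g = p_r.
```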
{ "cite_N": [ "@cite_2" ], "mid": [ "2125389028" ], "abstract": [ "Generative Adversarial Nets [8] were recently introduced as a novel way to train generative models. In this work we introduce the conditional version of generative adversarial nets, which can be constructed by simply feeding the data, y, we wish to condition on to both the generator and discriminator. We show that this model can generate MNIST digits conditioned on class labels. We also illustrate how this model could be used to learn a multi-modal model, and provide preliminary examples of an application to image tagging in which we demonstrate how this approach can generate descriptive tags which are not part of training labels." ] }
1903.05882
2921841822
In this paper, we propose a new method for formally ensuring that a controlled trajectory stays inside a given safety set S for a given duration T. Using a finite gridding X of S, we first synthesize, for a subset of initial nodes x of X, an admissible control for which the Euler-based approximate trajectories lie in S for t @math [0,T]. We then give sufficient conditions which ensure that the exact trajectories, under the same control, also lie in S for t @math [0,T], when starting at initial points 'close' to nodes x. The statement of such conditions relies on results giving estimates of the deviation of Euler-based approximate trajectories, using one-sided Lipschitz constants. We illustrate the usefulness of the method on several examples, including a stochastic one.
We distinguish between works dealing with problem P1 and those dealing with P2. Problem P1: In many papers in the literature on SL methods with state constraints (see, e.g., @cite_22 ), the authors force the system trajectory to stay in @math by introducing a (somewhat artificial) ``penalization'' term in the cost function @math , making the cost of crossing the boundary of @math prohibitive (cf. @cite_3 ). In order to guarantee the convergence of @math to @math , they also often make a restrictive ``controllability'' assumption. Note, however, that in works such as @cite_0 @cite_18 @cite_15 , no controllability assumption is made.
{ "cite_N": [ "@cite_18", "@cite_22", "@cite_3", "@cite_0", "@cite_15" ], "mid": [ "1964345118", "", "2014405238", "2113108728", "2056616297" ], "abstract": [ "We consider a target problem for a nonlinear system under state constraints. We give a new continuous level-set approach for characterizing the optimal times and the backward-reachability sets. This approach leads to a characterization via a Hamilton-Jacobi equation, without assuming any controllability assumption. We also treat the case of time-dependent state constraints, as well as a target problem for a two-player game with state constraints. Our method gives a good framework for numerical approximations, and some numerical illustrations are included in the paper.", "", "We study a class of semi-Lagrangian schemes which can be interpreted as a discrete version of the Hopf-Lax-Oleinik representation formula for the exact viscosity solution of first order evolutive Hamilton-Jacobi equations. That interpretation shows that the scheme is potentially accurate to any prescribed order. We discuss how the method can be implemented for convex and coercive Hamiltonians with a particular structure and how this method can be coupled with a discrete Legendre trasform. We also show that in one dimension, the first-order semi-Lagrangian scheme coincides with the integration of the Godunov scheme for the corresponding conservation laws. Several test illustrate the main features of semi-Lagrangian schemes for evolutive Hamilton-Jacobi equations.", "The paper deals with deterministic optimal control problem with state constraints and non-linear dynamics. It is known for such a problem that the value function is in general discontinuous and its characterization by means of an HJ equation requires some controllability assumptions involving the dynamics and the set of state constraints. Here, we first adopt the viability point of view and look at the value function as its epigraph. Then, we prove that this epigraph can always be described by an auxiliary optimal control problem free of state constraints, and for which the value function is Lipschitz continuous and can be characterized, without any additional assumption, as the unique viscosity solution of a Hamilton-Jacobi equation. The idea introduced in this paper bypass the regularity issues on the value function of the constrained control problem and leads to a constructive way to compute its epigraph by a large panel of numerical schemes. Our approach can be extended to more general control problems. We study in this paper the extension to the infinite horizon problem as well as for the two-player game setting. Finally, an illustrative numerical example is given to show the relevance of the approach.", "We prove the existence of a value for pursuit games with state constraints. We also prove that this value is lower semicontinuous." ] }
1903.05882
2921841822
In this paper, we propose a new method for formally ensuring that a controlled trajectory stays inside a given safety set S for a given duration T. Using a finite gridding X of S, we first synthesize, for a subset of initial nodes x of X, an admissible control for which the Euler-based approximate trajectories lie in S for t @math [0,T]. We then give sufficient conditions which ensure that the exact trajectories, under the same control, also lie in S for t @math [0,T], when starting at initial points 'close' to nodes x. The statement of such conditions relies on results giving estimates of the deviation of Euler-based approximate trajectories, using one-sided Lipschitz constants. We illustrate the usefulness of the method on several examples, including a stochastic one.
Problem P2: In the work of @cite_11 @cite_25 , the authors pursue an objective similar to ours: providing a (finite time-horizon) optimal control procedure with a formal guarantee of constraint satisfaction (safety). However, they do not use SL-schemes, but instead perform a reachability analysis based on over-approximating state-set representations (``zonotopes'', cf. @cite_8 @cite_17 ).
{ "cite_N": [ "@cite_8", "@cite_17", "@cite_25", "@cite_11" ], "mid": [ "1569692898", "2146979191", "2735485745", "2752215370" ], "abstract": [ "We present a method for the computation of reachable sets of uncertain linear systems. The main innovation of the method consists in the use of zonotopes for reachable set representation. Zonotopes are special polytopes with several interesting properties : they can be encoded efficiently, they are closed under linear transformations and Minkowski sum. The resulting method has been used to treat several examples and has shown great performances for high dimensional systems. An extension of the method for the verification of piecewise linear hybrid systems is proposed.", "We introduce zonotope bundles for computing the set of states reachable by a dynamical system, also known as the reachable set. Reachable set computations suffer from the curse of dimensionality, which has been successfully addressed by using zonotopes for linear systems. However, zonotopes are not closed under intersection leading to challenges when applying them to nonlinear and hybrid problems. We introduce zonotope bundles as the intersection of zonotopes (without explicitly computing the intersection). Zonotope bundles are closed under intersection, while inheriting many positive properties of zonotopes. This is demonstrated for linear, nonlinear, and hybrid systems. A further property of zonotope bundles is that their computation can be easily parallelized.", "Optimal control finds an optimal input trajectory which steers an initial state to a desired final state while satisfying given state and input constraints. However, most efficient approaches are restricted to a single initial state. In this paper, we present a new approach, which combines reachability analysis with optimal control. This enables us to solve the optimal control problem for a whole set of initial states by optimizing over the set of all possible solutions. At the same time, we are able to provide formal guarantees for the satisfaction of state and input constraints. Taking the effects of sets of disturbances into account ensures that the resulting controller is robust against them, which is a big advantage over many existing approaches. We show the applicability of our approach with a vehicle-platoon example.", "Abstract We address the problem of finding an optimal solution for a nonlinear system for a set of initial states rather than just for a single initial state. In addition, we consider state and input constraints as well as a set of possible disturbances. While previous optimal control techniques typically ignore the fact that the current state of a system is not exactly known, future safety-critical systems demand that all uncertainties including the initial state are considered; this is required for e.g. automated vehicles, surgical robots, or human-robot interaction. We present a new method that obtains optimal control inputs by finding optimal weights for generators that span the space reachable by the considered system. This solution routine can be used not only for a single initial state but also for a set of initial states - this is not possible using classical optimization techniques. We ensure that all constraints are met by using reachability analysis, which provides formal bounds for all possible system trajectories. We demonstrate the applicability of our approach with an example from automated driving; for this example, the result is obtained within a few seconds and outperforms a classical LQR approach." ] }
1903.05882
2921841822
In this paper, we propose a new method for formally ensuring that a controlled trajectory stays inside a given safety set S for a given duration T. Using a finite gridding X of S, we first synthesize, for a subset of initial nodes x of X, an admissible control for which the Euler-based approximate trajectories lie in S for t @math [0,T]. We then give sufficient conditions which ensure that the exact trajectories, under the same control, also lie in S for t @math [0,T], when starting at initial points 'close' to nodes x. The statement of such conditions relies on results giving estimates of the deviation of Euler-based approximate trajectories, using one-sided Lipschitz constants. We illustrate the usefulness of the method on several examples, including a stochastic one.
In @cite_26 , the authors also provide a formal guarantee of the safety property. In contrast to @cite_11 @cite_25 , they do use SL-schemes. They also focus on (periodically) sampled systems, as we do. However, they still perform a form of reachability analysis similar to @cite_11 @cite_25 , using convex polytopes as state-set representations. Their growth bounds are not based on OSL constants as here, but rather on over-approximations of Lagrange remainders in Taylor series.
{ "cite_N": [ "@cite_26", "@cite_25", "@cite_11" ], "mid": [ "", "2735485745", "2752215370" ], "abstract": [ "", "Optimal control finds an optimal input trajectory which steers an initial state to a desired final state while satisfying given state and input constraints. However, most efficient approaches are restricted to a single initial state. In this paper, we present a new approach, which combines reachability analysis with optimal control. This enables us to solve the optimal control problem for a whole set of initial states by optimizing over the set of all possible solutions. At the same time, we are able to provide formal guarantees for the satisfaction of state and input constraints. Taking the effects of sets of disturbances into account ensures that the resulting controller is robust against them, which is a big advantage over many existing approaches. We show the applicability of our approach with a vehicle-platoon example.", "Abstract We address the problem of finding an optimal solution for a nonlinear system for a set of initial states rather than just for a single initial state. In addition, we consider state and input constraints as well as a set of possible disturbances. While previous optimal control techniques typically ignore the fact that the current state of a system is not exactly known, future safety-critical systems demand that all uncertainties including the initial state are considered; this is required for e.g. automated vehicles, surgical robots, or human-robot interaction. We present a new method that obtains optimal control inputs by finding optimal weights for generators that span the space reachable by the considered system. This solution routine can be used not only for a single initial state but also for a set of initial states - this is not possible using classical optimization techniques. We ensure that all constraints are met by using reachability analysis, which provides formal bounds for all possible system trajectories. We demonstrate the applicability of our approach with an example from automated driving; for this example, the result is obtained within a few seconds and outperforms a classical LQR approach." ] }
1903.05928
2921796004
Distributed phased arrays based multiple-input multiple-output (DPA-MIMO) is a recently introduced architecture that enables both spatial multiplexing and beamforming while facilitating highly reconfigurable hardware implementation in millimeter-wave (mmWave) frequency bands. For a DPA-MIMO system, we focus on channel state information (CSI) acquisition and hybrid precoding. Benefiting from a coordinated, open-loop pilot beam pattern design, all the subarrays can perform channel sounding simultaneously, with less training overhead than the time-sharing operation of each subarray. Furthermore, two sparse channel recovery algorithms, known as joint orthogonal matching pursuit (JOMP) and joint sparse Bayesian learning with @math reweighting (JSBL- @math ), are proposed to exploit the hidden structured sparsity in the beam-domain channel vector. Finally, successive interference cancellation (SIC) based hybrid precoding through subarray grouping is illustrated for the DPA-MIMO system, which decomposes the joint subarray RF beamformer design into interactive per-subarray-group subproblems. Simulation results show that the two proposed channel estimators fully exploit the partial coupling characteristic of DPA-MIMO channels to perform channel recovery, and that the proposed hybrid precoding algorithm is suitable for such an array-of-subarrays architecture, offering satisfactory performance at low complexity.
In mmWave wireless systems, channel state information (CSI) must be acquired prior to beamforming to ensure a sufficient link margin. Most previous research in this area focuses on the fully-connected structure. For instance, a multi-stage channel sounding approach for this structure was developed by exploiting the sparse scattering nature of the mmWave channel @cite_2 . From the compressed sensing (CS) perspective, this feedback-based divide-and-conquer search process, which uses a hierarchical multi-resolution codebook, leads to an adaptive equivalent measurement matrix with fewer measurements. Compared to the closed-loop beam training methods in @cite_3 @cite_2 @cite_29 , a CS-based open-loop channel estimator can decrease feedback overhead by using deterministic pilot beam patterns which are designed by minimizing the total coherence of the equivalent measurement matrix @cite_31 . Following the design criteria of the multi-resolution codebooks in @cite_2 , subarray-based coordinated beam training with time-delay phase shifters has been proposed for sub-Terahertz (sub-THz) communication systems with an array-of-subarrays architecture @cite_46 . Due to the high complexity of jointly optimizing the RF beam directions across multiple subarrays, two complementary approaches are developed in @cite_56 to obtain a small set of dominant candidate directions.
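Since the open-loop estimators above ultimately reduce to sparse recovery, a generic orthogonal matching pursuit (OMP) routine can illustrate the idea. This is a textbook sketch under stated assumptions: `Phi` stands in for the equivalent measurement matrix (pilot beam patterns combined with the array-response dictionary), and the code is not the estimator of any cited work.

```python
import numpy as np

def omp(Phi, y, sparsity):
    """Orthogonal matching pursuit: recover a `sparsity`-sparse x from y ≈ Phi @ x.

    Phi plays the role of the equivalent measurement matrix formed by the
    pilot beam patterns and the beam-domain dictionary; y collects the
    received pilot observations. Illustrative sketch only.
    """
    residual = y.astype(complex)
    support = []
    x = np.zeros(Phi.shape[1], dtype=complex)
    coef = np.zeros(0, dtype=complex)
    for _ in range(sparsity):
        # pick the dictionary column most correlated with the residual
        idx = int(np.argmax(np.abs(Phi.conj().T @ residual)))
        if idx not in support:
            support.append(idx)
        # least-squares fit on the current support, then update the residual
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x[support] = coef
    return x
```

Joint variants such as JOMP additionally share the support estimate across subarrays to exploit the partially coupled structure; the greedy correlate-fit-deflate loop above is the common core.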
{ "cite_N": [ "@cite_31", "@cite_29", "@cite_3", "@cite_56", "@cite_2", "@cite_46" ], "mid": [ "2339667469", "2295341737", "2034651337", "2116193517", "2111953900", "2730027154" ], "abstract": [ "We propose an efficient open-loop channel estimator for a millimeter-wave (mm-wave) hybrid multiple-input multiple-output (MIMO) system consisting of radio-frequency (RF) beamformers with large antenna arrays followed by a baseband MIMO processor. A sparse signal recovery problem exploiting the sparse nature of mm-wave channels is formulated for channel estimation based on the parametric channel model with quantized angles of departures arrivals (AoDs AoAs), called the angle grids. The problem is solved by the orthogonal matching pursuit (OMP) algorithm employing a redundant dictionary consisting of array response vectors with finely quantized angle grids. We suggest the use of non-uniformly quantized angle grids and show that such grids reduce the coherence of the redundant dictionary. The lower and upper bounds of the sum-of-squared errors of the proposed OMP-based estimator are derived analytically: the lower bound is derived by considering the oracle estimator that assumes the knowledge of AoDs AoAs, and the upper bound is derived based on the results of the OMP performance guarantees. The design of training vectors (or sensing matrix) is particularly important in hybrid MIMO systems, because the RF beamformer prevents the use of independent and identically distributed random training vectors, which are popular in compressed sensing. We design training vectors so that the total coherence of the equivalent sensing matrix is minimized for a given RF beamforming matrix, which is assumed to be unitary. It is observed that the estimation accuracy can be improved significantly by randomly permuting the columns of the RF beamforming matrix. The simulation results demonstrate the advantage of the proposed OMP with a redundant dictionary over the existing methods such as the least squares method and the OMP based on the virtual channel model.", "In this paper, we study hierarchical codebook design for channel estimation in millimeter-wave (mmWave) communications with a hybrid precoding structure. Due to the limited saturation power of the mmWave power amplifier, we consider the per-antenna power constraint (PAPC). We first propose a metric, termed generalized detection probability (GDP), to evaluate the quality of an arbitrary codeword . This metric not only enables an optimization approach for mmWave codebook design, but also can be used to compare the performance of two different codewords codebooks. To the best of our knowledge, GDP is the first such metric, particularly for mmWave codebook design. We then propose a heuristic approach to design a hierarchical codebook exploiting beam widening with the multi-RF-chain sub-array (BMW-MS) technique. To obtain crucial parameters of BMW-MS, we provide two solutions, namely, a low-complexity search (LCS) solution to optimize the GDP metric and a closed-form (CF) solution to pursue a flat beam pattern. Performance comparisons show that BMW-MS LCS and BMW-MS CF achieve very close performances, and they outperform the existing alternatives under the PAPC.", "Recently, there has been considerable interest in new tiered network cellular architectures, which would likely use many more cell sites than found today. Two major challenges will be i) providing backhaul to all of these cells and ii) finding efficient techniques to leverage higher frequency bands for mobile access and backhaul. This paper proposes the use of outdoor millimeter wave communications for backhaul networking between cells and mobile access within a cell. To overcome the outdoor impairments found in millimeter wave propagation, this paper studies beamforming using large arrays. However, such systems will require narrow beams, increasing sensitivity to movement caused by pole sway and other environmental concerns. To overcome this, we propose an efficient beam alignment technique using adaptive subspace sampling and hierarchical beam codebooks. A wind sway analysis is presented to establish a notion of beam coherence time. This highlights a previously unexplored tradeoff between array size and wind-induced movement. Generally, it is not possible to use larger arrays without risking a corresponding performance loss from wind-induced beam misalignment. The performance of the proposed alignment technique is analyzed and compared with other search and alignment methods. The results show significant performance improvement with reduced search time.", "The use of the millimeter (mm) wave spectrum for next generation (5G) mobile communication has gained significant attention recently. The small carrier wavelengths at mmwave frequencies enable synthesis of compact antenna arrays, providing beamforming gains that compensate the increased propagation losses. In this work, we investigate the feasibility of employing multiple antenna arrays (at the transmitter and or receiver) to obtain diversity multiplexing gains in mmwave systems, where each of the arrays is capable of beamforming independently. Considering a codebook-based beamforming system (the set of possible beamforming directions is fixed a priori, e.g., to facilitate limited feedback), we observe that the complexity of jointly optimizing the beamforming directions across the multiple arrays is highly prohibitive, even for very reasonable system parameters. To overcome this bottleneck, we develop reduced complexity algorithms for optimizing the choice of beamforming directions, premised on the sparse multipath structure of the mmwave channel. Specifically, we reduce the cardinality of the joint beamforming search space, by restricting attention to a small set of dominant candidate directions. To obtain the set of dominant directions, we develop two complementary approaches: 1) based on computation of a novel spatial power metric; a detailed analysis of this metric shows that, in the limit of large antenna arrays, the selected candidate directions approach the channel's dominant angles of arrival and departure, and 2) precise estimation of the channel's (long-term) dominant angles of arrival, exploiting the correlations of the signals received across the different receiver subarrays. Our methods enable a drastic reduction of the optimization search space (a factor of 100 reduction), while delivering close to optimal performance, thereby indicating the potential feasibility of achieving diversity and multiplexing gains in mmwave systems.", "Millimeter wave (mmWave) cellular systems will enable gigabit-per-second data rates thanks to the large bandwidth available at mmWave frequencies. To realize sufficient link margin, mmWave systems will employ directional beamforming with large antenna arrays at both the transmitter and receiver. Due to the high cost and power consumption of gigasample mixed-signal devices, mmWave precoding will likely be divided among the analog and digital domains. The large number of antennas and the presence of analog beamforming requires the development of mmWave-specific channel estimation and precoding algorithms. This paper develops an adaptive algorithm to estimate the mmWave channel parameters that exploits the poor scattering nature of the channel. To enable the efficient operation of this algorithm, a novel hierarchical multi-resolution codebook is designed to construct training beamforming vectors with different beamwidths. For single-path channels, an upper bound on the estimation error probability using the proposed algorithm is derived, and some insights into the efficient allocation of the training power among the adaptive stages of the algorithm are obtained. The adaptive channel estimation algorithm is then extended to the multi-path case relying on the sparse nature of the channel. Using the estimated channel, this paper proposes a new hybrid analog digital precoding algorithm that overcomes the hardware constraints on the analog-only beamforming, and approaches the performance of digital solutions. Simulation results show that the proposed low-complexity channel estimation algorithm achieves comparable precoding gains compared to exhaustive channel training algorithms. The results illustrate that the proposed channel estimation and precoding algorithms can approach the coverage probability achieved by perfect channel knowledge even in the presence of interference.", "Millimeter-wave (mmWave) and sub-Terahertz (THz) communications are compelling as an enabler for next-generation wireless networks. In this paper, we study mmWave and sub-THz systems with array-of-subarray architecture. To accommodate the ultrabroad bandwidth in the mmWave and sub-THz bands, time-delay phase shifters are introduced in system design. Our goal is to investigate beamforming training with hybrid processing to extract the dominant channel information, which would fully exploit channel characteristics while respecting the nature of circuit hardware. In particular, codebooks based on time-delay phase shifters are defined and structured. Then, two multi-resolution time-delay codebooks are designed through subarray coordination. One is built on adaptation of physical beam directions, and the other relies on dynamic approximation of beam patterns. Also, a low-complexity system implementation with modifications on the time-delay codebooks is studied. Furthermore, based on the proposed codebooks, a hierarchical beamforming training strategy with reduced overhead is developed to enable simultaneous training for multiple users. Simulation results show that the proposed multi-resolution time-delay codebooks could provide sufficient beam gains and are robust over large bandwidth. Also, the effectiveness of the hierarchical beamforming training is verified." ] }
1903.05928
2921796004
Distributed phased arrays based multiple-input multiple-output (DPA-MIMO) is a recently introduced architecture that enables both spatial multiplexing and beamforming while facilitating highly reconfigurable hardware implementation in millimeter-wave (mmWave) frequency bands. For a DPA-MIMO system, we focus on channel state information (CSI) acquisition and hybrid precoding. Benefiting from a coordinated, open-loop pilot beam pattern design, all the subarrays can perform channel sounding simultaneously, with less training overhead than the time-sharing operation of each subarray. Furthermore, two sparse channel recovery algorithms, known as joint orthogonal matching pursuit (JOMP) and joint sparse Bayesian learning with @math reweighting (JSBL- @math ), are proposed to exploit the hidden structured sparsity in the beam-domain channel vector. Finally, successive interference cancellation (SIC) based hybrid precoding through subarray grouping is illustrated for the DPA-MIMO system, which decomposes the joint subarray RF beamformer design into interactive per-subarray-group subproblems. Simulation results show that the two proposed channel estimators fully exploit the partial coupling characteristic of DPA-MIMO channels to perform channel recovery, and that the proposed hybrid precoding algorithm is suitable for such an array-of-subarrays architecture, offering satisfactory performance at low complexity.
The demand for energy-efficient subarray architectures has motivated many researchers to investigate high-efficiency hybrid precoding schemes. For example, assisted by the mechanism of successive interference cancellation (SIC) in multi-user detection, the hybrid precoder design can be simplified by decomposing the total achievable rate optimization problem, with nonconvex constant amplitude constraints of phase shifters, into a series of subrate optimization problems, each of which handles one subarray @cite_13 . Due to the block diagonal structure of RF precoders in partially-connected hybrid MIMO systems, the RF and digital precoders can be optimized based on the principle of alternating minimization @cite_4 . During each iteration, a semidefinite relaxation (SDR) problem is formed to obtain the optimal digital precoder, while the optimal RF precoder has a closed-form expression. In order to incorporate the merits of both the fully-connected and partially-connected structures, a novel multi-subarray structure, where each subarray consists of multiple RF chains and each RF chain connects to all the antennas corresponding to the subarray, is proposed to provide high spectral efficiency (SE) and energy efficiency (EE) @cite_47 .
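The per-subarray step in the SIC-based scheme of @cite_13 amounts to finding a constant-amplitude precoding vector sufficiently close, in Euclidean distance, to the unconstrained optimum. A minimal sketch of that phase-only projection follows; the function name and the unit-power normalization are illustrative assumptions, not taken from the paper:

```python
import cmath
import math

def phase_only_projection(v):
    """Project an unconstrained per-subarray optimum v onto the
    constant-amplitude constraint of analog phase shifters: keep each
    entry's phase and fix its magnitude to 1/sqrt(N).  A hedged sketch
    of the per-subarray step in SIC-based hybrid precoding, not the
    paper's exact algorithm."""
    n = len(v)
    return [cmath.exp(1j * cmath.phase(x)) / math.sqrt(n) for x in v]
```

In the SIC loop, each such subproblem is solved in turn and the finished subarray's contribution is cancelled from the residual rate objective before moving to the next subarray.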
{ "cite_N": [ "@cite_47", "@cite_13", "@cite_4" ], "mid": [ "2757012997", "1432778544", "2259391824" ], "abstract": [ "In this paper, we propose a hybridly connected structure for hybrid beamforming in millimeter-wave (mmWave) massive MIMO systems, where the antenna arrays at the transmitter and receiver consist of multiple sub-arrays, each of which connects to multiple radio frequency (RF) chains, and each RF chain connects to all the antennas corresponding to the sub-array. In this structure, through successive interference cancelation, we decompose the precoding matrix optimization problem into multiple precoding sub-matrix optimization problems. Then, near-optimal hybrid digital and analog precoders are designed through factorizing the precoding sub-matrix for each sub-array. Furthermore, we compare the performance of the proposed hybridly connected structure with the existing fully and partially connected structures in terms of spectral efficiency, the required number of phase shifters, and energy efficiency. Finally, simulation results are presented to demonstrate that the spectral efficiency of the hybridly connected structure is better than that of the partially connected structure and that its spectral efficiency can approach that of the fully connected structure with the increase in the number of RF chains. Moreover, the proposed algorithm for the hybridly connected structure is capable of achieving higher energy efficiency than existing algorithms for the fully and partially connected structures.", "Millimeter wave (mmWave) MIMO will likely use hybrid analog and digital precoding, which uses a small number of RF chains to reduce the energy consumption associated with mixed signal components like analog-to-digital components not to mention baseband processing complexity. However, most hybrid precoding techniques consider a fully connected architecture requiring a large number of phase shifters, which is also energy-intensive. 
In this paper, we focus on the more energy-efficient hybrid precoding with subconnected architecture, and propose a successive interference cancelation (SIC)-based hybrid precoding with near-optimal performance and low complexity. Inspired by the idea of SIC for multiuser signal detection, we first propose to decompose the total achievable rate optimization problem with nonconvex constraints into a series of simple subrate optimization problems, each of which only considers one subantenna array. Then, we prove that maximizing the achievable subrate of each subantenna array is equivalent to simply seeking a precoding vector sufficiently close (in terms of Euclidean distance) to the unconstrained optimal solution. Finally, we propose a low-complexity algorithm to realize SIC-based hybrid precoding, which can avoid the need for the singular value decomposition (SVD) and matrix inversion. Complexity evaluation shows that the complexity of SIC-based hybrid precoding is only about 10 as complex as that of the recently proposed spatially sparse precoding in typical mmWave MIMO systems. Simulation results verify that SIC-based hybrid precoding is near-optimal and enjoys higher energy efficiency than the spatially sparse precoding and the fully digital precoding.", "Millimeter wave (mmWave) communications has been regarded as a key enabling technology for 5G networks, as it offers orders of magnitude greater spectrum than current cellular bands. In contrast to conventional multiple-input–multiple-output (MIMO) systems, precoding in mmWave MIMO cannot be performed entirely at baseband using digital precoders, as only a limited number of signal mixers and analog-to-digital converters can be supported considering their cost and power consumption. As a cost-effective alternative, a hybrid precoding transceiver architecture, combining a digital precoder and an analog precoder, has recently received considerable attention. 
However, the optimal design of such hybrid precoders has not been fully understood. In this paper, treating the hybrid precoder design as a matrix factorization problem, effective alternating minimization (AltMin) algorithms will be proposed for two different hybrid precoding structures, i.e., the fully-connected and partially-connected structures. In particular, for the fully-connected structure, an AltMin algorithm based on manifold optimization is proposed to approach the performance of the fully digital precoder, which, however, has a high complexity. Thus, a low-complexity AltMin algorithm is then proposed, by enforcing an orthogonal constraint on the digital precoder. Furthermore, for the partially-connected structure, an AltMin algorithm is also developed with the help of semidefinite relaxation. For practical implementation, the proposed AltMin algorithms are further extended to the broadband setting with orthogonal frequency division multiplexing modulation. Simulation results will demonstrate significant performance gains of the proposed AltMin algorithms over existing hybrid precoding algorithms. Moreover, based on the proposed algorithms, simulation comparisons between the two hybrid precoding structures will provide valuable design insights." ] }
1903.06241
2927797998
Development of quality assured software-intensive systems, such as automotive embedded systems, is an increasing challenge as the complexity of these systems significantly increases. EAST-ADL is an architecture description language developed to specify automotive embedded system architectures at multiple abstraction levels in the development of safety-critical automotive products. In this paper, we propose an architecture-based verification technique which enhances the model-based development process supported by EAST-ADL by adapting model-checking to EAST-ADL specifications. We employ UPPAAL as a verification tool to ensure that predicted function behaviors of the models in EAST-ADL satisfy functional and real-time requirements. The criteria for this architecture-based verification is presented and the transformation rules which comply with this criteria are derived. This enables us to extract the relevant information from EAST-ADL specifications and to generate analyzable UPPAAL models. The formal semantics of EAST-ADL is defined which is essential to automate the verification of EAST-ADL specifications. Our approach is demonstrated by verifying the safety of the steering truck system units.
For safety-driven system development in the automotive domain, feature- and architecture-based analysis is prescribed by the ISO 26262 standard as the state-of-the-art approach to functional safety. However, at an early stage it is difficult to see function dependencies that would result in updated function requirements. Therefore, the authors of @cite_5 provide an approach that performs iterative analysis to manage changes in the safety architecture at the analysis level and still meet function-specific safety goals derived at the vehicle level. In comparison to our work, their main concern is to define the semantics for requirement selection in order to ensure correct inclusion of requirements in a function definition. They offer no formal modeling and verification approach for the behavioral definition of EAST-ADL.
{ "cite_N": [ "@cite_5" ], "mid": [ "1571882816" ], "abstract": [ "For systems where functions are distributed but share support for computation, communication, environment sensing and actuation, it is essential to understand how such functions can affect each other. Preliminary Hazard Analysis (PHA) is the task through which safety requirements are established. This is usually a document-based process where each system function is analyzed alone, making it difficult to reason about the commonalities of related functional concepts and the distribution of safety mechanisms across a systemof-systems. This paper presents a model-based approach to PHA with the EAST-ADL2 language and in accordance with the ISO DIS 26262 standard. The language explicitly supports the definition and handling of requirements, functions and technical solutions, and their various relations and constraints as a coherent whole with multiple views. We show in particular the engineering needs for a systematic approach to PHA and the related language features for precise modeling of requirements, user functionalities, system operation contexts, and the derived safety mechanisms." ] }
1903.06241
2927797998
Development of quality assured software-intensive systems, such as automotive embedded systems, is an increasing challenge as the complexity of these systems significantly increases. EAST-ADL is an architecture description language developed to specify automotive embedded system architectures at multiple abstraction levels in the development of safety-critical automotive products. In this paper, we propose an architecture-based verification technique which enhances the model-based development process supported by EAST-ADL by adapting model-checking to EAST-ADL specifications. We employ UPPAAL as a verification tool to ensure that predicted function behaviors of the models in EAST-ADL satisfy functional and real-time requirements. The criteria for this architecture-based verification is presented and the transformation rules which comply with this criteria are derived. This enables us to extract the relevant information from EAST-ADL specifications and to generate analyzable UPPAAL models. The formal semantics of EAST-ADL is defined which is essential to automate the verification of EAST-ADL specifications. Our approach is demonstrated by verifying the safety of the steering truck system units.
Our earlier work @cite_16 verifies EAST-ADL models using UPPAAL-PORT (http://www.uppaal.org/port). That work uses mainly structural information of EAST-ADL and its requirements, where models are manually translated into SAVE-CCM @cite_9 , which is the required input format for UPPAAL-PORT. In contrast to that work @cite_16 , we consider recent additions to EAST-ADL, including both the timing and behavior extensions, by analyzing application-internal behaviors specified in the behavioral annex as well as using artifact packages. Furthermore, we eliminate the need for intermediate formalisms like SAVE-CCM and target direct transformation of EAST-ADL specifications to UPPAAL TAs.
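The verification queries behind such transformations ultimately reduce to reachability on the transformed automata. As a toy illustration of only the discrete skeleton of what UPPAAL checks (clocks, guards, and channel synchronization are deliberately omitted, so this is an assumption-laden simplification, not UPPAAL's algorithm):

```python
from collections import deque

def reachable(transitions, init, target):
    """Breadth-first search over a finite transition relation: a toy
    stand-in for the reachability queries (E<> target) that UPPAAL
    answers on timed automata.  Clocks and synchronization omitted."""
    seen, queue = {init}, deque([init])
    while queue:
        state = queue.popleft()
        if state == target:
            return True
        for nxt in transitions.get(state, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False
```

A safety property "the bad state is never reached" then corresponds to `reachable(...)` returning `False` for the bad state.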
{ "cite_N": [ "@cite_9", "@cite_16" ], "mid": [ "2171206721", "169129071" ], "abstract": [ "Component based development is a promising approach for embedded systems. Typical for embedded software is the presence of resource constraints in multiple dimensions. An essential dimension is time, since many embedded systems have real-time requirements. We define a formal semantics of a component language for embedded systems, SaveCCM, a language designed with vehicle applications and safety concerns in focus. The semantics is defined by a transformation into timed automata with tasks, a formalism that explicitly models timing and real-time task scheduling. A simple SaveCCM system with a PI controller is used as a case study. Temporal properties of the PI controller have been successfully verified using the timed automata model checker Uppaal.", "We study the use of formal modeling and verification techniques at an early stage in the development of safety-critical automotive products which are originally described in the domain specific architectural language EAST-ADL2. This architectural language only focuses on the structural definition of functional blocks. However, the behavior inside each functional block is not specified and that limits formal modeling and analysis of systems behaviors as well as efficient verification of safety properties. In this paper, we tackle this problem by proposing one modeling approach, which formally captures the behavioral execution inside each functional block and their interactions, and helps to improve the formal modeling and verification capability of EAST-ADL2: the behavior of each elementary function of EAST-ADL2 is specified in UPPAAL Timed Automata. The formal syntax and semantics are defined in order to specify the behavior model inside EAST-ADL2 and their interactions. 
A composition of the functional behaviors is considered a network of Timed Automata that enables us to verify behaviors of the entire system using the UPPAAL model checker. The method has been demonstrated by verifying the safety of the Brake-by-wire system design." ] }
1903.06241
2927797998
Development of quality assured software-intensive systems, such as automotive embedded systems, is an increasing challenge as the complexity of these systems significantly increases. EAST-ADL is an architecture description language developed to specify automotive embedded system architectures at multiple abstraction levels in the development of safety-critical automotive products. In this paper, we propose an architecture-based verification technique which enhances the model-based development process supported by EAST-ADL by adapting model-checking to EAST-ADL specifications. We employ UPPAAL as a verification tool to ensure that predicted function behaviors of the models in EAST-ADL satisfy functional and real-time requirements. The criteria for this architecture-based verification is presented and the transformation rules which comply with this criteria are derived. This enables us to extract the relevant information from EAST-ADL specifications and to generate analyzable UPPAAL models. The formal semantics of EAST-ADL is defined which is essential to automate the verification of EAST-ADL specifications. Our approach is demonstrated by verifying the safety of the steering truck system units.
Another widely used ADL, within both industry and the research community, for architectural modeling and analysis of time-critical software-intensive embedded systems is the Architecture Analysis and Design Language (AADL) @cite_2 . The authors of @cite_3 propose an approach to formal and implemented semantics of AADL, where the Timed Abstract State Machine (TASM) @cite_8 language is used as the formal underpinning. We take a similar approach by transforming EAST-ADL constructs to Timed Automata, specifically UPPAAL TA, in order to allow tool-supported automated simulation and verification of EAST-ADL specifications using the UPPAAL model checker. The results of our work contribute towards the development of the EAST-ADL behavior extension @cite_11 and further refinement of the existing behavior extension.
{ "cite_N": [ "@cite_8", "@cite_11", "@cite_3", "@cite_2" ], "mid": [ "1532063552", "", "1847406255", "3986271" ], "abstract": [ "In this paper, we present the Timed Abstract State Machine (TASM) language, which is a language for the specification of embedded real-time systems. In the engineering of embedded real-time systems, the correctness of the system is defined in terms of three aspects - function, time, and resource consumption. The goal of the TASM language and its associated toolset is to provide a basis for specification-based real-time system engineering where these three aspects can be specified and analyzed. The TASM language is built on top of Abstract State Machines (ASM) by including facilities for compact and legible specification of non-functional behavior, namely time and resource consumption. The TASM language provides a notation which is well-suited to the specification needs of embedded real-time systems. We begin the presentation of the language with a historical survey on the use of ASM in specifying real-time systems. The core difference between the TASM language and ASM is that steps are inherently durative instead of being instantaneous and steps consume resources. These concepts capture the reality of physical systems in a flexible abstract model. We present the syntax and semantics of the language and illustrate the concepts using an extended version of the production cell case study.", "", "The Architecture Analysis and Design Language (AADL) is a popular language for architectural modeling and analysis of software intensive systems in application domains such as automotive, avionics, railway and medical systems. These systems often have stringent real-time requirements. This paper presents an extension to AADL's behavior model using time annotations in order to improve the evaluation of timing properties in AADL. The translational semantics of this extension is based on mappings to the Timed Abstract State Machines (TASM) language. 
As a result, timing analysis with timed simulation or timed model checking is possible. The translation is supported by an Eclipse-based plug-in and the approach is validated with a case study of an industrial production cell system.", "Abstract : In November 2004, the Society of Automotive Engineers (SAE) released the aerospace standard AS5506, named the Architecture Analysis & Design Language (AADL). The AADL is a modeling language that supports early and repeated analyses of a system's architecture with respect to performance-critical properties through an extendable notation, a tool framework, and precisely defined semantics. The language employs formal modeling concepts for the description and analysis of application system architectures in terms of distinct components and their interactions. It includes abstractions of software, computational hardware, and system components for (a) specifying and analyzing real-time embedded and high dependability systems, complex systems of systems, and specialized performance capability systems and (b) mapping of software onto computational hardware elements. The AADL is especially effective for model-based analysis and specification of complex real-time embedded systems. This technical note is an introduction to the concepts, language structure, and application of the AADL." ] }
1903.06008
2920845749
The behavior of users of music streaming services is investigated from the point of view of the temporal dimension of individual songs; specifically, the main object of the analysis is the point in time within a song at which users stop listening and start streaming another song ("skip"). The main contribution of this study is the ascertainment of a correlation between the distribution in time of skipping events and the musical structure of songs. It is also shown that such distribution is not only specific to the individual songs, but also independent of the cohort of users and, under stationary conditions, date of observation. Finally, user behavioral data is used to train a predictor of the musical structure of a song solely from its acoustic content; it is shown that the use of such data, available in large quantities to music streaming services, yields significant improvements in accuracy over the customary fashion of training this class of algorithms, in which only smaller amounts of hand-labeled data are available.
Research on the topic has long been hindered by the lack of sizeable amounts of expert annotations -- the manual segmentations and labelings of recordings done by musically competent individuals -- because of its time-consuming nature, coupled with the legal restrictions involved in sharing copyrighted recordings. Initiatives such as the SALAMI dataset @cite_13 attempt to fill this need by providing a relatively large (several hundred) source of annotations for recordings, many of which are in the public domain.
{ "cite_N": [ "@cite_13" ], "mid": [ "2293125622" ], "abstract": [ "This paper describes the design and creation of an unprecedentedly large database of over 2400 structural annotations of nearly 1400 musical recordings. The database is intended to be a test set for algorithms that will be used to analyze a much larger corpus of hundreds of thousands of recordings, as part of the Structural Analysis of Large Amounts of Musical Information (SALAMI) project. This paper describes the design goals of the database and the practical issues that were encountered during its creation. In particular, we discuss the selection of the recordings, the development of an annotation format and procedure that adapts work by Peeters and Deruty [10], and the management and execution of the project. We also summarize some of the properties of the resulting corpus of annotations, including average inter-annotator agreement." ] }
1903.06008
2920845749
The behavior of users of music streaming services is investigated from the point of view of the temporal dimension of individual songs; specifically, the main object of the analysis is the point in time within a song at which users stop listening and start streaming another song ("skip"). The main contribution of this study is the ascertainment of a correlation between the distribution in time of skipping events and the musical structure of songs. It is also shown that such distribution is not only specific to the individual songs, but also independent of the cohort of users and, under stationary conditions, date of observation. Finally, user behavioral data is used to train a predictor of the musical structure of a song solely from its acoustic content; it is shown that the use of such data, available in large quantities to music streaming services, yields significant improvements in accuracy over the customary fashion of training this class of algorithms, in which only smaller amounts of hand-labeled data are available.
An alternative approach to deal with the aforementioned issues is the release of the algorithms themselves as open source software: MSAF @cite_4 is the leading effort in that regard within the Music Information Retrieval community, and couples that goal with an evaluation framework; the latter is in turn based on MIReval @cite_15 , a reference implementation of a large set of common music-specific IR evaluation metrics. The remainder of this paper makes use of the following algorithm implementations borrowed from MSAF: cnmf @cite_10 , foote @cite_2 , olda @cite_9 , scluster @cite_22 , sf @cite_19 . Furthermore, another reference algorithm is given by The Echo Nest Analyzer, based on @cite_14 , whose results can be accessed through Spotify's public API.
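Among these, foote @cite_2 is simple enough to sketch: a self-similarity matrix of the audio features is correlated along its main diagonal with a checkerboard kernel, and peaks of the resulting novelty curve are reported as segment boundaries. A hedged pure-Python illustration follows (not the MSAF implementation; kernel size and normalization are our choices):

```python
import math

def cosine(a, b):
    # Cosine similarity between two feature vectors.
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return num / den if den else 0.0

def foote_novelty(frames, L=2):
    """Checkerboard-kernel novelty in the spirit of Foote: build the
    self-similarity matrix S, then slide a 2L x 2L kernel along the
    diagonal (+1 on the same-side blocks, -1 on the cross blocks)."""
    n = len(frames)
    S = [[cosine(frames[i], frames[j]) for j in range(n)] for i in range(n)]
    nov = [0.0] * n
    for t in range(L, n - L):
        s = 0.0
        for u in range(-L, L):
            for v in range(-L, L):
                sign = 1.0 if (u < 0) == (v < 0) else -1.0
                s += sign * S[t + u][t + v]
        nov[t] = s
    return nov
```

On two homogeneous blocks of frames, the novelty curve peaks exactly at the transition between them; peak picking on `nov` then yields candidate boundaries.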
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_22", "@cite_9", "@cite_19", "@cite_2", "@cite_15", "@cite_10" ], "mid": [ "1510436408", "2576859507", "2406676415", "2053049659", "2018830110", "2168249484", "2407685581", "2051307814" ], "abstract": [ "Automated creation of new music by listening is disclosed. A method to create new music may comprise listening to a plurality of music, learning from the plurality of music, and performing concatenative synthesis based on the listening and the learning to create the new music. The method may be performed on a computing device having an audio interface, such as a personal computer.", "", "Many approaches to analyzing the structure of a musical recording involve detecting sequential patterns within a selfsimilarity matrix derived from time-series features. Such patterns ideally capture repeated sequences, which then form the building blocks of large-scale structure. In this work, techniques from spectral graph theory are applied to analyze repeated patterns in musical recordings. The proposed method produces a low-dimensional encoding of repetition structure, and exposes the hierarchical relationships among structural components at differing levels of granularity. Finally, we demonstrate how to apply the proposed method to the task of music segmentation.", "This paper describes a supervised learning algorithm which optimizes a feature representation for temporally constrained clustering. The proposed method is applied to music segmentation, in which a song is partitioned into functional or locally homogeneous segments (e.g., verse or chorus). To facilitate", "Automatically inferring the structural properties of raw multimedia documents is essential in today's digitized society. Given its hierarchical and multi-faceted organization, musical pieces represent a challenge for current computational systems. 
In this article, we present a novel approach to music structure annotation based on the combination of structure features with time series similarity. Structure features encapsulate both local and global properties of a time series, and allow us to detect boundaries between homogeneous, novel, or repeated segments. Time series similarity is used to identify equivalent segments, corresponding to musically meaningful parts. Extensive tests with a total of five benchmark music collections and seven different human annotations show that the proposed approach is robust to different ground truth choices and parameter settings. Moreover, we see that it outperforms previous approaches evaluated under the same framework.", "The paper describes methods for automatically locating points of significant change in music or audio, by analyzing local self-similarity. This method can find individual note boundaries or even natural segment boundaries such as verse chorus or speech music transitions, even in the absence of cues such as silence. This approach uses the signal to model itself, and thus does not rely on particular acoustic cues nor requires training. We present a wide variety of applications, including indexing, segmenting, and beat tracking of music and audio. The method works well on a wide variety of audio sources.", "Central to the field of MIR research is the evaluation of algorithms used to extract information from music data. We present mir_eval, an open source software library which provides a transparent and easy-to-use implementation of the most common metrics used to measure the performance of MIR algorithms. In this paper, we enumerate the metrics implemented by mir_eval and quantitatively compare each to existing implementations. When the scores reported by mir_eval differ substantially from the reference, we detail the differences in implementation. We also provide a brief overview of mir_eval’s architecture, design, and intended use. 1. 
EVALUATING MIR ALGORITHMS Much of the research in Music Information Retrieval (MIR) involves the development of systems that process raw music data to produce semantic information. The goal of these systems is frequently defined as attempting to duplicate the performance of a human listener given the same task [5]. A natural way to determine a system’s effectiveness might be for a human to study the output produced by the system and judge its correctness. However, this would yield only subjective ratings, and would also be extremely timeconsuming when evaluating a system’s output over a large corpus of music. Instead, objective metrics are developed to provide a well-defined way of computing a score which indicates each system’s output’s correctness. These metrics typically involve a heuristically-motivated comparison of the system’s output to a reference which is known to be correct. Over time, certain metrics have become standard for each", "We propose a novel and fast approach to discover structure in western popular music by using a specific type of matrix factorization that adds a convex constrain to obtain a decomposition that can be interpreted as a set of weighted cluster centroids. We show that these centroids capture the different sections of a musical piece (e.g. verse, chorus) in a more consistent and efficient way than classic non-negative matrix factorization. This technique is capable of identifying the boundaries of the sections and then grouping them into different clusters. 
Additionally, we evaluate this method on two different datasets and show that it is competitive compared to other music segmentation techniques, outperforming other matrix factorization methods." ] }
1903.06008
2920845749
The behavior of users of music streaming services is investigated from the point of view of the temporal dimension of individual songs; specifically, the main object of the analysis is the point in time within a song at which users stop listening and start streaming another song ("skip"). The main contribution of this study is the ascertainment of a correlation between the distribution in time of skipping events and the musical structure of songs. It is also shown that such distribution is not only specific to the individual songs, but also independent of the cohort of users and, under stationary conditions, date of observation. Finally, user behavioral data is used to train a predictor of the musical structure of a song solely from its acoustic content; it is shown that the use of such data, available in large quantities to music streaming services, yields significant improvements in accuracy over the customary fashion of training this class of algorithms, in which only smaller amounts of hand-labeled data are available.
Newer approaches, based on more recent Machine Learning techniques, include @cite_7 and @cite_12 , which achieve state-of-the-art prediction accuracy by making use of a Convolutional Neural Network architecture.
{ "cite_N": [ "@cite_12", "@cite_7" ], "mid": [ "2399621629", "2404620398" ], "abstract": [ "The determination of structural boundaries is a key task for understanding the structure of a musical piece, but it is also highly ambiguous. Recently, Convolutional Neural Networks (CNN) trained on spectrogram features and human annotations have been successfully used to tackle the problem, but still fall clearly behind human performance. We expand on the CNN approach by combining spectrograms with self-similarity lag matrices as audio features, thereby capturing more facets of the underlying structural information. Furthermore, in order to consider the hierarchical nature of structural organization, we explore different strategies to learn from the two-level annotations of main and secondary boundaries available in the SALAMI structural annotation dataset. We show that both measures improve boundary recognition performance, resulting in a significant improvement over the previous state of the art. As a side-effect, our algorithm can predict boundaries on two different structural levels, equivalent to the training data.", "The recognition of boundaries, e.g., between chorus and verse, is an important task in music structure analysis. The goal is to automatically detect such boundaries in audio signals so that the results are close to human annotation. In this work, we apply Convolutional Neural Networks to the task, trained directly on mel-scaled magnitude spectrograms. On a representative subset of the SALAMI structural annotation dataset, our method outperforms current techniques in terms of boundary retrievalF -measure at different temporal tolerances: We advance the state-of-the-art from 0:33 to 0:46 for tolerances of0:5 seconds, and from 0:52 to 0:62 for tolerances of3 seconds. 
As the algorithm is trained on annotated audio data without the need of expert knowledge, we expect it to be easily adaptable to changed annotation guidelines and also to related tasks such as the detection of song transitions." ] }
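The self-similarity features mentioned in the abstracts above predate the CNN approaches: a classical baseline computes a self-similarity matrix over per-frame features and scores boundary candidates by correlating a checkerboard "novelty" kernel along its diagonal. The sketch below is an illustrative, non-neural stand-in for that idea; the function names and toy features are hypothetical, not taken from the cited papers.

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors (0.0 if either is zero)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def self_similarity(frames):
    """Pairwise cosine similarity between all feature frames."""
    n = len(frames)
    return [[cosine(frames[i], frames[j]) for j in range(n)] for i in range(n)]

def novelty_curve(ssm, k=2):
    """Foote-style novelty: correlate a 2k-by-2k checkerboard kernel along the
    diagonal of the self-similarity matrix. High values indicate a boundary
    between two internally homogeneous segments."""
    n = len(ssm)
    nov = [0.0] * n
    for t in range(k, n - k):
        s = 0.0
        for i in range(-k, k):
            for j in range(-k, k):
                # Same-side cells (both before or both after t) count positively,
                # cross-side cells negatively: the checkerboard pattern.
                sign = 1.0 if (i < 0) == (j < 0) else -1.0
                s += sign * ssm[t + i][t + j]
        nov[t] = s
    return nov
```

A CNN-based detector effectively replaces this hand-designed kernel with learned filters over the same kind of input.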
1903.05965
2921127294
How to obtain a model with good interpretability and performance has always been an important research topic. In this paper, we propose rectified decision trees (ReDT), a knowledge distillation based decision trees rectification with high interpretability, small model size, and empirical soundness. Specifically, we extend the impurity calculation and the pure ending condition of the classical decision tree to propose a decision tree extension that allows the use of soft labels generated by a well-trained teacher model in training and prediction process. It is worth noting that for the acquisition of soft labels, we propose a new multiple cross-validation based method to reduce the effects of randomness and overfitting. These approaches ensure that ReDT retains excellent interpretability and even achieves fewer nodes than the decision tree in the aspect of compression while having relatively good performance. Besides, in contrast to traditional knowledge distillation, back propagation of the student model is not necessarily required in ReDT, which is an attempt of a new knowledge distillation approach. Extensive experiments are conducted, which demonstrates the superiority of ReDT in interpretability, compression, and empirical soundness.
The interpretability of complex machine learning models, especially ensemble approaches and deep learning, has attracted wide attention. At present, the most widely used machine learning models are forest-based algorithms and DNNs, so their interpretability is of great significance. There are a few previous studies on the interpretability of forest-based algorithms. The first is by Breiman @cite_15 , who proposes using a tree ensemble to generate additional samples for the construction of a single decision tree. In @cite_29 , node harvest is proposed to simplify tree ensembles by using the shallow parts of the trees. Treating the simplification of tree ensembles as a model selection problem and applying a Bayesian method for the selection is proposed in @cite_16 . Interpretability research on DNNs focuses mainly on three aspects: visualizing the representations in intermediate layers of DNNs @cite_31 @cite_6 , diagnosing representations @cite_24 @cite_30 , and building explainable DNNs @cite_18 @cite_21 . Recently, a knowledge distillation based method was proposed that uses a trained DNN to create a more explainable model in the form of a soft decision tree @cite_11 .
{ "cite_N": [ "@cite_30", "@cite_31", "@cite_18", "@cite_29", "@cite_21", "@cite_6", "@cite_24", "@cite_15", "@cite_16", "@cite_11" ], "mid": [ "2765787895", "", "2963226019", "", "2751777443", "2963081790", "2949667497", "", "2963463132", "2769421449" ], "abstract": [ "Given a pre-trained CNN without any testing samples, this paper proposes a simple yet effective method to diagnose feature representations of the CNN. We aim to discover representation flaws caused by potential dataset bias. More specifically, when the CNN is trained to estimate image attributes, we mine latent relationships between representations of different attributes inside the CNN. Then, we compare the mined attribute relationships with ground-truth attribute relationships to discover the CNN's blind spots and failure modes due to dataset bias. In fact, representation flaws caused by dataset bias cannot be examined by conventional evaluation strategies based on testing images, because testing images may also have a similar bias. Experiments have demonstrated the effectiveness of our method.", "", "This paper describes InfoGAN, an information-theoretic extension to the Generative Adversarial Network that is able to learn disentangled representations in a completely unsupervised manner. InfoGAN is a generative adversarial network that also maximizes the mutual information between a small subset of the latent variables and the observation. We derive a lower bound of the mutual information objective that can be optimized efficiently. Specifically, InfoGAN successfully disentangles writing styles from digit shapes on the MNIST dataset, pose from lighting of 3D rendered images, and background digits from the central digit on the SVHN dataset. It also discovers visual concepts that include hair styles, presence absence of eyeglasses, and emotions on the CelebA face dataset. 
Experiments show that InfoGAN learns interpretable representations that are competitive with representations learned by existing supervised methods. For an up-to-date version of this paper, please see https://arxiv.org/abs/1606.03657.", "", "A capsule is a group of neurons whose activity vector represents the instantiation parameters of a specific type of entity such as an object or an object part. We use the length of the activity vector to represent the probability that the entity exists and its orientation to represent the instantiation parameters. Active capsules at one level make predictions, via transformation matrices, for the instantiation parameters of higher-level capsules. When multiple predictions agree, a higher level capsule becomes active. We show that a discriminatively trained, multi-layer capsule system achieves state-of-the-art performance on MNIST and is considerably better than a convolutional net at recognizing highly overlapping digits. To achieve these results we use an iterative routing-by-agreement mechanism: A lower-level capsule prefers to send its output to higher level capsules whose activity vectors have a big scalar product with the prediction coming from the lower-level capsule.", "The success of recent deep convolutional neural networks (CNNs) depends on learning hidden representations that can summarize the important factors of variation behind the data. In this work, we describe Network Dissection, a method that interprets networks by providing meaningful labels to their individual units. The proposed method quantifies the interpretability of CNN representations by evaluating the alignment between individual hidden units and visual semantic concepts. By identifying the best alignments, units are given interpretable labels ranging from colors, materials, textures, parts, objects and scenes. 
The method reveals that deep representations are more transparent and interpretable than they would be under a random equivalently powerful basis. We apply our approach to interpret and compare the latent representations of several network architectures trained to solve a wide range of supervised and self-supervised tasks. We then examine factors affecting the network interpretability such as the number of the training iterations, regularizations, different initialization parameters, as well as networks depth and width. Finally we show that the interpreted units can be used to provide explicit explanations of a given CNN prediction for an image. Our results highlight that interpretability is an important property of deep neural networks that provides new insights into what hierarchical structures can learn.", "Many deep neural networks trained on natural images exhibit a curious phenomenon in common: on the first layer they learn features similar to Gabor filters and color blobs. Such first-layer features appear not to be specific to a particular dataset or task, but general in that they are applicable to many datasets and tasks. Features must eventually transition from general to specific by the last layer of the network, but this transition has not been studied extensively. In this paper we experimentally quantify the generality versus specificity of neurons in each layer of a deep convolutional neural network and report a few surprising results. Transferability is negatively affected by two distinct issues: (1) the specialization of higher layer neurons to their original task at the expense of performance on the target task, which was expected, and (2) optimization difficulties related to splitting networks between co-adapted neurons, which was not expected. In an example network trained on ImageNet, we demonstrate that either of these two issues may dominate, depending on whether features are transferred from the bottom, middle, or top of the network. 
We also document that the transferability of features decreases as the distance between the base task and target task increases, but that transferring features even from distant tasks can be better than using random features. A final surprising result is that initializing a network with transferred features from almost any number of layers can produce a boost to generalization that lingers even after fine-tuning to the target dataset.", "", "", "Deep neural networks have proved to be a very effective way to perform classification tasks. They excel when the input data is high dimensional, the relationship between the input and the output is complicated, and the number of labeled training examples is large. But it is hard to explain why a learned network makes a particular classification decision on a particular test case. This is due to their reliance on distributed hierarchical representations. If we could take the knowledge acquired by the neural net and express the same knowledge in a model that relies on hierarchical decisions instead, explaining a particular decision would be much easier. We describe a way of using a trained neural net to create a type of soft decision tree that generalizes better than one learned directly from the training data." ] }
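The ReDT abstract above describes extending a decision tree's impurity calculation to the soft labels produced by a teacher model. One plausible minimal form of such an extension, shown here purely as a sketch (this is not the paper's exact rule), averages the per-sample soft distributions at a node and applies Gini impurity to the averaged distribution; with one-hot labels it reduces to the classical Gini criterion.

```python
def soft_gini(soft_labels):
    """Gini impurity over soft labels: average the per-sample class
    distributions at a node, then compute 1 - sum_k p_k^2.

    soft_labels: list of per-sample class-probability vectors, e.g. the
    temperature-softened outputs of a teacher model (hypothetical input)."""
    n = len(soft_labels)
    k = len(soft_labels[0])
    mean = [sum(s[c] for s in soft_labels) / n for c in range(k)]
    return 1.0 - sum(p * p for p in mean)
```

A split criterion can then pick the feature threshold minimizing the weighted soft impurity of the two children, exactly as with hard labels.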
1903.05965
2921127294
How to obtain a model with good interpretability and performance has always been an important research topic. In this paper, we propose rectified decision trees (ReDT), a knowledge distillation based decision trees rectification with high interpretability, small model size, and empirical soundness. Specifically, we extend the impurity calculation and the pure ending condition of the classical decision tree to propose a decision tree extension that allows the use of soft labels generated by a well-trained teacher model in training and prediction process. It is worth noting that for the acquisition of soft labels, we propose a new multiple cross-validation based method to reduce the effects of randomness and overfitting. These approaches ensure that ReDT retains excellent interpretability and even achieves fewer nodes than the decision tree in the aspect of compression while having relatively good performance. Besides, in contrast to traditional knowledge distillation, back propagation of the student model is not necessarily required in ReDT, which is an attempt of a new knowledge distillation approach. Extensive experiments are conducted, which demonstrates the superiority of ReDT in interpretability, compression, and empirical soundness.
Recently, knowledge distillation has been widely adopted as a compression method. The concept of knowledge distillation in the teacher-student framework, which introduces the teacher's softened output, was first proposed in @cite_13 . Since then, a series of improvements and applications of knowledge distillation have been proposed @cite_14 @cite_12 . At present, almost all knowledge distillation work focuses on the compression of DNNs and requires back-propagation of the student model. Besides, using knowledge distillation to distill a DNN into a soft decision tree, achieving both interpretability and compressibility, was recently proposed in @cite_11 . This method can be regarded as the first attempt to apply knowledge distillation to interpretability.
{ "cite_N": [ "@cite_14", "@cite_13", "@cite_12", "@cite_11" ], "mid": [ "2964118293", "1821462560", "2739879705", "2769421449" ], "abstract": [ "Abstract: While depth tends to improve network performances, it also makes gradient-based training more difficult since deeper networks tend to be more non-linear. The recently proposed knowledge distillation approach is aimed at obtaining small and fast-to-execute models, and it has shown that a student network could imitate the soft output of a larger teacher network or ensemble of networks. In this paper, we extend this idea to allow the training of a student that is deeper and thinner than the teacher, using not only the outputs but also the intermediate representations learned by the teacher as hints to improve the training process and final performance of the student. Because the student intermediate hidden layer will generally be smaller than the teacher's intermediate hidden layer, additional parameters are introduced to map the student hidden layer to the prediction of the teacher hidden layer. This allows one to train deeper students that can generalize better or run faster, a trade-off that is controlled by the chosen student capacity. For example, on CIFAR-10, a deep student network with almost 10.4 times less parameters outperforms a larger, state-of-the-art teacher network.", "A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. 
We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.", "We introduce a novel technique for knowledge transfer, where knowledge from a pretrained deep neural network (DNN) is distilled and transferred to another DNN. As the DNN performs a mapping from the input space to the output space through many layers sequentially, we define the distilled knowledge to be transferred in terms of flow between layers, which is calculated by computing the inner product between features from two layers. When we compare the student DNN and the original network with the same size as the student DNN but trained without a teacher network, the proposed method of transferring the distilled knowledge as the flow between two layers exhibits three important phenomena: (1) the student DNN that learns the distilled knowledge is optimized much faster than the original model, (2) the student DNN outperforms the original DNN, and (3) the student DNN can learn the distilled knowledge from a teacher DNN that is trained at a different task, and the student DNN outperforms the original DNN that is trained from scratch.", "Deep neural networks have proved to be a very effective way to perform classification tasks. They excel when the input data is high dimensional, the relationship between the input and the output is complicated, and the number of labeled training examples is large. But it is hard to explain why a learned network makes a particular classification decision on a particular test case. 
This is due to their reliance on distributed hierarchical representations. If we could take the knowledge acquired by the neural net and express the same knowledge in a model that relies on hierarchical decisions instead, explaining a particular decision would be much easier. We describe a way of using a trained neural net to create a type of soft decision tree that generalizes better than one learned directly from the training data." ] }
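The teacher's "softened output" referred to in @cite_13 is a temperature-scaled softmax, and the student is trained against it with a cross-entropy term. A minimal sketch of just that soft-target term (omitting the hard-label term and any training loop; the values below are illustrative):

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax. T > 1 flattens the distribution, exposing
    the teacher's relative confidences on the non-target classes."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=4.0):
    """Cross-entropy between the softened teacher and student distributions
    (the soft-target term of the distillation objective)."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))
```

With a higher temperature the teacher distribution moves toward uniform, so the student also receives signal about how the teacher ranks the wrong classes, which is precisely what a hard one-hot label discards.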
1903.05956
2921999720
We design fast deterministic algorithms for distance computation in the congested clique model. Our key contributions include: -- A @math -approximation for all-pairs shortest paths in @math rounds on unweighted undirected graphs. With a small additional additive factor, this also applies for weighted graphs. This is the first sub-polynomial constant-factor approximation for APSP in this model. -- A @math -approximation for multi-source shortest paths from @math sources in @math rounds on weighted undirected graphs. This is the first sub-polynomial algorithm obtaining this approximation for a set of sources of polynomial size. Our main techniques are new distance tools that are obtained via improved algorithms for sparse matrix multiplication, which we leverage to construct efficient hopsets and shortest paths. Furthermore, our techniques extend to additional distance problems for which we improve upon the state-of-the-art, including diameter approximation, and an exact single-source shortest paths algorithm for weighted undirected graphs in @math rounds.
As shown by @cite_34 , matrix multiplication can be done in @math rounds over semirings, and in @math rounds over rings, where @math is the exponent of matrix multiplication @cite_66 . For rectangular matrix multiplication, @cite_52 gives faster algorithms. The first sparse matrix multiplication algorithms for the congested clique were given by @cite_59 , as discussed above.
{ "cite_N": [ "@cite_52", "@cite_34", "@cite_66", "@cite_59" ], "mid": [ "2511108295", "2950813619", "2120248756", "2914674183" ], "abstract": [ "Censor- [PODC’15] recently showed how to efficiently implement centralized algebraic algorithms for matrix multiplication in the congested clique model, a model of distributed computing that has received increasing attention in the past few years. This paper develops further algebraic techniques for designing algorithms in this model. We present deterministic and randomized algorithms, in the congested clique model, for efficiently computing multiple independent instances of matrix products, computing the determinant, the rank and the inverse of a matrix, and solving systems of linear equations. As applications of these techniques, we obtain more efficient algorithms for the computation, again in the congested clique model, of the all-pairs shortest paths and the diameter in directed and undirected graphs with small weights, improving over Censor-’s work. We also obtain algorithms for several other graph-theoretic problems such as computing the number of edges in a maximum matching and the Gallai-Edmonds decomposition of a simple graph, and computing a minimum vertex cover of a bipartite graph.", "In this work, we use algebraic methods for studying distance computation and subgraph detection tasks in the congested clique model. Specifically, we adapt parallel matrix multiplication implementations to the congested clique, obtaining an @math round matrix multiplication algorithm, where @math is the exponent of matrix multiplication. In conjunction with known techniques from centralised algorithmics, this gives significant improvements over previous best upper bounds in the congested clique model. 
The highlight results include: -- triangle and 4-cycle counting in @math rounds, improving upon the @math triangle detection algorithm of [DISC 2012], -- a @math -approximation of all-pairs shortest paths in @math rounds, improving upon the @math -round @math -approximation algorithm of Nanongkai [STOC 2014], and -- computing the girth in @math rounds, which is the first non-trivial solution in this model. In addition, we present a novel constant-round combinatorial algorithm for detecting 4-cycles.", "This paper presents a method to analyze the powers of a given trilinear form (a special kind of algebraic construction also called a tensor) and obtain upper bounds on the asymptotic complexity of matrix multiplication. Compared with existing approaches, this method is based on convex optimization, and thus has polynomial-time complexity. As an application, we use this method to study powers of the construction given by Coppersmith and Winograd [Journal of Symbolic Computation, 1990] and obtain the upper bound ω", "" ] }
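Matrix multiplication "over semirings", as used for distance computation above, replaces (+, x) by (min, +): squaring the weight matrix O(log n) times under this product yields all-pairs shortest paths. A sequential sketch of that distance product (an illustration of the algebraic idea only, not the distributed congested-clique algorithm):

```python
INF = float("inf")

def min_plus(A, B):
    """Matrix 'product' over the (min, +) semiring: entry (i, j) is the
    cheapest i -> j cost using one 'hop' through each factor."""
    n, m, p = len(A), len(B), len(B[0])
    return [[min(A[i][k] + B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def apsp(weights):
    """All-pairs shortest paths by repeated squaring over (min, +).
    `weights` must have 0 on the diagonal and INF where no edge exists.
    After squaring ceil(log2(n-1)) times, all shortest paths are covered."""
    n = len(weights)
    D = weights
    hops = 1
    while hops < n - 1:
        D = min_plus(D, D)   # doubles the number of hops accounted for
        hops *= 2
    return D
```

The sparse-matrix-multiplication results above speed up exactly this kind of distance product when the matrices have few finite entries.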
1903.05956
2921999720
We design fast deterministic algorithms for distance computation in the congested clique model. Our key contributions include: -- A @math -approximation for all-pairs shortest paths in @math rounds on unweighted undirected graphs. With a small additional additive factor, this also applies for weighted graphs. This is the first sub-polynomial constant-factor approximation for APSP in this model. -- A @math -approximation for multi-source shortest paths from @math sources in @math rounds on weighted undirected graphs. This is the first sub-polynomial algorithm obtaining this approximation for a set of sources of polynomial size. Our main techniques are new distance tools that are obtained via improved algorithms for sparse matrix multiplication, which we leverage to construct efficient hopsets and shortest paths. Furthermore, our techniques extend to additional distance problems for which we improve upon the state-of-the-art, including diameter approximation, and an exact single-source shortest paths algorithm for weighted undirected graphs in @math rounds.
The distributed CONGEST model is identical to the congested clique model, with the difference that the communication network is the input graph @math , and nodes can communicate only with their neighbours in each round. Distance computation is extensively studied in the CONGEST model. The study of APSP in weighted graphs has been the focus of many recent papers @cite_23 @cite_47 @cite_30 @cite_9 , culminating in a near-tight @math algorithm @cite_23 . Such results were previously known for unweighted graphs @cite_32 @cite_43 @cite_28 or for approximation algorithms @cite_38 @cite_17 . Approximate and exact algorithms for SSSP are studied in @cite_2 @cite_17 @cite_33 @cite_15 @cite_57 @cite_67 @cite_9 . While near-tight algorithms exist for approximating SSSP @cite_2 , there is currently much interest in understanding the complexity of exact SSSP @cite_41 @cite_67 @cite_9 . The source detection problem is studied in @cite_21 , demonstrating the applicability of this tool for many distance problems such as APSP and diameter approximation in unweighted graphs. An extension to the weighted case is studied in @cite_38 . Algorithms and lower bounds for approximating the diameter are studied in @cite_21 @cite_55 @cite_28 @cite_31 @cite_10 .
{ "cite_N": [ "@cite_30", "@cite_47", "@cite_41", "@cite_43", "@cite_2", "@cite_15", "@cite_10", "@cite_38", "@cite_67", "@cite_21", "@cite_23", "@cite_17", "@cite_28", "@cite_55", "@cite_32", "@cite_57", "@cite_33", "@cite_9", "@cite_31" ], "mid": [ "", "", "2963784922", "", "2963618817", "", "", "2040273581", "", "2107282727", "2900102855", "", "", "", "2048098617", "", "", "", "" ], "abstract": [ "", "", "We devise new algorithms for the single-source shortest paths (SSSP) problem with non-negative edge weights in the CONGEST model of distributed computing. While close-to-optimal solutions, in terms of the number of rounds spent by the algorithm, have recently been developed for computing SSSP approximately, the fastest known exact algorithms are still far away from matching the lower bound of Ω(√n + D) rounds by Peleg and Rubinovich [SIAM Journal on Computing 2000], where n is the number of nodes in the network and D is its diameter. The state of the art is Elkin's randomized algorithm [STOC 2017] that performs O(n^{2/3} D^{1/3} + n^{5/6}) rounds. We significantly improve upon this upper bound with our two new randomized algorithms for polynomially bounded integer edge weights, the first performing O(√(nD)) rounds and the second performing O(√n D^{1/4} + n^{3/5} + D) rounds. Our bounds also compare favorably to the independent result by Ghaffari and Li [STOC 2018]. As side results, we obtain a (1+ε)-approximation O((√n D^{1/4} + D)/ε)-round algorithm for directed SSSP and a new work/depth trade-off for exact SSSP on directed graphs in the PRAM model.", "", "A (β, ε)-hopset for a weighted undirected n-vertex graph G = (V, E) is a set of edges, whose addition to the graph guarantees that every pair of vertices has a path between them that contains at most β edges, whose length is within 1 + ε of the shortest path. 
In her seminal paper, Cohen [8, JACM 2000] introduced the notion of hopsets in the context of parallel computation of approximate shortest paths, and since then it has found numerous applications in various other settings, such as dynamic graph algorithms, distributed computing, and the streaming model. Cohen [8] devised efficient algorithms for constructing hopsets with polylogarithmic in n number of hops. Her constructions remain the state-of-the-art since the publication of her paper in STOC'94, i.e., for more than two decades. In this paper we exhibit the first construction of sparse hopsets with a constant number of hops. We also find efficient algorithms for hopsets in various computational settings, improving the best known constructions. Generally, our hopsets strictly outperform the hopsets of [8], both in terms of their parameters, and in terms of the resources required to construct them. We demonstrate the applicability of our results for the fundamental problem of computing approximate shortest paths from s sources. Our results improve the running time for this problem in the parallel, distributed and streaming models, for a vast range of s.", "", "", "We study approximate distributed solutions to the weighted all-pairs shortest-paths (APSP) problem in the CONGEST model. We obtain the following results. A deterministic (1+epsilon)-approximation to APSP with running time O(epsilon^{-2} n log n) rounds. The best previously known algorithm was randomized and slower by a Theta(log n) factor. In many cases, routing schemes involve relabeling, i.e., assigning new names to nodes, which are then used in distance and routing queries. It is known that relabeling is necessary to achieve running times of o(n log n). In the relabeling model, we obtain the following results. A randomized O(k)-approximation to APSP, for any integer k>1, running in O(n^{1/2+1/k} + D) rounds, where D is the hop diameter of the network. 
This algorithm simplifies the best previously known result and reduces its approximation ratio from O(k log k) to O(k). Also, the new algorithm uses O(log n)-bit labels, which is asymptotically optimal. A randomized O(k)-approximation to APSP, for any integer k>1, running in time O((nD)^{1/2} n^{1/k} + D) and producing compact routing tables of size O(n^{1/k}). The node labels consist of O(k log n) bits. This improves on the approximation ratio of Theta(k^2) for tables of that size achieved by the best previously known algorithm, which terminates faster, in O(n^{1/2+1/k} + D) rounds. In addition, we improve on the time complexity of the best known deterministic algorithm for distributed approximate Steiner forest.", "", "Given a simple graph G=(V,E) and a set of sources S ⊆ V, denote for each node v ∈ V by L_v(∞) the lexicographically ordered list of distance/source pairs (d(s,v),s), where s ∈ S. For integers d,k ∈ N ∪ {∞}, we consider the source detection, or (S,d,k)-detection task, requiring each node v to learn the first k entries of L_v(∞) (if for all of them d(s,v) ≤ d) or all entries (d(s,v),s) ∈ L_v(∞) satisfying that d(s,v) ≤ d (otherwise). Solutions to this problem provide natural generalizations of concurrent breadth-first search (BFS) tree constructions. For example, the special case of k=∞ requires each source s ∈ S to build a complete BFS tree rooted at s, whereas the special case of d=∞ and S=V requires constructing a partial BFS tree comprising at least k nodes from every node in V. In this work, we give a simple, near-optimal solution for the source detection task in the CONGEST model, where messages contain at most O(log n) bits, running in d+k rounds. We demonstrate its utility for various routing problems, exact and approximate diameter computation, and spanner construction. 
For those problems, we obtain algorithms in the CONGEST model that are faster and in some cases much simpler than previous solutions.", "In the distributed all-pairs shortest paths problem (APSP), every node in the weighted undirected distributed network (the CONGEST model) needs to know the distance from every other node using least number of communication rounds (typically called time complexity ). The problem admits @math -approximation @math -time algorithm and a nearly-tight @math lower bound [Nanongkai, STOC'14; Lenzen and Patt-Shamir PODC'15] @math , @math and @math hide polylogarithmic factors. Note that the lower bounds also hold even in the unweighted case and in the weighted case with polynomial approximation ratios LenzenP_podc13,HolzerW12,PelegRT12,Nanongkai-STOC14 . . For the exact case, Elkin [STOC'17] presented an @math time bound, which was later improved to @math [Huang, Nanongkai, Saranurak FOCS'17]. It was shown that any super-linear lower bound (in @math ) requires a new technique [Censor-Hillel, Khoury, Paz, DISC'17], but otherwise it remained widely open whether there exists a @math -time algorithm for the exact case, which would match the best possible approximation algorithm. This paper resolves this question positively: we present a randomized (Las Vegas) @math -time algorithm, matching the lower bound up to polylogarithmic factors. Like the previous @math bound, our result works for directed graphs with zero (and even negative) edge weights. In addition to the improved running time, our algorithm works in a more general setting than that required by the previous @math bound; in our setting (i) the communication is only along edge directions (as opposed to bidirectional), and (ii) edge weights are arbitrary (as opposed to integers in 1, 2, ... poly(n) ). ...", "", "", "", "We present an algorithm to compute All Pairs Shortest Paths (APSP) of a network in a distributed way. 
The model of distributed computation we consider is the message passing model: in each synchronous round, every node can transmit a different (but short) message to each of its neighbors. We provide an algorithm that computes APSP in O(n) communication rounds, where n denotes the number of nodes in the network. This implies a linear time algorithm for computing the diameter of a network. Due to a lower bound, these two algorithms are optimal up to a logarithmic factor. Furthermore, we present a new lower bound for approximating the diameter D of a graph: Being allowed to answer D+1 or D can speed up the computation by at most a factor D. On the positive side, we provide an algorithm that achieves such a speedup of D and computes a (1+epsilon) multiplicative approximation of the diameter. We extend these algorithms to compute or approximate other problems, such as girth, radius, center and peripheral vertices. At the heart of these approximation algorithms is the S-Shortest Paths problem which we solve in O(|S|+D) time.", "", "", "", "" ] }
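The (S,d,k)-source-detection task in the abstract above generalizes concurrent BFS. Stripped of the distributed aspects, its simplest special case computes, for every node, the distance to the nearest source in S. A sequential sketch of that primitive (the graph representation and names here are illustrative, not from the cited papers):

```python
from collections import deque

def multi_source_bfs(adj, sources):
    """Unweighted distance from each node to its nearest source: seed the BFS
    queue with all of S at distance 0, then expand level by level. This is the
    sequential core of source-detection-style primitives, not the CONGEST
    algorithm itself."""
    dist = {v: None for v in adj}        # None marks "not reached yet"
    q = deque()
    for s in sources:
        dist[s] = 0
        q.append(s)
    while q:
        u = q.popleft()
        for v in adj[u]:
            if dist[v] is None:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist
```

Running one such BFS per source (or truncating each list to its k closest sources) recovers the full (S,d,k)-detection output in the centralized setting.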
1903.05956
2921999720
We design fast deterministic algorithms for distance computation in the congested clique model. Our key contributions include: -- A @math -approximation for all-pairs shortest paths in @math rounds on unweighted undirected graphs. With a small additional additive factor, this also applies for weighted graphs. This is the first sub-polynomial constant-factor approximation for APSP in this model. -- A @math -approximation for multi-source shortest paths from @math sources in @math rounds on weighted undirected graphs. This is the first sub-polynomial algorithm obtaining this approximation for a set of sources of polynomial size. Our main techniques are new distance tools that are obtained via improved algorithms for sparse matrix multiplication, which we leverage to construct efficient hopsets and shortest paths. Furthermore, our techniques extend to additional distance problems for which we improve upon the state-of-the-art, including diameter approximation, and an exact single-source shortest paths algorithm for weighted undirected graphs in @math rounds.
Hopsets are a central building block in distance computation and are studied extensively in various computing models @cite_68 @cite_17 @cite_5 @cite_9 @cite_26 @cite_49 @cite_18 @cite_4 @cite_27 . The most related to our work are two recent constructions of Elkin and Neiman @cite_14 , and Huang and Pettie @cite_6 , which are based on the emulators of Thorup and Zwick @cite_65 , and are near optimal due to existential results @cite_56 . Specifically, @cite_6 construct @math -hopsets of size @math with @math , where recent existential results show that any construction of @math -hopsets with worst case size @math must have @math , where @math is an integer and @math . For a detailed discussion of hopsets see the introduction in @cite_14 @cite_26 .
{ "cite_N": [ "@cite_18", "@cite_26", "@cite_4", "@cite_14", "@cite_9", "@cite_65", "@cite_6", "@cite_56", "@cite_27", "@cite_49", "@cite_5", "@cite_68", "@cite_17" ], "mid": [ "", "", "", "2610522381", "", "2033698604", "2963983736", "2902989566", "", "", "", "816673839", "" ], "abstract": [ "", "", "", "For a positive parameter @math , the @math -bounded distance between a pair of vertices @math in a weighted undirected graph @math is the length of the shortest @math path in @math with at most @math edges, aka hops . For @math as above and @math , a @math -hopset of @math is a graph @math on the same vertex set, such that all distances in @math are @math -approximated by @math -bounded distances in @math . Hopsets are a fundamental graph-theoretic and graph-algorithmic construct, and they are widely used for distance-related problems in a variety of computational settings. Currently existing constructions of hopsets produce hopsets either with @math edges, or with a hopbound @math . In this paper we devise a construction of linear-size hopsets with hopbound @math . This improves the previous bound almost exponentially. We also devise efficient implementations of our construction in PRAM and distributed settings. The only existing PRAM algorithm EN16 for computing hopsets with a constant (i.e., independent of @math ) hopbound requires @math time. We devise a PRAM algorithm with polylogarithmic running time for computing hopsets with a constant hopbound, i.e., our running time is exponentially better than the previous one. Moreover, these hopsets are also significantly sparser than their counterparts from EN16 . 
We use our hopsets to devise a distributed routing scheme that exhibits near-optimal tradeoff between individual memory requirement @math of vertices throughout preprocessing and routing phases of the algorithm, and stretch @math , along with a near-optimal construction time @math , where @math is the hop-diameter of the input graph.", "", "Let k ≥ 2 be an integer. We show that any undirected and unweighted graph G = (V, E) on n vertices has a subgraph G' = (V, E') with O(kn1+1 k) edges such that for any two vertices u, v ∈ V, if Δ G (u, v) = d, then Δ G' (u, v) = d+O(d1-1 k-1). Furthermore, we show that such subgraphs can be constructed in O(mn1 k) time, where m and n are the number of edges and vertices in the original graph. We also show that it is possible to construct a weighted graph G* = (V, E*) with O(kn1+1 (2k-1)) edges such that for every u, v ∈ V, if Δ G (u, v) = d, then Δ ≤ Δ G* (u, v) = d + O(d1-1 k-1). These are the first such results with additive error terms of the form o(d), i.e., additive error terms that are sublinear in the distance being approximated.", "Abstract A ( β , ϵ ) -hopset is, informally, a weighted edge set that, when added to a graph, allows one to get from point a to point b using a path with at most β edges (“hops”) and length ( 1 + ϵ ) dist ( a , b ) . In this paper we observe that Thorup and Zwick's sublinear additive emulators are also actually ( O ( k ϵ ) k , ϵ ) -hopsets for every ϵ > 0 , and that with a small change to the Thorup–Zwick construction, the size of the hopset can be made O ( n 1 + 1 2 k + 1 − 1 ) . As corollaries, we also shave “k” factors off the size of Thorup and Zwick's [20] sublinear additive emulators and the sparsest known ( 1 + ϵ , O ( k ϵ ) k − 1 ) -spanners, due to Abboud, Bodwin, and Pettie [1] .", "Spanners, emulators, and approximate distance oracles can be viewed as lossy compression schemes that represent an unweighted graph metric in small space, say @math bits. 
There ...", "", "", "", "We present a deterministic (1+o(1))-approximation O(n1 2+o(1)+D1+o(1))-time algorithm for solving the single-source shortest paths problem on distributed weighted networks (the CONGEST model); here n is the number of nodes in the network and D is its (hop) diameter. This is the first non-trivial deterministic algorithm for this problem. It also improves (i) the running time of the randomized (1+o(1))-approximation O(n1 2D1 4+D)-time algorithm of Nanongkai [STOC 2014] by a factor of as large as n1 8, and (ii) the O(є−1logє−1)-approximation factor of Lenzen and Patt-Shamir’s O(n1 2+є+D)-time algorithm [STOC 2013] within the same running time. Our running time matches the known time lower bound of Ω(n1 2 logn + D) [Das STOC 2011] modulo some lower-order terms, thus essentially settling the status of this problem which was raised at least a decade ago [Elkin SIGACT News 2004]. It also implies a (2+o(1))-approximation O(n1 2+o(1)+D1+o(1))-time algorithm for approximating a network’s weighted diameter which almost matches the lower bound by [PODC 2012]. In achieving this result, we develop two techniques which might be of independent interest and useful in other settings: (i) a deterministic process that replaces the “hitting set argument” commonly used for shortest paths computation in various settings, and (ii) a simple, deterministic, construction of an (no(1), o(1))-hop set of size O(n1+o(1)). We combine these techniques with many distributed algorithmic techniques, some of which from problems that are not directly related to shortest paths, e.g. ruling sets [ STOC 1987], source detection [Lenzen, Peleg PODC 2013], and partial distance estimation [Lenzen, Patt-Shamir PODC 2015]. 
Our hop set construction also leads to single-source shortest paths algorithms in two other settings: (i) a (1+o(1))-approximation O(no(1))-time algorithm on congested cliques, and (ii) a (1+o(1))-approximation O(no(1)logW)-pass O(n1+o(1)logW)-space streaming algorithm, when edge weights are in 1, 2, …, W . The first result answers an open problem in [Nanongkai, STOC 2014]. The second result partially answers an open problem raised by McGregor in 2006 [ sublinear.info , Problem 14].", "" ] }
1903.05675
2922398462
Phishing, as one of the most well-known cybercrime activities, is a deception of online users to steal their personal or confidential information by impersonating a legitimate website. Several machine learning-based strategies have been proposed to detect phishing websites. These techniques are dependent on the features extracted from the website samples. However, few studies have actually considered efficient feature selection for detecting phishing attacks. In this work, we investigate an agreement on the definitive features which should be used in phishing detection. We apply Fuzzy Rough Set (FRS) theory as a tool to select the most effective features from three benchmarked data sets. The selected features are fed into three often-used classifiers for phishing detection. To evaluate the FRS feature selection in developing a generalizable phishing detection, the classifiers are trained by a separate out-of-sample data set of 14,000 website samples. The maximum F-measure gained by FRS feature selection is 95% using Random Forest classification. Also, there are 9 universal features selected by FRS over all three data sets. The F-measure value using this universal feature set is approximately 93%, which is a comparable result in contrast to the FRS performance. Since the universal feature set contains no features from third-party services, this finding implies that with no inquiry from external sources, we can gain a faster phishing detection which is also robust toward zero-day attacks.
ML-based approaches to detecting phishing websites are an active research area that employs a wide range of supervised classification techniques to segregate the phishing class. Feng et al. propose a novel neural network for phishing detection @cite_4 . They improve the generalization ability of the network by designing a risk minimization principle. Performance of the proposed network is evaluated over a UCI repository https://archive.ics.uci.edu/ml/datasets/phishing+websites containing 11,055 samples labeled as phishing or legitimate. The dataset also specifies 30 features for each website, categorized as Address bar-based, Abnormal-based, HTML/JavaScript-based, and Domain-based features. Rao and Pais propose a novel algorithm to detect phishing websites using both machine learning techniques and image checking. They also extract features from the URL, website content, and third-party services @cite_16 . It is worth mentioning that although using features from third-party services can increase the detection time, it increases the detection accuracy in practice @cite_2 . They evaluate the performance of the proposed algorithm over 1407 legitimate and 2119 phishing websites collected from PhishTank http://www.phishtank.com and the Alexa database http://www.alexa.com , respectively.
{ "cite_N": [ "@cite_16", "@cite_4", "@cite_2" ], "mid": [ "2782171552", "2801494564", "2890718808" ], "abstract": [ "Phishing is a cyber-attack which targets naive online users, tricking them into revealing sensitive information such as username, password, social security number or credit card number etc. Attackers fool the Internet users by masking a webpage as a trustworthy or legitimate page to retrieve personal information. There are many anti-phishing solutions such as blacklist or whitelist, heuristic and visual similarity-based methods proposed to date, but online users are still getting trapped into revealing sensitive information in phishing websites. In this paper, we propose a novel classification model, based on heuristic features that are extracted from URL, source code, and third-party services to overcome the disadvantages of existing anti-phishing techniques. Our model has been evaluated using eight different machine learning algorithms, out of which the Random Forest (RF) algorithm performed the best with an accuracy of 99.31%. The experiments were repeated with different (orthogonal and oblique) random forest classifiers to find the best classifier for the phishing website detection. Principal component analysis Random Forest (PCA-RF) performed the best out of all oblique Random Forests (oRFs) with an accuracy of 99.55%. We have also tested our model with the third-party-based features and without third-party-based features to determine the effectiveness of third-party services in the classification of suspicious websites. We also compared our results with the baseline models (CANTINA and CANTINA+). Our proposed technique outperformed these methods and also detected zero-day phishing attacks.", "In recent years, security incidents of websites occur increasingly frequently, and this motivates us to study websites’ security. Although there are many phishing detection approaches to detect phishing websites, the detection accuracy has not been desirable. 
In this paper, we propose a novel phishing detection model based on a novel neural network classification method. This detection model can achieve high accuracy and has good generalization ability by designing the risk minimization principle. Furthermore, the training process of the novel detection model is simple and stable by the Monte Carlo algorithm. Based on testing of a set of phishing and benign websites, we have noted that this novel phishing detection model achieves the best Accuracy, True-positive rate (TPR), False-positive rate (FPR), Precision, Recall, F-measure and Matthews Correlation Coefficient (MCC) compared to other models such as Naive Bayes (NB), Logistic Regression (LR), K-Nearest Neighbor (KNN), Decision Tree (DT), Linear Support Vector Machine (LSVM), Radial-Basis Support Vector Machine (RSVM) and Linear Discriminant Analysis (LDA). Furthermore, based upon experiments, we find that the proposed detection model can achieve a high Accuracy of 97.71% and a low FPR of 1.7%. It indicates that the proposed detection model is promising and can be effectively applied to phishing detection.", "Due to the rapid growth of the Internet, users change their preference from traditional shopping to electronic commerce. Instead of bank or shop robbery, nowadays, criminals try to find their victims in the cyberspace with some specific tricks. By using the anonymous structure of the Internet, attackers set out new techniques, such as phishing, to deceive victims with the use of false websites to collect their sensitive information such as account IDs, usernames, passwords, etc. Understanding whether a web page is legitimate or phishing is a very challenging problem, due to its semantics-based attack structure, which mainly exploits the computer users’ vulnerabilities. Although software companies launch new anti-phishing products, which use blacklists, heuristics, visual and machine learning-based approaches, these products cannot prevent all of the phishing attacks. 
In this paper, a real-time anti-phishing system, which uses seven different classification algorithms and natural language processing (NLP) based features, is proposed. The system has the following distinguishing properties from other studies in the literature: language independence, use of a huge size of phishing and legitimate data, real-time execution, detection of new websites, independence from third-party services and use of feature-rich classifiers. For measuring the performance of the system, a new dataset is constructed, and the experimental results are tested on it. According to the experimental and comparative results from the implemented classification algorithms, the Random Forest algorithm with only NLP based features gives the best performance with a 97.98% accuracy rate for detection of phishing URLs." ] }
1903.05675
2922398462
Phishing, as one of the most well-known cybercrime activities, is a deception of online users to steal their personal or confidential information by impersonating a legitimate website. Several machine learning-based strategies have been proposed to detect phishing websites. These techniques are dependent on the features extracted from the website samples. However, few studies have actually considered efficient feature selection for detecting phishing attacks. In this work, we investigate an agreement on the definitive features which should be used in phishing detection. We apply Fuzzy Rough Set (FRS) theory as a tool to select the most effective features from three benchmarked data sets. The selected features are fed into three often-used classifiers for phishing detection. To evaluate the FRS feature selection in developing a generalizable phishing detection, the classifiers are trained by a separate out-of-sample data set of 14,000 website samples. The maximum F-measure gained by FRS feature selection is 95% using Random Forest classification. Also, there are 9 universal features selected by FRS over all three data sets. The F-measure value using this universal feature set is approximately 93%, which is a comparable result in contrast to the FRS performance. Since the universal feature set contains no features from third-party services, this finding implies that with no inquiry from external sources, we can gain a faster phishing detection which is also robust toward zero-day attacks.
Mohammad et al. propose a novel self-structuring neural network for detecting phishing websites @cite_21 . They specify 17 features, some extracted from third-party services, for 600 legitimate and 800 phishing websites collected from the PhishTank and Millersmiles http://www.millersmiles.co.uk archives. Their experiments indicate the high generalizability and ability of the neural network in phishing detection. In another work, they propose a feed-forward neural network trained by back propagation to classify websites @cite_20 . 18 features are specified for 859 legitimate and 969 phishing websites, respectively. Jain and Gupta propose a machine learning-based technique using only client-side features to detect phishing websites @cite_23 . They extract 19 features from the URL and source code of the web pages and evaluate their method over 2,141 phishing web pages from PhishTank and OpenPhish https://openphish.com and 1,918 legitimate pages from the Alexa database, some online payment, and banking websites.
{ "cite_N": [ "@cite_21", "@cite_23", "@cite_20" ], "mid": [ "2029470356", "2779627157", "1573999528" ], "abstract": [ "Internet has become an essential component of our everyday social and financial activities. Nevertheless, internet users may be vulnerable to different types of web threats, which may cause financial damages, identity theft, loss of private information, brand reputation damage and loss of customer's confidence in e-commerce and online banking. Phishing is considered as a form of web threats that is defined as the art of impersonating a website of an honest enterprise aiming to obtain confidential information such as usernames, passwords and social security number. So far, there is no single solution that can capture every phishing attack. In this article, we proposed an intelligent model for predicting phishing attacks based on artificial neural network particularly self-structuring neural networks. Phishing is a continuous problem where features significant in determining the type of web pages are constantly changing. Thus, we need to constantly improve the network structure in order to cope with these changes. Our model solves this problem by automating the process of structuring the network and shows high acceptance for noisy data, fault tolerance and high prediction accuracy. Several experiments were conducted in our research, and the number of epochs differs in each experiment. From the results, we find that all produced structures have high generalization ability.", "The existing anti-phishing approaches use the blacklist methods or features based machine learning techniques. Blacklist methods fail to detect new phishing attacks and produce high false positive rate. Moreover, existing machine learning based methods extract features from the third party, search engine, etc. Therefore, they are complicated, slow in nature, and not fit for the real-time environment. 
To solve this problem, this paper presents a machine learning based novel anti-phishing approach that extracts the features from the client side only. We have examined the various attributes of the phishing and legitimate websites in depth and identified nineteen outstanding features to distinguish phishing websites from legitimate ones. These nineteen features are extracted from the URL and source code of the website and do not depend on any third party, which makes the proposed approach fast, reliable, and intelligent. Compared to other methods, the proposed approach has relatively high accuracy in detection of phishing websites as it achieved a 99.39% true positive rate and 99.09% overall detection accuracy.", "Phishing is increasing dramatically with the development of modern technologies and the global worldwide computer networks. This results in the loss of customer’s confidence in e-commerce and online banking, financial damages, and identity theft. Phishing is a fraudulent effort that aims to acquire sensitive information from users such as credit card credentials, and social security number. In this article, we propose a model for predicting phishing attacks based on Artificial Neural Network (ANN). A Feed Forward Neural Network trained by the Back Propagation algorithm is developed to classify websites as phishing or legitimate. The suggested model shows high acceptance ability for noisy data, fault tolerance and high prediction accuracy with respect to false positive and false negative rates." ] }
1903.05675
2922398462
Phishing, as one of the most well-known cybercrime activities, is a deception of online users to steal their personal or confidential information by impersonating a legitimate website. Several machine learning-based strategies have been proposed to detect phishing websites. These techniques are dependent on the features extracted from the website samples. However, few studies have actually considered efficient feature selection for detecting phishing attacks. In this work, we investigate an agreement on the definitive features which should be used in phishing detection. We apply Fuzzy Rough Set (FRS) theory as a tool to select the most effective features from three benchmarked data sets. The selected features are fed into three often-used classifiers for phishing detection. To evaluate the FRS feature selection in developing a generalizable phishing detection, the classifiers are trained by a separate out-of-sample data set of 14,000 website samples. The maximum F-measure gained by FRS feature selection is 95% using Random Forest classification. Also, there are 9 universal features selected by FRS over all three data sets. The F-measure value using this universal feature set is approximately 93%, which is a comparable result in contrast to the FRS performance. Since the universal feature set contains no features from third-party services, this finding implies that with no inquiry from external sources, we can gain a faster phishing detection which is also robust toward zero-day attacks.
Although all the above-mentioned studies have suggested different features to detect phishing websites, some features may not be sufficiently discerning to highlight phishing instances @cite_0 . Only a limited number of works have focused on selecting the most effective features to detect phishing websites. Rajab proposes using Correlation Feature Set (CFS) and Information Gain (IG) to select the most influential features to detect phishing activities @cite_0 . The results over the UCI repository containing 30 features specified for 11,055 samples indicate 11 and 9 features selected by IG and CFS, respectively. The classification performance using the selected features is also evaluated by a data mining method called RIPPER. Similarly, Babagoli et al. use a similar data set and propose feature selection using decision trees and the wrapper method @cite_22 , which results in selecting 20 features @cite_24 . They evaluate the phishing detection performance using a novel meta-heuristic-based nonlinear regression algorithm. Still, the feature selection methods proposed by these studies are dependent on the data and require user-specified threshold values which should be set heuristically. These thresholds can affect the ultimate performance of the classification algorithm, especially when features are selected based on out-of-sample training data in practice.
{ "cite_N": [ "@cite_0", "@cite_22", "@cite_24" ], "mid": [ "2802926971", "1974330875", "2788346622" ], "abstract": [ "Since the rapid advancement in computer networks and ebusiness technologies, massive numbers of sales transactions are performed on the World Wide Web on a daily basis. These transactions necessitate online financial payments and the use of ebanking, hence attracting phishers to target online users' credentials to access their financial information. Phishing involves developing forged websites that are visually identical to truthful websites in order to deceive online users. Different anti-phishing techniques have been proposed to reduce the risks of phishing, mainly by educating users or using automated software. One of the main challenges for automated anti-phishing tools is to determine the more influential features in order to detect phishing activities. This article addresses this problem by conducting a thorough analysis using filtering methods against real phishing websites data. The methodology employed is based on a data mining method called RIPPER to determine the performance of the classification systems derived by RIPPER and according to different evaluation measures such as error rate, false positives and false negatives.", "Besides optimizing classifier predictive performance and addressing the curse of the dimensionality problem, feature selection techniques support a classification model as simple as possible. In this paper, we present a wrapper feature selection approach based on the Bat Algorithm (BA) and Optimum-Path Forest (OPF), in which we model the problem of feature selection as a binary-based optimization technique, guided by BA using the OPF accuracy over a validating set as the fitness function to be maximized. Moreover, we present a methodology to better estimate the quality of the reduced feature set. 
Experiments conducted over six public datasets demonstrated that the proposed approach provides statistically significantly more compact sets and, in some cases, it can indeed improve the classification effectiveness.", "In this paper, we propose a method of phishing website detection that utilizes a meta-heuristic-based nonlinear regression algorithm together with a feature selection approach. In order to validate the proposed method, we used a dataset comprised of 11055 phishing and legitimate webpages, and selected 20 features to be extracted from the mentioned websites. This research utilizes two feature selection methods: decision tree and wrapper, to select the best feature subset, while the latter incurred a detection accuracy rate as high as 96.32%. After the feature selection process, two meta-heuristic algorithms are successfully implemented to predict and detect the fraudulent websites: harmony search (HS), which was deployed based on a nonlinear regression technique, and support vector machine (SVM). The nonlinear regression approach was used to classify the websites, where the parameters of the proposed regression model were obtained using the HS algorithm. The proposed HS algorithm uses a dynamic pitch adjustment rate and generates new harmony. The nonlinear regression based on HS led to accuracy rates of 94.13% and 92.80% for the train and test processes, respectively. As a result, the study finds that the nonlinear regression-based HS results in better performance compared to SVM." ] }
1903.05675
2922398462
Phishing, as one of the most well-known cybercrime activities, is a deception of online users to steal their personal or confidential information by impersonating a legitimate website. Several machine learning-based strategies have been proposed to detect phishing websites. These techniques are dependent on the features extracted from the website samples. However, few studies have actually considered efficient feature selection for detecting phishing attacks. In this work, we investigate an agreement on the definitive features which should be used in phishing detection. We apply Fuzzy Rough Set (FRS) theory as a tool to select the most effective features from three benchmarked data sets. The selected features are fed into three often-used classifiers for phishing detection. To evaluate the FRS feature selection in developing a generalizable phishing detection, the classifiers are trained by a separate out-of-sample data set of 14,000 website samples. The maximum F-measure gained by FRS feature selection is 95% using Random Forest classification. Also, there are 9 universal features selected by FRS over all three data sets. The F-measure value using this universal feature set is approximately 93%, which is a comparable result in contrast to the FRS performance. Since the universal feature set contains no features from third-party services, this finding implies that with no inquiry from external sources, we can gain a faster phishing detection which is also robust toward zero-day attacks.
In this work, we propose using FRS theory as the feature selection algorithm. In contrast to the related work focusing on only one data set, our experiments evaluate the generalizability of our approach by finding a universal set of discriminative features from three benchmark data sets. The data sets include a new source of 7,000 legitimate and 7,000 phishing website samples randomly extracted from a phishing website repository @cite_13 . We compare the detection performance using the selected features with the results of the feature selections proposed in @cite_0 @cite_24 .
{ "cite_N": [ "@cite_0", "@cite_13", "@cite_24" ], "mid": [ "2802926971", "", "2788346622" ], "abstract": [ "Since the rapid advancement in computer networks and ebusiness technologies, massive numbers of sales transactions are performed on the World Wide Web on a daily basis. These transactions necessitate online financial payments and the use of ebanking, hence attracting phishers to target online users' credentials to access their financial information. Phishing involves developing forged websites that are visually identical to truthful websites in order to deceive online users. Different anti-phishing techniques have been proposed to reduce the risks of phishing, mainly by educating users or using automated software. One of the main challenges for automated anti-phishing tools is to determine the more influential features in order to detect phishing activities. This article addresses this problem by conducting a thorough analysis using filtering methods against real phishing websites data. The methodology employed is based on a data mining method called RIPPER to determine the performance of the classification systems derived by RIPPER and according to different evaluation measures such as error rate, false positives and false negatives.", "", "In this paper, we propose a method of phishing website detection that utilizes a meta-heuristic-based nonlinear regression algorithm together with a feature selection approach. In order to validate the proposed method, we used a dataset comprised of 11055 phishing and legitimate webpages, and selected 20 features to be extracted from the mentioned websites. This research utilizes two feature selection methods: decision tree and wrapper, to select the best feature subset, while the latter incurred a detection accuracy rate as high as 96.32%. 
After the feature selection process, two meta-heuristic algorithms are successfully implemented to predict and detect the fraudulent websites: harmony search (HS), which was deployed based on a nonlinear regression technique, and support vector machine (SVM). The nonlinear regression approach was used to classify the websites, where the parameters of the proposed regression model were obtained using the HS algorithm. The proposed HS algorithm uses a dynamic pitch adjustment rate and generates new harmony. The nonlinear regression based on HS led to accuracy rates of 94.13% and 92.80% for the train and test processes, respectively. As a result, the study finds that the nonlinear regression-based HS results in better performance compared to SVM." ] }
1903.05807
2920996897
How can we edit or transform the geometric or color property of a point cloud? In this study, we propose a neural style transfer method for point clouds which allows us to transfer the style of geometry or color from one point cloud either independently or simultaneously to another. This transfer is achieved by manipulating the content representations and Gram-based style representations extracted from a pre-trained PointNet-based classification network for colored point clouds. As Gram-based style representation is invariant to the number or the order of points, the same method can be extended to transfer the style extracted from an image to the color expression of a point cloud by merely treating the image as a set of pixels. Experimental results demonstrate the capability of the proposed method for transferring style from either an image or a point cloud to another point cloud of a single object or even an indoor scene.
Neural style transfer seeks a stylised image that preserves the content of a content image while adopting the style of a style image, using feature representations extracted from neural networks. @cite_18 propose a style transfer method that iteratively updates an image so that its content representation and style representation, extracted from VGG @cite_11 , simultaneously match those of a content image and a style image. This online optimization algorithm suffers from efficiency issues. To address them, a series of follow-up works additionally train a neural network that generates the stylised image in a single feed-forward pass. Depending on how many styles one network can generate, these methods fall into three subclasses: per-style-per-model @cite_5 @cite_16 , multiple-style-per-model @cite_19 @cite_13 and arbitrary-style-per-model @cite_15 .
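As a toy illustration of this iterative optimization scheme (not the paper's implementation: a fixed random linear map stands in for the VGG feature extractor, and random vectors stand in for images), the sketch below gradient-descends an "image" so that its features approach a content target while its Gram matrix approaches a style target:

```python
import numpy as np

rng = np.random.default_rng(0)
C, N = 8, 50
W = rng.standard_normal((C, 3))        # fixed "network": a linear feature map

def gram(F):
    # Gram matrix of a (C, N) feature map, normalized by N.
    return F @ F.T / F.shape[1]

content = rng.standard_normal((3, N))  # stand-in for the content image
style = rng.standard_normal((3, N))    # stand-in for the style image
Fc, Gs = W @ content, gram(W @ style)  # target content / style representations

x = rng.standard_normal((3, N))        # "image" being optimized
alpha, beta, lr = 1.0, 1.0, 1e-3

def loss(x):
    F = W @ x
    return alpha * np.sum((F - Fc) ** 2) + beta * np.sum((gram(F) - Gs) ** 2)

l0 = loss(x)
for _ in range(300):
    F = W @ x
    g_content = 2 * W.T @ (F - Fc)               # d/dx of content loss
    g_style = W.T @ ((4 / N) * (gram(F) - Gs) @ F)  # d/dx of Gram-matching loss
    x -= lr * (alpha * g_content + beta * g_style)

print(loss(x) < l0)    # the combined loss decreases
```

The style gradient uses the fact that for symmetric D = G(F) - Gs, the derivative of the squared Frobenius distance with respect to F is (4/N) D F.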
{ "cite_N": [ "@cite_18", "@cite_19", "@cite_5", "@cite_15", "@cite_16", "@cite_13", "@cite_11" ], "mid": [ "2475287302", "2592533706", "2331128040", "", "2295130376", "2949848065", "1686810756" ], "abstract": [ "Rendering the semantic content of an image in different styles is a difficult image processing task. Arguably, a major limiting factor for previous approaches has been the lack of image representations that explicitly represent semantic information and, thus, allow to separate image content from style. Here we use image representations derived from Convolutional Neural Networks optimised for object recognition, which make high level image information explicit. We introduce A Neural Algorithm of Artistic Style that can separate and recombine the image content and style of natural images. The algorithm allows us to produce new images of high perceptual quality that combine the content of an arbitrary photograph with the appearance of numerous wellknown artworks. Our results provide new insights into the deep image representations learned by Convolutional Neural Networks and demonstrate their potential for high level image synthesis and manipulation.", "Recent progresses on deep discriminative and generative modeling have shown promising results on texture synthesis. However, existing feed-forward based methods trade off generality for efficiency, which suffer from many issues, such as shortage of generality (i.e., build one network per texture), lack of diversity (i.e., always produce visually identical output) and suboptimality (i.e., generate less satisfying visual effects). In this work, we focus on solving these issues for improved texture synthesis. We propose a deep generative feed-forward network which enables efficient synthesis of multiple textures within one single network and meaningful interpolation between them. Meanwhile, a suite of important techniques are introduced to achieve better convergence and diversity. 
With extensive experiments, we demonstrate the effectiveness of the proposed model and techniques for synthesizing a large number of textures and show its applications with the stylization.", "We consider image transformation problems, where an input image is transformed into an output image. Recent methods for such problems typically train feed-forward convolutional neural networks using a per-pixel loss between the output and ground-truth images. Parallel work has shown that high-quality images can be generated by defining and optimizing perceptual loss functions based on high-level features extracted from pretrained networks. We combine the benefits of both approaches, and propose the use of perceptual loss functions for training feed-forward networks for image transformation tasks. We show results on image style transfer, where a feed-forward network is trained to solve the optimization problem proposed by in real-time. Compared to the optimization-based method, our network gives similar qualitative results but is three orders of magnitude faster. We also experiment with single-image super-resolution, where replacing a per-pixel loss with a perceptual loss gives visually pleasing results.", "", "recently demonstrated that deep networks can generate beautiful textures and stylized images from a single texture example. However, their methods require a slow and memory-consuming optimization process. We propose here an alternative approach that moves the computational burden to a learning stage. Given a single example of a texture, our approach trains compact feed-forward convolutional networks to generate multiple samples of the same texture of arbitrary size and to transfer artistic style from a given image to any other image. The resulting networks are remarkably light-weight and can generate textures of quality comparable to , but hundreds of times faster. 
More generally, our approach highlights the power and flexibility of generative feed-forward models trained with complex and expressive loss functions.", "Despite the rapid progress in style transfer, existing approaches using feed-forward generative network for multi-style or arbitrary-style transfer are usually compromised of image quality and model flexibility. We find it is fundamentally difficult to achieve comprehensive style modeling using 1-dimensional style embedding. Motivated by this, we introduce CoMatch Layer that learns to match the second order feature statistics with the target styles. With the CoMatch Layer, we build a Multi-style Generative Network (MSG-Net), which achieves real-time performance. We also employ an specific strategy of upsampled convolution which avoids checkerboard artifacts caused by fractionally-strided convolution. Our method has achieved superior image quality comparing to state-of-the-art approaches. The proposed MSG-Net as a general approach for real-time style transfer is compatible with most existing techniques including content-style interpolation, color-preserving, spatial control and brush stroke size control. MSG-Net is the first to achieve real-time brush-size control in a purely feed-forward manner for style transfer. Our implementations and pre-trained models for Torch, PyTorch and MXNet frameworks will be publicly available.", "In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. 
These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision." ] }
1903.05807
2920996897
How can we edit or transform the geometric or color property of a point cloud? In this study, we propose a neural style transfer method for point clouds which allows us to transfer the style of geometry or color from one point cloud either independently or simultaneously to another. This transfer is achieved by manipulating the content representations and Gram-based style representations extracted from a pre-trained PointNet-based classification network for colored point clouds. As Gram-based style representation is invariant to the number or the order of points, the same method can be extended to transfer the style extracted from an image to the color expression of a point cloud by merely treating the image as a set of pixels. Experimental results demonstrate the capability of the proposed method for transferring style from either an image or a point cloud to another point cloud of a single object or even an indoor scene.
@cite_18 utilize the Gram matrix of feature maps to represent the style of an image, and this has almost become a de facto standard. @cite_0 point out that matching Gram-based style representations is equivalent to minimizing the Maximum Mean Discrepancy (MMD) between the feature maps of the style image and the stylised image, and demonstrate that other distribution alignment options also work for style transfer.
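The equivalence @cite_0 establish can be checked numerically: with the Gram matrix normalized by the number of positions, the squared Frobenius distance between two Gram matrices equals the squared MMD with the second-order polynomial kernel k(x, y) = (x·y)² over the per-position feature vectors. A small numpy sketch (random feature maps; all names are our own):

```python
import numpy as np

def gram(F):
    # F: (C, N) feature map, C channels, N positions.
    return F @ F.T / F.shape[1]

def mmd_poly2(F, G):
    # Squared MMD with kernel k(x, y) = (x . y)^2 between the columns of
    # F and G; each k(.,.) term averages over all N*N column pairs.
    k = lambda A, B: ((A.T @ B) ** 2).mean()
    return k(F, F) + k(G, G) - 2 * k(F, G)

rng = np.random.default_rng(0)
F = rng.standard_normal((4, 10))
G = rng.standard_normal((4, 10))

# ||gram(F) - gram(G)||_F^2 equals the squared polynomial-kernel MMD.
lhs = np.sum((gram(F) - gram(G)) ** 2)
print(np.isclose(lhs, mmd_poly2(F, G)))   # True
```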
{ "cite_N": [ "@cite_0", "@cite_18" ], "mid": [ "2952008036", "2475287302" ], "abstract": [ "Neural Style Transfer has recently demonstrated very exciting results which catches eyes in both academia and industry. Despite the amazing results, the principle of neural style transfer, especially why the Gram matrices could represent style remains unclear. In this paper, we propose a novel interpretation of neural style transfer by treating it as a domain adaptation problem. Specifically, we theoretically show that matching the Gram matrices of feature maps is equivalent to minimize the Maximum Mean Discrepancy (MMD) with the second order polynomial kernel. Thus, we argue that the essence of neural style transfer is to match the feature distributions between the style images and the generated images. To further support our standpoint, we experiment with several other distribution alignment methods, and achieve appealing results. We believe this novel interpretation connects these two important research fields, and could enlighten future researches.", "Rendering the semantic content of an image in different styles is a difficult image processing task. Arguably, a major limiting factor for previous approaches has been the lack of image representations that explicitly represent semantic information and, thus, allow to separate image content from style. Here we use image representations derived from Convolutional Neural Networks optimised for object recognition, which make high level image information explicit. We introduce A Neural Algorithm of Artistic Style that can separate and recombine the image content and style of natural images. The algorithm allows us to produce new images of high perceptual quality that combine the content of an arbitrary photograph with the appearance of numerous wellknown artworks. 
Our results provide new insights into the deep image representations learned by Convolutional Neural Networks and demonstrate their potential for high level image synthesis and manipulation." ] }
1903.05807
2920996897
How can we edit or transform the geometric or color property of a point cloud? In this study, we propose a neural style transfer method for point clouds which allows us to transfer the style of geometry or color from one point cloud either independently or simultaneously to another. This transfer is achieved by manipulating the content representations and Gram-based style representations extracted from a pre-trained PointNet-based classification network for colored point clouds. As Gram-based style representation is invariant to the number or the order of points, the same method can be extended to transfer the style extracted from an image to the color expression of a point cloud by merely treating the image as a set of pixels. Experimental results demonstrate the capability of the proposed method for transferring style from either an image or a point cloud to another point cloud of a single object or even an indoor scene.
Our method follows the scheme in @cite_18 ; however, unlike image style transfer, where pixels lie on a fixed grid and only RGB color can be modified, our method can stylise both the geometry and the color of a point cloud.
{ "cite_N": [ "@cite_18" ], "mid": [ "2475287302" ], "abstract": [ "Rendering the semantic content of an image in different styles is a difficult image processing task. Arguably, a major limiting factor for previous approaches has been the lack of image representations that explicitly represent semantic information and, thus, allow to separate image content from style. Here we use image representations derived from Convolutional Neural Networks optimised for object recognition, which make high level image information explicit. We introduce A Neural Algorithm of Artistic Style that can separate and recombine the image content and style of natural images. The algorithm allows us to produce new images of high perceptual quality that combine the content of an arbitrary photograph with the appearance of numerous wellknown artworks. Our results provide new insights into the deep image representations learned by Convolutional Neural Networks and demonstrate their potential for high level image synthesis and manipulation." ] }
1903.05807
2920996897
How can we edit or transform the geometric or color property of a point cloud? In this study, we propose a neural style transfer method for point clouds which allows us to transfer the style of geometry or color from one point cloud either independently or simultaneously to another. This transfer is achieved by manipulating the content representations and Gram-based style representations extracted from a pre-trained PointNet-based classification network for colored point clouds. As Gram-based style representation is invariant to the number or the order of points, the same method can be extended to transfer the style extracted from an image to the color expression of a point cloud by merely treating the image as a set of pixels. Experimental results demonstrate the capability of the proposed method for transferring style from either an image or a point cloud to another point cloud of a single object or even an indoor scene.
Learning good representations directly from point clouds in an end-to-end fashion is challenging due to their irregular structure. PointNet @cite_3 is a pioneering work addressing this problem. To handle the unordered nature of a point cloud, PointNet first applies a shared multi-layer perceptron (MLP) to lift each 3D point into a higher-dimensional feature space, and then uses a symmetric function, e.g., max-pooling, to aggregate information along the feature axis. This aggregation yields a global feature representing the overall shape of the point cloud. PointNet++ @cite_2 further improves PointNet by recursively applying PointNet to points in local regions to capture local structures at different scales. Similarly, PointCNN @cite_21 designs an @math -conv operation to overcome irregularity, and PointSIFT @cite_1 designs an orientation-encoding unit to extract multi-scale representations from point clouds. SpiderCNN @cite_17 encodes the spatial difference between a point and its neighbours into filter functions, extending traditional convolution operations to irregular point sets.
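The permutation invariance PointNet relies on can be seen in a few lines: any symmetric aggregation (here max-pooling) over per-point features produced by a shared map yields the same global feature regardless of point order. A toy numpy sketch, with a random weight matrix standing in for the shared MLP:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "shared MLP": one weight matrix lifts every 3D point into a
# 16-dimensional feature space (biases and extra layers omitted).
W = rng.standard_normal((3, 16))

def global_feature(points):
    # points: (N, 3). Per-point features are pooled by max over points,
    # a symmetric function, so the result is order-invariant.
    per_point = np.maximum(points @ W, 0.0)   # ReLU(shared linear map)
    return per_point.max(axis=0)              # (16,) global shape feature

cloud = rng.standard_normal((128, 3))
shuffled = cloud[rng.permutation(128)]

# Reordering the points leaves the global feature unchanged.
print(np.allclose(global_feature(cloud), global_feature(shuffled)))   # True
```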
{ "cite_N": [ "@cite_21", "@cite_1", "@cite_3", "@cite_2", "@cite_17" ], "mid": [ "2902302021", "2810641456", "2560609797", "2963121255", "2963158438" ], "abstract": [ "We present a simple and general framework for feature learning from point cloud. The key to the success of CNNs is the convolution operator that is capable of leveraging spatially-local correlation in data represented densely in grids (e.g. images). However, point cloud are irregular and unordered, thus a direct convolving of kernels against the features associated with the points will result in deserting the shape information while being variant to the orders. To address these problems, we propose to learn a X-transformation from the input points, which is used for simultaneously weighting the input features associated with the points and permuting them into latent potentially canonical order. Then element-wise product and sum operations of typical convolution operator are applied on the X-transformed features. The proposed method is a generalization of typical CNNs into learning features from point cloud, thus we call it PointCNN. Experiments show that PointCNN achieves on par or better performance than state-of-the-art methods on multiple challenging benchmark datasets and tasks.", "Recently, 3D understanding research sheds light on extracting features from point cloud directly, which requires effective shape pattern description of point clouds. Inspired by the outstanding 2D shape descriptor SIFT, we design a module called PointSIFT that encodes information of different orientations and is adaptive to scale of shape. Specifically, an orientation-encoding unit is designed to describe eight crucial orientations, and multi-scale representation is achieved by stacking several orientation-encoding units. PointSIFT module can be integrated into various PointNet-based architecture to improve the representation ability. 
Extensive experiments show our PointSIFT-based framework outperforms state-of-the-art method on standard benchmark datasets. The code and trained model will be published accompanied by this paper.", "Point cloud is an important type of geometric data structure. Due to its irregular format, most researchers transform such data to regular 3D voxel grids or collections of images. This, however, renders data unnecessarily voluminous and causes issues. In this paper, we design a novel type of neural network that directly consumes point clouds, which well respects the permutation invariance of points in the input. Our network, named PointNet, provides a unified architecture for applications ranging from object classification, part segmentation, to scene semantic parsing. Though simple, PointNet is highly efficient and effective. Empirically, it shows strong performance on par or even better than state of the art. Theoretically, we provide analysis towards understanding of what the network has learnt and why the network is robust with respect to input perturbation and corruption.", "Few prior works study deep learning on point sets. PointNet is a pioneer in this direction. However, by design PointNet does not capture local structures induced by the metric space points live in, limiting its ability to recognize fine-grained patterns and generalizability to complex scenes. In this work, we introduce a hierarchical neural network that applies PointNet recursively on a nested partitioning of the input point set. By exploiting metric space distances, our network is able to learn local features with increasing contextual scales. With further observation that point sets are usually sampled with varying densities, which results in greatly decreased performance for networks trained on uniform densities, we propose novel set learning layers to adaptively combine features from multiple scales. 
Experiments show that our network called PointNet++ is able to learn deep point set features efficiently and robustly. In particular, results significantly better than state-of-the-art have been obtained on challenging benchmarks of 3D point clouds.", "Deep neural networks have enjoyed remarkable success for various vision tasks, however it remains challenging to apply CNNs to domains lacking a regular underlying structures such as 3D point clouds. Towards this we propose a novel convolutional architecture, termed SpiderCNN, to efficiently extract geometric features from point clouds. SpiderCNN is comprised of units called SpiderConv, which extend convolutional operations from regular grids to irregular point sets that can be embedded in ( R ^n ), by parametrizing a family of convolutional filters. We design the filter as a product of a simple step function that captures local geodesic information and a Taylor polynomial that ensures the expressiveness. SpiderCNN inherits the multi-scale hierarchical architecture from classical CNNs, which allows it to extract semantic deep features. Experiments on ModelNet40 demonstrate that SpiderCNN achieves state-of-the-art accuracy (92.4 ) on standard benchmarks, and shows competitive performance on segmentation task." ] }
1903.05807
2920996897
How can we edit or transform the geometric or color property of a point cloud? In this study, we propose a neural style transfer method for point clouds which allows us to transfer the style of geometry or color from one point cloud either independently or simultaneously to another. This transfer is achieved by manipulating the content representations and Gram-based style representations extracted from a pre-trained PointNet-based classification network for colored point clouds. As Gram-based style representation is invariant to the number or the order of points, the same method can be extended to transfer the style extracted from an image to the color expression of a point cloud by merely treating the image as a set of pixels. Experimental results demonstrate the capability of the proposed method for transferring style from either an image or a point cloud to another point cloud of a single object or even an indoor scene.
In this work, we modify PointNet @cite_3 to extract feature representations for colored point clouds and demonstrate how these representations can be used to perform point cloud style transfer.
{ "cite_N": [ "@cite_3" ], "mid": [ "2560609797" ], "abstract": [ "Point cloud is an important type of geometric data structure. Due to its irregular format, most researchers transform such data to regular 3D voxel grids or collections of images. This, however, renders data unnecessarily voluminous and causes issues. In this paper, we design a novel type of neural network that directly consumes point clouds, which well respects the permutation invariance of points in the input. Our network, named PointNet, provides a unified architecture for applications ranging from object classification, part segmentation, to scene semantic parsing. Though simple, PointNet is highly efficient and effective. Empirically, it shows strong performance on par or even better than state of the art. Theoretically, we provide analysis towards understanding of what the network has learnt and why the network is robust with respect to input perturbation and corruption." ] }
1903.05833
2963557086
This paper focuses on helping network providers to carry out network capacity planning and sales projection by answering the question: For a given topology and capacity, whether the network can serve current flow demands with high probabilities? We name such probability as “ flow availability” and present the -flow availability estimation (FAVE) problem, which is a generalisation of network connectivity or maximum flow reliability estimations. Realistic networks are often large and dynamic, so flow availabilities cannot be evaluated analytically and simulation is often used. However, naive Monte Carlo (MC) or importance sampling (IS) techniques take an excessive amount of time. To quickly estimate flow availabilities, we utilize the correlations among link and flow failures to figure out the importance of roles played by different links in flow failures, and design three “sequential importance sampling” (SIS) methods which achieve “bounded or even vanishing relative error” with linear computational complexities. When applying to a realistic network, our method reduces the flow availability estimation cost by 900 and 130 times compared with MC and baseline IS methods, respectively. Our method can also facilitate capacity planning by providing better flow availability guarantees, compared with traditional methods.
Our work can be viewed as a generalisation of previous work on estimating network reliability @cite_5 @cite_21 @cite_16 @cite_11 @cite_23 @cite_12 @cite_6 and is related to importance sampling @cite_18 . Here, we briefly review relevant prior studies and compare them with our work. We also describe some concepts so that readers can gain a better understanding of sampling methods for network reliability estimation.
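For readers unfamiliar with importance sampling, the standard rare-event example below contrasts crude Monte Carlo with an exponentially tilted proposal: to estimate P(X > t) for X ~ N(0, 1), sampling from N(t, 1) makes the event common and each sample is reweighted by the likelihood ratio. The threshold, sample size, and proposal are illustrative choices, not taken from the paper:

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(1)
t, n = 4.0, 100_000
true_p = 0.5 * erfc(t / sqrt(2))      # P(X > t) for X ~ N(0, 1), ~3.2e-5

# Crude Monte Carlo: with p ~ 3e-5, almost no sample hits the event.
mc = (rng.standard_normal(n) > t).mean()

# Importance sampling: draw from N(t, 1), then reweight each sample by
# the likelihood ratio f(y) / g(y) = exp(t^2/2 - t*y).
y = rng.normal(t, 1.0, n)
is_est = ((y > t) * np.exp(t * t / 2 - t * y)).mean()

print(f"true {true_p:.3e}  MC {mc:.3e}  IS {is_est:.3e}")
```

With the same sample budget, the IS estimate lands within a few percent of the true probability, while crude MC typically observes only a handful of hits (or none).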
{ "cite_N": [ "@cite_18", "@cite_21", "@cite_6", "@cite_23", "@cite_5", "@cite_16", "@cite_12", "@cite_11" ], "mid": [ "2087018957", "2109289094", "2501980298", "2105584052", "2017999041", "2144646757", "2108858520", "2091719583" ], "abstract": [ "Importance sampling is one of the classical variance reduction techniques for increasing the efficiency of Monte Carlo algorithms for estimating integrals. The basic idea is to replace the original random mechanism in the simulation by a new one and at the same time modify the function being integrated. In this paper the idea is extended to problems arising in the simulation of stochastic systems. Discrete-time Markov chains, continuous-time Markov chains, and generalized semi-Markov processes are covered. Applications are given to a GI G 1 queueing problem and response surface estimation. Computation of the theoretical moments arising in importance sampling is discussed and some numerical examples given.", "Reliability optimization has been a popular area of research, and received significant attention due to the critical importance of reliability in various kinds of systems. Most network reliability optimization problems are only focused on solving simple structured networks (e.g., series-parallel networks) of which the reliability function can be easily obtained in advance. However, modern networks are usually very complex, and it is impossible to calculate the exact network reliability function by using traditional analytical methods in limited time. Hence, a new particle swarm optimization (PSO) based on Monte Carlo simulation (MCS), named MCS-PSO, has been proposed to solve complex network reliability optimization problems. The proposed MCS-PSO can minimize cost under reliability constraints. To the best of our knowledge, this is the first attempt to use PSO combined with MCS to solve complex network reliability problems without requiring knowledge of the reliability function in advance. 
Compared with previous works to solve this problem, the proposed MCS-PSO can have better efficiency by providing a better solution to the complex network reliability optimization problem.", "Flow networks in which each component is associated with multiple capacities are known as Stochastic flow networks. These networks are prone to partial or complete failures and are often subjected to unavailability. To estimate two-terminal reliability for such stochastic flow networks several algorithms exists in literature. Most of these algorithms takes huge computational time to compute network reliability even for moderate sized networks. The known efficient cutset based stochastic flow network reliability estimation algorithm developed till date can be presented as follows (1) Maximal flows for demand d, of the network are generated using the minimal cuts and capacities of the components. (2) Removes the non-maximal flows to obtain the set of upper boundary flows for demand d. (3) Determines unreliability from the upper boundary flows from which reliability is calculated as 1-unreliability of the network. In this work, we present an algorithm that preorders the minimal cuts and calculates the network unreliability using the ordered minimal cuts. Our algorithm reduces the computational time compared to the existing algorithm. We provide an example to illustrate the proposed method. Matlab simulation is performed to compare the proposed method with the existing methods using the standard benchmark networks available in literature. Simulation results show that the proposed method takes lesser computation time and memory.", "We consider the problem of estimating the unreliability of a stochastic flow network, defined as the probability that the maximum flow value from a source node to a terminal node in a directed network with stochastic link capacities, is less than a specified demand level. 
The link capacities are assumed to be continuous random variables with a known joint distribution. We are interested in the situation where the unreliability is very small, in which case a crude Monte Carlo is not viable. We show how a Monte Carlo splitting algorithm can be adapted to handle this problem effectively.", "Reliability and availability have always been important characteristics of systems but become even more critical and complex issues on networks. Due to the high complexity of accurate calculation methods, simulation methods for network reliability and availability evaluation have been a popular area of research, and received significant attention. In this paper the network topology model, the failure driven simulation method and the reliability and availability statistical algorithms are put forward, and then the network reliability and availability simulation flow based on the Monte Carlo method was designed. This simulation method can be used to calculate the reliability and availability of a network whose nodes and links may be failure. Two repair policies, immediate repair and delay repair, are provided to choose. Finally, the backbone of CERNET (China Education and Research Network) is taken as an example and the simulation results are analyzed.", "We propose a new Monte Carlo method, based on dynamic importance sampling, to estimate the probability that a given set of nodes is connected in a graph (or network) where each link is failed with a given probability. The method generates the link states one by one, using a sampling strategy that approximates an ideal zero-variance importance sampling scheme. The approximation is based on minimal cuts in subgraphs. In an asymptotic rare-event regime where failure probability becomes very small, we prove that the relative error of our estimator remains bounded, and even converges to 0 under additional conditions, when the unreliability of individual links converges to 0. 
The empirical performance of the new sampling scheme is illustrated by examples.", "In this paper we show how the permutation Monte Carlo method, originally developed for reliability networks, can be successfully adapted for stochastic flow networks, and in particular for estimation of the probability that the maximal flow in such a network is above some fixed level, called the threshold. A stochastic flow network is defined as one, where the edges are subject to random failures. A failed edge is assumed to be erased (broken) and, thus, not able to deliver any flow. We consider two models; one where the edges fail with the same failure probability and another where they fail with different failure probabilities. For each model we construct a different algorithm for estimation of the desired probability; in the former case it is based on the well known notion of the D-spectrum and in the later one—on the permutational Monte Carlo. We discuss the convergence properties of our estimators and present supportive numerical results.", "We consider network reliability in layered networks where the lower layer experiences random link failures. In layered networks, each failure at the lower layer may lead to multiple failures at the upper layer. We generalize the classical polynomial expression for network reliability to the multi-layer setting. Using random sampling techniques, we develop polynomial time approximation algorithms for the failure polynomial. Our approach gives an approximate expression for reliability as a function of the link failure probability, eliminating the need to resample for different values of the failure probability. Furthermore, it gives insight on how the routings of the logical topology on the physical topology impact network reliability. We show that maximizing the min cut of the (layered) network maximizes reliability in the low failure probability regime. 
Based on this observation, we develop algorithms for routing the logical topology to maximize reliability." ] }
1903.05833
2963557086
This paper focuses on helping network providers to carry out network capacity planning and sales projection by answering the question: For a given topology and capacity, whether the network can serve current flow demands with high probabilities? We name such probability as “ flow availability” and present the -flow availability estimation (FAVE) problem, which is a generalisation of network connectivity or maximum flow reliability estimations. Realistic networks are often large and dynamic, so flow availabilities cannot be evaluated analytically and simulation is often used. However, naive Monte Carlo (MC) or importance sampling (IS) techniques take an excessive amount of time. To quickly estimate flow availabilities, we utilize the correlations among link and flow failures to figure out the importance of roles played by different links in flow failures, and design three “sequential importance sampling” (SIS) methods which achieve “bounded or even vanishing relative error” with linear computational complexities. When applying to a realistic network, our method reduces the flow availability estimation cost by 900 and 130 times compared with MC and baseline IS methods, respectively. Our method can also facilitate capacity planning by providing better flow availability guarantees, compared with traditional methods.
The literature most relevant to our work focuses on evaluating network reliability. To design reliable networks, one needs to measure the impact of network failures (e.g., link failures) on network performance @cite_15 . It is known that the exact computation of network reliability is #P-complete, and the computational complexities of all known algorithms increase exponentially with the graph scale @cite_17 , which makes the problem intractable even for medium-sized networks. Hence, most work on network reliability evaluation considers sampling methods to provide reliability estimates, and it can be classified into "network connectivity"-based and "maximum flow"-based approaches.
{ "cite_N": [ "@cite_15", "@cite_17" ], "mid": [ "1988810149", "1991536882" ], "abstract": [ "We propose a rational approximation algorithm for network reliability estimation in this paper. Our algorithm can be applied to estimate both the all-terminal reliability measure and the reliability of networks with various protection algorithms. Our algorithm is very efficient and it can obtain the whole reliability function curve.", "This paper presents an overview of results related to the computational complexity of network reliability analysis problems. Network reliability analysis problems deal with the determination of reliability measures for stochastic networks. We show how these problems are related to the more familiar computational network problems of recognizing certain subnetworks, finding optimal subnetworks, and counting certain subnetworks. We use these relationships to show that the k-terminal, the 2-terminal, and the all-terminal network reliability analysis problems are at least as hard as the renowned set of computationally difficult problems, NP-Complete. Finally, we discuss the impact of these results on how one should approach problem solving in this area." ] }
1903.05833
2963557086
This paper focuses on helping network providers to carry out network capacity planning and sales projection by answering the question: For a given topology and capacity, whether the network can serve current flow demands with high probabilities? We name such probability as “flow availability” and present the -flow availability estimation (FAVE) problem, which is a generalisation of network connectivity or maximum flow reliability estimations. Realistic networks are often large and dynamic, so flow availabilities cannot be evaluated analytically and simulation is often used. However, naive Monte Carlo (MC) or importance sampling (IS) techniques take an excessive amount of time. To quickly estimate flow availabilities, we utilize the correlations among link and flow failures to figure out the importance of roles played by different links in flow failures, and design three “sequential importance sampling” (SIS) methods which achieve “bounded or even vanishing relative error” with linear computational complexities. When applying to a realistic network, our method reduces the flow availability estimation cost by 900 and 130 times compared with MC and baseline IS methods, respectively. Our method can also facilitate capacity planning by providing better flow availability guarantees, compared with traditional methods.
Network connectivity reliability (NCR) : NCR is a classical reliability measure adopted by most work @cite_5 @cite_21 @cite_16 . The network is modelled as a graph where links are either failed or operational, and NCR is measured by the probability that a given set of nodes is connected when links fail with given probabilities. Authors in @cite_5 take network repair policies into consideration to model link failures, and estimate NCR with the classical MC method. Authors in @cite_21 combine MC with particle swarm optimization to handle the NCR problem. To improve the efficiency of MC, authors in @cite_16 apply the IS method and use pre-computed minimal cuts to approximate the optimal IS estimator.
{ "cite_N": [ "@cite_5", "@cite_21", "@cite_16" ], "mid": [ "2017999041", "2109289094", "2144646757" ], "abstract": [ "Reliability and availability have always been important characteristics of systems but become even more critical and complex issues on networks. Due to the high complexity of accurate calculation methods, simulation methods for network reliability and availability evaluation have been a popular area of research, and received significant attention. In this paper the network topology model, the failure driven simulation method and the reliability and availability statistical algorithms are put forward, and then the network reliability and availability simulation flow based on the Monte Carlo method was designed. This simulation method can be used to calculate the reliability and availability of a network whose nodes and links may be failure. Two repair policies, immediate repair and delay repair, are provided to choose. Finally, the backbone of CERNET (China Education and Research Network) is taken as an example and the simulation results are analyzed.", "Reliability optimization has been a popular area of research, and received significant attention due to the critical importance of reliability in various kinds of systems. Most network reliability optimization problems are only focused on solving simple structured networks (e.g., series-parallel networks) of which the reliability function can be easily obtained in advance. However, modern networks are usually very complex, and it is impossible to calculate the exact network reliability function by using traditional analytical methods in limited time. Hence, a new particle swarm optimization (PSO) based on Monte Carlo simulation (MCS), named MCS-PSO, has been proposed to solve complex network reliability optimization problems. The proposed MCS-PSO can minimize cost under reliability constraints. 
To the best of our knowledge, this is the first attempt to use PSO combined with MCS to solve complex network reliability problems without requiring knowledge of the reliability function in advance. Compared with previous works to solve this problem, the proposed MCS-PSO can have better efficiency by providing a better solution to the complex network reliability optimization problem.", "We propose a new Monte Carlo method, based on dynamic importance sampling, to estimate the probability that a given set of nodes is connected in a graph (or network) where each link is failed with a given probability. The method generates the link states one by one, using a sampling strategy that approximates an ideal zero-variance importance sampling scheme. The approximation is based on minimal cuts in subgraphs. In an asymptotic rare-event regime where failure probability becomes very small, we prove that the relative error of our estimator remains bounded, and even converges to 0 under additional conditions, when the unreliability of individual links converges to 0. The empirical performance of the new sampling scheme is illustrated by examples." ] }
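The crude MC baseline that the NCR works above improve upon is easy to state: sample one link-state realisation, test whether the terminal nodes are connected, and average the indicator. A minimal illustrative sketch, not taken from any cited paper (the toy topology and all function names are our own):

```python
import random
from collections import defaultdict

def mc_connectivity_reliability(edges, fail_prob, s, t, n_samples=20000, seed=0):
    """Crude Monte Carlo estimate of P(s and t connected) when every
    undirected link fails independently with probability fail_prob."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        # Sample one network realisation: keep each link w.p. 1 - fail_prob.
        alive = [e for e in edges if rng.random() >= fail_prob]
        adj = defaultdict(list)
        for u, v in alive:
            adj[u].append(v)
            adj[v].append(u)
        # Graph search from s; count the sample if t is reachable.
        seen, stack = {s}, [s]
        while stack:
            for w in adj[stack.pop()]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        hits += t in seen
    return hits / n_samples

# Two parallel 2-hop s-t paths, each link failing w.p. 0.1:
# exact reliability = 1 - (1 - 0.9**2)**2, about 0.964.
edges = [(0, 1), (1, 3), (0, 2), (2, 3)]
est = mc_connectivity_reliability(edges, fail_prob=0.1, s=0, t=3)
```

The relative error of this estimator blows up as failures become rare, which is exactly the regime the IS variants in @cite_16 target.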
1903.05833
2963557086
This paper focuses on helping network providers to carry out network capacity planning and sales projection by answering the question: For a given topology and capacity, whether the network can serve current flow demands with high probabilities? We name such probability as “flow availability” and present the -flow availability estimation (FAVE) problem, which is a generalisation of network connectivity or maximum flow reliability estimations. Realistic networks are often large and dynamic, so flow availabilities cannot be evaluated analytically and simulation is often used. However, naive Monte Carlo (MC) or importance sampling (IS) techniques take an excessive amount of time. To quickly estimate flow availabilities, we utilize the correlations among link and flow failures to figure out the importance of roles played by different links in flow failures, and design three “sequential importance sampling” (SIS) methods which achieve “bounded or even vanishing relative error” with linear computational complexities. When applying to a realistic network, our method reduces the flow availability estimation cost by 900 and 130 times compared with MC and baseline IS methods, respectively. Our method can also facilitate capacity planning by providing better flow availability guarantees, compared with traditional methods.
Maximum flow reliability (MFR) : Another line of work @cite_23 @cite_6 @cite_12 generalizes the NCR problem by considering link capacities: link capacities are determined by link statuses, i.e., operational, failed or partially failed, which follow certain probability distributions. Given one source and one sink, MFR is defined as the probability that the maximum flow, i.e., the maximum achievable bandwidth from the source to the sink, is above a given threshold. In @cite_23 , link capacities are assumed to be continuous and the MC splitting method is applied for MFR estimation. Authors in @cite_12 follow the idea of permutational MC, assuming all links fail at the beginning and each of them gets repaired after a random time. Authors in @cite_6 consider estimating MFR with ordered minimal cut sets.
{ "cite_N": [ "@cite_12", "@cite_6", "@cite_23" ], "mid": [ "2108858520", "2501980298", "2105584052" ], "abstract": [ "In this paper we show how the permutation Monte Carlo method, originally developed for reliability networks, can be successfully adapted for stochastic flow networks, and in particular for estimation of the probability that the maximal flow in such a network is above some fixed level, called the threshold. A stochastic flow network is defined as one, where the edges are subject to random failures. A failed edge is assumed to be erased (broken) and, thus, not able to deliver any flow. We consider two models; one where the edges fail with the same failure probability and another where they fail with different failure probabilities. For each model we construct a different algorithm for estimation of the desired probability; in the former case it is based on the well known notion of the D-spectrum and in the later one—on the permutational Monte Carlo. We discuss the convergence properties of our estimators and present supportive numerical results.", "Flow networks in which each component is associated with multiple capacities are known as Stochastic flow networks. These networks are prone to partial or complete failures and are often subjected to unavailability. To estimate two-terminal reliability for such stochastic flow networks several algorithms exists in literature. Most of these algorithms takes huge computational time to compute network reliability even for moderate sized networks. The known efficient cutset based stochastic flow network reliability estimation algorithm developed till date can be presented as follows (1) Maximal flows for demand d, of the network are generated using the minimal cuts and capacities of the components. (2) Removes the non-maximal flows to obtain the set of upper boundary flows for demand d. 
(3) Determines unreliability from the upper boundary flows from which reliability is calculated as 1-unreliability of the network. In this work, we present an algorithm that preorders the minimal cuts and calculates the network unreliability using the ordered minimal cuts. Our algorithm reduces the computational time compared to the existing algorithm. We provide an example to illustrate the proposed method. Matlab simulation is performed to compare the proposed method with the existing methods using the standard benchmark networks available in literature. Simulation results show that the proposed method takes lesser computation time and memory.", "We consider the problem of estimating the unreliability of a stochastic flow network, defined as the probability that the maximum flow value from a source node to a terminal node in a directed network with stochastic link capacities, is less than a specified demand level. The link capacities are assumed to be continuous random variables with a known joint distribution. We are interested in the situation where the unreliability is very small, in which case a crude Monte Carlo is not viable. We show how a Monte Carlo splitting algorithm can be adapted to handle this problem effectively." ] }
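The MFR quantity itself can be estimated with the same crude MC recipe, replacing the connectivity test by a max-flow computation. A self-contained sketch under our own illustrative assumptions (the Edmonds-Karp routine and the toy network are not from the cited papers):

```python
import random
from collections import deque

def max_flow(n, cap, s, t):
    """Edmonds-Karp maximum flow on a dense n x n capacity matrix."""
    cap = [row[:] for row in cap]   # work on a residual copy
    flow = 0
    while True:
        # BFS for an augmenting path in the residual graph.
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return flow
        # Push the bottleneck capacity along the path found.
        bottleneck, v = float('inf'), t
        while v != s:
            bottleneck = min(bottleneck, cap[parent[v]][v])
            v = parent[v]
        v = t
        while v != s:
            cap[parent[v]][v] -= bottleneck
            cap[v][parent[v]] += bottleneck
            v = parent[v]
        flow += bottleneck

def mc_flow_reliability(n, links, fail_prob, s, t, demand, n_samples=10000, seed=1):
    """Crude MC estimate of P(max s-t flow >= demand) when each directed
    link fails (capacity drops to 0) independently with prob fail_prob."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        cap = [[0] * n for _ in range(n)]
        for u, v, c in links:
            if rng.random() >= fail_prob:
                cap[u][v] = c
        hits += max_flow(n, cap, s, t) >= demand
    return hits / n_samples

# Two disjoint 5-unit s-t paths; demand 5 is met iff at least one survives,
# so the exact probability is 1 - (1 - 0.9**2)**2, about 0.964.
links = [(0, 1, 5), (1, 3, 5), (0, 2, 5), (2, 3, 5)]
est = mc_flow_reliability(4, links, fail_prob=0.1, s=0, t=3, demand=5)
```

Setting `demand` to the full capacity of one path recovers a connectivity-style question; raising it makes the event rarer and motivates the splitting and permutational MC methods of @cite_23 and @cite_12 .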
1903.05833
2963557086
This paper focuses on helping network providers to carry out network capacity planning and sales projection by answering the question: For a given topology and capacity, whether the network can serve current flow demands with high probabilities? We name such probability as “flow availability” and present the -flow availability estimation (FAVE) problem, which is a generalisation of network connectivity or maximum flow reliability estimations. Realistic networks are often large and dynamic, so flow availabilities cannot be evaluated analytically and simulation is often used. However, naive Monte Carlo (MC) or importance sampling (IS) techniques take an excessive amount of time. To quickly estimate flow availabilities, we utilize the correlations among link and flow failures to figure out the importance of roles played by different links in flow failures, and design three “sequential importance sampling” (SIS) methods which achieve “bounded or even vanishing relative error” with linear computational complexities. When applying to a realistic network, our method reduces the flow availability estimation cost by 900 and 130 times compared with MC and baseline IS methods, respectively. Our method can also facilitate capacity planning by providing better flow availability guarantees, compared with traditional methods.
Other reliabilities : Some works also study connection availability @cite_13 and service availability @cite_3 , and consider the probability that a connection or service is available. Authors in @cite_13 evaluate the connection availability by computing the connection probability of a small subset of nodes exactly. Authors in @cite_3 evaluate the service availability by using IS to estimate path availabilities. The problems considered in @cite_13 @cite_3 can be transformed into a problem of determining the connectivity of certain nodes, where the network topology is given. Hence, @cite_13 @cite_3 are essentially the same as the NCR-related work.
{ "cite_N": [ "@cite_13", "@cite_3" ], "mid": [ "2114625425", "2153334705" ], "abstract": [ "Communications networks used for transmitting and receiving voice or digital data involve copper wire, fiber optics, radio, satellite, and microwave links. In general, these networks are large, diffuse, and complex. Even using modern high-speed computers, exact computations of the network reliability for use as a figure of merit in an interactive network design procedure can take an inordinate amount of time. This paper discusses various exact techniques whose complexity results in run-times short enough for practical network design. We model the network by an undirected probabilistic graph whose vertices represent the nodes and whose edges represent the links. A special feature of our model is that it allows nodes to be imperfect and associates a reliability measure with each node, assumed to succeed or fail independently. Therefore, the network reliability measure is based upon the reliability measures of the individual links and nodes. >", "A fast simulation technique based on importance sampling is developed for the analysis of path service availability in mesh networks with dynamic path restoration. The method combines the simulation of the path rerouting algorithm with a \"dynamic path failure importance sampling\" (DPFS) scheme to estimate path availabilities efficiently. In DPFS, the failure rates of network elements are biased at increased rates until path failures are observed under rerouting. The simulated model uses \"failure equivalence groups,\" with finite infinite sources of failure events and finite infinite pools of repair personnel, to facilitate the modeling of bidirectional link failures, multiple in-series link cuts, optical amplifier failures along links, node failures, and more general geographically distributed failure scenarios. The analysis of a large mesh network example demonstrates the practicality of the technique." ] }
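The IS trick behind availability estimators like the one in @cite_3 can be illustrated on a static toy version: sample link failures under an inflated failure probability, then correct each sample by its likelihood ratio under the true probability. A sketch under our own assumptions (the biasing level `p_bias` and the two-link example are illustrative choices, not the cited method):

```python
import random

def connected(alive_edges, s, t):
    """Undirected reachability test on the surviving links."""
    adj = {}
    for u, v in alive_edges:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    seen, stack = {s}, [s]
    while stack:
        for w in adj.get(stack.pop(), []):
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return t in seen

def is_unreliability(edges, p_fail, p_bias, s, t, n_samples=20000, seed=2):
    """Importance-sampling estimate of P(s and t disconnected): sample link
    states under an inflated failure probability p_bias, then reweight each
    sample by its likelihood ratio under the true probability p_fail."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        weight, alive = 1.0, []
        for e in edges:
            if rng.random() < p_bias:                  # link fails (biased law)
                weight *= p_fail / p_bias
            else:                                      # link survives
                weight *= (1 - p_fail) / (1 - p_bias)
                alive.append(e)
        if not connected(alive, s, t):
            total += weight
    return total / n_samples

# Two links in series, each failing w.p. 0.001: the path is down with
# probability 1 - 0.999**2, about 2.0e-3; crude MC would see few such events.
est = is_unreliability([(0, 1), (1, 2)], p_fail=0.001, p_bias=0.3, s=0, t=2)
```

Because nearly every biased sample contains at least one failure, the rare event is observed constantly, and the likelihood-ratio weights keep the estimator unbiased.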
1903.05697
2968491743
Diversity of environments is a key challenge that causes learned robotic controllers to fail due to the discrepancies between the training and evaluation conditions. Training from demonstrations in various conditions can mitigate—but not completely prevent—such failures. Learned controllers such as neural networks typically do not have a notion of uncertainty that allows to diagnose an offset between training and testing conditions, and potentially intervene. In this work, we propose to use Bayesian Neural Networks, which have such a notion of uncertainty. We show that uncertainty can be leveraged to consistently detect situations in high-dimensional simulated and real robotic domains in which the performance of the learned controller would be sub-par. Also, we show that such an uncertainty based solution allows making an informed decision about when to invoke a fallback strategy. One fallback strategy is to request more data. We empirically show that providing data only when requested results in increased data-efficiency.
LfD makes learning complex control applications @cite_9 @cite_3 @cite_42 @cite_40 easy, but demonstrations are expensive. Active learning aims to ameliorate this by seeking to select the most informative learning experiences @cite_11 . We will show in subsection that our work leverages the predictive uncertainty as a cheap proxy, as proposed in @cite_23 , to gauge what is informative. The works in @cite_45 , @cite_17 and @cite_32 quantify their uncertainty and detect unfamiliar and ambiguous states through an additional mechanism, e.g. depending on formulated pre- and postconditions @cite_17 . Such an approach can be hard to scale up. Our proposed approach uses a single neural network that both controls the system and detects unfamiliar states, which makes it more scalable.
{ "cite_N": [ "@cite_9", "@cite_42", "@cite_17", "@cite_3", "@cite_32", "@cite_40", "@cite_45", "@cite_23", "@cite_11" ], "mid": [ "1986014385", "", "1684361744", "", "2154018708", "", "2119388568", "2462906003", "2903158431" ], "abstract": [ "We present a comprehensive survey of robot Learning from Demonstration (LfD), a technique that develops policies from example state to action mappings. We introduce the LfD design choices in terms of demonstrator, problem space, policy derivation and performance, and contribute the foundations for a structure in which to categorize LfD research. Specifically, we analyze and categorize the multiple ways in which examples are gathered, ranging from teleoperation to imitation, as well as the various techniques for policy derivation, including matching functions, dynamics models and plans. To conclude we discuss LfD limitations and related promising areas for future research.", "", "Also referred to as learning by imitation, tutelage, or apprenticeship learning, Programming by Demonstration (PbD) develops methods by which new skills can be transmitted to a robot. This book examines methods by which robots learn new skills through human guidance. Taking a practical perspective, it covers a broad range of applications, including service robots. The text addresses the challenges involved in investigating methods by which PbD is used to provide robots with a generic and adaptive model of control. Drawing on findings from robot control, human-robot interaction, applied machine learning, artificial intelligence, and developmental and cognitive psychology, the book contains a large set of didactic and illustrative examples. Practical and comprehensive machine learning source codes are available on the books companion website: http: www.programming-by-demonstration.org", "", "We present Confidence-Based Autonomy (CBA), an interactive algorithm for policy learning from demonstration. 
The CBA algorithm consists of two components which take advantage of the complimentary abilities of humans and computer agents. The first component, Confident Execution, enables the agent to identify states in which demonstration is required, to request a demonstration from the human teacher and to learn a policy based on the acquired data. The algorithm selects demonstrations based on a measure of action selection confidence, and our results show that using Confident Execution the agent requires fewer demonstrations to learn the policy than when demonstrations are selected by a human teacher. The second algorithmic component, Corrective Demonstration, enables the teacher to correct any mistakes made by the agent through additional demonstrations in order to improve the policy and future task performance. CBA and its individual components are compared and evaluated in a complex simulated driving domain. The complete CBA algorithm results in the best overall learning performance, successfully reproducing the behavior of the teacher while balancing the tradeoff between number of demonstrations and number of incorrect actions during learning.", "", "Ubiquitous robots need the ability to adapt their behaviour to the changing situations and demands they will encounter during their lifetimes. In particular, non-technical users must be able to modify a robot's behaviour to enable it to perform new, previously unknown tasks. Learning from demonstration is a viable means to transfer a desired control policy onto a robot and mixed-initiative control provides a method for smooth transitioning between learning and acting. We present a learning system (dogged learning) that combines learning from demonstration and mixed initiative control to enable lifelong learning for unknown tasks. 
We have implemented dogged learning on a Sony Aibo and successfully taught it behaviours such as mimicry and ball seeking", "Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function (\"avoiding side effects\" and \"avoiding reward hacking\"), an objective function that is too expensive to evaluate frequently (\"scalable supervision\"), or undesirable behavior during the learning process (\"safe exploration\" and \"distributional shift\"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.", "" ] }
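A cheap stand-in for the predictive uncertainty discussed above is the spread of a bootstrap ensemble: models agree near the demonstration data and disagree far from it, so the spread can gate a fallback strategy. A pure-Python sketch under our own illustrative assumptions (the linear models and the toy data are not the paper's Bayesian NN architecture):

```python
import random

def fit_line(pts):
    """Closed-form least-squares fit y = a*x + b over (x, y) pairs."""
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    sxx = sum((x - mx) ** 2 for x, _ in pts)
    sxy = sum((x - mx) * (y - my) for x, y in pts)
    a = sxy / sxx
    return a, my - a * mx

def ensemble_uncertainty(data, x_query, n_models=50, seed=3):
    """Std-dev of predictions from models fit to bootstrap resamples:
    a crude proxy for Bayesian predictive uncertainty."""
    rng = random.Random(seed)
    preds = []
    for _ in range(n_models):
        sample = [rng.choice(data) for _ in data]
        a, b = fit_line(sample)
        preds.append(a * x_query + b)
    mean = sum(preds) / len(preds)
    return (sum((p - mean) ** 2 for p in preds) / len(preds)) ** 0.5

# Demonstrations cover x in [0, 1); queries far outside that range should
# trigger a fallback (e.g. requesting more data) instead of blind execution.
rng = random.Random(0)
data = [(x / 50, 2 * (x / 50) + rng.gauss(0, 0.1)) for x in range(50)]
u_near = ensemble_uncertainty(data, 0.5)   # inside the training distribution
u_far = ensemble_uncertainty(data, 5.0)    # far outside it
```

Thresholding the spread gives the "informed decision about when to invoke a fallback strategy" described in the abstract, here with a hand-picked threshold rather than a calibrated one.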
1903.05697
2968491743
Diversity of environments is a key challenge that causes learned robotic controllers to fail due to the discrepancies between the training and evaluation conditions. Training from demonstrations in various conditions can mitigate—but not completely prevent—such failures. Learned controllers such as neural networks typically do not have a notion of uncertainty that allows to diagnose an offset between training and testing conditions, and potentially intervene. In this work, we propose to use Bayesian Neural Networks, which have such a notion of uncertainty. We show that uncertainty can be leveraged to consistently detect situations in high-dimensional simulated and real robotic domains in which the performance of the learned controller would be sub-par. Also, we show that such an uncertainty based solution allows making an informed decision about when to invoke a fallback strategy. One fallback strategy is to request more data. We empirically show that providing data only when requested results in increased data-efficiency.
Subtle changes between train-time and test-time conditions might cause controllers to fail silently and unpredictably @cite_23 , as highlighted in semi-supervised learning under domain shift @cite_37 , robust generalization @cite_31 , multi-task learning @cite_4 and transfer learning @cite_0 . The different conditions can thus be modelled as different (but closely related) . We describe such different conditions by the term .
{ "cite_N": [ "@cite_37", "@cite_4", "@cite_0", "@cite_23", "@cite_31" ], "mid": [ "2136504847", "2624871570", "2165698076", "2462906003", "2566079294" ], "abstract": [ "Door lock apparatus in which a door latch mechanism is operated by inner and outer door handles coupled to a latch shaft extending through the latch mechanism. Handles are coupled to ends of latch shaft by coupling devices enabling door to be locked from the inside to prevent entry from the outside but can still be opened from the inside by normal operation of outside handle. Inside coupling device has limited lost-motion which is used to operate cam device to unlock the door on actuation of inner handles.", "Multi-task learning (MTL) has led to successes in many applications of machine learning, from natural language processing and speech recognition to computer vision and drug discovery. This article aims to give a general overview of MTL, particularly in deep neural networks. It introduces the two most common methods for MTL in Deep Learning, gives an overview of the literature, and discusses recent advances. In particular, it seeks to help ML practitioners apply MTL by shedding light on how MTL works and providing guidelines for choosing appropriate auxiliary tasks.", "A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. 
In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research.", "Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function (\"avoiding side effects\" and \"avoiding reward hacking\"), an objective function that is too expensive to evaluate frequently (\"scalable supervision\"), or undesirable behavior during the learning process (\"safe exploration\" and \"distributional shift\"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.", "Despite their massive size, successful deep artificial neural networks can exhibit a remarkably small difference between training and test performance. Conventional wisdom attributes small generalization error either to properties of the model family, or to the regularization techniques used during training. 
@PARASPLIT Through extensive systematic experiments, we show how these traditional approaches fail to explain why large neural networks generalize well in practice. Specifically, our experiments establish that state-of-the-art convolutional networks for image classification trained with stochastic gradient methods easily fit a random labeling of the training data. This phenomenon is qualitatively unaffected by explicit regularization, and occurs even if we replace the true images by completely unstructured random noise. We corroborate these experimental findings with a theoretical construction showing that simple depth two neural networks already have perfect finite sample expressivity as soon as the number of parameters exceeds the number of data points as it usually does in practice. @PARASPLIT We interpret our experimental findings by comparison with traditional models." ] }
1903.05697
2968491743
Diversity of environments is a key challenge that causes learned robotic controllers to fail due to the discrepancies between the training and evaluation conditions. Training from demonstrations in various conditions can mitigate—but not completely prevent—such failures. Learned controllers such as neural networks typically do not have a notion of uncertainty that allows to diagnose an offset between training and testing conditions, and potentially intervene. In this work, we propose to use Bayesian Neural Networks, which have such a notion of uncertainty. We show that uncertainty can be leveraged to consistently detect situations in high-dimensional simulated and real robotic domains in which the performance of the learned controller would be sub-par. Also, we show that such an uncertainty based solution allows making an informed decision about when to invoke a fallback strategy. One fallback strategy is to request more data. We empirically show that providing data only when requested results in increased data-efficiency.
There are four broad classes of methods for capturing sequential information. The easiest to use with any off-the-shelf supervised learning method, which we also employ in our work, is using a sliding window on sequences @cite_5 @cite_43 . The second method is using recurrent neural networks such as LSTMs, which are relatively difficult to train and work with evenly sampled sequences @cite_16 . The third technique, which works well with unevenly sampled data, is regression, where training data is fitted to a curve @cite_38 @cite_19 . Lately, convolutional techniques have been shown to perform at least as well as recurrent models for learning from sequences @cite_21 , @cite_2 , @cite_15 , @cite_35 , @cite_12 .
{ "cite_N": [ "@cite_38", "@cite_35", "@cite_21", "@cite_19", "@cite_43", "@cite_2", "@cite_5", "@cite_15", "@cite_16", "@cite_12" ], "mid": [ "2137226992", "2567070169", "2792764867", "", "", "2519091744", "1946238955", "2120615054", "2139122335", "2950855294" ], "abstract": [ "A new regression technique based on Vapnik's concept of support vectors is introduced. We compare support vector regression (SVR) with a committee regression technique (bagging) based on regression trees and ridge regression done in feature space. On the basis of these experiments, it is expected that SVR will have advantages in high dimensionality space because SVR optimization does not depend on the dimensionality of the input space.", "The pre-dominant approach to language modeling to date is based on recurrent neural networks. Their success on this task is often linked to their ability to capture unbounded context. In this paper we develop a finite context approach through stacked convolutions, which can be more efficient since they allow parallelization over sequential tokens. We propose a novel simplified gating mechanism that outperforms (2016) and investigate the impact of key architectural decisions. The proposed approach achieves state-of-the-art on the WikiText-103 benchmark, even though it features long-term dependencies, as well as competitive results on the Google Billion Words benchmark. Our model reduces the latency to score a sentence by an order of magnitude compared to a recurrent baseline. To our knowledge, this is the first time a non-recurrent approach is competitive with strong recurrent models on these large scale language tasks.", "For most deep learning practitioners, sequence modeling is synonymous with recurrent networks. Yet recent results indicate that convolutional architectures can outperform recurrent networks on tasks such as audio synthesis and machine translation. Given a new sequence modeling task or dataset, which architecture should one use? 
We conduct a systematic evaluation of generic convolutional and recurrent architectures for sequence modeling. The models are evaluated across a broad range of standard tasks that are commonly used to benchmark recurrent networks. Our results indicate that a simple convolutional architecture outperforms canonical recurrent networks such as LSTMs across a diverse range of tasks and datasets, while demonstrating longer effective memory. We conclude that the common association between sequence modeling and recurrent networks should be reconsidered, and convolutional networks should be regarded as a natural starting point for sequence modeling tasks. To assist related work, we have made code available at this http URL .", "", "", "", "Neural Network approaches to time series prediction are briefly discussed, and the need to find the appropriate sample rate and an appropriately sized input window identified. Relevant theoretical results from dynamic systems theory are briefly introduced, and heuristics for finding the appropriate sampling rate and embedding dimension, and thence window size, are discussed. The method is applied to several time series and the resulting generalisation performance of the trained feed-forward neural network predictors is analysed. It is shown that the heuristics can provide useful information in defining the appropriate network architecture.", "The ability to accurately represent sentences is central to language understanding. We describe a convolutional architecture dubbed the Dynamic Convolutional Neural Network (DCNN) that we adopt for the semantic modelling of sentences. The network uses Dynamic k-Max Pooling, a global pooling operation over linear sequences. The network handles input sentences of varying length and induces a feature graph over the sentence that is capable of explicitly capturing short and long-range relations. The network does not rely on a parse tree and is easily applicable to any language. 
We test the DCNN in four experiments: small scale binary and multi-class sentiment prediction, six-way question classification and Twitter sentiment prediction by distant supervision. The network achieves excellent performance in the first three tasks and a greater than 25 error reduction in the last task with respect to the strongest baseline.", "The paper first summarizes a general approach to the training of recurrent neural networks by gradient-based algorithms, which leads to the introduction of four families of training algorithms. Because of the variety of possibilities thus available to the \"neural network designer,\" the choice of the appropriate algorithm to solve a given problem becomes critical. We show that, in the case of process modeling, this choice depends on how noise interferes with the process to be modeled; this is evidenced by three examples of modeling of dynamical processes, where the detrimental effect of inappropriate training algorithms on the prediction error made by the network is clearly demonstrated. >", "The prevalent approach to neural machine translation relies on bi-directional LSTMs to encode the source sentence. In this paper we present a faster and simpler architecture based on a succession of convolutional layers. This allows to encode the entire source sentence simultaneously compared to recurrent networks for which computation is constrained by temporal dependencies. On WMT'16 English-Romanian translation we achieve competitive accuracy to the state-of-the-art and we outperform several recently published results on the WMT'15 English-German task. Our models obtain almost the same accuracy as a very deep LSTM setup on WMT'14 English-French translation. Our convolutional encoder speeds up CPU decoding by more than two times at the same or higher accuracy as a strong bi-directional LSTM baseline." ] }
1903.05751
2922390560
In this paper, we propose a reinforcement learning-based algorithm for trajectory optimization for constrained dynamical systems. This problem is motivated by the fact that for most robotic systems, the dynamics may not always be known. Generating smooth, dynamically feasible trajectories can be difficult for such systems. Using sampling-based algorithms for motion planning may result in trajectories that are prone to undesirable control jumps. However, they can usually provide a good reference trajectory which a model-free reinforcement learning algorithm can then exploit by limiting the search domain and quickly finding a dynamically smooth trajectory. We use this idea to train a reinforcement learning agent to learn a dynamically smooth trajectory in a curriculum learning setting. Furthermore, for generalization, we parameterize the policies with goal locations, so that the agent can be trained for multiple goals simultaneously. We show results in both simulated environments and real experiments on a @math -DoF manipulator arm operated in position-controlled mode to validate the proposed idea. We compare the proposed approach against a PID controller used to track a designed trajectory in configuration space. Our experiments show that our RL agent trained with a reference path outperformed a model-free PID controller of the type commonly used on many robotic platforms for trajectory tracking.
Our combination of RL and reference trajectory tracking can be seen as a form of reward shaping @cite_18 . Reward shaping speeds up learning by creating a more informative reward signal. However, designing shaping rewards requires significant, non-trivial reward engineering and may also alter the optimal solution. To alleviate this problem, automatic reward shaping has been studied @cite_16 @cite_9 .
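A reference path can shape the reward by adding a dense penalty for straying from it. The sketch below is an illustrative form of such a shaped reward, not the paper's exact formulation; the function name and weights are hypothetical.

```python
import numpy as np

def shaped_reward(state, goal, reference, w_goal=1.0, w_ref=0.5):
    """Dense shaped reward: negative distance to the goal plus a penalty
    for straying from the sampling-based reference path."""
    d_goal = np.linalg.norm(state - goal)
    # Distance to the closest waypoint on the reference path.
    d_ref = np.min(np.linalg.norm(reference - state, axis=1))
    return -(w_goal * d_goal + w_ref * d_ref)

# Toy 2-D example: on the reference path only the goal term remains.
path = np.array([[0.0, 0.0], [0.5, 0.5], [1.0, 1.0]])
r = shaped_reward(np.array([0.5, 0.5]), np.array([1.0, 1.0]), path)
```

Because the penalty term depends on the state rather than on a potential difference, a shaping of this kind can alter the optimal policy, which is exactly the risk the paragraph above notes.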
{ "cite_N": [ "@cite_9", "@cite_18", "@cite_16" ], "mid": [ "2158969944", "1777239053", "2079247031" ], "abstract": [ "This paper investigates the problem of automatically learning how to restructure the reward function of a Markov decision process so as to speed up reinforcement learning. We begin by describing a method that learns a shaped reward function given a set of state and temporal abstractions. Next, we consider decomposition of the per-timestep reward in multieffector problems, in which the overall agent can be decomposed into multiple units that are concurrently carrying out various tasks. We show by example that to find a good reward decomposition, it is often necessary to first shape the rewards appropriately. We then give a function approximation algorithm for solving both problems together. Standard reinforcement learning algorithms can be augmented with our methods, and we show experimentally that in each case, significantly faster learning results.", "", "We introduce the use of learned shaping rewards in reinforcement learning tasks, where an agent uses prior experience on a sequence of tasks to learn a portable predictor that estimates intermediate rewards, resulting in accelerated learning in later tasks that are related but distinct. Such agents can be trained on a sequence of relatively easy tasks in order to develop a more informative measure of reward that can be transferred to improve performance on more difficult tasks without requiring a hand coded shaping function. We use a rod positioning task to show that this significantly improves performance even after a very brief training period." ] }
1903.05820
2964198723
Abstract Recently, progress in learning-by-synthesis has made it possible to train models on synthetic images, which can effectively reduce the cost of human and material resources. However, because the distribution of synthetic images differs from that of real images, the desired performance still cannot be achieved. Real images contain multiple forms of light orientation, while synthetic images contain a uniform light orientation; these features are characteristic of outdoor and indoor scenes, respectively. To solve this problem, previous methods learned a model to improve the realism of synthetic images. Different from these methods, this paper takes the first step toward purifying real images. Through a style transfer task, the distribution of outdoor real images is converted into that of indoor synthetic images, thereby reducing the influence of light. This paper therefore proposes a real-time style transfer network that preserves the image content information (e.g., gaze direction, pupil center position) of an input image (a real image) while inferring the style information (e.g., image color structure, semantic features) of a style image (a synthetic image). In addition, the network accelerates the convergence speed of the model and adapts to multi-scale images. Experiments using mixed (qualitative and quantitative) methods demonstrate the possibility of purifying real images under complex light directions. Qualitatively, the proposed method is compared with available methods in a series of indoor and outdoor scenarios from the LPW dataset. Quantitatively, the purified images are evaluated by training a gaze estimation model across datasets. The results show a significant improvement over the baseline method using raw real images.
In general, there are two main types of eye gaze estimation methods: feature-based and appearance-based @cite_15 . Feature-based methods aim to identify local features of the eye, such as contours, eye corners, and reflections in eye images. The pupil and corneal reflections are commonly used for eye localization. Calibration with high-resolution cameras and other specialized hardware, such as synchronized cameras and light sources, can extract more precise geometric properties. However, these gaze features are not stable enough under natural light.
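A classical feature-based pipeline maps the pupil-corneal-reflection (glint) vector to a gaze point with a polynomial fitted during calibration. The sketch below shows that idea under simplifying assumptions (a second-order polynomial and synthetic calibration data); the function names and basis are illustrative, not taken from the cited survey.

```python
import numpy as np

def fit_pccr_mapping(pg, screen):
    """Fit a 2nd-order polynomial from pupil-glint vectors (vx, vy)
    to screen coordinates, as in classical PCCR-style calibration."""
    vx, vy = pg[:, 0], pg[:, 1]
    A = np.column_stack([np.ones_like(vx), vx, vy, vx * vy, vx**2, vy**2])
    coeffs, *_ = np.linalg.lstsq(A, screen, rcond=None)
    return coeffs  # shape (6, 2): one column per screen axis

def predict_gaze(coeffs, v):
    vx, vy = v
    return np.array([1.0, vx, vy, vx * vy, vx**2, vy**2]) @ coeffs

# Calibrate on a 3x3 grid of synthetic points from a known mapping.
gx, gy = np.meshgrid(np.linspace(-1, 1, 3), np.linspace(-1, 1, 3))
pg = np.column_stack([gx.ravel(), gy.ravel()])
screen = np.column_stack([2 * pg[:, 0] + 1, 3 * pg[:, 1] - 0.5])
C = fit_pccr_mapping(pg, screen)
```

With real hardware, the glint vectors come from pupil and corneal-reflection detection, and the calibration targets are displayed on screen; under natural light, the instability of these features is what limits the approach.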
{ "cite_N": [ "@cite_15" ], "mid": [ "2167020116" ], "abstract": [ "Despite active research and significant progress in the last 30 years, eye detection and tracking remains challenging due to the individuality of eyes, occlusion, variability in scale, location, and light conditions. Data on eye location and details of eye movements have numerous applications and are essential in face detection, biometric identification, and particular human-computer interaction tasks. This paper reviews current progress and state of the art in video-based eye detection and tracking in order to identify promising techniques as well as issues to be further addressed. We present a detailed review of recent eye models and techniques for eye detection and tracking. We also survey methods for gaze estimation and compare them based on their geometric properties and reported accuracies. This review shows that, despite their apparent simplicity, the development of a general eye detection technique involves addressing many challenges, requires further theoretical developments, and is consequently of interest to many other domains problems in computer vision and beyond." ] }
1903.05820
2964198723
Abstract Recently, progress in learning-by-synthesis has made it possible to train models on synthetic images, which can effectively reduce the cost of human and material resources. However, because the distribution of synthetic images differs from that of real images, the desired performance still cannot be achieved. Real images contain multiple forms of light orientation, while synthetic images contain a uniform light orientation; these features are characteristic of outdoor and indoor scenes, respectively. To solve this problem, previous methods learned a model to improve the realism of synthetic images. Different from these methods, this paper takes the first step toward purifying real images. Through a style transfer task, the distribution of outdoor real images is converted into that of indoor synthetic images, thereby reducing the influence of light. This paper therefore proposes a real-time style transfer network that preserves the image content information (e.g., gaze direction, pupil center position) of an input image (a real image) while inferring the style information (e.g., image color structure, semantic features) of a style image (a synthetic image). In addition, the network accelerates the convergence speed of the model and adapts to multi-scale images. Experiments using mixed (qualitative and quantitative) methods demonstrate the possibility of purifying real images under complex light directions. Qualitatively, the proposed method is compared with available methods in a series of indoor and outdoor scenarios from the LPW dataset. Quantitatively, the purified images are evaluated by training a gaze estimation model across datasets. The results show a significant improvement over the baseline method using raw real images.
Recent studies aim to better represent the appearance. @cite_28 proposed a low-dimensional feature extraction method that divides the eye area into a grid of three rows and five columns and computes the mean gray value and percentage of intensity in each region, yielding a 15-dimensional feature vector. However, this feature does not apply to eye images under free head movement. @cite_35 introduced a deep feature extracted from convolutional neural networks. The deep feature is sparse and provides an effective solution for gaze estimation.
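The 15-dimensional grid feature described above can be sketched as follows: split the eye image into a 3x5 grid, take each cell's mean gray value, and normalize so each entry is that region's percentage of total intensity. This is a minimal reconstruction from the description, not the cited implementation; the function name is hypothetical.

```python
import numpy as np

def grid_gray_feature(eye_img, rows=3, cols=5):
    """Split an eye image into a rows x cols grid and return each cell's
    mean gray value, normalized to the percentage of total intensity."""
    h, w = eye_img.shape
    feat = []
    for r in range(rows):
        for c in range(cols):
            cell = eye_img[r * h // rows:(r + 1) * h // rows,
                           c * w // cols:(c + 1) * w // cols]
            feat.append(cell.mean())
    feat = np.array(feat)
    return feat / feat.sum()  # rows * cols = 15-dimensional feature

f = grid_gray_feature(np.ones((30, 50)))  # uniform image -> uniform feature
```

Because the feature is tied to a fixed eye-region layout, it degrades under free head movement, as the paragraph above notes.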
{ "cite_N": [ "@cite_28", "@cite_35" ], "mid": [ "2001252859", "2492303557" ], "abstract": [ "We investigate the appearance-based gaze estimation problem, with respect to its essential difficulty in reducing the number of required training samples, and other practical issues such as slight head motion, image resolution variation, and eye blinking. We cast the problem as mapping high-dimensional eye image features to low-dimensional gaze positions, and propose an adaptive linear regression (ALR) method as the key to our solution. The ALR method adaptively selects an optimal set of sparsest training samples for the gaze estimation via l 1 -optimization. In this sense, the number of required training samples is significantly reduced for high accuracy estimation. In addition, by adopting the basic ALR objective function, we integrate the gaze estimation, subpixel alignment and blink detection into a unified optimization framework. By solving these problems simultaneously, we successfully handle slight head motion, image resolution variation and eye blinking in appearance-based gaze estimation. We evaluated the proposed method by conducting experiments with multiple users and variant conditions to verify its effectiveness.", "Conventional appearance-based gaze estimation methods employ local or global features as eye gaze appearance descriptor. But these methods don't work well under natural light with free head movement. To solve this problem, we present an appearance-based gaze estimation method using deep feature representation and feature forest regression. The deep feature is learned through hierarchical extraction of deep Convolutional Neural Network (CNN). And random forest regression with cluster-to-classify node splitting rules is used to take advantage of data distribution in sparse feature space. Experimental results demonstrate that the deep feature has a better performance than local features on calibrated gaze regression. 
The combination of deep features and random forest regression provides an effective solution for gaze estimation in a natural environment." ] }
1903.05820
2964198723
Abstract Recently, progress in learning-by-synthesis has made it possible to train models on synthetic images, which can effectively reduce the cost of human and material resources. However, because the distribution of synthetic images differs from that of real images, the desired performance still cannot be achieved. Real images contain multiple forms of light orientation, while synthetic images contain a uniform light orientation; these features are characteristic of outdoor and indoor scenes, respectively. To solve this problem, previous methods learned a model to improve the realism of synthetic images. Different from these methods, this paper takes the first step toward purifying real images. Through a style transfer task, the distribution of outdoor real images is converted into that of indoor synthetic images, thereby reducing the influence of light. This paper therefore proposes a real-time style transfer network that preserves the image content information (e.g., gaze direction, pupil center position) of an input image (a real image) while inferring the style information (e.g., image color structure, semantic features) of a style image (a synthetic image). In addition, the network accelerates the convergence speed of the model and adapts to multi-scale images. Experiments using mixed (qualitative and quantitative) methods demonstrate the possibility of purifying real images under complex light directions. Qualitatively, the proposed method is compared with available methods in a series of indoor and outdoor scenarios from the LPW dataset. Quantitatively, the purified images are evaluated by training a gaze estimation model across datasets. The results show a significant improvement over the baseline method using raw real images.
There are four main categories of eye image synthesis methods: optical flow @cite_5 @cite_23 , 3D eye reconstruction @cite_18 @cite_21 , model-based methods @cite_29 , and GANs (Generative Adversarial Networks) @cite_1 .
{ "cite_N": [ "@cite_18", "@cite_29", "@cite_21", "@cite_1", "@cite_23", "@cite_5" ], "mid": [ "1995694455", "2519247488", "2950402190", "", "2003816455", "1899110234" ], "abstract": [ "Inferring human gaze from low-resolution eye images is still a challenging task despite its practical importance in many application scenarios. This paper presents a learning-by-synthesis approach to accurate image-based gaze estimation that is person- and head pose-independent. Unlike existing appearance-based methods that assume person-specific training data, we use a large amount of cross-subject training data to train a 3D gaze estimator. We collect the largest and fully calibrated multi-view gaze dataset and perform a 3D reconstruction in order to generate dense training data of eye images. By using the synthesized dataset to learn a random regression forest, we show that our method outperforms existing methods that use low-resolution eye images.", "Morphable face models are a powerful tool, but have previously failed to model the eye accurately due to complexities in its material and motion. We present a new multi-part model of the eye that includes a morphable model of the facial eye region, as well as an anatomy-based eyeball model. It is the first morphable model that accurately captures eye region shape, since it was built from high-quality head scans. It is also the first to allow independent eyeball movement, since we treat it as a separate part. To showcase our model we present a new method for illumination- and head-pose–invariant gaze estimation from a single RGB image. We fit our model to an image through analysis-by-synthesis, solving for eye region shape, texture, eyeball pose, and illumination simultaneously. The fitted eyeball pose parameters are then used to estimate gaze direction. 
Through evaluation on two standard datasets we show that our method generalizes to both webcam and high-quality camera images, and outperforms a state-of-the-art CNN method achieving a gaze estimation accuracy of (9.44^ ) in a challenging user-independent scenario.", "Images of the eye are key in several computer vision problems, such as shape registration and gaze estimation. Recent large-scale supervised methods for these problems require time-consuming data collection and manual annotation, which can be unreliable. We propose synthesizing perfectly labelled photo-realistic training data in a fraction of the time. We used computer graphics techniques to build a collection of dynamic eye-region models from head scan geometry. These were randomly posed to synthesize close-up eye images for a wide range of head poses, gaze directions, and illumination conditions. We used our model's controllability to verify the importance of realistic illumination and shape variations in eye-region training data. Finally, we demonstrate the benefits of our synthesized training data (SynthesEyes) by out-performing state-of-the-art methods for eye-shape registration as well as cross-dataset appearance-based gaze estimation in the wild.", "", "Existing appearance-based gaze estimation methods suffer from tedious calibration and appearance variation caused by head movement. In this paper, to handle this problem, we propose a novel appearance-based gaze estimation method by introducing supervised adaptive feature extraction and hierarchical mapping model. Firstly, an adaptive feature learning method is proposed to extract topology-preserving (TOP) feature individually. Then hierarchical mapping method is proposed to localize gaze position based on coarse-to-fine strategy. Appearance synthesis approach is used to increase the refer sample density. 
Experiments show that under the condition of sparse calibration, proposed method has better performance in accuracy than existing methods under fixed head pose without chinrest. Moreover, our method can be easily extended for head pose-varying gaze estimation.", "In this paper, we address the problem of free head motion in appearance-based gaze estimation. This problem remains challenging because head motion changes eye appearance significantly, and thus, training images captured for an original head pose cannot handle test images captured for other head poses. To overcome this difficulty, we propose a novel gaze estimation method that handles free head motion via eye image synthesis based on a single camera. Compared with conventional fixed head pose methods with original training images, our method only captures four additional eye images under four reference head poses, and then, precisely synthesizes new training images for other unseen head poses in estimation. To this end, we propose a single-directional (SD) flow model to efficiently handle eye image variations due to head motion. We show how to estimate SD flows for reference head poses first, and then use them to produce new SD flows for training image synthesis. Finally, with synthetic training images, joint optimization is applied that simultaneously solves an eye image alignment and a gaze estimation. Evaluation of the method was conducted through experiments to assess its performance and demonstrate its effectiveness." ] }
1903.05820
2964198723
Abstract Recently, progress in learning-by-synthesis has made it possible to train models on synthetic images, which can effectively reduce the cost of human and material resources. However, because the distribution of synthetic images differs from that of real images, the desired performance still cannot be achieved. Real images contain multiple forms of light orientation, while synthetic images contain a uniform light orientation; these features are characteristic of outdoor and indoor scenes, respectively. To solve this problem, previous methods learned a model to improve the realism of synthetic images. Different from these methods, this paper takes the first step toward purifying real images. Through a style transfer task, the distribution of outdoor real images is converted into that of indoor synthetic images, thereby reducing the influence of light. This paper therefore proposes a real-time style transfer network that preserves the image content information (e.g., gaze direction, pupil center position) of an input image (a real image) while inferring the style information (e.g., image color structure, semantic features) of a style image (a synthetic image). In addition, the network accelerates the convergence speed of the model and adapts to multi-scale images. Experiments using mixed (qualitative and quantitative) methods demonstrate the possibility of purifying real images under complex light directions. Qualitatively, the proposed method is compared with available methods in a series of indoor and outdoor scenarios from the LPW dataset. Quantitatively, the purified images are evaluated by training a gaze estimation model across datasets. The results show a significant improvement over the baseline method using raw real images.
The eye image synthesis process in @cite_5 used 1D flows to simulate the appearance distortion caused by head pose movement, and @cite_23 introduced a 2D interpolation to synthesize the eye appearance variation caused by eyeball movement. These optical flow methods treat eye image synthesis as an optical shift of the original image and cannot be used under large head rotations. Generating eye images by 3D eye reconstruction depends heavily on a pre-trained 3D face model: @cite_18 recovered multi-view eye images from 3D shapes of the eye region reconstructed with an 8-camera capture system, while @cite_21 relied on high-quality head scans to collect high-resolution eye images. To generate multi-part eye images, @cite_29 also presented a morphable model of the facial eye region as well as an anatomy-based eyeball model; such model-based methods tune parameters to obtain high-resolution eye images that coincide with the ground truth. @cite_1 used GANs to generate synthetic eye images from unlabeled real data, learning a refiner model that improves the realism of synthetic images. However, because GANs can output different synthetic images for the same input, it is still difficult to control the generation of images with a specific gaze angle.
{ "cite_N": [ "@cite_18", "@cite_29", "@cite_21", "@cite_1", "@cite_23", "@cite_5" ], "mid": [ "1995694455", "2519247488", "2950402190", "", "2003816455", "1899110234" ], "abstract": [ "Inferring human gaze from low-resolution eye images is still a challenging task despite its practical importance in many application scenarios. This paper presents a learning-by-synthesis approach to accurate image-based gaze estimation that is person- and head pose-independent. Unlike existing appearance-based methods that assume person-specific training data, we use a large amount of cross-subject training data to train a 3D gaze estimator. We collect the largest and fully calibrated multi-view gaze dataset and perform a 3D reconstruction in order to generate dense training data of eye images. By using the synthesized dataset to learn a random regression forest, we show that our method outperforms existing methods that use low-resolution eye images.", "Morphable face models are a powerful tool, but have previously failed to model the eye accurately due to complexities in its material and motion. We present a new multi-part model of the eye that includes a morphable model of the facial eye region, as well as an anatomy-based eyeball model. It is the first morphable model that accurately captures eye region shape, since it was built from high-quality head scans. It is also the first to allow independent eyeball movement, since we treat it as a separate part. To showcase our model we present a new method for illumination- and head-pose–invariant gaze estimation from a single RGB image. We fit our model to an image through analysis-by-synthesis, solving for eye region shape, texture, eyeball pose, and illumination simultaneously. The fitted eyeball pose parameters are then used to estimate gaze direction. 
Through evaluation on two standard datasets we show that our method generalizes to both webcam and high-quality camera images, and outperforms a state-of-the-art CNN method achieving a gaze estimation accuracy of (9.44^ ) in a challenging user-independent scenario.", "Images of the eye are key in several computer vision problems, such as shape registration and gaze estimation. Recent large-scale supervised methods for these problems require time-consuming data collection and manual annotation, which can be unreliable. We propose synthesizing perfectly labelled photo-realistic training data in a fraction of the time. We used computer graphics techniques to build a collection of dynamic eye-region models from head scan geometry. These were randomly posed to synthesize close-up eye images for a wide range of head poses, gaze directions, and illumination conditions. We used our model's controllability to verify the importance of realistic illumination and shape variations in eye-region training data. Finally, we demonstrate the benefits of our synthesized training data (SynthesEyes) by out-performing state-of-the-art methods for eye-shape registration as well as cross-dataset appearance-based gaze estimation in the wild.", "", "Existing appearance-based gaze estimation methods suffer from tedious calibration and appearance variation caused by head movement. In this paper, to handle this problem, we propose a novel appearance-based gaze estimation method by introducing supervised adaptive feature extraction and hierarchical mapping model. Firstly, an adaptive feature learning method is proposed to extract topology-preserving (TOP) feature individually. Then hierarchical mapping method is proposed to localize gaze position based on coarse-to-fine strategy. Appearance synthesis approach is used to increase the refer sample density. 
Experiments show that under the condition of sparse calibration, proposed method has better performance in accuracy than existing methods under fixed head pose without chinrest. Moreover, our method can be easily extended for head pose-varying gaze estimation.", "In this paper, we address the problem of free head motion in appearance-based gaze estimation. This problem remains challenging because head motion changes eye appearance significantly, and thus, training images captured for an original head pose cannot handle test images captured for other head poses. To overcome this difficulty, we propose a novel gaze estimation method that handles free head motion via eye image synthesis based on a single camera. Compared with conventional fixed head pose methods with original training images, our method only captures four additional eye images under four reference head poses, and then, precisely synthesizes new training images for other unseen head poses in estimation. To this end, we propose a single-directional (SD) flow model to efficiently handle eye image variations due to head motion. We show how to estimate SD flows for reference head poses first, and then use them to produce new SD flows for training image synthesis. Finally, with synthetic training images, joint optimization is applied that simultaneously solves an eye image alignment and a gaze estimation. Evaluation of the method was conducted through experiments to assess its performance and demonstrate its effectiveness." ] }
1903.05820
2964198723
Abstract Recently, progress in learning-by-synthesis has made it possible to train models on synthetic images, which can effectively reduce the cost of human and material resources. However, because the distribution of synthetic images differs from that of real images, the desired performance still cannot be achieved. Real images contain multiple forms of light orientation, while synthetic images contain a uniform light orientation; these features are characteristic of outdoor and indoor scenes, respectively. To solve this problem, previous methods learned a model to improve the realism of synthetic images. Different from these methods, this paper takes the first step toward purifying real images. Through a style transfer task, the distribution of outdoor real images is converted into that of indoor synthetic images, thereby reducing the influence of light. This paper therefore proposes a real-time style transfer network that preserves the image content information (e.g., gaze direction, pupil center position) of an input image (a real image) while inferring the style information (e.g., image color structure, semantic features) of a style image (a synthetic image). In addition, the network accelerates the convergence speed of the model and adapts to multi-scale images. Experiments using mixed (qualitative and quantitative) methods demonstrate the possibility of purifying real images under complex light directions. Qualitatively, the proposed method is compared with available methods in a series of indoor and outdoor scenarios from the LPW dataset. Quantitatively, the purified images are evaluated by training a gaze estimation model across datasets. The results show a significant improvement over the baseline method using raw real images.
Learning-based methods perform well in appearance-based gaze estimation but require large amounts of training data. Learning-by-synthesis approaches were proposed to solve this problem. @cite_11 presented a novel method to rapidly synthesize large amounts of variable eye-region images as training data, which addressed the limitations of learning-by-synthesis with respect to appearance variability and the head pose and gaze angle distribution. Other works learn a feature representation in feature space. For instance, @cite_23 proposed an appearance-based gaze estimation method based on supervised adaptive feature extraction @cite_4 @cite_13 @cite_17 @cite_2 @cite_12 and a hierarchical mapping model @cite_27 @cite_3 @cite_31 , in which an appearance synthesis method is used to increase the sample density. @cite_20 introduced a CNN-based gaze estimation method that concatenates the head pose vector in a hidden layer of the neural network; this change improved the performance of CNN-based gaze estimation trained on a synthetic image dataset. @cite_18 presented a learning-by-synthesis approach for appearance-based gaze estimation and trained a 3D gaze estimator on a large amount of cross-subject training data. In their experiments, @math -nearest-neighbor regression was selected as a comparison, showing that @math -NN regression estimators can perform well given a large number of dense training samples.
{ "cite_N": [ "@cite_18", "@cite_11", "@cite_4", "@cite_3", "@cite_27", "@cite_23", "@cite_2", "@cite_31", "@cite_20", "@cite_13", "@cite_12", "@cite_17" ], "mid": [ "1995694455", "2299591120", "2963701987", "2605117450", "2799150641", "2003816455", "1572158434", "2582558662", "2027879843", "2793244874", "2408458745", "2752197020" ], "abstract": [ "Inferring human gaze from low-resolution eye images is still a challenging task despite its practical importance in many application scenarios. This paper presents a learning-by-synthesis approach to accurate image-based gaze estimation that is person- and head pose-independent. Unlike existing appearance-based methods that assume person-specific training data, we use a large amount of cross-subject training data to train a 3D gaze estimator. We collect the largest and fully calibrated multi-view gaze dataset and perform a 3D reconstruction in order to generate dense training data of eye images. By using the synthesized dataset to learn a random regression forest, we show that our method outperforms existing methods that use low-resolution eye images.", "Learning-based methods for appearance-based gaze estimation achieve state-of-the-art performance in challenging real-world settings but require large amounts of labelled training data. Learning-by-synthesis was proposed as a promising solution to this problem but current methods are limited with respect to speed, appearance variability, and the head pose and gaze angle distribution they can synthesize. We present UnityEyes, a novel method to rapidly synthesize large amounts of variable eye region images as training data. Our method combines a novel generative 3D model of the human eye region with a real-time rendering framework. The model is based on high-resolution 3D face scans and uses real-time approximations for complex eyeball materials and structures as well as anatomically inspired procedural geometry methods for eyelid animation. 
We show that these synthesized images can be used to estimate gaze in difficult in-the-wild scenarios, even for extreme gaze angles or in cases in which the pupil is fully occluded. We also demonstrate competitive gaze estimation results on a benchmark in-the-wild dataset, despite only using a light-weight nearest-neighbor algorithm. We are making our UnityEyes synthesis framework available online for the benefit of the research community.", "Multi-view spectral clustering, which aims at yielding an agreement or consensus data objects grouping across multi-views with their graph laplacian matrices, is a fundamental clustering problem. Among the existing methods, Low-Rank Representation (LRR) based method is quite superior in terms of its effectiveness, intuitiveness and robustness to noise corruptions. However, it aggressively tries to learn a common low-dimensional subspace for multi-view data, while inattentively ignoring the local manifold structure in each view, which is critically important to the spectral clustering; worse still, the low-rank minimization is enforced to achieve the data correlation consensus among all views, failing to flexibly preserve the local manifold structure for each view. In this paper, 1) we propose a multi-graph laplacian regularized LRR with each graph laplacian corresponding to one view to characterize its local manifold structure. 2) Instead of directly enforcing the low-rank minimization among all views for correlation consensus, we separately impose low-rank constraint on each view, coupled with a mutual structural consensus constraint, where it is able to not only well preserve the local manifold structure but also serve as a constraint for that from other views, which iteratively makes the views more agreeable. 
Extensive experiments on real-world multi-view data sets demonstrate its superiority.", "Fine-grained car recognition aims to recognize the category information of a car, such as car make, car model, or even the year of manufacture. A number of recent studies have shown that a deep convolutional neural network (DCNN) trained on a large-scale data set can achieve impressive results at a range of generic object classification tasks. In this paper, we propose a spatially weighted pooling (SWP) strategy, which considerably improves the robustness and effectiveness of the feature representation of most dominant DCNNs. More specifically, the SWP is a novel pooling layer, which contains a predefined number of spatially weighted masks or pooling channels. The SWP pools the extracted features of DCNNs with the guidance of its learnt masks, which measures the importance of the spatial units in terms of discriminative power. As the existing methods that apply uniform grid pooling on the convolutional feature maps of DCNNs, the proposed method can extract the convolutional features and generate the pooling channels from a single DCNN. Thus minimal modification is needed in terms of implementation. Moreover, the parameters of the SWP layer can be learned in the end-to-end training process of the DCNN. By applying our method to several fine-grained car recognition data sets, we demonstrate that the proposed method can achieve better performance than recent approaches in the literature. We advance the state-of-the-art results by improving the accuracy from 92.6 to 93.1 on the Stanford Cars-196 data set and 91.2 to 97.6 on the recent CompCars data set. We have also tested the proposed method on two additional large-scale data sets with impressive results observed.", "In this paper, we propose a novel deep generative approach to cross-modal retrieval to learn hash functions in the absence of paired training samples through the cycle consistency loss. 
Our proposed approach employs adversarial training scheme to learn a couple of hash functions enabling translation between modalities while assuming the underlying semantic relationship. To induce the hash codes with semantics to the input-output pair, cycle consistency loss is further delved into the adversarial training to strengthen the correlation between the inputs and corresponding outputs. Our approach is generative to learn hash functions, such that the learned hash codes can maximally correlate each input–output correspondence and also regenerate the inputs so as to minimize the information loss. The learning to hash embedding is thus performed to jointly optimize the parameters of the hash functions across modalities as well as the associated generative models. Extensive experiments on a variety of large-scale cross-modal data sets demonstrate that our proposed method outperforms the state of the arts.", "Existing appearance-based gaze estimation methods suffer from tedious calibration and appearance variation caused by head movement. In this paper, to handle this problem, we propose a novel appearance-based gaze estimation method by introducing supervised adaptive feature extraction and hierarchical mapping model. Firstly, an adaptive feature learning method is proposed to extract topology-preserving (TOP) feature individually. Then hierarchical mapping method is proposed to localize gaze position based on coarse-to-fine strategy. Appearance synthesis approach is used to increase the refer sample density. Experiments show that under the condition of sparse calibration, proposed method has better performance in accuracy than existing methods under fixed head pose without chinrest. Moreover, our method can be easily extended for head pose-varying gaze estimation.", "More often than not, a multimedia data described by multiple features, such as color and shape features, can be naturally decomposed of multi-views. 
Since multi-views provide complementary information to each other, great endeavors have been dedicated by leveraging multiple views instead of a single view to achieve the better clustering performance. To effectively exploit data correlation consensus among multi-views, in this paper, we study subspace clustering for multi-view data while keeping individual views well encapsulated. For characterizing data correlations, we generate a similarity matrix in a way that high affinity values are assigned to data objects within the same subspace across views, while the correlations among data objects from distinct subspaces are minimized. Before generating this matrix, however, we should consider that multi-view data in practice might be corrupted by noise. The corrupted data will significantly downgrade clustering results. We first present a novel objective function coupled with an angular based regularizer. By minimizing this function, multiple sparse vectors are obtained for each data object as its multiple representations. In fact, these sparse vectors result from reaching data correlation consensus on all views. For tackling noise corruption, we present a sparsity-based approach that refines the angular-based data correlation. Using this approach, a more ideal data similarity matrix is generated for multi-view data. Spectral clustering is then applied to the similarity matrix to obtain the final subspace clustering. Extensive experiments have been conducted to validate the effectiveness of our proposed approach.", "Given a query photo issued by a user (q-user), the landmark retrieval is to return a set of photos with their landmarks similar to those of the query, while the existing studies on the landmark retrieval focus on exploiting geometries of landmarks for similarity matches between candidate photos and a query photo. 
We observe that the same landmarks provided by different users over social media community may convey different geometry information depending on the viewpoints and or angles, and may, subsequently, yield very different results. In fact, dealing with the landmarks with low quality shapes caused by the photography of q-users is often nontrivial and has seldom been studied. In this paper, we propose a novel framework, namely, multi-query expansions, to retrieve semantically robust landmarks by two steps. First, we identify the top- @math photos regarding the latent topics of a query landmark to construct multi-query set so as to remedy its possible low quality shape. For this purpose, we significantly extend the techniques of Latent Dirichlet Allocation. Then, motivated by the typical collaborative filtering methods, we propose to learn a collaborative deep networks-based semantically, nonlinear, and high-level features over the latent factor for landmark photo as the training set, which is formed by matrix factorization over collaborative user-photo matrix regarding the multi-query set. The learned deep network is further applied to generate the features for all the other photos, meanwhile resulting into a compact multi-query set within such space. Then, the final ranking scores are calculated over the high-level feature space between the multi-query set and all other photos, which are ranked to serve as the final ranking list of landmark retrieval. Extensive experiments are conducted on real-world social media data with both landmark photos together with their user information to show the superior performance over the existing methods, especially our recently proposed multi-query based mid-level pattern representation method [1] .", "Appearance-based gaze estimation is believed to work well in real-world settings, but existing datasets have been collected under controlled laboratory conditions and methods have been not evaluated across multiple datasets. 
In this work we study appearance-based gaze estimation in the wild. We present the MPIIGaze dataset that contains 213,659 images we collected from 15 participants during natural everyday laptop use over more than three months. Our dataset is significantly more variable than existing ones with respect to appearance and illumination. We also present a method for in-the-wild appearance-based gaze estimation using multimodal convolutional neural networks that significantly outperforms state-of-the art methods in the most challenging cross-dataset evaluation. We present an extensive evaluation of several state-of-the-art image-based gaze estimation algorithms on three current datasets, including our own. This evaluation provides clear insights and allows us to identify key research challenges of gaze estimation in the wild.", "", "Along with the arrival of multimedia time, multimedia data has replaced textual data to transfer information in various fields. As an important form of multimedia data, images have been widely utilized by many applications, such as face recognition and image classification. Therefore, how to accurately annotate each image from a large set of images is of vital importance but challenging. To perform these tasks well, it is crucial to extract suitable features to character the visual contents of images and learn an appropriate distance metric to measure similarities between all images. Unfortunately, existing feature operators, such as histogram of gradient, local binary pattern, and color histogram, care more about the visual character of images and lack the ability to distinguish semantic information. Similarities between those features cannot reflect the real category correlations due to the well-known semantic gap. In order to solve this problem, this paper proposes a regularized distance metric framework called semantic discriminative metric learning (SDML). 
SDML combines geometric mean with normalized divergences and separates images from different classes simultaneously. The learned distance metric can treat all images from different classes equally. And distinctions between similar classes with entirely different semantic contents are emphasized by SDML. This procedure ensures the consistency between dissimilarities and semantic distinctions and avoids inaccuracy similarities incurred by unbalanced locations of samples. Various experiments on benchmark image datasets show the excellent performance of the novel method.", "Multiview data clustering attracts more attention than their single-view counterparts due to the fact that leveraging multiple independent and complementary information from multiview feature spaces outperforms the single one. Multiview spectral clustering aims at yielding the data partition agreement over their local manifold structures by seeking eigenvalue–eigenvector decompositions. Among all the methods, low-rank representation (LRR) is effective, by exploring the multiview consensus structures beyond the low rankness to boost the clustering performance. However, as we observed, such classical paradigm still suffers from the following stand-out limitations for multiview spectral clustering of overlooking the flexible local manifold structure, caused by aggressively enforcing the low-rank data correlation agreement among all views, and such a strategy, therefore, cannot achieve the satisfied between-views agreement; worse still, LRR is not intuitively flexible to capture the latent data clustering structures. In this paper, first, we present the structured LRR by factorizing into the latent low-dimensional data-cluster representations, which characterize the data clustering structure for each view. Upon such representation, second, the Laplacian regularizer is imposed to be capable of preserving the flexible local manifold structure for each view. 
Third, we present an iterative multiview agreement strategy by minimizing the divergence objective among all factorized latent data-cluster representations during each iteration of optimization process, where such latent representation from each view serves to regulate those from other views, and such an intuitive process iteratively coordinates all views to be agreeable. Fourth, we remark that such data-cluster representation can flexibly encode the data clustering structure from any view with an adaptive input cluster number. To this end, finally, a novel nonconvex objective function is proposed via the efficient alternating minimization strategy. The complexity analysis is also presented. The extensive experiments conducted against the real-world multiview data sets demonstrate the superiority over the state of the arts." ] }
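The @math -NN baseline discussed in the related work above queries dense synthetic training samples by appearance similarity and averages their gaze labels. A minimal sketch of that idea follows; the feature vectors and gaze angles are invented toy values, not taken from any cited dataset:

```python
import math

def knn_gaze_estimate(train, query, k=3):
    """Estimate a gaze angle as the mean over the k training samples
    whose appearance features are closest to the query."""
    nearest = sorted(train, key=lambda s: math.dist(s[0], query))[:k]
    return sum(angle for _, angle in nearest) / len(nearest)

# Toy "eye appearance" features paired with gaze angles in degrees
# (invented values for illustration only).
synthetic = [((0.0, 0.0), 0.0), ((0.1, 0.0), 1.0), ((0.2, 0.1), 2.0),
             ((0.9, 0.8), 20.0), ((1.0, 0.9), 21.0)]
print(knn_gaze_estimate(synthetic, (0.05, 0.0)))  # → 1.0 (mean of 0, 1, 2)
```

With dense enough synthetic coverage, even this unweighted average tracks the query's neighborhood, which is the effect the cited @math -NN comparison relies on.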
1903.05832
2922009206
Covert networks are social networks that often consist of harmful users. Social Network Analysis (SNA) has played an important role in reducing criminal activities (e.g., counter terrorism) via detecting the influential users in such networks. There are various popular measures to quantify how influential or central any vertex is in a network. As expected, strategic and influential miscreants in covert networks would try to hide themselves and their partners (called leaders) from being detected via these measures by introducing new edges. show that the corresponding computational problem, called Hiding Leader, is NP-Complete for the degree and closeness centrality measures. We study the popular core centrality measure and show that the problem is NP-Complete even when the core centrality of every leader is only @math . On the contrary, we prove that the problem becomes polynomial time solvable for the degree centrality measure if the degree of every leader is bounded above by any constant. We then focus on the optimization version of the problem and show that the Hiding Leader problem admits a @math factor approximation algorithm for the degree centrality measure. We complement it by proving that one cannot hope to have any @math factor approximation algorithm for any constant @math unless there is a @math factor polynomial time algorithm for the Densest @math -Subgraph problem which would be considered a significant breakthrough.
first proposed and studied the problem @cite_15 @cite_27 . They proved that the problem is NP-complete for both the degree and closeness centrality measures. They also proposed a procedure to design a captain (covert) network from scratch that not only hides the leaders with respect to the degree, closeness, and betweenness centrality measures, but also keeps the influence of the leaders in the network high. In this paper, we provide two approximability results, for degree centrality and core centrality respectively. We also show that the problem is harder in the case of core centrality. @cite_45 studied a related graph anonymization problem: modifying the network with a minimal number of edge additions so that every node's degree is shared by at least a given number of other nodes.
{ "cite_N": [ "@cite_27", "@cite_15", "@cite_45" ], "mid": [ "2492695942", "2620696915", "1998091733" ], "abstract": [ "The Internet and social media have fuelled enormous interest in social network analysis. New tools continue to be developed and used to analyse our personal connections, with particular emphasis on detecting communities or identifying key individuals in a social network. This raises privacy concerns that are likely to exacerbate in the future. With this in mind, we ask the question ‘Can individuals or groups actively manage their connections to evade social network analysis tools?’ By addressing this question, the general public may better protect their privacy, oppressed activist groups may better conceal their existence and security agencies may better understand how terrorists escape detection. We first study how an individual can evade ‘node centrality’ analysis while minimizing the negative impact that this may have on his or her influence. We prove that an optimal solution to this problem is difficult to compute. Despite this hardness, we demonstrate how even a simple heuristic, whereby attention is restricted to the individual’s immediate neighbourhood, can be surprisingly effective in practice; for example, it could easily disguise Mohamed Atta’s leading position within the World Trade Center terrorist network. We also study how a community can increase the likelihood of being overlooked by community-detection algorithms. We propose a measure of concealment—expressing how well a community is hidden—and use it to demonstrate the effectiveness of a simple heuristic, whereby members of the community either ‘unfriend’ certain other members or ‘befriend’ some non-members in a coordinated effort to camouflage their community.", "Centrality measures are widely used to identify leaders of covert networks. We study how a group of such leaders can avoid being detected by such measures. 
More concretely, we study the hardness of choosing a set of edges that can be added to the network in order to decrease the leaders' ranking according to two fundamental centrality measures, namely degree, and closeness. We prove that this problem is NP-complete for each measure. We then study how the leaders can construct a network from scratch, designed specifically for them to hide in disguise. We identify a network structure that not only guarantees to hide the leaders to a certain extent, but also allows them to spread their influence across the network.", "The proliferation of network data in various application domains has raised privacy concerns for the individuals involved. Recent studies show that simply removing the identities of the nodes before publishing the graph social network data does not guarantee privacy. The structure of the graph itself, and in its basic form the degree of the nodes, can be revealing the identities of individuals. To address this issue, we study a specific graph-anonymization problem. We call a graph k-degree anonymous if for every node v, there exist at least k-1 other nodes in the graph with the same degree as v. This definition of anonymity prevents the re-identification of individuals by adversaries with a priori knowledge of the degree of certain nodes. We formally define the graph-anonymization problem that, given a graph G, asks for the k-degree anonymous graph that stems from G with the minimum number of graph-modification operations. We devise simple and efficient algorithms for solving this problem. Our algorithms are based on principles related to the realizability of degree sequences. We apply our methods to a large spectrum of synthetic and real datasets and demonstrate their efficiency and practical utility." ] }
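As a toy illustration of the Hiding Leader idea discussed above (not the cited NP-hardness construction or any algorithm from the papers), one can greedily add edges among non-leaders until the leader falls out of the top-k degree-centrality ranking; the node names and graph below are invented:

```python
from itertools import combinations

def degree_top_k(adj, k):
    """The k vertices of highest degree (ties broken by dict order)."""
    return set(sorted(adj, key=lambda v: len(adj[v]), reverse=True)[:k])

def hide_leader(adj, leader, k):
    """Greedily add edges between non-leader pairs until the leader
    leaves the top-k degree ranking; returns the edges added."""
    added = []
    for u, v in combinations([x for x in adj if x != leader], 2):
        if leader not in degree_top_k(adj, k):
            break
        if v not in adj[u]:
            adj[u].add(v)
            adj[v].add(u)
            added.append((u, v))
    return added

# Leader 'L' starts as the unique highest-degree vertex.
adj = {'L': {'a', 'b', 'c'}, 'a': {'L'}, 'b': {'L'}, 'c': {'L'},
       'd': set(), 'e': set()}
print(hide_leader(adj, 'L', 1))  # edges added among non-leaders
print(degree_top_k(adj, 1))      # 'L' is no longer ranked first
```

The greedy loop mirrors the problem statement (only edge additions are allowed), but carries none of the optimality guarantees studied in the paper.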
1903.05832
2922009206
Covert networks are social networks that often consist of harmful users. Social Network Analysis (SNA) has played an important role in reducing criminal activities (e.g., counter terrorism) via detecting the influential users in such networks. There are various popular measures to quantify how influential or central any vertex is in a network. As expected, strategic and influential miscreants in covert networks would try to hide themselves and their partners (called leaders) from being detected via these measures by introducing new edges. show that the corresponding computational problem, called Hiding Leader, is NP-Complete for the degree and closeness centrality measures. We study the popular core centrality measure and show that the problem is NP-Complete even when the core centrality of every leader is only @math . On the contrary, we prove that the problem becomes polynomial time solvable for the degree centrality measure if the degree of every leader is bounded above by any constant. We then focus on the optimization version of the problem and show that the Hiding Leader problem admits a @math factor approximation algorithm for the degree centrality measure. We complement it by proving that one cannot hope to have any @math factor approximation algorithm for any constant @math unless there is a @math factor polynomial time algorithm for the Densest @math -Subgraph problem which would be considered a significant breakthrough.
Other problems related to privacy issues in social networks have been studied before @cite_32 @cite_54 . In @cite_32 , the authors showed how an adversary can exploit online social networks with a mixture of public and private profiles to infer private information about users. @cite_54 discussed the threat of malware aimed at stealing social network and behavioral information in real-world social networks.
{ "cite_N": [ "@cite_54", "@cite_32" ], "mid": [ "2002853559", "2103133870" ], "abstract": [ "Stealing-reality attacks attempt to steal social network and behavioral information through data collection and inference techniques, making them more dangerous than other types of identity theft.", "In order to address privacy concerns, many social media websites allow users to hide their personal profiles from the public. In this work, we show how an adversary can exploit an online social network with a mixture of public and private user profiles to predict the private attributes of users. We map this problem to a relational classification problem and we propose practical models that use friendship and group membership information (which is often not hidden) to infer sensitive attributes. The key novel idea is that in addition to friendship links, groups can be carriers of significant information. We show that on several well-known social media sites, we can easily and accurately recover the information of private-profile users. To the best of our knowledge, this is the first work that uses link-based and group-based classification to study privacy implications in social networks with mixed public and private user profiles." ] }
1903.05832
2922009206
Covert networks are social networks that often consist of harmful users. Social Network Analysis (SNA) has played an important role in reducing criminal activities (e.g., counter terrorism) via detecting the influential users in such networks. There are various popular measures to quantify how influential or central any vertex is in a network. As expected, strategic and influential miscreants in covert networks would try to hide herself and her partners (called leaders ) from being detected via these measures by introducing new edges. show that the corresponding computational problem, called Hiding Leader, is NP-Complete for the degree and closeness centrality measures. We study the popular core centrality measure and show that the problem is NP-Complete even when the core centrality of every leader is only @math . On the contrary, we prove that the problem becomes polynomial time solvable for the degree centrality measure if the degree of every leader is bounded above by any constant. We then focus on the optimization version of the problem and show that the Hiding Leader problem admits a @math factor approximation algorithm for the degree centrality measure. We complement it by proving that one cannot hope to have any @math factor approximation algorithm for any constant @math unless there is a @math factor polynomial time algorithm for the Densest @math -Subgraph problem which would be considered a significant breakthrough.
We also briefly review previous work on other network modification (design) problems. A set of design problems was introduced in @cite_5 . @cite_60 addressed a shortest path optimization problem via improving edge weights on undirected graphs. The node version of this problem was also studied @cite_66 @cite_1 @cite_61 . @cite_62 proposed approximation algorithms for single-source and all-pairs shortest path minimization. Faster algorithms for some of these problems were presented in @cite_11 @cite_44 . @cite_63 minimized the diameter of a network by adding shortcut edges. @cite_10 studied the effect of social networks on surprise in elections.
{ "cite_N": [ "@cite_61", "@cite_62", "@cite_60", "@cite_1", "@cite_44", "@cite_63", "@cite_5", "@cite_10", "@cite_66", "@cite_11" ], "mid": [ "2782743306", "", "", "2807726738", "2407891964", "", "2149161872", "2909401526", "", "2053572557" ], "abstract": [ "Reduction of end-to-end network delay is an optimization task with applications in multiple domains. Low delays enable improved information flow in social networks, quick spread of ideas in collaboration networks, low travel times for vehicles on road networks, and increased rate of packets in the case of communication networks. Delay reduction can be achieved by both improving the propagation capabilities of individual nodes and adding additional edges in the network. One of the main challenges in such network design problems is that the effects of local changes are not independent, and as a consequence, there is a combinatorial search-space of possible improvements. Thus, minimizing the cumulative propagation delay requires novel scalable and data-driven approaches. We consider the problem of network delay minimization via node upgrades. We show that the problem is NP-hard and prove strong inapproximability results about it (i.e., APX-hard) even for equal vertex delays. On the positive side, probabilistic approximations for a restricted version of the problem can be obtained. We propose a greedy heuristic to solve the general problem setting which has good quality in practice, but does not scale to very large instances. To enable scalability to real-world networks, we develop approximations for Greedy with probabilistic guarantees for every iteration, tailored to different models of delay distribution and network structures. Our methods scale almost linearly with the graph size and consistently outperform competitors in quality. We evaluate our approaches on several real-world graphs from different genres. 
We achieve up to two orders of magnitude speed-up compared to alternatives from the literature on moderate size networks, and obtain high-quality results in minutes on large datasets while competitors from the literature require more than four hours.", "", "", "", "The small world phenomenon is a desirable property of social networks, since it guarantees short paths between the nodes of the social graph and thus efficient information spread on the network. It is thus in the benefit of both network users and network owners to enforce and maintain this property. In this work, we study the problem of finding a subset of k edges from a set of candidate edges whose addition to a network leads to the greatest reduction in its average shortest path length. We formulate the problem as a combinatorial optimization problem, and show that it is NP-hard and that known approximation techniques are not applicable. We describe an efficient method for computing the exact effect of a single edge insertion on the average shortest path length, as well as several heuristics for efficiently estimating this effect. We perform experiments on real data to study the performance of our algorithms in practice.", "", "Graphs with weights and delays associated with their edges and or vertices are often used to model communication and signal flow networks. Network performance can be improved by upgrading the network vertices. Such an upgrade reduces the edge vertex delays and comes at a cost. We study different formulations of this network performance improvement problem and show that these are NP-hard. We then consider one of the formulations and develop polynomial time algorithms for some special cases and pseudopolynomial time algorithms for others.", "Elections involving a large voter population often lead to outcomes that surprise many. A better prediction of the true outcome helps reduce the adverse effect of surprise on the economy of a sizable population. 
This paper starts from the basic observation that individuals in the underlying population build estimates of the distribution of preferences of the whole population based on their immediate neighbors in the underlying social network. The outcome of the election leads to a surprise if these local estimates contradict the outcome of the election for some fixed voting rule. To get a quantitative understanding, we propose a novel mathematical model of the setting where the individuals in the population and their connections are described by a random graph with connection probabilities that are biased based on the preferences of the individuals. Each individual also has some estimate of the bias in their connections. The connection model is inspired by the homophily effect in social networks. We show that the election outcome leads to a surprise if the discrepancy between the estimated bias and the true bias in the local connections exceeds a certain threshold, and confirm the phenomenon that surprising outcomes are associated only with closely contested elections. We consider large elections with networked voters and compare standard voting rules based on their performance on surprise. Our results show that the rules have different behavior for different parts of the population. They also hint at an impossibility result: no reasonable voting rule can be less surprising for all parts of a population. To attest some of our theoretical predictions, we experiment with the large dataset of the UK-EU referendum (a.k.a. Brexit).", "", "Small changes in the network topology can have dramatic effects on its capacity to disseminate information. In this paper, we consider the problem of adding a small number of ghost edges in the network in order to minimize the average shortest-path distance between nodes, towards a smaller-world network.
We formalize the problem of suggesting ghost edges and we propose a novel method for quickly evaluating the importance of ghost edges in sparse graphs. Through experiments on real and synthetic data sets, we demonstrate that our approach performs very well, for a varying range of conditions, and it outperforms sensible baselines." ] }
1903.05598
2952052808
Recently, privacy has been gaining importance in several domains, especially for street-view images. The conventional way to protect it is to automatically detect and blur sensitive information in these images. However, the processing cost of blurring increases with the ever-growing resolution of images. We propose a system that remains cost-effective even after the resolution is increased by a factor of 2.5. The new system utilizes depth data obtained from LiDAR to significantly reduce the search space for detection, thereby reducing the processing cost. Besides this, we test several detectors after reducing the detection space and provide an alternative solution, based on state-of-the-art deep learning detectors, to the existing HoG-SVM-Deep system that is faster and achieves higher performance.
Nowadays, LiDAR-based systems are becoming more prevalent in the autonomous driving industry. RGB-D-based object detection has been explored in @cite_1 @cite_7 . However, to the best of our knowledge, this is the first work that utilizes depth obtained from LiDAR to reduce the search space of a detection algorithm. Plane fitting in point clouds using Random Sample Consensus (RANSAC) is well studied and several improvements have been proposed @cite_6 . In this research, we utilize depth data obtained from LiDAR to create a point cloud, which is later used to estimate planes. The estimated planes are then used to reduce the search space in the cycloramas.
{ "cite_N": [ "@cite_1", "@cite_6", "@cite_7" ], "mid": [ "2750980686", "2085261163", "" ], "abstract": [ "Object recognition from RGB-D images has become a hot topic and gained significant popularity in recent years due to its numerous applications. In this paper, we propose a novel multimodal deep convolutional neural network architecture for RGB-D object recognition which is composed of three streams with two different types of deep CNNs, where each stream can separately learn from each modality. Finally, we propose a combined architecture, a joint network of these three streams, to classify the objects. Compared to RGB data, RGB-D images provide additional depth information that can be represented as depth colorization methods or surface normals. Our goal is to exploit both colorization and surface normals information to encode depth images. We show that utilizing both colorization and surface normals of depth images combined with RGB can significantly improve the classification accuracy. We evaluate our model on one of the most challenging RGB-D object datasets and achieve comparable performance to state-of-the-art methods.", "A new paradigm, Random Sample Consensus (RANSAC), for fitting a model to experimental data is introduced. RANSAC is capable of interpreting/smoothing data containing a significant percentage of gross errors, and is thus ideally suited for applications in automated image analysis where interpretation is based on the data provided by error-prone feature detectors. A major portion of this paper describes the application of RANSAC to the Location Determination Problem (LDP): Given an image depicting a set of landmarks with known locations, determine that point in space from which the image was obtained. In response to a RANSAC requirement, new results are derived on the minimum number of landmarks needed to obtain a solution, and algorithms are presented for computing these minimum-landmark solutions in closed form.
These results provide the basis for an automatic system that can solve the LDP under difficult viewing conditions.", "" ] }
1903.05577
2922373806
We study the video super-resolution (SR) problem not for visual quality, but for facilitating video analytics tasks, e.g. action recognition. The popular action recognition methods based on convolutional networks, exemplified by two-stream networks, are not directly applicable on videos of different spatial resolutions. This can be remedied by performing video SR prior to recognition, which motivates us to improve the SR procedure for recognition accuracy. Tailored for two-stream action recognition networks, we propose two video SR methods for the spatial and temporal streams respectively. On the one hand, we observe that the added details by image SR methods can be either helpful or harmful for recognition, and we propose an optical-flow guided weighted mean-squared-error loss for our spatial-oriented SR (SoSR) network. On the other hand, we observe that existing video SR methods incur temporal discontinuity between frames, which also worsens the recognition accuracy, and we propose a siamese network for our temporal-oriented SR (ToSR) that emphasizes the temporal continuity between consecutive frames. We perform experiments using two state-of-the-art action recognition networks and two well-known datasets--UCF101 and HMDB51. Results demonstrate the effectiveness of our proposed SoSR and ToSR in improving recognition accuracy.
Almost all of the existing image SR methods are designed to enhance the visual quality by adding more image details. In earlier years, PSNR was evaluated as a surrogate of visual quality, and thus mean-squared-error was extensively used as the loss function @cite_36 @cite_30 @cite_2 @cite_3 @cite_37 @cite_14 @cite_4 @cite_27 . More recently, visual quality has been considered directly, and several different kinds of loss functions have been proposed, such as the perceptual loss @cite_21 and the loss defined by a generative adversarial network (GAN) @cite_17 . For example, Ledig et al. @cite_29 proposed SRGAN, which combines a GAN loss and a perceptual loss. It is also worth noting that PSNR and visual quality can even be contradictory @cite_39 .
{ "cite_N": [ "@cite_30", "@cite_37", "@cite_14", "@cite_4", "@cite_36", "@cite_29", "@cite_21", "@cite_3", "@cite_39", "@cite_27", "@cite_2", "@cite_17" ], "mid": [ "2949079773", "2747898905", "2964125708", "2735224642", "1885185971", "2523714292", "2950689937", "2476548250", "", "2950116990", "", "2099471712" ], "abstract": [ "We propose an image super-resolution (SR) method using a deeply-recursive convolutional network (DRCN). Our network has a very deep recursive layer (up to 16 recursions). Increasing recursion depth can improve performance without introducing new parameters for additional convolutions. Despite these advantages, learning a DRCN is very hard with a standard gradient descent method due to exploding/vanishing gradients. To ease the difficulty of training, we propose two extensions: recursive-supervision and skip-connection. Our method outperforms previous methods by a large margin.", "Recently, Convolutional Neural Network (CNN) based models have achieved great success in Single Image Super-Resolution (SISR). Owing to the strength of deep networks, these CNN models learn an effective nonlinear mapping from the low-resolution input image to the high-resolution target image, at the cost of requiring enormous parameters. This paper proposes a very deep CNN model (up to 52 convolutional layers) named Deep Recursive Residual Network (DRRN) that strives for deep yet concise networks. Specifically, residual learning is adopted, both in global and local manners, to mitigate the difficulty of training very deep networks; recursive learning is used to control the model parameters while increasing the depth. Extensive benchmark evaluation shows that DRRN significantly outperforms the state of the art in SISR, while utilizing far fewer parameters. Code is available at https://github.com/tyshiwo/DRRN_CVPR17.", "Recently, very deep convolutional neural networks (CNNs) have been attracting considerable attention in image restoration.
However, as the depth grows, the long-term dependency problem is rarely realized for these very deep models, which results in the prior states/layers having little influence on the subsequent ones. Motivated by the fact that human thoughts have persistency, we propose a very deep persistent memory network (MemNet) that introduces a memory block, consisting of a recursive unit and a gate unit, to explicitly mine persistent memory through an adaptive learning process. The recursive unit learns multi-level representations of the current state under different receptive fields. The representations and the outputs from the previous memory blocks are concatenated and sent to the gate unit, which adaptively controls how much of the previous states should be reserved, and decides how much of the current state should be stored. We apply MemNet to three image restoration tasks, i.e., image denoising, super-resolution and JPEG deblocking. Comprehensive experiments demonstrate the necessity of the MemNet and its unanimous superiority on all three tasks over the state of the art. Code is available at https://github.com/tyshiwo/MemNet.
The proposed methods show superior performance over the state-of-the-art methods on benchmark datasets and prove their excellence by winning the NTIRE2017 Super-Resolution Challenge.", "We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low/high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) that takes the low-resolution image as the input and outputs the high-resolution one. We further show that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network. But unlike traditional methods that handle each component separately, our method jointly optimizes all layers. Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art restoration quality, and achieves fast speed for practical on-line usage. We explore different network structures and parameter settings to achieve trade-offs between performance and speed. Moreover, we extend our network to cope with three color channels simultaneously, and show better overall reconstruction quality.
To our knowledge, it is the first framework capable of inferring photo-realistic natural images for 4x upscaling factors. To achieve this, we propose a perceptual loss function which consists of an adversarial loss and a content loss. The adversarial loss pushes our solution to the natural image manifold using a discriminator network that is trained to differentiate between the super-resolved images and original photo-realistic images. In addition, we use a content loss motivated by perceptual similarity instead of similarity in pixel space. Our deep residual network is able to recover photo-realistic textures from heavily downsampled images on public benchmarks. An extensive mean-opinion-score (MOS) test shows hugely significant gains in perceptual quality using SRGAN. The MOS scores obtained with SRGAN are closer to those of the original high-resolution images than to those obtained with any state-of-the-art method.", "We consider image transformation problems, where an input image is transformed into an output image. Recent methods for such problems typically train feed-forward convolutional neural networks using a loss between the output and ground-truth images. Parallel work has shown that high-quality images can be generated by defining and optimizing loss functions based on high-level features extracted from pretrained networks. We combine the benefits of both approaches, and propose the use of perceptual loss functions for training feed-forward networks for image transformation tasks. We show results on image style transfer, where a feed-forward network is trained to solve the optimization problem proposed by Gatys et al. in real time. Compared to the optimization-based method, our network gives similar qualitative results but is three orders of magnitude faster.
We also experiment with single-image super-resolution, where replacing a per-pixel loss with a perceptual loss gives visually pleasing results.", "Recently, several models based on deep neural networks have achieved great success in terms of both reconstruction accuracy and computational performance for single image super-resolution. In these methods, the low resolution (LR) input image is upscaled to the high resolution (HR) space using a single filter, commonly bicubic interpolation, before reconstruction. This means that the super-resolution (SR) operation is performed in HR space. We demonstrate that this is sub-optimal and adds computational complexity. In this paper, we present the first convolutional neural network (CNN) capable of real-time SR of 1080p videos on a single K2 GPU. To achieve this, we propose a novel CNN architecture where the feature maps are extracted in the LR space. In addition, we introduce an efficient sub-pixel convolution layer which learns an array of upscaling filters to upscale the final LR feature maps into the HR output. By doing so, we effectively replace the handcrafted bicubic filter in the SR pipeline with more complex upscaling filters specifically trained for each feature map, whilst also reducing the computational complexity of the overall SR operation. We evaluate the proposed approach using images and videos from publicly available datasets and show that it performs significantly better (+0.15dB on Images and +0.39dB on Videos) and is an order of magnitude faster than previous CNN-based methods.", "", "Convolutional neural network (CNN) depth is of crucial importance for image super-resolution (SR). However, we observe that deeper networks for image SR are more difficult to train. The low-resolution inputs and features contain abundant low-frequency information, which is treated equally across channels, hence hindering the representational ability of CNNs. 
To solve these problems, we propose the very deep residual channel attention networks (RCAN). Specifically, we propose a residual in residual (RIR) structure to form very deep network, which consists of several residual groups with long skip connections. Each residual group contains some residual blocks with short skip connections. Meanwhile, RIR allows abundant low-frequency information to be bypassed through multiple skip connections, making the main network focus on learning high-frequency information. Furthermore, we propose a channel attention mechanism to adaptively rescale channel-wise features by considering interdependencies among channels. Extensive experiments show that our RCAN achieves better accuracy and visual improvements against state-of-the-art methods.", "", "We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples." ] }
1903.05019
2922098218
We study two different versions of the simple exclusion process on augmented Galton-Watson trees, the constant speed model and the varying speed model. In both cases, the simple exclusion process starts from an equilibrium distribution with non-vanishing particle density. Moreover, we assume to have initially a particle in the root, the tagged particle. We show for both models that the tagged particle has a positive linear speed and we give explicit formulas for the speeds.
Studying the behavior of the tagged particle in an exclusion process is a classical problem @cite_2 . When the underlying graph is @math , many results have been obtained. In the case of translation invariant transition probabilities, a law of large numbers for the position of the tagged particle is known in all regimes @cite_15 @cite_17 . For the @math -dimensional ladder graph, the speed of a tagged particle was studied by Zhang @cite_0 . For the exclusion process on regular trees, a law of large numbers was established in @cite_5 ; we obtain these results as a special case. For random environments of the exclusion process, fewer results are known. In @cite_10 , Chayes and Liggett consider the case of the exclusion process in a one-dimensional i.i.d. random environment. For @math when the transition probabilities are symmetric and not concentrated on the nearest neighbors in the one-dimensional case, Kipnis and Varadhan established a central limit theorem for the tagged particle in their famous paper @cite_18 . Their result was the starting point for a sequence of papers showing central limit theorems for the position of the tagged particle, see @cite_7 @cite_16 for an overview.
{ "cite_N": [ "@cite_18", "@cite_7", "@cite_0", "@cite_2", "@cite_5", "@cite_15", "@cite_16", "@cite_10", "@cite_17" ], "mid": [ "1987805535", "", "2254435066", "2147301465", "2899267897", "", "", "2135992258", "2074072923" ], "abstract": [ "We prove a functional central limit theorem for additive functionals of stationary reversible ergodic Markov chains under virtually no assumptions other than the necessary ones. We use these results to study the asymptotic behavior of a tagged particle in an infinite particle system performing simple excluded random walk.", "", "We investigate a tagged particle in the exclusion processes on {1, ..., N} × ℤ^d, with different densities in different levels {k} × ℤ^d, ∀k. Ignoring the level in which the tagged particle lies, we are only concerned with its position in ℤ^d, denoted by X_t. Note that the whole space is not homogeneous. We define the environment process viewed from the tagged particle, of which X_t can be expressed as a functional. It is called the tagged particle process. We show the ergodicity of the tagged particle process, then prove the strong law of large numbers. Furthermore, we show the central limit theorem for X_t provided the zero-mean condition holds.", "For a gentle approach to the problems connected with interacting Markov processes we review first what is known in the absence of interaction. For simplicity let S be a countable (or finite) set, and consider a countable (or finite if S is finite) collection of independent Markov processes on S, with common transition function P_t(x,y), x, y ∈ S. It is natural to assume a constant invariant measure, i.e., @math .", "We consider exclusion processes on a rooted @math -regular tree. We start from a Bernoulli product measure conditioned on having a particle at the root, which we call the tagged particle. For @math , we show that the tagged particle has positive linear speed and satisfies a central limit theorem. We give an explicit formula for the speed.
As a key step in the proof, we first show that the exclusion process \"seen from the tagged particle\" has an ergodic invariant measure.", "", "", "The processes described in the title always have reversible stationary distributions. In this paper, we give sufficient conditions for the existence of, and for the nonexistence of, nonreversible stationary distributions. In the case of an i.i.d. environment, these combine to give a necessary and sufficient condition for the existence of nonreversible stationary distributions.", "We show the extremality of an invariant measure for the process «n» seen from a tagged particle and use the ergodicity properties of the initial process to show that the position satisfies a law of large numbers." ] }
1903.04954
2950857109
We develop an alternative theory to the aggregate matching function in which workers search for jobs through a network of firms: the labor flow network. The lack of an edge between two companies indicates the impossibility of labor flows between them due to high frictions. In equilibrium, firms' hiring behavior correlates through the network, generating highly disaggregated local unemployment. Hence, aggregation depends on the topology of the network in non-trivial ways. This theory provides new micro-foundations for the Beveridge curve, wage dispersion, and the employer-size premium. We apply our model to employer-employee matched records and find that network topologies with Pareto-distributed connections cause disproportionately large changes in aggregate unemployment under high labor supply elasticity.
Whether it is for the whole economy or for submarkets, there are a number of problems that arise from viewing matching in aggregate terms, and here we mention a few. First, when an AMF is responsible for pairing up workers and vacancies, it is assumed that all matches are equally likely. This neglects the importance that specific firms have in reallocating labor within a submarket. Second, defining a submarket is an arbitrary choice that might be well suited for a specific problem, but not necessarily for a broader context. Since these classifications are usually built for taxonomic purposes, they are not designed to minimize inter-submarket flows and maximize intra-submarket flows, which would capture the structural information of labor market frictions. This problem has been pointed out by @cite_17 in their classical paper on structural unemployment:
{ "cite_N": [ "@cite_17" ], "mid": [ "1998598607" ], "abstract": [ "Examines the role that small firms play in the American labor market. The authors seek to dispel two commonly held misconceptions: that small businesses generate the vast majority of jobs and that small business owners face limited political influence. Empirical data from the U.S. Small Business Administration, the Bureau of Labor Statistics, and the Census Bureau for the mid-1980s as well as privately collected data are reviewed to determine the differences between large and small firms as employers. The share of new jobs created by small firms, the percentage of the workforce employed by small firms, and the longevity of jobs created by small firms reveal that while these firms are responsible for a disproportionate share of new jobs, small firm employment share has largely remained constant. This results from the fact that new firms, and the new jobs they produce, tend to be small; small firms do not often produce new, long-lived jobs. Compensation and working conditions are reviewed. Although small firm wages tend to be lower, the traditional explanations -- more favorable working conditions at small firms, union avoidance, and discouragement of shirking -- are shown to have minimal explanatory power. Rather, lowered non-labor input prices for large firms enables those firms to pay higher wages. Quit rates and applicants-per-vacancy reveal that intangibles such as job satisfaction fail to explain the compensation differential. Unionization rates and union desirability among employees reveals that small firms could be prime targets for organizing drives. Lastly, the political influence of small businesses and the wisdom of governmental aid to small businesses and statutory exemptions or lax enforcement are critically assessed. (CAR)" ] }
1903.05084
2922437908
In complex processes, various events can happen in different sequences. The prediction of the next event activity given an a-priori process state is of importance in such processes. Recent methods leverage deep learning techniques such as recurrent neural networks to predict event activities from raw process logs. However, deep learning techniques cannot efficiently model logical behaviors of complex processes. In this paper, we take advantage of Petri nets as a powerful tool in modeling logical behaviors of complex processes. We propose an approach which first discovers Petri nets from event logs utilizing a recent process mining algorithm. In a second step, we enhance the obtained model with time decay functions to create timed process state samples. Finally, we use these samples in combination with token movement counters and Petri net markings to train a deep learning model that predicts the next event activity. We demonstrate significant performance improvements and outperform the state-of-the-art methods on eight out of nine real-world benchmark event logs in accuracy.
The application of deep learning to predictive business process mining has grown enormously in recent years. Researchers have shown the applicability of machine and deep learning to several target variables, such as the remaining time of running cases @cite_50 , the time of future events @cite_26 , and upcoming events in running processes while utilizing a-priori knowledge @cite_51 . The prediction of event activities can be considered a classification problem in which the probability of the next event activity @math given the state @math of the process at time @math , @math , is to be found.
{ "cite_N": [ "@cite_26", "@cite_51", "@cite_50" ], "mid": [ "2767415237", "2744700103", "2283585398" ], "abstract": [ "Predicting the completion time of business process instances would be a very helpful aid when managing processes under service level agreement constraints. The ability to know in advance the trend of running process instances would allow business managers to react in time, in order to prevent delays or undesirable situations. However, making such accurate forecasts is not easy: many factors may influence the required time to complete a process instance. In this paper, we propose an approach based on deep Recurrent Neural Networks (specifically LSTMs) that is able to exploit arbitrary information associated to single events, in order to produce an as-accurate-as-possible prediction of the completion time of running instances. Experiments on real-world datasets confirm the quality of our proposal.", "Predictive business process monitoring aims at leveraging past process execution data to predict how ongoing (uncompleted) process executions will unfold up to their completion. Nevertheless, cases exist in which, together with past execution data, some additional knowledge (a-priori knowledge) about how a process execution will develop in the future is available. This knowledge about the future can be leveraged for improving the quality of the predictions of events that are currently unknown. In this paper, we present two techniques - based on Recurrent Neural Networks with Long Short-Term Memory (LSTM) cells - able to leverage knowledge about the structure of the process execution traces as well as a-priori knowledge about how they will unfold in the future for predicting the sequence of future activities of ongoing process executions. 
The results obtained by applying these techniques to six real-life logs show an improvement in terms of accuracy over a plain LSTM-based baseline.", "The ability to know in advance the trend of running process instances, with respect to different features, such as the expected completion time, would allow business managers to counteract undesired situations in time, in order to prevent losses. Therefore, the ability to accurately predict future features of running business process instances would be a very helpful aid when managing processes, especially under service level agreement constraints. However, making such accurate forecasts is not easy: many factors may influence the predicted features. Many approaches have been proposed to cope with this problem but, generally, they assume that the underlying process is stationary. However, in real cases this assumption is not always true. In this work we present new methods for predicting the remaining time of running cases. In particular we propose a method, assuming process stationarity, which achieves state-of-the-art performance, and two other methods which are able to make predictions even with non-stationary processes. We also describe an approach able to predict the full sequence of activities that a running case is going to take. All these methods are extensively evaluated on different real case studies.
1903.05084
2922437908
In complex processes, various events can happen in different sequences. The prediction of the next event activity given an a-priori process state is of importance in such processes. Recent methods leverage deep learning techniques such as recurrent neural networks to predict event activities from raw process logs. However, deep learning techniques cannot efficiently model logical behaviors of complex processes. In this paper, we take advantage of Petri nets as a powerful tool in modeling logical behaviors of complex processes. We propose an approach which first discovers Petri nets from event logs utilizing a recent process mining algorithm. In a second step, we enhance the obtained model with time decay functions to create timed process state samples. Finally, we use these samples in combination with token movement counters and Petri net markings to train a deep learning model that predicts the next event activity. We demonstrate significant performance improvements and outperform the state-of-the-art methods on eight out of nine real-world benchmark event logs in accuracy.
Early predictive models focused on analytical approaches. @cite_64 introduced a hybrid approach that uses a sequence alignment technique to extract similar patterns and a combination of Markov models to predict upcoming event activities. The next event activity of a running process case is thus determined by the transition probabilities of the Markov models.
{ "cite_N": [ "@cite_64" ], "mid": [ "176495558" ], "abstract": [ "Process event prediction is the prediction of various properties of the remaining path of a process sequence or workflow. The prediction is based on the data extracted from a combination of historical (closed) and or live (open) workflows (jobs or process instances). In real-world applications, the problem is compounded by the fact that the number of unique workflows (process prototypes) can be enormous, their occurrences can be limited, and a real process may deviate from the designed process when executed in real environment and under realistic constraints. It is necessary for an efficient predictor to be able to cope with the diverse characteristics of the data.We also have to ensure that useful process data is collected to build the appropriate predictive model. In this paper we propose an extension of Markov models for predicting the next step in a process instance.We have shown, via a set of experiments, that our model offers better results when compared to methods based on random guess, Markov models and Hidden Markov models. The data for our experiments comes from a real live process in a major telecommunication company." ] }
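The transition-probability mechanism described above can be made concrete with a short sketch. This is a minimal first-order Markov next-activity predictor, not the full hybrid sequence-alignment method of @cite_64 ; all function names are hypothetical:

```python
from collections import defaultdict

def fit_transitions(traces):
    """Estimate first-order Markov transition probabilities from event traces."""
    counts = defaultdict(lambda: defaultdict(int))
    for trace in traces:
        for a, b in zip(trace, trace[1:]):
            counts[a][b] += 1
    # Normalize counts into a per-activity probability distribution.
    return {a: {b: c / sum(nxt.values()) for b, c in nxt.items()}
            for a, nxt in counts.items()}

def predict_next(model, activity):
    """Return the most probable next activity, or None for unseen activities."""
    nxt = model.get(activity)
    return max(nxt, key=nxt.get) if nxt else None

model = fit_transitions([["A", "B", "C"], ["A", "B", "D"], ["A", "B", "C"]])
print(predict_next(model, "B"))  # -> C
```

The last observed activity of a running case indexes the learned distribution, mirroring how the Markov models' transition probabilities determine the predicted next event activity.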
1903.05084
2922437908
In complex processes, various events can happen in different sequences. The prediction of the next event activity given an a-priori process state is of importance in such processes. Recent methods leverage deep learning techniques such as recurrent neural networks to predict event activities from raw process logs. However, deep learning techniques cannot efficiently model logical behaviors of complex processes. In this paper, we take advantage of Petri nets as a powerful tool in modeling logical behaviors of complex processes. We propose an approach which first discovers Petri nets from event logs utilizing a recent process mining algorithm. In a second step, we enhance the obtained model with time decay functions to create timed process state samples. Finally, we use these samples in combination with token movement counters and Petri net markings to train a deep learning model that predicts the next event activity. We demonstrate significant performance improvements and outperform the state-of-the-art methods on eight out of nine real-world benchmark event logs in accuracy.
@cite_15 developed a method that models a process in a probabilistic, instance-specific way. The model is able to predict next event activities and can be translated into a Markov chain. The approach was implemented and validated on a simulated automobile insurance claims process.
{ "cite_N": [ "@cite_15" ], "mid": [ "1998640084" ], "abstract": [ "In semi-structured case-oriented business processes, the sequence of process steps is determined by case workers based on available document content associated with a case. Transitions between process execution steps are therefore case specific and depend on independent judgment of case workers. In this paper, we propose an instance-specific probabilistic process model (PPM) whose transition probabilities are customized to the semi-structured business process instance it represents. An instance-specific PPM serves as a powerful representation to predict the likelihood of different outcomes. We also show that certain instance-specific PPMs can be transformed into a Markov chain under some non-restrictive assumptions. For instance-specific PPMs that contain parallel execution of tasks, we provide an algorithm to map them to an extended space Markov chain. This way existing Markov techniques can be leveraged to make predictions about the likelihood of executing future tasks. Predictions provided by our technique could generate early alerts for case workers about the likelihood of important or undesired outcomes in an executing case instance. We have implemented and validated our approach on a simulated automobile insurance claims handling semi-structured business process. Results indicate that an instance-specific PPM provides more accurate predictions than other methods such as conditional probability. We also show that as more document data become available, the prediction accuracy of an instance-specific PPM increases." ] }
1903.05084
2922437908
In complex processes, various events can happen in different sequences. The prediction of the next event activity given an a-priori process state is of importance in such processes. Recent methods leverage deep learning techniques such as recurrent neural networks to predict event activities from raw process logs. However, deep learning techniques cannot efficiently model logical behaviors of complex processes. In this paper, we take advantage of Petri nets as a powerful tool in modeling logical behaviors of complex processes. We propose an approach which first discovers Petri nets from event logs utilizing a recent process mining algorithm. In a second step, we enhance the obtained model with time decay functions to create timed process state samples. Finally, we use these samples in combination with token movement counters and Petri net markings to train a deep learning model that predicts the next event activity. We demonstrate significant performance improvements and outperform the state-of-the-art methods on eight out of nine real-world benchmark event logs in accuracy.
Similarly, @cite_5 proposed a method to predict the likelihood of future process tasks by modeling parallel paths which can be either dependent or independent. The authors applied their methodology to a simulated marketing campaign business process model.
{ "cite_N": [ "@cite_5" ], "mid": [ "2186592864" ], "abstract": [ "In semi-structured processes, the set of activities that need to be performed, their order and whether additional steps are required are determined by human judgment. There is a growing demand for operational support of such processes during runtime particularly in the form of predictions about the likelihood of future tasks. We address the problem of making predictions for a running instance of a semi-structured process that contains parallel execution paths where the execution path taken by a process instance influences its outcome. In particular, we consider five different models for how to represent an execution trace as a path attribute for training a prediction model. We provide a methodology to determine whether parallel paths are independent, and whether it is worthwhile to model execution paths as independent based on a comparison of the information gain obtained by dependent and independent path representations. We tested our methodology by simulating a marketing campaign as a business process model and selected decision trees as the prediction model. In the evaluation, we compare the complexity and prediction accuracy of a prediction model trained with five different models." ] }
1903.05084
2922437908
In complex processes, various events can happen in different sequences. The prediction of the next event activity given an a-priori process state is of importance in such processes. Recent methods leverage deep learning techniques such as recurrent neural networks to predict event activities from raw process logs. However, deep learning techniques cannot efficiently model logical behaviors of complex processes. In this paper, we take advantage of Petri nets as a powerful tool in modeling logical behaviors of complex processes. We propose an approach which first discovers Petri nets from event logs utilizing a recent process mining algorithm. In a second step, we enhance the obtained model with time decay functions to create timed process state samples. Finally, we use these samples in combination with token movement counters and Petri net markings to train a deep learning model that predicts the next event activity. We demonstrate significant performance improvements and outperform the state-of-the-art methods on eight out of nine real-world benchmark event logs in accuracy.
More recently, @cite_20 introduced a predictive model based on the theory of grammatical inference. They modeled business processes probabilistically with a method called RegPFA, which is based on a probabilistic finite automaton; grammatical inference is applied on top of the automaton. One advantage is that the methodology relies on weaker biases while maintaining comprehensibility. This is important because users without deep technical knowledge can interpret and understand the models. The authors evaluated their approach against two publicly available real-world logs, demonstrating significant performance improvements; the method is able to predict the next event activity of a running process case with accuracies between @math and 66 . Since deep learning techniques are difficult to interpret, @cite_16 developed a method based on matrix factorization and knowledge from business process management to create predictive models that are easier to understand. The authors claim their method requires fewer parameters than neural networks while maintaining good performance.
{ "cite_N": [ "@cite_16", "@cite_20" ], "mid": [ "2806051973", "2501271607" ], "abstract": [ "Abstract Predictive business process monitoring methods use datasets of completed cases related to a process to predict the behaviour of running cases. To handle the large amounts of available event data, recent works have turned to deep learning techniques and have achieved fairly accurate results. However, results from these techniques are often difficult to interpret and explain. In the area of Recommender systems, factorization models have been an important class of predictive techniques due to its scalability and ability to infer latent features. Motivated by research in Recommender systems, this paper presents a predictive model that combines matrix factorization techniques from Recommender systems and knowledge from Business Process Management to learn interactions between latent features that can be used to predict the next event of an ongoing case. Evaluation on two real-life datasets from a Dutch Financial Institute and Volvo IT Belgium shows that the approach yields results that are comparable and at times superior to state-of-the-art techniques such as neural networks, yielding at most a precision of 0.87 for next event predictions.", "Predictive modeling approaches in business process management provide a way to streamline operational business processes. For instance, they can warn decision makers about undesirable events that are likely to happen in the future, giving the decision maker an opportunity to intervene. The topic is gaining momentum in process mining, a field of research that has traditionally developed tools to discover business process models from data sets of past process behavior. Predictive modeling techniques are built on top of process-discovery algorithms. As these algorithms describe business process behavior using models of formal languages (e.g., Petri nets), strong language biases are necessary in order to generate models with the limited amounts of data included in the data set. Naturally, corresponding predictive modeling techniques reflect these biases. Based on theory from grammatical inference, a field of research that is concerned with inducing language models, we design a new predictive modeling technique based on weaker biases. Fitting a probabilistic model to a data set of past behavior makes it possible to predict how currently running process instances will behave in the future. To clarify how this technique works and to facilitate its adoption, we also design a way to visualize the probabilistic models. We assess the effectiveness of the technique in an experimental evaluation with synthetic and real-world data." ] }
1903.05046
2921312011
We study the problem of recovering a hidden binary @math -sparse @math -dimensional vector @math from @math noisy linear observations @math where @math are i.i.d. @math and @math are i.i.d. @math . A closely related hypothesis testing problem is to distinguish the pair @math generated from this structured model from a corresponding null model where @math consist of purely independent Gaussian entries. In the low sparsity @math and high signal to noise ratio @math regime, we establish an All-or-Nothing' information-theoretic phase transition at a critical sample size @math , resolving a conjecture of gamarnikzadik . Specifically, we show that if @math , then the maximum likelihood estimator almost perfectly recovers the hidden vector with high probability and moreover the true hypothesis can be detected with a vanishing error probability. Conversely, if @math , then it becomes information-theoretically impossible even to recover an arbitrarily small but fixed fraction of the hidden vector support, or to test hypotheses strictly better than random guess. Our proof of the impossibility result builds upon two key techniques, which could be of independent interest. First, we use a conditional second moment method to upper bound the Kullback-Leibler (KL) divergence between the structured and the null model. Second, inspired by the celebrated area theorem, we establish a lower bound to the minimum mean squared estimation error of the hidden vector in terms of the KL divergence between the two models.
For the impossibility direction, previous work [Theorem 5.2] aeron:2010 has established that as @math , achieving @math for any @math is information-theoretically impossible if @math , where @math for @math is the binary entropy function. This converse result is proved via a simple rate-distortion argument (see, e.g., @cite_4 for an exposition). In particular, given any estimator @math with @math , we have @math . Notice that since @math , the result implies that if @math , recovery, that is @math , is information-theoretically impossible, and if @math , recovery, that is @math for an arbitrary @math , is impossible.
{ "cite_N": [ "@cite_4" ], "mid": [ "2805892743" ], "abstract": [ "Over the past few years, insights from computer science, statistical physics, and information theory have revealed phase transitions in a wide array of high-dimensional statistical problems at two distinct thresholds: One is the information-theoretical (IT) threshold below which the observation is too noisy so that inference of the ground truth structure is impossible regardless of the computational cost; the other is the computational threshold above which inference can be performed efficiently, i.e., in time that is polynomial in the input size. In the intermediate regime, inference is information-theoretically possible, but conjectured to be computationally hard. This article provides a survey of the common techniques for determining the sharp IT and computational limits, using community detection and submatrix detection as illustrating examples. For IT limits, we discuss tools including the first and second moment method for analyzing the maximal likelihood estimator, information-theoretic methods for proving impossibility results using rate-distortion theory, and methods originated from statistical physics such as interpolation method. To investigate computational limits, we describe a common recipe to construct a randomized polynomial-time reduction scheme that approximately maps instances of the planted clique problem to the problem of interest in total variation distance." ] }
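The @math placeholders above hide the exact expressions, but the generic shape of a rate-distortion converse of this kind can be sketched. The notation here is ours (hidden \(k\)-sparse vector \(\beta\) in dimension \(d\), \(n\) noisy observations \(Y\), noise variance \(\sigma^2\), estimator \(\hat\beta\)); this is an illustrative data-processing chain, not the paper's precise statement:

```latex
% Illustrative rate-distortion converse (our notation, not the paper's):
% accurate recovery forces the mutual information between the hidden
% vector and the data to be large, while n Gaussian observations can
% carry only a limited amount of information.
\[
  (1-\epsilon)\, k \log\frac{d}{k}
  \;\le\; I(\beta;\hat\beta)
  \;\le\; I(\beta;Y)
  \;\le\; \frac{n}{2}\log\!\Bigl(1+\frac{k}{\sigma^2}\Bigr),
\]
\[
  \text{hence recovery requires } \quad
  n \;\ge\; (1-\epsilon)\,\frac{2k\log(d/k)}{\log\bigl(1+k/\sigma^2\bigr)}.
\]
```

The left inequality is the rate-distortion step (a near-exact estimate of a \(k\)-sparse vector carries almost all of its \(\approx k\log(d/k)\) bits), and the right one bounds the capacity of \(n\) Gaussian observations.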
1903.05176
2921935051
Hyperparameter tuning of multi-stage pipelines introduces a significant computational burden. Motivated by the observation that work can be reused across pipelines if the intermediate computations are the same, we propose a pipeline-aware approach to hyperparameter tuning. Our approach optimizes both the design and execution of pipelines to maximize reuse. We design pipelines amenable for reuse by (i) introducing a novel hybrid hyperparameter tuning method called gridded random search, and (ii) reducing the average training time in pipelines by adapting early-stopping hyperparameter tuning approaches. We then realize the potential for reuse during execution by introducing a novel caching problem for ML workloads which we pose as a mixed integer linear program (ILP), and subsequently evaluating various caching heuristics relative to the optimal solution of the ILP. We conduct experiments on simulated and real-world machine learning pipelines to show that a pipeline-aware approach to hyperparameter tuning can offer over an order-of-magnitude speedup over independently evaluating pipeline configurations.
Finding an optimal cache management policy is known as the paging problem @cite_2 . The classic paging problem assumes that all pages have the same size (i.e., memory requirement) and that all reads have the same cost (i.e., training time). Under these two assumptions, Belady's algorithm @cite_0 is an optimal cache management policy in the perfect-information setting, where the size and cost of all pages are known. If we relax the second assumption and allow for variable read costs, we obtain the weighted paging problem. Lastly, if we further relax the first constraint to allow for variable page sizes, we obtain the generalized paging problem, which encompasses the weighted paging problem. The optimal perfect-information policy for the generalized paging problem can be obtained by solving an integer linear program @cite_3 .
{ "cite_N": [ "@cite_0", "@cite_3", "@cite_2" ], "mid": [ "1987225815", "2052476895", "2033156334" ], "abstract": [ "One of the basic limitations of a digital computer is the size of its available memory. 1 In most cases, it is neither feasible nor economical for a user to insist that every problem program fit into memory. The number of words of information in a program often exceeds the number of cells (i.e., word locations) in memory. The only way to solve this problem is to assign more than one program word to a cell. Since a cell can hold only one word at a time, extra words assigned to the cell must be held in external storage. Conventionally, overlay techniques are employed to exchange memory words and external-storage words whenever needed; this, of course, places an additional planning and coding burden on the programmer. For several reasons, it would be advantageous to rid the programmer of this function by providing him with a “virtual” memory larger than his program. An approach that permits him to use a sufficiently large address range can accomplish this objective, assuming that means are provided for automatic execution of the memory-overlay functions.", "", "We study the weighted version of the classic online paging problem where there is a weight (cost) for fetching each page into the cache. We design a randomized O(log k)-competitive online algorithm for this problem, where k is the cache size. This is the first randomized o(k)-competitive algorithm and its competitive ratio matches the known lower bound for the problem, up to constant factors. More generally, we design an O(log(k (k − h + 1)))-competitive online algorithm for the version of the problem where the online algorithm has cache size k and it is compared to an optimal offline solution with cache size h ≤ k. Our solution is based on a two-step approach. We first obtain an O(log k)-competitive fractional algorithm based on an online primal-dual approach. Next, we obtain a randomized algorithm by rounding in an online manner the fractional solution to a probability distribution on the possible cache states. We also give an online primal-dual randomized O(log N)-competitive algorithm for the Metrical Task System problem (MTS) on a weighted star metric on N leaves." ] }
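Belady's rule under the two classic assumptions can be sketched directly: on a miss with a full cache, evict the page whose next use lies furthest in the future. A minimal illustration (the function name is ours):

```python
def belady_misses(requests, cache_size):
    """Simulate Belady's optimal eviction policy and count cache misses.

    Assumes uniform page sizes and read costs, with the full future
    request sequence known (the perfect-information setting)."""
    cache, misses = set(), 0
    for i, page in enumerate(requests):
        if page in cache:
            continue  # hit: nothing to do
        misses += 1
        if len(cache) >= cache_size:
            # Evict the cached page whose next use is furthest away
            # (infinitely far if it is never requested again).
            def next_use(p):
                for j in range(i + 1, len(requests)):
                    if requests[j] == p:
                        return j
                return float("inf")
            cache.remove(max(cache, key=next_use))
        cache.add(page)
    return misses

print(belady_misses([1, 2, 3, 2, 1, 4, 2], 2))  # -> 5
```

Relaxing either assumption (variable read costs or variable page sizes) breaks this greedy rule, which is why the generalized problem is instead solved offline via an integer linear program.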
1903.05058
2922361804
The transition from a weak-disorder (diffusive phase) to a strong-disorder (localized phase) for directed polymers in a random environment is a well studied phenomenon. In the most common setup, it is established that the phase transition is trivial when the transversal dimension @math equals @math or @math (the diffusive phase is reduced to @math ) while when @math , there is a critical temperature @math which delimits the two phases. The proof of the existence of a diffusive regime for @math is based on a second moment method, and thus relies heavily on the assumption that the variable which encodes the disorder intensity (which in most of the mathematics literature assumes the form @math ), has finite second moment. The aim of this work is to investigate how the presence absence of phase transition may depend on the dimension @math in the case when the disorder variable displays heavier tail. To this end we replace @math by @math where @math is in the domain of attraction of a stable law with parameter @math .
Among other works that deal with heavy-tailed environments, we mention @cite_13 , where the environment @math is allowed to belong to the domain of attraction of an @math -stable law and properties of the path trajectories drawn from the polymer measure are studied. In that context there is no free energy, so the work is fundamentally different from ours. @cite_10 studies the influence of the jump distribution on the delocalization-localization transition and the interplay between jump tails, spatial dimension, and the existence of a delocalized phase when nearest-neighbor walks are replaced by long-range jumps. Our results most likely extend to that setup, the criterion for having a weak-disorder phase in dimension @math becoming @math , where @math is the exponent of the random walk. We also mention @cite_3 as another case where a change in the environment setup (there, moving from the IID setup to strong spatial correlation in the environment) modifies the criterion for having no phase transition.
{ "cite_N": [ "@cite_10", "@cite_13", "@cite_3" ], "mid": [ "1764139909", "2806580934", "" ], "abstract": [ "In this paper, we consider directed polymers in random environment with long range jumps in discrete space and time. We extend to this case some techniques, results and classifications known in the usual short range case. However, some properties are drastically different when the underlying random walk belongs to the domain of attraction of an @math -stable law. For instance, we construct natural examples of directed polymers in random environment which experience weak disorder in low dimension.", "We study the directed polymer model in dimension @math when the environment is heavy-tailed, with a decay exponent @math . We give all possible scaling limits of the model in the weak-coupling regime, i.e., when the inverse temperature temperature @math vanishes as the size of the system n goes to infinity. When @math , we show that all possible transversal fluctuations @math can be achieved by tuning properly @math , allowing to interpolate between all super-diffusive scales. Moreover, we determine the scaling limit of the model, answering a conjecture by Dey and Zygouras [cf:DZ] - we actually identify five different regimes. On the other hand, when @math , we show that there are only two regimes: the transversal fluctuations are either @math or @math . As a key ingredient, we use the Entropy-controlled Last Passage Percolation (E-LPP), introduced in a companion paper [cf:BT_ELPP].", "" ] }
1708.02139
2742832500
We release a dataset of 65646 StarCraft replays that contains 1535 million frames and 496 million player actions. We provide full game state data along with the original replays that can be viewed in StarCraft. The game state data was recorded every 3 frames which ensures suitability for a wide variety of machine learning tasks such as strategy classification, inverse reinforcement learning, imitation learning, forward modeling, partial information extraction, and others. We use TorchCraft to extract and store the data, which standardizes the data format for both reading from replays and reading directly from the game. Furthermore, the data can be used on different operating systems and platforms. The dataset contains valid, non-corrupted replays only and its quality and diversity was ensured by a number of heuristics. We illustrate the diversity of the data with various statistics and provide examples of tasks that benefit from the dataset. We make the dataset available at this https URL . En Taro Adun!
The existing StarCraft datasets can be subdivided into two groups based on the extracted data type (Table ). Datasets from the first group, i.e. @cite_4 @cite_19 @cite_21 , focus on specific aspects of the game and contain data that can only be used in a particular context. Datasets from the second group, e.g. @cite_2 and @cite_15 , contain general-purpose full-state data and are not restricted to any particular scope.
{ "cite_N": [ "@cite_4", "@cite_21", "@cite_19", "@cite_2", "@cite_15" ], "mid": [ "2152729775", "2018917012", "2144445939", "1516242919", "157948248" ], "abstract": [ "Developing computer-controlled groups to engage in combat, control the use of limited resources, and create units and buildings in real-time strategy (RTS) games is a novel application in game AI. However, tightly controlled online commercial game pose challenges to researchers interested in observing player activities, constructing player strategy models, and developing practical AI technology in them. Instead of setting up new programming environments or building a large amount of agent's decision rules by player's experience for conducting real-time AI research, the authors use replays of the commercial RTS game StarCraft to evaluate human player behaviors and to construct an intelligent system to learn human-like decisions and behaviors. A case-based reasoning approach was applied for the purpose of training our system to learn and predict player strategies. Our analysis indicates that the proposed system is capable of learning and predicting individual player strategies, and that players provide evidence of their personal characteristics through their building construction order.", "StarCraft is a real-time strategy (RTS) game and the choice of strategy has big impact on the final results of the game. For human players, the most important thing in the game is to select the strategy in the early stage of the game. Also, it is important to recognize the opponent's strategy as quickly as possible. Because of the “fog-of-war” in the game, the player should send a scouting unit to opponent's hidden territory and the player predicts the types of strategy from the partially observed information. Usually, expert players are familiar with the relationships between two build orders and they can change the current build order if his choice is not strong to the opponent's strategy. However, players in AI competitions show quite different behaviors compared to the human leagues. For example, they usually have a pre-selected build order and rarely change their order during the game. In fact, the computer players have little interest in recognizing opponent's strategy and scouting units are used in a limited manner. The reason is that the implementation of scouting behavior and the change of build order from the scouting vision is not a trivial problem. In this paper, we propose to use replays to predict the strategy of players and make decision on the change of build orders. Experimental results on the public replay files show that the proposed method predicts opponent's strategy accurately and increases the chance of winning in the game.", "We present a data mining approach to opponent modeling in strategy games. Expert gameplay is learned by applying machine learning techniques to large collections of game logs. This approach enables domain independent algorithms to acquire domain knowledge and perform opponent modeling. Machine learning algorithms are applied to the task of detecting an opponent's strategy before it is executed and predicting when an opponent will perform strategic actions. Our approach involves encoding game logs as a feature vector representation, where each feature describes when a unit or building type is first produced. We compare our representation to a state lattice representation in perfect and imperfect information environments and the results show that our representation has higher predictive capabilities and is more tolerant of noise. We also discuss how to incorporate our data mining approach into a full game playing agent.", "This paper advocates the exploration of the full state of recorded real-time strategy (RTS) games, by human or robotic players, to discover how to reason about tactics and strategy. We present a dataset of StarCraft games encompassing the most of the games' state (not only player's orders). We explain one of the possible usages of this dataset by clustering armies on their compositions. This reduction of armies compositions to mixtures of Gaussian allow for strategic reasoning at the level of the components. We evaluated this clustering method by predicting the outcomes of battles based on armies compositions' mixtures components", "In order to experiment with machine learning and data mining techniques in the domain of Real-Time Strategy games such as StarCraft, a dataset is required that captures the complex detail of the interactions taking place between the players and the game. This paper describes a new extraction process by which game data is extracted both directly from game log (replay) files, and indirectly through simulating the replays within the StarCraft game engine. Data is then stored in a compact, hierarchical, and easily accessible format. This process is applied to a collection of expert replays, creating a new standardised dataset. The data recorded is enough for almost the complete game state to be reconstructed, from either player's viewpoint, at any point in time (to the nearest second). This process has revealed issues in some of the source replay files, as well as discrepancies in prior datasets. Where practical, these errors have been removed in order to produce a higher-quality reusable dataset." ] }
1708.02139
2742832500
We release a dataset of 65646 StarCraft replays that contains 1535 million frames and 496 million player actions. We provide full game state data along with the original replays that can be viewed in StarCraft. The game state data was recorded every 3 frames which ensures suitability for a wide variety of machine learning tasks such as strategy classification, inverse reinforcement learning, imitation learning, forward modeling, partial information extraction, and others. We use TorchCraft to extract and store the data, which standardizes the data format for both reading from replays and reading directly from the game. Furthermore, the data can be used on different operating systems and platforms. The dataset contains valid, non-corrupted replays only and its quality and diversity was ensured by a number of heuristics. We illustrate the diversity of the data with various statistics and provide examples of tasks that benefit from the dataset. We make the dataset available at this https URL . En Taro Adun!
Several authors have released works that would benefit from the use of a large scale and standardized game replay dataset. @cite_0 manually collects a dataset of 509 games to do unit count prediction under partial observation in the early game. @cite_12 uses the existing dataset provided by @cite_2 to construct a combat model for micromanagement. Recent advances in deep reinforcement learning could benefit from a large existing repository of high-quality replays, as @cite_8 showed by using a large set of Go replays to create a system that could defeat the world champion of Go. @cite_14 , @cite_6 , and @cite_18 all tackle the problem of micromanagement using deep reinforcement learning methods, which may be improved with this dataset of fine-grained game state data.
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_8", "@cite_6", "@cite_0", "@cite_2", "@cite_12" ], "mid": [ "2949201811", "2518713116", "2257979135", "2756196406", "1741238105", "1516242919", "2530830659" ], "abstract": [ "Many real-world problems, such as network packet routing and urban traffic control, are naturally modeled as multi-agent reinforcement learning (RL) problems. However, existing multi-agent RL methods typically scale poorly in the problem size. Therefore, a key challenge is to translate the success of deep learning on single-agent RL to the multi-agent setting. A major stumbling block is that independent Q-learning, the most popular multi-agent RL method, introduces nonstationarity that makes it incompatible with the experience replay memory on which deep Q-learning relies. This paper proposes two methods that address this problem: 1) using a multi-agent variant of importance sampling to naturally decay obsolete data and 2) conditioning each agent's value function on a fingerprint that disambiguates the age of the data sampled from the replay memory. Results on a challenging decentralised variant of StarCraft unit micromanagement confirm that these methods enable the successful combination of experience replay with multi-agent RL.", "We consider scenarios from the real-time strategy game StarCraft as new benchmarks for reinforcement learning algorithms. We propose micromanagement tasks, which present the problem of the short-term, low-level control of army members during a battle. From a reinforcement learning point of view, these scenarios are challenging because the state-action space is very large, and because there is no obvious feature representation for the state-action evaluation function. We describe our approach to tackle the micromanagement scenarios with deep neural network controllers from raw state features given by the game engine. 
In addition, we present a heuristic reinforcement learning algorithm which combines direct exploration in the policy space and backpropagation. This algorithm allows for the collection of traces for learning using deterministic policies, which appears much more efficient than, for example, ε-greedy exploration. Experiments show that with this algorithm, we successfully learn non-trivial strategies for scenarios with armies of up to 15 agents, where both Q-learning and REINFORCE struggle.", "The game of Go has long been viewed as the most challenging of classic games for artificial intelligence owing to its enormous search space and the difficulty of evaluating board positions and moves. Here we introduce a new approach to computer Go that uses ‘value networks’ to evaluate board positions and ‘policy networks’ to select moves. These deep neural networks are trained by a novel combination of supervised learning from human expert games, and reinforcement learning from games of self-play. Without any lookahead search, the neural networks play Go at the level of state-of-the-art Monte Carlo tree search programs that simulate thousands of random games of self-play. We also introduce a new search algorithm that combines Monte Carlo simulation with value and policy networks. Using this search algorithm, our program AlphaGo achieved a 99.8% winning rate against other Go programs, and defeated the human European Go champion by 5 games to 0. This is the first time that a computer program has defeated a human professional player in the full-sized game of Go, a feat previously thought to be at least a decade away.", "Many artificial intelligence (AI) applications often require multiple intelligent agents to work in a collaborative effort. Efficient learning for intra-agent communication and coordination is an indispensable step towards general AI.
In this paper, we take StarCraft combat game as a case study, where the task is to coordinate multiple agents as a team to defeat their enemies. To maintain a scalable yet effective communication protocol, we introduce a Multiagent Bidirectionally-Coordinated Network (BiCNet ['bIknet]) with a vectorised extension of actor-critic formulation. We show that BiCNet can handle different types of combats with arbitrary numbers of AI agents for both sides. Our analysis demonstrates that without any supervision such as human demonstrations or labelled data, BiCNet could learn various types of advanced coordination strategies that have been commonly used by experienced game players. In our experiments, we evaluate our approach against multiple baselines under different scenarios; it shows state-of-the-art performance, and possesses potential value for large-scale real-world applications.", "In typical real-time strategy (RTS) games, enemy units are visible only when they are within sight range of a friendly unit. Knowledge of an opponent's disposition is limited to what can be observed through scouting. Information is costly, since units dedicated to scouting are unavailable for other purposes, and the enemy will resist scouting attempts. It is important to infer as much as possible about the opponent's current and future strategy from the available observations. We present a dynamic Bayes net model of strategies in the RTS game Starcraft that combines a generative model of how strategies relate to observable quantities with a principled framework for incorporating evidence gained via scouting. We demonstrate the model's ability to infer unobserved aspects of the game from realistic observations.", "This paper advocates the exploration of the full state of recorded real-time strategy (RTS) games, by human or robotic players, to discover how to reason about tactics and strategy.
We present a dataset of StarCraft games encompassing most of the games' state (not only players' orders). We explain one of the possible usages of this dataset by clustering armies on their compositions. This reduction of army compositions to mixtures of Gaussians allows for strategic reasoning at the level of the components. We evaluated this clustering method by predicting the outcomes of battles based on the mixture components of army compositions", "Game tree search algorithms, such as Monte Carlo Tree Search (MCTS), require access to a forward model (or "simulator") of the game at hand. However, in some games such a forward model is not readily available. In this paper we address the problem of automatically learning forward models (more specifically, combat models) for two-player attrition games. We report experiments comparing several approaches to learning such a combat model from replay data to models generated by hand. We use StarCraft, a Real-Time Strategy (RTS) game, as our application domain. Specifically, we use a large collection of already collected replays, and focus on learning a combat model for tactical combats." ] }
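The army-composition clustering described in this record can be sketched in miniature: each army is reduced to a proportion vector over unit types and hard-assigned to the nearest mixture-component mean. This is an illustrative sketch only; the unit names and component means below are invented for the example, and a real Gaussian mixture would use learned parameters and soft responsibilities rather than a nearest-mean rule.

```python
import math

def composition_vector(unit_counts, unit_types):
    """Normalize raw unit counts into a proportion vector (sums to 1)."""
    total = sum(unit_counts.get(u, 0) for u in unit_types)
    if total == 0:
        return [0.0] * len(unit_types)
    return [unit_counts.get(u, 0) / total for u in unit_types]

def nearest_component(vec, component_means):
    """Hard-assign a composition vector to the closest component mean
    (Euclidean distance); a full GMM would weight by covariances instead."""
    dists = [math.dist(vec, mean) for mean in component_means]
    return dists.index(min(dists))

# Hypothetical Terran unit types and two hand-picked component means
# (e.g. a "bio" army vs. a "mech" army) -- not taken from the dataset.
UNITS = ["marine", "medic", "tank", "goliath"]
bio_mean = [0.7, 0.2, 0.05, 0.05]
mech_mean = [0.1, 0.0, 0.5, 0.4]

army = composition_vector({"marine": 14, "medic": 4, "tank": 2}, UNITS)
label = nearest_component(army, [bio_mean, mech_mean])  # 0 -> bio-like
```

Once armies are represented this way, battle-outcome prediction at the component level reduces to reasoning about which component labels tend to beat which.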
1708.02002
2743473392
The highest accuracy object detectors to date are based on a two-stage approach popularized by R-CNN, where a classifier is applied to a sparse set of candidate object locations. In contrast, one-stage detectors that are applied over a regular, dense sampling of possible object locations have the potential to be faster and simpler, but have trailed the accuracy of two-stage detectors thus far. In this paper, we investigate why this is the case. We discover that the extreme foreground-background class imbalance encountered during training of dense detectors is the central cause. We propose to address this class imbalance by reshaping the standard cross entropy loss such that it down-weights the loss assigned to well-classified examples. Our novel Focal Loss focuses training on a sparse set of hard examples and prevents the vast number of easy negatives from overwhelming the detector during training. To evaluate the effectiveness of our loss, we design and train a simple dense detector we call RetinaNet. Our results show that when trained with the focal loss, RetinaNet is able to match the speed of previous one-stage detectors while surpassing the accuracy of all existing state-of-the-art two-stage detectors. Code is at: this https URL
The sliding-window paradigm, in which a classifier is applied on a dense image grid, has a long and rich history. One of the earliest successes is the classic work of LeCun et al., who applied convolutional neural networks to handwritten digit recognition @cite_33 @cite_9 . Viola and Jones @cite_26 used boosted object detectors for face detection, leading to widespread adoption of such models. The introduction of HOG @cite_36 and integral channel features @cite_3 gave rise to effective methods for pedestrian detection. DPMs @cite_22 helped extend dense detectors to more general object categories and had top results on PASCAL @cite_25 for many years. While the sliding-window approach was the leading detection paradigm in classic computer vision, with the resurgence of deep learning @cite_11 , two-stage detectors, described next, quickly came to dominate object detection.
{ "cite_N": [ "@cite_26", "@cite_33", "@cite_22", "@cite_36", "@cite_9", "@cite_3", "@cite_25", "@cite_11" ], "mid": [ "2164598857", "2147800946", "2058943444", "2161969291", "", "2159386181", "2125713050", "" ], "abstract": [ "This paper describes a machine learning approach for visual object detection which is capable of processing images extremely rapidly and achieving high detection rates. This work is distinguished by three key contributions. The first is the introduction of a new image representation called the \"integral image\" which allows the features used by our detector to be computed very quickly. The second is a learning algorithm, based on AdaBoost, which selects a small number of critical visual features from a larger set and yields extremely efficient classifiers. The third contribution is a method for combining increasingly more complex classifiers in a \"cascade\" which allows background regions of the image to be quickly discarded while spending more computation on promising object-like regions. The cascade can be viewed as an object specific focus-of-attention mechanism which unlike previous approaches provides statistical guarantees that discarded regions are unlikely to contain the object of interest. In the domain of face detection the system yields detection rates comparable to the best previous systems. Used in real-time applications, the detector runs at 15 frames per second without resorting to image differencing or skin color detection.", "The ability of learning networks to generalize can be greatly enhanced by providing constraints from the task domain. This paper demonstrates how such constraints can be integrated into a backpropagation network through the architecture of the network. This approach has been successfully applied to the recognition of handwritten zip code digits provided by the U.S. Postal Service. 
A single network learns the entire recognition operation, going from the normalized image of the character to the final classification.", "We describe a state-of-the-art system for finding objects in cluttered images. Our system is based on deformable models that represent objects using local part templates and geometric constraints on the locations of parts. We reduce object detection to classification with latent variables. The latent variables introduce invariances that make it possible to detect objects with highly variable appearance. We use a generalization of support vector machines to incorporate latent information during training. This has led to a general framework for discriminative training of classifiers with latent variables. Discriminative training benefits from large training datasets. In practice we use an iterative algorithm that alternates between estimating latent values for positive examples and solving a large convex optimization problem. Practical optimization of this large convex problem can be done using active set techniques for adaptive subsampling of the training data.", "We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. 
The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.", "", "We study the performance of ‘integral channel features’ for image classification tasks, focusing in particular on pedestrian detection. The general idea behind integral channel features is that multiple registered image channels are computed using linear and non-linear transformations of the input image, and then features such as local sums, histograms, and Haar features and their various generalizations are efficiently computed using integral images. Such features have been used in recent literature for a variety of tasks – indeed, variations appear to have been invented independently multiple times. Although integral channel features have proven effective, little effort has been devoted to analyzing or optimizing the features themselves. In this work we present a unified view of the relevant work in this area and perform a detailed experimental evaluation. We demonstrate that when designed properly, integral channel features not only outperform other features including histogram of oriented gradient (HOG), they also (1) naturally integrate heterogeneous sources of information, (2) have few parameters and are insensitive to exact parameter settings, (3) allow for more accurate spatial localization during detection, and (4) result in fast detectors when coupled with cascade classifiers.", "We present a neural network-based face detection system. A retinally connected neural network examines small windows of an image, and decides whether each window contains a face. The system arbitrates between multiple networks to improve performance over a single network. We use a bootstrap algorithm for training, which adds false detections into the training set as training progresses. 
This eliminates the difficult task of manually selecting non-face training examples, which must be chosen to span the entire space of non-face images. Comparisons with another state-of-the-art face detection system are presented; our system has better performance in terms of detection and false-positive rates.", "" ] }
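The focal loss described in this record's abstract (reshaping cross entropy to down-weight well-classified examples) can be written out per example. A minimal sketch assuming the standard formulation FL(p_t) = -α(1-p_t)^γ log(p_t), with the paper's reported defaults γ=2 and α=0.25:

```python
import math

def focal_loss(p_t, gamma=2.0, alpha=0.25):
    """Focal loss for one example, where p_t is the model's probability
    for the ground-truth class. With gamma=0 and alpha=1 this reduces
    to plain cross entropy -log(p_t)."""
    return -alpha * (1.0 - p_t) ** gamma * math.log(p_t)

# A well-classified easy negative (p_t = 0.99) is scaled by
# (1 - 0.99)**2 = 1e-4 relative to cross entropy, so the vast number
# of easy backgrounds no longer dominates the summed training loss.
easy = focal_loss(0.99)
hard = focal_loss(0.10)
```

In a real detector this would be applied per anchor and summed, normalized by the number of foreground anchors.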
1708.02002
2743473392
The highest accuracy object detectors to date are based on a two-stage approach popularized by R-CNN, where a classifier is applied to a sparse set of candidate object locations. In contrast, one-stage detectors that are applied over a regular, dense sampling of possible object locations have the potential to be faster and simpler, but have trailed the accuracy of two-stage detectors thus far. In this paper, we investigate why this is the case. We discover that the extreme foreground-background class imbalance encountered during training of dense detectors is the central cause. We propose to address this class imbalance by reshaping the standard cross entropy loss such that it down-weights the loss assigned to well-classified examples. Our novel Focal Loss focuses training on a sparse set of hard examples and prevents the vast number of easy negatives from overwhelming the detector during training. To evaluate the effectiveness of our loss, we design and train a simple dense detector we call RetinaNet. Our results show that when trained with the focal loss, RetinaNet is able to match the speed of previous one-stage detectors while surpassing the accuracy of all existing state-of-the-art two-stage detectors. Code is at: this https URL
The dominant paradigm in modern object detection is based on a two-stage approach. As pioneered in the Selective Search work @cite_4 , the first stage generates a sparse set of candidate proposals that should contain all objects while filtering out the majority of negative locations, and the second stage classifies the proposals into foreground classes / background. R-CNN @cite_14 upgraded the second-stage classifier to a convolutional network, yielding large gains in accuracy and ushering in the modern era of object detection. R-CNN was improved over the years, both in terms of speed @cite_34 @cite_49 and by using learned object proposals @cite_52 @cite_2 @cite_41 . Region Proposal Networks (RPN) integrated proposal generation with the second-stage classifier into a single convolutional network, forming the Faster R-CNN framework @cite_21 . Numerous extensions to this framework have been proposed, e.g. @cite_51 @cite_44 @cite_20 @cite_30 @cite_42 .
{ "cite_N": [ "@cite_30", "@cite_14", "@cite_4", "@cite_41", "@cite_21", "@cite_42", "@cite_52", "@cite_44", "@cite_49", "@cite_2", "@cite_34", "@cite_51", "@cite_20" ], "mid": [ "", "2102605133", "2088049833", "", "2953106684", "", "2949150497", "", "", "", "2179352600", "2949533892", "" ], "abstract": [ "", "Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn.", "This paper addresses the problem of generating possible object locations for use in object recognition. We introduce selective search which combines the strength of both an exhaustive search and segmentation. Like segmentation, we use the image structure to guide our sampling process. Like exhaustive search, we aim to capture all possible object locations. Instead of a single technique to generate possible object locations, we diversify our search and use a variety of complementary image partitionings to deal with as many image conditions as possible. 
Our selective search results in a small set of data-driven, class-independent, high quality locations, yielding 99% recall and a Mean Average Best Overlap of 0.879 at 10,097 locations. The reduced number of locations compared to an exhaustive search enables the use of stronger machine learning techniques and stronger appearance models for object recognition. In this paper we show that our selective search enables the use of the powerful Bag-of-Words model for recognition. The selective search software is made publicly available (Software: http://disi.unitn.it/uijlings/SelectiveSearch.html ).", "", "State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet and Fast R-CNN have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features---using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model, our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. 
Code has been made publicly available.", "", "Deep convolutional neural networks have recently achieved state-of-the-art performance on a number of image recognition benchmarks, including the ImageNet Large-Scale Visual Recognition Challenge (ILSVRC-2012). The winning model on the localization sub-task was a network that predicts a single bounding box and a confidence score for each object category in the image. Such a model captures the whole-image context around the objects but cannot handle multiple instances of the same object in the image without naively replicating the number of outputs for each instance. In this work, we propose a saliency-inspired neural network model for detection, which predicts a set of class-agnostic bounding boxes along with a single score for each box, corresponding to its likelihood of containing any object of interest. The model naturally handles a variable number of instances for each class and allows for cross-class generalization at the highest levels of the network. We are able to obtain competitive recognition performance on VOC2007 and ILSVRC2012, while using only the top few predicted locations in each image and a small number of neural network evaluations.", "", "", "", "Existing deep convolutional neural networks (CNNs) require a fixed-size (e.g. 224×224) input image. This requirement is “artificial” and may hurt the recognition accuracy for the images or sub-images of an arbitrary size/scale. In this work, we equip the networks with a more principled pooling strategy, “spatial pyramid pooling”, to eliminate the above requirement. The new network structure, called SPP-net, can generate a fixed-length representation regardless of image size/scale. By removing the fixed-size limitation, we can improve all CNN-based image classification methods in general. 
Our SPP-net achieves state-of-the-art accuracy on the datasets of ImageNet 2012, Pascal VOC 2007, and Caltech101.", "Feature pyramids are a basic component in recognition systems for detecting objects at different scales. But recent deep learning object detectors have avoided pyramid representations, in part because they are compute and memory intensive. In this paper, we exploit the inherent multi-scale, pyramidal hierarchy of deep convolutional networks to construct feature pyramids with marginal extra cost. A top-down architecture with lateral connections is developed for building high-level semantic feature maps at all scales. This architecture, called a Feature Pyramid Network (FPN), shows significant improvement as a generic feature extractor in several applications. Using FPN in a basic Faster R-CNN system, our method achieves state-of-the-art single-model results on the COCO detection benchmark without bells and whistles, surpassing all existing single-model entries including those from the COCO 2016 challenge winners. In addition, our method can run at 5 FPS on a GPU and thus is a practical and accurate solution to multi-scale object detection. Code will be made publicly available.", "" ] }
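The two-stage pipelines surveyed in this record all rely on overlap between candidate boxes and ground truth to separate foreground from background proposals. A minimal sketch of that bookkeeping, assuming corner-format (x1, y1, x2, y2) boxes and the common 0.5 IoU foreground threshold (the exact threshold varies by method):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def label_proposal(proposal, gt_boxes, fg_thresh=0.5):
    """Mark a proposal as foreground if it overlaps any ground-truth box
    by at least fg_thresh (the common 0.5 criterion)."""
    return any(iou(proposal, gt) >= fg_thresh for gt in gt_boxes)
```

The second-stage classifier is then trained on the (sparse) foreground proposals plus a sampled subset of the backgrounds.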
1708.02002
2743473392
The highest accuracy object detectors to date are based on a two-stage approach popularized by R-CNN, where a classifier is applied to a sparse set of candidate object locations. In contrast, one-stage detectors that are applied over a regular, dense sampling of possible object locations have the potential to be faster and simpler, but have trailed the accuracy of two-stage detectors thus far. In this paper, we investigate why this is the case. We discover that the extreme foreground-background class imbalance encountered during training of dense detectors is the central cause. We propose to address this class imbalance by reshaping the standard cross entropy loss such that it down-weights the loss assigned to well-classified examples. Our novel Focal Loss focuses training on a sparse set of hard examples and prevents the vast number of easy negatives from overwhelming the detector during training. To evaluate the effectiveness of our loss, we design and train a simple dense detector we call RetinaNet. Our results show that when trained with the focal loss, RetinaNet is able to match the speed of previous one-stage detectors while surpassing the accuracy of all existing state-of-the-art two-stage detectors. Code is at: this https URL
OverFeat @cite_23 was one of the first modern one-stage object detectors based on deep networks. More recently SSD @cite_40 @cite_47 and YOLO @cite_7 @cite_46 have renewed interest in one-stage methods. These detectors have been tuned for speed but their accuracy trails that of two-stage methods. SSD has a 10-20% lower AP, while YOLO focuses on an even more extreme speed/accuracy trade-off. The design of our RetinaNet detector shares many similarities with previous dense detectors, in particular the concept of 'anchors' introduced by RPN @cite_21 and the use of feature pyramids as in SSD @cite_40 and FPN @cite_51 . We emphasize that our simple detector achieves top results not based on innovations in network design but due to our novel loss.
{ "cite_N": [ "@cite_47", "@cite_7", "@cite_21", "@cite_40", "@cite_23", "@cite_46", "@cite_51" ], "mid": [ "", "", "2953106684", "2193145675", "1487583988", "2570343428", "2949533892" ], "abstract": [ "", "", "State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet and Fast R-CNN have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features---using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model, our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available.", "We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. 
Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. SSD is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stages and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Experimental results on the PASCAL VOC, COCO, and ILSVRC datasets confirm that SSD has competitive accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. For 300×300 input, SSD achieves 74.3% mAP on VOC2007 test at 59 FPS on a Nvidia Titan X and for 512×512 input, SSD achieves 76.9% mAP, outperforming a comparable state-of-the-art Faster R-CNN model. Compared to other single stage methods, SSD has much better accuracy even with a smaller input image size. Code is available at https://github.com/weiliu89/caffe/tree/ssd.", "We present an integrated framework for using Convolutional Networks for classification, localization and detection. We show how a multiscale and sliding window approach can be efficiently implemented within a ConvNet. We also introduce a novel deep learning approach to localization by learning to predict object boundaries. Bounding boxes are then accumulated rather than suppressed in order to increase detection confidence. We show that different tasks can be learned simultaneously using a single shared network. This integrated framework is the winner of the localization task of the ImageNet Large Scale Visual Recognition Challenge 2013 (ILSVRC2013) and obtained very competitive results for the detection and classifications tasks. In post-competition work, we establish a new state of the art for the detection task. 
Finally, we release a feature extractor from our best model called OverFeat.", "We introduce YOLO9000, a state-of-the-art, real-time object detection system that can detect over 9000 object categories. First we propose various improvements to the YOLO detection method, both novel and drawn from prior work. The improved model, YOLOv2, is state-of-the-art on standard detection tasks like PASCAL VOC and COCO. Using a novel, multi-scale training method the same YOLOv2 model can run at varying sizes, offering an easy tradeoff between speed and accuracy. At 67 FPS, YOLOv2 gets 76.8 mAP on VOC 2007. At 40 FPS, YOLOv2 gets 78.6 mAP, outperforming state-of-the-art methods like Faster RCNN with ResNet and SSD while still running significantly faster. Finally we propose a method to jointly train on object detection and classification. Using this method we train YOLO9000 simultaneously on the COCO detection dataset and the ImageNet classification dataset. Our joint training allows YOLO9000 to predict detections for object classes that don't have labelled detection data. We validate our approach on the ImageNet detection task. YOLO9000 gets 19.7 mAP on the ImageNet detection validation set despite only having detection data for 44 of the 200 classes. On the 156 classes not in COCO, YOLO9000 gets 16.0 mAP. YOLO9000 predicts detections for more than 9000 different object categories, all in real-time.", "Feature pyramids are a basic component in recognition systems for detecting objects at different scales. But recent deep learning object detectors have avoided pyramid representations, in part because they are compute and memory intensive. In this paper, we exploit the inherent multi-scale, pyramidal hierarchy of deep convolutional networks to construct feature pyramids with marginal extra cost. A top-down architecture with lateral connections is developed for building high-level semantic feature maps at all scales. 
This architecture, called a Feature Pyramid Network (FPN), shows significant improvement as a generic feature extractor in several applications. Using FPN in a basic Faster R-CNN system, our method achieves state-of-the-art single-model results on the COCO detection benchmark without bells and whistles, surpassing all existing single-model entries including those from the COCO 2016 challenge winners. In addition, our method can run at 5 FPS on a GPU and thus is a practical and accurate solution to multi-scale object detection. Code will be made publicly available." ] }
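The detection abstracts above all hinge on predicting boxes from feature maps of several resolutions. A minimal sketch of that idea (hand-picked map sizes, scales and aspect ratios; an illustration, not the cited papers' actual code) generates SSD-style default boxes, so that large maps tile the image densely for small objects and small maps coarsely for large ones:

```python
# Illustrative sketch: SSD-style default (prior) boxes over several feature
# maps. fmap_sizes, scales and ratios are made-up toy values, not the
# configurations used in the cited papers.
def default_boxes(fmap_sizes, scales, ratios=(1.0, 2.0, 0.5)):
    """Return (cx, cy, w, h) boxes in normalized [0, 1] image coordinates."""
    boxes = []
    for size, scale in zip(fmap_sizes, scales):
        for i in range(size):          # cell rows of this feature map
            for j in range(size):      # cell columns
                cx, cy = (j + 0.5) / size, (i + 0.5) / size
                for r in ratios:       # one box per aspect ratio per cell
                    boxes.append((cx, cy, scale * r ** 0.5, scale / r ** 0.5))
    return boxes

# A 4x4 map for small objects plus a 2x2 map for large ones:
boxes = default_boxes(fmap_sizes=[4, 2], scales=[0.2, 0.5])
```

Each cell contributes one box per aspect ratio, so this toy setup yields 4*4*3 + 2*2*3 = 60 boxes; a real detector would regress coordinate offsets from these priors and score classes per box.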
1708.01980
2743973307
Phrases play an important role in natural language understanding and machine translation (, 2002; , 2005). However, it is difficult to integrate them into current neural machine translation (NMT), which reads and generates sentences word by word. In this work, we propose a method to translate phrases in NMT by integrating a phrase memory, storing target phrases from a phrase-based statistical machine translation (SMT) system, into the encoder-decoder architecture of NMT. At each decoding step, the phrase memory is first re-written by the SMT model, which dynamically generates relevant target phrases with contextual information provided by the NMT model. Then the proposed model reads the phrase memory to make probability estimations for all phrases in the phrase memory. If phrase generation is chosen, the NMT decoder selects an appropriate phrase from the memory to perform phrase translation and updates its decoding state by consuming the words in the selected phrase. Otherwise, the NMT decoder generates a word from the vocabulary as the general NMT decoder does. Experimental results on Chinese-to-English translation show that the proposed model achieves significant improvements over the baseline on various test sets.
In these studies, the multi-word phrases generated in NMT come either from an SMT model or from a bilingual dictionary. In syntactically guided neural machine translation (SGNMT), the NMT decoder uses phrase translations produced by the hierarchical phrase-based SMT system Hiero as hard decoding constraints; in this way, syntactic phrases are generated by the NMT decoder @cite_32 . use an SMT translation system, into which an additional bilingual dictionary is integrated, to synthesize pseudo-parallel sentences and feed these sentences into the training of NMT in order to translate low-frequency words or phrases. propose an external phrase memory that stores phrase pairs in symbolic forms for NMT. During decoding, the NMT decoder queries the phrase memory and generates phrase translations where appropriate. The significant differences between these efforts and ours are 1) that we dynamically generate phrase translations via an SMT model, and 2) that, at the same time, we modify the encoder to incorporate structural information and thus enhance the capability of NMT in phrase translation.
{ "cite_N": [ "@cite_32" ], "mid": [ "2407166119" ], "abstract": [ "We investigate the use of hierarchical phrase-based SMT lattices in end-to-end neural machine translation (NMT). Weight pushing transforms the Hiero scores for complete translation hypotheses, with the full translation grammar score and full n-gram language model score, into posteriors compatible with NMT predictive probabilities. With a slightly modified NMT beam-search decoder we find gains over both Hiero and NMT decoding alone, with practical advantages in extending NMT to very large input and output vocabularies." ] }
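The record above describes a decoder that, at each step, either emits one word or copies a whole phrase from an SMT-filled memory. A toy greedy version of that control flow (the score tables are hand-made stand-ins for the NMT word distribution and the SMT phrase scores described in the abstract, not the paper's actual model) might look like:

```python
# Toy sketch of word-vs-phrase decoding. score_word and score_phrases are
# hypothetical stand-ins for the NMT word distribution and the SMT phrase
# memory; all probabilities below are invented for illustration.
def decode(score_word, score_phrases, max_len=10, eos='</s>'):
    output = []
    while len(output) < max_len:
        words = score_word(tuple(output))
        phrases = score_phrases(tuple(output))
        best_word = max(words, key=words.get)
        best_phrase = max(phrases, key=phrases.get) if phrases else None
        # Prefer the phrase memory when it outscores the best single word.
        if best_phrase is not None and phrases[best_phrase] > words[best_word]:
            output.extend(best_phrase)   # consume all words of the phrase
        else:
            if best_word == eos:
                break
            output.append(best_word)
    return output

def score_word(prefix):   # invented toy word distribution
    table = {(): {'the': 0.6, '</s>': 0.1},
             ('the',): {'cat': 0.5, '</s>': 0.2}}
    return table.get(prefix, {'</s>': 0.9})

def score_phrases(prefix):  # invented toy phrase memory
    return {('new', 'york', 'city'): 0.7} if prefix == ('the',) else {}

result = decode(score_word, score_phrases)
```

Here the phrase ('new', 'york', 'city') outscores the single word 'cat' at the second step, so three words enter the output at once before decoding resumes word by word.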
1708.01980
2743973307
Phrases play an important role in natural language understanding and machine translation (, 2002; , 2005). However, it is difficult to integrate them into current neural machine translation (NMT), which reads and generates sentences word by word. In this work, we propose a method to translate phrases in NMT by integrating a phrase memory, storing target phrases from a phrase-based statistical machine translation (SMT) system, into the encoder-decoder architecture of NMT. At each decoding step, the phrase memory is first re-written by the SMT model, which dynamically generates relevant target phrases with contextual information provided by the NMT model. Then the proposed model reads the phrase memory to make probability estimations for all phrases in the phrase memory. If phrase generation is chosen, the NMT decoder selects an appropriate phrase from the memory to perform phrase translation and updates its decoding state by consuming the words in the selected phrase. Otherwise, the NMT decoder generates a word from the vocabulary as the general NMT decoder does. Experimental results on Chinese-to-English translation show that the proposed model achieves significant improvements over the baseline on various test sets.
NMT is essentially a sequence-to-sequence mapping network that treats the input/output units, e.g., words, subwords @cite_31 , or characters @cite_16 @cite_7 , as non-linguistic symbols. However, linguistic information can be viewed as task-specific knowledge, which may be a useful supplement to the sequence-to-sequence mapping network. To this end, various kinds of linguistic annotations have been introduced into NMT to improve its translation performance. enrich the input units of NMT with various linguistic features, including lemmas, part-of-speech tags, syntactic dependency labels and morphological features. propose factored NMT using the morphological and grammatical decomposition of the words (factors) in output units. explore the phrase structures of input sentences and propose a tree-to-sequence attention model for the vanilla NMT model. propose to linearize source-side parse trees to obtain structural label sequences and explicitly incorporate these sequences into NMT, while propose to incorporate target-side syntactic information into NMT by serializing the target sequences into linearized, lexicalized constituency trees. integrate topic knowledge into NMT for domain/topic adaptation.
{ "cite_N": [ "@cite_31", "@cite_16", "@cite_7" ], "mid": [ "1816313093", "2311921240", "" ], "abstract": [ "Neural machine translation (NMT) models typically operate with a fixed vocabulary, but translation is an open-vocabulary problem. Previous work addresses the translation of out-of-vocabulary words by backing off to a dictionary. In this paper, we introduce a simpler and more effective approach, making the NMT model capable of open-vocabulary translation by encoding rare and unknown words as sequences of subword units. This is based on the intuition that various word classes are translatable via smaller units than words, for instance names (via character copying or transliteration), compounds (via compositional translation), and cognates and loanwords (via phonological and morphological transformations). We discuss the suitability of different word segmentation techniques, including simple character n-gram models and a segmentation based on the byte pair encoding compression algorithm, and empirically show that subword models improve over a back-off dictionary baseline for the WMT 15 translation tasks English-German and English-Russian by 1.1 and 1.3 BLEU, respectively.", "The existing machine translation systems, whether phrase-based or neural, have relied almost exclusively on word-level modelling with explicit segmentation. In this paper, we ask a fundamental question: can neural machine translation generate a character sequence without any explicit segmentation? To answer this question, we evaluate an attention-based encoder-decoder with a subword-level encoder and a character-level decoder on four language pairs--En-Cs, En-De, En-Ru and En-Fi-- using the parallel corpora from WMT'15. Our experiments show that the models with a character-level decoder outperform the ones with a subword-level decoder on all of the four language pairs. 
Furthermore, the ensembles of neural models with a character-level decoder outperform the state-of-the-art non-neural machine translation systems on En-Cs, En-De and En-Fi and perform comparably on En-Ru.", "" ] }
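The subword record (@cite_31) above learns its vocabulary by byte-pair encoding: repeatedly merging the most frequent adjacent symbol pair. A compact sketch of that merge loop, on a toy four-word vocabulary (the vocabulary and merge count are invented for illustration), shows how frequent character pairs fuse into subword units:

```python
# Minimal BPE learning loop in the spirit of the cited subword abstract.
# Words are pre-split into characters plus an end-of-word marker '</w>'.
import collections
import re

def get_pair_counts(vocab):
    """Count adjacent symbol pairs, weighted by word frequency."""
    pairs = collections.Counter()
    for word, freq in vocab.items():
        symbols = word.split()
        for i in range(len(symbols) - 1):
            pairs[(symbols[i], symbols[i + 1])] += freq
    return pairs

def merge_pair(pair, vocab):
    """Replace every occurrence of the pair with its concatenation."""
    pattern = re.compile(r'(?<!\S)' + re.escape(' '.join(pair)) + r'(?!\S)')
    return {pattern.sub(''.join(pair), w): f for w, f in vocab.items()}

def learn_bpe(vocab, num_merges):
    merges = []
    for _ in range(num_merges):
        pairs = get_pair_counts(vocab)
        if not pairs:
            break
        best = max(pairs, key=pairs.get)   # most frequent adjacent pair
        vocab = merge_pair(best, vocab)
        merges.append(best)
    return vocab, merges

toy_vocab = {'l o w </w>': 5, 'l o w e r </w>': 2,
             'n e w e s t </w>': 6, 'w i d e s t </w>': 3}
vocab, merges = learn_bpe(toy_vocab, 4)
```

The first merges fuse 'e s', then 'es t', then 'est </w>', so the frequent suffix "est" becomes a single subword unit while rare words remain decomposable into smaller pieces.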