Dataset schema: aid (string, 9–15 chars), mid (string, 7–10 chars), abstract (string, 78–2.56k chars), related_work (string, 92–1.77k chars), ref_abstract (dict).
1903.10269
2923537944
To monitor critical infrastructure, high quality sensors sampled at a high frequency are increasingly installed. However, due to the big amounts of data produced, only simple aggregates are stored. This removes outliers and hides fluctuations that could indicate problems. As a solution we propose compressing time series with dimensions using a model-based method we name Multi-model Group Compression (MMGC). MMGC adaptively compresses groups of correlated time series with dimensions using an extensible set of models within a user-defined error bound (possibly zero). To partition time series into groups, we propose a set of primitives for efficiently describing correlation for data sets of varying sizes. We also propose efficient query processing algorithms for executing multi-dimensional aggregate queries on models instead of data points. Last, we provide an open-source implementation of our methods as extensions to the model-based Time Series Management System (TSMS) ModelarDB. ModelarDB interfaces with the stock versions of Apache Spark and Apache Cassandra and thus can reuse existing infrastructure. Through an evaluation we show that, compared to widely used systems, our extended ModelarDB provides up to 11 times faster ingestion due to high compression, 65 times better compression due to the adaptivity of MMGC, 92 times faster aggregate queries as they are executed on models, and close to linear scalability while also being extensible and supporting online query processing.
MGC has primarily been used for distributed data acquisition rather than centralized compression; an overview and comparison are given in @cite_29 . GAMPS @cite_32 performs MGC at a central location by approximating each time series using constant functions. Afterwards, the error bound is relaxed and overlapping models are compressed together, possibly with scaling. Static grouping is done using an approximation algorithm, with the groups re-computed at run-time using dynamically sized windows.
{ "cite_N": [ "@cite_29", "@cite_32" ], "mid": [ "2587422534", "2111976819" ], "abstract": [ "Online monitoring, providing the real-time status information of servers, is indispensable for the management of distributed systems, e.g. failure detection and resource scheduling. The main design challenges for distributed monitoring systems include scalability, fine granularity, reliability and low overheads. And the challenges are growing with the increase of the scales of the distributed systems. To address the above problems, this paper studies improvements to online distributed monitoring systems (ODMSs) from three aspects: online compression algorithm, online compression reliability, and data representation for information interchanges. We summarize and classify the existing online compression algorithms to identify some research gaps that may represent opportunities for future research. A simple solution is proposed to address the problem that the inaccuracy of compression algorithms may be caused by some failures of distributed systems. A bitmap-like data format is presented to reduce the per-node overheads and the overheads of the management node in ODMSs, and compared with other existing formats used in the monitoring system both in mathematical analysis and practical experiment. The results show that the bitmap-like data format achieves best performance overall.", "We consider the problem of collectively approximating a set of sensor signals using the least amount of space so that any individual signal can be efficiently reconstructed within a given maximum (L∞) error e. The problem arises naturally in applications that need to collect large amounts of data from multiple concurrent sources, such as sensors, servers and network routers, and archive them over a long period of time for offline data mining. We present GAMPS, a general framework that addresses this problem by combining several novel techniques. First, it dynamically groups multiple signals together so that signals within each group are correlated and can be maximally compressed jointly. Second, it appropriately scales the amplitudes of different signals within a group and compresses them within the maximum allowed reconstruction error bound. Our schemes are polynomial time O(α, β approximation schemes, meaning that the maximum (L∞) error is at most α e and it uses at most β times the optimal memory. Finally, GAMPS maintains an index so that various queries can be issued directly on compressed data. Our experiments on several real-world sensor datasets show that GAMPS significantly reduces space without compromising the quality of search and query." ] }
1903.10269
2923537944
DBMSs with explicit support for using mathematical models for data cleaning or compression have also been proposed. MauveDB @cite_5 integrates the use of models into an RDBMS through views, supporting data cleaning without needing to export the data to an external application. FunctionDB @cite_7 natively supports models in the form of polynomial functions, allowing queries to be evaluated directly on models when possible. Plato @cite_40 supports models for cleaning and has a framework for adding user-defined models that integrate with the system's optimizer and query processor. @cite_6 combines an in-memory tree-based index, a distributed key-value store, and MapReduce to allow segments to be stored and queried in a distributed system. @cite_34 provides distributed model-based time series management using MMC with user-defined models by integrating its portable core with Spark and Cassandra.
{ "cite_N": [ "@cite_7", "@cite_6", "@cite_40", "@cite_5", "@cite_34" ], "mid": [ "2168967034", "2021876375", "2400753791", "2116832440", "2889158314" ], "abstract": [ "Many scientific, financial, data mining and sensor network applications need to work with continuous, rather than discrete data e.g., temperature as a function of location, or stock prices or vehicle trajectories as a function of time. Querying raw or discrete data is unsatisfactory for these applications -- e.g., in a sensor network, it is necessary to interpolate sensor readings to predict values at locations where sensors are not deployed. In other situations, raw data can be inaccurate owing to measurement errors, and it is useful to fit continuous functions to raw data and query the functions, rather than raw data itself -- e.g., fitting a smooth curve to noisy sensor readings, or a smooth trajectory to GPS data containing gaps or outliers. Existing databases do not support storing or querying continuous functions, short of brute-force discretization of functions into a collection of tuples. We present FunctionDB, a novel database system that treats mathematical functions as first-class citizens that can be queried like traditional relations. The key contribution of FunctionDB is an efficient and accurate algebraic query processor - for the broad class of multi-variable polynomial functions, FunctionDB executes queries directly on the algebraic representation of functions without materializing them into discrete points, using symbolic operations: zero finding, variable substitution, and integration. Even when closed form solutions are intractable, FunctionDB leverages symbolic approximation operations to improve performance. We evaluate FunctionDB on real data sets from a temperature sensor network, and on traffic traces from Boston roads. We show that operating in the functional domain has substantial advantages in terms of accuracy (15-30 ) and up to order of magnitude (10x-100x) performance wins over existing approaches that represent models as discrete collections of points.", "As the number of sensors that pervade our lives increases (e.g., environmental sensors, phone sensors, etc.), the efficient management of massive amount of sensor data is becoming increasingly important. The infinite nature of sensor data poses a serious challenge for query processing even in a cloud infrastructure. Traditional raw sensor data management systems based on relational databases lack scalability to accommodate large-scale sensor data efficiently. Thus, distributed key-value stores in the cloud are becoming a prime tool to manage sensor data. Model-view sensor data management, which stores the sensor data in the form of modeled segments, brings the additional advantages of data compression and value interpolation. However, currently there are no techniques for indexing and or query optimization of the model-view sensor data in the cloud; full table scan is needed for query processing in the worst case. In this paper, we propose an innovative index for modeled segments in key-value stores, namely KVI-index. KVI-index consists of two interval indices on the time and sensor value dimensions respectively, each of which has an in-memory search tree and a secondary list materialized in the key-value store. Then, we introduce a KVI-index-Scan-MapReduce hybrid approach to perform efficient query processing upon modeled data streams. 
As proved by a series of experiments at a private cloud infrastructure, our approach outperforms in query-response time and index-updating efficiency both Hadoop-based parallel processing of the raw sensor data and multiple alternative indexing approaches of model-view data.", "Sensors generate large amounts of spatiotemporal data that have to be stored and analyzed. However, spatiotemporal data still lack the equivalent of a DBMS that would allow their declarative analysis. We argue that the reason for this is that DBMSs have been built with the assumption that the stored data are the ground truth. This is not the case with sensor measurements, which are merely incomplete and inaccurate samples of the ground truth. Based on this observation, we present Plato; an extensible DBMS for spatiotemporal sensor data that leverages signal processing algorithms to infer from the measurements the underlying ground truth in the form of statistical models. These models are then used to answer queries over the data. By operating on the model instead of the raw data, Plato achieves significant data compression and corresponding query processing speedup. Moreover, by employing models that separate the signal from the noise, Plato produces query results of higher quality than even the original measurements.", "Real-world data --- especially when generated by distributed measurement infrastructures such as sensor networks --- tends to be incomplete, imprecise, and erroneous, making it impossible to present it to users or feed it directly into applications. The traditional approach to dealing with this problem is to first process the data using statistical or probabilistic models that can provide more robust interpretations of the data. Current database systems, however, do not provide adequate support for applying models to such data, especially when those models need to be frequently updated as new data arrives in the system. Hence, most scientists and engineers who depend on models for managing their data do not use database systems for archival or querying at all; at best, databases serve as a persistent raw data store.In this paper we define a new abstraction called model-based views and present the architecture of MauveDB, the system we are building to support such views. Just as traditional database views provide logical data independence, model-based views provide independence from the details of the underlying data generating mechanism and hide the irregularities of the data by using models to present a consistent view to the users. MauveDB supports a declarative language for defining model-based views, allows declarative querying over such views using SQL, and supports several different materialization strategies and techniques to efficiently maintain them in the face of frequent updates. We have implemented a prototype system that currently supports views based on regression and interpolation, using the Apache Derby open source DBMS, and we present results that show the utility and performance benefits that can be obtained by supporting several different types of model-based views in a database system.", "" ] }
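As a rough illustration of how aggregate queries can be answered on models rather than data points (the idea shared by FunctionDB, the KVI-index, and ModelarDB), the sketch below computes SUM, COUNT, and AVG directly from linear segments. The segment layout (start, end, interval, slope, intercept) is an assumption made for this example, not the representation of any of the cited systems.

```python
from dataclasses import dataclass

@dataclass
class LinearSegment:
    start: int        # timestamp of the first point
    end: int          # timestamp of the last point (inclusive)
    interval: int     # sampling interval between points
    slope: float      # change in value per time unit
    intercept: float  # value at `start`; value at time t is intercept + slope * (t - start)

    def count(self) -> int:
        return (self.end - self.start) // self.interval + 1

    def sum(self) -> float:
        # Sum of an arithmetic progression over the segment's n points
        n = self.count()
        return n * self.intercept + self.slope * self.interval * n * (n - 1) / 2.0

def aggregate(segments):
    """Answer SUM/COUNT/AVG from segment metadata, never materializing data points."""
    total = sum(s.sum() for s in segments)
    count = sum(s.count() for s in segments)
    return {"SUM": total, "COUNT": count, "AVG": total / count}

if __name__ == "__main__":
    segs = [LinearSegment(0, 400, 100, 0.01, 5.0),   # 5 points: 5.0, 6.0, ..., 9.0
            LinearSegment(500, 900, 100, 0.0, 9.0)]  # 5 constant points at 9.0
    print(aggregate(segs))                            # SUM 80.0, COUNT 10, AVG 8.0
```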
1903.10269
2923537944
Another use of model-based time series compression is for approximate materialization of data cubes. Perera et al. @cite_18 propose offline algorithms for finding similarities between time series aggregates in an OLAP cube; similar aggregates can then be materialized as a model, or as a model and an offset, to reduce the size of the materialized cube. A similar method for online data cubes was proposed by Shaikh et al. @cite_14 . Using models, an approximate data cube is materialized in memory. As data points are ingested, the in-memory data cube is updated and the data points are written to disk for persistence. To preserve memory, models representing the oldest data may also be flushed to disk.
{ "cite_N": [ "@cite_18", "@cite_14" ], "mid": [ "2098921878", "2602802281" ], "abstract": [ "Evolving customer requirements and increasing competition force business organizations to store increasing amounts of data and query them for information at any given time. Due to the current growth of data volumes, timely extraction of relevant information becomes more and more difficult with traditional methods. In addition, contemporary Decision Support Systems (DSS) favor faster approximations over slower exact results. Generally speaking, processes that require exchange of data become inefficient when connection bandwidth does not increase as fast as the volume of data. In order to tackle these issues, compression techniques have been introduced in many areas of data processing. In this paper, we outline a new system that does not query complete datasets but instead utilizes models to extract the requested information. For time series data we use Fourier and Cosine transformations and piece-wise aggregation to derive the models. These models are initially created from the original data and are kept in the database along with it. Subsequent queries are answered using the stored models rather than scanning and processing the original datasets. In order to support model query processing, we maintain query statistics derived from experiments and when running the system. Our approach can also reduce communication load by exchanging models instead of data. To allow seamless integration of model-based querying into traditional data warehouses, we introduce a SQL compatible query terminology. Our experiments show that querying models is up to 80 faster than querying over the raw data while retaining a high accuracy.", "Many organizations require detailed and real time analysis of their business data for effective decision making. OLAP is one of the commonly used methods for the analysis of static data and has been studied by many researchers. OLAP is also applicable to data streams, however the requirement to produce real time analysis on fast and evolving data streams is not possible unless the data to be analysed reside on memory. Keeping in view the limited size and the volatile nature of the memory, we propose a novel architecture AOLAP which in addition to storing raw data streams to the secondary storage, maintains data stream’s summaries in a compact memory-based data structure. This work proposes the use of piece-wise linear approximation (PLA) for storing such data summaries corresponding to each materialized node in the OLAP cube. Since the PLA is a compact data structure, it can store the long data streams’ summaries in comparatively smaller space and can give approximate answers to OLAP queries." ] }
1903.10335
2923794453
The identification of the governing equations of chaotic dynamical systems from data has recently emerged as a hot topic. While the seminal work by reported proof-of-concepts for idealized observation setting for fully-observed systems, i.e. large signal-to-noise ratios and high-frequency sampling of all system variables, we here address the learning of data-driven representations of chaotic dynamics for partially-observed systems, including significant noise patterns and possibly lower and irregular sampling setting. Instead of considering training losses based on short-term prediction error like state-of-the-art learning-based schemes, we adopt a Bayesian formulation and state this issue as a data assimilation problem with unknown model parameters. To solve for the joint inference of the hidden dynamics and of model parameters, we combine neural-network representations and state-of-the-art assimilation schemes. Using iterative Expectation-Maximization (EM)-like procedures, the key feature of the proposed inference schemes is the derivation of the posterior of the hidden dynamics. Using a neural-network-based Ordinary Differential Equation (ODE) representation of these dynamics, we investigate two strategies: their combination to Ensemble Kalman Smoothers and Long Short-Term Memory (LSTM)-based variational approximations of the posterior. Through numerical experiments on the Lorenz-63 system with different noise and time sampling settings, we demonstrate the ability of the proposed schemes to recover and reproduce the hidden chaotic dynamics, including their Lyapunov characteristic exponents, when classic machine learning approaches fail.
Recently, the influence of deep learning @cite_32 has spread to every domain, including model identification. @cite_33 used DenseNet, @cite_39 used LSTM, and @cite_6 used ResNet to identify nonlinear dynamical systems by minimizing the short-term prediction error. They try to exploit the power of neural networks to overcome the difficulties of modelling the nonlinearities. The problems of these methods appear when the observations are partial or irregular, or when a high level of noise is present. Neural networks, in general, do not have an efficient way to deal with irregularity. When the observations are heavily corrupted by noise, using the short-term prediction error as the objective function is very likely to make the network overfit the data. These methods can be considered a special case of our methodology, when @math collapses to @math .
{ "cite_N": [ "@cite_6", "@cite_32", "@cite_33", "@cite_39" ], "mid": [ "2951629468", "2527569769", "2782210340", "2788043262" ], "abstract": [ "Abstract We present a numerical framework for approximating unknown governing equations using observation data and deep neural networks (DNN). In particular, we propose to use residual network (ResNet) as the basic building block for equation approximation. We demonstrate that the ResNet block can be considered as a one-step method that is exact in temporal integration. We then present two multi-step methods, recurrent ResNet (RT-ResNet) method and recursive ReNet (RS-ResNet) method. The RT-ResNet is a multi-step method on uniform time steps, whereas the RS-ResNet is an adaptive multi-step method using variable time steps. All three methods presented here are based on integral form of the underlying dynamical system. As a result, they do not require time derivative data for equation recovery and can cope with relatively coarsely distributed trajectory data. Several numerical examples are presented to demonstrate the performance of the methods.", "A novel variational autoencoder is developed to model images, as well as associated labels or captions. The Deep Generative Deconvolutional Network (DGDN) is used as a decoder of the latent image features, and a deep Convolutional Neural Network (CNN) is used as an image encoder; the CNN is used to approximate a distribution for the latent DGDN features code. The latent code is also linked to generative models for labels (Bayesian support vector machine) or captions (recurrent neural network). When predicting a label caption for a new image at test, averaging is performed across the distribution of latent codes; this is computationally efficient as a consequence of the learned CNN-based encoder. Since the framework is capable of modeling the image in the presence absence of associated labels captions, a new semi-supervised setting is manifested for CNN learning with images; the framework even allows unsupervised CNN learning, based on images alone.", "The process of transforming observed data into predictive mathematical models of the physical world has always been paramount in science and engineering. Although data is currently being collected at an ever-increasing pace, devising meaningful models out of such observations in an automated fashion still remains an open problem. In this work, we put forth a machine learning approach for identifying nonlinear dynamical systems from data. Specifically, we blend classical tools from numerical analysis, namely the multi-step time-stepping schemes, with powerful nonlinear function approximators, namely deep neural networks, to distill the mechanisms that govern the evolution of a given data-set. We test the effectiveness of our approach for several benchmark problems involving the identification of complex, nonlinear and chaotic dynamics, and we demonstrate how this allows us to accurately learn the dynamics, forecast future states, and identify basins of attraction. In particular, we study the Lorenz system, the fluid flow behind a cylinder, the Hopf bifurcation, and the Glycoltic oscillator model as an example of complicated nonlinear dynamics typical of biological systems.", "Abstract We present a deep learning model, DE-LSTM, for the simulation of a stochastic process with an underlying nonlinear dynamics. 
The deep learning model aims to approximate the probability density function of a stochastic process via numerical discretization and the underlying nonlinear dynamics is modeled by the Long Short-Term Memory (LSTM) network. It is shown that, when the numerical discretization is used, the function estimation problem can be solved by a multi-label classification problem. A penalized maximum log likelihood method is proposed to impose a smoothness condition in the prediction of the probability distribution. We show that the time evolution of the probability distribution can be computed by a high-dimensional integration of the transition probability of the LSTM internal states. A Monte Carlo algorithm to approximate the high-dimensional integration is outlined. The behavior of DE-LSTM is thoroughly investigated by using the Ornstein–Uhlenbeck process and noisy observations of nonlinear dynamical systems; Mackey–Glass time series and forced Van der Pol oscillator. It is shown that DE-LSTM makes a good prediction of the probability distribution without assuming any distributional properties of the stochastic process. For a multiple-step forecast of the Mackey–Glass time series, the prediction uncertainty, denoted by the 95 confidence interval, first grows, then dynamically adjusts following the evolution of the system, while in the simulation of the forced Van der Pol oscillator, the prediction uncertainty does not grow in time even for a 3,000-step forecast." ] }
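The training criterion shared by the cited DenseNet/LSTM/ResNet approaches, minimizing the short-term prediction error of a one-step predictor, can be sketched in PyTorch as fitting a ResNet-style integrator x_{t+1} ≈ x_t + f_θ(x_t). The architecture and hyper-parameters below are placeholders, not those of the cited papers.

```python
import torch
import torch.nn as nn

# f_theta approximates the flow increment of a 3-dimensional system (e.g. Lorenz-63)
f_theta = nn.Sequential(nn.Linear(3, 64), nn.Tanh(), nn.Linear(64, 3))
optimizer = torch.optim.Adam(f_theta.parameters(), lr=1e-3)

def train_short_term(trajectory: torch.Tensor, epochs: int = 200) -> float:
    """trajectory: (T, 3) tensor of consecutive, fully observed states."""
    x_t, x_next = trajectory[:-1], trajectory[1:]
    loss = torch.tensor(0.0)
    for _ in range(epochs):
        optimizer.zero_grad()
        pred = x_t + f_theta(x_t)                 # ResNet-style one-step prediction
        loss = ((pred - x_next) ** 2).mean()      # short-term prediction error
        loss.backward()
        optimizer.step()
    return loss.item()
```

In the idealized fully observed, low-noise setting, `trajectory` would simply be a simulated run of the system sampled at a fixed time step; it is exactly this setting that breaks down under partial, irregular, or noisy observations.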
1903.10335
2923794453
Two special methods that do not fall into the two classes above are Analog Forecasting (AF) @cite_4 and Sparse Regression (SR) @cite_35 . Analog forecasting is a non-parametric model that "learns by heart" the dynamics in the catalog. For each new observation @math , AF looks up its catalog and finds the most similar points. It then predicts the next observation @math by averaging the evolution of these points in the catalog. Since AF is a k-NN based method, it does not work well in high-dimensional spaces. Sparse regression finds the analytic form of the dynamics by performing a regression on a basis formed by many possible functions of each component of the state. This method works extremely well when the observations are complete and clean. When the observations are noisy, partial, or irregular, SR fails.
{ "cite_N": [ "@cite_35", "@cite_4" ], "mid": [ "2239232218", "2748401178" ], "abstract": [ "Extracting governing equations from data is a central challenge in many diverse areas of science and engineering. Data are abundant whereas models often remain elusive, as in climate science, neuroscience, ecology, finance, and epidemiology, to name only a few examples. In this work, we combine sparsity-promoting techniques and machine learning with nonlinear dynamical systems to discover governing equations from noisy measurement data. The only assumption about the structure of the model is that there are only a few important terms that govern the dynamics, so that the equations are sparse in the space of possible functions; this assumption holds for many physical systems in an appropriate basis. In particular, we use sparse regression to determine the fewest terms in the dynamic governing equations required to accurately represent the data. This results in parsimonious models that balance accuracy with model complexity to avoid overfitting. We demonstrate the algorithm on a wide range of problems, from simple canonical systems, including linear and nonlinear oscillators and the chaotic Lorenz system, to the fluid vortex shedding behind an obstacle. The fluid example illustrates the ability of this method to discover the underlying dynamics of a system that took experts in the community nearly 30 years to resolve. We also show that this method generalizes to parameterized systems and systems that are time-varying or have external forcing.", "AbstractIn light of growing interest in data-driven methods for oceanic, atmospheric, and climate sciences, this work focuses on the field of data assimilation and presents the analog data assimilation (AnDA). The proposed framework produces a reconstruction of the system dynamics in a fully data-driven manner where no explicit knowledge of the dynamical model is required. Instead, a representative catalog of trajectories of the system is assumed to be available. Based on this catalog, the analog data assimilation combines the nonparametric sampling of the dynamics using analog forecasting methods with ensemble-based assimilation techniques. This study explores different analog forecasting strategies and derives both ensemble Kalman and particle filtering versions of the proposed analog data assimilation approach. Numerical experiments are examined for two chaotic dynamical systems: the Lorenz-63 and Lorenz-96 systems. The performance of the analog data assimilation is discussed with respect to classical ..." ] }
1903.10442
2923841483
Recent advances in crowd counting have achieved promising results with increasingly complex convolutional neural network designs. However, due to the unpredictable domain shift, generalizing trained model to unseen scenarios is often suboptimal. Inspired by the observation that density maps of different scenarios share similar local structures, we propose a novel adversarial learning approach in this paper, i.e., CODA (). To deal with different object scales and density distributions, we perform adversarial training with pyramid patches of multi-scales from both source- and target-domain. Along with a ranking constraint across levels of the pyramid input, consistent object counts can be produced for different scales. Extensive experiments demonstrate that our network produces much better results on unseen datasets compared with existing counting adaption models. Notably, the performance of our CODA is comparable with the state-of-the-art fully-supervised models that are trained on the target dataset. Further analysis indicates that our density adaption framework can effortlessly extend to scenarios with different objects.
So far, Convolutional Neural Networks (CNNs) have shown rich representational power when trained and tested on a single scenario, but generalizing a trained model to unseen scenarios yields poor performance due to the dramatic domain shift. Therefore, several semi-supervised counting algorithms have been proposed to address domain shift and the time-consuming labour of annotation during counting adaptation. FA @cite_17 generalizes from one scenario to another with a small number of labelled samples by exploiting the underlying manifold structure of target crowd data. Based on Bayesian model adaptation of Gaussian processes, GPTL @cite_5 uses limited annotated target-domain samples to adapt the Gaussian kernel. In addition, Crowd CNN @cite_6 performs candidate scene and local patch retrieval on the training data, in light of the test data, to further fine-tune the pre-trained model.
{ "cite_N": [ "@cite_5", "@cite_6", "@cite_17" ], "mid": [ "2197234429", "1910776219", "2164990725" ], "abstract": [ "The problem of transfer learning is considered in the domain of crowd counting. A solution based on Bayesian model adaptation of Gaussian processes is proposed. This is shown to produce intuitive model updates, which are tractable, and lead to an adapted model (predictive distribution) that accounts for all information in both training and adaptation data. The new adaptation procedure achieves significant gains over previous approaches, based on multi-task learning, while requiring much less computation to deploy. This makes it particularly suited for the problem of expanding the capacity of crowd counting camera networks. A large video dataset for the evaluation of adaptation approaches to crowd counting is also introduced. This contains a number of adaptation tasks, involving information transfer across video collected by 1) a single camera under different scene conditions (different times of the day) and 2) video collected from different cameras. Evaluation of the proposed model adaptation procedure in this dataset shows good performance in realistic operating conditions.", "Cross-scene crowd counting is a challenging task where no laborious data annotation is required for counting people in new target surveillance crowd scenes unseen in the training set. The performance of most existing crowd counting methods drops significantly when they are applied to an unseen scene. To address this problem, we propose a deep convolutional neural network (CNN) for crowd counting, and it is trained alternatively with two related learning objectives, crowd density and crowd count. This proposed switchable learning approach is able to obtain better local optimum for both objectives. To handle an unseen target crowd scene, we present a data-driven method to finetune the trained CNN model for the target scene. A new dataset including 108 crowd scenes with nearly 200,000 head annotations is introduced to better evaluate the accuracy of cross-scene crowd counting methods. Extensive experiments on the proposed and another two existing datasets demonstrate the effectiveness and reliability of our approach.", "Regression-based techniques have shown promising results for people counting in crowded scenes. However, most existing techniques require expensive and laborious data annotation for model training. In this study, we propose to address this problem from three perspectives: (1) Instead of exhaustively annotating every single frame, the most informative frames are selected for annotation automatically and actively. (2) Rather than learning from only labelled data, the abundant unlabelled data are exploited. (3) Labelled data from other scenes are employed to further alleviate the burden for data annotation. All three ideas are implemented in a unified active and semi-supervised regression framework with ability to perform transfer learning, by exploiting the underlying geometric structure of crowd patterns via manifold analysis. Extensive experiments validate the effectiveness of our approach." ] }
1903.10219
2922942510
Since the discovery of adversarial examples in machine learning, researchers have designed several techniques to train neural networks that are robust against different types of attacks (most notably @math and @math based attacks). However, it has been observed that the defense mechanisms designed to protect against one type of attack often offer poor performance against the other. In this paper, we introduce Randomized Adversarial Training (RAT), a technique that is efficient both against @math and @math attacks. To obtain this result, we build upon adversarial training, a technique that is efficient against @math attacks, and demonstrate that adding random noise at training and inference time further improves performance against attacks. We then show that RAT is as efficient as adversarial training against @math attacks while being robust against strong @math attacks. Our final comparative experiments demonstrate that RAT outperforms all state-of-the-art approaches against @math and @math attacks.
The FGSM attack can be seen as a one-step gradient method, and variants of FGSM that perform multiple gradient descent steps have been proposed. More specifically, Projected Gradient Descent (PGD) is an attack that can be used to generate adversarial examples defeating FGSM-based adversarial training. Naturally, adversarial training using adversarial examples generated with PGD has also been experimented with. Adversarial training does offer an increased protection level against PGD adversarial examples, albeit at a higher computational cost. More recently, a number of defenses have been proposed @cite_0 @cite_17 @cite_6 that demonstrate good empirical results against some of the attacks, but they do not offer a general improvement in robustness according to a subsequent thorough robustness study.
{ "cite_N": [ "@cite_0", "@cite_6", "@cite_17" ], "mid": [ "2962759300", "", "2964224652" ], "abstract": [ "Adversarial perturbations of normal images are usually imperceptible to humans, but they can seriously confuse state-of-the-art machine learning models. What makes them so special in the eyes of image classifiers? In this paper, we show empirically that adversarial examples mainly lie in the low probability regions of the training distribution, regardless of attack types and targeted models. Using statistical hypothesis testing, we find that modern neural density models are surprisingly good at detecting imperceptible image perturbations. Based on this discovery, we devised PixelDefend, a new approach that purifies a maliciously perturbed image by moving it back towards the distribution seen in the training data. The purified image is then run through an unmodified classifier, making our method agnostic to both the classifier and the attacking method. As a result, PixelDefend can be used to protect already deployed models and be combined with other model-specific defenses. Experiments show that our method greatly improves resilience across a wide variety of state-of-the-art attacking methods, increasing accuracy on the strongest attack from 63 to 84 for Fashion MNIST and from 32 to 70 for CIFAR-10.", "", "Following recent work, neural networks are widely-known to be vulnerable to adversarial examples. Carefully chosen perturbations to real images, while imperceptible to humans, induce misclassification, threatening the reliability of deep learning in the wild. To guard against adversarial examples, we take inspiration from game theory and cast the problem as a minimax zero-sum game between the adversary and the model. In general, in such settings, optimal policies are stochastic. We propose stochastic activation pruning (SAP), an algorithm that prunes a random subset of activations, scaling up the survivors to compensate. The algorithm preferentially keeps activations with larger magnitudes. SAP can be applied to pre-trained neural networks, even adversarially trained models, without fine-tuning, providing robustness against adversarial examples. Experiments demonstrate that in the adversarial setting, SAP confers robustness, increasing accuracy and preserving calibration." ] }
1903.10219
2922942510
Several attacks have also been developed @cite_18 @cite_19 . In this paper, we mainly experiment with the attack that obtains the best results. This is an iterative and unbounded attack (but in practice the distortion remains low). The attack is very efficient against all networks but requires significant computational resources (generating a single adversarial example can take minutes or even hours). Nevertheless, this attack remains the most effective one, and the best existing defense mechanism so far only offers marginal protection against it.
{ "cite_N": [ "@cite_19", "@cite_18" ], "mid": [ "2963857521", "2243397390" ], "abstract": [ "Neural networks provide state-of-the-art results for most machine learning tasks. Unfortunately, neural networks are vulnerable to adversarial examples: given an input x and any target classification t, it is possible to find a new input x' that is similar to x but classified as t. This makes it difficult to apply neural networks in security-critical areas. Defensive distillation is a recently proposed approach that can take an arbitrary neural network, and increase its robustness, reducing the success rate of current attacks' ability to find adversarial examples from 95 to 0.5 .In this paper, we demonstrate that defensive distillation does not significantly increase the robustness of neural networks by introducing three new attack algorithms that are successful on both distilled and undistilled neural networks with 100 probability. Our attacks are tailored to three distance metrics used previously in the literature, and when compared to previous adversarial example generation algorithms, our attacks are often much more effective (and never worse). Furthermore, we propose using high-confidence adversarial examples in a simple transferability test we show can also be used to break defensive distillation. We hope our attacks will be used as a benchmark in future defense attempts to create neural networks that resist adversarial examples.", "State-of-the-art deep neural networks have achieved impressive results on many image classification tasks. However, these same architectures have been shown to be unstable to small, well sought, perturbations of the images. Despite the importance of this phenomenon, no effective methods have been proposed to accurately compute the robustness of state-of-the-art deep classifiers to such perturbations on large-scale datasets. In this paper, we fill this gap and propose the DeepFool algorithm to efficiently compute perturbations that fool deep networks, and thus reliably quantify the robustness of these classifiers. Extensive experimental results show that our approach outperforms recent methods in the task of computing adversarial perturbations and making classifiers more robust.1" ] }
1903.10219
2922942510
An important idea to better protect models against strong attacks is to add noise at inference time. The presence of noise complicates the design of adversarial examples because the precise outcome of the model is not known to the attacker. The first method explicitly using this kind of technique was introduced in earlier work, and a number of alternative noise-based protection methods @cite_12 @cite_4 offer some protection. However, it has been remarked that randomization-based protection mechanisms should be evaluated against attacks in expectation. Most existing noise-based protection mechanisms are weak under this evaluation methodology. Nevertheless, randomization remains the only viable mechanism against attacks.
{ "cite_N": [ "@cite_4", "@cite_12" ], "mid": [ "2900663597", "2773300778" ], "abstract": [ "Recent development in the field of Deep Learning have exposed the underlying vulnerability of Deep Neural Network (DNN) against adversarial examples. In image classification, an adversarial example is a carefully modified image that is visually imperceptible to the original image but can cause DNN model to misclassify it. Training the network with Gaussian noise is an effective technique to perform model regularization, thus improving model robustness against input variation. Inspired by this classical method, we explore to utilize the regularization characteristic of noise injection to improve DNN's robustness against adversarial attack. In this work, we propose Parametric-Noise-Injection (PNI) which involves trainable Gaussian noise injection at each layer on either activation or weights through solving the min-max optimization problem, embedded with adversarial training. These parameters are trained explicitly to achieve improved robustness. To the best of our knowledge, this is the first work that uses trainable noise injection to improve network robustness against adversarial attacks, rather than manually configuring the injected noise level through cross-validation. The extensive results show that our proposed PNI technique effectively improves the robustness against a variety of powerful white-box and black-box attacks such as PGD, C & W, FGSM, transferable attack and ZOO attack. Last but not the least, PNI method improves both clean- and perturbed-data accuracy in comparison to the state-of-the-art defense methods, which outperforms current unbroken PGD defense by 1.1 and 6.8 on clean test data and perturbed test data respectively using Resnet-20 architecture.", "Recent studies have revealed the vulnerability of deep neural networks - A small adversarial perturbation that is imperceptible to human can easily make a well-trained deep neural network mis-classify. This makes it unsafe to apply neural networks in security-critical applications. In this paper, we propose a new defensive algorithm called Random Self-Ensemble (RSE) by combining two important concepts: @math and @math . To protect a targeted model, RSE adds random noise layers to the neural network to prevent from state-of-the-art gradient-based attacks, and ensembles the prediction over random noises to stabilize the performance. We show that our algorithm is equivalent to ensemble an infinite number of noisy models @math without any additional memory overhead, and the proposed training procedure based on noisy stochastic gradient descent can ensure the ensemble model has good predictive capability. Our algorithm significantly outperforms previous defense techniques on real datasets. For instance, on CIFAR-10 with VGG network (which has @math accuracy without any attack), under the state-of-the-art C&W attack within a certain distortion tolerance, the accuracy of unprotected model drops to less than @math , the best previous defense technique has @math accuracy, while our method still has @math prediction accuracy under the same level of attack. Finally, our method is simple and easy to integrate into any neural network." ] }
1903.10064
2922535331
This paper proposes an intuitive human-swarm interaction framework inspired by our childhood memory in which we interacted with living ants by changing their positions and environments as if we were omnipotent relative to the ants. In virtual reality, analogously, we can be a super-powered virtual giant who can supervise a swarm of mobile robots in a vast and remote environment by flying over or resizing the world and coordinate them by picking and placing a robot or creating virtual walls. This work implements this idea by using Virtual Reality along with Leap Motion, which is then validated by proof-of-concept experiments using real and virtual mobile robots in mixed reality. We conduct a usability analysis to quantify the effectiveness of the overall system as well as the individual interfaces proposed in this work. The results revealed that the proposed method is intuitive and feasible for interaction with swarm robots, but may require appropriate training for the new end-user interface device.
Augmented reality (AR) has also been utilised in @cite_7 @cite_10 . This method generally uses a tablet computer which, through its rear-view camera, recognises robots and objects in the environment. Using the touchscreen, a user can control the robots shown on the screen, for example, by swipe gestures. In @cite_19 , an AR-based method was tested for cooperative transport tasks with multiple robots. However, this type of interface is only usable when the operator is in close proximity to the robots.
{ "cite_N": [ "@cite_19", "@cite_10", "@cite_7" ], "mid": [ "2969014073", "", "2624190076" ], "abstract": [ "We present an augmented reality human-swarm interface that combines two modalities of interaction: environment-oriented and robot-oriented. The environment-oriented modality allows the user to modify the environment (either virtual or physical) to indicate a goal to attain for the robot swarm. The robot-oriented modality makes it possible to select individual robots to reassign them to other tasks to increase performance or remedy failures. Previous research has concluded that environment-oriented interaction might prove more difficult to grasp for untrained users. In this paper, we report a user study which indicates that, at least in collective transport, environment-oriented interaction is more effective than purely robot-oriented interaction, and that the two combined achieve remarkable efficacy.", "", "Although human-multi-robot systems have received increased attention in recent years, current implementations rely on structured environments and utilize specialized, research-grade hardware to operate. This letter presents approaches that leverage the visual and inertial sensing of mobile devices to address the estimation and control challenges of multi-robot systems that function in shared spaces with human operators such that both the mobile device camera and robots can move freely in the environment. It is shown that a subset of robots in the system can be used to maintain a reference frame that facilitates tracking and control of the remaining robots to perform tasks, such as object retrieval, using an operator's mobile device as the only sensing and computational platform in the system. To evaluate the performance of the proposed approaches, experiments are conducted in which a system of mobile robots is commanded to retrieve objects in an environment. Results show that, compared to using the visual data alone, integrating both the visual and inertial data from mobile devices yields improvements in performance, flexibility, and computational efficiency in implementing human-multi-robot systems." ] }
1903.10064
2922535331
Tangible interactions can be another methodology for certain types of swarm robots. The work in @cite_21 presented tiny tabletop mobile robots with which a human can physically interact. By relocating a few of the robots, the entire swarm eventually ends up with different collective behaviours. This tangible interface inherently does not allow any interfacing error when changing a robot's position. Nevertheless, apart from position modifications, it does not seem straightforward to incorporate other interfaces.
{ "cite_N": [ "@cite_21" ], "mid": [ "2533619018" ], "abstract": [ "This paper introduces swarm user interfaces, a new class of human-computer interfaces comprised of many autonomous robots that handle both display and interaction. We describe the design of Zooids, an open-source open-hardware platform for developing tabletop swarm interfaces. The platform consists of a collection of custom-designed wheeled micro robots each 2.6 cm in diameter, a radio base-station, a high-speed DLP structured light projector for optical tracking, and a software framework for application development and control. We illustrate the potential of tabletop swarm user interfaces through a set of application scenarios developed with Zooids, and discuss general design considerations unique to swarm user interfaces." ] }
1903.10064
2922535331
All the aforementioned interfaces require a human operator to be within proximity of the robots. Instead, VR-based interactions can be considered as an alternative for beyond-line-of-sight robotic operations. In a virtual space where a human operator interacts with swarm robots, the operator is able to violate the laws of physics, teleporting @cite_12 or resizing the virtual world (as will be shown in this paper) to observe the situation macroscopically. This may make it easier to perceive and control a large number of robots in a vast and remote environment. However, most existing VR-based interfaces rely on default hand-held controllers. These can be less intuitive than using our bare hands and may also place considerable load on the user's arms when used for a longer time.
{ "cite_N": [ "@cite_12" ], "mid": [ "2847534950" ], "abstract": [ "This chapter describes a series of works developed in order to integrate ROS-based robots with Unity-based virtual reality interfaces. The main goal of this integration is to develop immersive monitoring and commanding interfaces, able to improve the operator’s situational awareness without increasing its workload. In order to achieve this, the available technologies and resources are analyzed and multiple ROS packages and Unity assets are applied, such as (multimaster ), (rosbridge ), RosBridgeLib and SteamVR. Moreover, three applications are presented: an interface for monitoring a fleet of drones, another interface for commanding a robot manipulator and an integration of multiple ground and aerial robots. Finally, some experiences and lessons learned, useful for future developments, are reported." ] }
1903.10145
2951670304
Variational autoencoders (VAEs) with an auto-regressive decoder have been applied for many natural language processing (NLP) tasks. The VAE objective consists of two terms, (i) reconstruction and (ii) KL regularization, balanced by a weighting hyper-parameter β. One notorious training difficulty is that the KL term tends to vanish. In this paper we study scheduling schemes for β, and show that KL vanishing is caused by the lack of good latent codes in training the decoder at the beginning of optimization. To remedy this, we propose a cyclical annealing schedule, which repeats the process of increasing β multiple times. This new procedure allows the progressive learning of more meaningful latent codes, by leveraging the informative representations of previous cycles as warm re-starts. The effectiveness of cyclical annealing is validated on a broad range of NLP tasks, including language modeling, dialog response generation and unsupervised language pre-training.
Several techniques have been proposed to mitigate the KL vanishing issue. The proposed method is most closely related to the monotonic KL annealing technique in @cite_30 . In addition to introducing a specific algorithm, we have comprehensively studied the impact of @math and its scheduling schemes. Our explanations can be used to interpret other techniques, which can be broadly categorized into two classes.
{ "cite_N": [ "@cite_30" ], "mid": [ "2210838531" ], "abstract": [ "The standard recurrent neural network language model (RNNLM) generates sentences one word at a time and does not work from an explicit global sentence representation. In this work, we introduce and study an RNN-based variational autoencoder generative model that incorporates distributed latent representations of entire sentences. This factorization allows it to explicitly model holistic properties of sentences such as style, topic, and high-level syntactic features. Samples from the prior over these sentence representations remarkably produce diverse and well-formed sentences through simple deterministic decoding. By examining paths through this latent space, we are able to generate coherent novel sentences that interpolate between known sentences. We present techniques for solving the difficult learning problem presented by this model, demonstrate its effectiveness in imputing missing words, explore many interesting properties of the model's latent sentence space, and present negative results on the use of the model in language modeling." ] }
1903.10145
2951670304
Variational autoencoders (VAEs) with an auto-regressive decoder have been applied for many natural language processing (NLP) tasks. The VAE objective consists of two terms, (i) reconstruction and (ii) KL regularization, balanced by a weighting hyper-parameter β. One notorious training difficulty is that the KL term tends to vanish. In this paper we study scheduling schemes for β, and show that KL vanishing is caused by the lack of good latent codes in training the decoder at the beginning of optimization. To remedy this, we propose a cyclical annealing schedule, which repeats the process of increasing β multiple times. This new procedure allows the progressive learning of more meaningful latent codes, by leveraging the informative representations of previous cycles as warm re-starts. The effectiveness of cyclical annealing is validated on a broad range of NLP tasks, including language modeling, dialog response generation and unsupervised language pre-training.
The first category attempts to weaken Path B and force the decoder to use Path A. Word drop decoding @cite_30 sets a certain percentage of the target words to zero. It has been shown that this may degrade performance when the drop rate is too high. The dilated CNN was considered in @cite_18 as a new type of decoder to replace the LSTM. By changing the decoder's dilation architecture, one can control Path B: the effective context from @math .
{ "cite_N": [ "@cite_30", "@cite_18" ], "mid": [ "2210838531", "2963600562" ], "abstract": [ "The standard recurrent neural network language model (RNNLM) generates sentences one word at a time and does not work from an explicit global sentence representation. In this work, we introduce and study an RNN-based variational autoencoder generative model that incorporates distributed latent representations of entire sentences. This factorization allows it to explicitly model holistic properties of sentences such as style, topic, and high-level syntactic features. Samples from the prior over these sentence representations remarkably produce diverse and well-formed sentences through simple deterministic decoding. By examining paths through this latent space, we are able to generate coherent novel sentences that interpolate between known sentences. We present techniques for solving the difficult learning problem presented by this model, demonstrate its effectiveness in imputing missing words, explore many interesting properties of the model's latent sentence space, and present negative results on the use of the model in language modeling.", "Recent work on generative text modeling has found that variational autoencoders (VAE) with LSTM decoders perform worse than simpler LSTM language models (, 2015). This negative result is so far poorly understood, but has been attributed to the propensity of LSTM decoders to ignore conditioning information from the encoder. In this paper, we experiment with a new type of decoder for VAE: a dilated CNN. By changing the decoder's dilation architecture, we control the size of context from previously generated words. In experiments, we find that there is a trade-off between contextual capacity of the decoder and effective use of encoding information. We show that when carefully managed, VAEs can outperform LSTM language models. We demonstrate perplexity gains on two datasets, representing the first positive language modeling result with VAE. Further, we conduct an in-depth investigation of the use of VAE (with our new decoding architecture) for semi-supervised and unsupervised labeling tasks, demonstrating gains over several strong baselines." ] }
1903.10145
2951670304
Variational autoencoders (VAEs) with an auto-regressive decoder have been applied for many natural language processing (NLP) tasks. The VAE objective consists of two terms, (i) reconstruction and (ii) KL regularization, balanced by a weighting hyper-parameter β. One notorious training difficulty is that the KL term tends to vanish. In this paper we study scheduling schemes for β, and show that KL vanishing is caused by the lack of good latent codes in training the decoder at the beginning of optimization. To remedy this, we propose a cyclical annealing schedule, which repeats the process of increasing β multiple times. This new procedure allows the progressive learning of more meaningful latent codes, by leveraging the informative representations of previous cycles as warm re-starts. The effectiveness of cyclical annealing is validated on a broad range of NLP tasks, including language modeling, dialog response generation and unsupervised language pre-training.
Warm-restart techniques are common in optimization to deal with multimodal functions. Cyclical schedules have been used to train deep neural networks @cite_1 , to warm-restart stochastic gradient descent @cite_17 , to improve convergence rates @cite_28 , to obtain model ensembles @cite_8 , and to explore multimodal distributions in MCMC sampling @cite_11 . All these works applied cyclical schedules to the learning rate. In contrast, this paper is the first to consider a cyclical schedule for @math in VAEs. Though the techniques seem simple and similar, our motivation is different: we use the cyclical schedule to re-open Path A in Figure (b) and provide the opportunity to train the decoder with high-quality @math .
{ "cite_N": [ "@cite_8", "@cite_28", "@cite_1", "@cite_17", "@cite_11" ], "mid": [ "2612983688", "2749581528", "2964054038", "2963263347", "2919841361" ], "abstract": [ "Ensembles of neural networks are known to be much more robust and accurate than individual networks. However, training multiple deep networks for model averaging is computationally expensive. In this paper, we propose a method to obtain the seemingly contradictory goal of ensembling multiple neural networks at no additional training cost. We achieve this goal by training a single neural network, converging to several local minima along its optimization path and saving the model parameters. To obtain repeated rapid convergence, we leverage recent work on cyclic learning rate schedules. The resulting technique, which we refer to as Snapshot Ensembling, is simple, yet surprisingly effective. We show in a series of experiments that our approach is compatible with diverse network architectures and learning tasks. It consistently yields lower error rates than state-of-the-art single models at no additional training cost, and compares favorably with traditional network ensembles. On CIFAR-10 and CIFAR-100 our DenseNet Snapshot Ensembles obtain error rates of 3.4 and 17.4 respectively.", "In this paper, we show a phenomenon, which we named \"super-convergence\", where residual networks can be trained using an order of magnitude fewer iterations than is used with standard training methods. The existence of super-convergence is relevant to understanding why deep networks generalize well. One of the key elements of super-convergence is training with cyclical learning rates and a large maximum learning rate. Furthermore, we present evidence that training with large learning rates improves performance by regularizing the network. In addition, we show that super-convergence provides a greater boost in performance relative to standard training when the amount of labeled training data is limited. We also derive a simplification of the Hessian Free optimization method to compute an estimate of the optimal learning rate. The architectures and code to replicate the figures in this paper are available at github.com lnsmith54 super-convergence.", "It is known that the learning rate is the most important hyper-parameter to tune for training deep neural networks. This paper describes a new method for setting the learning rate, named cyclical learning rates, which practically eliminates the need to experimentally find the best values and schedule for the global learning rates. Instead of monotonically decreasing the learning rate, this method lets the learning rate cyclically vary between reasonable boundary values. Training with cyclical learning rates instead of fixed values achieves improved classification accuracy without a need to tune and often in fewer iterations. This paper also describes a simple way to estimate \"reasonable bounds\" – linearly increasing the learning rate of the network for a few epochs. In addition, cyclical learning rates are demonstrated on the CIFAR-10 and CIFAR-100 datasets with ResNets, Stochastic Depth networks, and DenseNets, and the ImageNet dataset with the AlexNet and GoogLeNet architectures. These are practical tools for everyone who trains neural networks.", "Restart techniques are common in gradient-free optimization to deal with multimodal functions. 
Partial warm restarts are also gaining popularity in gradient-based optimization to improve the rate of convergence in accelerated gradient schemes to deal with ill-conditioned functions. In this paper, we propose a simple warm restart technique for stochastic gradient descent to improve its anytime performance when training deep neural networks. We empirically study its performance on the CIFAR-10 and CIFAR-100 datasets, where we demonstrate new state-of-the-art results at 3.14 and 16.21 , respectively. We also demonstrate its advantages on a dataset of EEG recordings and on a downsampled version of the ImageNet dataset. Our source code is available at https: github.com loshchil SGDR", "The posteriors over neural network weights are high dimensional and multimodal. Each mode typically characterizes a meaningfully different representation of the data. We develop Cyclical Stochastic Gradient MCMC (SG-MCMC) to automatically explore such distributions. In particular, we propose a cyclical stepsize schedule, where larger steps discover new modes, and smaller steps characterize each mode. We prove that our proposed learning rate schedule provides faster convergence to samples from a stationary distribution than SG-MCMC with standard decaying schedules. Moreover, we provide extensive experimental results to demonstrate the effectiveness of cyclical SG-MCMC in learning complex multimodal distributions, especially for fully Bayesian inference with modern deep neural networks." ] }
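For comparison with the learning-rate setting discussed above, the sketch below shows a warm-restarted cosine learning-rate schedule in the spirit of @cite_17; the cycle length and the bounds are illustrative values only.

    import math

    def cosine_lr_with_restarts(step, cycle_len=1000, lr_max=0.1, lr_min=1e-4):
        # Cosine decay from lr_max to lr_min, restarted from lr_max at the
        # beginning of every cycle (the "warm restart").
        tau = (step % cycle_len) / cycle_len
        return lr_min + 0.5 * (lr_max - lr_min) * (1.0 + math.cos(math.pi * tau))

    lrs = [cosine_lr_with_restarts(t) for t in range(3000)]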
1903.10118
2923871811
So far, research to generate captions from images has been carried out from the viewpoint that a caption holds sufficient information for an image. If it is possible to generate an image that is close to the input image from a generated caption, i.e., if it is possible to generate a natural language caption containing sufficient information to reproduce the image, then the caption is considered to be faithful to the image. To make such regeneration possible, learning using the cycle-consistency loss is effective. In this study, we propose a method of generating captions by learning end-to-end mutual transformations between images and texts. To evaluate our method, we perform comparative experiments with and without the cycle consistency. The results are evaluated by an automatic evaluation and crowdsourcing, demonstrating that our proposed method is effective.
Generating captions from images requires recognizing the objects in an image, understanding the relationships between them, and producing grammatically correct sentences, and research to make this possible has been conducted in recent years using machine learning. We refer the reader to the representative study @cite_25 . It is an application of the encoder--decoder model, which achieves strong performance in machine translation: a convolutional neural network (CNN) encodes features from the image and a recurrent neural network (RNN) decodes the caption. Here, long short-term memory (LSTM) @cite_6 is employed as the RNN. In addition, methods incorporating attention have recently become mainstream @cite_31 . When generating a caption, the first word is obtained by discrete sampling, for example by taking the word with the maximum probability given the image features; the next word is then obtained from that word in the same manner, and so on.
{ "cite_N": [ "@cite_31", "@cite_25", "@cite_6" ], "mid": [ "2950178297", "1895577753", "" ], "abstract": [ "Inspired by recent work in machine translation and object detection, we introduce an attention based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO.", "Automatically describing the content of an image is a fundamental problem in artificial intelligence that connects computer vision and natural language processing. In this paper, we present a generative model based on a deep recurrent architecture that combines recent advances in computer vision and machine translation and that can be used to generate natural sentences describing an image. The model is trained to maximize the likelihood of the target description sentence given the training image. Experiments on several datasets show the accuracy of the model and the fluency of the language it learns solely from image descriptions. Our model is often quite accurate, which we verify both qualitatively and quantitatively. For instance, while the current state-of-the-art BLEU-1 score (the higher the better) on the Pascal dataset is 25, our approach yields 59, to be compared to human performance around 69. We also show BLEU-1 score improvements on Flickr30k, from 56 to 66, and on SBU, from 19 to 28. Lastly, on the newly released COCO dataset, we achieve a BLEU-4 of 27.7, which is the current state-of-the-art.", "" ] }
1903.10118
2923871811
So far, research to generate captions from images has been carried out from the viewpoint that a caption holds sufficient information for an image. If it is possible to generate an image that is close to the input image from a generated caption, i.e., if it is possible to generate a natural language caption containing sufficient information to reproduce the image, then the caption is considered to be faithful to the image. To make such regeneration possible, learning using the cycle-consistency loss is effective. In this study, we propose a method of generating captions by learning end-to-end mutual transformations between images and texts. To evaluate our method, we perform comparative experiments with and without the cycle consistency. The results are evaluated by an automatic evaluation and crowdsourcing, demonstrating that our proposed method is effective.
In the method of @cite_29 , the authors combine an image captioning network with a network for evaluating the generated captions. In the generator, the latent variable, concatenated with an image feature extracted by the CNN, is passed to the LSTM. The generated caption and the image are fed into the evaluation network, where the image features extracted by the CNN are multiplied by the text features extracted by the LSTM to determine whether the caption matches the image. The unsupervised image captioning method of @cite_10 extends this line of work to unpaired data.
{ "cite_N": [ "@cite_29", "@cite_10" ], "mid": [ "2962968835", "2903179935" ], "abstract": [ "Despite the substantial progress in recent years, the image captioning techniques are still far from being perfect. Sentences produced by existing methods, e.g. those based on RNNs, are often overly rigid and lacking in variability. This issue is related to a learning principle widely used in practice, that is, to maximize the likelihood of training samples. This principle encourages high resemblance to the “ground-truth” captions, while suppressing other reasonable descriptions. Conventional evaluation metrics, e.g. BLEU and METEOR, also favor such restrictive methods. In this paper, we explore an alternative approach, with the aim to improve the naturalness and diversity – two essential properties of human expression. Specifically, we propose a new framework based on Conditional Generative Adversarial Networks (CGAN), which jointly learns a generator to produce descriptions conditioned on images and an evaluator to assess how well a description fits the visual content. It is noteworthy that training a sequence generator is nontrivial. We overcome the difficulty by Policy Gradient, a strategy stemming from Reinforcement Learning, which allows the generator to receive early feedback along the way. We tested our method on two large datasets, where it performed competitively against real people in our user study and outperformed other methods on various tasks.", "Deep neural networks have achieved great successes on the image captioning task. However, most of the existing models depend heavily on paired image-sentence datasets, which are very expensive to acquire. In this paper, we make the first attempt to train an image captioning model in an unsupervised manner. Instead of relying on manually labeled image-sentence pairs, our proposed model merely requires an image set, a sentence corpus, and an existing visual concept detector. The sentence corpus is used to teach the captioning model how to generate plausible sentences. Meanwhile, the knowledge in the visual concept detector is distilled into the captioning model to guide the model to recognize the visual concepts in an image. In order to further encourage the generated captions to be semantically consistent with the image, the image and caption are projected into a common latent space so that they can reconstruct each other. Given that the existing sentence corpora are mainly designed for linguistic research and are thus with little reference to image contents, we crawl a large-scale image description corpus of two million natural sentences to facilitate the unsupervised image captioning scenario. Experimental results show that our proposed model is able to produce quite promising results without any caption annotations." ] }
1903.10118
2923871811
So far, research to generate captions from images has been carried out from the viewpoint that a caption holds sufficient information for an image. If it is possible to generate an image that is close to the input image from a generated caption, i.e., if it is possible to generate a natural language caption containing sufficient information to reproduce the image, then the caption is considered to be faithful to the image. To make such regeneration possible, learning using the cycle-consistency loss is effective. In this study, we propose a method of generating captions by learning end-to-end mutual transformations between images and texts. To evaluate our method, we perform comparative experiments with and without the cycle consistency. The results are evaluated by an automatic evaluation and crowdsourcing, demonstrating that our proposed method is effective.
GANs @cite_11 consist of two parts: a generator and a discriminator. In GANs, the parameters of the generator are updated using gradients from the discriminator. However, because text generation usually involves non-differentiable discrete sampling, as in the captioning models above, gradients cannot be propagated from the discriminator to train the generator. With this motivation, SeqGAN was proposed in @cite_18 as a method enabling the generation of text from latent variables using GANs. As in ordinary GANs, the discriminator determines whether the input text is real or fake. An RNN is employed as the generator, but it is trained with reinforcement learning: the generator acts as the policy, the output of the discriminator acts as the reward, and the parameters are updated to maximize the reward using the policy gradient method. Because the discriminator can only compute a reward for sequences whose generation has been completed, the remainder of a partially generated sequence is completed using a Monte Carlo search and the reward is approximated. Owing to the use of the policy gradient method, it is difficult to start learning from random parameters, and pre-training is required.
{ "cite_N": [ "@cite_18", "@cite_11" ], "mid": [ "2964268978", "2099471712" ], "abstract": [ "As a new way of training generative models, Generative Adversarial Net (GAN) that uses a discriminative model to guide the training of the generative model has enjoyed considerable success in generating real-valued data. However, it has limitations when the goal is for generating sequences of discrete tokens. A major reason lies in that the discrete outputs from the generative model make it difficult to pass the gradient update from the discriminative model to the generative model. Also, the discriminative model can only assess a complete sequence, while for a partially generated sequence, it is nontrivial to balance its current score and the future one once the entire sequence has been generated. In this paper, we propose a sequence generation framework, called SeqGAN, to solve the problems. Modeling the data generator as a stochastic policy in reinforcement learning (RL), SeqGAN bypasses the generator differentiation problem by directly performing gradient policy update. The RL reward signal comes from the GAN discriminator judged on a complete sequence, and is passed back to the intermediate state-action steps using Monte Carlo search. Extensive experiments on synthetic data and real-world tasks demonstrate significant improvements over strong baselines.", "We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples." ] }
1903.10118
2923871811
So far, research to generate captions from images has been carried out from the viewpoint that a caption holds sufficient information for an image. If it is possible to generate an image that is close to the input image from a generated caption, i.e., if it is possible to generate a natural language caption containing sufficient information to reproduce the image, then the caption is considered to be faithful to the image. To make such regeneration possible, learning using the cycle-consistency loss is effective. In this study, we propose a method of generating captions by learning end-to-end mutual transformations between images and texts. To evaluate our method, we perform comparative experiments with and without the cycle consistency. The results are evaluated by an automatic evaluation and crowdsourcing, demonstrating that our proposed method is effective.
The AlignDRAW method @cite_16 was proposed to generate images from text. This approach estimates the relationship between text features and the generated image using an RNN, but produces blurred images that only capture rough features.
{ "cite_N": [ "@cite_16" ], "mid": [ "2963143316" ], "abstract": [ "Abstract: Motivated by the recent progress in generative models, we introduce a model that generates images from natural language descriptions. The proposed model iteratively draws patches on a canvas, while attending to the relevant words in the description. After training on Microsoft COCO, we compare our model with several baseline generative models on image generation and retrieval tasks. We demonstrate that our model produces higher quality samples than other approaches and generates images with novel scene compositions corresponding to previously unseen captions in the dataset." ] }
1903.10118
2923871811
So far, research to generate captions from images has been carried out from the viewpoint that a caption holds sufficient information for an image. If it is possible to generate an image that is close to the input image from a generated caption, i.e., if it is possible to generate a natural language caption containing sufficient information to reproduce the image, then the caption is considered to be faithful to the image. To make such regeneration possible, learning using the cycle-consistency loss is effective. In this study, we propose a method of generating captions by learning end-to-end mutual transformations between images and texts. To evaluate our method, we perform comparative experiments with and without the cycle consistency. The results are evaluated by an automatic evaluation and crowdsourcing, demonstrating that our proposed method is effective.
The work in @cite_17 generated images from text using GANs. The DCGAN method @cite_23 is a mainstream approach that uses a CNN-based GAN to generate images from latent variables. In addition, CGAN @cite_0 makes it possible to generate an image that matches a condition label. Using the same network structure as CGAN, the method of @cite_17 concatenates the text feature vector extracted from a caption with the latent variable as a condition and inputs it into the generator. The discriminator then determines whether the generated image is valid for the caption. A method @cite_15 combining a CNN and an RNN is also employed to extract features from captions. The model trained with this method is used as the text encoder, but this component is kept fixed (not updated) while training the GAN.
{ "cite_N": [ "@cite_0", "@cite_15", "@cite_23", "@cite_17" ], "mid": [ "2125389028", "2398118205", "2963684088", "2405756170" ], "abstract": [ "Generative Adversarial Nets [8] were recently introduced as a novel way to train generative models. In this work we introduce the conditional version of generative adversarial nets, which can be constructed by simply feeding the data, y, we wish to condition on to both the generator and discriminator. We show that this model can generate MNIST digits conditioned on class labels. We also illustrate how this model could be used to learn a multi-modal model, and provide preliminary examples of an application to image tagging in which we demonstrate how this approach can generate descriptive tags which are not part of training labels.", "State-of-the-art methods for zero-shot visual recognition formulate learning as a joint embedding problem of images and side information. In these formulations the current best complement to visual features are attributes: manuallyencoded vectors describing shared characteristics among categories. Despite good performance, attributes have limitations: (1) finer-grained recognition requires commensurately more attributes, and (2) attributes do not provide a natural language interface. We propose to overcome these limitations by training neural language models from scratch, i.e. without pre-training and only consuming words and characters. Our proposed models train end-to-end to align with the fine-grained and category-specific content of images. Natural language provides a flexible and compact way of encoding only the salient visual aspects for distinguishing categories. By training on raw text, our model can do inference on raw text as well, providing humans a familiar mode both for annotation and retrieval. Our model achieves strong performance on zero-shot text-based image retrieval and significantly outperforms the attribute-based state-of-the-art for zero-shot classification on the Caltech-UCSD Birds 200-2011 dataset.", "Abstract: In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks - demonstrating their applicability as general image representations.", "Automatic synthesis of realistic images from text would be interesting and useful, but current AI systems are still far from this goal. However, in recent years generic and powerful recurrent neural network architectures have been developed to learn discriminative text feature representations. Meanwhile, deep convolutional generative adversarial networks (GANs) have begun to generate highly compelling images of specific categories, such as faces, album covers, and room interiors. 
In this work, we develop a novel deep architecture and GAN formulation to effectively bridge these advances in text and image modeling, translating visual concepts from characters to pixels. We demonstrate the capability of our model to generate plausible images of birds and flowers from detailed text descriptions." ] }
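To illustrate the conditioning mechanism just described, the toy generator below concatenates a caption embedding with the noise vector before synthesis; a small MLP stands in for the deconvolutional DCGAN-style generator, and all sizes and names are illustrative assumptions.

    import torch
    import torch.nn as nn

    class TextConditionedGenerator(nn.Module):
        # The text embedding is concatenated with the latent vector z, so the
        # generated image depends on both the noise and the caption.
        def __init__(self, z_dim=100, txt_dim=128, img_pixels=64 * 64 * 3):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(z_dim + txt_dim, 1024), nn.ReLU(),
                nn.Linear(1024, img_pixels), nn.Tanh())

        def forward(self, z, txt_embed):
            return self.net(torch.cat([z, txt_embed], dim=1))

    fake = TextConditionedGenerator()(torch.randn(2, 100), torch.randn(2, 128))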
1903.10118
2923871811
So far, research to generate captions from images has been carried out from the viewpoint that a caption holds sufficient information for an image. If it is possible to generate an image that is close to the input image from a generated caption, i.e., if it is possible to generate a natural language caption containing sufficient information to reproduce the image, then the caption is considered to be faithful to the image. To make such regeneration possible, learning using the cycle-consistency loss is effective. In this study, we propose a method of generating captions by learning end-to-end mutual transformations between images and texts. To evaluate our method, we perform comparative experiments with and without the cycle consistency. The results are evaluated by an automatic evaluation and crowdsourcing, demonstrating that our proposed method is effective.
The StackGAN method of @cite_1 is another approach for generating high-resolution images. Like the approach above, this method is based on CGAN, but the training is divided into two stages: an image satisfying the conditions is generated in the first stage, and a high-resolution image is generated in the second stage. Here, if the text feature @math extracted from the caption @math is used as is, the input vector is biased and learning becomes unstable. To circumvent this issue and maintain diversity, the authors employ a condition @math sampled from a normal distribution whose mean @math and diagonal covariance matrix @math are computed from the text feature @math . In addition, to avoid overfitting, the Kullback--Leibler divergence between this distribution and the standard normal distribution is incorporated into the loss function. A follow-up study enables the generation of high-resolution images by end-to-end learning without the division into two stages @cite_14 .
{ "cite_N": [ "@cite_14", "@cite_1" ], "mid": [ "2963413689", "2964024144" ], "abstract": [ "This paper presents a novel method to deal with the challenging task of generating photographic images conditioned on semantic image descriptions. Our method introduces accompanying hierarchical-nested adversarial objectives inside the network hierarchies, which regularize mid-level representations and assist generator training to capture the complex image statistics. We present an extensile single-stream generator architecture to better adapt the jointed discriminators and push generated images up to high resolutions. We adopt a multi-purpose adversarial loss to encourage more effective image and text information usage in order to improve the semantic consistency and image fidelity simultaneously. Furthermore, we introduce a new visual-semantic similarity measure to evaluate the semantic consistency of generated images. With extensive experimental validation on three public datasets, our method significantly improves previous state of the arts on all datasets over different evaluation metrics.", "Synthesizing high-quality images from text descriptions is a challenging problem in computer vision and has many practical applications. Samples generated by existing textto- image approaches can roughly reflect the meaning of the given descriptions, but they fail to contain necessary details and vivid object parts. In this paper, we propose Stacked Generative Adversarial Networks (StackGAN) to generate 256.256 photo-realistic images conditioned on text descriptions. We decompose the hard problem into more manageable sub-problems through a sketch-refinement process. The Stage-I GAN sketches the primitive shape and colors of the object based on the given text description, yielding Stage-I low-resolution images. The Stage-II GAN takes Stage-I results and text descriptions as inputs, and generates high-resolution images with photo-realistic details. It is able to rectify defects in Stage-I results and add compelling details with the refinement process. To improve the diversity of the synthesized images and stabilize the training of the conditional-GAN, we introduce a novel Conditioning Augmentation technique that encourages smoothness in the latent conditioning manifold. Extensive experiments and comparisons with state-of-the-arts on benchmark datasets demonstrate that the proposed method achieves significant improvements on generating photo-realistic images conditioned on text descriptions." ] }
1903.10172
2924096267
With a single eye fixation lasting a fraction of a second, the human visual system is capable of forming a rich representation of a complex environment, reaching a holistic understanding which facilitates object recognition and detection. This phenomenon is known as recognizing the "gist" of the scene and is accomplished by relying on relevant prior knowledge. This paper addresses the analogous question of whether using memory in computer vision systems can not only improve the accuracy of object detection in video streams, but also reduce the computation time. By interleaving conventional feature extractors with extremely lightweight ones which only need to recognize the gist of the scene, we show that minimal computation is required to produce accurate detections when temporal memory is present. In addition, we show that the memory contains enough information for deploying reinforcement learning algorithms to learn an adaptive inference policy. Our model achieves state-of-the-art performance among mobile methods on the Imagenet VID 2015 dataset, while running at speeds of up to 70+ FPS on a Pixel 3 phone.
Initial work for extending single-image detection to the video domain usually centered on a postprocessing step where per-frame detections are linked together to form tracks, and detection confidences are modified based on other detections in the track. Seq-nms @cite_1 finds tracks via dynamic programming and boosts the confidence of weaker predictions. TCNN @cite_25 @cite_0 provides a pipeline with optical flow to propagate detections across frames and a tracking algorithm to find tubelets for rescoring. These early approaches yielded sizeable performance improvements, but did not fundamentally change the underlying per-frame detection process, which limited their effectiveness.
{ "cite_N": [ "@cite_0", "@cite_1", "@cite_25" ], "mid": [ "", "2282391807", "2336589871" ], "abstract": [ "", "Video object detection is challenging because objects that are easily detected in one frame may be difficult to detect in another frame within the same clip. Recently, there have been major advances for doing object detection in a single image. These methods typically contain three phases: (i) object proposal generation (ii) object classification and (iii) post-processing. We propose a modification of the post-processing phase that uses high-scoring object detections from nearby frames to boost scores of weaker detections within the same clip. We show that our method obtains superior results to state-of-the-art single image object detection techniques. Our method placed 3rd in the video object detection (VID) task of the ImageNet Large Scale Visual Recognition Challenge 2015 (ILSVRC2015).", "The state-of-the-art performance for object detection has been significantly improved over the past two years. Besides the introduction of powerful deep neural networks, such as GoogleNet and VGG, novel object detection frameworks, such as R-CNN and its successors, Fast R-CNN, and Faster R-CNN, play an essential role in improving the state of the art. Despite their effectiveness on still images, those frameworks are not specifically designed for object detection from videos. Temporal and contextual information of videos are not fully investigated and utilized. In this paper, we propose a deep learning framework that incorporates temporal and contextual information from tubelets obtained in videos, which dramatically improves the baseline performance of existing still-image detection frameworks when they are applied to videos. It is called T-CNN, i.e., tubelets with convolutional neueral networks. The proposed framework won newly introduced an object-detection-from-video task with provided data in the ImageNet Large-Scale Visual Recognition Challenge 2015. Code is publicly available at https: github.com myfavouritekk T-CNN ." ] }
1903.10172
2924096267
With a single eye fixation lasting a fraction of a second, the human visual system is capable of forming a rich representation of a complex environment, reaching a holistic understanding which facilitates object recognition and detection. This phenomenon is known as recognizing the "gist" of the scene and is accomplished by relying on relevant prior knowledge. This paper addresses the analogous question of whether using memory in computer vision systems can not only improve the accuracy of object detection in video streams, but also reduce the computation time. By interleaving conventional feature extractors with extremely lightweight ones which only need to recognize the gist of the scene, we show that minimal computation is required to produce accurate detections when temporal memory is present. In addition, we show that the memory contains enough information for deploying reinforcement learning algorithms to learn an adaptive inference policy. Our model achieves state-of-the-art performance among mobile methods on the Imagenet VID 2015 dataset, while running at speeds of up to 70+ FPS on a Pixel 3 phone.
Later, @cite_5 discovered that intermediate features in a convolutional neural network could be directly propagated between video frames via optical flow. The DFF framework @cite_5 demonstrated that it is sufficient to compute detections on sparse keyframes and perform feature propagation on all other frames by computing optical flow, which is substantially cheaper. FGFA @cite_14 showed that this idea can also be used to improve accuracy if per-frame detections are densely computed and features from neighboring frames are warped to the current frame and aggregated with weighted averaging. Impression networks @cite_2 balance speed and accuracy by using sparse keyframes but retaining an "impression feature" which is aggregated across keyframes and stores long-term temporal information. Further work by @cite_31 introduces efficient feature aggregation as well as a measure of feature quality after warping, which is used to improve keyframe selection and sparsely replace poorly warped features.
{ "cite_N": [ "@cite_5", "@cite_31", "@cite_14", "@cite_2" ], "mid": [ "2552900565", "2963653352", "2964286567", "2777578098" ], "abstract": [ "Deep convolutional neutral networks have achieved great success on image recognition tasks. Yet, it is non-trivial to transfer the state-of-the-art image recognition networks to videos as per-frame evaluation is too slow and unaffordable. We present deep feature flow, a fast and accurate framework for video recognition. It runs the expensive convolutional sub-network only on sparse key frames and propagates their deep feature maps to other frames via a flow field. It achieves significant speedup as flow computation is relatively fast. The end-to-end training of the whole architecture significantly boosts the recognition accuracy. Deep feature flow is flexible and general. It is validated on two recent large scale video datasets. It makes a large step towards practical video recognition. Code would be released.", "There has been significant progresses for image object detection in recent years. Nevertheless, video object detection has received little attention, although it is more challenging and more important in practical scenarios. Built upon the recent works [37, 36], this work proposes a unified approach based on the principle of multi-frame end-to-end learning of features and cross-frame motion. Our approach extends prior works with three new techniques and steadily pushes forward the performance envelope (speed-accuracy tradeoff), towards high performance video object detection.", "Extending state-of-the-art object detectors from image to video is challenging. The accuracy of detection suffers from degenerated object appearances in videos, e.g., motion blur, video defocus, rare poses, etc. Existing work attempts to exploit temporal information on box level, but such methods are not trained end-to-end. We present flow-guided feature aggregation, an accurate and end-to-end learning framework for video object detection. It leverages temporal coherence on feature level instead. It improves the per-frame features by aggregation of nearby features along the motion paths, and thus improves the video recognition accuracy. Our method significantly improves upon strong singleframe baselines in ImageNet VID [33], especially for more challenging fast moving objects. Our framework is principled, and on par with the best engineered systems winning the ImageNet VID challenges 2016, without additional bells-and-whistles. The code would be released.", "Video object detection is more challenging compared to image object detection. Previous works proved that applying object detector frame by frame is not only slow but also inaccurate. Visual clues get weakened by defocus and motion blur, causing failure on corresponding frames. Multi-frame feature fusion methods proved effective in improving the accuracy, but they dramatically sacrifice the speed. Feature propagation based methods proved effective in improving the speed, but they sacrifice the accuracy. So is it possible to improve speed and performance simultaneously? Inspired by how human utilize impression to recognize objects from blurry frames, we propose Impression Network that embodies a natural and efficient feature aggregation mechanism. In our framework, an impression feature is established by iteratively absorbing sparsely extracted frame features. The impression feature is propagated all the way down the video, helping enhance features of low-quality frames. 
This impression mechanism makes it possible to perform long-range multi-frame feature fusion among sparse keyframes with minimal overhead. It significantly improves per-frame detection baseline on ImageNet VID while being 3 times faster (20 fps). We hope Impression Network can provide a new perspective on video feature enhancement. Code will be made available." ] }
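The feature propagation step shared by DFF and FGFA can be sketched as bilinear warping of keyframe features along an optical-flow field (FGFA then aggregates several such warped maps with adaptive weights). The sketch below uses torch.nn.functional.grid_sample and assumes the flow is expressed in pixels, which is a simplification of the actual implementations.

    import torch
    import torch.nn.functional as F

    def warp_features(key_feat, flow):
        # key_feat: [B, C, H, W] features computed on the expensive keyframe.
        # flow:     [B, 2, H, W] displacement (in pixels) from the current
        #           frame back to the keyframe.
        b, _, h, w = key_feat.shape
        ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
        grid = torch.stack([xs, ys], dim=0).float().unsqueeze(0) + flow
        # Normalize sampling locations to [-1, 1] as grid_sample expects.
        gx = 2.0 * grid[:, 0] / max(w - 1, 1) - 1.0
        gy = 2.0 * grid[:, 1] / max(h - 1, 1) - 1.0
        return F.grid_sample(key_feat, torch.stack([gx, gy], dim=3), align_corners=True)

    warped = warp_features(torch.randn(1, 64, 32, 32), torch.zeros(1, 2, 32, 32))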
1903.10172
2924096267
With a single eye fixation lasting a fraction of a second, the human visual system is capable of forming a rich representation of a complex environment, reaching a holistic understanding which facilitates object recognition and detection. This phenomenon is known as recognizing the "gist" of the scene and is accomplished by relying on relevant prior knowledge. This paper addresses the analogous question of whether using memory in computer vision systems can not only improve the accuracy of object detection in video streams, but also reduce the computation time. By interleaving conventional feature extractors with extremely lightweight ones which only need to recognize the gist of the scene, we show that minimal computation is required to produce accurate detections when temporal memory is present. In addition, we show that the memory contains enough information for deploying reinforcement learning algorithms to learn an adaptive inference policy. Our model achieves state-of-the-art performance among mobile methods on the Imagenet VID 2015 dataset, while running at speeds of up to 70+ FPS on a Pixel 3 phone.
This paradigm has also been applied to mobile-focused video object detection, which is particularly relevant to this paper. In @cite_13 , flow-guided feature propagation is combined with a GRU module, a very efficient feature extractor, and a lightweight flow network to demonstrate that flow-based methods are viable in computationally constrained environments. Our work also applies to the mobile setting, but by interleaving specialized feature extractors rather than using flow to propagate features, we remove the dependence on optical flow and hence the need for optical flow training data and the additional optical flow pre-training stage.
{ "cite_N": [ "@cite_13" ], "mid": [ "2797306806" ], "abstract": [ "Despite the recent success of video object detection on Desktop GPUs, its architecture is still far too heavy for mobiles. It is also unclear whether the key principles of sparse feature propagation and multi-frame feature aggregation apply at very limited computational resources. In this paper, we present a light weight network architecture for video object detection on mobiles. Light weight image object detector is applied on sparse key frames. A very small network, Light Flow, is designed for establishing correspondence across frames. A flow-guided GRU module is designed to effectively aggregate features on key frames. For non-key frames, sparse feature propagation is performed. The whole network can be trained end-to-end. The proposed system achieves 60.2 mAP score at speed of 25.6 fps on mobiles (e.g., HuaWei Mate 8)." ] }
1903.10172
2924096267
With a single eye fixation lasting a fraction of a second, the human visual system is capable of forming a rich representation of a complex environment, reaching a holistic understanding which facilitates object recognition and detection. This phenomenon is known as recognizing the "gist" of the scene and is accomplished by relying on relevant prior knowledge. This paper addresses the analogous question of whether using memory in computer vision systems can not only improve the accuracy of object detection in video streams, but also reduce the computation time. By interleaving conventional feature extractors with extremely lightweight ones which only need to recognize the gist of the scene, we show that minimal computation is required to produce accurate detections when temporal memory is present. In addition, we show that the memory contains enough information for deploying reinforcement learning algorithms to learn an adaptive inference policy. Our model achieves state-of-the-art performance among mobile methods on the Imagenet VID 2015 dataset, while running at speeds of up to 70+ FPS on a Pixel 3 phone.
A third class of video object detection methods explicitly processes multiple frames of the video simultaneously. D&T @cite_41 combines detection and tracking by adding an RoI tracking operation and loss on pairs of frames, while STSN @cite_10 uses deformable convolutions to sample features from adjacent frames. The authors of @cite_44 propose using a scale-time lattice to generate detections in a coarse-to-fine manner. Though D&T and the scale-time lattice can improve detection speed by sampling sparse keyframes and propagating results to intermediate frames, all of these works are still focused on high-accuracy detection and are nontrivial to generalize to a mobile setting. Our approach also extracts features from each frame rather than entirely propagating results from keyframes, which allows access to a greater quantity of information.
{ "cite_N": [ "@cite_41", "@cite_44", "@cite_10" ], "mid": [ "2962855257", "2963585656", "2964086649" ], "abstract": [ "Recent approaches for high accuracy detection and tracking of object categories in video consist of complex multistage solutions that become more cumbersome each year. In this paper we propose a ConvNet architecture that jointly performs detection and tracking, solving the task in a simple and effective way. Our contributions are threefold: (i) we set up a ConvNet architecture for simultaneous detection and tracking, using a multi-task objective for frame-based object detection and across-frame track regression; (ii) we introduce correlation features that represent object co-occurrences across time to aid the ConvNet during tracking; and (iii) we link the frame level detections based on our across-frame tracklets to produce high accuracy detections at the video level. Our ConvNet architecture for spatiotemporal object detection is evaluated on the large-scale ImageNet VID dataset where it achieves state-of-the-art results. Our approach provides better single model performance than the winning method of the last ImageNet challenge while being conceptually much simpler. Finally, we show that by increasing the temporal stride we can dramatically increase the tracker speed.", "High-performance object detection relies on expensive convolutional networks to compute features, often leading to significant challenges in applications, e.g. those that require detecting objects from video streams in real time. The key to this problem is to trade accuracy for efficiency in an effective way, i.e. reducing the computing cost while maintaining competitive performance. To seek a good balance, previous efforts usually focus on optimizing the model architectures. This paper explores an alternative approach, that is, to reallocate the computation over a scale-time space. The basic idea is to perform expensive detection sparsely and propagate the results across both scales and time with substantially cheaper networks, by exploiting the strong correlations among them. Specifically, we present a unified framework that integrates detection, temporal propagation, and across-scale refinement on a Scale-Time Lattice. On this framework, one can explore various strategies to balance performance and cost. Taking advantage of this flexibility, we further develop an adaptive scheme with the detector invoked on demand and thus obtain improved tradeoff. On ImageNet VID dataset, the proposed method can achieve a competitive mAP 79.6 at 20 fps, or 79.0 at 62 fps as a performance speed tradeoff.1", "We propose a Spatiotemporal Sampling Network (STSN) that uses deformable convolutions across time for object detection in videos. Our STSN performs object detection in a video frame by learning to spatially sample features from the adjacent frames. This naturally renders the approach robust to occlusion or motion blur in individual frames. Our framework does not require additional supervision, as it optimizes sampling locations directly with respect to object detection performance. Our STSN outperforms the state-of-the-art on the ImageNet VID dataset and compared to prior video object detection methods it uses a simpler design, and does not require optical flow data for training." ] }
1903.09870
2923147261
Learned Neural Network based policies have shown promising results for robot navigation. However, most of these approaches fall short of being used on a real robot -- they require extensive training in environments, most of which do not simulate the visuals and the dynamics of the real world well enough that the resulting policies can be easily deployed. We present a novel Neural Net based policy, , which allows for easy deployment on a real robot. It consists of two sub policies -- a high level policy which can understand real images and perform long range planning expressed in high level commands; a low level policy that can translate the long range plan into low level commands on a specific platform in a safe and robust manner. For every new deployment, the high level policy is trained on an easily obtainable scan of the environment modeling its visuals and layout. We detail the design of such an environment and how one can use it for training a final navigation policy. Further, we demonstrate a learned low-level policy. We deploy the model in a large office building and test it extensively, achieving @math success rate over long navigation runs and outperforming SLAM-based models in the same settings.
Robot navigation is a very old problem, and there is an extensive literature on SLAM, planning, and navigation @cite_1 @cite_32 @cite_23 @cite_5 , which due to space limitations will not be discussed here. It suffices to say, though, that our work falls roughly into the category of mapless navigation. The neural policy requires neither a geometric map of the environment nor localization at test time.
{ "cite_N": [ "@cite_5", "@cite_1", "@cite_32", "@cite_23" ], "mid": [ "2460657278", "2150839555", "2029143333", "1490092700" ], "abstract": [ "In this paper, we propose a method for semantic parsing the 3D point cloud of an entire building using a hierarchical approach: first, the raw data is parsed into semantically meaningful spaces (e.g. rooms, etc) that are aligned into a canonical reference coordinate system. Second, the spaces are parsed into their structural and building elements (e.g. walls, columns, etc). Performing these with a strong notation of global 3D space is the backbone of our method. The alignment in the first step injects strong 3D priors from the canonical coordinate system into the second step for discovering elements. This allows diverse challenging scenarios as man-made indoor spaces often show recurrent geometric patterns while the appearance features can change drastically. We also argue that identification of structural elements in indoor spaces is essentially a detection problem, rather than segmentation which is commonly used. We evaluated our method on a new dataset of several buildings with a covered area of over 6, 000m2 and over 215 million points, demonstrating robust results readily useful for practical applications.", "Surveys the developments of the last 20 years in the area of vision for mobile robot navigation. Two major components of the paper deal with indoor navigation and outdoor navigation. For each component, we have further subdivided our treatment of the subject on the basis of structured and unstructured environments. For indoor robots in structured environments, we have dealt separately with the cases of geometrical and topological models of space. For unstructured environments, we have discussed the cases of navigation using optical flows, using methods from the appearance-based paradigm, and by recognition of specific objects in the environment.", "Mobile robot vision-based navigation has been the source of countless research contributions, from the domains of both vision and control. Vision is becoming more and more common in applications such as localization, automatic map construction, autonomous navigation, path following, inspection, monitoring or risky situation detection. This survey presents those pieces of work, from the nineties until nowadays, which constitute a wide progress in visual navigation techniques for land, aerial and autonomous underwater vehicles. The paper deals with two major approaches: map-based navigation and mapless navigation. Map-based navigation has been in turn subdivided in metric map-based navigation and topological map-based navigation. Our outline to mapless navigation includes reactive techniques based on qualitative characteristics extraction, appearance-based localization, optical flow, features tracking, plane ground detection tracking, etc... The recent concept of visual sonar has also been revised.", "Dieses Kapitel gibt eine Einfuhrung in die Kartenerstellung und gleichzeitige Lokalisierung mobiler Sensorplattformen. Die gemeinsame Losung dieser beiden Probleme ist eine Voraussetzung fur die Realisierung vieler technischer Systeme von leichten Fluggeraten uber autonome Roboter bis hin zu mobilen Kameras. Als Simultaneous Localization and Mapping bezeichnet man die Aufgabe, die Trajektorie samt Orientierungsinformation einer sich bewegenden Plattform aus Beobachtungen zu schatzen und gleichzeitig eine Karte der Umgebung zu erstellen. 
Diese Aufgabe ist in vielen realen Systemen von entscheidender Bedeutung: einerseits stellen hochgenaue Karten mitunter einen Wert an sich fur den Benutzer oder eine spezielle Anwendung dar, andererseits benotigen beispielsweise autonome Roboter ein solches Modell, um zielgerichtet selbststandig navigieren zu konnen. Das Simultaneous Localization and Mapping Problem, beziehungsweise Teilprobleme davon, werden, je nach verwendeter Sensorik, auch als Bundelausgleichung, Structure from Motion oder SLAM bezeichnet." ] }
1903.09870
2923147261
Learned Neural Network based policies have shown promising results for robot navigation. However, most of these approaches fall short of being used on a real robot -- they require extensive training in environments, most of which do not simulate the visuals and the dynamics of the real world well enough that the resulting policies can be easily deployed. We present a novel Neural Net based policy, , which allows for easy deployment on a real robot. It consists of two sub policies -- a high level policy which can understand real images and perform long range planning expressed in high level commands; a low level policy that can translate the long range plan into low level commands on a specific platform in a safe and robust manner. For every new deployment, the high level policy is trained on an easily obtainable scan of the environment modeling its visuals and layout. We detail the design of such an environment and how one can use it for training a final navigation policy. Further, we demonstrate a learned low-level policy. We deploy the model in a large office building and test it extensively, achieving @math success rate over long navigation runs and outperforming SLAM-based models in the same settings.
In recent years, RL-learned neural net policies have been explored for navigation. These are usually learned and tested in simulation. Examples include: A3C on 3D mazes @cite_30 ; A3C on AI2-THOR @cite_11 ; ADDPG tested with a depth sensor on a real robot @cite_29 ; and RL algorithms trained and tested only on scans of real spaces (no real robot) @cite_10 @cite_25 or on SUNCG @cite_34 . Due to the large sample complexity of RL, the above methods cannot be learned directly on a real robot.
{ "cite_N": [ "@cite_30", "@cite_29", "@cite_34", "@cite_10", "@cite_25", "@cite_11" ], "mid": [ "2567015638", "2963428623", "2963088756", "2772390515", "2593841437", "2962887844" ], "abstract": [ "Learning to navigate in complex environments with dynamic elements is an important milestone in developing AI agents. In this work we formulate the navigation question as a reinforcement learning problem and show that data efficiency and task performance can be dramatically improved by relying on additional auxiliary tasks to bootstrap learning. In particular we consider jointly learning the goal-driven reinforcement learning problem with an unsupervised depth prediction task and a self-supervised loop closure classification task. Using this approach we can learn to navigate from raw sensory input in complicated 3D mazes, approaching human-level performance even under conditions where the goal location changes frequently. We provide detailed analysis of the agent behaviour, its ability to localise, and its network activity dynamics, that show that the agent implicitly learns key navigation abilities, with only sparse rewards and without direct supervision.", "We present a learning-based mapless motion planner by taking the sparse 10-dimensional range findings and the target position with respect to the mobile robot coordinate frame as input and the continuous steering commands as output. Traditional motion planners for mobile ground robots with a laser range sensor mostly depend on the obstacle map of the navigation environment where both the highly precise laser sensor and the obstacle map building work of the environment are indispensable. We show that, through an asynchronous deep reinforcement learning method, a mapless motion planner can be trained end-to-end without any manually designed features and prior demonstrations. The trained planner can be directly applied in unseen virtual and real environments. The experiments show that the proposed mapless motion planner can navigate the nonholonomic mobile robot to the desired targets without colliding with any obstacles.", "Towards bridging the gap between machine and human intelligence, it is of utmost importance to introduce environments that are visually realistic and rich in content. In such environments, one can evaluate and improve a crucial property of practical intelligent systems, namely generalization. In this work, we build House3D, a rich, extensible and efficient environment that contains 45,622 human-designed 3D scenes of houses, ranging from single-room studios to multi-storeyed houses, equipped with a diverse set of fully labeled 3D objects, textures and scene layouts, based on the SUNCG dataset (, 2017). With an emphasis on semantic-level generalization, we study the task of concept-driven navigation, RoomNav, using a subset of houses in House3D. In RoomNav, an agent navigates towards a target specified by a semantic concept. To succeed, the agent learns to comprehend the scene it lives in by developing perception, understand the concept by mapping it to the correct semantics, and navigate to the target by obeying the underlying physical rules. We train RL agents with both continuous and discrete action spaces and show their ability to generalize in new unseen environments. 
In particular, we observe that (1) training is substantially harder on large house sets but results in better generalization, (2) using semantic signals (e.g., segmentation mask) boosts the generalization performance, and (3) gated networks on semantic input signal lead to improved training performance and generalization. We hope House3D, including the analysis of the RoomNav task, serves as a building block towards designing practical intelligent systems and we wish it to be broadly adopted by the community.", "We present MINOS, a simulator designed to support the development of multisensory models for goal-directed navigation in complex indoor environments. The simulator leverages large datasets of complex 3D environments and supports flexible configuration of multimodal sensor suites. We use MINOS to benchmark deep-learning-based navigation methods, to analyze the influence of environmental complexity on navigation performance, and to carry out a controlled study of multimodality in sensorimotor learning. The experiments show that current deep reinforcement learning approaches fail in large realistic environments. The experiments also indicate that multimodality is beneficial in learning to navigate cluttered scenes. MINOS is released open-source to the research community at this http URL . A video that shows MINOS can be found at this https URL", "We introduce a neural architecture for navigation in novel environments. Our proposed architecture learns to map from first-person views and plans a sequence of actions towards goals in the environment. The Cognitive Mapper and Planner (CMP) is based on two key ideas: a) a unified joint architecture for mapping and planning, such that the mapping is driven by the needs of the planner, and b) a spatial memory with the ability to plan given an incomplete set of observations about the world. CMP constructs a top-down belief map of the world and applies a differentiable neural net planner to produce the next action at each time step. The accumulated belief of the world enables the agent to track visited regions of the environment. Our experiments demonstrate that CMP outperforms both reactive strategies and standard memory-based architectures and performs well in novel environments. Furthermore, we show that CMP can also achieve semantically specified goals, such as go to a chair.", "Two less addressed issues of deep reinforcement learning are (1) lack of generalization capability to new goals, and (2) data inefficiency, i.e., the model requires several (and often costly) episodes of trial and error to converge, which makes it impractical to be applied to real-world scenarios. In this paper, we address these two issues and apply our model to target-driven visual navigation. To address the first issue, we propose an actor-critic model whose policy is a function of the goal as well as the current state, which allows better generalization. To address the second issue, we propose the AI2-THOR framework, which provides an environment with high-quality 3D scenes and a physics engine. Our framework enables agents to take actions and interact with objects. Hence, we can collect a huge number of training samples efficiently. 
We show that our proposed method (1) converges faster than the state-of-the-art deep reinforcement learning methods, (2) generalizes across targets and scenes, (3) generalizes to a real robot scenario with a small amount of fine-tuning (although the model is trained in simulation), (4) is end-to-end trainable and does not need feature engineering, feature matching between frames or 3D reconstruction of the environment." ] }
1903.09870
2923147261
Learned Neural Network based policies have shown promising results for robot navigation. However, most of these approaches fall short of being used on a real robot -- they require extensive training in environments, most of which do not simulate the visuals and the dynamics of the real world well enough that the resulting policies can be easily deployed. We present a novel Neural Net based policy, , which allows for easy deployment on a real robot. It consists of two sub policies -- a high level policy which can understand real images and perform long range planning expressed in high level commands; a low level policy that can translate the long range plan into low level commands on a specific platform in a safe and robust manner. For every new deployment, the high level policy is trained on an easily obtainable scan of the environment modeling its visuals and layout. We detail the design of such an environment and how one can use it for training a final navigation policy. Further, we demonstrate a learned low-level policy. We deploy the model in a large office building and test it extensively, achieving @math success rate over long navigation runs and outperforming SLAM-based models in the same settings.
A high-level environment has been used by @cite_14 without deployment on a real robot. @cite_15 @cite_12 deploy an RL-trained neural net policy and use an environment constructed from a traversal. However, their system is not fully deployed on a real robot -- the policy actions are executed by an operator, whereas our system employs a second low-level policy to execute these actions. Thus, in contrast to them, we provide a fully deployable navigation solution.
{ "cite_N": [ "@cite_15", "@cite_14", "@cite_12" ], "mid": [ "2963867315", "2795911278", "2770679144" ], "abstract": [ "Model-free reinforcement learning has recently been shown to be effective at learning navigation policies from complex image input. However, these algorithms tend to require large amounts of interaction with the environment, which can be prohibitively costly to obtain on robots in the real world. We present an approach for efficiently learning goal-directed navigation policies on a mobile robot, from only a single coverage traversal of recorded data. The navigation agent learns an effective policy over a diverse action space in a large heterogeneous environment consisting of more than 2km of travel, through buildings and outdoor regions that collectively exhibit large variations in visual appearance, self-similarity, and connectivity. We compare pretrained visual encoders that enable precomputation of visual embeddings to achieve a throughput of tens of thousands of transitions per second at training time on a commodity desktop computer, allowing agents to learn from millions of trajectories of experience in a matter of hours. We propose multi- ple forms of computationally efficient stochastic augmentation to enable the learned policy to generalise beyond these precomputed embeddings, and demonstrate successful deployment of the learned policy on the real robot without fine tuning, despite environmental appearance differences at test time. The dataset and code required to reproduce these results and apply the technique to other datasets and robots is made publicly available at rl-navigation.github.io deployable .", "Navigating through unstructured environments is a basic capability of intelligent creatures, and thus is of fundamental interest in the study and development of artificial intelligence. Long-range navigation is a complex cognitive task that relies on developing an internal representation of space, grounded by recognisable landmarks and robust visual processing, that can simultaneously support continuous self-localisation (\"I am here\") and a representation of the goal (\"I am going there\"). Building upon recent research that applies deep reinforcement learning to maze navigation problems, we present an end-to-end deep reinforcement learning approach that can be applied on a city scale. Recognising that successful navigation relies on integration of general policies with locale-specific knowledge, we propose a dual pathway architecture that allows locale-specific features to be encapsulated, while still enabling transfer to multiple cities. We present an interactive navigation environment that uses Google StreetView for its photographic content and worldwide coverage, and demonstrate that our learning method allows agents to learn to navigate multiple cities and to traverse to target destinations that may be kilometres away. The project webpage this http URL contains a video summarising our research and showing the trained agent in diverse city environments and on the transfer task, the form to request the StreetLearn dataset and links to further resources. The StreetLearn environment code is available at this https URL", "Recently, model-free reinforcement learning algorithms have been shown to solve challenging problems by learning from extensive interaction with the environment. 
A significant issue with transferring this success to the robotics domain is that interaction with the real world is costly, but training on limited experience is prone to overfitting. We present a method for learning to navigate, to a fixed goal and in a known environment, on a mobile robot. The robot leverages an interactive world model built from a single traversal of the environment, a pre-trained visual feature encoder, and stochastic environmental augmentation, to demonstrate successful zero-shot transfer under real-world environmental variations without fine-tuning." ] }
1903.09870
2923147261
Learned Neural Network based policies have shown promising results for robot navigation. However, most of these approaches fall short of being used on a real robot -- they require extensive training in environments, most of which do not simulate the visuals and the dynamics of the real world well enough that the resulting policies can be easily deployed. We present a novel Neural Net based policy, , which allows for easy deployment on a real robot. It consists of two sub policies -- a high level policy which can understand real images and perform long range planning expressed in high level commands; a low level policy that can translate the long range plan into low level commands on a specific platform in a safe and robust manner. For every new deployment, the high level policy is trained on an easily obtainable scan of the environment modeling its visuals and layout. We detail the design of such an environment and how one can use it for training a final navigation policy. Further, we demonstrate a learned low-level policy. We deploy the model in a large office building and test it extensively, achieving @math success rate over long navigation runs and outperforming SLAM-based models in the same settings.
An explicit path planning strategy, similar to our approach, has been employed by @cite_20 , in a learned topological graph. However, this path planning is used for inference only, and the system is evaluated in synthetic environments.
{ "cite_N": [ "@cite_20" ], "mid": [ "2963245725" ], "abstract": [ "We introduce a new memory architecture for navigation in previously unseen environments, inspired by landmark-based navigation in animals. The proposed semi-parametric topological memory (SPTM) consists of a (non-parametric) graph with nodes corresponding to locations in the environment and a (parametric) deep network capable of retrieving nodes from the graph based on observations. The graph stores no metric information, only connectivity of locations corresponding to the nodes. We use SPTM as a planning module in a navigation system. Given only 5 minutes of footage of a previously unseen maze, an SPTM-based navigation agent can build a topological map of the environment and use it to confidently navigate towards goals. The SPTM-based agent outperforms existing agents with LSTM memory by a large margin." ] }
1903.09870
2923147261
Learned Neural Network based policies have shown promising results for robot navigation. However, most of these approaches fall short of being used on a real robot -- they require extensive training in environments, most of which do not simulate the visuals and the dynamics of the real world well enough that the resulting policies can be easily deployed. We present a novel Neural Net based policy, , which allows for easy deployment on a real robot. It consists of two sub policies -- a high level policy which can understand real images and perform long range planning expressed in high level commands; a low level policy that can translate the long range plan into low level commands on a specific platform in a safe and robust manner. For every new deployment, the high level policy is trained on an easily obtainable scan of the environment modeling its visuals and layout. We detail the design of such an environment and how one can use it for training a final navigation policy. Further, we demonstrate a learned low-level policy. We deploy the model in a large office building and test it extensively, achieving @math success rate over long navigation runs and outperforming SLAM-based models in the same settings.
Beyond RL algorithms, investigations have been conducted into appropriate architectures, with emphasis on using models with external memory @cite_18 @cite_28 @cite_4 @cite_27 . These approaches, in their current form, are only applied in simulation. Learned low level controllers have been developed @cite_33 , and also combined with traditional PRM planning @cite_3 .
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_33", "@cite_28", "@cite_3", "@cite_27" ], "mid": [ "2772545238", "2418628973", "2893359011", "2594903727", "2962917939", "2964332541" ], "abstract": [ "We present an approach for agents to learn representations of a global map from sensor data, to aid their exploration in new environments. To achieve this, we embed procedures mimicking that of traditional Simultaneous Localization and Mapping (SLAM) into the soft attention based addressing of external memory architectures, in which the external memory acts as an internal representation of the environment. This structure encourages the evolution of SLAM-like behaviors inside a completely differentiable deep neural network. We show that this approach can help reinforcement learning agents to successfully explore new environments where long-term memory is essential. We validate our approach in both challenging grid-world environments and preliminary Gazebo experiments. A video of our experiments can be found at: this https URL.", "In this paper, we introduce a new set of reinforcement learning (RL) tasks in Minecraft (a flexible 3D world). We then use these tasks to systematically compare and contrast existing deep reinforcement learning (DRL) architectures with our new memory-based DRL architectures. These tasks are designed to emphasize, in a controllable manner, issues that pose challenges for RL methods including partial observability (due to first-person visual observations), delayed rewards, high-dimensional visual observations, and the need to use active perception in a correct manner so as to perform well in the tasks. While these tasks are conceptually simple to describe, by virtue of having all of these challenges simultaneously they are difficult for current DRL architectures. Additionally, we evaluate the generalization performance of the architectures on environments not used during training. The experimental results show that our new architectures generalize to unseen environments better than existing DRL architectures.", "A longstanding goal of behavior-based robotics is to solve high-level navigation tasks using end to end navigation behaviors that directly map sensors to actions. Navigation behaviors, such as reaching a goal or following a path without collisions, can be learned from exploration and interaction with the environment, but are constrained by the type and quality of a robot's sensors, dynamics, and actuators. Traditional motion planning handles varied robot geometry and dynamics, but typically assumes high-quality observations. Modern vision-based navigation typically considers imperfect or partial observations, but simplifies the robot action space. With both approaches, the transition from simulation to reality can be difficult. Here, we learn two end to end navigation behaviors that avoid moving obstacles: point to point and path following. These policies receive noisy lidar observations and output robot linear and angular velocities. We train these policies in small, static environments with Shaped-DDPG, an adaptation of the Deep Deterministic Policy Gradient (DDPG) reinforcement learning method which optimizes reward and network architecture. Over 500 meters of on-robot experiments show , these policies generalize to new environments and moving obstacles, are robust to sensor, actuator, and localization noise, and can serve as robust building blocks for larger navigation tasks. 
The path following and point and point policies are 83 and 56 more successful than the baseline, respectively.", "A critical component to enabling intelligent reasoning in partially observable environments is memory. Despite this importance, Deep Reinforcement Learning (DRL) agents have so far used relatively simple memory architectures, with the main methods to overcome partial observability being either a temporal convolution over the past k frames or an LSTM layer. More recent work (, 2016) has went beyond these architectures by using memory networks which can allow more sophisticated addressing schemes over the past k frames. But even these architectures are unsatisfactory due to the reason that they are limited to only remembering information from the last k frames. In this paper, we develop a memory system with an adaptable write operator that is customized to the sorts of 3D environments that DRL agents typically interact with. This architecture, called the Neural Map, uses a spatially structured 2D memory image to learn to store arbitrary information about the environment over long time lags. We demonstrate empirically that the Neural Map surpasses previous DRL memories on a set of challenging 2D and 3D maze environments and show that it is capable of generalizing to environments that were not seen during training.", "We present PRM-RL, a hierarchical method for long-range navigation task completion that combines sampling-based path planning with reinforcement learning (RL). The RL agents learn short-range, point-to-point navigation policies that capture robot dynamics and task constraints without knowledge of the large-scale topology. Next, the sampling-based planners provide roadmaps which connect robot configurations that can be successfully navigated by the RL agent. The same RL agents are used to control the robot under the direction of the planning, enabling long-range navigation. We use the Probabilistic Roadmaps (PRMs) for the sampling-based planner. The RL agents are constructed using feature-based and deep neural net policies in continuous state and action spaces. We evaluate PRM-RL, both in simulation and on-robot, on two navigation tasks with non-trivial robot dynamics: end-to-end differential drive indoor navigation in office environments, and aerial cargo delivery in urban environments with load displacement constraints. Our results show improvement in task completion over both RL agents on their own and traditional sampling-based planners. In the indoor navigation task, PRM-RL successfully completes up to 215 m long trajectories under noisy sensor conditions, and the aerial cargo delivery completes flights over 1000 m without violating the task constraints in an environment 63 million times larger than used in training.", "Planning problems in partially observable environments cannot be solved directly with convolutional networks and require some form of memory. But, even memory networks with sophisticated addressing schemes are unable to learn intelligent reasoning satisfactorily due to the complexity of simultaneously learning to access memory and plan. To mitigate these challenges we propose the Memory Augmented Control Network (MACN). The network splits planning into a hierarchical process. At a lower level, it learns to plan in a locally observed space. At a higher level, it uses a collection of policies computed on locally observed spaces to learn an optimal plan in the global environment it is operating in. 
The performance of the network is evaluated on path planning tasks in environments in the presence of simple and complex obstacles and in addition, is tested for its ability to generalize to new environments not seen in the training set." ] }
1903.09887
2924535634
We propose a new architecture for distributed image compression from a group of distributed data sources. The proposed architecture, which we refer to as symmetric Encoder-Decoder Convolutional Recurrent Neural Network, is able to significantly outperform the state-of-the-art compression techniques such as JPEG on rate-distortion curves. We also show that by training distributed encoders and joint decoders on correlated data sources, the performance of compression is much better than that by training codecs separately. For 10 distributed sources, our distributed system remarkably performs within 2 dB peak signal-to-noise ratio (PSNR) of that of a single codec trained with all data sources. We experiment distributed sources with different correlations and show how our methodology well matches the Slepian-Wolf Theorem in Distributed Source Coding (DSC). Our method is also shown to be robust to the lack of presence of encoded data from a number of distributed sources. To the best of our knowledge, this is the first data-driven DSC framework for general distributed code design with Deep Learning.
Though there has been a variety of research on lossy data compression in the past few decades, little attention has been paid to a systematic approach for general and practical distributed code design, especially in the presence of an arbitrary number of nontrivial data sources with arbitrary correlation @cite_24 . A main motivation of this work is to replace hand-crafted code design with data-driven approaches. To the best of our knowledge, what we propose is the first data-driven DSC architecture. Unlike hand-crafted quantizers, our neural network-based quantizers show that the correlations among different data sources can be captured by the model parameters. We empirically show that the Slepian-Wolf limit can be achieved with our methodology.
{ "cite_N": [ "@cite_24" ], "mid": [ "2162404506" ], "abstract": [ "In recent years, sensor research has been undergoing a quiet revolution, promising to have a significant impact throughout society that could quite possibly dwarf previous milestones in the information revolution. Realizing the great promise of sensor networks requires more than a mere advance in individual technologies. It relies on many components working together in an efficient, unattended, comprehensible, and trustworthy manner. One of the enabling technologies in sensor networks is the distributed source coding (DSC), which refers to the compression of the multiple correlated sensor outputs that does not communicate with each other. DSC allows a many-to-one video coding paradigm that effectively swaps encoder-decoder complexity with respect to conventional video coding, thereby representing a fundamental concept shift in video processing. This article has presented an intensive discussion on two DSC techniques, namely Slepian-Wolf coding and Wyner-Ziv coding. The Slepian and Wolf coding have theoretically shown that separate encoding is as efficient as joint coding for lossless compression in channel coding." ] }
1903.09887
2924535634
We propose a new architecture for distributed image compression from a group of distributed data sources. The proposed architecture, which we refer to as symmetric Encoder-Decoder Convolutional Recurrent Neural Network, is able to significantly outperform the state-of-the-art compression techniques such as JPEG on rate-distortion curves. We also show that by training distributed encoders and joint decoders on correlated data sources, the performance of compression is much better than that by training codecs separately. For 10 distributed sources, our distributed system remarkably performs within 2 dB peak signal-to-noise ratio (PSNR) of that of a single codec trained with all data sources. We experiment distributed sources with different correlations and show how our methodology well matches the Slepian-Wolf Theorem in Distributed Source Coding (DSC). Our method is also shown to be robust to the lack of presence of encoded data from a number of distributed sources. To the best of our knowledge, this is the first data-driven DSC framework for general distributed code design with Deep Learning.
The standard methods of compression with deep learning roughly fall into two categories: non-recurrent autoencoders, which rely on an @math penalty to sparsify the 8-bit integer codes, and recurrent models, which introduce binary codes at each iteration. The compression rate of non-recurrent models is not scalable, and their performance relies heavily on the sparsity that an entropy codec can exploit. Another challenge is defining a useful derivative for the quantization of the bottleneck representations. Ballé @cite_19 replaced the non-differentiable quantization step with a continuous relaxation by adding uniform noise, whereas Toderici @cite_28 used a stochastic form of binarization. The recurrent models @cite_28 @cite_2 , in contrast, have scalable compression rates: they generate additional codes by repeatedly compressing the residual difference between the input and the output of the model.
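To make the two trainable-quantization strategies mentioned above concrete, the following is a minimal, hedged sketch (PyTorch-style code written for illustration here, not the cited authors' implementations); the function names and the toy bottleneck tensor are assumptions made for this example.

import torch

def noisy_quantize(z: torch.Tensor, training: bool = True) -> torch.Tensor:
    # Continuous relaxation of rounding: add U(-0.5, 0.5) noise while training,
    # round to the nearest integer at inference time (Balle-style relaxation).
    if training:
        return z + (torch.rand_like(z) - 0.5)
    return torch.round(z)

def stochastic_binarize(z: torch.Tensor) -> torch.Tensor:
    # Stochastic binarization of z in (-1, 1) to {-1, +1}; the expectation of the
    # hard bits equals z, and the straight-through trick passes gradients unchanged.
    prob_plus = (z + 1.0) / 2.0
    hard = torch.where(torch.rand_like(z) < prob_plus,
                       torch.ones_like(z), -torch.ones_like(z))
    return z + (hard - z).detach()  # forward: hard bits, backward: identity

# Toy usage: quantize a small bottleneck tensor and backpropagate through it.
x = torch.randn(2, 8, requires_grad=True)
codes = stochastic_binarize(torch.tanh(x))
codes.sum().backward()  # gradients reach x despite the discrete forward pass

A recurrent codec in the spirit of @cite_28 @cite_2 would call such a quantizer once per iteration on the residual between the input and the current reconstruction.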
{ "cite_N": [ "@cite_28", "@cite_19", "@cite_2" ], "mid": [ "2276024283", "2552465432", "2597747080" ], "abstract": [ "A large fraction of Internet traffic is now driven by requests from mobile devices with relatively small screens and often stringent bandwidth requirements. Due to these factors, it has become the norm for modern graphics-heavy websites to transmit low-resolution, low-bytecount image previews (thumbnails) as part of the initial page load process to improve apparent page responsiveness. Increasing thumbnail compression beyond the capabilities of existing codecs is therefore a current research focus, as any byte savings will significantly enhance the experience of mobile device users. Toward this end, we propose a general framework for variable-rate image compression and a novel architecture based on convolutional and deconvolutional LSTM recurrent networks. Our models address the main issues that have prevented autoencoder neural networks from competing with existing image compression algorithms: (1) our networks only need to be trained once (not per-image), regardless of input image dimensions and the desired compression rate; (2) our networks are progressive, meaning that the more bits are sent, the more accurate the image reconstruction; and (3) the proposed architecture is at least as efficient as a standard purpose-trained autoencoder for a given number of bits. On a large-scale benchmark of 32 @math 32 thumbnails, our LSTM-based approaches provide better visual quality than (headerless) JPEG, JPEG2000 and WebP, with a storage size that is reduced by 10 or more.", "We describe an image compression method, consisting of a nonlinear analysis transformation, a uniform quantizer, and a nonlinear synthesis transformation. The transforms are constructed in three successive stages of convolutional linear filters and nonlinear activation functions. Unlike most convolutional neural networks, the joint nonlinearity is chosen to implement a form of local gain control, inspired by those used to model biological neurons. Using a variant of stochastic gradient descent, we jointly optimize the entire model for rate-distortion performance over a database of training images, introducing a continuous proxy for the discontinuous loss function arising from the quantizer. Under certain conditions, the relaxed loss function may be interpreted as the log likelihood of a generative model, as implemented by a variational autoencoder. Unlike these models, however, the compression model must operate at any given point along the rate-distortion curve, as specified by a trade-off parameter. Across an independent set of test images, we find that the optimized method generally exhibits better rate-distortion performance than the standard JPEG and JPEG 2000 compression methods. More importantly, we observe a dramatic improvement in visual quality for all images at all bit rates, which is supported by objective quality estimates using MS-SSIM.", "We propose a method for lossy image compression based on recurrent, convolutional neural networks that outperforms BPG (4:2:0), WebP, JPEG2000, and JPEG as measured by MS-SSIM. We introduce three improvements over previous research that lead to this state-of-the-art result using a single model. First, we modify the recurrent architecture to improve spatial diffusion, which allows the network to more effectively capture and propagate image information through the network's hidden state. 
Second, in addition to lossless entropy coding, we use a spatially adaptive bit allocation algorithm to more efficiently use the limited number of bits to encode visually complex image regions. Finally, we show that training with a pixel-wise loss weighted by SSIM increases reconstruction quality according to multiple metrics. We evaluate our method on the Kodak and Tecnick image sets and compare against standard codecs as well as recently published methods based on deep neural networks." ] }
1903.09887
2924535634
We propose a new architecture for distributed image compression from a group of distributed data sources. The proposed architecture, which we refer to as symmetric Encoder-Decoder Convolutional Recurrent Neural Network, is able to significantly outperform the state-of-the-art compression techniques such as JPEG on rate-distortion curves. We also show that by training distributed encoders and joint decoders on correlated data sources, the performance of compression is much better than that by training codecs separately. For 10 distributed sources, our distributed system remarkably performs within 2 dB peak signal-to-noise ratio (PSNR) of that of a single codec trained with all data sources. We experiment distributed sources with different correlations and show how our methodology well matches the Slepian-Wolf Theorem in Distributed Source Coding (DSC). Our method is also shown to be robust to the lack of presence of encoded data from a number of distributed sources. To the best of our knowledge, this is the first data-driven DSC framework for general distributed code design with Deep Learning.
Our methodology is deeply rooted in information-theoretic results on DSC that have been established since the 1970s. The Slepian-Wolf Theorem @cite_23 shows that two correlated data sources encoded separately and decoded jointly can perform as well as joint encoding and decoding, and outperform separate encoding and separate decoding. This striking result indicates that, as long as the codes are jointly decoded, there is no loss in coding efficiency even if the codes are encoded separately. Cover @cite_13 generalizes the achievability of Slepian-Wolf coding to an arbitrary number of correlated sources. Wyner-Ziv coding @cite_20 extends this to the lossy case by giving a rate-distortion curve.
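For reference, the rate region behind the statement above can be written in its standard form (a hedged restatement of the classical result, not a quotation of @cite_23): \[ R_X \ge H(X \mid Y), \qquad R_Y \ge H(Y \mid X), \qquad R_X + R_Y \ge H(X, Y), \] so separate encoders with a joint decoder can reach the same sum rate H(X, Y) as a joint encoder.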
{ "cite_N": [ "@cite_20", "@cite_13", "@cite_23" ], "mid": [ "2150412388", "2155676967", "2099213070" ], "abstract": [ "Let (X_ k , Y_ k ) ^ _ k=1 be a sequence of independent drawings of a pair of dependent random variables X, Y . Let us say that X takes values in the finite set X . It is desired to encode the sequence X_ k in blocks of length n into a binary stream of rate R , which can in turn be decoded as a sequence X _ k , where X _ k X , the reproduction alphabet. The average distortion level is (1 n) ^ n _ k=1 E[D(X_ k , X _ k )] , where D(x, x ) 0, x X , x X , is a preassigned distortion measure. The special assumption made here is that the decoder has access to the side information Y_ k . In this paper we determine the quantity R (d) , defined as the infimum ofrates R such that (with > 0 arbitrarily small and with suitably large n )communication is possible in the above setting at an average distortion level (as defined above) not exceeding d + . The main result is that R (d) = [I(X;Z) - I(Y;Z)] , where the infimum is with respect to all auxiliary random variables Z (which take values in a finite set Z ) that satisfy: i) Y,Z conditionally independent given X ; ii) there exists a function f: Y Z X , such that E[D(X,f(Y,Z))] d . Let R_ X | Y (d) be the rate-distortion function which results when the encoder as well as the decoder has access to the side information Y_ k . In nearly all cases it is shown that when d > 0 then R (d) > R_ X|Y (d) , so that knowledge of the side information at the encoder permits transmission of the X_ k at a given distortion level using a smaller transmission rate. This is in contrast to the situation treated by Slepian and Wolf [5] where, for arbitrarily accurate reproduction of X_ k , i.e., d = for any >0 , knowledge of the side information at the encoder does not allow a reduction of the transmission rate.", "If (X_i, Y_i) _ i=1 ^ is a sequence of independent identically distributed discrete random pairs with (X_i, Y_i) p(x,y) , Slepian and Wolf have shown that the X process and the Y process can be separately described to a common receiver at rates R_X and R_Y hits per symbol if R_X + R_Y > H(X,Y), R_X > H(X ), R_Y > H(Y ) . A simpler proof of this result will be given. As a consequence it is established that the Slepian-Wolf theorem is true without change for arbitrary ergodic processes (X_i,Y_i) _ i=1 ^ and countably infinite alphabets. The extension to an arbitrary number of processes is immediate.", "Correlated information sequences ,X_ -1 ,X_0,X_1, and ,Y_ -1 ,Y_0,Y_1, are generated by repeated independent drawings of a pair of discrete random variables X, Y from a given bivariate distribution P_ XY (x,y) . We determine the minimum number of bits per character R_X and R_Y needed to encode these sequences so that they can be faithfully reproduced under a variety of assumptions regarding the encoders and decoders. The results, some of which are not at all obvious, are presented as an admissible rate region R in the R_X - R_Y plane. They generalize a similar and well-known result for a single information sequence, namely R_X H (X) for faithful reproduction." ] }
1903.09887
2924535634
We propose a new architecture for distributed image compression from a group of distributed data sources. The proposed architecture, which we refer to as symmetric Encoder-Decoder Convolutional Recurrent Neural Network, is able to significantly outperform the state-of-the-art compression techniques such as JPEG on rate-distortion curves. We also show that by training distributed encoders and joint decoders on correlated data sources, the performance of compression is much better than that by training codecs separately. For 10 distributed sources, our distributed system remarkably performs within 2 dB peak signal-to-noise ratio (PSNR) of that of a single codec trained with all data sources. We experiment distributed sources with different correlations and show how our methodology well matches the Slepian-Wolf Theorem in Distributed Source Coding (DSC). Our method is also shown to be robust to the lack of presence of encoded data from a number of distributed sources. To the best of our knowledge, this is the first data-driven DSC framework for general distributed code design with Deep Learning.
Some researchers have also shown the applicability of DSC to still images @cite_25 . In practical applications, low-complexity video encoding benefits from the DSC framework, which can shift complexity from the encoder to the decoder @cite_10 @cite_26 . Scalable Video Coding can also be combined with DSC @cite_6 . These proposed methods indicate the feasibility of DSC in our problem setting.
{ "cite_N": [ "@cite_26", "@cite_10", "@cite_25", "@cite_6" ], "mid": [ "2169809669", "113850913", "1555669211", "1621018383" ], "abstract": [ "In current interframe video compression systems, the encoder performs predictive coding to exploit the similarities of successive frames. The Wyner-Ziv theorem on source coding with side information available only at the decoder suggests that an asymmetric video codec, where individual frames are encoded separately, but decoded conditionally (given temporally adjacent frames) could achieve similar efficiency. We report the first results on a Wyner-Ziv coding scheme for motion video that uses intraframe encoding, but interframe decoding.", "", "We propose a compression scheme for still images, by exploiting the theory of Distributed Coding of correlated multi-sources. Two corrupted versions of an image are encoded separately but decoding jointly. Our approach results in twofold. i) use of decomposition of low-pass wavelet co-efficients for creating the Side Information, and ii) variable-length coset creation by estimating the bit-rate of the cosets on the encoder using the joint distribution statistics of the original image and the side info. In the case of coding for mobile terminals, the proposed codec exploits the channel coding principles in order to have a simple encoder with a low transmission rate and high PSNR. Experimental results are given for lossy encoding case.", "Following recent theoretical works on successive Wyner-Ziv coding (WZC), we propose a practical layered Wyner-Ziv video coder using the DCT, nested scalar quantization, and irregular LDPC code based Slepian-Wolf coding (or lossless source coding with side information at the decoder). Our main novelty is to use the base layer of a standard scalable video coder (e.g., MPEG-4 H.26L FGS or H.263+) as the decoder side information and perform layered WZC for quality enhancement. Similar to FGS coding, there is no performance difference between layered and monolithic WZC when the enhancement bitstream is generated in our proposed coder. Using an H.26L coded version as the base layer, experiments indicate that WZC gives slightly worse performance than FGS coding when the channel (for both the base and enhancement layers) is noiseless. However, when the channel is noisy, extensive simulations of video transmission over wireless networks conforming to the CDMA2000 1X standard show that H.26L base layer coding plus Wyner-Ziv enhancement layer coding are more robust against channel errors than H.26L FGS coding. These results demonstrate that layered Wyner-Ziv video coding is a promising new technique for video streaming over wireless networks" ] }
1903.10223
2924093022
A multivariate ridge function is a function of the form @math , where @math is univariate and @math . We show that the recovery of an unknown ridge function defined on the hypercube @math with Lipschitz-regular profile @math suffers from the curse of dimensionality when the recovery error is measured in the @math -norm, even if we allow randomized algorithms. If a limited number of components of @math is substantially larger than the others, then the curse of dimensionality is not present and the problem is weakly tractable provided the profile @math is sufficiently regular.
An alternative to the component-wise positivity of the ridge vector that also guarantees polynomial tractability is to assume that for some given @math . This assumption works both for ridge functions defined on the hypercube and for ridge functions defined on the Euclidean ball @math whereas does not lead to a polynomially tractable problem for ridge functions defined on the hypercube, see @cite_25 . Assumption has been studied in @cite_22 @cite_1 @cite_25 , where @cite_1 also studies the effect of noisy measurements. In @cite_25 , it has been shown that recovery of ridge functions defined on the Euclidean ball in general suffers from the curse of dimensionality. This finding is based on novel two-sided estimates that reduce the decay behavior of the worst-case recovery error to the decay behavior of entropy numbers of @math -balls. In contrast, @math implies weak tractability for sufficiently large @math .
{ "cite_N": [ "@cite_22", "@cite_1", "@cite_25" ], "mid": [ "1644425553", "2077625299", "2138611516" ], "abstract": [ "Let us assume that f is a continuous function defined on the unit ball of ℝ d , of the form f(x)=g(Ax), where A is a k×d matrix and g is a function of k variables for k≪d. We are given a budget m∈ℕ of possible point evaluations f(x i ), i=1,…,m, of f, which we are allowed to query in order to construct a uniform approximating function. Under certain smoothness and variation assumptions on the function g, and an arbitrary choice of the matrix A, we present in this paper a sampling choice of the points xi drawn at random for each function approximation; algorithms (Algorithm 1 and Algorithm 2) for computing the approximating function, whose complexity is at most polynomial in the dimension d and in the number m of points. Due to the arbitrariness of A, the sampling points will be chosen according to suitable random distributions, and our results hold with overwhelming probability. Our approach uses tools taken from the compressed sensing framework, recent Chernoff bounds for sums of positive semidefinite matrices, and classical stability bounds for invariant subspaces of singular value decompositions.", "We present effective algorithms for uniform approximation of multivariate functions satisfying some prescribed inner structure. We extend, in several directions, the analysis of recovery of ridge functions f ( x ) = g ( a , x ) as performed earlier by one of the authors and his coauthors. We consider ridge functions defined on the unit cube - 1 , 1 ] d as well as recovery of ridge functions defined on the unit ball from noisy measurements. We conclude with the study of functions of the type f ( x ) = g ( ? a - x ? l 2 d 2 ) .", "We study the properties of ridge functions (f(x)=g(a x) ) in high dimensions (d ) from the viewpoint of approximation theory. The function classes considered consist of ridge functions such that the profile (g ) is a member of a univariate Lipschitz class with smoothness ( >0 ) (including infinite smoothness) and the ridge direction (a ) has (p )-norm ( a _p 1 ). First, we investigate entropy numbers in order to quantify the compactness of these ridge function classes in (L_ ). We show that they are essentially as compact as the class of univariate Lipschitz functions. Second, we examine sampling numbers and consider two extreme cases. In the case (p=2 ), sampling ridge functions on the Euclidean unit ball suffers from the curse of dimensionality. Moreover, it is as difficult as sampling general multivariate Lipschitz functions, which is in sharp contrast to the result on entropy numbers. When we additionally assume that all feasible profiles have a first derivative uniformly bounded away from zero at the origin, the complexity of sampling ridge functions reduces drastically to the complexity of sampling univariate Lipschitz functions. In between, the sampling problem’s degree of difficulty varies, depending on the values of ( ) and (p ). Surprisingly, we see almost the entire hierarchy of tractability levels as introduced in the recent monographs by Novak and Woźniakowski." ] }
1903.10223
2924093022
A multivariate ridge function is a function of the form @math , where @math is univariate and @math . We show that the recovery of an unknown ridge function defined on the hypercube @math with Lipschitz-regular profile @math suffers from the curse of dimensionality when the recovery error is measured in the @math -norm, even if we allow randomized algorithms. If a limited number of components of @math is substantially larger than the others, then the curse of dimensionality is not present and the problem is weakly tractable provided the profile @math is sufficiently regular.
There is an obvious generalization of the ridge function model, namely functions of the form \[ f(x) = g(Ax), \qquad g: \mathbb{R}^m \to \mathbb{R}, \quad A \in \mathbb{R}^{m \times d}, \] where @math is supposed to be much smaller than the ambient dimension @math . In @cite_33 , such functions are called . Following the ideas of @cite_35 , the paper @cite_22 develops an efficient algorithm for the recovery of generalized ridge functions defined on Euclidean balls, provided the function @math fulfills certain integral conditions and the rows of the matrix are compressible. In the case @math , the integral conditions are fulfilled, e.g., if we assume . Instead of compressibility assumptions on the rows of @math , the work @cite_28 assumes that the matrix @math is a low-rank tensor and obtains an algorithm that requires only polynomially many function samples.
{ "cite_N": [ "@cite_28", "@cite_35", "@cite_22", "@cite_33" ], "mid": [ "2012411663", "1983767920", "1644425553", "" ], "abstract": [ "We consider the problem of learning multi-ridge functions of the form f (x) = g(Ax) from point evaluations of f. We assume that the function f is defined on an l(2)-ball in R-d, g is twice continuously differentiable almost everywhere, and A is an element of R-kxd is a rank k matrix, where k << d. We propose a randomized, polynomial-complexity sampling scheme for estimating such functions. Our theoretical developments leverage recent techniques from low rank matrix recovery, which enables us to derive a polynomial time estimator of the function f along with uniform approximation guarantees. We prove that our scheme can also be applied for learning functions of the form: f(x) = Sigma(k)(i=1) g(i)(a(i)(T)x), provided f satisfies certain smoothness conditions in a neighborhood around the origin. We also characterize the noise robustness of the scheme. Finally, we present numerical examples to illustrate the theoretical bounds in action. (C) 2014 Elsevier Inc. All rights reserved.", "This paper is about an inverse problem. We assume we are given a functionf(x) which is some sum of ridge functions of the form ?mi=1gi(ai·x) and we just know an upper bound onm. We seek to identify the functionsgiand also to identify the directionsaifrom such limited information. Several ways to solve this nonlinear problem are discussed in this work.", "Let us assume that f is a continuous function defined on the unit ball of ℝ d , of the form f(x)=g(Ax), where A is a k×d matrix and g is a function of k variables for k≪d. We are given a budget m∈ℕ of possible point evaluations f(x i ), i=1,…,m, of f, which we are allowed to query in order to construct a uniform approximating function. Under certain smoothness and variation assumptions on the function g, and an arbitrary choice of the matrix A, we present in this paper a sampling choice of the points xi drawn at random for each function approximation; algorithms (Algorithm 1 and Algorithm 2) for computing the approximating function, whose complexity is at most polynomial in the dimension d and in the number m of points. Due to the arbitrariness of A, the sampling points will be chosen according to suitable random distributions, and our results hold with overwhelming probability. Our approach uses tools taken from the compressed sensing framework, recent Chernoff bounds for sums of positive semidefinite matrices, and classical stability bounds for invariant subspaces of singular value decompositions.", "" ] }
1903.10223
2924093022
A multivariate ridge function is a function of the form @math , where @math is univariate and @math . We show that the recovery of an unknown ridge function defined on the hypercube @math with Lipschitz-regular profile @math suffers from the curse of dimensionality when the recovery error is measured in the @math -norm, even if we allow randomized algorithms. If a limited number of components of @math is substantially larger than the others, then the curse of dimensionality is not present and the problem is weakly tractable provided the profile @math is sufficiently regular.
In @math -bit compressed sensing, the aim is to recover a compressible signal @math from @math -bit measurements @math , @math , given that @math for some unknown, univariate @math that satisfies a non-degeneracy condition with respect to a standard normal random variable @math ; see @cite_11 and the references therein. Note that the goal here is only to recover the vector @math and not the non-linearity @math . Further, note the similarity between this condition and the integral condition discussed in @cite_22 . In particular, it is clear that the condition is fulfilled if @math is a continuous function with @math .
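As a minimal numerical illustration of this setting (not the convex program of @cite_11), the following sketch assumes Gaussian measurement vectors and the non-linearity f = sign, and estimates the direction of a sparse signal with a simple correlation estimator; all sizes and names are chosen for illustration only.

```python
import numpy as np

# Minimal illustration of 1-bit compressed sensing (not the convex program of
# @cite_11): estimate the *direction* of a sparse signal x from sign
# measurements y_i = sign(<a_i, x>) with Gaussian measurement vectors a_i.
rng = np.random.default_rng(0)
d, s, m = 1000, 10, 4000            # ambient dimension, sparsity, number of 1-bit measurements

x = np.zeros(d)
x[rng.choice(d, s, replace=False)] = rng.standard_normal(s)
x /= np.linalg.norm(x)              # only the direction of x is identifiable from signs

A = rng.standard_normal((m, d))     # rows a_i ~ N(0, I_d)
y = np.sign(A @ x)                  # 1-bit measurements, f = sign

# Simple linear estimator: (1/m) * A^T y is proportional to x in expectation
# (up to the constant sqrt(2/pi) for f = sign); hard-threshold to the s largest entries.
x_lin = A.T @ y / m
support = np.argsort(np.abs(x_lin))[-s:]
x_hat = np.zeros(d)
x_hat[support] = x_lin[support]
x_hat /= np.linalg.norm(x_hat)

print("correlation <x_hat, x> =", float(x_hat @ x))   # close to 1 once m >> s log(d/s)
```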
{ "cite_N": [ "@cite_22", "@cite_11" ], "mid": [ "1644425553", "2964322027" ], "abstract": [ "Let us assume that f is a continuous function defined on the unit ball of ℝ d , of the form f(x)=g(Ax), where A is a k×d matrix and g is a function of k variables for k≪d. We are given a budget m∈ℕ of possible point evaluations f(x i ), i=1,…,m, of f, which we are allowed to query in order to construct a uniform approximating function. Under certain smoothness and variation assumptions on the function g, and an arbitrary choice of the matrix A, we present in this paper a sampling choice of the points xi drawn at random for each function approximation; algorithms (Algorithm 1 and Algorithm 2) for computing the approximating function, whose complexity is at most polynomial in the dimension d and in the number m of points. Due to the arbitrariness of A, the sampling points will be chosen according to suitable random distributions, and our results hold with overwhelming probability. Our approach uses tools taken from the compressed sensing framework, recent Chernoff bounds for sums of positive semidefinite matrices, and classical stability bounds for invariant subspaces of singular value decompositions.", "This paper develops theoretical results regarding noisy 1-bit compressed sensing and sparse binomial regression. We demonstrate that a single convex program gives an accurate estimate of the signal, or coefficient vector, for both of these models. We show that an -sparse signal in can be accurately estimated from m = O(s log(n s)) single-bit measurements using a simple convex program. This remains true even if each measurement bit is flipped with probability nearly 1 2. Worst-case (adversarial) noise can also be accounted for, and uniform results that hold for all sparse inputs are derived as well. In the terminology of sparse logistic regression, we show that O (s log (2n s)) Bernoulli trials are sufficient to estimate a coefficient vector in which is approximately -sparse. Moreover, the same convex program works for virtually all generalized linear models, in which the link function may be unknown. To our knowledge, these are the first results that tie together the theory of sparse logistic regression to 1-bit compressed sensing. Our results apply to general signal structures aside from sparsity; one only needs to know the size of the set where signals reside. The size is given by the mean width of K, a computable quantity whose square serves as a robust extension of the dimension." ] }
1903.10157
2924875562
Multi-scale approach has been used for blind image video deblurring problems to yield excellent performance for both conventional and recent deep-learning-based state-of-the-art methods. Bicubic down-sampling is a typical choice for multi-scale approach to reduce spatial dimension after filtering with a fixed kernel. However, this fixed kernel may be sub-optimal since it may destroy important information for reliable deblurring such as strong edges. We propose convolutional neural network (CNN)-based down-scale methods for multi-scale deep-learning-based non-uniform single image deblurring. We argue that our CNN-based down-scaling effectively reduces the spatial dimension of the original image, while learned kernels with multiple channels may well-preserve necessary details for deblurring tasks. For each scale, we adopt to use RCAN (Residual Channel Attention Networks) as a backbone network to further improve performance. Our proposed method yielded state-of-the-art performance on GoPro dataset by large margin. Our proposed method was able to achieve 2.59dB higher PSNR than the current state-of-the-art method by Tao. Our proposed CNN-based down-scaling was the key factor for this excellent performance since the performance of our network without it was decreased by 1.98dB. The same networks trained with GoPro set were also evaluated on large-scale Su dataset and our proposed method yielded 1.15dB better PSNR than the Tao's method. Qualitative comparisons on Lai dataset also confirmed the superior performance of our proposed method over other state-of-the-art methods.
Conventional approaches to blind single image/video deblurring usually require explicitly estimating blur kernels. There have been several works on estimating uniform blurs using an optimization algorithm with a coarse-to-fine multi-scale approach @cite_3 , using a model of the spatial randomness of noise and a local smoothness prior @cite_34 , exploiting blurred strong edges to reliably estimate the blur kernel @cite_29 , and developing a metric to measure the usefulness of image edges for blur kernel estimation @cite_30 .
{ "cite_N": [ "@cite_30", "@cite_29", "@cite_34", "@cite_3" ], "mid": [ "1598281290", "1976730913", "2141115311", "2098535678" ], "abstract": [ "We discuss a few new motion deblurring problems that are significant to kernel estimation and non-blind deconvolution. We found that strong edges do not always profit kernel estimation, but instead under certain circumstance degrade it. This finding leads to a new metric to measure the usefulness of image edges in motion deblurring and a gradient selection process to mitigate their possible adverse effect. We also propose an efficient and high-quality kernel estimation method based on using the spatial prior and the iterative support detection (ISD) kernel refinement, which avoids hard threshold of the kernel elements to enforce sparsity. We employ the TV-l1 deconvolution model, solved with a new variable substitution scheme to robustly suppress noise.", "This paper presents a fast deblurring method that produces a deblurring result from a single image of moderate size in a few seconds. We accelerate both latent image estimation and kernel estimation in an iterative deblurring process by introducing a novel prediction step and working with image derivatives rather than pixel values. In the prediction step, we use simple image processing techniques to predict strong edges from an estimated latent image, which will be solely used for kernel estimation. With this approach, a computationally efficient Gaussian prior becomes sufficient for deconvolution to estimate the latent image, as small deconvolution artifacts can be suppressed in the prediction. For kernel estimation, we formulate the optimization function using image derivatives, and accelerate the numerical process by reducing the number of Fourier transforms needed for a conjugate gradient method. We also show that the formulation results in a smaller condition number of the numerical system than the use of pixel values, which gives faster convergence. Experimental results demonstrate that our method runs an order of magnitude faster than previous work, while the deblurring quality is comparable. GPU implementation facilitates further speed-up, making our method fast enough for practical use.", "We present a new algorithm for removing motion blur from a single image. Our method computes a deblurred image using a unified probabilistic model of both blur kernel estimation and unblurred image restoration. We present an analysis of the causes of common artifacts found in current deblurring methods, and then introduce several novel terms within this probabilistic model that are inspired by our analysis. These terms include a model of the spatial randomness of noise in the blurred image, as well a new local smoothness prior that reduces ringing artifacts by constraining contrast in the unblurred image wherever the blurred image exhibits low contrast. Finally, we describe an effficient optimization scheme that alternates between blur kernel estimation and unblurred image restoration until convergence. As a result of these steps, we are able to produce high quality deblurred results in low computation time. We are even able to produce results of comparable quality to techniques that require additional input images beyond a single blurry photograph, and to methods that require additional hardware.", "Camera shake during exposure leads to objectionable image blur and ruins many photographs. 
Conventional blind deconvolution methods typically assume frequency-domain constraints on images, or overly simplified parametric forms for the motion path during camera shake. Real camera motions can follow convoluted paths, and a spatial domain prior can better maintain visually salient image characteristics. We introduce a method to remove the effects of camera shake from seriously blurred images. The method assumes a uniform camera blur over the image and negligible in-plane camera rotation. In order to estimate the blur from the camera shake, the user must specify an image region without saturation effects. We show results for a variety of digital photographs taken from personal photo collections." ] }
1903.10157
2924875562
Multi-scale approach has been used for blind image video deblurring problems to yield excellent performance for both conventional and recent deep-learning-based state-of-the-art methods. Bicubic down-sampling is a typical choice for multi-scale approach to reduce spatial dimension after filtering with a fixed kernel. However, this fixed kernel may be sub-optimal since it may destroy important information for reliable deblurring such as strong edges. We propose convolutional neural network (CNN)-based down-scale methods for multi-scale deep-learning-based non-uniform single image deblurring. We argue that our CNN-based down-scaling effectively reduces the spatial dimension of the original image, while learned kernels with multiple channels may well-preserve necessary details for deblurring tasks. For each scale, we adopt to use RCAN (Residual Channel Attention Networks) as a backbone network to further improve performance. Our proposed method yielded state-of-the-art performance on GoPro dataset by large margin. Our proposed method was able to achieve 2.59dB higher PSNR than the current state-of-the-art method by Tao. Our proposed CNN-based down-scaling was the key factor for this excellent performance since the performance of our network without it was decreased by 1.98dB. The same networks trained with GoPro set were also evaluated on large-scale Su dataset and our proposed method yielded 1.15dB better PSNR than the Tao's method. Qualitative comparisons on Lai dataset also confirmed the superior performance of our proposed method over other state-of-the-art methods.
There have also been many works on predicting non-uniform blurs by assuming spatially linear blur @cite_41 , a simplified camera motion model (from 6D to 3D) @cite_24 , a parametrized geometric model in terms of camera rotation velocity during exposure @cite_9 , a blur model based on the filter flow framework @cite_22 , an L0 sparse expression for blurs @cite_38 , or a dark channel prior @cite_10 . There was also an attempt to exploit multiple images from videos under spatially varying blur @cite_32 . Further works utilize segmentation information by assuming uniform blur on each segment @cite_33 , segment motion blur using convex optimization @cite_31 , simplify the motion model to locally linear blur without segmentation using a coarse-to-fine approach @cite_37 , or use bidirectional optical flows for video deblurring @cite_20 .
{ "cite_N": [ "@cite_38", "@cite_37", "@cite_22", "@cite_33", "@cite_41", "@cite_9", "@cite_32", "@cite_24", "@cite_31", "@cite_10", "@cite_20" ], "mid": [ "2167307343", "", "2132244934", "2133957619", "", "2043529138", "1996109124", "1598936309", "2118456997", "2474628748", "1917431891" ], "abstract": [ "We show in this paper that the success of previous maximum a posterior (MAP) based blur removal methods partly stems from their respective intermediate steps, which implicitly or explicitly create an unnatural representation containing salient image structures. We propose a generalized and mathematically sound L0 sparse expression, together with a new effective method, for motion deblurring. Our system does not require extra filtering during optimization and demonstrates fast energy decreasing, making a small number of iterations enough for convergence. It also provides a unified framework for both uniform and non-uniform motion deblurring. We extensively validate our method and show comparison with other approaches with respect to convergence speed, running time, and result quality.", "", "Camera shake leads to non-uniform image blurs. State-of-the-art methods for removing camera shake model the blur as a linear combination of homographically transformed versions of the true image. While this is conceptually interesting, the resulting algorithms are computationally demanding. In this paper we develop a forward model based on the efficient filter flow framework, incorporating the particularities of camera shake, and show how an efficient algorithm for blur removal can be obtained. Comprehensive comparisons on a number of real-world blurry images show that our approach is not only substantially faster, but it also leads to better deblurring results.", "This paper addresses the problem of restoring images subjected to unknown and spatially varying blur caused by defocus or linear (say, horizontal) motion. The estimation of the global (non-uniform) image blur is cast as a multi-label energy minimization problem. The energy is the sum of unary terms corresponding to learned local blur estimators, and binary ones corresponding to blur smoothness. Its global minimum is found using Ishikawa's method by exploiting the natural order of discretized blur values for linear motions and defocus. Once the blur has been estimated, the image is restored using a robust (non-uniform) deblurring algorithm based on sparse regularization with global image statistics. The proposed algorithm outputs both a segmentation of the image into uniform-blur layers and an estimate of the corresponding sharp image. We present qualitative results on real images, and use synthetic data to quantitatively compare our approach to the publicly available implementation of Chakrabarti et al", "", "Photographs taken in low-light conditions are often blurry as a result of camera shake, i.e. a motion of the camera while its shutter is open. Most existing deblurring methods model the observed blurry image as the convolution of a sharp image with a uniform blur kernel. However, we show that blur from camera shake is in general mostly due to the 3D rotation of the camera, resulting in a blur that can be significantly non-uniform across the image. We propose a new parametrized geometric model of the blurring process in terms of the rotational motion of the camera during exposure. 
This model is able to capture non-uniform blur in an image due to camera shake using a single global descriptor, and can be substituted into existing deblurring algorithms with only small modifications. To demonstrate its effectiveness, we apply this model to two deblurring problems; first, the case where a single blurry image is available, for which we examine both an approximate marginalization approach and a maximum a posteriori approach, and second, the case where a sharp but noisy image of the scene is available in addition to the blurry image. We show that our approach makes it possible to model and remove a wider class of blurs than previous approaches, including uniform blur as a special case, and demonstrate its effectiveness with experiments on synthetic and real images.", "In this paper, we show how to generate a sharp panorama from a set of motion-blurred video frames. Our technique is based on joint global motion estimation and multi-frame deblurring. It also automatically computes the duty cycle of the video, namely the percentage of time between frames that is actually exposure time. The duty cycle is necessary for allowing the blur kernels to be accurately extracted and then removed. We demonstrate our technique on a number of videos.", "We present a novel single image deblurring method to estimate spatially non-uniform blur that results from camera shake. We use existing spatially invariant deconvolution methods in a local and robust way to compute initial estimates of the latent image. The camera motion is represented as a Motion Density Function (MDF) which records the fraction of time spent in each discretized portion of the space of all possible camera poses. Spatially varying blur kernels are derived directly from the MDF. We show that 6D camera motion is well approximated by 3 degrees of motion (in-plane translation and rotation) and analyze the scope of this approximation. We present results on both synthetic and captured data. Our system out-performs current approaches which make the assumption of spatially invariant blur.", "Most conventional single image deblurring methods assume that the underlying scene is static and the blur is caused by only camera shake. In this paper, in contrast to this restrictive assumption, we address the deblurring problem of general dynamic scenes which contain multiple moving objects as well as camera shake. In case of dynamic scenes, moving objects and background have different blur motions, so the segmentation of the motion blur is required for deblurring each distinct blur motion accurately. Thus, we propose a novel energy model designed with the weighted sum of multiple blur data models, which estimates different motion blurs and their associated pixel-wise weights, and resulting sharp image. In this framework, the local weights are determined adaptively and get high values when the corresponding data models have high data fidelity. And, the weight information is used for the segmentation of the motion blur. Non-local regularization of weights are also incorporated to produce more reliable segmentation results. A convex optimization-based method is used for the solution of the proposed energy model. Experimental results demonstrate that our method outperforms conventional approaches in deblurring both dynamic scenes and static scenes.", "We present a simple and effective blind image deblurring method based on the dark channel prior. 
Our work is inspired by the interesting observation that the dark channel of blurred images is less sparse. While most image patches in the clean image contain some dark pixels, these pixels are not dark when averaged with neighboring highintensity pixels during the blur process. This change in the sparsity of the dark channel is an inherent property of the blur process, which we both prove mathematically and validate using training data. Therefore, enforcing the sparsity of the dark channel helps blind deblurring on various scenarios, including natural, face, text, and low-illumination images. However, sparsity of the dark channel introduces a non-convex non-linear optimization problem. We introduce a linear approximation of the min operator to compute the dark channel. Our look-up-table-based method converges fast in practice and can be directly extended to non-uniform deblurring. Extensive experiments show that our method achieves state-of-the-art results on deblurring natural images and compares favorably methods that are well-engineered for specific scenarios.", "Several state-of-the-art video deblurring methods are based on a strong assumption that the captured scenes are static. These methods fail to deblur blurry videos in dynamic scenes. We propose a video deblurring method to deal with general blurs inherent in dynamic scenes, contrary to other methods. To handle locally varying and general blurs caused by various sources, such as camera shake, moving objects, and depth variation in a scene, we approximate pixel-wise kernel with bidirectional optical flows. Therefore, we propose a single energy model that simultaneously estimates optical flows and latent frames to solve our deblurring problem. We also provide a framework and efficient solvers to optimize the energy model. By minimizing the proposed energy function, we achieve significant improvements in removing blurs and estimating accurate optical flows in blurry frames. Extensive experimental results demonstrate the superiority of the proposed method in real and challenging videos that state-of-the-art methods fail in either deblurring or optical flow estimation." ] }
1903.10157
2924875562
Multi-scale approach has been used for blind image video deblurring problems to yield excellent performance for both conventional and recent deep-learning-based state-of-the-art methods. Bicubic down-sampling is a typical choice for multi-scale approach to reduce spatial dimension after filtering with a fixed kernel. However, this fixed kernel may be sub-optimal since it may destroy important information for reliable deblurring such as strong edges. We propose convolutional neural network (CNN)-based down-scale methods for multi-scale deep-learning-based non-uniform single image deblurring. We argue that our CNN-based down-scaling effectively reduces the spatial dimension of the original image, while learned kernels with multiple channels may well-preserve necessary details for deblurring tasks. For each scale, we adopt to use RCAN (Residual Channel Attention Networks) as a backbone network to further improve performance. Our proposed method yielded state-of-the-art performance on GoPro dataset by large margin. Our proposed method was able to achieve 2.59dB higher PSNR than the current state-of-the-art method by Tao. Our proposed CNN-based down-scaling was the key factor for this excellent performance since the performance of our network without it was decreased by 1.98dB. The same networks trained with GoPro set were also evaluated on large-scale Su dataset and our proposed method yielded 1.15dB better PSNR than the Tao's method. Qualitative comparisons on Lai dataset also confirmed the superior performance of our proposed method over other state-of-the-art methods.
Since the advent of deep learning @cite_35 , many blind single image/video deblurring works have employed deep neural networks to estimate blur kernels and/or original sharp images from blurred inputs. Several works predict non-uniform blur kernels explicitly: predicting the probabilistic distribution of motion blur at the patch level @cite_39 , estimating the complex Fourier coefficients of a deconvolution filter @cite_11 , performing blur kernel estimation by division in Fourier space from extracted deep features @cite_27 , and analyzing the spectral content of blurry image patches by reblurring them @cite_0 .
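For orientation, the following sketch shows the classical frequency-domain step that such kernel-based approaches build on: non-blind deconvolution by regularized division in Fourier space with a known blur kernel. It is not an implementation of any of the cited learning-based methods; the helper names and the toy kernel are illustrative assumptions.

```python
import numpy as np

# Classical non-blind deconvolution by regularized division in Fourier space --
# a simplified illustration of the frequency-domain step that kernel-estimation
# approaches build on (the kernel is assumed known here, unlike in the cited
# blind-deblurring works).
def kernel_otf(kernel: np.ndarray, shape) -> np.ndarray:
    """Optical transfer function: kernel zero-padded to `shape`, centered at the origin."""
    k = np.zeros(shape)
    kh, kw = kernel.shape
    k[:kh, :kw] = kernel
    k = np.roll(k, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.fft.fft2(k)

def blur(img, kernel):
    return np.real(np.fft.ifft2(np.fft.fft2(img) * kernel_otf(kernel, img.shape)))

def deconvolve(blurred, kernel, eps=1e-3):
    K = kernel_otf(kernel, blurred.shape)
    B = np.fft.fft2(blurred)
    X = B * np.conj(K) / (np.abs(K) ** 2 + eps)   # naive division B / K would amplify noise
    return np.real(np.fft.ifft2(X))

rng = np.random.default_rng(0)
img = rng.random((64, 64))
kernel = np.zeros((9, 9)); kernel[4, :] = 1.0 / 9.0   # 9-pixel horizontal motion blur
restored = deconvolve(blur(img, kernel), kernel, eps=1e-6)
print("max abs error:", float(np.max(np.abs(restored - img))))
```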
{ "cite_N": [ "@cite_35", "@cite_39", "@cite_0", "@cite_27", "@cite_11" ], "mid": [ "", "1916935112", "2776004874", "1457323852", "2300657047" ], "abstract": [ "", "In this paper, we address the problem of estimating and removing non-uniform motion blur from a single blurry image. We propose a deep learning approach to predicting the probabilistic distribution of motion blur at the patch level using a convolutional neural network (CNN). We further extend the candidate set of motion kernels predicted by the CNN using carefully designed image rotations. A Markov random field model is then used to infer a dense non-uniform motion blur field enforcing motion smoothness. Finally, motion blur is removed by a non-uniform deblurring model using patch-level image prior. Experimental evaluations show that our approach can effectively estimate and remove complex non-uniform motion blur that is not handled well by previous approaches.", "We present an approach for blind image deblurring, which handles non-uniform blurs. Our algorithm has two main components: (i) A new method for recovering the unknown blur-field directly from the blurry image, and (ii) A method for deblurring the image given the recovered non-uniform blur-field. Our blur-field estimation is based on analyzing the spectral content of blurry image patches by Re-blurring them. Being unrestricted by any training data, it can handle a large variety of blur sizes, yielding superior blur-field estimation results compared to training-based deep-learning methods. Our non-uniform deblurring algorithm is based on the internal image-specific patch-recurrence prior. It attempts to recover a sharp image which, on one hand – results in the blurry image under our estimated blur-field, and on the other hand – maximizes the internal recurrence of patches within and across scales of the recovered sharp image. The combination of these two components gives rise to a blind-deblurring algorithm, which exceeds the performance of state-of-the-art CNN-based blind-deblurring by a significant margin, without the need for any training data.", "We describe a learning-based approach to blind image deconvolution. It uses a deep layered architecture, parts of which are borrowed from recent work on neural network learning, and parts of which incorporate computations that are specific to image deconvolution. The system is trained end-to-end on a set of artificially generated training examples, enabling competitive performance in blind deconvolution, both with respect to quality and runtime.", "We present a new method for blind motion deblurring that uses a neural network trained to compute estimates of sharp image patches from observations that are blurred by an unknown motion kernel. Instead of regressing directly to patch intensities, this network learns to predict the complex Fourier coefficients of a deconvolution filter to be applied to the input patch for restoration. For inference, we apply the network independently to all overlapping patches in the observed image, and average its outputs to form an initial estimate of the sharp image. We then explicitly estimate a single global blur kernel by relating this estimate to the observed image, and finally perform non-blind deconvolution with this kernel. Our method exhibits accuracy and robustness close to state-of-the-art iterative methods, while being much faster when parallelized on GPU hardware." ] }
1903.10157
2924875562
Multi-scale approach has been used for blind image video deblurring problems to yield excellent performance for both conventional and recent deep-learning-based state-of-the-art methods. Bicubic down-sampling is a typical choice for multi-scale approach to reduce spatial dimension after filtering with a fixed kernel. However, this fixed kernel may be sub-optimal since it may destroy important information for reliable deblurring such as strong edges. We propose convolutional neural network (CNN)-based down-scale methods for multi-scale deep-learning-based non-uniform single image deblurring. We argue that our CNN-based down-scaling effectively reduces the spatial dimension of the original image, while learned kernels with multiple channels may well-preserve necessary details for deblurring tasks. For each scale, we adopt to use RCAN (Residual Channel Attention Networks) as a backbone network to further improve performance. Our proposed method yielded state-of-the-art performance on GoPro dataset by large margin. Our proposed method was able to achieve 2.59dB higher PSNR than the current state-of-the-art method by Tao. Our proposed CNN-based down-scaling was the key factor for this excellent performance since the performance of our network without it was decreased by 1.98dB. The same networks trained with GoPro set were also evaluated on large-scale Su dataset and our proposed method yielded 1.15dB better PSNR than the Tao's method. Qualitative comparisons on Lai dataset also confirmed the superior performance of our proposed method over other state-of-the-art methods.
Many works also directly estimate the original sharp image from the blurred input without explicitly estimating non-uniform blur kernels. For blind video deblurring, several works exploit temporal information: blending temporal information in a spatio-temporal recurrent network for online video deblurring @cite_17 , taking temporal information into account with a recurrent deblur network consisting of several deblur blocks @cite_12 , and developing an encoder-decoder network that takes multiple video frames as input to accumulate information across frames @cite_18 . A few works address blind single image deblurring without temporal information. Xu proposed to approximate the deconvolution of conventional optimization-based methods by a series of convolution steps using deep neural networks @cite_7 . Later, Nah proposed a multi-scale network architecture with a Gaussian pyramid and multi-scale loss functions @cite_26 , and Tao proposed a convolutional long short-term memory (LSTM)-based multi-scale deep neural network for single image deblurring @cite_28 .
{ "cite_N": [ "@cite_18", "@cite_26", "@cite_7", "@cite_28", "@cite_12", "@cite_17" ], "mid": [ "2738579427", "2560533888", "2124964692", "2964030969", "2963136136", "2609107023" ], "abstract": [ "Motion blur from camera shake is a major problem in videos captured by hand-held devices. Unlike single-image deblurring, video-based approaches can take advantage of the abundant information that exists across neighboring frames. As a result the best performing methods rely on the alignment of nearby frames. However, aligning images is a computationally expensive and fragile procedure, and methods that aggregate information must therefore be able to identify which regions have been accurately aligned and which have not, a task that requires high level scene understanding. In this work, we introduce a deep learning solution to video deblurring, where a CNN is trained end-to-end to learn how to accumulate information across frames. To train this network, we collected a dataset of real videos recorded with a high frame rate camera, which we use to generate synthetic motion blur for supervision. We show that the features learned from this dataset extend to deblurring motion blur that arises due to camera shake in a wide range of videos, and compare the quality of results to a number of other baselines.", "Non-uniform blind deblurring for general dynamic scenes is a challenging computer vision problem as blurs arise not only from multiple object motions but also from camera shake, scene depth variation. To remove these complicated motion blurs, conventional energy optimization based methods rely on simple assumptions such that blur kernel is partially uniform or locally linear. Moreover, recent machine learning based methods also depend on synthetic blur datasets generated under these assumptions. This makes conventional deblurring methods fail to remove blurs where blur kernel is difficult to approximate or parameterize (e.g. object motion boundaries). In this work, we propose a multi-scale convolutional neural network that restores sharp images in an end-to-end manner where blur is caused by various sources. Together, we present multi-scale loss function that mimics conventional coarse-to-fine approaches. Furthermore, we propose a new large-scale dataset that provides pairs of realistic blurry image and the corresponding ground truth sharp image that are obtained by a high-speed camera. With the proposed model trained on this dataset, we demonstrate empirically that our method achieves the state-of-the-art performance in dynamic scene deblurring not only qualitatively, but also quantitatively.", "Many fundamental image-related problems involve deconvolution operators. Real blur degradation seldom complies with an ideal linear convolution model due to camera noise, saturation, image compression, to name a few. Instead of perfectly modeling outliers, which is rather challenging from a generative model perspective, we develop a deep convolutional neural network to capture the characteristics of degradation. We note directly applying existing deep neural networks does not produce reasonable results. Our solution is to establish the connection between traditional optimization-based schemes and a neural network architecture where a novel, separable structure is introduced as a reliable support for robust deconvolution against artifacts. Our network contains two submodules, both trained in a supervised manner with proper initialization. 
They yield decent performance on non-blind image deconvolution compared to previous generative-model based methods.", "In single image deblurring, the \"coarse-to-fine\" scheme, i.e. gradually restoring the sharp image on different resolutions in a pyramid, is very successful in both traditional optimization-based methods and recent neural-network-based approaches. In this paper, we investigate this strategy and propose a Scale-recurrent Network (SRN-DeblurNet) for this deblurring task. Compared with the many recent learning-based approaches in [25], it has a simpler network structure, a smaller number of parameters and is easier to train. We evaluate our method on large-scale deblurring datasets with complex motion. Results show that our method can produce better quality results than state-of-the-arts, both quantitatively and qualitatively.", "As handheld video cameras are now commonplace and available in every smartphone, images and videos can be recorded almost everywhere at anytime. However, taking a quick shot frequently yields a blurry result due to unwanted camera shake during recording or moving objects in the scene. Removing these artifacts from the blurry recordings is a highly ill-posed problem as neither the sharp image nor the motion blur kernel is known. Propagating information between multiple consecutive blurry observations can help restore the desired sharp image or video. In this work, we propose an efficient approach to produce a significant amount of realistic training data and introduce a novel recurrent network architecture to deblur frames taking temporal information into account, which can efficiently handle arbitrary spatial and temporal input sizes.", "State-of-the-art video deblurring methods are capable of removing non-uniform blur caused by unwanted camera shake and or object motion in dynamic scenes. However, most existing methods are based on batch processing and thus need access to all recorded frames, rendering them computationally demanding and time-consuming and thus limiting their practical use. In contrast, we propose an online (sequential) video deblurring method based on a spatio-temporal recurrent network that allows for realtime performance. In particular, we introduce a novel architecture which extends the receptive field while keeping the overall size of the network small to enable fast execution. In doing so, our network is able to remove even large blur caused by strong camera shake and or fast moving objects. Furthermore, we propose a novel network layer that enforces temporal consistency between consecutive frames by dynamic temporal blending which compares and adap- tively (at test time) shares features obtained at different time steps. We show the superiority of the proposed method in an extensive experimental evaluation." ] }
1903.10157
2924875562
Multi-scale approach has been used for blind image video deblurring problems to yield excellent performance for both conventional and recent deep-learning-based state-of-the-art methods. Bicubic down-sampling is a typical choice for multi-scale approach to reduce spatial dimension after filtering with a fixed kernel. However, this fixed kernel may be sub-optimal since it may destroy important information for reliable deblurring such as strong edges. We propose convolutional neural network (CNN)-based down-scale methods for multi-scale deep-learning-based non-uniform single image deblurring. We argue that our CNN-based down-scaling effectively reduces the spatial dimension of the original image, while learned kernels with multiple channels may well-preserve necessary details for deblurring tasks. For each scale, we adopt to use RCAN (Residual Channel Attention Networks) as a backbone network to further improve performance. Our proposed method yielded state-of-the-art performance on GoPro dataset by large margin. Our proposed method was able to achieve 2.59dB higher PSNR than the current state-of-the-art method by Tao. Our proposed CNN-based down-scaling was the key factor for this excellent performance since the performance of our network without it was decreased by 1.98dB. The same networks trained with GoPro set were also evaluated on large-scale Su dataset and our proposed method yielded 1.15dB better PSNR than the Tao's method. Qualitative comparisons on Lai dataset also confirmed the superior performance of our proposed method over other state-of-the-art methods.
Multi-scale approaches for single image/video deblurring use two different types of down-scaling. One is a simple filtering and down-sampling operation, so that local information is encoded at a reduced spatial dimension @cite_3 @cite_29 @cite_26 @cite_28 . The other is deep-neural-network-based global information encoding of multiple video frames (e.g., 5) with a much more strongly reduced spatial dimension (up to @math ) and an increased number of channels (up to 512) @cite_18 . There has been no work on learning-based down-scaling that encodes local information as an extension of Gaussian/bicubic down-sampling in multi-scale single image deblurring.
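A minimal sketch of the difference between the two kinds of down-scaling, assuming PyTorch and illustrative layer sizes (this is not the architecture proposed in the paper): a fixed bicubic down-sampler versus a learned strided convolution that reduces spatial resolution while spreading information over several feature channels.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Sketch: fixed bicubic down-sampling vs. a learned, multi-channel down-scaler.
# The learned kernels are trained jointly with the deblurring network, so details
# useful for deblurring (e.g., strong edges) can be preserved across channels
# instead of being averaged away by a fixed low-pass filter.
class LearnedDownscale(nn.Module):
    def __init__(self, in_ch: int = 3, out_ch: int = 32, factor: int = 2):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=factor * 2,
                              stride=factor, padding=factor // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.conv(x)   # spatial size reduced by `factor`, channels increased

x = torch.randn(1, 3, 256, 256)                       # blurry input image
fixed = F.interpolate(x, scale_factor=0.5, mode="bicubic", align_corners=False)
learned = LearnedDownscale()(x)
print(fixed.shape, learned.shape)                     # (1, 3, 128, 128) vs (1, 32, 128, 128)
```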
{ "cite_N": [ "@cite_18", "@cite_26", "@cite_28", "@cite_29", "@cite_3" ], "mid": [ "2738579427", "2560533888", "2964030969", "1976730913", "2098535678" ], "abstract": [ "Motion blur from camera shake is a major problem in videos captured by hand-held devices. Unlike single-image deblurring, video-based approaches can take advantage of the abundant information that exists across neighboring frames. As a result the best performing methods rely on the alignment of nearby frames. However, aligning images is a computationally expensive and fragile procedure, and methods that aggregate information must therefore be able to identify which regions have been accurately aligned and which have not, a task that requires high level scene understanding. In this work, we introduce a deep learning solution to video deblurring, where a CNN is trained end-to-end to learn how to accumulate information across frames. To train this network, we collected a dataset of real videos recorded with a high frame rate camera, which we use to generate synthetic motion blur for supervision. We show that the features learned from this dataset extend to deblurring motion blur that arises due to camera shake in a wide range of videos, and compare the quality of results to a number of other baselines.", "Non-uniform blind deblurring for general dynamic scenes is a challenging computer vision problem as blurs arise not only from multiple object motions but also from camera shake, scene depth variation. To remove these complicated motion blurs, conventional energy optimization based methods rely on simple assumptions such that blur kernel is partially uniform or locally linear. Moreover, recent machine learning based methods also depend on synthetic blur datasets generated under these assumptions. This makes conventional deblurring methods fail to remove blurs where blur kernel is difficult to approximate or parameterize (e.g. object motion boundaries). In this work, we propose a multi-scale convolutional neural network that restores sharp images in an end-to-end manner where blur is caused by various sources. Together, we present multi-scale loss function that mimics conventional coarse-to-fine approaches. Furthermore, we propose a new large-scale dataset that provides pairs of realistic blurry image and the corresponding ground truth sharp image that are obtained by a high-speed camera. With the proposed model trained on this dataset, we demonstrate empirically that our method achieves the state-of-the-art performance in dynamic scene deblurring not only qualitatively, but also quantitatively.", "In single image deblurring, the \"coarse-to-fine\" scheme, i.e. gradually restoring the sharp image on different resolutions in a pyramid, is very successful in both traditional optimization-based methods and recent neural-network-based approaches. In this paper, we investigate this strategy and propose a Scale-recurrent Network (SRN-DeblurNet) for this deblurring task. Compared with the many recent learning-based approaches in [25], it has a simpler network structure, a smaller number of parameters and is easier to train. We evaluate our method on large-scale deblurring datasets with complex motion. Results show that our method can produce better quality results than state-of-the-arts, both quantitatively and qualitatively.", "This paper presents a fast deblurring method that produces a deblurring result from a single image of moderate size in a few seconds. 
We accelerate both latent image estimation and kernel estimation in an iterative deblurring process by introducing a novel prediction step and working with image derivatives rather than pixel values. In the prediction step, we use simple image processing techniques to predict strong edges from an estimated latent image, which will be solely used for kernel estimation. With this approach, a computationally efficient Gaussian prior becomes sufficient for deconvolution to estimate the latent image, as small deconvolution artifacts can be suppressed in the prediction. For kernel estimation, we formulate the optimization function using image derivatives, and accelerate the numerical process by reducing the number of Fourier transforms needed for a conjugate gradient method. We also show that the formulation results in a smaller condition number of the numerical system than the use of pixel values, which gives faster convergence. Experimental results demonstrate that our method runs an order of magnitude faster than previous work, while the deblurring quality is comparable. GPU implementation facilitates further speed-up, making our method fast enough for practical use.", "Camera shake during exposure leads to objectionable image blur and ruins many photographs. Conventional blind deconvolution methods typically assume frequency-domain constraints on images, or overly simplified parametric forms for the motion path during camera shake. Real camera motions can follow convoluted paths, and a spatial domain prior can better maintain visually salient image characteristics. We introduce a method to remove the effects of camera shake from seriously blurred images. The method assumes a uniform camera blur over the image and negligible in-plane camera rotation. In order to estimate the blur from the camera shake, the user must specify an image region without saturation effects. We show results for a variety of digital photographs taken from personal photo collections." ] }
1903.09973
2944451619
The low-rank tensor approximation is very promising for the compression of deep neural networks. We propose a new simple and efficient iterative approach, which alternates low-rank factorization with a smart rank selection and fine-tuning. We demonstrate the efficiency of our method comparing to non-iterative ones. Our approach improves the compression rate while maintaining the accuracy for a variety of tasks.
There are several works devoted to deep convolutional neural network compression. The authors of @cite_26 proposed a pipeline that consists of three different methods: pruning, trained quantization, and Huffman coding. They demonstrated that combining these techniques can significantly reduce storage requirements. Our method differs in that we focus not only on the compression ratio but also on speedup and on seamless integration into any framework.
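As a rough illustration of the first two stages of such a pipeline (the trained codebook and Huffman coding are omitted), the following sketch applies magnitude pruning and simple uniform quantization to a single synthetic weight matrix; the sizes, sparsity level, and codebook are illustrative assumptions, not the settings of @cite_26.

```python
import numpy as np

# Simplified illustration of pruning + weight quantization on one weight matrix.
rng = np.random.default_rng(0)
W = rng.standard_normal((256, 256)).astype(np.float32)

# 1) Pruning: drop the 90% smallest-magnitude weights.
threshold = np.quantile(np.abs(W), 0.9)
mask = np.abs(W) >= threshold
W_pruned = W * mask

# 2) Quantization: map the remaining weights to 2^5 = 32 shared values
#    (a uniform codebook here; the original work learns the codebook by k-means).
nonzero = W_pruned[mask]
codebook = np.linspace(nonzero.min(), nonzero.max(), 32)
indices = np.abs(nonzero[:, None] - codebook[None, :]).argmin(axis=1)   # 5-bit codes
W_quant = np.zeros_like(W_pruned)
W_quant[mask] = codebook[indices]

dense_bits = W.size * 32
sparse_bits = mask.sum() * 5 + codebook.size * 32      # ignoring index/offset overhead
print(f"kept {mask.mean():.1%} of weights, ~{dense_bits / sparse_bits:.0f}x smaller "
      f"(before Huffman coding), quantization MSE = {np.mean((W_quant - W_pruned) ** 2):.2e}")
```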
{ "cite_N": [ "@cite_26" ], "mid": [ "2119144962" ], "abstract": [ "Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems with limited hardware resources. To address this limitation, we introduce \"deep compression\", a three stage pipeline: pruning, trained quantization and Huffman coding, that work together to reduce the storage requirement of neural networks by 35x to 49x without affecting their accuracy. Our method first prunes the network by learning only the important connections. Next, we quantize the weights to enforce weight sharing, finally, we apply Huffman coding. After the first two steps we retrain the network to fine tune the remaining connections and the quantized centroids. Pruning, reduces the number of connections by 9x to 13x; Quantization then reduces the number of bits that represent each connection from 32 to 5. On the ImageNet dataset, our method reduced the storage required by AlexNet by 35x, from 240MB to 6.9MB, without loss of accuracy. Our method reduced the size of VGG-16 by 49x from 552MB to 11.3MB, again with no loss of accuracy. This allows fitting the model into on-chip SRAM cache rather than off-chip DRAM memory. Our compression method also facilitates the use of complex neural networks in mobile applications where application size and download bandwidth are constrained. Benchmarked on CPU, GPU and mobile GPU, compressed network has 3x to 4x layerwise speedup and 3x to 7x better energy efficiency." ] }
1903.09973
2944451619
The low-rank tensor approximation is very promising for the compression of deep neural networks. We propose a new simple and efficient iterative approach, which alternates low-rank factorization with a smart rank selection and fine-tuning. We demonstrate the efficiency of our method comparing to non-iterative ones. Our approach improves the compression rate while maintaining the accuracy for a variety of tasks.
Several approaches based on different low-rank approximation algorithms were proposed in @cite_18 @cite_10 . The authors of @cite_19 demonstrated the successful application of singular value decomposition (SVD) to fully connected layers. Further, the authors of @cite_18 found a way to decompose the 4-dimensional convolutional kernel tensor by applying a canonical polyadic (CP) decomposition. However, these approaches are able to compress only one or a few layers. Moreover, each layer requires its own rank, and rank selection has to be performed manually every time.
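A minimal sketch of SVD-based compression of a fully connected layer, in the spirit of @cite_19: the weight matrix is replaced by two thin factors, which reduces both parameters and multiply-adds. The synthetic, approximately low-rank weight matrix and the chosen rank are assumptions made for illustration.

```python
import numpy as np

# SVD compression of a fully connected layer: W ~= (U_r * S_r) @ V_r^T replaces
# one dense layer y = W x + b with two smaller ones, reducing parameters from
# m*n to r*(m + n) when the rank r is small.
rng = np.random.default_rng(0)
m, n, r = 1024, 1024, 128

# Synthetic weight matrix with a rapidly decaying spectrum (trained FC weights
# are typically close to low-rank; a purely random matrix would not be).
W = (rng.standard_normal((m, 64)) @ rng.standard_normal((64, n)) / np.sqrt(64)
     + 0.01 * rng.standard_normal((m, n))).astype(np.float32)

U, S, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :r] * S[:r]          # first factor:  m x r
B = Vt[:r, :]                 # second factor: r x n

x = rng.standard_normal(n).astype(np.float32)
y_full = W @ x
y_low = A @ (B @ x)           # two thin matrix-vector products instead of one dense one

params_full, params_low = m * n, r * (m + n)
print(f"parameters: {params_full} -> {params_low} ({params_full / params_low:.1f}x fewer), "
      f"relative error {np.linalg.norm(y_full - y_low) / np.linalg.norm(y_full):.3f}")
```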
{ "cite_N": [ "@cite_19", "@cite_18", "@cite_10" ], "mid": [ "2167215970", "2963048316", "" ], "abstract": [ "We present techniques for speeding up the test-time evaluation of large convolutional networks, designed for object recognition tasks. These models deliver impressive accuracy, but each image evaluation requires millions of floating point operations, making their deployment on smartphones and Internet-scale clusters problematic. The computation is dominated by the convolution operations in the lower layers of the model. We exploit the redundancy present within the convolutional filters to derive approximations that significantly reduce the required computation. Using large state-of-the-art models, we demonstrate speedups of convolutional layers on both CPU and GPU by a factor of 2 x, while keeping the accuracy within 1 of the original model.", "Abstract: We propose a simple two-step approach for speeding up convolution layers within large convolutional neural networks based on tensor decomposition and discriminative fine-tuning. Given a layer, we use non-linear least squares to compute a low-rank CP-decomposition of the 4D convolution kernel tensor into a sum of a small number of rank-one tensors. At the second step, this decomposition is used to replace the original convolutional layer with a sequence of four convolutional layers with small kernels. After such replacement, the entire network is fine-tuned on the training data using standard backpropagation process. We evaluate this approach on two CNNs and show that it is competitive with previous approaches, leading to higher obtained CPU speedups at the cost of lower accuracy drops for the smaller of the two networks. Thus, for the 36-class character classification CNN, our approach obtains a 8.5x CPU speedup of the whole network with only minor accuracy drop (1 from 91 to 90 ). For the standard ImageNet architecture (AlexNet), the approach speeds up the second convolution layer by a factor of 4x at the cost of @math increase of the overall top-5 classification error.", "" ] }
1903.09973
2944451619
The low-rank tensor approximation is very promising for the compression of deep neural networks. We propose a new simple and efficient iterative approach, which alternates low-rank factorization with a smart rank selection and fine-tuning. We demonstrate the efficiency of our method comparing to non-iterative ones. Our approach improves the compression rate while maintaining the accuracy for a variety of tasks.
Another way to compress a whole network was introduced in @cite_21 . The approach is automated: the authors combined two different decompositions to compress both fully connected and convolutional layers. To compress fully connected layers they adopted the approach of @cite_19 and applied SVD. For convolutional layers, they applied a Tucker decomposition @cite_16 . Unlike @cite_18 @cite_10 , the authors found a way to select ranks automatically, without any manual search: ranks are determined by the global analytic solution of variational Bayesian matrix factorization (VBMF) @cite_11 . We found that, for deep networks, the global analytic VBMF provides ranks for which it is difficult to restore the initial accuracy by fine-tuning. In our algorithm, we use the global analytic EVBMF only to select extremal ranks, which are subsequently weakened.
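For concreteness, the following sketch shows only the structure of a Tucker-2-compressed convolutional layer (1x1 conv, small KxK core conv, 1x1 conv), with the ranks assumed to be given, e.g., by VBMF; computing the factors from the original kernel and the subsequent fine-tuning are omitted, and the layer sizes are illustrative.

```python
import torch
import torch.nn as nn

# Structure of a Tucker-2 compressed convolution: a Cout x Cin x K x K kernel
# with (input, output) ranks (r_in, r_out) is replaced by a 1x1 conv (channel
# reduction), a small K x K conv on the core, and a 1x1 conv (channel expansion).
def tucker2_conv(c_in: int, c_out: int, k: int, r_in: int, r_out: int,
                 stride: int = 1, padding: int = 0) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(c_in, r_in, kernel_size=1, bias=False),        # project input channels
        nn.Conv2d(r_in, r_out, kernel_size=k, stride=stride,
                  padding=padding, bias=False),                   # core K x K convolution
        nn.Conv2d(r_out, c_out, kernel_size=1),                   # restore output channels
    )

original = nn.Conv2d(256, 256, kernel_size=3, padding=1)
compressed = tucker2_conv(256, 256, k=3, r_in=64, r_out=64, padding=1)

def n_params(module): return sum(p.numel() for p in module.parameters())
x = torch.randn(1, 256, 32, 32)
print(original(x).shape, compressed(x).shape)                     # same output shape
print(f"{n_params(original)} -> {n_params(compressed)} parameters")
```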
{ "cite_N": [ "@cite_18", "@cite_21", "@cite_19", "@cite_16", "@cite_10", "@cite_11" ], "mid": [ "2963048316", "2962988160", "2167215970", "1963826206", "", "2115120746" ], "abstract": [ "Abstract: We propose a simple two-step approach for speeding up convolution layers within large convolutional neural networks based on tensor decomposition and discriminative fine-tuning. Given a layer, we use non-linear least squares to compute a low-rank CP-decomposition of the 4D convolution kernel tensor into a sum of a small number of rank-one tensors. At the second step, this decomposition is used to replace the original convolutional layer with a sequence of four convolutional layers with small kernels. After such replacement, the entire network is fine-tuned on the training data using standard backpropagation process. We evaluate this approach on two CNNs and show that it is competitive with previous approaches, leading to higher obtained CPU speedups at the cost of lower accuracy drops for the smaller of the two networks. Thus, for the 36-class character classification CNN, our approach obtains a 8.5x CPU speedup of the whole network with only minor accuracy drop (1 from 91 to 90 ). For the standard ImageNet architecture (AlexNet), the approach speeds up the second convolution layer by a factor of 4x at the cost of @math increase of the overall top-5 classification error.", "Abstract: Although the latest high-end smartphone has powerful CPU and GPU, running deeper convolutional neural networks (CNNs) for complex tasks such as ImageNet classification on mobile devices is challenging. To deploy deep CNNs on mobile devices, we present a simple and effective scheme to compress the entire CNN, which we call one-shot whole network compression. The proposed scheme consists of three steps: (1) rank selection with variational Bayesian matrix factorization, (2) Tucker decomposition on kernel tensor, and (3) fine-tuning to recover accumulated loss of accuracy, and each step can be easily implemented using publicly available tools. We demonstrate the effectiveness of the proposed scheme by testing the performance of various compressed CNNs (AlexNet, VGGS, GoogLeNet, and VGG-16) on the smartphone. Significant reductions in model size, runtime, and energy consumption are obtained, at the cost of small loss in accuracy. In addition, we address the important implementation level issue on 1?1 convolution, which is a key operation of inception module of GoogLeNet as well as CNNs compressed by our proposed scheme.", "We present techniques for speeding up the test-time evaluation of large convolutional networks, designed for object recognition tasks. These models deliver impressive accuracy, but each image evaluation requires millions of floating point operations, making their deployment on smartphones and Internet-scale clusters problematic. The computation is dominated by the convolution operations in the lower layers of the model. We exploit the redundancy present within the convolutional filters to derive approximations that significantly reduce the required computation. Using large state-of-the-art models, we demonstrate speedups of convolutional layers on both CPU and GPU by a factor of 2 x, while keeping the accuracy within 1 of the original model.", "The model for three-mode factor analysis is discussed in terms of newer applications of mathematical processes including a type of matrix process termed the Kronecker product and the definition of combination variables. 
Three methods of analysis to a type of extension of principal components analysis are discussed. Methods II and III are applicable to analysis of data collected for a large sample of individuals. An extension of the model is described in which allowance is made for unique variance for each combination variable when the data are collected for a large sample of individuals.", "", "The variational Bayesian (VB) approach is one of the best tractable approximations to the Bayesian estimation, and it was demonstrated to perform well in many applications. However, its good performance was not fully understood theoretically. For example, VB sometimes produces a sparse solution, which is regarded as a practical advantage of VB, but such sparsity is hardly observed in the rigorous Bayesian estimation. In this paper, we focus on probabilistic PCA and give more theoretical insight into the empirical success of VB. More specifically, for the situation where the noise variance is unknown, we derive a sufficient condition for perfect recovery of the true PCA dimensionality in the large-scale limit when the size of an observed matrix goes to infinity. In our analysis, we obtain bounds for a noise variance estimator and simple closed-form solutions for other parameters, which themselves are actually very useful for better implementation of VB-PCA." ] }
1903.09973
2944451619
The low-rank tensor approximation is very promising for the compression of deep neural networks. We propose a new simple and efficient iterative approach, which alternates low-rank factorization with a smart rank selection and fine-tuning. We demonstrate the efficiency of our method comparing to non-iterative ones. Our approach improves the compression rate while maintaining the accuracy for a variety of tasks.
The CP decomposition used by @cite_18 is a special case of the Tucker decomposition in which the core tensor is constrained to be superdiagonal. In our approach, we use the Tucker-2 decomposition. To compress fully connected layers, we adopt SVD as proposed in @cite_19 .
{ "cite_N": [ "@cite_19", "@cite_18" ], "mid": [ "2167215970", "2963048316" ], "abstract": [ "We present techniques for speeding up the test-time evaluation of large convolutional networks, designed for object recognition tasks. These models deliver impressive accuracy, but each image evaluation requires millions of floating point operations, making their deployment on smartphones and Internet-scale clusters problematic. The computation is dominated by the convolution operations in the lower layers of the model. We exploit the redundancy present within the convolutional filters to derive approximations that significantly reduce the required computation. Using large state-of-the-art models, we demonstrate speedups of convolutional layers on both CPU and GPU by a factor of 2 x, while keeping the accuracy within 1 of the original model.", "Abstract: We propose a simple two-step approach for speeding up convolution layers within large convolutional neural networks based on tensor decomposition and discriminative fine-tuning. Given a layer, we use non-linear least squares to compute a low-rank CP-decomposition of the 4D convolution kernel tensor into a sum of a small number of rank-one tensors. At the second step, this decomposition is used to replace the original convolutional layer with a sequence of four convolutional layers with small kernels. After such replacement, the entire network is fine-tuned on the training data using standard backpropagation process. We evaluate this approach on two CNNs and show that it is competitive with previous approaches, leading to higher obtained CPU speedups at the cost of lower accuracy drops for the smaller of the two networks. Thus, for the 36-class character classification CNN, our approach obtains a 8.5x CPU speedup of the whole network with only minor accuracy drop (1 from 91 to 90 ). For the standard ImageNet architecture (AlexNet), the approach speeds up the second convolution layer by a factor of 4x at the cost of @math increase of the overall top-5 classification error." ] }
1903.10146
2924726114
We propose a universal image reconstruction method to represent detailed images purely from a binary sparse edge and flat color domain. Inspired by the procedure of painting, our framework, based on a generative adversarial network, consists of three phases: the Imitation Phase aims at initializing the networks, followed by the Generating Phase to reconstruct preliminary images. Moreover, the Refinement Phase is utilized to fine-tune the preliminary images into final outputs with details. This framework allows our model to generate abundant high-frequency details from sparse input information. We also explore the defects of disentangling the style latent space implicitly from images, and demonstrate that the explicit color domain in our model performs better on controllability and interpretability. In our experiments, we achieve outstanding results on reconstructing realistic images and translating hand-drawn drafts into satisfactory paintings. Besides, within the domain of edge-to-image translation, our model PI-REC outperforms existing state-of-the-art methods on evaluations of realism and accuracy, both quantitatively and qualitatively.
Sketch-to-image (S2I) synthesis. The main methods in the S2I synthesis domain can be divided into two groups: indirect retrieval and direct synthesis. Sketch-Based Image Retrieval (SBIR) attempts to bridge the domain gap between features extracted from sketches and photos @cite_34 @cite_6 @cite_8 @cite_49 . However, bag-of-words models built on large sets of extracted features @cite_5 struggle to match edges with unaligned hand-drawn sketches. Cross-modal retrieval using deep neural networks has been applied to the S2I synthesis problem, enabling instance-level @cite_3 @cite_46 or category-level @cite_43 @cite_18 S2I retrieval. Nevertheless, it is challenging for SBIR to perform pixel-level synthesis or to take style as an input, owing to the inherent limitations of retrieval. Scribbler @cite_26 successfully introduces GANs into the S2I synthesis field without retrieval, using a dense sketch and color strokes as inputs. However, color strokes as the style input confuse the network about which area to colorize when the content input is sparse. SketchyGAN @cite_0 takes a truly sparse sketch as input, but its style cannot be user-defined.
{ "cite_N": [ "@cite_18", "@cite_26", "@cite_8", "@cite_6", "@cite_3", "@cite_0", "@cite_43", "@cite_49", "@cite_5", "@cite_46", "@cite_34" ], "mid": [ "2585630030", "2560481159", "1975771248", "1983556963", "2467281799", "2963561004", "2963142510", "2153404544", "2921743338", "2466618734", "2048546747" ], "abstract": [ "Although Generative Adversarial Networks achieve state-of-the-art results on a variety of generative tasks, they are regarded as highly unstable and prone to miss modes. We argue that these bad behaviors of GANs are due to the very particular functional shape of the trained discriminators in high dimensional spaces, which can easily make training stuck or push probability mass in the wrong direction, towards that of higher concentration than that of the data generating distribution. We introduce several ways of regularizing the objective, which can dramatically stabilize the training of GAN models. We also show that our regularizers can help the fair distribution of probability mass across the modes of the data generating distribution, during the early phases of training and thus providing a unified solution to the missing modes problem.", "Several recent works have used deep convolutional networks to generate realistic imagery. These methods sidestep the traditional computer graphics rendering pipeline and instead generate imagery at the pixel level by learning from large collections of photos (e.g. faces or bedrooms). However, these methods are of limited utility because it is difficult for a user to control what the network produces. In this paper, we propose a deep adversarial image synthesis architecture that is conditioned on sketched boundaries and sparse color strokes to generate realistic cars, bedrooms, or faces. We demonstrate a sketch based image synthesis system which allows users to scribble over the sketch to indicate preferred color for objects. Our network can then generate convincing images that satisfy both the color and the sketch constraints of user. The network is feed-forward which allows users to see the effect of their edits in real time. We compare to recent work on sketch to image synthesis and show that our approach generates more realistic, diverse, and controllable outputs. The architecture is also effective at user-guided colorization of grayscale images.", "We address the problem of fast, large scale sketch-based image retrieval, searching in a database of over one million images. We show that current retrieval methods do not scale well towards large databases in the context of interactively supervised search and propose two different approaches for which we objectively evaluate that they significantly outperform existing approaches. The proposed descriptors are constructed such that both the full color image and the sketch undergo exactly the same preprocessing steps. We first search for an image with similar structure, analyzing gradient orientations. Then, best matching images are clustered based on dominant color distributions, to offset the lack of color-based decision during the initial search. Overall, the query results demonstrate that the system offers intuitive access to large image databases using a user-friendly sketch-and-browse interface.", "In this paper, we showcase the MindFinder system, which is an interactive sketch-based image search engine. 
Different from existing work, most of which is limited to a small scale database or only enables single modality input, MindFinder is a sketch-based multimodal search engine for million-level database. It enables users to sketch major curves of the target image in their mind, and also supports tagging and coloring operations to better express their search intentions. Owning to a friendly interface, our system supports multiple actions, which help users to flexibly design their queries. After each operation, top returned images are updated in real time, based on which users could interactively refine their initial thoughts until ideal images are returned. The novelty of the MindFinder system includes the following two aspects: 1) A multimodal searching scheme is proposed to retrieve images which meet users' requirements not only in structure, but also in semantic meaning and color tone. 2) An indexing framework is designed to make MindFinder scalable in terms of database size, memory cost, and response time. By scaling up the database to more than two million images, MindFinder not only helps users to easily present whatever they are imagining, but also has the potential to retrieve the most desired images in their mind.", "We investigate the problem of fine-grained sketch-based image retrieval (SBIR), where free-hand human sketches are used as queries to perform instance-level retrieval of images. This is an extremely challenging task because (i) visual comparisons not only need to be fine-grained but also executed cross-domain, (ii) free-hand (finger) sketches are highly abstract, making fine-grained matching harder, and most importantly (iii) annotated cross-domain sketch-photo datasets required for training are scarce, challenging many state-of-the-art machine learning techniques. In this paper, for the first time, we address all these challenges, providing a step towards the capabilities that would underpin a commercial sketch-based image retrieval application. We introduce a new database of 1,432 sketchphoto pairs from two categories with 32,000 fine-grained triplet ranking annotations. We then develop a deep tripletranking model for instance-level SBIR with a novel data augmentation and staged pre-training strategy to alleviate the issue of insufficient fine-grained training data. Extensive experiments are carried out to contribute a variety of insights into the challenges of data sufficiency and over-fitting avoidance when training deep networks for finegrained cross-domain ranking tasks.", "Synthesizing realistic images from human drawn sketches is a challenging problem in computer graphics and vision. Existing approaches either need exact edge maps, or rely on retrieval of existing photographs. In this work, we propose a novel Generative Adversarial Network (GAN) approach that synthesizes plausible images from 50 categories including motorcycles, horses and couches. We demonstrate a data augmentation technique for sketches which is fully automatic, and we show that the augmented data is helpful to our task. We introduce a new network building block suitable for both the generator and discriminator which improves the information flow by injecting the input image at multiple scales. Compared to state-of-the-art image translation methods, our approach generates more realistic images and achieves significantly higher Inception Scores.", "Deep generative models provide powerful tools for distributions over complicated manifolds, such as those of natural images. 
But many of these methods, including generative adversarial networks (GANs), can be difficult to train, in part because they are prone to mode collapse, which means that they characterize only a few modes of the true distribution. To address this, we introduce VEEGAN, which features a reconstructor network, reversing the action of the generator by mapping from data to noise. Our training objective retains the original asymptotic consistency guarantee of GANs, and can be interpreted as a novel autoencoder loss over the noise. In sharp contrast to a traditional autoencoder over data points, VEEGAN does not require specifying a loss function over the data, but rather only over the representations, which are standard normal by assumption. On an extensive set of synthetic and real world image datasets, VEEGAN indeed resists mode collapsing to a far greater extent than other recent GAN variants, and produces more realistic samples.", "We introduce a benchmark for evaluating the performance of large-scale sketch-based image retrieval systems. The necessary data are acquired in a controlled user study where subjects rate how well given sketch image pairs match. We suggest how to use the data for evaluating the performance of sketch-based image retrieval systems. The benchmark data as well as the large image database are made publicly available for further studies of this type. Furthermore, we develop new descriptors based on the bag-of-features approach and use the benchmark to demonstrate that they significantly outperform other descriptors in the literature.", "Most conditional generation tasks expect diverse outputs given a single conditional context. However, conditional generative adversarial networks (cGANs) often focus on the prior conditional information and ignore the input noise vectors, which contribute to the output variations. Recent attempts to resolve the mode collapse issue for cGANs are usually task-specific and computationally expensive. In this work, we propose a simple yet effective regularization term to address the mode collapse issue for cGANs. The proposed method explicitly maximizes the ratio of the distance between generated images with respect to the corresponding latent codes, thus encouraging the generators to explore more minor modes during training. This mode seeking regularization term is readily applicable to various conditional generation tasks without imposing training overhead or modifying the original network structures. We validate the proposed algorithm on three conditional image synthesis tasks including categorical generation, image-to-image translation, and text-to-image synthesis with different baseline models. Both qualitative and quantitative results demonstrate the effectiveness of the proposed regularization method for improving diversity without loss of quality.", "We present the Sketchy database, the first large-scale collection of sketch-photo pairs. We ask crowd workers to sketch particular photographic objects sampled from 125 categories and acquire 75,471 sketches of 12,500 objects. The Sketchy database gives us fine-grained associations between particular photos and sketches, and we use this to train cross-domain convolutional networks which embed sketches and photographs in a common feature space. We use our database as a benchmark for fine-grained retrieval and show that our learned representation significantly outperforms both hand-crafted features as well as deep features trained for sketch or photo classification. 
Beyond image retrieval, we believe the Sketchy database opens up new opportunities for sketch and image understanding and synthesis.", "Retrieving images to match with a hand-drawn sketch query is a highly desired feature, especially with the popularity of devices with touch screens. Although query-by-sketch has been extensively studied since 1990s, it is still very challenging to build a real-time sketch-based image search engine on a large-scale database due to the lack of effective and efficient matching indexing solutions. The explosive growth of web images and the phenomenal success of search techniques have encouraged us to revisit this problem and target at solving the problem of web-scale sketch-based image retrieval. In this work, a novel index structure and the corresponding raw contour-based matching algorithm are proposed to calculate the similarity between a sketch query and natural images, and make sketch-based image retrieval scalable to millions of images. The proposed solution simultaneously considers storage cost, retrieval accuracy, and efficiency, based on which we have developed a real-time sketch-based image search engine by indexing more than 2 million images. Extensive experiments on various retrieval tasks (basic shape search, specific image search, and similar image search) show better accuracy and efficiency than state-of-the-art methods." ] }
1903.10146
2924726114
We propose a universal image reconstruction method to represent detailed images purely from a binary sparse edge and flat color domain. Inspired by the procedure of painting, our framework, based on a generative adversarial network, consists of three phases: the Imitation Phase aims at initializing the networks, followed by the Generating Phase to reconstruct preliminary images. Moreover, the Refinement Phase is utilized to fine-tune the preliminary images into final outputs with details. This framework allows our model to generate abundant high-frequency details from sparse input information. We also explore the defects of disentangling the style latent space implicitly from images, and demonstrate that the explicit color domain in our model performs better on controllability and interpretability. In our experiments, we achieve outstanding results on reconstructing realistic images and translating hand-drawn drafts into satisfactory paintings. Besides, within the domain of edge-to-image translation, our model PI-REC outperforms existing state-of-the-art methods on evaluations of realism and accuracy, both quantitatively and qualitatively.
Image-to-image (I2I) translation. Isola et al. @cite_29 propose Pix2Pix, the first unified framework for I2I translation, which utilizes conditional GANs (cGANs) @cite_1 and takes a semantic label map or an edge map as input. It handles diverse image translation tasks, including edge-to-image (E2I) translation. Building on these findings, CycleGAN @cite_42 introduced a cycle-consistency loss and exploits cross-domain mapping for unsupervised training (a minimal sketch of this loss follows this record). However, the methods above are only suitable for one-to-one domain translation. Recent research focuses on multi-modal I2I translation @cite_39 @cite_25 @cite_36 , which can transform images across multiple domains. A random latent style code is merged into the structure of pix2pixHD @cite_9 to generate diverse styles, but the results remain uncontrollable. BicycleGAN @cite_4 adds a style-vector bijection and a self-cycle structure to the generator in order to output diverse reconstructions. However, the output style obtained from an example-guided style image is inaccurate in complex cases. We explore these defects further in Section . Unsupervised multi-modal I2I translation methods @cite_48 have been proposed for unpaired datasets. In our setting of reconstruction from sparse information, however, the edges we need can be extracted from the original images to form paired datasets. Thus, adopting unsupervised training in our work would be redundant.
{ "cite_N": [ "@cite_4", "@cite_36", "@cite_48", "@cite_29", "@cite_42", "@cite_1", "@cite_9", "@cite_39", "@cite_25" ], "mid": [ "", "2963344645", "2885192629", "", "2962793481", "2125389028", "2963800363", "2768626898", "2905393998" ], "abstract": [ "", "The past year alone has seen unprecedented leaps in the area of learning-based image translation, namely Cycle-GAN, by But experiments so far have been tailored to merely two domains at a time, and scaling them to more would require an quadratic number of models to be trained. And with two-domain models taking days to train on current hardware, the number of domains quickly becomes limited by the time and resources required to process them. In this paper, we propose a multi-component image translation model and training scheme which scales linearly - both in resource consumption and time required - with the number of domains. We demonstrate its capabilities on a dataset of paintings by 14 different artists and on images of the four different seasons in the Alps. Note that 14 data groups would need (14 choose 2) = 91 different CycleGAN models: a total of 182 generator discriminator pairs; whereas our model requires only 14 generator discriminator pairs.", "Image-to-image translation aims to learn the mapping between two visual domains. There are two main challenges for many applications: (1) the lack of aligned training pairs and (2) multiple possible outputs from a single input image. In this work, we present an approach based on disentangled representation for producing diverse outputs without paired training images. To achieve diversity, we propose to embed images onto two spaces: a domain-invariant content space capturing shared information across domains and a domain-specific attribute space. Using the disentangled features as inputs greatly reduces mode collapse. To handle unpaired training data, we introduce a novel cross-cycle consistency loss. Qualitative results show that our model can generate diverse and realistic images on a wide range of tasks. We validate the effectiveness of our approach through extensive evaluation.", "", "Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. We present an approach for learning to translate an image from a source domain X to a target domain Y in the absence of paired examples. Our goal is to learn a mapping G : X → Y such that the distribution of images from G(X) is indistinguishable from the distribution Y using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping F : Y → X and introduce a cycle consistency loss to push F(G(X)) ≈ X (and vice versa). Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. Quantitative comparisons against several prior methods demonstrate the superiority of our approach.", "Generative Adversarial Nets [8] were recently introduced as a novel way to train generative models. In this work we introduce the conditional version of generative adversarial nets, which can be constructed by simply feeding the data, y, we wish to condition on to both the generator and discriminator. We show that this model can generate MNIST digits conditioned on class labels. 
We also illustrate how this model could be used to learn a multi-modal model, and provide preliminary examples of an application to image tagging in which we demonstrate how this approach can generate descriptive tags which are not part of training labels.", "We present a new method for synthesizing high-resolution photo-realistic images from semantic label maps using conditional generative adversarial networks (conditional GANs). Conditional GANs have enabled a variety of applications, but the results are often limited to low-resolution and still far from realistic. In this work, we generate 2048 A— 1024 visually appealing results with a novel adversarial loss, as well as new multi-scale generator and discriminator architectures. Furthermore, we extend our framework to interactive visual manipulation with two additional features. First, we incorporate object instance segmentation information, which enables object manipulations such as removing adding objects and changing the object category. Second, we propose a method to generate diverse results given the same input, allowing users to edit the object appearance interactively. Human opinion studies demonstrate that our method significantly outperforms existing methods, advancing both the quality and the resolution of deep image synthesis and editing.", "Recent studies have shown remarkable success in image-to-image translation for two domains. However, existing approaches have limited scalability and robustness in handling more than two domains, since different models should be built independently for every pair of image domains. To address this limitation, we propose StarGAN, a novel and scalable approach that can perform image-to-image translations for multiple domains using only a single model. Such a unified model architecture of StarGAN allows simultaneous training of multiple datasets with different domains within a single network. This leads to StarGAN's superior quality of translated images compared to existing models as well as the novel capability of flexibly translating an input image to any desired target domain. We empirically demonstrate the effectiveness of our approach on a facial attribute transfer and a facial expression synthesis tasks.", "Cross-domain mapping has been a very active topic in recent years. Given one image, its main purpose is to translate it to the desired target domain, or multiple domains in the case of multiple labels. This problem is highly challenging due to three main reasons: (i) unpaired datasets, (ii) multiple attributes, and (iii) the multimodality associated with the translation. Most of the existing state-of-the-art has focused only on two reasons, i.e. producing disentangled representations from unpaired datasets in a one-to-one domain translation or producing multiple unimodal attributes from unpaired datasets. In this work, we propose a joint framework of diversity and multi-mapping image-to-image translations, using a single generator to conditionally produce countless and unique fake images that hold the underlying characteristics of the source image. Extensive experiments over different datasets demonstrate the effectiveness of our proposed approach with comparisons to the state-of-the-art in both multi-label and multimodal problems. Additionally, our method is able to generalize under different scenarios: continuous style interpolation, continuous label interpolation, and multi-label mapping." ] }
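The cycle-consistency loss mentioned in the related-work paragraph of the record above couples two generators G : X -> Y and F : Y -> X so that translating an image to the other domain and back approximately recovers it. Below is a minimal PyTorch-style sketch with placeholder generators and an assumed weighting factor; it illustrates the standard formulation rather than the exact setup of any cited paper.

import torch
import torch.nn.functional as nnf

def cycle_consistency_loss(G, F, x, y, lam=10.0):
    # L_cyc = ||F(G(x)) - x||_1 + ||G(F(y)) - y||_1, scaled by lam.
    # The adversarial terms of the full objective are omitted here.
    loss_x = nnf.l1_loss(F(G(x)), x)  # x -> Y -> back to X should recover x
    loss_y = nnf.l1_loss(G(F(y)), y)  # y -> X -> back to Y should recover y
    return lam * (loss_x + loss_y)

# Toy usage with identity 'generators' and random image batches.
G = torch.nn.Identity()
F = torch.nn.Identity()
x = torch.randn(4, 3, 64, 64)
y = torch.randn(4, 3, 64, 64)
print(cycle_consistency_loss(G, F, x, y))  # zero for identity mappings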
1903.10180
2955366688
Data from software repositories have become an important foundation for the empirical study of software engineering processes. A recurring theme in the repository mining literature is the inference of developer networks capturing e.g. collaboration, coordination, or communication, from the commit history of projects. Most of the studied networks are based on the co-authorship of software artefacts defined at the level of files, modules, or packages. While this approach has led to insights into the social aspects of software development, it neglects detailed information on code changes and code ownership, e.g. which exact lines of code have been authored by which developers, that is contained in the commit log of software projects. Addressing this issue, we introduce git2net, a scalable python software that facilitates the extraction of fine-grained co-editing networks in large git repositories. It uses text mining techniques to analyse the detailed history of textual modifications within files. This information allows us to construct directed, weighted, and time-stamped networks, where a link signifies that one developer has edited a block of source code originally written by another developer. Our tool is applied in case studies of an Open Source and a commercial software project. We argue that it opens up a massive new source of high-resolution data on human collaboration patterns.
Given the large body of work using network analysis to study software development processes, we restrict our overview to related works that address the reconstruction of social networks from software repositories. A broader view on applications of graph-based data analysis and modelling techniques in empirical software engineering---including works on (technical) dependency networks that are outside the scope of our work---is, e.g., available in @cite_22 @cite_32 @cite_51 .
{ "cite_N": [ "@cite_51", "@cite_32", "@cite_22" ], "mid": [ "1821453161", "1968451194", "2032219610" ], "abstract": [ "Large collaborative software engineering projects are interesting examples for evolving complex systems. The complexity of these systems unfolds both in evolving software structures, as well as in the social dynamics and organization of development teams. Due to the adoption of Open Source practices and the increasing use of online support infrastructures, large-scale data sets covering both the social and technical dimension of collaborative software engineering processes are increasingly becoming available. In the analysis of these data, a growing number of studies employ a network perspective, using methods and abstractions from network science to generate insights about software engineering processes. Featuring a collection of inspiring works in this area, with this topical issue, we intend to give an overview of state-of-the-art research. We hope that this collection of articles will stimulate downstream applications of network-based data mining techniques in empirical software engineering.", "To improve software productivity and quality, software engineers are increasingly applying data mining algorithms to various software engineering tasks. However, mining SE data poses several challenges. The authors present various algorithms to effectively mine sequences, graphs, and text from such data.", "Suppose you're a software team manager who's responsible for delivering a software product by a specific date, and your team uses a code integration system (referred to as a build in IBM Rational Jazz and in this article) to integrate its work before delivery. When the build fails, your team needs to spend extra time diagnosing the integration issue and reworking code. As the manager, you suspect that your team failed to communicate about a code dependency, which broke the build. Your team needs to quickly disseminate information about its interdependent work to achieve a successful integration build. How can you understand your team's communication? Social-network analysis can give you insight into the team's communication patterns that might have caused the build's failure." ] }
1903.10180
2955366688
Data from software repositories have become an important foundation for the empirical study of software engineering processes. A recurring theme in the repository mining literature is the inference of developer networks capturing e.g. collaboration, coordination, or communication, from the commit history of projects. Most of the studied networks are based on the co-authorship of software artefacts defined at the level of files, modules, or packages. While this approach has led to insights into the social aspects of software development, it neglects detailed information on code changes and code ownership, e.g. which exact lines of code have been authored by which developers, that is contained in the commit log of software projects. Addressing this issue, we introduce git2net, a scalable python software that facilitates the extraction of fine-grained co-editing networks in large git repositories. It uses text mining techniques to analyse the detailed history of textual modifications within files. This information allows us to construct directed, weighted, and time-stamped networks, where a link signifies that one developer has edited a block of source code originally written by another developer. Our tool is applied in case studies of an Open Source and a commercial software project. We argue that it opens up a massive new source of high-resolution data on human collaboration patterns.
A number of studies use operational data on software projects to construct graphs or networks in which nodes represent developers while links capture social interactions and/or work dependencies between developers. To this end, a first line of work has used data that directly capture communication, e.g. via IRC channels @cite_40 , E-Mail exchanges @cite_4 @cite_23 @cite_5 @cite_6 @cite_13 , mailing lists @cite_50 , or communication via issue trackers @cite_1 @cite_43 @cite_53 @cite_8 @cite_48 .
{ "cite_N": [ "@cite_4", "@cite_8", "@cite_48", "@cite_53", "@cite_1", "@cite_6", "@cite_43", "@cite_40", "@cite_23", "@cite_50", "@cite_5", "@cite_13" ], "mid": [ "2076279155", "2158760041", "1995488392", "1975064998", "2086354305", "2005406219", "1515552913", "2103562041", "2100502494", "1988320117", "2152529876", "2162940346" ], "abstract": [ "Communication & Co-ordination activities are central to large software projects, but are difficult to observe and study in traditional (closed-source, commercial) settings because of the prevalence of informal, direct communication modes. OSS projects, on the other hand, use the internet as the communication medium,and typically conduct discussions in an open, public manner. As a result, the email archives of OSS projects provide a useful trace of the communication and co-ordination activities of the participants. However, there are various challenges that must be addressed before this data can be effectively mined. Once this is done, we can construct social networks of email correspondents, and begin to address some interesting questions. These include questions relating to participation in the email; the social status of different types of OSS participants; the relationship of email activity and commit activity (in the CVS repositories) and the relationship of social status with commit activity. In this paper, we begin with a discussion of our infrastructure (including a novel use of Scientific Workflow software) and then discuss our approach to mining the email archives; and finally we present some preliminary results from our data analysis.", "Efficient bug triaging procedures are an important precondition for successful collaborative software engineering projects. Triaging bugs can become a laborious task particularly in open source software (OSS) projects with a large base of comparably inexperienced part-time contributors. In this paper, we propose an efficient and practical method to identify valid bug reports which a) refer to an actual software bug, b) are not duplicates and c) contain enough information to be processed right away. Our classification is based on nine measures to quantify the social embeddedness of bug reporters in the collaboration network. We demonstrate its applicability in a case study, using a comprehensive data set of more than 700, 000 bug reports obtained from the Bugzilla installation of four major OSS communities, for a period of more than ten years. For those projects that exhibit the lowest fraction of valid bug reports, we find that the bug reporters' position in the collaboration network is a strong indicator for the quality of bug reports. Based on this finding, we develop an automated classification scheme that can easily be integrated into bug tracking platforms and analyze its performance in the considered OSS communities. A support vector machine (SVM) to identify valid bug reports based on the nine measures yields a precision of up to 90.3 with an associated recall of 38.9 . With this, we significantly improve the results obtained in previous case studies for an automated early identification of bugs that are eventually fixed. Furthermore, our study highlights the potential of using quantitative measures of social organization in collaborative software engineering. 
It also opens a broad perspective for the integration of social awareness in the design of support infrastructures.", "Social organization and division of labor crucially influence the performance of collaborative software engineering efforts. In this paper, we provide a quantitative analysis of the relation between social organization and performance in Gentoo, an Open Source community developing a Linux distribution. We study the structure and dynamics of collaborations as recorded in the project's bug tracking system over a period of ten years. We identify a period of increasing centralization after which most interactions in the community were mediated by a single central contributor. In this period of maximum centralization, the central contributor unexpectedly left the project, thus posing a significant challenge for the community. We quantify how the rise, the activity as well as the subsequent sudden dropout of this central contributor affected both the social organization and the bug handling performance of the Gentoo community. We analyze social organization from the perspective of network theory and augment our quantitative findings by interviews with prominent members of the Gentoo community which shared their personal insights.", "Open source software projects are characterized as self organizing and dynamic in which volunteers around the world primarily driven by self-motivation (and not necessarily monetary compensation) contribute and collaborate to a software product. In contrast to close source or proprietary software, the organizational structure and task allocation in an open source project setting is unstructured. Software project managers perform risk, threat and vulnerability analysis to gain insights into the organizational structure for de-risking or risk mitigation. For example, it is important for a project manager to have an understanding of critical employees, core team, subject matter experts, sub-groups, leaders and communication bridges. Software repositories such as defect tracking systems, versioning systems and mailing lists contains a wealth of valuable information that can be mined for solving practically useful software engineering tasks. In this paper, we present a systematic approach to mine defect tracking system for risk, threat and vulnerability analysis in a software project. We derive a collaboration network from a defect tracking system and apply social network analysis techniques to investigate the derived network for the purpose of risk and vulnerability analysis. We perform empirical analysis on bug report data of Mozilla Firefox project and present the results of our analysis. We demonstrate how important information pertaining to risk and vulnerability can be uncovered using network analysis techniques from static record keeping software archive such as the bug tracking system.", "Drawing on social network theories and previous studies, this research examines the dynamics of social network structures in open source software (OSS) teams. Three projects were selected from SourceForge.net in terms of their similarities as well as their differences. Monthly data were extracted from the bug tracking systems in order to achieve a longitudinal view of the interaction pattern of each project. Social network analysis was used to generate the indices of social structure. 
The finding suggests that the interaction pattern of OSS projects evolves from a single hub at the beginning to a core periphery model as the projects move forward.", "With the growing number of large scale software projects, software development and maintenance demands the participation of larger groups. Having a thorough understanding of the group of developers is critical for improving development and maintenance quality and reducing cost. In contrast to most commercial software endeavors, developers in open source software (OSS) projects enjoy more freedom to organize and contribute to a project in their own working style. Their interactions through various means in the project generate a latent developer social network (DSN). We have observed that developers and their relationships in these DSNs change continually under the influence of differences in the set of active developers and their changing activities. Revealing and understanding the structure and evolution of these social networks as well as their similarities and differences from other more general social networks (GSNs) is of value to our software engineering community, as it allows us to begin building an understanding of how well the findings from other fields based on GSNs apply to DSN. In this paper, we compare DSNs with popular GSNs such as Facebook, Twitter, Cyworld (a large social network in South Korea), and the Amazon recommendation network. We found, for instance, that while most social networks exhibit power law degree distributions, our DSNs do not. In addition, we also examine how DSNs evolve over time, highlighting how events within a project (such as a release of new software or the departure of prominent developers) impact the makeup of the DSNs, and observe the evolution of topological properties such as modularity and the paths of communities within these networks.", "This paper furthers inquiry into the social structure of free and open source software (FLOSS) teams by undertaking social network analysis across time. Contrary to expectations, we confirmed earlier findings of a wide distribution of centralizations even when examining the networks over time. The paper also provides empirical evidence that while change at the center of FLOSS projects is relatively uncommon, participation across the project communities is highly skewed, with many participants appearing for only one period. Surprisingly, large project teams are not more likely to undergo change at their centers.", "In this paper, we seek to shed light on how communication networks in geographically distributed projects evolve in order to address the limits of the modular design strategy. We collected data from a geographically distributed software development project covering 39 months of activity. Our analysis showed that over time a group of developers emerge as the liaisons between formal teams and geographical locations. In addition to handling the communication and coordination load across teams and locations, those engineers contributed the most to the development effort.", "A critical factor in work group coordination, communication has been studied extensively. Yet, we are missing objective evidence of the relationship between successful coordination outcome and communication structures. Using data from IBM's Jazz™ project, we study communication structures of development teams with high coordination needs. 
We conceptualize coordination outcome by the result of their code integration build processes (successful or failed) and study team communication structures with social network measures. Our results indicate that developer communication plays an important role in the quality of software integrations. Although we found that no individual measure could indicate whether a build will fail or succeed, we leveraged the combination of communication structure measures into a predictive model that indicates whether an integration will fail. When used for five project teams, our predictive model yielded recall values between 55 and 75 , and precision values between 50 to 76 .", "Open source software (OSS) development teams use electronic means, such as emails, instant messaging, or forums, to conduct open and public discussions. Researchers investigated mailing lists considering them as a hub for project communication. Prior work focused on specific aspects of emails, for example the handling of patches, traceability concerns, or social networks. This led to insights pertaining to the investigated aspects, but not to a comprehensive view of what developers communicate about. Our objective is to increase the understanding of development mailing lists communication. We quantitatively and qualitatively analyzed a sample of 506 email threads from the development mailing list of a major OSS project, Lucene. Our investigation reveals that implementation details are discussed only in about 35 of the threads, and that a range of other topics is discussed. Moreover, core developers participate in less than 75 of the threads. We observed that the development mailing list is not the main player in OSS project communication, as it also includes other channels such as the issue repository.", "Source code is the target and final outcome of software development. By focusing our research and analysis on source code only, we risk forgetting that software is the product of human efforts, where communication plays a pivotal role. One of the most used communications means are emails, which have become vital for any distributed development project. Analyzing email archives is non-trivial, due to the noisy and unstructured nature of emails, the vast amounts of information, the unstandardized storage systems, and the gap with development tools. We present Miler, a toolset that allows the exploration of this form of communication, in the context of software maintenance and evolution. With Miler we can retrieve data from mailing list repositories in different formats, model emails as first-class entities, and transparently store them in databases. Miler offers tools and support for navigating the content, manually labelling emails with discussed source code entities, automatically linking emails to source code, measuring code entities' popularity in mailing lists, exposing structured content in the unstructured content, and integrating email communication in an IDE.", "In distributed software development synchronized actions are important for completion of complex, interleaved tasks that require the abilities of multiple people. Synchronous development is manifested when file commits by two developers are close together in time and modify the same files. Here we propose quantitative methods for identifying synchronized activities in OSS projects, and use them to relate developer synchronization with effective productivity and communication. 
In particular, we define co-commit bursts and communication bursts, as intervals of time rich in co-commit and correspondence activities, respectively, and construct from them smoothed time series which can be, subsequently, correlated to discover synchrony. We find that synchronized co-commits between developers are associated with their effective productivity and coordination: during co-commit bursts, vs. at other times, the project size grows faster even though the overall coding effort slows down. We also find strong correlation between synchronized co-commits and communication, that is, for pairs of developers, more co-commit bursts are accompanied with more communication bursts, and their relationship follows closely a linear model. In addition, synchronized co-commits and communication activities occur very close together in time, thus, they can also be thought of as synchronizing each other. This study can help with better understanding collaborative mechanisms in OSS and the role communication plays in distributed software engineering." ] }
1903.10180
2955366688
Data from software repositories have become an important foundation for the empirical study of software engineering processes. A recurring theme in the repository mining literature is the inference of developer networks capturing e.g. collaboration, coordination, or communication, from the commit history of projects. Most of the studied networks are based on the co-authorship of software artefacts defined at the level of files, modules, or packages. While this approach has led to insights into the social aspects of software development, it neglects detailed information on code changes and code ownership, e.g. which exact lines of code have been authored by which developers, that is contained in the commit log of software projects. Addressing this issue, we introduce git2net, a scalable python software that facilitates the extraction of fine-grained co-editing networks in large git repositories. It uses text mining techniques to analyse the detailed history of textual modifications within files. This information allows us to construct directed, weighted, and time-stamped networks, where a link signifies that one developer has edited a block of source code originally written by another developer. Our tool is applied in case studies of an Open Source and a commercial software project. We argue that it opens up a massive new source of high-resolution data on human collaboration patterns.
While data on direct developer communication facilitate the construction of meaningful social networks, such data are often not available, e.g. due to privacy concerns. To address such settings, researchers have developed methods to infer collaboration networks from developer actions recorded in code repositories like CVS, SVN, or git. A common approach starts from networks that map the relation between a developer and the artefacts (i.e. files, modules, binaries, etc.) that he or she contributed to @cite_7 @cite_9 @cite_2 @cite_19 . The resulting directed bipartite developer-artefact networks @cite_15 can then be projected onto co-authorship networks, where an undirected link between two developers @math and @math indicates that @math and @math have modified at least one common artefact (a short code sketch of this projection follows this record). The authors of @cite_46 @cite_44 have studied co-change based on a large corpus of CVS repositories of Open Source Software projects.
{ "cite_N": [ "@cite_7", "@cite_46", "@cite_9", "@cite_44", "@cite_19", "@cite_2", "@cite_15" ], "mid": [ "2070321219", "1970143603", "2145574830", "2147006888", "2020681151", "2143177027", "" ], "abstract": [ "The practice of software development can likely be improved if an externalized model of each programmer's knowledge of a particular code base is available. Some tools already assume a useful form of such a model can be created from data collected during development, such as expertise recommenders that use information about who has changed each file to suggest who might answer questions about particular parts of a system. In this paper, we report on an empirical study that investigates whether a programmer's activity can be used to build a model of what a programmer knows about a code base. In this study, nineteen professional Java programmers completed a series of questionnaires about the code on which they were working. These questionnaires were generated automatically and asked about program elements a programmer had worked with frequently and recently and ones that he had not. We found that a degree of interest model based on this frequency and recency of interaction can often indicate the parts of the code base for which the programmer has knowledge. We also determined a number of factors that may be used to improve the model, such as authorship of program elements, the role of elements, and the task being performed.", "In this paper we investigate the relationship between class dependency and change propagation in Java software. By analyzing 35 large Open Source Java projects, we find that in the majority of the projects more than half of the dependencies are never involved in change propagation. Furthermore, our analysis shows that only a few dependencies are transmitting the majority of change propagation events. An additional analysis reveals that this concentration cannot be explained by the different ages of the dependencies. The conclusion is that the dependency structure alone is a poor measure for the change dynamics. This contrasts with current literature.", "Ownership is a key aspect of large-scale software development. We examine the relationship between different ownership measures and software failures in two large software projects: Windows Vista and Windows 7. We find that in all cases, measures of ownership such as the number of low-expertise developers, and the proportion of ownership for the top owner have a relationship with both pre-release faults and post-release failures. We also empirically identify reasons that low-expertise developers make changes to components and show that the removal of low-expertise contributions dramatically decreases the performance of contribution based defect prediction. Finally we provide recommendations for source code change policies and utilization of resources such as code inspections based on our results.", "Technological artifacts such as software often comprise a large number of modules; more than twenty thousand in the case of the Java software Eclipse. While on the micro-level this system is modular, how should the building blocks be arranged on the macro-level? In the literature this question has mainly been addressed with the same arguments already used to advocate modularity on the micro-level: Dependencies should be minimized as they impede optimization and flexibility of the system. In contrast to this I argue that along with a change from the micro view to the macro view also the argumentation has to change. 
In this paper, I analyze the theoretical ramifications of dependency between modules on the macro-level. In particular, I argue that macro-level dependencies are first weak dependencies, and second, foster flexibility and change efficiency. This argumentation is supported by an empirical analysis of 35 software architectures. Data show that dependency relations seldom cause change propagation. Furthermore, high dependency in the architecture negatively correlates with the occurrence of large change events. Thus, higher interdependency is associated with higher evolvability and more efficient change.", "Building non-trivial software is a social endeavor. Therefore, understanding the social network of developers is key to the study of software development organizations. We present a graph representation of the commit behavior of developers within the Apache Software Foundation for 2010 and 2011. Relationships between developers in the network represent collaborative commit behavior. Several similarity and summary metrics have been pre-calculated. The data, along with the tools that were used to create it and some further discussion, can be found at: http: sequoia.cs.byu.edu lab ?page=artifacts apacheGraphs.", "In a traditional sense, ownership determines rights and duties in regard to an object, for example a property. The owner of source code usually refers to the person that invented the code. However, larger code artifacts, such as files, are usually composed by multiple engineers contributing to the entity over time through a series of changes. Frequently, the person with the highest contribution, e.g. the most number of code changes, is defined as the code owner and takes responsibility for it. Thus, code ownership relates to the knowledge engineers have about code. Lacking responsibility and knowledge about code can reduce code quality. In an earlier study, [1] showed that Windows binaries that lacked clear code ownership were more likely to be defect prone. However recommendations for large artifacts such as binaries are usually not actionable. E.g. changing the concept of binaries and refactoring them to ensure strong ownership would violate system architecture principles. A recent replication study by [2] on open source software replicate the original results and lead to doubts about the general concept of ownership impacting code quality. In this paper, we replicated and extended the previous two ownership studies [1, 2] and reflect on their findings. Further, we define several new ownership metrics to investigate the dependency between ownership and code quality on file and directory level for 4 major Microsoft products. The results confirm the original findings by [1] that code ownership correlates with code quality. Using new and refined code ownership metrics we were able to classify source files that contained at least one bug with a median precision of 0.74 and a median recall of 0.38. On directory level, we achieve a precision of 0.76 and a recall of 0.60.", "" ] }
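A short, hedged illustration of the bipartite projection described in the related-work paragraph of the record above: developer-artefact contributions form a bipartite graph, which networkx can project onto a weighted co-authorship network among developers. The contribution list below is made up for the example; in practice it would be extracted from a repository's commit log.

import networkx as nx
from networkx.algorithms import bipartite

# Toy (developer, file) contributions, e.g. one entry per modified file per commit.
contributions = [
    ('alice', 'src/db.py'), ('alice', 'src/api.py'),
    ('bob', 'src/api.py'), ('bob', 'docs/readme.md'),
    ('carol', 'src/db.py'), ('carol', 'src/api.py'),
]

developers = {d for d, _ in contributions}
artefacts = {a for _, a in contributions}

B = nx.Graph()
B.add_nodes_from(developers, bipartite=0)
B.add_nodes_from(artefacts, bipartite=1)
B.add_edges_from(contributions)

# Project onto the developer node set: an undirected edge means two developers
# modified at least one common artefact; the weight counts shared artefacts.
co_authorship = bipartite.weighted_projected_graph(B, developers)
for u, v, data in co_authorship.edges(data=True):
    print(u, v, data['weight'])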
1903.10180
2955366688
Data from software repositories have become an important foundation for the empirical study of software engineering processes. A recurring theme in the repository mining literature is the inference of developer networks capturing e.g. collaboration, coordination, or communication, from the commit history of projects. Most of the studied networks are based on the co-authorship of software artefacts defined at the level of files, modules, or packages. While this approach has led to insights into the social aspects of software development, it neglects detailed information on code changes and code ownership, e.g. which exact lines of code have been authored by which developers, that is contained in the commit log of software projects. Addressing this issue, we introduce git2net, a scalable python software that facilitates the extraction of fine-grained co-editing networks in large git repositories. It uses text mining techniques to analyse the detailed history of textual modifications within files. This information allows us to construct directed, weighted, and time-stamped networks, where a link signifies that one developer has edited a block of source code originally written by another developer. Our tool is applied in case studies of an Open Source and a commercial software project. We argue that it opens up a massive new source of high-resolution data on human collaboration patterns.
The majority of works mining social networks from software repositories build on this general idea. In @cite_19 @cite_28 @cite_31 @cite_34 @cite_45 a file-based notion of co-authorship is used to construct co-authorship networks, where a link between two developers signifies that they have committed the same file at least once (a minimal sketch of this construction follows this record). The authors of @cite_49 adopt a module-based definition, assuming that two developers are linked in the co-authorship network if they have contributed to at least one common module. Taking a similar approach, Huang and Liu use information on modified file paths in SourceForge repositories to infer relations between authors editing the same part of a project. Incorporating the time stamps of commits, Pohl and Diehl used a file-based co-authorship definition to construct developer networks that can be analysed and visualised using methods from dynamic network analysis @cite_10 . A similar approach was recently developed to study the ecosystem of software projects on GitHub: project-level co-commit networks are defined, i.e. a projection of commits in which two developers are linked if they committed to the same Open Source project. A related study analysed ten years of data from the Open Source project hosting platform SourceForge.
{ "cite_N": [ "@cite_28", "@cite_19", "@cite_45", "@cite_49", "@cite_31", "@cite_34", "@cite_10" ], "mid": [ "", "2020681151", "2964131530", "1978596374", "2107672258", "1987706161", "1937334562" ], "abstract": [ "", "Building non-trivial software is a social endeavor. Therefore, understanding the social network of developers is key to the study of software development organizations. We present a graph representation of the commit behavior of developers within the Apache Software Foundation for 2010 and 2011. Relationships between developers in the network represent collaborative commit behavior. Several similarity and summary metrics have been pre-calculated. The data, along with the tools that were used to create it and some further discussion, can be found at: http: sequoia.cs.byu.edu lab ?page=artifacts apacheGraphs.", "Multiplex networks (a system of multiple networks that have different types of links but share a common set of nodes) arise naturally in a wide spectrum of fields. Theoretical studies show that in such multiplex networks, correlated edge dynamics between the layers can have a profound effect on dynamical processes. However, how to extract the correlations from real-world systems is an outstanding challenge. Here we introduce the Multiplex Markov chain to quantify correlations in edge dynamics found in longitudinal data of multiplex networks. By comparing the results obtained from the multiplex perspective to a null model which assumes layers in a network are independent, we can identify real correlations as distinct from simultaneous changes that occur due to random chance. We use this approach on two different data sets: the network of trade and alliances between nation states, and the email and co-commit networks between developers of open source software. We establish the existence of “dynamical spillover” showing the correlated formation (or deletion) of edges of different types as the system evolves. The details of the dynamics over time provide insight into potential causal pathways.", "The huge quantities of data available in the CVS repositories of large, long-lived libre (free, open source) software projects, and the many interrelationships among those data offer opportunities for extracting large amounts of valuable information about their structure, evolution and internal processes. Unfortunately, the sheer volume of that information renders it almost unusable without applying methodologies which highlight the relevant information for a given aspect of the project. In this paper, we propose the use of a well known set of methodologies (social network analysis) for characterizing libre software projects, their evolution over time and their internal structure. In addition, we show how we have applied such methodologies to real cases, and extract some preliminary conclusions from that experience.", "Software fails and fixing it is expensive. Research in failure prediction has been highly successful at modeling software failures. Few models, however, consider the key cause of failures in software: people. Understanding the structure of developer collaboration could explain a lot about the reliability of the final product. We examine this collaboration structure with the developer network derived from code churn information that can predict failures at the file level. We conducted a case study involving a mature Nortel networking product of over three million lines of code. 
Failure prediction models were developed using test and post-release failure data from two releases, then validated against a subsequent release. One model's prioritization revealed 58 of the failures in 20 of the files compared with the optimal prioritization that would have found 61 in 20 of the files, indicating that a significant correlation exists between file-based developer network metrics and failures.", "This paper presents a technique for visualizing the interactions between developers in software project evolution. The goal is to produce a visualization that shows more detail than animated software histories, like code_swarm [15], but keeps the same focus on aesthetics and presentation. Our software evolution storylines technique draws inspiration from XKCD's \"Movie Narrative Charts\" and the aesthetic design of metro maps. We provide the algorithm, design choices, and examine the results of using the storylines technique. Our conclusion is that the it is able to show more details when compared to animated software project history videos. However, it does not scale to the largest projects, such as Eclipse and Mozilla.", "The power of any kind of network approach lies in the ability to simplify a complex system so that one can better understand its function as a whole. Sometimes it is beneficial, however, to include more information than in a simple graph of only nodes and links. Adding information about times of interactions can make predictions and mechanistic understanding more accurate. The drawback, however, is that there are not so many methods available, partly because temporal networks is a relatively young field, partly because it is more difficult to develop such methods compared to for static networks. In this colloquium, we review the methods to analyze and model temporal networks and processes taking place on them, focusing mainly on the last three years. This includes the spreading of infectious disease, opinions, rumors, in social networks; information packets in computer networks; various types of signaling in biology, and more. We also discuss future directions." ] }
1903.10180
2955366688
Data from software repositories have become an important foundation for the empirical study of software engineering processes. A recurring theme in the repository mining literature is the inference of developer networks capturing e.g. collaboration, coordination, or communication, from the commit history of projects. Most of the studied networks are based on the co-authorship of software artefacts defined at the level of files, modules, or packages. While this approach has led to insights into the social aspects of software development, it neglects detailed information on code changes and code ownership, e.g. which exact lines of code have been authored by which developers, that is contained in the commit log of software projects. Addressing this issue, we introduce git2net, a scalable python software that facilitates the extraction of fine-grained co-editing networks in large git repositories. It uses text mining techniques to analyse the detailed history of textual modifications within files. This information allows us to construct directed, weighted, and time-stamped networks, where a link signifies that one developer has edited a block of source code originally written by another developer. Our tool is applied in case studies of an Open Source and a commercial software project. We argue that it opens up a massive new source of high-resolution data on human collaboration patterns.
These works have typically constructed co-authorship networks based on joint contributions to files, modules, or projects. Such coarse-grained definitions of co-authorship networks introduce a potential issue: they do not distinguish between (i) links between developers that are due to contributions to the same artefact, and (ii) links that are due to commit sequences where one developer builds upon and/or redacts the particular lines of source code previously authored by another developer. Networks defined based on the latter type of co-editing of code regions are likely associated with a stronger need for coordination and communication than the mere fact that developers edited the same file or module @cite_54. So far, few studies have adopted such fine-grained approaches to create developer collaboration networks. Notable exceptions include the function-level co-editing networks constructed in @cite_39. The authors further argue that, using file-based definitions of collaboration networks, network analytic methods fail to identify meaningful communities. The authors of @cite_29 constructed line-based co-editing networks, showing that such an analysis (i) yields insights into the coordination structures of software teams, and (ii) provides new ways to test long-standing hypotheses about cooperative work from social psychology.
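As an illustration of the line-based co-editing idea, the following sketch assumes that, for every line modified by a commit, the original author has already been resolved (e.g. via `git blame`). The record format is a made-up assumption and the sketch is not the implementation of any cited tool; it only shows how directed, weighted, time-stamped links can be aggregated from such records.

```python
# Hedged sketch of a directed, weighted, time-stamped co-editing network:
# a link editor -> original_author is added whenever a commit changes a
# line previously written by someone else. The input format is assumed;
# resolving original authors (e.g. via `git blame`) is out of scope here.
from collections import Counter

# (timestamp, editing_author, original_author_of_modified_line)
edits = [
    ("2023-01-02T10:00", "bob",   "alice"),
    ("2023-01-02T10:00", "bob",   "alice"),
    ("2023-01-05T09:30", "carol", "bob"),
    ("2023-01-06T14:10", "alice", "alice"),  # self-edit, ignored below
]

edge_weights = Counter()
timestamps = {}
for ts, editor, original in edits:
    if editor == original:
        continue  # self-edits carry no coordination requirement
    edge_weights[(editor, original)] += 1
    timestamps.setdefault((editor, original), []).append(ts)

for (src, dst), w in edge_weights.items():
    print(f"{src} -> {dst}: weight={w}, times={timestamps[(src, dst)]}")
```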
{ "cite_N": [ "@cite_54", "@cite_29", "@cite_39" ], "mid": [ "2147018965", "2271240940", "2028735428" ], "abstract": [ "Task dependencies drive the need to coordinate work activities. We describe a technique for using automatically generated archi-val data to compute coordination requirements, i.e., who must coordinate with whom to get the work done. Analysis of data from a large software development project revealed that coordina-tion requirements were highly volatile, and frequently extended beyond team boundaries. Congruence between coordination re-quirements and coordination activities shortened development time. Developers, particularly the most productive ones, changed their use of electronic communication media over time, achieving higher congruence. We discuss practical implications of our technique for the design of collaborative and awareness tools.", "Complex software development projects rely on the contribution of teams of developers, who are required to collaborate and coordinate their efforts. The productivity of such development teams, i.e., how their size is related to the produced output, is an important consideration for project and schedule management as well as for cost estimation. The majority of studies in empirical software engineering suggest that - due to coordination overhead - teams of collaborating developers become less productive as they grow in size. This phenomenon is commonly paraphrased as Brooks' law of software project management, which states that \"adding manpower to a software project makes it later\". Outside software engineering, the non-additive scaling of productivity in teams is often referred to as the Ringelmann effect, which is studied extensively in social psychology and organizational theory. Conversely, a recent study suggested that in Open Source Software (OSS) projects, the productivity of developers increases as the team grows in size. Attributing it to collective synergetic effects, this surprising finding was linked to the Aristotelian quote that \"the whole is more than the sum of its parts\". Using a data set of 58 OSS projects with more than 580,000 commits contributed by more than 30,000 developers, in this article we provide a large-scale analysis of the relation between size and productivity of software development teams. Our findings confirm the negative relation between team size and productivity previously suggested by empirical software engineering research, thus providing quantitative evidence for the presence of a strong Ringelmann effect. Using fine-grained data on the association between developers and source code files, we investigate possible explanations for the observed relations between team size and productivity. In particular, we take a network perspective on developer-code associations in software development teams and show that the magnitude of the decrease in productivity is likely to be related to the growth dynamics of co-editing networks which can be interpreted as a first-order approximation of coordination requirements.", "Effective software engineering demands a coordinated effort. Unfortunately, a comprehensive view on developer coordination is rarely available to support software-engineering decisions, despite the significant implications on software quality, software architecture, and developer productivity. 
We present a fine-grained, verifiable, and fully automated approach to capture a view on developer coordination, based on commit information and source-code structure, mined from version-control systems. We apply methodology from network analysis and machine learning to identify developer communities automatically. Compared to previous work, our approach is fine-grained, and identifies statistically significant communities using order-statistics and a community-verification technique based on graph conductance. To demonstrate the scalability and generality of our approach, we analyze ten open-source projects with complex and active histories, written in various programming languages. By surveying 53 open-source developers from the ten projects, we validate the authenticity of inferred community structure with respect to reality. Our results indicate that developers of open-source projects form statistically significant community structures and this particular view on collaboration largely coincides with developers' perceptions of real-world collaboration." ] }
1903.10180
2955366688
Data from software repositories have become an important foundation for the empirical study of software engineering processes. A recurring theme in the repository mining literature is the inference of developer networks capturing e.g. collaboration, coordination, or communication, from the commit history of projects. Most of the studied networks are based on the co-authorship of software artefacts defined at the level of files, modules, or packages. While this approach has led to insights into the social aspects of software development, it neglects detailed information on code changes and code ownership, e.g. which exact lines of code have been authored by which developers, that is contained in the commit log of software projects. Addressing this issue, we introduce git2net, a scalable python software that facilitates the extraction of fine-grained co-editing networks in large git repositories. It uses text mining techniques to analyse the detailed history of textual modifications within files. This information allows us to construct directed, weighted, and time-stamped networks, where a link signifies that one developer has edited a block of source code originally written by another developer. Our tool is applied in case studies of an Open Source and a commercial software project. We argue that it opens up a massive new source of high-resolution data on human collaboration patterns.
While such a fine-grained analysis of the co-editing behaviour of developers has its advantages, it also introduces challenges that have so far limited its adoption. First and foremost, it requires a detailed analysis of file modifications, making it necessary to identify the original author of every line of code modified by each commit. Requiring a potentially large number of git operations for every commit being analysed, such an analysis is both complicated to implement and time-consuming to perform. In contrast to other approaches, which often merely require a suitable query against structured databases like GHTorrent @cite_3 @cite_12, a tool that facilitates this task for very large repositories is still missing.
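To illustrate the contrast, the sketch below derives project-level co-commit links with a single SQL query against a small in-memory relational mirror in the spirit of GHTorrent. The table and column names are illustrative assumptions, not the actual GHTorrent schema; the point is merely that coarse-grained links require one query rather than per-commit git operations.

```python
# Hedged sketch: project-level co-commit links via one SQL query against a
# GHTorrent-like relational mirror. Schema names are assumptions.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE commits (author_id TEXT, project_id INTEGER)")
conn.executemany(
    "INSERT INTO commits VALUES (?, ?)",
    [("alice", 1), ("bob", 1), ("alice", 1), ("carol", 2), ("bob", 2)],
)

# Two developers are linked if they committed to the same project; the
# count aggregates over all pairs of their commits in shared projects.
query = """
SELECT c1.author_id AS dev_a, c2.author_id AS dev_b, COUNT(*) AS strength
FROM   commits c1
JOIN   commits c2 ON c1.project_id = c2.project_id
                  AND c1.author_id < c2.author_id
GROUP  BY dev_a, dev_b
"""
for dev_a, dev_b, strength in conn.execute(query):
    print(dev_a, dev_b, strength)
```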
{ "cite_N": [ "@cite_12", "@cite_3" ], "mid": [ "2620921533", "2122414758" ], "abstract": [ "GitHub is the largest collaborative source code hosting site built on top of the Git version control system. The availability of a comprehensive API has made GitHub a target for many software engineering and online collaboration research efforts. In our work, we have discovered that a) obtaining data from GitHub is not trivial, b) the data may not be suitable for all types of research, and c) improper use can lead to biased results. In this tutorial, we analyze how data from GitHub can be used for large-scale, quantitative research, while avoiding common pitfalls. We use the GHTorrent dataset, a queryable offline mirror of the GitHub API data, to draw examples from and present pitfall avoidance strategies.", "A common requirement of many empirical software engineering studies is the acquisition and curation of data from software repositories. During the last few years, GitHub has emerged as a popular project hosting, mirroring and collaboration platform. GitHub provides an extensive rest api, which enables researchers to retrieve both the commits to the projects' repositories and events generated through user actions on project resources. GHTorrent aims to create a scalable off line mirror of GitHub's event streams and persistent data, and offer it to the research community as a service. In this paper, we present the project's design and initial implementation and demonstrate how the provided datasets can be queried and processed." ] }
1903.10180
2955366688
Data from software repositories have become an important foundation for the empirical study of software engineering processes. A recurring theme in the repository mining literature is the inference of developer networks capturing e.g. collaboration, coordination, or communication, from the commit history of projects. Most of the studied networks are based on the co-authorship of software artefacts defined at the level of files, modules, or packages. While this approach has led to insights into the social aspects of software development, it neglects detailed information on code changes and code ownership, e.g. which exact lines of code have been authored by which developers, that is contained in the commit log of software projects. Addressing this issue, we introduce git2net, a scalable python software that facilitates the extraction of fine-grained co-editing networks in large git repositories. It uses text mining techniques to analyse the detailed history of textual modifications within files. This information allows us to construct directed, weighted, and time-stamped networks, where a link signifies that one developer has edited a block of source code originally written by another developer. Our tool is applied in case studies of an Open Source and a commercial software project. We argue that it opens up a massive new source of high-resolution data on human collaboration patterns.
Closing this gap, our work introduces a practical and scalable solution for the construction of fine-grained and time-stamped co-editing networks from git repositories. Our work extends the state of the art and facilitates analyses of developer collaboration and coordination in software projects. By providing a new method to construct large, dynamic networks at high temporal resolution, we further expect our work to be of interest to the community of researchers developing methods to analyse dynamic (social) networks @cite_10 @cite_16 @cite_0.
{ "cite_N": [ "@cite_0", "@cite_16", "@cite_10" ], "mid": [ "2162691596", "", "1937334562" ], "abstract": [ "Finding patterns of social interaction within a population has wide-ranging applications including: disease modeling, cultural and information transmission, and behavioral ecology. Social interactions are often modeled with networks. A key characteristic of social interactions is their continual change. However, most past analyses of social networks are essentially static in that all information about the time that social interactions take place is discarded. In this paper, we propose a new mathematical and computational framework that enables analysis of dynamic social networks and that explicitly makes use of information about when social interactions occur.", "", "The power of any kind of network approach lies in the ability to simplify a complex system so that one can better understand its function as a whole. Sometimes it is beneficial, however, to include more information than in a simple graph of only nodes and links. Adding information about times of interactions can make predictions and mechanistic understanding more accurate. The drawback, however, is that there are not so many methods available, partly because temporal networks is a relatively young field, partly because it is more difficult to develop such methods compared to for static networks. In this colloquium, we review the methods to analyze and model temporal networks and processes taking place on them, focusing mainly on the last three years. This includes the spreading of infectious disease, opinions, rumors, in social networks; information packets in computer networks; various types of signaling in biology, and more. We also discuss future directions." ] }
1903.09980
2925229117
Deep learning methods have shown promise in unsupervised domain adaptation, which aims to leverage a labeled source domain to learn a classifier for the unlabeled target domain with a different distribution. However, such methods typically learn a domain-invariant representation space to match the marginal distributions of the source and target domains, while ignoring their fine-level structures. In this paper, we propose Cluster Alignment with a Teacher (CAT) for unsupervised domain adaptation, which can effectively incorporate the discriminative clustering structures in both domains for better adaptation. Technically, CAT leverages an implicit ensembling teacher model to reliably discover the class-conditional structure in the feature space for the unlabeled target domain. Then CAT forces the features of both the source and the target domains to form discriminative class-conditional clusters and aligns the corresponding clusters across domains. Empirical results demonstrate that CAT achieves state-of-the-art results in several unsupervised domain adaptation scenarios.
Using a teacher model is inspired by consistency-based methods in semi-supervised learning (SSL) @cite_40 @cite_31. Recent attempts to apply SSL techniques in UDA include @cite_37 @cite_50 @cite_29. CAT differs from these previous works in that it exploits the discriminative class-conditional structures in both the alignment and classification procedures, while they focus on improving the classifier for the target domain by implementing the cluster assumption @cite_12. CAT thus imposes a much stronger regularization and assists in a better alignment.
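For readers unfamiliar with the teacher construction referenced here, the sketch below shows the mean-teacher pattern in PyTorch: the teacher's weights are an exponential moving average (EMA) of the student's, and a consistency loss penalises disagreement on unlabelled target data. The toy network, the data, and the choice of MSE consistency are placeholder assumptions, not the CAT model itself.

```python
# Hedged sketch of the mean-teacher idea: EMA teacher + consistency loss.
import copy
import torch
import torch.nn.functional as F

student = torch.nn.Sequential(
    torch.nn.Linear(10, 32), torch.nn.ReLU(), torch.nn.Linear(32, 3)
)
teacher = copy.deepcopy(student)
for p in teacher.parameters():
    p.requires_grad_(False)  # teacher is never trained by gradient descent

def update_teacher(student, teacher, alpha=0.99):
    """EMA update: teacher <- alpha * teacher + (1 - alpha) * student."""
    with torch.no_grad():
        for t_p, s_p in zip(teacher.parameters(), student.parameters()):
            t_p.mul_(alpha).add_(s_p, alpha=1.0 - alpha)

x_target = torch.randn(16, 10)              # unlabelled target-domain batch
p_student = F.softmax(student(x_target), dim=1)
p_teacher = F.softmax(teacher(x_target), dim=1)
consistency_loss = F.mse_loss(p_student, p_teacher)  # one common choice
consistency_loss.backward()                 # combined with supervised losses in practice
update_teacher(student, teacher)            # refresh teacher after each step
```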
{ "cite_N": [ "@cite_37", "@cite_29", "@cite_40", "@cite_50", "@cite_31", "@cite_12" ], "mid": [ "2963449430", "2770856226", "2951970475", "2962970380", "2592691248", "1983320747" ], "abstract": [ "This paper explores the use of self-ensembling for visual domain adaptation problems. Our technique is derived from the mean teacher variant (Tarvainen et. al 2017) of temporal ensembling ( 2017), a technique that achieved state of the art results in the area of semi-supervised learning. We introduce a number of modifications to their approach for challenging domain adaptation scenarios and evaluate its effectiveness. Our approach achieves state of the art results in a variety of benchmarks, including our winning entry in the VISDA-2017 visual domain adaptation challenge. In small image benchmarks, our algorithm not only outperforms prior art, but can also achieve accuracy that is close to that of a classifier trained in a supervised fashion.", "Recent works showed that Generative Adversarial Networks (GANs) can be successfully applied in unsupervised domain adaptation, where, given a labeled source dataset and an unlabeled target dataset, the goal is to train powerful classifiers for the target samples. In particular, it was shown that a GAN objective function can be used to learn target features indistinguishable from the source ones. In this work, we extend this framework by (i) forcing the learned feature extractor to be domain-invariant, and (ii) training it through data augmentation in the feature space, namely performing feature augmentation. While data augmentation in the image space is a well established technique in deep learning, feature augmentation has not yet received the same level of attention. We accomplish it by means of a feature generator trained by playing the GAN minimax game against source features. Results show that both enforcing domain-invariance and performing feature augmentation lead to superior or comparable performance to state-of-the-art results in several unsupervised domain adaptation benchmarks.", "In this paper, we present a simple and efficient method for training deep neural networks in a semi-supervised setting where only a small portion of training data is labeled. We introduce self-ensembling, where we form a consensus prediction of the unknown labels using the outputs of the network-in-training on different epochs, and most importantly, under different regularization and input augmentation conditions. This ensemble prediction can be expected to be a better predictor for the unknown labels than the output of the network at the most recent training epoch, and can thus be used as a target for training. Using our method, we set new records for two standard semi-supervised learning benchmarks, reducing the (non-augmented) classification error rate from 18.44 to 7.05 in SVHN with 500 labels and from 18.63 to 16.55 in CIFAR-10 with 4000 labels, and further to 5.12 and 12.16 by enabling the standard augmentations. We additionally obtain a clear improvement in CIFAR-100 classification accuracy by using random images from the Tiny Images dataset as unlabeled extra inputs during training. Finally, we demonstrate good tolerance to incorrect labels.", "Domain adaptation refers to the problem of leveraging labeled data in a source domain to learn an accurate model in a target domain where labels are scarce or unavailable. 
A recent approach for finding a common representation of the two domains is via domain adversarial training (Ganin & Lempitsky, 2015), which attempts to induce a feature extractor that matches the source and target feature distributions in some feature space. However, domain adversarial training faces two critical limitations: 1) if the feature extraction function has high-capacity, then feature distribution matching is a weak constraint, 2) in non-conservative domain adaptation (where no single classifier can perform well in both the source and target domains), training the model to do well on the source domain hurts performance on the target domain. In this paper, we address these issues through the lens of the cluster assumption, i.e., decision boundaries should not cross high-density data regions. We propose two novel and related models: 1) the Virtual Adversarial Domain Adaptation (VADA) model, which combines domain adversarial training with a penalty term that punishes the violation the cluster assumption; 2) the Decision-boundary Iterative Refinement Training with a Teacher (DIRT-T) model, which takes the VADA model as initialization and employs natural gradient steps to further minimize the cluster assumption violation. Extensive empirical results demonstrate that the combination of these two models significantly improve the state-of-the-art performance on several visual domain adaptation benchmarks.", "The recently proposed Temporal Ensembling has achieved state-of-the-art results in several semi-supervised learning benchmarks. It maintains an exponential moving average of label predictions on each training example, and penalizes predictions that are inconsistent with this target. However, because the targets change only once per epoch, Temporal Ensembling becomes unwieldy when learning large datasets. To overcome this problem, we propose Mean Teacher, a method that averages model weights instead of label predictions. As an additional benefit, Mean Teacher improves test accuracy and enables training with fewer labels than Temporal Ensembling. Without changing the network architecture, Mean Teacher achieves an error rate of 4.35 on SVHN with 250 labels, outperforming Temporal Ensembling trained with 1000 labels. We also show that a good network architecture is crucial to performance. Combining Mean Teacher and Residual Networks, we improve the state of the art on CIFAR-10 with 4000 labels from 10.55 to 6.28 , and on ImageNet 2012 with 10 of the labels from 35.24 to 9.11 .", "This book addresses some theoretical aspects of semisupervised learning (SSL). The book is organized as a collection of different contributions of authors who are experts on this topic. The objectives of this book are to present a large overview of the SSL methods and to classify these methods into four classes that correspond to the first four main parts of the book (this would include generative models; low-density separation methods; graph-based methods; and algorithms). The last two parts are devoted to applications and perspectives of SSL. The book responds to its major objectives and could serve as a basis for an intermediate level graduate course on SSL. It may also serve as a useful self study and reference source for practicing engineers." ] }
1903.09800
2924187062
One decade ago, Bitcoin was introduced, becoming the first cryptocurrency and establishing the concept of "blockchain" as a distributed ledger. As of today, there are many different implementations of cryptocurrencies working over a blockchain, with different approaches and philosofies. However, many of them share one common feature: they require proof-of-work to support the generation of blocks (mining) and, eventually, the generation of money. This proof-of-work scheme often consists on the resolution of a cryptography problem, most commonly breaking a hash value, which can only be achieved through brute-force. The main drawback of proof-of-work is that it requires ridiculously large amounts of energy which do not have any useful outcome beyond supporting the currency. In this paper, we present a theoretical proposal that introduces a proof-of-useful-work scheme to support a cryptocurrency running over a blockchain, which we named Coin.AI. In this system, the mining scheme requires training deep learning models, and a block is only mined when the performance of such model exceeds a threshold. The distributed system allows for nodes to verify the models delivered by miners in an easy way (certainly much more efficiently than the mining process itself), determining when a block is to be generated. Additionally, this paper presents a proof-of-storage scheme for rewarding users that provide storage for the deep learning models, as well as a theoretical dissertation on how the mechanics of the system could be articulated with the ultimate goal of democratizing the access to artificial intelligence.
When it comes to the combination of artificial intelligence and blockchain, there are many works (including projects beyond academia) claiming to combine both fields of study in very different ways. A common case consists of applying machine learning to crypto-trading or, more specifically, of trying to predict the price of a cryptocurrency using machine learning @cite_10 @cite_0. Other works have focused on the application of artificial intelligence to blockchain security @cite_5. From a more philosophical perspective, Swan commented on "blockchain thinking", the possibility of formulating thinking as a blockchain process @cite_25.
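Purely as an illustration of the price-prediction use case mentioned above, the following sketch fits a lag-feature classifier for price direction on synthetic data. The data, features, and model are assumptions for illustration and do not reproduce any of the cited studies.

```python
# Hedged, purely illustrative sketch: price-direction prediction from lagged
# prices on synthetic data (not taken from the cited works).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
prices = np.cumsum(rng.normal(0, 1, 500)) + 100.0   # synthetic price series

lags = 5
X = np.column_stack([prices[i:len(prices) - lags + i] for i in range(lags)])
y = (prices[lags:] > prices[lags - 1:-1]).astype(int)  # 1 = price goes up

split = int(0.8 * len(X))
clf = LogisticRegression(max_iter=1000).fit(X[:split], y[:split])
print("directional accuracy:", clf.score(X[split:], y[split:]))
```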
{ "cite_N": [ "@cite_0", "@cite_5", "@cite_10", "@cite_25" ], "mid": [ "2586112617", "2891067152", "2803148772", "2578476968" ], "abstract": [ "The goal of this paper is to ascertain with what accuracy the direction of Bitcoin price in USD can be predicted. The price data is sourced from the Bitcoin Price Index. The task is achieved with varying degrees of success through the implementation of a Bayesian optimised recurrent neural network (RNN) and a Long Short Term Memory (LSTM) network. The LSTM achieves the highest classification accuracy of 52 and a RMSE of 8 . The popular ARIMA model for time series forecasting is implemented as a comparison to the deep learning models. As expected, the non-linear deep learning methods outperform the ARIMA forecast which performs poorly. Finally, both deep learning models are benchmarked on both a GPU and a CPU with the training time on the GPU outperforming the CPU implementation by 67.7 .", "Blockchain's vast applications in different industries have drawn several researchers to pursue extensive research in securing blockchain technologies. In recent times we could see several institutions coming together to create consortium based blockchain networks such as Hyperledger. Although for applications of blockchain such as Bitcoin, Litcoin, etc. the majority-attack might not be a great threat but for consortium based blockchain networks where we could see several institutions such as public, private, government, etc. are collaborating, the majority-attack might just prove to be a prevalent threat if collusion among these institutions takes place. This paper proposes a methodology where we can use intelligent software agents to monitor the activity of stakeholders in the blockchain networks to detect anomaly such as collusion, using supervised machine learning algorithm and algorithmic game theory and stop the majority attack from taking place.", "Machine learning and AI-assisted trading have attracted growing interest for the past few years. Here, we use this approach to test the hypothesis that the inefficiency of the cryptocurrency market can be exploited to generate abnormal profits. We analyse daily data for cryptocurrencies for the period between Nov. 2015 and Apr. 2018. We show that simple trading strategies assisted by state-of-the-art machine learning algorithms outperform standard benchmarks. Our results show that nontrivial, but ultimately simple, algorithmic mechanisms can help anticipate the short-term evolution of the cryptocurrency market.", "" ] }
1903.09769
2925035065
Weight pruning and weight quantization are two important categories of DNN model compression. Prior work on these techniques are mainly based on heuristics. A recent work developed a systematic frame-work of DNN weight pruning using the advanced optimization technique ADMM (Alternating Direction Methods of Multipliers), achieving one of state-of-art in weight pruning results. In this work, we first extend such one-shot ADMM-based framework to guarantee solution feasibility and provide fast convergence rate, and generalize to weight quantization as well. We have further developed a multi-step, progressive DNN weight pruning and quantization framework, with dual benefits of (i) achieving further weight pruning quantization thanks to the special property of ADMM regularization, and (ii) reducing the search space within each step. Extensive experimental results demonstrate the superior performance compared with prior work. Some highlights: (i) we achieve 246x,36x, and 8x weight pruning on LeNet-5, AlexNet, and ResNet-50 models, respectively, with (almost) zero accuracy loss; (ii) even a significant 61x weight pruning in AlexNet (ImageNet) results in only minor degradation in actual accuracy compared with prior work; (iii) we are among the first to derive notable weight pruning results for ResNet and MobileNet models; (iv) we derive the first lossless, fully binarized (for all layers) LeNet-5 for MNIST and VGG-16 for CIFAR-10; and (v) we derive the first fully binarized (for all layers) ResNet for ImageNet with reasonable accuracy loss.
An early work on weight pruning is @cite_4. It uses a heuristic, iterative method to prune weights of small magnitudes and then retrain the DNN. It achieves a 9 @math reduction in the number of weights on AlexNet for the ImageNet dataset without accuracy degradation. However, this work achieves a relatively low compression rate (2.7 @math for AlexNet) on CONV layers, which are the key computational part of state-of-the-art DNNs @cite_13 @cite_6. Besides, indices are needed, at least one per weight, to encode the relative location of the next weight. This method has been extended in two directions. The first improves the reduction in the number of weights by using more sophisticated heuristics, e.g., incorporating both weight pruning and growing @cite_30, using @math regularization @cite_9, or a genetic algorithm @cite_21. The second enhances the actual implementation efficiency, e.g., by deriving an effective tradeoff between accuracy and compression rate @cite_2, or by incorporating regularity into weight pruning @cite_12 @cite_9.
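The magnitude-based heuristic described above can be summarised in a few lines. The sketch below is a NumPy stand-in under assumed shapes and a chosen sparsity level, not the cited implementation: weights below a magnitude threshold are zeroed and frozen via a mask, and the surviving weights would then be retrained.

```python
# Hedged sketch of heuristic magnitude-based pruning: zero out the smallest
# weights and keep a mask so retraining does not resurrect them.
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the `sparsity` fraction of weights with smallest magnitude."""
    k = int(sparsity * weights.size)
    threshold = np.sort(np.abs(weights), axis=None)[k]
    mask = (np.abs(weights) >= threshold).astype(weights.dtype)
    return weights * mask, mask

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 128))
W_pruned, mask = magnitude_prune(W, sparsity=0.9)
print("kept fraction:", mask.mean())

# During retraining, gradient updates are masked so pruned weights stay zero:
# W_pruned -= learning_rate * (gradient * mask)
```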
{ "cite_N": [ "@cite_30", "@cite_4", "@cite_9", "@cite_21", "@cite_6", "@cite_2", "@cite_13", "@cite_12" ], "mid": [ "2963981420", "2963674932", "2963000224", "2768083806", "2194775991", "", "", "2963363373" ], "abstract": [ "Deep learning has become a ubiquitous technology to improve machine intelligence. However, most of the existing deep models are structurally very complex, making them difficult to be deployed on the mobile platforms with limited computational power. In this paper, we propose a novel network compression method called dynamic network surgery, which can remarkably reduce the network complexity by making on-the-fly connection pruning. Unlike the previous methods which accomplish this task in a greedy way, we properly incorporate connection splicing into the whole process to avoid incorrect pruning and make it as a continual network maintenance. The effectiveness of our method is proved with experiments. Without any accuracy loss, our method can efficiently compress the number of parameters in LeNet-5 and AlexNet by a factor of 108x and 17.7x respectively, proving that it outperforms the recent pruning method by considerable margins. Code and some models are available at https: github.com yiwenguo Dynamic-Network-Surgery.", "Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems. Also, conventional networks fix the architecture before training starts; as a result, training cannot improve the architecture. To address these limitations, we describe a method to reduce the storage and computation required by neural networks by an order of magnitude without affecting their accuracy by learning only the important connections. Our method prunes redundant connections using a three-step method. First, we train the network to learn which connections are important. Next, we prune the unimportant connections. Finally, we retrain the network to fine tune the weights of the remaining connections. On the ImageNet dataset, our method reduced the number of parameters of AlexNet by a factor of 9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar experiments with VGG-16 found that the total number of parameters can be reduced by 13x, from 138 million to 10.3 million, again with no loss of accuracy.", "High demand for computation resources severely hinders deployment of large-scale Deep Neural Networks (DNN) in resource constrained devices. In this work, we propose a Structured Sparsity Learning (SSL) method to regularize the structures (i.e., filters, channels, filter shapes, and layer depth) of DNNs. SSL can: (1) learn a compact structure from a bigger DNN to reduce computation cost; (2) obtain a hardware-friendly structured sparsity of DNN to efficiently accelerate the DNN's evaluation. Experimental results show that SSL achieves on average 5.1 × and 3.1 × speedups of convolutional layer computation of AlexNet against CPU and GPU, respectively, with off-the-shelf libraries. These speedups are about twice speedups of non-structured sparsity; (3) regularize the DNN structure to improve classification accuracy. The results show that for CIFAR-10, regularization on layer depth reduces a 20-layer Deep Residual Network (ResNet) to 18 layers while improves the accuracy from 91.25 to 92.60 , which is still higher than that of original ResNet with 32 layers. 
For AlexNet, SSL reduces the error by 1 .", "Deep neural networks (DNNs) have begun to have a pervasive impact on various applications of machine learning. However, the problem of finding an optimal DNN architecture for large applications is challenging. Common approaches go for deeper and larger DNN architectures but may incur substantial redundancy. To address these problems, we introduce a network growth algorithm that complements network pruning to learn both weights and compact DNN architectures during training. We propose a DNN synthesis tool (NeST) that combines both methods to automate the generation of compact and accurate DNNs. NeST starts with a randomly initialized sparse network called the seed architecture. It iteratively tunes the architecture with gradient-based growth and magnitude-based pruning of neurons and connections. Our experimental results show that NeST yields accurate, yet very compact DNNs, with a wide range of seed architecture selection. For the LeNet-300-100 (LeNet-5) architecture, we reduce network parameters by 70.2x (74.3x) and floating-point operations (FLOPs) by 79.4x (43.7x). For the AlexNet and VGG-16 architectures, we reduce network parameters (FLOPs) by 15.7x (4.6x) and 30.2x (8.6x), respectively. NeST's grow-and-prune paradigm delivers significant additional parameter and FLOPs reduction relative to pruning-only methods.", "Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.", "", "", "In this paper, we introduce a new channel pruning method to accelerate very deep convolutional neural networks. Given a trained CNN model, we propose an iterative two-step algorithm to effectively prune each layer, by a LASSO regression based channel selection and least square reconstruction. We further generalize this algorithm to multi-layer and multi-branch cases. Our method reduces the accumulated error and enhance the compatibility with various architectures. Our pruned VGG-16 achieves the state-of-the-art results by 5× speed-up along with only 0.3 increase of error. More importantly, our method is able to accelerate modern networks like ResNet, Xception and suffers only 1.4 , 1.0 accuracy loss under 2× speedup respectively, which is significant." ] }
1903.09769
2925035065
Weight pruning and weight quantization are two important categories of DNN model compression. Prior work on these techniques are mainly based on heuristics. A recent work developed a systematic frame-work of DNN weight pruning using the advanced optimization technique ADMM (Alternating Direction Methods of Multipliers), achieving one of state-of-art in weight pruning results. In this work, we first extend such one-shot ADMM-based framework to guarantee solution feasibility and provide fast convergence rate, and generalize to weight quantization as well. We have further developed a multi-step, progressive DNN weight pruning and quantization framework, with dual benefits of (i) achieving further weight pruning quantization thanks to the special property of ADMM regularization, and (ii) reducing the search space within each step. Extensive experimental results demonstrate the superior performance compared with prior work. Some highlights: (i) we achieve 246x,36x, and 8x weight pruning on LeNet-5, AlexNet, and ResNet-50 models, respectively, with (almost) zero accuracy loss; (ii) even a significant 61x weight pruning in AlexNet (ImageNet) results in only minor degradation in actual accuracy compared with prior work; (iii) we are among the first to derive notable weight pruning results for ResNet and MobileNet models; (iv) we derive the first lossless, fully binarized (for all layers) LeNet-5 for MNIST and VGG-16 for CIFAR-10; and (v) we derive the first fully binarized (for all layers) ResNet for ImageNet with reasonable accuracy loss.
This method leverages the inherent redundancy in the number of bits used for weight representation. Much of the prior work @cite_27 @cite_3 @cite_31 @cite_29 @cite_17 @cite_16 @cite_1 @cite_18 is directed at quantizing weights to binary values, ternary values, or powers of 2 in order to facilitate hardware implementations, with acceptable accuracy loss. The state-of-the-art techniques @cite_18 @cite_27 adopt an iterative quantization and retraining framework, with some degree of randomness incorporated into the quantization step. These methods result in less than 3% accuracy loss.
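As a small illustration of the power-of-two scheme mentioned above, the sketch below rounds each weight's magnitude to a power of two in log space. The clipping range and exponent budget are illustrative assumptions and are not taken from the cited methods, which combine such projections with retraining.

```python
# Hedged sketch: quantize weights to signed powers of two (or zero).
import numpy as np

def quantize_pow2(weights, min_exp=-6, max_exp=0):
    """Map each weight to a signed power of two (rounded in log space) or zero."""
    sign = np.sign(weights)
    mag = np.abs(weights)
    exp = np.clip(np.round(np.log2(np.maximum(mag, 2.0 ** min_exp))),
                  min_exp, max_exp)
    q = sign * (2.0 ** exp)
    q[mag < 2.0 ** (min_exp - 1)] = 0.0   # very small weights collapse to zero
    return q

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(4, 4))
print(np.round(quantize_pow2(W), 4))
```

Powers of two are attractive in hardware because multiplications reduce to bit shifts.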
{ "cite_N": [ "@cite_18", "@cite_29", "@cite_1", "@cite_3", "@cite_27", "@cite_31", "@cite_16", "@cite_17" ], "mid": [ "2963114950", "2286365479", "2267635276", "2748818695", "2739789140", "2593245696", "2300242332", "2233116163" ], "abstract": [ "Deep Neural Networks (DNN) have achieved state-of-the-art results in a wide range of tasks, with the best results obtained with large training sets and large models. In the past, GPUs enabled these breakthroughs because of their greater computational speed. In the future, faster computation at both training and test time is likely to be crucial for further progress and for consumer applications on low-power devices. As a result, there is much interest in research and development of dedicated hardware for Deep Learning (DL). Binary weights, i.e., weights which are constrained to only two possible values (e.g. -1 or 1), would bring great benefits to specialized DL hardware by replacing many multiply-accumulate operations by simple accumulations, as multipliers are the most space and power-hungry components of the digital implementation of neural networks. We introduce BinaryConnect, a method which consists in training a DNN with binary weights during the forward and backward propagations, while retaining precision of the stored weights in which gradients are accumulated. Like other dropout schemes, we show that BinaryConnect acts as regularizer and we obtain near state-of-the-art results with BinaryConnect on the permutation-invariant MNIST, CIFAR-10 and SVHN.", "In recent years increasingly complex architectures for deep convolution networks (DCNs) have been proposed to boost the performance on image recognition tasks. However, the gains in performance have come at a cost of substantial increase in computation and model storage resources. Fixed point implementation of DCNs has the potential to alleviate some of these complexities and facilitate potential deployment on embedded hardware. In this paper, we propose a quantizer design for fixed point implementation of DCNs. We formulate and solve an optimization problem to identify optimal fixed point bit-width allocation across DCN layers. Our experiments show that in comparison to equal bitwidth settings, the fixed point DCNs with optimized bit width allocation offer > 20 reduction in the model size without any loss in accuracy on CIFAR-10 benchmark. We also demonstrate that fine-tuning can further enhance the accuracy of fixed point DCNs beyond that of the original floating point model. In doing so, we report a new state-of-the-art fixed point performance of 6.78 error-rate on CIFAR-10 benchmark.", "We introduce a method to train Binarized Neural Networks (BNNs) - neural networks with binary weights and activations at run-time. At train-time the binary weights and activations are used for computing the parameter gradients. During the forward pass, BNNs drastically reduce memory size and accesses, and replace most arithmetic operations with bit-wise operations, which is expected to substantially improve power-efficiency. To validate the effectiveness of BNNs, we conducted two sets of experiments on the Torch7 and Theano frameworks. On both, BNNs achieved nearly state-of-the-art results over the MNIST, CIFAR-10 and SVHN datasets. We also report our preliminary results on the challenging ImageNet dataset. 
Last but not least, we wrote a binary matrix multiplication GPU kernel with which it is possible to run our MNIST BNN 7 times faster than with an unoptimized GPU kernel, without suffering any loss in classification accuracy. The code for training and running our BNNs is available on-line.", "Quantization is considered as one of the most effective methods to optimize the inference cost of neural network models for their deployment to mobile and embedded systems, which have tight resource constraints. In such approaches, it is critical to provide low-cost quantization under a tight accuracy loss constraint (e.g., 1 ). In this paper, we propose a novel method for quantizing weights and activations based on the concept of weighted entropy. Unlike recent work on binary-weight neural networks, our approach is multi-bit quantization, in which weights and activations can be quantized by any number of bits depending on the target accuracy. This facilitates much more flexible exploitation of accuracy-performance trade-off provided by different levels of quantization. Moreover, our scheme provides an automated quantization flow based on conventional training algorithms, which greatly reduces the design-time effort to quantize the network. According to our extensive evaluations based on practical neural network models for image classification (AlexNet, GoogLeNet and ResNet-50 101), object detection (R-FCN with 50-layer ResNet), and language modeling (an LSTM network), our method achieves significant reductions in both the model size and the amount of computation with minimal accuracy loss. Also, compared to existing quantization schemes, ours provides higher accuracy with a similar resource constraint and requires much lower design effort.", "Although deep learning models are highly effective for various learning tasks, their high computational costs prohibit the deployment to scenarios where either memory or computational resources are limited. In this paper, we focus on compressing and accelerating deep models with network weights represented by very small numbers of bits, referred to as extremely low bit neural network. We model this problem as a discretely constrained optimization problem. Borrowing the idea from Alternating Direction Method of Multipliers (ADMM), we decouple the continuous parameters from the discrete constraints of network, and cast the original hard problem into several subproblems. We propose to solve these subproblems using extragradient and iterative quantization algorithms that lead to considerably faster convergency compared to conventional optimization methods. Extensive experiments on image recognition and object detection verify that the proposed algorithm is more effective than state-of-the-art approaches when coming to extremely low bit neural network.", "This paper presents incremental network quantization (INQ), a novel method, targeting to efficiently convert any pre-trained full-precision convolutional neural network (CNN) model into a low-precision version whose weights are constrained to be either powers of two or zero. Unlike existing methods which are struggled in noticeable accuracy loss, our INQ has the potential to resolve this issue, as benefiting from two innovations. On one hand, we introduce three interdependent operations, namely weight partition, group-wise quantization and re-training. A well-proven measure is employed to divide the weights in each layer of a pre-trained CNN model into two disjoint groups. 
The weights in the first group are responsible to form a low-precision base, thus they are quantized by a variable-length encoding method. The weights in the other group are responsible to compensate for the accuracy loss from the quantization, thus they are the ones to be re-trained. On the other hand, these three operations are repeated on the latest re-trained group in an iterative manner until all the weights are converted into low-precision ones, acting as an incremental network quantization and accuracy enhancement procedure. Extensive experiments on the ImageNet classification task using almost all known deep CNN architectures including AlexNet, VGG-16, GoogleNet and ResNets well testify the efficacy of the proposed method. Specifically, at 5-bit quantization, our models have improved accuracy than the 32-bit floating-point references. Taking ResNet-18 as an example, we further show that our quantized models with 4-bit, 3-bit and 2-bit ternary weights have improved or very similar accuracy against its 32-bit floating-point baseline. Besides, impressive results with the combination of network pruning and INQ are also reported. The code is available at this https URL.", "We propose two efficient approximations to standard convolutional neural networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks, the filters are approximated with binary values resulting in 32 ( ) memory saving. In XNOR-Networks, both the filters and the input to convolutional layers are binary. XNOR-Networks approximate convolutions using primarily binary operations. This results in 58 ( ) faster convolutional operations (in terms of number of the high precision operations) and 32 ( ) memory savings. XNOR-Nets offer the possibility of running state-of-the-art networks on CPUs (rather than GPUs) in real-time. Our binary networks are simple, accurate, efficient, and work on challenging visual tasks. We evaluate our approach on the ImageNet classification task. The classification accuracy with a Binary-Weight-Network version of AlexNet is the same as the full-precision AlexNet. We compare our method with recent network binarization methods, BinaryConnect and BinaryNets, and outperform these methods by large margins on ImageNet, more than (16 , ) in top-1 accuracy. Our code is available at: http: allenai.org plato xnornet.", "Recently, convolutional neural networks (CNN) have demonstrated impressive performance in various computer vision tasks. However, high performance hardware is typically indispensable for the application of CNN models due to the high computation complexity, which prohibits their further extensions. In this paper, we propose an efficient framework, namely Quantized CNN, to simultaneously speed-up the computation and reduce the storage and memory overhead of CNN models. Both filter kernels in convolutional layers and weighting matrices in fully-connected layers are quantized, aiming at minimizing the estimation error of each layer's response. Extensive experiments on the ILSVRC-12 benchmark demonstrate 4 6× speed-up and 15 20× compression with merely one percentage loss of classification accuracy. With our quantized CNN model, even mobile devices can accurately classify images within one second." ] }
1903.09795
2922825525
Prognostics or Remaining Useful Life (RUL) Estimation from multi-sensor time series data is useful to enable condition-based maintenance and ensure high operational availability of equipment. We propose a novel deep learning based approach for Prognostics with Uncertainty Quantification that is useful in scenarios where: (i) access to labeled failure data is scarce due to rarity of failures (ii) future operational conditions are unobserved and (iii) inherent noise is present in the sensor readings. All three scenarios mentioned are unavoidable sources of uncertainty in the RUL estimation process often resulting in unreliable RUL estimates. To address (i), we formulate RUL estimation as an Ordinal Regression (OR) problem, and propose LSTM-OR: deep Long Short Term Memory (LSTM) network based approach to learn the OR function. We show that LSTM-OR naturally allows for incorporation of censored operational instances in training along with the failed instances, leading to more robust learning. To address (ii), we propose a simple yet effective approach to quantify predictive uncertainty in the RUL estimation models by training an ensemble of LSTM-OR models. Through empirical evaluation on C-MAPSS turbofan engine benchmark datasets, we demonstrate that LSTM-OR is significantly better than the commonly used deep metric regression based approaches for RUL estimation, especially when failed training instances are scarce. Further, our uncertainty quantification approach yields high quality predictive uncertainty estimates while also leading to improved RUL estimates compared to single best LSTM-OR models.
Another class of approaches is based on metric regression. Unlike trajectory-similarity-based methods, which rely on the comparison of trends, metric regression methods attempt to learn a function that directly maps sensor data to RUL, e.g. @cite_30 @cite_39 @cite_15 @cite_20 @cite_5 @cite_37 @cite_21. Such methods can better deal with non-monotonic and noisy scenarios by learning to focus on the relevant underlying trends irrespective of noise. Within metric regression methods, a few consider non-temporal models such as Support Vector Regression to learn the mapping from the values of the sensors at a given time instant to RUL, e.g. @cite_39 @cite_15.
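A minimal example of the non-temporal variant is sketched below: an SVR maps a single time step's sensor readings directly to RUL. The synthetic data, sensor count, and hyperparameters are assumptions for illustration; real pipelines would use e.g. the C-MAPSS sensor channels and proper per-engine splits.

```python
# Hedged sketch: non-temporal metric regression for RUL with an SVR on
# synthetic instantaneous sensor readings (illustrative assumptions only).
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n, n_sensors = 2000, 14
X = rng.normal(size=(n, n_sensors))                           # sensor readings at single instants
true_rul = np.maximum(0, 200 - 50 * X[:, 0] + rng.normal(0, 5, n))  # synthetic RUL target

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=2.0))
model.fit(X[:1500], true_rul[:1500])
pred = model.predict(X[1500:])
rmse = np.sqrt(np.mean((pred - true_rul[1500:]) ** 2))
print("RMSE on held-out instants:", round(float(rmse), 2))
```

Temporal models such as LSTMs replace the per-instant feature vector with a sliding window of sensor readings but keep the same regression target.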
{ "cite_N": [ "@cite_30", "@cite_37", "@cite_21", "@cite_39", "@cite_5", "@cite_15", "@cite_20" ], "mid": [ "2110787940", "2744067593", "2892736377", "2030166992", "2750643767", "2038012452", "2415594836" ], "abstract": [ "This paper presents an approach and solution to the IEEE 2008 Prognostics and Health Management conference challenge problem. The solution utilizes an advanced recurrent neural network architecture to estimate the remaining useful life of the system. The recurrent neural network is trained with back-propagation through time gradient calculations, an Extended Kalman Filter training method, and evolutionary algorithms to generate an accurate and compact algorithm. This solution placed second overall in the competition with a very small margin between the first and second place finishers.", "Remaining Useful Life (RUL) of a component or a system is defined as the length from the current time to the end of the useful life. Accurate RUL estimation plays a critical role in Prognostics and Health Management(PHM). Data driven approaches for RUL estimation use sensor data and operational data to estimate RUL. Traditional regression based approaches and recent Convolutional Neural Network (CNN) approach use features created from sliding windows to build models. However, sequence information is not fully considered in these approaches. Sequence learning models such as Hidden Markov Models (HMMs) and Recurrent Neural Networks (RNNs) have flaws when modeling sequence information. HMMs are limited to discrete hidden states and are known to have issues when modeling long-term dependencies in the data. RNNs also have issues with long-term dependencies. In this work, we propose a Long Short-Term Memory (LSTM) approach for RUL estimation, which can make full use of the sensor sequence information and expose hidden patterns within sensor data with multiple operating conditions, fault and degradation models. Extensive experiments using three widely adopted Prognostics and Health Management data sets show that LSTM for RUL estimation significantly outperforms traditional approaches for RUL estimation as well as Convolutional Neural Network (CNN).", "We describe the approach – submitted as part of the 2018 PHM Data Challenge – for estimating time-to-failure or Remaining Useful Life (RUL) of Ion Mill Etching Systems in an online fashion using data from multiple sensors. RUL estimation from multi-sensor data can be considered as learning a regression function that maps a multivariate time series to a real-valued number, i.e. the RUL. We use a deep Recurrent Neural Network (RNN) to learn the metric regression function from multivariate time series. We highlight practical aspects of the RUL estimation problem in this data challenge such as i) multiple operating conditions, ii) lack of knowledge of exact onset of failure or degradation, iii) different operational behavior across tools in terms of range of values of parameters, etc. We describe our solution in the context of these challenges. Importantly, multiple modes of failure are possible in an ion mill etching system; therefore, it is desirable to estimate the RUL with respect to each of the failure modes. The data challenge considers three such modes of failures and requires estimating RULs with respect to each one, implying learning three metric regression functions - one corresponding to each failure mode. 
We propose a simple yet effective extension to existing methods of RUL estimation using RNN based regression to learn a single deep RNN model that can simultaneously estimate RULs corresponding to all three failure modes. Our best model is an ensemble of two such RNN models and achieves a score of 1:91 X 10^7 on the final validation set..", "Abstract Prognostics and health management (PHM) of rotating machines is gaining importance in industry and allows increasing reliability and decreasing machines’ breakdowns. Bearings are one of the most components present in mechanical equipments and one of their most common failures. So, to assess machines’ degradations, fault prognostic of bearings is developed in this paper. The proposed method relies on two steps (an offline step and an online step) to track the health state and predict the remaining useful life (RUL) of the bearings. The offline step is used to learn the degradation models of the bearings whereas the online step uses these models to assess the current health state of the bearings and predict their RUL. During the offline step, vibration signals acquired on the bearings are processed to extract features, which are then exploited to learn models that represent the evolution of the degradations. For this purpose, the isometric feature mapping reduction technique (ISOMAP) and support vector regression (SVR) are used. The method is applied on a laboratory experimental degradations related to bearings. The obtained results show that the method can effectively model the evolution of the degradations and predict the RUL of the bearings.", "We consider the problem of estimating the remaining useful life (RUL) of a system or a machine from sensor data. Many approaches for RUL estimation based on sensor data make assumptions about how machines degrade. Additionally, sensor data from machines is noisy and often suffers from missing values in many practical settings. We propose Embed-RUL: a novel approach for RUL estimation from sensor data that does not rely on any degradation-trend assumptions, is robust to noise, and handles missing values. Embed-RUL utilizes a sequence-to-sequence model based on Recurrent Neural Networks (RNNs) to generate embeddings for multivariate time series subsequences. The embeddings for normal and degraded machines tend to be different, and are therefore found to be useful for RUL estimation. We show that the embeddings capture the overall pattern in the time series while filtering out the noise, so that the embeddings of two machines with similar operational behavior are close to each other, even when their sensor readings have significant and varying levels of noise content. We perform experiments on publicly available turbofan engine dataset and a proprietary real-world dataset, and demonstrate that Embed-RUL outperforms the previously reported state-of-the-art on several metrics.", "Abstract Lithium-ion batteries are used as the main power source in many electronic and electrical devices. In particular, with the growth in battery-powered electric vehicle development, the lithium-ion battery plays a critical role in the reliability of vehicle systems. In order to provide timely maintenance and replacement of battery systems, it is necessary to develop a reliable and accurate battery health diagnostic that takes a prognostic approach. Therefore, this paper focuses on two main methods to determine a battery's health: (1) Battery State-of-Health (SOH) monitoring and (2) Remaining Useful Life (RUL) prediction. 
Both of these are calculated by using a filter algorithm known as the Support Vector Regression-Particle Filter (SVR-PF). Models for battery SOH monitoring based on SVR-PF are developed with novel capacity degradation parameters introduced to determine battery health in real time. Moreover, the RUL prediction model is proposed, which is able to provide the RUL value and update the RUL probability distribution to the End-of-Life cycle. Results for both methods are presented, showing that the proposed SOH monitoring and RUL prediction methods have good performance and that the SVR-PF has better monitoring and prediction capability than the standard particle filter (PF).", "Prognostics technique aims to accurately estimate the Remaining Useful Life (RUL) of a subsystem or a component using sensor data, which has many real world applications. However, many of the existing algorithms are based on linear models, which cannot capture the complex relationship between the sensor data and RUL. Although Multilayer Perceptron (MLP) has been applied to predict RUL, it cannot learn salient features automatically, because of its network structure. A novel deep Convolutional Neural Network (CNN) based regression approach for estimating the RUL is proposed in this paper. Although CNN has been applied on tasks such as computer vision, natural language processing, speech recognition etc., this is the first attempt to adopt CNN for RUL estimation in prognostics. Different from the existing CNN structure for computer vision, the convolution and pooling filters in our approach are applied along the temporal dimension over the multi-channel sensor data to incorporate automated feature learning from raw sensor signals in a systematic way. Through the deep architecture, the learned features are the higher-level abstract representation of low-level raw sensor signals. Furthermore, feature learning and RUL estimation are mutually enhanced by the supervised feedback. We compared with several state-of-the-art algorithms on two publicly available data sets to evaluate the effectiveness of this proposed approach. The encouraging results demonstrate that our proposed deep convolutional neural network based regression approach for RUL estimation is not only more efficient but also more accurate." ] }
1903.09795
2922825525
Prognostics or Remaining Useful Life (RUL) Estimation from multi-sensor time series data is useful to enable condition-based maintenance and ensure high operational availability of equipment. We propose a novel deep learning based approach for Prognostics with Uncertainty Quantification that is useful in scenarios where: (i) access to labeled failure data is scarce due to rarity of failures (ii) future operational conditions are unobserved and (iii) inherent noise is present in the sensor readings. All three scenarios mentioned are unavoidable sources of uncertainty in the RUL estimation process often resulting in unreliable RUL estimates. To address (i), we formulate RUL estimation as an Ordinal Regression (OR) problem, and propose LSTM-OR: deep Long Short Term Memory (LSTM) network based approach to learn the OR function. We show that LSTM-OR naturally allows for incorporation of censored operational instances in training along with the failed instances, leading to more robust learning. To address (ii), we propose a simple yet effective approach to quantify predictive uncertainty in the RUL estimation models by training an ensemble of LSTM-OR models. Through empirical evaluation on C-MAPSS turbofan engine benchmark datasets, we demonstrate that LSTM-OR is significantly better than the commonly used deep metric regression based approaches for RUL estimation, especially when failed training instances are scarce. Further, our uncertainty quantification approach yields high quality predictive uncertainty estimates while also leading to improved RUL estimates compared to single best LSTM-OR models.
: Deep temporal models such as those based on RNNs ( @cite_30 @cite_14 @cite_5 @cite_37 ) or Convolutional Neural Networks (CNNs) ( @cite_20 ) capture degradation trends better than non-temporal models and have been shown to perform better. Moreover, these models can be trained end-to-end without requiring feature engineering. Despite these advantages, deep models are prone to overfitting in the often-encountered practical scenario where the number of failed instances is small and most of the data is censored. Our ordinal-regression-based approach provides for such scenarios by using censored instances in addition to failed instances, yielding more robust models.
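As an illustration of how censored instances can contribute to training, the sketch below shows one plausible ordinal-regression target encoding; the thresholds and the masking rule are assumptions made for illustration and are not necessarily the exact formulation used in this paper.

```python
# Encode RUL estimation as ordinal regression over K thresholds. A failed instance yields
# all K binary targets; a censored instance only tells us RUL exceeds its current age, so
# targets above that point are masked out of the loss.
import numpy as np

thresholds = np.arange(10, 130, 10)            # K = 12 RUL thresholds in cycles (assumed)

def ordinal_targets(rul=None, age_if_censored=None):
    """Return (targets, mask): target_k = 1[RUL > threshold_k]."""
    if rul is not None:                        # failed instance: RUL fully observed
        return (rul > thresholds).astype(float), np.ones_like(thresholds, dtype=float)
    # Censored instance: we only know RUL > age, so thresholds below the age get label 1;
    # the remaining thresholds carry no information and get mask (loss weight) 0.
    mask = (age_if_censored > thresholds).astype(float)
    return mask.copy(), mask

y_failed, m_failed = ordinal_targets(rul=47)
y_cens, m_cens = ordinal_targets(age_if_censored=65)
print(y_failed, m_failed)
print(y_cens, m_cens)
```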
{ "cite_N": [ "@cite_30", "@cite_37", "@cite_14", "@cite_5", "@cite_20" ], "mid": [ "2110787940", "2744067593", "2513477101", "2750643767", "2415594836" ], "abstract": [ "This paper presents an approach and solution to the IEEE 2008 Prognostics and Health Management conference challenge problem. The solution utilizes an advanced recurrent neural network architecture to estimate the remaining useful life of the system. The recurrent neural network is trained with back-propagation through time gradient calculations, an Extended Kalman Filter training method, and evolutionary algorithms to generate an accurate and compact algorithm. This solution placed second overall in the competition with a very small margin between the first and second place finishers.", "Remaining Useful Life (RUL) of a component or a system is defined as the length from the current time to the end of the useful life. Accurate RUL estimation plays a critical role in Prognostics and Health Management(PHM). Data driven approaches for RUL estimation use sensor data and operational data to estimate RUL. Traditional regression based approaches and recent Convolutional Neural Network (CNN) approach use features created from sliding windows to build models. However, sequence information is not fully considered in these approaches. Sequence learning models such as Hidden Markov Models (HMMs) and Recurrent Neural Networks (RNNs) have flaws when modeling sequence information. HMMs are limited to discrete hidden states and are known to have issues when modeling long-term dependencies in the data. RNNs also have issues with long-term dependencies. In this work, we propose a Long Short-Term Memory (LSTM) approach for RUL estimation, which can make full use of the sensor sequence information and expose hidden patterns within sensor data with multiple operating conditions, fault and degradation models. Extensive experiments using three widely adopted Prognostics and Health Management data sets show that LSTM for RUL estimation significantly outperforms traditional approaches for RUL estimation as well as Convolutional Neural Network (CNN).", "Many approaches for estimation of Remaining Useful Life (RUL) of a machine, using its operational sensor data, make assumptions about how a system degrades or a fault evolves, e.g., exponential degradation. However, in many domains degradation may not follow a pattern. We propose a Long Short Term Memory based Encoder-Decoder (LSTM-ED) scheme to obtain an unsupervised health index (HI) for a system using multi-sensor time-series data. LSTM-ED is trained to reconstruct the time-series corresponding to healthy state of a system. The reconstruction error is used to compute HI which is then used for RUL estimation. We evaluate our approach on publicly available Turbofan Engine and Milling Machine datasets. We also present results on a real-world industry dataset from a pulverizer mill where we find significant correlation between LSTM-ED based HI and maintenance costs.", "We consider the problem of estimating the remaining useful life (RUL) of a system or a machine from sensor data. Many approaches for RUL estimation based on sensor data make assumptions about how machines degrade. Additionally, sensor data from machines is noisy and often suffers from missing values in many practical settings. We propose Embed-RUL: a novel approach for RUL estimation from sensor data that does not rely on any degradation-trend assumptions, is robust to noise, and handles missing values. 
Embed-RUL utilizes a sequence-to-sequence model based on Recurrent Neural Networks (RNNs) to generate embeddings for multivariate time series subsequences. The embeddings for normal and degraded machines tend to be different, and are therefore found to be useful for RUL estimation. We show that the embeddings capture the overall pattern in the time series while filtering out the noise, so that the embeddings of two machines with similar operational behavior are close to each other, even when their sensor readings have significant and varying levels of noise content. We perform experiments on publicly available turbofan engine dataset and a proprietary real-world dataset, and demonstrate that Embed-RUL outperforms the previously reported state-of-the-art on several metrics.", "Prognostics technique aims to accurately estimate the Remaining Useful Life (RUL) of a subsystem or a component using sensor data, which has many real world applications. However, many of the existing algorithms are based on linear models, which cannot capture the complex relationship between the sensor data and RUL. Although Multilayer Perceptron (MLP) has been applied to predict RUL, it cannot learn salient features automatically, because of its network structure. A novel deep Convolutional Neural Network (CNN) based regression approach for estimating the RUL is proposed in this paper. Although CNN has been applied on tasks such as computer vision, natural language processing, speech recognition etc., this is the first attempt to adopt CNN for RUL estimation in prognostics. Different from the existing CNN structure for computer vision, the convolution and pooling filters in our approach are applied along the temporal dimension over the multi-channel sensor data to incorporate automated feature learning from raw sensor signals in a systematic way. Through the deep architecture, the learned features are the higher-level abstract representation of low-level raw sensor signals. Furthermore, feature learning and RUL estimation are mutually enhanced by the supervised feedback. We compared with several state-of-the-art algorithms on two publicly available data sets to evaluate the effectiveness of this proposed approach. The encouraging results demonstrate that our proposed deep convolutional neural network based regression approach for RUL estimation is not only more efficient but also more accurate." ] }
1903.09795
2922825525
Prognostics or Remaining Useful Life (RUL) Estimation from multi-sensor time series data is useful to enable condition-based maintenance and ensure high operational availability of equipment. We propose a novel deep learning based approach for Prognostics with Uncertainty Quantification that is useful in scenarios where: (i) access to labeled failure data is scarce due to rarity of failures (ii) future operational conditions are unobserved and (iii) inherent noise is present in the sensor readings. All three scenarios mentioned are unavoidable sources of uncertainty in the RUL estimation process often resulting in unreliable RUL estimates. To address (i), we formulate RUL estimation as an Ordinal Regression (OR) problem, and propose LSTM-OR: deep Long Short Term Memory (LSTM) network based approach to learn the OR function. We show that LSTM-OR naturally allows for incorporation of censored operational instances in training along with the failed instances, leading to more robust learning. To address (ii), we propose a simple yet effective approach to quantify predictive uncertainty in the RUL estimation models by training an ensemble of LSTM-OR models. Through empirical evaluation on C-MAPSS turbofan engine benchmark datasets, we demonstrate that LSTM-OR is significantly better than the commonly used deep metric regression based approaches for RUL estimation, especially when failed training instances are scarce. Further, our uncertainty quantification approach yields high quality predictive uncertainty estimates while also leading to improved RUL estimates compared to single best LSTM-OR models.
: A set of techniques for deep survival analysis has been proposed in the medical domain, e.g. @cite_13 @cite_38 . Along similar lines, an approach combining deep learning and survival analysis for asset health management has been proposed in @cite_22 . However, it is not clear how such approaches can be adapted to RUL estimation applications, as they focus on estimating the survival probability at a given point in time and cannot provide RUL estimates. Further, @cite_9 proposes an approach that leverages adversarial learning for time-to-event modeling in the health domain. In contrast, LSTM-OR directly provides RUL estimates from time series sensor data.
{ "cite_N": [ "@cite_38", "@cite_9", "@cite_13", "@cite_22" ], "mid": [ "2618421739", "2798234853", "2753919178", "" ], "abstract": [ "An accurate model of patient-specific kidney graft survival distributions can help to improve shared-decision making in the treatment and care of patients. In this paper, we propose a deep learning method that directly models the survival function instead of estimating the hazard function to predict survival times for graft patients based on the principle of multi-task learning. By learning to jointly predict the time of the event, and its rank in the cox partial log likelihood framework, our deep learning approach outperforms, in terms of survival time prediction quality and concordance index, other common methods for survival analysis, including the Cox Proportional Hazards model and a network trained on the cox partial log-likelihood.", "Modern health data science applications leverage abundant molecular and electronic health data, providing opportunities for machine learning to build statistical models to support clinical practice. Time-to-event analysis, also called survival analysis, stands as one of the most representative examples of such statistical models. We present a novel deep-network-based approach that leverages adversarial learning to address a key challenge in modern time-to-event modeling: nonparametric estimation of event-time distributions. We also introduce a principled cost function to exploit information from censored events (events that occur subsequent to the observation window). Unlike most time-to-event models, we focus on the estimation of time-to-event distributions, rather than time ordering. We validate our model on both benchmark and real datasets, demonstrating that the proposed formulation yields significant performance gains relative to a parametric alternative, which we also propose.", "Medical practitioners use survival models to explore and understand the relationships between patients’ covariates (e.g. clinical and genetic features) and the effectiveness of various treatment options. Standard survival models like the linear Cox proportional hazards model require extensive feature engineering or prior medical knowledge to model treatment interaction at an individual level. While nonlinear survival methods, such as neural networks and survival forests, can inherently model these high-level interaction terms, they have yet to be shown as effective treatment recommender systems. We introduce DeepSurv, a Cox proportional hazards deep neural network and state-of-the-art survival method for modeling interactions between a patient’s covariates and treatment effectiveness in order to provide personalized treatment recommendations. We perform a number of experiments training DeepSurv on simulated and real survival data. We demonstrate that DeepSurv performs as well as or better than other state-of-the-art survival models and validate that DeepSurv successfully models increasingly complex relationships between a patient’s covariates and their risk of failure. We then show how DeepSurv models the relationship between a patient’s features and effectiveness of different treatment options to show how DeepSurv can be used to provide individual treatment recommendations. Finally, we train DeepSurv on real clinical studies to demonstrate how it’s personalized treatment recommendations would increase the survival time of a set of patients. 
The predictive and modeling capabilities of DeepSurv will enable medical researchers to use deep neural networks as a tool in their exploration, understanding, and prediction of the effects of a patient’s characteristics on their risk of failure.", "" ] }
1903.09795
2922825525
Prognostics or Remaining Useful Life (RUL) Estimation from multi-sensor time series data is useful to enable condition-based maintenance and ensure high operational availability of equipment. We propose a novel deep learning based approach for Prognostics with Uncertainty Quantification that is useful in scenarios where: (i) access to labeled failure data is scarce due to rarity of failures (ii) future operational conditions are unobserved and (iii) inherent noise is present in the sensor readings. All three scenarios mentioned are unavoidable sources of uncertainty in the RUL estimation process often resulting in unreliable RUL estimates. To address (i), we formulate RUL estimation as an Ordinal Regression (OR) problem, and propose LSTM-OR: deep Long Short Term Memory (LSTM) network based approach to learn the OR function. We show that LSTM-OR naturally allows for incorporation of censored operational instances in training along with the failed instances, leading to more robust learning. To address (ii), we propose a simple yet effective approach to quantify predictive uncertainty in the RUL estimation models by training an ensemble of LSTM-OR models. Through empirical evaluation on C-MAPSS turbofan engine benchmark datasets, we demonstrate that LSTM-OR is significantly better than the commonly used deep metric regression based approaches for RUL estimation, especially when failed training instances are scarce. Further, our uncertainty quantification approach yields high quality predictive uncertainty estimates while also leading to improved RUL estimates compared to single best LSTM-OR models.
: Recently, @cite_4 proposed using dropout at inference time to provide a Bayesian approximation in RUL estimation. Further, @cite_10 proposed using an ensemble of neural networks for predictive uncertainty estimation and demonstrated its merits relative to Bayesian methods. Similarly, we use an ensemble of LSTM networks to estimate the empirical uncertainty in RUL predictions.
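A minimal sketch of ensemble-based uncertainty estimation in the spirit of @cite_10 is given below; small MLP regressors on a toy 1-D problem stand in for LSTM networks purely to keep the example self-contained, which is an assumption of this sketch.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

ensemble = [
    MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=s).fit(X, y)
    for s in range(5)                                    # 5 members, each with a different seed
]

x_test = np.array([[0.5], [5.0]])                        # in-distribution vs. out-of-distribution
preds = np.stack([m.predict(x_test) for m in ensemble])  # shape: (members, test points)
print("mean prediction:", preds.mean(axis=0))
print("ensemble std (uncertainty):", preds.std(axis=0))  # larger spread expected at x = 5.0
```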
{ "cite_N": [ "@cite_10", "@cite_4" ], "mid": [ "2963238274", "2964059111" ], "abstract": [ "Deep neural networks (NNs) are powerful black box predictors that have recently achieved impressive performance on a wide spectrum of tasks. Quantifying predictive uncertainty in NNs is a challenging and yet unsolved problem. Bayesian NNs, which learn a distribution over weights, are currently the state-of-the-art for estimating predictive uncertainty; however these require significant modifications to the training procedure and are computationally expensive compared to standard (non-Bayesian) NNs. We propose an alternative to Bayesian NNs that is simple to implement, readily parallelizable, requires very little hyperparameter tuning, and yields high quality predictive uncertainty estimates. Through a series of experiments on classification and regression benchmarks, we demonstrate that our method produces well-calibrated uncertainty estimates which are as good or better than approximate Bayesian NNs. To assess robustness to dataset shift, we evaluate the predictive uncertainty on test examples from known and unknown distributions, and show that our method is able to express higher uncertainty on out-of-distribution examples. We demonstrate the scalability of our method by evaluating predictive uncertainty estimates on ImageNet.", "Deep learning tools have gained tremendous attention in applied machine learning. However such tools for regression and classification do not capture model uncertainty. In comparison, Bayesian models offer a mathematically grounded framework to reason about model uncertainty, but usually come with a prohibitive computational cost. In this paper we develop a new theoretical framework casting dropout training in deep neural networks (NNs) as approximate Bayesian inference in deep Gaussian processes. A direct result of this theory gives us tools to model uncertainty with dropout NNs - extracting information from existing models that has been thrown away so far. This mitigates the problem of representing uncertainty in deep learning without sacrificing either computational complexity or test accuracy. We perform an extensive study of the properties of dropout's uncertainty. Various network architectures and nonlinearities are assessed on tasks of regression and classification, using MNIST as an example. We show a considerable improvement in predictive log-likelihood and RMSE compared to existing state-of-the-art methods, and finish by using dropout's uncertainty in deep reinforcement learning." ] }
1903.09604
2952223920
In trick-taking card games, a two-step process of state sampling and evaluation is widely used to approximate move values. While the evaluation component is vital, the accuracy of move value estimates is also fundamentally linked to how well the sampling distribution corresponds the true distribution. Despite this, recent work in trick-taking card game AI has mainly focused on improving evaluation algorithms with limited work on improving sampling. In this paper, we focus on the effect of sampling on the strength of a player and propose a novel method of sampling more realistic states given move history. In particular, we use predictions about locations of individual cards made by a deep neural network --- trained on data from human gameplay - in order to sample likely worlds for evaluation. This technique, used in conjunction with Perfect Information Monte Carlo (PIMC) search, provides a substantial increase in cardplay strength in the popular trick-taking card game of Skat.
Perfect Information Monte Carlo (PIMC) search @cite_10 has been successfully applied to popular trick-taking card games like Contract Bridge @cite_5 , Skat @cite_11 , Hearts and Spades @cite_3 . PIMC has been heavily criticized over the years, starting with Frank and Basin, because it naively evades the imperfect-information elements of the game tree. However, it has remained relevant because it is still among the state-of-the-art algorithms for these games. In a later study, the authors attempt to understand this success and conclude that ``for classes of games with certain properties, including trick-taking card games, PIMC will not suffer large losses in comparison to a game-theoretic solution.''
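For reference, a generic PIMC skeleton is sketched below; the game interface (world sampling and perfect-information evaluation) is stubbed with toy placeholders, so it shows the control flow rather than any particular game implementation.

```python
import random

def pimc_choose_move(info_set, legal_moves, sample_consistent_world,
                     perfect_info_value, n_samples=50):
    """Average perfect-information values of each move over sampled worlds."""
    totals = {m: 0.0 for m in legal_moves}
    for _ in range(n_samples):
        world = sample_consistent_world(info_set)      # a determinized full state
        for m in legal_moves:
            totals[m] += perfect_info_value(world, m)  # e.g. a minimax / double-dummy solve
    return max(legal_moves, key=lambda m: totals[m])

# Toy usage with stubs so the skeleton runs: a "world" is just a random scalar here,
# and move "B" gets a small value bonus so PIMC should pick it.
moves = ["A", "B", "C"]
choice = pimc_choose_move(
    info_set=None,
    legal_moves=moves,
    sample_consistent_world=lambda info: random.random(),
    perfect_info_value=lambda world, m: world + (0.2 if m == "B" else 0.0),
    n_samples=200,
)
print("PIMC picks:", choice)
```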
{ "cite_N": [ "@cite_5", "@cite_10", "@cite_3", "@cite_11" ], "mid": [ "2145224651", "51928152", "1575146834", "1540121360" ], "abstract": [ "This paper investigates the problems arising in the construction of a program to play the game of contract bridge. These problems include both the difficulty of solving the game's perfect information variant, and techniques needed to address the fact that bridge is not, in fact, a perfect information game. GIB, the program being described, involves five separate technical advances: partition search, the practical application of Monte Carlo techniques to realistic problems, a focus on achievable sets to solve problems inherent in the Monte Carlo approach, an extension of alpha-beta pruning from total orders to arbitrary distributive lattices, and the use of squeaky wheel optimization to find approximately optimal solutions to cardplay problems. GIB is currently believed to be of approximately expert caliber, and is currently the strongest computer bridge program in the world.", "", "The UCT algorithm has been exceedingly popular for Go, a two-player game, significantly increasing the playing strength of Go programs in a very short time. This paper provides an analysis of the UCT algorithm in multi-player games, showing that UCT, when run in a multi-player game, is computing a mixed-strategy equilibrium, as opposed to maxn, which computes a pure-strategy equilibrium. We analyze the performance of UCT in several known domains and show that it performs as well or better than existing algorithms.", "Skat is Germany's national card game played by millions of players around the world. In this paper, we present the world's first computer skat player that plays at the level of human experts. This performance is achieved by improving state evaluations using game data produced by human players and by using these state evaluations to perform inference on the unobserved hands of opposing players. Our results demonstrate the gains from adding inference to an imperfect information game player and show that training on data from average human players can result in expert-level playing strength." ] }
1903.09604
2952223920
In trick-taking card games, a two-step process of state sampling and evaluation is widely used to approximate move values. While the evaluation component is vital, the accuracy of move value estimates is also fundamentally linked to how well the sampling distribution corresponds the true distribution. Despite this, recent work in trick-taking card game AI has mainly focused on improving evaluation algorithms with limited work on improving sampling. In this paper, we focus on the effect of sampling on the strength of a player and propose a novel method of sampling more realistic states given move history. In particular, we use predictions about locations of individual cards made by a deep neural network --- trained on data from human gameplay - in order to sample likely worlds for evaluation. This technique, used in conjunction with Perfect Information Monte Carlo (PIMC) search, provides a substantial increase in cardplay strength in the popular trick-taking card game of Skat.
Furtak and Buro implement a recursive variant of PIMC (IIMC) to alleviate some of the issues pointed out by Frank and Basin --- resulting in the current state-of-the-art player for Skat. Elsewhere, Information Set Monte Carlo Tree Search (ISMCTS) @cite_1 addresses the same issues in three different domains, but as Furtak and Buro argue, the resulting move values are biased because the player leaks private information to its playout adversaries by sampling only states consistent with the player's private information and allowing the strategies of the playout adversaries to adapt across rollouts. Sampling inconsistent states makes the search space intractable for many applications. Other work proposes a method for biasing MCTS techniques by boosting the scores of nodes reached by following actions that a supervised model judges likely to be played by humans. Applying this to ISMCTS in the imperfect-information setting is straightforward, but the resulting algorithm neglects the action history occurring before the root of the search and samples states uniformly from the root information set before proceeding. Each of these contributions improves state evaluation quality, but they fail to address the sampling problem investigated in this work.
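One common way to realize such biasing is a PUCT-style selection rule in which a learned move prior scales the exploration bonus, sketched below with made-up statistics; the cited work's exact scheme may differ.

```python
import math

def select_action(children, c_puct=1.5):
    """children: {action: dict(q=..., n=..., prior=...)}; returns the action to descend into."""
    n_parent = sum(ch["n"] for ch in children.values()) + 1
    def score(ch):
        exploration = c_puct * ch["prior"] * math.sqrt(n_parent) / (1 + ch["n"])
        return ch["q"] + exploration           # value estimate plus prior-weighted bonus
    return max(children, key=lambda a: score(children[a]))

children = {
    "lead_trump": {"q": 0.45, "n": 12, "prior": 0.60},   # the model says humans usually do this
    "discard":    {"q": 0.50, "n": 3,  "prior": 0.05},
    "duck":       {"q": 0.40, "n": 5,  "prior": 0.35},
}
print(select_action(children))
```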
{ "cite_N": [ "@cite_1" ], "mid": [ "2113228754" ], "abstract": [ "Monte Carlo tree search (MCTS) is an AI technique that has been successfully applied to many deterministic games of perfect information. This paper investigates the application of MCTS methods to games with hidden information and uncertainty. In particular, three new information set MCTS (ISMCTS) algorithms are presented which handle different sources of hidden information and uncertainty in games. Instead of searching minimax trees of game states, the ISMCTS algorithms search trees of information sets, more directly analyzing the true structure of the game. These algorithms are tested in three domains with different characteristics, and it is demonstrated that our new algorithms outperform existing approaches to handling hidden information and uncertainty in games." ] }
1903.09604
2952223920
In trick-taking card games, a two-step process of state sampling and evaluation is widely used to approximate move values. While the evaluation component is vital, the accuracy of move value estimates is also fundamentally linked to how well the sampling distribution corresponds the true distribution. Despite this, recent work in trick-taking card game AI has mainly focused on improving evaluation algorithms with limited work on improving sampling. In this paper, we focus on the effect of sampling on the strength of a player and propose a novel method of sampling more realistic states given move history. In particular, we use predictions about locations of individual cards made by a deep neural network --- trained on data from human gameplay - in order to sample likely worlds for evaluation. This technique, used in conjunction with Perfect Information Monte Carlo (PIMC) search, provides a substantial increase in cardplay strength in the popular trick-taking card game of Skat.
Kermit @cite_11 @cite_12 uses a table-based procedure that takes opponent bids or declarations into account in order to infer the likelihood of states within an information set. Unlike our work, Kermit does not use the sequence of actions during the cardplay phase for further inference --- it only marginalizes over its own private cards and those that have already been played. Ginsberg's bridge-playing GIB @cite_5 was the first successful application of PIMC in a trick-taking card game. GIB also appears to perform some state inference in that it samples a set @math of deals ``consistent with both the bidding and play'', but details regarding the inference are absent from the paper.
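A toy sketch of inference-weighted world sampling is shown below: per-card probabilities of sitting in a particular opponent's hand (here hard-coded; in practice they would come from bidding tables, declarations or a learned model) bias which consistent deals get sampled. The card names and probabilities are made up and this is not the procedure of any cited player.

```python
import numpy as np

rng = np.random.default_rng(0)
unseen = ["J_clubs", "J_spades", "A_hearts", "7_diamonds", "K_hearts", "8_clubs"]
p_opp1 = np.array([0.9, 0.8, 0.5, 0.3, 0.5, 0.2])   # P(card sits with opponent 1): made up

def sample_world(hand_size=3):
    """Deal the unseen cards into two opponent hands, biased by the inferred weights."""
    w = p_opp1 / p_opp1.sum()
    opp1_idx = rng.choice(len(unseen), size=hand_size, replace=False, p=w)
    opp1 = [unseen[i] for i in opp1_idx]
    opp2 = [c for c in unseen if c not in opp1]
    return opp1, opp2

for _ in range(3):
    print(sample_world())   # high-probability cards end up with opponent 1 more often
```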
{ "cite_N": [ "@cite_5", "@cite_12", "@cite_11" ], "mid": [ "2145224651", "2003861737", "1540121360" ], "abstract": [ "This paper investigates the problems arising in the construction of a program to play the game of contract bridge. These problems include both the difficulty of solving the game's perfect information variant, and techniques needed to address the fact that bridge is not, in fact, a perfect information game. GIB, the program being described, involves five separate technical advances: partition search, the practical application of Monte Carlo techniques to realistic problems, a focus on achievable sets to solve problems inherent in the Monte Carlo approach, an extension of alpha-beta pruning from total orders to arbitrary distributive lattices, and the use of squeaky wheel optimization to find approximately optimal solutions to cardplay problems. GIB is currently believed to be of approximately expert caliber, and is currently the strongest computer bridge program in the world.", "Perfect information Monte Carlo (PIMC) search is the method of choice for constructing strong Al systems for trick-taking card games. PIMC search evaluates moves in imperfect information games by repeatedly sampling worlds based on state inference and estimating move values by solving the corresponding perfect information scenarios. PIMC search performs well in trick-taking card games despite the fact that it suffers from the strategy fusion problem, whereby the game's information set structure is ignored because moves are evaluated opportunistically in each world. In this paper we describe imperfect information Monte Carlo (IIMC) search, which aims at mitigating this problem by basing move evaluation on more realistic playout sequences rather than perfect information move values. We show that RecPIMC - a recursive IIMC search variant based on perfect information evaluation - performs considerably better than PIMC search in a large class of synthetic imperfect information games and the popular card game of Skat, for which PIMC search is the state-of-the-art cardplay algorithm.", "Skat is Germany's national card game played by millions of players around the world. In this paper, we present the world's first computer skat player that plays at the level of human experts. This performance is achieved by improving state evaluations using game data produced by human players and by using these state evaluations to perform inference on the unobserved hands of opposing players. Our results demonstrate the gains from adding inference to an imperfect information game player and show that training on data from average human players can result in expert-level playing strength." ] }
1903.09837
2922562294
Curve text or arbitrary shape text is very common in real-world scenarios. In this paper, we propose a novel framework with the local segmentation network (LSN) followed by the curve connection to detect text in horizontal, oriented and curved forms. The LSN is composed of two elements, i.e., proposal generation to get the horizontal rectangle proposals with high overlap with text and text segmentation to find the arbitrary shape text region within proposals. The curve connection is then designed to connect the local mask to the detection results. We conduct experiments using the proposed framework on two real-world curve text detection datasets and demonstrate the effectiveness over previous approaches.
Scene text detection has drawn growing attention from the computer vision community in recent years. With the rapid development of object detection, state-of-the-art frameworks such as Faster-RCNN @cite_3 and SSD @cite_14 have been widely applied to the text detection field. However, whereas object detection only requires coarse localization of objects, scene text detection requires precise positioning of characters. Therefore, many remarkable object-detection-based methods for scene text detection have been proposed. These methods focus on more precise character positioning and can be roughly classified into two categories, i.e., anchor-based methods and link-based methods.
{ "cite_N": [ "@cite_14", "@cite_3" ], "mid": [ "2193145675", "2613718673" ], "abstract": [ "We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. SSD is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stages and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Experimental results on the PASCAL VOC, COCO, and ILSVRC datasets confirm that SSD has competitive accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. For (300 300 ) input, SSD achieves 74.3 mAP on VOC2007 test at 59 FPS on a Nvidia Titan X and for (512 512 ) input, SSD achieves 76.9 mAP, outperforming a comparable state of the art Faster R-CNN model. Compared to other single stage methods, SSD has much better accuracy even with a smaller input image size. Code is available at https: github.com weiliu89 caffe tree ssd.", "State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [7] and Fast R-CNN [5] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully-convolutional network that simultaneously predicts object bounds and objectness scores at each position. RPNs are trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. With a simple alternating optimization, RPN and Fast R-CNN can be trained to share convolutional features. For the very deep VGG-16 model [19], our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007 (73.2 mAP) and 2012 (70.4 mAP) using 300 proposals per image. Code is available at https: github.com ShaoqingRen faster_rcnn." ] }
1903.09837
2922562294
Curve text or arbitrary shape text is very common in real-world scenarios. In this paper, we propose a novel framework with the local segmentation network (LSN) followed by the curve connection to detect text in horizontal, oriented and curved forms. The LSN is composed of two elements, i.e., proposal generation to get the horizontal rectangle proposals with high overlap with text and text segmentation to find the arbitrary shape text region within proposals. The curve connection is then designed to connect the local mask to the detection results. We conduct experiments using the proposed framework on two real-world curve text detection datasets and demonstrate the effectiveness over previous approaches.
Anchor-based methods. The horizontal boxes used in object detection work well for document text detection, but text orientation varies widely in scene text detection. Ma et al. @cite_11 proposed a multi-oriented scene text detection approach that generates six-orientation anchors at each point of the feature map. A quadrilateral anchor was introduced in @cite_12 to detect text with a tighter quadrangle. Zhou et al. @cite_2 combined these representations and proposed an efficient detector using a single fully convolutional network with two branches. Liu et al. @cite_18 applied 14 landmark points to represent curved text flexibly. All of the aforementioned methods pursue a tighter text boundary, but they are limited by templates that lack the variability to cover text with extreme aspect ratios, such as long text, or non-quadrilateral text.
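The sketch below illustrates the anchor-based idea of emitting a template set of rotated boxes at every feature-map location; the stride, sizes and angles are arbitrary choices for illustration rather than the settings of any cited detector.

```python
import numpy as np

stride, base_size = 16, 32
angles = np.deg2rad([-60, -30, 0, 30, 60, 90])    # six orientations (degrees -> radians)
aspect_ratios = [2.0, 5.0]                         # text regions tend to be wide

def anchors_at(fx, fy):
    """Rotated anchors (cx, cy, w, h, theta) for one feature-map cell."""
    cx, cy = fx * stride + stride / 2, fy * stride + stride / 2
    out = []
    for ar in aspect_ratios:
        w, h = base_size * np.sqrt(ar), base_size / np.sqrt(ar)
        for theta in angles:
            out.append((cx, cy, w, h, float(theta)))
    return out

a = anchors_at(3, 7)
print(len(a), "anchors at one location; first:", a[0])
```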
{ "cite_N": [ "@cite_18", "@cite_2", "@cite_12", "@cite_11" ], "mid": [ "2772800855", "2605982830", "2604243686", "2593539516" ], "abstract": [ "Scene text detection has been made great progress in recent years. The detection manners are evolving from axis-aligned rectangle to rotated rectangle and further to quadrangle. However, current datasets contain very little curve text, which can be widely observed in scene images such as signboard, product name and so on. To raise the concerns of reading curve text in the wild, in this paper, we construct a curve text dataset named CTW1500, which includes over 10k text annotations in 1,500 images (1000 for training and 500 for testing). Based on this dataset, we pioneering propose a polygon based curve text detector (CTD) which can directly detect curve text without empirical combination. Moreover, by seamlessly integrating the recurrent transverse and longitudinal offset connection (TLOC), the proposed method can be end-to-end trainable to learn the inherent connection among the position offsets. This allows the CTD to explore context information instead of predicting points independently, resulting in more smooth and accurate detection. We also propose two simple but effective post-processing methods named non-polygon suppress (NPS) and polygonal non-maximum suppression (PNMS) to further improve the detection accuracy. Furthermore, the proposed approach in this paper is designed in an universal manner, which can also be trained with rectangular or quadrilateral bounding boxes without extra efforts. Experimental results on CTW-1500 demonstrate our method with only a light backbone can outperform state-of-the-art methods with a large margin. By evaluating only in the curve or non-curve subset, the CTD + TLOC can still achieve the best results. Code is available at this https URL", "Previous approaches for scene text detection have already achieved promising performances across various benchmarks. However, they usually fall short when dealing with challenging scenarios, even when equipped with deep neural network models, because the overall performance is determined by the interplay of multiple stages and components in the pipelines. In this work, we propose a simple yet powerful pipeline that yields fast and accurate text detection in natural scenes. The pipeline directly predicts words or text lines of arbitrary orientations and quadrilateral shapes in full images, eliminating unnecessary intermediate steps (e.g., candidate aggregation and word partitioning), with a single neural network. The simplicity of our pipeline allows concentrating efforts on designing loss functions and neural network architecture. Experiments on standard datasets including ICDAR 2015, COCO-Text and MSRA-TD500 demonstrate that the proposed algorithm significantly outperforms state-of-the-art methods in terms of both accuracy and efficiency. On the ICDAR 2015 dataset, the proposed algorithm achieves an F-score of 0.7820 at 13.2fps at 720p resolution.", "Detecting incidental scene text is a challenging task because of multi-orientation, perspective distortion, and variation of text size, color and scale. Retrospective research has only focused on using rectangular bounding box or horizontal sliding window to localize text, which may result in redundant background noise, unnecessary overlap or even information loss. 
To address these issues, we propose a new Convolutional Neural Networks (CNNs) based method, named Deep Matching Prior Network (DMPNet), to detect text with tighter quadrangle. First, we use quadrilateral sliding windows in several specific intermediate convolutional layers to roughly recall the text with higher overlapping area and then a shared Monte-Carlo method is proposed for fast and accurate computing of the polygonal areas. After that, we designed a sequential protocol for relative regression which can exactly predict text with compact quadrangle. Moreover, a auxiliary smooth Ln loss is also proposed for further regressing the position of text, which has better overall performance than L2 loss and smooth L1 loss in terms of robustness and stability. The effectiveness of our approach is evaluated on a public word-level, multi-oriented scene text database, ICDAR 2015 Robust Reading Competition Challenge 4 Incidental scene text localization. The performance of our method is evaluated by using F-measure and found to be 70.64 , outperforming the existing state-of-the-art method with F-measure 63.76 .", "This paper introduces a novel rotation-based framework for arbitrary-oriented text detection in natural scene images. We present the Rotation Region Proposal Networks , which are designed to generate inclined proposals with text orientation angle information. The angle information is then adapted for bounding box regression to make the proposals more accurately fit into the text region in terms of the orientation. The Rotation Region-of-Interest pooling layer is proposed to project arbitrary-oriented proposals to a feature map for a text region classifier. The whole framework is built upon a region-proposal-based architecture, which ensures the computational efficiency of the arbitrary-oriented text detection compared with previous text detection systems. We conduct experiments using the rotation-based framework on three real-world scene text detection datasets and demonstrate its superiority in terms of effectiveness and efficiency over previous approaches." ] }
1903.09837
2922562294
Curve text or arbitrary shape text is very common in real-world scenarios. In this paper, we propose a novel framework with the local segmentation network (LSN) followed by the curve connection to detect text in horizontal, oriented and curved forms. The LSN is composed of two elements, i.e., proposal generation to get the horizontal rectangle proposals with high overlap with text and text segmentation to find the arbitrary shape text region within proposals. The curve connection is then designed to connect the local mask to the detection results. We conduct experiments using the proposed framework on two real-world curve text detection datasets and demonstrate the effectiveness over previous approaches.
Link-based methods are more robust when facing scenes with long text or non-quadrilateral text. Tian et al. @cite_7 introduced CTPN, which uses an LSTM @cite_4 to link several text proposals. Shi et al. @cite_17 proposed the SegLink framework, which decomposes text into segments and links, and then links the text segments together. Tian et al. @cite_9 introduced a graph method called Min-Cost Flow to link single characters. However, these methods rely on a strong prior that all proposals to be linked lie on a line. This hypothesis makes the problem easier and tractable, but it cannot handle curved text.
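The sketch below illustrates the link-based idea of merging locally detected segments into text instances by following predicted links, here with a tiny union-find; the segments and link predictions are hard-coded stand-ins for network outputs.

```python
segments = ["s0", "s1", "s2", "s3", "s4"]
links = [("s0", "s1"), ("s1", "s2"), ("s3", "s4")]     # pairs the model predicts as connected

parent = {s: s for s in segments}
def find(x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]                  # path halving
        x = parent[x]
    return x

for a, b in links:
    parent[find(a)] = find(b)                          # union the two groups

groups = {}
for s in segments:
    groups.setdefault(find(s), []).append(s)
print(list(groups.values()))                           # -> [['s0', 's1', 's2'], ['s3', 's4']]
```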
{ "cite_N": [ "@cite_9", "@cite_4", "@cite_7", "@cite_17" ], "mid": [ "2239285313", "2513222501", "2519818067", "2605076167" ], "abstract": [ "The prevalent scene text detection approach follows four sequential steps comprising character candidate detection, false character candidate removal, text line extraction, and text line verification. However, errors occur and accumulate throughout each of these sequential steps which often lead to low detection performance. To address these issues, we propose a unified scene text detection system, namely Text Flow, by utilizing the minimum cost (min-cost) flow network model. With character candidates detected by cascade boosting, the min-cost flow network model integrates the last three sequential steps into a single process which solves the error accumulation problem at both character level and text line level effectively. The proposed technique has been tested on three public datasets, i.e, ICDAR2011 dataset, ICDAR2013 dataset and a multilingual dataset and it outperforms the state-of-the-art methods on all three datasets with much higher recall and F-score. The good performance on the multilingual dataset shows that the proposed technique can be used for the detection of texts in different languages.", "In this paper, we present bidirectional Long Short Term Memory (LSTM) networks, and a modified, full gradient version of the LSTM learning algorithm. We evaluate Bidirectional LSTM (BLSTM) and several other network architectures on the benchmark task of framewise phoneme classification, using the TIMIT database. Our main findings are that bidirectional networks outperform unidirectional ones, and Long Short Term Memory (LSTM) is much faster and also more accurate than both standard Recurrent Neural Nets (RNNs) and time-windowed Multilayer Perceptrons (MLPs). Our results support the view that contextual information is crucial to speech processing, and suggest that BLSTM is an effective architecture with which to exploit it'.", "We propose a novel Connectionist Text Proposal Network (CTPN) that accurately localizes text lines in natural image. The CTPN detects a text line in a sequence of fine-scale text proposals directly in convolutional feature maps. We develop a vertical anchor mechanism that jointly predicts location and text non-text score of each fixed-width proposal, considerably improving localization accuracy. The sequential proposals are naturally connected by a recurrent neural network, which is seamlessly incorporated into the convolutional network, resulting in an end-to-end trainable model. This allows the CTPN to explore rich context information of image, making it powerful to detect extremely ambiguous text. The CTPN works reliably on multi-scale and multi-language text without further post-processing, departing from previous bottom-up methods requiring multi-step post filtering. It achieves 0.88 and 0.61 F-measure on the ICDAR 2013 and 2015 benchmarks, surpassing recent results [8, 35] by a large margin. The CTPN is computationally efficient with 0.14 s image, by using the very deep VGG16 model [27]. Online demo is available: http: textdet.com .", "Most state-of-the-art text detection methods are specific to horizontal Latin text and are not fast enough for real-time applications. We introduce Segment Linking (SegLink), an oriented text detection method. The main idea is to decompose text into two locally detectable elements, namely segments and links. 
A segment is an oriented box covering a part of a word or text line, A link connects two adjacent segments, indicating that they belong to the same word or text line. Both elements are detected densely at multiple scales by an end-to-end trained, fully-convolutional neural network. Final detections are produced by combining segments connected by links. Compared with previous methods, SegLink improves along the dimensions of accuracy, speed, and ease of training. It achieves an f-measure of 75.0 on the standard ICDAR 2015 Incidental (Challenge 4) benchmark, outperforming the previous best by a large margin. It runs at over 20 FPS on 512x512 images. Moreover, without modification, SegLink is able to detect long lines of non-Latin text, such as Chinese." ] }
1903.09837
2922562294
Curve text or arbitrary shape text is very common in real-world scenarios. In this paper, we propose a novel framework with the local segmentation network (LSN) followed by the curve connection to detect text in horizontal, oriented and curved forms. The LSN is composed of two elements, i.e., proposal generation to get the horizontal rectangle proposals with high overlap with text and text segmentation to find the arbitrary shape text region within proposals. The curve connection is then designed to connect the local mask to the detection results. We conduct experiments using the proposed framework on two real-world curve text detection datasets and demonstrate the effectiveness over previous approaches.
Recently, several approaches have been proposed to detect text of arbitrary shapes, and curve text datasets have been released for research. @cite_18 proposed a polygon-based curve text detector (CTD) which can directly detect curve text without empirical combination. The TextSnake framework @cite_13 models a text instance as a sequence of ordered disks. To separate close text instances, Li et al. @cite_10 designed PSENet, which uses a progressive scale algorithm to gradually expand pre-defined kernels. Different from previous methods, our local segmentation network combines the advantages of link-based and anchor-based methods, using proposals to roughly locate text and obtaining accurate boundaries via text segmentation. Our method also achieves state-of-the-art performance on the recent curve text detection datasets.
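As a concrete illustration of the progressive scale expansion described for PSENet, the sketch below grows labelled kernels breadth-first inside the full text mask so that touching instances remain separated; the tiny arrays are made-up toy inputs and the real algorithm expands through multiple kernel scales.

```python
import numpy as np
from collections import deque

text_mask = np.array([[1, 1, 1, 1, 1, 1],
                      [1, 1, 1, 1, 1, 1]])
kernels   = np.array([[1, 1, 0, 0, 2, 2],        # two shrunk kernels, instance labels 1 and 2
                      [0, 0, 0, 0, 0, 0]])

labels = kernels.copy()
queue = deque((r, c) for r, c in zip(*np.nonzero(kernels)))
while queue:
    r, c = queue.popleft()
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if (0 <= nr < labels.shape[0] and 0 <= nc < labels.shape[1]
                and text_mask[nr, nc] and labels[nr, nc] == 0):
            labels[nr, nc] = labels[r, c]        # expand this instance by one pixel
            queue.append((nr, nc))
print(labels)                                    # columns end up split between instances 1 and 2
```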
{ "cite_N": [ "@cite_10", "@cite_18", "@cite_13" ], "mid": [ "2806327167", "2772800855", "2810028092" ], "abstract": [ "Scene text detection has witnessed rapid progress especially with the recent development of convolutional neural networks. However, there still exists two challenges which prevent the algorithm into industry applications. On the one hand, most of the state-of-art algorithms require quadrangle bounding box which is in-accurate to locate the texts with arbitrary shape. On the other hand, two text instances which are close to each other may lead to a false detection which covers both instances. Traditionally, the segmentation-based approach can relieve the first problem but usually fail to solve the second challenge. To address these two challenges, in this paper, we propose a novel Progressive Scale Expansion Network (PSENet), which can precisely detect text instances with arbitrary shapes. More specifically, PSENet generates the different scale of kernels for each text instance, and gradually expands the minimal scale kernel to the text instance with the complete shape. Due to the fact that there are large geometrical margins among the minimal scale kernels, our method is effective to split the close text instances, making it easier to use segmentation-based methods to detect arbitrary-shaped text instances. Extensive experiments on CTW1500, Total-Text, ICDAR 2015 and ICDAR 2017 MLT validate the effectiveness of PSENet. Notably, on CTW1500, a dataset full of long curve texts, PSENet achieves a F-measure of 74.3 at 27 FPS, and our best F-measure (82.2 ) outperforms state-of-art algorithms by 6.6 . The code will be released in the future.", "Scene text detection has been made great progress in recent years. The detection manners are evolving from axis-aligned rectangle to rotated rectangle and further to quadrangle. However, current datasets contain very little curve text, which can be widely observed in scene images such as signboard, product name and so on. To raise the concerns of reading curve text in the wild, in this paper, we construct a curve text dataset named CTW1500, which includes over 10k text annotations in 1,500 images (1000 for training and 500 for testing). Based on this dataset, we pioneering propose a polygon based curve text detector (CTD) which can directly detect curve text without empirical combination. Moreover, by seamlessly integrating the recurrent transverse and longitudinal offset connection (TLOC), the proposed method can be end-to-end trainable to learn the inherent connection among the position offsets. This allows the CTD to explore context information instead of predicting points independently, resulting in more smooth and accurate detection. We also propose two simple but effective post-processing methods named non-polygon suppress (NPS) and polygonal non-maximum suppression (PNMS) to further improve the detection accuracy. Furthermore, the proposed approach in this paper is designed in an universal manner, which can also be trained with rectangular or quadrilateral bounding boxes without extra efforts. Experimental results on CTW-1500 demonstrate our method with only a light backbone can outperform state-of-the-art methods with a large margin. By evaluating only in the curve or non-curve subset, the CTD + TLOC can still achieve the best results. 
Code is available at this https URL", "Driven by deep neural networks and large scale datasets, scene text detection methods have progressed substantially over the past years, continuously refreshing the performance records on various standard benchmarks. However, limited by the representations (axis-aligned rectangles, rotated rectangles or quadrangles) adopted to describe text, existing methods may fall short when dealing with much more free-form text instances, such as curved text, which are actually very common in real-world scenarios. To tackle this problem, we propose a more flexible representation for scene text, termed as TextSnake, which is able to effectively represent text instances in horizontal, oriented and curved forms. In TextSnake, a text instance is described as a sequence of ordered, overlapping disks centered at symmetric axes, each of which is associated with potentially variable radius and orientation. Such geometry attributes are estimated via a Fully Convolutional Network (FCN) model. In experiments, the text detector based on TextSnake achieves state-of-the-art or comparable performance on Total-Text and SCUT-CTW1500, the two newly published benchmarks with special emphasis on curved text in natural images, as well as the widely-used datasets ICDAR 2015 and MSRA-TD500. Specifically, TextSnake outperforms the baseline on Total-Text by more than 40 in F-measure." ] }
1903.09762
2925234320
Model-free reinforcement learning (RL) provides an attractive approach for learning control policies directly in high dimensional state spaces. However, many goal-oriented tasks involving sparse rewards remain difficult to solve with state-of-the-art model-free RL algorithms, even in simulation. One of the key difficulties is that deep RL, due to its relatively poor sample complexity, often requires a prohibitive number of trials to obtain a learning signal. We propose a novel, non-sparse reward function for robotic RL tasks by leveraging physical priors in the form of a time-to-reach (TTR) function computed from an approximate system dynamics model. TTR functions come from the optimal control field and measure the minimal time required to move from any state to the goal. However, TTR functions are intractable to compute for complex systems, so we compute it in a lower-dimensional state space, and then do a simple transformation to convert it into a TTR-based reward function for the MDP in RL tasks. Our TTR-based reward function provides highly-informative rewards that account for system dynamics.
Integration of RL and neural networks has a long history @cite_0 @cite_12 . With the recent exciting achievements of deep learning @cite_10 , deep neural networks have become prevalent in RL across areas such as games, robotics and NLP. The deep Q-network, which was developed to learn to play a range of Atari 2600 video games at a superhuman level directly from image pixels, addresses the instability of function approximation in value-based RL methods @cite_11 . Policy-based methods, including guided policy search (GPS) @cite_39 , trust region policy optimization (TRPO) @cite_18 and proximal policy optimization (PPO) @cite_5 , aim to find policies directly by gradient-free or gradient-based means. In this work, we evaluate our reward shaping method on two robotic tasks with a policy-based model-free learning algorithm and demonstrate competitive performance.
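As a rough illustration of how a precomputed TTR function could be turned into a dense reward for a policy-gradient learner such as PPO or TRPO, the sketch below wraps an environment so that the shaped reward is the negative time-to-reach value of the projected low-dimensional state. The `ttr_lookup` callable, the state projection, and the gym-like environment interface are assumptions for illustration only, not the paper's implementation.

```python
# Hypothetical sketch: turning a tabulated TTR function into a dense reward (illustrative only).
class TTRRewardWrapper:
    def __init__(self, env, ttr_lookup, project, scale=1.0):
        self.env = env                  # assumed to follow a gym-like reset/step interface
        self.ttr_lookup = ttr_lookup    # callable: low-dimensional state -> time-to-reach estimate
        self.project = project          # maps the full state to the low-dimensional TTR state space
        self.scale = scale

    def reset(self):
        return self.env.reset()

    def step(self, action):
        obs, sparse_reward, done, info = self.env.step(action)
        ttr = self.ttr_lookup(self.project(obs))
        shaped_reward = -self.scale * ttr + sparse_reward   # smaller TTR => larger reward
        return obs, shaped_reward, done, info
```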
{ "cite_N": [ "@cite_18", "@cite_39", "@cite_0", "@cite_5", "@cite_10", "@cite_12", "@cite_11" ], "mid": [ "2949608212", "2104733512", "2121863487", "2736601468", "", "2076063813", "2145339207" ], "abstract": [ "We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.", "Direct policy search can effectively scale to high-dimensional systems, but complex policies with hundreds of parameters often present a challenge for such methods, requiring numerous samples and often falling into poor local optima. We present a guided policy search algorithm that uses trajectory optimization to direct policy learning and avoid poor local optima. We show how differential dynamic programming can be used to generate suitable guiding samples, and describe a regularized importance sampled policy optimization that incorporates these samples into the policy search. We evaluate the method by learning neural network controllers for planar swimming, hopping, and walking, as well as simulated 3D humanoid running.", "Reinforcement learning, one of the most active research areas in artificial intelligence, is a computational approach to learning whereby an agent tries to maximize the total amount of reward it receives when interacting with a complex, uncertain environment. In Reinforcement Learning, Richard Sutton and Andrew Barto provide a clear and simple account of the key ideas and algorithms of reinforcement learning. Their discussion ranges from the history of the field's intellectual foundations to the most recent developments and applications. The only necessary mathematical background is familiarity with elementary concepts of probability. The book is divided into three parts. Part I defines the reinforcement learning problem in terms of Markov decision processes. Part II provides basic solution methods: dynamic programming, Monte Carlo methods, and temporal-difference learning. Part III presents a unified view of the solution methods and incorporates artificial neural networks, eligibility traces, and planning; the two final chapters present case studies and consider the future of reinforcement learning.", "We propose a new family of policy gradient methods for reinforcement learning, which alternate between sampling data through interaction with the environment, and optimizing a \"surrogate\" objective function using stochastic gradient ascent. Whereas standard policy gradient methods perform one gradient update per data sample, we propose a novel objective function that enables multiple epochs of minibatch updates. The new methods, which we call proximal policy optimization (PPO), have some of the benefits of trust region policy optimization (TRPO), but they are much simpler to implement, more general, and have better sample complexity (empirically). 
Our experiments test PPO on a collection of benchmark tasks, including simulated robotic locomotion and Atari game playing, and we show that PPO outperforms other online policy gradient methods, and overall strikes a favorable balance between sample complexity, simplicity, and wall-time.", "", "In recent years, deep artificial neural networks (including recurrent ones) have won numerous contests in pattern recognition and machine learning. This historical survey compactly summarizes relevant work, much of it from the previous millennium. Shallow and Deep Learners are distinguished by the depth of their credit assignment paths, which are chains of possibly learnable, causal links between actions and effects. I review deep supervised learning (also recapitulating the history of backpropagation), unsupervised learning, reinforcement learning & evolutionary computation, and indirect search for short programs encoding deep and large networks.", "An artificial agent is developed that learns to play a diverse range of classic Atari 2600 computer games directly from sensory experience, achieving a performance comparable to that of an expert human player; this work paves the way to building general-purpose learning algorithms that bridge the divide between perception and action." ] }
1903.09762
2925234320
Model-free reinforcement learning (RL) provides an attractive approach for learning control policies directly in high dimensional state spaces. However, many goal-oriented tasks involving sparse rewards remain difficult to solve with state-of-the-art model-free RL algorithms, even in simulation. One of the key difficulties is that deep RL, due to its relatively poor sample complexity, often requires a prohibitive number of trials to obtain a learning signal. We propose a novel, non-sparse reward function for robotic RL tasks by leveraging physical priors in the form of a time-to-reach (TTR) function computed from an approximate system dynamics model. TTR functions come from the optimal control field and measure the minimal time required to move from any state to the goal. However, TTR functions are intractable to compute for complex systems, so we compute it in a lower-dimensional state space, and then do a simple transformation to convert it into a TTR-based reward function for the MDP in RL tasks. Our TTR-based reward function provides highly-informative rewards that account for system dynamics.
Exploration inefficiency, as the root cause of high sample complexity, remains a major challenge in RL @cite_29 . The "deep exploration" strategy addresses this problem by maintaining a distribution over possible value functions and bootstrapping from it with random initialization @cite_29 . Curriculum-based approaches pave another path by first training on easier sub-problems and increasing their difficulty as training progresses, until the original task becomes appropriate to train on directly @cite_17 . Hierarchical RL frameworks try to improve exploration efficiency by building modules at different levels, where a top-level module learns a policy over options (subgoals) and bottom-level modules learn policies to accomplish the objective of each option @cite_35 @cite_14 @cite_34 .
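The bootstrapped flavor of deep exploration can be summarized in a few lines: keep an ensemble of value heads, sample one head at the start of each episode, and act greedily with respect to it, which yields temporally extended exploration. The sketch below is a schematic tabular version under assumed data structures, not the bootstrapped DQN implementation.

```python
# Schematic sketch of bootstrapped (ensemble-based) exploration; not the original implementation.
import numpy as np

class BootstrappedValueEnsemble:
    def __init__(self, n_heads, n_states, n_actions, rng=None):
        self.q_heads = np.zeros((n_heads, n_states, n_actions))  # one Q-table per head
        self.rng = rng or np.random.default_rng()
        self.active_head = 0

    def start_episode(self):
        # Sampling a head per episode commits to one value hypothesis for the whole episode.
        self.active_head = self.rng.integers(len(self.q_heads))

    def act(self, state):
        return int(np.argmax(self.q_heads[self.active_head, state]))

    def update(self, head, state, action, target, lr=0.1):
        q = self.q_heads[head, state, action]
        self.q_heads[head, state, action] = q + lr * (target - q)
```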
{ "cite_N": [ "@cite_35", "@cite_14", "@cite_29", "@cite_34", "@cite_17" ], "mid": [ "1488730473", "2109910161", "2963938771", "2963262099", "" ], "abstract": [ "This paper presents a new approach to hierarchical reinforcement learning based on the MAXQ decomposition of the value function. The MAXQ decomposition has both a procedural semantics—as a subroutine hierarchy—and a declarative semantics—as a representation of the value function of a hierarchical policy. MAXQ unifies and extends previous work on hierarchical reinforcement learning by Singh, Kaelbling, and Dayan and Hinton. Conditions under which the MAXQ decomposition can represent the optimal value function are derived. The paper defines a hierarchical Q learning algorithm, proves its convergence, and shows experimentally that it can learn much faster than ordinary “flat” Q learning. Finally, the paper discusses some interesting issues that arise in hierarchical reinforcement learning including the hierarchical credit assignment problem and non-hierarchical execution of the MAXQ hierarchy.", "Learning, planning, and representing knowledge at multiple levels of temporal ab- straction are key, longstanding challenges for AI. In this paper we consider how these challenges can be addressed within the mathematical framework of reinforce- ment learning and Markov decision processes (MDPs). We extend the usual notion of action in this framework to include options—closed-loop policies for taking ac- tion over a period of time. Examples of options include picking up an object, going to lunch, and traveling to a distant city, as well as primitive actions such as mus- cle twitches and joint torques. Overall, we show that options enable temporally abstract knowledge and action to be included in the reinforcement learning frame- work in a natural and general way. In particular, we show that options may be used interchangeably with primitive actions in planning methods such as dynamic pro- gramming and in learning methods such as Q-learning. Formally, a set of options defined over an MDP constitutes a semi-Markov decision process (SMDP), and the theory of SMDPs provides the foundation for the theory of options. However, the most interesting issues concern the interplay between the underlying MDP and the SMDP and are thus beyond SMDP theory. We present results for three such cases: 1) we show that the results of planning with options can be used during execution to interrupt options and thereby perform even better than planned, 2) we introduce new intra-option methods that are able to learn about an option from fragments of its execution, and 3) we propose a notion of subgoal that can be used to improve the options themselves. All of these results have precursors in the existing literature; the contribution of this paper is to establish them in a simpler and more general setting with fewer changes to the existing reinforcement learning framework. In particular, we show that these results can be obtained without committing to (or ruling out) any particular approach to state abstraction, hierarchy, function approximation, or the macro-utility problem.", "Efficient exploration remains a major challenge for reinforcement learning (RL). Common dithering strategies for exploration, such as '-greedy, do not carry out temporally-extended (or deep) exploration; this can lead to exponentially larger data requirements. However, most algorithms for statistically efficient RL are not computationally tractable in complex environments. 
Randomized value functions offer a promising approach to efficient exploration with generalization, but existing algorithms are not compatible with nonlinearly parameterized value functions. As a first step towards addressing such contexts we develop bootstrapped DQN. We demonstrate that bootstrapped DQN can combine deep exploration with deep neural networks for exponentially faster learning than any dithering strategy. In the Arcade Learning Environment bootstrapped DQN substantially improves learning speed and cumulative performance across most games.", "Learning goal-directed behavior in environments with sparse feedback is a major challenge for reinforcement learning algorithms. One of the key difficulties is insufficient exploration, resulting in an agent being unable to learn robust policies. Intrinsically motivated agents can explore new behavior for their own sake rather than to directly solve external goals. Such intrinsic behaviors could eventually help the agent solve tasks posed by the environment. We present hierarchical-DQN (h-DQN), a framework to integrate hierarchical action-value functions, operating at different temporal scales, with goal-driven intrinsically motivated deep reinforcement learning. A top-level q-value function learns a policy over intrinsic goals, while a lower-level function learns a policy over atomic actions to satisfy the given goals. h-DQN allows for flexible goal specifications, such as functions over entities and relations. This provides an efficient space for exploration in complicated environments. We demonstrate the strength of our approach on two problems with very sparse and delayed feedback: (1) a complex discrete stochastic decision process with stochastic transitions, and (2) the classic ATARI game -'Montezuma's Revenge'.", "" ] }
1903.09756
2923242463
Access control is an important component for web services such as a cloud. Current clouds tend to design the access control mechanism together with the policy language on their own. It leads to two issues: (i) a cloud user has to learn different policy languages to use multiple clouds, and (ii) a cloud service provider has to customize an authorization mechanism based on its business requirement, which brings high development cost. In this work, a new access control policy language called PERM modeling language (PML) is proposed to express various access control models such as access control list (ACL), role-based access control (RBAC) and attribute-based access control (ABAC), etc. PML's enforcement mechanism is designed in an interpreter-on-interpreter manner, which not only secures the authorization code with sandboxing, but also extends PML to all programming languages that support Lua. PML is already adopted by real-world projects such as Intel's RMD, VMware's Dispatch, Orange's Gobis and so on, which proves PML's usability. The performance evaluation on OpenStack, CloudStack and Amazon Web Services (AWS) shows PML's enforcement overhead per request is under 5.9us.
Access control as a service (ACaaS), proposed in @cite_2 , provides a new cloud service for comprehensive and fine-grained access control. It is claimed to support multiple access control models, although there is no evidence that the approach applies to models other than RBAC. Moreover, the work is heavily based on the IAM service provided by AWS, which makes it difficult to apply to other clouds.
{ "cite_N": [ "@cite_2" ], "mid": [ "2014540924" ], "abstract": [ "Organizations and enterprises have been outsourcing their computation, storage, and workflows to Infrastructure-as-a-Service (IaaS) based cloud platforms. The heterogeneity and high diversity of IaaS cloud environment demand a comprehensive and fine-grained access control mechanism, in order to meet dynamic, extensible, and highly configurable security requirements of these cloud consumers. However, existing security mechanisms provided by IaaS cloud providers do not satisfy these requirements. To address such an emergent demand, we propose a new cloud service called access control as a service (ACaaS), a service-oriented architecture in cloud to support multiple access control models, with the spirit of plug gable access control modules in modern operating systems. As a proof-of-concept reference prototype, we design and implement ACaaS_RBAC to provide role-based access control (RBAC) for Amazon Web Services (AWS), where cloud customers can easily integrate the service into enterprise applications in order to extend RBAC policy enforcement in AWS." ] }
1903.09756
2923242463
Access control is an important component for web services such as a cloud. Current clouds tend to design the access control mechanism together with the policy language on their own. It leads to two issues: (i) a cloud user has to learn different policy languages to use multiple clouds, and (ii) a cloud service provider has to customize an authorization mechanism based on its business requirement, which brings high development cost. In this work, a new access control policy language called PERM modeling language (PML) is proposed to express various access control models such as access control list (ACL), role-based access control (RBAC) and attribute-based access control (ABAC), etc. PML's enforcement mechanism is designed in an interpreter-on-interpreter manner, which not only secures the authorization code with sandboxing, but also extends PML to all programming languages that support Lua. PML is already adopted by real-world projects such as Intel's RMD, VMware's Dispatch, Orange's Gobis and so on, which proves PML's usability. The performance evaluation on OpenStack, CloudStack and Amazon Web Services (AWS) shows PML's enforcement overhead per request is under 5.9us.
OpenStack access control (OSAC), proposed in @cite_20 , presents a formalized description of the concepts in Keystone, such as domains and tenants in addition to roles. It further proposes a domain trust extension for OSAC to facilitate secure cross-domain authorization. This work is orthogonal to ours, since it mainly focuses on enhancing Keystone. The domain trust decision made by OSAC can be used as an attribute in PML, which increases the granularity of access control.
{ "cite_N": [ "@cite_20" ], "mid": [ "1055523" ], "abstract": [ "OpenStack has been rapidly established as the most popular open-source platform for cloud Infrastrusture-as-a-Service in this fast moving industry. In response to increasing access control requirements from its users, the OpenStack identity service Keystone has introduced several entities, such as domains and projects in addition to roles, resulting in a rather complex and somewhat obscure authorization model. In this paper, we present a formalized description of the core OpenStack access control (OSAC). We further propose a domain trust extension for OSAC to facilitate secure cross-domain authorization. We have implemented a proof-of-concept prototype of this trust extension based on Keystone. The authorization delay introduced by the domain trusts is 0.7 percent on average in our experiments." ] }
1903.09756
2923242463
Access control is an important component for web services such as a cloud. Current clouds tend to design the access control mechanism together with the policy language on their own. It leads to two issues: (i) a cloud user has to learn different policy languages to use multiple clouds, and (ii) a cloud service provider has to customize an authorization mechanism based on its business requirement, which brings high development cost. In this work, a new access control policy language called PERM modeling language (PML) is proposed to express various access control models such as access control list (ACL), role-based access control (RBAC) and attribute-based access control (ABAC), etc. PML's enforcement mechanism is designed in an interpreter-on-interpreter manner, which not only secures the authorization code with sandboxing, but also extends PML to all programming languages that support Lua. PML is already adopted by real-world projects such as Intel's RMD, VMware's Dispatch, Orange's Gobis and so on, which proves PML's usability. The performance evaluation on OpenStack, CloudStack and Amazon Web Services (AWS) shows PML's enforcement overhead per request is under 5.9us.
The work proposed in @cite_16 defines a formal ABAC specification suitable for infrastructure as a service (IaaS) and implements it in OpenStack. It includes two models, the operational model @math and the administrative model @math , which provide fine-grained access control for tenants. However, this work supports neither external functions nor decision combination, which limits its flexibility in expressing customized models.
{ "cite_N": [ "@cite_16" ], "mid": [ "1992204376" ], "abstract": [ "Cloud Infrastructure as a Service (IaaS), where traditional IT infrastructure resources such as compute, storage and networking are owned by a cloud service provider (CSP) and offered as on-demand virtual resources to customers (tenants), is the fastest maturing service model in cloud computing. The transformation of physical resources into virtual offers great flexibility to CSP customers including network based remote collaborative administration. This flexibility can be fully availed only if complemented by commensurately flexible access control to the customers remote IT resources by the customer's IT users. Since customer policies in this regard can vary greatly, the CSP needs a flexible model to accommodate diverse policy requirements. In this paper, we investigate attribute-based access control (ABAC) in cloud IaaS. In ABAC, access requests are evaluated based on the attributes of cloud tenant users and those of objects such as virtual machines, storage volumes, networks, etc. We investigate the access control models supported by commercial IaaS providers such as Amazon AWS and opensource OpenStack, as well as other models in the literature, which mostly use role-based access control (RBAC). We demonstrate their limitations and motivate the need for ABAC support to realize the true potential of IaaS. Building on prior published ABAC models we define a formal ABAC model suitable for IaaS. As proof-of-concept we implement this model in OpenStack, a widely-used open source cloud IaaS software platform. We discuss enforcement alternatives in this context and partially evaluate their performance." ] }
1903.09756
2923242463
Access control is an important component for web services such as a cloud. Current clouds tend to design the access control mechanism together with the policy language on their own. It leads to two issues: (i) a cloud user has to learn different policy languages to use multiple clouds, and (ii) a cloud service provider has to customize an authorization mechanism based on its business requirement, which brings high development cost. In this work, a new access control policy language called PERM modeling language (PML) is proposed to express various access control models such as access control list (ACL), role-based access control (RBAC) and attribute-based access control (ABAC), etc. PML's enforcement mechanism is designed in an interpreter-on-interpreter manner, which not only secures the authorization code with sandboxing, but also extends PML to all programming languages that support Lua. PML is already adopted by real-world projects such as Intel's RMD, VMware's Dispatch, Orange's Gobis and so on, which proves PML's usability. The performance evaluation on OpenStack, CloudStack and Amazon Web Services (AWS) shows PML's enforcement overhead per request is under 5.9us.
Attribute-based access control (ABAC) @cite_12 generalizes the notion of roles in role-based access control (RBAC) @cite_0 into attributes. The eXtensible Access Control Markup Language (XACML), proposed in @cite_13 by OASIS, is primarily based on the ABAC model and defines structural elements such as Rule, Policy and PolicySet. Combining algorithms such as deny-overrides and permit-overrides are also provided to resolve conflicts between rules or policies. XACML further specifies the overall architecture of supporting entities such as the PEP, PDP and PIP, and the structures exchanged between them.
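For concreteness, the deny-overrides and permit-overrides combining algorithms mentioned above can be sketched as follows. This is a simplified illustration of the combining logic (it omits XACML's Indeterminate handling) and not the standard's reference semantics.

```python
# Simplified sketch of XACML-style combining algorithms (illustrative, not the full standard).
def deny_overrides(decisions):
    # Any explicit Deny wins; otherwise Permit if at least one rule permits.
    if "Deny" in decisions:
        return "Deny"
    if "Permit" in decisions:
        return "Permit"
    return "NotApplicable"

def permit_overrides(decisions):
    # Any explicit Permit wins; otherwise Deny if at least one rule denies.
    if "Permit" in decisions:
        return "Permit"
    if "Deny" in decisions:
        return "Deny"
    return "NotApplicable"

print(deny_overrides(["Permit", "Deny", "NotApplicable"]))   # -> Deny
```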
{ "cite_N": [ "@cite_0", "@cite_13", "@cite_12" ], "mid": [ "2166602595", "", "2749040653" ], "abstract": [ "Security administration of large systems is complex, but it can be simplified by a role-based access control approach. This article explains why RBAC is receiving renewed attention as a method of security administration and review, describes a framework of four reference models developed to better understand RBAC and categorizes different implementations, and discusses the use of RBAC to manage itself.", "", "This document provides Federal agencies with a definition of attribute based access control (ABAC). ABAC is a logical access control methodology where authorization to perform a set of operations is determined by evaluating attributes associated with the subject, object, requested operations, and, in some cases, environment conditions against policy, rules, or relationships that describe the allowable operations for a given set of attributes. This document also provides considerations for using ABAC to improve information sharing within organizations and between organizations while maintaining control of that information." ] }
1903.09756
2923242463
Access control is an important component for web services such as a cloud. Current clouds tend to design the access control mechanism together with the policy language on their own. It leads to two issues: (i) a cloud user has to learn different policy languages to use multiple clouds, and (ii) a cloud service provider has to customize an authorization mechanism based on its business requirement, which brings high development cost. In this work, a new access control policy language called PERM modeling language (PML) is proposed to express various access control models such as access control list (ACL), role-based access control (RBAC) and attribute-based access control (ABAC), etc. PML's enforcement mechanism is designed in an interpreter-on-interpreter manner, which not only secures the authorization code with sandboxing, but also extends PML to all programming languages that support Lua. PML is already adopted by real-world projects such as Intel's RMD, VMware's Dispatch, Orange's Gobis and so on, which proves PML's usability. The performance evaluation on OpenStack, CloudStack and Amazon Web Services (AWS) shows PML's enforcement overhead per request is under 5.9us.
Ponder, proposed in @cite_18 , is a policy specification language for distributed systems. It supports access control by providing authorization, delegation, information filtering and refrain policies. Ponder also supports obligation policies, which are event-triggered condition-action rules for policy-based management of networks and distributed systems. PML only supports authorization rules, but it uses expression evaluation to maximize flexibility. Ponder can be extended with external functions, but it still lacks many features such as tenants and arithmetic and logical operators. It supports static conflict detection but lacks decision combination (similar to conflict resolution).
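To illustrate the expression-evaluation style of authorization that PML relies on, in contrast to Ponder's fixed policy types, the snippet below evaluates a matcher expression over request and policy attributes. The attribute names and the eval-based evaluation are hypothetical conveniences for illustration; they are not PML's actual grammar or its sandboxed interpreter-on-interpreter enforcement mechanism.

```python
# Hypothetical sketch of expression-based authorization (not PML's actual grammar or interpreter).
def enforce(matcher, request, policy):
    # The matcher is an ordinary boolean expression evaluated over request/policy attributes.
    env = {"r": request, "p": policy}
    return bool(eval(matcher, {"__builtins__": {}}, env))   # sandboxing is heavily simplified here

class Attrs(dict):
    __getattr__ = dict.__getitem__

request = Attrs(sub=Attrs(dept="eng"), obj="/repo/data", act="read")
policy  = Attrs(sub_dept="eng", obj="/repo/*", act="read")

matcher = "r.sub.dept == p.sub_dept and r.obj.startswith(p.obj[:-1]) and r.act == p.act"
print(enforce(matcher, request, policy))   # -> True
```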
{ "cite_N": [ "@cite_18" ], "mid": [ "2168884369" ], "abstract": [ "The Ponder language provides a common means of specifying security policies that map onto various access control implementation mechanisms for firewalls, operating systems, databases and Java. It supports obligation policies that are event triggered condition-action rules for policy based management of networks and distributed systems. Ponder can also be used for security management activities such as registration of users or logging and auditing events for dealing with access to critical resources or security violations. Key concepts of the language include roles to group policies relating to a position in an organisation, relationships to define interactions between roles and management structures to define a configuration of roles and relationships pertaining to an organisational unit such as a department. These reusable composite policy specifications cater for the complexity of large enterprise information systems. Ponder is declarative, strongly-typed and object-oriented which makes the language flexible, extensible and adaptable to a wide range of management requirements." ] }
1903.09756
2923242463
Access control is an important component for web services such as a cloud. Current clouds tend to design the access control mechanism together with the policy language on their own. It leads to two issues: (i) a cloud user has to learn different policy languages to use multiple clouds, and (ii) a cloud service provider has to customize an authorization mechanism based on its business requirement, which brings high development cost. In this work, a new access control policy language called PERM modeling language (PML) is proposed to express various access control models such as access control list (ACL), role-based access control (RBAC) and attribute-based access control (ABAC), etc. PML's enforcement mechanism is designed in an interpreter-on-interpreter manner, which not only secures the authorization code with sandboxing, but also extends PML to all programming languages that support Lua. PML is already adopted by real-world projects such as Intel's RMD, VMware's Dispatch, Orange's Gobis and so on, which proves PML's usability. The performance evaluation on OpenStack, CloudStack and Amazon Web Services (AWS) shows PML's enforcement overhead per request is under 5.9us.
The security policy language (SPL), proposed in @cite_17 , supports the expression of entities, their relations, and the comparison of properties and quantifiers from different policies. SPL lacks external function support, which is a primary restriction on more flexible policy evaluation, and it relies heavily on policy rules, which is unnecessary in some scenarios.
{ "cite_N": [ "@cite_17" ], "mid": [ "1492689271" ], "abstract": [ "Most organizations use several security policies to control different systems and data, comprising in this way a global complex policy. These security policies are often scattered over different environments, each one with its own security model and domain of administration, making them difficult to administer and understand. Moreover, some applications (e.g. workflow), often need to cross several of these security domains and satisfy each one of their policies, which is very difficult to accomplish when these policies are scattered over the organization, in conflict with each other and frequently expressed in differ-" ] }
1903.09730
2924282518
Class imbalance is a long-standing problem relevant to a number of real-world applications of deep learning. Oversampling techniques, which are effective for handling class imbalance in classical learning systems, can not be directly applied to end-to-end deep learning systems. We propose a three-player adversarial game between a convex generator, a multi-class classifier network, and a real fake discriminator to perform oversampling in deep learning systems. The convex generator generates new samples from the minority classes as convex combinations of existing instances, aiming to fool both the discriminator as well as the classifier into misclassifying the generated samples. Consequently, the artificial samples are generated at critical locations near the peripheries of the classes. This, in turn, adjusts the classifier induced boundaries in a way which is more likely to reduce misclassification from the minority classes. Extensive experiments on multiple class imbalanced image datasets establish the efficacy of our proposal.
The success of SMOTE @cite_53 @cite_45 has inspired several improvements. For example, @cite_50 @cite_10 attempt to selectively oversample minority class points lying close to the class boundaries. Works like @cite_17 @cite_4 @cite_2 , on the other hand, asymmetrically oversample the minority class so that more synthetic points are generated around the instances which are difficult to classify. Although these methods achieved commendable improvements with classical classifiers, they can neither be extended to deep learning techniques, due to the end-to-end structure of deep learning algorithms, nor be applied to images, due to the lack of a proper notion of distance between images.
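As a point of reference for the interpolation-based oversampling these methods build on, here is a minimal SMOTE-style sketch: each synthetic point is a random convex combination of a minority sample and one of its nearest minority neighbors. Parameter names are illustrative; the borderline and adaptive variants cited above differ mainly in how the seed samples are selected.

```python
# Minimal SMOTE-style oversampling sketch (illustrative; variants differ in seed selection).
import numpy as np

def smote(X_minority, n_synthetic, k=5, rng=None):
    rng = rng or np.random.default_rng()
    n = len(X_minority)
    # Pairwise distances among minority samples, then the k nearest neighbors of each point.
    d = np.linalg.norm(X_minority[:, None, :] - X_minority[None, :, :], axis=-1)
    neighbors = np.argsort(d, axis=1)[:, 1:k + 1]           # skip column 0 (the point itself)
    synthetic = []
    for _ in range(n_synthetic):
        i = rng.integers(n)
        j = neighbors[i, rng.integers(neighbors.shape[1])]
        lam = rng.random()                                   # convex combination coefficient
        synthetic.append(X_minority[i] + lam * (X_minority[j] - X_minority[i]))
    return np.asarray(synthetic)
```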
{ "cite_N": [ "@cite_4", "@cite_53", "@cite_45", "@cite_50", "@cite_2", "@cite_10", "@cite_17" ], "mid": [ "2083551746", "2148143831", "", "2132791018", "2087240369", "", "2104933073" ], "abstract": [ "Class imbalance learning tackles supervised learning problems where some classes have significantly more examples than others. Most of the existing research focused only on binary-class cases. In this paper, we study multiclass imbalance problems and propose a dynamic sampling method (DyS) for multilayer perceptrons (MLP). In DyS, for each epoch of the training process, every example is fed to the current MLP and then the probability of it being selected for training the MLP is estimated. DyS dynamically selects informative data to train the MLP. In order to evaluate DyS and understand its strength and weakness, comprehensive experimental studies have been carried out. Results on 20 multiclass imbalanced data sets show that DyS can outperform the compared methods, including pre-sample methods, active learning methods, cost-sensitive methods, and boosting-type methods.", "An approach to the construction of classifiers from imbalanced datasets is described. A dataset is imbalanced if the classification categories are not approximately equally represented. Often real-world data sets are predominately composed of \"normal\" examples with only a small percentage of \"abnormal\" or \"interesting\" examples. It is also the case that the cost of misclassifying an abnormal (interesting) example as a normal example is often much higher than the cost of the reverse error. Under-sampling of the majority (normal) class has been proposed as a good means of increasing the sensitivity of a classifier to the minority class. This paper shows that a combination of our method of oversampling the minority (abnormal)cla ss and under-sampling the majority (normal) class can achieve better classifier performance (in ROC space)tha n only under-sampling the majority class. This paper also shows that a combination of our method of over-sampling the minority class and under-sampling the majority class can achieve better classifier performance (in ROC space)t han varying the loss ratios in Ripper or class priors in Naive Bayes. Our method of over-sampling the minority class involves creating synthetic minority class examples. Experiments are performed using C4.5, Ripper and a Naive Bayes classifier. The method is evaluated using the area under the Receiver Operating Characteristic curve (AUC)and the ROC convex hull strategy.", "", "In recent years, mining with imbalanced data sets receives more and more attentions in both theoretical and practical aspects. This paper introduces the importance of imbalanced data sets and their broad application domains in data mining, and then summarizes the evaluation metrics and the existing methods to evaluate and solve the imbalance problem. Synthetic minority over-sampling technique (SMOTE) is one of the over-sampling methods addressing this problem. Based on SMOTE method, this paper presents two new minority over-sampling methods, borderline-SMOTE1 and borderline-SMOTE2, in which only the minority examples near the borderline are over-sampled. For the minority class, experiments show that our approaches achieve better TP rate and F-value than SMOTE and random over-sampling methods.", "Imbalanced learning problems contain an unequal distribution of data samples among different classes and pose a challenge to any classifier as it becomes hard to learn the minority class samples. 
Synthetic oversampling methods address this problem by generating the synthetic minority class samples to balance the distribution between the samples of the majority and minority classes. This paper identifies that most of the existing oversampling methods may generate the wrong synthetic minority samples in some scenarios and make learning tasks harder. To this end, a new method, called Majority Weighted Minority Oversampling TEchnique (MWMOTE), is presented for efficiently handling imbalanced learning problems. MWMOTE first identifies the hard-to-learn informative minority class samples and assigns them weights according to their euclidean distance from the nearest majority class samples. It then generates the synthetic samples from the weighted informative minority class samples using a clustering approach. This is done in such a way that all the generated samples lie inside some minority class cluster. MWMOTE has been evaluated extensively on four artificial and 20 real-world data sets. The simulation results show that our method is better than or comparable with some other existing methods in terms of various assessment metrics, such as geometric mean (G-mean) and area under the receiver operating curve (ROC), usually known as area under curve (AUC).", "", "This paper presents a novel adaptive synthetic (ADASYN) sampling approach for learning from imbalanced data sets. The essential idea of ADASYN is to use a weighted distribution for different minority class examples according to their level of difficulty in learning, where more synthetic data is generated for minority class examples that are harder to learn compared to those minority examples that are easier to learn. As a result, the ADASYN approach improves learning with respect to the data distributions in two ways: (1) reducing the bias introduced by the class imbalance, and (2) adaptively shifting the classification decision boundary toward the difficult examples. Simulation analyses on several machine learning data sets show the effectiveness of this method across five evaluation metrics." ] }
1903.09730
2924282518
Class imbalance is a long-standing problem relevant to a number of real-world applications of deep learning. Oversampling techniques, which are effective for handling class imbalance in classical learning systems, can not be directly applied to end-to-end deep learning systems. We propose a three-player adversarial game between a convex generator, a multi-class classifier network, and a real fake discriminator to perform oversampling in deep learning systems. The convex generator generates new samples from the minority classes as convex combinations of existing instances, aiming to fool both the discriminator as well as the classifier into misclassifying the generated samples. Consequently, the artificial samples are generated at critical locations near the peripheries of the classes. This, in turn, adjusts the classifier induced boundaries in a way which is more likely to reduce misclassification from the minority classes. Extensive experiments on multiple class imbalanced image datasets establish the efficacy of our proposal.
Extending GANs for semi-supervised learning, works like @cite_7 @cite_8 fuse a @math -class classifier with the discriminator by introducing an extra output to identify fake samples. On the other hand, @cite_46 proposed a @math -class discriminator which makes uncertain predictions for fake images. Additionally, @cite_39 proposed a shared discriminator-cum-classifier network which makes two separate sets of predictions using two different output layers. These approaches can loosely be considered related to GAMO, as they also incorporate a classifier into the adversarial learning scheme.
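The two head layouts described here, a single output over the real classes plus an extra "fake" class versus a shared trunk with separate class and real/fake heads, can be sketched as below. This is an illustrative PyTorch-style skeleton with assumed layer sizes, not the architectures of the cited papers.

```python
# Illustrative sketch of two discriminator/classifier head layouts (assumed sizes, not cited models).
import torch.nn as nn

class KPlusOneDiscriminator(nn.Module):
    """Single head over K real classes plus one extra 'fake' class."""
    def __init__(self, in_dim, n_classes):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU())
        self.head = nn.Linear(128, n_classes + 1)

    def forward(self, x):
        return self.head(self.trunk(x))            # logits over K real classes + fake

class SharedTwoHeadDiscriminator(nn.Module):
    """Shared trunk with separate class and real/fake predictions."""
    def __init__(self, in_dim, n_classes):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU())
        self.class_head = nn.Linear(128, n_classes)
        self.real_fake_head = nn.Linear(128, 1)

    def forward(self, x):
        h = self.trunk(x)
        return self.class_head(h), self.real_fake_head(h)
```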
{ "cite_N": [ "@cite_46", "@cite_39", "@cite_7", "@cite_8" ], "mid": [ "2178768799", "2548275288", "2963425170", "" ], "abstract": [ "In this paper we present a method for learning a discriminative classifier from unlabeled or partially labeled data. Our approach is based on an objective function that trades-off mutual information between observed examples and their predicted categorical class distribution, against robustness of the classifier to an adversarial generative model. The resulting algorithm can either be interpreted as a natural generalization of the generative adversarial networks (GAN) framework or as an extension of the regularized information maximization (RIM) framework to robust classification against an optimal adversary. We empirically evaluate our method - which we dub categorical generative adversarial networks (or CatGAN) - on synthetic data as well as on challenging image classification tasks, demonstrating the robustness of the learned classifiers. We further qualitatively assess the fidelity of samples generated by the adversarial generator that is learned alongside the discriminative classifier, and identify links between the CatGAN objective and discriminative clustering algorithms (such as RIM).", "Synthesizing high resolution photorealistic images has been a long-standing challenge in machine learning. In this paper we introduce new methods for the improved training of generative adversarial networks (GANs) for image synthesis. We construct a variant of GANs employing label conditioning that results in 128x128 resolution image samples exhibiting global coherence. We expand on previous work for image quality assessment to provide two new analyses for assessing the discriminability and diversity of samples from class-conditional image synthesis models. These analyses demonstrate that high resolution samples provide class information not present in low resolution samples. Across 1000 ImageNet classes, 128x128 samples are more than twice as discriminable as artificially resized 32x32 samples. In addition, 84.7 of the classes have samples exhibiting diversity comparable to real ImageNet data.", "Semi-supervised learning methods using Generative adversarial networks (GANs) have shown promising empirical success recently. Most of these methods use a shared discriminator classifier which discriminates real examples from fake while also predicting the class label. Motivated by the ability of the GANs generator to capture the data manifold well, we propose to estimate the tangent space to the data manifold using GANs and employ it to inject invariances into the classifier. In the process, we propose enhancements over existing methods for learning the inverse mapping (i.e., the encoder) which greatly improves in terms of semantic similarity of the reconstructed sample with the input sample. We observe considerable empirical gains in semi-supervised learning over baselines, particularly in the cases when the number of labeled examples is low. We also provide insights into how fake examples influence the semi-supervised learning procedure.", "" ] }
1903.09397
2923357736
We construct algebraic geometric codes from del Pezzo surfaces and focus on the ones having Picard rank one and the codes associated to the anticanonical class. We give explicit constructions of del Pezzo surfaces of degree 4, 5 and 6, compute the parameters of the associated anticanonical codes and study their isomorphisms arising from the automorphisms of the surface. We obtain codes with excellent parameters and some of them turn out to beat the best known codes listed on the database codetable.
Let us finally mention some works on codes from other blowups of the plane @cite_8 @cite_2 . These blowups are no longer del Pezzo surfaces: since the blown-up points lie on one or two lines, they are not in general position. Moreover, the evaluation set is not the set of rational points of the surface, but the torus @math .
{ "cite_N": [ "@cite_2", "@cite_8" ], "mid": [ "2030750486", "2099044707" ], "abstract": [ "Here we improve the lower bound on the minimum distance for the evaluation codes obtained by Davis from certain blowing-ups of the planes (anticanonical surfaces).", "Algebraic geometric codes (or AG codes) provide a way to correct errors that occur during the transmission of digital information. AG codes on curves have been studied extensively, but much less work has been done for AG codes on higher dimensional varieties. In particular, we seek good bounds for the minimum distance. We study AG codes on anticanonical surfaces coming from blow-ups of P2 at points on a line and points on the union of two lines. We can compute the dimension of such codes exactly due to known results. For certain families of these codes, we prove an exact result on the minimum distance. For other families, we obtain lower bounds on the minimum distance." ] }
1903.09402
2949902818
Sharing perceptual data with other vehicles enhances the traffic safety of autonomous vehicles because it helps vehicles locate other vehicles and pedestrians in their blind spots. Such safety applications require high throughput and short delay, which cannot be achieved by conventional microwave vehicular communication systems. Therefore, millimeter-wave (mmWave) communications are considered to be a key technology for sharing perceptual data because of their wide bandwidth. One of the challenges of data sharing in mmWave communications is broadcasting because narrow-beam directional antennas are used to obtain high gain. Because many vehicles should share their perceptual data to others within a short time frame in order to enlarge the areas that can be perceived based on shared perceptual data, an efficient scheduling for concurrent transmission that improves spatial reuse is required for perceptual data sharing. This paper proposes a data sharing algorithm that employs a graph-based concurrent transmission scheduling. The proposed algorithm realizes concurrent transmission to improve spatial reuse by designing a rule that is utilized to determine if the two pairs of transmitters and receivers interfere with each other by considering the radio propagation characteristics of narrow-beam antennas. A prioritization method that considers the geographical information in perceptual data is also designed to enlarge perceivable areas in situations where data sharing time is limited and not all data can be shared. Simulation results demonstrate that the proposed algorithm doubles the area of the cooperatively perceivable region compared with a conventional algorithm that does not consider mmWave communications because the proposed algorithm achieves high-throughput transmission by improving spatial reuse. The prioritization also enlarges the perceivable region by a maximum of 20 .
Dissemination algorithms with directional antennas for VANETs have been studied by many researchers. @cite_17 presented a theoretical analysis of content dissemination time in vehicular networks with directional antennas and demonstrated that directional antennas accelerate content propagation. @cite_10 proposed a broadcast protocol for directional antennas in VANETs, in which the furthest receiver forwards data packets along road segments and a directional repeater forwards the data in multiple directions at intersections. In contrast to the protocol in @cite_10 , which considers the positions of transmitters, our algorithm considers the positions where data are obtained in order to achieve a large perceivable region.
{ "cite_N": [ "@cite_10", "@cite_17" ], "mid": [ "1986905072", "2079665585" ], "abstract": [ "The topology of a vehicular ad hoc network (VANET) changes rapidly due to high-speed movement of vehicles, so traditional mobile ad hoc network (MANET) broadcast protocol may not work efficiently in VANET. This paper proposes a distance-based broadcast protocol called Efficient Directional Broadcast (EDB) for VANET using directional antennas. In EDB, only the furthest receiver is responsible to forward the packet in the opposite direction where the packet arrives. Besides, a directional repeater located at the intersection helps disseminating the packets to the vehicles on other road segments of different directions. We evaluate the performance of EDB based on a real mobility model generated by live GPS data of taxis in the city of Shanghai. The result shows that EDB is effective and favorable for VANET.", "We study the performance of collaborative vehicular content dissemination, where the content is distributed within the network by vehicle-to-vehicle opportunistic communications and the vehicle nodes are equipped with directional antennas. Through analysing a large real-world vehicle trace, we adopt an accurate mobility model of Levy-walk to set up the realistic vehicular network simulation environment. Using a fluid approximation, we derive a theoretical model to depict the system performance of content dissemination time. The accuracy of the proposed analysis is confirmed by simulation results, which also show that the directional antenna performs better than the omni-directional antenna in our considered scenario, especially when the antenna beam is well scheduled with small beamwidth and high beam steering rate." ] }
1903.09402
2949902818
Sharing perceptual data with other vehicles enhances the traffic safety of autonomous vehicles because it helps vehicles locate other vehicles and pedestrians in their blind spots. Such safety applications require high throughput and short delay, which cannot be achieved by conventional microwave vehicular communication systems. Therefore, millimeter-wave (mmWave) communications are considered to be a key technology for sharing perceptual data because of their wide bandwidth. One of the challenges of data sharing in mmWave communications is broadcasting because narrow-beam directional antennas are used to obtain high gain. Because many vehicles should share their perceptual data to others within a short time frame in order to enlarge the areas that can be perceived based on shared perceptual data, an efficient scheduling for concurrent transmission that improves spatial reuse is required for perceptual data sharing. This paper proposes a data sharing algorithm that employs a graph-based concurrent transmission scheduling. The proposed algorithm realizes concurrent transmission to improve spatial reuse by designing a rule that is utilized to determine if the two pairs of transmitters and receivers interfere with each other by considering the radio propagation characteristics of narrow-beam antennas. A prioritization method that considers the geographical information in perceptual data is also designed to enlarge perceivable areas in situations where data sharing time is limited and not all data can be shared. Simulation results demonstrate that the proposed algorithm doubles the area of the cooperatively perceivable region compared with a conventional algorithm that does not consider mmWave communications because the proposed algorithm achieves high-throughput transmission by improving spatial reuse. The prioritization also enlarges the perceivable region by a maximum of 20 .
Dissemination algorithms for local information were proposed in @cite_2 @cite_14 @cite_6 . In @cite_2 , a scalable dissemination protocol called segment-oriented data abstraction and dissemination (SODAD) and its application, the self-organizing traffic-information system (SOTIS), were proposed. SOTIS is a mechanism for gathering traffic information sensed by vehicles; it aggregates the traffic information received for each road segment and sends only up-to-date information to vehicles. In @cite_14 , Zone Flooding and Zone Diffusion were proposed to suppress redundant data broadcasting. In Zone Flooding, only vehicles in a flooding zone forward received packets. Zone Diffusion is a data aggregation method that considers geographical information, where vehicles merge road environment data as it is received and broadcast only the merged data. @cite_6 proposed controlling the frequency of information broadcasting and selecting the data to send in order to reduce communication traffic. Although these studies considered the geographical information in each datum, they did not focus on concurrent transmission.
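The segment-oriented aggregation idea behind SODAD/SOTIS can be sketched as a per-road-segment table that keeps only the freshest report and broadcasts a compact, up-to-date summary. The field names and the freshness rule below are assumptions for illustration, not the protocol's actual data format.

```python
# Illustrative sketch of segment-oriented aggregation (field names and policy are assumptions).
class SegmentTable:
    def __init__(self):
        self.table = {}   # segment_id -> (timestamp, traffic_info)

    def merge(self, segment_id, timestamp, traffic_info):
        # Keep only the most recent report per road segment.
        current = self.table.get(segment_id)
        if current is None or timestamp > current[0]:
            self.table[segment_id] = (timestamp, traffic_info)

    def summary(self, max_age, now):
        # Broadcast only up-to-date segment information.
        return {sid: info for sid, (ts, info) in self.table.items() if now - ts <= max_age}

table = SegmentTable()
table.merge("A12", timestamp=100, traffic_info={"speed_kmh": 35})
table.merge("A12", timestamp=120, traffic_info={"speed_kmh": 20})
print(table.summary(max_age=60, now=150))   # -> {'A12': {'speed_kmh': 20}}
```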
{ "cite_N": [ "@cite_14", "@cite_6", "@cite_2" ], "mid": [ "2164781149", "2612592701", "2105096404" ], "abstract": [ "Vehicular ad-hoc networks is an emerging research area focussing on communication infrastructures that support vehicles and road-signs in distributing road-state data such as information about hazardous road conditions ahead, approaching emergency vehicles, and traffic delays. Vehicular ad-hoc networks combine the areas of sensor networks (data acquisition) with mobile ad-hoc networks (highly dynamic topology and lack of pre-existing infrastructure). One of the main challenges of vehicular ad-hoc networks is the data dissemination protocols capable of distributing road-state information among vehicles. This paper presents two candidates for dissemination protocols: a zone flooding protocol and a zone diffusion protocol. The two protocols combine ideas from sensor networks and geocasting to ensure that data is aggregated and distributed only in a bounded geographical area. We present a comparative simulation study of the two protocols evaluating their relative performance using conventional metrics (such as network load) as well as application-specific metrics (such as awareness). The simulation study has been conducted using the network simulator 2 (NS-2) and has highlighted key properties of the two protocols that can be used as a basis for selecting the most appropriate protocol.", "When we drive a car, we often want to know the situation of roads near our destination. Ishihara et. al. proposed a VANET based information sharing system named Real-Time Visual Car Navigation System, in which a driver can obtain Location-Dependent Information (LDI), such as pictures or videos of his her Point of Interest (POI) by telling the automotive equipment, e.g. car navigation device, the POI. The simplest way to provide LDI to vehicles is flooding the LDI to all vehicles in the VANET, but unneeded LDI may be disseminated, and network resources may be wasted. We have proposed a data dissemination scheme based on Demand map for effectively disseminating LDI to multiple vehicles that need it. In this scheme, each vehicle has Demand map (Dmap), a data set representing the geographical distribution of the strength of demands for LDI. Each vehicle exchanges a subset of data constituting a Dmap (Dmap Information: DMI) with other vehicles. According to the information in the Dmap, each vehicle preferentially sends new LDI or forwarded LDI strongly demanded to the area containing many vehicles that demand the LDI. In this paper, we propose strategies for controlling the frequency of sending DMI and strategies for selecting DMI to be sent for reducing the communication traffic without loss of the accuracy of information in Dmaps. Simulation results show that one of the proposed strategies, LTZ strategy, achieves high accuracy of Dmaps with a small amount of communication traffic.", "Intervehicle communication (IVC) is an emerging topic in research and application that is getting increasing attention from all major car manufacturers. In this paper, a novel method for scalable information dissemination in highly mobile ad hoc networks is proposed: segment-oriented data abstraction and dissemination (SODAD). With SODAD, information can be distributed in an information range multiple orders of magnitude larger than the transmission range of the air interface, even if only 1 -3 of all vehicles are equipped with an IVC system, e.g., during market introduction. 
By restricting the method to the dissemination of map position-based data, scalability is achieved. In the second half of this paper, an example application for the SODAD method is presented: a self-organizing traffic-information system (SOTIS). In SOTIS, a car is equipped with a satellite navigation receiver, an IVC system, and a digital map. Each individual vehicle collects traffic information for its local area. Using the digital map, the traffic information is analyzed based on road segments. By distributing the information in the ad hoc intervehicle network using the SODAD method, a decentralized traffic information system is created. The performance of the proposed methods is evaluated using network simulation with vehicular mobility models. Simulation results for typical scenarios are presented. Furthermore, a prototype implementation based on commercially available standard hardware demonstrates the feasibility of the proposed approach." ] }
1903.09402
2949902818
Sharing perceptual data with other vehicles enhances the traffic safety of autonomous vehicles because it helps vehicles locate other vehicles and pedestrians in their blind spots. Such safety applications require high throughput and short delay, which cannot be achieved by conventional microwave vehicular communication systems. Therefore, millimeter-wave (mmWave) communications are considered to be a key technology for sharing perceptual data because of their wide bandwidth. One of the challenges of data sharing in mmWave communications is broadcasting because narrow-beam directional antennas are used to obtain high gain. Because many vehicles should share their perceptual data to others within a short time frame in order to enlarge the areas that can be perceived based on shared perceptual data, an efficient scheduling for concurrent transmission that improves spatial reuse is required for perceptual data sharing. This paper proposes a data sharing algorithm that employs a graph-based concurrent transmission scheduling. The proposed algorithm realizes concurrent transmission to improve spatial reuse by designing a rule that is utilized to determine if the two pairs of transmitters and receivers interfere with each other by considering the radio propagation characteristics of narrow-beam antennas. A prioritization method that considers the geographical information in perceptual data is also designed to enlarge perceivable areas in situations where data sharing time is limited and not all data can be shared. Simulation results demonstrate that the proposed algorithm doubles the area of the cooperatively perceivable region compared with a conventional algorithm that does not consider mmWave communications because the proposed algorithm achieves high-throughput transmission by improving spatial reuse. The prioritization also enlarges the perceivable region by a maximum of 20%.
The authors of @cite_20 proposed vehicle pairing and beam-width control for mmWave VANETs. In the protocol in @cite_20 , pairs of transmitters and receivers are selected based on matching theory and beam widths are determined via particle swarm optimization. This protocol improves throughput and reduces delay by taking the SINR into account. Other concurrent transmission methods for mmWave communications have been proposed for wireless personal area networks (WPANs) rather than VANETs @cite_9 @cite_19 @cite_11 . The authors of @cite_9 formulated concurrent transmission scheduling as an optimization problem that maximizes the number of flows whose quality-of-service requirements are satisfied. In @cite_19 , relay selection and spatial reuse were jointly optimized to improve network throughput, and a blockage-robust algorithm was proposed. The authors of @cite_11 minimized the transmission time by solving an optimization problem. Although the concurrent transmission algorithms in @cite_20 @cite_9 @cite_19 @cite_11 achieved efficient spatial reuse, they transmitted redundant data because their primary objective was not data sharing; they did not consider situations where the same data are sent to different receivers. Additionally, they did not consider geographical information.
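As a rough illustration of the kind of pairwise interference rule such schedulers rely on, the sketch below checks whether two directional transmitter-receiver pairs can transmit concurrently. The flat-top antenna pattern, path-loss exponent, and thresholds are illustrative assumptions, not the models used in the cited works.

```python
import numpy as np

def gain(tx_pos, boresight, target, main_gain=10.0, side_gain=0.1, beamwidth_deg=30.0):
    """Flat-top antenna model: high gain inside the main lobe, low gain outside."""
    v = np.asarray(target, float) - np.asarray(tx_pos, float)
    b = np.asarray(boresight, float)
    cos_a = v @ b / (np.linalg.norm(v) * np.linalg.norm(b) + 1e-12)
    angle = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
    return main_gain if angle <= beamwidth_deg / 2 else side_gain

def rx_power(tx, rx, boresight, p_tx=1.0, alpha=2.5):
    """Received power with simple distance-based path loss and directional TX gain."""
    d = np.linalg.norm(np.asarray(rx, float) - np.asarray(tx, float))
    return p_tx * gain(tx, boresight, rx) / max(d, 1.0) ** alpha

def can_coexist(link_a, link_b, noise=1e-6, sinr_min_db=10.0):
    """link = (tx_pos, rx_pos); each transmitter's boresight points at its own receiver."""
    def sinr_db(sig_link, int_link):
        tx_s, rx_s = sig_link
        tx_i, rx_i = int_link
        sig = rx_power(tx_s, rx_s, np.subtract(rx_s, tx_s))
        itf = rx_power(tx_i, rx_s, np.subtract(rx_i, tx_i))
        return 10 * np.log10(sig / (itf + noise))
    return sinr_db(link_a, link_b) >= sinr_min_db and sinr_db(link_b, link_a) >= sinr_min_db

# two links whose beams point away from each other's receivers can usually coexist
a = ((0.0, 0.0), (0.0, 30.0))
b = ((50.0, 0.0), (50.0, -30.0))
print(can_coexist(a, b))  # True for these positions
```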
{ "cite_N": [ "@cite_19", "@cite_9", "@cite_20", "@cite_11" ], "mid": [ "2045797271", "2042822440", "2950634569", "2062742770" ], "abstract": [ "With the increase in emerging bandwidth-intensive applications, millimeter-wave (mmWave) communications in 60-GHz band have become a hot topic. There are two unique features that sharply distinguish mmWave wireless personal area networks (WPANs) from other networks using lower carrier frequencies. First, mmWave links use high-gain directional antennas to overcome the high propagation loss. Second, mmWave links are easily blocked by the human body and furniture. In this paper, we develop a blockage robust and efficient directional medium access control (MAC) protocol (BRDMAC), which overcomes the blockage problem by relaying. Relay selection and spatial reuse are jointly optimized to improve network performance. In BRDMAC, relay selection and transmission scheduling algorithms are proposed to compute near-optimal relay selection to maximize spatial reuse and compute near-optimal schedules with respect to the total transmission time, respectively. Finally, extensive simulation results show that BRDMAC performs better in terms of delay and throughput, compared with other existing protocols while attaining a good fairness performance.", "In this paper, a concurrent transmission scheduling algorithm is proposed to enhance the resource utilization efficiency for multi-Gbps millimeter-wave (mmWave) networks. Specifically, we exploit spatial-time division multiple access (STDMA) to improve the system throughput by allowing both non-interfering and interfering links to transmit concurrently, considering the high propagation loss at mmWave band and the utilization of directional antenna. Concurrent transmission scheduling in mmWave networks is formulated as an optimization model to maximize the number of flows scheduled in the network such that the quality of service (QoS) requirement of each flow is satisfied. We further decompose the optimization problem and propose a flip-based heuristic scheduling algorithm with low computational complexity to solve the problem. Extensive simulations demonstrate that the proposed algorithm can significantly improve the network performance in terms of network throughput and the number of supported flows.", "Recently, millimeter-wave (mmWave) bands have been postulated as a means to accommodate the foreseen extreme bandwidth demands in vehicular communications, which result from the dissemination of sensory data to nearby vehicles for enhanced environmental awareness and improved safety level. However, the literature is particularly scarce in regards to principled resource allocation schemes that deal with the challenging radio conditions posed by the high mobility of vehicular scenarios. In this paper, we propose a novel framework that blends together matching theory and swarm intelligence to dynamically and efficiently pair vehicles and optimize both transmission and reception beamwidths. This is done by jointly considering channel state information and queue state information when establishing vehicle-to-vehicle (V2V) links. To validate the proposed framework, simulation results are presented and discussed, where the throughput performance as well as the latency reliability tradeoffs of the proposed approach are assessed and compared with several baseline approaches recently proposed in the literature. 
The results obtained in this paper show performance gains of 25 in reliability and delay for ultra-dense vehicular scenarios with 50 more active V2V links than the baselines. These results shed light on the operational limits and practical feasibility of mmWave bands, as a viable radio access solution for future high-rate V2V communications.", "With directional transmission, 60 GHz wireless personal area networks (WPAN) are capable of providing larger spatial reuse ratio as compared to omni-directional networks. In this work, a device cooperation scheme is proposed to improve throughput of the 60 GHz WPAN that employs spatial reuse. The throughput maximization is formulated as a linear programming problem and an optimal solution is provided. WPAN performance with and without device cooperation is studied via simulations. The results demonstrate that device cooperation provides significant throughput improvement as compared to non-cooperative transmission." ] }
1903.09402
2949902818
Sharing perceptual data with other vehicles enhances the traffic safety of autonomous vehicles because it helps vehicles locate other vehicles and pedestrians in their blind spots. Such safety applications require high throughput and short delay, which cannot be achieved by conventional microwave vehicular communication systems. Therefore, millimeter-wave (mmWave) communications are considered to be a key technology for sharing perceptual data because of their wide bandwidth. One of the challenges of data sharing in mmWave communications is broadcasting because narrow-beam directional antennas are used to obtain high gain. Because many vehicles should share their perceptual data to others within a short time frame in order to enlarge the areas that can be perceived based on shared perceptual data, an efficient scheduling for concurrent transmission that improves spatial reuse is required for perceptual data sharing. This paper proposes a data sharing algorithm that employs a graph-based concurrent transmission scheduling. The proposed algorithm realizes concurrent transmission to improve spatial reuse by designing a rule that is utilized to determine if the two pairs of transmitters and receivers interfere with each other by considering the radio propagation characteristics of narrow-beam antennas. A prioritization method that considers the geographical information in perceptual data is also designed to enlarge perceivable areas in situations where data sharing time is limited and not all data can be shared. Simulation results demonstrate that the proposed algorithm doubles the area of the cooperatively perceivable region compared with a conventional algorithm that does not consider mmWave communications because the proposed algorithm achieves high-throughput transmission by improving spatial reuse. The prioritization also enlarges the perceivable region by a maximum of 20%.
The authors of @cite_8 proposed an RSU-controlled scheduling method that maximizes system throughput in hybrid V2I/V2V communications. This algorithm realizes concurrent dissemination based on graph theory and also adopts a data caching mechanism. The algorithm in @cite_8 generates graphs for dissemination scheduling, where each vertex represents a potential transmission consisting of a transmitter, a receiver, and data, and each edge connects a pair of transmissions that cannot be performed at the same time. The authors of @cite_8 proved that the optimal schedule can be obtained by solving the maximum weighted independent set (MWIS) problem on the generated graph. However, because the algorithm in @cite_8 assumes omnidirectional antennas, its interference calculation must be extended for mmWave communications, where narrow-beam directional antennas are utilized. Additionally, there is still room to improve the efficiency of data transmission for cooperative perception by leveraging the geographical information in perceptual data. Thus, a data sharing algorithm for mmWave vehicular networks that enlarges perceivable regions should be developed for traffic safety, especially when the data sharing time is limited.
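The cited scheduler reduces each scheduling decision to an MWIS problem on a conflict graph and solves it greedily. A minimal sketch of such a greedy step is shown below; the vertex weights (e.g., the number of receivers a transmission serves) are an illustrative assumption.

```python
def greedy_mwis(weights, conflicts):
    """Greedy MWIS: repeatedly take the highest-weight vertex that does not
    conflict with an already selected one, then discard its neighbours.

    weights:   dict vertex -> weight (e.g. number of receivers served)
    conflicts: dict vertex -> set of vertices it cannot be scheduled with
    """
    selected, excluded = [], set()
    for v in sorted(weights, key=weights.get, reverse=True):
        if v in excluded:
            continue
        selected.append(v)
        excluded.update(conflicts.get(v, ()))
        excluded.add(v)
    return selected

# vertices are candidate transmissions; edges mark pairs that interfere
weights = {"rsu->v1": 3, "v2->v3": 2, "v4->v5": 2}
conflicts = {"rsu->v1": {"v2->v3"}, "v2->v3": {"rsu->v1"}, "v4->v5": set()}
print(greedy_mwis(weights, conflicts))   # ['rsu->v1', 'v4->v5']
```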
{ "cite_N": [ "@cite_8" ], "mid": [ "2443552644" ], "abstract": [ "This paper presents the first study on scheduling for cooperative data dissemination in a hybrid infrastructure-to-vehicle (I2V) and vehicle-to-vehicle (V2V) communication environment. We formulate the novel problem of cooperative data scheduling (CDS). Each vehicle informs the road-side unit (RSU) the list of its current neighboring vehicles and the identifiers of the retrieved and newly requested data. The RSU then selects sender and receiver vehicles and corresponding data for V2V communication, while it simultaneously broadcasts a data item to vehicles that are instructed to tune into the I2V channel. The goal is to maximize the number of vehicles that retrieve their requested data. We prove that CDS is NP-hard by constructing a polynomial-time reduction from the Maximum Weighted Independent Set (MWIS) problem. Scheduling decisions are made by transforming CDS to MWIS and using a greedy method to approximately solve MWIS. We build a simulation model based on realistic traffic and communication characteristics and demonstrate the superiority and scalability of the proposed solution. The proposed model and solution, which are based on the centralized scheduler at the RSU, represent the first known vehicular ad hoc network (VANET) implementation of software defined network (SDN) concept." ] }
1903.09243
2925229769
The speed and accuracy with which robots are able to interpret natural language is fundamental to realizing effective human-robot interaction. A great deal of attention has been paid to developing models and approximate inference algorithms that improve the efficiency of language understanding. However, existing methods still attempt to reason over a representation of the environment that is flat and unnecessarily detailed, which limits scalability. An open problem is then to develop methods capable of producing the most compact environment model sufficient for accurate and efficient natural language understanding. We propose a model that leverages environment-related information encoded within instructions to identify the subset of observations and perceptual classifiers necessary to perceive a succinct, instruction-specific environment representation. The framework uses three probabilistic graphical models trained from a corpus of annotated instructions to infer salient scene semantics, perceptual classifiers, and grounded symbols. Experimental results on two robots operating in different environments demonstrate that by exploiting the content and the structure of the instructions, our method learns compact environment representations that significantly improve the efficiency of natural language symbol grounding.
Given a natural language utterance, grounding methods @cite_5 @cite_32 @cite_27 attempt to associate each word in the utterance with its corresponding referent in the robot's environment model and symbolic action space. Semantic parsing-based methods @cite_47 @cite_17 @cite_45 similarly map natural language to meaning representations, typically in the form of a lambda calculus. Early work in grounding @cite_33 @cite_11 employs manually engineered correspondences and features between words and a flat representation of the environment. Modern statistical methods @cite_37 @cite_29 @cite_32 @cite_27 @cite_20 (and similarly for inverse grounding @cite_46 @cite_34 @cite_44 ) employ probabilistic models that relate words to their corresponding referents according to the hierarchical structure of language, enabling the resolution of complex free-form language. These models are typically learned from annotated natural language corpora as well as through interaction with humans @cite_45 @cite_42 @cite_40 . Probabilistic grounding models have been shown to be effective at interpreting cooking instructions @cite_35 , learning spatial relations in semantic maps @cite_28 @cite_31 , and directing mobile manipulators @cite_4 , among others.
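As a deliberately tiny illustration of the factored, probabilistic view of grounding (not the actual G3 or DCG inference procedure), the sketch below scores candidate groundings as a product of per-phrase factor likelihoods and keeps the most likely one; the phrases and likelihood values are hypothetical.

```python
import math

def grounding_score(phrase_factors):
    """Log-likelihood of a candidate grounding, assuming independent per-phrase
    factors that mirror how factored grounding graphs decompose an utterance."""
    return sum(math.log(max(p, 1e-12)) for p in phrase_factors.values())

# hypothetical factor likelihoods for "pick up the tire pallet near the truck"
candidate_a = {"pick up": 0.9, "the tire pallet": 0.7, "near the truck": 0.6}
candidate_b = {"pick up": 0.9, "the tire pallet": 0.2, "near the truck": 0.6}

best = max([candidate_a, candidate_b], key=grounding_score)
print(best is candidate_a)  # True: the better-supported grounding wins
```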
{ "cite_N": [ "@cite_35", "@cite_47", "@cite_29", "@cite_42", "@cite_44", "@cite_5", "@cite_20", "@cite_4", "@cite_46", "@cite_17", "@cite_37", "@cite_28", "@cite_32", "@cite_27", "@cite_40", "@cite_34", "@cite_33", "@cite_45", "@cite_31", "@cite_11" ], "mid": [ "2922052901", "2070789508", "1949907236", "2230046320", "2889756642", "2107019937", "2198355466", "1706049732", "2296455319", "46490633", "2069809153", "2976490211", "2236233024", "2007820193", "2740888779", "2904313270", "1525482321", "2293350124", "2029806546", "2019178608" ], "abstract": [ "", "Mobile robots that interact with humans in an intuitive way must be able to follow directions provided by humans in unconstrained natural language. In this work we investigate how statistical machine translation techniques can be used to bridge the gap between natural language route instructions and a map of an environment built by a robot. Our approach uses training data to learn to translate from natural language instructions to an automatically-labeled map. The complexity of the translation process is controlled by taking advantage of physical constraints imposed by the map. As a result, our technique can efficiently handle uncertainty in both map labeling and parsing. Our experiments demonstrate the promising capabilities achieved by our approach.", "n order for robots to engage in dialog with human teammates, they must have the ability to map between words in the language and aspects of the external world. A solution to this symbol grounding problem (Harnad, 1990) would enable a robot to interpret commands such as “Drive over to receiving and pick up the tire pallet.” In this article we describe several of our results that use probabilistic inference to address the symbol grounding problem. Our specific approach is to develop models that factor according to the linguistic structure of a command. We first describe an early result, a generative model that factors according to the sequential structure of language, and then discuss our new framework, generalized grounding graphs (G3). The G3 framework dynamically instantiates a probabilistic graphical model for a natural language input, enabling a mapping between words in language and concrete objects, places, paths and events in the external world. We report on corpus-based experiments where the robot is able to learn and use word meanings in three real-world tasks: indoor navigation, spatial language video retrieval, and mobile manipulation.", "This paper reports recent progress on modeling the grounded co-acquisition of syntax and semantics of locative spatial language in developmental robots. We show how a learner robot can learn to produce and interpret spatial utterances in guided-learning interactions with a tutor robot (equipped with a system for producing English spatial phrases). The tutor guides the learning process by simplifying the challenges and complexity of utterances, gives feedback, and gradually increases the complexity of the language to be learnt. Our experiments show promising results towards long-term, incremental acquisition of natural language in a process of co-development of syntax and semantics.", "Effective communication between humans often embeds both temporal and spatial context. While spatial context captures the geographic settings of objects in the environment, temporal context describes their changes over time. 
In this paper, we propose temporal spatial inverse semantics (TeSIS) to extend the inverse semantics approach to also consider the temporal context for robots communicating with humans. Inverse semantics generates natural language requests while taking into account how well the human listeners would interpret those requests given the current spatial context. Compared to inverse semantics, our approach incorporates also temporal context by referring to spatial context information in the past. To achieve this, we extend the sentence structure in inverse semantics to generate sentences that can refer to not only the current but also previous states of the environment. A new metric based on the extended sentence structure is developed by breaking a single sentence into multiple independent sentences that refer to environment states at different times. Using this approach, we are able to generate sentences such as “Please pick up the cup beside the oven that was on the dining table”. To evaluate our approach, we randomly generate scenarios in an experimental domain. Each scenario includes the description of the current and several immediate previous states. Natural language sentences are then generated for these scenarios using both inverse semantics that uses only the spatial context and our approach. Amazon MTurk is used to compare the sentences generated and results show that TeSIS achieves better accuracy, sometimes by a significant margin, than the baseline.", "There has been much discussion recently about the scope and limits of purely symbolic models of the mind and about the proper role of connectionism in cognitive modeling. This paper describes the \"symbol grounding problem\": How can the semantic interpretation of a formal symbol system be made intrinsic to the system, rather than just parasitic on the meanings in our heads? How can the meanings of the meaningless symbol tokens, manipulated solely on the basis of their (arbitrary) shapes, be grounded in anything but other meaningless symbols? The problem is analogous to trying to learn Chinese from a Chinese Chinese dictionary alone. A candidate solution is sketched: Symbolic representations must be grounded bottom-up in nonsymbolic representations of two kinds: (1) \"iconic representations\" , which are analogs of the proximal sensory projections of distal objects and events, and (2) \"categorical representations\" , which are learned and innate feature-detectors that pick out the invariant features of object and event categories from their sensory projections. Elementary symbols are the names of these object and event categories, assigned on the basis of their (nonsymbolic) categorical representations. Higher-order (3) \"symbolic representations\" , grounded in these elementary symbols, consist of symbol strings describing category membership relations (e.g., \"An X is a Y that is Z\"). Connectionism is one natural candidate for the mechanism that learns the invariant features underlying categorical representations, thereby connecting names to the proximal projections of the distal objects they stand for. In this way connectionism can be seen as a complementary component in a hybrid nonsymbolic symbolic model of the mind, rather than a rival to purely symbolic modeling. Such a hybrid model would not have an autonomous symbolic \"module,\" however; the symbolic functions would emerge as an intrinsically \"dedicated\" symbol system as a consequence of the bottom-up grounding of categories' names in their sensory representations. 
Symbol manipulation would be governed not just by the arbitrary shapes of the symbol tokens, but by the nonarbitrary shapes of the icons and category invariants in which they are grounded.", "Natural language interfaces are powerful tools that enables humans and robots to convey information without the need for extensive training or complex graphical interfaces. Statistical techniques that employ probabilistic graphical models have proven effective at interpreting symbols that represent commands and observations for robot direction-following and object manipulation. A limitation of these approaches is their inefficiency in dealing with larger and more complex symbolic representations. Herein, we present a model for language understanding that uses parse trees and environment models both to learn the structure of probabilistic graphical models and to perform inference over this learned structure for symbol grounding. This model, called the Hierarchical Distributed Correspondence Graph (HDCG), exploits information about symbols that are expressed in the corpus to construct minimalist graphical models that are more efficient to search. In a series of comparative experiments, we demonstrate a significant improvement in efficiency without loss in accuracy over contemporary approaches for human-robot interaction.", "One long-standing challenge in robotics is the realization of mobile autonomous robots able to operate safely in human workplaces, and be accepted by the human occupants. We describe the development of a multiton robotic forklift intended to operate alongside people and vehicles, handling palletized materials within existing, active outdoor storage facilities. The system has four novel characteristics. The first is a multimodal interface that allows users to efficiently convey task-level commands to the robot using a combination of pen-based gestures and natural language speech. These tasks include the manipulation, transport, and placement of palletized cargo within dynamic, human-occupied warehouses. The second is the robot's ability to learn the visual identity of an object from a single user-provided example and use the learned model to reliably and persistently detect objects despite significant spatial and temporal excursions. The third is a reliance on local sensing that allows the robot to handle variable palletized cargo and navigate within dynamic, minimally prepared environments without a global positioning system. The fourth concerns the robot's operation in close proximity to people, including its human supervisor, pedestrians who may cross or block its path, moving vehicles, and forklift operators who may climb inside the robot and operate it manually. This is made possible by interaction mechanisms that facilitate safe, effective operation around people. This paper provides a comprehensive description of the system's architecture and implementation, indicating how real-world operational requirements motivated key design choices. We offer qualitative and quantitative analyses of the robot operating in real settings and discuss the lessons learned from our effort.", "Our goal is to build robots that can robustly interact with humans using natural language. This problem is challenging because human language is filled with ambiguity, and furthermore, due to limitations in sensing, the robot's perception of its environment might be much more limited than that of its human partner. 
To enable a robot to recover from a failure to understand a natural language utterance, this paper describes an information-theoretic strategy for asking targeted clarifying questions and using information from the answer to disambiguate the language. To identify good questions, we derive an estimate of the robot's uncertainty about the mapping between specific phrases in the language and aspects of the external world. This metric enables the robot to ask a targeted question about the parts of the language for which it is most uncertain. After receiving an answer, the robot fuses information from the command, the question, and the answer in a joint probabilistic graphical model in the G3 framework. When using answers to questions, we show the robot is able to infer mappings between parts of the language and concrete object groundings in the external world with higher accuracy than by using information from the command alone. Furthermore, we demonstrate that by effectively selecting which questions to ask, the robot is able to achieve significant performance gains while asking many fewer questions than baseline metrics.", "As robots become more ubiquitous and capable of performing complex tasks, the importance of enabling untrained users to interact with them has increased. In response, unconstrained natural-language interaction with robots has emerged as a significant research area. We discuss the problem of parsing natural language commands to actions and control structures that can be readily implemented in a robot execution system. Our approach learns a parser based on example pairs of English commands and corresponding control language expressions. We evaluate this approach in the context of following route instructions through an indoor environment, and demonstrate that our system can learn to translate English commands into sequences of desired actions, while correctly capturing the semantic intent of statements involving complex control structures. The procedural nature of our formal representation allows a robot to interpret route instructions online while moving through a previously unknown environment.", "Speaking using unconstrained natural language is an intuitive and flexible way for humans to interact with robots. Understanding this kind of linguistic input is challenging because diverse words and phrases must be mapped into structures that the robot can understand, and elements in those structures must be grounded in an uncertain environment. We present a system that follows natural language directions by extracting a sequence of spatial description clauses from the linguistic input and then infers the most probable path through the environment given only information about the environmental geometry and detected visible objects. We use a probabilistic graphical model that factors into three key components. The first component grounds landmark phrases such as \"the computers\" in the perceptual frame of the robot by exploiting co-occurrence statistics from a database of tagged images such as Flickr. Second, a spatial reasoning component judges how well spatial relations such as \"past the computers\" describe a path. Finally, verb phrases such as \"turn right\" are modeled according to the amount of change in orientation in the path. 
Our system follows 60 of the directions in our corpus to within 15 meters of the true destination, significantly outperforming other approaches.", "", "This paper describes a new model for understanding natural language commands given to autonomous systems that perform navigation and mobile manipulation in semi-structured environments. Previous approaches have used models with fixed structure to infer the likelihood of a sequence of actions given the environment and the command. In contrast, our framework, called Generalized Grounding Graphs (G3), dynamically instantiates a probabilistic graphical model for a particular natural language command according to the command's hierarchical and compositional semantic structure. Our system performs inference in the model to successfully find and execute plans corresponding to natural language commands such as \"Put the tire pallet on the truck.\" The model is trained using a corpus of commands collected using crowdsourcing. We pair each command with robot actions and use the corpus to learn the parameters of the model. We evaluate the robot's performance by inferring plans from natural language commands, executing each plan in a realistic robot simulator, and asking users to evaluate the system's performance. We demonstrate that our system can successfully follow many natural language commands from the corpus.", "Natural language interfaces for robot control aspire to find the best sequence of actions that reflect the behavior intended by the instruction. This is difficult because of the diversity of language, variety of environments, and heterogeneity of tasks. Previous work has demonstrated that probabilistic graphical models constructed from the parse structure of natural language can be used to identify motions that most closely resemble verb phrases. Such approaches however quickly succumb to computational bottlenecks imposed by construction and search the space of possible actions. Planning constraints, which define goal regions and separate the admissible and inadmissible states in an environment model, provide an interesting alternative to represent the meaning of verb phrases. In this paper we present a new model called the Distributed Correspondence Graph (DCG) to infer the most likely set of planning constraints from natural language instructions. A trajectory planner then uses these planning constraints to find a sequence of actions that resemble the instruction. Separating the problem of identifying the action encoded by the language into individual steps of planning constraint inference and motion planning enables us to avoid computational costs associated with generation and evaluation of many trajectories. We present experimental results from comparative experiments that demonstrate improvements in efficiency in natural language understanding without loss of accuracy.", "", "", "Abstract : The paper describes a system for the computer understanding of English. The system answers questions, executes commands, and accepts information in normal English dialog. It uses semantic information and context to understand discourse and to disambiguate sentences. It combines a complete syntactic analysis of each sentence with a 'heuristic understander' which uses different kinds of information about a sentence, other parts of the discourse, and general information about the world in deciding what the sentence means.", "Intelligent robots frequently need to understand requests from naive users through natural language. 
Previous approaches either cannot account for language variation, e.g., keyword search, or require gathering large annotated corpora, which can be expensive and cannot adapt to new variation. We introduce a dialog agent for mobile robots that understands human instructions through semantic parsing, actively resolves ambiguities using a dialog manager, and incrementally learns from human-robot conversations by inducing training data from user paraphrases. Our dialog agent is implemented and tested both on a web interface with hundreds of users via Mechanical Turk and on a mobile robot over several days, tasked with understanding navigation and delivery requests through natural language in an office environment. In both contexts, We observe significant improvements in user satisfaction after learning from conversations.", "We describe a semantic mapping algorithm that learns human-centric environment models by interpreting natural language utterances. Underlying the approach is a coupled metric, topological, and semantic representation of the environment that enables the method to fuse information from natural language descriptions with low-level metric and appearance data. We extend earlier work with a novel formulation that incorporates spatial layout into a topological representation of the environment. We also describe a factor graph formulation of the semantic properties that encodes human-centric concepts such as type and colloquial name for each mapped region. The algorithm infers these properties by combining the user’s natural language descriptions with imageand laser-based scene classification. We also propose a mechanism to more effectively ground natural language descriptions of distant regions using semantic cues from other modalities. We describe how the algorithm employs this learned semantic information to propose valid topological hypotheses, leading to more accurate topological and metric maps. We demonstrate that integrating language with other sensor data increases the accuracy of the achieved spatial-semantic representation of the environment.", "How can we build robots that engage in fluid spoken conversations with people, moving beyond canned responses to words and towards actually understanding? As a step towards addressing this question, we introduce a robotic architecture that provides a basis for grounding word meanings. The architecture provides perceptual, procedural, and affordance representations for grounding words. A perceptually-coupled on-line simulator enables sensory-motor representations that can shift points of view. Held together, we show that this architecture provides a rich set of data structures and procedures that provide the foundations for grounding the meaning of certain classes of words." ] }
1903.09341
2923728956
This paper describes multichannel speech enhancement for improving automatic speech recognition (ASR) in noisy environments. Recently, the minimum variance distortionless response (MVDR) beamforming has widely been used because it works well if the steering vector of speech and the spatial covariance matrix (SCM) of noise are given. To estimate such spatial information, conventional studies take a supervised approach that classifies each time-frequency (TF) bin into noise or speech by training a deep neural network (DNN). The performance of ASR, however, is degraded in an unknown noisy environment. To solve this problem, we take an unsupervised approach that decomposes each TF bin into the sum of speech and noise by using multichannel nonnegative matrix factorization (MNMF). This enables us to accurately estimate the SCMs of speech and noise not from observed noisy mixtures but from separated speech and noise components. In this paper, we propose online MVDR beamforming by effectively initializing and incrementally updating the parameters of MNMF. Another main contribution is to comprehensively investigate the performances of ASR obtained by various types of spatial filters, i.e., time-invariant and variant versions of MVDR beamformers and those of rank-1 and full-rank multichannel Wiener filters, in combination with MNMF. The experimental results showed that the proposed method outperformed the state-of-the-art DNN-based beamforming method in unknown environments that did not match training data.
There are several variants of beamforming, such as delay-and-sum (DS) @cite_30 , MVDR @cite_53 , and generalized eigenvalue (GEV) @cite_46 beamforming, and the multichannel Wiener filter (MWF) @cite_42 @cite_54 . DS beamforming @cite_30 uses only the steering vector of the target speech, whereas the other methods additionally use the SCM of noise. GEV beamforming aims to maximize the signal-to-noise ratio (SNR) @cite_46 without making any assumptions about the acoustic transfer function from the speaker to the array or the SCM of the noise. MVDR beamforming and the MWF, on the other hand, assume that the time-frequency (TF) bins of the speech and noise spectrograms follow complex Gaussian distributions @cite_53 @cite_42 @cite_54 . In , we review the relationships between MVDR beamforming and the rank-1 and full-rank MWF in terms of the propagation process and the filter estimation strategy.
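Given the steering vector of the target speech and the noise SCM for one frequency bin, the DS and MVDR beamformers reduce to a few lines of linear algebra; the sketch below uses synthetic inputs in place of estimated quantities.

```python
import numpy as np

def ds_weights(d):
    """Delay-and-sum: align and average the channels (only the steering vector is used)."""
    return d / np.vdot(d, d).real

def mvdr_weights(d, R_n):
    """MVDR: w = R_n^{-1} d / (d^H R_n^{-1} d), distortionless towards d."""
    Rn_inv_d = np.linalg.solve(R_n, d)
    return Rn_inv_d / np.vdot(d, Rn_inv_d)

# toy example for one frequency bin of an M = 4 microphone array
M = 4
rng = np.random.default_rng(0)
d = np.exp(1j * rng.uniform(0, 2 * np.pi, M))          # synthetic steering vector
A = rng.normal(size=(M, M)) + 1j * rng.normal(size=(M, M))
R_n = A @ A.conj().T + 1e-3 * np.eye(M)                # synthetic Hermitian PSD noise SCM

w_ds = ds_weights(d)
w = mvdr_weights(d, R_n)
print(np.vdot(w, d))                                   # ~1+0j: distortionless constraint

x_bin = 0.5 * d + 0.01 * (A @ rng.normal(size=M))      # synthetic noisy observation
s_hat = np.vdot(w, x_bin)                              # beamformer output for this bin
```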
{ "cite_N": [ "@cite_30", "@cite_54", "@cite_53", "@cite_42", "@cite_46" ], "mid": [ "2148613904", "", "2060108923", "2168729028", "2158143227" ], "abstract": [ "When performing speaker diarization on recordings from meetings, multiple microphones of different qualities are usually available and distributed around the meeting room. Although several approaches have been proposed in recent years to take advantage of multiple microphones, they are either too computationally expensive and not easily scalable or they cannot outperform the simpler case of using the best single microphone. In this paper, the use of classic acoustic beamforming techniques is proposed together with several novel algorithms to create a complete frontend for speaker diarization in the meeting room domain. New techniques we are presenting include blind reference-channel selection, two-step time delay of arrival (TDOA) Viterbi postprocessing, and a dynamic output signal weighting algorithm, together with using such TDOA values in the diarization to complement the acoustic information. Tests on speaker diarization show a 25 relative improvement on the test set compared to using a single most centrally located microphone. Additional experimental results show improvements using these techniques in a speech recognition task.", "", "An overview of beamforming from a signal-processing perspective is provided, with an emphasis on recent research. Data-independent, statistically optimum, adaptive, and partially adaptive beamforming are discussed. Basic notation, terminology, and concepts are included. Several beamformer implementations are briefly described. >", "Several contributions have been made so far to develop optimal multichannel linear filtering approaches and show their ability to reduce the acoustic noise. However, there has not been a clear unifying theoretical analysis of their performance in terms of both noise reduction and speech distortion. To fill this gap, we analyze the frequency-domain (non-causal) multichannel linear filtering for noise reduction in this paper. For completeness, we consider the noise reduction constrained optimization problem that leads to the parameterized multichannel non-causal Wiener filter (PMWF). Our contribution is fivefold. First, we formally show that the minimum variance distortionless response (MVDR) filter is a particular case of the PMWF by properly formulating the constrained optimization problem of noise reduction. Second, we propose new simplified expressions for the PMWF, the MVDR, and the generalized sidelobe canceller (GSC) that depend on the signals' statistics only. In contrast to earlier works, these expressions are explicitly independent of the channel transfer function ratios. Third, we quantify the theoretical gains and losses in terms of speech distortion and noise reduction when using the PWMF by establishing new simplified closed-form expressions for three performance measures, namely, the signal distortion index, the noise reduction factor (originally proposed in the paper titled ldquoNew insights into the noise reduction Wiener filter,rdquo by J. Chen (IEEE Transactions on Audio, Speech, and Language Processing, Vol. 15, no. 4, pp. 1218-1234, Jul. 2006) to analyze the single channel time-domain Wiener filter), and the output signal-to-noise ratio (SNR). Fourth, we analyze the effects of coherent and incoherent noise in addition to the benefits of utilizing multiple microphones. 
Fifth, we propose a new proof for the a posteriori SNR improvement achieved by the PMWF. Finally, we provide some simulations results to corroborate the findings of this work.", "Maximizing the output signal-to-noise ratio (SNR) of a sensor array in the presence of spatially colored noise leads to a generalized eigenvalue problem. While this approach has extensively been employed in narrowband (antenna) array beamforming, it is typically not used for broadband (microphone) array beamforming due to the uncontrolled amount of speech distortion introduced by a narrowband SNR criterion. In this paper, we show how the distortion of the desired signal can be controlled by a single-channel post-filter, resulting in a performance comparable to the generalized minimum variance distortionless response beamformer, where arbitrary transfer functions relate the source and the microphones. Results are given both for directional and diffuse noise. A novel gradient ascent adaptation algorithm is presented, and its good convergence properties are experimentally revealed by comparison with alternatives from the literature. A key feature of the proposed beamformer is that it operates blindly, i.e., it neither requires knowledge about the array geometry nor an explicit estimation of the transfer functions from source to sensors or the direction-of-arrival." ] }
1903.09341
2923728956
This paper describes multichannel speech enhancement for improving automatic speech recognition (ASR) in noisy environments. Recently, the minimum variance distortionless response (MVDR) beamforming has widely been used because it works well if the steering vector of speech and the spatial covariance matrix (SCM) of noise are given. To estimate such spatial information, conventional studies take a supervised approach that classifies each time-frequency (TF) bin into noise or speech by training a deep neural network (DNN). The performance of ASR, however, is degraded in an unknown noisy environment. To solve this problem, we take an unsupervised approach that decomposes each TF bin into the sum of speech and noise by using multichannel nonnegative matrix factorization (MNMF). This enables us to accurately estimate the SCMs of speech and noise not from observed noisy mixtures but from separated speech and noise components. In this paper, we propose online MVDR beamforming by effectively initializing and incrementally updating the parameters of MNMF. Another main contribution is to comprehensively investigate the performances of ASR obtained by various types of spatial filters, i.e., time-invariant and variant versions of MVDR beamformers and those of rank-1 and full-rank multichannel Wiener filters, in combination with MNMF. The experimental results showed that the proposed method outperformed the state-of-the-art DNN-based beamforming method in unknown environments that did not match training data.
TF mask estimation has been studied actively for computing the SCMs of speech and noise @cite_25 @cite_18 @cite_23 @cite_11 @cite_24 @cite_28 @cite_40 @cite_49 . Our unsupervised method differs from DNN-based mask estimation @cite_18 @cite_23 @cite_11 @cite_24 @cite_28 @cite_40 @cite_49 in two ways. First, our method decomposes each TF bin into the sum of speech and noise, whereas the mask-based methods calculate the SCM of speech from noisy TF bins without any decomposition. Second, our method uses no training data, whereas the DNN-based methods generally need a sufficient number of pairs of noisy spectrograms and ideal binary masks (IBMs). The performance of DNN-based mask estimation can therefore degrade in unseen conditions not covered by the training data, because the estimator overfits the training data.
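However the masks are obtained (by a DNN or unsupervisedly), the speech and noise SCMs are commonly computed as mask-weighted outer products of the multichannel STFT vectors; below is a small numpy sketch with random placeholders standing in for the recording and the estimated mask.

```python
import numpy as np

def masked_scm(X, mask, eps=1e-8):
    """Mask-weighted spatial covariance matrix per frequency bin.

    X    : observed multichannel STFT, shape (F, T, M)
    mask : TF mask in [0, 1], shape (F, T)
    returns SCMs of shape (F, M, M)
    """
    num = np.einsum("ft,ftm,ftn->fmn", mask, X, X.conj())
    return num / (mask.sum(axis=1)[:, None, None] + eps)

# placeholders standing in for a real recording and an estimated speech mask
F, T, M = 257, 100, 4
rng = np.random.default_rng(0)
X = rng.normal(size=(F, T, M)) + 1j * rng.normal(size=(F, T, M))
speech_mask = rng.uniform(size=(F, T))

R_speech = masked_scm(X, speech_mask)          # (F, M, M)
R_noise = masked_scm(X, 1.0 - speech_mask)     # complementary mask for noise
```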
{ "cite_N": [ "@cite_18", "@cite_28", "@cite_24", "@cite_40", "@cite_23", "@cite_49", "@cite_25", "@cite_11" ], "mid": [ "2702006285", "", "2714487941", "2762862560", "2398042854", "2892163332", "2586584460", "" ], "abstract": [ "Recently, time-frequency mask-based beamforming has been extensively studied as the frontend of deep neural network (DNN) based automatic speech recognition (ASR) in noisy environments. Two mask estimation approaches have been separately developed for this beamforming method, namely the the DNN-based approach, which exploits the time-frequency features of the signal, and the spatial clustering-based approach, which exploits the spatial features of the signal. This paper proposes a new method that integrates the two approaches in a probabilistic way to further improve mask estimation by exploiting the advantages of both approaches. Experiments using the real data of the CHiME-3 multichannel noisy speech corpus show that the proposed method almost always outperforms the conventional approaches in terms of word error rate (WER) improvement.", "", "Acoustic beamforming has played a key role in the robust automatic speech recognition (ASR) applications. Accurate estimates of the speech and noise spatial covariance matrices (SCM) are crucial for successfully applying the minimum variance distortionless response (MVDR) beamforming. Reliable estimation of time-frequency (TF) masks can improve the estimation of the SCMs and significantly improve the performance of the MVDR beamforming in ASR tasks. In this paper, we focus on the TF mask estimation using recurrent neural networks (RNN). Specifically, our methods include training the RNN to estimate the speech and noise masks independently, training the RNN to minimize the ASR cost function directly, and performing multiple passes to iteratively improve the mask estimation. The proposed methods are evaluated individually and overally on the CHiME-4 challenge. The results show that the proposed methods improve the ASR performance individually and also work complementarily. The overall performance achieves a word error rate of 8.9 with 6-microphone configuration, which is much better than 12.0 achieved with the state-of-the-art MVDR implementation.", "", "We present a neural network based approach to acoustic beamforming. The network is used to estimate spectral masks from which the Cross-Power Spectral Density matrices of speech and noise are estimated, which in turn are used to compute the beamformer coefficients. The network training is independent of the number and the geometric configuration of the microphones. We further show that it is possible to train the network on clean speech only, avoiding the need for stereo data with separated speech and noise. Two types of networks are evaluated. One small feed-forward network with only one hidden layer and one more elaborated bi-directional Long Short-Term Memory network. We compare our system with different parametric approaches to mask estimation and using different beamforming algorithms. We show that our system yields superior results, both in terms of perceptual speech quality and with respect to speech recognition error rate. The results for the simple feed-forward network are especially encouraging considering its low computational requirements.", "The recently-proposed deep clustering algorithm represents a fundamental advance towards solving the cocktail party problem in the single-channel case. 
When multiple microphones are available, spatial information can be leveraged to differentiate signals from different directions. This study combines spectral and spatial features in a deep clustering framework so that the complementary spectral and spatial information can be simultaneously exploited to improve speech separation. We find that simply encoding inter-microphone phase patterns as additional input features during deep clustering provides a significant improvement in separation performance, even with random microphone array geometry. Experiments on a spatial-ized version of the wsj0-2mix dataset show the strong potential of the proposed algorithm for speech separation in reverberant environments.", "This paper considers acoustic beamforming for noise robust automatic speech recognition. A beamformer attenuates background noise by enhancing sound components coming from a direction specified by a steering vector. Hence, accurate steering vector estimation is paramount for successful noise reduction. Recently, time-frequency masking has been proposed to estimate the steering vectors that are used for a beamformer. In particular, we have developed a new form of this approach, which uses a speech spectral model based on a complex Gaussian mixture model CGMM to estimate the time-frequency masks needed for steering vector estimation, and extended the CGMM-based beamformer to an online speech enhancement scenario. Our previous experiments showed that the proposed CGMM-based approach outperforms a recently proposed mask estimator based on a Watson mixture model and the baseline speech enhancement system of the CHiME-3 challenge. This paper provides additional experimental results for our online processing, which achieves performance comparable to that of batch processing with a suitable block-batch size. This online version reduces the CHiME-3 word error rate WER on the evaluation set from 8.37 to 8.06 . Moreover, in this paper, we introduce a probabilistic prior distribution for a spatial correlation matrix a CGMM parameter, which enables more stable steering vector estimation in the presence of interfering speakers. In practice, the performance of the proposed online beamformer degrades with observations that contain only noise or and interference because of the failure of the CGMM parameter estimation. The introduced spatial prior enables the target speaker's parameter to avoid overfitting to noise or and interference. Experimental results show that the spatial prior reduces the WER from 38.4 to 29.2 in a conversation recognition task compared with the CGMM-based approach without the prior, and outperforms a conventional online speech enhancement approach.", "" ] }
1903.09341
2923728956
This paper describes multichannel speech enhancement for improving automatic speech recognition (ASR) in noisy environments. Recently, the minimum variance distortionless response (MVDR) beamforming has widely been used because it works well if the steering vector of speech and the spatial covariance matrix (SCM) of noise are given. To estimate such spatial information, conventional studies take a supervised approach that classifies each time-frequency (TF) bin into noise or speech by training a deep neural network (DNN). The performance of ASR, however, is degraded in an unknown noisy environment. To solve this problem, we take an unsupervised approach that decomposes each TF bin into the sum of speech and noise by using multichannel nonnegative matrix factorization (MNMF). This enables us to accurately estimate the SCMs of speech and noise not from observed noisy mixtures but from separated speech and noise components. In this paper, we propose online MVDR beamforming by effectively initializing and incrementally updating the parameters of MNMF. Another main contribution is to comprehensively investigate the performances of ASR obtained by various types of spatial filters, i.e., time-invariant and variant versions of MVDR beamformers and those of rank-1 and full-rank multichannel Wiener filters, in combination with MNMF. The experimental results showed that the proposed method outperformed the state-of-the-art DNN-based beamforming method in unknown environments that did not match training data.
The major limitation of most DNN-based methods is that only single-channel magnitude spectrograms are used for mask estimation, discarding spatial information such as interchannel level differences (ILDs) and interchannel phase differences (IPDs). Recently, Wang et al. @cite_49 and Pertilä @cite_40 investigated the use of ILDs and IPDs as acoustic features for mask estimation. Erdogan et al. @cite_44 proposed a method for estimating a phase-sensitive filter in single-channel speech enhancement. For comparative evaluation, inspired by these state-of-the-art methods, we use both spatial and magnitude features for DNN-based multichannel mask estimation.
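A sketch of how such spatial and magnitude features can be assembled from a multichannel STFT is shown below; the exact feature set (log-magnitude plus ILD and cosine/sine-encoded IPD relative to a reference channel) is one plausible choice, not necessarily the configuration used in the cited works.

```python
import numpy as np

def spatial_magnitude_features(X, ref=0, eps=1e-8):
    """Stack log-magnitude, ILD and IPD features for DNN-based mask estimation.

    X: multichannel STFT, shape (F, T, M); channel `ref` is the reference.
    returns features of shape (F, T, 1 + 3 * (M - 1))
    """
    F, T, M = X.shape
    log_mag = np.log(np.abs(X[..., ref]) + eps)[..., None]
    others = [m for m in range(M) if m != ref]
    ild = np.stack([np.log(np.abs(X[..., m]) + eps) - log_mag[..., 0] for m in others], axis=-1)
    ipd = np.stack([np.angle(X[..., m] * X[..., ref].conj()) for m in others], axis=-1)
    return np.concatenate([log_mag, ild, np.cos(ipd), np.sin(ipd)], axis=-1)

# placeholder multichannel spectrogram
F, T, M = 257, 100, 4
rng = np.random.default_rng(0)
X = rng.normal(size=(F, T, M)) + 1j * rng.normal(size=(F, T, M))
feats = spatial_magnitude_features(X)
print(feats.shape)   # (257, 100, 10)
```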
{ "cite_N": [ "@cite_44", "@cite_40", "@cite_49" ], "mid": [ "1482149378", "2762862560", "2892163332" ], "abstract": [ "Separation of speech embedded in non-stationary interference is a challenging problem that has recently seen dramatic improvements using deep network-based methods. Previous work has shown that estimating a masking function to be applied to the noisy spectrum is a viable approach that can be improved by using a signal-approximation based objective function. Better modeling of dynamics through deep recurrent networks has also been shown to improve performance. Here we pursue both of these directions. We develop a phase-sensitive objective function based on the signal-to-noise ratio (SNR) of the reconstructed signal, and show that in experiments it yields uniformly better results in terms of signal-to-distortion ratio (SDR). We also investigate improvements to the modeling of dynamics, using bidirectional recurrent networks, as well as by incorporating speech recognition outputs in the form of alignment vectors concatenated with the spectral input features. Both methods yield further improvements, pointing to tighter integration of recognition with separation as a promising future direction.", "", "The recently-proposed deep clustering algorithm represents a fundamental advance towards solving the cocktail party problem in the single-channel case. When multiple microphones are available, spatial information can be leveraged to differentiate signals from different directions. This study combines spectral and spatial features in a deep clustering framework so that the complementary spectral and spatial information can be simultaneously exploited to improve speech separation. We find that simply encoding inter-microphone phase patterns as additional input features during deep clustering provides a significant improvement in separation performance, even with random microphone array geometry. Experiments on a spatial-ized version of the wsj0-2mix dataset show the strong potential of the proposed algorithm for speech separation in reverberant environments." ] }
1903.09341
2923728956
This paper describes multichannel speech enhancement for improving automatic speech recognition (ASR) in noisy environments. Recently, the minimum variance distortionless response (MVDR) beamforming has widely been used because it works well if the steering vector of speech and the spatial covariance matrix (SCM) of noise are given. To estimate such spatial information, conventional studies take a supervised approach that classifies each time-frequency (TF) bin into noise or speech by training a deep neural network (DNN). The performance of ASR, however, is degraded in an unknown noisy environment. To solve this problem, we take an unsupervised approach that decomposes each TF bin into the sum of speech and noise by using multichannel nonnegative matrix factorization (MNMF). This enables us to accurately estimate the SCMs of speech and noise not from observed noisy mixtures but from separated speech and noise components. In this paper, we propose online MVDR beamforming by effectively initializing and incrementally updating the parameters of MNMF. Another main contribution is to comprehensively investigate the performances of ASR obtained by various types of spatial filters, i.e., time-invariant and variant versions of MVDR beamformers and those of rank-1 and full-rank multichannel Wiener filters, in combination with MNMF. The experimental results showed that the proposed method outperformed the state-of-the-art DNN-based beamforming method in unknown environments that did not match training data.
Multichannel extensions of NMF @cite_35 @cite_27 @cite_10 @cite_0 @cite_29 @cite_1 represent the complex spectrograms of multichannel mixture signals by using the SCMs and low-rank power spectrograms of multiple source signals. Ozerov et al. @cite_27 pioneered the use of NMF for multichannel source separation, where the SCMs are restricted to rank-1 matrices and the cost function based on the Itakura-Saito (IS) divergence is minimized. This model was extended to have full-rank SCMs @cite_10 . Sawada et al. @cite_8 introduced partitioning parameters to have a set of basis spectra shared by all sources and derived a majorization-minimization (MM) algorithm. Nikunen and Virtanen @cite_29 proposed a similar model that represents the SCM of each source as the weighted sum of direction-dependent SCMs. While these methods can be used in an underdetermined case, Kitamura et al. @cite_1 proposed independent low-rank matrix analysis (ILRMA) for a determined case by restricting the SCMs of @cite_8 to rank-1 matrices. This can be viewed as a unified model of NMF and independent vector analysis (IVA) and is robust to initialization.
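As a rough illustration of the NMF building block that these multichannel extensions share, the sketch below shows single-channel Itakura-Saito NMF with the common heuristic multiplicative updates; it is not the full MNMF or ILRMA algorithm (there are no SCMs here), and all variable names and iteration counts are assumptions.

import numpy as np

def is_nmf(V, n_basis, n_iter=200, eps=1e-12):
    # V: nonnegative power spectrogram, shape (freq, frames); returns basis W and activations H
    F, T = V.shape
    W = np.random.rand(F, n_basis) + eps
    H = np.random.rand(n_basis, T) + eps
    for _ in range(n_iter):
        V_hat = W @ H + eps
        W *= ((V / V_hat**2) @ H.T) / ((1.0 / V_hat) @ H.T)   # multiplicative update for W
        V_hat = W @ H + eps
        H *= (W.T @ (V / V_hat**2)) / (W.T @ (1.0 / V_hat))   # multiplicative update for H
    return W, H

Multichannel variants replace the scalar model W @ H per TF bin with a sum of low-rank source spectrograms weighted by (rank-1 or full-rank) SCMs.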
{ "cite_N": [ "@cite_35", "@cite_8", "@cite_29", "@cite_1", "@cite_0", "@cite_27", "@cite_10" ], "mid": [ "2787211894", "2168273590", "2021196544", "2412956798", "", "2113990625", "2117332620" ], "abstract": [ "This paper presents new statistical methods of multichannel audio source separation based on unified source and spatial models that, respectively, represent the generative process of latent source spectrograms and that of observed mixture spectrograms. One possibility of the source model is a factor model based on nonnegative matrix factorization that represents each time-frequency (TF) bin as the weighted sum of basis spectra. Another possibility is a mixture model inspired by latent Dirichlet allocation that exclusively classifies each TF bin into one of basis spectra. Similarly, the spatial model can either be a factor model that represents each TF bin as the weighted sum of source spectra or a mixture model that classifies each bin into one of those spectra. To unify these models in a principled manner and incorporate prior knowledge of a microphone array, we propose hierarchical Bayesian models of all the source–spatial combinations (factor–factor, mixture–factor, factor–mixture, and mixture–mixture models) and derive efficient Gibbs sampling algorithms for posterior inference. Experimental results showed that the proposed unified models outperformed the state-of-the-art method using only the spatial mixture model. Among the four unified models, the spatial factor model tended to work better than the spatial mixture model in exchange for larger computational cost, and the choice of source models had a little impact on the performance and computational cost.", "This paper presents new formulations and algorithms for multichannel extensions of non-negative matrix factorization (NMF). The formulations employ Hermitian positive semidefinite matrices to represent a multichannel version of non-negative elements. Multichannel Euclidean distance and multichannel Itakura-Saito (IS) divergence are defined based on appropriate statistical models utilizing multivariate complex Gaussian distributions. To minimize this distance divergence, efficient optimization algorithms in the form of multiplicative updates are derived by using properly designed auxiliary functions. Two methods are proposed for clustering NMF bases according to the estimated spatial property. Convolutive blind source separation (BSS) is performed by the multichannel extensions of NMF with the clustering mechanism. Experimental results show that 1) the derived multiplicative update rules exhibited good convergence behavior, and 2) BSS tasks for several music sources with two microphones and three instrumental parts were evaluated successfully.", "This paper addresses the problem of sound source separation from a multichannel microphone array capture via estimation of source spatial covariance matrix (SCM) of a short-time Fourier transformed mixture signal. In many conventional audio separation algorithms the source mixing parameter estimation is done separately for each frequency thus making them prone to errors and leading to suboptimal source estimates. In this paper we propose a SCM model which consists of a weighted sum of direction of arrival (DoA) kernels and estimate only the weights dependent on the source directions. 
In the proposed algorithm, the spatial properties of the sources become jointly optimized over all frequencies, leading to more coherent source estimates and mitigating the effect of spatial aliasing at high frequencies. The proposed SCM model is combined with a linear model for magnitudes and the parameter estimation is formulated in a complex-valued non-negative matrix factorization (CNMF) framework. Simulations consist of recordings done with a hand-held device sized array having multiple microphones embedded inside the device casing. Separation quality of the proposed algorithm is shown to exceed the performance of existing state of the art separation methods with two sources when evaluated by objective separation quality metrics.", "This paper addresses the determined blind source separation problem and proposes a new effective method unifying independent vector analysis (IVA) and nonnegative matrix factorization (NMF). IVA is a state-of-the-art technique that utilizes the statistical independence between sources in a mixture signal, and an efficient optimization scheme has been proposed for IVA. However, since the source model in IVA is based on a spherical multivariate distribution, IVA cannot utilize specific spectral structures such as the harmonic structures of pitched instrumental sounds. To solve this problem, we introduce NMF decomposition as the source model in IVA to capture the spectral structures. The formulation of the proposed method is derived from conventional multichannel NMF (MNMF), which reveals the relationship between MNMF and IVA. The proposed method can be optimized by the update rules of IVA and single-channel NMF. Experimental results show the efficacy of the proposed method compared with IVA and MNMF in terms of separation accuracy and convergence speed.", "", "We consider inference in a general data-driven object-based model of multichannel audio data, assumed generated as a possibly underdetermined convolutive mixture of source signals. We work in the short-time Fourier transform (STFT) domain, where convolution is routinely approximated as linear instantaneous mixing in each frequency band. Each source STFT is given a model inspired from nonnegative matrix factorization (NMF) with the Itakura-Saito divergence, which underlies a statistical model of superimposed Gaussian components. We address estimation of the mixing and source parameters using two methods. The first one consists of maximizing the exact joint likelihood of the multichannel data using an expectation-maximization (EM) algorithm. The second method consists of maximizing the sum of individual likelihoods of all channels using a multiplicative update algorithm inspired from NMF methodology. Our decomposition algorithms are applied to stereo audio source separation in various settings, covering blind and supervised separation, music and speech sources, synthetic instantaneous and convolutive mixtures, as well as professionally produced music recordings. Our EM method produces competitive results with respect to state-of-the-art as illustrated on two tasks from the international Signal Separation Evaluation Campaign (SiSEC 2008).", "We address the problem of blind audio source separation in the under-determined and convolutive case. 
The contribution of each source to the mixture channels in the time-frequency domain is modeled by a zero-mean Gaussian random vector with a full rank covariance matrix composed of two terms: a variance which represents the spectral properties of the source and which is modeled by a nonnegative matrix factorization (NMF) model and another full rank covariance matrix which encodes the spatial properties of the source contribution in the mixture. We address the estimation of these parameters by maximizing the likelihood of the mixture using an expectation-maximization (EM) algorithm. Theoretical propositions are corroborated by experimental studies on stereo reverberant music mixtures." ] }
1903.09366
2924136311
One problem in the application of reinforcement learning to real-world problems is the curse of dimensionality on the action space. Macro actions, a sequence of primitive actions, have been studied to diminish the dimensionality of the action space with regard to the time axis. However, previous studies relied on humans defining macro actions or assumed macro actions as repetitions of the same primitive actions. We present Factorized Macro Action Reinforcement Learning (FaMARL) which autonomously learns disentangled factor representation of a sequence of actions to generate macro actions that can be directly applied to general reinforcement learning algorithms. FaMARL exhibits higher scores than other reinforcement learning algorithms on environments that require an extensive amount of search.
Applying a sequence of actions in reinforcement learning has been studied @cite_21 @cite_17 @cite_6 @cite_19 . Fine Grained Action Repetition (FiGAR) successfully adopts macro actions into deep reinforcement learning @cite_21 , showing that Asynchronous Advantage Actor-Critic (A3C) @cite_15 , an asynchronous variant of a deep reinforcement learning algorithm, scores higher in Atari 2600 games when it learns the time scale for repeating an action together with the action itself than when it uses only primitive actions.
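A hypothetical gym-style wrapper below illustrates the mechanism FiGAR exploits, namely letting the agent choose both a primitive action and how long to repeat it; the four-tuple step interface and the set of repetition lengths are assumptions, and this is not the authors' implementation.

class ActionRepeatWrapper:
    # Executes one primitive action for a chosen number of environment steps.
    def __init__(self, env, repeat_choices=(1, 2, 4, 8)):
        self.env = env
        self.repeat_choices = repeat_choices

    def step(self, action, repeat_idx):
        total_reward, done, info = 0.0, False, {}
        for _ in range(self.repeat_choices[repeat_idx]):
            obs, reward, done, info = self.env.step(action)  # classic gym step signature assumed
            total_reward += reward
            if done:
                break
        return obs, total_reward, done, info

The policy then outputs two heads, one over primitive actions and one over repeat_idx, which is the time-scale decision FiGAR adds to A3C.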
{ "cite_N": [ "@cite_21", "@cite_6", "@cite_19", "@cite_15", "@cite_17" ], "mid": [ "2950462959", "2605102581", "2428834750", "2964043796", "2442341664" ], "abstract": [ "Reinforcement Learning algorithms can learn complex behavioral patterns for sequential decision making tasks wherein an agent interacts with an environment and acquires feedback in the form of rewards sampled from it. Traditionally, such algorithms make decisions, i.e., select actions to execute, at every single time step of the agent-environment interactions. In this paper, we propose a novel framework, Fine Grained Action Repetition (FiGAR), which enables the agent to decide the action as well as the time scale of repeating it. FiGAR can be used for improving any Deep Reinforcement Learning algorithm which maintains an explicit policy estimate by enabling temporal abstractions in the action space. We empirically demonstrate the efficacy of our framework by showing performance improvements on top of three policy search algorithms in different domains: Asynchronous Advantage Actor Critic in the Atari 2600 domain, Trust Region Policy Optimization in Mujoco domain and Deep Deterministic Policy Gradients in the TORCS car racing domain.", "", "Deep reinforcement learning has been shown to be a powerful framework for learning policies from complex high-dimensional sensory inputs to actions in complex tasks, such as the Atari domain. In this paper, we explore output representation modeling in the form of temporal abstraction to improve convergence and reliability of deep reinforcement learning approaches. We concentrate on macro-actions, and evaluate these on different Atari 2600 games, where we show that they yield significant improvements in learning speed. Additionally, we show that they can even achieve better scores than DQN. We offer analysis and explanation for both convergence and final results, revealing a problem deep RL approaches have with sparse reward signals.", "We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.", "We present a novel deep recurrent neural network architecture that learns to build implicit plans in an end-to-end manner by purely interacting with an environment in reinforcement learning setting. The network builds an internal plan, which is continuously updated upon observation of the next input from the environment. It can also partition this internal representation into contiguous sub- sequences by learning for how long the plan can be committed to - i.e. followed without re-planing. Combining these properties, the proposed model, dubbed STRategic Attentive Writer (STRAW) can learn high-level, temporally abstracted macro- actions of varying lengths that are solely learnt from data without any prior information. 
These macro-actions enable both structured exploration and economic computation. We experimentally demonstrate that STRAW delivers strong improvements on several ATARI games by employing temporally extended planning strategies (e.g. Ms. Pacman and Frostbite). It is at the same time a general algorithm that can be applied on any sequence data. To that end, we also show that when trained on text prediction task, STRAW naturally predicts frequent n-grams (instead of macro-actions), demonstrating the generality of the approach." ] }
1903.09366
2924136311
One problem in the application of reinforcement learning to real-world problems is the curse of dimensionality on the action space. Macro actions, a sequence of primitive actions, have been studied to diminish the dimensionality of the action space with regard to the time axis. However, previous studies relied on humans defining macro actions or assumed macro actions as repetitions of the same primitive actions. We present Factorized Macro Action Reinforcement Learning (FaMARL) which autonomously learns disentangled factor representation of a sequence of actions to generate macro actions that can be directly applied to general reinforcement learning algorithms. FaMARL exhibits higher scores than other reinforcement learning algorithms on environments that require an extensive amount of search.
Hausknecht proposed using a parameterized continuous action space in the reinforcement learning framework @cite_10 . This approach, however, is limited by the fact that an action still has to be selected at every time step and that humans need to parameterize the actions. FaMARL can be viewed as an extension of this model along the time axis.
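Purely as an illustration of what a parameterized action looks like, the hypothetical structure below pairs a discrete action type with its continuous parameters; the field names and the example action types are assumptions, not taken from the cited work.

from dataclasses import dataclass
from typing import Tuple

@dataclass
class ParameterizedAction:
    action_type: int            # e.g. 0 = dash, 1 = turn, 2 = kick (discrete head of the policy)
    params: Tuple[float, ...]   # continuous parameters for the chosen type, e.g. (power, direction)

A policy for such a space typically has one output head producing logits over action_type and one head per type producing its continuous parameters.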
{ "cite_N": [ "@cite_10" ], "mid": [ "2253991908" ], "abstract": [ "Recent work has shown that deep neural networks are capable of approximating both value functions and policies in reinforcement learning domains featuring continuous state and action spaces. However, to the best of our knowledge no previous work has succeeded at using deep neural networks in structured (parameterized) continuous action spaces. To fill this gap, this paper focuses on learning within the domain of simulated RoboCup soccer, which features a small set of discrete action types, each of which is parameterized with continuous variables. The best learned agent can score goals more reliably than the 2012 RoboCup champion agent. As such, this paper represents a successful extension of deep reinforcement learning to the class of parameterized action space MDPs." ] }
1903.09291
2951868790
Structured pruning of filters or neurons has received increased focus for compressing convolutional neural networks. Most existing methods rely on multi-stage optimizations in a layer-wise manner for iteratively pruning and retraining which may not be optimal and may be computation intensive. Besides, these methods are designed for pruning a specific structure, such as filter or block structures without jointly pruning heterogeneous structures. In this paper, we propose an effective structured pruning approach that jointly prunes filters as well as other structures in an end-to-end manner. To accomplish this, we first introduce a soft mask to scale the output of these structures by defining a new objective function with sparsity regularization to align the output of baseline and network with this mask. We then effectively solve the optimization problem by generative adversarial learning (GAL), which learns a sparse soft mask in a label-free and an end-to-end manner. By forcing more scaling factors in the soft mask to zero, the fast iterative shrinkage-thresholding algorithm (FISTA) can be leveraged to fast and reliably remove the corresponding structures. Extensive experiments demonstrate the effectiveness of GAL on different datasets, including MNIST, CIFAR-10 and ImageNet ILSVRC 2012. For example, on ImageNet ILSVRC 2012, the pruned ResNet-50 achieves 10.88 Top-5 error and results in a factor of 3.7x speedup. This significantly outperforms state-of-the-art methods.
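As a concrete illustration of the sparsity mechanism mentioned in the abstract, the sketch below shows the l1 proximal (soft-thresholding) step inside one FISTA iteration on the soft mask; the symbols lr, lam, and the momentum bookkeeping are assumptions, and the paper's exact schedule may differ.

import torch

def soft_threshold(mask, lam, lr):
    # Proximal step for an l1 penalty: shrinks soft-mask entries and sets small ones exactly to zero.
    return torch.sign(mask) * torch.clamp(mask.abs() - lr * lam, min=0.0)

def fista_step(mask, mask_prev, grad_fn, lr, lam, t_prev):
    # grad_fn(y) returns the gradient of the data term with respect to the mask at point y.
    t = (1.0 + (1.0 + 4.0 * t_prev ** 2) ** 0.5) / 2.0
    y = mask + ((t_prev - 1.0) / t) * (mask - mask_prev)      # momentum extrapolation
    new_mask = soft_threshold(y - lr * grad_fn(y), lam, lr)
    return new_mask, t

Structures whose mask entries reach exactly zero can then be removed from the network.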
Network pruning focuses on removing network connections in a non-structured or structured manner as introduced in Section . Early work in non-structured pruning @cite_37 and @cite_8 proposed saliency measurements to remove redundant weights, determined by the second-order derivative matrix of the loss function with respect to the weights. Han @cite_10 @cite_46 proposed iterative thresholding to remove unimportant weights with small absolute values. Guo @cite_54 proposed connection splicing to avoid incorrect weight pruning, which can reduce the accuracy loss of the pruned network. In contrast, structured pruning can reduce the network size and achieve fast inference without specialized packages. Li @cite_36 proposed magnitude-based pruning to remove filters and their corresponding feature maps by calculating the @math -norm of filters in a layer-wise manner. A Taylor-expansion-based criterion was proposed in @cite_52 to iteratively prune one filter and then fine-tune the pruned network. This procedure is, however, prohibitively costly for deep networks. Unlike these multi-stage and layer-wise pruning methods, our method prunes the network with a sparse soft mask learned by end-to-end training and achieves much better results, as quantitatively shown in our experiments.
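A minimal PyTorch-style sketch of the magnitude-based criterion attributed to Li above follows: filters of one convolutional layer are ranked by their l1-norm and the smallest ones are removed; keep_ratio and the function name are illustrative assumptions.

import torch

def prune_filters_by_l1(conv_weight, keep_ratio=0.7):
    # conv_weight: (out_channels, in_channels, kH, kW); returns the kept filters and their indices
    scores = conv_weight.abs().sum(dim=(1, 2, 3))             # l1-norm of each filter
    n_keep = max(1, int(keep_ratio * scores.numel()))
    keep_idx = torch.argsort(scores, descending=True)[:n_keep]
    return conv_weight[keep_idx], keep_idx                     # keep_idx also selects matching BN statistics

In the multi-stage pipelines criticized here, such a step is applied layer by layer and interleaved with retraining.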
{ "cite_N": [ "@cite_37", "@cite_8", "@cite_36", "@cite_54", "@cite_52", "@cite_46", "@cite_10" ], "mid": [ "2114766824", "2125389748", "2962965870", "2963981420", "2963287528", "2964299589", "2963674932" ], "abstract": [ "We have used information-theoretic ideas to derive a class of practical and nearly optimal schemes for adapting the size of a neural network. By removing unimportant weights from a network, several improvements can be expected: better generalization, fewer training examples required, and improved speed of learning and or classification. The basic idea is to use second-derivative information to make a tradeoff between network complexity and training set error. Experiments confirm the usefulness of the methods on a real-world application.", "We investigate the use of information from all second order derivatives of the error function to perform network pruning (i.e., removing unimportant weights from a trained network) in order to improve generalization, simplify networks, reduce hardware or storage requirements, increase the speed of further training, and in some cases enable rule extraction. Our method, Optimal Brain Surgeon (OBS), is Significantly better than magnitude-based methods and Optimal Brain Damage [Le Cun, Denker and Solla, 1990], which often remove the wrong weights. OBS permits the pruning of more weights than other methods (for the same error on the training set), and thus yields better generalization on test data. Crucial to OBS is a recursion relation for calculating the inverse Hessian matrix H-1 from training data and structural information of the net. OBS permits a 90 , a 76 , and a 62 reduction in weights over backpropagation with weight decay on three benchmark MONK's problems [, 1991]. Of OBS, Optimal Brain Damage, and magnitude-based methods, only OBS deletes the correct weights from a trained XOR network in every case. Finally, whereas Sejnowski and Rosenberg [1987] used 18,000 weights in their NETtalk network, we used OBS to prune a network to just 1560 weights, yielding better generalization.", "The success of CNNs in various applications is accompanied by a significant increase in the computation and parameter storage costs. Recent efforts toward reducing these overheads involve pruning and compressing the weights of various layers without hurting original accuracy. However, magnitude-based pruning of weights reduces a significant number of parameters from the fully connected layers and may not adequately reduce the computation costs in the convolutional layers due to irregular sparsity in the pruned networks. We present an acceleration method for CNNs, where we prune filters from CNNs that are identified as having a small effect on the output accuracy. By removing whole filters in the network together with their connecting feature maps, the computation costs are reduced significantly. In contrast to pruning weights, this approach does not result in sparse connectivity patterns. Hence, it does not need the support of sparse convolution libraries and can work with existing efficient BLAS libraries for dense matrix multiplications. We show that even simple filter pruning techniques can reduce inference costs for VGG-16 by up to 34 and ResNet-110 by up to 38 on CIFAR10 while regaining close to the original accuracy by retraining the networks.", "Deep learning has become a ubiquitous technology to improve machine intelligence. 
However, most of the existing deep models are structurally very complex, making them difficult to be deployed on the mobile platforms with limited computational power. In this paper, we propose a novel network compression method called dynamic network surgery, which can remarkably reduce the network complexity by making on-the-fly connection pruning. Unlike the previous methods which accomplish this task in a greedy way, we properly incorporate connection splicing into the whole process to avoid incorrect pruning and make it as a continual network maintenance. The effectiveness of our method is proved with experiments. Without any accuracy loss, our method can efficiently compress the number of parameters in LeNet-5 and AlexNet by a factor of 108x and 17.7x respectively, proving that it outperforms the recent pruning method by considerable margins. Code and some models are available at https: github.com yiwenguo Dynamic-Network-Surgery.", "We propose a new formulation for pruning convolutional kernels in neural networks to enable efficient inference. We interleave greedy criteria-based pruning with fine-tuning by backpropagation-a computationally efficient procedure that maintains good generalization in the pruned network. We propose a new criterion based on Taylor expansion that approximates the change in the cost function induced by pruning network parameters. We focus on transfer learning, where large pretrained networks are adapted to specialized tasks. The proposed criterion demonstrates superior performance compared to other criteria, e.g. the norm of kernel weights or feature map activation, for pruning large CNNs after adaptation to fine-grained classification tasks (Birds-200 and Flowers-102) relaying only on the first order gradient information. We also show that pruning can lead to more than 10x theoretical reduction in adapted 3D-convolutional filters with a small drop in accuracy in a recurrent gesture classifier. Finally, we show results for the large-scale ImageNet dataset to emphasize the flexibility of our approach.", "Abstract: Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems with limited hardware resources. To address this limitation, we introduce \"deep compression\", a three stage pipeline: pruning, trained quantization and Huffman coding, that work together to reduce the storage requirement of neural networks by 35x to 49x without affecting their accuracy. Our method first prunes the network by learning only the important connections. Next, we quantize the weights to enforce weight sharing, finally, we apply Huffman coding. After the first two steps we retrain the network to fine tune the remaining connections and the quantized centroids. Pruning, reduces the number of connections by 9x to 13x; Quantization then reduces the number of bits that represent each connection from 32 to 5. On the ImageNet dataset, our method reduced the storage required by AlexNet by 35x, from 240MB to 6.9MB, without loss of accuracy. Our method reduced the size of VGG-16 by 49x from 552MB to 11.3MB, again with no loss of accuracy. This allows fitting the model into on-chip SRAM cache rather than off-chip DRAM memory. Our compression method also facilitates the use of complex neural networks in mobile applications where application size and download bandwidth are constrained. 
Benchmarked on CPU, GPU and mobile GPU, compressed network has 3x to 4x layerwise speedup and 3x to 7x better energy efficiency.", "Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems. Also, conventional networks fix the architecture before training starts; as a result, training cannot improve the architecture. To address these limitations, we describe a method to reduce the storage and computation required by neural networks by an order of magnitude without affecting their accuracy by learning only the important connections. Our method prunes redundant connections using a three-step method. First, we train the network to learn which connections are important. Next, we prune the unimportant connections. Finally, we retrain the network to fine tune the weights of the remaining connections. On the ImageNet dataset, our method reduced the number of parameters of AlexNet by a factor of 9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar experiments with VGG-16 found that the total number of parameters can be reduced by 13x, from 138 million to 10.3 million, again with no loss of accuracy." ] }
1903.09291
2951868790
Structured pruning of filters or neurons has received increased focus for compressing convolutional neural networks. Most existing methods rely on multi-stage optimizations in a layer-wise manner for iteratively pruning and retraining which may not be optimal and may be computation intensive. Besides, these methods are designed for pruning a specific structure, such as filter or block structures without jointly pruning heterogeneous structures. In this paper, we propose an effective structured pruning approach that jointly prunes filters as well as other structures in an end-to-end manner. To accomplish this, we first introduce a soft mask to scale the output of these structures by defining a new objective function with sparsity regularization to align the output of baseline and network with this mask. We then effectively solve the optimization problem by generative adversarial learning (GAL), which learns a sparse soft mask in a label-free and an end-to-end manner. By forcing more scaling factors in the soft mask to zero, the fast iterative shrinkage-thresholding algorithm (FISTA) can be leveraged to fast and reliably remove the corresponding structures. Extensive experiments demonstrate the effectiveness of GAL on different datasets, including MNIST, CIFAR-10 and ImageNet ILSVRC 2012. For example, on ImageNet ILSVRC 2012, the pruned ResNet-50 achieves 10.88 Top-5 error and results in a factor of 3.7x speedup. This significantly outperforms state-of-the-art methods.
In line with our work, sparse scaling parameters have been learned in batch normalization (BN) @cite_60 @cite_5 or in specific structures @cite_58 by supervised training with a class-labelled dataset. In contrast, our approach obtains the sparse soft mask from label-free data and can transfer to other scenarios with unseen labels.
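For reference, the supervised baselines cited here can be sketched as an l1 penalty on batch-normalization scaling factors that is added to the label-dependent task loss during training, as below; this sketches the baselines' mechanism, not the label-free mask learning proposed in this paper, and the penalty weight is an assumption.

import torch.nn as nn

def bn_gamma_l1(model, lam=1e-4):
    # l1 penalty on BN scaling factors (gamma); channels whose gamma shrinks to zero are pruned afterwards.
    penalty = 0.0
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            penalty = penalty + m.weight.abs().sum()
    return lam * penalty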
{ "cite_N": [ "@cite_5", "@cite_58", "@cite_60" ], "mid": [ "2964001144", "2963382930", "2962851801" ], "abstract": [ "Model pruning has become a useful technique that improves the computational efficiency of deep learning, making it possible to deploy solutions on resource- limited scenarios. A widely-used practice in relevant work assumes that a smaller- norm parameter or feature plays a less informative role at the inference time. In this paper, we propose a channel pruning technique for accelerating the computations of deep convolutional neural networks (CNNs), which does not critically rely on this assumption. Instead, it focuses on direct simplification of the channel-to-channel computation graph of a CNN without the need of performing a computational difficult and not always useful task of making high-dimensional tensors of CNN structured sparse. Our approach takes two stages: the first being to adopt an end-to-end stochastic training method that eventually forces the outputs of some channels being constant, and the second being to prune those constant channels from the original neural network by adjusting the biases of their impacting layers such that the resulting compact model can be quickly fine-tuned. Our approach is mathematically appealing from an optimization perspective and easy to reproduce. We experimented our approach through several image learning benchmarks and demonstrate its interesting aspects and the competitive performance.", "Deep convolutional neural networks have liberated its extraordinary power on various tasks. However, it is still very challenging to deploy state-of-the-art models into real-world applications due to their high computational complexity. How can we design a compact and effective network without massive experiments and expert knowledge? In this paper, we propose a simple and effective framework to learn and prune deep models in an end-to-end manner. In our framework, a new type of parameter – scaling factor is first introduced to scale the outputs of specific structures, such as neurons, groups or residual blocks. Then we add sparsity regularizations on these factors, and solve this optimization problem by a modified stochastic Accelerated Proximal Gradient (APG) method. By forcing some of the factors to zero, we can safely remove the corresponding structures, thus prune the unimportant parts of a CNN. Comparing with other structure selection methods that may need thousands of trials or iterative fine-tuning, our method is trained fully end-to-end in one training pass without bells and whistles. We evaluate our method, Sparse Structure Selection with several state-of-the-art CNNs, and demonstrate very promising results with adaptive depth and width selection. Code is available at: https: github.com huangzehao sparse-structure-selection.", "The deployment of deep convolutional neural networks (CNNs) in many real world applications is largely hindered by their high computational cost. In this paper, we propose a novel learning scheme for CNNs to simultaneously 1) reduce the model size; 2) decrease the run-time memory footprint; and 3) lower the number of computing operations, without compromising accuracy. This is achieved by enforcing channel-level sparsity in the network in a simple but effective way. Different from many existing approaches, the proposed method directly applies to modern CNN architectures, introduces minimum overhead to the training process, and requires no special software hardware accelerators for the resulting models. 
We call our approach network slimming, which takes wide and large networks as input models, but during training insignificant channels are automatically identified and pruned afterwards, yielding thin and compact models with comparable accuracy. We empirically demonstrate the effectiveness of our approach with several state-of-the-art CNN models, including VGGNet, ResNet and DenseNet, on various image classification datasets. For VGGNet, a multi-pass version of network slimming gives a 20× reduction in model size and a 5× reduction in computing operations." ] }
1903.09291
2951868790
Structured pruning of filters or neurons has received increased focus for compressing convolutional neural networks. Most existing methods rely on multi-stage optimizations in a layer-wise manner for iteratively pruning and retraining which may not be optimal and may be computation intensive. Besides, these methods are designed for pruning a specific structure, such as filter or block structures without jointly pruning heterogeneous structures. In this paper, we propose an effective structured pruning approach that jointly prunes filters as well as other structures in an end-to-end manner. To accomplish this, we first introduce a soft mask to scale the output of these structures by defining a new objective function with sparsity regularization to align the output of baseline and network with this mask. We then effectively solve the optimization problem by generative adversarial learning (GAL), which learns a sparse soft mask in a label-free and an end-to-end manner. By forcing more scaling factors in the soft mask to zero, the fast iterative shrinkage-thresholding algorithm (FISTA) can be leveraged to fast and reliably remove the corresponding structures. Extensive experiments demonstrate the effectiveness of GAL on different datasets, including MNIST, CIFAR-10 and ImageNet ILSVRC 2012. For example, on ImageNet ILSVRC 2012, the pruned ResNet-50 achieves 10.88 Top-5 error and results in a factor of 3.7x speedup. This significantly outperforms state-of-the-art methods.
While state-of-the-art CNNs with compact architectures have been explored with hand-crafted design @cite_20 @cite_49 @cite_43 , automatic search of neural architectures is also becoming popular. Recent work on searching models with reinforcement learning @cite_55 @cite_31 @cite_6 @cite_7 or genetic algorithms @cite_11 @cite_25 greatly improves the performance of neural networks. However, the search space of these methods is extremely large, which requires significant computational overhead to search for and select the best model from hundreds of candidates. In contrast, our method learns a compact neural architecture in a single training pass, which is more efficient. Group sparsity regularization on filters @cite_41 or on multiple structures including filter shapes and layers @cite_15 has been proposed to sparsify these structures during training. This is also less efficient and cannot reliably remove the sparsified structures, since only stochastic gradient descent is used.
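A group-lasso term over filters, in the spirit of the structured-sparsity regularizers cited above, can be sketched as follows; the grouping (one group per output filter) and the penalty weight are assumptions and do not reproduce the exact formulations of @cite_41 or @cite_15 .

import torch

def filter_group_lasso(conv_weight, lam=1e-4):
    # conv_weight: (out_channels, in_channels, kH, kW); one group per output filter
    per_filter_norm = conv_weight.flatten(start_dim=1).norm(p=2, dim=1)
    return lam * per_filter_norm.sum()   # added to the training loss to push whole filters toward zero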
{ "cite_N": [ "@cite_7", "@cite_15", "@cite_41", "@cite_55", "@cite_6", "@cite_43", "@cite_49", "@cite_31", "@cite_25", "@cite_20", "@cite_11" ], "mid": [ "2886851211", "2963000224", "566555209", "2556833785", "2964081807", "2778955544", "2724359148", "2963374479", "", "2612445135", "2949264490" ], "abstract": [ "Model compression is an effective technique to efficiently deploy neural network models on mobile devices which have limited computation resources and tight power budgets. Conventional model compression techniques rely on hand-crafted features and require domain experts to explore the large design space trading off among model size, speed, and accuracy, which is usually sub-optimal and time-consuming. In this paper, we propose AutoML for Model Compression (AMC) which leverages reinforcement learning to efficiently sample the design space and can improve the model compression quality. We achieved state-of-the-art model compression results in a fully automated way without any human efforts. Under 4 ( ) FLOPs reduction, we achieved 2.7 better accuracy than the hand-crafted model compression method for VGG-16 on ImageNet. We applied this automated, push-the-button compression pipeline to MobileNet-V1 and achieved a speedup of 1.53 ( ) on the GPU (Titan Xp) and 1.95 ( ) on an Android phone (Google Pixel 1), with negligible loss of accuracy.", "High demand for computation resources severely hinders deployment of large-scale Deep Neural Networks (DNN) in resource constrained devices. In this work, we propose a Structured Sparsity Learning (SSL) method to regularize the structures (i.e., filters, channels, filter shapes, and layer depth) of DNNs. SSL can: (1) learn a compact structure from a bigger DNN to reduce computation cost; (2) obtain a hardware-friendly structured sparsity of DNN to efficiently accelerate the DNN's evaluation. Experimental results show that SSL achieves on average 5.1 × and 3.1 × speedups of convolutional layer computation of AlexNet against CPU and GPU, respectively, with off-the-shelf libraries. These speedups are about twice speedups of non-structured sparsity; (3) regularize the DNN structure to improve classification accuracy. The results show that for CIFAR-10, regularization on layer depth reduces a 20-layer Deep Residual Network (ResNet) to 18 layers while improves the accuracy from 91.25 to 92.60 , which is still higher than that of original ResNet with 32 layers. For AlexNet, SSL reduces the error by 1 .", "We revisit the idea of brain damage, i.e. the pruning of the coefficients of a neural network, and suggest how brain damage can be modified and used to speedup convolutional layers in ConvNets. The approach uses the fact that many efficient implementations reduce generalized convolutions to matrix multiplications. The suggested brain damage process prunes the convolutional kernel tensor in a group-wise fashion. After such pruning, convolutions can be reduced to multiplications of thinned dense matrices, which leads to speedup. We investigate different ways to add group-wise prunning to the learning process, and show that severalfold speedups of convolutional layers can be attained using group-sparsity regularizers. Our approach can adjust the shapes of the receptive fields in the convolutional layers, and even prune excessive feature maps from ConvNets, all in data-driven way.", "At present, designing convolutional neural network (CNN) architectures requires both human expertise and labor. 
New architectures are handcrafted by careful experimentation or modified from a handful of existing networks. We introduce MetaQNN, a meta-modeling algorithm based on reinforcement learning to automatically generate high-performing CNN architectures for a given learning task. The learning agent is trained to sequentially choose CNN layers using @math -learning with an @math -greedy exploration strategy and experience replay. The agent explores a large but finite space of possible architectures and iteratively discovers designs with improved performance on the learning task. On image classification benchmarks, the agent-designed networks (consisting of only standard convolution, pooling, and fully-connected layers) beat existing networks designed with the same layer types and are competitive against the state-of-the-art methods that use more complex layer types. We also outperform existing meta-modeling approaches for network design on image classification tasks.", "Developing neural network image classification models often requires significant architecture engineering. In this paper, we study a method to learn the model architectures directly on the dataset of interest. As this approach is expensive when the dataset is large, we propose to search for an architectural building block on a small dataset and then transfer the block to a larger dataset. The key contribution of this work is the design of a new search space (which we call the \"NASNet search space\") which enables transferability. In our experiments, we search for the best convolutional layer (or \"cell\") on the CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking together more copies of this cell, each with their own parameters to design a convolutional architecture, which we name a \"NASNet architecture\". We also introduce a new regularization technique called ScheduledDropPath that significantly improves generalization in the NASNet models. On CIFAR-10 itself, a NASNet found by our method achieves 2.4 error rate, which is state-of-the-art. Although the cell is not searched for directly on ImageNet, a NASNet constructed from the best cell achieves, among the published works, state-of-the-art accuracy of 82.7 top-1 and 96.2 top-5 on ImageNet. Our model is 1.2 better in top-1 accuracy than the best human-invented architectures while having 9 billion fewer FLOPS - a reduction of 28 in computational demand from the previous state-of-the-art model. When evaluated at different levels of computational cost, accuracies of NASNets exceed those of the state-of-the-art human-designed models. For instance, a small version of NASNet also achieves 74 top-1 accuracy, which is 3.1 better than equivalently-sized, state-of-the-art models for mobile platforms. Finally, the image features learned from image classification are generically useful and can be transferred to other computer vision problems. On the task of object detection, the learned features by NASNet used with the Faster-RCNN framework surpass state-of-the-art by 4.0 achieving 43.1 mAP on the COCO dataset.", "In this paper, we present a simple and modularized neural network architecture, named interleaved group convolutional neural networks (IGCNets). The main point lies in a novel building block, a pair of two successive interleaved group convolutions: primary group convolution and secondary group convolution. 
The two group convolutions are complementary: (i) the convolution on each partition in primary group convolution is a spatial convolution, while on each partition in secondary group convolution, the convolution is a point-wise convolution; (ii) the channels in the same secondary partition come from different primary partitions. We discuss one representative advantage: Wider than a regular convolution with the number of parameters and the computation complexity preserved. We also show that regular convolutions, group convolution with summation fusion, and the Xception block are special cases of interleaved group convolutions. Empirical results over standard benchmarks, CIFAR-10, CIFAR-100, SVHN and ImageNet demonstrate that our networks are more efficient in using parameters and computation complexity with similar or higher accuracy.", "We introduce an extremely computation-efficient CNN architecture named ShuffleNet, which is designed specially for mobile devices with very limited computing power (e.g., 10-150 MFLOPs). The new architecture utilizes two new operations, pointwise group convolution and channel shuffle, to greatly reduce computation cost while maintaining accuracy. Experiments on ImageNet classification and MS COCO object detection demonstrate the superior performance of ShuffleNet over other structures, e.g. lower top-1 error (absolute 7.8 ) than recent MobileNet on ImageNet classification task, under the computation budget of 40 MFLOPs. On an ARM-based mobile device, ShuffleNet achieves 13x actual speedup over AlexNet while maintaining comparable accuracy.", "Neural networks are powerful and flexible models that work well for many difficult learning tasks in image, speech and natural language understanding. Despite their success, neural networks are still hard to design. In this paper, we use a recurrent network to generate the model descriptions of neural networks and train this RNN with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation set. On the CIFAR-10 dataset, our method, starting from scratch, can design a novel network architecture that rivals the best human-invented architecture in terms of test set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is 0.09 percent better and 1.05x faster than the previous state-of-the-art model that used a similar architectural scheme. On the Penn Treebank dataset, our model can compose a novel recurrent cell that outperforms the widely-used LSTM cell, and other state-of-the-art baselines. Our cell achieves a test set perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than the previous state-of-the-art model. The cell can also be transferred to the character language modeling task on PTB and achieves a state-of-the-art perplexity of 1.214.", "", "We present a class of efficient models called MobileNets for mobile and embedded vision applications. MobileNets are based on a streamlined architecture that uses depth-wise separable convolutions to build light weight deep neural networks. We introduce two simple global hyper-parameters that efficiently trade off between latency and accuracy. These hyper-parameters allow the model builder to choose the right sized model for their application based on the constraints of the problem. We present extensive experiments on resource and accuracy tradeoffs and show strong performance compared to other popular models on ImageNet classification. 
We then demonstrate the effectiveness of MobileNets across a wide range of applications and use cases including object detection, finegrain classification, face attributes and large scale geo-localization.", "Neural networks have proven effective at solving difficult problems but designing their architectures can be challenging, even for image classification problems alone. Our goal is to minimize human participation, so we employ evolutionary algorithms to discover such networks automatically. Despite significant computational requirements, we show that it is now possible to evolve models with accuracies within the range of those published in the last year. Specifically, we employ simple evolutionary techniques at unprecedented scales to discover models for the CIFAR-10 and CIFAR-100 datasets, starting from trivial initial conditions and reaching accuracies of 94.6 (95.6 for ensemble) and 77.0 , respectively. To do this, we use novel and intuitive mutation operators that navigate large search spaces; we stress that no human participation is required once evolution starts and that the output is a fully-trained model. Throughout this work, we place special emphasis on the repeatability of results, the variability in the outcomes and the computational requirements." ] }
1903.09291
2951868790
Structured pruning of filters or neurons has received increased focus for compressing convolutional neural networks. Most existing methods rely on multi-stage optimizations in a layer-wise manner for iteratively pruning and retraining which may not be optimal and may be computation intensive. Besides, these methods are designed for pruning a specific structure, such as filter or block structures without jointly pruning heterogeneous structures. In this paper, we propose an effective structured pruning approach that jointly prunes filters as well as other structures in an end-to-end manner. To accomplish this, we first introduce a soft mask to scale the output of these structures by defining a new objective function with sparsity regularization to align the output of baseline and network with this mask. We then effectively solve the optimization problem by generative adversarial learning (GAL), which learns a sparse soft mask in a label-free and an end-to-end manner. By forcing more scaling factors in the soft mask to zero, the fast iterative shrinkage-thresholding algorithm (FISTA) can be leveraged to fast and reliably remove the corresponding structures. Extensive experiments demonstrate the effectiveness of GAL on different datasets, including MNIST, CIFAR-10 and ImageNet ILSVRC 2012. For example, on ImageNet ILSVRC 2012, the pruned ResNet-50 achieves 10.88 Top-5 error and results in a factor of 3.7x speedup. This significantly outperforms state-of-the-art methods.
The proposed generative adversarial learning for structured pruning is also related to knowledge distillation (KD) to a certain extent. KD transfers knowledge from the teacher to the student using different kinds of knowledge (e.g., dark knowledge @cite_4 @cite_51 and attention @cite_32 ). Hinton @cite_4 introduced dark knowledge for model compression, which uses the softened final output of a complicated teacher network to teach a small student network. Romero @cite_51 proposed FitNets to train the student network by combining dark knowledge and the knowledge from the teacher's hint layer. Zagoruyko @cite_32 transferred knowledge from the attention maps of the teacher's hidden layers to improve the performance of a student network. Unlike these methods, we do not require labels to train the pruned network. Furthermore, we directly copy the architecture of the student network from the teacher, rather than having it designed by experts, and then automatically learn how to prune the student network.
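The dark-knowledge term used by these KD methods can be sketched as a KL divergence between temperature-softened teacher and student outputs, as below; the temperature value is an assumption, and in the label-free setting considered in this paper only this soft term (without the usual hard-label cross-entropy) would apply.

import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=4.0):
    # Softened teacher probabilities supervise the student; T*T rescales gradients as in standard KD.
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T * T)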
{ "cite_N": [ "@cite_51", "@cite_4", "@cite_32" ], "mid": [ "2964118293", "1821462560", "2561238782" ], "abstract": [ "Abstract: While depth tends to improve network performances, it also makes gradient-based training more difficult since deeper networks tend to be more non-linear. The recently proposed knowledge distillation approach is aimed at obtaining small and fast-to-execute models, and it has shown that a student network could imitate the soft output of a larger teacher network or ensemble of networks. In this paper, we extend this idea to allow the training of a student that is deeper and thinner than the teacher, using not only the outputs but also the intermediate representations learned by the teacher as hints to improve the training process and final performance of the student. Because the student intermediate hidden layer will generally be smaller than the teacher's intermediate hidden layer, additional parameters are introduced to map the student hidden layer to the prediction of the teacher hidden layer. This allows one to train deeper students that can generalize better or run faster, a trade-off that is controlled by the chosen student capacity. For example, on CIFAR-10, a deep student network with almost 10.4 times less parameters outperforms a larger, state-of-the-art teacher network.", "A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.", "Attention plays a critical role in human visual experience. Furthermore, it has recently been demonstrated that attention can also play an important role in the context of applying artificial neural networks to a variety of tasks from fields such as computer vision and NLP. In this work we show that, by properly defining attention for convolutional neural networks, we can actually use this type of information in order to significantly improve the performance of a student CNN network by forcing it to mimic the attention maps of a powerful teacher network. To that end, we propose several novel methods of transferring attention, showing consistent improvement across a variety of datasets and convolutional neural network architectures." ] }
1903.09291
2951868790
Structured pruning of filters or neurons has received increased focus for compressing convolutional neural networks. Most existing methods rely on multi-stage optimizations in a layer-wise manner for iteratively pruning and retraining which may not be optimal and may be computation intensive. Besides, these methods are designed for pruning a specific structure, such as filter or block structures without jointly pruning heterogeneous structures. In this paper, we propose an effective structured pruning approach that jointly prunes filters as well as other structures in an end-to-end manner. To accomplish this, we first introduce a soft mask to scale the output of these structures by defining a new objective function with sparsity regularization to align the output of baseline and network with this mask. We then effectively solve the optimization problem by generative adversarial learning (GAL), which learns a sparse soft mask in a label-free and an end-to-end manner. By forcing more scaling factors in the soft mask to zero, the fast iterative shrinkage-thresholding algorithm (FISTA) can be leveraged to fast and reliably remove the corresponding structures. Extensive experiments demonstrate the effectiveness of GAL on different datasets, including MNIST, CIFAR-10 and ImageNet ILSVRC 2012. For example, on ImageNet ILSVRC 2012, the pruned ResNet-50 achieves 10.88 Top-5 error and results in a factor of 3.7x speedup. This significantly outperforms state-of-the-art methods.
Note that our approach is orthogonal to other compression approaches such as low-rank decomposition @cite_40 @cite_14 @cite_30 @cite_21 @cite_56 and parameter quantization @cite_26 @cite_24 @cite_45 . Our approach can be combined with these methods to achieve higher compression and speedup rates.
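As one example of such an orthogonal technique, a truncated-SVD low-rank factorization of a dense weight matrix can be sketched as follows; the chosen rank is a free parameter and the function name is an illustrative assumption.

import numpy as np

def low_rank_factorize(W, rank):
    # Replace a dense layer W (out x in) with two thinner layers so that W ~= W1 @ W2.
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    W1 = U[:, :rank] * s[:rank]   # (out, rank)
    W2 = Vt[:rank, :]             # (rank, in)
    return W1, W2

The factorized layers would then be fine-tuned, and the same pruned network could additionally be quantized to lower precision.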
{ "cite_N": [ "@cite_30", "@cite_14", "@cite_26", "@cite_21", "@cite_56", "@cite_24", "@cite_40", "@cite_45" ], "mid": [ "2962988160", "2963048316", "", "1902041153", "2894994475", "2963122961", "2167215970", "2963521187" ], "abstract": [ "Abstract: Although the latest high-end smartphone has powerful CPU and GPU, running deeper convolutional neural networks (CNNs) for complex tasks such as ImageNet classification on mobile devices is challenging. To deploy deep CNNs on mobile devices, we present a simple and effective scheme to compress the entire CNN, which we call one-shot whole network compression. The proposed scheme consists of three steps: (1) rank selection with variational Bayesian matrix factorization, (2) Tucker decomposition on kernel tensor, and (3) fine-tuning to recover accumulated loss of accuracy, and each step can be easily implemented using publicly available tools. We demonstrate the effectiveness of the proposed scheme by testing the performance of various compressed CNNs (AlexNet, VGGS, GoogLeNet, and VGG-16) on the smartphone. Significant reductions in model size, runtime, and energy consumption are obtained, at the cost of small loss in accuracy. In addition, we address the important implementation level issue on 1?1 convolution, which is a key operation of inception module of GoogLeNet as well as CNNs compressed by our proposed scheme.", "Abstract: We propose a simple two-step approach for speeding up convolution layers within large convolutional neural networks based on tensor decomposition and discriminative fine-tuning. Given a layer, we use non-linear least squares to compute a low-rank CP-decomposition of the 4D convolution kernel tensor into a sum of a small number of rank-one tensors. At the second step, this decomposition is used to replace the original convolutional layer with a sequence of four convolutional layers with small kernels. After such replacement, the entire network is fine-tuned on the training data using standard backpropagation process. We evaluate this approach on two CNNs and show that it is competitive with previous approaches, leading to higher obtained CPU speedups at the cost of lower accuracy drops for the smaller of the two networks. Thus, for the 36-class character classification CNN, our approach obtains a 8.5x CPU speedup of the whole network with only minor accuracy drop (1 from 91 to 90 ). For the standard ImageNet architecture (AlexNet), the approach speeds up the second convolution layer by a factor of 4x at the cost of @math increase of the overall top-5 classification error.", "", "This paper aims to accelerate the test-time computation of deep convolutional neural networks (CNNs). Unlike existing methods that are designed for approximating linear filters or linear responses, our method takes the nonlinear units into account. We minimize the reconstruction error of the nonlinear responses, subject to a low-rank constraint which helps to reduce the complexity of filters. We develop an effective solution to this constrained nonlinear optimization problem. An algorithm is also presented for reducing the accumulated error when multiple layers are approximated. A whole-model speedup ratio of 4× is demonstrated on a large network trained for ImageNet, while the top-5 error rate is only increased by 0.9 . 
Our accelerated model has a comparably fast speed as the “AlexNet” [11], but is 4.7 more accurate.", "Convolutional neural networks (CNNs) have achieved remarkable success in various computer vision tasks, which are extremely powerful to deal with massive training data by using tens of millions of parameters. However, CNNs often cost significant memory and computation consumption, which prohibits their usage in resource-limited environments such as mobile or embedded devices. To address the above issues, the existing approaches typically focus on either accelerating the convolutional layers or compressing the fully-connected layers separatedly, without pursuing a joint optimum. In this paper, we overcome such a limitation by introducing a holistic CNN compression framework, termed LRDKT, which works throughout both convolutional and fully-connected layers. First, a low-rank decomposition (LRD) scheme is proposed to remove redundancies across both convolutional kernels and fullyconnected matrices, which has a novel closed-form solver to significantly improve the efficiency of the existing iterative optimization solvers. Second, a novel knowledge transfer (KT) based training scheme is introduced. To recover the accumulated accuracy loss and overcome the vanishing gradient, KT explicitly aligns outputs and intermediate responses from a teacher (original) network to its student (compressed) network. We have comprehensively analyzed and evaluated the compression and speedup ratios of the proposed model on MNIST and ILSVRC 2012 benchmarks. In both benchmarks, the proposed scheme has demonstrated superior performance gains over the state-of-the-art methods. We also demonstrate the proposed compression scheme for the task of transfer learning, including domain adaptation and object detection, which show exciting performance gains over the state-of-the-arts. Our source code and compressed models are available at https: github.com ShaohuiLin LRDKT.", "The rising popularity of intelligent mobile devices and the daunting computational cost of deep learning-based models call for efficient and accurate on-device inference schemes. We propose a quantization scheme that allows inference to be carried out using integer-only arithmetic, which can be implemented more efficiently than floating point inference on commonly available integer-only hardware. We also co-design a training procedure to preserve end-to-end model accuracy post quantization. As a result, the proposed quantization scheme improves the tradeoff between accuracy and on-device latency. The improvements are significant even on MobileNets, a model family known for run-time efficiency, and are demonstrated in ImageNet classification and COCO detection on popular CPUs.", "We present techniques for speeding up the test-time evaluation of large convolutional networks, designed for object recognition tasks. These models deliver impressive accuracy, but each image evaluation requires millions of floating point operations, making their deployment on smartphones and Internet-scale clusters problematic. The computation is dominated by the convolution operations in the lower layers of the model. We exploit the redundancy present within the convolutional filters to derive approximations that significantly reduce the required computation. 
Using large state-of-the-art models, we demonstrate speedups of convolutional layers on both CPU and GPU by a factor of 2 x, while keeping the accuracy within 1 of the original model.", "This paper tackles the problem of training a deep convolutional neural network with both low-precision weights and low-bitwidth activations. Optimizing a low-precision network is very challenging since the training process can easily get trapped in a poor local minima, which results in substantial accuracy loss. To mitigate this problem, we propose three simple-yet-effective approaches to improve the network training. First, we propose to use a two-stage optimization strategy to progressively find good local minima. Specifically, we propose to first optimize a net with quantized weights and then quantized activations. This is in contrast to the traditional methods which optimize them simultaneously. Second, following a similar spirit of the first method, we propose another progressive optimization approach which progressively decreases the bit-width from high-precision to low-precision during the course of training. Third, we adopt a novel learning scheme to jointly train a full-precision model alongside the low-precision one. By doing so, the full-precision model provides hints to guide the low-precision model training. Extensive experiments on various datasets (i.e., CIFAR-100 and ImageNet) show the effectiveness of the proposed methods. To highlight, using our methods to train a 4-bit precision network leads to no performance decrease in comparison with its full-precision counterpart with standard network architectures (i.e., AlexNet and ResNet-50)." ] }
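The related-work passage above mentions low-rank decomposition as one family of orthogonal compression techniques. As an informal illustration of the low-rank idea only (not the CP/Tucker kernel decompositions or fine-tuning procedures of the cited works), the following sketch truncates the SVD of a single weight matrix; the function name, the chosen rank, and the random example matrix are all hypothetical.

```python
import numpy as np

def low_rank_compress(W, rank):
    """Approximate a weight matrix W by a rank-`rank` factorization via truncated SVD.

    Illustrative sketch: the cited works decompose 4D convolution kernels and
    fine-tune afterwards; this only shows the basic storage/compute trade-off.
    """
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]   # shape (m, rank)
    B = Vt[:rank, :]             # shape (rank, n)
    return A, B                  # W is approximated by A @ B

# Hypothetical example: a 512x512 layer kept at rank 32 stores
# (512 + 512) * 32 values instead of 512 * 512.
W = np.random.randn(512, 512)
A, B = low_rank_compress(W, rank=32)
rel_err = np.linalg.norm(W - A @ B) / np.linalg.norm(W)
print(f"relative reconstruction error: {rel_err:.3f}")
```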
1903.09343
2950544565
The Mondrian process represents an elegant and powerful approach for space partition modelling. However, as it restricts the partitions to be axis-aligned, its modelling flexibility is limited. In this work, we propose a self-consistent Binary Space Partitioning (BSP)-Tree process to generalize the Mondrian process. The BSP-Tree process is an almost surely right continuous Markov jump process that allows uniformly distributed oblique cuts in a two-dimensional convex polygon. The BSP-Tree process can also be extended using a non-uniform probability measure to generate direction differentiated cuts. The process is also self-consistent, maintaining distributional invariance under a restricted subdomain. We use Conditional-Sequential Monte Carlo for inference using the tree structure as the high-dimensional variable. The BSP-Tree process's performance on synthetic data partitioning and relational modelling demonstrates clear inferential improvements over the standard Mondrian process and other related methods.
Stochastic partition processes aim to divide a product space into meaningful blocks. A popular application of such processes is modelling relational data, where the interactions within each block tend to be homogeneous. State-of-the-art stochastic partition processes use a variety of partitioning strategies, including regular grids @cite_6 , hierarchical partitions @cite_20 @cite_15 and entry-by-entry strategies @cite_18 . A regular-grid stochastic partition process runs a separate partition process on each dimension of the multi-dimensional array. The orthogonal interactions between the two dimensions then form a regular grid whose blocks can represent interaction intensities. Typical regular-grid partition models include the infinite relational model (IRM) @cite_6 and the overlapping-communities extension of mixed-membership stochastic blockmodels @cite_25 . Regular-grid partition models are widely used in real-world applications for modelling graph data @cite_4 @cite_16 @cite_1 .
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_1", "@cite_6", "@cite_15", "@cite_16", "@cite_25", "@cite_20" ], "mid": [ "118546003", "2101065161", "", "2097266862", "", "2070323919", "2107107106", "2144207912" ], "abstract": [ "This paper proposes a novel stochastic process that represents the arbitrary rectangular partitioning of an infinite-dimensional matrix as the conditional projective limit. Rectangular partitioning is used in relational data analysis, and is classified into three types: regular grid, hierarchical, and arbitrary. Conventionally, a variety of probabilistic models have been advanced for the first two, including the product of Chinese restaurant processes and the Mondrian process. However, existing models for arbitrary partitioning are too complicated to permit the analysis of the statistical behaviors of models, which places very severe capability limits on relational data analysis. In this paper, we propose a new probabilistic model of arbitrary partitioning called the rectangular tiling process (RTP). Our model has a sound mathematical base in projective systems and infinite extension of conditional probabilities, and is capable of representing partitions of infinite elements as found in ordinary Bayesian nonparametric models.", "We propose a new probabilistic model for analyzing dynamic evolutions of relational data, such as additions, deletions and split & merge, of relation clusters like communities in social networks. Our proposed model abstracts observed time-varying object-object relationships into relationships between object clusters. We extend the infinite Hidden Markov model to follow dynamic and time-sensitive changes in the structure of the relational data and to estimate a number of clusters simultaneously. We show the usefulness of the model through experiments with synthetic and real-world data sets.", "", "Relationships between concepts account for a large proportion of semantic knowledge. We present a nonparametric Bayesian model that discovers systems of related concepts. Given data involving several sets of entities, our model discovers the kinds of entities in each set and the relations between kinds that are possible or likely. We apply our approach to four problems: clustering objects and features, learning ontologies, discovering kinship systems, and discovering structure in political data.", "", "Modeling structure in complex networks using Bayesian nonparametrics makes it possible to specify flexible model structures and infer the adequate model complexity from the observed data. This article provides a gentle introduction to nonparametric Bayesian modeling of complex networks: Using an infinite mixture model as running example, we go through the steps of deriving the model as an infinite limit of a finite parametric model, inferring the model parameters by Markov chain Monte Carlo, and checking the model?s fit and predictive performance. We explain how advanced nonparametric models for complex networks can be derived and point out relevant literature.", "Consider data consisting of pairwise measurements, such as presence or absence of links between pairs of objects. These data arise, for instance, in the analysis of protein interactions and gene regulatory networks, collections of author-recipient email, and social networks. Analyzing pairwise measurements with probabilistic models requires special assumptions, since the usual independence or exchangeability assumptions no longer hold. 
Here we introduce a class of variance allocation models for pairwise measurements: mixed membership stochastic blockmodels. These models combine global parameters that instantiate dense patches of connectivity (blockmodel) with local parameters that instantiate node-specific variability in the connections (mixed membership). We develop a general variational inference algorithm for fast approximate posterior inference. We demonstrate the advantages of mixed membership stochastic blockmodels with applications to social networks and protein interaction networks.", "We describe a novel class of distributions, called Mondrian processes, which can be interpreted as probability distributions over kd-tree data structures. Mondrian processes are multidimensional generalizations of Poisson processes and this connection allows us to construct multidimensional generalizations of the stick-breaking process described by Sethuraman (1994), recovering the Dirichlet process in one dimension. After introducing the Aldous-Hoover representation for jointly and separately exchangeable arrays, we show how the process can be used as a nonparametric prior distribution in Bayesian models of relational data." ] }
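To make the regular-grid idea in the preceding related-work paragraph concrete, here is a hedged sketch in which each dimension of a relational array receives its own Chinese-restaurant-process partition, so a cell's block is simply the pair of its row and column clusters. All names and parameters are illustrative assumptions; the full IRM additionally places a likelihood on the per-block interaction intensities and infers the clusterings from data.

```python
import random

def crp_partition(n, alpha):
    """Assign n items to clusters with a Chinese restaurant process."""
    assignments, sizes = [], []
    for _ in range(n):
        weights = sizes + [alpha]            # existing clusters plus a new one
        k = random.choices(range(len(weights)), weights=weights)[0]
        if k == len(sizes):
            sizes.append(0)
        sizes[k] += 1
        assignments.append(k)
    return assignments

def regular_grid_blocks(n_rows, n_cols, alpha=1.0):
    """Regular-grid partition: independent partitions on each dimension.

    The block of cell (i, j) is the pair (row cluster, column cluster),
    so the product space is cut into a grid of rectangular blocks.
    """
    rows = crp_partition(n_rows, alpha)
    cols = crp_partition(n_cols, alpha)
    return [[(rows[i], cols[j]) for j in range(n_cols)] for i in range(n_rows)]

blocks = regular_grid_blocks(6, 8)
```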
1903.09343
2950544565
The Mondrian process represents an elegant and powerful approach for space partition modelling. However, as it restricts the partitions to be axis-aligned, its modelling flexibility is limited. In this work, we propose a self-consistent Binary Space Partitioning (BSP)-Tree process to generalize the Mondrian process. The BSP-Tree process is an almost surely right continuous Markov jump process that allows uniformly distributed oblique cuts in a two-dimensional convex polygon. The BSP-Tree process can also be extended using a non-uniform probability measure to generate direction differentiated cuts. The process is also self-consistent, maintaining distributional invariance under a restricted subdomain. We use Conditional-Sequential Monte Carlo for inference using the tree structure as the high-dimensional variable. The BSP-Tree process's performance on synthetic data partitioning and relational modelling demonstrates clear inferential improvements over the standard Mondrian process and other related methods.
The Mondrian process (MP) @cite_20 @cite_13 and its variant, the Ostomachion process (OP) @cite_3 , can produce hierarchical partitions of a product space. The MP recursively generates axis-aligned cuts on a unit hypercube and partitions the space in a hierarchical fashion known as the @math d-tree ( @cite_2 also considers a tree-consistent partition model, but it is not a Bayesian nonparametric model). While the OP uses a similar hierarchical partition structure, it additionally allows oblique cuts for more flexible partitions; however, it does not guarantee the important self-consistency property.
{ "cite_N": [ "@cite_3", "@cite_13", "@cite_20", "@cite_2" ], "mid": [ "2515646153", "1564629734", "2144207912", "2134647431" ], "abstract": [ "Stochastic partition processes for exchangeable graphs produce axis-aligned blocks on a product space. In relational modeling, the resulting blocks uncover the underlying interactions between two sets of entities of the relational data. Although some flexible axis-aligned partition processes, such as the Mondrian process, have been able to capture complex interacting patterns in a hierarchical fashion, they are still in short of capturing dependence between dimensions. To overcome this limitation, we propose the Ostomachion process (OP), which relaxes the cutting direction by allowing for oblique cuts. The partitions generated by an OP are convex polygons that can capture inter-dimensional dependence. The OP also exhibits interesting properties: 1) Along the time line the cutting times can be characterized by a homogeneous Poisson process, and 2) on the partition space the areas of the resulting components comply with a Dirichlet distribution. We can thus control the expected number of cuts and the expected areas of components through hyper-parameters. We adapt the reversible-jump MCMC algorithm for inferring OP partition structures. The experimental results on relational modeling and decision tree classification have validated the merit of the OP.", "We investigate the class of computable probability distributions and explore the fundamental limitations of using this class to describe and compute conditional distributions. In addition to proving the existence of noncomputable conditional distributions, and thus ruling out the possibility of generic probabilistic inference algorithms (even inefficient ones), we highlight some positive results showing that posterior inference is possible in the presence of additional structure like exchangeability and noise, both of which are common in Bayesian hierarchical modeling. This theoretical work bears on the development of probabilistic programming languages (which enable the specification of complex probabilistic models) and their implementations (which can be used to perform Bayesian reasoning). The probabilistic programming approach is particularly well suited for defining infinite-dimensional, recursively-defined stochastic processes of the sort used in nonparametric Bayesian statistics. We present a new construction of the Mondrian process as a partition-valued Markov process in continuous time, which can be viewed as placing a distribution on an infinite kd-tree data structure. (Copies available exclusively from MIT Libraries, Rm. 14-0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.)", "We describe a novel class of distributions, called Mondrian processes, which can be interpreted as probability distributions over kd-tree data structures. Mondrian processes are multidimensional generalizations of Poisson processes and this connection allows us to construct multidimensional generalizations of the stick-breaking process described by Sethuraman (1994), recovering the Dirichlet process in one dimension. After introducing the Aldous-Hoover representation for jointly and separately exchangeable arrays, we show how the process can be used as a nonparametric prior distribution in Bayesian models of relational data.", "The objects in many real-world domains can be organized into hierarchies, where each internal node picks out a category of objects. 
Given a collection of features and relations defined over a set of objects, an annotated hierarchy includes a specification of the categories that are most useful for describing each individual feature and relation. We define a generative model for annotated hierarchies and the features and relations that they describe, and develop a Markov chain Monte Carlo scheme for learning annotated hierarchies. We show that our model discovers interpretable structure in several real-world data sets." ] }
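The recursive axis-aligned cutting described in the preceding related-work paragraph can be illustrated with a simple generative sampler. This is a hedged sketch of the commonly described Mondrian construction (exponential waiting times with rate equal to the sum of the box's side lengths, cut dimension chosen proportionally to side length, cut position uniform), not the exact formulation of any cited paper; the function name, tree encoding, and budget value are assumptions.

```python
import math
import random

def sample_mondrian(box, budget):
    """Sample a Mondrian-style tree of axis-aligned cuts over `box`.

    `box` is a list of (low, high) intervals and `budget` the remaining
    lifetime. If the exponential waiting time exceeds the budget, the box
    becomes a leaf; otherwise it is split and both halves are recursed on.
    """
    lengths = [hi - lo for lo, hi in box]
    rate = sum(lengths)
    wait = random.expovariate(rate) if rate > 0 else math.inf
    if wait > budget:
        return ("leaf", box)                                  # no further cut
    dim = random.choices(range(len(box)), weights=lengths)[0]  # dimension ~ side length
    lo, hi = box[dim]
    pos = random.uniform(lo, hi)                               # uniform cut position
    left = [(l, pos) if d == dim else (l, h) for d, (l, h) in enumerate(box)]
    right = [(pos, h) if d == dim else (l, h) for d, (l, h) in enumerate(box)]
    rest = budget - wait
    return ("cut", dim, pos,
            sample_mondrian(left, rest),
            sample_mondrian(right, rest))

# Hypothetical example: a hierarchical partition of the unit square.
tree = sample_mondrian([(0.0, 1.0), (0.0, 1.0)], budget=3.0)
```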
1903.09189
2923164151
We present a semi-autonomous teleoperation system based on a coarse-to-fine approach using vision guidance. The system is optimized for long-range teleoperation tasks under time-delayed network conditions and does not require prior knowledge of the remote scene. Our system initializes with a self-exploration behavior that senses the remote surroundings through a freely mounted eye-in-hand webcam. The self-exploration stage estimates the hand-eye calibration and provides a telepresence interface via real-time 3D geometric reconstruction. The human operator specifies a visual task through the interface, and a coarse-to-fine controller guides the remote robot, enabling our system to work over high-latency networks. Large motions are guided by coarse 3D estimation, whereas fine motions use image cues (IBVS). Network data transmission cost is minimized by sending only sparse points and a final image to the human side. Experiments on multiple tasks, conducted between Singapore and Canada, demonstrate our system's capability to perform long-range teleoperation.
Much prior work on long-range teleoperation control relies on Internet connections @cite_2 , @cite_26 . While this enables convenient operation regardless of location, it typically overlooks a common practical limitation: network conditions. Low-bandwidth, high-latency networks are common in tasks such as space missions (e.g., the 6 to 44 minute delay for a Martian rover @cite_5 ), where an Internet connection is not available.
{ "cite_N": [ "@cite_5", "@cite_26", "@cite_2" ], "mid": [ "", "2766995814", "2076447811" ], "abstract": [ "", "Nowadays, we can no longer imagine that Internet can be considered just a network of computers. We also have to state that it's becoming even more a network of things. With this in mind, this paper introduces a long range Internet connected robot teleoperation system based on Internet of Things (IoT). The aim is to support operators during remote teleoperation of robotic systems in situations in which the remote control devices loose the connection with the on-board receivers. This situation can occur either due to the lost of the line-of-sight or due to possible damages of the devices. IoT allows to connect remote and mobile things or machines. It also assets through the use of wireless communications and low-cost sensors, computing and storage devices. Approaches based on IoT have been already applied in traffic monitoring, smart homes, smart parking management, vehicle tracking system and other industrial applications. We used the system to control a mobile robot without range barriers provided that the robot and the remote control device must be connected to the Internet.", "Today’s Internet technology provides a convenient way for us to develop an integrated network environment for the diversified applications of different robotic systems. To be successful in real‐world applications, Internet‐based robots require a high degree of autonomy and local intelligence to deal with the restricted bandwidth and arbitrary transmission delay of the Internet. This paper describes the first step toward building such an Internet‐based robotic system for teleoperation in the University of Essex. The system has a standard network protocol and an interactive human‐machine interface. Using a Web browser, a remote operator can control the mobile robot to navigate in our laboratory with visual feedback and a simulated environment map via the Internet. The employment of an intuitive user interface enables Internet users to control the mobile robot and implement useful tasks remotely. Although at its first stage, the developed system has the potential to be extended to many real‐world applications such as tele‐manufacturing, tele‐training and tele‐service." ] }