aid: stringlengths 9–15
mid: stringlengths 7–10
abstract: stringlengths 78–2.56k
related_work: stringlengths 92–1.77k
ref_abstract: dict
1903.04154
2922003464
In a graph convolutional network, we assume that the graph @math is generated subject to some observation noise. We make small random perturbations @math of the graph and try to improve generalization. Based on quantum information geometry, we can quantitatively measure the scale of @math . We try to maximize the intrinsic scale of the perturbation under a small budget while minimizing the loss based on the perturbed @math . Our proposed model consistently improves graph convolutional networks on semi-supervised node classification tasks with reasonable computational overhead. We present two different types of geometry on the manifold of graphs: one for measuring the intrinsic change of a graph; the other for measuring how such changes affect a graph neural network externally. These new analytical tools will be useful in developing a good understanding of graph neural networks and fostering new techniques.
Tools from quantum information geometry have been applied to machine learning @cite_15 @cite_5 but not yet ported to the domain of graph neural networks. In information geometry one can define different matrix divergences @cite_39 that can be applied to p.s.d. matrices. We point the reader to related definitions of the discrete Fisher information @cite_12 without detailing them here.
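The Bures–Wasserstein metric discussed in the cited work, d(A, B) = [tr A + tr B − 2 tr((A^{1/2} B A^{1/2})^{1/2})]^{1/2} on p.s.d. matrices, can be sketched directly from its closed form. A minimal illustration (our own code, not from any of the cited papers; the function name and the use of SciPy's `sqrtm` are our choices):

```python
import numpy as np
from scipy.linalg import sqrtm

def bures_wasserstein(A, B):
    """Bures-Wasserstein distance between p.s.d. matrices:
    d(A, B) = [tr A + tr B - 2 tr((A^{1/2} B A^{1/2})^{1/2})]^{1/2}."""
    sA = sqrtm(A)
    cross = sqrtm(sA @ B @ sA)
    val = np.trace(A) + np.trace(B) - 2.0 * np.trace(cross)
    # sqrtm may return tiny imaginary parts for numerically borderline inputs;
    # clamp at zero before the final square root.
    return float(np.sqrt(max(val.real, 0.0)))
```

For 1×1 matrices this reduces to |√a − √b|, which makes a handy sanity check.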
{ "cite_N": [ "@cite_5", "@cite_15", "@cite_12", "@cite_39" ], "mid": [ "2964259376", "2964068697", "2942681667", "597395834" ], "abstract": [ "Embedding complex objects as vectors in low dimensional spaces is a longstanding problem in machine learning. We propose in this work an extension of that approach, which consists in embedding objects as elliptical probability distributions, namely distributions whose densities have elliptical level sets. We endow these measures with the 2-Wasserstein metric, with two important benefits: For such measures, the squared 2-Wasserstein metric has a closed form, equal to the sum of the squared Euclidean distance between means and the squared Bures metric between covariance matrices. The latter is a Riemannian metric between positive semi-definite matrices, which turns out to be Euclidean on a suitable factor representation of such matrices, which is valid on the entire geodesic between these matrices. The 2-Wasserstein distance boils down to the usual Euclidean metric when comparing Diracs, and therefore provides the natural framework to extend point embeddings. We show that for these reasons Wasserstein elliptical embeddings are more intuitive and yield tools that are better behaved numerically than the alternative choice of Gaussian embeddings with the Kullback-Leibler divergence. In particular, and unlike previous work based on the KL geometry, we learn elliptical distributions that are not necessarily diagonal. We demonstrate the interest of elliptical embeddings by using them for visualization, to compute embeddings of words, and to reflect entanglement or hypernymy.", "Abstract The metric d(A, B) = [tr A + tr B − 2 tr((A^{1/2} B A^{1/2})^{1/2})]^{1/2} on the manifold of n × n positive definite matrices arises in various optimisation problems, in quantum information and in the theory of optimal transport. It is also related to Riemannian geometry. 
In the first part of this paper we study this metric from the perspective of matrix analysis, simplifying and unifying various proofs. Then we develop a theory of a mean of two, and a barycentre of several, positive definite matrices with respect to this metric. We explain some recent work on a fixed point iteration for computing this Wasserstein barycentre. Our emphasis is on ideas natural to matrix analysis.", "Existing popular methods for semi-supervised learning with Graph Neural Networks (such as the Graph Convolutional Network) provably cannot learn a general class of neighborhood mixing relationships. To address this weakness, we propose a new model, MixHop, that can learn these relationships, including difference operators, by repeatedly mixing feature representations of neighbors at various distances. Mixhop requires no additional memory or computational complexity, and outperforms on challenging baselines. In addition, we propose sparsity regularization that allows us to visualize how the network prioritizes neighborhood information across different graph datasets. Our analysis of the learned architectures reveals that neighborhood mixing varies per datasets.", "Book dedication.- Preface.- Part I: State-of-the-art surveys & original matrix theory work.- Part II: Advanced matrix theory for radar processing.- Part III: Matrix-based signal processing applications.- Index of terms" ] }
1903.04101
2951072361
With the recent advances in solving large, zero-sum extensive-form games, there is a growing interest in the inverse problem of inferring underlying game parameters given only access to agent actions. Although recent work provides a powerful differentiable end-to-end learning framework which embeds a game solver within a deep-learning pipeline, allowing unknown game parameters to be learned via backpropagation, this framework faces significant limitations when applied to boundedly rational human agents and large-scale problems, leading to poor practicality. In this paper, we address these limitations and propose a framework that is applicable in more practical settings. First, seeking to learn the rationality of human agents in complex two-player zero-sum games, we draw upon well-known ideas in decision theory to obtain a concise and interpretable agent behavior model, and derive solvers and gradients for end-to-end learning. Second, to scale up to large, real-world scenarios, we propose an efficient first-order primal-dual method which exploits the structure of extensive-form games, yielding significantly faster computation for both game solving and gradient computation. When tested on randomly generated games, we report speedups of orders of magnitude over previous approaches. We also demonstrate the effectiveness of our model on both real-world one-player settings and synthetic data.
We now turn our attention to two-player games. Seminal work by McKelvey and Palfrey proposed the QRE as a noisy alternative to the NE @cite_8 . Similar to logit choice models, the QRE is the equilibrium obtained when payoffs are perturbed by noise following a Gumbel distribution. Formally, @math is a QRE of a normal-form game with action sets @math and @math for the two players and payoff matrix @math if where @math is a parameter governing the level of agent rationality. Observe that as @math , players behave uniformly at random, while the QRE approaches a NE as @math . For zero-sum games, it is further known @cite_0 that the QRE is the unique solution of the following convex-concave program
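As a hedged illustration of the logit QRE described above (a sketch under our own conventions, not the cited authors' solver): each player plays a softmax best response with rationality parameter `lam`, and a damped fixed-point iteration is run to a self-consistent pair of mixed strategies. For `lam → 0` both players mix uniformly, matching the limiting behavior noted in the text.

```python
import numpy as np

def logit_qre(A, lam=1.0, iters=1000, step=0.1):
    """Damped fixed-point iteration for the logit QRE of a zero-sum
    normal-form game with payoff matrix A (row player maximizes x^T A y)."""
    m, n = A.shape
    x = np.full(m, 1.0 / m)            # start from uniform mixed strategies
    y = np.full(n, 1.0 / n)
    for _ in range(iters):
        bx = np.exp(lam * (A @ y))     # row player's softmax best response
        bx /= bx.sum()
        by = np.exp(-lam * (A.T @ x))  # column player minimizes, hence the sign
        by /= by.sum()
        x = (1 - step) * x + step * bx  # damping stabilizes the iteration
        y = (1 - step) * y + step * by
    return x, y

# Matching pennies: the logit QRE coincides with the uniform NE for any lam.
x, y = logit_qre(np.array([[1.0, -1.0], [-1.0, 1.0]]), lam=2.0)
```

A plain (undamped) best-response iteration can oscillate; the convex-concave program mentioned in the text is the more principled route for zero-sum games, and this sketch is only meant to make the definition concrete.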
{ "cite_N": [ "@cite_0", "@cite_8" ], "mid": [ "2254533881", "2264897026" ], "abstract": [ "We investigate a class of reinforcement learning dynamics where players adjust their strategies based on their actions' cumulative payoffs over time-specifically, by playing mixed strategies that maximize their expected cumulative payoff minus a regularization term. A widely studied example is exponential reinforcement learning, a process induced by an entropic regularization term which leads mixed strategies to evolve according to the replicator dynamics. However, in contrast to the class of regularization functions used to define smooth best responses in models of stochastic fictitious play, the functions used in this paper need not be infinitely steep at the boundary of the simplex; in fact, dropping this requirement gives rise to an important dichotomy between steep and nonsteep cases. In this general framework, we extend several properties of exponential learning, including the elimination of dominated strategies, the asymptotic stability of strict Nash equilibria, and the convergence of time-averaged trajectories in zero-sum games with an interior Nash equilibrium.", "We investigate the use of standard statistical models for quantal choice in a game theoretic setting. Players choose strategies based on relative expected utility and assume other players do so as well. We define a quantal response equilibrium (ORE) as a fixed point of this process and establish existence. For a logit specification of the error structure, we show that as the error goes to zero, QRE approaches a subset of Nash equilibria and also implies a unique selection from the set of Nash equilibria in generic games. We fit the model to a variety of experimental data sets by using maximum likelihood estimation. Journal of Economic Literature Classification Numbers: C19, C44, C72, C92." ] }
1903.04101
2951072361
With the recent advances in solving large, zero-sum extensive-form games, there is a growing interest in the inverse problem of inferring underlying game parameters given only access to agent actions. Although recent work provides a powerful differentiable end-to-end learning framework which embeds a game solver within a deep-learning pipeline, allowing unknown game parameters to be learned via backpropagation, this framework faces significant limitations when applied to boundedly rational human agents and large-scale problems, leading to poor practicality. In this paper, we address these limitations and propose a framework that is applicable in more practical settings. First, seeking to learn the rationality of human agents in complex two-player zero-sum games, we draw upon well-known ideas in decision theory to obtain a concise and interpretable agent behavior model, and derive solvers and gradients for end-to-end learning. Second, to scale up to large, real-world scenarios, we propose an efficient first-order primal-dual method which exploits the structure of extensive-form games, yielding significantly faster computation for both game solving and gradient computation. When tested on randomly generated games, we report speedups of orders of magnitude over previous approaches. We also demonstrate the effectiveness of our model on both real-world one-player settings and synthetic data.
For a two-player extensive-form game characterized by a game tree with information sets @math and @math for the min and max player respectively, Ling et al. (2018) show that when @math , the QRE of the reduced normal form of the game is equivalent to the solution of the following regularized min-max problem, where @math and @math are the players' strategies in sequence form @cite_15 . In the above, @math is the sequence-form payoff matrix and @math and @math are the sequence-form constraint matrices. @math denotes the possible actions at information set @math , while @math is the action (from the same player) preceding @math . In the sequence form, one works with realization plans @math as opposed to probability vectors. These realization plans represent probabilities of choosing a given sequence of actions, while the constraint matrices @math contain entries @math and encode parent-child relationships in the game tree. The sequence form is significantly more compact than the normal form while retaining virtually all of its strategic elements.
{ "cite_N": [ "@cite_15" ], "mid": [ "2136427775" ], "abstract": [ "We propose thesequence formas a new strategic description for an extensive game with perfect recall. It is similar to the normal form but has linear instead of exponential complexity and allows a direct representation and efficient computation of behavior strategies. Pure strategies and their mixed strategy probabilities are replaced by sequences of consecutive choices and their realization probabilities. A zero-sum game is solved by a corresponding linear program that has linear size in the size of the game tree. General two-person games are studied in the paper by Kolleret al, 1996 (Games Econ. Behav.14, 247–259).Journal of Economic LiteratureClassification Number: C72." ] }
1903.03971
2921169870
This paper presents a novel phase reconstruction method (only from a given amplitude spectrogram) by combining a signal-processing-based approach and a deep neural network (DNN). To retrieve a time-domain signal from its amplitude spectrogram, the corresponding phase is required. One of the popular phase reconstruction methods is the Griffin-Lim algorithm (GLA), which is based on the redundancy of the short-time Fourier transform. However, GLA often involves many iterations and produces low-quality signals owing to the lack of prior knowledge of the target signal. In order to address these issues, in this study, we propose an architecture which stacks sub-blocks, each consisting of two GLA-inspired fixed layers and a DNN. The number of stacked sub-blocks is adjustable, and we can trade off performance against computational load based on application requirements. The effectiveness of the proposed method is investigated by reconstructing phases from amplitude spectrograms of speech signals.
Recently, DNNs including fixed STFT (and iSTFT) layers have been considered for treating phase information within the networks. A generative adversarial network (GAN)-based approach to reconstructing a complex-valued spectrogram solely from a given amplitude spectrogram was presented in @cite_22 . The output of the generator (a complex-valued spectrogram) is converted back to the time domain by an iSTFT layer and fed to the discriminator; this iSTFT layer is essential for training, as discussed in @cite_22 . As another example, a DNN for speech separation @cite_3 employed the multiple input spectrogram inverse (MISI) layer, which consists of a pair of STFT and iSTFT layers as in GLA. The MISI layer is applied to the output of the separation DNN to improve its performance by accounting for phase reconstruction together with separation. In addition, in @cite_36 , the time-frequency representation was also trained jointly with the DNN for speech separation. The success of these DNNs indicates that combining STFT (and iSTFT) with a DNN is important for treating phase.
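A minimal sketch of the baseline GLA that these works build on (our own implementation using SciPy's `stft`/`istft`; the parameters are assumptions, not taken from the cited papers): alternate between the time domain and the STFT domain, keeping the given magnitude and updating only the phase at each iteration.

```python
import numpy as np
from scipy.signal import stft, istft

def griffin_lim(mag, n_iter=50, nperseg=256, seed=0):
    """Griffin-Lim: estimate a waveform from a magnitude spectrogram `mag`
    (assumed to come from scipy.signal.stft with the same `nperseg`) by
    alternating iSTFT/STFT projections that keep the given magnitude."""
    rng = np.random.default_rng(seed)
    phase = np.exp(2j * np.pi * rng.random(mag.shape))  # random initial phase
    for _ in range(n_iter):
        _, x = istft(mag * phase, nperseg=nperseg)      # back to the time domain
        _, _, Z = stft(x, nperseg=nperseg)              # re-analyze the estimate
        phase = np.exp(1j * np.angle(Z))                # keep phase, drop magnitude
    _, x = istft(mag * phase, nperseg=nperseg)
    return x
```

The fixed sub-block layers described in the paper correspond to unrolled iterations of this loop, with a trained DNN inserted between them.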
{ "cite_N": [ "@cite_36", "@cite_22", "@cite_3" ], "mid": [ "2900132857", "2964328256", "2799119527" ], "abstract": [ "Progress in solving the cocktail party problem, i.e., separating the speech from multiple overlapping speakers, has recently accelerated with the invention of techniques such as deep clustering and permutation free mask inference. These approaches typically focus on estimating target STFT magnitudes and ignore problems of phase inconsistency. In this paper, we explicitly integrate phase reconstruction into our separation algorithm using a loss function defined on time-domain signals. A deep neural network structure is defined by unfolding a phase reconstruction algorithm and treating each iteration as a layer in our network. Furthermore, instead of using fixed STFT iSTFT time-frequency representations, we allow our network to learn a modified version of these representations from data. We compare several variants of these unfolded phase reconstruction networks achieving state of the art results on the publicly available wsj0-2mix dataset, and show improved performance when the STFT iSTFT-like representations are allowed to adapt.", "In this paper, we address the problem of reconstructing a time-domain signal (or a phase spectrogram) solely from a magnitude spectrogram. Since magnitude spectrograms do not contain phase information, we must restore or infer phase information to reconstruct a time-domain signal. One widely used approach for dealing with the signal reconstruction problem was proposed by Griffin and Lim. This method usually requires many iterations for the signal reconstruction process and depending on the inputs, it does not always produce high-quality audio signals. To overcome these shortcomings, we apply a learning-based approach to the signal reconstruction problem by modeling the signal reconstruction process using a deep neural network and training it using the idea of a generative adversarial network. 
Experimental evaluations revealed that our method was able to reconstruct signals faster with higher quality than the Griffin-Lim method.", "This paper proposes an end-to-end approach for single-channel speaker-independent multi-speaker speech separation, where time-frequency (T-F) masking, the short-time Fourier transform (STFT), and its inverse are represented as layers within a deep network. Previous approaches, rather than computing a loss on the reconstructed signal, used a surrogate loss based on the target STFT magnitudes. This ignores reconstruction error introduced by phase inconsistency. In our approach, the loss function is directly defined on the reconstructed signals, which are optimized for best separation. In addition, we train through unfolded iterations of a phase reconstruction algorithm, represented as a series of STFT and inverse STFT layers. While mask values are typically limited to lie between zero and one for approaches using the mixture phase for reconstruction, this limitation is less relevant if the estimated magnitudes are to be used together with phase reconstruction. We thus propose several novel activation functions for the output layer of the T-F masking, to allow mask values beyond one. On the publicly-available wsj0-2mix dataset, our approach achieves state-of-the-art 12.6 dB scale-invariant signal-to-distortion ratio (SI-SDR) and 13.1 dB SDR, revealing new possibilities for deep learning based phase reconstruction and representing a fundamental progress towards solving the notoriously-hard cocktail party problem." ] }
1903.04055
2922094103
Context: Software development projects increasingly adopt unit testing as a way to identify and correct program faults early in the construction process. Code that is unit tested should therefore have fewer failures associated with it. Objective: Compare the number of field failures arising in unit tested code against those arising in code that has not been unit tested. Method: We retrieved 2,083,979 crash incident reports associated with the Eclipse integrated development environment project, and processed them to obtain a set of 126,026 unique program failure stack traces associated with a specific popular release. We then ran the JaCoCo code test coverage analysis on the same release, obtaining results on the coverage of 216,539 methods and 1,415,253 lines. Finally, we correlated unit tests with failures at the level of tested methods and the number of test-covered lines. Results: Unit-tested code does not appear to be associated with fewer failures. Furthermore, increased code coverage is associated with more failures. Conclusion: Unit testing on its own may not be a sufficient method for preventing program failures.
Among past studies researching the relationship between unit test coverage and software defects, the most related to our work are the ones that examine actual software faults. Surprisingly, these studies do not reach widespread agreement on the relationship between the two. More specifically, existing findings diverge regarding the hypothesis that high test coverage leads to fewer defects. @cite_11 , who studied two different industrial software products, agreed with the hypothesis and concluded that code coverage has a negative correlation with the number of defects. On the other hand, @cite_6 , who investigated an industrial software product, found only a negligible decrease in defects as coverage increased and concluded that unit test coverage is not a useful metric for test effectiveness. Furthermore, a study of seven Java open source projects found that the majority of methods with defects had not been covered by unit tests @cite_2 , deducing that the absence of unit tests is risky and can lead to failures. In contrast, another study of one hundred Java projects found no significant correlation between code coverage and defects @cite_14 .
{ "cite_N": [ "@cite_14", "@cite_2", "@cite_6", "@cite_11" ], "mid": [ "2754335648", "2896065395", "2801646547", "2089175932" ], "abstract": [ "Testing is a pivotal activity in ensuring the quality of software. Code coverage is a common metric used as a yardstick to measure the efficacy and adequacy of testing. However, does higher coverage actually lead to a decline in postrelease bugs? Do files that have higher test coverage actually have fewer bug reports? The direct relationship between code coverage and actual bug reports has not yet been analyzed via a comprehensive empirical study on real bugs. Past studies only involve a few software systems or artificially injected bugs (mutants). In this empirical study, we examine these questions in the context of open-source software projects based on their actual reported bugs. We analyze 100 large open-source Java projects and measure the code coverage of the test cases that come along with these projects. We collect real bugs logged in the issue tracking system after the release of the software and analyze the correlations between code coverage and these bugs. We also collect other metrics such as cyclomatic complexity and lines of code, which are used to normalize the number of bugs and coverage to correlate with other metrics as well as use these metrics in regression analysis. Our results show that coverage has an insignificant correlation with the number of bugs that are found after the release of the software at the project level, and no such correlation at the file level.", "Background: Newspaper headlines still regularly report latent software defects. Such defects have often evaded testing for many years. It remains difficult to identify how well a system has been tested. It also remains difficult to assess how successful at finding defects particular tests are. Coverage and mutation testing are frequently used to assess test effectiveness. 
We look more deeply at the performance of commonly used JUnit testing by assessing how much JUnit testing was done and how effective that testing was at detecting defects in seven open source systems. Aim: We aim to identify whether defective code has been as effectively tested by JUnit tests as non-defective code. We also aim to identify the characteristics of JUnit tests that are related to identifying defects. Methodology: We first extract the defects from seven open source projects using the SZZ algorithm. We match those defects with JUnit tests to identify the proportion of defects that were covered by JUnit tests. We also do the same for non-defective code. We then use Principal Component Analysis and machine learning to investigate the characteristics of JUnit tests that were successful in identifying defects. Results: Our findings suggest that most of the open source systems we investigated are under-tested. On average over 66% of defective methods were not linked to any JUnit tests. We show that the number of methods touched by a JUnit test is strongly related to that test uncovering a defect. Conclusion: More JUnit tests need to be produced for the seven open source systems that we investigate. JUnit tests need to be relatively sophisticated, in particular they should touch more than just one method during the test.
The results indicate that high unit-test coverage did not seem to be of any tangible help in producing defect-free software.", "Test coverage is a promising measure of test effectiveness and development organizations are interested in cost-effective levels of coverage that provide sufficient fault removal with contained testing effort. We have conducted a multiple-case study on two dissimilar industrial software projects to investigate if test coverage reflects test effectiveness and to find the relationship between test effort and the level of test coverage. We find that in both projects the increase in test coverage is associated with a decrease in field-reported problems when adjusted for the number of prerelease changes. A qualitative investigation revealed several potential explanations, including code complexity, developer experience, the type of functionality, and remote development teams. All these factors were related to the level of coverage and quality, with coverage having an effect even after these adjustments. We also find that the test effort increases exponentially with test coverage, but the reduction in field problems increases linearly with test coverage. This suggests that for most projects the optimal levels of coverage are likely to be well short of 100%." ] }
1903.04055
2922094103
Context: Software development projects increasingly adopt unit testing as a way to identify and correct program faults early in the construction process. Code that is unit tested should therefore have fewer failures associated with it. Objective: Compare the number of field failures arising in unit tested code against those arising in code that has not been unit tested. Method: We retrieved 2,083,979 crash incident reports associated with the Eclipse integrated development environment project, and processed them to obtain a set of 126,026 unique program failure stack traces associated with a specific popular release. We then ran the JaCoCo code test coverage analysis on the same release, obtaining results on the coverage of 216,539 methods and 1,415,253 lines. Finally, we correlated unit tests with failures at the level of tested methods and the number of test-covered lines. Results: Unit-tested code does not appear to be associated with fewer failures. Furthermore, increased code coverage is associated with more failures. Conclusion: Unit testing on its own may not be a sufficient method for preventing program failures.
The above-mentioned studies cover only fixed faults. In our research, we work with stack traces, which enable us to analyze field-reported failures associated with crashes. The associated faults include those that have not been fixed, but exclude faults that are not associated with crashes, such as divergence from the expected functionality or program freezes. Furthermore, the crash reports did not allow us to determine the exact faulty method associated with each crash. However, by placing our matched crash methods into three groups according to their position in the stack trace (the very first stack frame, within the top-6 frames, and within the top-10 frames), we could obtain useful bounds, backed by empirical evidence @cite_1 , on the coverage of methods that were likely to be defective.
{ "cite_N": [ "@cite_1" ], "mid": [ "2096598529" ], "abstract": [ "A widely shared belief in the software engineering community is that stack traces are much sought after by developers to support them in debugging. But limited empirical evidence is available to confirm the value of stack traces to developers. In this paper, we seek to provide such evidence by conducting an empirical study on the usage of stack traces by developers from the ECLIPSE project. Our results provide strong evidence to this effect and also throws light on some of the patterns in bug fixing using stack traces. We expect the findings of our study to further emphasize the importance of adding stack traces to bug reports and that in the future, software vendors will provide more support in their products to help general users make such information available when filing bug reports." ] }
1903.04104
2965638258
Fashion landmark detection is a challenging task even using the current deep learning techniques, due to the large variation and non-rigid deformation of clothes. In order to tackle these problems, we propose Spatial-Aware Non-Local (SANL) block, an attentive module in the deep neural network which can utilize spatial and semantic information while capturing global dependency. The attention maps are generated by Grad-CAM or a human parsing segmentation model and then fed into the SANL blocks via attention mechanism. We then establish our fashion landmark detection framework on feature pyramid network, equipped with four SANL blocks in the backbone. It is demonstrated by the experimental results on two large-scale fashion datasets that our proposed fashion landmark detection approach with the SANL blocks outperforms the current state-of-the-art methods considerably. Some supplementary experiments on fine-grained image classification also show the effectiveness of the proposed SANL block.
Powered by large-scale fashion datasets @cite_19 @cite_20 @cite_18 , deep learning based models for fashion-related tasks have developed rapidly in recent years. The key problems of fashion image understanding include recognition @cite_2 @cite_34 , retrieval @cite_14 @cite_36 @cite_19 , recommendation @cite_0 @cite_9 , generation @cite_22 @cite_5 , and landmark detection @cite_20 @cite_10 @cite_34 .
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_22", "@cite_36", "@cite_9", "@cite_0", "@cite_19", "@cite_2", "@cite_5", "@cite_34", "@cite_10", "@cite_20" ], "mid": [ "1973255633", "", "", "", "", "", "2471768434", "", "", "2798734012", "2743772526", "2511502099" ], "abstract": [ "In this work, the human parsing task, namely decomposing a human image into semantic fashion body regions, is formulated as an active template regression (ATR) problem, where the normalized mask of each fashion body item is expressed as the linear combination of the learned mask templates, and then morphed to a more precise mask with the active shape parameters, including position, scale and visibility of each semantic region. The mask template coefficients and the active shape parameters together can generate the human parsing results, and are thus called the structure outputs for human parsing. The deep Convolutional Neural Network (CNN) is utilized to build the end-to-end relation between the input human image and the structure outputs for human parsing. More specifically, the structure outputs are predicted by two separate networks. The first CNN network is with max-pooling, and designed to predict the template coefficients for each label mask, while the second CNN network is without max-pooling to preserve sensitivity to label mask position and accurately predict the active shape parameters. For a new image, the structure outputs of the two networks are fused to generate the probability of each label for each pixel, and super-pixel smoothing is finally used to refine the human parsing result. Comprehensive evaluations on a large dataset well demonstrate the significant superiority of the ATR framework over other state-of-the-arts for human parsing. 
In particular, the F1-score reaches @math percent by our ATR framework, significantly higher than @math percent based on the state-of-the-art algorithm [28] .", "", "", "", "", "", "Recent advances in clothes recognition have been driven by the construction of clothes datasets. Existing datasets are limited in the amount of annotations and are difficult to cope with the various challenges in real-world applications. In this work, we introduce DeepFashion1, a large-scale clothes dataset with comprehensive annotations. It contains over 800,000 images, which are richly annotated with massive attributes, clothing landmarks, and correspondence of images taken under different scenarios including store, street snapshot, and consumer. Such rich annotations enable the development of powerful algorithms in clothes recognition and facilitating future researches. To demonstrate the advantages of DeepFashion, we propose a new deep model, namely FashionNet, which learns clothing features by jointly predicting clothing attributes and landmarks. The estimated landmarks are then employed to pool or gate the learned features. It is optimized in an iterative manner. Extensive experiments demonstrate the effectiveness of FashionNet and the usefulness of DeepFashion.", "", "", "This paper proposes a knowledge-guided fashion network to solve the problem of visual fashion analysis, e.g., fashion landmark localization and clothing category classification. The suggested fashion model is leveraged with high-level human knowledge in this domain. We propose two important fashion grammars: (i) dependency grammar capturing kinematics-like relation, and (ii) symmetry grammar accounting for the bilateral symmetry of clothes. We introduce Bidirectional Convolutional Recurrent Neural Networks (BCRNNs) for efficiently approaching message passing over grammar topologies, and producing regularized landmark layouts. 
For enhancing clothing category classification, our fashion network is encoded with two novel attention mechanisms, i.e., landmark-aware attention and category-driven attention. The former enforces our network to focus on the functional parts of clothes, and learns domain-knowledge centered representations, leading to a supervised attention mechanism. The latter is goal-driven, which directly enhances task-related features and can be learned in an implicit, top-down manner. Experimental results on large-scale fashion datasets demonstrate the superior performance of our fashion grammar network.", "Fashion landmarks are functional key points defined on clothes, such as corners of neckline, hemline, and cuff. They have been recently introduced [18]as an effective visual representation for fashion image understanding. However, detecting fashion landmarks are challenging due to background clutters, human poses, and scales. To remove the above variations, previous works usually assumed bounding boxes of clothes are provided in training and test as additional annotations, which are expensive to obtain and inapplicable in practice. This work addresses unconstrained fashion landmark detection, where clothing bounding boxes are not provided in both training and test. To this end, we present a novel Deep LAndmark Network (DLAN), where bounding boxes and landmarks are jointly estimated and trained iteratively in an end-to-end manner. DLAN contains two dedicated modules, including a Selective Dilated Convolution for handling scale discrepancies, and a Hierarchical Recurrent Spatial Transformer for handling background clutters. To evaluate DLAN, we present a large-scale fashion landmark dataset, namely Unconstrained Landmark Database (ULD), consisting of 30K images. Statistics show that ULD is more challenging than existing datasets in terms of image scales, background clutters, and human poses. 
Extensive experiments demonstrate the effectiveness of DLAN over the state-of-the-art methods. DLAN also exhibits excellent generalization across different clothing categories and modalities, making it extremely suitable for real-world fashion analysis.", "Visual fashion analysis has attracted many attentions in the recent years. Previous work represented clothing regions by either bounding boxes or human joints. This work presents fashion landmark detection or fashion alignment, which is to predict the positions of functional key points defined on the fashion items, such as the corners of neckline, hemline, and cuff. To encourage future studies, we introduce a fashion landmark dataset (The dataset is available at http: mmlab.ie.cuhk.edu.hk projects DeepFashion LandmarkDetection.html.) with over 120K images, where each image is labeled with eight landmarks. With this dataset, we study fashion alignment by cascading multiple convolutional neural networks in three stages. These stages gradually improve the accuracies of landmark predictions. Extensive experiments demonstrate the effectiveness of the proposed method, as well as its generalization ability to pose estimation. Fashion landmark is also compared to clothing bounding boxes and human joints in two applications, fashion attribute prediction and clothes retrieval, showing that fashion landmark is a more discriminative representation to understand fashion images." ] }
1903.04104
2965638258
Fashion landmark detection is a challenging task even with current deep learning techniques, due to the large variation and non-rigid deformation of clothes. To tackle these problems, we propose the Spatial-Aware Non-Local (SANL) block, an attentive module in a deep neural network which can utilize spatial and semantic information while capturing global dependencies. The attention maps are generated by Grad-CAM or a human parsing segmentation model and then fed into the SANL blocks via an attention mechanism. We then establish our fashion landmark detection framework on a feature pyramid network, equipped with four SANL blocks in the backbone. Experimental results on two large-scale fashion datasets demonstrate that our proposed fashion landmark detection approach with the SANL blocks outperforms the current state-of-the-art methods considerably. Supplementary experiments on fine-grained image classification also show the effectiveness of the proposed SANL block.
Fashion landmark detection is a rather new topic in fashion understanding, so there is little prior work to refer to. The related approaches can be roughly divided into two categories: coordinate-based @cite_19 @cite_20 @cite_10 and heatmap-based @cite_34 . FashionNet @cite_19 is based on VGG-16 and learns the coordinates and visibility of the landmarks directly. FLD @cite_20 utilizes an auto-routing mechanism to reduce the large variations in fashion images. DLAN @cite_10 uses selective dilated convolutions to handle scale discrepancies and a hierarchical recurrent spatial transformer to handle background clutters. The most recent work @cite_34 models the relations between fashion landmarks, called "fashion grammar", and proposes a bidirectional convolutional recurrent neural network (BCRNN) to learn the landmark heatmaps.
{ "cite_N": [ "@cite_19", "@cite_34", "@cite_10", "@cite_20" ], "mid": [ "2471768434", "2798734012", "2743772526", "2511502099" ], "abstract": [ "Recent advances in clothes recognition have been driven by the construction of clothes datasets. Existing datasets are limited in the amount of annotations and are difficult to cope with the various challenges in real-world applications. In this work, we introduce DeepFashion1, a large-scale clothes dataset with comprehensive annotations. It contains over 800,000 images, which are richly annotated with massive attributes, clothing landmarks, and correspondence of images taken under different scenarios including store, street snapshot, and consumer. Such rich annotations enable the development of powerful algorithms in clothes recognition and facilitating future researches. To demonstrate the advantages of DeepFashion, we propose a new deep model, namely FashionNet, which learns clothing features by jointly predicting clothing attributes and landmarks. The estimated landmarks are then employed to pool or gate the learned features. It is optimized in an iterative manner. Extensive experiments demonstrate the effectiveness of FashionNet and the usefulness of DeepFashion.", "This paper proposes a knowledge-guided fashion network to solve the problem of visual fashion analysis, e.g., fashion landmark localization and clothing category classification. The suggested fashion model is leveraged with high-level human knowledge in this domain. We propose two important fashion grammars: (i) dependency grammar capturing kinematics-like relation, and (ii) symmetry grammar accounting for the bilateral symmetry of clothes. We introduce Bidirectional Convolutional Recurrent Neural Networks (BCRNNs) for efficiently approaching message passing over grammar topologies, and producing regularized landmark layouts. 
For enhancing clothing category classification, our fashion network is encoded with two novel attention mechanisms, i.e., landmark-aware attention and category-driven attention. The former enforces our network to focus on the functional parts of clothes, and learns domain-knowledge centered representations, leading to a supervised attention mechanism. The latter is goal-driven, which directly enhances task-related features and can be learned in an implicit, top-down manner. Experimental results on large-scale fashion datasets demonstrate the superior performance of our fashion grammar network.", "Fashion landmarks are functional key points defined on clothes, such as corners of neckline, hemline, and cuff. They have been recently introduced [18]as an effective visual representation for fashion image understanding. However, detecting fashion landmarks are challenging due to background clutters, human poses, and scales. To remove the above variations, previous works usually assumed bounding boxes of clothes are provided in training and test as additional annotations, which are expensive to obtain and inapplicable in practice. This work addresses unconstrained fashion landmark detection, where clothing bounding boxes are not provided in both training and test. To this end, we present a novel Deep LAndmark Network (DLAN), where bounding boxes and landmarks are jointly estimated and trained iteratively in an end-to-end manner. DLAN contains two dedicated modules, including a Selective Dilated Convolution for handling scale discrepancies, and a Hierarchical Recurrent Spatial Transformer for handling background clutters. To evaluate DLAN, we present a large-scale fashion landmark dataset, namely Unconstrained Landmark Database (ULD), consisting of 30K images. Statistics show that ULD is more challenging than existing datasets in terms of image scales, background clutters, and human poses. 
Extensive experiments demonstrate the effectiveness of DLAN over the state-of-the-art methods. DLAN also exhibits excellent generalization across different clothing categories and modalities, making it extremely suitable for real-world fashion analysis.", "Visual fashion analysis has attracted many attentions in the recent years. Previous work represented clothing regions by either bounding boxes or human joints. This work presents fashion landmark detection or fashion alignment, which is to predict the positions of functional key points defined on the fashion items, such as the corners of neckline, hemline, and cuff. To encourage future studies, we introduce a fashion landmark dataset (The dataset is available at http: mmlab.ie.cuhk.edu.hk projects DeepFashion LandmarkDetection.html.) with over 120K images, where each image is labeled with eight landmarks. With this dataset, we study fashion alignment by cascading multiple convolutional neural networks in three stages. These stages gradually improve the accuracies of landmark predictions. Extensive experiments demonstrate the effectiveness of the proposed method, as well as its generalization ability to pose estimation. Fashion landmark is also compared to clothing bounding boxes and human joints in two applications, fashion attribute prediction and clothes retrieval, showing that fashion landmark is a more discriminative representation to understand fashion images." ] }
1903.04104
2965638258
Fashion landmark detection is a challenging task even with current deep learning techniques, due to the large variation and non-rigid deformation of clothes. To tackle these problems, we propose the Spatial-Aware Non-Local (SANL) block, an attentive module in a deep neural network which can utilize spatial and semantic information while capturing global dependencies. The attention maps are generated by Grad-CAM or a human parsing segmentation model and then fed into the SANL blocks via an attention mechanism. We then establish our fashion landmark detection framework on a feature pyramid network, equipped with four SANL blocks in the backbone. Experimental results on two large-scale fashion datasets demonstrate that our proposed fashion landmark detection approach with the SANL blocks outperforms the current state-of-the-art methods considerably. Supplementary experiments on fine-grained image classification also show the effectiveness of the proposed SANL block.
Visual attention is common in human perception: for example, bright colors easily draw our attention, and we can locate a cat at a glance. The attention mechanism has been introduced into deep neural networks and is widely applied to various computer vision tasks, such as image classification @cite_23 @cite_8 @cite_32 , object detection @cite_12 @cite_15 , image captioning @cite_24 @cite_31 @cite_26 and visual question answering @cite_13 @cite_26 .
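As background for the works cited above, the attention operation they build on can be reduced to a scaled dot-product form: each query is re-expressed as a convex combination of value vectors, weighted by query-key similarity. This is a generic sketch rather than the exact mechanism of any cited model (those use spatial, channel-wise, co-attention, or recurrent variants):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: subtract the max before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: every query position is rewritten as a
    convex combination of the value vectors, weighted by query-key similarity,
    which is what lets a network 'focus' on salient inputs."""
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # pairwise similarities
    weights = softmax(scores, axis=-1)        # each row sums to 1
    return weights @ V, weights
```

The attention weights form a probability distribution per query, which is also why attention maps can be visualized as heatmaps over image regions.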
{ "cite_N": [ "@cite_31", "@cite_26", "@cite_8", "@cite_32", "@cite_24", "@cite_23", "@cite_15", "@cite_13", "@cite_12" ], "mid": [ "2550553598", "2745461083", "2963495494", "2737725206", "1514535095", "", "2963093690", "2963668159", "2587037412" ], "abstract": [ "Visual attention has been successfully applied in structural prediction tasks such as visual captioning and question answering. Existing visual attention models are generally spatial, i.e., the attention is modeled as spatial probabilities that re-weight the last conv-layer feature map of a CNN encoding an input image. However, we argue that such spatial attention does not necessarily conform to the attention mechanism — a dynamic feature extractor that combines contextual fixations over time, as CNN features are naturally spatial, channel-wise and multi-layer. In this paper, we introduce a novel convolutional neural network dubbed SCA-CNN that incorporates Spatial and Channel-wise Attentions in a CNN. In the task of image captioning, SCA-CNN dynamically modulates the sentence generation context in multi-layer feature maps, encoding where (i.e., attentive spatial locations at multiple layers) and what (i.e., attentive channels) the visual attention is. We evaluate the proposed SCA-CNN architecture on three benchmark image captioning datasets: Flickr8K, Flickr30K, and MSCOCO. It is consistently observed that SCA-CNN significantly outperforms state-of-the-art visual attention-based image captioning methods.", "Top-down visual attention mechanisms have been used extensively in image captioning and visual question answering (VQA) to enable deeper image understanding through fine-grained analysis and even multiple steps of reasoning. In this work, we propose a combined bottom-up and top-down attention mechanism that enables attention to be calculated at the level of objects and other salient image regions. This is the natural basis for attention to be considered. 
Within our approach, the bottom-up mechanism (based on Faster R-CNN) proposes image regions, each with an associated feature vector, while the top-down mechanism determines feature weightings. Applying this approach to image captioning, our results on the MSCOCO test server establish a new state-of-the-art for the task, achieving CIDEr SPICE BLEU-4 scores of 117.9, 21.5 and 36.9, respectively. Demonstrating the broad applicability of the method, applying the same approach to VQA we obtain first place in the 2017 VQA Challenge.", "In this work, we propose Residual Attention Network, a convolutional neural network using attention mechanism which can incorporate with state-of-art feed forward network architecture in an end-to-end training fashion. Our Residual Attention Network is built by stacking Attention Modules which generate attention-aware features. The attention-aware features from different modules change adaptively as layers going deeper. Inside each Attention Module, bottom-up top-down feedforward structure is used to unfold the feedforward and feedback attention process into a single feedforward process. Importantly, we propose attention residual learning to train very deep Residual Attention Networks which can be easily scaled up to hundreds of layers. Extensive analyses are conducted on CIFAR-10 and CIFAR-100 datasets to verify the effectiveness of every module mentioned above. Our Residual Attention Network achieves state-of-the-art object recognition performance on three benchmark datasets including CIFAR-10 (3.90 error), CIFAR-100 (20.45 error) and ImageNet (4.8 single model and single crop, top-5 error). Note that, our method achieves 0.6 top-1 accuracy improvement with 46 trunk depth and 69 forward FLOPs comparing to ResNet-200. 
The experiment also demonstrates that our network is robust against noisy labels.", "Recognizing fine-grained categories (e.g., bird species) is difficult due to the challenges of discriminative region localization and fine-grained feature learning. Existing approaches predominantly solve these challenges independently, while neglecting the fact that region detection and fine-grained feature learning are mutually correlated and thus can reinforce each other. In this paper, we propose a novel recurrent attention convolutional neural network (RA-CNN) which recursively learns discriminative region attention and region-based feature representation at multiple scales in a mutual reinforced way. The learning at each scale consists of a classification sub-network and an attention proposal sub-network (APN). The APN starts from full images, and iteratively generates region attention from coarse to fine by taking previous prediction as a reference, while the finer scale network takes as input an amplified attended region from previous scale in a recurrent way. The proposed RA-CNN is optimized by an intra-scale classification loss and an inter-scale ranking loss, to mutually learn accurate region attention and fine-grained representation. RA-CNN does not need bounding box part annotations and can be trained end-to-end. We conduct comprehensive experiments and show that RA-CNN achieves the best performance in three fine-grained tasks, with relative accuracy gains of 3.3 , 3.7 , 3.8 , on CUB Birds, Stanford Dogs and Stanford Cars, respectively.", "Inspired by recent work in machine translation and object detection, we introduce an attention based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. 
We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr9k, Flickr30k and MS COCO.", "", "Modern deep neural network-based object detection methods typically classify candidate proposals using their interior features. However, global and local surrounding contexts that are believed to be valuable for object detection are not fully exploited by existing methods yet. In this work, we take a step towards understanding what is a robust practice to extract and utilize contextual information to facilitate object detection in practice. Specifically, we consider the following two questions: “how to identify useful global contextual information for detecting a certain object?” and “how to exploit local context surrounding a proposal for better inferring its contents?” We provide preliminary answers to these questions through developing a novel attention to context convolution neural network (AC-CNN)-based object detection model. AC-CNN effectively incorporates global and local contextual information into the region-based CNN (e.g., fast R-CNN and faster R-CNN) detection framework and provides better object detection performance. It consists of one attention-based global contextualized (AGC) subnetwork and one multi-scale local contextualized (MLC) subnetwork. To capture global context, the AGC subnetwork recurrently generates an attention map for an input image to highlight useful global contextual locations, through multiple stacked long short-term memory layers. For capturing surrounding local context, the MLC subnetwork exploits both the inside and outside contextual information of each specific proposal at multiple scales. The global and local context are then fused together for making the final decision for detection. 
Extensive experiments on PASCAL VOC 2007 and VOC 2012 well demonstrate the superiority of the proposed AC-CNN over well-established baselines.", "A number of recent works have proposed attention models for Visual Question Answering (VQA) that generate spatial maps highlighting image regions relevant to answering the question. In this paper, we argue that in addition to modeling \"where to look\" or visual attention, it is equally important to model \"what words to listen to\" or question attention. We present a novel co-attention model for VQA that jointly reasons about image and question attention. In addition, our model reasons about the question (and consequently the image via the co-attention mechanism) in a hierarchical fashion via a novel 1-dimensional convolution neural networks (CNN). Our model improves the state-of-the-art on the VQA dataset from 60.3 to 60.5 , and from 61.6 to 63.3 on the COCO-QA dataset. By using ResNet, the performance is further improved to 62.1 for VQA and 65.4 for COCO-QA.", "We propose augmenting deep neural networks with an attention mechanism for the visual object detection task. As perceiving a scene, humans have the capability of multiple fixation points, each attended to scene content at different locations and scales. However, such a mechanism is missing in the current state-of-the-art visual object detection methods. Inspired by the human vision system, we propose a novel deep network architecture that imitates this attention mechanism. As detecting objects in an image, the network adaptively places a sequence of glimpses of different shapes at different locations in the image. Evidences of the presence of an object and its location are extracted from these glimpses, which are then fused for estimating the object class and bounding box coordinates. Due to lacks of ground truth annotations of the visual attention mechanism, we train our network using a reinforcement learning algorithm with policy gradients. 
Experiment results on standard object detection benchmarks show that the proposed network consistently outperforms the baseline networks that does not model the attention mechanism." ] }
1903.03993
2922482459
Process-mining techniques aim to use event data about past executions to gain insight into how processes are executed. While these techniques have proven very valuable, they are less successful in reaching their goal if the process is flexible and, hence, events can potentially occur in any order. Furthermore, information systems can record events at a very low level, which does not match the high-level concepts known at the business level. Without abstracting sequences of events to high-level concepts, the results of applying process mining (e.g., discovered models) easily become very complex and difficult to interpret, which ultimately means that they are of little use. A large body of research exists on event abstraction, but typically a large amount of domain knowledge is required to be fed in, which is often not readily available. Other abstraction techniques are unsupervised, which gives lower accuracy. This paper puts forward a technique that requires only limited domain knowledge that can be easily provided. Traces are divided into sessions, and each session is abstracted as a single high-level activity execution. The abstraction is based on a combination of automatic clustering and visualization methods. The technique was assessed on two case studies that evidently exhibit a large amount of behavior. The results clearly illustrate the benefits of the abstraction for conveying knowledge to stakeholders.
Several approaches map events to higher-level activities based on some process documentation @cite_12 @cite_16 @cite_18 , using log-replay techniques and solving constraint-satisfaction problems. The idea of replaying logs onto partial models is also present in @cite_7 : the input is a set of models of the life cycles of the high-level activities, where each life-cycle step is manually mapped to low-level events. @cite_2 relies on the provision of one Markov model, where each Markov-model transition is a different high-level activity; in turn, each transition is broken down into a new Markov model in which the low-level events are modelled. @cite_11 assumes that process analysts provide a probabilistic process model with the high-level activities, along with a probabilistic mapping between low-level events and high-level activities; it returns an enumeration of all potential interpretations of each log trace in terms of high-level activities, ranked by their respective likelihood. In @cite_14 , the authors propose a supervised abstraction technique that is applicable in those cases in which annotations with the high-level interpretations of the low-level events are available for a subset of the traces.
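The session-based technique summarized in the abstract above (split each trace into sessions by inactivity gaps, then abstract each session as one high-level activity via clustering) can be sketched roughly as follows. The gap threshold, event vocabulary, centroids, and high-level labels are illustrative assumptions; the paper's actual pipeline combines automatic clustering with visualization methods:

```python
from collections import Counter

def split_sessions(trace, gap):
    """Split a timestamped trace [(t, event), ...] into sessions: a new
    session starts whenever the idle time between events exceeds `gap`."""
    sessions, current = [], [trace[0]]
    for prev, cur in zip(trace, trace[1:]):
        if cur[0] - prev[0] > gap:
            sessions.append(current)
            current = []
        current.append(cur)
    sessions.append(current)
    return sessions

def featurize(session, vocab):
    # Represent a session by its event-frequency vector over a fixed vocabulary.
    counts = Counter(e for _, e in session)
    return [counts[v] for v in vocab]

def abstract_trace(trace, gap, centroids, vocab, labels):
    """Replace each session with the high-level label of its nearest centroid,
    yielding an abstracted trace of high-level activity executions."""
    out = []
    for s in split_sessions(trace, gap):
        f = featurize(s, vocab)
        best = min(range(len(centroids)),
                   key=lambda i: sum((a - b) ** 2
                                     for a, b in zip(f, centroids[i])))
        out.append(labels[best])
    return out
```

In practice the centroids would come from clustering the session feature vectors themselves, rather than being supplied by hand as in this sketch.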
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_7", "@cite_2", "@cite_16", "@cite_12", "@cite_11" ], "mid": [ "2028079011", "2462439317", "2507315788", "2048743347", "169578854", "2590654759", "2242651631" ], "abstract": [ "Nowadays, business processes are increasingly supported by IT services that produce massive amounts of event data during the execution of a process. This event data can be used to analyze the process using process mining techniques to discover the real process, measure conformance to a given process model, or to enhance existing models with performance information. While it is essential to map the produced events to activities of a given process model for conformance analysis and process model annotation, it is also an important step for the straightforward interpretation of process discovery results. In order to accomplish this mapping with minimal manual effort, we developed a semi-automatic approach that maps events to activities using the solution of a corresponding constraint satisfaction problem. The approach extracts behavioral profiles from both the log and the model to build constraints to efficiently reduce the number of possible mappings. The evaluation with an industry process model collection and simulated event logs demonstrates the effectiveness of the approach and its robustness towards non-conforming execution logs.", "Process mining techniques focus on extracting insight in processes from event logs. In many cases, events recorded in the event log are too fine-grained, causing process discovery algorithms to discover incomprehensible process models or process models that are not representative of the event log. We show that when process discovery algorithms are only able to discover an unrepresentative process model from a low-level event log, structure in the process can in some cases still be discovered by first abstracting the event log to a higher level of granularity. 
This gives rise to the challenge to bridge the gap between an original low-level event log and a desired high-level perspective on this log, such that a more structured or more comprehensible process model can be discovered. We show that supervised learning can be leveraged for the event abstraction task when annotations with high-level interpretations of the low-level events are available for a subset of the sequences (i.e., traces). We present a method to generate feature vector representations of events based on XES extensions, and describe an approach to abstract events in an event log with Condition Random Fields using these event features. Furthermore, we propose a sequence-focused metric to evaluate supervised event abstraction results that fits closely to the tasks of process discovery and conformance checking. We conclude this paper by demonstrating the usefulness of supervised event abstraction for obtaining more structured and or more comprehensible process models using both real life event data and synthetic event data.", "Process mining techniques analyze processes based on event data. A crucial assumption for process analysis is that events correspond to occurrences of meaningful activities. Often, low-level events recorded by information systems do not directly correspond to these. Abstraction methods, which provide a mapping from the recorded events to activities recognizable by process workers, are needed. Existing supervised abstraction methods require a full model of the entire process as input and cannot handle noise. This paper proposes a supervised abstraction method based on behavioral activity patterns that capture domain knowledge on the relation between activities and events. Through an alignment between the activity patterns and the low-level event logs an abstracted event log is obtained. Events in the abstracted event log correspond to instantiations of recognizable activities. 
The method is evaluated with domain experts of a Norwegian hospital using an event log from their digital whiteboard system. The evaluation shows that state-of-the art process mining methods provide valuable insights on the usage of the system when using the abstracted event log, but fail when using the original lower level event log.", "Currently there is a gap between the high level of abstraction at which business processes are modelled and the low level nature of the events that are recorded during process execution. When applying process mining techniques, it is possible to discover the logic behind low-level events but it is difficult to determine the relationship between those low-level events and the high-level activities in a given process model. In this work, we introduce a hierarchical Markov model to capture both the high-level behaviour of activities and the low-level behaviour of events. We also develop an expectation-maximisation technique to discover that kind of hierarchical model from a given event log and a high-level description of the business process. We use this technique to understand the behaviour of agents in business processes, from the control-flow perspective and from the organisational perspective as well. Using an agent-based simulation platform (AOR), we implemented a purchasing process and generated an event log in order to illustrate the benefits of the proposed approach and to compare the results with existing process mining techniques, namely the ones that are available in the ProM framework.", "While the maturity of process mining algorithms increases and more process mining tools enter the market, process mining projects still face the problem of different levels of abstraction when comparing events with modeled business activities. Current approaches for event log abstraction most often try to abstract from the events in an automated way which does not capture the required domain knowledge to fit business activities. 
This can lead to misinterpretation of discovered process models. We developed an approach which aims to abstract an event log to the same abstraction level which is needed by the business. We use domain knowledge extracted from existing process documentation in order to automatically match events and activities. Our proposed abstraction approach is able to deal with n:m relations between events and activities and also supports concurrency. We evaluated our approach in a case study with a German IT outsourcing company.", "Nowadays, business processes are increasingly supported by IT services that produce massive amounts of event data during process execution. Aiming at a better process understanding and improvement, this event data can be used to analyze processes using process mining techniques. Process models can be automatically discovered and the execution can be checked for conformance to specified behavior. Moreover, existing process models can be enhanced and annotated with valuable information, for example for performance analysis. While the maturity of process mining algorithms is increasing and more tools are entering the market, process mining projects still face the problem of different levels of abstraction when comparing events with modeled business activities. Mapping the recorded events to activities of a given process model is essential for conformance checking, annotation and understanding of process discovery results. Current approaches try to abstract from events in an automated way that does not capture the required domain knowledge to fit business activities. Such techniques can be a good way to quickly reduce complexity in process discovery. Yet, they fail to enable techniques like conformance checking or model annotation, and potentially create misleading process discovery results by not using the known business terminology. In this thesis, we develop approaches that abstract an event log to the same level that is needed by the business. 
Typically, this abstraction level is defined by a given process model. Thus, the goal of this thesis is to match events from an event log to activities in a given process model. To accomplish this goal, behavioral and linguistic aspects of process models and event logs as well as domain knowledge captured in existing process documentation are taken into account to build semiautomatic matching approaches. The approaches establish a pre-processing for every available process mining technique that produces or annotates a process model, thereby reducing the manual effort for process analysts. While each of the presented approaches can be used in isolation, we also introduce a general framework for the integration of different matching approaches. The approaches have been evaluated in case studies with industry and using a large industry process model collection and simulated event logs. The evaluation demonstrates the effectiveness and efficiency of the approaches and their robustness towards nonconforming execution logs.", "We consider the scenario where the executions of different business processes are traced into a log, where each trace describes a process instance as a sequence of low-level events representing basic kinds of operations. In this context, we address a novel problem: given a description of the processes' behaviors in terms of high-level activities instead of low-level events, and in the presence of uncertainty in the mapping between events and activities, find all the interpretations of each trace @math . Specifically, an interpretation is a pair @math that provides a two-level \"explanation\" for @math : @math is a sequence of activities that may have triggered the events in @math , and W is a process whose model admits @math . To solve this problem, we propose a probabilistic framework representing \"consistent\" @math 's interpretations, where each interpretation is associated with a probability score." ] }
1903.03968
2921127630
Visual localization is one of the primary capabilities for mobile robots. Long-term visual localization in real time is particularly challenging, in which the robot is required to efficiently localize itself using visual data where appearance may change significantly over time. In this paper, we propose a cloud-based visual localization system targeting long-term localization in real time. On the robot, we employ two estimators to achieve accurate and real-time performance. One is a sliding-window based visual inertial odometry, which integrates constraints from consecutive observations and self-motion measurements, as well as the constraints induced by localization on the cloud. This estimator builds a local visual submap as the virtual observation, which is then sent to the cloud as new localization constraints. The other is a delayed-state Extended Kalman Filter that fuses the pose of the robot localized from the cloud, the local odometry, and the high-frequency inertial measurements. On the cloud, we propose a longer sliding-window based localization method to aggregate the virtual observations for a larger field of view, leading to more robust alignment between virtual observations and the map. Under this architecture, the robot can achieve drift-free and real-time localization using onboard resources, even over a network with limited bandwidth, high latency, and packet loss, which enables autonomous navigation in real-world environments. We evaluate the effectiveness of our system on a dataset with challenging seasonal and illumination variations. We further validate the robustness of the system under challenging network conditions.
VIO uses low-cost visual sensors aided by inertial instruments to provide precise, high-frequency relative pose estimation along the robot trajectory. With the development of VIO running on onboard resources, real-time positional feedback for autonomous robot navigation becomes possible. Generally, there are two branches of methods in this area. The first branch utilizes a nonlinear filter to estimate the pose, which is efficient and lightweight, and thus suitable for mobile platforms with limited computational resources @cite_24 @cite_18 . The other branch leverages nonlinear optimization over local keyframes, i.e. local bundle adjustment @cite_21 @cite_12 . Optimization-based methods can achieve higher accuracy than filter-based solutions, but require more computational resources when the sliding window is long. However, the positional feedback from both branches is only reliable over short periods, as VIO drifts over long durations, calling for correction from localization.
{ "cite_N": [ "@cite_24", "@cite_18", "@cite_21", "@cite_12" ], "mid": [ "2274359774", "2118223742", "2091790851", "" ], "abstract": [ "In this paper, we present a square-root inverse sliding window filter (SR-ISWF) for vision-aided inertial navigation systems (VINS). While regular inverse filters suffer from numerical issues, employing their square-root equivalent enables the usage of single-precision number representations, thus achieving considerable speed-ups as compared to double-precision alternatives on resource-constrained mobile platforms. Besides a detailed description of the SR-ISWF for VINS, which focuses on the numerical procedures that enable exploiting the problem’s structure for gaining in efficiency, this paper presents a thorough validation of the algorithm’s processing requirements and achieved accuracy. In particular, experiments are conducted using a commercial-grade cell phone, where the proposed algorithm is shown to achieve the same level of estimation accuracy, when compared to state-of-the-art VINS algorithms, with significantly higher speed.", "In this paper, we present an extended Kalman filter (EKF)-based algorithm for real-time vision-aided inertial navigation. The primary contribution of this work is the derivation of a measurement model that is able to express the geometric constraints that arise when a static feature is observed from multiple camera poses. This measurement model does not require including the 3D feature position in the state vector of the EKF and is optimal, up to linearization errors. The vision-aided inertial navigation algorithm we propose has computational complexity only linear in the number of features, and is capable of high-precision pose estimation in large-scale real-world environments. 
The performance of the algorithm is demonstrated in extensive experimental results, involving a camera/IMU system localizing within an urban area.", "Combining visual and inertial measurements has become popular in mobile robotics, since the two sensing modalities offer complementary characteristics that make them the ideal choice for accurate visual-inertial odometry or simultaneous localization and mapping (SLAM). While historically the problem has been addressed with filtering, advancements in visual estimation suggest that nonlinear optimization offers superior accuracy, while still tractable in complexity thanks to the sparsity of the underlying problem. Taking inspiration from these findings, we formulate a rigorously probabilistic cost function that combines reprojection errors of landmarks and inertial terms. The problem is kept tractable and thus ensuring real-time operation by limiting the optimization to a bounded window of keyframes through marginalization. Keyframes may be spaced in time by arbitrary intervals, while still related by linearized inertial terms. We present evaluation results on complementary datasets recorded with our custom-built stereo visual-inertial hardware that accurately synchronizes accelerometer and gyroscope measurements with imagery. A comparison of both a stereo and monocular version of our algorithm with and without online extrinsics estimation is shown with respect to ground truth. Furthermore, we compare the performance to an implementation of a state-of-the-art stochastic cloning sliding-window filter. This competitive reference implementation performs tightly coupled filtering-based visual-inertial odometry. While our approach declaredly demands more computation, we show its superior performance in terms of accuracy.", "" ] }
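To make the filter-based branch above concrete, here is a minimal, self-contained sketch of the idea in one dimension: inertial-style predictions propagated through a constant-velocity model, corrected by position-only fixes standing in for visual measurements. The state layout, noise values, and data are invented for illustration and are not taken from the cited systems.

```python
import numpy as np

def ekf_predict(x, P, u, dt, q):
    """Propagate state [position, velocity] with an acceleration input u."""
    F = np.array([[1.0, dt],
                  [0.0, 1.0]])          # constant-velocity transition
    B = np.array([0.5 * dt**2, dt])     # acceleration enters via double integration
    x = F @ x + B * u
    P = F @ P @ F.T + q * np.eye(2)     # inflate covariance with process noise
    return x, P

def ekf_update(x, P, z, r):
    """Correct with a position-only measurement z (a stand-in for a visual fix)."""
    H = np.array([[1.0, 0.0]])          # we observe position only
    S = H @ P @ H.T + r                 # innovation covariance (1x1)
    K = P @ H.T / S                     # Kalman gain
    x = x + (K * (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Toy run: the "robot" moves at 1 m/s; position fixes arrive every step.
x, P = np.zeros(2), np.eye(2)
for k in range(1, 51):
    x, P = ekf_predict(x, P, u=0.0, dt=0.1, q=1e-3)
    x, P = ekf_update(x, P, z=0.1 * k + 0.01, r=0.05)
# x[0] tracks the position (~5.0) and x[1] recovers the velocity (~1.0 m/s).
```

The optimization-based branch would instead keep a window of past states and minimize all residuals jointly, trading this per-step cheapness for accuracy.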
1903.03968
2921127630
Visual localization is one of the primary capabilities for mobile robots. Long-term visual localization in real time is particularly challenging, in which the robot is required to efficiently localize itself using visual data where appearance may change significantly over time. In this paper, we propose a cloud-based visual localization system targeting long-term localization in real time. On the robot, we employ two estimators to achieve accurate and real-time performance. One is a sliding-window based visual inertial odometry, which integrates constraints from consecutive observations and self-motion measurements, as well as the constraints induced by localization on the cloud. This estimator builds a local visual submap as the virtual observation, which is then sent to the cloud as new localization constraints. The other is a delayed-state Extended Kalman Filter that fuses the pose of the robot localized from the cloud, the local odometry, and the high-frequency inertial measurements. On the cloud, we propose a longer sliding-window based localization method to aggregate the virtual observations for a larger field of view, leading to more robust alignment between virtual observations and the map. Under this architecture, the robot can achieve drift-free and real-time localization using onboard resources, even over a network with limited bandwidth, high latency, and packet loss, which enables autonomous navigation in real-world environments. We evaluate the effectiveness of our system on a dataset with challenging seasonal and illumination variations. We further validate the robustness of the system under challenging network conditions.
Long-term visual localization remains an open question in the community, as feature matching involves many outliers. Some studies resort to the more robust laser map for localization. In @cite_17 , multi-session laser and visual data are used to optimize the laser map and extract a salient and stable subset for visual localization. @cite_4 leverages stable depth information recovered from stereo cameras for registration against a prior laser map. Another way to improve localization performance in changing environments is to aggregate the variation over the long term. In @cite_0 @cite_10 @cite_27 @cite_13 , a topological graph is proposed to manage experiences from multiple sessions of data, where nodes encode sensory data and edges encode relative transformations. Whenever localization fails, measurements are added to the map to enrich its diversity, which helps future localization. Note that both classes of methods call for intensive computation to match features in a large map and to solve sliding-window based optimization for more robust pose estimation.
{ "cite_N": [ "@cite_13", "@cite_4", "@cite_0", "@cite_27", "@cite_10", "@cite_17" ], "mid": [ "2560863959", "2909955272", "2561611286", "", "2795158114", "2964307318" ], "abstract": [ "In this paper, we present an online landmark selection method for distributed long-term visual localization systems in bandwidth-constrained environments. Sharing a common map for online localization provides a fleet of autonomous vehicles with the possibility to maintain and access a consistent map source, and therefore reduce redundancy while increasing efficiency. However, connectivity over a mobile network imposes strict bandwidth constraints and thus the need to minimize the amount of exchanged data. The wide range of varying appearance conditions encountered during long-term visual localization offers the potential to reduce data usage by extracting only those visual cues which are relevant at the given time. Motivated by this, we propose an unsupervised method of adaptively selecting landmarks according to how likely these landmarks are to be observable under the prevailing appearance condition. The ranking function this selection is based upon exploits landmark co-observability statistics collected in past traversals through the mapped area. Evaluation is performed over different outdoor environments, large time-scales and varying appearance conditions, including the extreme transition from day-time to night-time, demonstrating that with our appearance-dependent selection method, we can significantly reduce the amount of landmarks used for localization while maintaining or even improving the localization performance.", "As simultaneous localization and mapping (SLAM) techniques have flourished with the advent of 3D Light Detection and Ranging (LiDAR) sensors, accurate 3D maps are readily available. Many researchers turn their attention to localization in a previously acquired 3D map. 
In this paper, we propose a novel and lightweight camera-only visual positioning algorithm that involves localization within prior 3D LiDAR maps. We aim to achieve the consumer level global positioning system (GPS) accuracy using vision within the urban environment, where GPS signal is unreliable. Via exploiting a stereo camera, depth from the stereo disparity map is matched with 3D LiDAR maps. A full six degree of freedom (DOF) camera pose is estimated via minimizing depth residual. Powered by visual tracking that provides a good initial guess for the localization, the proposed depth residual is successfully applied for camera pose estimation. Our method runs online, as the average localization error is comparable to ones resulting from state-of-the-art approaches. We validate the proposed method as a stand-alone localizer using KITTI dataset and as a module in the SLAM framework using our own dataset.", "Vision-based, route-following algorithms enable autonomous robots to repeat manually taught paths over long distances using inexpensive vision sensors. However, these methods struggle with long-term, outdoor operation due to the challenges of environmental appearance change caused by lighting, weather, and seasons. While techniques exist to address appearance change by using multiple experiences over different environmental conditions, they either provide topological-only localization, require several manually taught experiences in different conditions, or require extensive offline mapping to produce metric localization. For real-world use, we would like to localize metrically to a single manually taught route and gather additional visual experiences during autonomous operations. 
Accordingly, we propose a novel multi-experience localization (MEL) algorithm developed specifically for route-following applications; it provides continuous, six-degree-of-freedom (6DoF) localization with relative uncertainty to a privileged (manually taught) path using several experiences simultaneously. We validate our algorithm through two experiments: i) an offline performance analysis on a 9km subset of a challenging 27km route-traversal dataset and ii) an online field trial where we demonstrate autonomy on a small 250m loop over the course of a sunny day. Both exhibit significant appearance change due to lighting variation. Through these experiments we show that safe localization can be achieved by bridging the appearance gap.", "", "Long term mapping and localization are the primary components for mobile robots in real world application deployment, of which the crucial challenge is the robustness and stability. In this paper, we introduce a topological local-metric framework (TLF), aiming at dealing with environmental changes, erroneous measurements and achieving constant complexity. TLF organizes the sensor data collected by the robot in a topological graph, of which the geometry is only encoded in the edge, i.e. the relative poses between adjacent nodes, relaxing the global consistency to local consistency. Therefore the TLF is more robust to unavoidable erroneous measurements from sensor information matching since the error is constrained in the local. Based on TLF, as there is no global coordinate, we further propose the localization and navigation algorithms by switching across multiple local metric coordinates. Besides, a lifelong memorizing mechanism is presented to memorize the environmental changes in the TLF with constant complexity, as no global optimization is required. 
In experiments, the framework and algorithms are evaluated on 21-session data collected by stereo cameras, which are sensitive to illumination, and compared with the state-of-the-art global consistent framework. The results demonstrate that TLF can achieve localization accuracy similar to that of the global consistent framework, but brings higher robustness with lower cost. The localization performance can also be improved across sessions because of the memorizing mechanism. Finally, equipped with TLF, the robot navigates itself in a 1 km session autonomously.", "Long-term visual localization in outdoor environments is a challenging problem, especially faced with the cross-seasonal, bi-directional tasks and changing environment. In this paper we propose a novel visual inertial localization framework that localizes against the LiDAR-built map. Based on the geometry information of the laser map, a hybrid bundle adjustment framework is proposed, which estimates the poses of the cameras with respect to the prior laser map as well as optimizes the state variables of the online visual inertial odometry system simultaneously. For more accurate cross-modal data association, the laser map is optimized using multi-session laser and visual data to extract the salient and stable subset for visual localization. To validate the efficiency of the proposed method, we collect data in the south part of our campus in different seasons, along the same and opposite-direction route. In all sessions of localization data, our proposed method gives satisfactory results, and shows the superiority of the hybrid bundle adjustment and map optimization (https://www.youtube.com/watch?v=vHAZtv2sBC4)." ] }
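The experience-based mapping idea above can be sketched in a few lines: a graph whose nodes hold appearance descriptors and whose edges hold relative transforms, where a failed localization triggers storing the new view as an additional experience. The descriptor matching, threshold, and payloads below are toy assumptions, not the cited implementations.

```python
import numpy as np

class ExperienceGraph:
    """Toy topological map: nodes hold appearance descriptors, edges hold
    relative transforms (here a 1-D offset). Thresholds are illustrative."""

    def __init__(self, match_threshold=0.8):
        self.nodes = []           # list of (descriptor, payload)
        self.edges = {}           # (i, j) -> relative transform
        self.match_threshold = match_threshold

    def localize(self, descriptor):
        """Return the index of the best-matching node, or None on failure."""
        best, best_sim = None, self.match_threshold
        for i, (d, _) in enumerate(self.nodes):
            sim = float(d @ descriptor /
                        (np.linalg.norm(d) * np.linalg.norm(descriptor)))
            if sim > best_sim:
                best, best_sim = i, sim
        return best

    def add_experience(self, descriptor, payload, linked_to=None, rel=None):
        """On localization failure, enrich the map with a new experience."""
        self.nodes.append((descriptor, payload))
        j = len(self.nodes) - 1
        if linked_to is not None:
            self.edges[(linked_to, j)] = rel
        return j

# A "summer" view is mapped; a dissimilar "winter" view of the same place
# fails to localize and is stored as a new experience linked to it.
g = ExperienceGraph()
summer = np.array([1.0, 0.0, 0.0])
winter = np.array([0.0, 1.0, 0.0])
i = g.add_experience(summer, "place A, summer")
assert g.localize(winter) is None   # appearance change breaks matching
j = g.add_experience(winter, "place A, winter", linked_to=i, rel=0.0)
assert g.localize(winter) == j      # future winter queries now succeed
```

This mirrors the mechanism in the text: each failure adds diversity to the map, so later traversals under the same conditions localize successfully.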
1903.03968
2921127630
Visual localization is one of the primary capabilities for mobile robots. Long-term visual localization in real time is particularly challenging, in which the robot is required to efficiently localize itself using visual data where appearance may change significantly over time. In this paper, we propose a cloud-based visual localization system targeting long-term localization in real time. On the robot, we employ two estimators to achieve accurate and real-time performance. One is a sliding-window based visual inertial odometry, which integrates constraints from consecutive observations and self-motion measurements, as well as the constraints induced by localization on the cloud. This estimator builds a local visual submap as the virtual observation, which is then sent to the cloud as new localization constraints. The other is a delayed-state Extended Kalman Filter that fuses the pose of the robot localized from the cloud, the local odometry, and the high-frequency inertial measurements. On the cloud, we propose a longer sliding-window based localization method to aggregate the virtual observations for a larger field of view, leading to more robust alignment between virtual observations and the map. Under this architecture, the robot can achieve drift-free and real-time localization using onboard resources, even over a network with limited bandwidth, high latency, and packet loss, which enables autonomous navigation in real-world environments. We evaluate the effectiveness of our system on a dataset with challenging seasonal and illumination variations. We further validate the robustness of the system under challenging network conditions.
Compared with local features, global image features have been shown to be more robust to appearance change. In @cite_22 @cite_7 , topological localization is achieved through place recognition, with satisfactory performance. Despite the faster computation, this class of methods fails to provide metric localization and is thus insufficient as positional feedback for navigation.
{ "cite_N": [ "@cite_22", "@cite_7" ], "mid": [ "2284029970", "2110405746" ], "abstract": [ "Visual place recognition is a challenging problem due to the vast range of ways in which the appearance of real-world places can vary. In recent years, improvements in visual sensing capabilities, an ever-increasing focus on long-term mobile robot autonomy, and the ability to draw on state-of-the-art research in other disciplines—particularly recognition in computer vision and animal navigation in neuroscience—have all contributed to significant advances in visual place recognition systems. This paper presents a survey of the visual place recognition research landscape. We start by introducing the concepts behind place recognition—the role of place recognition in the animal kingdom, how a “place” is defined in a robotics context, and the major components of a place recognition system. Long-term robot operations have revealed that changing appearance can be a significant factor in visual place recognition failure; therefore, we discuss how place recognition solutions can implicitly or explicitly account for appearance change within the environment. Finally, we close with a discussion on the future of visual place recognition, in particular with respect to the rapid advances being made in the related fields of deep learning, semantic scene understanding, and video description.", "Learning and then recognizing a route, whether travelled during the day or at night, in clear or inclement weather, and in summer or winter is a challenging task for state of the art algorithms in computer vision and robotics. In this paper, we present a new approach to visual navigation under changing conditions dubbed SeqSLAM. Instead of calculating the single location most likely given a current image, our approach calculates the best candidate matching location within every local navigation sequence. Localization is then achieved by recognizing coherent sequences of these “local best matches”. 
This approach removes the need for global matching performance by the vision front-end - instead it must only pick the best match within any short sequence of images. The approach is applicable over environment changes that render traditional feature-based techniques ineffective. Using two car-mounted camera datasets we demonstrate the effectiveness of the algorithm and compare it to one of the most successful feature-based SLAM algorithms, FAB-MAP. The perceptual change in the datasets is extreme; repeated traverses through environments during the day and then in the middle of the night, at times separated by months or years and in opposite seasons, and in clear weather and extremely heavy rain. While the feature-based method fails, the sequence-based algorithm is able to match trajectory segments at 100% precision with recall rates of up to 60%." ] }
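A toy version of the SeqSLAM-style matching described above looks like this: rather than matching a single global descriptor, slide a short query sequence along the database and take the offset with the smallest summed distance. The descriptors and noise level below are synthetic, chosen only to show why a sequence disambiguates where a single frame might not.

```python
import numpy as np

def sequence_match(db, query):
    """Slide the whole query sequence along the database and return the
    starting offset with the smallest summed descriptor distance."""
    n, m = len(db), len(query)
    best_off, best_cost = None, np.inf
    for off in range(n - m + 1):
        cost = sum(np.linalg.norm(db[off + k] - query[k]) for k in range(m))
        if cost < best_cost:
            best_off, best_cost = off, cost
    return best_off

rng = np.random.default_rng(0)
db = rng.normal(size=(100, 16))                            # 100 stored frame descriptors
query = db[40:45] + rng.normal(scale=0.3, size=(5, 16))    # noisy revisit of frames 40..44
assert sequence_match(db, query) == 40
```

The summed cost over five frames separates the true alignment from chance matches far more sharply than any single-frame distance would, which is the core of the sequence-based idea.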
1903.03968
2921127630
Visual localization is one of the primary capabilities for mobile robots. Long-term visual localization in real time is particularly challenging, in which the robot is required to efficiently localize itself using visual data where appearance may change significantly over time. In this paper, we propose a cloud-based visual localization system targeting long-term localization in real time. On the robot, we employ two estimators to achieve accurate and real-time performance. One is a sliding-window based visual inertial odometry, which integrates constraints from consecutive observations and self-motion measurements, as well as the constraints induced by localization on the cloud. This estimator builds a local visual submap as the virtual observation, which is then sent to the cloud as new localization constraints. The other is a delayed-state Extended Kalman Filter that fuses the pose of the robot localized from the cloud, the local odometry, and the high-frequency inertial measurements. On the cloud, we propose a longer sliding-window based localization method to aggregate the virtual observations for a larger field of view, leading to more robust alignment between virtual observations and the map. Under this architecture, the robot can achieve drift-free and real-time localization using onboard resources, even over a network with limited bandwidth, high latency, and packet loss, which enables autonomous navigation in real-world environments. We evaluate the effectiveness of our system on a dataset with challenging seasonal and illumination variations. We further validate the robustness of the system under challenging network conditions.
Cloud computing provides a potential solution for real-time, resource-aware SLAM and localization systems, where intensive computation can be offloaded to high-end servers, leaving lightweight algorithms running on the robots. @cite_20 uses keyframes to track the camera pose locally and sends keyframes to the server for global localization. Single-frame localization suffers from a large percentage of outliers when appearance variation exists, and thus yields unstable estimates on the robot. @cite_11 also utilizes single-image data to infer loop closures, and is therefore affected by a similar problem. In addition, both methods send images over the network, which may require high bandwidth. Some methods designed for cooperative systems offer insights into the resource-aware problem. @cite_1 proposes an efficient solution to the cooperative mapping problem by introducing augmented variables to parallelize the computation. To decrease the bandwidth requirement, @cite_6 employs sparsification methods to compress the edges, which is also utilized in our system.
{ "cite_N": [ "@cite_1", "@cite_6", "@cite_20", "@cite_11" ], "mid": [ "2897551763", "", "1552672384", "2000462596" ], "abstract": [ "In this paper, we address the problem of cooperative mapping (CM) using datasets collected by multiple users at different times, when the transformation between the users’ starting poses is unknown. Specifically, we formulate CM as a constrained optimization problem, in which each user's independently estimated trajectory and map are merged together by imposing geometric constraints between commonly observed point and line features. Additionally, we provide an algorithm for efficiently solving the CM problem, by taking advantage of its structure. The proposed solution is proven to be batch-least-squares (BLS) optimal over all users’ datasets, while it is less memory demanding and lends itself to parallel implementations. In particular, our solution is shown to be faster than the standard BLS solution, when the overlap between the users’ data is small. Furthermore, our algorithm is resource-aware as it is able to consistently trade accuracy for lower processing cost, by retaining only an informative subset of the common-feature constraints. Experimental results based on visual and inertial measurements collected from multiple users within large buildings are used to assess the performance of the proposed CM algorithm.", "", "Recent improvements in image-based localization have produced powerful methods that scale up to the massive 3D models emerging from modern Structure-from-Motion techniques. However, these approaches are too resource intensive to run in real-time, let alone to be implemented on mobile devices. In this paper, we propose to combine the scalability of such a global localization system running on a server with the speed and precision of a local pose tracker on a mobile device. Our approach is both scalable and drift-free by design and eliminates the need for loop closure. 
We propose two strategies to combine the information provided by local tracking and global localization. We evaluate our system on a large-scale dataset of the historic inner city of Aachen where it achieves interactive framerates at a localization error of less than 50cm while using less than 5MB of memory on the mobile device.", "This paper presents an architecture, protocol, and parallel algorithms for collaborative 3D mapping in the cloud with low-cost robots. The robots run a dense visual odometry algorithm on a smartphone-class processor. Key-frames from the visual odometry are sent to the cloud for parallel optimization and merging with maps produced by other robots. After optimization the cloud pushes the updated poses of the local key-frames back to the robots. All processes are managed by Rapyuta, a cloud robotics framework that runs in a commercial data center. This paper includes qualitative visualization of collaboratively built maps, as well as quantitative evaluation of localization accuracy, bandwidth usage, processing speeds, and map storage." ] }
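One recurring ingredient in such cloud architectures is compensating for network latency: the server's localization result refers to a past robot pose, so the odometry accumulated since that stamp must be chained on top of it. Below is a minimal SE(2) sketch of that composition; the frame names and numbers are illustrative, not the cited systems' conventions.

```python
import numpy as np

def se2(x, y, theta):
    """Homogeneous 2-D rigid transform (SE(2))."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0.0, 0.0, 1.0]])

def apply_delayed_fix(T_map_k, T_odom_k, T_odom_now):
    """The cloud returns a global pose for a *past* stamp k because of
    network latency. Chain the odometry increment accumulated since k
    onto that delayed fix:
        T_map_now = T_map_k @ inv(T_odom_k) @ T_odom_now
    """
    return T_map_k @ np.linalg.inv(T_odom_k) @ T_odom_now

# Odometry believed the robot was at (1.0, 0.0) at stamp k; the delayed
# cloud fix says it was actually at (1.2, 0.1). It moved 1 m forward since k.
T_odom_k   = se2(1.0, 0.0, 0.0)
T_odom_now = se2(2.0, 0.0, 0.0)
T_map_k    = se2(1.2, 0.1, 0.0)
T_map_now  = apply_delayed_fix(T_map_k, T_odom_k, T_odom_now)  # robot at (2.2, 0.1)
```

Because only the small odometry increment bridges the delay, the correction stays valid even when the fix arrives seconds late, which is why delayed-state fusion tolerates high-latency links.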
1903.04049
2922262856
Nowadays, spatial data are ubiquitous in various fields of science, such as transportation and the social Web. A recent research direction in analyzing spatial data is to provide means for "exploratory analysis" of such data, where analysts are guided towards interesting options in consecutive analysis iterations. Typically, the guidance component learns the analyst's preferences using her explicit feedback, e.g., picking a spatial point or selecting a region of interest. However, it is often the case that analysts forget, or do not feel it necessary, to explicitly express what they find interesting. Our approach captures implicit feedback on spatial data. It consists of observing mouse moves (as a means of the analyst's interaction) as well as the analyst's explicit interaction with data points, in order to discover interesting spatial regions with dense mouse hovers. In this paper, we define, formalize and explore Interesting Dense Regions (IDRs), which capture analysts' preferences, in order to automatically find interesting spatial highlights. Our approach involves a polygon-based abstraction layer for capturing preferences. Using these IDRs, we highlight points to guide analysts in the analysis process. We discuss the efficiency and effectiveness of our approach through realistic examples and experiments on Airbnb and Yelp datasets.
Information Highlighting. The literature contains a few information highlighting approaches @cite_33 @cite_29 @cite_14 @cite_17 . However, all of these methods are objective, i.e., they assume that the analyst's preferences are given as a constant input and will never change. This limits their usefulness in exploratory analysis scenarios. The only way to provide "spatial guidance" is to account for the evolving and subjective nature of the analyst's feedback. In our approach, the feedback vector is updated over time based on the analyst's implicit feedback.
{ "cite_N": [ "@cite_29", "@cite_14", "@cite_33", "@cite_17" ], "mid": [ "2118016608", "1961845056", "2151177154", "2105552354" ], "abstract": [ "Coordinated view geovisualizations allow users to interactively pick and attend to data observations across multiple views. This is frequently supported by the transient application of a visual effect to an observation during a mouse selection or rollover. This technique, known as highlighting, is typically implemented using a dedicated bright and saturated color to outline observations. In this paper we present a range of possibilities for alternative approaches to color highlighting, beginning with examples from the range of available visual variables and moving beyond those options to other, non-visual variable methods such as the use of lines to connect highlighted observations. We also describe design criteria for highlighting methods that can be used to predict and test the suitability of different approaches, and apply those criteria to our set of proposed methods to identify potential good candidates for implementation in future systems. Next, we present a set of highlighting types that define bas...", "General visualization tools typically require manual specification of views: analysts must select data variables and then choose which transformations and visual encodings to apply. These decisions often involve both domain and visualization design expertise, and may impose a tedious specification process that impedes exploration. In this paper, we seek to complement manual chart construction with interactive navigation of a gallery of automatically-generated visualizations. We contribute Voyager, a mixed-initiative system that supports faceted browsing of recommended charts chosen according to statistical and perceptual measures. We describe Voyager's architecture, motivating design principles, and methods for generating and interacting with visualization recommendations. 
In a study comparing Voyager to a manual visualization specification tool, we find that Voyager facilitates exploration of previously unseen data and leads to increased data variable coverage. We then distill design implications for visualization tools, in particular the need to balance rapid exploration and targeted question-answering.", "Highlighting was the basic viewing control mechanism in computer graphics and visualization to guide users’ attention in reading diagrams, images, graphs and digital texts. As the rapid growth of theory and practice in information visualization, highlighting has extended its role that acts as not only a viewing control, but also an interaction control and a graphic recommendation mechanism in knowledge visualization and visual analytics. In this work, we attempt to give a formal summarization and classification of the existing highlighting methods and techniques that can be applied in Information Visualization, Visual Analytics and Knowledge Visualization. We propose a new three-layer model of highlighting. We discuss the responsibilities of each layer in the different stage of the visual information processing.", "This paper presents scented widgets, graphical user interface controls enhanced with embedded visualizations that facilitate navigation in information spaces. We describe design guidelines for adding visual cues to common user interface widgets such as radio buttons, sliders, and combo boxes and contribute a general software framework for applying scented widgets within applications with minimal modifications to existing source code. We provide a number of example applications and describe a controlled experiment which finds that users exploring unfamiliar data make up to twice as many unique discoveries using widgets imbued with social navigation data. However, these differences equalize as familiarity with the data increases." ] }
1903.04049
2922262856
Nowadays, spatial data are ubiquitous in various fields of science, such as transportation and the social Web. A recent research direction in analyzing spatial data is to provide means for "exploratory analysis" of such data where analysts are guided towards interesting options in consecutive analysis iterations. Typically, the guidance component learns analyst's preferences using her explicit feedback, e.g., picking a spatial point or selecting a region of interest. However, it is often the case that analysts forget or don't feel necessary to explicitly express their feedback in what they find interesting. Our approach captures implicit feedback on spatial data. The approach consists of observing mouse moves (as a means of analyst's interaction) and also the explicit analyst's interaction with data points in order to discover interesting spatial regions with dense mouse hovers. In this paper, we define, formalize and explore Interesting Dense Regions (IDRs) which capture preferences of analysts, in order to automatically find interesting spatial highlights. Our approach involves a polygon-based abstraction layer for capturing preferences. Using these IDRs, we highlight points to guide analysts in the analysis process. We discuss the efficiency and effectiveness of our approach through realistic examples and experiments on Airbnb and Yelp datasets.
Online recommendation approaches can also be considered a form of information highlighting, where recommended items count as highlights. Most recommendation algorithms are space-agnostic and do not take spatial information into account. While a few approaches focus on the spatial dimension @cite_7 @cite_24 @cite_23 , they still lack evolutionary feedback capturing. Moreover, most recommendation methods miss ``result diversification'', i.e., highlights may be ineffective due to overlaps.
{ "cite_N": [ "@cite_24", "@cite_23", "@cite_7" ], "mid": [ "", "2033201131", "1972436494" ], "abstract": [ "", "Recently, result diversification has attracted a lot of attention as a means to improve the quality of results retrieved by user queries. In this paper, we propose a new, intuitive definition of diversity called DisC diversity. A DisC diverse subset of a query result contains objects such that each object in the result is represented by a similar object in the diverse subset and the objects in the diverse subset are dissimilar to each other. We show that locating a minimum DisC diverse subset is an NP-hard problem and provide heuristics for its approximation. We also propose adapting DisC diverse subsets to a different degree of diversification. We call this operation zooming. We present efficient implementations of our algorithms based on the M-tree, a spatial index structure, and experimentally evaluate their performance.", "Recent advances in localization techniques have fundamentally enhanced social networking services, allowing users to share their locations and location-related contents, such as geo-tagged photos and notes. We refer to these social networks as location-based social networks (LBSNs). Location data bridges the gap between the physical and digital worlds and enables a deeper understanding of users' preferences and behavior. This addition of vast geo-spatial datasets has stimulated research into novel recommender systems that seek to facilitate users' travels and social interactions. In this paper, we offer a systematic review of this research, summarizing the contributions of individual efforts and exploring their relations. We discuss the new properties and challenges that location brings to recommender systems for LBSNs. We present a comprehensive survey analyzing 1) the data source used, 2) the methodology employed to generate a recommendation, and 3) the objective of the recommendation. 
We propose three taxonomies that partition the recommender systems according to the properties listed above. First, we categorize the recommender systems by the objective of the recommendation, which can include locations, users, activities, or social media. Second, we categorize the recommender systems by the methodologies employed, including content-based, link analysis-based, and collaborative filtering-based methodologies. Third, we categorize the systems by the data sources used, including user profiles, user online histories, and user location histories. For each category, we summarize the goals and contributions of each system and highlight the representative research effort. Further, we provide comparative analysis of the recommender systems within each category. Finally, we discuss the available data-sets and the popular methods used to evaluate the performance of recommender systems. Finally, we point out promising research topics for future work. This article presents a panorama of the recommender systems in location-based social networks with a balanced depth, facilitating research into this important research theme." ] }
1903.04049
2922262856
Nowadays, spatial data are ubiquitous in various fields of science, such as transportation and the social Web. A recent research direction in analyzing spatial data is to provide means for "exploratory analysis" of such data where analysts are guided towards interesting options in consecutive analysis iterations. Typically, the guidance component learns analyst's preferences using her explicit feedback, e.g., picking a spatial point or selecting a region of interest. However, it is often the case that analysts forget or don't feel necessary to explicitly express their feedback in what they find interesting. Our approach captures implicit feedback on spatial data. The approach consists of observing mouse moves (as a means of analyst's interaction) and also the explicit analyst's interaction with data points in order to discover interesting spatial regions with dense mouse hovers. In this paper, we define, formalize and explore Interesting Dense Regions (IDRs) which capture preferences of analysts, in order to automatically find interesting spatial highlights. Our approach involves a polygon-based abstraction layer for capturing preferences. Using these IDRs, we highlight points to guide analysts in the analysis process. We discuss the efficiency and effectiveness of our approach through realistic examples and experiments on Airbnb and Yelp datasets.
Feedback Capturing. Several approaches have been proposed in the state of the art for capturing different forms of feedback @cite_6 @cite_21 @cite_19 @cite_12 @cite_0 @cite_27 . The common approach is a top- @math processing methodology that prunes the search space based on the explicit feedback of the analyst and returns a small subset of interesting results of size @math . A clear distinction of our proposal is that it does not aim at pruning, but at enriching the actual data with potentially interesting results (i.e., highlights) that the analyst may otherwise miss due to the huge volume of spatial data. Moreover, in a typical top- @math processing algorithm, the analyst's choices are limited to @math results. In contrast, our IDR approach enables freedom of choice, where highlights are seamlessly updated with each new analyst choice.
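The top- @math pruning baseline that the paragraph above contrasts with can be sketched in a few lines. This is only an illustration, not code from any of the cited systems; the preference score (proximity to a hypothetical point of interest) and all names are our own assumptions:

```python
import heapq

def top_k(points, score, k):
    """Classic top-k pruning: return only the k highest-scoring points."""
    return heapq.nlargest(k, points, key=score)

# Hypothetical preference score: proximity to a point of interest (POI).
poi = (48.85, 2.35)
score = lambda p: -((p[0] - poi[0]) ** 2 + (p[1] - poi[1]) ** 2)

points = [(48.86, 2.34), (40.71, -74.0), (48.85, 2.36), (51.5, -0.12)]
print(top_k(points, score, 2))  # the two points nearest the POI
```

The key limitation the paragraph points out is visible here: everything outside the returned list of size k is discarded, whereas highlighting keeps the full dataset visible and merely emphasises regions of it.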
{ "cite_N": [ "@cite_21", "@cite_6", "@cite_0", "@cite_19", "@cite_27", "@cite_12" ], "mid": [ "2145932132", "2126106168", "2093457890", "2518579368", "1994432310", "2091341267" ], "abstract": [ "In this paper, we study the problem of discovering interesting patterns through user's interactive feedback. We assume a set of candidate patterns (ie, frequent patterns) has already been mined. Our goal is to help a particular user effectively discover interesting patterns according to his specific interest. Without requiring a user to explicitly construct a prior knowledge to measure the interestingness of patterns, we learn the user's prior knowledge from his interactive feedback. We propose two models to represent a user's prior: the log linear model and biased belief model. The former is designed for item-set patterns, whereas the latter is also applicable to sequential and structural patterns. To learn these models, we present a two-stage approach, progressive shrinking and clustering, to select sample patterns for feedback. The experimental results on real and synthetic data sets demonstrate the effectiveness of our approach.", "Mining frequent patterns from a hidden dataset is an important task with various real-life applications. In this research, we propose a solution to this problem that is based on Markov Chain Monte Carlo (MCMC) sampling of frequent patterns. Instead of returning all the frequent patterns, the proposed paradigm returns a small set of randomly selected patterns so that the clandestinity of the dataset can be maintained. Our solution also allows interactive sampling, so that the sampled patterns can fulfill the user's requirement effectively. 
We show experimental results from several real life datasets to validate the capability and usefulness of our solution; in particular, we show examples that by using our proposed solution, an eCommerce marketplace can allow pattern mining on user session data without disclosing the data to the public; such a mining paradigm helps the sellers of the marketplace, which eventually boost the marketplace's own revenue.", "User data is becoming increasingly available in multiple domains ranging from phone usage traces to data on the social Web. The analysis of user data is appealing to scientists who work on population studies, recommendations, and large-scale data analytics. We argue for the need for an interactive analysis to understand the multiple facets of user data and address different analytics scenarios. Since user data is often sparse and noisy, we propose to produce labeled groups that describe users with common properties and develop IUGA, an interactive framework based on group discovery primitives to explore the user space. At each step of IUGA, an analyst visualizes group members and may take an action on the group (add remove members) and choose an operation (exploit explore) to discover more groups and hence more users. Each discovery operation results in k most relevant and diverse groups. We formulate group exploitation and exploration as optimization problems and devise greedy algorithms to enable efficient group discovery. Finally, we design a principled validation methodology and run extensive experiments that validate the effectiveness of IUGA on large datasets for different user space analysis scenarios.", "In this paper, we argue that database systems be augmented with an automated data exploration service that methodically steers users through the data in a meaningful way. 
Such an automated system is crucial for deriving insights from complex datasets found in many big data applications such as scientific and healthcare applications as well as for reducing the human effort of data exploration. Towards this end, we present AIDE, an Automatic Interactive Data Exploration framework that assists users in discovering new interesting data patterns and eliminate expensive ad-hoc exploratory queries. AIDE relies on a seamless integration of classification algorithms and data management optimization techniques that collectively strive to accurately learn the user interests based on his relevance feedback on strategically collected samples. We present a number of exploration techniques as well as optimizations that minimize the number of samples presented to the user while offering interactive performance. AIDE can deliver highly accurate query predictions for very common conjunctive queries with small user effort while, given a reasonable number of samples, it can predict with high accuracy complex disjunctive queries. It provides interactive performance as it limits the user wait time per iteration of exploration to less than a few seconds.", "It is known that productive pattern discovery from data has to interactively involve the user as directly as possible. State-of-the-art toolboxes require the specification of sophisticated workflows with an explicit selection of a data mining method, all its required parameters, and a corresponding algorithm. This hinders the desired rapid interaction---especially with users that are experts of the data domain rather than data mining experts. In this paper, we present a fundamentally new approach towards user involvement that relies exclusively on the implicit feedback available from the natural analysis behavior of the user, and at the same time allows the user to work with a multitude of pattern classes and discovery algorithms simultaneously without even knowing the details of each algorithm. 
To achieve this goal, we are relying on a recently proposed co-active learning model and a special feature representation of patterns to arrive at an adaptively tuned user interestingness model. At the same time, we propose an adaptive time-allocation strategy to distribute computation time among a set of underlying mining algorithms. We describe the technical details of our approach, present the user interface for gathering implicit feedback, and provide preliminary evaluation results.", "Interactive ad-hoc analytics over large datasets has become an increasingly popular use case. We detail the challenges encountered when building a distributed system that allows the interactive exploration of a data cube. We introduce DICE, a distributed system that uses a novel session-oriented model for data cube exploration, designed to provide the user with interactive sub-second latencies for specified accuracy levels. A novel framework is provided that combines three concepts: faceted exploration of data cubes, speculative execution of queries and query execution over subsets of data. We discuss design considerations, implementation details and optimizations of our system. Experiments demonstrate that DICE provides a sub-second interactive cube exploration experience at the billion-tuple scale that is at least 33 faster than current approaches." ] }
1903.04049
2922262856
Nowadays, spatial data are ubiquitous in various fields of science, such as transportation and the social Web. A recent research direction in analyzing spatial data is to provide means for "exploratory analysis" of such data where analysts are guided towards interesting options in consecutive analysis iterations. Typically, the guidance component learns analyst's preferences using her explicit feedback, e.g., picking a spatial point or selecting a region of interest. However, it is often the case that analysts forget or don't feel necessary to explicitly express their feedback in what they find interesting. Our approach captures implicit feedback on spatial data. The approach consists of observing mouse moves (as a means of analyst's interaction) and also the explicit analyst's interaction with data points in order to discover interesting spatial regions with dense mouse hovers. In this paper, we define, formalize and explore Interesting Dense Regions (IDRs) which capture preferences of analysts, in order to automatically find interesting spatial highlights. Our approach involves a polygon-based abstraction layer for capturing preferences. Using these IDRs, we highlight points to guide analysts in the analysis process. We discuss the efficiency and effectiveness of our approach through realistic examples and experiments on Airbnb and Yelp datasets.
A few works formulate approaches that fuse explicit and implicit feedback to better capture user preferences @cite_1 @cite_28 @cite_5 . Our approach operates purely on implicit feedback and does not require any explicit signal from the analyst.
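As an illustration only (not the method of any of the cited works), a simple late fusion of the two feedback signals might blend an explicit rating with a normalised implicit signal, falling back to the implicit signal when no rating was given; the weight `alpha` and all names are hypothetical:

```python
def fused_preference(explicit, implicit, alpha=0.7):
    """Hypothetical late fusion: weighted blend of an explicit rating in
    [0, 1] (if present) and a normalised implicit signal such as dwell
    time or hover count."""
    if explicit is None:
        return implicit  # purely implicit, as in the approach above
    return alpha * explicit + (1 - alpha) * implicit

print(fused_preference(0.8, 0.4))   # blended score, roughly 0.68
print(fused_preference(None, 0.4))  # falls back to implicit feedback only
```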
{ "cite_N": [ "@cite_28", "@cite_5", "@cite_1" ], "mid": [ "1606827124", "2088621849", "2060497456" ], "abstract": [ "In recent years, the proliferation of Volunteered Geographic Information (VGI) has enabled many Internet users to contribute to the construction of rich and increasingly complex spatial datasets. This growth of geo-referenced information and the often loose semantic structure of such data have resulted in spatial information overload. For this reason, a semantic gap has emerged between unstructured geo-spatial datasets and high-level ontological concepts. Filling this semantic gap can help reduce spatial information overload, therefore facilitating both user interactions and the analysis of such interaction. Implicit Feedback analysis is the focus of our work. In this paper we address this problem by proposing a system that executes spatial discovery queries. Our system combines a semantically-rich and spatially-poor ontology (DBpedia) with a spatially-rich and semantically-poor VGI dataset (OpenStreetMap). This technique differs from existing ones, such as the aggregated dataset LinkedGeoData, as it is focused on user interest analysis and takes map scale into account. System architecture, functionality and preliminary results gathered about the system performance are discussed.", "Most collaborative filtering algorithms are based on certain statistical models of user interests built from either explicit feedback (eg: ratings, votes) or implicit feedback (eg: clicks, purchases). Explicit feedbacks are more precise but more difficult to collect from users while implicit feedbacks are much easier to collect though less accurate in reflecting user preferences. In the existing literature, separate models have been developed for either of these two forms of user feedbacks due to their heterogeneous representation. 
However in most real world recommended systems both explicit and implicit user feedback are abundant and could potentially complement each other. It is desirable to be able to unify these two heterogeneous forms of user feedback in order to generate more accurate recommendations. In this work, we developed matrix factorization models that can be trained from explicit and implicit feedback simultaneously. Experimental results of multiple datasets showed that our algorithm could effectively combine these two forms of heterogeneous user feedback to improve recommendation quality.", "Information overload is a pervasive problem in many application domains. One way of addressing this problem is to create user profiles that filter out irrelevant information while presenting the users with information matching their interests. This approach has not been widely exploited in GIS. In our spatial application, we log user interactions, and implicitly infer their interests from this information to generate a user interest model. In particular, mouse movements and map browsing behaviour are analysed. Experiments presented in this paper examine the accuracy of implicitly determined spatial interests. Personalisation techniques can subsequently be applied to provide users with the most relevant information with regard to their interests." ] }
1903.04049
2922262856
Nowadays, spatial data are ubiquitous in various fields of science, such as transportation and the social Web. A recent research direction in analyzing spatial data is to provide means for "exploratory analysis" of such data where analysts are guided towards interesting options in consecutive analysis iterations. Typically, the guidance component learns analyst's preferences using her explicit feedback, e.g., picking a spatial point or selecting a region of interest. However, it is often the case that analysts forget or don't feel necessary to explicitly express their feedback in what they find interesting. Our approach captures implicit feedback on spatial data. The approach consists of observing mouse moves (as a means of analyst's interaction) and also the explicit analyst's interaction with data points in order to discover interesting spatial regions with dense mouse hovers. In this paper, we define, formalize and explore Interesting Dense Regions (IDRs) which capture preferences of analysts, in order to automatically find interesting spatial highlights. Our approach involves a polygon-based abstraction layer for capturing preferences. Using these IDRs, we highlight points to guide analysts in the analysis process. We discuss the efficiency and effectiveness of our approach through realistic examples and experiments on Airbnb and Yelp datasets.
Region Discovery. Our approach finds interesting dense regions (IDRs) in order to derive the analyst's implicit preferences. There exist several approaches to infer a spatial region from a given set of points @cite_9 @cite_16 @cite_26 @cite_15 @cite_13 @cite_2 . The common approach is to cluster points in the form of concave or convex polygons. In @cite_9 , an algorithm is proposed to verify whether a given point @math on the surface of a sphere is located inside, outside, or on the border of an arbitrary spherical polygon. In @cite_16 @cite_26 , a non-convex polygon is constructed from a set of input points on a plane. In @cite_15 @cite_13 , imprecise regions are delineated into a convex or concave polygon. In our approach, it is important to discover regions by capturing mouse-move points. If a concave polygon is constructed, the ``dents'' of such a polygon may entail points which are not necessarily in @math . In the IDR algorithm, however, we adapt Quickhull @cite_2 , due to its simplicity, efficiency, and natural implementation of convex polygons.
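For illustration, deriving a convex region from captured mouse-move points amounts to a standard convex-hull computation. The sketch below uses Andrew's monotone chain rather than Quickhull for brevity; both algorithms produce the same convex polygon for a given point set:

```python
def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in counter-clockwise
    order. A stand-in for Quickhull -- the resulting polygon is identical."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:                      # build lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):            # build upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]     # endpoints shared, drop duplicates

# Hover points with one interior point; the hull excludes it.
hover = [(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)]
print(convex_hull(hover))  # [(0, 0), (2, 0), (2, 2), (0, 2)]
```

Note how the interior point (1, 1) is dropped: a convex polygon never has ``dents'', which is precisely why it cannot entail spurious boundary points the way a concave construction can.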
{ "cite_N": [ "@cite_26", "@cite_9", "@cite_2", "@cite_15", "@cite_16", "@cite_13" ], "mid": [ "2032834743", "2077363605", "2153504150", "2112582401", "2164799852", "12110447" ], "abstract": [ "The convex onion-peeling of a set of points is the organization of these points into a sequence of interpolating convex polygons. This method is adequate to detect the shape of the \"center\" of a set of points when this shape is convex. However it reveals inadequate to detect non-convex shapes. Alternatively, we propose an extension of the convex onion-peeling method. It consists in representing a set of points with a sequence of non-convex polylines which are computed using the A-shape descriptor. This method is applied to robust statistical estimation. It is shown that it makes the estimators robust to the presence of outliers by removing suspect samples from the available population.", "An algorithm for determining if any given point,P, on the surface of a sphere is located inside, outside, or along the border of an arbitrary spherical polygon,S, is described. The polygon is described by specifying coordinates of its vertices, and coordinates of some pointX which is known to lie withinS. The algorithm is based on the principle that an arc joiningX andP will cross the border ofS an odd number of times ifP lies outsideS, and an even number of times ifP lies withinS. The algorithm has been implemented as a set of FORTRAN subroutines, and a listing is provided. The algorithm and subroutine package can be used with spherical polygons containing holes, or with composited spherical polygons.", "The convex hull of a set of points is the smallest convex set that contains the points. This article presents a practical convex hull algorithm that combines the two-dimensional Quickhull algorithm with the general-dimension Beneath-Beyond Algorithm. It is similar to the randomized, incremental algorithms for convex hull and delaunay triangulation. 
We provide empirical evidence that the algorithm runs faster when the input contains nonextreme points and that it uses less memory. Computational geometry algorithms have traditionally assumed that input sets are well behaved. When an algorithm is implemented with floating-point arithmetic, this assumption can lead to serious errors. We briefly describe a solution to this problem when computing the convex hull in two, three, or four dimensions. The output is a set of “thick” facets that contain all possible exact convex hulls of the input. A variation is effective in five or more dimensions.", "This paper describes several steps in the derivation of boundaries of imprecise regions using the Web as the information source. We discuss how to use the Web to obtain locations that are part of and locations that are not part of the region to be delineated, and then we propose methods to compute the region algorithmically. The methods introduced are evaluated to judge the potential of the approach.", "This paper presents a simple, flexible, and efficient algorithm for constructing a possibly non-convex, simple polygon that characterizes the shape of a set of input points in the plane, termed a characteristic shape. The algorithm is based on the Delaunay triangulation of the points. The shape produced by the algorithm is controlled by a single normalized parameter, which can be used to generate a finite, totally ordered family of related characteristic shapes, varying between the convex hull at one extreme and a uniquely defined shape with minimum area. An optimal O(nlogn) algorithm for computing the shapes is presented. Characteristic shapes possess a number of desirable properties, and the paper includes an empirical investigation of the shapes produced by the algorithm. 
This investigation provides experimental evidence that with appropriate parameterization the algorithm is able to accurately characterize the shape of a wide range of different point distributions and densities. The experiments detail the effects of changing parameter values and provide an indication of some ''good'' parameter values to use in certain circumstances.", "There are many situations in GIScience where it would be useful to be able to assign a region to characterize the space occupied by a set of points. Such a region should represent the location or configuration of the points as an aggregate, abstracting away from the individual points themselves. In this paper, we call such a region a ‘footprint' for the points. We investigate and compare a number of methods for producing such footprints, with respect to nine general criteria. The discussion identifies a number of potential choices and avenues for further research. Finally, we contrast the related research already conducted in this area, highlighting differences between these existing constructs and our ‘footprints'." ] }
1903.03491
2921651844
Backward diffusion processes appear naturally in image enhancement and deblurring applications. However, the inverse problem of backward diffusion is known to be ill-posed and straightforward numerical algorithms are unstable. So far, existing stabilisation strategies in the literature require sophisticated numerics to solve the underlying initial value problem. Therefore, it is desirable to establish a backward diffusion model which implements a smart stabilisation approach that can be used in combination with a simple numerical scheme. We derive a class of space-discrete one-dimensional backward diffusion as gradient descent of energies where we gain stability by imposing range constraints. Interestingly, these energies are even convex. Furthermore, we establish a comprehensive theory for the time-continuous evolution and we show that stability carries over to a simple explicit time discretisation of our model. Finally, we confirm the stability and usefulness of our technique in experiments in which we enhance the contrast of digital greyscale and colour images.
As mentioned in sec:app:grey:global , applying the global model -- with @math -- is identical to histogram equalisation (a common formulation can be found, e.g., in @cite_30 ). Furthermore, there exist other closely related histogram specification techniques -- such as @cite_24 @cite_33 @cite_17 -- which can have the same steady state. If we compare our evolution with the histogram modification flow introduced by Sapiro and Caselles @cite_24 , we see that their flow can also be translated into a combination of repulsion among grey-values and a barrier function. However, in @cite_24 the repulsive force is constant and the barrier function quadratic. Thus, their forces cannot be derived from the same kind of interaction between the @math and their reflected counterparts as in our paper.
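For reference, classical histogram equalisation -- the steady state discussed above -- can be sketched in its standard textbook form (this is the generic formulation, not the paper's evolution-based implementation):

```python
import numpy as np

def equalise(img):
    """Classical histogram equalisation for an 8-bit greyscale image:
    map each grey value through the normalised cumulative histogram."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = cdf / cdf[-1]                      # normalise to [0, 1]
    return np.round(255 * cdf[img]).astype(np.uint8)

# A low-contrast ramp occupying only [100, 140] is spread towards [0, 255].
img = np.tile(np.arange(100, 141, dtype=np.uint8), (8, 1))
out = equalise(img)
print(out.min(), out.max())
```

The mapping through the cumulative histogram is what a repulsion-among-grey-values flow converges to in the global setting: grey values spread apart until they are (approximately) uniformly distributed over the available range.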
{ "cite_N": [ "@cite_30", "@cite_33", "@cite_24", "@cite_17" ], "mid": [ "", "2019234714", "2061269324", "1972397034" ], "abstract": [ "", "This paper provides a fast algorithm to order in a meaningful, strict way the integer gray values in digital (quantized) images. It can be used in any exact histogram specification-based application. Our algorithm relies on the ordering procedure based on the specialized variational approach. This variational method was shown to be superior to all other state-of-the art ordering algorithms in terms of faithful total strict ordering but not in speed. Indeed, the relevant functionals are in general difficult to minimize because their gradient is nearly flat over vast regions. In this paper, we propose a simple and fast fixed point algorithm to minimize these functionals. The fast convergence of our algorithm results from known analytical properties of the model. Our algorithm is equivalent to an iterative nonlinear filtering. Furthermore, we show that a particular form of the variational model gives rise to much faster convergence than other alternative forms. We demonstrate that only a few iterations of this filter yield almost the same pixel ordering as the minimizer. Thus, we apply only few iteration steps to obtain images, whose pixels can be ordered in a strict and faithful way. Numerical experiments confirm that our algorithm outperforms by far its main competitors.", "Abstract The explicit use of partial differential equations (PDEs) in image processing became a major research topic in the past years. In this work we present a framework for histogram (pixel-value distribution) modification via ordinary and partial differential equations. In this way, the image contrast is improved. We show that the histogram can be modified to achieve any given distribution as the steady state solution of an image flow. 
The contrast modification can be performed while simultaneously reducing noise in a unique PDE, avoiding noise sharpening effects of classical algorithms. The approach is extended to local contrast enhancement as well. A variational interpretation of the flow is presented and theoretical results on the existence of solutions are given.", "We consider the problem of exact histogram specification for digital (quantized) images. The goal is to transform the input digital image into an output (also digital) image that follows a prescribed histogram. Classical histogram modification methods are designed for real-valued images where all pixels have different values, so exact histogram specification is straightforward. Digital images typically have numerous pixels which share the same value. If one imposes the prescribed histogram to a digital image, usually there are numerous ways of assigning the prescribed values to the quantized values of the image. Therefore, exact histogram specification for digital images is an ill-posed problem. In order to guarantee that any prescribed histogram will be satisfied exactly, all pixels of the input digital image must be rearranged in a strictly ordered way. Further, the obtained strict ordering must faithfully account for the specific features of the input digital image. Such a task can be realized if we are able to extract additional representative information (called auxiliary attributes) from the input digital image. This is a real challenge in exact histogram specification for digital images. We propose a new method that efficiently provides a strict and faithful ordering for all pixel values. It is based on a well designed variational approach. Noticing that the input digital image contains quantization noise, we minimize a specialized objective function whose solution is a real-valued image with slightly reduced quantization noise, which remains very close to the input digital image. 
We show that all the pixels of this real-valued image can be ordered in a strict way with a very high probability. Then transforming the latter image into another digital image satisfying a specified histogram is an easy task. Numerical results show that our method outperforms by far the existing competing methods." ] }
1903.03491
2921651844
Backward diffusion processes appear naturally in image enhancement and deblurring applications. However, the inverse problem of backward diffusion is known to be ill-posed, and straightforward numerical algorithms are unstable. So far, existing stabilisation strategies in the literature require sophisticated numerics to solve the underlying initial value problem. Therefore, it is desirable to establish a backward diffusion model which implements a smart stabilisation approach that can be used in combination with a simple numerical scheme. We derive a class of space-discrete one-dimensional backward diffusion processes as gradient descent of energies where we gain stability by imposing range constraints. Interestingly, these energies are even convex. Furthermore, we establish a comprehensive theory for the time-continuous evolution, and we show that stability carries over to a simple explicit time discretisation of our model. Finally, we confirm the stability and usefulness of our technique in experiments in which we enhance the contrast of digital greyscale and colour images.
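A minimal sketch of the stabilisation idea from the abstract: an explicit backward diffusion step on a 1-D signal, stabilised simply by clipping to the admissible grey-value range. The step size `tau` and the iteration count are illustrative choices, not the paper's actual parameters.

```python
import numpy as np

def backward_diffusion_1d(u, tau=0.1, steps=50, lo=0.0, hi=255.0):
    """Explicit backward diffusion on a 1-D signal with range constraints.

    Backward (inverse) diffusion enhances contrast but is unstable on its
    own; here stability comes solely from clipping each iterate to the
    admissible grey-value range [lo, hi].
    """
    u = np.asarray(u, dtype=float).copy()
    for _ in range(steps):
        # discrete Laplacian with reflecting (Neumann) boundaries
        lap = np.empty_like(u)
        lap[1:-1] = u[2:] - 2.0 * u[1:-1] + u[:-2]
        lap[0] = u[1] - u[0]
        lap[-1] = u[-2] - u[-1]
        u -= tau * lap             # minus sign: backward diffusion
        np.clip(u, lo, hi, out=u)  # range constraint provides stability
    return u
```

Applied to a slowly increasing ramp, the iteration drives the extreme values towards the range bounds, which is exactly the contrast-enhancing behaviour the paper exploits.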
Another related research topic is the rich field of colour image enhancement, which we broach in sec:app:colour . A short review of existing methods -- as well as two new ideas -- is presented in @cite_28 . Therein, Bassiou and Kotropoulos also mention the colour gamut problem for methods which perform contrast enhancement in a different colour space and transform the colour coordinates back to RGB afterwards. Of particular interest are the publications by Naik and Murthy @cite_12 and Nikolova and Steidl @cite_13 , whose ideas are used in sec:app:colour . Both of them suggest strategies -- based on an affine colour transform -- to overcome the colour gamut problem while avoiding colour artefacts in the resulting image. A recent approach which also makes use of these ideas is presented by Tian and Cohen @cite_45 . @cite_6 make use of the HSV colour space to avoid the colour gamut problem when enhancing the contrast of colour images. A variational approach for contrast enhancement which tries to approximate the hue of the input image was recently published by @cite_7 .
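A minimal sketch of a hue-preserving, gamut-safe intensity change in the spirit of Naik and Murthy @cite_12 : when upscaling would leave the RGB cube, the distance to white is scaled instead of the distance to black. The function name and the per-pixel formulation are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np

def hue_preserving_scale(rgb, target_intensity):
    """Change the intensity (channel mean) of one RGB pixel in [0, 1]
    towards `target_intensity` without leaving the gamut.
    """
    rgb = np.asarray(rgb, dtype=float)
    intensity = rgb.mean()
    if intensity == 0.0:
        return rgb
    alpha = target_intensity / intensity
    if alpha <= 1.0:
        # pure scaling keeps the hue and stays inside the RGB cube
        return alpha * rgb
    # alpha > 1 could exceed the gamut; apply the complementary
    # transform instead: scale the distance to white, not to black
    beta = (1.0 - target_intensity) / (1.0 - intensity)
    return 1.0 - beta * (1.0 - rgb)
```

Both branches are affine in the channel values, which is the structural property the gamut-safe strategies above rely on.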
{ "cite_N": [ "@cite_7", "@cite_28", "@cite_6", "@cite_45", "@cite_13", "@cite_12" ], "mid": [ "2464766066", "2145268773", "2498851312", "2565651106", "1994796550", "2162342922" ], "abstract": [ "The aim of this paper is twofold. First, we propose a new method for enhancing the contrast of gray-value images. We use the difference of the average local contrast measures between the original and the enhanced images within a variational framework. This enables the user to intuitively control the contrast level and the scale of the enhanced details. Moreover, our model avoids large modifications of the original image histogram. Thereby it preserves the global illumination of the scene and it can cope with large areas having similar gray values. The minimizer of the proposed functional is computed by a gradient descent algorithm in connection with a polynomial approximation of the average local contrast measure. The polynomial approximation is computed via Bernstein polynomials. In the second part, the approach is extended to a variational enhancement method for color images. The model approximately preserves the hue of the original image and additionally includes a total variation term to correct the possible noise. The method requires no post- or preprocessing. The minimization problem is solved with a hybrid primal---dual algorithm. Experiments demonstrate the efficiency and the flexibility of the proposed approaches in comparison with state-of-the-art methods.", "A novel color image histogram equalization approach is proposed that exploits the correlation between color components and it is enhanced by a multi-level smoothing technique borrowed from statistical language engineering. Multi-level smoothing aims at dealing efficiently with the problem of unseen color values, either considered independently or in combination with others. 
It is applied here to the HSI color space for the probability of intensity and the probability of saturation given the intensity, while the hue is left unchanged. Moreover, the proposed approach is extended by an empirical technique, which is based on a hue preserving non-linear transformation, in order to eliminate the gamut problem. This is the second method proposed in the paper. The equalized images by the two methods are compared to those produced by other well-known methods. The better quality of the images equalized by the proposed methods is judged in terms of their visual appeal and objective figures of merit, such as the entropy and the Kullback-Leibler divergence estimates between the resulting color histogram and the multivariate uniform probability density function.", "Conventional contrast enhancement techniques often fail to produce satisfactory results for low-contrast images, and cannot be automatically applied to different images because their processing parameters must be specified manually to produce a satisfactory result for a given image. This work presents a colour-preserving contrast enhancement (CPCE) algorithm for images. Modification to images was performed in the HSV colour-space. The Hue component is preserved (unchanged), luminance modified using Contrast Limited Adaptive Histogram Equalization (CLAHE), while Saturation components were up-scaled using a derived mapping function on the approximate components of its discrete wavelet transform. Implementation was done in MATLAB and compared with CLAHE and Histogram Equalization (HE) algorithms in the RGB colour space. Subjective (visual quality inspection) and objective parameters (Peak-signal-to-noise ratio (PSNR), Absolute Mean Brightness Error (AMBE) and Mean squared error (MSE)) were used for performance evaluation. 
The method produced images with the lowest MSE, AMBE, and highest PSNR when tested, yet preserved the visual quality of the image.", "In this paper, we present a color consistency technique in order to make images in the same collection share the same color style and to avoid gamut problems. Some previous methods define simple global parameter-based models and use optimizing algorithms to obtain the unknown parameters, which usually cause gamut problems in bright and dark regions. Our method is based on the range-preserving histogram specification and can enforce images to share the same color style, without resulting in gamut problems. We divide the input images into two sets having respectively high visual quality and low visual quality. The high visual quality images are used to make color balance. And then the low visual quality images are color transferred using the previous corrected high quality images. Our experiments indicate that such histogram-based color correction method is better than the compared algorithm.", "Color image enhancement is a complex and challenging task in digital imaging with abundant applications. Preserving the hue of the input image is crucial in a wide range of situations. We propose simple image enhancement algorithms, which conserve the hue and preserve the range (gamut) of the R, G, B channels in an optimal way. In our setup, the intensity input image is transformed into a target intensity image whose histogram matches a specified, well-behaved histogram. We derive a new color assignment methodology where the resulting enhanced image fits the target intensity image. We analyze the obtained algorithms in terms of chromaticity improvement and compare them with the unique and quite popular histogram-based hue and range preserving algorithm of Naik and Murthy. Numerical tests confirm our theoretical results and show that our algorithms perform much better than the Naik-Murthy algorithm. 
In spite of their simplicity, they compete with well-established alternative methods for images where hue-preservation is desired.", "The first step in many techniques for processing intensity and saturation in color images keeping hue unaltered is the transformation of the image data from RGB space to other color spaces such as LHS, HSI, YIQ, HSV, etc. Transforming from one space to another and processing in these spaces usually generate a gamut problem, i.e., the values of the variables may not be in their respective intervals. We study enhancement techniques for color images theoretically in a generalized setup. A principle is suggested to make the transformations gamut-problem free. Using the same principle, a class of hue-preserving, contrast-enhancing transformations is proposed; they generalize existing grey scale contrast intensification techniques to color images. These transformations are also seen to bypass the above mentioned color coordinate transformations for image enhancement. The developed principle is used to generalize the histogram equalization scheme for grey scale images to color images." ] }
1903.03676
2922217177
In data mining, the data in various business cases (e.g., sales, marketing, and demography) gets refreshed periodically. During the refresh, the old dataset is replaced by a new one. Confirming the quality of the new dataset can be challenging because changes are inevitable. How do analysts distinguish reasonable real-world changes vs. errors related to data capture or data transformation? While some of the errors are easy to spot, the others may be more subtle. In order to detect such types of errors, an analyst will typically have to examine the data manually and assess if the data produced are "believable". Due to the scale of data, such examination is tedious and laborious. Thus, to save the analyst's time, it is important to detect these errors automatically. However, both the literature and the industry are still lacking methods to assess the difference between old and new versions of a dataset during the refresh process. In this paper, we present a comprehensive set of tests for the detection of abnormalities in a refreshed dataset, based on the information obtained from a previous vintage of the dataset. We implement these tests in an automated test harness made available as an open-source package, called RESTORE, for the R language. The harness accepts flat or hierarchical numeric datasets. We also present a validation case study, where we apply our test harness to hierarchical demographic datasets. The results of the study and feedback from data scientists using the package suggest that RESTORE enables fast and efficient detection of errors in the data as well as decreases the cost of testing.
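One of the simplest vintage-comparison tests described in the abstract can be sketched as follows. The relative-change tolerance `rel_tol` and the handling of zero baselines are illustrative assumptions, not RESTORE's actual defaults.

```python
import numpy as np

def flag_abnormal_refresh(old, new, rel_tol=0.25):
    """Flag entries of a refreshed dataset whose relative change from
    the previous vintage exceeds `rel_tol`.

    Returns the indices of flagged entries, so an analyst only has to
    inspect the cells that moved suspiciously far.
    """
    old = np.asarray(old, dtype=float)
    new = np.asarray(new, dtype=float)
    denom = np.where(old == 0, 1.0, np.abs(old))  # avoid division by zero
    rel_change = np.abs(new - old) / denom
    return np.flatnonzero(rel_change > rel_tol)
```

For example, with an old vintage `[100, 200, 0]` and a new vintage `[110, 400, 0]`, only the second entry (a 100% jump) is flagged.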
In regression testing, test suites can be large, and it can be time-consuming to process all the test cases. Thus, test selection techniques are widely used. Engström @cite_35 @cite_12 report a review of existing regression test selection techniques based on empirical evaluations. Kapfhammer and Soffa @cite_36 as well as Willmor and Embury @cite_24 present test criteria which capture interactions between an application and a database. @cite_32 introduce a regression test selection technique that selects a subset of existing test cases. This work assumes the presence of non-code changes, such as configuration files of databases. @cite_9 present a similarity- and partition-based test case selection approach for database application regression testing. The test cases are generated from classification tree models. @cite_0 propose a two-phase test selection technique. In phase one, they adopt an impact analysis based on dependencies that exist among the components of database applications. In phase two, they propose two algorithms to reduce the number of test cases. The existing test selection techniques focus on regression testing of applications rather than of the data that these applications ingest. Thus, they are complementary to our work.
{ "cite_N": [ "@cite_35", "@cite_36", "@cite_9", "@cite_32", "@cite_24", "@cite_0", "@cite_12" ], "mid": [ "1998989165", "2131467114", "2135827170", "2104252056", "2150327567", "14293016", "" ], "abstract": [ "Regression testing is the verification that previously functioning software remains after a change. In this paper we report on a systematic review of empirical evaluations of regression test selection techniques, published in major software engineering journals and conferences. Out of 2,923 papers analyzed in this systematic review, we identified 28 papers reporting on empirical comparative evaluations of regression test selection techniques. They report on 38 unique studies (23 experiments and 15 case studies), and in total 32 different techniques for regression test selection are evaluated. Our study concludes that no clear picture of the evaluated techniques can be provided based on existing empirical evidence, except for a small group of related techniques. Instead, we identified a need for more and better empirical studies were concepts are evaluated rather than small variations. It is also necessary to carefully consider the context in which studies are undertaken.", "Although a software application always executes within a particular environment, current testing methods have largely ignored these environmental factors. Many applications execute in an environment that contains a database. In this paper, we propose a family of test adequacy criteria that can be used to assess the quality of test suites for database-driven applications. Our test adequacy criteria use dataflow information that is associated with the entities in a relational database. Furthermore, we develop a unique representation of a database-driven application that facilitates the enumeration of database interaction associations. These associations can reflect an application's definition and use of database entities at multiple levels of granularity. 
The usage of a tool to calculate intraprocedural database interaction associations for two case study applications indicates that our adequacy criteria can be computed with an acceptable time and space overhead.", "Context: This paper presents an approach for selecting regression test cases in the context of large-scale database applications. We focus on a black-box (specification-based) approach, relying on classification tree models to model the input domain of the system under test (SUT), in order to obtain a more practical and scalable solution. We perform an experiment in an industrial setting where the SUT is a large database application in Norway's tax department. Objective: We investigate the use of similarity-based test case selection for supporting black box regression testing of database applications. We have developed a practical approach and tool (DART) for functional black-box regression testing of database applications. In order to make the regression test approach scalable for large database applications, we needed a test case selection strategy that reduces the test execution costs and analysis effort. We used classification tree models to partition the input domain of the SUT in order to then select test cases. Rather than selecting test cases at random from each partition, we incorporated a similarity-based test case selection, hypothesizing that it would yield a higher fault detection rate. Method: An experiment was conducted to determine which similarity-based selection algorithm was the most suitable in selecting test cases in large regression test suites, and whether similarity-based selection was a worthwhile and practical alternative to simpler solutions. 
Results: The results show that combining similarity measurement with partition-based test case selection, by using similarity-based test case selection within each partition, can provide improved fault detection rates over simpler solutions when specific conditions are met regarding the partitions. Conclusions: Under the conditions present in the experiment the improvements were marginal. However, a detailed analysis concludes that the similarity-based selection strategy should be applied when a large number of test cases are contained in each partition and there is significant variability within partitions. If these conditions are not present, incorporating similarity measures is not worthwhile, since the gain is negligible over a random selection within each partition.", "Regression testing is an important activity performed to validate modified software, and one of its key tasks is regression test selection (RTS) -- selecting a subset of existing test cases to run on the modified software. Most existing RTS techniques focus on changes made to code components and completely ignore non-code elements, such as configuration files and databases, which can also change and affect the system behavior. To address this issue, we present a new RTS technique that performs accurate test selection in the presence of changes to non-code components. To do this, our technique computes traceability between test cases and the external data accessed by an application, and uses this information to perform RTS in the presence of changes to non-code elements. We present our technique, a prototype implementation of our technique, and a set of preliminary empirical results that illustrate the feasibility, effectiveness, and potential usefulness of our approach.", "Regression testing is a widely-used method for checking whether modifications to software systems have adversely affected the overall functionality. 
This is potentially an expensive process, since test suites can be large and time-consuming to execute. The overall costs can be reduced if tests that cannot possibly be affected by the modifications are ignored. Various techniques for selecting subsets of tests for re-execution have been proposed, as well as methods for proving that particular test selection criteria do not omit relevant tests. However, current selection techniques are focused on identifying the impact of modifications on program state. They assume that the only factor that can change the result of a test case is the set of input values given for it, while all other influences on the behavior of the program (such as external interrupts or hardware faults) will be constant for each re-execution of the test. This assumption is impractical in the case of an important class of software system, i.e. systems which make use of an external persistent state, such as a database management system, to share information between application invocations. If applied naively to such systems, existing regression test selection algorithms will omit certain test cases which could in fact be affected by the modifications to the code. In this paper, we show why this is the case, and propose a new definition of safety for regression test selection that takes into account the interactions of the program with a database state. We also present an algorithm and associated tool that safely performs test selection for database-driven applications, and (since efficiency is an important concern for test selection algorithms) we propose a variant that defines safety in terms of database state alone. This latter form of safety allows more efficient regression testing to be performed for applications in which program state is used only as a temporary holding space for data from the database. 
The claims of increased efficiency of both forms of safety are supported by the results of an empirical comparison with existing techniques.", "Database applications features such as Structured Query Language programming, exception handling, integrity constraints, and table triggers pose difficulties for maintenance activities, especially for regression testing that follows modifying database applications. In this chapter, we address these difficulties and propose a two-phase regression testing methodology. In phase 1, we explore control flow and data flow analysis issues of database applications. Then, we propose an impact analysis technique that is based on dependencies that exist among the components of database applications. This analysis leads to selecting test cases from the initial test suite for regression testing the modified application. In phase 2, we propose two algorithms for reducing the number of regression test cases. The Graph Walk algorithm walks through the control flow graph of database modules and selects a safe set of test cases to retest. The Call Graph Firewall algorithm uses a firewall for the inter-procedural level. Our experience with this regression testing methodology shows that the impact analysis technique is adequate for selecting regression tests and that phase 2 techniques can be used for further reduction in the number of these tests.", "" ] }
1903.03546
2922091810
Graph-based transforms have been shown to be powerful tools in terms of image energy compaction. However, when the support increases to best capture signal dependencies, the computation of the basis functions rapidly becomes intractable. This problem is particularly compelling for high-dimensional imaging data such as light fields. The use of local transforms with limited supports is a way to cope with this computational difficulty. Unfortunately, the locality of the support may not allow us to fully exploit the long-term signal dependencies present in both the spatial and angular dimensions of light fields. This paper describes sampling and prediction schemes with local graph-based transforms that efficiently compact the signal energy and exploit dependencies beyond the local graph support. The proposed approach is investigated in the context of spatio-angular transforms for quasi-lossless compression of light fields, where it is shown to be very efficient.
Graphs have been shown to be useful tools for describing the intrinsic structure of an image and hence for capturing the correlation that image compression needs to exploit. An interesting review of graph spectral image processing can be found in @cite_36 .
{ "cite_N": [ "@cite_36" ], "mid": [ "2964228184" ], "abstract": [ "Recent advent of graph signal processing (GSP) has spurred intensive studies of signals that live naturally on irregular data kernels described by graphs (e.g., social networks, wireless sensor networks). Though a digital image contains pixels that reside on a regularly sampled 2-D grid, if one can design an appropriate underlying graph connecting pixels with weights that reflect the image structure, then one can interpret the image (or image patch) as a signal on a graph, and apply GSP tools for processing and analysis of the signal in graph spectral domain. In this paper, we overview recent graph spectral techniques in GSP specifically for image video processing. The topics covered include image compression, image restoration, image filtering, and image segmentation." ] }
1903.03546
2922091810
Graph-based transforms have been shown to be powerful tools in terms of image energy compaction. However, when the support increases to best capture signal dependencies, the computation of the basis functions rapidly becomes intractable. This problem is particularly compelling for high-dimensional imaging data such as light fields. The use of local transforms with limited supports is a way to cope with this computational difficulty. Unfortunately, the locality of the support may not allow us to fully exploit the long-term signal dependencies present in both the spatial and angular dimensions of light fields. This paper describes sampling and prediction schemes with local graph-based transforms that efficiently compact the signal energy and exploit dependencies beyond the local graph support. The proposed approach is investigated in the context of spatio-angular transforms for quasi-lossless compression of light fields, where it is shown to be very efficient.
For image compression, the signal is defined on an undirected connected graph @math which consists of a finite set @math of vertices corresponding to the pixels. A set @math of edges connects each pixel to its 4 nearest neighbours in the spatial domain. By encoding pixel similarities into the weights associated with the edges, the undirected graph captures the image structure. A Fourier-like transform for graph signals, called the graph Fourier transform (GFT) @cite_28 , and many variants @cite_25 @cite_9 @cite_4 @cite_18 @cite_3 @cite_12 have been used as adaptive transforms for coding piecewise smooth and natural images.
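As an illustration of such a pixel graph, the sketch below builds the combinatorial Laplacian of the 4-nearest-neighbour graph of an image block with Gaussian similarity weights. The weight model and the parameter `sigma` are illustrative assumptions; the GFT variants cited above differ precisely in how these weights are chosen.

```python
import numpy as np

def grid_graph_laplacian(block, sigma=10.0):
    """Combinatorial Laplacian L = D - W of the 4-neighbour graph of an
    image block, with weights w_ij = exp(-(I_i - I_j)^2 / sigma^2) so
    that similar pixels are strongly connected.
    """
    h, w = block.shape
    n = h * w
    W = np.zeros((n, n))
    idx = lambda r, c: r * w + c
    for r in range(h):
        for c in range(w):
            for dr, dc in ((0, 1), (1, 0)):  # right and down neighbours
                rr, cc = r + dr, c + dc
                if rr < h and cc < w:
                    wt = np.exp(-((block[r, c] - block[rr, cc]) ** 2) / sigma**2)
                    W[idx(r, c), idx(rr, cc)] = wt
                    W[idx(rr, cc), idx(r, c)] = wt
    D = np.diag(W.sum(axis=1))   # degree matrix
    return D - W
```

Because W is symmetric with non-negative entries, the resulting Laplacian is symmetric positive semi-definite with zero row sums, which is what makes its eigendecomposition usable as a transform.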
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_28", "@cite_9", "@cite_3", "@cite_25", "@cite_12" ], "mid": [ "2050320982", "2120485259", "2101491865", "1965106955", "1980811840", "2144392304", "2962978500" ], "abstract": [ "In this letter, we provide a theoretical analysis of optimal predictive transform coding based on the Gaussian Markov random field (GMRF) model. It is shown that the eigen-analysis of the precision matrix of the GMRF model is optimal in decorrelating the signal. The resulting graph transform degenerates to the well-known 2-D discrete cosine transform (DCT) for a particular 2-D first order GMRF, although it is not a unique optimal solution. Furthermore, we present an optimal scheme to perform predictive transform coding based on conditional probabilities of a GMRF model. Such an analysis can be applied to both motion prediction and intra-frame predictive coding, and may lead to improvements in coding efficiency in the future.", "In this paper a graph-based transform is proposed as an alternative to the discrete cosine transform. An image or video signal is represented as a graph signal, where the graph is generated so as not to cross an image edge in a local region, i.e., square block. Then, spectral representation of graph signal is used to form transform kernels by finding eigenvectors of Laplacian matrix of the graph. This method requires to include additional information, i.e., edge map or adjacency matrix, into a bitstream so that a decoder can regenerate the exactly same graph used at an encoder. The novelty of this paper includes finding the optimal adjacency matrix and compressing it using context-based adaptive binary arithmetic coding. Coding efficiency improvement can be achieved when an image block contains arbitrarily shaped edges by applying the transform not across the edges. 
The proposed transform is applied to coding depth maps used for view synthesis in a multi-view video coding system, and provides 14 bit rate savings on average.", "In applications such as social, energy, transportation, sensor, and neuronal networks, high-dimensional data naturally reside on the vertices of weighted graphs. The emerging field of signal processing on graphs merges algebraic and spectral graph theoretic concepts with computational harmonic analysis to process such signals on graphs. In this tutorial overview, we outline the main challenges of the area, discuss different ways to define graph spectral domains, which are the analogs to the classical frequency domain, and highlight the importance of incorporating the irregular structures of graph data domains when processing signals on graphs. We then review methods to generalize fundamental operations such as filtering, translation, modulation, dilation, and downsampling to the graph setting and survey the localized, multiscale transforms that have been proposed to efficiently extract information from high-dimensional data on graphs. We conclude with a brief discussion of open issues and possible extensions.", "Depth map compression is important for efficient network transmission of 3D visual data in texture-plus-depth format, where the observer can synthesize an image of a freely chosen viewpoint via depth-image-based rendering (DIBR) using received neighboring texture and depth maps as anchors. Unlike texture maps, depth maps exhibit unique characteristics like smooth interior surfaces and sharp edges that can be exploited for coding gain. In this paper, we propose a multi-resolution approach to depth map compression using previously proposed graph-based transform (GBT). 
The key idea is to treat smooth surfaces and sharp edges of large code blocks separately and encode them in different resolutions: encode edges in original high resolution (HR) to preserve sharpness, and encode smooth surfaces in low-pass-filtered and down-sampled low resolution (LR) to save coding bits. Because GBT does not filter across edges, it produces small or zero high-frequency components when coding smooth-surface depth maps and leads to a compact representation in the transform domain. By encoding down-sampled surface regions in LR GBT, we achieve representation compactness for a large block without the high computation complexity associated with an adaptive large-block GBT. At the decoder, encoded LR surfaces are up-sampled and interpolated while preserving encoded HR edges. Experimental results show that our proposed multi-resolution approach using GBT reduced bitrate by 68 compared to native H.264 intra with DCT encoding original HR depth maps, and by 55 compared to single-resolution GBT encoding small blocks.", "Piecewise smooth (PWS) images (e.g., depth maps or animation images) contain unique signal characteristics such as sharp object boundaries and slowly varying interior surfaces. Leveraging on recent advances in graph signal processing, in this paper, we propose to compress the PWS images using suitable graph Fourier transforms (GFTs) to minimize the total signal representation cost of each pixel block, considering both the sparsity of the signal’s transform coefficients and the compactness of transform description. Unlike fixed transforms, such as the discrete cosine transform, we can adapt GFT to a particular class of pixel blocks. In particular, we select one among a defined search space of GFTs to minimize total representation cost via our proposed algorithms, leveraging on graph optimization techniques, such as spectral clustering and minimum graph cuts. 
Furthermore, for practical implementation of GFT, we introduce two techniques to reduce computation complexity. First, at the encoder, we low-pass filter and downsample a high-resolution (HR) pixel block to obtain a low-resolution (LR) one, so that a LR-GFT can be employed. At the decoder, upsampling and interpolation are performed adaptively along HR boundaries coded using arithmetic edge coding, so that sharp object boundaries can be well preserved. Second, instead of computing GFT from a graph in real-time via eigen-decomposition, the most popular LR-GFTs are pre-computed and stored in a table for lookup during encoding and decoding. Using depth maps and computer-graphics images as examples of the PWS images, experimental results show that our proposed multiresolution-GFT scheme outperforms H.264 intra by 6.8 dB on average in peak signal-to-noise ratio at the same bit rate.", "In this work a new set of edge-adaptive transforms (EATs) is presented as an alternative to the standard DCTs used in image and video coding applications. These transforms avoid filtering across edges in each image block, thus, they avoid creating large high frequency coefficients. These transforms are then combined with the DCT in H.264 AVC and a transform mode selection algorithm is used to choose between DCT and EAT in an RD-optimized manner. These transforms are applied to coding depth maps used for view synthesis in a multi-view video coding system, and provides up to 29 bit rate reduction for a fixed quality in the synthesized views.", "Recent advent in graph signal processing (GSP) has led to the development of new graph-based transforms and wavelets for image video coding, where the underlying graph describes inter-pixel correlations. In this paper, we develop a new transform called signed graph Fourier transform (SGFT), where the underlying graph G contains negative edges that describe anti-correlations between pixel pairs. 
Specifically, we first construct a one-state Markov process that models both inter-pixel correlations and anti-correlations. We then derive the corresponding precision matrix, and show that the loopy graph Laplacian matrix Q of a graph G with a negative edge and two self-loops at its end nodes is approximately equivalent. This proves that the eigenvectors of Q — called SGFT — approximates the optimal Karhunen-Loeve Transform (KLT). We show the importance of the self-loops in G to ensure Q is positive semi-definite. We prove that the first eigenvector of Q is piecewise constant (PWC), and thus can well approximate a piecewise smooth (PWS) signal like a depth image. Experimental results show that a block-based coding scheme based on SGFT outperforms a previous scheme using graph transforms with only positive edges for several depth images." ] }
1903.03546
2922091810
Graph-based transforms have been shown to be powerful tools in terms of image energy compaction. However, when the support increases to best capture signal dependencies, the computation of the basis functions becomes rapidly intractable. This problem is particularly pressing for high dimensional imaging data such as light fields. The use of local transforms with limited supports is a way to cope with this computational difficulty. Unfortunately, the locality of the support may not allow us to fully exploit long term signal dependencies present in both the spatial and angular dimensions in the case of light fields. This paper describes sampling and prediction schemes with local graph-based transforms that efficiently compact the signal energy and exploit dependencies beyond the local graph support. The proposed approach is investigated and is shown to be very efficient in the context of spatio-angular transforms for quasi-lossless compression of light fields.
The Laplacian matrix @math is symmetric positive semi-definite and can therefore be diagonalized as @math , where @math is the matrix whose rows are the eigenvectors of the graph Laplacian and @math is the diagonal matrix whose diagonal elements are the corresponding eigenvalues. The eigenvectors @math of the graph Laplacian are analogous to the Fourier basis in the Euclidean domain and allow signals residing on the graph to be represented as a linear combination of eigenfunctions, akin to Fourier analysis @cite_28 . This is known as the Graph Fourier transform.
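As a purely illustrative sketch (not taken from the cited works), the Graph Fourier transform described above can be computed with numpy in a few lines; the toy graph and variable names are assumptions:

```python
import numpy as np

# Toy 4-node path graph: adjacency W, combinatorial Laplacian L = D - W.
W = np.array([[0., 1., 0., 0.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.]])
L = np.diag(W.sum(axis=1)) - W

# L is symmetric positive semi-definite, so eigh returns real eigenvalues
# (the graph frequencies) and an orthonormal eigenbasis (the Fourier-like basis).
lam, U = np.linalg.eigh(L)

f = np.array([1.0, 2.0, 3.0, 4.0])  # a signal on the vertices
f_hat = U.T @ f                     # Graph Fourier transform
f_rec = U @ f_hat                   # inverse transform recovers the signal

assert np.allclose(f_rec, f)
assert abs(lam[0]) < 1e-10          # a connected graph has smallest eigenvalue 0
```

The eigenvalues play the role of frequencies: a smooth graph signal concentrates its energy in the entries of `f_hat` associated with small eigenvalues, which is the property the compression schemes above exploit.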
{ "cite_N": [ "@cite_28" ], "mid": [ "2101491865" ], "abstract": [ "In applications such as social, energy, transportation, sensor, and neuronal networks, high-dimensional data naturally reside on the vertices of weighted graphs. The emerging field of signal processing on graphs merges algebraic and spectral graph theoretic concepts with computational harmonic analysis to process such signals on graphs. In this tutorial overview, we outline the main challenges of the area, discuss different ways to define graph spectral domains, which are the analogs to the classical frequency domain, and highlight the importance of incorporating the irregular structures of graph data domains when processing signals on graphs. We then review methods to generalize fundamental operations such as filtering, translation, modulation, dilation, and downsampling to the graph setting and survey the localized, multiscale transforms that have been proposed to efficiently extract information from high-dimensional data on graphs. We conclude with a brief discussion of open issues and possible extensions." ] }
A subset of vertices @math is a uniqueness set @cite_1 for signals in @math if @math . It is also shown that @math is a uniqueness set for all signals @math if and only if @math are linearly independent, where @math is the @math smallest eigenvalue of @math and @math is a reduced eigenvector. The term reduced implies taking the rows of the eigenvectors corresponding to the indices of the sampling set @math @cite_35 . It can also be shown that for any minimum uniqueness set @math of size @math for signals in @math , there is always at least one node @math such that @math is a uniqueness set of size @math for signals in @math @cite_35 . This property will be useful for iteratively selecting the set of reference samples from the input light field data.
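To make the definition concrete, here is a hedged numpy sketch (the function, graph, and names are illustrative, not from the cited works) that tests whether a vertex subset is a uniqueness set by checking linear independence of the reduced eigenvectors:

```python
import numpy as np

def is_uniqueness_set(U, S, k):
    """S is a uniqueness set for k-bandlimited signals iff the rows of the
    first k eigenvectors restricted to S are linearly independent."""
    U_red = U[S, :k]                      # reduced eigenvector matrix
    return np.linalg.matrix_rank(U_red) == k

# Eigenbasis of a 4-node path graph Laplacian (toy example).
W = np.diag(np.ones(3), 1)
W = W + W.T
L = np.diag(W.sum(axis=1)) - W
_, U = np.linalg.eigh(L)

assert is_uniqueness_set(U, [0, 1], k=2)     # 2 samples pin down bandwidth-2 signals
assert not is_uniqueness_set(U, [0], k=2)    # 1 sample cannot
```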
{ "cite_N": [ "@cite_35", "@cite_1" ], "mid": [ "2891091350", "2095414057" ], "abstract": [ "In this paper we propose a novel vertex based sampling method for k-bandlimited signals lying on arbitrary graphs, that has a reasonable computational complexity and results in low reconstruction error. Our goal is to find the smallest set of vertices that can guarantee a perfect reconstruction of any k-bandlimited signal on any connected graph. We propose to iteratively search for the vertices that yield the minimum reconstruction error, by minimizing the maximum eigenvalue of the error covariance matrix using a linear solver. We compare the performance of our method with state-of-the-art sampling strategies and random sampling on graphs. Experimental results show that our method successfully computes the smallest sample sets on arbitrary graphs without any parameter tuning. It provides a small reconstruction error, and is robust to noise.", "In this paper, we present two localized graph filtering based methods for interpolating graph signals defined on the vertices of arbitrary graphs from only a partial set of samples. The first method is an extension of previous work on reconstructing bandlimited graph signals from partially observed samples. The iterative graph filtering approach very closely approximates the solution proposed in the that work, while being computationally more efficient. As an alternative, we propose a regularization based framework in which we define the cost of reconstruction to be a combination of smoothness of the graph signal and the reconstruction error with respect to the known samples, and find solutions that minimize this cost. We provide both a closed form solution and a computationally efficient iterative solution of the optimization problem. The experimental results on the recommendation system datasets demonstrate effectiveness of the proposed methods." ] }
After building a uniqueness set, a simple way to reconstruct the missing samples is to solve a least-squares problem in the spectral domain @cite_1 , observing that the signal @math can be written as the product of the reduced eigenvector matrix and its spectral coefficients.
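A minimal numpy sketch of this least-squares reconstruction (the toy graph, bandwidth `k`, and sampling set are assumptions for illustration):

```python
import numpy as np

# Toy 5-node path graph and its Laplacian eigenbasis.
W = np.diag(np.ones(4), 1)
W = W + W.T
L = np.diag(W.sum(axis=1)) - W
_, U = np.linalg.eigh(L)

k = 2                                   # assumed bandwidth
f = U[:, :k] @ np.array([2.0, -1.0])    # a k-bandlimited signal on the graph
S = [0, 2, 4]                           # observed vertices (a uniqueness set here)

# Least squares in the spectral domain: find f_hat minimizing ||U_{S,k} f_hat - f_S||.
f_hat, *_ = np.linalg.lstsq(U[S, :k], f[S], rcond=None)
f_rec = U[:, :k] @ f_hat                # interpolate the missing samples

assert np.allclose(f_rec, f)
```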
{ "cite_N": [ "@cite_1" ], "mid": [ "2095414057" ], "abstract": [ "In this paper, we present two localized graph filtering based methods for interpolating graph signals defined on the vertices of arbitrary graphs from only a partial set of samples. The first method is an extension of previous work on reconstructing bandlimited graph signals from partially observed samples. The iterative graph filtering approach very closely approximates the solution proposed in the that work, while being computationally more efficient. As an alternative, we propose a regularization based framework in which we define the cost of reconstruction to be a combination of smoothness of the graph signal and the reconstruction error with respect to the known samples, and find solutions that minimize this cost. We provide both a closed form solution and a computationally efficient iterative solution of the optimization problem. The experimental results on the recommendation system datasets demonstrate effectiveness of the proposed methods." ] }
In the special case where @math is of size @math ( @math is therefore a uniqueness set @cite_35 for signals @math ), @math is a square invertible matrix. Equipped with these arguments, the formulation in Equation can be further simplified: the spectral coefficients are obtained by directly inverting the reduced eigenvector matrix.
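In this square case, the least-squares solve reduces to a direct linear solve, as in the following illustrative sketch (the toy graph and names are assumptions):

```python
import numpy as np

# Toy 5-node path graph and its Laplacian eigenbasis (illustrative setup).
W = np.diag(np.ones(4), 1)
W = W + W.T
L = np.diag(W.sum(axis=1)) - W
_, U = np.linalg.eigh(L)

k = 2
f = U[:, :k] @ np.array([1.0, 3.0])   # a k-bandlimited signal
S = [0, 4]                            # exactly k samples: reduced matrix is square

U_red = U[S, :k]
f_hat = np.linalg.solve(U_red, f[S])  # direct inversion replaces least squares
f_rec = U[:, :k] @ f_hat

assert np.allclose(f_rec, f)
```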
{ "cite_N": [ "@cite_35" ], "mid": [ "2891091350" ], "abstract": [ "In this paper we propose a novel vertex based sampling method for k-bandlimited signals lying on arbitrary graphs, that has a reasonable computational complexity and results in low reconstruction error. Our goal is to find the smallest set of vertices that can guarantee a perfect reconstruction of any k-bandlimited signal on any connected graph. We propose to iteratively search for the vertices that yield the minimum reconstruction error, by minimizing the maximum eigenvalue of the error covariance matrix using a linear solver. We compare the performance of our method with state-of-the-art sampling strategies and random sampling on graphs. Experimental results show that our method successfully computes the smallest sample sets on arbitrary graphs without any parameter tuning. It provides a small reconstruction error, and is robust to noise." ] }
While the aforementioned sampling theorem @cite_1 was proposed for band-limited signals, we extend these equations to our problem in the following section. More precisely, we deal with signals (i.e., color signals) that are not necessarily band-limited on the underlying graph supports (i.e., super-rays).
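When the signal is not exactly band-limited, the same machinery yields only an approximation; the sketch below (toy graph and parameters are assumptions, not from the cited works) measures the resulting relative error:

```python
import numpy as np

# Toy 8-node path graph and its Laplacian eigenbasis.
W = np.diag(np.ones(7), 1)
W = W + W.T
L = np.diag(W.sum(axis=1)) - W
_, U = np.linalg.eigh(L)

rng = np.random.default_rng(0)
f = rng.standard_normal(8)              # generic signal: not band-limited

k = 4
S = [0, 2, 4, 6]                        # observed vertices
f_hat, *_ = np.linalg.lstsq(U[S, :k], f[S], rcond=None)
f_rec = U[:, :k] @ f_hat

# Accuracy depends on how much of the signal's energy lies in the
# first k graph frequencies; here the reconstruction is only approximate.
err = np.linalg.norm(f - f_rec) / np.linalg.norm(f)
print(f"relative reconstruction error: {err:.3f}")
```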
{ "cite_N": [ "@cite_1" ], "mid": [ "2095414057" ], "abstract": [ "In this paper, we present two localized graph filtering based methods for interpolating graph signals defined on the vertices of arbitrary graphs from only a partial set of samples. The first method is an extension of previous work on reconstructing bandlimited graph signals from partially observed samples. The iterative graph filtering approach very closely approximates the solution proposed in the that work, while being computationally more efficient. As an alternative, we propose a regularization based framework in which we define the cost of reconstruction to be a combination of smoothness of the graph signal and the reconstruction error with respect to the known samples, and find solutions that minimize this cost. We provide both a closed form solution and a computationally efficient iterative solution of the optimization problem. The experimental results on the recommendation system datasets demonstrate effectiveness of the proposed methods." ] }
The availability of commercial light field cameras has given momentum to the development of light field compression algorithms. Many solutions proposed so far adapt standardized image and video compression solutions (in particular HEVC) to light field data. This is the case in @cite_11 @cite_17 @cite_24 @cite_20 , where the authors extend HEVC intra coding modes by adding new prediction modes to exploit similarity between lenslet images. This is also the case in @cite_27 @cite_29 @cite_13 , where the views are encoded as pseudo video sequences using HEVC or the latest JEM software, or in @cite_2 where HEVC is extended for coding an array of views.
{ "cite_N": [ "@cite_11", "@cite_29", "@cite_24", "@cite_27", "@cite_2", "@cite_13", "@cite_20", "@cite_17" ], "mid": [ "2029608396", "2564478397", "", "2525683018", "2622195611", "2794141660", "2524177174", "" ], "abstract": [ "Holoscopic imaging is an advantageous solution for glassless 3D video systems, which promises to revolutionize the 3D market in the near future. Besides freeing the user from wearing any viewing device, it supports full motion parallax, improving this way the users' viewing experience. However, in order to provide 3D holoscopic content with convenient visual quality in terms of resolution and 3D perception, ultra-high resolution acquisition and display devices are required. Consequently, efficient video coding tools to deal with this large amount of data become of paramount importance. The recent standardization project called High Efficiency Video Coding (HEVC) addresses the requirements of high resolution video coding, but does not yet address the specific characteristics of 3D holoscopic content. To remedy this situation, this paper proposes to incorporate new prediction modes in HEVC to explore the particular structure of 3D holoscopic content, in order to further improve the performance of HEVC for this type of content. Experimental results, based on the HEVC test model version 4.0 are presented and clearly show the advantages of using this approach.", "Light Fields capturing all light rays at every point in space and in all directions contain very rich information about the scene. This rich description of the scene enables advanced image creation capabilities, such as re-focusing or extended depth of field from a single capture. But, it yields a very high volume of data which needs compression. This paper studies the impact of Light Fields compression on two key functionalities: refocusing and extended focus. The sub-aperture images forming the Light Field are compressed as a video sequence with HEVC. 
A focus stack and the scene depth map are computed from the compressed light field and are used to render an image with an extended depth of field (called the extended focus image). It has been first observed that the Light Field could be compressed with a factor up to 700 without significantly affecting the visual quality of both refocused and extended focus images. To further analyze the compression effect, a dedicated quality evaluation method based on contrast and gradient measurements is considered to differentiate the natural geometrical blur from the blur resulting from compression. As a second part of the experiments, it is shown that the texture distortion of the in-focus regions in the focus stacks is the main cause of the quality degradation in the extended focus and that the depth errors do not impact the extended focus quality unless the light field is significantly distorted with a compression ratio of around 2000:1.", "", "We propose a pseudo-sequence-based scheme for light field image compression. In our scheme, the raw image captured by a light field camera is decomposed into multiple views according to the lenslet array of that camera. These views constitute a pseudo sequence like video, and the redundancy between views is exploited by a video encoder. The specific coding order of views, prediction structure, and rate allocation have been investigated for encoding the pseudo sequence. Experimental results show the superior performance of our scheme, which achieves as high as 6.6 dB gain compared with directly encoding the raw image by the legacy JPEG.", "Over the last decade, advancements in optical devices have made it possible for new novel image acquisition technologies to appear. Angular information for each spatial point is acquired in addition to the spatial information of the scene that enables 3D scene reconstruction and various post-processing effects. 
Current generation of plenoptic cameras spatially multiplex the angular information, which implies an increase in image resolution to retain the level of spatial information gathered by conventional cameras. In this work, the resulting plenoptic image is interpreted as a multi-view sequence that is efficiently compressed using the multi-view extension of high efficiency video coding (MV-HEVC). A novel two-dimensional weighted prediction and rate allocation scheme is proposed to adopt the HEVC compression structure to the plenoptic image properties. The proposed coding approach is a response to ICIP 2017 Grand Challenge: Light field Image Coding. The proposed scheme outperforms all ICME-contestants, and improves on the JPEG-anchor of ICME with an average PSNR gain of 7.5 dB and the HEVC-anchor of ICIP 2017 Grand Challenge with an average PSNR gain of 2.4 dB.", "In this paper, we explore the structure of light field (LF) and efficiently improve the performance of the pseudo sequence based lenslet image compression, by optimized sub-view rearrangement, enhanced illumination compensation and adaptive reconstruction filtering. First, the decomposed sub-view images are rearranged into a pseudo sequence according to our optimized scan order based on sub-view correlation. Second, the generated pseudo sequence is compressed with JEM codec in which we enhance the illumination compensation by adaptively selecting reference pixels used in parameter derivation. Finally, to reduce the distortions in lenslet image decomposition and reconstruction, we propose an lenslet reconstruction method by applying adaptive filters to the reconstructed lenslet images to compensate the reconstruction errors. Each filter is derived by minimizing the distortions between the original and reconstructed pixels with same geolocation in the lenslet image. 
Extensive experimental results show that the proposed method achieves up to 53.7% bit rate reduction over HEVC intra coding and 34.8% over JEM intra coding in terms of BDBR.", "Plenoptic images are one type of light field contents produced by using a combination of a conventional camera and an additional optical component in the form of microlens arrays, which are positioned in front of the image sensor surface. This camera setup can capture a sub-sampling of the light field with high spatial fidelity over a small range, and with a more coarsely sampled angle range. The earliest applications that leverage on the plenoptic image content is image refocusing, non-linear distribution of out-of-focus areas, SNR vs. resolution trade-offs, and 3D-image creation. All functionalities are provided by using post-processing methods. In this work, we evaluate a compression method that we previously proposed for a different type of plenoptic image (focused or plenoptic camera 2.0 contents) than the unfocused or plenoptic camera 1.0 that is used in this Grand Challenge. The method is an extension of the state-of-the-art video compression standard HEVC where we have brought the capability of bi-directional inter-frame prediction into the spatial prediction. The method is evaluated according to the scheme set out by the Grand Challenge, and the results show a high compression efficiency compared with JPEG, i.e., up to 6 dB improvements for the tested images.", "" ] }
Low-rank models as well as local Gaussian mixture models in the 4D ray space are proposed in @cite_33 , @cite_7 and @cite_8 respectively. View synthesis based predictive coding is another research direction, followed in @cite_32 where the authors use a linear approximation computed with Matching Pursuit for disparity-based view prediction. The authors in @cite_30 and @cite_16 instead use the convolutional neural network (CNN) architecture proposed in @cite_14 for view synthesis and prediction. The prediction residue is then coded using HEVC @cite_30 , or using local residue transforms (SA-DCT) and coding @cite_16 . The authors in @cite_21 use a depth-based segmentation of the light field into 4D spatio-angular blocks with prediction, followed by JPEG 2000. View synthesis followed by predictive coding is also the approach taken in JPEG Pleno @cite_26 . While all the prior work mentioned above targets lossy compression, much less effort has been dedicated to lossless coding of light fields. One can however mention the approach proposed in @cite_5 , which uses differential prediction.
{ "cite_N": [ "@cite_30", "@cite_14", "@cite_26", "@cite_33", "@cite_7", "@cite_8", "@cite_21", "@cite_32", "@cite_5", "@cite_16" ], "mid": [ "", "2551052086", "", "2751965900", "2944909777", "2751111769", "2793519528", "2789672498", "2098872661", "2781498243" ], "abstract": [ "", "With the introduction of consumer light field cameras, light field imaging has recently become widespread. However, there is an inherent trade-off between the angular and spatial resolution, and thus, these cameras often sparsely sample in either spatial or angular domain. In this paper, we use machine learning to mitigate this trade-off. Specifically, we propose a novel learning-based approach to synthesize new views from a sparse set of input views. We build upon existing view synthesis techniques and break down the process into disparity and color estimation components. We use two sequential convolutional neural networks to model these two components and train both networks simultaneously by minimizing the error between the synthesized and ground truth images. We show the performance of our approach using only four corner sub-aperture views from the light fields captured by the Lytro Illum camera. Experimental results show that our approach synthesizes high-quality images that are superior to the state-of-the-art techniques on a variety of challenging real-world scenes. We believe our method could potentially decrease the required angular resolution of consumer light field cameras, which allows their spatial resolution to increase.", "", "This paper describes a light field compression scheme based on a novel homography-based low-rank approximation method called HLRA. The HLRA method jointly searches for the set of homographies best aligning the light field views and for the low-rank approximation matrices. 
The light field views are aligned using either one global homography or multiple homographies depending on how much the disparity across views varies from one depth plane to the other. The light field low-rank representation is then compressed using high efficiency video coding (HEVC). The best pair of rank and quantization parameters of the coding scheme, for a given target bit rate, is predicted with a model defined as a function of light field disparity and texture features. The results are compared with those obtained by directly applying HEVC on the light field views restructured as a pseudovideo sequence. The experiments using different datasets show substantial peak signal to noise ratio (PSNR)-rate gain of our compression algorithm, as well as the accuracy of the proposed parameter prediction model, especially for real light fields. A scalable extension of the coding scheme is finally proposed.", "", "The proposed framework, called Steered Mixture-of-Experts (SMoE), enables a multitude of processing tasks on light fields using a single unified Bayesian model. The underlying assumption is that light field rays are instantiations of a non-linear or non-stationary random process that can be modeled by piecewise stationary processes in the spatial domain. As such, it is modeled as a space-continuous Gaussian Mixture Model. Consequently, the model takes into account different regions of the scene, their edges, and their development along the spatial and disparity dimensions. Applications presented include light field coding, depth estimation, edge detection, segmentation, and view interpolation. The representation is compact, which allows for very efficient compression yielding state-of-the-art coding results for low bit-rates. Furthermore, due to the statistical representation, a vast amount of information can be queried from the model even without having to analyze the pixel values. 
This allows for “blind” light field processing and classification.", "This paper proposes a lenslet image compression method scalable from low bitrates to fully lossless. The subaperture images are split into two sets: a set of reference views, encoded by a standard lossy or lossless compressor, and the set of dependent views, which are reconstructed by sparse prediction from the reference set using the geometrical information from the depth map. The set of reference views may contain all views and all views may also be dependent views, in which case the sparse predictive stage does not reconstruct from scratch the views, but it refines in a sequential order all views by combining in an optimal way the information about the same region existing in neighbor views. The encoder transmits to the decoder a segmented version of the scene depthmap, the encoded versions of the reference views, displacements for each region from the central view to each of the dependent views, and finally the sparse predictors for each region and each dependent view. The scheme can be configured to ensure random access to the dependent views, while the reference views are compressed in a backward compatible way, e.g., using JPEG 2000. The experimental results show performance better than that of the baseline standard compressor used, JPEG 2000.", "In recent years, the light field (LF) image as a new imaging modality has attracted much interest. While light field camera records both the luminance and direction of the rays in a scene, large amount of data makes it a great challenge for storage and transmission. Thus an adequate compression scheme is desired. In this paper, we propose a new prior, called linear approximation prior that reveals intrinsic property among the LF sub-views. It indicates that we can approximate a certain view with a weighted sum of other views. By fully exploiting this prior we propose a powerful coding scheme. 
The experiments show the superior performance of our scheme, which achieves as large as 45.51% BD-rate reduction and 37.41% BD-rate reduction on average compared with the High Efficiency Video Coding (HEVC).", "Plenoptic images are obtained from the projection of the light crossing a matrix of microlens arrays which replicates the scene from different direction into a camera device sensor. Plenoptic images have a different structure with respect to regular digital images, and novel algorithms for data compression are currently under research. This paper proposes an algorithm for the compression of plenoptic images. The micro images composing a plenoptic image are processed by an adaptive prediction tool, aiming at reducing data correlation before entropy coding takes place. The algorithm is compared with state-of-the-art image compression algorithms, namely, JPEG 2000 and JPEG XR. Obtained results demonstrate that the proposed algorithm improves the coding efficiency.", "This paper describes a graph-based coding scheme for light fields (LF). It first adapts graph-based representations (GBR) to describe color and geometry information of LF. Graph connections describing scene geometry capture inter-view dependencies. They are used as the support of a weighted Graph Fourier Transform (wGFT) to encode disoccluded pixels. The quality of the LF reconstructed from the graph is enhanced by adding extra color information to the representation for a sub-set of sub-aperture images. Experiments show that the proposed scheme yields rate-distortion gains compared with HEVC based compression (directly compressing the LF as a video sequence by HEVC)." ] }
1903.03443
2950870379
Modeling social interactions based on individual behavior has always been an area of interest, but prior literature generally presumes rational behavior. Thus, such models may miss out on capturing the effects of biases humans are susceptible to. This work presents a method to model egocentric bias, the real-life tendency to emphasize one's own opinion heavily when presented with multiple opinions. We use a symmetric distribution centered at an agent's own opinion, as opposed to the Bounded Confidence (BC) model used in prior work. We consider a game of iterated interactions where an agent cooperates based on its opinion about an opponent. Our model also includes the concept of domain-based self-doubt, which varies as the interaction succeeds or not. An increase in doubt makes an agent reduce its egocentricity in subsequent interactions, thus enabling the agent to learn reactively. The agent system is modeled with factions not having a single leader, to overcome some of the issues associated with leader-follower factions. We find that agents belonging to factions perform better than individual agents. We observe that an intermediate level of egocentricity helps the agent perform at its best, which concurs with conventional wisdom that neither overconfidence nor low self-esteem brings benefits.
Prior work has been done to model confirmation bias, but the most used model has been the Bounded Confidence (BC) model. The BC model was first introduced by Krause in 2000 @cite_5 . Later, Deffuant @cite_15 proposed a relative agreement model (RA) which extended the BC model. In the BC model, an agent considers only those opinions that are sufficiently close to its own, and shuns any opinion outside the confidence threshold. This model has been used to model confirmation bias in many papers @cite_2 @cite_35 @cite_9 @cite_65 @cite_8 .
{ "cite_N": [ "@cite_35", "@cite_8", "@cite_9", "@cite_65", "@cite_2", "@cite_5", "@cite_15" ], "mid": [ "2083689991", "2962854160", "2113096089", "2604617587", "2098918961", "37686529", "1582135188" ], "abstract": [ "We present a model of opinion dynamics in which agents adjust continuous opinions as a result of random binary encounters whenever their difference in opinion is below a given threshold. High thresholds yield convergence of opinions towards an average opinion, whereas low thresholds result in several opinion clusters: members of the same cluster share the same opinion but are no longer influenced by members of other clusters.", "Online users tend to select claims that adhere to their system of beliefs and to ignore dissenting information. Confirmation bias, indeed, plays a pivotal role in viral phenomena. Furthermore, the wide availability of content on the web fosters the aggregation of likeminded people where debates tend to enforce group polarization. Such a configuration might alter the public debate and thus the formation of the public opinion. In this paper we provide a mathematical model to study online social debates and the related polarization dynamics. We assume the basic updating rule of the Bounded Confidence Model (BCM) and we develop two variations a) the Rewire with Bounded Confidence Model (RBCM), in which discordant links are broken until convergence is reached; and b) the Unbounded Confidence Model, under which the interaction among discordant pairs of users is allowed even with a negative feedback, either with the rewiring step (RUCM) or without it (UCM). From numerical simulations we find that the new models (UCM and RUCM), unlike the BCM, are able to explain the coexistence of two stable final opinions, often observed in reality. Lastly, we present a mean field approximation of the newly introduced models.", "When does opinion formation within an interacting group lead to consensus, polarization or fragmentation? 
The article investigates various models for the dynamics of continuous opinions by analytical methods as well as by computer simulations. Section 2 develops within a unified framework the classical model of consensus formation, the variant of this model due to Friedkin and Johnsen, a time-dependent version and a nonlinear version with bounded confidence of the agents. Section 3 presents for all these models major analytical results. Section 4 gives an extensive exploration of the nonlinear model with bounded confidence by a series of computer simulations. An appendix supplies needed mathematical definitions, tools, and theorems.", "We present an introduction to a novel model of an individual and group opinion dynamics, taking into account different ways in which different sources of information are filtered due to cognitive biases. The agent based model, using Bayesian updating of the individual belief distribution, is based on the recent psychology work by Dan Kahan. Open nature of the model allows to study the effects of both static and time-dependent biases and information processing filters. In particular, the paper compares the effects of two important psychological mechanisms: the confirmation bias and the politically motivated reasoning. Depending on the effectiveness of the information filtering (agent bias), the agents confronted with an objective information source may either reach a consensus based on the truth, or remain divided despite the evidence. In general, the model might provide an understanding into the increasingly polarized modern societies, especially as it allows mixing of different types of filters: psychological, social, and algorithmic.", "We present a model of opinion dynamics in which agents adjust continuous opinions as a result of random binary encounters whenever their difference in opinion is below a given threshold. 
High thresholds yield convergence of opinions towards an average opinion, whereas low thresholds result in several opinion clusters. The model is further generalised to threshold heterogeneity, adaptive thresholds and binary strings of opinions.", "Consensus formation among n experts is modeled as a positive discrete dynamical system in n dimensions. The well–known linear but non–autonomous model is extended to a nonlinear one admitting also various kinds of averaging beside the weighted arithmetic mean. For this model a sufficient condition for reaching a consensus is presented. As a special case consensus formation under bounded confidence is analyzed.", "Abstract: We model opinion dynamics in populations of agents with continuous opinion and uncertainty. The opinions and uncertainties are modified by random pair interactions. We propose a new model of interactions, called relative agreement model, which is a variant of the previously discussed bounded confidence. In this model, uncertainty as well as opinion can be modified by interactions. We introduce extremist agents by attributing a much lower uncertainty (and thus higher persuasion) to a small proportion of agents at the extremes of the opinion distribution. We study the evolution of the opinion distribution submitted to the relative agreement model. Depending upon the choice of parameters, the extremists can have a very local influence or attract the whole population. We propose a qualitative analysis of the convergence process based on a local field notion. The genericity of the observed results is tested on several variants of the bounded confidence model." ] }
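The bounded-confidence interaction these abstracts describe — two agents average their opinions only when those opinions already lie within a confidence threshold — can be sketched in a few lines. This is a minimal illustrative simulation, not code from any of the cited papers; `eps` (confidence threshold) and `mu` (convergence rate) are the standard Deffuant parameters, and the specific values below are arbitrary.

```python
import random

def deffuant_step(opinions, eps=0.2, mu=0.5, rng=random):
    """One random pairwise interaction of the bounded-confidence (Deffuant) model.

    Two agents move toward each other's opinion only if their difference is
    below the confidence threshold eps; opinions outside the threshold are
    simply ignored (the confirmation-bias mechanism discussed above).
    """
    i, j = rng.sample(range(len(opinions)), 2)
    diff = opinions[j] - opinions[i]
    if abs(diff) < eps:
        opinions[i] += mu * diff
        opinions[j] -= mu * diff

def simulate(n=100, steps=20000, eps=0.5, seed=0):
    """Run repeated random interactions from uniform initial opinions in [0, 1]."""
    rng = random.Random(seed)
    opinions = [rng.random() for _ in range(n)]
    for _ in range(steps):
        deffuant_step(opinions, eps=eps, rng=rng)
    return opinions
```

High thresholds drive the population toward a single average opinion, while low thresholds leave several opinion clusters, which matches the qualitative behaviour reported in the cited abstracts.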
1903.03443
2950870379
Modeling social interactions based on individual behavior has always been an area of interest, but prior literature generally presumes rational behavior. Thus, such models may miss out on capturing the effects of biases humans are susceptible to. This work presents a method to model egocentric bias, the real-life tendency to emphasize one's own opinion heavily when presented with multiple opinions. We use a symmetric distribution centered at an agent's own opinion, as opposed to the Bounded Confidence (BC) model used in prior work. We consider a game of iterated interactions where an agent cooperates based on its opinion about an opponent. Our model also includes the concept of domain-based self-doubt, which varies as the interaction succeeds or not. An increase in doubt makes an agent reduce its egocentricity in subsequent interactions, thus enabling the agent to learn reactively. The agent system is modeled with factions not having a single leader, to overcome some of the issues associated with leader-follower factions. We find that agents belonging to factions perform better than individual agents. We observe that an intermediate level of egocentricity helps the agent perform at its best, which concurs with conventional wisdom that neither overconfidence nor low self-esteem brings benefits.
Factions have been broadly considered to be specific sets of agents. However, a faction has been modeled in different ways. Some factions have been modeled as a leader-follower group, where the leader determines the group dynamics @cite_67 . Even if the group does not have an assigned leader to start with, it has been suggested that an agent with high cognitive capacity eventually emerges as a leader @cite_55 . Such a leader eventually impacts the performance of the entire group. Factions can also be modeled as a selfish herd, where each agent is a member for its own gain @cite_21 . However, this structure does not have a single leader and such models have proved useful in modeling certain group behaviors @cite_31 @cite_4 .
{ "cite_N": [ "@cite_67", "@cite_4", "@cite_55", "@cite_21", "@cite_31" ], "mid": [ "172598835", "2091087160", "2166876100", "1998143474", "2102746067" ], "abstract": [ "This paper presents a synthetic approach for generating role playing simulation games intended to support analysts (and trainees) interested in testing alternative competing courses of action (operations) and discovering what effects they are likely to precipitate in potential ethno-political conflict situations. Simulated leaders and followers capable of playing these games are implemented in a cognitive modeling framework, called PMFserv, which covers value systems, personality and cultural factors, emotions, relationships, perception, stress coping style and decision making. Of direct interest, as Sect. 1.1 explains, is mathematical representation and synthesis of best-of-breed behavioral science models within this framework to reduce dimensionality and to improve the realism and internal validity of the agent implementations. Sections 2 and 3 present this for leader profiling instruments and group membership decision-making, respectively. Section 4 serves as an existence proof that the framework has generated several training and analysis tools, and Sect. 5 concludes with lessons learned. Part II turns to the question of assessment of the synthesis and its usage in course of action studies.", "An informational cascade occurs when it is optimal for an individual, having observed the actions of those ahead of him, to follow the behavior of the preceding individual without regard to his own information. We argue that localized conformity of behavior and the fragility of mass behaviors can be explained by informational cascades.", "This study tracked the leadership development of 236 male cadets from matriculation through graduation at a military college. 
Cognitive ability, physical fitness, prior influence experiences, and self-esteem measured in Year 1 were relevant to predicting those who assumed formal leadership positions in Year 4. Physical fitness and prior influence experiences measured when cadets entered the college predicted leader effectiveness rated in their fourth year. Stress tolerance and moral reasoning levels did not predict leader emergence or effectiveness, though the set of individual difference measures significantly predicted emergence and effectiveness. Physical fitness levels and moral reasoning increased over time for all cadets, though surprisingly, levels of self-esteem and stress tolerance did not increase over time. Overall the study demonstrated that leadership effectiveness and emergence could be predicted from early measures of individual differences.", "Abstract This paper presents an antithesis to the view that gregarious behaviour is evolved through benefits to the population or species. Following Galton (1871) and Williams (1964) gregarious behaviour is considered as a form of cover-seeking in which each animal tries to reduce its chance of being caught by a predator. It is easy to see how pruning of marginal individuals can maintain centripetal instincts in already gregarious species; some evidence that marginal pruning actually occurs is summarized. Besides this, simply defined models are used to show that even in non-gregarious species selection is likely to favour individuals who stay close to others. Although not universal or unipotent, cover-seeking is a widespread and important element in animal aggregation, as the literature shows. Neglect of the idea has probably followed from a general disbelief that evolution can be dysgenic for a species. Nevertheless, selection theory provides no support for such disbelief in the case of species with outbreeding or unsubdivided populations. 
The model for two dimensions involves a complex problem in geometrical probability which has relevance also in metallurgy and communication science. Some empirical data on this, gathered from random number plots, is presented as of possible heuristic value.", "Herding is a form of convergent social behaviour that can be broadly defined as the alignment of the thoughts or behaviours of individuals in a group (herd) through local interaction and without centralized coordination. We suggest that herding has a broad application, from intellectual fashion to mob violence; and that understanding herding is particularly pertinent in an increasingly interconnected world. An integrated approach to herding is proposed, describing two key issues: mechanisms of transmission of thoughts or behaviour between agents, and patterns of connections between agents. We show how bringing together the diverse, often disconnected, theoretical and methodological approaches illuminates the applicability of herding to many domains of cognition and suggest that cognitive neuroscience offers a novel approach to its study." ] }
1903.03837
2921355632
As the computational power of today's devices increases, real-time physically-based rendering becomes possible, and is rapidly gaining attention across a variety of domains. These include gaming, where physically-based rendering enhances immersion and overall entertainment experience, all the way to medicine, where it constitutes a powerful tool for intuitive volumetric data visualization. However, leveraging the obvious benefits of physically-based rendering (also referred to as photo-realistic rendering) remains challenging on embedded devices such as optical see-through head-mounted displays because of their limited computational power, and restricted memory usage and power consumption. We propose methods that aim at overcoming these limitations, fueling the implementation of real-time physically-based rendering on embedded devices. We navigate the compromise between memory requirement, computational power, and image quality to achieve reasonable rendering results by introducing a flexible representation of plenoptic functions and adapting a fast approximation algorithm for image generation from our plenoptic functions. We conclude by discussing potential applications and limitations of the proposed method.
Several others followed these considerations and investigated different geometrical primitives for parameterization. @cite_17 examined a sphere as a primitive to provide more uniformly sampled light fields. They perform a binning approach based on a Bresenham-style discretization of the spherical surface which introduces two drawbacks: First, this sampling scheme is not perfectly uniform, and second, retrieving radiance information back from the data structure has linear time complexity and is dependent on the number of bins. This suggests that rendering time substantially increases with higher resolution, since both the number of rays and the retrieval time per ray increase.
{ "cite_N": [ "@cite_17" ], "mid": [ "114172488" ], "abstract": [ "Image-based or light field rendering has received much recent attention as an alternative to traditional geometric methods for modeling and rendering complex objects. A light field represents the radiance flowing through all the points in a scene in all possible directions. We explore two new techniques for efficiently acquiring, storing, and reconstructing light fields in a (nearly) uniform fashion. Both techniques sample the light field by sampling the set of lines that intersect a sphere tightly fit around a given object. Our first approach relies on uniformly subdividing the sphere and representing this subdivision in a compact data structure which allows efficient mapping of image pixels or rays to sphere points and then to subdivision elements. We sample a light field by joining pairs of subdivision elements and store the resulting samples in a multi-resolution, highly compressed fashion that allows efficient rendering. Our second method allows a uniform sampling of all five dimensions of the light field, using hierarchical subdivision for directional space and uniform grid sampling for positional space. Light field models are acquired using parallel projections along a set of uniform directions. Depth information can also be stored for high-quality image rendering. The system can provide bounds on key sources of error in the representation and can be generalized to arbitrary scenes comprising multiple complex objects." ] }
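The retrieval drawback noted above — looking up the bin for a query ray costs time linear in the number of bins — can be made concrete with a toy sketch. The circle-based "discretization" below is a hypothetical stand-in for the actual Bresenham-style sphere subdivision, chosen only to exhibit the linear scan; the function names are ours.

```python
import math

def make_bins(n):
    """Hypothetical stand-in for a sphere discretization: n bin centers
    placed on the unit circle (real schemes subdivide the sphere)."""
    return [(math.cos(2 * math.pi * k / n), math.sin(2 * math.pi * k / n))
            for k in range(n)]

def lookup(bins, direction):
    """Linear-scan retrieval: find the bin whose center best matches the
    query direction. The cost grows with len(bins), which is the drawback
    discussed above -- per-ray retrieval is O(number of bins)."""
    best, best_dot = None, -2.0
    for idx, (bx, by) in enumerate(bins):
        d = bx * direction[0] + by * direction[1]
        if d > best_dot:
            best, best_dot = idx, d
    return best
```

Since every rendered ray pays this per-lookup cost, doubling the angular resolution both multiplies the number of rays and lengthens each lookup, which is why rendering time grows so quickly in that scheme.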
1903.03503
2920987997
A wide range of systems exhibit high dimensional incomplete data. Accurate estimation of the missing data is often desired, and is crucial for many downstream analyses. Many state-of-the-art recovery methods involve supervised learning using datasets containing full observations. In contrast, we focus on unsupervised estimation of missing image data, where no full observations are available - a common situation in practice. Unsupervised imputation methods for images often employ a simple linear subspace to capture correlations between data dimensions, omitting more complex relationships. In this work, we introduce a general probabilistic model that describes sparse high dimensional imaging data as being generated by a deep non-linear embedding. We derive a learning algorithm using a variational approximation based on convolutional neural networks and discuss its relationship to linear imputation models, the variational auto encoder, and deep image priors. We introduce sparsity-aware network building blocks that explicitly model observed and missing data. We analyze proposed sparsity-aware network building blocks, evaluate our method on public domain imaging datasets, and conclude by showing that our method enables imputation in an important real-world problem involving medical images. The code is freely available as part of the |neuron| library at this http URL.
Collaborative filtering systems include only a sparse observation of users' preferences @cite_42 @cite_53 . Here, methods aim to learn from user preferences to produce future recommendations. Often, these models build user representations using matrix completion methods, which share a goal with data imputation using linear embeddings. Recent methods have exploited convolutional neural networks for joint user representations with external information regarding content @cite_9 . Other methods use shallow auto-encoders with sparse data and propose specific regularized loss functions @cite_11 @cite_23 . Similar to linear subspace models, these methods can be characterized as instantiations of our model.
{ "cite_N": [ "@cite_9", "@cite_53", "@cite_42", "@cite_23", "@cite_11" ], "mid": [ "2725606191", "2042281163", "2159094788", "1720514416", "2099866409" ], "abstract": [ "Modern recommender systems usually employ collaborative filtering with rating information to recommend items to users due to its successful performance. However, because of the drawbacks of collaborative-based methods such as sparsity, cold start, etc., more attention has been drawn to hybrid methods that consider both the rating and content information. Most of the previous works in this area cannot learn a good representation from content for recommendation task or consider only text modality of the content, thus their methods are very limited in current multimedia scenario. This paper proposes a Bayesian generative model called collaborative variational autoencoder (CVAE) that considers both rating and content for recommendation in multimedia scenario. The model learns deep latent representations from content data in an unsupervised manner and also learns implicit relationships between items and users from both content and rating. Unlike previous works with denoising criteria, the proposed CVAE learns a latent distribution for content in latent space instead of observation space through an inference network and can be easily extended to other multimedia modalities other than text. Experiments show that CVAE is able to significantly outperform the state-of-the-art recommendation methods with more robust performance.", "Recommender systems apply knowledge discovery techniques to the problem of making personalized recommendations for information, products or services during a live interaction. These systems, especially the k-nearest neighbor collaborative filtering based ones, are achieving widespread success on the Web. The tremendous growth in the amount of available information and the number of visitors to Web sites in recent years poses some key challenges for recommender systems. 
These are: producing high quality recommendations, performing many recommendations per second for millions of users and items and achieving high coverage in the face of data sparsity. In traditional collaborative filtering systems the amount of work increases with the number of participants in the system. New recommender system technologies are needed that can quickly produce high quality recommendations, even for very large-scale problems. To address these issues we have explored item-based collaborative filtering techniques. Item-based techniques first analyze the user-item matrix to identify relationships between different items, and then use these relationships to indirectly compute recommendations for users. In this paper we analyze different item-based recommendation generation algorithms. We look into different techniques for computing item-item similarities (e.g., item-item correlation vs. cosine similarities between item vectors) and different techniques for obtaining recommendations from them (e.g., weighted sum vs. regression model). Finally, we experimentally evaluate our results and compare them to the basic k-nearest neighbor approach. Our experiments suggest that item-based algorithms provide dramatically better performance than user-based algorithms, while at the same time providing better quality than the best available user-based algorithms.", "Recommendation algorithms are best known for their use on e-commerce Web sites, where they use input about a customer's interests to generate a list of recommended items. Many applications use only the items that customers purchase and explicitly rate to represent their interests, but they can also use other attributes, including items viewed, demographic data, subject interests, and favorite artists. At Amazon.com, we use recommendation algorithms to personalize the online store for each customer.
The store radically changes based on customer interests, showing programming titles to a software engineer and baby toys to a new mother. There are three common approaches to solving the recommendation problem: traditional collaborative filtering, cluster models, and search-based methods. Here, we compare these methods with our algorithm, which we call item-to-item collaborative filtering. Unlike traditional collaborative filtering, our algorithm's online computation scales independently of the number of customers and number of items in the product catalog. Our algorithm produces recommendations in real-time, scales to massive data sets, and generates high quality recommendations.", "This paper proposes AutoRec, a novel autoencoder framework for collaborative filtering (CF). Empirically, AutoRec's compact and efficiently trainable model outperforms state-of-the-art CF techniques (biased matrix factorization, RBM-CF and LLORMA) on the Movielens and Netflix datasets.", "Most of the existing approaches to collaborative filtering cannot handle very large data sets. In this paper we show how a class of two-layer undirected graphical models, called Restricted Boltzmann Machines (RBM's), can be used to model tabular data, such as user's ratings of movies. We present efficient learning and inference procedures for this class of models and demonstrate that RBM's can be successfully applied to the Netflix data set, containing over 100 million user movie ratings. We also show that RBM's slightly outperform carefully-tuned SVD models. When the predictions of multiple RBM models and multiple SVD models are linearly combined, we achieve an error rate that is well over 6% better than the score of Netflix's own system." ] }
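The link drawn above between matrix-completion recommenders and linear-embedding imputation can be illustrated with a minimal factorization that only ever touches observed entries. This is an illustrative sketch, not the method of any cited paper; the rank, learning rate, and step count are arbitrary choices.

```python
import random

def factorize(ratings, rank=2, steps=12000, lr=0.05, seed=0):
    """Fit ratings[(user, item)] ~ dot(U[user], V[item]) by SGD.

    Only observed entries appear in the `ratings` dict; unobserved pairs are
    simply absent, which is the matrix-completion analogue of the
    missing-data imputation discussed above.
    """
    rng = random.Random(seed)
    U = {u: [rng.gauss(0, 0.1) for _ in range(rank)] for u, _ in ratings}
    V = {i: [rng.gauss(0, 0.1) for _ in range(rank)] for _, i in ratings}
    entries = list(ratings.items())
    for _ in range(steps):
        (u, i), r = rng.choice(entries)
        err = r - sum(a * b for a, b in zip(U[u], V[i]))
        for k in range(rank):
            uk, vk = U[u][k], V[i][k]
            U[u][k] += lr * err * vk
            V[i][k] += lr * err * uk
    return U, V

def predict(U, V, u, i):
    """Impute a missing (user, item) value from the learned embeddings."""
    return sum(a * b for a, b in zip(U[u], V[i]))
```

`predict` on a pair that was never observed is exactly the imputation step: the low-rank embeddings learned from the observed entries fill in the gap.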
1903.03503
2920987997
A wide range of systems exhibit high dimensional incomplete data. Accurate estimation of the missing data is often desired, and is crucial for many downstream analyses. Many state-of-the-art recovery methods involve supervised learning using datasets containing full observations. In contrast, we focus on unsupervised estimation of missing image data, where no full observations are available - a common situation in practice. Unsupervised imputation methods for images often employ a simple linear subspace to capture correlations between data dimensions, omitting more complex relationships. In this work, we introduce a general probabilistic model that describes sparse high dimensional imaging data as being generated by a deep non-linear embedding. We derive a learning algorithm using a variational approximation based on convolutional neural networks and discuss its relationship to linear imputation models, the variational auto encoder, and deep image priors. We introduce sparsity-aware network building blocks that explicitly model observed and missing data. We analyze proposed sparsity-aware network building blocks, evaluate our method on public domain imaging datasets, and conclude by showing that our method enables imputation in an important real-world problem involving medical images. The code is freely available as part of the |neuron| library at this http URL.
Variational Bayes auto-encoders (VAEs) and similar models have been used to learn probabilistic generative models, often in the context of images @cite_16 @cite_40 @cite_2 . Similarly, deep denoising auto-encoders use neural networks to obtain embeddings that are robust to noise @cite_52 . Our method builds on these recent developments to approximate subspaces using neural networks. Importantly, we show that a principled treatment of a sparsity model leads to important and intuitive differences from the VAE.
{ "cite_N": [ "@cite_40", "@cite_16", "@cite_52", "@cite_2" ], "mid": [ "", "2949416428", "2145094598", "2963173382" ], "abstract": [ "", "The ever-increasing size of modern data sets combined with the difficulty of obtaining label information has made semi-supervised learning one of the problems of significant practical importance in modern data analysis. We revisit the approach to semi-supervised learning with generative models and develop new models that allow for effective generalisation from small labelled data sets to large unlabelled ones. Generative approaches have thus far been either inflexible, inefficient or non-scalable. We show that deep generative models and approximate Bayesian inference exploiting recent advances in variational methods can be used to provide significant improvements, making generative approaches highly competitive for semi-supervised learning.", "We explore an original strategy for building deep networks, based on stacking layers of denoising autoencoders which are trained locally to denoise corrupted versions of their inputs. The resulting algorithm is a straightforward variation on the stacking of ordinary autoencoders. It is however shown on a benchmark of classification problems to yield significantly lower classification error, thus bridging the performance gap with deep belief networks (DBN), and in several cases surpassing it. Higher level representations learnt in this purely unsupervised fashion also help boost the performance of subsequent SVM classifiers. Qualitative experiments show that, contrary to ordinary autoencoders, denoising autoencoders are able to learn Gabor-like edge detectors from natural image patches and larger stroke detectors from digit images. 
This work clearly establishes the value of using a denoising criterion as a tractable unsupervised objective to guide the learning of useful higher level representations.", "Variational inference has become a widely used method to approximate posteriors in complex latent variables models. However, deriving a variational inference algorithm generally requires significant model-specific analysis. These efforts can hinder and deter us from quickly developing and exploring a variety of models for a problem at hand. In this paper, we present a \"black box\" variational inference algorithm, one that can be quickly applied to many models with little additional derivation. Our method is based on a stochastic optimization of the variational objective where the noisy gradient is computed from Monte Carlo samples from the variational distribution. We develop a number of methods to reduce the variance of the gradient, always maintaining the criterion that we want to avoid difficult model-based derivations. We evaluate our method against the corresponding black box sampling based methods. We find that our method reaches better predictive likelihoods much faster than sampling methods. Finally, we demonstrate that Black Box Variational Inference lets us easily explore a wide space of models by quickly constructing and evaluating several models of longitudinal healthcare data." ] }
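The "principled treatment of a sparsity model" contrasted with the VAE above ultimately changes which entries enter the reconstruction loss. The sketch below shows that difference in its simplest form; the function names are ours, not from the paper.

```python
def mse(x, x_hat):
    """Standard (dense) reconstruction loss: every entry contributes,
    including entries that were never actually observed."""
    return sum((a - b) ** 2 for a, b in zip(x, x_hat)) / len(x)

def masked_mse(x, x_hat, mask):
    """Sparsity-aware reconstruction loss: only observed entries
    (mask == 1) contribute, so missing pixels cannot pull the model
    toward arbitrary fill-in values."""
    num = sum(m * (a - b) ** 2 for a, b, m in zip(x, x_hat, mask))
    den = sum(mask)
    return num / den if den else 0.0
```

Under extreme sparsity the two objectives diverge sharply: a dense loss penalizes the model for not reproducing values it never saw, while the masked loss scores reconstruction only where evidence exists.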
1903.03503
2920987997
A wide range of systems exhibit high dimensional incomplete data. Accurate estimation of the missing data is often desired, and is crucial for many downstream analyses. Many state-of-the-art recovery methods involve supervised learning using datasets containing full observations. In contrast, we focus on unsupervised estimation of missing image data, where no full observations are available - a common situation in practice. Unsupervised imputation methods for images often employ a simple linear subspace to capture correlations between data dimensions, omitting more complex relationships. In this work, we introduce a general probabilistic model that describes sparse high dimensional imaging data as being generated by a deep non-linear embedding. We derive a learning algorithm using a variational approximation based on convolutional neural networks and discuss its relationship to linear imputation models, the variational auto encoder, and deep image priors. We introduce sparsity-aware network building blocks that explicitly model observed and missing data. We analyze proposed sparsity-aware network building blocks, evaluate our method on public domain imaging datasets, and conclude by showing that our method enables imputation in an important real-world problem involving medical images. The code is freely available as part of the |neuron| library at this http URL.
Deep Image Priors (DIP) use a generative neural network as a structural prior, and can be used to synthesize missing data @cite_33 . For each image independently, the method finds network parameters that best explain the observed pixels. However, as parameters are image specific, this method is not amenable to extreme sparsity where image structure is hard to learn from the few observations of that image. Below, we discuss how our method is similar to DIP, how it differs, and perform a comparison in our experiments.
{ "cite_N": [ "@cite_33" ], "mid": [ "2964013315" ], "abstract": [ "Deep convolutional networks have become a popular tool for image generation and restoration. Generally, their excellent performance is imputed to their ability to learn realistic image priors from a large number of example images. In this paper, we show that, on the contrary, the structure of a generator network is sufficient to capture a great deal of low-level image statistics prior to any learning. In order to do so, we show that a randomly-initialized neural network can be used as a handcrafted prior with excellent results in standard inverse problems such as denoising, superresolution, and inpainting. Furthermore, the same prior can be used to invert deep neural representations to diagnose them, and to restore images based on flash-no flash input pairs. Apart from its diverse applications, our approach highlights the inductive bias captured by standard generator network architectures. It also bridges the gap between two very popular families of image restoration methods: learning-based methods using deep convolutional networks and learning-free methods based on handcrafted image priors such as self-similarity." ] }
Several methods define neural networks in other contexts that are not directly related to our task, but still share nomenclature. For example, spatially-sparse CNNs assume a fully observed input whose content itself is sparse, such as thin writing on a black background @cite_45 @cite_31 . Faster sparse convolutions are proposed that operate explicitly on the pixels representing content, with a focus on computational efficiency. Other methods impose sparsity on the parameter space of neural networks to improve various metrics of network efficiency @cite_7 @cite_32 @cite_10 .
{ "cite_N": [ "@cite_7", "@cite_32", "@cite_45", "@cite_31", "@cite_10" ], "mid": [ "2963674932", "", "189277179", "2624273542", "2963000224" ], "abstract": [ "Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems. Also, conventional networks fix the architecture before training starts; as a result, training cannot improve the architecture. To address these limitations, we describe a method to reduce the storage and computation required by neural networks by an order of magnitude without affecting their accuracy by learning only the important connections. Our method prunes redundant connections using a three-step method. First, we train the network to learn which connections are important. Next, we prune the unimportant connections. Finally, we retrain the network to fine tune the weights of the remaining connections. On the ImageNet dataset, our method reduced the number of parameters of AlexNet by a factor of 9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar experiments with VGG-16 found that the total number of parameters can be reduced by 13x, from 138 million to 10.3 million, again with no loss of accuracy.", "", "Convolutional neural networks (CNNs) perform well on problems such as handwriting recognition and image classification. However, the performance of the networks is often limited by budget and time constraints, particularly when trying to train deep networks. Motivated by the problem of online handwriting recognition, we developed a CNN for processing spatially-sparse inputs; a character drawn with a one-pixel wide pen on a high resolution grid looks like a sparse matrix. Taking advantage of the sparsity allowed us more efficiently to train and test large, deep CNNs. On the CASIA-OLHWDB1.1 dataset containing 3755 character classes we get a test error of 3.82 . Although pictures are not sparse, they can be thought of as sparse by adding padding. 
Applying a deep convolutional network using sparsity has resulted in a substantial reduction in test error on the CIFAR small picture datasets: 6.28 on CIFAR-10 and 24.30 for CIFAR-100.", "Convolutional network are the de-facto standard for analysing spatio-temporal data such as images, videos, 3D shapes, etc. Whilst some of this data is naturally dense (for instance, photos), many other data sources are inherently sparse. Examples include pen-strokes forming on a piece of paper, or (colored) 3D point clouds that were obtained using a LiDAR scanner or RGB-D camera. Standard \"dense\" implementations of convolutional networks are very inefficient when applied on such sparse data. We introduce a sparse convolutional operation tailored to processing sparse data that differs from prior work on sparse convolutional networks in that it operates strictly on submanifolds, rather than \"dilating\" the observation with every layer in the network. Our empirical analysis of the resulting submanifold sparse convolutional networks shows that they perform on par with state-of-the-art methods whilst requiring substantially less computation.", "High demand for computation resources severely hinders deployment of large-scale Deep Neural Networks (DNN) in resource constrained devices. In this work, we propose a Structured Sparsity Learning (SSL) method to regularize the structures (i.e., filters, channels, filter shapes, and layer depth) of DNNs. SSL can: (1) learn a compact structure from a bigger DNN to reduce computation cost; (2) obtain a hardware-friendly structured sparsity of DNN to efficiently accelerate the DNN's evaluation. Experimental results show that SSL achieves on average 5.1 × and 3.1 × speedups of convolutional layer computation of AlexNet against CPU and GPU, respectively, with off-the-shelf libraries. These speedups are about twice speedups of non-structured sparsity; (3) regularize the DNN structure to improve classification accuracy. 
The results show that for CIFAR-10, regularization on layer depth reduces a 20-layer Deep Residual Network (ResNet) to 18 layers while improves the accuracy from 91.25 to 92.60 , which is still higher than that of original ResNet with 32 layers. For AlexNet, SSL reduces the error by 1 ." ] }
During the development of this work, several contemporaneous works have tackled related problems. Partial convolutions @cite_30 have been developed for image inpainting, where parts of the desired images are fully observed. A recent method uses adversarial training to guide a generator network to impute missing data, and introduces a discriminator mechanism to enable training @cite_21 . Within the medical image analysis domain, a recent method exploits the similarity of local structure between different acquisition directions to enable subject-specific supervised training and imputation @cite_18 . Outside of imaging-specific methods, recent papers have shown similar developments based on deep generative models for imputing tabular and time series data @cite_28 @cite_12 .
{ "cite_N": [ "@cite_30", "@cite_18", "@cite_28", "@cite_21", "@cite_12" ], "mid": [ "2798365772", "2891378267", "2964010366", "2803403013", "2866415919" ], "abstract": [ "Existing deep learning based image inpainting methods use a standard convolutional network over the corrupted image, using convolutional filter responses conditioned on both valid pixels as well as the substitute values in the masked holes (typically the mean value). This often leads to artifacts such as color discrepancy and blurriness. Post-processing is usually used to reduce such artifacts, but are expensive and may fail. We propose the use of partial convolutions, where the convolution is masked and renormalized to be conditioned on only valid pixels. We further include a mechanism to automatically generate an updated mask for the next layer as part of the forward pass. Our model outperforms other methods for irregular masks. We show qualitative and quantitative comparisons with other methods to validate our approach.", "High resolution magnetic resonance (MR) images are desired in many clinical applications, yet acquiring such data with an adequate signal-to-noise ratio requires a long time, making them costly and susceptible to motion artifacts. A common way to partly achieve this goal is to acquire MR images with good in-plane resolution and poor through-plane resolution (i.e., large slice thickness). For such 2D imaging protocols, aliasing is also introduced in the through-plane direction, and these high-frequency artifacts cannot be removed by conventional interpolation. Super-resolution (SR) algorithms which can reduce aliasing artifacts and improve spatial resolution have previously been reported. State-of-the-art SR methods are mostly learning-based and require external training data consisting of paired low resolution (LR) and high resolution (HR) MR images. However, due to scanner limitations, such training data are often unavailable. 
This paper presents an anti-aliasing (AA) and self super-resolution (SSR) algorithm that needs no external training data. It takes advantage of the fact that the in-plane slices of those MR images contain high frequency information. Our algorithm consists of three steps: (1) We build a self AA (SAA) deep network followed by (2) an SSR deep network, both of which can be applied along different orientations within the original images, and (3) recombine the multiple orientations output from Steps 1 and 2 using Fourier burst accumulation. We perform our SAA+SSR algorithm on a diverse collection of MR data without modification or preprocessing other than N4 inhomogeneity correction, and demonstrate significant improvement compared to competing SSR methods.", "Multivariate time series data in practical applications, such as health care, geoscience, and biology, are characterized by a variety of missing values. In time series prediction and other related tasks, it has been noted that missing values and their missing patterns are often correlated with the target labels, a.k.a., informative missingness. There is very limited work on exploiting the missing patterns for effective imputation and improving prediction performance. In this paper, we develop novel deep learning models, namely GRU-D, as one of the early attempts. GRU-D is based on Gated Recurrent Units (GRU), a state-of-the-art recurrent neural network. It takes two representations of missing patterns, i.e., masking and time interval, and effectively incorporates them into a deep model architecture so that it not only captures the long-term temporal dependencies in time series, but also utilizes the missing patterns to achieve better prediction results. 
Experiments of time series classification tasks on real-world clinical datasets (MIMIC-III, PhysioNet) and synthetic datasets demonstrate that our models achieve state-of-the-art performance and provides useful insights for better understanding and utilization of missing values in time series analysis.", "We propose a novel method for imputing missing data by adapting the well-known Generative Adversarial Nets (GAN) framework. Accordingly, we call our method Generative Adversarial Imputation Nets (GAIN). The generator (G) observes some components of a real data vector, imputes the missing components conditioned on what is actually observed, and outputs a completed vector. The discriminator (D) then takes a completed vector and attempts to determine which components were actually observed and which were imputed. To ensure that D forces G to learn the desired distribution, we provide D with some additional information in the form of a hint vector. The hint reveals to D partial information about the missingness of the original sample, which is used by D to focus its attention on the imputation quality of particular components. This hint ensures that G does in fact learn to generate according to the true data distribution. We tested our method on various datasets and found that GAIN significantly outperforms state-of-the-art imputation methods.", "Variational autoencoders (VAEs), as well as other generative models, have been shown to be efficient and accurate to capture the latent structure of vast amounts of complex high-dimensional data. However, existing VAEs can still not directly handle data that are heterogenous (mixed continuous and discrete) or incomplete (with missing data at random), which is indeed common in real-world applications. In this paper, we propose a general framework to design VAEs, suitable for fitting incomplete heterogenous data. 
The proposed HI-VAE includes likelihood models for real-valued, positive real valued, interval, categorical, ordinal and count data, and allows to estimate (and potentially impute) missing data accurately. Furthermore, HI-VAE presents competitive predictive performance in supervised tasks, outperforming supervised models when trained on incomplete data." ] }
1903.03640
2946808941
The Nvidia GPU architecture has introduced new computing elements such as the , which are special processing units dedicated to perform fast matrix-multiply-accumulate (MMA) operations and accelerate applications. In this work we present the idea of using tensor cores for a different purpose such as the parallel arithmetic reduction problem, and propose a new GPU tensor-core based algorithm as well as analyze its potential performance benefits in comparison to a traditional GPU-based one. The proposed method, encodes the reduction of @math numbers as a set of @math MMA tensor-core operations (for Nvidia's Volta architecture @math ) and takes advantage from the fact that each MMA operation takes just one GPU cycle. When analyzing the cost under a simplified GPU computing model, the result is that the new algorithm manages to reduce a problem of @math numbers in @math steps with a speedup of @math .
The parallel reduction has been implemented using different frameworks. In the case of OpenMP, high-level constructs allow the programmer to express a parallel reduction via OpenMP pragma directives @cite_7 . In the case of GPUs, the parallel reduction has been addressed by Nickolls, Buck and Garland @cite_0 . The authors propose a parallel sum reduction of @math time, where each thread loads one element of the input array and then adds pairs of values in parallel as @math , where @math is the number of threads. The loop in this kernel implicitly builds a summation tree over the input elements; at the end of the loop, the first data slot of each thread-block holds the partial reduction result of one iteration, i.e., @math . Successive kernels are launched until the input is reduced to a single value.
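A host-side Python simulation of this stride-halving kernel may make the summation tree concrete (the slice update stands in for the parallel threads of one block, which on the GPU are separated from the next step by a barrier):

```python
import numpy as np

def block_tree_reduce(data):
    """Simulate the tree-based GPU reduction: at each step, thread t adds
    the element `stride` positions away, halving the number of active
    threads until a single value remains (n assumed a power of two)."""
    x = np.array(data, dtype=np.float64)
    stride = len(x) // 2
    while stride > 0:
        # on the GPU these `stride` additions run as parallel threads;
        # here the whole step is a single vectorized slice operation
        x[:stride] += x[stride:2 * stride]
        stride //= 2
    return float(x[0])
```

For example, `block_tree_reduce(range(8))` performs three halving steps (8 → 4 → 2 → 1) and returns 28.0.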
{ "cite_N": [ "@cite_0", "@cite_7" ], "mid": [ "2028499920", "2293586629" ], "abstract": [ "The advent of multicore CPUs and manycore GPUs means that mainstream processor chips are now parallel systems. Furthermore, their parallelism continues to scale with Moore's law. The challenge is to develop mainstream application software that transparently scales its parallelism to leverage the increasing number of processor cores, much as 3D graphics applications transparently scale their parallelism to manycore GPUs with widely varying numbers of cores.", "As data analytics are growing in importance they are also quickly becoming one of the dominant application domains that require parallel processing. This paper investigates the applicability of OpenMP, the dominant shared-memory parallel programming model in high-performance computing, to the domain of data analytics. We contrast the performance and programmability of key data analytics benchmarks against Phoenix++, a state-of-the-art shared memory map reduce programming system. Our study shows that OpenMP outperforms the Phoenix++ system by a large margin for several benchmarks. In other cases, however, the programming model is lacking support for this application domain." ] }
Harris optimized a tree-based parallel reduction algorithm for CUDA @cite_3 . In this work, the author illustrates seven successive optimizations relevant to the parallel reduction, achieving a final version up to @math times faster than the initial GPU version presented. Harris notes that although the time complexity of the parallel reduction is indeed @math , the algorithm is not cost-efficient if @math threads are used. With the help of Brent's theorem, he shows that using @math threads leads to a cost-efficient parallel algorithm.
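One way to picture the Brent's-theorem argument is a two-phase scheme: each of roughly n/log2(n) virtual threads first sums log2(n) consecutive elements sequentially, and only the partial sums go through the tree reduction. The following host-side sketch shows the phase split (it is not Harris's CUDA code; the chunking policy is an illustrative assumption):

```python
import math

def cost_efficient_reduce(data, chunk=None):
    """Two-phase reduction suggested by Brent's theorem: n/log2(n)
    virtual threads each sum log2(n) consecutive elements sequentially,
    then a tree reduction combines the partial sums."""
    x = list(map(float, data))
    if chunk is None:
        chunk = max(1, int(math.log2(len(x))))
    # Phase 1: sequential partial sums, one per virtual thread.
    partial = [sum(x[i:i + chunk]) for i in range(0, len(x), chunk)]
    # Phase 2: tree reduction over the partials (padded to a power of two).
    m = 1
    while m < len(partial):
        m *= 2
    partial += [0.0] * (m - len(partial))
    stride = m // 2
    while stride > 0:
        for t in range(stride):
            partial[t] += partial[t + stride]
        stride //= 2
    return partial[0]
```

The sequential phase adds O(log n) work per thread without changing the O(log n) depth, which is what brings the total cost down to O(n).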
{ "cite_N": [ "@cite_3" ], "mid": [ "2076039939" ], "abstract": [ "Recently, graphics processors have emerged as a powerful computational platform. A variety of encouraging results, mostly from researchers using GPUs to accelerate scientific computing and visualization applications, have shown that significant speedups can be achieved by applying GPUs to data-parallel computational problems. However, attaining these speedups requires knowledge of GPU programming and architecture.The preceding chapters have described the architecture of modern GPUs and the trends that govern their performance and design. Continuing from the concepts introduced in those chapters, in this chapter we present intuitive mappings of standard computational concepts onto the special-purpose features of GPUs. After presenting the basics, we introduce a simple GPU programming framework and demonstrate the use of the framework in a short sample program." ] }
In the case of distributed computing, two levels of parallelism must occur for the reduction to be completed: (1) local reduction and (2) distributed reduction. The local reduction may be carried out with multi-core CPU or GPU computation as described above. For the distributed part, the results of the different compute nodes must be merged with message-passing tools such as MPI @cite_17 . The result is a hybrid OpenMP-MPI or GPU-MPI reduction for massive-scale systems. Tools such as MapReduce @cite_12 also offer a higher level of abstraction for accomplishing parallel reduction in cluster environments.
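The two levels can be sketched as follows, with plain Python standing in for both the OpenMP/GPU local step and the MPI message exchange (the pairwise merge mimics a recursive-halving reduce; no actual MPI calls are made):

```python
def hybrid_reduce(node_data):
    """Two-level reduction sketch: each node reduces its local shard
    (the OpenMP/GPU step), then partial results are merged pairwise in
    rounds, mimicking a recursive-halving MPI reduce across nodes."""
    partials = [sum(shard) for shard in node_data]   # level 1: local
    while len(partials) > 1:                         # level 2: distributed
        merged = [partials[i] + partials[i + 1]
                  for i in range(0, len(partials) - 1, 2)]
        if len(partials) % 2:
            merged.append(partials[-1])              # odd node waits a round
        partials = merged
    return partials[0]
```

With p nodes the merge phase takes ceil(log2 p) communication rounds, which is why the local reduction dominates the cost for large shards.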
{ "cite_N": [ "@cite_12", "@cite_17" ], "mid": [ "2173213060", "2138782497" ], "abstract": [ "MapReduce is a programming model and an associated implementation for processing and generating large datasets that is amenable to a broad variety of real-world tasks. Users specify the computation in terms of a map and a reduce function, and the underlying runtime system automatically parallelizes the computation across large-scale clusters of machines, handles machine failures, and schedules inter-machine communication to make efficient use of the network and disks. Programmers find the system easy to use: more than ten thousand distinct MapReduce programs have been implemented internally at Google over the past four years, and an average of one hundred thousand MapReduce jobs are executed on Google's clusters every day, processing a total of more than twenty petabytes of data per day.", "Today most systems in high-performance computing (HPC) feature a hierarchical hardware design: Shared memory nodes with several multi-core CPUs are connected via a network infrastructure. Parallel programming must combine distributed memory parallelization on the node interconnect with shared memory parallelization inside each node. We describe potentials and challenges of the dominant programming models on hierarchically structured hardware: Pure MPI (Message Passing Interface), pure OpenMP (with distributed shared memory extensions) and hybrid MPI+OpenMP in several ?avors. We pinpoint cases where a hybrid programming model can indeed be the superior solution because of reduced communication needs and memory consumption, or improved load balance. Furthermore we show that machine topology has a signi?cant impact on performance for all parallelization strategies and that topology awareness should be built into all applications in the future. 
Finally we give an outlook on possible standardization goals and extensions that could make hybrid programming easier to do with performance in mind." ] }
The most recent and relevant work on CUDA GPU tensor core programming is that of Markidis @cite_13 , who studied current approaches to programming NVIDIA tensor cores, as well as their performance and the precision loss due to computation in mixed precision. The authors show that NVIDIA CUDA provides three ways of programming the matrix-multiply-accumulate (MMA): the CUDA Warp MMA (WMMA) API, CUTLASS, and cuBLAS GEMM. Tensor core programming is analyzed in terms of programmability, performance, and precision. The authors report that the maximum performance was obtained with the cuBLAS GEMM implementation, reaching 83 @math in their test environment, approximately @math higher in TOPS/Watt (tensor operations per second per watt).
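The paper's central idea of encoding an arithmetic reduction as MMA operations can be illustrated with a numpy stand-in for D = A·B + C on 4x4 tiles, using all-ones operand matrices to turn each fused multiply-accumulate into a block summation (an illustration of the principle only, not the paper's exact encoding or the CUDA WMMA API):

```python
import numpy as np

def mma(a, b, c):
    """Tensor-core-style fused matrix-multiply-accumulate: D = A @ B + C."""
    return a @ b + c

def reduce_with_mma(data):
    """Sum numbers using only 4x4 MMA operations: loading 16 values per
    tile and multiplying by all-ones matrices accumulates block sums."""
    x = np.asarray(data, dtype=np.float32)
    assert x.size % 16 == 0                  # 16 values consumed per tile
    ones = np.ones((4, 4), dtype=np.float32)
    acc = np.zeros((4, 4), dtype=np.float32)
    for tile in x.reshape(-1, 4, 4):
        # each row of `ones @ tile` holds the column sums of the tile
        acc = mma(ones, tile, acc)
    # one final MMA collapses the accumulated column sums into every entry
    total = mma(acc, ones, np.zeros((4, 4), dtype=np.float32))
    return float(total[0, 0])
```

Because each MMA consumes a whole tile of inputs in one operation, the number of steps shrinks relative to a pairwise tree, which is the source of the speedup the paper analyzes.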
{ "cite_N": [ "@cite_13" ], "mid": [ "2791673912" ], "abstract": [ "The NVIDIA Volta GPU microarchitecture introduces a specialized unit, called \"Tensor Core\" that performs one matrix-multiply-and-accumulate on 4x4 matrices per clock cycle. The NVIDIA Tesla V100 accelerator, featuring the Volta microarchitecture, provides 640 Tensor Cores with a theoretical peak performance of 125 Tflops s in mixed precision. In this paper, we investigate current approaches to program NVIDIA Tensor Cores, their performances and the precision loss due to computation in mixed precision. Currently, NVIDIA provides three different ways of programming matrix-multiply-and-accumulate on Tensor Cores: the CUDA Warp Matrix Multiply Accumulate (WMMA) API, CUTLASS, a templated library based on WMMA, and cuBLAS GEMM. After experimenting with different approaches, we found that NVIDIA Tensor Cores can deliver up to 83 Tflops s in mixed precision on a Tesla V100 GPU, seven and three times the performance in single and half precision respectively. A WMMA implementation of batched GEMM reaches a performance of 4 Tflops s. While precision loss due to matrix multiplication with half precision input might be critical in many HPC applications, it can be considerably reduced at the cost of increased computation. Our results indicate that HPC applications using matrix multiplications can strongly benefit from using of NVIDIA Tensor Cores." ] }
1903.03348
2921879898
Deep learning has been extended to a number of new domains with critical success, though some traditional orienteering problems such as the Travelling Salesman Problem (TSP) and its variants are not commonly solved using such techniques. Deep neural networks (DNNs) are a potentially promising and under-explored solution to solve these problems due to their powerful function approximation abilities, and their fast feed-forward computation. In this paper, we outline a method for converting an orienteering problem into a classification problem, and design a customised multi-layer deep learning network to approximate traditional optimisation solutions to this problem. We test the performance of the network on a real-world parking violation dataset, and conduct a generic study that empirically shows the critical architectural components that affect network performance for this problem.
Additionally, there are numerous recent works that effectively combine optimisation with deep learning. Fischetti and Jo modelled deep neural networks as a 0-1 mixed integer linear program @cite_2 . The authors of @cite_3 used a deep neural net to learn the structure of a combinatorial problem, noting that such research is still at an early stage. Our work makes a small contribution to this area.
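The 0-1 MILP view of @cite_2 rests on the standard big-M encoding of a ReLU y = max(x, 0) with a binary indicator z selecting the active branch. A small feasibility checker (the constant M and the tolerance are illustrative assumptions, not values from the paper) shows how z forces y onto the correct piece:

```python
def relu_feasible(x, y, z, M=100.0, eps=1e-9):
    """Check the big-M constraint set that encodes y = max(x, 0):
    y >= x, y >= 0, y <= x + M*(1 - z), y <= M*z, z binary.
    The feasible (x, y, z) triples have y equal to the ReLU output
    whenever |x| <= M."""
    return (z in (0, 1)
            and y >= x - eps                 # y >= x
            and y >= -eps                    # y >= 0
            and y <= x + M * (1 - z) + eps   # z = 1 pins y to x
            and y <= M * z + eps)            # z = 0 pins y to 0
```

For x = 3 only (y, z) = (3, 1) is feasible, and for x = -2 only (0, 0) is; stacking one such block per ReLU yields the 0-1 MILP model of the whole network.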
{ "cite_N": [ "@cite_3", "@cite_2" ], "mid": [ "2805983332", "2777012514" ], "abstract": [ "Deep Neural Networks (DNNs) have been shaking the AI scene, for their ability to excel at Machine Learning tasks without relying on complex, hand-crafted, features. Here, we probe whether a DNN can learn how to construct solutions of a CSP, without any explicit symbolic information about the problem constraints. We train a DNN to extend a feasible solution by making a single, globally consistent, variable assignment. The training is done over intermediate steps of the construction of feasible solutions. From a scientific standpoint, we are interested in whether a DNN can learn the structure of a combinatorial problem, even when trained on (arbitrarily chosen) construction sequences of feasible solutions. In practice, the network could also be used to guide a search process, e.g. to take into account (soft) constraints that are implicit in past solutions or hard to capture in a traditional declarative model. This research line is still at an early stage, and a number of complex issues remain open. Nevertheless, we already have intriguing results on the classical Partial Latin Square and N-Queen completion problems.", "Deep Neural Networks (DNNs) are very popular these days, and are the subject of a very intense investigation. A DNN is made by layers of internal units (or neurons), each of which computes an affine combination of the output of the units in the previous layer, applies a nonlinear operator, and outputs the corresponding value (also known as activation). A commonly-used nonlinear operator is the so-called rectified linear unit (ReLU), whose output is just the maximum between its input value and zero. 
In this (and other similar cases like max pooling, where the max operation involves more than one input value), one can model the DNN as a 0-1 Mixed Integer Linear Program (0-1 MILP) where the continuous variables correspond to the output values of each unit, and a binary variable is associated with each ReLU to model its yes no nature. In this paper we discuss the peculiarity of this kind of 0-1 MILP models, and describe an effective bound-tightening technique intended to ease its solution. We also present possible applications of the 0-1 MILP model arising in feature visualization and in the construction of adversarial examples. Preliminary computational results are reported, aimed at investigating (on small DNNs) the computational performance of a state-of-the-art MILP solver when applied to a known test case, namely, hand-written digit recognition." ] }
1903.03410
2964314183
The rising number of IoT devices is accelerating the research on new solutions that will be able to efficiently deal with unreliable connectivity in highly dynamic computing applications. To improve the overall performance in IoT applications, there are multiple communication solutions available, either proprietary or open source, all of which satisfy different communication requirements. Most commonly, for this kind of communication, developers choose REST HTTP protocol as a result of its ease of use and compatibility with the existing computing infrastructure. In applications where mobility and unreliable connectivity play a significant role, ensuring a reliable exchange of data with the stateless REST HTTP protocol completely depends on the developer itself. This often means resending multiple request messages when the connection fails, constantly trying to access the service until the connection reestablishes. In order to alleviate this problem, in this paper, we combine REST HTTP with random linear network coding (RLNC) to reduce the number of additional retransmissions. We show how using RLNC with REST HTTP requests can decrease the reconnection time by reducing the additional packet retransmissions in unreliable highly dynamic scenarios.
Handling dynamic mobile scenarios has been one of the key issues for many real-time IoT-based systems. In @cite_12 , the authors explain the limitations of cloud computing solutions in handling mobility issues in these kinds of systems. As a solution, they propose a framework that combines cloud computing with computing closer to the end devices in wireless IoT systems. The advantages of fog computing in different dynamic IoT application scenarios have also been detailed in @cite_8 and @cite_13 . While @cite_8 offers a more general overview of these advantages, @cite_13 focuses on a specific scenario involving communication between smart vehicles and their fog computing nodes positioned at base stations.
{ "cite_N": [ "@cite_13", "@cite_12", "@cite_8" ], "mid": [ "2891546119", "2598890134", "2472333518" ], "abstract": [ "There are several use cases that claim the need for a connected car. Among them there is the need for connectivity between vehicles and information sources, or V2V and V2X exchanges for accident prevention. In order to cope with the need for novel applications running on top of an interconnected network, the concept of fog computing appears as a realistic solution for both intra-car and inter-car data processing and decision making. This paper describes the proposed architecture and experimental evaluation of an innovative proof-of-concept (PoC) for a connected car, modeled with YANG, which can be remotely controlled using SDN NFV and fog computing technologies. As an example, the remote control of the car might be based on a service application running on a fog node, which can be located close to a road side unit (RSU). We also propose a fog architecture in order to enable cooperative perception between connected cars. Finally, the performance evaluation uses a RESTCONF server installed in a Raspberry Pi aboard of a small car. This server is responsible for the sensors and actuators of the car and allows for its remote control from a user terminal (e.g., a smartphone, tablet, or laptop) and through the fog node, running a control application as a service.", "Recently, big data analytics has received important attention in a variety of application domains including business, finance, space science, healthcare, telecommunication and Internet of Things (IoT). Among these areas, IoT is considered as an important platform in bringing people, processes, data and things objects together in order to enhance the quality of our everyday lives. 
However, the key challenges are how to effectively extract useful features from the massive amount of heterogeneous data generated by resource-constrained IoT devices in order to provide real-time information and feedback to the end-users, and how to utilize this data-aware intelligence in enhancing the performance of wireless IoT networks. Although there are parallel advances in cloud computing and edge computing for addressing some issues in data analytics, they have their own benefits and limitations. The convergence of these two computing paradigms, i.e., massive virtually shared pool of computing and storage resources from the cloud and real-time data processing by edge computing, could effectively enable live data analytics in wireless IoT networks. In this regard, we propose a novel framework for coordinated processing between edge and cloud computing processing by integrating advantages from both the platforms. The proposed framework can exploit the network-wide knowledge and historical information available at the cloud center to guide edge computing units towards satisfying various performance requirements of heterogeneous wireless IoT networks. Starting with the main features, key enablers and the challenges of big data analytics, we provide various synergies and distinctions between cloud and edge processing. More importantly, we identify and describe the potential key enablers for the proposed edge-cloud collaborative framework, the associated key challenges and some interesting future research directions.", "Fog is an emergent architecture for computing, storage, control, and networking that distributes these services closer to end users along the cloud-to-things continuum. It covers both mobile and wireline scenarios, traverses across hardware and software, resides on network edge but also over access networks and among end users, and includes both data plane and control plane. 
As an architecture, it supports a growing variety of applications, including those in the Internet of Things (IoT), fifth-generation (5G) wireless systems, and embedded artificial intelligence (AI). This survey paper summarizes the opportunities and challenges of fog, focusing primarily in the networking context of IoT." ] }
1903.03410
2964314183
The rising number of IoT devices is accelerating the research on new solutions that will be able to efficiently deal with unreliable connectivity in highly dynamic computing applications. To improve the overall performance in IoT applications, there are multiple communication solutions available, either proprietary or open source, all of which satisfy different communication requirements. Most commonly, for this kind of communication, developers choose REST HTTP protocol as a result of its ease of use and compatibility with the existing computing infrastructure. In applications where mobility and unreliable connectivity play a significant role, ensuring a reliable exchange of data with the stateless REST HTTP protocol completely depends on the developer itself. This often means resending multiple request messages when the connection fails, constantly trying to access the service until the connection reestablishes. In order to alleviate this problem, in this paper, we combine REST HTTP with random linear network coding (RLNC) to reduce the number of additional retransmissions. We show how using RLNC with REST HTTP requests can decrease the reconnection time by reducing the additional packet retransmissions in unreliable highly dynamic scenarios.
However, even with the improvements gained with fog-based system architectures, the issue of intermittent connections in highly dynamic IoT applications, and the disruptions that follow from them, still leaves many open questions. This has led to many different research efforts to improve these solutions. In @cite_1 the authors approach the problem by developing a handover mechanism for mobility support in IoT-fog systems, tested in a health monitoring application. The handover procedure has also been optimized for another fog-based framework that tackles the highly dynamic scenario of connected vehicles in @cite_7 . Besides handover optimization, the choice of the application-layer protocol has also been a subject of research when tackling the consequences of unreliable connections in these kinds of solutions. In @cite_13 the authors use a fog-based solution and RESTCONF, an HTTP-based protocol, for smart-vehicle communication and data computations. In @cite_3 the authors present disruption-tolerant RESTful support, tested with both HTTP and CoAP. Their main goal was to improve communication in a dynamic scenario where many devices are prone to disconnections while moving. The idea of improving communication by adapting REST can be taken further, this time by using network coding.
{ "cite_N": [ "@cite_13", "@cite_3", "@cite_1", "@cite_7" ], "mid": [ "2891546119", "2523856151", "2808659436", "2912804003" ], "abstract": [ "There are several use cases that claim the need for a connected car. Among them there is the need for connectivity between vehicles and information sources, or V2V and V2X exchanges for accident prevention. In order to cope with the need for novel applications running on top of an interconnected network, the concept of fog computing appears as a realistic solution for both intra-car and inter-car data processing and decision making. This paper describes the proposed architecture and experimental evaluation of an innovative proof-of-concept (PoC) for a connected car, modeled with YANG, which can be remotely controlled using SDN NFV and fog computing technologies. As an example, the remote control of the car might be based on a service application running on a fog node, which can be located close to a road side unit (RSU). We also propose a fog architecture in order to enable cooperative perception between connected cars. Finally, the performance evaluation uses a RESTCONF server installed in a Raspberry Pi aboard of a small car. This server is responsible for the sensors and actuators of the car and allows for its remote control from a user terminal (e.g., a smartphone, tablet, or laptop) and through the fog node, running a control application as a service.", "The Web of Things (WoT) extends the Internet of Things (IoT) considering that each physical object can be accessed and controlled using Web-based languages and protocols. However, due to the mobility of physical objects and to the short radio range of the wireless interfaces they are equipped with, frequent and unpredictable connectivity disruptions may occur between the physical objects and the Web clients used to control and access these objects. 
This paper presents a disruption-tolerant RESTful support for the WoT, in which resources offered by physical objects are identified by URIs and accessed through stateless services. Service requests and responses are forwarded using the store-carry-and-forward principle, and can be cached by intermediate nodes. A complete service invocation model is provided, allowing to perform unicast, anycast, multicast and broadcast service invocations either using HTTP or CoAP, which makes it particularly suited for the WoT. This disruption-tolerant support is illustrated by a scenario in the context of agricultural robotics.", "Handover mechanism for mobility support in a remote real-time streaming Internet-of-Things (IoT) system was proposed in this paper. The handover mechanism serves to keep the connection between sensor nodes and a gateway with a low latency. The handover mechanism also attentively considers oscillating nodes which often occur in many streaming IoT systems. By leveraging the strategic position of smart gateways and Fog computing in a real-time streaming IoT system, sensor nodes’ loads were alleviated whereas advanced services, like push notification and local data storage, were provided. The paper discussed and analyzed metrics for the handover mechanism based on Wi-Fi. In addition, a complete remote real-time health monitoring IoT system was implemented for experiments. The results from evaluating our mobility handover mechanism for mobility support shows that the latency of switching from one gateway to another is 10 –50 less than other state-of-the-art mobility support systems. The results show that the proposed handover mechanism is a very promising approach for mobility support in both Fog computing and IoT systems.", "Driven by the increasing number of connected vehicles and related services, powerful communication and computation capabilities are needed for vehicular communications, especially for real-time and safety-related applications. 
A cellular network consists of radio access technologies, including the current long-term evolution (LTE), the LTE advanced, and the forthcoming 5th generation mobile communication systems. It covers large areas and has the ability to provide high data rate and low latency communication services to mobile users. It is considered the most promising access technology to support real-time vehicular communications. Meanwhile, fog is an emerging architecture for computing, storage, and networking, in which fog nodes can be deployed at base stations to deliver cloud services close to vehicular users. In fog computing-enabled cellular networks, mobility is one of the most critical challenges for vehicular communications to maintain the service continuity and to satisfy the stringent service requirements, especially when the computing and storage resources are limited at the fog nodes. Service migration, relocating services from one fog server to another in a dynamic manner, has been proposed as an effective solution to the mobility problem. To support service migration, both computation and communication techniques need to be considered. Given the importance of protocol design to support the mobility of the vehicles and maintain high network performance, in this paper, we investigate the service migration in the fog computing-enabled cellular networks. We propose a quality-of-service aware scheme based on the existing handover procedures to support the real-time vehicular services. A case study based on a realistic vehicle mobility pattern for Luxembourg scenario is carried out, where the proposed scheme, as well as the benchmarks, are compared by analyzing latency and reliability as well as migration cost." ] }
1903.03410
2964314183
The rising number of IoT devices is accelerating the research on new solutions that will be able to efficiently deal with unreliable connectivity in highly dynamic computing applications. To improve the overall performance in IoT applications, there are multiple communication solutions available, either proprietary or open source, all of which satisfy different communication requirements. Most commonly, for this kind of communication, developers choose REST HTTP protocol as a result of its ease of use and compatibility with the existing computing infrastructure. In applications where mobility and unreliable connectivity play a significant role, ensuring a reliable exchange of data with the stateless REST HTTP protocol completely depends on the developer itself. This often means resending multiple request messages when the connection fails, constantly trying to access the service until the connection reestablishes. In order to alleviate this problem, in this paper, we combine REST HTTP with random linear network coding (RLNC) to reduce the number of additional retransmissions. We show how using RLNC with REST HTTP requests can decrease the reconnection time by reducing the additional packet retransmissions in unreliable highly dynamic scenarios.
Network coding (NC) dates back to 2000 @cite_11 ; it is a technique that allows network systems to combine several native messages into one coded message in order to improve bandwidth utilization. In @cite_10 the authors use a network-coded protocol operating between the network and transport layers in a wireless network. The results show that, by using RLNC, this protocol was able to recover from packet losses. To improve the performance of dynamic IoT scenarios, an interesting path is the combination of network coding and fog-based computing. Possible applications of NC in IoT and fog-based systems have been described in @cite_0 , with promising results reported in @cite_9 , where the authors used NC to improve the efficiency of data communication protocols in a fog computing wireless sensor environment. In this paper we explore the combination of NC and the REST HTTP protocol in an IoT-to-fog communication scenario, since REST HTTP is still the application-layer protocol of choice for developers according to multiple research efforts, such as the one reported in @cite_6 .
{ "cite_N": [ "@cite_9", "@cite_6", "@cite_0", "@cite_10", "@cite_11" ], "mid": [ "2769633942", "2795655063", "2888212899", "2026688187", "2105831729" ], "abstract": [ "A communication protocol for fog computing should be efficient, lightweight and customizable. In this work we focus in a communication protocol for fog nodes composed of wireless sensors, which are spatially distributed autonomous sensors monitoring physical or environmental conditions. Problems with data congestion and limited physical resources are common in these networks. For the optimization of data flow, it is important to apply techniques that reduce the transmitted data. We use the network coding technique to demonstrate through experiments the degree of efficiency of data transmission optimization protocols. The experiments were performed through a wireless sensors programming framework composed of TinyOS operating system, NesC programming language and TOSSIM simulator. In addition, we use the Python programming language to simulate the wireless sensor network topology. The results obtained demonstrate a better performance (50 up to 60 ) when the network coding technique is applied to the data communication protocol.", "The fast increment in the number of IoT (Internet of Things) devices is accelerating the research on new solutions to make cloud services scalable. In this context, the novel concept of fog computing as well as the combined fog-to-cloud computing paradigm is becoming essential to decentralize the cloud, while bringing the services closer to the end-system. This article surveys e application layer communication protocols to fulfill the IoT communication requirements, and their potential for implementation in fog- and cloud-based IoT systems. To this end, the article first briefly presents potential protocol candidates, including request-reply and publish-subscribe protocols. 
After that, the article surveys these protocols based on their main characteristics, as well as the main performance issues, including latency, energy consumption, and network throughput. These findings are thereafter used to place the protocols in each segment of the system (IoT, fog, cloud), and thus opens up the discussion on their choice, interoperability, and wider system integration. The survey is expected to be useful to system architects and protocol designers when choosing the communication protocols in an integrated IoT-to-fog-to-cloud system architecture.", "", "This work studies the potential and impact of the FRANC network coding protocol for delivering high quality Dynamic Adaptive Streaming over HTTP (DASH) in wireless networks. Although DASH aims to tailor the video quality rate based on the available throughput to the destination, it relies on the TCP protocol for reliability in data delivery. TCP is known to drop its throughput performance by several fold in the presence of even 1 or 2 packet losses, which are common in wireless systems. This will force DASH to settle at a much lower video resolution, thus reducing the user's quality of experience. We show that the use of FRANC, an adaptive network coding protocol that provides both low delay and high throughput to upper layers, as a reliability mechanism for TCP can significantly increase video quality. As part of our analysis, we benchmark the performance of various TCP versions, including CUBIC, Reno, Veno, Vegas, and Westwood+, under different packet loss rates in wireless systems using a real testbed with Raspberry Pi devices. Our goal was to choose the most promising TCP version in terms of delay performance, in this case TCP Reno, and make a fair comparison between TCP running alone and using FRANC underneath for reliability. Our demonstrator with DASH in Raspberry Pi devices using the DASH benchmark, shows that the video rate delivered is 4× higher when using FRANC. 
Even in harsh packet loss conditions, FRANC is able to deliver higher data rates (increase 4×), while experiencing significantly shorter (decrease 10×) video lags.", "We introduce a new class of problems called network information flow which is inspired by computer network applications. Consider a point-to-point communication network on which a number of information sources are to be multicast to certain sets of destinations. We assume that the information sources are mutually independent. The problem is to characterize the admissible coding rate region. This model subsumes all previously studied models along the same line. We study the problem with one information source, and we have obtained a simple characterization of the admissible coding rate region. Our result can be regarded as the max-flow min-cut theorem for network information flow. Contrary to one's intuition, our work reveals that it is in general not optimal to regard the information to be multicast as a \"fluid\" which can simply be routed or replicated. Rather, by employing coding at the nodes, which we refer to as network coding, bandwidth can in general be saved. This finding may have significant impact on future design of switching systems." ] }
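As a rough illustration of the RLNC principle discussed above (random coefficients mixing k source packets so that any k linearly independent coded packets suffice to decode), here is a toy encoder/decoder. This is a sketch, not the cited protocol: for simplicity it works over the prime field GF(257), whereas practical RLNC implementations typically use GF(2^8), and the seed and packet sizes are illustrative.

```python
import random

P = 257  # prime field modulus; a stand-in for the GF(2^8) used in practice

def encode(packets, n_coded, seed=7):
    """Mix k equal-length source packets (symbols in [0, P)) into
    n_coded packets; each coded packet carries its random coefficient
    vector so a receiver can decode from any k independent ones."""
    rng = random.Random(seed)
    k, length = len(packets), len(packets[0])
    coded = []
    for _ in range(n_coded):
        coeffs = [rng.randrange(P) for _ in range(k)]
        payload = [sum(c * pkt[i] for c, pkt in zip(coeffs, packets)) % P
                   for i in range(length)]
        coded.append((coeffs, payload))
    return coded

def decode(coded, k):
    """Recover the k source packets by Gauss-Jordan elimination mod P
    on the received [coefficients | payload] rows."""
    rows = [list(c) + list(p) for c, p in coded]
    n = len(rows)
    for col in range(k):
        piv = next(r for r in range(col, n) if rows[r][col] % P)
        rows[col], rows[piv] = rows[piv], rows[col]
        inv = pow(rows[col][col], P - 2, P)   # modular inverse via Fermat
        rows[col] = [v * inv % P for v in rows[col]]
        for r in range(n):
            if r != col and rows[r][col]:
                f = rows[r][col]
                rows[r] = [(a - f * b) % P for a, b in zip(rows[r], rows[col])]
    return [row[k:] for row in rows[:k]]

src = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]
coded = encode(src, n_coded=5)        # 2 extra coded packets absorb losses
assert decode(coded[1:], k=3) == src  # first packet lost, still decodable
```

This mirrors the motivation in the abstract: instead of retransmitting one specific lost REST HTTP payload, the sender streams coded packets and the receiver needs only enough of them, not particular ones.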
1903.03358
2954269555
An eye-tracking study of 18 developers reading and summarizing Java methods is presented. The developers provide a written summary for methods assigned to them. In total, 63 methods are used from five different systems. Previous studies on this topic use only short methods presented in isolation usually as images. In contrast, this work presents the study in the Eclipse IDE allowing access to all the source code in the system. The developer can navigate via scrolling and switching files while writing the summary. New eye-tracking infrastructure allows for this improvement in the study environment. Data collected includes eye gazes on source code, written summaries, and time to complete each summary. Unlike prior work that concluded developers focus on the signature the most, these results indicate that they tend to focus on the method body more than the signature. Moreover, both experts and novices tend to revisit control flow terms rather than reading them for a long period. They also spend a significant amount of gaze time and have higher gaze visits when they read call terms. Experts tend to revisit the body of the method significantly more frequently than its signature as the size of the method increases. Moreover, experts tend to write their summaries from source code lines that they read the most.
@cite_7 observed that programmers tend to first read through the entire code snippet and then focus on some parts. Furthermore, more time spent thoroughly reading the code increases the efficiency of finding the defect in the code. This correlation was later confirmed by @cite_34 , which states that scan time plays an important role in the defect detection time and the visual effort required to review source code. Moreover, experts tend to focus on defects more than novices do @cite_34 . @cite_43 found that experts read code less linearly than novices did. Bednarik and Tukiainen concluded that low-experience programmers repeatedly fixated on the same code sections, while experienced programmers targeted the output of the code, such as evaluation expressions @cite_28 @cite_37 .
{ "cite_N": [ "@cite_37", "@cite_7", "@cite_28", "@cite_43", "@cite_34" ], "mid": [ "1978755715", "2013788150", "2031043538", "2128966215", "" ], "abstract": [ "The challenges in empirical eye-tracking studies of usability or complex problem solving include 1) how to effectively analyze the eye-tracking data, and 2) how to interpret and relate the resulting measures to the user cognitive processing. We conducted a reanalysis of eye-tracking data from a recent study that involved programmers of two experience groups debugging a program with the help of multiple representations. The proportional fixation time on each area of interest (AOI), frequency of visual attention switches between the areas, and the type of switch were investigated during five consequential phases of ten minutes of debugging. We increased the granularity of the focus on the user processing several times, allowing us to construct a better picture of the process. In addition, plotting the areas of interest in time supported a visual analysis and comparison with the quantitative data. We found repetitive patterns of visual attention that were associated with less experience in programming and lower performance. We also discovered that at the beginning of the process programmers made use of both the code and visualization while frequently switching between them. At a later stage of debugging, more experienced programmers began to increasingly integrate also the output of the program and employed a high-frequency of visual attention switching to coordinate the three representations.", "This paper proposes to use eye movements to characterize the performance of individuals in reviewing source code of computer programs. We first present an integrated environment to measure and record the eye movements of the code reviewers. Based on the fixation data, the environment computes the line number of the source code that the reviewer is currently looking at. 
The environment can also record and play back how the eyes moved during the review process. We conducted an experiment to analyze 30 review processes (6 programs, 5 subjects) using the environment. As a result, we have identified a particular pattern, called scan, in the subjects' eye movements. Quantitative analysis showed that reviewers who did not spend enough time for the scan tend to take more time for finding defects.", "Program comprehension processes have previously been studied using methodologies such as think-aloud or comprehension summary analysis. Eye-tracking, however, has not been previously widely applied to studies of behavioral aspects of programming. We present a study in which program comprehension was investigated with a help of a remote eye-tracker. Novice and intermediate programmers used a program visualization tool to aid their comprehension while the location of fixations, fixation durations and attention switching between the areas of interest were recorded.In this paper 1) we propose an approach how to investigate trends in repeated-measures sparse-data of few cases captured by an eye-tracker and 2) using this technique, we characterize the development of program comprehension strategies during dynamic program visualization with help of eye-movement data.", "Code reading is an important skill in programming. Inspired by the linearity that people exhibit while natural language text reading, we designed local and global gaze-based measures to characterize linearity (left-to-right and top-to-bottom) in reading source code. Unlike natural language text, source code is executable and requires a specific reading approach. To validate these measures, we compared the eye movements of novice and expert programmers who were asked to read and comprehend short snippets of natural language text and Java programs. Our results show that novices read source code less linearly than natural language text. 
Moreover, experts read code less linearly than novices. These findings indicate that there are specific differences between reading natural language and source code, and suggest that non-linear reading skills increase with expertise. We discuss the implications for practitioners and educators.", "" ] }
1903.03358
2954269555
An eye-tracking study of 18 developers reading and summarizing Java methods is presented. The developers provide a written summary for methods assigned to them. In total, 63 methods are used from five different systems. Previous studies on this topic use only short methods presented in isolation usually as images. In contrast, this work presents the study in the Eclipse IDE allowing access to all the source code in the system. The developer can navigate via scrolling and switching files while writing the summary. New eye-tracking infrastructure allows for this improvement in the study environment. Data collected includes eye gazes on source code, written summaries, and time to complete each summary. Unlike prior work that concluded developers focus on the signature the most, these results indicate that they tend to focus on the method body more than the signature. Moreover, both experts and novices tend to revisit control flow terms rather than reading them for a long period. They also spend a significant amount of gaze time and have higher gaze visits when they read call terms. Experts tend to revisit the body of the method significantly more frequently than its signature as the size of the method increases. Moreover, experts tend to write their summaries from source code lines that they read the most.
The authors of @cite_11 @cite_18 used iTrace to conduct an eye-tracking study on three bug-fixing tasks. They found that developers focus on small parts of methods that are often related to data flow. When it comes to switches between methods, they found that developers rarely follow call-graph links and mostly switch to elements in close proximity @cite_11 @cite_18 .
{ "cite_N": [ "@cite_18", "@cite_11" ], "mid": [ "2324156403", "1999331443" ], "abstract": [ "Study findings based on eye-tracking and user interaction monitoring.Insights into the detailed navigation behavior of 22 developers.An approach to automatically capture the source code elements a developer looks at.A discussion on the value the data and the findings offer for developer support. The more we know about software developers detailed navigation behavior for change tasks, the better we are able to provide effective tool support. Currently, most empirical studies on developers performing change tasks are, however, limited to very small code snippets or limited by the granularity and detail of the data collected on developers navigation behavior. In our research, we extend this work by combining user interaction monitoring to gather interaction context the code elements a developer selects and edits with eye-tracking to gather more detailed and fine-granular gaze context-code elements a developer looked at. In a study with 12 professional and 10 student developers we gathered interaction and gaze contexts from participants working on three change tasks of an open source system. Based on an analysis of the data we found, amongst other results, that gaze context captures different aspects than interaction context and that developers only read small portions of code elements. We further explore the potential of the more detailed and fine-granular data by examining the use of the captured change task context to predict perceived task difficulty and to provide better and more fine-grained navigation recommendations. We discuss our findings and their implications for better tool support.", "What are software developers doing during a change task? While an answer to this question opens countless opportunities to support developers in their work, only little is known about developers' detailed navigation behavior for realistic change tasks. 
Most empirical studies on developers performing change tasks are limited to very small code snippets or are limited by the granularity or the detail of the data collected for the study. In our research, we try to overcome these limitations by combining user interaction monitoring with very fine granular eye-tracking data that is automatically linked to the underlying source code entities in the IDE. In a study with 12 professional and 10 student developers working on three change tasks from an open source system, we used our approach to investigate the detailed navigation of developers for realistic change tasks. The results of our study show, amongst others, that the eye tracking data does indeed capture different aspects than user interaction data and that developers focus on only small parts of methods that are often related by data flow. We discuss our findings and their implications for better developer tool support." ] }
1903.03358
2954269555
An eye-tracking study of 18 developers reading and summarizing Java methods is presented. The developers provide a written summary for methods assigned to them. In total, 63 methods are used from five different systems. Previous studies on this topic use only short methods presented in isolation usually as images. In contrast, this work presents the study in the Eclipse IDE allowing access to all the source code in the system. The developer can navigate via scrolling and switching files while writing the summary. New eye-tracking infrastructure allows for this improvement in the study environment. Data collected includes eye gazes on source code, written summaries, and time to complete each summary. Unlike prior work that concluded developers focus on the signature the most, these results indicate that they tend to focus on the method body more than the signature. Moreover, both experts and novices tend to revisit control flow terms rather than reading them for a long period. They also spend a significant amount of gaze time and have higher gaze visits when they read call terms. Experts tend to revisit the body of the method significantly more frequently than its signature as the size of the method increases. Moreover, experts tend to write their summaries from source code lines that they read the most.
With regard to code summarization, techniques have been proposed to automatically generate natural language comments for Java methods @cite_4 , sequences of statements @cite_24 , and formal parameters @cite_25 using NLP. Furthermore, a model has been proposed that identifies the high-level action of a loop by analyzing linguistic and structural clues @cite_5 , together with an approach to automatically generate natural language summaries of object-oriented action units @cite_51 .
{ "cite_N": [ "@cite_4", "@cite_24", "@cite_5", "@cite_51", "@cite_25" ], "mid": [ "2082160726", "2117228548", "2133795027", "2600308295", "2166879716" ], "abstract": [ "Studies have shown that good comments can help programmers quickly understand what a method does, aiding program comprehension and software maintenance. Unfortunately, few software projects adequately comment the code. One way to overcome the lack of human-written summary comments, and guard against obsolete comments, is to automatically generate them. In this paper, we present a novel technique to automatically generate descriptive summary comments for Java methods. Given the signature and body of a method, our automatic comment generator identifies the content for the summary and generates natural language text that summarizes the method's overall actions. According to programmers who judged our generated comments, the summaries are accurate, do not miss important content, and are reasonably concise.", "An important part of the leading comments for a method are the comments for the formal parameters of the method. According to the Java documentation writing guidelines, developers should write a summary of the method's actions followed by comments for each parameter. In this paper, we describe a novel technique to automatically generate descriptive comments for parameters of Java methods. Such generated comments can help alleviate the lack of developer written parameter comments. In addition, they can help a programmer in ensuring that a parameter comment is current with the code. We present heuristics to generate comments that provide a high-level overview of the role of a parameter in a method. We ensure that sufficient context is provided such that a developer can understand the role of the parameter in achieving the computational intent of the method. In the opinion of nine experienced developers, the automatically generated parameter comments for methods are accurate and provide a quick synopsis of the role of the parameter in achieving the desired functionality.", "Some high level algorithmic steps require more than one statement to implement, but are not large enough to be a method on their own. Specifically, many algorithmic steps (e.g., count, compare pairs of elements, find the maximum) are implemented as loop structures, which lack the higher level abstraction of the action being performed, and can negatively affect both human readers and automatic tools. Additionally, in a study of 14,317 projects, we found that less than 20% of loops are documented to help readers. In this paper, we present a novel automatic approach to identify the high level action implemented by a given loop. We leverage the available, large source of high-quality open source projects to mine loop characteristics and develop an action identification model. We use the model and feature vectors extracted from loop code to automatically identify the high level actions implemented by loops. We have evaluated the accuracy of the loop action identification and coverage of the model over 7159 open source programs. The results show great promise for this approach to automatically insert internal comments and provide additional higher level naming for loop actions to be used by tools such as code search.", "Current source code analyses driving software maintenance tools treat methods as either a single unit or a set of individual statements or words. They often leverage method names and any existing internal comments. However, internal comments are rare, and method names do not typically capture the method's multiple high-level algorithmic steps that are too small to be a single method, but require more than one statement to implement. Previous work demonstrated feasibility of identifying high level actions automatically for loops; however, many high level actions remain unaddressed and undocumented, particularly sequences of consecutive statements that are associated with each other primarily by object references. We call these object-related action units. In this paper, we present an approach to automatically generate natural language descriptions of object-related action units within methods. We leverage the available, large source of high-quality open source projects to learn the templates of object-related actions, identify the statement that can represent the main action, and generate natural language descriptions for these actions. Our evaluation study of a set of 100 object-related statement sequences showed promise of our approach to automatically identify the action and arguments and generate natural language descriptions.", "One approach to easing program comprehension is to reduce the amount of code that a developer has to read. Describing the high level abstract algorithmic actions associated with code fragments using succinct natural language phrases potentially enables a newcomer to focus on fewer and more abstract concepts when trying to understand a given method. Unfortunately, such descriptions are typically missing because it is tedious to create them manually. We present an automatic technique for identifying code fragments that implement high level abstractions of actions and expressing them as a natural language description. Our studies of 1000 Java programs indicate that our heuristics for identifying code fragments implementing high level actions are widely applicable. Judgements of our generated descriptions by 15 experienced Java programmers strongly suggest that indeed they view the fragments that we identify as representing high level actions and our synthesized descriptions accurately express the abstraction." ] }
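As a toy illustration of the template-based summary generation described above, the sketch below maps a method name's leading verb onto a natural-language template. The verb-to-template mapping, function name, and templates are invented for illustration; the cited tools derive far richer summaries from static analysis of the full method body.

```python
import re

# Hypothetical verb-to-template mapping (illustrative only; not from the cited work).
TEMPLATES = {
    "get": "Returns the {object}.",
    "set": "Sets the {object}.",
    "is":  "Checks whether the {object} holds.",
    "add": "Adds the given {object}.",
}

def summarize(method_name: str) -> str:
    """Generate a one-line natural-language summary from a camelCase method name."""
    words = re.findall(r"[A-Z]?[a-z]+", method_name)
    verb, rest = words[0].lower(), [w.lower() for w in words[1:]]
    template = TEMPLATES.get(verb, "Performs the '{object}' action.")
    return template.format(object=" ".join(rest) or "value")

print(summarize("getUserName"))  # -> Returns the user name.
print(summarize("setValue"))     # -> Sets the value.
```

Real approaches additionally exploit parameters, local variables, and calls inside the body, which a name-only sketch like this cannot see.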
1903.03358
2954269555
An eye-tracking study of 18 developers reading and summarizing Java methods is presented. The developers provide a written summary for methods assigned to them. In total, 63 methods are used from five different systems. Previous studies on this topic use only short methods presented in isolation usually as images. In contrast, this work presents the study in the Eclipse IDE allowing access to all the source code in the system. The developer can navigate via scrolling and switching files while writing the summary. New eye-tracking infrastructure allows for this improvement in the study environment. Data collected includes eye gazes on source code, written summaries, and time to complete each summary. Unlike prior work that concluded developers focus on the signature the most, these results indicate that they tend to focus on the method body more than the signature. Moreover, both experts and novices tend to revisit control flow terms rather than reading them for a long period. They also spend a significant amount of gaze time and have higher gaze visits when they read call terms. Experts tend to revisit the body of the method significantly more frequently than its signature as the size of the method increases. Moreover, experts tend to write their summaries from source code lines that they read the most.
@cite_15 use method stereotypes @cite_26 and class stereotypes @cite_31 to generate natural language summaries for Java classes. @cite_38 @cite_27 use method stereotypes to generate a standard summary for C++ methods via static analysis. McBurney and McMillan propose generating documentation summaries for Java methods using the call graph @cite_45 . Furthermore, they propose an approach to evaluate a summary using textual similarity of that summary to the source code @cite_29 . @cite_16 investigate the suitability of several text summarization techniques to automatically generate term-based summaries for methods and classes. This was further extended by @cite_46 using a new technique named Hierarchical PAM.
{ "cite_N": [ "@cite_38", "@cite_31", "@cite_26", "@cite_46", "@cite_29", "@cite_27", "@cite_45", "@cite_15", "@cite_16" ], "mid": [ "2131453061", "2064569422", "2125644985", "2018844270", "2037843344", "2767570792", "1970407057", "2081749632", "2133333349" ], "abstract": [ "An approach to automatically generate natural language documentation summaries for C++ methods is presented. The approach uses prior work by the authors on stereotyping methods along with the source code analysis framework srcML. First, each method is automatically assigned a stereotype(s) based on static analysis and a set of heuristics. Then, the approach uses the stereotype information, static analysis, and predefined templates to generate a natural-language summary for each method. This summary is automatically added to the code base as a comment for each method. The predefined templates are designed to produce a generic summary for specific method stereotypes. Static analysis is used to extract internal details about the method (e.g., parameters, local variables, calls, etc.). This information is used to specialize the generated summaries.", "An approach is presented to automatically determine a class's stereotype. The stereotype is based on the frequency and distribution of method stereotypes in the class. Method stereotypes are automatically determined using a defined taxonomy given in previous work. The stereotypes, boundary, control and entity are used as a basis but refined based on an empirical investigation of 21 systems. A number of heuristics, derived from empirical evidence, are used to determine a class's stereotype. For example, the prominence of certain types of methods can indicate a class's main role. The approach is applied to five open source systems and evaluated. The results show that 95% of the classes are stereotyped by the approach. Additionally, developers (via manual inspection) agreed with the approach's results.", "An approach to automatically identify the stereotypes of all the methods in an entire system is presented. A taxonomy for object-oriented class method stereotypes is given that unifies and extends the existing literature to address gaps and deficiencies. Based on this taxonomy, a set of definitions is given and method stereotypes are reverse engineered using lightweight static program analysis. Classification is done solely by programming language structures and idioms, in this case C++. The approach is used to automatically re-document each method by annotating the original source code with the stereotype information. A demonstration of the accuracy and scalability of the approach is given.", "During software evolution a developer must investigate source code to locate then understand the entities that must be modified to complete a change task. To help developers in this task, proposed text summarization based approaches to the automatic generation of class and method summaries, and via a study of four developers, they evaluated source code summaries generated using their techniques. In this paper we propose a new topic modeling based approach to source code summarization, and via a study of 14 developers, we evaluate source code summaries generated using the proposed technique. Our study partially replicates the original study by in that it uses the objects, the instruments, and a subset of the summaries from the original study, but it also expands the original study in that it includes more subjects and new summaries. The results of our study both support the findings of the original and provide new insights into the processes and criteria that developers use to evaluate source code summaries. Based on our results, we suggest future directions for research on source code summarization.", "Source code documentation often contains summaries of source code written by authors. Recently, automatic source code summarization tools have emerged that generate summaries without requiring author intervention. These summaries are designed for readers to be able to understand the high-level concepts of the source code. Unfortunately, there is no agreed upon understanding of what makes up a \"good summary.\" This paper presents an empirical study examining summaries of source code written by authors, readers, and automatic source code summarization tools. This empirical study examines the textual similarity between source code and summaries of source code using Short Text Semantic Similarity metrics. We found that readers use source code in their summaries more than authors do. Additionally, this study finds that accuracy of a human written summary can be estimated by the textual similarity of that summary to the source code.", "Two studies are conducted to evaluate an approach to automatically generate natural language documentation summaries for C++ methods. The documentation approach relies on a method's stereotype information. First, each method is automatically assigned a stereotype(s) based on static analysis and a set of heuristics. Then, the approach uses the stereotype information, static analysis, and predefined templates to generate a natural-language summary documentation for each method. This documentation is automatically added to the code base as a comment for each method. The result of the first study reveals that the generated documentation is accurate, does not include unnecessary information, and does a reasonable job describing what the method does. Based on statistical analysis of the second study, the most important part of the documentation is the short description as it describes the intended behavior of a method.", "A documentation generator is a programming tool that creates documentation for software by analyzing the statements and comments in the software's source code. While many of these tools are manual, in that they require specially-formatted metadata written by programmers, new research has made inroads towards automatic generation of documentation. These approaches work by stitching together keywords from the source code into readable natural language sentences. These approaches have been shown to be effective, but carry a key limitation: the generated documents do not explain the source code's context. They can describe the behavior of a Java method, but not why the method exists or what role it plays in the software. In this paper, we propose a technique that includes this context by analyzing how the Java methods are invoked. In a user study, we found that programmers benefit from our generated documentation because it includes context information.", "Most software engineering tasks require developers to understand parts of the source code. When faced with unfamiliar code, developers often rely on (internal or external) documentation to gain an overall understanding of the code and determine whether it is relevant for the current task. Unfortunately, the documentation is often absent or outdated. This paper presents a technique to automatically generate human readable summaries for Java classes, assuming no documentation exists. The summaries allow developers to understand the main goal and structure of the class. The focus of the summaries is on the content and responsibilities of the classes, rather than their relationships with other classes. The summarization tool determines the class and method stereotypes and uses them, in conjunction with heuristics, to select the information to be included in the summaries. Then it generates the summaries using existing lexicalization tools. A group of programmers judged a set of generated summaries for Java classes and determined that they are readable and understandable, they do not include extraneous information, and, in most cases, they are not missing essential information.", "During maintenance developers cannot read the entire code of large systems. They need a way to get a quick understanding of source code entities (such as, classes, methods, packages, etc.), so they can efficiently identify and then focus on the ones related to their task at hand. Sometimes reading just a method header or a class name does not tell enough about its purpose and meaning, while reading the entire implementation takes too long. We study a solution which mitigates the two approaches, i.e., short and accurate textual descriptions that illustrate the software entities without having to read the details of the implementation. We create such descriptions using techniques from automatic text summarization. The paper presents a study that investigates the suitability of various such techniques for generating source code summaries. The results indicate that a combination of text summarization techniques is most appropriate for source code summarization and that developers generally agree with the summaries produced." ] }
1903.03086
2920658888
This paper addresses a fundamental question of multi-agent knowledge distribution: what information should be sent to whom and when, with the limited resources available to each agent? Communication requirements for multi-agent systems can be rather high when an accurate picture of the environment and the state of other agents must be maintained. To reduce the impact of multi-agent coordination on networked systems, e.g., power and bandwidth, this paper introduces two concepts for partially observable Markov decision processes (POMDPs): 1) action-based constraints which yield constrained-action POMDPs (CA-POMDPs); and 2) soft probabilistic constraint satisfaction for the resulting infinite-horizon controllers. To enable constraint analysis over an infinite horizon, an unconstrained policy is first represented as a Finite State Controller (FSC) and optimized with policy iteration. The FSC representation then allows for a combination of Markov chain Monte Carlo and discrete optimization to improve the probabilistic constraint satisfaction of the controller while minimizing the impact to the value function. Within the CA-POMDP framework we then propose Intelligent Knowledge Distribution (IKD) which yields per-agent policies for distributing knowledge between agents subject to interaction constraints. Finally, the CA-POMDP and IKD concepts are validated using an asset tracking problem where multiple unmanned aerial vehicles (UAVs) with heterogeneous sensors collaborate to localize a ground asset to assist in avoiding unseen obstacles in a disaster area. The IKD model was able to maintain asset tracking through multi-agent communications while only violating soft power and bandwidth constraints 3% of the time, while greedy and naive approaches violated constraints more than 60% of the time.
Capitan's paper @cite_30 is a key point of comparison for analyzing the performance of our approach in a multi-agent coordination setting. However, @cite_30 assumes that point-to-point communications between nodes are instantaneous and cost-free, while applying a collaboration policy similar to consensus. In this work, we instead remove the assumption of instantaneous and cost-free communication, yielding multi-agent communication that respects resource constraints. This results in knowledge distribution that is more nuanced than consensus: putting data where it needs to be rather than flooding it everywhere.
{ "cite_N": [ "@cite_30" ], "mid": [ "2164819104" ], "abstract": [ "Planning under uncertainty faces a scalability problem when considering multi-robot teams, as the information space scales exponentially with the number of robots. To address this issue, this paper proposes to decentralize multi-robot partially observable Markov decision processes (POMDPs) while maintaining cooperation between robots by using POMDP policy auctions. Auctions provide a flexible way of coordinating individual policies modeled by POMDPs and have low communication requirements. In addition, communication models in the multi-agent POMDP literature severely mismatch with real inter-robot communication. We address this issue by exploiting a decentralized data fusion method in order to efficiently maintain a joint belief state among the robots. The paper presents two different applications: environmental monitoring with unmanned aerial vehicles (UAVs); and cooperative tracking, in which several robots have to jointly track a moving target of interest. The first one is used as a proof of concept and illustrates the proposed ideas through different simulations. The second one adds real multi-robot experiments, showcasing the flexibility and robust coordination that our techniques can provide." ] }
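The trade-off discussed above, between cost-free consensus-style sharing and resource-aware communication, can be sketched as a value-of-communication test: send an update only when the divergence between the local belief and the teammate's (stale) estimate is worth the cost. The thresholding scheme, parameter names, and numbers below are illustrative assumptions, not the model from either paper.

```python
import math

def kl_divergence(p, q):
    """KL divergence (in nats) between two discrete belief distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def should_communicate(local_belief, teammate_estimate, cost, gain_per_nat=1.0):
    """Value-of-communication test (illustrative): transmit only when the
    expected coordination gain from correcting the teammate's stale belief,
    here taken proportional to the KL divergence, outweighs the cost."""
    return gain_per_nat * kl_divergence(local_belief, teammate_estimate) > cost

# The teammate's estimate of the target state has drifted from our belief:
print(should_communicate([0.7, 0.2, 0.1], [0.34, 0.33, 0.33], cost=0.2))  # -> True
```

Under instantaneous, cost-free communication (cost = 0) this degenerates to always transmitting, i.e., the consensus-style behavior the passage contrasts against.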
1903.03086
2920658888
This paper addresses a fundamental question of multi-agent knowledge distribution: what information should be sent to whom and when, with the limited resources available to each agent? Communication requirements for multi-agent systems can be rather high when an accurate picture of the environment and the state of other agents must be maintained. To reduce the impact of multi-agent coordination on networked systems, e.g., power and bandwidth, this paper introduces two concepts for partially observable Markov decision processes (POMDPs): 1) action-based constraints which yield constrained-action POMDPs (CA-POMDPs); and 2) soft probabilistic constraint satisfaction for the resulting infinite-horizon controllers. To enable constraint analysis over an infinite horizon, an unconstrained policy is first represented as a Finite State Controller (FSC) and optimized with policy iteration. The FSC representation then allows for a combination of Markov chain Monte Carlo and discrete optimization to improve the probabilistic constraint satisfaction of the controller while minimizing the impact to the value function. Within the CA-POMDP framework we then propose Intelligent Knowledge Distribution (IKD) which yields per-agent policies for distributing knowledge between agents subject to interaction constraints. Finally, the CA-POMDP and IKD concepts are validated using an asset tracking problem where multiple unmanned aerial vehicles (UAVs) with heterogeneous sensors collaborate to localize a ground asset to assist in avoiding unseen obstacles in a disaster area. The IKD model was able to maintain asset tracking through multi-agent communications while only violating soft power and bandwidth constraints 3% of the time, while greedy and naive approaches violated constraints more than 60% of the time.
The above approaches to CMDPs and CPOMDPs apply constraints to the state space of the model and project them into the value space, but our formulation requires action-based constraints: we limit the resources an action consumes, which cannot be tied to physical constructs such as states representing ``no-fly'' or ``stay-away'' zones. As many scenarios require an indefinite length of operation and have no predefined goal states (as in our case study), our CA-POMDP approach needs to be solved as an infinite-horizon policy. Infinite-horizon POMDPs are solved by policy iteration over a finite state controller representation @cite_26 , so in our context we must analyze how a cyclic controller graph utilizes resources with respect to soft constraints. To the best of the authors' knowledge, it is not possible to represent soft resource constraints on the actions of a cyclic controller in the state or value space of state-of-the-art CMDPs and CPOMDPs.
{ "cite_N": [ "@cite_26" ], "mid": [ "1494689917" ], "abstract": [ "Most algorithms for solving POMDPs iteratively improve a value function that implicitly represents a policy and are said to search in value function space. This paper presents an approach to solving POMDPs that represents a policy explicitly as a finite-state controller and iteratively improves the controller by search in policy space. Two related algorithms illustrate this approach. The first is a policy iteration algorithm that can outperform value iteration in solving infinitehorizon POMDPs. It provides the foundation for a new heuristic search algorithm that promises further speedup by focusing computational effort on regions of the problem space that are reachable, or likely to be reached, from a start state." ] }
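A minimal sketch of the controller-analysis idea above: represent a policy as a cyclic finite state controller and use Monte Carlo rollouts to estimate how often its resource usage would violate a soft budget. The two-node controller, actions, energy costs, and observation probabilities below are all invented for illustration; the paper's actual analysis combines Markov chain Monte Carlo with discrete optimization.

```python
import random

# Toy cyclic finite state controller: node -> (action, {observation: next_node}).
# Actions and energy costs are illustrative assumptions, not from the paper.
FSC = {
    0: ("listen",   {"quiet": 0, "ping": 1}),
    1: ("transmit", {"quiet": 0, "ping": 1}),
}
ENERGY = {"listen": 1.0, "transmit": 5.0}

def violation_probability(budget, horizon=20, trials=10_000, p_ping=0.3, seed=0):
    """Monte Carlo estimate of P(energy used over `horizon` steps > budget)
    when executing the cyclic controller from node 0."""
    rng = random.Random(seed)
    violations = 0
    for _ in range(trials):
        node, used = 0, 0.0
        for _ in range(horizon):
            action, transitions = FSC[node]
            used += ENERGY[action]
            obs = "ping" if rng.random() < p_ping else "quiet"
            node = transitions[obs]
        if used > budget:
            violations += 1
    return violations / trials

print(violation_probability(budget=60.0))
```

A soft constraint in this sense tolerates a small estimated violation probability rather than forbidding the transmit action outright.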
1903.03086
2920658888
This paper addresses a fundamental question of multi-agent knowledge distribution: what information should be sent to whom and when, with the limited resources available to each agent? Communication requirements for multi-agent systems can be rather high when an accurate picture of the environment and the state of other agents must be maintained. To reduce the impact of multi-agent coordination on networked systems, e.g., power and bandwidth, this paper introduces two concepts for partially observable Markov decision processes (POMDPs): 1) action-based constraints which yield constrained-action POMDPs (CA-POMDPs); and 2) soft probabilistic constraint satisfaction for the resulting infinite-horizon controllers. To enable constraint analysis over an infinite horizon, an unconstrained policy is first represented as a Finite State Controller (FSC) and optimized with policy iteration. The FSC representation then allows for a combination of Markov chain Monte Carlo and discrete optimization to improve the probabilistic constraint satisfaction of the controller while minimizing the impact to the value function. Within the CA-POMDP framework we then propose Intelligent Knowledge Distribution (IKD) which yields per-agent policies for distributing knowledge between agents subject to interaction constraints. Finally, the CA-POMDP and IKD concepts are validated using an asset tracking problem where multiple unmanned aerial vehicles (UAVs) with heterogeneous sensors collaborate to localize a ground asset to assist in avoiding unseen obstacles in a disaster area. The IKD model was able to maintain asset tracking through multi-agent communications while only violating soft power and bandwidth constraints 3% of the time, while greedy and naive approaches violated constraints more than 60% of the time.
Driving communications from the reward function, as CA-POMDP does, is not unique; it has been the focus of research on reward shaping and belief-dependent rewards @cite_37 @cite_12 . These techniques use information-theoretic measures between probability distributions, such as the KL distance, as part of the reward function for belief-dependent rewards @cite_12 and for determining when to communicate @cite_37 . @cite_37 focuses on restricting communications to when they are needed, but does not provide the soft constraint satisfaction on the policy controller that is the focus of this work. The authors in @cite_37 use the Dec-POMDP-Comm as their basis but change the perspective from a cost to a reward for communication, highlighting an opportunity to use a resource. They state that efficient policy generation techniques will be adapted to allow for scalability, and they use the information-theoretic concepts from Dec-POMDP-Value-Comm @cite_1 as a measure of belief divergence. Alternatively, @cite_25 purposely restructures the action space so that the problem remains a classic POMDP. In this paper, we approach the IKD problem similarly to @cite_25 , where information rewards drive cooperative perception, but instead of restructuring the action space we restructure the observation model to describe belief in the value of information.
{ "cite_N": [ "@cite_37", "@cite_1", "@cite_25", "@cite_12" ], "mid": [ "1504352750", "197181061", "1984338812", "2099495504" ], "abstract": [ "Decentralised coordination in multi-agent systems is typically achieved using communication. However, in many cases, communication is expensive to utilise because there is limited bandwidth, it may be dangerous to communicate, or communication may simply be unavailable at times. In this context, we argue for a rational approach to communication --- if it has a cost, the agents should be able to calculate a value of communicating. By doing this, the agents can balance the need to communicate with the cost of doing so. In this research, we present a novel model of rational communication, that uses reward shaping to value communications, and employ this valuation in decentralised POMDP policy generation. In this context, reward shaping is the process by which expectations over joint actions are adjusted based on how coordinated the agent team is. An empirical evaluation of the benefits of this approach is presented in two domains. First, in the context of an idealised benchmark problem, the multiagent Tiger problem, our method is shown to require significantly less communication (up to 30% fewer messages) and still achieves a 30% performance improvement over the current state of the art. Second, in the context of a larger-scale problem, RoboCupRescue, our method is shown to scale well, and operate without recourse to significant amounts of domain knowledge.", "Decentralised coordination in multi-agent systems is typically achieved using communication. However, in many cases, communication is expensive to utilise because there is limited bandwidth, it may be dangerous to communicate, or communication may simply be unavailable at times. In this context, we argue for a rational approach to communication --- if it has a cost, the agents should be able to calculate a value of communicating. By doing this, the agents can balance the need to communicate with the cost of doing so. In this research, we present a novel model of rational communication that uses information theory to value communications, and employ this valuation in a decision theoretic coordination mechanism. A preliminary empirical evaluation of the benefits of this approach is presented in the context of the RoboCupRescue simulator.", "Partially observable Markov decision processes (POMDPs) provide a principled framework for modeling an agent's decision-making problem when the agent needs to consider noisy state estimates. POMDP policies take into account an action's influence on the environment as well as the potential information gain. This is a crucial feature for robotic agents which generally have to consider the effect of actions on sensing. However, building POMDP models which reward information gain directly is not straightforward, but is important in domains such as robot-assisted surveillance in which the value of information is hard to quantify. Common techniques for uncertainty reduction such as expected entropy minimization lead to non-standard POMDPs that are hard to solve. We present the POMDP with Information Rewards (POMDP-IR) modeling framework, which rewards an agent for reaching a certain level of belief regarding a state feature. By remaining in the standard POMDP setting we can exploit many known results as well as successful approximate algorithms. We demonstrate our ideas in a toy problem as well as in real robot-assisted surveillance, showcasing their use for active cooperative perception scenarios. Finally, our experiments show that the POMDP-IR framework compares favorably with a related approach on benchmark domains.", "Partially Observable Markov Decision Processes (POMDPs) model sequential decision-making problems under uncertainty and partial observability. Unfortunately, some problems cannot be modeled with state-dependent reward functions, e.g., problems whose objective explicitly implies reducing the uncertainty on the state. To that end, we introduce ρPOMDPs, an extension of POMDPs where the reward function ρ depends on the belief state. We show that, under the common assumption that ρ is convex, the value function is also convex, what makes it possible to (1) approximate ρ arbitrarily well with a piecewise linear and convex (PWLC) function, and (2) use state-of-the-art exact or approximate solving algorithms with limited changes." ] }
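To illustrate the belief-dependent reward idea above with a hedged sketch: reward the agent for holding a low-entropy (more certain) belief, so that uncertainty reduction itself earns reward. The scaling factor and sign convention are illustrative assumptions, not the ρPOMDP formulation itself.

```python
import math

def belief_entropy(belief):
    """Shannon entropy (in nats) of a discrete belief distribution."""
    return -sum(p * math.log(p) for p in belief if p > 0)

def rho_reward(belief, scale=1.0):
    """Belief-dependent reward (illustrative): more certain beliefs, i.e.,
    lower entropy, earn higher reward, rewarding uncertainty reduction."""
    return -scale * belief_entropy(belief)

# A peaked belief is rewarded more than a uniform (maximally uncertain) one:
print(rho_reward([0.9, 0.05, 0.05]) > rho_reward([1/3, 1/3, 1/3]))  # -> True
```

Since negative entropy is convex in the belief, such a reward fits the convexity assumption under which ρPOMDP value functions remain convex.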
1903.03090
2922074688
We introduce a new class of combinatorially defined rational functions and apply them to deduce explicit formulae for local ideal zeta functions associated to the members of a large class of nilpotent Lie rings which contains the free class-2-nilpotent Lie rings and is stable under direct products. Our results unify and generalize a substantial number of previous computations. We show that the new rational functions, and thus also the local zeta functions under consideration, enjoy a self-reciprocity property, expressed in terms of a functional equation upon inversion of variables. We establish a conjecture of Grunewald, Segal, and Smith on the uniformity of normal zeta functions of finitely generated free class-2-nilpotent groups.
Theorems and generalize and unify several previously known results. The most classical may be the formula for the @math -ideal zeta function of the (abelian Lie) ring @math ; cf. [Proposition 1.1] GSS 88 . The ideal zeta functions of the so-called @math were given in [Theorem 5] Voll 05 . Formulae for the ideal zeta functions of the free class- @math -nilpotent Lie rings @math on @math generators are the main result of @cite_9 . The paper @cite_19 contains formulae for all local factors of the ideal zeta functions of the Lie rings @math , i.e. the over an arbitrary number ring @math , which are indexed by primes unramified in @math . The uniform nature of these functions had already been established in [Theorem 3] GSS 88 . Formulae for factors indexed by non-split primes are given in @cite_0 . The ideal zeta functions of the Lie rings @math were computed in [Proposition 8.4] GSS 88 , whereas for the direct products @math they were computed in @cite_10 . The ideal zeta function of the Lie ring @math was computed in [Theorem 11.1] Paajanen 08 .
{ "cite_N": [ "@cite_0", "@cite_19", "@cite_9", "@cite_10" ], "mid": [ "2963054497", "2091506870", "1986614647", "2963637632" ], "abstract": [ "We enumerate traceless square matrices over finite quotients of compact discrete valuation rings by their image sizes. We express the associated rational generating functions in terms of statistics on symmetric and hyperoctahedral groups, viz. Coxeter groups of types A and B, respectively. These rational functions may also be interpreted as local representation zeta functions associated to the members of an infinite family of finitely generated class-2-nilpotent groups.", "Let @math be a number field with ring of integers @math . We compute explicitly the local factors of the normal zeta functions of the Heisenberg groups @math that are indexed by rational primes which are unramified in @math . We show that these local zeta functions satisfy functional equations upon the inversion of the prime.", "Let F2,d denote the free class-2-nilpotent group on d generators. We compute the normal zeta functions ζ � F2,d (s), prove that they satisfy local functional equations and determine their abscissae of convergence and pole orders.", "We develop a practical method for computing local zeta functions of groups, algebras, and modules in fortunate cases. Using our method, we obtain a complete classification of generic local representation zeta functions associated with unipotent algebraic groups of dimension at most six. We also determine the generic local subalgebra zeta functions associated with gl2(Q). Finally, we introduce and compute examples of graded subobject zeta functions." ] }
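For orientation on the objects these records enumerate (standard notation, not taken from the records themselves): the ideal zeta function of a ring L and the Euler factorization whose local factors the paper computes can be written as

```latex
\zeta_L^{\triangleleft}(s)
  = \sum_{I \,\triangleleft\, L,\ |L:I| < \infty} |L:I|^{-s}
  = \prod_{p \ \mathrm{prime}} \zeta_{L,p}^{\triangleleft}(s),
\qquad
\zeta_{L,p}^{\triangleleft}(s) = \sum_{k \ge 0} a_{p^k}^{\triangleleft}(L)\, p^{-ks},
```

where \(a_n^{\triangleleft}(L)\) counts ideals of index n. The self-reciprocity mentioned in the abstract is a functional equation of the shape \(\zeta_{L,p}^{\triangleleft}(s)\big|_{p \to p^{-1}} = (-1)^a p^{b-cs}\, \zeta_{L,p}^{\triangleleft}(s)\) for integers a, b, c independent of p.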
1903.03090
2922074688
We introduce a new class of combinatorially defined rational functions and apply them to deduce explicit formulae for local ideal zeta functions associated to the members of a large class of nilpotent Lie rings which contains the free class-2-nilpotent Lie rings and is stable under direct products. Our results unify and generalize a substantial number of previous computations. We show that the new rational functions, and thus also the local zeta functions under consideration, enjoy a self-reciprocity property, expressed in terms of a functional equation upon inversion of variables. We establish a conjecture of Grunewald, Segal, and Smith on the uniformity of normal zeta functions of finitely generated free class-2-nilpotent groups.
Some of the members of the family of Lie rings @math have previously been studied with regards to related counting problems, each leading to a different class of zeta functions. We mention specifically four such classes: first, the of a (class- @math -nilpotent Lie) ring @math , enumerating the finite index subrings of @math ; second, the of @math , the finitely generated nilpotent group associated to @math via the Mal'cev correspondence, enumerating the subgroups of finite index of @math whose profinite completions are isomorphic to the one of @math ; third, the of @math , enumerating the twist-isoclasses of complex irreducible representations of @math ; fourth, the of @math , enumerating the class numbers (i.e. numbers of conjugacy classes) of congruence quotients of this group (see @cite_14 ).
{ "cite_N": [ "@cite_14" ], "mid": [ "2802168718" ], "abstract": [ "This is the first of two papers in which we introduce and study two bivariate zeta functions associated to unipotent group schemes over rings of integers of number fields. These zeta functions encode, respectively, the numbers of isomorphism classes of irreducible complex representations of finite dimensions and the numbers of conjugacy classes of congruence quotients of the associated groups. In this paper, we show that such bivariate zeta functions specialise to (univariate) class number zeta functions. In case of nilpotency class 2, bivariate representation zeta functions also specialise to (univariate) twist representation zeta functions. Moreover, we show that these zeta functions satisfy Euler factorisations and almost all of their Euler factors satisfy rationality and functional equations. In the second part of this work, we compute the above mentioned zeta functions of three infinite families of nilpotent groups of class two and deduce some combinatorial corollaries." ] }
1903.03090
2922074688
We introduce a new class of combinatorially defined rational functions and apply them to deduce explicit formulae for local ideal zeta functions associated to the members of a large class of nilpotent Lie rings which contains the free class-2-nilpotent Lie rings and is stable under direct products. Our results unify and generalize a substantial number of previous computations. We show that the new rational functions, and thus also the local zeta functions under consideration, enjoy a self-reciprocity property, expressed in terms of a functional equation upon inversion of variables. We establish a conjecture of Grunewald, Segal, and Smith on the uniformity of normal zeta functions of finitely generated free class-2-nilpotent groups.
The subring zeta functions of the Grenham Lie rings @math were computed in @cite_27 . Those of the free class- @math -nilpotent Lie rings @math are largely unknown, apart from @math ( @cite_8 ) and @math ( [Theorem 2.16] duSWoodward 08 , due to G. Taylor). The proisomorphic zeta functions of the members of a combinatorially defined class of groups that includes the Grenham groups @math were computed in @cite_25 , their normal zeta functions in @cite_18 . The representation zeta functions of the free class- @math -nilpotent groups @math were computed in [Theorem B] StasinskiVoll 14 , the ones of the groups @math in [Theorem A] Zordan 17 . The class number zeta functions of the groups @math and @math may be found in [Corollary 1.5] Lins2 18 .
{ "cite_N": [ "@cite_27", "@cite_18", "@cite_25", "@cite_8" ], "mid": [ "", "2939052374", "2962768297", "2088880658" ], "abstract": [ "", "We produce explicit formulae for various ideal zeta functions associated to the members of an infinite family of class- @math -nilpotent Lie rings, introduced in [8], in terms of Igusa functions. As corollaries we obtain information about analytic properties of global ideal zeta functions, local functional equations, topological, reduced, and graded ideal zeta functions, as well as representation zeta functions for the unipotent group schemes associated to the Lie rings in question.", "The pro-isomorphic zeta function @math of a finitely generated nilpotent group @math is a Dirichlet generating function that enumerates finite-index subgroups whose profinite completion is isomorphic to that of @math . Such zeta functions can be expressed as Euler products of p-adic integrals over the @math -points of an algebraic automorphism group associated to @math . In this way they are closely related to classical zeta functions of algebraic groups over local fields.", "A ski is provided on its under surface with a longitudinal central part having a smoother surface configuration than the adjacent surfaces and this ski is to be used on a substantially hard smooth surface especially on an inclined belt which can be adjusted as to speed and inclination." ] }
1903.02915
2920531146
This paper describes jMetalPy, an object-oriented Python-based framework for multi-objective optimization with metaheuristic techniques. Building upon our experiences with the well-known jMetal framework, we have developed a new multi-objective optimization software platform aiming not only at replicating the former one in a different programming language, but also at taking advantage of the full feature set of Python, including its facilities for fast prototyping and the large amount of available libraries for data processing, data analysis, data visualization, and high-performance computing. As a result, jMetalPy provides an environment for solving multi-objective optimization problems focused not only on traditional metaheuristics, but also on techniques supporting preference articulation and dynamic problems, along with a rich set of features related to the automatic generation of statistical data from the results generated, as well as the real-time and interactive visualization of the Pareto front approximations produced by the algorithms. jMetalPy additionally offers support for parallel computing in multicore and cluster systems. We include some use cases to explore the main features of jMetalPy and to illustrate how to work with it.
In the last two decades, a number of software frameworks devoted to the implementation of multi-objective metaheuristics have been contributed to the community, such as ECJ @cite_18 , EvA @cite_30 , JCLEC-MO @cite_0 , jMetal @cite_7 @cite_27 , MOEA Framework @cite_1 , and Opt4J @cite_21 , which are written in Java; ParadisEO-MOEO @cite_48 , and PISA @cite_39 , developed in C/C++; and PlatEMO @cite_3 , implemented in Matlab. They all have in common the inclusion of representative algorithms from the state of the art, benchmark problems and quality indicators for performance assessment.
{ "cite_N": [ "@cite_30", "@cite_18", "@cite_7", "@cite_48", "@cite_21", "@cite_1", "@cite_3", "@cite_39", "@cite_0", "@cite_27" ], "mid": [ "2536999787", "2734841578", "2000825106", "1561599800", "2000199097", "", "2764251381", "1584555100", "1977232778", "" ], "abstract": [ "We describe the EvA software package which consists of parallel (and sequential) implementations of genetic algorithms (GAs) and evolution strategies (ESs) and a common graphical user interface. We concentrate on the descriptions of the two distributed implementations of GAs and ESs which are of most interest for the future. We present comparisons of different kinds of genetic algorithms and evolution strategies that include implementations of distributed algorithms on the Intel Paragon, a large MIMD computer and massively parallel algorithms on a 16384 processor MasPar MP-1, a large SIMD computer. The results show that parallelization of evolution strategies not only achieves a speedup in execution time of the algorithm, but also a higher probability of convergence and an increase of quality of the achieved solutions. In the benchmark functions we tested, the distributed ESs have a better performance than the distributed GAs.", "ECJ is a mature and widely used evolutionary computation library with particular strengths in genetic programming, massive distributed computation, and coevolution. In Fall of 2016 we received a three-year NSF grant to expand ECJ into a toolkit with wide-ranging facilities designed to serve the broader metaheuristics community. This report discusses ECJ's history, capabilities, and architecture, then details our planned extensions and expansions.", "This paper describes jMetal, an object-oriented Java-based framework aimed at the development, experimentation, and study of metaheuristics for solving multi-objective optimization problems. 
jMetal includes a number of classic and modern state-of-the-art optimizers, a wide set of benchmark problems, and a set of well-known quality indicators to assess the performance of the algorithms. The framework also provides support to carry out full experimental studies, which can be configured and executed by using jMetal's graphical interface. Other features include the automatic generation of statistical information of the obtained results, and taking advantage of the current availability of multi-core processors to speed-up the running time of the experiments. In this work, we include two case studies to illustrate the use of jMetal in both solving a problem with a metaheuristic and designing and performing an experimental study.", "This chapter presents ParadisEO-MOEO, a white-box object-oriented software framework dedicated to the flexible design of metaheuristics for multi-objective optimization. This paradigm-free software proposes a unified view for major evolutionary multi-objective metaheuristics. It embeds some features and techniques for multi-objective resolution and aims to provide a set of classes allowing to ease and speed up the development of computationally efficient programs. It is based on a clear conceptual distinction between the solution methods and the problems they are intended to solve. This separation confers a maximum design and code reuse. This general-purpose framework provides a broad range of fitness assignment strategies, the most common diversity preservation mechanisms, some elitist-related features as well as statistical tools. Furthermore, a number of state-of-the-art search methods, including NSGA-II, SPEA2 and IBEA, have been implemented in a user-friendly way, based on the fine-grained ParadisEO-MOEO components.", "This paper presents a modular framework for meta-heuristic optimization of complex optimization tasks by decomposing them into subtasks that may be designed and developed separately. 
Since these subtasks are generally correlated, a separate optimization is prohibited and the framework has to be capable of optimizing the subtasks concurrently. For this purpose, a distinction of genetic representation (genotype) and representation of a solution of the optimization problem (phenotype) is imposed. A compositional genotype and appropriate operators enable the separate development and testing of the optimization of subtasks by a strict decoupling. The proposed concept is implemented as open source reference OPT4J [6]. The architecture of this implementation is outlined and design decisions are discussed that enable a maximal decoupling and flexibility. A case study of a complex real-world optimization problem from the automotive domain is introduced. This case study requires the concurrent optimization of several heterogeneous aspects. Exemplary, it is shown how the proposed framework allows to efficiently optimize this complex problem by decomposing it into subtasks that are optimized concurrently.", "", "Over the last three decades, a large number of evolutionary algorithms have been developed for solving multi-objective optimization problems. However, there lacks an up-to-date and comprehensive software platform for researchers to properly benchmark existing algorithms and for practitioners to apply selected algorithms to solve their real-world problems. The demand of such a common tool becomes even more urgent when the source code of many proposed algorithms has not been made publicly available. To address these issues, we have developed a MATLAB platform for evolutionary multi-objective optimization in this paper, called PlatEMO, which includes more than 50 multiobjective evolutionary algorithms and more than 100 multi-objective test problems, along with several widely used performance indicators. 
With a user-friendly graphical user interface, PlatEMO enables users to easily compare several evolutionary algorithms at one time and collect statistical results in Excel or LaTeX files. More importantly, PlatEMO is completely open source, such that users are able to develop new algorithms on the basis of it. This paper introduces the main features of PlatEMO and illustrates how to use it for performing comparative experiments, embedding new algorithms, creating new test problems, and developing performance indicators. Source code of PlatEMO is now available at: http://bimk.ahu.edu.cn/index.php?s=/Index/Software/index.html.", "This paper introduces an interface specification (PISA) that allows to separate the problem-specific part of an optimizer from the problem-independent part. We propose a view of the general optimization scenario, where the problem representation together with the variation operators is seen as an integral part of the optimization problem and can hence be easily separated from the selection operators. Both parts are implemented as independent programs, that can be provided as ready-to-use packages and arbitrarily combined. This makes it possible to specify and implement representation-independent selection modules, which form the essence of modern multiobjective optimization algorithms. The variation operators, on the other hand, have to be defined in one module together with the optimization problem, facilitating a customized problem description. Besides the specification, the paper contains a correctness proof for the protocol and measured efficiency results.", "The ongoing advances in multi-objective optimisation (MOO) are improving the way that complex real-world optimisation problems, mostly characterised by the definition of many conflicting objectives, are currently addressed. To put it into practice, developers require flexible implementations of these algorithms so that they can be adapted to the problem-specific needs. 
Here, metaheuristic optimisation frameworks (MOFs) are essential tools to provide end-user oriented development solutions. Even though consolidated MOFs are continuously evolving, they seem to have paid little attention to the new trends in MOO. Recently, new frameworks have emerged with the aim of providing support to these approaches, but they often offer less variety of basic functionalities like diversity of encodings and operators than other general-purpose solutions. In this paper we identify a number of relevant features serving to satisfy the requirements demanded by MOO nowadays, and propose a solution, called JCLEC-MOEA, on the basis of the JCLEC framework. As a key contribution, its architecture has been designed with a twofold purpose: reusing all the features already given by a mature framework like JCLEC, and extending it to enable new developments more flexibly than current alternatives.", "" ] }
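All of the frameworks listed in this record ultimately revolve around Pareto dominance and non-dominated filtering; a self-contained sketch of that core predicate in plain Python (this is an illustrative reimplementation, not any framework's actual API):

```python
def dominates(u, v):
    """True if objective vector u Pareto-dominates v (minimization):
    u is no worse in every objective and strictly better in at least one."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def pareto_front(points):
    """Non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

pts = [(1.0, 4.0), (2.0, 2.0), (3.0, 3.0), (4.0, 1.0)]
front = pareto_front(pts)
# (3.0, 3.0) is dominated by (2.0, 2.0); the other three points are non-dominated.
```

Quality indicators such as hypervolume, and selection schemes such as NSGA-II's non-dominated sorting, are built on exactly this predicate.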
1903.02835
2955899579
The lattice model proposed by Denning in her seminal work provided secure information flow analyses with an intuitive and uniform mathematical foundation. Different organisations, however, may employ quite different security lattices. In this paper, we propose a connection framework that permits different organisations to exchange information while maintaining both security of information flows as well as their autonomy in formulating and maintaining security policy. Our prescriptive framework is based on the rigorous mathematical framework of Lagois connections given by Melton, together with a simple operational model for transferring object data between domains. The merit of this formulation is that it is simple, minimal, adaptable and intuitive, and provides a formal framework for establishing secure information flow across autonomous interacting organisations. We show that our framework is semantically sound, by proving that the connections proposed preserve standard correctness notions such as non-interference.
The notion of Lagois connections @cite_19 has surprisingly not been employed much in computer science. The only cited use of this idea seems to be the work of Huth @cite_4 in establishing the correctness of programming language implementations. To our knowledge, our work is the only one to propose their use in secure information flow control.
{ "cite_N": [ "@cite_19", "@cite_4" ], "mid": [ "2057856707", "1803857739" ], "abstract": [ "Abstract In this paper we define a Lagois connection, which is a generalization of a special type of Galois connection. We begin by introducing two examples of Lagois connections. We then recall the definition of Galois connection and some of its properties; next we define Lagois connection, establish some of its properties, and compare these with properties of Galois connections; and then we (further) develop examples of Lagois connections. Via these examples it is shown that, as is the case of Galois connections, there is a plethora of Lagois connections. Also it is shown that several fundamental situations in computer science and mathematics that cannot be interpreted in terms of Galois connections naturally fit into the theory of Lagois connections.", "We study frameworks for the equivalence of abstract state-transition systems represented as posets. A basic notion of equivalence is proposed. A least fix-point operator transforms basic equivalences into strong equivalences (=Lagois Connections) which makes Lagois Connections into a category. In the absence of divergence, the two notions of equivalence coincide. We generalize these notions by adding a logical level to express divergence more precisely. Then both generalized notions of equivalence coincide even in the presence of divergence." ] }
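Since the contrast between Galois and Lagois connections is what this record turns on, it may help to state both for monotone maps f : P → Q and g : Q → P between posets (a from-memory sketch of the definitions attributed to Melton et al.; the cited paper is authoritative):

```latex
% Galois connection (covariant / adjoint form):
f(p) \le q \iff p \le g(q) \qquad \text{for all } p \in P,\ q \in Q.

% (increasing) Lagois connection:
p \le g(f(p)), \qquad q \le f(g(q)), \qquad
f \circ g \circ f = f, \qquad g \circ f \circ g = g.
```

In a Galois connection one composite is a closure operator and the other an interior operator; in a Lagois connection both composites are closure operators, which gives the symmetric behaviour exploited in the secure-flow setting.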
1903.02835
2955899579
The lattice model proposed by Denning in her seminal work provided secure information flow analyses with an intuitive and uniform mathematical foundation. Different organisations, however, may employ quite different security lattices. In this paper, we propose a connection framework that permits different organisations to exchange information while maintaining both security of information flows as well as their autonomy in formulating and maintaining security policy. Our prescriptive framework is based on the rigorous mathematical framework of Lagois connections given by Melton, together with a simple operational model for transferring object data between domains. The merit of this formulation is that it is simple, minimal, adaptable and intuitive, and provides a formal framework for establishing secure information flow across autonomous interacting organisations. We show that our framework is semantically sound, by proving that the connections proposed preserve standard correctness notions such as non-interference.
Abstract Interpretation and type systems @cite_15 have been used in secure flow analyses, e.g., @cite_1 @cite_8 and @cite_22 , where security types are defined using Galois connections employing, for instance, a standard collecting semantics. Their use of two domains, concrete and abstract, with a Galois connection between them, for performing static analyses should not be confused with our idea of secure connections between independently-defined security lattices of two organisations.
{ "cite_N": [ "@cite_15", "@cite_1", "@cite_22", "@cite_8" ], "mid": [ "1963705166", "28659641", "2785696951", "" ], "abstract": [ "Starting from a denotational semantics of the eager untyped lambda-calculus with explicit runtime errors, the standard collecting semantics is defined as specifying the strongest program properties. By a first abstraction, a new sound type collecting semantics is derived in compositional fix-point form. Then by successive (semi-dual) Galois connection based abstractions, type systems and/or type inference algorithms are designed as abstract semantics or abstract interpreters approximating the type collecting semantics. This leads to a hierarchy of type systems, which is part of the lattice of abstract interpretations of the untyped lambda-calculus. This hierarchy includes two new à la Church/Curry polytype systems. Abstractions of this polytype semantics lead to classical Milner/Mycroft and Damas/Milner polymorphic type schemes, Church/Curry monotypes and Hindley principal typing algorithm. This shows that types are abstract interpretations.", "We introduce an enhanced information-flow analysis for tracking the amount of confidential data that is possibly released to third parties by a mobile application. The main novelty of our solution is that it can explicitly keep track of the footprint of data sources in the expressions formed and manipulated by the program, as well as of transformations over them, yielding a lazy approach with finer granularity, which may reduce false positives with respect to state-of-the-art information-flow analyses.", "We introduce an abstract domain for information-flow analysis of software. The proposal combines variable dependency analysis with numerical abstractions, yielding to accuracy and efficiency improvements. We apply the full power of the proposal to the case of database query languages as well. 
Finally, we present an implementation of the analysis, called Sails, as an instance of a generic static analyzer. Keeping the modular construction of the analysis, the tool allows one to tune the granularity of heap analysis and to choose the numerical domain involved in the reduced product. This way the user can tune the information leakage analysis at different levels of precision and efficiency.", "" ] }
1903.02835
2955899579
The lattice model proposed by Denning in her seminal work provided secure information flow analyses with an intuitive and uniform mathematical foundation. Different organisations, however, may employ quite different security lattices. In this paper, we propose a connection framework that permits different organisations to exchange information while maintaining both security of information flows as well as their autonomy in formulating and maintaining security policy. Our prescriptive framework is based on the rigorous mathematical framework of Lagois connections given by Melton, together with a simple operational model for transferring object data between domains. The merit of this formulation is that it is simple, minimal, adaptable and intuitive, and provides a formal framework for establishing secure information flow across autonomous interacting organisations. We show that our framework is semantically sound, by proving that the connections proposed preserve standard correctness notions such as non-interference.
There has been substantial work on SIF in a distributed setting at the systems level. DStar @cite_30 , for example, uses sets of opaque identifiers to define security classes. The DStar framework extends a DIFC model @cite_3 @cite_9 for operating systems to a distributed network. The only partial order that is considered in DStar's security lattice is subset inclusion. So it is not clear if DStar can work with general IFC mechanisms such as FlowCaml @cite_6 , which can use any partial ordering. Nor can it express the labels of Jif @cite_2 or Fabric @cite_27 completely. DStar allows bidirectional communication between processes @math and @math only if @math and @math , i.e., if there is an order-isomorphism between the labels. Our motivating examples indicate that such a requirement is far too restrictive for most practical arrangements for data sharing between organisations.
{ "cite_N": [ "@cite_30", "@cite_9", "@cite_6", "@cite_3", "@cite_27", "@cite_2" ], "mid": [ "1495495588", "", "2061056245", "2162283517", "2617413542", "2158126684" ], "abstract": [ "Recent operating systems [12, 21, 26] have shown that decentralized information flow control (DIFC) can secure applications built from mostly untrusted code. This paper extends DIFC to the network. We present DStar, a system that enforces the security requirements of mutually distrustful components through cryptography on the network and local OS protection mechanisms on each host. DStar does not require any fully-trusted processes or machines, and is carefully constructed to avoid covert channels inherent in its interface. We use DStar to build a three-tiered web server that mitigates the effects of untrustworthy applications and compromised machines.", "", "This paper presents a type-based information flow analysis for a call-by-value λ-calculus equipped with references, exceptions and let-polymorphism, which we refer to as ML. The type system is constraint-based and has decidable type inference. Its noninterference proof is reasonably light-weight, thanks to the use of a number of orthogonal techniques. First, a syntactic segregation between values and expressions allows a lighter formulation of the type system. Second, noninterference is reduced to subject reduction for a nonstandard language extension. Lastly, a semi-syntactic approach to type soundness allows dealing with constraint-based polymorphism separately.", "Decentralized Information Flow Control (DIFC) is an approach to security that allows application writers to control how data flows between the pieces of an application and the outside world. As applied to privacy, DIFC allows untrusted software to compute with private data while trusted security code controls the release of that data. As applied to integrity, DIFC allows trusted code to protect untrusted software from unexpected malicious inputs. 
In either case, only bugs in the trusted code, which tends to be small and isolated, can lead to security violations. We present Flume, a new DIFC model that applies at the granularity of operating system processes and standard OS abstractions (e.g., pipes and file descriptors). Flume was designed for simplicity of mechanism, to ease DIFC's use in existing applications, and to allow safe interaction between conventional and DIFC-aware processes. Flume runs as a user-level reference monitor on Linux. A process confined by Flume cannot perform most system calls directly; instead, an interposition layer replaces system calls with IPC to the reference monitor, which enforces data flow policies and performs safe operations on the process's behalf. We ported a complex web application (MoinMoin Wiki) to Flume, changing only 2% of the original code. Performance measurements show a 43% slowdown on read workloads and a 34% slowdown on write workloads, which are mostly due to Flume's user-level implementation.
This paper defines the JFlow language and presents formal rules that are used to check JFlow programs for correctness. Because most checking is static, there is little code space, data space, or run-time overhead in the JFlow implementation." ] }
1903.02928
2922502550
In this paper, we study the problem of resource allocation as well as pricing in the context of Internet of things (IoT) networks. We provide a novel pricing model for IoT services where all the parties involved in the communication scenario, as well as their revenue and cost, are determined. We formulate the resource allocation in the considered model as a multi-objective optimization problem where, in addition to the resource allocation variables, the price values are also optimization variables. To solve the proposed multi-objective optimization problem, we use the scalarization method, which gives different Pareto optimal solutions. We solve the resulting problems using the alternating approach based on the successive convex approximation (SCA) method, which converges to a local solution within a few iterations. We also consider a conventional approach where each entity tries to maximize its own revenue independently. Simulation results indicate that by applying the proposed joint framework, we can increase the total revenue compared to the conventional case while providing almost complete fairness among the players. In contrast, the conventional approach fails to provide such fairness.
There are a number of works in the wireless networking literature that use pricing methods to model the trade-offs among different entities @cite_51 @cite_38 . Examples include secondary and primary operators in cognitive radio networks @cite_42 , device-to-device (D2D) communications @cite_6 , and different base stations in heterogeneous networks @cite_46 . The existing literature focuses on bandwidth as the resource to be traded @cite_53 .
{ "cite_N": [ "@cite_38", "@cite_53", "@cite_42", "@cite_6", "@cite_46", "@cite_51" ], "mid": [ "", "2571224123", "2540490069", "2748476900", "1994534328", "" ], "abstract": [ "", "The Internet of Things has drawn lots of research attention with the growing number of devices connected to the Internet. Long Term Evolution-Advanced (LTE-A) is a promising technology for wireless communication and it is also promising for IoT. The main challenge of incorporating IoT devices into LTE-A is the large number of IoT devices attempting to access the network in a short period, which greatly reduces network performance. In order to improve the network utilization, we adopted a hierarchical architecture using a gateway for connecting the devices to the eNB and proposed a multiclass resource allocation algorithm for LTE-based IoT communication. Simulation results show that the proposed algorithm can provide good performance in both data rate and latency for different QoS applications in both saturated and unsaturated environments.", "In this paper, we study resource allocation for a multicarrier-based cognitive radio (CR) network. More specifically, we investigate the secondary users’ energy-efficiency (EE) maximization problem under secondary total power and primary interference constraints. First, assuming cooperation among the secondary base stations (BSs), a centralized approach is considered to solve the EE optimization problem for the CR network where the primary and secondary users are using either orthogonal frequency-division multiplexing (OFDM) or filter bank based multicarrier (FBMC) modulations. We propose an alternating-based approach to solve the joint power-subcarrier allocation problem. More importantly, in the first place, subcarriers are allocated using a heuristic method for a given feasible power allocation.
Then, we conservatively approximate the nonconvex power control problem and propose a joint successive convex approximation-Dinkelbach algorithm (SCADA) to efficiently obtain a solution to the nonconvex power control problem. The proposed algorithm is shown to converge to a solution that coincides with the stationary point of the original nonconvex power control subproblem. Moreover, we propose a dual decomposition-based decentralized version of the SCADA. Second, under the assumption of no cooperation among the secondary BSs, we propose a fully distributed power control algorithm from the perspective of game theory. The proposed algorithm is shown to converge to a Nash-equilibrium (NE) point. Moreover, we propose a sufficient condition that guarantees the uniqueness of the achieved NE. Extensive simulation analyses are further provided to highlight the advantages and demonstrate the efficiency of our proposed schemes.", "Device-to-device (D2D) communication is developed as a new paradigm to enhance network performance according to LTE and WiMAX advanced standards. The D2D communication may have dedicated spectrum (overlay) or shared spectrum (underlay). However, the allocated dedicated spectrum may not be effectively used in the overlay mode, while interference between the D2D users and cellular users causes impairments in the underlay mode. Can the resource allocation of a D2D system be optimized using the cognitive approach where the D2D users opportunistically access the underutilized radio spectrum? That is the focus of this paper. In this paper, the transmission rate of the D2D users is optimized while simultaneously satisfying five sets of constraints related to power, interference, and data rate, modeling D2D users as cognitive secondary users. Furthermore, a two-stage approach is considered to allocate the radio resources efficiently.
A new adaptive subcarrier allocation scheme is designed first, and then, a novel power allocation scheme is developed utilizing a geometric water-filling approach that provides an optimal solution with low computational complexity for this nonlinear problem. Numerical results show that the proposed approach achieves significant performance enhancement over the existing schemes.", "In this paper, we propose a joint subchannel and power allocation algorithm for the downlink of an orthogonal frequency-division multiple access (OFDMA) mixed femtocell macrocell network deployment. Specifically, the total throughput of all femtocell user equipments (FUEs) is maximized while the network capacity of an existing macrocell is always protected. Towards this end, we employ an iterative approach in which OFDM subchannels and transmit powers of base stations (BS) are alternatively assigned and optimized at every step. For a fixed power allocation, we prove that the optimal policy in each cell is to give each subchannel to the user with the highest signal-to-interference-plus-noise ratio (SINR) on that subchannel. For a given subchannel assignment, we adopt the successive convex approximation (SCA) approach and transform the highly nonconvex power allocation problem into a sequence of convex subproblems. In the arithmetic-geometric mean (AGM) approximation, we apply geometric programming to find optimal solutions after condensing a posynomial into a monomial. On the other hand, logarithmic and difference-of-two-concave-functions (D.C.) approximations lead us to solving a series of convex relaxation programs. With the three proposed SCA-based power optimization solutions, we show that the overall joint subchannel and power allocation algorithm converges to some local maximum of the original design problem.
While a central processing unit is required to implement the AGM approximation-based solution, each BS locally computes the optimal subchannel and power allocation for its own servicing cell in the logarithmic and D.C. approximation-based solutions. Numerical examples confirm the merits of the proposed algorithm.", "" ] }
1903.02928
2922502550
In this paper, we study the problem of resource allocation as well as pricing in the context of Internet of things (IoT) networks. We provide a novel pricing model for IoT services where all the parties involved in the communication scenario, as well as their revenue and cost, are determined. We formulate the resource allocation in the considered model as a multi-objective optimization problem where, in addition to the resource allocation variables, the price values are also optimization variables. To solve the proposed multi-objective optimization problem, we use the scalarization method, which gives different Pareto optimal solutions. We solve the resulting problems using the alternating approach based on the successive convex approximation (SCA) method, which converges to a local solution within a few iterations. We also consider a conventional approach where each entity tries to maximize its own revenue independently. Simulation results indicate that by applying the proposed joint framework, we can increase the total revenue compared to the conventional case while providing almost complete fairness among the players. In contrast, the conventional approach fails to provide such fairness.
Few papers in the literature have considered pricing schemes for resources other than spectrum. In D2D communications, power has been considered as a subject of trading in @cite_25 . In @cite_32 , a D2D communication framework is considered in which the authors design a power-pricing framework based on the principle of the Stackelberg game. In @cite_48 , relay servers are the subject of pricing, where sellers offer cooperative services at the cost of resources such as power by way of auction.
{ "cite_N": [ "@cite_48", "@cite_32", "@cite_25" ], "mid": [ "2023260531", "2288011440", "2331789166" ], "abstract": [ "On one hand, cooperative communication has been gaining more and more popularity since it has great potential to increase the capacity of wireless networks. On the other hand, the applications of cooperative communication technology are rarely seen in reality, even in some scenarios where the demands for bandwidth-hungry applications have pushed the system designers to develop innovative network solutions. A main obstacle lying between the potential capability of channel capacity improvement and the wide adoption of cooperative communication is the lack of incentives for the participating wireless nodes to serve as relay nodes. Hence, in this paper, we design TASC, an auction scheme for the cooperative communications, where wireless nodes can trade relay services. TASC makes an important contribution of maintaining truthfulness while fulfilling other design objectives. We show analytically that TASC is truthful and has polynomial time complexity. Extensive experiments show that TASC can achieve multiple economic properties without significant performance degradation compared with pure relay assignment algorithms.", "The Device-to-Device (D2D) communication is a promising technique to empower local wireless communications. However, without proper management it may generate interference to the existing network and degrade the overall performance. By treating each multipath as a virtual antenna, time-reversal (TR) signal transmission in a rich-scattering environment produces a spatial-temporal resonance which efficiently suppresses the inter-user interference (IUI) while boosting the signal power at the target receiver. In this work, we design a TR-based D2D hybrid network, where both primary users (PUs) and D2D pairs share the same time-frequency resources and use the TR focusing effect to combat interference.
With the purpose of enhancing D2D performance while providing performance protection to PUs, an efficient optimal pricing algorithm is proposed to dynamically control interference through TR focusing strength control.", "Device-to-device (D2D) communication offers smartphone users a choice to share files with each other without communicating with the cellular network. In this letter, we discuss the behaviors of two parties in the D2D data transaction model from an economic point of view: the data buyers who wish to buy a certain quantity of data, as well as the data sellers who wish to sell data through the D2D network. The optimal price and purchasing strategies are analyzed and deduced based on game theory." ] }
1903.02928
2922502550
In this paper, we study the problem of resource allocation as well as pricing in the context of Internet of things (IoT) networks. We provide a novel pricing model for IoT services where all the parties involved in the communication scenario, as well as their revenue and cost, are determined. We formulate the resource allocation in the considered model as a multi-objective optimization problem where, in addition to the resource allocation variables, the price values are also optimization variables. To solve the proposed multi-objective optimization problem, we use the scalarization method, which gives different Pareto optimal solutions. We solve the resulting problems using the alternating approach based on the successive convex approximation (SCA) method, which converges to a local solution within a few iterations. We also consider a conventional approach where each entity tries to maximize its own revenue independently. Simulation results indicate that by applying the proposed joint framework, we can increase the total revenue compared to the conventional case while providing almost complete fairness among the players. In contrast, the conventional approach fails to provide such fairness.
The authors of @cite_1 propose a hierarchical mobile edge computing architecture based on LTE-Advanced networks. They study two time-scale mechanisms to allocate the computing and communications resources. For the computing resource allocation, they consider an auction-based pricing model to maximize the utility of the service provider, where the price of each virtual machine is updated at the beginning of each frame. To solve this problem, they apply a heuristic algorithm. Moreover, they propose a centralized optimal solution based on Lagrange multipliers for the bandwidth allocation. The authors of @cite_29 consider a fog computing based system as an appropriate choice to provide low-latency services. The considered network consists of a few data service operators, each of which controls several fog nodes. The fog nodes provide the required data service to a set of subscribers. They formulate a Stackelberg game to study the pricing model for the data service operators as well as the resource allocation problem for the subscribers. They propose a many-to-many matching game to investigate the pairing problem between data service operators and fog nodes. Moreover, they apply another layer of many-to-many matching between the paired fog nodes and serving data service subscribers.
{ "cite_N": [ "@cite_29", "@cite_1" ], "mid": [ "2578151840", "2558116332" ], "abstract": [ "Fog computing is a promising architecture to provide economical and low latency data services for future Internet of Things (IoT)-based network systems. Fog computing relies on a set of low-power fog nodes (FNs) that are located close to the end users to offload the services originally targeting at cloud data centers. In this paper, we consider a specific fog computing network consisting of a set of data service operators (DSOs) each of which controls a set of FNs to provide the required data service to a set of data service subscribers (DSSs). How to allocate the limited computing resources of FNs to all the DSSs to achieve an optimal and stable performance is an important problem. Therefore, we propose a joint optimization framework for all FNs, DSOs, and DSSs to achieve the optimal resource allocation schemes in a distributed fashion. In the framework, we first formulate a Stackelberg game to analyze the pricing problem for the DSOs as well as the resource allocation problem for the DSSs. Under the scenarios that the DSOs can know the expected amount of resource purchased by the DSSs, a many-to-many matching game is applied to investigate the pairing problem between DSOs and FNs. Finally, within the same DSO, we apply another layer of many-to-many matching between each of the paired FNs and serving DSSs to solve the FN-DSS pairing problem. Simulation results show that our proposed framework can significantly improve the performance of the IoT-based network systems.", "The multitiered concept of Internet of Things (IoT) devices, cloudlets, and clouds is facilitating a user-centric IoT. However, in such three tier network, it is still desirable to investigate efficient strategies to offer the computing, storage, and communications resources to the users. 
To this end, this paper proposes a new hierarchical model by introducing the concept of field, shallow, and deep cloudlets, where the cloudlet tier itself is designed in three hierarchical levels based on the principle of the LTE-Advanced backhaul network. Accordingly, we explore a two time-scale approach in which the computing resources are offered in an auction-based profit-maximization manner and then the communications resources are allocated to satisfy the users’ quality of service." ] }
1903.02874
2922303317
There are substantial instructional videos on the Internet, which enable us to acquire knowledge for completing various tasks. However, most existing datasets for instructional video analysis have limitations in diversity and scale, which make them far from many real-world applications where more diverse activities occur. Moreover, it still remains a great challenge to organize and harness such data. To address these problems, we introduce a large-scale dataset called "COIN" for COmprehensive INstructional video analysis. Organized with a hierarchical structure, the COIN dataset contains 11,827 videos of 180 tasks in 12 domains (e.g., vehicles, gadgets, etc.) related to our daily life. With a newly developed toolbox, all the videos are annotated effectively with a series of step descriptions and the corresponding temporal boundaries. Furthermore, we propose a simple yet effective method to capture the dependencies among different steps, which can be easily plugged into conventional proposal-based action detection methods for localizing important steps in instructional videos. In order to provide a benchmark for instructional video analysis, we evaluate a wide range of approaches on the COIN dataset under different evaluation criteria. We expect the introduction of the COIN dataset will promote future in-depth research on instructional video analysis in the community.
The approaches for instructional video analysis can be roughly divided into three categories: unsupervised learning-based, weakly-supervised learning-based and fully-supervised learning-based. For the first category, the step localization task usually takes a video and the corresponding narration or subtitle as multi-modal inputs (the language signal should not be treated as supervision, since the steps are not directly given but need to be further explored in an unsupervised manner). For example, Sener et al. @cite_11 developed a joint generative model to parse both video frames and subtitles into activity steps. Alayrac et al. @cite_15 leveraged the complementary nature of the instructional video and its narration to discover and locate the main steps of a certain task. Generally speaking, the advantage of employing the narration or subtitle is to avoid human annotation, which may cost a huge workload. However, the narration or subtitles may be inaccurate @cite_41 or even irrelevant to the video (for example, in a video with YouTube ID CRRiYji , the instructor talks a lot about other things when she performs the task "injection").
{ "cite_N": [ "@cite_41", "@cite_15", "@cite_11" ], "mid": [ "2784025607", "2962795934", "805710393" ], "abstract": [ "", "We address the problem of automatically learning the main steps to complete a certain task, such as changing a car tire, from a set of narrated instruction videos. The contributions of this paper are three-fold. First, we develop a new unsupervised learning approach that takes advantage of the complementary nature of the input video and the associated narration. The method solves two clustering problems, one in text and one in video, applied one after each other and linked by joint constraints to obtain a single coherent sequence of steps in both modalities. Second, we collect and annotate a new challenging dataset of real-world instruction videos from the Internet. The dataset contains about 800,000 frames for five different tasks1 that include complex interactions between people and objects, and are captured in a variety of indoor and outdoor settings. Third, we experimentally demonstrate that the proposed method can automatically discover, in an unsupervised manner, the main steps to achieve the task and locate the steps in the input videos.", "Human communication typically has an underlying structure. This is reflected in the fact that in many user generated videos, a starting point, ending, and certain objective steps between these two can be identified. In this paper, we propose a method for parsing a video into such semantic steps in an unsupervised way. The proposed method is capable of providing a semantic \"storyline\" of the video composed of its objective steps. We accomplish this utilizing both visual and language cues in a joint generative model. The proposed method can also provide a textual description for each of identified semantic steps and video segments. We evaluate this method on a large number of complex YouTube videos and show results of unprecedented quality for this new and impactful problem." ] }
1903.02874
2922303317
There are substantial instructional videos on the Internet, which enable us to acquire knowledge for completing various tasks. However, most existing datasets for instructional video analysis have limitations in diversity and scale, which make them far from many real-world applications where more diverse activities occur. Moreover, it still remains a great challenge to organize and harness such data. To address these problems, we introduce a large-scale dataset called "COIN" for COmprehensive INstructional video analysis. Organized with a hierarchical structure, the COIN dataset contains 11,827 videos of 180 tasks in 12 domains (e.g., vehicles, gadgets, etc.) related to our daily life. With a newly developed toolbox, all the videos are annotated effectively with a series of step descriptions and the corresponding temporal boundaries. Furthermore, we propose a simple yet effective method to capture the dependencies among different steps, which can be easily plugged into conventional proposal-based action detection methods for localizing important steps in instructional videos. In order to provide a benchmark for instructional video analysis, we evaluate a wide range of approaches on the COIN dataset under different evaluation criteria. We expect the introduction of the COIN dataset will promote future in-depth research on instructional video analysis in the community.
For the second category, @cite_44 developed a hierarchical model based on HMMs and a context-free grammar to parse the main steps in cooking activities. Richard et al. @cite_27 @cite_32 adopted the Viterbi algorithm to solve the probabilistic model of weakly supervised segmentation. Ding et al. @cite_26 proposed a temporal convolutional feature pyramid network to predict frame-wise labels and used soft boundary assignment to iteratively optimize the segmentation results. In this work, we also evaluate these three methods (the details of the weak supervision are described in Section 5.2) to provide benchmark results on COIN.
{ "cite_N": [ "@cite_44", "@cite_27", "@cite_26", "@cite_32" ], "mid": [ "2099614498", "2798345491", "2964311439", "2962916463" ], "abstract": [ "This paper describes a framework for modeling human activities as temporally structured processes. Our approach is motivated by the inherently hierarchical nature of human activities and the close correspondence between human actions and speech: We model action units using Hidden Markov Models, much like words in speech. These action units then form the building blocks to model complex human activities as sentences using an action grammar. To evaluate our approach, we collected a large dataset of daily cooking activities: The dataset includes a total of 52 participants, each performing a total of 10 cooking activities in multiple real-life kitchens, resulting in over 77 hours of video footage. We evaluate the HTK toolkit, a state-of-the-art speech recognition engine, in combination with multiple video feature descriptors, for both the recognition of cooking activities (e.g., making pancakes) as well as the semantic parsing of videos into action units (e.g., cracking eggs). Our results demonstrate the benefits of structured temporal generative approaches over existing discriminative approaches in coping with the complexity of human daily life activities.", "Action detection and temporal segmentation of actions in videos are topics of increasing interest. While fully supervised systems have gained much attention lately, full annotation of each action within the video is costly and impractical for large amounts of video data. Thus, weakly supervised action detection and temporal segmentation methods are of great importance. While most works in this area assume an ordered sequence of occurring actions to be given, our approach only uses a set of actions. Such action sets provide much less supervision since neither action ordering nor the number of action occurrences is known.
In exchange, they can be easily obtained, for instance, from meta-tags, while ordered sequences still require human annotation. We introduce a system that automatically learns to temporally segment and label actions in a video, where the only supervision that is used are action sets. An evaluation on three datasets shows that our method still achieves good results although the amount of supervision is significantly smaller than for other related methods.", "In this work, we address the task of weakly-supervised human action segmentation in long, untrimmed videos. Recent methods have relied on expensive learning models, such as Recurrent Neural Networks (RNN) and Hidden Markov Models (HMM). However, these methods suffer from expensive computational cost, thus are unable to be deployed in large scale. To overcome the limitations, the keys to our design are efficiency and scalability. We propose a novel action modeling framework, which consists of a new temporal convolutional network, named Temporal Convolutional Feature Pyramid Network (TCFPN), for predicting frame-wise action labels, and a novel training strategy for weakly-supervised sequence modeling, named Iterative Soft Boundary Assignment (ISBA), to align action sequences and update the network in an iterative fashion. The proposed framework is evaluated on two benchmark datasets, Breakfast and Hollywood Extended, with four different evaluation metrics. Extensive experimental results show that our methods achieve competitive or superior performance to state-of-the-art methods.", "Video learning is an important task in computer vision and has experienced increasing interest over the recent years. Since even a small amount of videos easily comprises several million frames, methods that do not rely on a frame-level annotation are of special importance. In this work, we propose a novel learning algorithm with a Viterbi-based loss that allows for online and incremental learning of weakly annotated video data. 
We moreover show that explicit context and length modeling leads to huge improvements in video segmentation and labeling tasks and include these models in our framework. On several action segmentation benchmarks, we obtain an improvement of up to 10% compared to current state-of-the-art methods." ] }
1903.02874
2922303317
There are substantial instructional videos on the Internet, which enable us to acquire knowledge for completing various tasks. However, most existing datasets for instructional video analysis have limitations in diversity and scale, which make them far from many real-world applications where more diverse activities occur. Moreover, it still remains a great challenge to organize and harness such data. To address these problems, we introduce a large-scale dataset called "COIN" for COmprehensive INstructional video analysis. Organized with a hierarchical structure, the COIN dataset contains 11,827 videos of 180 tasks in 12 domains (e.g., vehicles, gadgets, etc.) related to our daily life. With a newly developed toolbox, all the videos are annotated effectively with a series of step descriptions and the corresponding temporal boundaries. Furthermore, we propose a simple yet effective method to capture the dependencies among different steps, which can be easily plugged into conventional proposal-based action detection methods for localizing important steps in instructional videos. In order to provide a benchmark for instructional video analysis, we evaluate a wide range of approaches on the COIN dataset under different evaluation criteria. We expect the introduction of the COIN dataset will promote future in-depth research on instructional video analysis in the community.
For the third category, we focus on step localization. This task is related to the area of action detection, where promising progress has also been achieved recently. For example, Zhao et al. @cite_14 developed structured segment networks (SSN) to model the temporal structure of each action instance with a structured temporal pyramid. Xu et al. @cite_55 introduced a Region Convolutional 3D Network (R-C3D) architecture, which was built on C3D @cite_2 and Faster R-CNN @cite_37 , to explore the region information of video frames. Compared with these methods, we attempt to further explore the dependencies among different steps, which lie in the intrinsic structure of instructional videos. Towards this goal, we propose a new method with a bottom-up strategy and a top-down scheme. Our method can be easily plugged into recent proposal-based action detection methods and enhance the performance of step localization in instructional videos.
{ "cite_N": [ "@cite_55", "@cite_14", "@cite_37", "@cite_2" ], "mid": [ "2963247196", "2964216549", "2613718673", "1522734439" ], "abstract": [ "We address the problem of activity detection in continuous, untrimmed video streams. This is a difficult task that requires extracting meaningful spatio-temporal features to capture activities, accurately localizing the start and end times of each activity. We introduce a new model, Region Convolutional 3D Network (R-C3D), which encodes the video streams using a three-dimensional fully convolutional network, then generates candidate temporal regions containing activities, and finally classifies selected regions into specific activities. Computation is saved due to the sharing of convolutional features between the proposal and the classification pipelines. The entire model is trained end-to-end with jointly optimized localization and classification losses. R-C3D is faster than existing methods (569 frames per second on a single Titan X Maxwell GPU) and achieves state-of-the-art results on THUMOS’14. We further demonstrate that our model is a general activity detection framework that does not rely on assumptions about particular dataset properties by evaluating our approach on ActivityNet and Charades. Our code is available at http: ai.bu.edu r-c3d", "Detecting actions in untrimmed videos is an important yet challenging task. In this paper, we present the structured segment network (SSN), a novel framework which models the temporal structure of each action instance via a structured temporal pyramid. On top of the pyramid, we further introduce a decomposed discriminative model comprising two classifiers, respectively for classifying actions and determining completeness. This allows the framework to effectively distinguish positive proposals from background or incomplete ones, thus leading to both accurate recognition and localization. 
These components are integrated into a unified network that can be efficiently trained in an end-to-end fashion. Additionally, a simple yet effective temporal action proposal scheme, dubbed temporal actionness grouping (TAG) is devised to generate high quality action proposals. On two challenging benchmarks, THUMOS14 and ActivityNet, our method remarkably outperforms previous state-of-the-art methods, demonstrating superior accuracy and strong adaptivity in handling actions with various temporal structures.", "State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [7] and Fast R-CNN [5] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully-convolutional network that simultaneously predicts object bounds and objectness scores at each position. RPNs are trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. With a simple alternating optimization, RPN and Fast R-CNN can be trained to share convolutional features. For the very deep VGG-16 model [19], our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007 (73.2 mAP) and 2012 (70.4 mAP) using 300 proposals per image. Code is available at https: github.com ShaoqingRen faster_rcnn.", "We propose a simple, yet effective approach for spatiotemporal feature learning using deep 3-dimensional convolutional networks (3D ConvNets) trained on a large scale supervised video dataset. 
Our findings are three-fold: 1) 3D ConvNets are more suitable for spatiotemporal feature learning compared to 2D ConvNets, 2) A homogeneous architecture with small 3x3x3 convolution kernels in all layers is among the best performing architectures for 3D ConvNets, and 3) Our learned features, namely C3D (Convolutional 3D), with a simple linear classifier outperform state-of-the-art methods on 4 different benchmarks and are comparable with current best methods on the other 2 benchmarks. In addition, the features are compact: achieving 52.8% accuracy on the UCF101 dataset with only 10 dimensions and also very efficient to compute due to the fast inference of ConvNets. Finally, they are conceptually very simple and easy to train and use." ] }
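The step-localization row above frames the task in terms of temporal action proposals; methods such as SSN and R-C3D are matched to ground truth by temporal intersection-over-union between predicted and annotated segments. A minimal sketch of that metric (illustrative only, not tied to any cited implementation):

```python
def temporal_iou(seg_a, seg_b):
    # Intersection-over-union of two temporal segments given as
    # (start, end) pairs in seconds; the standard criterion for
    # matching predicted action/step proposals to ground truth.
    (sa, ea), (sb, eb) = seg_a, seg_b
    inter = max(0.0, min(ea, eb) - max(sa, sb))
    union = (ea - sa) + (eb - sb) - inter
    return inter / union if union > 0 else 0.0

# Overlapping segments: intersection 5s, union 15s -> IoU = 1/3.
print(temporal_iou((0.0, 10.0), (5.0, 15.0)))
```

A proposal is typically counted as correct when its IoU with a ground-truth step exceeds a fixed threshold (e.g. 0.5).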
1903.03087
2922522967
Recently, the label consistent K-SVD (LC-KSVD) algorithm has been successfully applied to image classification. The objective function of LC-KSVD consists of a reconstruction error, a classification error and a discriminative sparse code error with an L0-norm sparse regularization term. The L0-norm, however, leads to an NP-hard problem. Although methods such as orthogonal matching pursuit can help solve this problem to some extent, it is quite difficult to find the optimal sparse solution. To overcome this limitation, we propose a label embedded dictionary learning (LEDL) method that utilises the L1-norm as the sparse regularization term, so that the hard-to-optimize problem is avoided by solving a convex optimization problem. The alternating direction method of multipliers and a blockwise coordinate descent algorithm are then exploited to optimize the corresponding objective function. Extensive experimental results on six benchmark datasets illustrate that the proposed algorithm achieves superior performance compared to several conventional classification algorithms.
SRC was proposed by @cite_24 . Assume that we have @math classes of training samples, denoted by @math , where @math is the training sample matrix of class @math . Each column of the matrix @math is a training sample feature from the @math class. The whole training sample matrix can be denoted as @math , where @math represents the dimension of the sample features and @math is the number of training samples. Supposing that @math is a test sample vector, the sparse representation algorithm aims to solve the following objective function: where @math is the regularization parameter that controls the tradeoff between goodness of fit and sparsity. Sparse representation based classification then finds the minimum residual error over all classes: where @math represents the predicted label of @math and @math is the sparse code of the @math class. The procedure of SRC is shown in Algorithm . Note that the residual @math is associated with only a few images in class @math .
{ "cite_N": [ "@cite_24" ], "mid": [ "2129812935" ], "abstract": [ "We consider the problem of automatically recognizing human faces from frontal views with varying expression and illumination, as well as occlusion and disguise. We cast the recognition problem as one of classifying among multiple linear regression models and argue that new theory from sparse signal representation offers the key to addressing this problem. Based on a sparse representation computed by L1-minimization, we propose a general classification algorithm for (image-based) object recognition. This new framework provides new insights into two crucial issues in face recognition: feature extraction and robustness to occlusion. For feature extraction, we show that if sparsity in the recognition problem is properly harnessed, the choice of features is no longer critical. What is critical, however, is whether the number of features is sufficiently large and whether the sparse representation is correctly computed. Unconventional features such as downsampled images and random projections perform just as well as conventional features such as eigenfaces and Laplacianfaces, as long as the dimension of the feature space surpasses a certain threshold, predicted by the theory of sparse representation. This framework can handle errors due to occlusion and corruption uniformly by exploiting the fact that these errors are often sparse with respect to the standard (pixel) basis. The theory of sparse representation helps predict how much occlusion the recognition algorithm can handle and how to choose the training images to maximize robustness to occlusion. We conduct extensive experiments on publicly available databases to verify the efficacy of the proposed algorithm and corroborate the above claims." ] }
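The SRC procedure described in the row above (sparse-code a test sample over all training columns, then pick the class with the smallest reconstruction residual) can be sketched with a plain ISTA solver for the L1-regularized least-squares step. This is an illustrative toy, not the cited paper's implementation; the dictionary, labels, and parameter values below are made up for demonstration.

```python
import numpy as np

def soft_threshold(x, t):
    # Elementwise soft-thresholding, the proximal operator of the L1 norm.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sparse_code(A, y, lam=0.1, n_iter=500):
    # ISTA for min_x 0.5*||y - A x||^2 + lam*||x||_1.
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - (A.T @ (A @ x - y)) / L, lam / L)
    return x

def src_classify(A, labels, y, lam=0.1):
    # Classify y as the class whose training columns yield the smallest
    # reconstruction residual ||y - A_i x_i||, using only that class's
    # coefficients from the shared sparse code.
    x = sparse_code(A, y, lam)
    residuals = {}
    for c in np.unique(labels):
        mask = labels == c
        residuals[c] = np.linalg.norm(y - A[:, mask] @ x[mask])
    return min(residuals, key=residuals.get)

# Toy dictionary: two classes living near different coordinate axes.
rng = np.random.default_rng(0)
A = np.column_stack([
    np.array([1.0, 0.0, 0.0]) + 0.01 * rng.standard_normal(3),
    np.array([0.9, 0.1, 0.0]) + 0.01 * rng.standard_normal(3),
    np.array([0.0, 0.0, 1.0]) + 0.01 * rng.standard_normal(3),
    np.array([0.0, 0.1, 0.9]) + 0.01 * rng.standard_normal(3),
])
A /= np.linalg.norm(A, axis=0)             # unit-norm training columns
labels = np.array([0, 0, 1, 1])
y = np.array([0.95, 0.05, 0.0])            # lies close to class 0
print(src_classify(A, labels, y))
```

Because the code is sparse, the residual for the wrong class stays close to the norm of the test sample itself, which is what makes the per-class residual a usable classifier.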
1903.02891
2918777711
Federated Learning allows multiple parties to jointly train a deep learning model on their combined data, without any of the participants having to reveal their local data to a centralized server. This form of privacy-preserving collaborative learning, however, comes at the cost of a significant communication overhead during training. To address this problem, several compression methods have been proposed in the distributed training literature that can reduce the amount of required communication by up to three orders of magnitude. These existing methods, however, are only of limited utility in the Federated Learning setting, as they either only compress the upstream communication from the clients to the server (leaving the downstream communication uncompressed) or only perform well under idealized conditions such as iid distribution of the client data, which typically cannot be found in Federated Learning. In this work, we propose Sparse Ternary Compression (STC), a new compression framework that is specifically designed to meet the requirements of the Federated Learning environment. Our experiments on four different learning tasks demonstrate that STC distinctively outperforms Federated Averaging in common Federated Learning scenarios where clients either a) hold non-iid data, b) use small batch sizes during training, or where c) the number of clients is large and the participation rate in every communication round is low. We furthermore show that even if the clients hold iid data and use medium-sized batches for training, STC still behaves Pareto-superior to Federated Averaging in the sense that it achieves fixed target accuracies on our benchmarks within both fewer training iterations and a smaller communication budget.
Quantization methods reduce the entropy of the weight updates by restricting all updates to a reduced set of values. signSGD @cite_6 is a compression method with theoretical convergence guarantees on iid data that quantizes every gradient update to its binary sign, thus reducing the bit size per update by a factor of @math . signSGD also incorporates download compression by aggregating the binary updates from all clients by means of a majority vote. Other authors propose to stochastically quantize the gradients during upload in an unbiased way (TernGrad @cite_2 , QSGD @cite_17 , ATOMO @cite_5 ). These methods are theoretically appealing, as they inherit the convergence properties of regular SGD under relatively mild assumptions. However, their empirical performance and compression rates do not match those of sparsification methods.
{ "cite_N": [ "@cite_5", "@cite_17", "@cite_6", "@cite_2" ], "mid": [ "2805997383", "2769644379", "2786602455", "2617766261" ], "abstract": [ "Distributed model training suffers from communication overheads due to frequent gradient updates transmitted between compute nodes. To mitigate these overheads, several studies propose the use of sparsified stochastic gradients. We argue that these are facets of a general sparsification method that can operate on any possible atomic decomposition. Notable examples include element-wise, singular value, and Fourier decompositions. We present ATOMO, a general framework for atomic sparsification of stochastic gradients. Given a gradient, an atomic decomposition, and a sparsity budget, ATOMO gives a random unbiased sparsification of the atoms minimizing variance. We show that methods such as QSGD and TernGrad are special cases of ATOMO and show that sparsifying gradients in their singular value decomposition (SVD), rather than the coordinate-wise one, can lead to significantly faster distributed training.", "Parallel implementations of stochastic gradient descent (SGD) have received significant research attention, thanks to its excellent scalability properties. A fundamental barrier when parallelizing SGD is the high bandwidth cost of communicating gradient updates between nodes; consequently, several lossy compression heuristics have been proposed, by which nodes only communicate quantized gradients. Although effective in practice, these heuristics do not always guarantee convergence, and it is not clear whether they can be improved. In this paper, we propose Quantized SGD (QSGD), a family of compression schemes for gradient updates which provides convergence guarantees. QSGD allows the user to smoothly trade off communication bandwidth and convergence time: nodes can adjust the number of bits sent per iteration, at the cost of possibly higher variance. 
We show that this trade-off is inherent, in the sense that improving it past some threshold would violate information-theoretic lower bounds. QSGD guarantees convergence for convex and non-convex objectives, under asynchrony, and can be extended to stochastic variance-reduced techniques. When applied to training deep neural networks for image classification and automated speech recognition, QSGD leads to significant reductions in end-to-end training time. For example, on 16 GPUs, we can train the ResNet152 network to full accuracy on ImageNet 1.8x faster than the full-precision variant.", "Training large neural networks requires distributing learning across multiple workers, where the cost of communicating gradients can be a significant bottleneck. signSGD alleviates this problem by transmitting just the sign of each minibatch stochastic gradient. We prove that it can get the best of both worlds: compressed gradients and SGD-level convergence rate. signSGD can exploit mismatches between L1 and L2 geometry: when noise and curvature are much sparser than the gradients, signSGD is expected to converge at the same rate or faster than full-precision SGD. Measurements of the L1 versus L2 geometry of real networks support our theoretical claims, and we find that the momentum counterpart of signSGD is able to match the accuracy and convergence speed of Adam on deep Imagenet models. We extend our theory to the distributed setting, where the parameter server uses majority vote to aggregate gradient signs from each worker enabling 1-bit compression of worker-server communication in both directions. Using a theorem by Gauss, we prove that the non-convex convergence rate of majority vote matches that of distributed SGD. Thus, there is great promise for sign-based optimisation schemes to achieve both communication efficiency and high accuracy.", "High network communication cost for synchronizing gradients and parameters is the well-known bottleneck of distributed training. 
In this work, we propose TernGrad that uses ternary gradients to accelerate distributed deep learning in data parallelism. Our approach requires only three numerical levels {-1, 0, 1}, which can aggressively reduce the communication time. We mathematically prove the convergence of TernGrad under the assumption of a bound on gradients. Guided by the bound, we propose layer-wise ternarizing and gradient clipping to improve its convergence. Our experiments show that applying TernGrad on AlexNet does not incur any accuracy loss and can even improve accuracy. The accuracy loss of GoogLeNet induced by TernGrad is less than 2% on average. Finally, a performance model is proposed to study the scalability of TernGrad. Experiments show significant speed gains for various deep neural networks. Our source code is available." ] }
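The quantization methods surveyed in the row above (TernGrad, QSGD, signSGD) share one core trick: replace each gradient coordinate with a coarse value whose expectation equals the original, so SGD's convergence analysis still applies. A TernGrad-style toy in NumPy, an illustrative sketch under simplified assumptions rather than the published algorithm (no layer-wise ternarizing or gradient clipping):

```python
import numpy as np

def ternarize(g, rng):
    # Stochastically map each coordinate to s * sign(g_i) with probability
    # |g_i| / s (else 0), where s = max_i |g_i|.  Only the scalar s plus
    # two bits per coordinate need to be communicated, and the estimator
    # is unbiased: E[ternarize(g)] == g.
    s = np.max(np.abs(g))
    if s == 0.0:
        return np.zeros_like(g)
    keep = rng.random(g.shape) < np.abs(g) / s
    return s * np.sign(g) * keep

rng = np.random.default_rng(0)
g = np.array([0.5, -0.25, 0.0, 1.0])

t = ternarize(g, rng)                     # values drawn from {-s, 0, +s}
est = np.mean([ternarize(g, rng) for _ in range(20000)], axis=0)
print(t, est)                             # est lands close to g
```

Averaging many independent ternarizations recovers the original gradient, which is the unbiasedness property the paragraph refers to; the price paid for the coarse levels is higher variance per update.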
1903.03094
2922424071
We introduce a large scale crowdsourced text adventure game as a research platform for studying grounded dialogue. In it, agents can perceive, emote, and act whilst conducting dialogue with other agents. Models and humans can both act as characters within the game. We describe the results of training state-of-the-art generative and retrieval models in this setting. We show that in addition to using past dialogue, these models are able to effectively use the state of the underlying world to condition their predictions. In particular, we show that grounding on the details of the local environment, including location descriptions, and the objects (and their affordances) and characters (and their previous actions) present within it allows better predictions of agent behavior and dialogue. We analyze the ingredients necessary for successful grounding in this setting, and how each of these factors relate to agents that can talk and act successfully.
Several position papers have proposed virtual embodiment as a strategy for language research @cite_25 @cite_42 @cite_23 @cite_14 @cite_16 . Single-player text adventure game frameworks for training reinforcement learning agents exist, e.g., TextWorld @cite_31 , but these do not have human dialogue within the game. Other work has proposed small-world setups for instruction following or labeling, but these are much more restricted than the large multi-player text adventure game environment with rich dialogue that we propose here.
{ "cite_N": [ "@cite_14", "@cite_42", "@cite_23", "@cite_31", "@cite_16", "@cite_25" ], "mid": [ "2963367022", "2542258308", "2531240212", "2810346659", "2885825670", "2397253692" ], "abstract": [ "The development of intelligent machines is one of the biggest unsolved challenges in computer science. In this paper, we propose some fundamental properties these machines should have, focusing in particular on communication and learning. We discuss a simple environment that could be used to incrementally teach a machine the basics of natural-language-based communication, as a prerequisite to more complex interaction with human users. We also present some conjectures on the sort of algorithms the machine should support in order to profitably learn from the environment.", "Meaning has been called the \"holy grail\" of a variety of scientific disciplines, ranging from linguistics to philosophy, psychology and the neurosciences. The field of Artificial Intelligence (AI) is very much a part of that list: the development of sophisticated natural language semantics is a sine qua non for achieving a level of intelligence comparable to humans. Embodiment theories in cognitive science hold that human semantic representation depends on sensori-motor experience; the abundant evidence that human meaning representation is grounded in the perception of physical reality leads to the conclusion that meaning must depend on a fusion of multiple (perceptual) modalities. Despite this, AI research in general, and its subdisciplines such as computational linguistics and computer vision in particular, have focused primarily on tasks that involve a single modality. 
Here, we propose virtual embodiment as an alternative, long-term strategy for AI research that is multi-modal in nature and that allows for the kind of scalability required to develop the field coherently and incrementally, in an ethically responsible fashion.", "A distinguishing property of human intelligence is the ability to flexibly use language in order to communicate complex ideas with other humans in a variety of contexts. Research in natural language dialogue should focus on designing communicative agents which can integrate themselves into these contexts and productively collaborate with humans. In this abstract, we propose a general situated language learning paradigm which is designed to bring about robust language agents able to cooperate productively with humans.", "We introduce TextWorld, a sandbox learning environment for the training and evaluation of RL agents on text-based games. TextWorld is a Python library that handles interactive play-through of text games, as well as backend functions like state tracking and reward assignment. It comes with a curated list of games whose features and challenges we have analyzed. More significantly, it enables users to handcraft or automatically generate new games. Its generative mechanisms give precise control over the difficulty, scope, and language of constructed games, and can be used to relax challenges inherent to commercial text games like partial observability and sparse rewards. By generating sets of varied but similar games, TextWorld can also be used to study generalization and transfer learning. We cast text-based games in the Reinforcement Learning formalism, use our framework to develop a set of benchmark games, and evaluate several baseline agents on this set and the curated list.", "Recent progress in artificial intelligence has renewed interest in building systems that learn and think like people. 
Many advances have come from using deep neural networks trained end-to-end in tasks such as object recognition, video games, and board games, achieving performance that equals or even beats that of humans in some respects. Despite their biological inspiration and performance achievements, these systems differ from human intelligence in crucial ways. We review progress in cognitive science suggesting that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn and how they learn it. Specifically, we argue that these machines should (1) build causal models of the world that support explanation and understanding, rather than merely solving pattern recognition problems; (2) ground learning in intuitive theories of physics and psychology to support and enrich the knowledge that is learned; and (3) harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations. We suggest concrete challenges and promising routes toward these goals that can combine the strengths of recent neural network advances with more structured cognitive models.", "Brooks, R.A., Intelligence without representation, Artificial Intelligence 47 (1991) 139-159. Artificial intelligence research has foundered on the issue of representation. When intelligence is approached in an incremental manner, with strict reliance on interfacing to the real world through perception and action, reliance on representation disappears. In this paper we outline our approach to incrementally building complete intelligent Creatures. The fundamental decomposition of the intelligent system is not into independent information processing units which must interface with each other via representations. 
Instead, the intelligent system is decomposed into independent and parallel activity producers which all interface directly to the world through perception and action, rather than interface to each other particularly much. The notions of central and peripheral systems evaporate; everything is both central and peripheral. Based on these principles we have built a very successful series of mobile robots which operate without supervision as Creatures in standard office environments." ] }
1903.03094
2922424071
We introduce a large scale crowdsourced text adventure game as a research platform for studying grounded dialogue. In it, agents can perceive, emote, and act whilst conducting dialogue with other agents. Models and humans can both act as characters within the game. We describe the results of training state-of-the-art generative and retrieval models in this setting. We show that in addition to using past dialogue, these models are able to effectively use the state of the underlying world to condition their predictions. In particular, we show that grounding on the details of the local environment, including location descriptions, and the objects (and their affordances) and characters (and their previous actions) present within it allows better predictions of agent behavior and dialogue. We analyze the ingredients necessary for successful grounding in this setting, and how each of these factors relate to agents that can talk and act successfully.
Several position papers have proposed virtual embodiment as a strategy for language research @cite_25 @cite_42 @cite_23 @cite_14 @cite_16 , and some existing gaming and simulation platforms do support language research. Single-player text adventure game frameworks such as @cite_38 and TextWorld @cite_31 have been proposed, typically for training reinforcement learning agents on the text input of the game, but these do not have human dialogue within the game, while @cite_24 @cite_36 proposed small-world setups for instruction following or labeling. In contrast, we propose a large multi-player text adventure game environment with rich dialogue.
{ "cite_N": [ "@cite_38", "@cite_14", "@cite_36", "@cite_42", "@cite_24", "@cite_23", "@cite_31", "@cite_16", "@cite_25" ], "mid": [ "2949801941", "2963367022", "2112177991", "2542258308", "2770646692", "2531240212", "2810346659", "2885825670", "2397253692" ], "abstract": [ "In this paper, we consider the task of learning control policies for text-based games. In these games, all interactions in the virtual world are through text and the underlying state is not observed. The resulting language barrier makes such environments challenging for automatic game players. We employ a deep reinforcement learning framework to jointly learn state representations and action policies using game rewards as feedback. This framework enables us to map text descriptions into vector representations that capture the semantics of the game states. We evaluate our approach on two game worlds, comparing against baselines using bag-of-words and bag-of-bigrams for state representations. Our algorithm outperforms the baselines on both worlds demonstrating the importance of learning expressive representations.", "The development of intelligent machines is one of the biggest unsolved challenges in computer science. In this paper, we propose some fundamental properties these machines should have, focusing in particular on communication and learning. We discuss a simple environment that could be used to incrementally teach a machine the basics of natural-language-based communication, as a prerequisite to more complex interaction with human users. We also present some conjectures on the sort of algorithms the machine should support in order to profitably learn from the environment.", "We present a general framework and learning algorithm for the task of concept labeling: each word in a given sentence has to be tagged with the unique physical entity (e.g. person, object or location) or abstract concept it refers to. 
Our method allows both world knowledge and linguistic information to be used during learning and prediction. We show experimentally that we can learn to use world knowledge to resolve ambiguities in language, such as word senses or reference resolution, without the use of handcrafted rules or features.", "Meaning has been called the \"holy grail\" of a variety of scientific disciplines, ranging from linguistics to philosophy, psychology and the neurosciences. The field of Artificial Intelligence (AI) is very much a part of that list: the development of sophisticated natural language semantics is a sine qua non for achieving a level of intelligence comparable to humans. Embodiment theories in cognitive science hold that human semantic representation depends on sensori-motor experience; the abundant evidence that human meaning representation is grounded in the perception of physical reality leads to the conclusion that meaning must depend on a fusion of multiple (perceptual) modalities. Despite this, AI research in general, and its subdisciplines such as computational linguistics and computer vision in particular, have focused primarily on tasks that involve a single modality. Here, we propose virtual embodiment as an alternative, long-term strategy for AI research that is multi-modal in nature and that allows for the kind of scalability required to develop the field coherently and incrementally, in an ethically responsible fashion.", "Contrary to most natural language processing research, which makes use of static datasets, humans learn language interactively, grounded in an environment. In this work we propose an interactive learning procedure called Mechanical Turker Descent (MTD) and use it to train agents to execute natural language commands grounded in a fantasy text adventure game. In MTD, Turkers compete to train better agents in the short term, and collaborate by sharing their agents' skills in the long term. 
This results in a gamified, engaging experience for the Turkers and a better quality teaching signal for the agents compared to static datasets, as the Turkers naturally adapt the training data to the agent's abilities.", "A distinguishing property of human intelligence is the ability to flexibly use language in order to communicate complex ideas with other humans in a variety of contexts. Research in natural language dialogue should focus on designing communicative agents which can integrate themselves into these contexts and productively collaborate with humans. In this abstract, we propose a general situated language learning paradigm which is designed to bring about robust language agents able to cooperate productively with humans.", "We introduce TextWorld, a sandbox learning environment for the training and evaluation of RL agents on text-based games. TextWorld is a Python library that handles interactive play-through of text games, as well as backend functions like state tracking and reward assignment. It comes with a curated list of games whose features and challenges we have analyzed. More significantly, it enables users to handcraft or automatically generate new games. Its generative mechanisms give precise control over the difficulty, scope, and language of constructed games, and can be used to relax challenges inherent to commercial text games like partial observability and sparse rewards. By generating sets of varied but similar games, TextWorld can also be used to study generalization and transfer learning. We cast text-based games in the Reinforcement Learning formalism, use our framework to develop a set of benchmark games, and evaluate several baseline agents on this set and the curated list.", "Recent progress in artificial intelligence has renewed interest in building systems that learn and think like people. 
Many advances have come from using deep neural networks trained end-to-end in tasks such as object recognition, video games, and board games, achieving performance that equals or even beats that of humans in some respects. Despite their biological inspiration and performance achievements, these systems differ from human intelligence in crucial ways. We review progress in cognitive science suggesting that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn and how they learn it. Specifically, we argue that these machines should (1) build causal models of the world that support explanation and understanding, rather than merely solving pattern recognition problems; (2) ground learning in intuitive theories of physics and psychology to support and enrich the knowledge that is learned; and (3) harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations. We suggest concrete challenges and promising routes toward these goals that can combine the strengths of recent neural network advances with more structured cognitive models.", "Brooks, R.A., Intelligence without representation, Artificial Intelligence 47 (1991) 139-159. Artificial intelligence research has foundered on the issue of representation. When intelligence is approached in an incremental manner, with strict reliance on interfacing to the real world through perception and action, reliance on representation disappears. In this paper we outline our approach to incrementally building complete intelligent Creatures. The fundamental decomposition of the intelligent system is not into independent information processing units which must interface with each other via representations. 
Instead, the intelligent system is decomposed into independent and parallel activity producers which all interface directly to the world through perception and action, rather than interface to each other particularly much. The notions of central and peripheral systems evaporate; everything is both central and peripheral. Based on these principles we have built a very successful series of mobile robots which operate without supervision as Creatures in standard office environments." ] }
1903.03094
2922424071
We introduce a large scale crowdsourced text adventure game as a research platform for studying grounded dialogue. In it, agents can perceive, emote, and act whilst conducting dialogue with other agents. Models and humans can both act as characters within the game. We describe the results of training state-of-the-art generative and retrieval models in this setting. We show that in addition to using past dialogue, these models are able to effectively use the state of the underlying world to condition their predictions. In particular, we show that grounding on the details of the local environment, including location descriptions, and the objects (and their affordances) and characters (and their previous actions) present within it allows better predictions of agent behavior and dialogue. We analyze the ingredients necessary for successful grounding in this setting, and how each of these factors relate to agents that can talk and act successfully.
A number of visual, rather than text, platforms have been proposed, such as House3D @cite_41 , HoME @cite_17 , MINOS @cite_6 , Matterport3D @cite_40 and AI2-THOR @cite_12 , as well as the Minecraft MALMO project @cite_27 , but they are typically suited to reinforcement learning of actions and involve, at most, templated language for navigation or question answering tasks @cite_13 @cite_8 .
{ "cite_N": [ "@cite_8", "@cite_41", "@cite_6", "@cite_40", "@cite_27", "@cite_13", "@cite_12", "@cite_17" ], "mid": [ "2963738360", "2783375473", "2772390515", "2755286543", "2480004914", "2953326374", "2776202271", "2774661387" ], "abstract": [ "We marry two powerful ideas: deep representation learning for visual recognition and language understanding, and symbolic program execution for reasoning. Our visual question answering (VQA) system first recovers a structural scene representation from the image and a program trace from the question. It then executes the program on the scene representation to obtain an answer. Incorporating symbolic structure as prior knowledge offers three advantages. First, executing programs on a symbolic space is more robust to long program traces. Our model can solve complex reasoning tasks better, achieving an accuracy of 99.8% on the CLEVR dataset. Second, the model is more data- and memory-efficient: it learns to perform well on a small number of training data; it can also encode an image into a compact representation and answer questions offline, using only 1% of the storage needed by the best competing methods. Third, symbolic program execution offers full transparency to the reasoning process; we are thus able to interpret and diagnose each execution step. Our model recovers the ground truth programs precisely.", "Towards bridging the gap between machine and human intelligence, it is of utmost importance to introduce environments that are visually realistic and rich in content. In such environments, one can evaluate and improve a crucial property of practical intelligent systems, namely generalization. In this work, we build House3D, a rich, extensible and efficient environment that contains 45,622 human-designed 3D scenes of houses, ranging from single-room studios to multi-storeyed houses, equipped with a diverse set of fully labeled 3D objects, textures and scene layouts, based on the SUNCG dataset (, 2017). 
With an emphasis on semantic-level generalization, we study the task of concept-driven navigation, RoomNav, using a subset of houses in House3D. In RoomNav, an agent navigates towards a target specified by a semantic concept. To succeed, the agent learns to comprehend the scene it lives in by developing perception, understand the concept by mapping it to the correct semantics, and navigate to the target by obeying the underlying physical rules. We train RL agents with both continuous and discrete action spaces and show their ability to generalize in new unseen environments. In particular, we observe that (1) training is substantially harder on large house sets but results in better generalization, (2) using semantic signals (e.g., segmentation mask) boosts the generalization performance, and (3) gated networks on semantic input signal lead to improved training performance and generalization. We hope House3D, including the analysis of the RoomNav task, serves as a building block towards designing practical intelligent systems and we wish it to be broadly adopted by the community.
A video that shows MINOS can be found at this https URL", "Access to large, diverse RGB-D datasets is critical for training RGB-D scene understanding algorithms. However, existing datasets still cover only a limited number of views or a restricted scale of spaces. In this paper, we introduce Matterport3D, a large-scale RGB-D dataset containing 10,800 panoramic views from 194,400 RGB-D images of 90 building-scale scenes. Annotations are provided with surface reconstructions, camera poses, and 2D and 3D semantic segmentations. The precise global alignment and comprehensive, diverse panoramic set of views over entire buildings enable a variety of supervised and self-supervised computer vision tasks, including keypoint matching, view overlap prediction, normal prediction from color, semantic segmentation, and region classification.", "We present Project Malmo - an AI experimentation platform built on top of the popular computer game Minecraft, and designed to support fundamental research in artificial intelligence. As the AI research community pushes for artificial general intelligence (AGI), experimentation platforms are needed that support the development of flexible agents that learn to solve diverse tasks in complex environments. Minecraft is an ideal foundation for such a platform, as it exposes agents to complex 3D worlds, coupled with infinitely varied game-play. Project Malmo provides a sophisticated abstraction layer on top of Minecraft that supports a wide range of experimentation scenarios, ranging from navigation and survival to collaboration and problem solving tasks. In this demo we present the Malmo platform and its capabilities. 
The platform is publicly released as open source software at IJCAI, to support openness and collaboration in AI research.", "As a step towards developing zero-shot task generalization capabilities in reinforcement learning (RL), we introduce a new RL problem where the agent should learn to execute sequences of instructions after learning useful skills that solve subtasks. In this problem, we consider two types of generalizations: to previously unseen instructions and to longer sequences of instructions. For generalization over unseen instructions, we propose a new objective which encourages learning correspondences between similar subtasks by making analogies. For generalization over sequential instructions, we present a hierarchical architecture where a meta controller learns to use the acquired skills for executing the instructions. To deal with delayed reward, we propose a new neural architecture in the meta controller that learns when to update the subtask, which makes learning more efficient. Experimental results on a stochastic 3D domain show that the proposed ideas are crucial for generalization to longer instructions as well as unseen instructions.", "We introduce The House Of inteRactions (THOR), a framework for visual AI research, available at this http URL AI2-THOR consists of near photo-realistic 3D indoor scenes, where AI agents can navigate in the scenes and interact with objects to perform tasks. AI2-THOR enables research in many different domains including but not limited to deep reinforcement learning, imitation learning, learning by interaction, planning, visual question answering, unsupervised representation learning, object detection and segmentation, and learning models of cognition. 
The goal of AI2-THOR is to facilitate building visually intelligent models and push the research forward in this domain.", "We introduce HoME: a Household Multimodal Environment for artificial agents to learn from vision, audio, semantics, physics, and interaction with objects and other agents, all within a realistic context. HoME integrates over 45,000 diverse 3D house layouts based on the SUNCG dataset, a scale which may facilitate learning, generalization, and transfer. HoME is an open-source, OpenAI Gym-compatible platform extensible to tasks in reinforcement learning, language grounding, sound-based navigation, robotics, multi-agent learning, and more. We hope HoME better enables artificial agents to learn as humans do: in an interactive, multimodal, and richly contextualized setting." ] }
1903.03094
2922424071
We introduce a large scale crowdsourced text adventure game as a research platform for studying grounded dialogue. In it, agents can perceive, emote, and act whilst conducting dialogue with other agents. Models and humans can both act as characters within the game. We describe the results of training state-of-the-art generative and retrieval models in this setting. We show that in addition to using past dialogue, these models are able to effectively use the state of the underlying world to condition their predictions. In particular, we show that grounding on the details of the local environment, including location descriptions, and the objects (and their affordances) and characters (and their previous actions) present within it allows better predictions of agent behavior and dialogue. We analyze the ingredients necessary for successful grounding in this setting, and how each of these factors relate to agents that can talk and act successfully.
Other examples are instruction-following in the Neverwinter Nights game @cite_34 , dialogue about soccer videogames @cite_20 , an interactive language learning game between human and machine in which blocks are placed appropriately given a final plan @cite_18 , and a more open-ended building task using a grid of voxels @cite_32 . In the latter two cases the communication is one-sided: only the human issues instructions, rather than engaging in dialogue, and only the agent is able to act.
{ "cite_N": [ "@cite_18", "@cite_34", "@cite_32", "@cite_20" ], "mid": [ "2964193163", "2066060456", "2609753328", "2950402709" ], "abstract": [ "We introduce a new language learning setting relevant to building adaptive natural language interfaces. It is inspired by Wittgenstein’s language games: a human wishes to accomplish some task (e.g., achieving a certain configuration of blocks), but can only communicate with a computer, who performs the actual actions (e.g., removing all red blocks). The computer initially knows nothing about language and therefore must learn it from scratch through interaction, while the human adapts to the computer’s capabilities. We created a game called SHRDLURN in a blocks world and collected interactions from 100 people playing it. First, we analyze the humans’ strategies, showing that using compositionality and avoiding synonyms correlates positively with task performance. Second, we compare computer strategies, showing that modeling pragmatics on a semantic parsing model accelerates learning for more strategic players.", "Natural language interfaces designed for situationally embedded domains (e.g. cars, videogames) must incorporate knowledge about the users' context to address the many ambiguities of situated language use. We introduce a model of situated language acquisition that operates in two phases. First, intentional context is represented and inferred from user actions using probabilistic context free grammars. Then, utterances are mapped onto this representation in a noisy channel framework. The acquisition model is trained on unconstrained speech collected from subjects playing an interactive game, and tested on an understanding task.", "Our goal is to create a convenient natural language interface for performing well-specified but complex actions such as analyzing data, manipulating text, and querying databases. 
However, existing natural language interfaces for such tasks are quite primitive compared to the power one wields with a programming language. To bridge this gap, we start with a core programming language and allow users to \"naturalize\" the core language incrementally by defining alternative, more natural syntax and increasingly complex concepts in terms of compositions of simpler ones. In a voxel world, we show that a community of users can simultaneously teach a common system a diverse language and use it to build hundreds of complex voxel structures. Over the course of three days, these users went from using only the core language to using the naturalized language in 85.9% of the last 10K utterances.", "Current dialogue systems focus more on textual and speech context knowledge and are usually based on two speakers. Some recent work has investigated static image-based dialogue. However, several real-world human interactions also involve dynamic visual context (similar to videos) as well as dialogue exchanges among multiple speakers. To move closer towards such multimodal conversational skills and visually-situated applications, we introduce a new video-context, many-speaker dialogue dataset based on live-broadcast soccer game videos and chats from Twitch.tv. This challenging testbed allows us to develop visually-grounded dialogue models that should generate relevant temporal and spatial event language from the live video, while also being relevant to the chat history. For strong baselines, we also present several discriminative and generative models, e.g., based on tridirectional attention flow (TriDAF). We evaluate these models via retrieval ranking-recall, automatic phrase-matching metrics, as well as human evaluation studies. We also present dataset analyses, model ablations, and visualizations to understand the contribution of different modalities and model components." ] }
1903.03094
2922424071
We introduce a large scale crowdsourced text adventure game as a research platform for studying grounded dialogue. In it, agents can perceive, emote, and act whilst conducting dialogue with other agents. Models and humans can both act as characters within the game. We describe the results of training state-of-the-art generative and retrieval models in this setting. We show that in addition to using past dialogue, these models are able to effectively use the state of the underlying world to condition their predictions. In particular, we show that grounding on the details of the local environment, including location descriptions, and the objects (and their affordances) and characters (and their previous actions) present within it allows better predictions of agent behavior and dialogue. We analyze the ingredients necessary for successful grounding in this setting, and how each of these factors relate to agents that can talk and act successfully.
There are also setups that consider static language and perception, for example image captioning @cite_26 , video captioning @cite_28 , visual QA @cite_0 and visual dialogue @cite_9 @cite_33 @cite_39 . While grounded, the agent has no ability to act in these tasks. Talk the Walk @cite_19 introduces a navigation game that involves action, perception and two-way dialogue, but is limited to small grids.
{ "cite_N": [ "@cite_26", "@cite_33", "@cite_28", "@cite_9", "@cite_39", "@cite_0", "@cite_19" ], "mid": [ "1861492603", "2899513582", "2963576560", "", "2583186419", "2950761309", "2835434549" ], "abstract": [ "We present a new dataset with the goal of advancing the state-of-the-art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding. This is achieved by gathering images of complex everyday scenes containing common objects in their natural context. Objects are labeled using per-instance segmentations to aid in precise object localization. Our dataset contains photos of 91 objects types that would be easily recognizable by a 4 year old. With a total of 2.5 million labeled instances in 328k images, the creation of our dataset drew upon extensive crowd worker involvement via novel user interfaces for category detection, instance spotting and instance segmentation. We present a detailed statistical analysis of the dataset in comparison to PASCAL, ImageNet, and SUN. Finally, we provide baseline performance analysis for bounding box and segmentation detection results using a Deformable Parts Model.", "To achieve the long-term goal of machines being able to engage humans in conversation, our models should be engaging. We focus on communication grounded in images, whereby a dialogue is conducted based on a given photo, a setup that is naturally engaging to humans (, 2014). We collect a large dataset of grounded human-human conversations, where humans are asked to play the role of a given personality, as the use of personality in conversation has also been shown to be engaging (, 2018). Our dataset, Image-Chat, consists of 202k dialogues and 401k utterances over 202k images using 215 possible personality traits. We then design a set of natural architectures using state-of-the-art image and text representations, considering various ways to fuse the components. 
Automatic metrics and human evaluations show the efficacy of our approach, in particular where our best performing model is preferred over human conversationalists 47.7% of the time", "We present an approach that exploits hierarchical Recurrent Neural Networks (RNNs) to tackle the video captioning problem, i.e., generating one or multiple sentences to describe a realistic video. Our hierarchical framework contains a sentence generator and a paragraph generator. The sentence generator produces one simple short sentence that describes a specific short video interval. It exploits both temporal- and spatial-attention mechanisms to selectively focus on visual elements during generation. The paragraph generator captures the inter-sentence dependency by taking as input the sentential embedding produced by the sentence generator, combining it with the paragraph history, and outputting the new initial state for the sentence generator. We evaluate our approach on two large-scale benchmark datasets: YouTubeClips and TACoS-MultiLevel. The experiments demonstrate that our approach significantly outperforms the current state-of-the-art methods with BLEU@4 scores 0.499 and 0.305 respectively.", "", "The popularity of image sharing on social media and the engagement it creates between users reflects the important role that visual context plays in everyday conversations. We present a novel task, Image-Grounded Conversations (IGC), in which natural-sounding conversations are generated about a shared image. To benchmark progress, we introduce a new multiple-reference dataset of crowd-sourced, event-centric conversations on images. IGC falls on the continuum between chit-chat and goal-directed conversation models, where visual grounding constrains the topic of conversation to event-driven utterances. Experiments with models trained on social media data show that the combination of visual and textual context enhances the quality of generated conversational turns. 
In human evaluation, the gap between human performance and that of both neural and retrieval architectures suggests that multi-modal IGC presents an interesting challenge for dialogue research.", "We propose the task of free-form and open-ended Visual Question Answering (VQA). Given an image and a natural language question about the image, the task is to provide an accurate natural language answer. Mirroring real-world scenarios, such as helping the visually impaired, both the questions and answers are open-ended. Visual questions selectively target different areas of an image, including background details and underlying context. As a result, a system that succeeds at VQA typically needs a more detailed understanding of the image and complex reasoning than a system producing generic image captions. Moreover, VQA is amenable to automatic evaluation, since many open-ended answers contain only a few words or a closed set of answers that can be provided in a multiple-choice format. We provide a dataset containing 0.25M images, 0.76M questions, and 10M answers (www.visualqa.org), and discuss the information it provides. Numerous baselines and methods for VQA are provided and compared with human performance. Our VQA demo is available on CloudCV (this http URL).", "We introduce \"Talk The Walk\", the first large-scale dialogue dataset grounded in action and perception. The task involves two agents (a \"guide\" and a \"tourist\") that communicate via natural language in order to achieve a common goal: having the tourist navigate to a given target location. The task and dataset, which are described in detail, are challenging and their full solution is an open problem that we pose to the community. 
We (i) focus on the task of tourist localization and develop the novel Masked Attention for Spatial Convolutions (MASC) mechanism that allows for grounding tourist utterances into the guide's map, (ii) show it yields significant improvements for both emergent and natural language communication, and (iii) using this method, we establish non-trivial baselines on the full task." ] }
1903.02775
2921926653
Robust segmentation of hair from portrait images remains challenging: hair does not conform to a uniform shape, style or even color; dark hair in particular lacks features. We present a novel computational imaging solution that tackles the problem from both input and processing fronts. We explore using Time-of-Flight (ToF) RGBD sensors on recent mobile devices. We first conduct a comprehensive analysis to show that scattering and inter-reflection cause different noise patterns on hair vs. non-hair regions on ToF images, by changing the light path and or combining multiple paths. We then develop a deep network based approach that employs both ToF depth map and the RGB gradient maps to produce an initial hair segmentation with labeled hair components. We then refine the result by imposing ToF noise prior under the conditional random field. We collect the first ToF RGBD hair dataset with 20k+ head images captured on 30 human subjects with a variety of hairstyles at different view angles. Comprehensive experiments show that our approach outperforms the RGB based techniques in accuracy and robustness and can handle traditionally challenging cases such as dark hair, similar hair background, similar hair foreground, etc.
Hair is among the most challenging objects for recognition, segmentation, and reconstruction. Hair modeling and reconstruction aims to produce lifelike hair for virtual humans in game and film production, as well as to beautify portraits for hairstyle try-on. Image-based approaches achieve higher quality with less effort than physical-simulation-based methods (see @cite_8 and @cite_25 for comprehensive surveys). The core of the problem lies in how to segment the hair component in images.
{ "cite_N": [ "@cite_25", "@cite_8" ], "mid": [ "2789626086", "2164774782" ], "abstract": [ "With the tremendous performance increase of today’s graphics technologies, visual details of digital humans in games, online virtual worlds, and virtual reality applications are becoming significantly more demanding. Hair is a vital component of a person’s identity and can provide strong cues about age, background, and even personality. More and more researchers focus on hair modeling in the fields of computer graphics and virtual reality. Traditional methods are physics-based simulation by setting different parameters. The computation is expensive, and the constructing process is non-intuitive, difficult to control. Conversely, image-based methods have the advantages of fast modeling and high fidelity. This paper surveys the state of the art in the major topics of image-based techniques for hair modeling, including single-view hair modeling, static hair modeling from multiple images, video-based dynamic hair modeling, and the editing and reusing of hair modeling results. We first summarize the single-view approaches, which can be divided into the orientation-field and data-driven-based methods. The static methods from multiple images and dynamic methods are then reviewed in Sections III and IV . In Section V , we also review the editing and reusing of hair modeling results. The future development trends and challenges of image-based methods are proposed in the end.", "Realistic hair modeling is a fundamental part of creating virtual humans in computer graphics. This paper surveys the state of the art in the major topics of hair modeling: hairstyling, hair simulation, and hair rendering. Because of the difficult, often unsolved problems that arise in alt these areas, a broad diversity of approaches is used, each with strengths that make it appropriate for particular applications. 
We discuss each of these major topics in turn, presenting the unique challenges facing each area and describing solutions that have been presented over the years to handle these complex issues. Finally, we outline some of the remaining computational challenges in hair modeling" ] }
1903.02775
2921926653
Robust segmentation of hair from portrait images remains challenging: hair does not conform to a uniform shape, style or even color; dark hair in particular lacks features. We present a novel computational imaging solution that tackles the problem from both input and processing fronts. We explore using Time-of-Flight (ToF) RGBD sensors on recent mobile devices. We first conduct a comprehensive analysis to show that scattering and inter-reflection cause different noise patterns on hair vs. non-hair regions on ToF images, by changing the light path and or combining multiple paths. We then develop a deep network based approach that employs both ToF depth map and the RGB gradient maps to produce an initial hair segmentation with labeled hair components. We then refine the result by imposing ToF noise prior under the conditional random field. We collect the first ToF RGBD hair dataset with 20k+ head images captured on 30 human subjects with a variety of hairstyles at different view angles. Comprehensive experiments show that our approach outperforms the RGB based techniques in accuracy and robustness and can handle traditionally challenging cases such as dark hair, similar hair background, similar hair foreground, etc.
More recent hair segmentation techniques employ deep convolutional neural networks (CNNs), learning to produce features even more discriminative than hand-crafted ones. Approaches in this category generally train on large, manually annotated portrait image datasets and employ semantic segmentation pipelines such as PSPNet @cite_0 to automatically obtain pixel-wise hair masks. @cite_30 adopts a multi-objective CNN to predict both pixel-wise labels and a pairwise edge map. @cite_19 applies Region-CNN (R-CNN) to estimate hair distribution classes and then generates a hair mask with a directional map. @cite_10 and @cite_28 show that fully convolutional networks (FCNs) achieve higher accuracy in hair segmentation. So far, nearly all approaches use RGB color features, whereas we exploit the depth channel, more precisely, the noise pattern on the depth channel.
{ "cite_N": [ "@cite_30", "@cite_28", "@cite_0", "@cite_19", "@cite_10" ], "mid": [ "", "2950568822", "2560023338", "2468764576", "2605339450" ], "abstract": [ "", "Imagine taking a selfie video with your mobile phone and getting as output a 3D model of your head (face and 3D hair strands) that can be later used in VR, AR, and any other domain. State of the art hair reconstruction methods allow either a single photo (thus compromising 3D quality) or multiple views, but they require manual user interaction (manual hair segmentation and capture of fixed camera views that span full 360 degree). In this paper, we describe a system that can completely automatically create a reconstruction from any video (even a selfie video), and we don't require specific views, since taking your -90 degree, 90 degree, and full back views is not feasible in a selfie capture. In the core of our system, in addition to the automatization components, hair strands are estimated and deformed in 3D (rather than 2D as in state of the art) thus enabling superior results. We provide qualitative, quantitative, and Mechanical Turk human studies that support the proposed system, and show results on a diverse variety of videos (8 different celebrity videos, 9 selfie mobile videos, spanning age, gender, hair length, type, and styling).", "Scene parsing is challenging for unrestricted open vocabulary and diverse scenes. In this paper, we exploit the capability of global context information by different-region-based context aggregation through our pyramid pooling module together with the proposed pyramid scene parsing network (PSPNet). Our global prior representation is effective to produce good quality results on the scene parsing task, while PSPNet provides a superior framework for pixel-level prediction. The proposed approach achieves state-of-the-art performance on various datasets. It came first in ImageNet scene parsing challenge 2016, PASCAL VOC 2012 benchmark and Cityscapes benchmark. 
A single PSPNet yields the new record of mIoU accuracy 85.4% on PASCAL VOC 2012 and accuracy 80.2% on Cityscapes.", "We introduce AutoHair, the first fully automatic method for 3D hair modeling from a single portrait image, with no user interaction or parameter tuning. Our method efficiently generates complete and high-quality hair geometries, which are comparable to those generated by the state-of-the-art methods, where user interaction is required. The core components of our method are: a novel hierarchical deep neural network for automatic hair segmentation and hair growth direction estimation, trained over an annotated hair image database; and an efficient and automatic data-driven hair matching and modeling algorithm, based on a large set of 3D hair exemplars. We demonstrate the efficacy and robustness of our method on Internet photos, resulting in a database of around 50K 3D hair models and a corresponding hairstyle space that covers a wide variety of real-world hairstyles. We also show novel applications enabled by our method, including 3D hairstyle space navigation and hair-aware image retrieval.", "Selfies have become commonplace. More and more people take pictures of themselves, and enjoy enhancing these pictures using a variety of image processing techniques. One specific functionality of interest is automatic skin and hair segmentation, as this allows for processing one's skin and hair separately. Traditional approaches require user input in the form of fully specified trimaps, or at least of “scribbles” indicating foreground and background areas, with high-quality masks then generated via matting. Manual input, however, can be difficult or tedious, especially on a smartphone's small screen. In this paper, we propose the use of fully convolutional networks (FCN) and fully-connected CRF to perform pixel-level semantic segmentation into skin, hair and background. 
The trimap thus generated is given as input to a standard matting algorithm, resulting in accurate skin and hair alpha masks. Our method achieves state-of-the-art performance on the LFW Parts dataset [1]. The effectiveness of our method is also demonstrated with a specific application case." ] }
1903.02494
2932869281
Common object counting in a natural scene is a challenging problem in computer vision with numerous real-world applications. Existing image-level supervised common object counting approaches only predict the global object count and rely on additional instance-level supervision to also determine object locations. We propose an image-level supervised approach that provides both the global object count and the spatial distribution of object instances by constructing an object category density map. Motivated by psychological studies, we further reduce image-level supervision using limited object count information (up to four). To the best of our knowledge, we are the first to propose image-level supervised density map estimation for common object counting and demonstrate its effectiveness in image-level supervised instance segmentation. Comprehensive experiments are performed on the PASCAL VOC and COCO datasets. Our approach outperforms existing methods, including those using instance-level supervision, on both datasets for common object counting. Moreover, our approach improves state-of-the-art image-level supervised instance segmentation with a relative gain of 17.8% in terms of average best overlap, on the PASCAL VOC 2012 dataset. Code link: this https URL
Chattopadhyay et al. @cite_18 investigated regression-based common object counting, using image-level (per-category count) and instance-level (bounding box) supervision. The image-level supervised strategy, denoted as glancing, used count annotations from both within and beyond the subitizing range to predict the global count of objects, without providing information about their location. The instance-level (bounding box) supervised strategy, denoted as subitizing, estimated a large number of objects by dividing an image into non-overlapping regions, assuming the object count in each region falls within the subitizing range. Instead, our ILC supervised approach requires neither bounding box annotation nor beyond-subitizing-range count information during training. It then predicts the global object count, even beyond the subitizing range, together with the spatial distribution of object instances.
{ "cite_N": [ "@cite_18" ], "mid": [ "2963686699" ], "abstract": [ "We are interested in counting the number of instances of object classes in natural, everyday images. Previous counting approaches tackle the problem in restricted domains such as counting pedestrians in surveillance videos. Counts can also be estimated from outputs of other vision tasks like object detection. In this work, we build dedicated models for counting designed to tackle the large variance in counts, appearances, and scales of objects found in natural scenes. Our approach is inspired by the phenomenon of subitizing – the ability of humans to make quick assessments of counts given a perceptual signal, for small count values. Given a natural scene, we employ a divide and conquer strategy while incorporating context across the scene to adapt the subitizing idea to counting. Our approach offers consistent improvements over numerous baseline approaches for counting on the PASCAL VOC 2007 and COCO datasets. Subsequently, we study how counting can be used to improve object detection. We then show a proof of concept application of our counting methods to the task of Visual Question Answering, by studying the how many? questions in the VQA and COCO-QA datasets." ] }
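A density map makes the contrast in the record above concrete: the global object count is simply the sum of the map, and a subitizing-style estimate sums per-region counts over non-overlapping tiles. The following is a toy NumPy sketch of that relationship, not any of the cited papers' models; the grid size and the map itself are illustrative assumptions.

```python
import numpy as np

def global_count(density_map):
    """Global object count predicted by a density map (the 'glancing' quantity)."""
    return float(density_map.sum())

def subitizing_count(density_map, grid=3):
    """Sum of per-tile counts over a grid of non-overlapping regions.

    Assumes the map's side lengths are divisible by `grid`; each tile's
    count is expected to fall within the subitizing range (<= 4).
    Returns the total and the per-tile counts.
    """
    h, w = density_map.shape
    th, tw = h // grid, w // grid
    # Reshape into (tile_row, within_row, tile_col, within_col) blocks.
    tiles = density_map[:th * grid, :tw * grid].reshape(grid, th, grid, tw)
    per_tile = tiles.sum(axis=(1, 3))
    return float(per_tile.sum()), per_tile
```

With a consistent density map the two totals coincide by construction, which is why summing region-wise counts can scale to object counts beyond the subitizing range.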
1903.02652
2920528119
End-to-end training has been a popular approach for knowledge base question answering (KBQA). However, real world applications often contain answers of varied quality for users' questions. It is not appropriate to treat all available answers of a user question equally. This paper proposes a novel approach based on multiple instance learning to address the problem of noisy answers by exploring consensus among answers to the same question in training end-to-end KBQA models. In particular, the QA pairs are organized into bags with dynamic instance selection and different options of instance weighting. Curriculum learning is utilized to select instance bags during training. On the public CQA dataset, the new method significantly improves both entity accuracy and the Rouge-L score over a state-of-the-art end-to-end KBQA baseline.
A related but different question answering approach is based on Machine Reading Comprehension (MRC: @cite_28 @cite_3 ). Recent MRC research utilizes relation graphs among entities to integrate evidence in text @cite_7 . Our work is different from MRC in that the knowledge graph in KBQA is manually curated, while MRC extracts the answer from free text.
{ "cite_N": [ "@cite_28", "@cite_7", "@cite_3" ], "mid": [ "2952389302", "2889646190", "2798858969" ], "abstract": [ "This paper describes a novel hierarchical attention network for reading comprehension style question answering, which aims to answer questions for a given narrative paragraph. In the proposed method, attention and fusion are conducted horizontally and vertically across layers at different levels of granularity between question and paragraph. Specifically, it first encodes the question and paragraph with fine-grained language embeddings, to better capture the respective representations at the semantic level. Then it proposes a multi-granularity fusion approach to fully fuse information from both global and attended representations. Finally, it introduces a hierarchical attention network to focus on the answer span progressively with multi-level soft alignment. Extensive experiments on the large-scale SQuAD and TriviaQA datasets validate the effectiveness of the proposed method. At the time of writing the paper (Jan. 12th 2018), our model achieves the first position on the SQuAD leaderboard for both single and ensemble models. We also achieve state-of-the-art results on TriviaQA, AddSent and AddOne-Sent datasets.", "Multi-hop reading comprehension focuses on one type of factoid question, where a system needs to properly integrate multiple pieces of evidence to correctly answer a question. Previous work approximates global evidence with local coreference information, encoding coreference chains with DAG-styled GRU layers within a gated-attention reader. However, coreference is limited in providing information for rich inference. We introduce a new method for better connecting global evidence, which forms more complex graphs compared to DAGs. To perform evidence integration on our graphs, we investigate two recent graph neural networks, namely graph convolutional network (GCN) and graph recurrent network (GRN). 
Experiments on two standard datasets show that richer global information leads to better answers. Our method performs better than all published results on these datasets.", "Current end-to-end machine reading and question answering (Q &A) models are primarily based on recurrent neural networks (RNNs) with attention. Despite their success, these models are often slow for both training and inference due to the sequential nature of RNNs. We propose a new Q &A architecture called QANet, which does not require recurrent networks: Its encoder consists exclusively of convolution and self-attention, where convolution models local interactions and self-attention models global interactions. On the SQuAD dataset, our model is 3x to 13x faster in training and 4x to 9x faster in inference, while achieving equivalent accuracy to recurrent models. The speed-up gain allows us to train the model with much more data. We hence combine our model with data generated by backtranslation from a neural machine translation model. On the SQuAD dataset, our single model, trained with augmented data, achieves 84.6 F1 score on the test set, which is significantly better than the best published F1 score of 81.8." ] }
1903.02652
2920528119
End-to-end training has been a popular approach for knowledge base question answering (KBQA). However, real world applications often contain answers of varied quality for users' questions. It is not appropriate to treat all available answers of a user question equally. This paper proposes a novel approach based on multiple instance learning to address the problem of noisy answers by exploring consensus among answers to the same question in training end-to-end KBQA models. In particular, the QA pairs are organized into bags with dynamic instance selection and different options of instance weighting. Curriculum learning is utilized to select instance bags during training. On the public CQA dataset, the new method significantly improves both entity accuracy and the Rouge-L score over a state-of-the-art end-to-end KBQA baseline.
Multi-instance learning @cite_24 is a variant of supervised learning where inputs are bags of instances. One successful application in NLP is distant supervision of relation extractors by only learning from some of the instances @cite_11 @cite_2 , or assigning different weights to the instances under mechanisms such as selective attention @cite_1 . Our task is a generation problem, where classification techniques developed for relation extraction cannot be applied directly.
{ "cite_N": [ "@cite_24", "@cite_1", "@cite_2", "@cite_11" ], "mid": [ "2110119381", "2515462165", "2251135946", "1604644367" ], "abstract": [ "The multiple instance problem arises in tasks where the training examples are ambiguous: a single example object may have many alternative feature vectors (instances) that describe it, and yet only one of those feature vectors may be responsible for the observed classification of the object. This paper describes and compares three kinds of algorithms that learn axis-parallel rectangles to solve the multiple instance problem. Algorithms that ignore the multiple instance problem perform very poorly. An algorithm that directly confronts the multiple instance problem (by attempting to identify which feature vectors are responsible for the observed classifications) performs best, giving 89 correct predictions on a musk odor prediction task. The paper also illustrates the use of artificial data to debug and compare these algorithms.", "", "Two problems arise when using distant supervision for relation extraction. First, in this method, an already existing knowledge base is heuristically aligned to texts, and the alignment results are treated as labeled data. However, the heuristic alignment can fail, resulting in wrong label problem. In addition, in previous approaches, statistical models have typically been applied to ad hoc features. The noise that originates from the feature extraction process can cause poor performance. In this paper, we propose a novel model dubbed the Piecewise Convolutional Neural Networks (PCNNs) with multi-instance learning to address these two problems. To solve the first problem, distant supervised relation extraction is treated as a multi-instance problem in which the uncertainty of instance labels is taken into account. To address the latter problem, we avoid feature engineering and instead adopt convolutional architecture with piecewise max pooling to automatically learn relevant features. 
Experiments show that our method is effective and outperforms several competitive baseline methods.", "Several recent works on relation extraction have been applying the distant supervision paradigm: instead of relying on annotated text to learn how to predict relations, they employ existing knowledge bases (KBs) as source of supervision. Crucially, these approaches are trained based on the assumption that each sentence which mentions the two related entities is an expression of the given relation. Here we argue that this leads to noisy patterns that hurt precision, in particular if the knowledge base is not directly related to the text we are working with. We present a novel approach to distant supervision that can alleviate this problem based on the following two ideas: First, we use a factor graph to explicitly model the decision whether two entities are related, and the decision whether this relation is mentioned in a given sentence; second, we apply constraint-driven semi-supervision to train this model without any knowledge about which sentences express the relations in our training KB. We apply our approach to extract relations from the New York Times corpus and use Freebase as knowledge base. When compared to a state-of-the-art approach for relation extraction under distant supervision, we achieve 31 error reduction." ] }
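The selective-attention idea mentioned in this record (weighting the instances in a bag rather than trusting them all equally) can be sketched in a few lines of NumPy. This is a hedged illustration, not the cited models' exact architecture: the instance encodings and the relation query vector are assumed to be given by some upstream encoder.

```python
import numpy as np

def selective_attention(instances, query):
    """Weight a bag of instance encodings by their match to a relation query.

    instances: (n, d) array of instance encodings; query: (d,) relation vector.
    Returns (bag_repr, weights), where the softmax weights down-weight
    noisy instances that score poorly against the query.
    """
    scores = instances @ query                       # (n,) instance-query scores
    scores = scores - scores.max()                   # shift for numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()  # softmax normalization
    bag_repr = weights @ instances                   # (d,) weighted bag representation
    return bag_repr, weights
```

Instances whose encodings align with the relation query dominate the bag representation, so mislabeled sentences introduced by distant supervision contribute little to the final prediction.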
1903.02671
2919361114
Word embeddings are already well studied in the general domain, usually trained on large text corpora, and have been evaluated for example on word similarity and analogy tasks, but also as an input to downstream NLP processes. In contrast, in this work we explore the suitability of word embedding technologies in the specialized digital humanities domain. After training embedding models of various types on two popular fantasy novel book series, we evaluate their performance on two task types: term analogies, and word intrusion. To this end, we manually construct test datasets with domain experts. Among the contributions are the evaluation of various word embedding techniques on the different task types, with the findings that even embeddings trained on small corpora perform well for example on the word intrusion task. Furthermore, we provide extensive and high-quality datasets in digital humanities for further investigation, as well as the implementation to easily reproduce or extend the experiments.
Several factors contribute to the recent popularity of fantasy novels as a source for analysis in NLP: i) such books often have a linear timeline suitable for timeline and storyline extraction @cite_38 , and ii) they feature a profound amount of direct speech, which lends itself to dialogue analysis @cite_20 and social network analysis @cite_11 .
{ "cite_N": [ "@cite_38", "@cite_20", "@cite_11" ], "mid": [ "2250736529", "2251149241", "2507252477" ], "abstract": [ "We formulate a proposal that covers a new definition of StoryLines based on the shared data provided by the NewsStory workshop. We re-use the SemEval 2015 Task 4: Timelines dataset to provide a gold-standard dataset and an evaluation measure for evaluating StoryLines extraction systems. We also present a system to explore the feasibility of capturing StoryLines automatically. Finally, based on our initial findings, we also discuss some simple changes that will improve the existing annotations to complete our initial Story-", "This study focuses on personality prediction of protagonists in novels based on the Five-Factor Model of personality. We present and publish a novel collaboratively built dataset of fictional character personality and design our task as a text classification problem. We incorporate a range of semantic features, including WordNet and VerbNet sense-level information and word vector representations. We evaluate three machine learning models based on the speech, actions and predicatives of the main characters, and show that especially the lexical-semantic features significantly outperform the baselines. The most predictive features correspond to reported findings in personality psychology.", "We investigate social networks of characters found in cultural works such as novels and films. These character networks exhibit many of the properties of complex networks such as skewed degree distribution and community structure, but may be of relatively small order with a high multiplicity of edges. Building on recent work of Beveridge and Shan [4], we consider graph extraction, visualization, and network statistics for three novels: Twilight by Stephanie Meyer, Steven King’s The Stand, and J.K. Rowling’s Harry Potter and the Goblet of Fire. 
Coupling with 800 character networks from films found in the http://moviegalaxies.com database, we compare the data sets to simulations from various stochastic complex network models including random graphs with given expected degrees (also known as the Chung-Lu model), the configuration model, and the preferential attachment model. Using machine learning techniques based on motif (or small subgraph) counts, we determine that the Chung-Lu model best fits character networks and we conjecture why this may be the case." ] }
1903.02671
2919361114
Word embeddings are already well studied in the general domain, usually trained on large text corpora, and have been evaluated for example on word similarity and analogy tasks, but also as an input to downstream NLP processes. In contrast, in this work we explore the suitability of word embedding technologies in the specialized digital humanities domain. After training embedding models of various types on two popular fantasy novel book series, we evaluate their performance on two task types: term analogies, and word intrusion. To this end, we manually construct test datasets with domain experts. Among the contributions are the evaluation of various word embedding techniques on the different task types, with the findings that even embeddings trained on small corpora perform well for example on the word intrusion task. Furthermore, we provide extensive and high-quality datasets in digital humanities for further investigation, as well as the implementation to easily reproduce or extend the experiments.
Modern Distributional Semantics is built on the assumption that the sense of a word can be represented as a dense vector (also called an embedding) and that the similarity between two words can be computed as the cosine between the two corresponding vectors @cite_17 @cite_0 . There are numerous techniques for generating such vectors. Count-based methods date back to Latent Semantic Analysis (LSA) @cite_34 and Singular Value Decomposition on positive PMI word-context matrices, as well as other weighting schemes and dimensionality reduction techniques @cite_0 . Predictive models have become very popular in recent years for language modeling and feature learning, especially since the work of @cite_26 on the word2vec toolkit in 2013. Other well-known word embedding types, including GloVe @cite_36 , fastText @cite_25 and LexVec @cite_30 , have received a lot of attention even outside of the NLP community. The variety of embedding models has been tested and evaluated on a set of common tasks and datasets. According to @cite_17 @cite_0 , these tasks usually include synonym detection, word analogies, and word dissimilarity.
{ "cite_N": [ "@cite_30", "@cite_26", "@cite_36", "@cite_0", "@cite_34", "@cite_25", "@cite_17" ], "mid": [ "2410064807", "1614298861", "2250539671", "2251803266", "2147152072", "2952566282", "2508293255" ], "abstract": [ "In this paper we take a state-of-the-art model for distributed word representation that explicitly factorizes the positive pointwise mutual information (PPMI) matrix using window sampling and negative sampling and address two of its shortcomings. We improve syntactic performance by using positional contexts, and solve the need to store the PPMI matrix in memory by working on aggregate data in external memory. The effectiveness of both modifications is shown using word similarity and analogy tasks.", "", "Recent methods for learning vector space representations of words have succeeded in capturing fine-grained semantic and syntactic regularities using vector arithmetic, but the origin of these regularities has remained opaque. We analyze and make explicit the model properties needed for such regularities to emerge in word vectors. The result is a new global logbilinear regression model that combines the advantages of the two major model families in the literature: global matrix factorization and local context window methods. Our model efficiently leverages statistical information by training only on the nonzero elements in a word-word cooccurrence matrix, rather than on the entire sparse matrix or on individual context windows in a large corpus. The model produces a vector space with meaningful substructure, as evidenced by its performance of 75 on a recent word analogy task. It also outperforms related models on similarity tasks and named entity recognition.", "Context-predicting models (more commonly known as embeddings or neural language models) are the new kids on the distributional semantics block. 
Despite the buzz surrounding these models, the literature is still lacking a systematic comparison of the predictive models with classic, count-vector-based distributional semantic approaches. In this paper, we perform such an extensive evaluation, on a wide range of lexical semantics tasks and across many parameter settings. The results, to our own surprise, show that the buzz is fully justified, as the context-predicting models obtain a thorough and resounding victory against their count-based counterparts.", "A new method for automatic indexing and retrieval is described. The approach is to take advantage of implicit higher-order structure in the association of terms with documents (“semantic structure”) in order to improve the detection of relevant documents on the basis of terms found in queries. The particular technique used is singular-value decomposition, in which a large term by document matrix is decomposed into a set of ca. 100 orthogonal factors from which the original matrix can be approximated by linear combination. Documents are represented by ca. 100 item vectors of factor weights. Queries are represented as pseudo-document vectors formed from weighted combinations of terms, and documents with supra-threshold cosine values are returned. Initial tests find this completely automatic method for retrieval to be promising.", "Continuous word representations, trained on large unlabeled corpora, are useful for many natural language processing tasks. Popular models that learn such representations ignore the morphology of words, by assigning a distinct vector to each word. This is a limitation, especially for languages with large vocabularies and many rare words. In this paper, we propose a new approach based on the skipgram model, where each word is represented as a bag of character @math -grams. A vector representation is associated to each character @math -gram; words being represented as the sum of these representations. 
Our method is fast, allowing us to train models on large corpora quickly and to compute word representations for words that did not appear in the training data. We evaluate our word representations on nine different languages, both on word similarity and analogy tasks. By comparing to recently proposed morphological word representations, we show that our vectors achieve state-of-the-art performance on these tasks.", "" ] }
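The count-based pipeline this record mentions (positive PMI weighting of a word-context co-occurrence matrix followed by SVD) can be sketched as follows. The tiny matrix and target dimensionality are illustrative assumptions, not any toolkit's exact recipe.

```python
import numpy as np

def ppmi_svd_embeddings(cooc, dim):
    """Count-based embeddings: PPMI weighting + truncated SVD.

    cooc: (V, C) word-context co-occurrence counts.
    Returns a (V, dim) matrix of word embeddings.
    """
    total = cooc.sum()
    row = cooc.sum(axis=1, keepdims=True)  # word marginals
    col = cooc.sum(axis=0, keepdims=True)  # context marginals
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log((cooc * total) / (row * col))
    # Positive PMI: clamp non-finite and negative values to zero.
    ppmi = np.where(np.isfinite(pmi) & (pmi > 0), pmi, 0.0)
    u, s, _ = np.linalg.svd(ppmi, full_matrices=False)
    return u[:, :dim] * s[:dim]

def cosine(a, b):
    """Cosine similarity, the standard comparison measure for embeddings."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

Words that share contexts end up close under cosine similarity, which is the measure underlying the similarity and analogy evaluations discussed in this section.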
1903.02671
2919361114
Word embeddings are already well studied in the general domain, usually trained on large text corpora, and have been evaluated for example on word similarity and analogy tasks, but also as an input to downstream NLP processes. In contrast, in this work we explore the suitability of word embedding technologies in the specialized digital humanities domain. After training embedding models of various types on two popular fantasy novel book series, we evaluate their performance on two task types: term analogies, and word intrusion. To this end, we manually construct test datasets with domain experts. Among the contributions are the evaluation of various word embedding techniques on the different task types, with the findings that even embeddings trained on small corpora perform well for example on the word intrusion task. Furthermore, we provide extensive and high-quality datasets in digital humanities for further investigation, as well as the implementation to easily reproduce or extend the experiments.
@cite_33 identify problems associated with evaluating embedding models only on word similarity tasks and suggest conducting task- and domain-specific evaluations instead.
{ "cite_N": [ "@cite_33" ], "mid": [ "2387546565" ], "abstract": [ "Lacking standardized extrinsic evaluation methods for vector representations of words, the NLP community has relied heavily on word similarity tasks as a proxy for intrinsic evaluation of word vectors. Word similarity evaluation, which correlates the distance between vectors and human judgments of semantic similarity is attractive, because it is computationally inexpensive and fast. In this paper we present several problems associated with the evaluation of word vectors on word similarity datasets, and summarize existing solutions. Our study suggests that the use of word similarity tasks for evaluation of word vectors is not sustainable and calls for further research on evaluation methods." ] }
1903.02671
2919361114
Word embeddings are already well studied in the general domain, usually trained on large text corpora, and have been evaluated for example on word similarity and analogy tasks, but also as an input to downstream NLP processes. In contrast, in this work we explore the suitability of word embedding technologies in the specialized digital humanities domain. After training embedding models of various types on two popular fantasy novel book series, we evaluate their performance on two task types: term analogies, and word intrusion. To this end, we manually construct test datasets with domain experts. Among the contributions are the evaluation of various word embedding techniques on the different task types, with the findings that even embeddings trained on small corpora perform well for example on the word intrusion task. Furthermore, we provide extensive and high-quality datasets in digital humanities for further investigation, as well as the implementation to easily reproduce or extend the experiments.
Linzen @cite_13 discusses potential pitfalls of the vector offset method of analogy, and presents baselines to improve the utility of vector space evaluations. We apply the suggested baselines in our evaluations.
{ "cite_N": [ "@cite_13" ], "mid": [ "2963176474" ], "abstract": [ "The offset method for solving word analogies has become a standard evaluation tool for vector-space semantic models: it is considered desirable for a space to represent semantic relations as consistent vector offsets. We show that the method's reliance on cosine similarity conflates offset consistency with largely irrelevant neighborhood structure, and propose simple baselines that should be used to improve the utility of the method in vector space evaluation." ] }
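The vector offset method critiqued above, together with a simple baseline in the spirit Linzen suggests (ignoring the offset and returning the nearest neighbor of b alone), can be sketched as follows. The toy vocabulary and vectors are assumptions for illustration, not trained embeddings.

```python
import numpy as np

def nearest(vocab, vectors, target, exclude):
    """Word with the highest cosine similarity to `target`, skipping `exclude`."""
    sims = {}
    for w, v in zip(vocab, vectors):
        if w in exclude:
            continue
        sims[w] = float(v @ target / (np.linalg.norm(v) * np.linalg.norm(target)))
    return max(sims, key=sims.get)

def offset_analogy(vocab, vectors, a, a_star, b):
    """a : a* :: b : ?  ->  argmax_x cosine(x, a* - a + b), excluding query words."""
    lookup = dict(zip(vocab, vectors))
    target = lookup[a_star] - lookup[a] + lookup[b]
    return nearest(vocab, vectors, target, exclude={a, a_star, b})

def only_b_baseline(vocab, vectors, a, a_star, b):
    """Baseline: nearest neighbor of b alone, ignoring the offset entirely."""
    lookup = dict(zip(vocab, vectors))
    return nearest(vocab, vectors, lookup[b], exclude={a, a_star, b})
```

Excluding the query words mirrors common evaluation practice; comparing against the only-b baseline shows how much of the method's accuracy comes from neighborhood structure rather than from offset consistency.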
1903.02671
2919361114
Word embeddings are already well studied in the general domain, usually trained on large text corpora, and have been evaluated for example on word similarity and analogy tasks, but also as an input to downstream NLP processes. In contrast, in this work we explore the suitability of word embedding technologies in the specialized digital humanities domain. After training embedding models of various types on two popular fantasy novel book series, we evaluate their performance on two task types: term analogies, and word intrusion. To this end, we manually construct test datasets with domain experts. Among the contributions are the evaluation of various word embedding techniques on the different task types, with the findings that even embeddings trained on small corpora perform well for example on the word intrusion task. Furthermore, we provide extensive and high-quality datasets in digital humanities for further investigation, as well as the implementation to easily reproduce or extend the experiments.
The majority of terms in our analogy and word intrusion datasets are named entities denoted with proper names. In contrast to other types of noun phrases, proper names are not well studied in DS @cite_37 @cite_1 . In recent years, several works have appeared that investigate the specifics of entities. @cite_24 predict discrete referential attributes of entities from DS models. @cite_35 study entities and concepts in DS, esp. the respective instance-of and hypernymy relations. While our datasets are not directed at specific types of relations, they contain instance-of analogy tasks. Herbelot @cite_37 contextualizes concepts with local entity information, and points out important characteristics of proper names (uniqueness, instantiation and individuality). @cite_1 predict semantic relations between entities collected from Freebase with the analogy method and a feed-forward neural network, and analyze factors for task difficulty, such as 1:n relations and relations with many instances.
{ "cite_N": [ "@cite_24", "@cite_35", "@cite_37", "@cite_1" ], "mid": [ "2250382531", "2741207472", "2250606819", "2741361404" ], "abstract": [ "Distributional methods have proven to excel at capturing fuzzy, graded aspects of meaning (Italy is more similar to Spain than to Germany). In contrast, it is difficult to extract the values of more specific attributes of word referents from distributional representations, attributes of the kind typically found in structured knowledge bases (Italy has 60 million inhabitants). In this paper, we pursue the hypothesis that distributional vectors also implicitly encode referential attributes. We show that a standard supervised regression model is in fact sufficient to retrieve such attributes to a reasonable degree of accuracy: When evaluated on the prediction of both categorical and numeric attributes of countries and cities, the model consistently reduces baseline error by 30 , and is not far from the upper bound. Further analysis suggests that our model is able to “objectify” distributional representations for entities, anchoring them more firmly in the external world in measurable ways.", "The authors have received funding from DFG (SFB 732, project B9) and from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 715154; AMORE), as well as under the Marie Sklodowska-Curie grant agreement No 655577 (LOVe).", "This paper investigates the representation of proper names in distributional semantics. We define three properties we expect names to display: uniqueness (being a unique entity), instantiation (being an instance of a relevant kind) and individuality (being separable from the subspace of concepts). We show that taking a standard distribution as the representation of a name does not satisfy those properties particularly well. 
We propose an alternative method to compute a name vector, which relies on re-weighting the distribution of the appropriate named entity type – in effect, producing an individual out of a kind. We illustrate the behaviour of such representations over some characters from two English novels.", "Paper presented at the 6th Joint Conference on Lexical and Computational Semantics (*SEM 2017), held on 3–4 August 2017 in Vancouver, Canada." ] }
1903.02460
2924689635
Abstract Labeled data sets are necessary to train and evaluate anomaly-based network intrusion detection systems. This work provides a focused literature survey of data sets for network-based intrusion detection and describes the underlying packet- and flow-based network data in detail. The paper identifies 15 different properties to assess the suitability of individual data sets for specific evaluation scenarios. These properties cover a wide range of criteria and are grouped into five categories such as data volume or recording environment for offering a structured search. Based on these properties, a comprehensive overview of existing data sets is given. This overview also highlights the peculiarities of each data set. Furthermore, this work briefly touches upon other sources for network-based data such as traffic generators and data repositories. Finally, we discuss our observations and provide some recommendations for the use and the creation of network-based data sets.
This section reviews related work on network-based data sets for intrusion detection. It should be noted that host-based intrusion detection data sets like ADFA @cite_70 are not considered in this paper. Interested readers may find details on host-based intrusion detection data in Glass-Vanderlan et al. @cite_5 .
{ "cite_N": [ "@cite_70", "@cite_5" ], "mid": [ "1981738628", "2803476375" ], "abstract": [ "Intrusion detection systems are generally tested using datasets compiled at the end of last century, justified by the need for publicly available test data and the lack of any other alternative datasets. Prominent amongst this legacy group is the KDD project. Whilst a seminal contribution at the time of compilation, these datasets no longer represent relevant architecture or contemporary attack protocols, and are beset by data corruptions and inconsistencies. Hence, testing of new IDS approaches against these datasets does not provide an effective performance metric, and contributes to erroneous efficacy claims. This paper introduces a new publicly available dataset which is representative of modern attack structure and methodology. The new dataset is contrasted with the legacy datasets, and the performance difference of commonly used intrusion detection algorithms is highlighted.", "This survey focuses on intrusion detection systems (IDS) that leverage host-based data sources for detecting attacks on enterprise network. The host-based IDS (HIDS) literature is organized by the input data source, presenting targeted sub-surveys of HIDS research leveraging system logs, audit data, Windows Registry, file systems, and program analysis. While system calls are generally included in audit data, several publicly available system call datasets have spawned a flurry of IDS research on this topic, which merits a separate section. Similarly, a section surveying algorithmic developments that are applicable to HIDS but tested on network data sets is included, as this is a large and growing area of applicable literature. To accommodate current researchers, a supplementary section giving descriptions of publicly available datasets is included, outlining their characteristics and shortcomings when used for IDS evaluation. Related surveys are organized and described. 
All sections are accompanied by tables concisely organizing the literature and datasets discussed. Finally, challenges, trends, and broader observations are discussed throughout the survey and in the conclusion, along with future directions of IDS research." ] }
1903.02460
2924689635
Labeled data sets are necessary to train and evaluate anomaly-based network intrusion detection systems. This work provides a focused literature survey of data sets for network-based intrusion detection and describes the underlying packet- and flow-based network data in detail. The paper identifies 15 different properties to assess the suitability of individual data sets for specific evaluation scenarios. These properties cover a wide range of criteria and are grouped into five categories such as data volume or recording environment for offering a structured search. Based on these properties, a comprehensive overview of existing data sets is given. This overview also highlights the peculiarities of each data set. Furthermore, this work briefly touches upon other sources for network-based data such as traffic generators and data repositories. Finally, we discuss our observations and provide some recommendations for the use and the creation of network-based data sets.
Furthermore, several other recent papers touch upon network-based data sets, even though they have a different primary focus. @cite_44 present a comprehensive review of network anomaly detection. The authors describe nine existing data sets and analyze the data sets used by existing anomaly detection methods. Similarly, @cite_68 focus on unsupervised methods for intrusion detection and briefly refer to 12 existing network-based data sets. Yavanoglu and Aydos @cite_0 analyze and compare the most commonly used data sets for intrusion detection. However, their review covers only seven data sets, including other kinds of data sets like HTTP CSIC 2010 @cite_39 . All in all, these works pursue different research objectives and touch upon network-based data sets only marginally.
{ "cite_N": [ "@cite_44", "@cite_68", "@cite_0", "@cite_39" ], "mid": [ "1966809779", "2870670057", "2783664444", "" ], "abstract": [ "Network anomaly detection is an important and dynamic research area. Many network intrusion detection methods and systems (NIDS) have been proposed in the literature. In this paper, we provide a structured and comprehensive overview of various facets of network anomaly detection so that a researcher can become quickly familiar with every aspect of network anomaly detection. We present attacks normally encountered by network intrusion detection systems. We categorize existing network anomaly detection methods and systems based on the underlying computational techniques used. Within this framework, we briefly describe and compare a large number of network anomaly detection methods and systems. In addition, we also discuss tools that can be used by network defenders and datasets that researchers in network anomaly detection can use. We also highlight research directions in network anomaly detection.", "Over the last five years there has been an increase in the frequency and diversity of network attacks. This holds true, as more and more organizations admit compromises on a daily basis. Many misuse and anomaly based intrusion detection systems (IDSs) that rely on either signatures, supervised or statistical methods have been proposed in the literature, but their trustworthiness is debatable. Moreover, as this paper uncovers, the current IDSs are based on obsolete attack classes that do not reflect the current attack trends. For these reasons, this paper provides a comprehensive overview of unsupervised and hybrid methods for intrusion detection, discussing their potential in the domain. We also present and highlight the importance of feature engineering techniques that have been proposed for intrusion detection. Furthermore, we discuss that current IDSs should evolve from simple detection to correlation and attribution. 
We discuss how IDS data could be used to reconstruct and correlate attacks to identify attackers, with the use of advanced data analytics techniques. Finally, we argue how the present IDS attack classes can be extended to match the modern attacks and propose three new classes regarding the outgoing network communication.", "It is an undeniable fact that currently information is a pretty significant presence for all companies or organizations. Therefore protecting its security is crucial, and security models driven by real datasets have become quite important. Operations in the military, government, commercial and civilian domains are linked to the security and availability of computer systems and networks. From the security standpoint, network security is a significant issue because the capacity of attacks is unceasingly rising over the years and they are becoming more sophisticated and distributed. The objective of this review is to explain and compare the most commonly used datasets. This paper focuses on the datasets used in artificial intelligence and machine learning techniques, which are the primary tools for analyzing network traffic and detecting abnormalities.", "" ] }
1903.02613
2920949083
Language-based ecosystems (LBE), i.e., software ecosystems based on a single programming language, are very common. Examples include the npm ecosystem for JavaScript, and PyPI for Python. These environments encourage code reuse between packages, and incorporate utilities - package managers - for automatically resolving dependencies. However, the same aspects that make these systems popular - ease of publishing code and importing external code - also create novel security issues, which have so far seen little study. We present a systematic study of security issues that plague LBEs. These issues are inherent to the ways these ecosystems work and cannot be resolved by fixing software vulnerabilities in either the packages or the utilities, e.g., package manager tools, that build these ecosystems. We systematically characterize recent security attacks from various aspects, including attack strategies, vectors, and goals. Our characterization and in-depth analysis of the npm and PyPI ecosystems, which represent the largest LBEs and cover nearly one million packages, indicates that these ecosystems make an opportune environment for attackers to incorporate stealthy attacks. Overall, we argue that (i) fully automated detection of malicious packages is likely to be unfeasible; however, (ii) tools and metrics that help developers assess the risk of including external dependencies would go a long way toward preventing attacks.
The work most closely related to ours is perhaps Hejderup's master's thesis @cite_19 . This work quantifies the presence of vulnerable packages within the npm repository, and the extent to which other packages depend - directly or indirectly - on them. This analysis is relevant to ours, as it establishes useful practices for the quantitative analysis of a package dependency graph. However, our scope is clearly different from this work---we consider attacks that are inherent to LBEs rather than those arising from software vulnerabilities.
{ "cite_N": [ "@cite_19" ], "mid": [ "2138318139" ], "abstract": [ "We present the Maven Dependency Dataset (MDD), containing metrics, changes and dependencies of 148,253 jar files. Metrics and changes have been calculated at the level of individual methods, classes and packages of multiple library versions. A complete call graph is also presented which includes call, inheritance, containment and historical relationships between all units of the entire repository. In this paper, we describe our dataset and the methodology used to obtain it. We present different conceptual views of MDD and we also describe limitations and data quality issues that researchers using this data should be aware of." ] }
1903.02613
2920949083
Language-based ecosystems (LBE), i.e., software ecosystems based on a single programming language, are very common. Examples include the npm ecosystem for JavaScript, and PyPI for Python. These environments encourage code reuse between packages, and incorporate utilities - package managers - for automatically resolving dependencies. However, the same aspects that make these systems popular - ease of publishing code and importing external code - also create novel security issues, which have so far seen little study. We present a systematic study of security issues that plague LBEs. These issues are inherent to the ways these ecosystems work and cannot be resolved by fixing software vulnerabilities in either the packages or the utilities, e.g., package manager tools, that build these ecosystems. We systematically characterize recent security attacks from various aspects, including attack strategies, vectors, and goals. Our characterization and in-depth analysis of the npm and PyPI ecosystems, which represent the largest LBEs and cover nearly one million packages, indicates that these ecosystems make an opportune environment for attackers to incorporate stealthy attacks. Overall, we argue that (i) fully automated detection of malicious packages is likely to be unfeasible; however, (ii) tools and metrics that help developers assess the risk of including external dependencies would go a long way toward preventing attacks.
A related line of work is the study of application ecosystems, most recently of mobile application markets such as the Google Play store @cite_44 @cite_17 @cite_16 @cite_25 . These works are primarily concerned with applications used by consumers, rather than application components (i.e. packages) that are specific to the language ecosystem and are used by developers. As such, characterizations of app markets (and defenses proposed against malicious applications) are largely orthogonal to our work. The closest work to our own is the detection of cloned applications, whereby a lesser-known or actively malicious developer re-packages and re-publishes a better-known app. Detecting application clones has typically been done via code similarity metrics @cite_2 or behavior @cite_14 . In contrast, our approach is based entirely on the metadata of the entire package repository.
{ "cite_N": [ "@cite_14", "@cite_44", "@cite_2", "@cite_16", "@cite_25", "@cite_17" ], "mid": [ "2072139392", "2141554582", "2401896535", "", "", "2794995912" ], "abstract": [ "Smartphones rely on their vibrant application markets; however, plagiarism threatens the long-term health of these markets. We present a scalable approach to detecting similar Android apps based on their semantic information. We implement our approach in a tool called AnDarwin and evaluate it on 265,359 apps collected from 17 markets including Google Play and numerous third-party markets. In contrast to earlier approaches, AnDarwin has four advantages: it avoids comparing apps pairwise, thus greatly improving its scalability; it analyzes only the app code and does not rely on other information—such as the app’s market, signature, or description—thus greatly increasing its reliability; it can detect both full and partial app similarity; and it can automatically detect library code and remove it from the similarity analysis. We present two use cases for AnDarwin: finding similar apps by different developers (“clones”) and similar apps from the same developer (“rebranded”). In 10 hours, AnDarwin detected at least 4,295 apps that are the victims of cloning and 36,106 rebranded apps. Additionally, AnDarwin detects similar code that is injected into many apps, which may indicate the spread of malware. Our evaluation demonstrates AnDarwin’s ability to accurately detect similar apps on a large scale.", "Although millions of users download and use third-party Android applications from the Google Play store, little information is known on an aggregated level about these applications. We have built PlayDrone, the first scalable Google Play store crawler, and used it to index and analyze over 1,100,000 applications in the Google Play store on a daily basis, the largest such index of Android applications. 
PlayDrone leverages various hacking techniques to circumvent Google's roadblocks for indexing Google Play store content, and makes proprietary application sources available, including source code for over 880,000 free applications. We demonstrate the usefulness of PlayDrone in decompiling and analyzing application content by exploring four previously unaddressed issues: the characterization of Google Play application content at large scale and its evolution over time, library usage in applications and its impact on application portability, duplicative application content in Google Play, and the ineffectiveness of OAuth and related service authentication mechanisms resulting in malicious users being able to easily gain unauthorized access to user data and resources on Amazon Web Services and Facebook.", "The appearance of the Android platform and its popularity has resulted in a sharp rise in the number of reported vulnerabilities and consequently in the number of mobile threats. Leveraging openness of Android app markets and the lack of security testing, malware authors commonly plagiarize Android applications (e.g., through code reuse and repackaging) boosting the amount of malware on the markets and consequently the infection rate.", "", "", "Survivors of intimate partner violence increasingly report that abusers install spyware on devices to track their location, monitor communications, and cause emotional and physical harm. To date there has been only cursory investigation into the spyware used in such intimate partner surveillance (IPS). We provide the first in-depth study of the IPS spyware ecosystem. We design, implement, and evaluate a measurement pipeline that combines web and app store crawling with machine learning to find and label apps that are potentially dangerous in IPS contexts. Ultimately we identify several hundred such IPS-relevant apps. 
While we find dozens of overt spyware tools, the majority are \"dual-use\" apps — they have a legitimate purpose (e.g., child safety or anti-theft), but are easily and effectively repurposed for spying on a partner. We document that a wealth of online resources are available to educate abusers about exploiting apps for IPS. We also show how some dual-use app developers are encouraging their use in IPS via advertisements, blogs, and customer support services. We analyze existing anti-virus and anti-spyware tools, which universally fail to identify dual-use apps as a threat." ] }
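A metadata-based similarity check of the kind alluded to above can be sketched with set overlap. The example below (field choice, tokenization, and threshold are all illustrative assumptions, not the method of any cited system) flags suspiciously similar package descriptions via Jaccard similarity:

```python
def jaccard(a, b):
    """Jaccard similarity of two sets (1.0 = identical)."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def likely_clone(meta_a, meta_b, threshold=0.8):
    """Flag a pair of packages whose metadata token sets are
    suspiciously similar. The 'description' field and the 0.8
    threshold are hypothetical choices for this sketch."""
    tokens_a = set(meta_a["description"].lower().split())
    tokens_b = set(meta_b["description"].lower().split())
    return jaccard(tokens_a, tokens_b) >= threshold

original = {"description": "fast http client for node"}
clone    = {"description": "fast http client for node js"}
print(likely_clone(original, clone))
# prints: True
```

In practice one would compare several metadata fields (names, authors, dependency lists) and tune the threshold against labeled clone pairs.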
1903.02613
2920949083
Language-based ecosystems (LBE), i.e., software ecosystems based on a single programming language, are very common. Examples include the npm ecosystem for JavaScript, and PyPI for Python. These environments encourage code reuse between packages, and incorporate utilities - package managers - for automatically resolving dependencies. However, the same aspects that make these systems popular - ease of publishing code and importing external code - also create novel security issues, which have so far seen little study. We present a systematic study of security issues that plague LBEs. These issues are inherent to the ways these ecosystems work and cannot be resolved by fixing software vulnerabilities in either the packages or the utilities, e.g., package manager tools, that build these ecosystems. We systematically characterize recent security attacks from various aspects, including attack strategies, vectors, and goals. Our characterization and in-depth analysis of the npm and PyPI ecosystems, which represent the largest LBEs and cover nearly one million packages, indicates that these ecosystems make an opportune environment for attackers to incorporate stealthy attacks. Overall, we argue that (i) fully automated detection of malicious packages is likely to be unfeasible; however, (ii) tools and metrics that help developers assess the risk of including external dependencies would go a long way toward preventing attacks.
Other authors have looked at the more general problem of , i.e., vulnerabilities in the open-source applications on which a software package depends. Tellnes' Master's thesis @cite_22 investigates the effect of various classes of dependencies (including those among software components) on the reliability of a system. Various approaches to the containment of vulnerable dependencies are proposed, such as secure wrappers. However, such approaches are explicitly designed for "benign" failure scenarios and unlikely to be effective against malicious dependency injections. @cite_31 investigated vulnerable dependencies in a set of 75 production systems in the Netherlands, finding that over 70
{ "cite_N": [ "@cite_31", "@cite_22" ], "mid": [ "2060337373", "1999265552" ], "abstract": [ "Known security vulnerabilities can be introduced in software systems as a result of being dependent upon third-party components. These documented software weaknesses are “hiding in plain sight” and represent low hanging fruit for attackers. In this paper we present the Vulnerability Alert Service (VAS), a tool-based process to track known vulnerabilities in software systems throughout their life cycle. We studied its usefulness in the context of external software product quality monitoring provided by the Software Improvement Group, a software advisory company based in Amsterdam, the Netherlands. Besides empirically assessing the usefulness of the VAS, we have also leveraged it to gain insight and report on the prevalence of third-party components with known security vulnerabilities in proprietary applications.", "An unpatched vulnerability can lead to security breaches. When a new vulnerability is discovered, it needs to be assessed so that it can be prioritized. A major challenge in software security is the assessment of the potential risk due to vulnerability exploitability. CVSS metrics have become a de facto standard that is commonly used to assess the severity of a vulnerability. The CVSS Base Score measures severity based on exploitability and impact measures. CVSS exploitability is measured based on three metrics: Access Vector, Authentication, and Access Complexity. However, CVSS exploitability measures assign subjective numbers based on the views of experts. Two of its factors, Access Vector and Authentication, are the same for almost all vulnerabilities. CVSS does not specify how the third factor, Access Complexity, is measured, and hence we do not know if it considers software properties as a factor. 
In this paper, we propose an approach that assesses the risk of vulnerability exploitability based on two software properties - attack surface entry points and reach ability analysis. A vulnerability is reachable if it is located in one of the entry points or is located in a function that is called either directly or indirectly by the entry points. The likelihood of an entry point being used in an attack can be assessed by using damage potential-effort ratio in the attack surface metric and the presence of system calls deemed dangerous. To illustrate the proposed method, five reported vulnerabilities of Apache HTTP server 1.3.0 have been examined at the source code level. The results show that the proposed approach, which uses more detailed information, can yield a risk assessment that can be different from the CVSS Base Score." ] }
1903.02628
2922263328
In this paper we present a formulation of the unit commitment problem with AC power flow constraints. It is solved by a Benders decomposition in which the unit commitment master problem is formulated as a mixed-integer problem with linearization of the power generation constraints for improved convergence. Semidefinite programming relaxation of the rectangular AC optimal power flow is used in the subproblem, providing somewhat conservative cuts. Numerical case studies, including a 6-bus network and the IEEE 118-bus network, are provided to test the effectiveness of our proposal. We show in our numerical experiments that the use of such a strategy improves the quality of the feasibility and optimality cuts generated by the solution of the convex relaxation of the subproblem, thereby reducing the number of iterations required for algorithm convergence.
A comprehensive bibliographical review of solution methods proposed for the UC problem and its many variants is beyond the intended scope of this paper. Therefore, this section traces past work on a lineage of algorithms relevant to AC transmission-constrained problem formulations, especially those employing convex relaxation. Since its inception, several techniques to solve the UC problem have been reported @cite_13 , with dynamic programming (DP) having been explored in early works. A fully linearized formulation framework was presented in @cite_25 , for which a so-called security function is proposed such that cold starts are penalized. This function has also been used to assess system security on an hourly, probabilistic basis @cite_5 . Other works have considered information for the removal of infeasible paths to allow for computational tractability. Because of its combinatorial nature, the use of heuristics has been shown to be suitable for large instances of the problem @cite_29 ; e.g., the use of greedy algorithms ordered by average operation cost has been proposed, with these solutions then evaluated for full supply of demand and tested for feasibility by means of an optimal power flow (OPF).
{ "cite_N": [ "@cite_5", "@cite_29", "@cite_13", "@cite_25" ], "mid": [ "2002780669", "2104031208", "2159528632", "2127046496" ], "abstract": [ "A method is described for determining the most economical generating unit commitment policy and loading schedule for a day's operation of an electric utility system while maintaining a desired level of system reliability.", "In the modern world, every industry, including the power industry, faces technological change. Therefore, it is necessary to keep track of international experiences and activities taking place in every field. This paper provides an overview of the Unit Commitment (UC) problem with a bibliographical survey of relevant background, the present state and potential methodologies used for solving the problem concerned. From the literature, the UC problem can be categorized based on the system characteristics and policies applied. This paper reviews not only regulated and deregulated power systems but also power systems with renewable energy sources and storage systems. In addition, it presents a comprehensive review of the methodologies, which covers a wide span of deterministic, meta-heuristic and hybrid approaches including the multi-agent system approach. In terms of contribution, it formulates the problem clearly and describes appropriate approaches to solve the problem. The collected literature on methodologies has been divided into many sections, so that new researchers do not face any difficulty in carrying out research on the UC problem under both the regulated and deregulated power industries as well as power systems with renewable energy sources and storage systems for the next generation.", "With the fast-paced changing technologies in the power industry, new power references addressing new technologies are coming to the market. So there is an urgent need to keep track of international experiences and activities taking place in the field of the modern unit-commitment (UC) problem. 
This paper gives a bibliographical survey, mathematical formulations, and general background of research and developments in the field of the UC problem over the past 35 years, based on more than 150 published articles. The collected literature has been divided into many sections, so that new researchers do not face any difficulty in carrying out research in the area of the next-generation UC problem under both the regulated and deregulated power industry.", "The paper describes a new method of scheduling thermal generating units to achieve minimum operating costs including both running and start-up costs while at the same time maintaining a desired level of system security." ] }
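The greedy, average-cost-ordered commitment heuristic mentioned above is easy to sketch. The following toy (unit names, capacities, and costs are invented; real heuristics also check minimum up/down times and then verify feasibility with an OPF) commits units in merit order until demand plus reserve is covered:

```python
def priority_list_commitment(units, demand, reserve=0.0):
    """Greedy unit commitment: commit units in ascending order of
    average cost per MW until committed capacity covers
    demand + reserve.

    `units` is a list of (name, capacity_mw, cost_per_mw) tuples;
    the data layout is illustrative, not from any cited work.
    """
    target = demand + reserve
    committed, capacity = [], 0.0
    for name, cap, _cost in sorted(units, key=lambda u: u[2]):
        if capacity >= target:
            break
        committed.append(name)
        capacity += cap
    if capacity < target:
        raise ValueError("insufficient capacity for demand + reserve")
    return committed

# Made-up three-unit system
units = [("coal", 300, 20.0), ("gas", 150, 35.0), ("peaker", 80, 90.0)]
print(priority_list_commitment(units, demand=400, reserve=20))
# prints: ['coal', 'gas']
```

Such a priority list gives a fast feasible starting point; the solutions are then evaluated for full supply of demand and checked against network constraints.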
1903.02628
2922263328
In this paper we present a formulation of the unit commitment problem with AC power flow constraints. It is solved by a Benders decomposition in which the unit commitment master problem is formulated as a mixed-integer problem with linearization of the power generation constraints for improved convergence. Semidefinite programming relaxation of the rectangular AC optimal power flow is used in the subproblem, providing somewhat conservative cuts. Numerical case studies, including a 6-bus network and the IEEE 118-bus network, are provided to test the effectiveness of our proposal. We show in our numerical experiments that the use of such a strategy improves the quality of the feasibility and optimality cuts generated by the solution of the convex relaxation of the subproblem, thereby reducing the number of iterations required for algorithm convergence.
In Lagrangian relaxation (LR) applications to UC, it was common to relax the reserve and demand coupling constraints in order to create an individual subproblem for every unit. Other common approaches include representing the individual unit subproblems as mixed-integer problems @cite_23 , with techniques to select the Lagrange multipliers that maximize the lower bounds produced by the relaxation, as well as handling the identical subproblem solutions @cite_8 that drive dual solutions far from the optimum, by means of successively solved subproblems.
{ "cite_N": [ "@cite_23", "@cite_8" ], "mid": [ "2109425988", "2168324649" ], "abstract": [ "Two major decisions are made when scheduling the operations of a fossil-fuel power-generating system over a short time horizon. The “unit commitment” decision indicates what generating units are to be in use at each point in time. The “economic dispatch” decision is the allocation of system demand among the generating units in operation at any point in time. Both these decisions must be considered to achieve a least-cost schedule over the short time horizon. In this paper we present a mixed integer programming model for the short time horizon power-scheduling problem. The objective of the model is to minimize the sum of the unit commitment and economic dispatch costs subject to demand, reserve, and generator capacity and generator schedule constraints. A branch-and-bound algorithm is proposed using a Lagrangian method to decompose the problem into single generator problems. A subgradient method is used to select the Lagrange multipliers that maximize the lower bound produced by the relaxation. We present...", "When the Lagrangian relaxation based methods are applied to solve power system unit commitment, the identical solutions to the subproblems associated with identical units may cause the dual solution to be far away from the optimal solution and lead to serious solution oscillations. As a result, the quality of the feasible solution obtained may be very unsatisfactory. This issue has been long recognized as an inherent disadvantage of Lagrangian relaxation based methods. In this paper, the homogeneous solution issue is identified and analyzed through a simple example. Based on this analysis, a successive subproblem solving method is developed. The new method combines the concepts of augmented Lagrangian relaxation and surrogate subgradient to produce a good search direction at the high level. 
The low level subproblems including those corresponding to the identical units are solved successively so that the commitments of the identical units may not be homogeneous in the dual solution. Compared with the standard Lagrangian relaxation method, the new method can obtain better dual solutions and avoid the solution oscillations. Numerical testing shows the new method is efficient and the quality of the feasible solution is greatly improved." ] }
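The LR mechanics discussed above (decoupled per-unit subproblems plus a subgradient update of the multipliers) can be illustrated on a toy single-period problem. Everything here is an assumption for the sketch: the unit data are invented, only the demand balance is relaxed, and real UC additionally handles reserves, ramping, minimum up/down times, and feasibility recovery:

```python
def solve_subproblems(units, lam):
    """Each relaxed subproblem decouples per unit: run at capacity
    when the unit's marginal cost is below the multiplier `lam`,
    otherwise stay off."""
    return [cap if cost < lam else 0.0 for cost, cap in units]

def lagrangian_uc(units, demand, iters=500, step=1.0):
    """Subgradient ascent on the dual of a toy single-period UC
    with the demand-balance constraint relaxed."""
    lam = 0.0
    for k in range(1, iters + 1):
        output = solve_subproblems(units, lam)
        subgrad = demand - sum(output)              # demand violation
        lam = max(0.0, lam + (step / k) * subgrad)  # diminishing step
    return lam

# (marginal cost $/MWh, capacity MW) -- made-up numbers
units = [(20.0, 300.0), (35.0, 150.0), (90.0, 80.0)]
lam = lagrangian_uc(units, demand=400.0)
# lam oscillates toward the marginal unit's cost (about 35 here)
```

The oscillation of identical subproblem solutions around the optimal multiplier is exactly the pathology that the successive-subproblem and surrogate-subgradient refinements cited above aim to dampen.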
1903.02308
2967782694
Ground robots which are able to navigate a variety of terrains are needed in many domains. One of the key aspects is the capability to adapt to the ground structure, which can be realized through movable body parts that come with additional degrees of freedom (DoF). However, planning respective locomotion is challenging since suitable representations result in large state spaces. Employing an additional abstract representation—which is coarser, lower-dimensional, and semantically enriched—can support the planning. While a desired robot representation and action set of such an abstract representation can be easily defined, the cost function requires large tuning efforts. We propose a method to represent the cost function as a CNN. Training of the network is done on generated artificial data, while it generalizes well to the abstraction of real world scenes. We further apply our method to the problem of search-based planning of hybrid driving-stepping locomotion. The abstract representation is used as a powerful informed heuristic which accelerates planning by multiple orders of magnitude.
Most robot motion planning approaches are either sampling-based, such as Rapidly-exploring Random Trees (RRT) @cite_19 or Probabilistic Roadmaps (PRM) @cite_12 , search-based, such as A* @cite_8 , or a combination of the two @cite_9 . Low-dimensional motion planning in 2D or 3D state spaces can be seen as solved with these approaches. However, it is still challenging to solve high-dimensional, large planning problems, since the required computational power and memory increase significantly with the size of the state space.
{ "cite_N": [ "@cite_19", "@cite_9", "@cite_12", "@cite_8" ], "mid": [ "131069610", "2062710279", "2128990851", "1969483458" ], "abstract": [ "", "The RRT algorithm can deal with motion planning problems in consideration of non-holonomic differential constraints, but it does not take the optimal path problem into consideration in the planning process. Random selection of nodes leads to a different planning cost in every run, because only a metric function is used in new node selection. In this paper, an improved heuristic RRT-A* algorithm is proposed for robot motion planning with non-holonomic constraints. In this algorithm, the cost function of A-Star (A*) is introduced into the RRT algorithm to optimize the performance. Meanwhile, several metric functions are used as the heuristic information functions respectively to measure the performance of different metric functions. The simulation results show that the Manhattan heuristic information function based RRT-A* planning algorithm is better than the other improved RRT algorithms in path optimization and computational cost.", "A new motion planning method for robots in static workspaces is presented. This method proceeds in two phases: a learning phase and a query phase. In the learning phase, a probabilistic roadmap is constructed and stored as a graph whose nodes correspond to collision-free configurations and whose edges correspond to feasible paths between these configurations. These paths are computed using a simple and fast local planner. In the query phase, any given start and goal configurations of the robot are connected to two nodes of the roadmap; the roadmap is then searched for a path joining these two nodes. The method is general and easy to implement. It can be applied to virtually any type of holonomic robot. It requires selecting certain parameters (e.g., the duration of the learning phase) whose values depend on the scene, that is, the robot and its workspace. 
But these values turn out to be relatively easy to choose. Increased efficiency can also be achieved by tailoring some components of the method (e.g., the local planner) to the considered robots. In this paper the method is applied to planar articulated robots with many degrees of freedom. Experimental results show that path planning can be done in a fraction of a second on a contemporary workstation (approximately 150 MIPS), after learning for relatively short periods of time (a few dozen seconds).", "Although the problem of determining the minimum cost path through a graph arises naturally in a number of interesting applications, there has been no underlying theory to guide the development of efficient search procedures. Moreover, there is no adequate conceptual framework within which the various ad hoc search strategies proposed to date can be compared. This paper describes how heuristic information from the problem domain can be incorporated into a formal mathematical theory of graph searching and demonstrates an optimality property of a class of search strategies." ] }
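As a concrete reference for the search-based side of the discussion above, here is a minimal A* on a 4-connected grid with the admissible Manhattan-distance heuristic; the grid, unit step costs, and return convention are illustrative assumptions, not the planner of any cited work:

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected grid (0 = free, 1 = obstacle).
    Returns the shortest path length in steps, or None."""
    def h(p):  # admissible Manhattan heuristic
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    rows, cols = len(grid), len(grid[0])
    open_set = [(h(start), 0, start)]   # (f = g + h, g, cell)
    best = {start: 0}                   # cheapest known g per cell
    while open_set:
        _, g, (r, c) = heapq.heappop(open_set)
        if (r, c) == goal:
            return g
        if g > best.get((r, c), float("inf")):
            continue  # stale heap entry
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = ng
                    heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc)))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))
# prints: 6
```

With an admissible heuristic such as this, A* returns an optimal path; it is exactly this role that the learned abstract representation fills as an informed heuristic in the surveyed planning work.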
1903.02308
2967782694
Ground robots which are able to navigate a variety of terrains are needed in many domains. One of the key aspects is the capability to adapt to the ground structure, which can be realized through movable body parts coming along with additional degrees of freedom (DoF). However, planning respective locomotion is challenging since suitable representations result in large state spaces. Employing an additional abstract representation—which is coarser, lower-dimensional, and semantically enriched—can support the planning. While a desired robot representation and action set of such an abstract representation can be easily defined, the cost function requires large tuning efforts. We propose a method to represent the cost function as a CNN. Training of the network is done on generated artificial data, while it generalizes well to the abstraction of real world scenes. We further apply our method to the problem of search-based planning of hybrid driving-stepping locomotion. The abstract representation is used as a powerful informed heuristic which accelerates planning by multiple orders of magnitude.
A solution to handle large environment sizes is multi-resolution planning @cite_2 . To handle high-dimensional state spaces, a local adaptation of the robot representation is an option. In previous work @cite_5 , we have proposed a search-based approach to plan hybrid driving-stepping locomotion. Similarly, @cite_4 have planned multi-modal paths for a humanoid with a search-based planner. Both approaches handle the occurring high-dimensional state spaces by separating the planning problem with respect to the locomotion mode and apply high-dimensional planning only if required. Nevertheless, both works suffer from the problem of handling large scenarios in feasible time, since the areas represented in high dimensions are still too large.
{ "cite_N": [ "@cite_5", "@cite_4", "@cite_2" ], "mid": [ "2771957408", "2963655875", "73627945" ], "abstract": [ "Hybrid driving-stepping locomotion is an effective approach for navigating in a variety of environments. Long, sufficiently even distances can be quickly covered by driving while obstacles can be overcome by stepping. Our quadruped robot Momaro, with steerable pairs of wheels located at the end of each of its compliant legs, allows such locomotion. Planning respective paths attracted only little attention so far. We propose a navigation planning method which generates hybrid locomotion paths. The planner chooses driving mode whenever possible and takes into account the detailed robot footprint. If steps are required, the planner includes those. To accelerate planning, steps are planned first as abstract manoeuvres and are expanded afterwards into detailed motion sequences. Our method ensures at all times that the robot stays stable. Experiments show that the proposed planner is capable of providing paths in feasible time, even for challenging terrain.", "In this work, we present an approach to planning for humanoid mobility. Humanoid mobility is a challenging problem, as the configuration space for a humanoid robot is intractably large, especially if the robot is capable of performing many types of locomotion. For example, a humanoid robot may be able to perform such tasks as bipedal walking, crawling, and climbing. Our approach is to plan for all these tasks within a single search process. This allows the search to reason about all the capabilities of the robot at any point, and to derive the complete solution such that the plan is guaranteed to be feasible. A key observation is that we often can roughly decompose a mobility task into a sequence of smaller tasks, and focus planning efforts to reason over much smaller search spaces. 
To this end, we leverage the results of a recently developed framework for planning with adaptive dimensionality, and incorporate the capabilities of available controllers directly into the planning process. The resulting planner can also be run in an interleaved fashion alongside execution so that time spent idle is much reduced.", "Grid-based methods for finding cost optimal robot paths around obstacles are popular because of their flexibility and simple implementation. However, their computational complexity becomes unfeasible for real-time path planning if the resolution of the grid is high." ] }
1903.02308
2967782694
Ground robots which are able to navigate a variety of terrains are needed in many domains. One of the key aspects is the capability to adapt to the ground structure, which can be realized through movable body parts coming along with additional degrees of freedom (DoF). However, planning respective locomotion is challenging since suitable representations result in large state spaces. Employing an additional abstract representation—which is coarser, lower-dimensional, and semantically enriched—can support the planning. While a desired robot representation and action set of such an abstract representation can be easily defined, the cost function requires large tuning efforts. We propose a method to represent the cost function as a CNN. Training of the network is done on generated artificial data, while it generalizes well to the abstraction of real world scenes. We further apply our method to the problem of search-based planning of hybrid driving-stepping locomotion. The abstract representation is used as a powerful informed heuristic which accelerates planning by multiple orders of magnitude.
However, those approaches simply neglect information in their coarse low-dimensional representations, which might result in wrong assessments, especially for complex terrain. This is addressed by abstraction: representations are coarser but semantically enriched to compensate for the information loss. A theoretical basis for abstraction in search-based planning has been given by Holte et al. @cite_18 . In @cite_6 , we have extended hybrid driving-stepping locomotion planning to three levels of abstraction. With increasing abstraction, the environment is represented at a coarser resolution but with additional hand-crafted features such as height differences or terrain classes. In addition, the robot representation has a coarser resolution and fewer dimensions with increasing abstraction. The cost functions were manually tuned by iteratively comparing costs on a small set of exemplary tasks and adjusting parameters. The abstract representations accelerate planning by multiple orders of magnitude while the path quality stays comparable. In particular, using the most abstract representation as a heuristic leads to a significant speedup. However, the design of descriptive features and the tuning of cost functions require extensive manual parametrization and are very dependent on the chosen set of exemplary tasks.
{ "cite_N": [ "@cite_18", "@cite_6" ], "mid": [ "128952136", "2891700736" ], "abstract": [ "Abstraction, in search, problem solving, and planning, works by replacing one state space by another (the \"abstract\" space) that is easier to search. The results of the search in the abstract space are used to guide search in the original space. For instance, the length of the abstract solution can be used as a heuristic for A* in searching in the original space. However, there are two obstacles to making this work efficiently. The first is a theorem (Valtorta, 1984) stating that for a large class of abstractions, \"embedding abstractions,\" every state expanded by blind search must also be expanded by A* when its heuristic is computed in this way. The second obstacle arises because in solving a problem A* needs repeatedly to do a full search of the abstract space while computing its heuristic. This paper introduces a new abstraction-induced search technique, \"Hierarchical A*,\" that gets around both of these difficulties: first, by drawing from a different class of abstractions, \"homomorphism abstractions,\" and, secondly, by using novel caching techniques to avoid repeatedly expanding the same states in successive searches in the abstract space. Hierarchical A* outperforms blind search on all the search spaces studied.", "Navigating in search and rescue environments is challenging, since a variety of terrains has to be considered. Hybrid driving-stepping locomotion, as provided by our robot Momaro, is a promising approach. Similar to other locomotion methods, it incorporates many degrees of freedom—offering high flexibility but making planning computationally expensive for larger environments. We propose a navigation planning method, which unifies different levels of representation in a single planner. In the vicinity of the robot, it provides plans with a fine resolution and a high robot state dimensionality. 
With increasing distance from the robot, plans become coarser and the robot state dimensionality decreases. We compensate this loss of information by enriching coarser representations with additional semantics. Experiments show that the proposed planner provides plans for large, challenging scenarios in feasible time." ] }
1903.02308
2967782694
Ground robots which are able to navigate a variety of terrains are needed in many domains. One of the key aspects is the capability to adapt to the ground structure, which can be realized through movable body parts coming along with additional degrees of freedom (DoF). However, planning respective locomotion is challenging since suitable representations result in large state spaces. Employing an additional abstract representation—which is coarser, lower-dimensional, and semantically enriched—can support the planning. While a desired robot representation and action set of such an abstract representation can be easily defined, the cost function requires large tuning efforts. We propose a method to represent the cost function as a CNN. Training of the network is done on generated artificial data, while it generalizes well to the abstraction of real world scenes. We further apply our method to the problem of search-based planning of hybrid driving-stepping locomotion. The abstract representation is used as a powerful informed heuristic which accelerates planning by multiple orders of magnitude.
In recent years, learning-based approaches to solving robot motion planning problems have been proposed. In @cite_16 and @cite_10 , CNNs have been trained to map camera images directly to motor commands, for manipulation tasks or for steering a self-driving car. However, the long-term goal-directed behavior of such approaches is usually poor, or the training would require unreasonable amounts of data and time. @cite_23 have proposed a differentiable approximation of the value iteration algorithm which can be represented as a CNN---the Value Iteration Networks. Their performance has been evaluated on small 2D grid worlds. Similarly, Karkus et al. @cite_0 have proposed the QMDP-Net, which is also capable of planning in 2D grid worlds. @cite_22 have proposed Universal Planning Networks which map images of the initial and goal scene to actions. These three approaches point out the general problem of learning-based approaches at the current state of the art: the required amount of training data and the required network complexity are not manageable for large, high-dimensional planning problems.
{ "cite_N": [ "@cite_22", "@cite_0", "@cite_23", "@cite_16", "@cite_10" ], "mid": [ "2795756076", "2962893898", "2948138929", "2964161785", "2342840547" ], "abstract": [ "A key challenge in complex visuomotor control is learning abstract representations that are effective for specifying goals, planning, and generalization. To this end, we introduce universal planning networks (UPN). UPNs embed differentiable planning within a goal-directed policy. This planning computation unrolls a forward model in a latent space and infers an optimal action plan through gradient descent trajectory optimization. The plan-by-gradient-descent process and its underlying representations are learned end-to-end to directly optimize a supervised imitation learning objective. We find that the representations learned are not only effective for goal-directed visual imitation via gradient-based trajectory optimization, but can also provide a metric for specifying goals using images. The learned representations can be leveraged to specify distance-based rewards to reach new target states for model-free reinforcement learning, resulting in substantially more effective learning when solving new tasks described via image-based goals. We were able to achieve successful transfer of visuomotor planning strategies across robots with significantly different morphologies and actuation capabilities.", "This paper introduces the QMDP-net, a neural network architecture for planning under partial observability. The QMDP-net combines the strengths of model-free learning and model-based planning. It is a recurrent policy network, but it represents a policy for a parameterized set of tasks by connecting a model with a planning algorithm that solves the model, thus embedding the solution structure of planning in a network learning architecture. The QMDP-net is fully differentiable and allows for end-to-end training. 
We train a QMDP-net on different tasks so that it can generalize to new ones in the parameterized task set and “transfer” to other similar tasks beyond the set. In preliminary experiments, QMDP-net showed strong performance on several robotic tasks in simulation. Interestingly, while QMDP-net encodes the QMDP algorithm, it sometimes outperforms the QMDP algorithm in the experiments, as a result of end-to-end learning.", "We introduce the value iteration network (VIN): a fully differentiable neural network with a 'planning module' embedded within. VINs can learn to plan, and are suitable for predicting outcomes that involve planning-based reasoning, such as policies for reinforcement learning. Key to our approach is a novel differentiable approximation of the value-iteration algorithm, which can be represented as a convolutional neural network, and trained end-to-end using standard backpropagation. We evaluate VIN based policies on discrete and continuous path-planning domains, and on a natural-language based search task. We show that by learning an explicit planning computation, VIN policies generalize better to new, unseen domains.", "Policy search methods can allow robots to learn control policies for a wide range of tasks, but practical applications of policy search often require hand-engineered components for perception, state estimation, and low-level control. In this paper, we aim to answer the following question: does training the perception and control systems jointly end-to-end provide better performance than training each component separately? To this end, we develop a method that can be used to learn policies that map raw image observations directly to torques at the robot's motors. 
The policies are represented by deep convolutional neural networks (CNNs) with 92,000 parameters, and are trained using a guided policy search method, which transforms policy search into supervised learning, with supervision provided by a simple trajectory-centric reinforcement learning method. We evaluate our method on a range of real-world manipulation tasks that require close coordination between vision and control, such as screwing a cap onto a bottle, and present simulated comparisons to a range of prior policy search methods.", "We trained a convolutional neural network (CNN) to map raw pixels from a single front-facing camera directly to steering commands. This end-to-end approach proved surprisingly powerful. With minimum training data from humans the system learns to drive in traffic on local roads with or without lane markings and on highways. It also operates in areas with unclear visual guidance such as in parking lots and on unpaved roads. The system automatically learns internal representations of the necessary processing steps such as detecting useful road features with only the human steering angle as the training signal. We never explicitly trained it to detect, for example, the outline of roads. Compared to explicit decomposition of the problem, such as lane marking detection, path planning, and control, our end-to-end system optimizes all processing steps simultaneously. We argue that this will eventually lead to better performance and smaller systems. Better performance will result because the internal components self-optimize to maximize overall system performance, instead of optimizing human-selected intermediate criteria, e.g., lane detection. Such criteria understandably are selected for ease of human interpretation which doesn't automatically guarantee maximum system performance. Smaller networks are possible because the system learns to solve the problem with the minimal number of processing steps. 
We used an NVIDIA DevBox and Torch 7 for training and an NVIDIA DRIVE(TM) PX self-driving car computer also running Torch 7 for determining where to drive. The system operates at 30 frames per second (FPS)." ] }
1903.02308
2967782694
Ground robots which are able to navigate a variety of terrains are needed in many domains. One of the key aspects is the capability to adapt to the ground structure, which can be realized through movable body parts coming along with additional degrees of freedom (DoF). However, planning respective locomotion is challenging since suitable representations result in large state spaces. Employing an additional abstract representation—which is coarser, lower-dimensional, and semantically enriched—can support the planning. While a desired robot representation and action set of such an abstract representation can be easily defined, the cost function requires large tuning efforts. We propose a method to represent the cost function as a CNN. Training of the network is done on generated artificial data, while it generalizes well to the abstraction of real world scenes. We further apply our method to the problem of search-based planning of hybrid driving-stepping locomotion. The abstract representation is used as a powerful informed heuristic which accelerates planning by multiple orders of magnitude.
To summarize, learning-based planning approaches can handle local problems with limited state space sizes quickly, without performing extensive searches. In contrast, traditional planning approaches show good goal-directed behavior but might get stuck in extensive searches for complex high-dimensional problems. Hence, it is promising to combine these approaches and merge the advantages of both. Faust et al. @cite_21 use a reinforcement learning agent to learn short-range, point-to-point navigation policies for 2D and 3D action spaces which capture the robot dynamics and task constraints without considering the large-scale topology. Sampling-based planning is used to plan waypoints which give the planner a long-range, goal-directed behavior.
{ "cite_N": [ "@cite_21" ], "mid": [ "2962917939" ], "abstract": [ "We present PRM-RL, a hierarchical method for long-range navigation task completion that combines sampling-based path planning with reinforcement learning (RL). The RL agents learn short-range, point-to-point navigation policies that capture robot dynamics and task constraints without knowledge of the large-scale topology. Next, the sampling-based planners provide roadmaps which connect robot configurations that can be successfully navigated by the RL agent. The same RL agents are used to control the robot under the direction of the planning, enabling long-range navigation. We use the Probabilistic Roadmaps (PRMs) for the sampling-based planner. The RL agents are constructed using feature-based and deep neural net policies in continuous state and action spaces. We evaluate PRM-RL, both in simulation and on-robot, on two navigation tasks with non-trivial robot dynamics: end-to-end differential drive indoor navigation in office environments, and aerial cargo delivery in urban environments with load displacement constraints. Our results show improvement in task completion over both RL agents on their own and traditional sampling-based planners. In the indoor navigation task, PRM-RL successfully completes up to 215 m long trajectories under noisy sensor conditions, and the aerial cargo delivery completes flights over 1000 m without violating the task constraints in an environment 63 million times larger than used in training." ] }
1903.02217
2921019477
Tendon-driven snake-like arms have been used to create highly dexterous continuum robots so that they can bend around anatomical obstacles to access clinical targets. In this paper, we propose a design algorithm for developing patient-specific surgical continuum manipulators optimized for oriental dexterity constrained by task-space obstacles. The algorithm uses a sampling-based approach to finding the dexterity distribution in the workspace discretized by voxels. The oriental dexterity measured in the region of interest in the task-space formed a fitness function to be optimized through differential evolution. This was implemented in the design of a tendon-driven manipulator for knee arthroscopy. The results showed a feasible design that achieves significantly better dexterity than a rigid tool. This highlights the potential of the proposed method to be used in the process of designing dexterous surgical manipulators in the field.
Optimizing the design of snake-like robotic arms has often involved finding trade-offs between the mechanical design and its task-space performance. Initial studies on optimal snake-like robots involved trade-offs between the workspace and certain mechanical properties, such as the stiffness of a flexible backbone joint @cite_8 . When the patient-specific paradigm emerged, much of the literature in the field developed cost functions to describe trade-offs for designing continuum robots with better workspaces and path planning in patient-specific anatomical environments. This was strongly evident in the design of concentric tube robots, where 3D patient anatomies were obtained from Magnetic Resonance Imaging (MRI) and optimizations were performed to find the minimal tube lengths and curvatures while penalizing collisions in a simulated navigation task to desired targets @cite_5 @cite_9 . In other work, a sampling-based motion planning approach to the problem was proposed to avoid complex inverse kinematics calculations @cite_2 .
{ "cite_N": [ "@cite_5", "@cite_9", "@cite_2", "@cite_8" ], "mid": [ "2162646641", "2111015103", "2078850086", "" ], "abstract": [ "Concentric tube robots are a novel continuum robot technology that is well suited to minimally invasive surgeries inside small body cavities such as the heart. These robots are constructed of concentrically combined pre-curved elastic tubes to form 3D curves. Each telescopic section of the robot is either of fixed or variable curvature. One advantage of this approach is that the component tube curvatures, lengths and stiffnesses can easily be fabricated to be procedure- and patient-specific. This paper proposes an optimization framework for solving the robot design problem. Given a 3D description of the constraining anatomy, the number of fixed and variable curvature robot sections and a tip workspace description, the algorithm solves for the robot design that possesses the desired workspace, remains inside the anatomical constraints and minimizes the curvature and length of all sections. The approach is illustrated in the context of beating-heart closure of atrial septal defects.", "We propose a novel systematic approach to optimizing the design of concentric tube robots for neurosurgical procedures. These procedures require that the robot approach specified target sites while navigating and operating within an anatomically constrained work space. The availability of preoperative imaging makes our approach particularly suited for neurosurgery, and we illustrate the method with the example of endoscopic choroid plexus ablation. A novel parameterization of the robot characteristics is used in conjunction with a global pattern search optimization method. The formulation returns the design of the least-complex robot capable of reaching single or multiple target points in a confined space with constrained optimization metrics. 
A particular advantage of this approach is that it identifies the need for either fixed-curvature versus variable-curvature sections. We demonstrate the performance of the method in four clinically relevant examples.", "We introduce a method for task-oriented design of concentric tube robots, which are tentacle-like robots with the potential to enable new minimally invasive surgical procedures. Our objective is to create a robot design on a patient-specific and surgery-specific basis to enable the robot to reach multiple clinically relevant sites while avoiding anatomical obstacles. Our method uses a mechanically accurate model of concentric tube robot kinematics that considers a robot's time-varying shape throughout the performance of a task. Our method combines a search over a robot's design space with sampling-based motion planning over its configuration space to compute a design under which the robot can feasibly perform a specified task without damaging surrounding tissues. To accelerate the algorithm, we leverage design coherence, the observation that collision-free configuration spaces of robots of similar designs are similar. If a solution exists, our method is guaranteed, as time is allowed to increase, to find a design and corresponding feasible motion plan. We provide examples illustrating the importance of using mechanically accurate models during design and motion planning and demonstrating our method's effectiveness in a medically motivated simulated scenario involving navigation through the lung.", "" ] }
1903.02217
2921019477
Tendon-driven snake-like arms have been used to create highly dexterous continuum robots so that they can bend around anatomical obstacles to access clinical targets. In this paper, we propose a design algorithm for developing patient-specific surgical continuum manipulators optimized for oriental dexterity constrained by task-space obstacles. The algorithm uses a sampling-based approach to finding the dexterity distribution in the workspace discretized by voxels. The oriental dexterity measured in the region of interest in the task-space formed a fitness function to be optimized through differential evolution. This was implemented in the design of a tendon-driven manipulator for knee arthroscopy. The results showed a feasible design that achieves significantly better dexterity than a rigid tool. This highlights the potential of the proposed method to be used in the process of designing dexterous surgical manipulators in the field.
Later, new algorithms appeared that focused more on task-space reachability in a volume rather than navigation to a point. Among those works were methods that discretised the workspace into voxels and generated an objective function to maximize the coverage of the concentric tube workspace in the region of interest @cite_12 . Other algorithms turned this approach into an occupancy grid map, where obstacles were dilated and sampled configurations of a centre-line representation of the robot were used to develop a cost function for motion planning @cite_13 . One algorithm maximized reachability using a sampling-based motion planner to enable motions to a variety of target points in the lung @cite_1 . All of these methods for optimizing continuum robots involve defining a cost function that is optimized using generic optimization algorithms such as generalized pattern search, the Nelder-Mead simplex algorithm, and adaptive simulated annealing. These studies provided continuum robot algorithms for better navigation and reachability in patient-specific anatomies, but none of them focused on optimizing for dexterity.
{ "cite_N": [ "@cite_1", "@cite_13", "@cite_12" ], "mid": [ "2209444443", "2063706473", "2014100751" ], "abstract": [ "Concentric tube robots are tentacle-like medical robots that can bend around anatomical obstacles to access hard-to-reach clinical targets. The component tubes of these robots can be swapped prior to performing a task in order to customize the robot's behavior and reachable workspace. Optimizing a robot's design by appropriately selecting tube pa- rameters can improve the robot's effectiveness on a procedure- and patient-specific basis. In this paper, we present an algorithm that generates sets of concentric tube robot designs that can collectively maximize the reachable percentage of a given goal region in the human body. Our algorithm combines a search in the design space of a concentric tube robot using a global optimization method with a sampling-based motion planner in the robot's configuration space in order to find sets of designs that enable motions to goal regions while avoiding contact with anatomical obstacles. We demonstrate the effectiveness of our algorithm in a simulated scenario based on lung anatomy. I. INTRODUCTION Concentric tube robots are tentacle-like medical robots that can potentially enable safer minimally invasive interventions at many sites in the human body, including the lungs, the skull base, and the heart (1). These robots are composed of nested nitinol tubes that each are precurved, typically with a straight segment followed by a curved segment. To perform a task, the robot axially rotates and translates each tube relative to one another, causing the entire device's shape to change. Concentric tube robots act like shape-changing robotic needles that can curve around anatomical obstacles (e.g., bones, blood vessels, critical nerves, etc.) to reach clinical targets not easily accessed using traditional straight medical instruments. 
The curvilinear shapes achievable by concentric tube robots are highly dependent on the component tubes' physical specifications. The design of the concentric tubes, including the tubes' lengths and precurvatures, affects the robot's workspace and the space of the robot's attainable shapes. Consequently, the design of the concentric tubes determines the set of clinical targets that the robot can safely reach. Even with the shape-changing capabilities of a concentric tube robot (as shown in Fig. 2), due to kinematic constraints", "Concentric tube robots are catheter-sized continuum robots that are well suited for minimally invasive surgery inside confined body cavities. These robots are constructed from sets of precurved superelastic tubes and are capable of assuming complex 3-D curves. The family of 3-D curves that the robot can assume depends on the number, curvatures, lengths, and stiffnesses of the tubes in its tube set. The robot design problem involves solving for a tube set that will produce the family of curves necessary to perform a surgical procedure. At a minimum, these curves must enable the robot to smoothly extend into the body and to manipulate tools over the desired surgical workspace while respecting anatomical constraints. This paper introduces an optimization framework that utilizes procedure- or patient-specific image-based anatomical models along with surgical workspace requirements to generate robot tube set designs. The algorithm searches for designs that minimize robot length and curvature and for which all paths required for the procedure consist of stable robot configurations. Two mechanics-based kinematic models are used. Initial designs are sought using a model assuming torsional rigidity. These designs are then refined using a torsionally compliant model. 
The approach is illustrated with clinically relevant examples from neurosurgery and intracardiac surgery.", "Concentric tube continuum robots provide an infinite-dimensional design space, consisting of individual tube space curves and other tube parameters. Even when design choices are made to restrict the design space to a small number of discrete parameters, ad hoc selection of parameter values to achieve coverage of a desired volume, in the presence of geometric workspace constraints, is essentially impossible - even for experienced researchers. General design algorithms proposed to date have focused on reaching a discrete set of specific points, and have made non-physical approximations in the robot model (most significantly assuming infinite torsional rigidity), to speed up model computation. In this paper, we extend prior algorithms to use more accurate models and incorporate volume-based objectives. These extensions are illustrated in a case study on the design of a concentric tube robot for endonasal pituitary surgery. We show that volume-based design optimization increases the reachable percentage of the surgical workspace by an average of approximately 50 , in comparison to various sets of manually selected design parameters. We conclude that volume-based objectives should be included in future multi-objective design optimization procedures for concentric tube continuum robots." ] }
1903.02074
2925724262
Autonomous harvesting may provide a viable solution to mounting labor pressures in the United States's strawberry industry. However, due to bottlenecks in machine perception and economic viability, a profitable and commercially adopted strawberry harvesting system remains elusive. In this research, we explore the feasibility of using deep reinforcement learning to overcome these bottlenecks and develop a practical algorithm to address the sub-objective of viewpoint optimization, or the development of a control policy to direct a camera to favorable vantage points for autonomous harvesting. We evaluate the algorithm's performance in a custom, open-source simulated environment and observe encouraging results. Our trained agent yields 8.7 times higher returns than random actions and 8.8 percent faster exploration than our best baseline policy, which uses visual servoing. Visual investigation shows the agent is able to fixate on favorable viewpoints, despite having no explicit means to propagate information through time. Overall, we conclude that deep reinforcement learning is a promising area of research to advance the state of the art in autonomous strawberry harvesting.
To the best of our knowledge, this work is the first research exploring viewpoint optimization via reinforcement learning for autonomous harvesting applications (any crop). A related idea is visual servoing @cite_0 , which is a class of control algorithms acting on image features that has been applied in the autonomous harvesting domain. Visual servoing typically involves hand-specified features, whereas our algorithm learns a control policy with a data-driven approach. In @cite_16 , Mehta and Burks implement visual servoing by means of two cameras: one in the hand of a citrus harvesting robot and one stationary camera with a wide field of view. The feedback from the cameras is then used to create a perspective image and guide the robotic manipulator towards an artificial citrus fruit. One of the main limitations of this approach is that it requires the target fruit to be visible by the fixed camera, which cannot be guaranteed in unstructured environments.
{ "cite_N": [ "@cite_0", "@cite_16" ], "mid": [ "2082991751", "2072150826" ], "abstract": [ "This paper is the first of a two-part series on the topic of visual servo control using computer vision data in the servo loop to control the motion of a robot. In this paper, we describe the basic techniques that are by now well established in the field. We first give a general overview of the formulation of the visual servo control problem. We then describe the two archetypal visual servo control schemes: image-based and position-based visual servo control. Finally, we discuss performance and stability issues that pertain to these two schemes, motivating the second article in the series, in which we consider advanced techniques", "The main contribution of this paper is in the development of vision-based estimation and control system for robotic fruit harvesting and rigorous stability analysis to guarantee performance of the closed-loop system. The presented cooperative visual servo controller benefits from the large field-of-view of a fixed camera and the accuracy of a camera-in-hand (CiH). Computationally inexpensive perspective transformation-based range estimation method obtains 3D fruit position using a monocular camera to enable real-time manipulator control. A rotation controller is developed to orient the robot such that the target fruit selected by the fixed camera can be viewed by the CiH attached to the end-effector. Subsequently, the end-effector can be servoed to the target fruit location using the presented pursuit guidance based hybrid translation controller. Lyapunov-based stability analysis guarantees global exponential regulation of the end-effector. Numerical simulations verify the feasibility of the developed controller while the performance is evaluated on a seven degrees-of-freedom kinematically redundant manipulator using an artificial citrus tree. 
The position of the fruit was randomly selected, and the closed-loop visual servo control experiment was performed 21 times to analyze the repeatability and accuracy of the developed controller. With a 95% confidence level, the expected position of the robot end-effector is observed to lie within the confidence ellipsoid. The accuracy of the controller was observed to be about 15 mm, thus making the system suitable for harvesting medium and large varieties of citrus fruit but may limit operation for small varieties such as page and blood oranges." ] }
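The visual-servoing baseline discussed in the related-work passage steers the camera directly from image-feature error. As a hedged illustration (this is the textbook image-based control law, not the controller of @cite_16; the gain, feature coordinates, and depths are assumptions), one servoing step computes a camera twist from the stacked point-feature interaction matrices:

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix of one point feature at
    normalized image coordinates (x, y) and depth Z, mapping the 6-DOF
    camera twist to the feature's image-plane velocity."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x**2), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y**2, -x * y, -x],
    ])

def ibvs_step(features, targets, depths, gain=0.5):
    """One image-based visual servoing step: stack per-point interaction
    matrices L and return the camera twist v = -gain * pinv(L) @ e."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    e = (np.asarray(features) - np.asarray(targets)).ravel()
    return -gain * np.linalg.pinv(L) @ e

# Illustrative use: drive one detected feature toward the image center.
v = ibvs_step(features=[(0.2, -0.1)], targets=[(0.0, 0.0)], depths=[1.0])
```

Since the feature velocity is approximately L @ v = -gain * e, the image error decays exponentially; the hand-specified part is the choice of features and depth estimates, which is exactly what the learned policy in this paper avoids.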
1903.02074
2925724262
Autonomous harvesting may provide a viable solution to mounting labor pressures in the United States's strawberry industry. However, due to bottlenecks in machine perception and economic viability, a profitable and commercially adopted strawberry harvesting system remains elusive. In this research, we explore the feasibility of using deep reinforcement learning to overcome these bottlenecks and develop a practical algorithm to address the sub-objective of viewpoint optimization, or the development of a control policy to direct a camera to favorable vantage points for autonomous harvesting. We evaluate the algorithm's performance in a custom, open-source simulated environment and observe encouraging results. Our trained agent yields 8.7 times higher returns than random actions and 8.8 percent faster exploration than our best baseline policy, which uses visual servoing. Visual investigation shows the agent is able to fixate on favorable viewpoints, despite having no explicit means to propagate information through time. Overall, we conclude that deep reinforcement learning is a promising area of research to advance the state of the art in autonomous strawberry harvesting.
The main algorithms used in this research are Deep Deterministic Policy Gradient (DDPG) @cite_11 and You Only Look Once, Version 2 (YOLOv2) @cite_30 . DDPG is an off-policy, actor-critic deep reinforcement learning algorithm that is used as the underlying machinery for the viewpoint optimization problem. YOLOv2 is a single-shot object detection algorithm based on convolutional neural networks. It is capable of outputting labeled bounding boxes at above real-time speeds on consumer-level hardware. In this research, we train YOLOv2 to detect strawberries and use its output as a feedback mechanism during the reinforcement learning process.
{ "cite_N": [ "@cite_30", "@cite_11" ], "mid": [ "2570343428", "2173248099" ], "abstract": [ "We introduce YOLO9000, a state-of-the-art, real-time object detection system that can detect over 9000 object categories. First we propose various improvements to the YOLO detection method, both novel and drawn from prior work. The improved model, YOLOv2, is state-of-the-art on standard detection tasks like PASCAL VOC and COCO. Using a novel, multi-scale training method the same YOLOv2 model can run at varying sizes, offering an easy tradeoff between speed and accuracy. At 67 FPS, YOLOv2 gets 76.8 mAP on VOC 2007. At 40 FPS, YOLOv2 gets 78.6 mAP, outperforming state-of-the-art methods like Faster RCNN with ResNet and SSD while still running significantly faster. Finally we propose a method to jointly train on object detection and classification. Using this method we train YOLO9000 simultaneously on the COCO detection dataset and the ImageNet classification dataset. Our joint training allows YOLO9000 to predict detections for object classes that don't have labelled detection data. We validate our approach on the ImageNet detection task. YOLO9000 gets 19.7 mAP on the ImageNet detection validation set despite only having detection data for 44 of the 200 classes. On the 156 classes not in COCO, YOLO9000 gets 16.0 mAP. YOLO9000 predicts detections for more than 9000 different object categories, all in real-time.", "We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. 
Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs." ] }
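Two DDPG ingredients named in the related-work passage can be made concrete: the critic is regressed toward a bootstrapped target computed with separate target networks, and the target networks slowly track the online ones via Polyak averaging. This is a numerical sketch with made-up numbers and bare parameter vectors, not the paper's implementation:

```python
import numpy as np

def critic_target(reward, next_q, gamma=0.99, done=False):
    """DDPG critic regression target y = r + gamma * Q'(s', mu'(s')),
    where next_q is evaluated with the *target* actor and critic;
    the bootstrap term is dropped on terminal transitions."""
    return reward + (0.0 if done else gamma * next_q)

def soft_update(target_params, online_params, tau=0.005):
    """Polyak averaging of target-network parameters:
    theta_target <- tau * theta_online + (1 - tau) * theta_target."""
    return [(1.0 - tau) * t + tau * o
            for t, o in zip(target_params, online_params)]

# Illustrative numbers only: y = 1.0 + 0.9 * 2.0 = 2.8.
y = critic_target(reward=1.0, next_q=2.0, gamma=0.9)
new_targets = soft_update([np.zeros(3)], [np.ones(3)], tau=0.1)
```

In the harvesting setup described above, the reward signal feeding `critic_target` would be derived from the YOLOv2 strawberry detections, while the small `tau` keeps the bootstrap targets slowly moving and the training stable.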